Mon blog-notes à moi que j'ai

A sysadmin's personal blog, with a hacker bent

Twitter & RSS watch roundup #2015-18

The link harvest for the week of May 4 to 8, 2015. Most of them were first published on my Twitter account. Here they are, gathered for those who may have missed them.

Happy reading!


Introducing FIDO: Automated Security Incident Response
We’re excited to announce the open source release of FIDO (Fully Integrated Defense Operation - apologies to the FIDO Alliance for acronym collision), our system for automatically analyzing security events and responding to security incidents.


The Discovery of Apache ZooKeeper’s Poison Packet
ZooKeeper, for those who are unaware, is a well-known open source project which enables highly reliable distributed coordination. It is trusted by many around the world, including PagerDuty. It provides high availability and linearizability through the concept of a leader, which can be dynamically re-elected, and ensures consistency through a majority quorum.
Varnish Goes Upstack with Varnish Modules and Varnish Configuration Language
Varnish Software has just released Varnish API Engine, a high performance HTTP API Gateway which handles authentication, authorization and throttling all built on top of Varnish Cache. The Varnish API Engine can easily extend your current set of APIs with a uniform access control layer that has built in caching abilities for high volume read operations, and it provides real-time metrics.
HAProxy’s load-balancing algorithm for static content delivery with Varnish
HAProxy supports many load-balancing algorithms which may be used in many different types of use cases.
That said, cache servers, which most of the time deliver the static content of your web applications, may require specific load-balancing algorithms.
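For cache servers specifically, a URI-based consistent hash is the usual choice, since it sends every request for a given object to the same cache node. A minimal sketch of such an HAProxy backend, with hypothetical server names and addresses:

```
backend varnish_nodes
    balance uri                       # hash the request URI to pick a server
    hash-type consistent              # minimal remapping when a node goes down
    server varnish1 192.0.2.11:6081 check
    server varnish2 192.0.2.12:6081 check
```

With a plain round-robin balance, the same object would end up cached on every Varnish node, wasting cache memory and lowering the hit ratio.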
Leveraging your CDN to cache "uncacheable" content
For years, traditional CDNs have taught us that any content that requires application logic (like HTML pages, API calls, or AJAX requests) is "dynamic" and therefore uncacheable. As applications have gotten smarter and deployed more logic, more and more content has been classified under this category. This makes it difficult for companies to deliver content in real time to their users without sacrificing performance. Over time, this approach has caused caching strategies to suffer and lessened the impact a CDN can have on efficient content delivery.
Elements of Scale: Composing and Scaling Data Platforms
As software engineers we are inevitably affected by the tools we surround ourselves with. Languages, frameworks, even processes all act to shape the software we build.
Likewise databases, which have trodden a very specific path, inevitably affect the way we treat mutability and share state in our applications.


A New Model for Networking Is Needed
The enterprise network has undergone two major transitions since the introduction of computing as a pervasive business resource. First, the client/server era introduced networking and created the need for basic local-area network (LAN) connectivity. During this era, LANs lived in isolation, and there were several competing connectivity methods including SNA, AppleTalk and LANtastic.


Chef Audit Mode Introduction
I have been working with the audit mode feature introduced in Chef version 12.1.0 – previously announced was the audit-cis cookbook. Audit mode allows users to write custom rules (controls) in Chef recipes using new DSL helpers. In his ChefConf 2015 talk, "Compliance At Velocity," James Casey goes into more of the background and reasoning for this. For now, I wanted to share a few tips with users who may be trying out this feature, too.


Docker Container Scheduling as a Bin Packing Problem
For the internal OpenDNS engineering hackathon earlier this month, I used data from our Quadra system to develop a Docker container scheduler. The tool combines historical data about container resource consumption with a mathematical model to best decide which host should run each container. This formulation is a type of bin packing problem, and I used the JuliaOpt project’s JuMP package to formulate the solution in the Julia programming language.
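The article's exact optimization model is written with JuMP in Julia; purely to illustrate the problem shape, here is a sketch of the classic first-fit-decreasing heuristic in Python, with made-up per-container memory demands:

```python
def first_fit_decreasing(demands, capacity):
    """Pack container resource demands onto the fewest hosts of a given capacity.
    Greedy heuristic: place each demand, largest first, on the first host with room."""
    hosts = []  # each host is the list of demands placed on it
    for size in sorted(demands, reverse=True):
        for host in hosts:
            if sum(host) + size <= capacity:
                host.append(size)
                break
        else:
            hosts.append([size])  # no existing host fits: open a new one
    return hosts

# Hypothetical memory demands (MB) for six containers, packed onto 1000 MB hosts.
hosts = first_fit_decreasing([700, 500, 400, 300, 200, 100], capacity=1000)
print(len(hosts))  # 3
```

An exact solver like the one in the article can beat this heuristic, but first-fit-decreasing is a common baseline because it is fast and never overshoots capacity.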
Docker Tutorial 8 – Troubleshooting with Sysdig
This is a casual Docker tutorial series. We will start with very simple sessions on how to install Docker and use the docker run command. In future videos we will hit more advanced topics.
Understanding Docker Security and Best Practices
Nathan McCauley and I have been working on a bunch of things since joining Docker. One area we noticed is lacking is the availability of information about Docker architecture and best practices for securely configuring and deploying Dockerized applications. This knowledge exists across the vast community of Docker users, but we realized that we just hadn't gotten around to writing it down and sharing it with everyone else.

Software Engineering

Blackfire on cloudControl
Today, we’re proud to announce that Blackfire is now available on the cloudControl Add-on Marketplace.
So You Think Monolith is the Only Alternative to Microservices
The Microservices community is keen to paint non-microservices architectures as "monoliths". This is a false dichotomy, as many have said. I'll make my attempt at explaining why.
You Cannot Have Exactly-Once Delivery
I’m often surprised that people continually have fundamental misconceptions about how distributed systems behave. I myself shared many of these misconceptions, so I try not to demean or dismiss but rather educate and enlighten, hopefully while sounding less preachy than that just did. I continue to learn only by following in the footsteps of others. In retrospect, it shouldn’t be surprising that folks buy into these fallacies as I once did, but it can be frustrating when trying to communicate certain design decisions and constraints.
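The practical takeaway is usually at-least-once delivery combined with idempotent processing, which yields an exactly-once effect without exactly-once delivery. A small self-contained sketch (the lossy-ack channel and message ids are invented for illustration):

```python
import random

random.seed(42)

class IdempotentReceiver:
    """De-duplicates redeliveries by message id, so each payload takes effect once."""
    def __init__(self):
        self.seen = set()
        self.total = 0
        self.deliveries = 0
    def handle(self, msg_id, amount):
        self.deliveries += 1           # raw deliveries, duplicates included
        if msg_id not in self.seen:    # apply the effect at most once per id
            self.seen.add(msg_id)
            self.total += amount

def send_at_least_once(receiver, msg_id, amount):
    """The message always arrives here, but the ack may be lost, so the
    sender retries and the receiver can see the same message twice."""
    while True:
        receiver.handle(msg_id, amount)
        ack_lost = random.random() < 0.5
        if not ack_lost:
            return

recv = IdempotentReceiver()
for msg_id, amount in enumerate([10, 20, 30]):
    send_at_least_once(recv, msg_id, amount)
print(recv.total)  # 60 — exactly-once effect despite duplicate deliveries
```

The sender cannot distinguish "message lost" from "ack lost", which is exactly why it must retry and why the receiver, not the network, has to enforce idempotence.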
Distributed Messaging with ZeroMQ
With the increased prevalence and accessibility of cloud computing, distributed systems architecture has largely supplanted more monolithic constructs. The implication of using a service-oriented architecture, of course, is that you now have to deal with a myriad of difficulties that previously never existed, such as fault tolerance, availability, and horizontal scaling. Another interesting layer of complexity is providing consistency across nodes, which itself is a problem surrounded with endless research. Algorithms like Paxos and Raft attempt to provide solutions for managing replicated data, while other solutions offer eventual consistency.


All-in-one Docker with Grafana, InfluxDB, and cloudwatch-to-graphite for AWS/Beanstalk monitoring
I have derived the Docker container docker-grafana-influxdb-cloudwatch that bundles Grafana dashboards, InfluxDB for metrics storage, and runs cloudwatch-to-graphite as a cron job to fetch selected metrics from AWS CloudWatch and feed them into the InfluxDB using its Graphite input plugin. It is configured so that you can run it in AWS Elastic Beanstalk (the main problem being that only a single port can be exposed – I therefore use Nginx to expose the InfluxDB API needed by Grafana at :80/db/).
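The single-port constraint can be worked around roughly like this (an illustrative nginx fragment, not the container's actual config; the ports assume Grafana's default of 3000 and the InfluxDB 0.8-era HTTP API default of 8086):

```
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:3000;   # Grafana dashboards
    }
    location /db/ {
        proxy_pass http://127.0.0.1:8086;   # InfluxDB HTTP API queried by Grafana
    }
}
```

Everything is then reachable through the one port Elastic Beanstalk exposes, with nginx routing by path prefix.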
Pingpong: A reintroduction
About a year ago we introduced Pingpong, our open-source analytics app for anything with a URL. And in the past few months, we’ve beefed up the functionality, polished the interface, and given it a shiny new landing page. If you haven’t played around with Pingpong yet, there’s no better time to check it out and start getting deeper HTTP request data.
Testing our homepage with KissMetrics
We recently used KissMetrics to measure the performance of our upcoming homepage test, which I wrote about on the KissMetrics blog. In that post, I mentioned my method of working backwards from your desired results to determine which reports to run. Once you know the latter, you can work out a required list of events and properties, including where they should be implemented. The goal of this post is to teach you how the process works—enough so that you can replicate it in your own testing methodology. I need to give credit upfront to our VP of Marketing and Sales, Ryan Butters, who taught me this approach.

Log management

Elasticsearch: You Know, For Logs
The data platform team at OpenDNS is always looking at new technologies to improve our real time search platform. Consequently, we have been keeping a close eye on Elasticsearch for quite a while, and even use it for some internal tools and metrics. OpenDNS is now looking at using Elasticsearch as a real time search engine for our DNS log data. OpenDNS needs a powerful real time logging and search platform for several reasons. First and foremost is for our customers. Our customers need to be able to identify malicious activity on their networks as it is happening so they can respond promptly. Any time spent waiting for the data to come in is time that infections could be spreading or attacks could be gaining momentum. Similarly, we at OpenDNS use this data to monitor our own systems using several different metrics. If something goes wrong, we need to know right away so we can fix the problem before it propagates. Overall, getting data in real time means that the people monitoring this data can react in real time. This reaction time could mean the difference between a minor headache and a catastrophic problem.
Elasticsearch: You Know, For Logs [Part 2]
In Part 1 of this series, Elasticsearch proved that it could be configured to consume log data by using Index Templates and a time-based index schema. In order to move forward with Elasticsearch, it also needs to be easily scalable while maintaining high availability. This post will first explore three different node roles and how to use them to scale an Elasticsearch cluster while maintaining a balanced workload. It will then cover a few different failure cases and how to protect a cluster from them.
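An index template of the kind Part 1 describes might look like this (index pattern, field names, and settings are illustrative, in the Elasticsearch 1.x-era API): any new daily index matching the pattern picks up these settings automatically.

```
PUT /_template/logs_template
{
  "template": "logs-*",
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  },
  "mappings": {
    "event": {
      "properties": {
        "@timestamp": { "type": "date" },
        "hostname":   { "type": "string", "index": "not_analyzed" },
        "message":    { "type": "string" }
      }
    }
  }
}
```

Creating one index per time frame (e.g. logs-2015.05.04) then lets you expire old log data by simply dropping whole indices.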


MySQL & MariaDB

MySQL Binlog Events: Use Cases and Examples
This is a continuation of the previous post; if you have not read that one, I would advise you to go through it before continuing.
I discussed three use cases in my last post; here I will explain them in detail.
MySQL Binlog Events – reading and handling information from your Binary Log
MySQL replication is among the top features of MySQL. In replication, data is replicated from one MySQL Server (also known as the Master) to another MySQL Server (also known as the Slave). mysql-binlog-events is a set of libraries which work on top of replication and open the door to a myriad of use cases, like extracting data from binary log files, building applications to support heterogeneous replication, filtering events from binary log files, and much more.
Keep your MySQL data in sync when using Tungsten Replicator
MySQL replication isn’t perfect and sometimes our data gets out of sync, either by a failure in replication or human intervention. We are all familiar with Percona Toolkit’s pt-table-checksum and pt-table-sync to help us check and fix data inconsistencies – but imagine the following scenario where we mix regular replication with the Tungsten Replicator
Query and Password Filtering with the MariaDB Audit Plugin
The MariaDB Audit Plugin has been included in MariaDB Server by default since version 5.5.37 and 10.0.9. It’s also pre-loaded in MariaDB Enterprise. The Audit Plugin as of version 1.2.0 includes new filtering options which are very useful. This article explains some aspects of them.
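For instance, the filtering options can be set from my.cnf (the event types and user names below are illustrative; QUERY_DDL and QUERY_DML are among the finer-grained event classes added in plugin version 1.2.0):

```
[mysqld]
plugin_load_add = server_audit
server_audit_logging = ON
server_audit_events = QUERY_DDL,QUERY_DML   # log only DDL and DML statements
server_audit_excl_users = monitor,backup    # skip noisy service accounts
```

Filtering at the plugin level keeps the audit log focused and much smaller than logging every connection and query.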


Call me maybe: Elasticsearch 1.5.0
Nine months ago, in June 2014, we saw Elasticsearch lose both updates and inserted documents during transitive, nontransitive, and even single-node network partitions. Since then, folks continue to refer to the post, often asking whether the problems it discussed are still issues in Elasticsearch.


Live Documentation in Vertica using comment on
A very important statement in Vertica is the COMMENT ON statement, which allows you to create a sort of extended properties on the objects you create in the database.
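A minimal example of the statement (the table, column, and comment text are made up):

```
COMMENT ON TABLE store.store_sales IS 'Fact table: one row per line item, loaded nightly';
COMMENT ON COLUMN store.store_sales.sale_date IS 'Local date of the transaction';
```

Because the comments live in the catalog next to the objects themselves, the documentation stays with the schema instead of drifting out of date in a wiki.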

Big Data

Apache Phoenix Joins Cloudera Labs
Apache Phoenix is an efficient SQL skin for Apache HBase that has created a lot of buzz. Many companies are successfully using this technology, including the company where Phoenix first started.
Stream Processing and Probabilistic Methods: Data at Scale
Stream processing and related abstractions have become all the rage following the rise of systems like Apache Kafka, Samza, and the Lambda architecture. Applying the idea of immutable, append-only event sourcing means we’re storing more data than ever before. However, as the cost of storage continues to decline, it’s becoming more feasible to store more data for longer periods of time. With immutability, how the data lives isn’t interesting anymore. It’s all about how it moves.
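One of the probabilistic methods the article covers can be sketched in a few lines: a Bloom filter answers "have we seen this event?" in constant space, at the cost of occasional false positives (the sizes, hash scheme, and event names below are arbitrary choices for illustration):

```python
import hashlib

class BloomFilter:
    """Fixed-size probabilistic set: no false negatives, tunable false positives."""
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit array stored as one big integer
    def _positions(self, item):
        # Derive num_hashes bit positions by salting a single hash function.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits
    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos
    def __contains__(self, item):
        # All positions set: "probably seen". Any position clear: definitely not.
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
for event in ["login:alice", "login:bob"]:
    bf.add(event)
print("login:alice" in bf)  # True — an added item is always found
```

The appeal for stream processing is exactly the trade the article describes: a bounded, small memory footprint over an unbounded stream, in exchange for approximate answers.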

Management & Organisation

DataOps: Creating a Culture of Data Analysts
Today’s rapidly evolving workplace provides a context where our decision makers must be able to leverage their data to make optimal judgments. This is not limited to managers and organizational leaders.