Mon blog-notes à moi que j'ai

Personal blog of a sysadmin, with a hacker bent

Twitter & RSS roundup #2016-28

The harvest of links for the week of July 11-15, 2016. Most of them were posted on my Twitter account. Here they are gathered for anyone who may have missed them.

Happy reading

Security & Privacy

Notes on Hijacking GSM/GPRS Connections
As shown in previous blog posts, we regularly work with GSM/GPRS base stations to test devices with cellular uplinks, or simply to run a private network during TROOPERS. Here the core difference between a random TROOPERS attendee and a device we want to hack is the will to join our network, or not! While at the conference we hand out our own SIM cards which accept the TROOPERS GSM network as their « home network », some devices need to be pushed a little bit.
Why You Shouldn’t Roll Your Own Authentication
« Should I roll my own authentication? »
Given how easy it is to build an authentication system with Rails’ has_secure_password and the authenticate method (as shown in Hartl’s tutorial), why would you jump straight to a gem like Devise, which is hard to understand and customize?
In this article, I hope to lay down the case for why I think the answer to my first question is, « No, you shouldn’t roll your own authentication for a production app. »
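The gems exist because even the « simple » parts are easy to get wrong. As a hedged illustration (plain Python rather than Rails, with hypothetical function names, not the article's code), here is roughly what a library handles for you: a random salt, a deliberately slow key-derivation function, and a constant-time comparison.

```python
import hashlib
import hmac
import secrets

def hash_password(password):
    """Derive a slow, salted hash; the salt must be stored alongside the digest."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)
```

Forgetting any one of these details (a per-user salt, a slow KDF, a constant-time compare) is exactly the kind of mistake a maintained library is there to prevent.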
DNS Privacy
The DNS is normally a relatively open protocol that smears its data (which is your data and mine too!) far and wide. Little wonder that the DNS is used in many ways, not just as a mundane name resolution protocol, but as a data channel for surveillance and as a common means of implementing various forms of content access control. But all this is poised to change. Now that the Snowden files have sensitized us to the level of such activities, we have become acutely aware that many of our tools are just way too trusting, way too chatty, and way too easily subverted. First and foremost in this collection of vulnerable tools is the Domain Name System.
Breaking down the new European regulation on personal data protection
The new European regulation 2016/679 on the protection of personal data has just been adopted. It will apply directly and uniformly across all Member States from May 2018, replacing Directive 95/46/EC. Here is what standards bring to the table when implementing this new regulation.
Encryption to protect your privacy on the Internet
A few weeks ago we published an article with four easy-to-follow tips for building a first line of defense for your data and your online privacy. We continue along those lines this month by tackling encryption: a word that, in the popular imagination, is often associated with complicated processes, but one that guarantees your data can be understood only by the people it is intended for. In reality, real efforts have been made in recent years to make cryptography, however complicated under the hood, accessible and usable by everyone.
Security (SSL) has a Performance Tax
While on a call with a customer last week, I was faced with the question of why the customer’s site had slowed down, even after they had switched to full SSL.
Well, to be honest, it’s pretty obvious; SSL is more expensive from a web performance perspective. However, it wasn’t until I saw this chart that I realized how bad it has gotten.
Mobile security: anatomy of an Android scam
It had been a while since I last looked into mobile security. Prompted by a scam that hit someone close to me, here are a few tips and tricks for fixing the problem and guarding against it.
Operator Level DNS Hijacking
Following my recent research on DNS hijacking and the cases I have personally observed, I wondered whether this is a common practice among the operators. With the help of RIPE Atlas, I started to think of a solution to figure out whether such practice is widespread in other areas of the world.

System Engineering

Cross Cluster Services - Achieving Higher Availability for your Kubernetes Applications
As Kubernetes users scale their production deployments we’ve heard a clear desire to deploy services across zone, region, cluster and cloud boundaries. Services that span clusters provide geographic distribution, enable hybrid and multi-cloud scenarios and improve the level of high availability beyond single cluster multi-zone deployments. Customers who want their services to span one or more (possibly remote) clusters, need them to be reachable in a consistent manner from both within and outside their clusters.
Sysdig Tracers: Open-source tracing for applications, systems, and networks
Sysdig tracers let you track and measure spans of execution in a distributed software system. You can instrument almost anything with a Sysdig tracer – a method in your software, a service call, a network request, a shell command execution, a script, and any other thing that can happen in a computer system.
Once you instrument something with a Sysdig tracer, you can monitor how long it takes to complete, observe the system activity taking place inside it, or trace how it progresses through your system.
Kubernetes in Rancher: the further evolution
Kubernetes was the first external orchestration platform supported by Rancher, and since its release it has become one of the most widely used among our users, with adoption continuing to grow rapidly. As Kubernetes has evolved, so has Rancher, adopting new Kubernetes features along the way. We started by supporting Kubernetes version 1.1, then switched to 1.2 as soon as it was released, and now we’re working on supporting the exciting new features in 1.3. I’d like to walk you through the features we’ve added support for at each of these stages.
Autoscaling in Kubernetes
Customers using Kubernetes respond to end user requests quickly and ship software faster than ever before. But what happens when you build a service that is even more popular than you planned for, and run out of compute? In Kubernetes 1.3, we are proud to announce that we have a solution: autoscaling. On Google Compute Engine (GCE) and Google Container Engine (GKE) (and coming soon on AWS), Kubernetes will automatically scale up your cluster as soon as you need it, and scale it back down to save you money when you don’t.
More data, more data
The life of a request to CloudFlare begins and ends at the edge. But the afterlife! Like Catullus to Bithynia, the log generated by an HTTP request or a DNS query has much, much further to go.
This post comes from CloudFlare’s Data Team. It reports on the state of processing these sorts of edge logs, including what’s worked well for us and what remains a challenge in the time since our last post in April 2015.
rktnetes brings rkt container engine to Kubernetes
As part of Kubernetes 1.3, we’re happy to report that our work to bring interchangeable container engines to Kubernetes is bearing early fruit. What we affectionately call « rktnetes » is included in the version 1.3 Kubernetes release, and is ready for development use. rktnetes integrates support for CoreOS rkt into Kubernetes as the container runtime on cluster nodes, and is now part of the mainline Kubernetes source code. Today it’s easier than ever for developers and ops pros with container portability in mind to try out running Kubernetes with a different container engine.
Minikube: easily run Kubernetes locally
While Kubernetes is one of the best tools for managing containerized applications available today, and has been production-ready for over a year, it has been missing a great local development platform.
For the past several months, several of us from the Kubernetes community have been working to fix this in the Minikube repository on GitHub. Our goal is to build an easy-to-use, high-fidelity Kubernetes distribution that can be run locally on Mac, Linux and Windows workstations and laptops with a single command.
Introducing Complex Builds for Docker
A great strength of Docker is its simplicity. The Dockerfile, the recipe for building a Docker image, is an example of this simplicity: a handful of commands and a flat structure make it easy to understand and use.
This simplicity comes at a price, however: as the requirements for containerised applications become more complex, the simplicity of the Dockerfile can become a hindrance to delivering the desired results.


Making Facebook self-healing: Automating proactive rack maintenance
We always want Facebook’s products and services to work well, for anyone who uses them, no matter where they are in the world. This motivates us to be proactive in detecting and addressing problems in our production infrastructure, so we can avoid failures that could slow down or interrupt service to the millions of people using Facebook at any given time.
Monitoring ADSL sync with Telegraf, InfluxDB, Grafana, and a little elbow grease
With the recurring problems my mother’s ADSL connection has suffered for the past year, and the Freebox keeping only the last twelve disconnection/reconnection events, I looked for a way to collect this information over a longer period and present it in a clear visual form. After reading Guillaume’s article on this kind of setup, and with a semi-retired Raspberry Pi B+ at hand, I figured it was just the thing for the job. The tale of a little journey that wasn’t always smooth.
Monitoring Docker Containers - docker stats, cAdvisor, Universal Control Plane
This article was originally published on Couchbase by Arun Gupta and with his permission, we’re sharing it here for Codeship readers.
VividCortex Doesn’t Do Root Cause Analysis… and That’s a Good Thing
Root cause mentality is the idea that when an issue appears within a system, it’s always possible to find a single, underlying reason why, if you just dig deep enough.
Much of the time, this approach to problem-solving makes sense, it feels intuitive, and it leads to efficient solutions. However, it’s a rationale that doesn’t hold up universally — especially when facing problems of higher complexity. When applied to database monitoring, for instance, this logic might lead people to assume that an « ideal » monitoring product should come equipped with powerful root cause analysis: the ability to seek out, identify, and point to the root cause of an issue within the database system. From there, in theory, crafting an effective solution should be easy. Unfortunately, this approach seriously overlooks the actual complexity of modern-day systems. And that oversight can be costly.

Software Engineering

Neither self nor this: Receivers in Go
When getting started with Go, there is a strong temptation to bring baggage from your previous language. It’s a heuristic which is usually helpful, but sometimes counter-productive and inevitably results in regret.
Go does not have classes and objects, but it does have types that we can make many instances of. Further, we can attach methods to these types and they kind-of start looking like the classes we’re used to. When we attach a method to a type, the receiver is the instance of the type for which it was called.
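The article's point lands even if your « previous language » is Python: Python also passes the receiver explicitly, it just spells it `self`. A short illustration of the parallel (not from the article):

```python
class Counter:
    def __init__(self):
        self.n = 0

    # 'self' is the receiver: the instance the method was called on.
    def increment(self):
        self.n += 1

c = Counter()
c.increment()         # the usual call...
Counter.increment(c)  # ...is sugar for passing the receiver explicitly
```

Go simply makes the same receiver parameter visible in the method declaration, and lets you name it whatever you like.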
Global Languages Support at Netflix - Testing Search Queries
Having launched the Netflix service globally in January, we now support search in 190 countries. We currently support 20 languages, and this will continue to grow over time. Some of the most challenging language support was added while launching in Japan and Korea as well as in the Chinese and Arabic speaking countries. We have been working on tuning the language specific search prior to each launch by creating and tuning the localized datasets of the documents and their corresponding queries. While targeting a high recall for the launch of a new language, our ranking systems focus on increasing the precision by ranking the most relevant results high on the list.
How we reduced boilerplate and handled asynchronous actions with Redux
At Algolia, we do a lot of front-end JavaScript, because for us UX and UI are key components of a great search – we want the search experience to be perfect across the entire stack.
Recently, we’ve been playing quite a lot with React in our projects, especially in Instantsearch.js, an open source library for building complex search UI widgets. So far our experience has been very positive; it was so good that we eventually decided to introduce React onto our dashboard, along with Redux to handle shared state across components.
Feature Branching Using Feature Flags
Feature branching allows developers to effectively collaborate around a central code base by keeping all changes to a specific feature in its own branch. With the addition of feature flags, feature branching becomes even more powerful and faster by separating feature release management from code deployment.
Reducing Video Loading Time - prefetching video during preroll
At Dailymotion, we do our best to enhance our viewers’ experience.
Video buffering is a well-known source of user frustration; many studies and articles have shown that video rebuffering has a significant impact on user engagement.
Python 3 Testing: An Intro to unittest
The unittest module is actually a testing framework that was originally inspired by JUnit. It currently supports test automation, the sharing of setup and shutdown code, aggregating tests into collections and the independence of tests from the reporting framework.
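A minimal sketch of the framework's shape (a hypothetical test case, not from the article): shared setup lives in `setUp`, and each `test_*` method is collected and run independently.

```python
import unittest

class TestStringMethods(unittest.TestCase):
    def setUp(self):
        # Shared setup code runs before every test_* method.
        self.sample = "hello"

    def test_upper(self):
        self.assertEqual(self.sample.upper(), "HELLO")

    def test_split(self):
        # Each test method is independent of the others.
        self.assertEqual("a b".split(), ["a", "b"])
```

Run it with `python -m unittest <module>`; the runner discovers the `test_*` methods automatically and reports results independently of how the tests were written.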

Web Performance

Lepton image compression: saving 22% losslessly from images at 15MB/s
We are pleased to announce the open source release of Lepton, our new streaming image compression format, under the Apache license.
Lepton achieves a 22% size reduction for existing JPEG images by predicting coefficients in JPEG blocks and feeding those predictions as context into an arithmetic coder. Lepton preserves the original file bit-for-bit perfectly. It compresses JPEG files at a rate of 5 megabytes per second and decodes them back to the original bits at 15 megabytes per second, securely, deterministically, and in under 24 megabytes of memory.


The mobile device lab at the Prineville data center
As more people around the world come online for the first time, we want to make sure our apps and services work well for everyone. This means we need to understand the performance implications of a code change on both high-end and typical devices, as well as on a variety of operating systems. We have thousands of changes each week, and given the code intricacies of the Facebook app, we could inadvertently introduce regressions that take up more data, memory, or battery usage.

Database Engineering


And the big one said « Rollover » - Managing Elasticsearch time-based indices efficiently
Anybody who uses Elasticsearch for indexing time-based data such as log events is accustomed to the index-per-day pattern: use an index name derived from the timestamp of the logging event rounded to the nearest day, and new indices pop into existence as soon as they are required. The definition of the new index can be controlled ahead of time using index templates.
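The index-per-day naming the post describes boils down to formatting the event's timestamp; as a sketch (hypothetical helper, not from the article), the indexer derives the name like this:

```python
from datetime import datetime, timezone

def index_for(event_time, prefix="logs"):
    """Derive the daily index name by rounding the event timestamp to the day."""
    return f"{prefix}-{event_time:%Y.%m.%d}"

# An event logged on 14 July 2016 goes to the "logs-2016.07.14" index;
# the first event of the next day implicitly creates "logs-2016.07.15".
event = datetime(2016, 7, 14, 9, 30, tzinfo=timezone.utc)
```

Because Elasticsearch creates an index on first write, new daily indices appear on their own, with their settings and mappings taken from any matching index template.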

MySQL & MariaDB

Using Ceph with MySQL
Over the last year, the Ceph world drew me in, partly because of my taste for distributed systems, but also because I think Ceph represents a great opportunity for MySQL specifically and for databases in general. The shift from local to distributed storage is similar to the shift from bare-disk host configurations to LVM-managed disks.


What’s New in Vertica 7.2.3: Check Constraints
In the world of big data, it’s often useful to specify requirements that must be met by each column in a database table. For example, if your table contains retail prices for clothing products, you would want to make sure none of the values are negative. As of Vertica 7.2.3, you can accomplish this by specifying check constraints. Check constraints let you specify that data must meet certain criteria to be loaded into a table. You specify check constraints using a SQL predicate (a Boolean expression) that Vertica uses to evaluate each column of a table. The predicate cannot access data that is stored in other tables or database objects.
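In Vertica the predicate is declared in the table's DDL; as a hedged sketch of the semantics (plain Python standing in for the database, hypothetical data), each row is kept only if the Boolean expression holds:

```python
# The check-constraint predicate from the retail example: prices must not be negative.
def price_is_valid(row):
    return row["price"] >= 0

rows = [
    {"product": "shirt", "price": 19.99},
    {"product": "hat", "price": -5.00},  # violates the constraint
]

# Rows failing the predicate would be rejected at load time.
loaded = [r for r in rows if price_is_valid(r)]
rejected = [r for r in rows if not price_is_valid(r)]
```

The real constraint is evaluated by the database itself, row by row, and by design cannot reference data in other tables or database objects.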

Data Engineering & Analytics

Building a Data Science Portfolio: Storytelling with Data (Part 2: Data Exploration)
The following post (Part 2 of two parts) by Vik Paruchuri, founder of data science learning platform Dataquest, offers some detailed and instructive insight about data science workflow (regardless of the tech stack involved, but in this case, using Python). We re-publish it here for your convenience.
Before we dive into exploring the data [see Part 1 for steps relating to data preparation], we’ll want to set the context, both for ourselves, and anyone else that reads our analysis. One good way to do this is with exploratory charts or maps. In this case, we’ll map out the positions of the schools, which will help readers understand the problem we’re exploring.
Billions of Messages a Day - Yelp’s Real-time Data Pipeline
This is the first post in a series covering Yelp’s real-time streaming data infrastructure. Our series will explore in-depth how we stream MySQL updates in real-time with an exactly-once guarantee, how we automatically track & migrate schemas, how we process and transform streams, and finally how we connect all of this into datastores like Redshift and Salesforce.
What Goes Down Better Come Up a.k.a. Adventures in Hbase Diagnostics
Earlier this year, the feedly cloud had a rough patch: API requests and feed updates started slowing down, eventually reaching the point where we experienced outages for a few days during our busiest time of the day (weekday mornings). For a cloud-based company, being down for any period of time is soul-crushing, never mind for a few mornings in a row. This led to a lot of frustration, late nights, and general questioning of the order of the universe. But with some persistence we managed to get through it and figure out the problem. We thought it might be some combination of instructive and interesting for others to hear, so we’re sharing the story.
From Pig to Spark: An Easy Journey to Spark for Apache Pig Developers
As a data analyst that primarily used Apache Pig in the past, I eventually needed to program more challenging jobs that required the use of Apache Spark, a more advanced and flexible language. At first, Spark may look a bit intimidating, but this blog post will show that the transition to Spark (especially PySpark) is quite easy.
How to Build a Play Recommendation Engine for the Avignon Festival with Dataiku DSS
Today I’m going to tell you about a project that was inspired by a conversation overheard during lunch: Alivia Smith (whom you are already familiar with if you are an avid reader of our blog) was struggling with the schedule of the Avignon Festival, a French theater festival; struggling because there are so many plays and events happening, but no real guide or documentation to help her decide on her schedule. (Note: the 2016 Avignon Off festival brings together over 50 different plays, both French and foreign, for about 300 performances over an 18-day period.)
Using Spark GraphFrames to Analyze Facebook Connections
Sooner or later, if you eyeball enough data sets, you will encounter some that look like a graph, or are best represented as a graph. Whether it’s social media, computer networks, or interactions between machines, graph representations are often a straightforward choice for representing relationships among one or more entities. The practice of using graphs to model real-life phenomena in data structures has been around as long as computing itself.

Network Engineering

IPv6 at LinkedIn Part I
To celebrate the anniversary of World IPv6 Day (June 6, 2011), we at LinkedIn wanted to mark the occasion in a significant way. We’ve worked for a number of months on enabling IPv6 in our data centers. We’ve designed the new architecture and prepared network, systems, and tools, so on June 6 we enabled IPv6 in one of our staging environments. This was a milestone toward having functional dual stack IPv4/IPv6 in all our data centers, which will be our next goal before we start to retire, or begin « chippIn » away at, IPv4.
In this series of posts, we’ll discuss different aspects of migrating from IPv4 to IPv6, specifically including some of the challenges larger organizations like LinkedIn face in that process. But first, a little background information on IPv4, IPv6, and the rationale behind the migration process.
SYN Flood Mitigation with synsanity
GitHub hosts a wide range of user content, and like all large websites this often causes us to become a target of denial of service attacks. Around a year ago, GitHub was on the receiving end of a large, unusual and very well publicised attack involving both application level and volumetric attacks against our infrastructure.
Our users rely on us to be highly available, and we take this seriously. Although the attackers are doing the wrong thing, there’s no use blaming them for their attacks being successful. We are committed to owning our own availability, and we have a responsibility to mitigate these sorts of attacks to the maximum extent technically possible.

Management & Organization

What is a DevOps Engineer?
DevOps: Is it a methodology or a role?
According to Wikipedia, DevOps is « a culture, movement or practice that emphasizes the collaboration and communication of both software developers and other IT professionals while automating the process of software delivery and infrastructure changes. » The industry indeed usually tends to state that DevOps is a « methodology » or « culture. » But when you look at job listings, you will still see open positions specifically for « DevOps engineers. »
Be a teammate, not a hero
Maybe you’ve been reading comic books for decades, or you’re just vaguely familiar with them as the source material for Hollywood’s biggest blockbusters. Either way, you may not know that while cartooning can be an intensely individual process in which a single person shapes a whole story, making comics is often a complex team effort, shaped by the dynamics between writers and artists. Much like the director, designers, and actors in a film, a comic artist often has as much responsibility as the writer for things like tone, pacing, and story and character development. And a lot of this work is done remotely, between the editors in a publisher’s office and independent creators all over the world. Needless to say, clear communication of everything from the biggest ideas to the smallest details is critical, but not always guaranteed.
Adventures in SRE-land: Welcome to Google Mission Control
We do have a Mission Control at Google, named in honor of NASA’s Christopher C. Kraft Jr. Mission Control Center, pictured here. But at Google, Mission Control is not a place. It’s a six month rotation program for engineers working on product development to experience what it’s like to be a Site Reliability Engineer (SRE). The goal is to increase the number of engineers who understand the challenges of building and operating a high reliability service at Google’s scale.
HumanOps: Making Operations Human
What is the number one sysadmin skill?
The ability to problem solve, right? We’re not talking about sudoku and crosswords here. Errors and delays can cost millions. With scale comes complexity, and an exponential increase in things that could go south. In production. At four in the morning.
And here lies the challenge. Sysadmins are not superhumans. They are susceptible to stress and fatigue just like everybody else.