Mon blog-notes à moi que j'ai

A sysadmin's personal blog, with a hacker bent

Twitter & RSS watch digest #2016-38

The harvest of links for the week of September 19 to 23, 2016. Most of them were published on my Twitter account; here they are gathered for those who may have missed them.

Happy reading

Security & Privacy

Opportunistic Encryption: Bringing HTTP/2 to the unencrypted web
Encrypting the web is not an easy task. Various complexities prevent websites from migrating from HTTP to HTTPS, including mixed content, which can prevent sites from functioning with HTTPS.
Opportunistic Encryption provides an additional level of security to websites that have not yet moved to HTTPS and the performance benefits of HTTP/2. Users will not see a security indicator for HTTPS in the address bar when visiting a site using Opportunistic Encryption, but the connection from the browser to the server is encrypted.
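The mechanism behind this is HTTP Alternative Services (RFC 7838): the plaintext HTTP response advertises an encrypted HTTP/2 endpoint via the Alt-Svc header, and supporting browsers switch to it transparently. As a hedged sketch, a server could advertise such an endpoint itself with a response header along these lines (port and lifetime are illustrative):

```nginx
# Advertise an encrypted HTTP/2 alternative service on port 443;
# "ma" is the number of seconds the advertisement stays valid.
add_header Alt-Svc 'h2=":443"; ma=86400';
```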
How Dropbox securely stores your passwords
It’s universally acknowledged that it’s a bad idea to store plain-text passwords. If a database containing plain-text passwords is compromised, user accounts are in immediate danger. For this reason, as early as 1976, the industry standardized on storing passwords using secure, one-way hashing mechanisms (starting with Unix Crypt). Unfortunately, while this prevents the direct reading of passwords in case of a compromise, all hashing mechanisms necessarily allow attackers to brute force the hash offline, by going through lists of possible passwords, hashing them, and comparing the result. In this context, secure hashing functions like SHA have a critical flaw for password hashing: they are designed to be fast. A modern commodity CPU can generate millions of SHA256 hashes per second. Specialized GPU clusters allow for calculating hashes at a rate of billions per second.
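The article's core point, that general-purpose hashes are too fast for passwords, is exactly why password-specific functions such as bcrypt, scrypt, and PBKDF2 exist. As a minimal illustration using only Python's standard library (the salt here is a hard-coded placeholder; real code would use os.urandom and store a salt per user), compare one SHA-256 pass with PBKDF2, which makes every guess cost 100,000 iterations:

```python
import hashlib

password = b"correct horse battery staple"
salt = b"per-user-random-salt"  # placeholder; use os.urandom(16) per user

# Fast general-purpose hash: an attacker can try millions per second.
fast = hashlib.sha256(salt + password).hexdigest()

# Deliberately slow password hash: each guess costs 100,000 iterations.
slow = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000).hex()

print(fast)
print(slow)
```

Raising the iteration count is how the work factor keeps pace with faster hardware over time.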
Introducing TLS 1.3
The encrypted Internet is about to become a whole lot snappier. When it comes to browsing, we’ve been driving around in a beat-up car from the 90s for a while. Little does anyone know, we’re all about to trade in our station wagons for a smoking new sports car. The reason for this speed boost is TLS 1.3, a new encryption protocol that improves both speed and security for Internet users everywhere. As of today, TLS 1.3 is available to all CloudFlare customers.
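To experiment with this client-side on a modern stack (a hedged sketch: it assumes Python 3.7+ built against an OpenSSL with TLS 1.3 support, which did not exist outside drafts when this was written), you can pin a connection to TLS 1.3 and check what gets negotiated:

```python
import ssl

# Require TLS 1.3 or newer for any connection made with this context.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

print(ctx.minimum_version)

# To see what a real server negotiates (network access required):
# import socket
# with socket.create_connection(("www.cloudflare.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="www.cloudflare.com") as tls:
#         print(tls.version())
```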
Encryption Week
Since CloudFlare’s inception, we have worked tirelessly to make encryption as simple and as accessible as possible. Over the last two years, we’ve made CloudFlare the easiest way to enable encryption for web properties and internet services. From the launch of Universal SSL, which gives HTTPS to millions of sites for free, to the Origin CA, which helps customers encrypt their origin servers, to the « No Browser Left Behind » initiative, which ensures that the encrypted Internet is available to everyone, CloudFlare has pushed to make Internet encryption better and more widespread.
The new TLS security recommendations guide
The Transport Layer Security (TLS) protocol is one of the most widely deployed solutions for protecting network traffic. ANSSI's new TLS guide is aimed at every audience that wants to become familiar with or interact with this protocol: information systems security officers, administrators, and developers of solutions looking to secure information exchanges over TLS.
Cooking with Onions: Finding the OnionBalance
This blog post is the first part of the Cooking with Onions series, which aims to highlight various interesting developments in the .onion space. This particular post presents a technique for efficiently scaling busy onion services.
The CyberSOC, an operations center for detecting security incidents
Perimeter protection alone for corporate networks and information systems has had its day. While filtering barriers between the company and the outside world, the Internet in particular (traffic filtering, virus and malware detection, etc.), remain an indispensable layer of protection, on their own they cannot guard the company against the full range of IT threats to the availability, integrity, and confidentiality of its sensitive data. Here's why.

System Engineering

Introducing the GitHub Load Balancer
At GitHub we serve billions of HTTP, Git and SSH connections each day. To get the best performance we run on bare metal hardware. Historically one of the more complex components has been our load balancing tier. Traditionally we scaled this vertically, running a small set of very large machines running haproxy, and using a very specific hardware configuration allowing dedicated 10G link failover. Eventually we needed a solution that was scalable and we set out to create a load balancer solution that would run on commodity hardware in our typical data center configuration.
High performance network policies in Kubernetes clusters
Since the release of Kubernetes 1.3 back in July, users have been able to define and enforce network policies in their clusters. These policies are firewall rules that specify permissible types of traffic to, from and between pods. If requested, Kubernetes blocks all traffic that is not explicitly allowed. Policies are applied to groups of pods identified by common labels. Labels can then be used to mimic traditional segmented networks often used to isolate layers in a multi-tier application: You might identify your front-end and back-end pods by a specific « segment » label, for example. Policies control traffic between those segments and even traffic to or from external sources.
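Reusing the « segment » label from the excerpt, a policy of this kind could look like the following sketch (the label values and port are illustrative; extensions/v1beta1 is the beta API group NetworkPolicy shipped under in Kubernetes 1.3-era clusters): back-end pods accept traffic only from front-end pods, on port 80.

```yaml
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  # Applies to pods labelled segment=backend ...
  podSelector:
    matchLabels:
      segment: backend
  ingress:
    # ... and only admits traffic from pods labelled segment=frontend, on TCP/80.
    - from:
        - podSelector:
            matchLabels:
              segment: frontend
      ports:
        - protocol: TCP
          port: 80
```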
How Twitter deploys its widgets JavaScript
Deploys are hard and it can be frustrating to do them. Many bugs manifest themselves during deploys, especially when there are a large number of code changes. Now what if a deploy also goes out to millions of people at once? Here’s the story of how my team makes a deploy of that scale safely and with ease.
Running Consul at scale - Service discovery in the cloud
I had the pleasure of presenting this at Velocity NY in New York, New York. I've embedded the slides and source code.
On the fly virtualization with Cappsule
Have you ever been pwned because of a malicious document? Do you spend way too much time installing throw-away VMware or VirtualBox virtual machines to improve your OPSEC or to test random software found on the Internet? Do you trust your web browser? Users should be able to execute any software without putting the entire system security at stake. Cappsule virtualizes any software on the fly (e.g. web browser, office suite, media player) into lightweight VMs called cappsules thanks to hardware virtualization. Attacks are confined inside cappsules and therefore don’t have any impact on the host OS. Applications don’t need to be repackaged and their usage remains the same for the end user: it is completely transparent. Moreover, the OS doesn’t need to be reinstalled nor modified.
Tuning NGINX
While NGINX is much younger than other web servers, it has quickly become a popular choice, largely because it is the server of choice for those looking for something lightweight and high-performance.
In today’s article, we’ll be taking an out-of-the-box instance of NGINX and tuning it to get more out of an already high-performance web server. While not a complete tuning guide, this article should provide readers with a solid understanding of tuning fundamentals and a few common NGINX tuning parameters.
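The usual starting points for that kind of tuning look something like the sketch below (the values are illustrative defaults to measure against your own workload, not recommendations):

```nginx
# One worker per CPU core; let each worker handle many connections.
worker_processes auto;

events {
    worker_connections 4096;
}

http {
    # Serve static files straight from the kernel and batch packet headers.
    sendfile on;
    tcp_nopush on;

    # Reuse client connections instead of paying TCP setup per request.
    keepalive_timeout 65;

    # Compress text responses, trading a little CPU for bandwidth.
    gzip on;
    gzip_types text/plain text/css application/json application/javascript;
}
```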


Zabbix: How to set host encryption parameters via the API?
Zabbix 3.0 introduced a major new feature – encryption between Zabbix components. If you’d like to add a new host with encryption enabled, you would go to the documentation of the host.create method… and be surprised. It says nothing about the encryption at all. You might continue to the host object page, but that wouldn’t have anything on encryption either.
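The answer is that host.create simply accepts the host object's TLS properties: tls_connect, tls_accept, tls_psk_identity, and tls_psk, where the values 1, 2, and 4 mean unencrypted, PSK, and certificate respectively. As a hedged sketch (host name, group ID, token, and PSK values are all placeholders), the JSON-RPC payload for a PSK-secured host could be built like this:

```python
import json

def host_create_with_psk(auth_token, hostname, group_id, psk_identity, psk):
    """Build a Zabbix host.create request with PSK encryption enabled.

    tls_connect / tls_accept: 1 = unencrypted, 2 = PSK, 4 = certificate.
    """
    return {
        "jsonrpc": "2.0",
        "method": "host.create",
        "params": {
            "host": hostname,
            "groups": [{"groupid": group_id}],
            "tls_connect": 2,   # connect to the agent using PSK
            "tls_accept": 2,    # accept only PSK connections from the agent
            "tls_psk_identity": psk_identity,
            "tls_psk": psk,
        },
        "auth": auth_token,
        "id": 1,
    }

payload = host_create_with_psk("token", "web01", "2", "PSK 001", "1f" * 16)
print(json.dumps(payload, indent=2))
```

The dict would then be POSTed to the frontend's api_jsonrpc.php endpoint like any other API call.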
Zabbix: Added a template to group, nothing happened
There are some conceptual things new Zabbix users sometimes misunderstand, and a popular one is about adding templates to groups.
Sometimes users expect that adding a template to the same group as hosts will make the template affect the hosts. It does not.
Building an NGINX Access Log Monitoring Dashboard
NGINX is still trailing relatively far behind Apache, but there is little doubt that it is gaining more and more popularity: W3Techs puts NGINX usage at 31%, behind Apache's 51%. This trend persists despite certain difficulties the NGINX community sometimes laments, such as its lack of ease of use and quality documentation. For now, it seems NGINX's low memory usage, concurrency, and high performance are good enough reasons to put those issues aside.
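The raw material for such a dashboard is the access log itself. As a minimal sketch (the sample lines are made up, and this assumes NGINX's default « combined » log format), here is the first widget of any such dashboard, a histogram of HTTP status codes:

```python
import re
from collections import Counter

# NGINX "combined" log format:
# $remote_addr - $remote_user [$time_local] "$request" $status
#   $body_bytes_sent "$http_referer" "$http_user_agent"
LINE = re.compile(
    r'(?P<addr>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

def status_histogram(lines):
    """Count responses per HTTP status code, skipping unparseable lines."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m:
            counts[m.group("status")] += 1
    return counts

sample = [
    '203.0.113.7 - - [23/Sep/2016:10:00:00 +0200] "GET / HTTP/1.1" 200 612 "-" "curl/7.50"',
    '203.0.113.7 - - [23/Sep/2016:10:00:01 +0200] "GET /missing HTTP/1.1" 404 169 "-" "curl/7.50"',
]
print(status_histogram(sample))
```

The same named groups (bytes, agent, request) feed the other panels: bandwidth over time, top user agents, top URLs.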

Software Engineering

Redesigning the HHVM JIT compiler for better performance
With more than 1 billion people using Facebook every day, we have to ensure that our computing infrastructure runs efficiently and scales as we continue to grow. In early 2013, we replaced our original HipHop Compiler, which statically compiled PHP to C++, with the HipHop Virtual Machine (HHVM). HHVM uses a just-in-time (JIT) compilation approach to execute PHP and Hack programs with improved efficiencies while maintaining the flexibility that PHP developers are accustomed to.
Zuul 2: The Netflix Journey to Asynchronous, Non-Blocking Systems
We recently made a major architectural change to Zuul, our cloud gateway. Did anyone even notice!? Probably not… Zuul 2 does the same thing that its predecessor did – acting as the front door to Netflix’s server infrastructure, handling traffic from all Netflix users around the world. It also routes requests, supports developers’ testing and debugging, provides deep insight into our overall service health, protects Netflix from attacks, and channels traffic to other cloud regions when an AWS region is in trouble. The major architectural difference between Zuul 2 and the original is that Zuul 2 is running on an asynchronous and non-blocking framework, using Netty. After running in production for the last several months, the primary advantage (one that we expected when embarking on this work) is that it provides the capability for devices and web browsers to have persistent connections back to Netflix at Netflix scale. With more than 83 million members, each with multiple connected devices, this is a massive scale challenge. By having a persistent connection to our cloud infrastructure, we can enable lots of interesting product features and innovations, reduce overall device requests, improve device performance, and understand and debug the customer experience better. We also hoped that Zuul 2 would offer resiliency benefits and performance improvements in terms of latencies, throughput, and costs. But as you will learn in this post, our aspirations have differed from the results.
The 280-Year-Old Algorithm Inside Google Trips
Algorithms Engineering is a lot of fun because algorithms do not go out of fashion: one never knows when an oldie-but-goodie might come in handy. Case in point: Yesterday, Google announced Google Trips, a new app to assist you in your travels by helping you create your own « perfect day » in a city. Surprisingly, deep inside Google Trips, there is an algorithm that was invented 280 years ago.

Web performances

Performance Impact of Third Party Components
Understanding the performance impact third-party content has on a website isn’t a new thing. Back in 2010, Steve Souders wrote about the complexity of third-party content and published a table on the impact some components had on performance. The world of the web has changed quite a bit since then, but the need to understand the impact third-party content has on page performance has not. If anything, it has become even more important.

Databases Engineering

MySQL & MariaDB

Introducing MySQL InnoDB Cluster – A Hands-On Tutorial
Traditionally, setting up high availability (HA) in MySQL has been a challenging task, especially for people without advanced knowledge of MySQL. From understanding concepts and technologies to the tooling and the specific commands and files to execute and edit, there are many things you need to know even when planning a test deployment (the Quick Start Guide for Group Replication should give you an idea). So many people end up putting off setting up HA until disaster strikes and downtime happens.
MySQL 8.0: Performance Schema Instrumentation of Server Errors
In MySQL 8.0.0, the Performance Schema can now instrument server errors.


Elasticsearch

Instant Aggregations: Rewriting Queries for Fun and Profit
This is the final post in a three-part series about Instant Aggregations. Read how it all started in The Tale of Caching and Why It Matters from Simon Willnauer and Shay Banon and the meaty middle detailed in The Great Query Refactoring: Thou Shalt Only Parse Once. Enjoy the trilogy!

Data Engineering & Analytics

Lead your own data science projects with the 3 Ps
There’s a lot of literature on learning the technical aspect of data science: statistics, machine learning, data munging, big data. This material will serve you well when starting out or working under a lead. But what about when you are ready to spread your wings and lead a project yourself or embark on a project independently? Here you need a different sort of storytelling - the type that communicates why you are working on a project, what the value is, and what you have accomplished. Without these skills, you run the risk of aimlessly seeking a solution without much to show for it. The last thing you want is to be a deer in the headlights when someone asks you what the business value of your work is. Pair the 3 Vs of big data with the 3 Ps of model development to increase the success rate of your project. Read on to learn how to detail the problem, the process, and your progress on any data science project.

Network Engineering

All Quiet in the IPv4 Internet?
In 2016, IPv4 exhaustion is on everyone’s lips: four out of five Regional Internet Registries have run out of freely available address space.

Management & Organization

Site Reliability Engineering: DevOps 2.0
Has there ever been a better time to be in DevOps? TV shows like « Person of Interest » and « Mr. Robot » are getting better at showing what developers actually do, using chunks of working code. Michael Mann’s film « Blackhat » (2015) won praise from Google’s security team for the DevOps accuracy of a few scenes. Look around and you’ll discover elements of DevOps culture filtering out into wider society, such as people in all walks of life discussing their uptime or fast-approaching code lock.
On the other hand, perhaps the biggest thorn in the side of DevOps is that developers and operations teams don’t normally get along well. Developers want to rush ahead and compile some groundbreaking code under extremely tight schedules, while operations teams try to slow everyone down to identify systemic risks from accidents or malicious actors. Both teams want to end up with a better user experience, but getting there becomes a power struggle over what that experience truly means.
The dream that brought DevOps together is of someone who can be half dev and half ops. That split is exactly what the SRE (site reliability engineer) role embodies.
Cognitive Analytics: ‘Operations Thinking’ for Development
In a recent blog post, « Synthetic Monitoring – the Start of the DevOps Monitoring Journey », my colleague Payal Chakravarty discussed how developers, testers, and operations staff all need to ensure their internet and intranet mobile applications and web applications are tested and operate successfully from different points of presence around the world.
Compassion at work
I recently came across the show Brain Games, and the experiment I saw there seemed worth sharing. The episode revolved around compassion.
The experiment unfolded in three parts.