The harvest of links for the week of June 27 to July 1, 2016. Most of them were published on my Twitter account. Here they are, gathered for those who may have missed them.
Security & Privacy
- Getting Started with Shield Document Level Security in Elasticsearch
- Elastic Shield is capable of filtering documents using query criteria. In this blog post I’ll demonstrate how to use this feature by using a simple, two document data set in Elasticsearch where documents will be filtered for the user according to the value of a field and displayed in a simple Kibana visualization. Users and roles will be created with the Users and Roles API.
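The role-with-query mechanism the post demonstrates can be sketched with the Shield role API; the role name, index name (`events`), and field (`department`) below are hypothetical placeholders, not taken from the article:

```
POST /_shield/role/engineering_reader
{
  "indices": [
    {
      "names": [ "events" ],
      "privileges": [ "read" ],
      "query": { "match": { "department": "engineering" } }
    }
  ]
}
```

Any user assigned this role would only ever see documents whose `department` field matches, including in Kibana visualizations.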
- Le chiffrement ne suffira pas
- Encryption, while not yet part of all our everyday habits (far from it, for most users), has clearly become a marketing argument and a priority for the companies that distribute software and services. The general public is now much more receptive to arguments about privacy and security, so online communication services compete with announcements promising and guaranteeing ever greater security, available at the click of a button.
What should we believe, and to whom and to what can we entrust our communications?
The article by Hannes Hauswedell that we have translated helps us sort usefully through the software solutions on the market, points out the pretenses and the flaws, then calmly leads us to consider federated and peer-to-peer solutions built on free software. Networks of trust, in short, close in spirit to the C.H.A.T.O.N.S initiative led by Framasoft, which is already attracting growing interest.
As usual, the comments are open and free if, for example, you would like to add your own discoveries to this necessarily incomplete critical survey.
- etcd3: A new etcd
- Over the past few months, CoreOS has been diligently finalizing the etcd3 API beta, testing the system and working with users to make etcd even better. In practice, etcd3 is already integrated into a large-scale distributed system, Kubernetes, and we have implemented distributed coordination primitives including distributed locks, elections, and software transactional memory, to ensure the etcd3 API is flexible enough to support a variety of applications. Today we’re proud to announce that etcd3 is ready for general use.
- HTTP/2 Server Push with multiple assets per Link header
- In April we announced that we had added experimental support for HTTP/2 Server Push to all CloudFlare web sites. We did this so that our customers could iterate on this new functionality.
Our implementation of Server Push made use of the HTTP Link header as detailed in W3C Preload Working Draft.
We also showed how to make Server Push work from within PHP code and many people started testing and using this feature.
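Since the Preload draft allows several comma-separated link-values in one header, a minimal Go sketch of building such a multi-asset value (with hypothetical asset paths) might look like this:

```go
package main

import (
	"fmt"
	"strings"
)

// buildLinkHeader joins several preload directives into a single Link
// header value, as the Preload draft's comma-separated syntax allows.
func buildLinkHeader(assets []string) string {
	parts := make([]string, 0, len(assets))
	for _, a := range assets {
		parts = append(parts, fmt.Sprintf("<%s>; rel=preload", a))
	}
	return strings.Join(parts, ", ")
}

func main() {
	// Hypothetical paths; a real handler would call
	// w.Header().Add("Link", buildLinkHeader(...)) before writing the body.
	fmt.Println(buildLinkHeader([]string{"/style.css", "/app.js"}))
}
```

The edge server reads this header from the origin's response and pushes each listed asset to the client.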
- Continuous Integration at Segment
- As part of our push to open up what’s going on internally at Segment – we’d like to share how we run our CI builds. Most of our approaches follow standard practices, but we wanted to share a few tips and tricks we use to speed up our build pipeline.
Powering all of our builds are CircleCI, GitHub, and Docker Hub. Whenever there’s a push to GitHub, the repository triggers a build on CircleCI. If that build is a tagged release and passes the tests, we build an image for that container.
The image is then pushed to Docker Hub, and is ready to be deployed to our production infrastructure.
- NGINX and Zookeeper, Dynamic Load Balancing and Deployments
- This post is adapted from a webinar hosted at nginx.conf from September 22-24, 2015 by Derek DeJonghe.
- How Facebook Live Streams to 800,000 Simultaneous Viewers
- Fewer companies know how to build world-spanning distributed services than there are countries with nuclear weapons. Facebook is one of those companies, and Facebook Live, Facebook’s new live video streaming product, is one of those services.
- Prometheus and Kubernetes up and running
- You may have read recently on this blog about CoreOS investing development resources in the open source Prometheus monitoring system. Prometheus provides complete container cluster monitoring: instrumentation, collection, querying, and alerting. Monitoring is an integral part of ensuring infrastructure reliability and performance through observability, and Prometheus is a unified approach to monitoring all the components of a Kubernetes cluster, including the control plane, the worker nodes, and the applications running on the cluster.
- Increasing Resource Efficiency with Microscaling
- According to Gartner, the average data center utilization worldwide is around 10 to 15 percent, which isn’t great for resource efficiency. The leaders in resource utilization, Google and Netflix in particular, do a lot better at 50 to 70 percent.
Unfortunately, resource efficiency is probably going to get worse if we don’t do anything about it. Public cloud and automation tools make it easy to over-provision. Often that’s the only way to handle complexity and unpredictable demand (after all, it’s generally better to over-provision than to fall over).
- Lossless compression with Brotli for a bit of Pied Piper on the backend
- In HBO’s Silicon Valley, lossless video compression plays a pivotal role for Pied Piper as they struggle to stream HD content at high speed.
Inspired by Pied Piper, we created our own version of their algorithm last year at Hack Week. In fact, we’ve extended that work and have a bit-exact, lossless media compression algorithm that achieves extremely good results on a wide array of images. (Stay tuned for more on that!)
However, to help our users sync and collaborate faster, we also need to work with a standardized compression format that already ships with most browsers. In that vein, we’ve been working on open source improvements to the Brotli codec, which will make it possible to ship bits to our business customers using 4.4% less of their bandwidth than through gzip.
- The complete guide to Go net/http timeouts
- When writing an HTTP server or client in Go, timeouts are among the easiest and most subtle things to get wrong: there are many to choose from, and a mistake can have no consequences for a long time, until the network glitches and the process hangs.
HTTP is a complex multi-stage protocol, so there’s no one-size-fits-all solution to timeouts. Think about a streaming endpoint versus a JSON API versus a Comet endpoint. Indeed, the defaults are often not what you want.
In this post I’ll take apart the various stages you might need to apply a timeout to, and look at the different ways to do it, on both the Server and the Client side.
- How Uber Engineering Increases Safe Driving with Telematics
- Across the globe, nearly 1,250,000 people die in road crashes each year¹. At Uber, we’re determined to decrease this number by raising awareness of driving patterns to our partners.
In fact, an entire team at Uber focuses on building technology to encourage safer driving. On Uber Engineering’s Driving Safety team, we write code to measure indicators of unsafe driving and help driver partners stay safe on the road. We measure our success by how much we can decrease car crashes, driving-related complaints, and trips during which we detect unsafe driving.
- Jepsen: Crate 0.54.9 version divergence
- In the last Jepsen analysis, we saw that RethinkDB 2.2.3 could encounter spectacular failure modes due to cluster reconfiguration during a partition. In this analysis, we’ll talk about Crate, and find out just how many versions a row’s version identifies.
- Visualizing Concurrency in Go
- One of the strongest sides of the Go programming language is its built-in concurrency, based on Tony Hoare’s CSP paper. Go is designed with concurrency in mind and allows us to build complex concurrent pipelines. But have you ever wondered what various concurrency patterns look like?
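The kind of channel pipeline such visualizations animate can be sketched in a few lines; the generator/square stages here are a generic illustration, not taken from the article:

```go
package main

import "fmt"

// gen emits the given ints on a channel and closes it when done.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square reads ints from in, squares them, and forwards the results.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

func main() {
	// Each stage runs in its own goroutine; ranging over the final
	// channel drains the pipeline.
	for v := range square(gen(1, 2, 3)) {
		fmt.Println(v)
	}
}
```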
- Introducing automatic object detection to visual search
- When we launched visual search last year, we gave a first look at what’s possible when you use images as search queries. Now, more than 130 million visual searches are done every month, as people search for the objects, styles and colors they see in Pins and get related recommendations. It’s a whole new kind of search, and a technological challenge.
- Memory optimization for feeds on Android
- Millions of people use Facebook on Android devices every day, scrolling through News Feed, Profile, Events, Pages, and Groups to interact with the people and information they care about. All of these different feed types are powered by a platform created by the Android Feed Platform team, so any optimization we make to the Feed platform could potentially improve performance across our app. We focus on scroll performance, as we want people to have a smooth experience when scrolling through their feeds.
- Elasticsearch Percolator Continues to Evolve
- In 5.0 the percolator is much more flexible and has many improvements; for example, it can now skip evaluating most queries. This is part of the second major refactoring since Elasticsearch 1.0.0, which made the percolator scale with the number of shards and nodes in your cluster. The underlying mechanism, however, hadn’t changed since the feature was released back in version 0.15.0: unless you made use of query metadata tagging, the execution time of the percolator was always linear in the number of percolator queries, because every query had to be evaluated every time. The main purpose of this refactoring is to address that, so that in many cases not all percolator queries have to be evaluated when percolating a document.
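In 5.0 terms, registering and matching stored queries looks roughly like the following sketch (the index, type, and field names are hypothetical):

```
PUT /queries
{
  "mappings": {
    "doc": {
      "properties": {
        "query":   { "type": "percolator" },
        "message": { "type": "text" }
      }
    }
  }
}

PUT /queries/doc/1
{ "query": { "match": { "message": "error" } } }

GET /queries/_search
{
  "query": {
    "percolate": {
      "field": "query",
      "document_type": "doc",
      "document": { "message": "disk error on node-2" }
    }
  }
}
```

The search returns the stored queries that match the supplied document, instead of documents matching a query.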
- 10 Elasticsearch Concepts You Need to Learn
- Getting acquainted with ELK lingo is one of the first things you’re going to have to do when starting out with the stack. Just like with any programming language, there are some basic concepts that once internalized, make the learning curve less traumatic.
We’ve put together ten of the most important concepts you’re going to want to understand. While the concepts apply specifically to Elasticsearch, they are also important to understand when operating the stack as a whole. When applicable — and to make it easier to understand — we will compare concepts to parallel terms in the world of relational databases.
MySQL & MariaDB
- Rescuing a crashed pt-online-schema-change with pt-archiver
- This article discusses how to salvage a crashed pt-online-schema-change by leveraging pt-archiver and executing queries to ensure that the data gets accurately migrated. I will show you how to continue the data copy process, and how to safely close out the pt-online-schema-change via manual operations such as RENAME TABLE and DROP TRIGGER commands. The normal process to recover from a crashed pt-online-schema-change is to drop the triggers on your original table and drop the new table created by the script. Then you would restart pt-online-schema-change. In this case, this wasn’t possible.
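For reference, the manual close-out looks roughly like this sketch, to be run only after the data copy has been completed and verified; the database, table, and trigger names are hypothetical (pt-online-schema-change derives the real ones from your table):

```sql
-- Drop the triggers pt-online-schema-change left on the original table.
DROP TRIGGER IF EXISTS pt_osc_mydb_mytbl_ins;
DROP TRIGGER IF EXISTS pt_osc_mydb_mytbl_upd;
DROP TRIGGER IF EXISTS pt_osc_mydb_mytbl_del;

-- Atomically swap the new table into place, keeping the old one around.
RENAME TABLE mydb.mytbl TO mydb._mytbl_old,
             mydb._mytbl_new TO mydb.mytbl;
```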
- MySQL with Docker – Performance characteristics
- Docker presents new levels of portability and ease of use when it comes to deploying systems. We have for some time now released Dockerfiles and scripts for MySQL products, and are not surprised to see it steadily gaining traction in the development community.
Data Engineering & Analytics
- Discovery and Consumption of Analytics Data at Twitter
- The Data Platform team at Twitter maintains systems to support and manage the production and consumption of data for a variety of business purposes, including publicly reported metrics (e.g., monthly or daily active users), recommendations, A/B testing, ads targeting, etc. We run some of the largest Hadoop clusters in the world, a few of them larger than 10K nodes, storing hundreds of petabytes of data, with more than 100K daily jobs processing tens of petabytes of data per day. On the Hadoop Distributed File System (HDFS), Scalding is used for ETL (Extract, Transform, and Load), data science, and analytics, while Presto is employed for interactive querying. Vertica (or MySQL) is used for querying commonly aggregated datasets and for Tableau dashboards. Manhattan is our distributed database used to serve live real-time traffic.
- Wide & Deep Learning: Better Together with TensorFlow
- The human brain is a sophisticated learning machine, forming rules by memorizing everyday events (« sparrows can fly » and « pigeons can fly ») and generalizing those learnings to apply to things we haven’t seen before (« animals with wings can fly »). Perhaps more powerfully, memorization also allows us to further refine our generalized rules with exceptions (« penguins can’t fly »). As we were exploring how to advance machine intelligence, we asked ourselves the question—can we teach computers to learn like humans do, by combining the power of memorization and generalization?
- Stream Processing Hard Problems - Part 1: Killing Lambda
- We live in an age where we want to know relevant things happening around the world as soon as they happen; an age where digital content is updated instantly based on our likes and dislikes; an age where credit card fraud, security breaches, device malfunctions and site outages need to be detected and remedied as soon as they happen. It is an age where events are captured at scale and processed in real time. Real time event processing (stream processing) is not new, however it is now ubiquitous and has reached massive scale.
- Securing Network Infrastructure for DNS Servers
- If you work for an Internet service provider, your DNS servers likely see DoS or DDoS attacks every day, or at least every week. We want to mitigate the side effects of these attacks in a simple way: with a stateless firewall filter or access control list (ACL), without any Intrusion Detection and Prevention (IDP) or Intrusion Detection System (IDS). This document can’t stop the attacks, but it can help mitigate them.
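As one concrete flavor of such a filter, a Linux sketch could rate-limit queries per source address; the 100/second threshold here is purely illustrative and not from the article:

```
# Drop sources sending more than 100 DNS queries per second (illustrative).
iptables -A INPUT -p udp --dport 53 \
  -m hashlimit --hashlimit-name dns --hashlimit-mode srcip \
  --hashlimit-above 100/second -j DROP
```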
Management & Organization
- 3 Reasons Why People Write Insanely Bad Code
- I have been contemplating over the last few days why we as an industry keep producing bad code all the time.
Developers who are able to build systems with high-quality code are few and far between; as the cliché goes, like a needle in a haystack.
- The DevOps (R)evolution: Part 1
- There is no doubting that DevOps is the new kid on the block; every organization I talk with these days has a vague notion of what DevOps is and a sense of which DevOps practices they would like to introduce.
- The DevOps (R)evolution: Part 2
- Part 1 of this post suggested that we should recognize that the shift to a DevOps world is not a revolution, but an evolution of our rich heritage of delivery practices. In part 2 I conclude my summary of the different eras of « best practice » before addressing the questions originally posed.