Machine Learning

Vol. 16 No. 3 – May-June 2018

Mind Your State for Your State of Mind

The interactions between storage and applications can be complex and subtle.

Applications have had an interesting evolution as they have moved into the distributed and scalable world. Storage and its cousin, the database, have evolved alongside them. The semantics, performance, and failure models of storage and applications often do a subtle dance as they change in support of shifting business requirements and environmental challenges. Adding scale to the mix has really stirred things up. This article looks at some of these issues and their impact on systems.

by Pat Helland

GitOps: A Path to More Self-service IT

IaC + PR = GitOps

GitOps lowers the bar for creating self-service versions of common IT processes, making it easier to realize the return in the ROI calculation. GitOps not only achieves this, but also encourages desired behaviors in IT systems: better testing, reduced bus-factor risk, reduced wait time, more infrastructure logic handled programmatically through IaC, and time redirected from manual toil toward creating and maintaining automation.

by Thomas A. Limoncelli
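
The mechanism named in the tagline, IaC + PR = GitOps, boils down to a reconciliation loop: declared infrastructure state lives in a git repository, changes arrive only through reviewed pull requests, and an automated agent converges the running system toward whatever the repository says. The sketch below is a minimal illustration of that loop, not code from the article; the repository URL, state file, and apply step are hypothetical placeholders.

    # Minimal GitOps reconciliation loop (illustrative sketch only).
    # REPO_URL, STATE_FILE, and apply() are hypothetical stand-ins.
    import json
    import subprocess
    import time

    REPO_URL = "https://example.com/infra-config.git"  # hypothetical repo
    CHECKOUT = "/tmp/infra-config"
    STATE_FILE = "desired_state.json"                  # hypothetical file

    def desired_state() -> dict:
        """Pull the latest commit and read the declared (desired) state."""
        subprocess.run(["git", "-C", CHECKOUT, "pull", "--ff-only"], check=True)
        with open(f"{CHECKOUT}/{STATE_FILE}") as f:
            return json.load(f)

    def actual_state() -> dict:
        """Query the running system; stubbed out here."""
        return {"replicas": 2}

    def apply(diff: dict) -> None:
        """Converge the system; in practice this invokes an IaC tool."""
        print("applying change:", diff)

    def reconcile_forever(interval_s: int = 60) -> None:
        subprocess.run(["git", "clone", REPO_URL, CHECKOUT], check=False)
        while True:
            want, have = desired_state(), actual_state()
            diff = {k: v for k, v in want.items() if have.get(k) != v}
            if diff:
                apply(diff)  # the change reached the repo via a reviewed PR
            time.sleep(interval_s)

    if __name__ == "__main__":
        reconcile_forever()

Everything operationally meaningful here flows through the repository: review, audit history, and rollback all come for free from git, which is what makes the self-service story so cheap to build.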

The Mythos of Model Interpretability

In machine learning, the concept of interpretability is both important and slippery.

Supervised machine-learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world? Models should be not only good, but also interpretable, yet the task of interpretation appears underspecified. The academic literature has offered diverse and sometimes non-overlapping motivations for interpretability, along with myriad techniques for rendering models interpretable, yet it is not clear what common properties unite these techniques. Despite this ambiguity, many authors proclaim their models to be interpretable axiomatically, absent further argument. This article seeks to refine the discourse on interpretability. First, it examines the objectives of previous papers addressing interpretability, finding them to be diverse and occasionally discordant. Then it explores model properties and techniques thought to confer interpretability, identifying transparency to humans and post hoc explanations as competing notions. Throughout, the feasibility and desirability of different conceptions of interpretability are discussed. The article questions the oft-made assertions that linear models are interpretable and that deep neural networks are not.

by Zachary C. Lipton
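
To make the closing contrast concrete, here is a small sketch of my own (not from the paper): the "transparency" usually claimed for linear models amounts to reading their weights directly, while a neural network can still be probed post hoc, for instance by perturbing one input at a time. The dataset and model choices are illustrative assumptions only.

    # Transparency vs. post hoc explanation (illustrative sketch only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # feature 2 is irrelevant

    # Transparency: a linear model's learned weights can be read off
    # directly, which is why linear models are so often called interpretable.
    linear = LogisticRegression().fit(X, y)
    print("linear weights:", linear.coef_[0])

    # Post hoc explanation: treat the network as a black box and measure
    # how much its prediction moves when each feature is nudged (a crude,
    # perturbation-style local explanation).
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X, y)
    x0 = X[:1]
    base = net.predict_proba(x0)[0, 1]
    for j in range(X.shape[1]):
        x = x0.copy()
        x[0, j] += 0.1
        print(f"feature {j} sensitivity:", net.predict_proba(x)[0, 1] - base)

Neither readout settles whether a model is interpretable in any deeper sense; that ambiguity is precisely what the article takes aim at.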