AI

Putting Machine Learning into Production Systems:
Data validation and software engineering for machine learning

Breck et al. share details of the pipelines used at Google to validate petabytes of production data every day. With so many moving parts, it is important to be able to detect and investigate changes in data distributions before they can impact model performance. "Software Engineering for Machine Learning: A Case Study" shares lessons learned at Microsoft as machine learning started to pervade more and more of the company's systems, moving from specialized machine-learning products to being an integral part of many products and services.
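
To make the data-validation idea concrete, here is a minimal sketch: compare a feature's serving-time distribution against its training-time baseline and flag statistically significant shifts for investigation. The values, threshold, and choice of a two-sample Kolmogorov-Smirnov test are illustrative assumptions; the pipelines Breck et al. describe use learned schemas and run at petabyte scale.

```python
# Minimal drift check: compare a feature's serving distribution
# against its training baseline. Illustrative only -- the data,
# threshold, and KS test are assumptions, not the schema-based
# validation Breck et al. actually describe.
import numpy as np
from scipy.stats import ks_2samp

def drifted(train_values: np.ndarray, serving_values: np.ndarray,
            alpha: float = 0.01) -> bool:
    """True if the serving distribution differs significantly
    from the training baseline (two-sample Kolmogorov-Smirnov)."""
    return ks_2samp(train_values, serving_values).pvalue < alpha

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=10_000)    # baseline feature values
serving = rng.normal(0.3, 1.0, size=10_000)  # mean has shifted in production
print(drifted(train, serving))               # True -> investigate before it hurts the model
```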

by Adrian Colyer | October 7, 2019

The Effects of Mixing Machine Learning and Human Judgment:
Collaboration between humans and machines does not necessarily lead to better outcomes.

Based on the theoretical findings from the existing literature, some policymakers and software engineers contend that algorithmic risk assessments such as the COMPAS software can alleviate the incarceration epidemic and the occurrence of violent crimes by informing and improving decisions about policing, treatment, and sentencing. The authors' experiments suggest otherwise. Considered in tandem, their findings indicate that collaboration between humans and machines does not necessarily lead to better outcomes, and human supervision does not sufficiently address problems when algorithms err or demonstrate concerning biases.

by Michelle Vaccaro, Jim Waldo | September 16, 2019

Troubling Trends in Machine Learning Scholarship:
Some ML papers suffer from flaws that could mislead the public and stymie future research.

Flawed scholarship threatens to mislead the public and stymie future research by compromising ML's intellectual foundations. Indeed, many of these problems have recurred cyclically throughout the history of AI and, more broadly, in scientific research. In 1976, Drew McDermott chastised the AI community for abandoning self-discipline, warning prophetically that "if we can't criticize ourselves, someone else will save us the trouble." The current strength of machine learning owes much to a large body of rigorous research to date, both theoretical and empirical. By promoting clear scientific thinking and communication, our community can sustain the trust and investment it currently enjoys.

by Zachary C. Lipton, Jacob Steinhardt | April 24, 2019

Knowledge Base Construction in the Machine-learning Era:
Three critical design points: Joint-learning, weak supervision, and new representations

More information is accessible today than at any other time in human history. From a software perspective, however, the vast majority of this data is unusable, as it is locked away in text, PDFs, web pages, images, and other hard-to-parse formats. The goal of knowledge base construction is to extract structured information automatically from this "dark data" so that it can be used in downstream applications for search, question-answering, link prediction, visualization, modeling, and much more.
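
As a concrete illustration of the weak-supervision design point, the sketch below combines several noisy, hand-written heuristics ("labeling functions") into a training label by majority vote. The rules and example sentences are hypothetical; production systems in this line of work, such as the authors' Snorkel project, learn to weight and denoise the votes rather than counting them equally.

```python
# Weak supervision in miniature: noisy heuristic "labeling functions"
# vote on whether a sentence expresses a founder-of relation, and the
# votes are combined into a training label. Rules and examples are
# hypothetical; real systems learn to weight and denoise the votes.
import re
from statistics import mode

ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

def lf_founded_pattern(sentence: str) -> int:
    # "<Name> founded <Name>" is weak evidence for the relation.
    return POSITIVE if re.search(r"[A-Z]\w+ founded [A-Z]\w+", sentence) else ABSTAIN

def lf_negation(sentence: str) -> int:
    # An explicit negation is weak evidence against it.
    return NEGATIVE if " never " in sentence else ABSTAIN

def lf_synonyms(sentence: str) -> int:
    return POSITIVE if ("co-founded" in sentence or "established" in sentence) else ABSTAIN

LABELING_FUNCTIONS = (lf_founded_pattern, lf_negation, lf_synonyms)

def weak_label(sentence: str) -> int:
    """Majority vote over the labeling functions that did not abstain."""
    votes = [lf(sentence) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v != ABSTAIN]
    return mode(votes) if votes else ABSTAIN  # Python 3.8+: ties take the first mode

print(weak_label("Grace Hopper founded Acme Corp."))  # 1 (positive)
print(weak_label("Ada never founded Acme Corp."))     # 0 (negative)
```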

by Alex Ratner, Christopher Ré | July 26, 2018

The Mythos of Model Interpretability:
In machine learning, the concept of interpretability is both important and slippery.

Supervised machine-learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world?

by Zachary C. Lipton | July 17, 2018

Prediction-Serving Systems:
What happens when we wish to actually deploy a machine learning model to production?

This installment of Research for Practice features a curated selection from Dan Crankshaw and Joey Gonzalez, who provide an overview of machine-learning serving systems. What happens when we wish to actually deploy a machine learning model to production, and how do we serve predictions with high accuracy and high computational efficiency? Their picks span cutting-edge techniques in database-level integration, video processing, and prediction middleware.
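
To give a flavor of what prediction middleware does, here is a toy serving layer that wraps a trained model with response caching and request batching. All names and numbers are hypothetical assumptions for illustration; real systems such as Clipper, from the same research group, add adaptive batching, latency objectives, and multi-framework support.

```python
# A toy prediction-serving layer: caching plus request batching in
# front of a trained model. Hypothetical sketch, not a real system.
from functools import lru_cache
from typing import Sequence

class Model:
    """Stand-in for any trained model exposing a vectorized predict."""
    def predict_batch(self, xs: Sequence[float]) -> list[float]:
        return [2.0 * x + 1.0 for x in xs]  # pretend this function was learned

class PredictionServer:
    def __init__(self, model: Model, max_batch: int = 32):
        self.model = model
        self.max_batch = max_batch

    @lru_cache(maxsize=4096)  # repeated hot queries skip the model entirely
    def predict_one(self, x: float) -> float:
        return self.model.predict_batch([x])[0]

    def predict_many(self, xs: Sequence[float]) -> list[float]:
        # One vectorized model call per batch amortizes per-call overhead.
        out: list[float] = []
        for i in range(0, len(xs), self.max_batch):
            out.extend(self.model.predict_batch(list(xs[i:i + self.max_batch])))
        return out

server = PredictionServer(Model())
print(server.predict_one(3.0))           # 7.0; cached for subsequent requests
print(server.predict_many([1.0, 2.0]))   # [3.0, 5.0]
```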

by Dan Crankshaw, Joseph Gonzalez | April 25, 2018

Making Money Using Math:
Modern applications are increasingly using probabilistic machine-learned models.

A big difference between human-written code and learned models is that the latter are usually not represented by text and hence are not understandable by human developers or manipulable by existing tools. The consequence is that none of the traditional software engineering techniques for conventional programs (such as code reviews, source control, and debugging) are applicable anymore. Since incomprehensibility is not unique to learned code, however, these aspects are not of concern here.

by Erik Meijer | February 22, 2017

The Chess Player who Couldn’t Pass the Salt:
AI: Soft and hard, weak and strong, narrow and general

The problem inherent in almost all nonspecialist work in AI is that humans actually don’t understand intelligence very well in the first place. Now, computer scientists often think they understand intelligence because they have so often been the "smart" kid, but that’s got very little to do with understanding what intelligence actually is. In the absence of a clear understanding of how the human brain generates and evaluates ideas, which may or may not be a good basis for the concept of intelligence, we have introduced numerous proxies for intelligence, the first of which is game-playing behavior.

by George Neville-Neil | December 26, 2016

Natural Language Translation at the Intersection of AI and HCI:
Old questions being answered with both AI and HCI

The fields of artificial intelligence (AI) and human-computer interaction (HCI) are influencing each other like never before. Widely used systems such as Google Translate, Facebook Graph Search, and RelateIQ hide the complexity of large-scale AI systems behind intuitive interfaces. But relations were not always so auspicious. The two fields emerged at different points in the history of computer science, with different influences, ambitions, and attendant biases. AI aimed to construct a rival, and perhaps a successor, to the human intellect. Early AI researchers such as McCarthy, Minsky, and Shannon were mathematicians by training, so theorem-proving and formal models were attractive research directions.

by Spence Green, Jeffrey Heer, Christopher D. Manning | June 28, 2015

AI Gets a Brain:
New technology allows software to tap real human intelligence.

In the 50 years since John McCarthy coined the term artificial intelligence, much progress has been made toward identifying, understanding, and automating many classes of symbolic and computational problems that were once the exclusive domain of human intelligence. Much work remains in the field because humans still significantly outperform the most powerful computers at completing such simple tasks as identifying objects in photographs, something children can do even before they learn to speak.

by Jeff Barr, Luis Felipe Cabrera | June 30, 2006

AI in Computer Games:
Smarter games are making for a better user experience. What does the future hold?

If you’ve been following the game development scene, you’ve probably heard many remarks such as: "The main role of graphics in computer games will soon be over; artificial intelligence is the next big thing!" Although you should hardly buy into such statements, there is some truth in them. The quality of AI (artificial intelligence) is a high-ranking feature for game fans in making their purchase decisions and an area with incredible potential to increase players’ immersion and fun.

by Alexander Nareyek | February 24, 2004
