ACM TechNews


Timely Topics for IT Professionals


Recent News

Research Scientists to Use Network Much Faster Than Internet

The New York Times

The U.S. National Science Foundation has provided a five-year, $5-million grant to deploy a series of ultra-high-speed fiber-optic cables to link West Coast university laboratories and supercomputer centers into the Pacific Research Platform. The network is designed to keep up with the acceleration of data compilation in disciplines such as physics, astronomy, and genetics, moving torrents of data at speeds of 10 Gbps to 100 Gbps among participating labs and institutions. The network will not have a direct Internet connection, and Larry Smarr at the University of California, San Diego's (UCSD) California Institute for Telecommunications and Information Technology says it also will function as a template for future computer networks. "I believe that this infrastructure will be for decades to come the kind of architecture by which you use petascale and exascale computers," Smarr says. Moreover, the platform has been outfitted with hardware security measures to shield it from cyberattacks that typically target Internet-linked computers. The network also will enable new types of distributed computing for scientific applications. UCSD's Frank Wuerthwein cites as an example the ability, as high-speed links become more widely available, to migrate experimental data to a single location where researchers running experiments at remote sites can use it.
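To put the quoted link speeds in perspective, a back-of-the-envelope calculation helps; the petabyte-scale dataset size and the assumption of ideal sustained throughput below are illustrative, not figures from the article.

```python
# Time to move a petabyte-scale dataset at the quoted link speeds,
# assuming ideal sustained throughput and decimal (SI) units.
def transfer_hours(num_bytes, gbps):
    """Hours to transfer num_bytes at gbps gigabits per second."""
    return num_bytes * 8 / (gbps * 1e9) / 3600

PETABYTE = 1e15  # bytes

slow = transfer_hours(PETABYTE, 10)    # ~222 hours (roughly nine days)
fast = transfer_hours(PETABYTE, 100)   # ~22 hours (under a day)
```

At 10 Gbps a petabyte takes over a week of sustained transfer; at 100 Gbps it fits in a day, which is the difference that makes routine data migration between labs practical.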

From "Research Scientists to Use Network Much Faster Than Internet"
The New York Times (07/31/15) John Markoff
View Full Article

Facial Recognition Tool 'Works in Darkness'

BBC News

A new tool can identify people in complete darkness by using their thermal signature and matching infrared images with ordinary photos, employing a deep neural network to process the pictures and recognize faces. Saquib Sarfraz and Rainer Stiefelhagen at the Karlsruhe Institute of Technology are behind the research into facial recognition in thermal images. During testing, the researchers say the system demonstrated an 80-percent success rate when several visible-light images of a subject were available, dropping to 55 percent when only a single image was available. However, "more training data and a more powerful architecture" could lead to better results, Sarfraz says. Tom Heseltine, head of research for U.K. face-recognition company Aurora, says the ability to use thermal infrared and match the images against standard color photographs is an interesting approach, noting the biggest advantage is the ability to operate in the dark without active infrared illumination. The researchers note that with improved accuracy, the system could serve as a law-enforcement tool.
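At its core, the matching step compares a thermal probe image against a gallery of visible-light photos. A toy nearest-neighbor search over feature vectors sketches the idea; in the actual system a deep neural network learns the cross-modal embedding, and the stub vectors and names below are purely illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def best_match(probe, gallery):
    """Return the gallery identity whose embedding is most similar."""
    return max(gallery, key=lambda name: cosine(probe, gallery[name]))

# Toy embeddings standing in for deep-network outputs.
gallery = {"alice": [1.0, 0.0, 0.2], "bob": [0.0, 1.0, 0.2]}
thermal_probe = [0.9, 0.1, 0.2]   # embedding of an infrared capture
```

The hard part, and the contribution of the research, is producing embeddings in which a thermal capture of a face lands near visible-light photos of the same person; the similarity search itself is straightforward.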

From "Facial Recognition Tool 'Works in Darkness'"
BBC News (07/30/15) Ian Westbrook
View Full Article

NASA and Google's Quantum Artificial Intelligence Laboratory

iTech Post

A joint project between the U.S. National Aeronautics and Space Administration's (NASA) Ames Research Center, Google, and the Universities Space Research Association seeks to apply quantum computing to artificial intelligence (AI). The Quantum AI Laboratory team's goal is to collaborate on the implementation and testing of designs for inference processors and quantum optimization, based on knowledge from the D-Wave quantum annealing architecture and recent theoretical breakthroughs. The lab employs NASA's 512-qubit Vesuvius D-Wave Two quantum computer, which has been retrofitted to isolate it from noise and vibrations and to cool it to near absolute zero. The Vesuvius system is being used by NASA scientists to investigate areas where quantum computing may significantly improve the agency's ability to solve optimization problems in space exploration, aeronautics, and Earth and space sciences. Pattern recognition, machine learning, distributed coordination and navigation, mission planning and scheduling, and system diagnostics and anomaly detection are among its potential uses. The advancement of machine learning is relevant to Google's success, and the company has set a new objective of applying its quantum-computing experiments to real-world problems. One goal is building more precise models for myriad processes by combining the power of traditional data centers with highly specialized extreme computing.
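The article does not detail the problems themselves, but D-Wave-style quantum annealers accept optimization problems cast as QUBO (quadratic unconstrained binary optimization) instances. A brute-force sketch shows what "solving" such an instance means at toy scale; the hardware anneals toward the minimum-energy configuration instead of enumerating.

```python
from itertools import product

def solve_qubo(Q):
    """Minimize sum over i,j of Q[i][j] * x[i] * x[j] for binary x."""
    n = len(Q)
    best_bits, best_energy = None, float("inf")
    for bits in product((0, 1), repeat=n):
        energy = sum(Q[i][j] * bits[i] * bits[j]
                     for i in range(n) for j in range(n))
        if energy < best_energy:
            best_bits, best_energy = bits, energy
    return best_bits, best_energy

# Toy instance: rewards setting either bit, penalizes setting both.
Q = [[-1, 2],
     [0, -1]]
```

Scheduling and diagnostics problems of the kind NASA cites get encoded this way, with constraints folded into the quadratic penalties; exhaustive search is exponential in the number of bits, which is where annealing hardware is hoped to help.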

From "NASA and Google's Quantum Artificial Intelligence Laboratory"
iTech Post (07/30/2015) Vlad Tverdohleb
View Full Article

Big Data Challenge for Food Resilience

Government Computer News

Microsoft Research and the U.S. Department of Agriculture (USDA) have launched a new contest with the goal of exploring the impact of climate change on the U.S.'s food system. The Innovation Challenge for Food Resilience encourages developers and researchers to build applications that make use of USDA data to provide actionable insights to farmers, agriculture businesses, scientists, or consumers. USDA will make key datasets available on Microsoft's Azure cloud computing platform, enabling complex models to be processed in a timely manner and results to be delivered remotely to users on laptops and mobile devices. Participants can combine the data with data from other government agencies. Microsoft has built a Farm Data Dashboard, which provides a basic interface to the datasets on Azure, and the company will grant cloud-computing awards to aid participants. Users can choose from more than 31 million available records and either download them in bulk or use the application programming interface (API) to pull specific data. The challenge offers $60,000 in prizes, including a top prize of $25,000. The deadline for entries is Nov. 20, and winners will be picked in December. "Microsoft and the USDA...hope that the challenge will provide a great incentive for developers and researchers interested in data science to put together some great applications helping address the USA's food resiliency needs," says Microsoft Research's Daron Green.

From "Big Data Challenge for Food Resilience"
Government Computer News (07/30/15) Derek Major
View Full Article

How Google Translate Squeezes Deep Learning Onto a Phone

Google Research Blog

The latest version of Google Translate uses deep neural nets to enable visual text recognition and translation on a mobile phone without an Internet link. The first step is the app identifying letters in the image taken by the phone camera, which it does by finding blobs of pixels with similar color that also are near other similar blobs of pixels. A convolutional neural network trained on letters and non-letters helps the app recognize each letter, and the next step is to look up the resulting words in a dictionary to get translations. The final step involves rendering the translation on top of the original words in the same style as the original, by examining the colors surrounding the letters, using them to erase the original letters, and then drawing the translation on top in the original foreground color. To support visual, real-time translation on low-end mobile phones without a cloud connection, Google researchers developed a very small neural net and put severe limits on how much information it attempts to handle. The researchers built tools that would yield a fast iteration time and good visualizations. Achieving real-time performance required heavy optimization and hand-tuning of the math operations, using the mobile processor's SIMD instructions and adjusting the matrix multiplies so processing fits into all levels of cache memory.
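The stages described above, finding letter-like blobs, classifying them, and looking up a translation, can be caricatured in a few dozen lines. The blob grouping here is a plain flood fill over dark pixels, and the "classifier" and dictionary are toy stand-ins for the convolutional network and real lexicon, not Google's implementation.

```python
def find_letter_blobs(image, threshold=128):
    """Group dark pixels into connected blobs (4-connectivity flood fill)."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if image[y][x] < threshold and not seen[y][x]:
                seen[y][x] = True
                stack, blob = [(y, x)], []
                while stack:
                    cy, cx = stack.pop()
                    blob.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w and
                                not seen[ny][nx] and image[ny][nx] < threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs

def classify_blob(blob):
    """Stand-in for the convolutional letter classifier (size-based toy)."""
    return "a" if len(blob) > 2 else "i"

DICTIONARY = {"a": "un", "i": "yo"}  # toy English-to-Spanish lookup

def translate_image(image):
    """Blob detection -> letter classification -> dictionary lookup."""
    letters = [classify_blob(b) for b in find_letter_blobs(image)]
    return [DICTIONARY.get(ch, ch) for ch in letters]
```

The rendering stage is omitted; it would sample background and foreground colors around each blob, paint the background color over the blob, and redraw the translated text in the foreground color.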

From "How Google Translate Squeezes Deep Learning Onto a Phone"
Google Research Blog (07/29/15) Otavio Good
View Full Article

A Programming Language for Robot Swarms

Technology Review

École Polytechnique de Montréal researchers have developed Buzz, a programming language designed to govern the movements of heterogeneous robot swarms and produce self-organized behavior. The researchers say Buzz accommodates two opposing swarm-control strategies--a bottom-up approach in which each robot is controlled individually, and a top-down approach in which the swarm is controlled en masse. "We believe that a language for robot swarms must combine both bottom-up and top-down primitives, allowing the developer to pick the most comfortable level of abstraction to express a swarm algorithm," the researchers say. They also note Buzz enables intuitive command combinations with predictable outcomes, making it relatively simple to use. In addition, the language is scalable, so it can be used with swarms of different sizes. Team leader Carlo Pinciroli says the lack of a standardized programming language for swarms is a significant obstacle to future progress because there is no easy way for researchers to share their work and build on each other's advances. "We believe that one of the most important aspects of Buzz is its potential to become an enabler for future research on real-world, complex swarm robotics systems," the researchers note. Among the team's future plans is developing a library of established swarm behaviors that will become building blocks for the future work of others.
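The article does not show Buzz's own syntax, so the contrast between the two strategies is sketched below in plain Python: a per-robot movement rule (bottom-up) that a swarm-level operation (top-down) applies en masse. All names are illustrative, not Buzz primitives.

```python
class Robot:
    def __init__(self, x):
        self.x = x

    def step_toward(self, target, speed=1.0):
        """Bottom-up primitive: one robot's individual movement rule."""
        self.x += max(-speed, min(speed, target - self.x))

class Swarm:
    def __init__(self, robots):
        self.robots = robots

    def foreach(self, action):
        """Top-down primitive: issue a command to the swarm en masse."""
        for robot in self.robots:
            action(robot)

    def centroid(self):
        return sum(r.x for r in self.robots) / len(self.robots)

# Three robots on a line aggregate toward their centroid.
swarm = Swarm([Robot(0.0), Robot(4.0), Robot(8.0)])
goal = swarm.centroid()                       # 4.0
swarm.foreach(lambda r: r.step_toward(goal))  # one synchronized step
```

The point of combining both levels is that a developer can write the aggregation command once at swarm level while still tuning the individual stepping rule when a particular robot class needs different behavior.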

From "A Programming Language for Robot Swarms"
Technology Review (07/29/15)
View Full Article

These Artworks Were Made by Algorithms

Motherboard

An exhibition at the 2015 International Joint Conference on Artificial Intelligence (IJCAI) focuses on the relationship between artificial intelligence (AI) and art. Researchers are exploring how AI can contribute to the arts, and how the arts might contribute to the development of AI. Some of the works on display this week in Buenos Aires, Argentina, demonstrate the many different ways AI can be used to create art of its own. For example, artist Jon McCormack creates pieces with an automated software drawing program that uses principles of ecosystem dynamics and biological evolution to create an infinite number of line drawings. AI researcher Simon Colton created a piece using evolutionary software to evolve the placement of flowers within the work's individual circles. Another image assembles three-dimensional digital objects on a virtual canvas, based on a source image. Meanwhile, an algorithm called Photogrowth simulates the behavior of species of artificial ants, using their trails to produce a rendering of an original input image. The exhibition and conference also feature AI work in disciplines including dance and music.

From "These Artworks Were Made by Algorithms"
Motherboard (07/29/15) Victoria Turk
View Full Article

Study Shows Co-operative Robots Learn and Adapt Quickly Through Natural Language

CIO Australia

Robots can learn and adjust rapidly to their surroundings solely via natural-language processing, according to a study published as part of the International Joint Conference on Artificial Intelligence in Argentina. The researchers created a dialogue agent for a mobile robot that can be placed within a workplace environment and quickly learn to perform delivery and navigation tasks to assist human workers without needing to be initially trained on a large body of annotated data. The agent automatically induces training examples from conversations it has with people, using a semantic parser to incrementally learn the meaning of previously unseen words. The agent also can conduct multi-entity reasoning while performing navigation tasks. The researchers note this strategy is stronger than keyword search, and is applicable to any context in which robots are assigned high-level goals in natural language. More than 300 users engaged with the agent via the Amazon Mechanical Turk Web interface and 20 users via a wheeled Segbot in an office. The agent initially clarifies with the user what they mean by a request, prompting the user to ask the question another way so it can learn different ways of saying the same thing. Future development will concentrate on applying the agent to speech recognition software, with researchers probing whether it can automatically learn to correct consistent speech recognition errors.
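The induction loop described above, ask for a rephrasing, parse the rephrased request, and record the unknown wording as a new way to express the same goal, can be sketched as follows. The keyword "parser" and seed lexicon are crude stand-ins for the real semantic parser, and all names are illustrative.

```python
class DialogueAgent:
    def __init__(self):
        # Seed lexicon: words the toy parser already understands.
        self.lexicon = {"deliver": "DELIVER", "go": "NAVIGATE"}

    def parse(self, utterance):
        """Toy parser: map the first known word to an action, else None."""
        for word in utterance.split():
            if word in self.lexicon:
                return self.lexicon[word]
        return None

    def handle(self, utterance, ask_rephrase):
        """If parsing fails, ask the user to say it another way, then
        record the unknown words as expressing the same action."""
        action = self.parse(utterance)
        if action is None:
            action = self.parse(ask_rephrase())
            if action is not None:
                for word in utterance.split():
                    self.lexicon.setdefault(word, action)  # induced example
        return action

agent = DialogueAgent()
agent.handle("bring me coffee", lambda: "deliver coffee to me")
```

After one clarification, "bring" maps to the delivery action, so a later request like "bring tea" parses without help. A real semantic parser aligns meanings at the phrase level rather than tagging every unknown word, which this toy does indiscriminately.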

From "Study Shows Co-operative Robots Learn and Adapt Quickly Through Natural Language"
CIO Australia (07/29/15) Rebecca Merrett
View Full Article

Beyond Just 'Big' Data

IEEE Spectrum

As new big data technologies advance and the challenges of extracting meaning from these massive data sets shift, it seems likely computer scientists will need an entirely new vocabulary to define these various trends. For example, big data enthusiasts sometimes categorize storage units as "brontobytes," each equal to 1,000 trillion terabytes (10^27 bytes). A unit of 1,000 brontobytes is called a "geobyte." Accompanying the new lexicon of big data storage units are new terms for data professionals, which include specialists in building data models, or data architects; managers of data sources, or data stewards/custodians; translators of data into visual form, or data visualizers; and those who change how a company does business based on analyzing company data, or data change agents. Moreover, a new kind of journalism, data journalism or data-driven journalism, is emerging to apply statistics, programming, and other digital data and tools to generate or mold news stories. In addition, big data is being sub-categorized into finer definitions such as thick data, which refers to data combining quantitative and qualitative analysis. There also is long data--which extends back in time centuries or millennia--and hot data, which is used constantly and therefore must be easily and quickly accessible. Meanwhile, cold data can be less readily available, as its use is relatively infrequent.
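The storage-unit arithmetic works out as follows; note that "brontobyte" and "geobyte" are informal terms, not standardized SI prefixes.

```python
# Decimal (SI-style) storage units, as described in the text.
TERABYTE = 10 ** 12                       # bytes
TRILLION = 10 ** 12

BRONTOBYTE = 1_000 * TRILLION * TERABYTE  # 1,000 trillion terabytes
GEOBYTE = 1_000 * BRONTOBYTE              # 1,000 brontobytes
```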

From "Beyond Just 'Big' Data"
IEEE Spectrum (07/28/15) Paul McFedries
View Full Article

UCF-Developed Software Analyzes Fat in Seconds

UCF Today

New computer-vision techniques can give doctors a more complete analysis of fat than images alone. In 2009, scientists discovered adults have brown--or good--fat cells that are beneficial for fighting off weight gain and potentially slowing cancer growth. Software developed by Ulas Bagci, an assistant professor in University of Central Florida's Center for Research in Computer Vision, and colleagues differentiates between brown and white fat tissues. Moreover, the software distinguishes whether the more common white fat lives just below the skin in subcutaneous cells or deeper in visceral cells, potentially wrapping around organs. The computer-vision techniques can analyze images much better than the human eye, and the software can read results from positron-emission tomography and computed-tomography scans in about a minute. "These computer-derived results can help a doctor plan a treatment or surgery more accurately and much faster than before," Bagci says. The software also functions with markers, such as contrast dyes, which can lead to an even more accurate understanding of the disease extent, severity, and cause. Bagci used similar software to help researchers at the U.S. National Institutes of Health studying vaccine approaches for MERS-CoV, a severe acute respiratory illness.

From "UCF-Developed Software Analyzes Fat in Seconds"
UCF Today (07/28/15) Barbara Abney
View Full Article

Quantum Computing: Diode-Like Breakthrough Surmounts Roadblock

EE Times

Niels Bohr Institute researchers have developed a diode-like component that enables single photons to be emitted and flow in only one direction depending upon whether their spin is "up" or "down." The researchers also developed a kind of photon delay line, and they say these new photonic components can be applied to developing practical quantum computers. "Our research focus is on the development of photonic hardware for quantum technology and [it] may have a number of technological applications, both short term and longer term," says Niels Bohr professor Peter Lodahl. The quantum dot single-photon emitter sends up-spin single photons in one direction down a waveguide and down-spin single photons in the opposite direction, creating a quantum computer component that separates quantum bits based on their encoding. "In future quantum computers it is essential to be able to control the interaction between light and matter--photons and quantum dots in our case," Lodahl says. The delay-line type of component for photons is similar to delay-line components found in conventional electronics. "Our discovery of the different interaction depending on propagation direction opens new possibilities of controlling light-matter interaction enabling the construction of novel photonic chips constituting basic hardware for quantum-computing technology," Lodahl says.

From "Quantum Computing: Diode-Like Breakthrough Surmounts Roadblock"
EE Times (07/29/15) R. Colin Johnson
View Full Article

InBloom May Be Dead but the Dream Lives on at Carnegie Mellon

EdSurge (CA)

LearnSphere is a new $5-million federally funded project at Carnegie Mellon University that seeks to become the world's largest "open repository of education data," says project leader Ken Koedinger. He wants education researchers and software developers to upload their data comprising the millions of keystrokes students make as they answer questions, hit backspace, or even sit idly. Koedinger's team wants to form a "distributed infrastructure" that gives researchers access to data on someone else's computer. A key challenge for Koedinger's team is cleaning up the data so outside researchers can easily analyze it while ensuring no information identifies a student. The goal is translating research questions into computer commands that can be run on any dataset. Koedinger recently studied how much students learned when they were taking a free online course in introductory psychology. He asked what increased student learning the most--videos, reading assignments, or online interactive tasks. "Our model suggests, for every activity you do, you get six times the bump than for every video you watch," Koedinger says. He also notes there are key differences between LearnSphere and the defunct inBloom non-profit project, such as not allowing any personal information from school records to enter LearnSphere.

From "InBloom May Be Dead but the Dream Lives on at Carnegie Mellon"
EdSurge (CA) (07/28/15) Jill Barshay
View Full Article