Artificial Intelligence for Smartphones

March 9, 2015 By Finley Engineering

Here’s a technology development worth keeping an eye on.

Researchers at leading high-tech giants and universities are making “dramatic advances” in artificial intelligence (AI), so much so that smartphones may soon be equipped with the first AI apps based on a process called “deep learning.”

Researchers are using deep learning to mimic the way the human brain processes information, creating “virtual neurons and synapses” that can process and make use of information from images and audio, powering apps capable of tracking everything from workouts to emotions, according to a Feb. 9 MIT Technology Review post.

With deep learning, devices learn from the visual and audio stimuli they are fed. The more stimuli, the stronger the connections formed between virtual neurons and the more information exchanged across virtual synapses. That leads to greater recognition capability and higher levels of information processing, the article explains. Applications for deep learning include facial and feature recognition in images.
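
For readers curious what those “virtual neurons and synapses” look like in practice, here is a minimal sketch in Python, not any of the researchers’ actual systems: a tiny two-layer network whose weight matrices play the role of synapses, strengthened or weakened as training examples flow through. The toy data below is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stimuli: 4-dimensional inputs standing in for image or audio features.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

W1 = rng.normal(scale=0.5, size=(4, 8))   # input-to-hidden "synapses"
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden-to-output "synapses"

for _ in range(500):
    h = np.tanh(X @ W1)                    # hidden "neurons" respond to stimuli
    p = 1 / (1 + np.exp(-(h @ W2)))        # output neuron's probability
    grad_out = (p - y[:, None]) / len(X)   # cross-entropy gradient
    W1 -= 0.5 * X.T @ ((grad_out @ W2.T) * (1 - h ** 2))
    W2 -= 0.5 * h.T @ grad_out             # strengthen or weaken connections

# Recompute predictions with the trained weights.
p = 1 / (1 + np.exp(-(np.tanh(X @ W1) @ W2)))
print("training accuracy:", ((p > 0.5).ravel() == y).mean())
```

The more labeled examples the loop sees, the better the weights come to encode the pattern, which is the “more stimuli, stronger connections” idea in miniature.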

Last year Facebook researchers used deep learning to build a system that can determine nearly as well as a human whether two different photos show the same person, and Google used the technology to create software that describes complicated images in short sentences, writes MIT Technology Review author Rachel Metz. She notes, however, that to date most such efforts have relied on clusters of extremely powerful computers.

Tapping into powerful computers in cloud-based data centers at the other end of smartphone and mobile device connections could be the key to bringing deep learning AI capabilities to those devices. The smartphones would still need to be quite powerful, but according to the MIT article, some smartphones already could meet the requirements.
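
As a rough illustration of that split (the endpoint URL and JSON format below are hypothetical, invented for this sketch), the phone would extract a compact feature vector locally and hand the heavy deep-learning inference to a data center:

```python
import json
import urllib.request

# Hypothetical cloud endpoint; no such service is named in the article.
CLOUD_ENDPOINT = "https://example.com/v1/deep-learning/classify"

def classify_in_cloud(features):
    """Ship locally extracted features to a remote deep-learning model."""
    payload = json.dumps({"features": features}).encode("utf-8")
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())["label"]

def classify(features, local_model=None):
    """Run on-device when the phone is powerful enough; otherwise offload."""
    if local_model is not None:
        return local_model(features)
    return classify_in_cloud(features)
```

A sufficiently powerful handset could supply its own local_model and skip the network round trip entirely, which is the possibility the MIT article raises.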

The MIT article quotes Bell Labs Principal Scientist Nic Lane, who believes deep learning could improve the performance of mobile sensing apps by filtering out unwanted sounds picked up by a microphone or removing unwanted signals from the data gathered by an accelerometer.
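
A toy example of where such filters would sit in a mobile sensing pipeline. Here a plain moving average and an energy gate stand in for the learned filters Lane envisions; the point is only the pipeline position, not the method:

```python
import numpy as np

def smooth_accelerometer(samples, window=5):
    """Suppress high-frequency jitter in a raw 1-D accelerometer trace."""
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode="same")

def gate_microphone(frames, energy_floor=0.01):
    """Drop audio frames too quiet to carry a useful signal."""
    return [f for f in frames if np.mean(np.square(f)) > energy_floor]
```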

A researcher at Microsoft Research Asia last year used deep learning techniques to test whether a smartphone could collect data from an accelerometer worn on the user’s wrist and use that data to determine whether the user was eating soup or brushing his or her teeth. The researcher also tested whether the phone could determine people’s emotions or identities from recordings of their speech.
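
The shape of that experiment can be sketched with synthetic data standing in for real wrist-accelerometer recordings: slice the signal into windows, summarize each window, and compare activities. The hand-written feature step below is exactly what a deep network would learn instead.

```python
import numpy as np

rng = np.random.default_rng(1)

def windows(signal, size=50, step=25):
    """Slice a 1-D accelerometer trace into overlapping windows."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

def features(window):
    """Mean level plus average sample-to-sample change (a crude motion-speed cue)."""
    return np.array([window.mean(), np.abs(np.diff(window)).mean()])

# Synthetic stand-ins for wrist motion: slow, gentle cycles ("eating")
# versus fast scrubbing ("brushing"), each with added sensor noise.
t = np.linspace(0, 10, 1000)
traces = {
    "eating": np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.normal(size=t.size),
    "brushing": np.sin(2 * np.pi * 4.0 * t) + 0.1 * rng.normal(size=t.size),
}

for label, trace in traces.items():
    feats = np.array([features(w) for w in windows(trace)])
    print(label, "average motion-speed cue:", feats[:, 1].mean().round(3))
```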

In a research paper, Lane and research partner Petko Georgiev reported that the deep learning software was 10 percent more accurate than other methods at recognizing such activities, and that the neural network matched other methods’ accuracy at identifying speakers and emotions.

“It’s all about, I think, instilling intelligence into devices so that they are able to understand and react to the world—by themselves,” Lane told MIT Technology Review.