Like a “Limitless” Pill for Deep Neural Networks

Dyllan explores technology and the human condition for Tech Emergence. His interests include but are not limited to whiskey, kimchi, and Catahoulas.

Deep learning was one of 2015’s biggest buzzwords. With the impact it’s made on AI, it might even become a household term in the new year.

This branch of machine learning, which uses sets of algorithms to discern patterns in vast amounts of data, has been employed by giants like Facebook and Google in efforts to usher in an age of AI. At the core of our current deep learning models are artificial neural networks. Modeled after biological neural networks such as those in our brains, these artificial networks have become vital to the present state of AI.

The difficulty is that these artificial neural networks can’t compute with the speed or accuracy of our brains. Thus one of machine learning’s most pressing obstacles is the development of artificial neurons that can function at an accelerated rate.

Song Han may have done just that. Once an intern at Google and now a PhD candidate at Stanford University, Han is conducting research under the advisement of networking pioneer Dr. Bill Dally. Under Dally’s guidance, Han is developing a small chip called EIE (Efficient Inference Engine), which is intended to increase the role of static random access memory (SRAM) while scaling networks down to more manageable and efficient sizes through a technique called deep compression – i.e., EIE is sort of like a “Limitless” pill for deep neural networks.
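Han’s published deep compression pipeline combines weight pruning, trained quantization (weight sharing), and Huffman coding. The sketch below illustrates the first two steps on a toy weight matrix; the pruning threshold, cluster count, and layer size are illustrative assumptions, not Han’s actual settings:

```python
import numpy as np

def prune_weights(weights, threshold=0.05):
    """Magnitude pruning: zero out weights below the threshold,
    leaving a sparse matrix that can be stored compactly."""
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def share_weights(weights, n_clusters=16):
    """Weight sharing: cluster surviving weight values (simple k-means)
    so each weight becomes a small index into a shared codebook --
    4 bits per weight for 16 clusters instead of 32-bit floats."""
    nonzero = weights[weights != 0]
    # Initialize centroids linearly over the weight range.
    centroids = np.linspace(nonzero.min(), nonzero.max(), n_clusters)
    for _ in range(10):  # a few Lloyd iterations
        idx = np.argmin(np.abs(nonzero[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(idx == k):
                centroids[k] = nonzero[idx == k].mean()
    # Snap every surviving weight to its nearest codebook value.
    quantized = weights.copy()
    nz_mask = weights != 0
    idx = np.argmin(np.abs(weights[nz_mask][:, None] - centroids[None, :]), axis=1)
    quantized[nz_mask] = centroids[idx]
    return quantized, centroids

# Toy layer: most small weights vanish, the rest share 16 values.
w = np.random.randn(256, 256) * 0.1
pruned, mask = prune_weights(w)
compressed, codebook = share_weights(pruned)
print(f"sparsity: {1 - mask.mean():.0%}, codebook size: {len(codebook)}")
```

In the published pipeline the network is also retrained after pruning and quantization to recover accuracy, a step this sketch omits; the payoff is a model small enough to sit entirely in on-chip SRAM.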

These combined methods would help diminish the role of dynamic random access memory (DRAM), which, according to Han, is far more expensive and energy-hungry. In fact, Han told Next Platform that EIE could save over 65 percent of energy. “With deep compression and EIE, when we do the inference, we put the model directly into SRAM versus going to DRAM, which is on the order of 100X more energy-consuming.”
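To put that “100X” in perspective, the per-access energies Han’s papers cite (drawn from 45 nm process estimates) are roughly 5 pJ for a 32-bit SRAM read versus roughly 640 pJ for a 32-bit DRAM read. A back-of-envelope comparison for fetching one large layer’s weights (the layer size here is an illustrative assumption):

```python
# Approximate 45 nm per-access energies cited in Han's papers:
# ~5 pJ per 32-bit SRAM read vs ~640 pJ per 32-bit DRAM read.
SRAM_PJ, DRAM_PJ = 5, 640

n_weights = 4096 * 4096            # one large fully connected layer
dram_energy = n_weights * DRAM_PJ  # uncompressed model streamed from DRAM
sram_energy = n_weights * SRAM_PJ  # compressed model resident in SRAM

print(f"DRAM: {dram_energy / 1e6:.0f} uJ, SRAM: {sram_energy / 1e6:.0f} uJ")
print(f"ratio: {DRAM_PJ / SRAM_PJ:.0f}x")  # ~128x -- 'on the order of 100X'
```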

When Han’s chip was tested across various deep neural network benchmark suites, it displayed energy efficiency up to 24,000 times better than DRAM on a CPU. Meanwhile, it could perform inference operations up to 189 times faster via SRAM than DRAM.

EIE relies on deep compression to bolster its performance, particularly on mobile devices, which companies like Google, Baidu, and Facebook have begun to emphasize. Indeed, Baidu is regarded as a “mobile first” company for its emphasis on on-the-go searches. This portable offering demands specialized, efficient computing power to run neural networks and enable them to deliver results to mobile end users.

Han’s chip has yet to go into production, but the eager researcher has already caught the attention of a number of hardware manufacturers, including Huawei Technologies, the world’s largest telecom equipment manufacturer. As such, we might not notice EIE in our devices – but it may well soon be in them.

Credits: iStockphoto, Cosmonio
