
The 19th Asia and South Pacific Design Automation Conference

Session 3S Special Session: Neuron Inspired Computing using Nanotechnology
Time: 15:50 - 17:30 Tuesday, January 21, 2014
Location: Room 302
Organizer: Kevin Cao (Arizona State University, U.S.A.), Sarma Vrudhula (Arizona State University, U.S.A.)

3S-1 (Time: 15:50 - 16:20)
Title: (Invited Paper) A Silicon Nanodisk Array Structure Realizing Synaptic Response of Spiking Neuron Models with Noise
Author: *Takashi Morie, Haichao Liang, Yilai Sun, Takashi Tohara (Kyushu Institute of Technology, Japan), Makoto Igarashi, Seiji Samukawa (Tohoku University, Japan)
Page: pp. 185 - 190
Keyword: nanostructure, nanodevice, spiking neuron, fluctuation, noise
Abstract: In the implementation of spiking neuron models, which can achieve realistic neuron operation, the generation of post-synaptic potentials (PSPs) is an essential function. We previously proposed a nanodisk array structure for generating PSPs that exploits delays in electron hopping among nanodisks. The generated PSPs exhibit fluctuation caused by stochastic electron movement, and such noise or fluctuation can be used effectively in neural processing. In this paper, we review the proposed structure and demonstrate controllability of the fluctuation based on single-electron circuit simulation.

3S-2 (Time: 16:20 - 16:50)
Title: (Invited Paper) Energy Efficient In-Memory Machine Learning for Data Intensive Image-Processing by Non-Volatile Domain-Wall Memory
Author: *Hao Yu, Yuhao Wang, Shuai Chen, Wei Fei (Nanyang Technological University, Singapore), Chuliang Weng, Junfeng Zhao, Zhulin Wei (Huawei Shannon Laboratory, China)
Page: pp. 191 - 196
Keyword: neural network, logic-in-memory, non-volatile memory, domain wall, image processing
Abstract: Image processing in conventional logic-memory I/O-integrated systems incurs significant communication congestion at memory I/Os when handling exa-scale image data. This paper explores in-memory machine learning on a neural network architecture built from the newly introduced domain-wall nanowire, called DW-NN. We show that all operations involved in machine learning on neural networks can be mapped to a logic-in-memory architecture based on non-volatile domain-wall nanowires. Domain-wall nanowire based logic is customized for machine learning within the image data storage itself, so both neural network training and processing can be performed locally within the memory. Experimental results show that system throughput in DW-NN is improved by 11.6x and energy efficiency by 92x compared to a conventional image processing system.

3S-3 (Time: 16:50 - 17:20)
Title: (Invited Paper) Lessons from the Neurons Themselves
Author: *Louis Scheffer (Howard Hughes Medical Institute, U.S.A.)
Page: pp. 197 - 200
Keyword: neuromorphic, artificial neuron, neurons
Abstract: Natural neural circuits, optimized by millions of years of evolution, are fast, low power, and robust — all characteristics we would love to have in the systems we ourselves design. Recently there have been enormous advances in understanding how neurons implement computations within the brains of living creatures. Can we use this new-found knowledge to create better artificial systems? What lessons can we learn from the neurons themselves that can help us create better neuromorphic circuits?