Title | (Invited Paper) A Silicon Nanodisk Array Structure Realizing Synaptic Response of Spiking Neuron Models with Noise |
Author | *Takashi Morie, Haichao Liang, Yilai Sun, Takashi Tohara (Kyushu Institute of Technology, Japan), Makoto Igarashi, Seiji Samukawa (Tohoku University, Japan) |
Page | pp. 185 - 190 |
Keyword | nanostructure, nanodevice, spiking neuron, fluctuation, noise |
Abstract | In the implementation of spiking neuron models, which can achieve realistic neuron operation, the generation of post-synaptic potentials (PSPs) is an essential function. We have already proposed a new nanodisk array structure that generates PSPs by exploiting the delay in electron hopping among nanodisks. The generated PSPs fluctuate because of stochastic electron movement, and such noise or fluctuation can be used effectively in neural processing. In this paper, we review the proposed structure and demonstrate the controllability of the fluctuation through single-electron circuit simulation. |
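(Illustrative note, not part of the paper.) A minimal Monte Carlo sketch of the effect the abstract describes: a PSP built from electrons that each traverse a chain of nanodisks with exponentially distributed hop delays, so repeated trials fluctuate. The disk count, hop rate, and leak constant below are assumed parameters chosen only for illustration, not values from the authors' device.

```python
# Illustrative sketch only (not the authors' single-electron device model):
# a post-synaptic potential (PSP) formed by electrons hopping along a chain
# of nanodisks, with stochastic hop delays producing trial-to-trial fluctuation.
import numpy as np

rng = np.random.default_rng(0)

def noisy_psp(n_electrons=50, n_disks=10, hop_rate=1.0,
              leak_tau=20.0, t_max=60.0, dt=0.1):
    """Each electron needs n_disks hops; each hop delay is exponentially
    distributed. Arrivals charge an output node that slowly leaks."""
    arrivals = rng.exponential(1.0 / hop_rate,
                               size=(n_electrons, n_disks)).sum(axis=1)
    t = np.arange(0.0, t_max, dt)
    v = np.zeros_like(t)
    for i in range(1, len(t)):
        charge_in = np.sum((arrivals > t[i - 1]) & (arrivals <= t[i]))
        v[i] = v[i - 1] + charge_in - (dt / leak_tau) * v[i - 1]
    return t, v

if __name__ == "__main__":
    for trial in range(3):  # repeated trials differ: the fluctuation the abstract mentions
        t, v = noisy_psp()
        print(f"trial {trial}: peak {v.max():.1f} at t = {t[v.argmax()]:.2f}")
```

In this toy picture, increasing n_disks or lowering hop_rate lengthens the mean rise time, while the relative spread of the traversal time shrinks as n_disks grows, which is one simple way to think about controlling the fluctuation; the paper's actual controllability analysis is based on single-electron circuit simulation.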
Title | (Invited Paper) Energy Efficient In-Memory Machine Learning for Data Intensive Image-Processing by Non-Volatile Domain-Wall Memory |
Author | *Hao Yu, Yuhao Wang, Shuai Chen, Wei Fei (Nanyang Technological University, Singapore), Chuliang Weng, Junfeng Zhao, Zhulin Wei (Huawei Shannon Laboratory, China) |
Page | pp. 191 - 196 |
Keyword | neural network, logic-in-memory, non-volatile memory, domain wall, image processing |
Abstract | Image processing in conventional logic-memory I/O-integrated systems incurs significant communication congestion at the memory I/Os when excessive image data must be handled at exa-scale. This paper explores an in-memory machine-learning architecture for neural networks that utilizes the newly introduced domain-wall nanowire, called DW-NN. We show that all operations involved in neural-network machine learning can be mapped to a logic-in-memory architecture built from non-volatile domain-wall nanowires. Domain-wall nanowire based logic is customized for machine learning within the image data storage, so that both neural-network training and processing can be performed locally within the memory. The experimental results show that, compared with a conventional image-processing system, DW-NN improves system throughput by 11.6x and energy efficiency by 92x. |
Slides |
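(Illustrative note, not from the paper.) The sketch below shows only the generic neural-network operations on image data, weighted sums, activations, and a simple weight update, of the kind an in-memory architecture such as DW-NN would execute inside the memory array instead of shipping pixels across the memory I/O; the layer shape, sigmoid activation, and delta-rule update are assumptions for the example, not the paper's mapping onto domain-wall nanowire logic.

```python
# Illustrative sketch only: generic neural-network operations (weighted sum,
# activation, weight update) of the kind DW-NN keeps inside domain-wall
# memory. Shapes, sigmoid activation, and the delta rule are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(pixels, weights, bias):
    """Weighted sum of pixel inputs plus bias, then activation; in an
    in-memory design these multiply-accumulates stay inside the array."""
    return sigmoid(pixels @ weights + bias)

def train_step(pixels, target, weights, bias, lr=0.1):
    """One delta-rule update performed next to the stored image data,
    so no raw pixels cross the memory I/O to an external processor."""
    out = forward(pixels, weights, bias)
    err = target - out
    weights = weights + lr * pixels.T @ err
    bias = bias + lr * err.sum(axis=0)
    return weights, bias

if __name__ == "__main__":
    patches = rng.random((4, 16))          # four flattened 4x4 image patches
    targets = np.tile([1.0, 0.0], (4, 1))  # dummy two-class labels
    w = rng.normal(scale=0.1, size=(16, 2))
    b = np.zeros(2)
    w, b = train_step(patches, targets, w, b)
    print("outputs after one update:\n", forward(patches, w, b))
```

The gains the abstract reports presumably come from keeping operations like forward() and train_step() resident with the stored images rather than moving big image data through congested memory I/Os, which is the congestion argument the abstract opens with.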
Title | (Invited Paper) Lessons from the Neurons Themselves |
Author | *Louis Scheffer (Howard Hughes Medical Institute, U.S.A.) |
Page | pp. 197 - 200 |
Keyword | neuromorphic, artificial neuron, neurons |
Abstract | Natural neural circuits, optimized by millions of years of evolution, are fast, low power, and robust, all characteristics we would love to have in the systems we design ourselves. Recently there have been enormous advances in understanding how neurons implement computations within the brains of living creatures. Can we use this new-found knowledge to create better artificial systems? What lessons can we learn from the neurons themselves that can help us create better neuromorphic circuits? |
Slides |