
The 19th Asia and South Pacific Design Automation Conference

Session 7S - Special Session: Brain-Like Computing: Modelling, Technology, and Architecture
Time: 10:10 - 12:15 Thursday, January 23, 2014
Location: Room 302
Chair: Ahmed Hemani (KTH, Sweden)

7S-1 (Time: 10:10 - 10:40)
Title: (Invited Paper) Spiking Brain Models: Computation, Memory and Communication Constraints for Custom Hardware Implementation
Author: *Anders Lansner, Ahmed Hemani, Nasim Farahini (KTH, Sweden)
Page: pp. 556 - 562
Keywords: Brain, Neural network, custom VLSI, BCPNN, Associative memory
Abstract: We estimate the computational capacity required to simulate in real time the neural information processing of the human brain. We show that the computational demands of a detailed implementation are beyond the reach of current technology, but that some biologically plausible reductions of problem complexity yield performance gains of between two and six orders of magnitude, putting implementations within reach of tomorrow's technology.
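
A rough back-of-envelope sketch (Python) of the kind of capacity estimate the abstract describes. The figures below are generic textbook values (~10^11 neurons, ~10^4 synapses per neuron) and an assumed per-update cost, not the numbers used by the authors; they only illustrate how a few biologically plausible assumptions shift the estimate by orders of magnitude.

# Naive, time-driven estimate vs. an event-driven reduction (illustrative values only).
NEURONS = 1e11                 # common estimate for the human brain
SYNAPSES_PER_NEURON = 1e4      # common estimate
UPDATE_RATE_HZ = 1e3           # assumed 1 ms time step
FLOP_PER_SYNAPSE_UPDATE = 10   # assumed cost of one synaptic state update

synapses = NEURONS * SYNAPSES_PER_NEURON
detailed = synapses * UPDATE_RATE_HZ * FLOP_PER_SYNAPSE_UPDATE
print(f"Detailed, time-driven estimate: {detailed:.1e} FLOP/s")   # ~1e19

# Event-driven synapse updates at a ~1 Hz mean firing rate instead of
# every time step already trim roughly three orders of magnitude:
MEAN_RATE_HZ = 1.0
reduced = synapses * MEAN_RATE_HZ * FLOP_PER_SYNAPSE_UPDATE
print(f"Event-driven estimate:          {reduced:.1e} FLOP/s")    # ~1e16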

7S-2 (Time: 10:40 - 11:10)
Title: (Invited Paper) Advanced Technologies for Brain-Inspired Computing
Author: *Fabien Clermidy, Rodolphe Heliot, Alexandre Valentian (CEA-LETI, France), Christian Gamrat, Olivier Bichler, Marc Duranton (CEA-LIST, France), Bilel Blehadj, Olivier Temam (INRIA, France)
Page: pp. 563 - 569
Keywords: Neuromorphic, Memristor, 3D TSV, 3D monolithic
Abstract: This paper presents how new technologies can overcome classical implementation issues of neural networks. Resistive memories such as Phase Change Memories and Conductive-Bridge RAM behave as memristors with programmable resistance, and can therefore be used to obtain low-area synapses. Similarly, the high capacitance of Through Silicon Vias can be used to greatly improve analog neurons and reduce their area. The same devices can also improve the connectivity of neural networks, as demonstrated by an application. Finally, some perspectives are given on the use of 3D monolithic integration to better exploit the third dimension and thus obtain systems closer to the brain.
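
As an illustration of the synapse abstraction the abstract builds on, the sketch below (Python) models a resistive device whose conductance is nudged between two bounds by programming pulses, driven by a pair-based STDP rule. The class, bounds, and update rule are generic assumptions for illustration, not the CEA-LETI/CEA-LIST device or circuit model.

class MemristiveSynapse:
    """Conductance-coded weight, programmable between g_min and g_max (illustrative)."""
    def __init__(self, g_min=1e-6, g_max=1e-4, step=5e-6):
        self.g_min, self.g_max, self.step = g_min, g_max, step
        self.g = g_min                      # conductance in siemens

    def potentiate(self):
        # SET-like pulse: raise conductance (strengthen the synapse)
        self.g = min(self.g + self.step, self.g_max)

    def depress(self):
        # RESET-like pulse: lower conductance (weaken the synapse)
        self.g = max(self.g - self.step, self.g_min)

    def current(self, v_pre):
        # Ohmic read: current delivered to the neuron for input voltage v_pre
        return self.g * v_pre

def stdp_update(syn, t_pre, t_post, window=20e-3):
    """Pair-based STDP rule (assumed): spike order decides the programming pulse."""
    if 0 < t_post - t_pre <= window:
        syn.potentiate()                    # pre before post: strengthen
    elif 0 < t_pre - t_post <= window:
        syn.depress()                       # post before pre: weaken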

7S-3 (Time: 11:10 - 11:40)
Title: (Invited Paper) GPGPU Accelerated Simulation and Parameter Tuning for Neuromorphic Applications
Author: Kristofor D. Carlson, Michael Beyeler, *Nikil Dutt, Jeffrey L. Krichmar (UC Irvine, U.S.A.)
Page: pp. 570 - 577
Keywords: graphics processing units, spiking neural networks, evolutionary algorithms, GPUs, SNNs
Abstract: Neuromorphic engineering takes inspiration from biology to design brain-like systems that are extremely low-power, fault-tolerant, and capable of adapting to complex environments. The design of these artificial nervous systems involves both the development of neuromorphic hardware devices and the development of neuromorphic simulation tools. In this paper, we describe a simulation environment that can be used to design, construct, and run spiking neural networks (SNNs) quickly and efficiently using graphics processing units (GPUs). We then explain how the design of the simulation environment exploits the parallel processing power of GPUs to simulate large-scale SNNs, and describe recent modeling experiments performed using the simulator. Finally, we present an automated parameter tuning framework that uses the simulation environment and evolutionary algorithms to tune SNNs. We believe the simulation environment and associated parameter tuning framework presented here can accelerate the development of neuromorphic software and hardware applications by making the design, construction, and tuning of SNNs an easier task.
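
A rough sketch (Python/NumPy) of the per-neuron arithmetic that such GPU simulators parallelize, here with the standard Izhikevich regular-spiking model; the constants, population size, and noisy input are textbook illustration values, not parameters of the paper's simulator or its API (vectorized NumPy stands in for per-thread GPU kernels).

import numpy as np

N, T, dt = 1000, 1000, 1.0           # neurons, time steps, step size (ms)
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking Izhikevich parameters
v = np.full(N, c)                    # membrane potential (mV)
u = b * v                            # recovery variable

for t in range(T):
    I = 5.0 * np.random.randn(N)     # assumed noisy input current
    fired = v >= 30.0                # spike threshold
    v[fired] = c                     # reset fired neurons
    u[fired] += d
    # Two half-steps per ms for numerical stability, as in Izhikevich (2003):
    v += 0.5 * dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    v += 0.5 * dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)

The evolutionary parameter tuning described in the abstract would wrap a loop like this one: candidate parameter sets are simulated, scored against a fitness function, and recombined across generations.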

7S-4 (Time: 11:40 - 12:10)
Title: (Invited Paper) A Scalable Custom Simulation Machine for the Bayesian Confidence Propagation Neural Network Model of the Brain
Author: Nasim Farahini, *Ahmed Hemani, Anders Lansner (KTH, Sweden), Fabien Clermidy (CEA-LETI, France), Christer Svensson (Linköping University, Sweden)
Page: pp. 578 - 585
Keywords: Brain simulation, BCPNN, Custom supercomputer, Spiking Neural Network
Abstract: A multi-chip custom digital supercomputer called eBrain is proposed for simulating the Bayesian Confidence Propagation Neural Network (BCPNN) model of the human brain. It stores synaptic weights in Hybrid Memory Cube (HMC) 3D-stacked DRAM integrated with a custom-designed logic chip that implements the BCPNN model. In a 22 nm node, eBrain executes BCPNN in real time at 740 TFLOPS, accessing 30 TB of synaptic weights at a bandwidth of 112 TB/s while consuming less than 6 kW of power in the typical case. This efficiency is three orders of magnitude better than that of general-purpose supercomputers in the same technology node.
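
A few ratios follow directly from the figures quoted in the abstract; the short calculation below (Python) is only a sanity check on those numbers, not an additional claim of the paper.

# Derived ratios from the quoted figures: 740 TFLOPS, 30 TB of weights,
# 112 TB/s of bandwidth, < 6 kW typical power.
TFLOPS    = 740e12   # flop/s
WEIGHTS   = 30e12    # bytes of synaptic state
BANDWIDTH = 112e12   # bytes/s to the HMC stacks
POWER     = 6e3      # watts (upper bound from the abstract)

print(f"Energy efficiency : {TFLOPS / POWER / 1e9:.0f} GFLOPS/W")  # ~123
print(f"Bytes per flop    : {BANDWIDTH / TFLOPS:.2f} B/flop")      # ~0.15
print(f"Full weight sweep : {WEIGHTS / BANDWIDTH:.2f} s")          # ~0.27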