Keynote Addresses

Opening & Keynote I : Wednesday, January 26, 8:30-10:00, Small Auditorium, 5F

Non-Volatile Memory and Normally-Off Computing


Dr. Takayuki Kawahara

Chief Researcher
Central Research Laboratory, Hitachi Ltd., Kokubunji, Tokyo, Japan

A different sort of innovation in computing architecture, based on non-volatile RAM, is now advancing to enable further power reduction beyond conventionally developed low-voltage technology. The new goal is to allow computing equipment to be normally turned off when not in use, yet turn on instantly with full performance when needed. This matches what we prefer to do in our daily lives, where interest in a sustainable world is increasing. What is needed is a union of power-control technology and information-communication technology. The key to achieving this is to ensure that the internal status of a computation can be memorized at any time before the power is turned off, without consuming power. Non-volatile (NV) RAM is the critical component: it offers non-volatility as well as an effectively unlimited number of fast write and read operations. This talk will address the following.

1. A review of new memory development. Many types of memory based on new materials and principles have been reported. Among these candidates, magnetic RAM is notable in that it is non-volatile and permits an effectively unlimited number of write cycles over its lifetime. The basic operation of this memory and a comparison with other kinds of emerging memories will be given.

2. Needed cooperation between memory and CAD designers. Because a new memory is based on new physical phenomena, close cooperation between memory designers and CAD designers is necessary for efficient design. We need a framework in which the results of physical analysis can be easily incorporated into design tools. Conventionally, TCAD is optimized for exploring new transistor devices, but not for memory devices.

3. New tools for normally-off computing design. Fine-grained power control must be managed together with the internal computation status. A tool that checks the reactivation sequence is necessary so that the system can turn on instantly and resume the state it held before the interruption. Designers should also verify the arbitration between blocks that operate in an event-driven manner.



Keynote II : Thursday, January 27, 9:00-10:00, Small Auditorium, 5F

Managing Increasing Complexity through Higher-level of Abstraction: What the past has taught us about the future


Dr. Ajoy Bose

Chairman, President, and CEO, Atrenta Inc.

Time-to-market and design-complexity challenges are well known; we've all seen the statistics and predictions. A well-defined strategy to address these challenges seems less clear. Design-for-manufacturability approaches that optimize transistor geometries, "variability-aware" physical implementation tools, and design-reuse strategies abound. While each of these techniques contributes to the solution, they all miss the primary force of design evolution. Over the past 30 years or so, it has been proven time and again that moving design abstraction to the next higher level is required if design technology is to advance. In this keynote presentation, a new empirical model, the Bose-Hackworth model, will be presented; examples of past trends will be identified; and an assessment will be made of what these trends mean in the context of the current challenges before us. A snapshot of the future will be presented, containing some non-intuitive predictions.



Keynote III : Friday, January 28, 9:00-10:00, Small Auditorium, 5F

Robust Systems: from Clouds to Nanotubes


Prof. Subhasish Mitra

Professor, Department of Electrical Engineering and Department of Computer Science
Stanford University, Stanford, CA, USA

Today's mainstream electronic systems typically assume that transistors and interconnects operate correctly over their useful lifetime. With enormous complexity and significantly increased vulnerability to failures compared to the past, future system designs cannot rely on such assumptions. At the same time, there is explosive growth in our dependency on such systems. For example, in 2009, a glitch in a single circuit board of the air-traffic control system resulted in hundreds of flights being canceled or delayed. Robust system design is essential to ensure that future systems perform correctly despite rising complexity and increasing disturbances. For coming generations of silicon technologies, several causes of hardware failures, largely benign in the past, are becoming significant at the system-level. Furthermore, emerging nanotechnologies such as carbon nanotubes are inherently highly subject to imperfections. With extreme miniaturization of circuits, factors such as transient errors, device degradation, and variability induced by manufacturing and operating conditions are becoming important. While design margins are being squeezed to achieve high energy efficiency, expanded design margins are required to cope with variability and transistor aging. Even if error rates stay constant on a per-bit basis, total chip-level error rates grow with the scale of integration. Moreover, difficulties with traditional burn-in can leave early-life failures unscreened.

This talk will address the following major robust system design goals:
  • New approaches to thorough test and validation that scale with tremendous growth in complexity
  • Cost-effective tolerance and prediction of failures in hardware during system operation
  • A practical way to overcome substantial inherent imperfections in emerging nanotechnologies

Significant recent progress in robust system design impacts almost every aspect of future systems, from ultra-large-scale cloud computing and storage systems, all the way to their nanoscale components.

Last Updated: November 15, 2010