
The 16th Asia and South Pacific Design Automation Conference

Session 5A: System-Level Simulation
Time: 13:40 - 15:40 Thursday, January 27, 2011
Location: Room 411+412
Chairs: Nagisa Ishiura (Kwansei Gakuin University, Japan), Bo-Cheng Charles Lai (National Chiao Tung University, Taiwan)

5A-1 (Time: 13:40 - 14:10)
Title: Handling Dynamic Frequency Changes in Statically Scheduled Cycle-Accurate Simulation
Authors: *Marius Gligor, Frédéric Pétrot (TIMA Laboratory, CNRS/INP Grenoble/UJF, France)
Pages: 407 - 412
Keywords: cycle-accurate simulation, simulation acceleration, static scheduling, dynamic frequency change
Abstract: Although high-level simulation models are increasingly used for digital electronic system validation, cycle accuracy is still required in some cases, such as hardware protocol validation or accurate power/energy estimation. Cycle-accurate simulation is, however, slow, and existing acceleration approaches assume a single constant clock, an assumption that no longer holds as dynamic voltage and frequency scaling techniques become widespread. Fast cycle-accurate simulators supporting several clocks whose frequencies can change at run time are thus needed. This paper presents two algorithms we designed for this purpose and details their properties and implementations.
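For illustration only, the sketch below shows one way a statically scheduled cycle-accurate kernel could cope with clock domains whose periods change at run time: each domain re-reads its period after every edge, so a frequency change made by a DVFS controller takes effect on the next cycle. The ClockDomain structure, the simulate() loop, and the picosecond time base are assumptions made for this sketch, not the algorithms presented in the paper.

```cpp
// Minimal sketch (assumed structure, not the paper's algorithm): a statically
// scheduled multi-clock kernel where each domain's period may change at run time.
#include <algorithm>
#include <cstdint>
#include <functional>
#include <limits>
#include <vector>

struct ClockDomain {
    std::uint64_t period_ps;                       // may be rewritten by DVFS at run time
    std::uint64_t next_edge_ps = 0;                // absolute time of the next rising edge
    std::vector<std::function<void()>> sched;      // statically ordered component evaluations
};

void simulate(std::vector<ClockDomain>& domains, std::uint64_t horizon_ps) {
    std::uint64_t now = 0;
    while (now < horizon_ps) {
        // Advance to the earliest pending clock edge across all domains.
        std::uint64_t next = std::numeric_limits<std::uint64_t>::max();
        for (auto& d : domains) next = std::min(next, d.next_edge_ps);
        now = next;
        for (auto& d : domains) {
            if (d.next_edge_ps == now) {
                for (auto& step : d.sched) step();        // run this domain's static schedule
                d.next_edge_ps = now + d.period_ps;       // period re-read: picks up frequency changes
            }
        }
    }
}

int main() {
    std::vector<ClockDomain> doms(2);
    doms[0].period_ps = 1000;   // 1 GHz domain
    doms[1].period_ps = 2000;   // 500 MHz domain
    // A scheduled step may itself change a domain's period (e.g. a DVFS controller).
    doms[1].sched.push_back([&doms] { doms[1].period_ps = 4000; });
    simulate(doms, 10000);
    return 0;
}
```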

5A-2 (Time: 14:10 - 14:40)
Title: Coarse-grained Simulation Method for Performance Evaluation of a Shared Memory System
Authors: *Ryo Kawahara, Kenta Nakamura, Kouichi Ono, Takeo Nakada (IBM Research - Tokyo, Japan), Yoshifumi Sakamoto (Global Business Services, IBM Japan, Japan)
Pages: 413 - 418
Keywords: Simulation, performance, UML, embedded system, multi-processors
Abstract: We propose a coarse-grained simulation method that takes the effect of memory access contention into account. The method can be used to evaluate the execution time of an application program during system architecture design in an early phase of development. In this phase, information about memory access timings is usually not available. Our method uses a statistical approximation of the memory access timings to estimate their influence on the execution time. We report a preliminary verification of our simulation method by comparing it with an experimental result from an image processing application on a dual-core PC, and find an execution-time error on the order of 3 percent.
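As a hedged illustration of the general idea only (not the paper's formulation), a coarse-grained estimate can fold memory contention into a segment's execution time by adding an expected waiting term per access. The bus-utilization model and the estimate_segment_time() helper below are assumptions made for this sketch.

```cpp
// Illustrative sketch: statistically approximating memory-contention delay in a
// coarse-grained execution-time estimate. The probability model is assumed.
#include <cstdio>

// Estimated time of a code segment on one core of a shared-memory system.
double estimate_segment_time(double compute_time_s,
                             double accesses,          // memory accesses in the segment
                             double mem_latency_s,     // uncontended access latency
                             double bus_utilization,   // fraction of time the bus is busy (0..1)
                             int    other_cores) {
    // Assumption: each access that finds the bus busy waits, on average, for
    // half of another core's access.
    double expected_wait = other_cores * bus_utilization * 0.5 * mem_latency_s;
    return compute_time_s + accesses * (mem_latency_s + expected_wait);
}

int main() {
    double t = estimate_segment_time(0.010, 1e6, 20e-9, 0.3, 1);
    std::printf("estimated segment time: %.6f s\n", t);
    return 0;
}
```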

5A-3 (Time: 14:40 - 15:10)
Title: T-SPaCS – A Two-Level Single-Pass Cache Simulation Methodology
Authors: Wei Zang, *Ann Gordon-Ross (University of Florida, U.S.A.)
Pages: 419 - 424
Keywords: Configurable cache, cache hierarchy, cache optimization, low energy, embedded systems
Abstract: The cache hierarchy's large contribution to total microprocessor system power makes caches a good optimization candidate. We propose a single-pass trace-driven cache simulation methodology - T-SPaCS - for a two-level exclusive instruction cache hierarchy. Instead of storing and repeatedly processing numerous stacks, as in a direct adaptation of conventional trace-driven cache simulation to two-level caches, T-SPaCS simulates both the level-one and level-two caches simultaneously using a single stack. Experimental results show T-SPaCS efficiently and accurately determines the optimal cache configuration (lowest energy).
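The sketch below shows only the classic single-pass, stack-distance idea that such trace-driven simulators build on: one pass over the address trace yields hit counts for every fully associative LRU capacity at once. It is a baseline illustration under those assumptions; T-SPaCS's handling of set-associative, two-level exclusive hierarchies is considerably more involved.

```cpp
// Illustrative sketch: single-pass LRU stack-distance simulation over a block
// address trace. Counts hits for every fully associative cache size at once.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <iterator>
#include <vector>

int main() {
    std::vector<std::uint64_t> trace = {1, 2, 3, 1, 2, 4, 1, 3};  // block addresses
    std::vector<std::uint64_t> stack;          // most recently used block at the back
    std::vector<std::uint64_t> hits_at_depth;  // hits_at_depth[d]: hits if cache holds d+1 blocks

    for (std::uint64_t blk : trace) {
        auto it = std::find(stack.rbegin(), stack.rend(), blk);
        if (it != stack.rend()) {
            std::size_t depth = static_cast<std::size_t>(it - stack.rbegin()); // 0 = most recent
            if (hits_at_depth.size() <= depth) hits_at_depth.resize(depth + 1, 0);
            ++hits_at_depth[depth];
            stack.erase(std::next(it).base());  // remove the block before re-pushing it
        }
        stack.push_back(blk);                   // block becomes most recently used
    }
    // A cache of capacity C blocks hits whenever the stack distance is < C.
    std::uint64_t cumulative = 0;
    for (std::size_t d = 0; d < hits_at_depth.size(); ++d) {
        cumulative += hits_at_depth[d];
        std::printf("capacity %zu blocks: %llu hits of %zu accesses\n",
                    d + 1, (unsigned long long)cumulative, trace.size());
    }
    return 0;
}
```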

5A-4 (Time: 15:10 - 15:40)
Title: Fast Data-Cache Modeling for Native Co-Simulation
Authors: *Héctor Posadas, Luis Diaz, Eugenio Villar (University of Cantabria, Spain)
Pages: 425 - 430
Keywords: System-level, cache modeling, native co-simulation, embedded SW
Abstract: Efficient design of large multiprocessor embedded systems requires fast, early performance modeling techniques. Native co-simulation has been proposed as a fast solution for evaluating systems in early design steps: annotated SW execution is performed in conjunction with a virtual model of the HW platform to generate a complete system simulation. To obtain sufficiently accurate performance estimations, the effect of all the system components, such as processor caches, must be considered. ISS-based cache models slow down the simulation, greatly reducing the efficiency of native-based co-simulations. Cache modeling techniques for fast native co-simulation have been proposed to address this, but they only consider instruction caches. In this paper, a fast technique for data-cache modeling is presented, together with the instrumentation required to apply it in native execution. The model allows the designer to obtain cache hit/miss rate estimations with a speed-up of two orders of magnitude with respect to an ISS. Miss-rate estimation error remains below 5% for representative examples.
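As a rough illustration of the kind of hook such instrumentation can call, the sketch below implements a tiny direct-mapped data-cache model updated on every native load/store address. The dc_access() interface, the cache geometry, and the use of host addresses are assumptions made for this sketch, not the paper's model.

```cpp
// Illustrative sketch: a lightweight data-cache model driven by instrumented
// native memory accesses. Organization and interface are assumed.
#include <cstdint>
#include <cstdio>
#include <vector>

struct DataCacheModel {
    static constexpr unsigned kLineBytes = 32;
    static constexpr unsigned kLines     = 256;   // 8 KiB, direct-mapped
    std::vector<std::uint64_t> tags = std::vector<std::uint64_t>(kLines, ~0ull);
    std::uint64_t hits = 0, misses = 0;

    // Called by instrumentation inserted before every native load/store.
    void dc_access(std::uintptr_t addr) {
        std::uint64_t line = addr / kLineBytes;
        unsigned idx = static_cast<unsigned>(line % kLines);
        if (tags[idx] == line) ++hits;
        else { ++misses; tags[idx] = line; }
    }
};

int main() {
    DataCacheModel dc;
    std::vector<int> a(4096);
    // Host (native) addresses drive the model here; a real native co-simulation
    // flow would remap or scale them toward target addresses.
    for (int pass = 0; pass < 2; ++pass)
        for (int& x : a) dc.dc_access(reinterpret_cast<std::uintptr_t>(&x));
    std::printf("hits=%llu misses=%llu miss rate=%.1f%%\n",
                (unsigned long long)dc.hits, (unsigned long long)dc.misses,
                100.0 * dc.misses / (dc.hits + dc.misses));
    return 0;
}
```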