Designers' Forum

Designers' Forum is a unique program that shares the design experience and solutions from real product development among LSI designers and EDA academia/developers. Topics discussed in this forum include Robotics, Imaging Technologies and Applications, Virtual Reality, and Emerging Technologies for Tokyo Olympic 2020.

  • Date: January 23-24, 2019
  • Place: Miraikan, 7F, Room Saturn
  • Designers' Forum Chairs: Masaitsu Nakajima (Socionext, Japan)
    Koji Inoue (Kyushu University, Japan)
  • Designers' Forum Members:
    • Hiroe Iwasaki (NTT, Japan)
    • Shinichi Shibahara (Renesas Electronics, Japan)
    • Masaru Kokubo (Hitachi, Japan)
    • Koichiro Yamashita (Fujitsu Laboratories, Japan)
    • Akihiko Inoue (Panasonic, Japan)
    • Masaki Sakakibara (Sony Semiconductor Solutions, Japan)
    • Yuji Ishikawa (Toshiba Device & Storage, Japan)

Session  Date/Time                Title
5A       January 23, 13:50-15:05  Oral Session: Robotics: From System Design to Application
6A       January 23, 15:35-17:15  Oral Session: Advanced Imaging Technologies and Applications
8A       January 24, 13:30-14:45  Oral Session: Emerging Technologies for Tokyo Olympic 2020
9A       January 24, 15:00-16:45  Oral Session: Beyond the Virtual Reality World


Session 5A: Wednesday, January 23, 13:50-15:05

Oral Session: Robotics: From System Design to Application

Organizers: Koji Inoue (Kyushu University, Japan)
Yuji Ishikawa (Toshiba Device & Storage, Japan)
Session Chair: Yuji Ishikawa (Toshiba Device & Storage, Japan)

This session includes three interesting invited talks on robotics, a key technology for realizing Society 5.0 with significant potential to change our daily lives. The first talk focuses on a modeling technique for minimally invasive surgery that could be used in remote medicine. The second presentation targets ROS (Robot Operating System), a state-of-the-art development framework led by Open Robotics. The third talk discusses the rapid development of robotics technology through contests and open collaboration. The purpose of this session is to share and discuss state-of-the-art and future robotics from various viewpoints, such as vision technologies, design methodologies, and open collaboration.

  • 1: Computer-Aided Support System for Minimally Invasive Surgery Using 3D Organ Shape Models

    Ken'ichi Morooka (Kyushu University, Japan)

    Our research group has been developing computer-aided support systems for safe and accurate minimally invasive surgery. In particular, our support system reconstructs 3D shapes and deformations of organs by combining stereo endoscopic images and neural networks. This talk presents the fundamental techniques of our support system.

  • 2: ROS and mROS: How to accelerate the development of robot systems and integrate embedded devices

    Hideki Takase (Kyoto University, JST PRESTO, Japan)

    Robot Operating System (ROS) is a state-of-the-art component-oriented development framework led by Open Robotics. This talk first describes the advantages of ROS in the robot development process. ROS can accelerate the development of robot systems by configuring and connecting abundant open-source ROS packages. Another aspect of ROS is its communication middleware based on the publish/subscribe model. ROS nodes communicate with each other via topics: an arbitrary node publishes data to a topic, and other nodes can subscribe to data from that topic. roscore, the master of a ROS system, manages the advertisement of node information. In addition, ROS provides powerful tools and a worldwide, friendly community to help robot system designers.
    Although many useful packages are available for ROS1, the widely used version, you must run Linux/Ubuntu to execute ROS1 nodes. This means selecting high-performance, power-hungry processors such as AArch64 or x64 CPUs. We think the use of embedded processors would improve power consumption and real-time capability in robot systems. The latter part of this talk presents our work on mROS, which enables embedded processors to be integrated into ROS1 systems. mROS is a lightweight runtime environment for running a ROS1 node on embedded systems, intended to operate on edge devices in distributed network systems. To realize ROS communication, we employ lwIP, the TCP/IP protocol stack included in the Arm Mbed library, and the TOPPERS/ASP kernel as a real-time operating system. You can design mROS nodes with native ROS APIs, and you can develop embedded device drivers with the Mbed library. Moreover, the ITRON programming model can help if you wish to realize multi-tasking. Our work would contribute to the portability of ROS1 packages to embedded systems and to better power saving and real-time performance for edge nodes in distributed robot systems.
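    As a rough illustration of the publish/subscribe model described above, the sketch below shows a minimal ROS1 node in C++ (roscpp). The node and topic names are our own examples using the standard ROS1 API, not code from the mROS project; mROS aims to run nodes written against this API on embedded processors.

      // Minimal ROS1 publish/subscribe sketch (roscpp).
      #include <ros/ros.h>
      #include <std_msgs/String.h>

      // Called whenever a message arrives on the subscribed topic.
      void callback(const std_msgs::String::ConstPtr& msg) {
        ROS_INFO("received: %s", msg->data.c_str());
      }

      int main(int argc, char** argv) {
        ros::init(argc, argv, "demo_node");  // register the node with roscore
        ros::NodeHandle nh;
        // Advertise a topic; other nodes discover it through roscore.
        ros::Publisher pub = nh.advertise<std_msgs::String>("chatter", 10);
        ros::Subscriber sub = nh.subscribe("chatter", 10, callback);
        ros::Rate rate(10);                  // 10 Hz publishing loop
        while (ros::ok()) {
          std_msgs::String msg;
          msg.data = "hello";
          pub.publish(msg);                  // publish to the topic
          ros::spinOnce();                   // dispatch queued callbacks
          rate.sleep();
        }
        return 0;
      }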

  • 3: Rapid Development of Robotics Technology Using Robot Contests and Open Collaboration

    Masaki Yamamoto (AI Solution Center, Panasonic Corporation, Japan)

    Robotic systems are widely used in manufacturing, where the environment can be tuned for the robots. Repetitive tasks are easily performed by robots with accuracy and speed surpassing human operators. In human environments, on the other hand, robots have long been said to be promising, yet their deployment remains slow.
    Thanks to emerging AI technology, robots are gaining more reliable capabilities to sense and adapt to human environments. But once a robot has to perform physical tasks, the total system becomes more complicated and expensive because the robot has a physical body that interacts with the external world, which makes the robot system less viable in real applications. To overcome these difficulties, we have to carefully assess potential robotic applications from as many viewpoints as possible.
    In this situation, robot contests are becoming popular in the robotics research community. In some cases, potential users of robotic systems organize a contest to accelerate technology development, as with the Amazon Robotics Challenge. The environment can be defined by the contest organizer to reflect not only a technical but also a user point of view, and proposals from worldwide participants provide a good testbed for exploring design possibilities.
    For universities, contests let students exert their creativity and test their technological edge. Private companies can use these opportunities to educate young engineers and explore potential technological directions in a short period of time. In this talk, we share our experiences in the Amazon Robotics Challenge 2017 and the World Robot Summit 2018.

Session 6A: Wednesday, January 23, 15:35-17:15

Oral Session: Advanced Imaging Technologies and Applications

Organizers: Masaki Sakakibara (Sony Semiconductor Solutions Corporation, Japan)
Shinichi Shibahara (Renesas Electronics Corporation, Japan)
Session Chair: Shinichi Shibahara (Renesas Electronics Corporation, Japan)

The image sensor market is growing very rapidly, and image sensors are being widely adopted in monitoring, autonomous driving, home-security cameras, and medical fields. This session visits the trends, hardware implementations, and system designs in each of these applications. The first talk presents an image sensor for heart-rate detection in driver monitoring, nursing care, and security systems. The next presentation shows a LiDAR system for 200m range detection on highways for autonomous driving. The third shares a low-power event-driven image sensor for detecting moving objects in wireless products. The final talk demonstrates a high-frame-rate fundus camera for physiological monitoring and pathological diagnosis.

  • 1: NIR Lock-in Pixel Image Sensors for Remote Heart Rate Detection

    Shoji Kawahito*1, Cao Chen*1, Leyi Tan*1, Keiichiro Kagawa*1, Keita Yasutomi*1, Norimichi Tsumura*2 (Shizuoka University (*1), Chiba University (*2))

    This paper presents a lock-in pixel CMOS image sensor (CIS) with high near-infrared (NIR) sensitivity for remote physiological signal detection. The developed 1.3M-pixel CIS performs lock-in detection of a short-pulse-modulated signal light while suppressing the influence of background light variation. Using the implemented lock-in camera system, consisting of the CIS chip and NIR (870nm) LEDs, remote non-contact heart-rate (HR) measurements with 98% accuracy relative to contact-type HR measurements are demonstrated. An HR-variability spectrogram for monitoring mental stress is also successfully obtained with the implemented system. Target applications of this sensor include driver monitoring, nursing care, and security systems.
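    As a rough sketch of the lock-in principle (our formulation; the actual pixel design may differ): if each pixel accumulates samples $V^{\mathrm{on}}_k$ while the pulsed NIR LED is on and $V^{\mathrm{off}}_k$ while it is off, the demodulated output

      $$S = \sum_{k}\left(V^{\mathrm{on}}_k - V^{\mathrm{off}}_k\right)$$

    retains the LED-synchronous signal, while background light that varies slowly compared with the pulse rate contributes almost equally to both sample sets and cancels.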

  • 2: A TDC/ADC Hybrid LiDAR SoC for 200m Range Detection with High Image Resolution under 100klux Sunlight

    Kentaro Yoshioka (Toshiba Corporation)

    Long-range, high-pixel-resolution LiDAR systems, which use the Time-of-Flight (ToF) of photons reflected from the target, are essential for launching safe and reliable self-driving programs of Level 4 and above. 200m long-range distance measurement (DM) is required to sense preceding vehicles and obstacles as early as possible on highways. To realize safe and reliable self-driving in city areas, LiDAR systems combining a wide angle of view with high pixel resolution are required to fully perceive surrounding events. Moreover, this performance must be achieved under strong background light (e.g., sunlight), the most significant noise source for LiDAR systems. We propose a TDC/ADC hybrid LiDAR SoC with a smart accumulation technique (SAT) to achieve both 200m range and high-resolution imaging for reliable self-driving systems. SAT uses ADC information to enhance the effective pixel resolution, activating accumulation only when the target reflection is recognized. Moreover, the hybrid architecture enables a wide measurement range of 0-200m; 2x longer range and 2x higher effective pixel resolution are achieved compared with conventional designs.
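    For reference, the basic ToF relation underlying such range measurement is

      $$d = \frac{c\,\Delta t}{2},$$

    so detecting a target at $d = 200\,\mathrm{m}$ means resolving a round-trip delay of $\Delta t = 2d/c = 2 \times 200\,\mathrm{m} / (3 \times 10^{8}\,\mathrm{m/s}) \approx 1.33\,\mu\mathrm{s}$.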

  • 3: A 1/4-inch 3.9Mpixel Low-Power Event-Driven Back-Illuminated Stacked CMOS Image Sensor

    Oichi Kumagai (Sony Semiconductor Solutions Corporation)

    Wireless products such as smart home-security cameras, intelligent agents, virtual personal assistants, and smartphones are evolving rapidly to satisfy our needs. Small size, extended battery life, and transparent machine interfaces are all required of the camera systems in these applications. In battery-limited environments, these applications can profit from an event-driven approach to moving-object detection. We have developed a 1/4-inch 3.9Mpixel low-power event-driven back-illuminated stacked CMOS image sensor with a pixel readout circuit that detects moving objects per pixel under lighting conditions ranging from 1 to 64,000lux. Utilizing pixel summation in a floating diffusion shared by each pixel block, moving-object detection is realized at 10 frames per second while consuming only 1.1mW, a 99% reduction from the 95mW the same CIS consumes at full-resolution 60fps. This low-power event-driven technology enhances device usability and enables always-on low-resolution sensing combined with high-quality imaging.
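    The quoted saving follows directly from the reported figures: $1 - 1.1\,\mathrm{mW}/95\,\mathrm{mW} \approx 0.988$, i.e., roughly a 99% power reduction in the event-driven detection mode relative to full-resolution 60fps imaging.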

  • 4: Next-Generation Fundus Camera with Full-Color Image Acquisition in 0-lx Visible Light using BSI CMOS Image Sensor with Advanced NIR Multi-Spectral Imaging System

    Hirofumi Sumi*1*2, Hironari Takehara*2, Norimasa Kishi*1, Jun Ohta*2, Masatoshi Ishikawa*1  (The University of Tokyo (*1), Nara Institute of Science and Technology (*2))

    This research describes the development of a next-generation fundus camera with a high frame rate, based on intelligent imaging technologies. As one of several POCs (Proofs of Concept) for Dynamic Intelligent Systems using High-Speed Vision, we aimed to develop a camera system that facilitates tracking of the fast movement of the eye.
    Moreover, these cameras can acquire images in multi-band spectral ranges for the signals NIR1, NIR2, and NIR3, which correspond to visible light R, B, and G, respectively, based on near-infrared spectral imaging technology. To this end, an advanced NIR multi-spectral technology has been developed. Using this technique, NIR1 (780–800nm), NIR2 (870nm), and NIR3 (940nm) in the NIR wavelength range are acquired for a target image. By applying interpolation and color-correction processing, a color image can be reproduced using only multi-NIR signals in the absence of visible light (0 lx). Using this fundus camera, it is also possible for individuals to observe and acquire images of their own fundus without assistance.
    Additionally, the fundus is the only site in the human body where arteries and capillaries can be observed directly and non-invasively. By examining the fundus oculi, it is possible to observe the state of the blood vessels, retina, and optic papilla, and thus diagnose diseases ranging from glaucoma and retinal detachment to diabetes and arteriosclerosis.
    Furthermore, another potential application of this compact camera is capturing diagnostic health information, enabling active health management by individuals.
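    A common form of such color correction (our illustration; the processing in the talk may differ) is a linear mapping from the three NIR channels to RGB,

      $$\begin{pmatrix} R \\ G \\ B \end{pmatrix} = M \begin{pmatrix} \mathrm{NIR1} \\ \mathrm{NIR2} \\ \mathrm{NIR3} \end{pmatrix}, \qquad M \in \mathbb{R}^{3 \times 3},$$

    where $M$ is fitted so that reference objects imaged under NIR illumination reproduce their visible-light colors.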

Session 8A: Thursday, January 24, 13:30-14:45

Oral Session: Emerging Technologies for Tokyo Olympic 2020

Organizers: Koichiro Yamashita (Fujitsu, Japan)
Akihiko Inoue (Panasonic Corporation, Japan)
Session Chair: Koichiro Yamashita (Fujitsu, Japan)

The Tokyo 2020 Olympic and Paralympic Games are among the most exciting events in Japan in recent decades, and they are also an excellent testing field for evaluating emerging technologies that will create an Olympic legacy. The first talk presents a wearable robotic device, named ‘HIMICO’, which assists human movement softly by using Bowden cables to transfer power from the motor system. The next presentation shows image processing for object detection and scene recognition based on deep learning technology. The final talk demonstrates emerging devices for accurate spatial sensing and efficient battery management to realize security and safety in autonomous driving and autonomous control.

  • 1: Walking assistive powered-wear 'HIMICO' with wire-driven assist

    Kenta Murakami (Panasonic Corporation)

    Wearable robotic devices that augment and assist human movement have been developed by many groups. Many of these devices, however, are of the exoskeleton type. Exoskeleton robots limit freedom of movement and significantly increase leg inertia, since the actuators are placed near the joints. In contrast, we developed the wire-driven assistive robot ‘HIMICO’, which assists human movement softly by using Bowden cables to transfer power from the motor system. The key feature of this approach is that the actuator can be located away from the joint, allowing a lightweight leg structure while still generating significant forces. The maximum tension per wire is approximately 100 N, allowing a very light weight of 3.5 kg and reducing the metabolic cost of climbing slopes and walking up stairs.

  • 2: Deep Scene Recognition with Object Detection

    Zhiming Tan (Fujitsu R&D Center Co., Ltd.)

    For a large international event like the Olympics, advanced image processing and deep learning technologies become important for smart cities and smart life, because visual inspection alone cannot scale to recognize every scene. Traditional methods for traffic scene recognition must handle complex factors such as multiple object types, object relationships, background, weather, and lighting, so it is hard for them to recognize scenes accurately in real time. Using a lightweight CNN model optimized for object detection, we present a system that recognizes traffic scenes with higher accuracy and in real time. The CNN model is optimized for small, distant, and occluded objects. Rules over object relationships are used to recognize scenes for city surveillance, such as traffic jams, road construction, and waiting for a bus. Our activities have been extended to recognizing human behavior and scenes: grasping the game situation from players' movements for sports applications, detecting suspicious behavior in a crowd for security applications, etc. Various sample movies are introduced in this talk.
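    As a hypothetical sketch of rule-based scene recognition over detector output (the data structure, thresholds, and the traffic-jam rule below are our illustration, not the system presented in the talk):

      // Hypothetical rule-based scene recognition over CNN detections (C++).
      #include <iostream>
      #include <string>
      #include <vector>

      struct Detection {
        std::string label;  // e.g. "car", "person"
        float x, y, w, h;   // normalized bounding box
      };

      // Rule: many cars covering much of the frame => "traffic jam".
      std::string RecognizeScene(const std::vector<Detection>& dets) {
        int cars = 0;
        float carArea = 0.0f;
        for (const auto& d : dets) {
          if (d.label == "car") { ++cars; carArea += d.w * d.h; }
        }
        if (cars >= 10 && carArea > 0.4f) return "traffic jam";
        if (cars == 0) return "empty road";
        return "normal traffic";
      }

      int main() {
        // Twelve identical car detections, each covering 5% of the frame.
        std::vector<Detection> dets(12, {"car", 0.0f, 0.0f, 0.25f, 0.2f});
        std::cout << RecognizeScene(dets) << "\n";  // prints "traffic jam"
        return 0;
      }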

  • 3: Spatial and battery sensing solutions for smart cities leading to 2020

    Hiroyuki Tsujikawa (Panasonic Corporation)

    Toward the year 2020, the deployment of IoT-based infrastructure services has been accelerating as part of building smart cities. In mobility especially, robots and drones are being widely commercialized. These high-tech products require highly accurate spatial sensing and efficient battery management to realize security and safety in autonomous driving and autonomous control. Here we introduce examples of next-generation sensing solutions that incorporate Panasonic's sensor devices and algorithm technology. For spatial recognition, we propose sensing technology that facilitates free-space detection and obstacle detection; these functions were developed by adding 3D depth measurement techniques to the high-quality imaging technology we have cultivated in camera products over the years. For battery applications, we introduce model-based design for lithium-ion battery deterioration diagnosis and lifetime prediction using AI-based battery state estimation technology.

Session 9A: Thursday, January 24, 15:00-16:45

Oral Session: Beyond the Virtual Reality World

Organizers: Hiroe Iwasaki (NTT, Japan)
Masaru Kokubo (Hitachi, Japan)
Session Chair: Masaru Kokubo (Hitachi, Japan)

Virtual reality (VR) technology will play a key role in human-machine collaboration. This session covers techniques for next-generation VR systems. The first talk presents the social impact of VR and discusses its future, including the latest VR developments and research. The second talk presents a new wearable display whose extremely light weight makes it easy for VR users to wear; the proposed display adopts a new scanning-fiber method to improve image quality. The last presentation discusses new video presentation technologies for experiencing super-reality in sports; the proposed super-real video expression is effective both for conveying the producer's intention and for providing scenes useful to viewers. In this session, we discuss current technology trends as well as the future impact beyond VR.

  • 1: The World of VR2.0

    Michitaka Hirose (The University of Tokyo)

    Recently, VR technology has been attracting broad interest from society. In this talk, recent topics in VR research and development are introduced, and the social impacts of the technology are discussed from various points of view.

  • 2: Optical fiber scanning system for ultra-lightweight wearable display

    Yoshio Seo (Hitachi)

    An optical fiber scanning system is a laser-beam-steering device that changes the direction of laser travel by displacing the tip of an optical fiber. Because of its small size, this type of device is suitable for embedding in small wearable displays such as smart glasses.
    A conventional fiber scanning system draws a spiral trajectory of rotational displacement with identical vertical and horizontal vibration frequencies. This raises several issues: the resolution depends only on the vibration frequency, a bright spot occurs at the center of the drawing area, and the drawing area is circular.
    In this study, we developed (1) Oval scanning, (2) Cross scanning, and (3) Cross-Limit scanning as novel scanning control methods that overlap oval trajectories of different shapes. In experiments with an actual device, we confirmed that the bright spots move outside the drawing area and that the drawing area becomes closer to rectangular. Furthermore, these methods make it possible to increase resolution by raising the laser modulation frequency. Fiber scanning could thus improve image quality with these novel controls.
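    As a rough illustration in our own notation (not from the talk), the fiber-tip trajectory can be modeled as

      $$x(t) = A_x(t)\sin(2\pi f t), \qquad y(t) = A_y(t)\cos(2\pi f t),$$

    where equal vertical and horizontal frequencies with a slowly ramped common amplitude trace the conventional spiral, while varying the amplitude ratio $A_x/A_y$ between passes overlaps ovals of different shapes, as in the proposed Oval scanning.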

  • 3: Superreal Video Representation for Enhanced Sports Experiences

    Hideaki Kimata (NTT)

    The impressions received from paintings and photographs differ. Where does this difference come from? One view is that both are products that include composition, but with different production processes, and that each contains something the producer wants to convey. How much of the producer's intention can be embedded? Should the work be composed the way the viewer wants to see it? There may be no logical answer to these questions. Alongside pursuing answers to them, we show examples of research results in super-real video presentation. We believe that super-real video expression is effective for conveying the producer's intention and also offers scenes useful to viewers. In this talk, we introduce examples of using VR (virtual reality) technology for athlete training and examples of video experiences in sports watching and at events. Among them, we show scenes in which magnified expression, such as "emphasizing" a part of the scene, is effective, and discuss them with reference to experimental data. If VR is a medium that allows the viewer a certain degree of freedom while still conveying the producer's intention, then "emphasizing" a part of the scene is one method of conveying that intention in VR.

Last Updated on: October 12, 2018