June 14, 2022 (Tuesday)
[Keynote Speech #1] 10:10-11:00
Near Data AI Acceleration with Computational Storage
Yong Ho Song

Biography
Dr. Yong Ho Song received bachelor's and master's degrees in computer engineering from Seoul National University in 1989 and 1991, respectively, and a Ph.D. in electrical engineering from the University of Southern California in 2002. He is now the head of the Controller Development Team of the Memory Business at Samsung Electronics, responsible for the development of controller SoCs, the core components of storage solution products including server/client SSDs and mobile storage. Before his appointment as head of the Controller Development Team in 2019, he had worked as a professor at Hanyang University since 2003, carrying out many research projects on the system architecture and software of memory-based storage systems; his research results were released to the public in open-source form as OpenSSD. Dr. Song has published more than 150 technical papers in top academic journals and conferences, and has served as a program committee member of many prestigious conferences, including the IEEE International Parallel and Distributed Processing Symposium, the IEEE International Conference on Parallel and Distributed Systems, and the IEEE International Symposium on High-Performance Computer Architecture.

Abstract
The increasing use of large-scale AI and deep learning models in various applications raises interest in the efficient storage and processing of model data. Server systems built for fast processing of vast amounts of model data usually rely on high-capacity DRAM, but high power consumption and cost often limit how far such configurations can scale, and the overhead of transferring data between storage and DRAM is a further problem to be avoided. This presentation introduces computational storage, which provides data processing capability inside the storage device, and explains a system configuration that efficiently processes large-capacity model data stored within it. Computational storage devices can be applied not only to large-scale AI applications but also to graph neural networks and recommendation applications, and are expected to have advantages over existing AI systems built on legacy storage devices in terms of total cost of ownership. Samsung aims to explore new applications of this storage model through collaboration with various universities and companies.
[Keynote Speech #2] 11:10-12:10
AI Applications in Semiconductor Manufacturing
Marc Hamilton

Biography
Marc leads the Solutions Architecture and Engineering team at NVIDIA, responsible for working with customers and partners to deliver AI and high-performance computing solutions. Prior to NVIDIA, Marc worked at HP in the Hyperscale Business Unit and at Sun Microsystems in the HPC and data center groups. Marc holds a B.S. in math and computer science from UCLA, an M.S. in electrical engineering from USC, and is a graduate of the UCLA Executive Management program.

Abstract
Modern-day AI has only been possible because advances in semiconductor manufacturing have delivered ever more powerful processors. However, solving today's AI problems is a data-center-scale challenge, and powerful chips alone are not enough. The modern AI data center requires high-performance compute, networking, and storage, and, even more importantly, high-performance software. NVIDIA's AI software is used today by every semiconductor manufacturer. In his talk, Hamilton will address some of the most promising applications of AI in the semiconductor industry.
June 15, 2022 (Wednesday)
[Keynote Speech #3] 9:30-10:30
The compute requirements of developing super-human cognition
Simon Knowles

Biography
Simon is co-founder, CTO, and EVP of Engineering at Graphcore, and the original architect of the "Colossus" IPU. He has been designing original processors for emergent workloads for over 30 years, focusing on intelligence since 2012. Before Graphcore, Simon co-founded two other successful processor companies: Element14, acquired by Broadcom in 2000, and Icera, acquired by Nvidia in 2011. He is an EE graduate of Cambridge University.

Abstract
The true potential of AI rests on super-human learning capacity, and on the ability to selectively draw on that learning. Both of these properties, scale and selectivity, challenge the design of AI computers and the tools used to program them. A rich pool of new ideas is emerging, driven by a new breed of computing company, according to Graphcore co-founder Simon Knowles. In his talk, Simon discusses the creation of the Intelligence Processing Unit (IPU), a new type of processor specifically designed for AI computation. He looks ahead towards the development of AIs with super-human cognition, and explores the nature of the computing systems needed to make powerful AI an economic everyday reality.
[Keynote Speech #4] 10:50-11:50
Making Computing More Brain-like
Mike Davies

Biography
Mike Davies is Director of Intel's Neuromorphic Computing Lab. Since 2014 he has been researching neuromorphic architectures, algorithms, software, and systems, and has fabricated several neuromorphic chip prototypes to date, including the Loihi series. He was a founding employee of Fulcrum Microsystems and Director of its silicon engineering group until Intel's acquisition of Fulcrum in 2011. He led the development of four generations of low-latency, highly integrated Ethernet switches using Fulcrum's proprietary asynchronous design methodology. He received B.S. and M.S. degrees from Caltech in 1998 and 2000, respectively.

Abstract
Despite decades of progress in semiconductor scaling, computer architecture, and artificial intelligence, our computing technology today still lags biological brains in many respects. While deep artificial neural networks have provided breakthroughs in AI, these gains come with heavy compute and data demands relative to their biological counterparts. Neuromorphic computing aims to narrow this gap by drawing inspiration from the form and function of biological neural circuits. The past several years have seen significant progress in neuromorphic computing research, with chips like Intel's Loihi demonstrating, for the first time, compelling quantitative gains over a range of workloads, from sensory perception to data-efficient learning to combinatorial optimization. This talk surveys recent developments in this endeavor to re-think computing from transistors to software informed by biological principles. It previews a new class of chips that can autonomously process complex data streams, adapt, plan, behave, and learn in real time at extremely low power levels.
Keynote Speakers