Special Sessions
June 14, 2022 (Tuesday)
[Special Session#1] 13:30 – 14:45
Novel Computation and Communication Methods for AI Accelerator Design
Organizers:
Kun-Chih Chen (National Sun Yat-sen University, Taiwan)
Md Farhadur Reza (Eastern Illinois University, USA)
Abstract
Artificial Intelligence (AI) technologies have shown significant advantages in many domains such as image processing, speech recognition, and machine translation. Current AI accelerator designs usually involve thousands of parameters, leading to high design complexity and power consumption when developing large-scale AI accelerators. In addition, contemporary AI methods are usually trained on vast amounts of labeled data, so generating an optimal AI model for a new application is time-consuming.
To reduce the challenge of generating a cost-efficient AI model, the design of efficient computation units for supervised learning (including deep learning) has become an emerging topic in recent years. Unsupervised learning is another branch of machine learning that works with unlabeled data. Common unsupervised learning methods, such as Spiking Neural Networks (SNNs), are trained based on spike generation between neurons. Although SNNs have the benefit of low-power data processing, low computing accuracy is the main problem of current unsupervised learning methods. To address the design problems of both supervised and unsupervised learning methods, novel computation methods and architectures, such as stochastic computing and near-memory processing, are seen as viable solutions to meet performance and design-productivity requirements. In addition, novel communication technologies are needed in AI accelerators to address the performance and energy bottlenecks in high-performance computing systems. This special session is motivated by these challenges and opportunities and aims at attracting contributions on efficient design solutions for both the computation and communication aspects of supervised as well as unsupervised learning approaches.
[Special Session#2] 13:30 – 14:45
Artificial Intelligence Boosted Circuits and Systems for Brain-Machine Interface
Organizers:
Jie Yang (Westlake University, China)
Mohamad Sawan (Westlake University, China)
Abstract
Recent advances in artificial intelligence (AI) have significantly shaped the architectures of circuits and systems for closed-loop brain-machine interfaces (BMIs). On the one hand, AI technologies provide unprecedented abilities to analyze neural signals, enabling prediction, intervention, and treatment of many conditions such as epileptic seizures, stroke, depression, and addiction, as well as applications such as vision enhancement. On the other hand, embracing AI technologies poses new challenges across different design aspects, since the complexity of AI does not naturally meet the low-power and miniaturization constraints of both wearable and implantable brain-machine interfaces. The contributors of this special session on AI-boosted BMI will present their latest achievements in developing algorithms, circuits, and overall systems that tackle the abovementioned challenges and address the demand for next-generation BMIs.
[Special Session#3] 16:10 – 17:25
Efficient Hardware Accelerator for DNN / Emerging Neural Network Circuits and Algorithms Combining Bio-Inspired and Machine-Learning Perspectives
Organizers:
Charlotte Frenkel (Delft University of Technology, Netherlands)
Lei Deng (Tsinghua University, China)
Wei Zhang (HKUST, Hong Kong)
Abstract
Deep neural networks are widely used in various data-processing tasks, with design scale growing rapidly to handle practical scenarios. Moreover, besides traditional CNNs, emerging networks such as SNNs and GNNs, along with various new machine-learning algorithms, have been developed and utilized in practice. At the same time, to meet the intensive data and computation demands, hardware accelerators implemented on FPGAs or ASICs are increasingly used to speed up processing and improve system energy efficiency. However, different neural networks raise different challenges and place different demands on an accelerator's on-chip memory and computational resources. Hence, efficient utilization of the limited resources to achieve the best-performing designs remains an open and interesting research challenge. In this session, four papers present accelerators that use software-hardware co-design methods to optimize various DNNs on ASICs and on single or multiple FPGAs.
Embedded systems and dedicated circuit implementations based on standard machine-learning techniques have recently demonstrated striking successes, in both accuracy and efficiency, for deployment at the edge. As biological systems still exhibit order-of-magnitude power savings compared to silicon implementations of conventional neural networks, spiking neural networks have recently received increasing research interest. However, these emerging bio-inspired techniques still lag behind their conventional counterparts in both accuracy and power efficiency. In this special session, we will highlight research leveraging synergies between the bio-inspired and machine-learning approaches, and show how these synergies can take place at different abstraction levels, ranging from emerging devices to circuit, architecture, and algorithmic aspects.
June 15, 2022 (Wednesday)
[Special Session#4] 08:30 – 09:30
Memory-Centric Accelerator Design for Energy-efficient Inferencing and Training
Organizers:
Yuan Du (Nanjing University, China)
Po-Tsang Huang (National Yang Ming Chiao Tung University, Taiwan)
Abstract
Machine Learning (ML) algorithms are widely used in feature classification, recommender systems, and image recognition, where vector-based, matrix-based, CNN-based, and search-based computing, as well as K-means clustering, are typically intensive workloads for both edge devices and data-center servers. The power consumption, bandwidth, and latency of memory access have become the leading limiting factors for overall system performance. Therefore, we will discuss memory-centric accelerator design for energy-efficient inferencing and training in different Domain-Specific Accelerators (DSAs), with particular emphasis on the design of memory-access-optimized von Neumann accelerators and Computation-in-Memory/Processing-in-Memory (CiM/PiM) accelerators.
[Special Session#5] 08:30 – 09:30
Performance-Power Scalable AI-Accelerator Design Techniques
Organizer:
Ching-Hwa Cheng (Feng Chia University, Taiwan)
Abstract
Power and performance are the major constraints on mobile (end) devices for AI systems. This special session addresses this emerging topic with performance-power scalable AI-accelerator design techniques, spanning algorithms, systems, microarchitectures, circuits, and chips, all of which are particularly promising in solving the current challenges of artificial intelligence in circuits and systems.
[Special Session#6] 16:20 – 18:05
AI Challenges in Biomedical Engineering
Organizer:
Youngjoo Lee (POSTECH, Korea)
Abstract
This special session includes several recent studies on improving the quality of biomedical engineering through advanced machine-learning techniques. From medical imaging to diagnosis applications, we will present multidisciplinary optimization approaches including domain-specific knowledge, neural network designs, hardware-level implementations, and even clinical demonstrations. We will also investigate the next challenges in AI-based biomedical applications, which are expected to eventually change our healthcare solutions.
[Special Session#7] 16:20 – 17:50
Low Power Autonomous Systems
Organizer:
Tinoosh Mohsenin (University of Maryland Baltimore County, USA)
Abstract
Artificial intelligence (AI) and robotics technology continue to play a major role in enabling future smart cities, transportation, surveillance, logistics, smart sensing, home health care, and medical technologies. These fields rely on large data-driven approaches and highly computation-intensive algorithms that tend to run in the cloud and data centers. However, local processing on an embedded device is often required to provide low latency and reduce dependence on the communication link, for bandwidth and privacy/security reasons. Processing locally on the device is nevertheless very challenging due to its limited memory storage and battery capacity. This special session brings together renowned researchers to present state-of-the-art computing methods based on cross-layer design approaches in algorithms, architecture, hardware, and system integration, which will enable micro-intelligent systems to perform on-device sensor data analytics and various autonomous and AI tasks at extremely low power. One important design opportunity is efficient low-cost on-device training, which allows an autonomous system to swiftly adapt to new tasks without running traditional computation-intensive training. Another prevailing research interest is hardware-aware and automated model design, which aims at searching for AI models that can run on embedded devices while meeting strict resource constraints and performance requirements. Also, dedicated hardware accelerators and device development, such as FPGA accelerators and processing-in-memory devices, are essential and attract intensive research interest. Finally, security and robustness for autonomous systems are of growing importance and pose challenges that require novel defense algorithms and hardware support. Speakers will discuss challenges and new opportunities brought about by their unique approaches, and highlight their perspectives on the best directions for future work in the field.
[Special Session#8] 16:20 – 17:50
Security and Privacy in Deployment of Deep Neural Networks
Organizers:
Chip Hong Chang (Nanyang Technological University, Singapore)
Yue Zheng (Nanyang Technological University, Singapore)
Abstract
This special session provides a forum to present and discuss potential security and privacy issues and their countermeasures in the deployment of deep neural networks (DNNs). DNN models can be deployed on either cloud or edge platforms. The heterogeneity and shared resources of cloud environments, as well as the resource and power constraints of edge devices, make deployed DNN models vulnerable to both security and privacy attacks. Trade-offs among the performance, accuracy, and robustness of deep learning models must be optimized from different perspectives, and these optimization strategies constantly evolve with progress in hardware acceleration and data analytics technologies. It turns out that technologies traditionally used to attack trained DNN models can also be ingeniously combined and turned into powerful defense mechanisms. Each talk solicited in this session focuses on an emerging topic pertaining to either an attack on or a defense strategy for the deployment of DNN models. Specifically, an analysis of the accuracy-robustness trade-off in neural network compression, a survey of side-channel approaches to reverse-engineering neural models, novel DNN attacks including remote power-analysis model-extraction attacks and dynamic backdoor attacks, as well as state-of-the-art buyer-traceable and proactive DNN IP protection methods will be presented.