TY - JOUR
T1 - F1
T2 - 2021 IEEE International Solid-State Circuits Conference, ISSCC 2021
AU - Lim, Suk Hwan
AU - Liu, Yong Pan
AU - Benini, Luca
AU - Karnik, Tanay
AU - Chang, Hsie Chia
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/2/13
Y1 - 2021/2/13
N2 - The forum provides a comprehensive full-stack (hardware and software) view of ML acceleration from cloud to edge. The first talk focuses on the main design and benchmarking challenges facing large general-purpose accelerators, including multi-die scaling, and describes strategies for conducting relevant research as the complexity gap between research prototypes and products continues to widen. The second talk looks at how to leverage and specialize the open-source RISC-V ISA for edge ML, exploring the trade-offs between different forms of acceleration such as lightweight ISA extensions and tightly-coupled memory accelerators. The third talk details an approach based on a practical unified architecture for ML that can be easily 'tailored' to fit different scenarios, ranging from smart watches, smartphones, and autonomous cars to the intelligent cloud. The fourth talk explores the co-design of hardware and DNN models to achieve state-of-the-art performance for real-time, extremely energy/throughput-constrained inference applications. The fifth talk deals with ML on reconfigurable logic, discussing many examples of specializations implemented on FPGAs and their impact on potential applications, flexibility, performance, and efficiency. The sixth talk describes the software complexities of enabling ML APIs for various types of specialized hardware accelerators (GPUs and TPUs, including the Edge TPU). The seventh talk looks into how to optimize the training process for sparse and low-precision network models for general platforms as well as next-generation memristor-based ML engines.
AB - The forum provides a comprehensive full-stack (hardware and software) view of ML acceleration from cloud to edge. The first talk focuses on the main design and benchmarking challenges facing large general-purpose accelerators, including multi-die scaling, and describes strategies for conducting relevant research as the complexity gap between research prototypes and products continues to widen. The second talk looks at how to leverage and specialize the open-source RISC-V ISA for edge ML, exploring the trade-offs between different forms of acceleration such as lightweight ISA extensions and tightly-coupled memory accelerators. The third talk details an approach based on a practical unified architecture for ML that can be easily 'tailored' to fit different scenarios, ranging from smart watches, smartphones, and autonomous cars to the intelligent cloud. The fourth talk explores the co-design of hardware and DNN models to achieve state-of-the-art performance for real-time, extremely energy/throughput-constrained inference applications. The fifth talk deals with ML on reconfigurable logic, discussing many examples of specializations implemented on FPGAs and their impact on potential applications, flexibility, performance, and efficiency. The sixth talk describes the software complexities of enabling ML APIs for various types of specialized hardware accelerators (GPUs and TPUs, including the Edge TPU). The seventh talk looks into how to optimize the training process for sparse and low-precision network models for general platforms as well as next-generation memristor-based ML engines.
UR - http://www.scopus.com/inward/record.url?scp=85102338473&partnerID=8YFLogxK
U2 - 10.1109/ISSCC42613.2021.9365804
DO - 10.1109/ISSCC42613.2021.9365804
M3 - Editorial
AN - SCOPUS:85102338473
SN - 0193-6530
VL - 64
SP - 513
EP - 516
JO - Digest of Technical Papers - IEEE International Solid-State Circuits Conference
JF - Digest of Technical Papers - IEEE International Solid-State Circuits Conference
M1 - 9365804
Y2 - 13 February 2021 through 22 February 2021
ER -