Visit
Wed 02/19

Seminar @ Cornell Tech: Bongjin Kim

Memory-centric computing circuits & architectures for machine learning and optimization problems

Recently, brain-inspired neural network accelerators and neuromorphic computing hardware have gained wide attention, outperforming traditional computing hardware such as CPUs and GPUs in energy efficiency. However, existing brain-inspired computing hardware, whether analog or digital, has suffered from several challenges. Analog implementations are subject to non-idealities such as process variation and vulnerability to various environmental noise sources. They also incur significant hardware overhead due to the analog-to-digital and digital-to-analog conversions they require. Digital implementations, by contrast, are free from such non-idealities and conversion overhead, but they are typically less energy- and area-efficient than their analog counterparts. Moreover, both analog and digital brain-inspired computing hardware developed so far are limited in reconfigurability and scalability. This talk introduces novel analog and digital memory-centric computing macros, including their circuits and architectures built on embedded memories, which could potentially address many of the challenges above. The talk will also introduce memory-centric alternative computing solutions for prime factorization and combinatorial optimization problems.

Speaker Bio

Bongjin Kim received the BS and MS degrees from POSTECH, Pohang, Korea, and the PhD degree from the University of Minnesota, USA, in 2014. After his PhD, he spent two years at Rambus as a senior staff member, working on high-speed serial link circuits and micro-architectures. After a year as a postdoctoral research fellow at Stanford University, he joined Nanyang Technological University as an Assistant Professor in Sep. 2017. From 2006 to 2010, he was with Samsung Electronics, Korea, where he performed circuit research on high-speed serial links. He was also a research intern at Texas Instruments, IBM Research, and Rambus during his PhD. Dr. Kim is the recipient of a prestigious doctoral dissertation fellowship awarded during his PhD studies and of several conference awards, including an ISLPED low-power design contest award. His research has appeared, with him as first or corresponding author, in more than 20 top VLSI and circuit conferences and journals, including ISSCC, VLSI, CICC, ESSCIRC, ASSCC, and JSSC. His current research interests include memory-centric computing circuits and architectures, machine learning hardware accelerators, and alternative computing.