Stanford Seminar - Neural Networks on Chip Design from the User Perspective
Description
To apply neural networks to different applications, various customized hardware architectures have been proposed in the past few years to boost the energy efficiency of deep learning inference processing. Meanwhile, the possibility of adopting emerging NVM (Non-Volatile Memory) technology for efficient learning systems, i.e., in-memory computing, is also attractive to both academia and industry. We will briefly review our past efforts on Deep-learning Processing Unit (DPU) design on FPGAs at Tsinghua and DeePhi, and then discuss some features, e.g., interrupts and virtualization, that we are trying to introduce into the accelerators from the user's perspective. Furthermore, we will also discuss reliability and security challenges in NN accelerators on both FPGA and NVM, and some preliminary solutions.
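To give a sense of the in-memory-computing idea the abstract mentions: an NVM crossbar stores a weight matrix as cell conductances, and applying input voltages to the rows produces column currents that sum to a matrix-vector product. The sketch below is purely illustrative (not from the talk); the function names, the quantization scheme, and the parameter values are all assumptions meant to mimic the limited precision of analog NVM cells.

```python
# Hypothetical sketch of analog in-memory matrix-vector multiplication:
# weights are stored as a small number of discrete conductance levels
# (NVM cells have limited precision), and each output "current" is the
# sum of per-cell contributions along a column.

def quantize(w, levels=16, w_max=1.0):
    """Snap a weight to one of `levels` evenly spaced conductance
    values in [-w_max, w_max], mimicking an NVM cell's precision."""
    step = 2 * w_max / (levels - 1)
    q = round((w + w_max) / step) * step - w_max
    return max(-w_max, min(w_max, q))

def crossbar_mvm(weights, inputs, levels=16):
    """Approximate y = W @ x with quantized conductances; summation
    models Kirchhoff's current law on each crossbar column."""
    g = [[quantize(w, levels) for w in row] for row in weights]
    return [sum(gij * xj for gij, xj in zip(row, inputs)) for row in g]

W = [[0.5, -0.25], [1.0, 0.75]]
x = [1.0, 2.0]
print(crossbar_mvm(W, x))  # close to, but not exactly, W @ x
```

Raising `levels` shrinks the quantization error, which is one reason multi-level-cell precision is a central concern in NVM accelerator design.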
- Type: Online Courses
- Provider: YouTube
- Pricing: Free
- Duration: 58 minutes