Neuromorphic Devices and Materials for Efficient AI Computation
Prof. Seyoung Kim
Pohang University of Science and Technology (POSTECH)

Advances in artificial neural networks and big data analytics have begun to deliver impressive cognitive capabilities across a range of AI applications. However, training large neural networks on conventional computers is a computationally intensive task that requires datacenter-scale hardware resources. Recently, cross-point arrays of novel resistive memories have been proposed as an alternative computing paradigm to accelerate the matrix operations at the core of neural network workloads. Studies have shown that significant acceleration is achievable by exploiting the massive parallelism of such analog accelerators. To date, a variety of nonvolatile memory devices have been studied as synaptic elements and used to build neural network prototypes. While rapid progress is being made, non-ideal switching characteristics of these devices, including asymmetric weight updates, cycle-to-cycle and device-to-device variations, and stochasticity, must still be overcome to realize the promised acceleration for AI applications. In this talk, I will review recent progress and efforts toward ideal synaptic device characteristics for novel neuromorphic architectures, as exemplified by our recent experimental results on resistive memory devices, a capacitor-based approach, and 3-terminal ionic switching devices.
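
To make the cross-point computing idea concrete, the following is a minimal NumPy sketch of how such an array performs a matrix-vector multiply (row voltages in, column currents out, per Ohm's and Kirchhoff's laws) and of an asymmetric conductance update. All parameters here (array size, conductance range, read voltages, step sizes) are illustrative assumptions, not values from the talk.

    import numpy as np

    # Illustrative parameters only (assumed, not from the talk).
    rows, cols = 4, 3
    g_min, g_max = 1e-6, 1e-4    # device conductance range in siemens (assumed)

    rng = np.random.default_rng(0)
    G = rng.uniform(g_min, g_max, size=(rows, cols))  # conductances = weights

    def crosspoint_mvm(G, v):
        # Analog matrix-vector multiply: applying voltages v to the rows
        # makes each column collect I_j = sum_i G[i, j] * v[i]
        # (Ohm's law + Kirchhoff's current law).
        # Hardware does this in one parallel step; NumPy only emulates it.
        return G.T @ v

    def asymmetric_update(G, sign, dg_plus=2e-6, dg_minus=1e-6):
        # Non-ideal weight update: potentiation (+) and depression (-)
        # change the conductance by different amounts, one of the key
        # non-idealities discussed in the abstract.
        step = np.where(sign > 0, dg_plus, -dg_minus)
        return np.clip(G + step, g_min, g_max)

    v = rng.uniform(0.0, 0.2, size=rows)   # read voltages (assumed range)
    currents = crosspoint_mvm(G, v)        # one emulated analog forward pass
    G = asymmetric_update(G, sign=np.sign(rng.standard_normal(G.shape)))

In hardware, the multiply-accumulate occurs across the entire array in a single step, which is the source of the acceleration; the asymmetric update modeled above is the kind of non-ideality that distorts gradient-based training and motivates the capacitor-based and 3-terminal approaches mentioned in the talk.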