Energy Efficient In-memory Computing with Quantized Magnetic Domain Wall Racetrack Devices
ORAL
Abstract
Traditional von Neumann computing for data-intensive classification tasks with deep neural networks (DNNs) consumes a significant amount of power and incurs large latency [1]. In-memory computing reduces the physical separation between memory and computational units by arranging computational memory elements in a crossbar array, obviating the need to shuttle data between them. Magnetic racetrack memory with a single domain wall (DW) can be actuated with spin-orbit torque current and allows for efficient vector-matrix multiplication in DNN architectures. However, in practical scenarios the response of racetrack memory devices is stochastic and of low resolution, which can degrade DNN accuracy. We have previously shown that these issues can be mitigated with quantized neural network (QNN) learning [2]. In this study, we will experimentally demonstrate DW motion control in a ferromagnetic racetrack, where different DW positions correspond to different memory states, and report a QNN implementation with such racetrack-memory-based synapses.
[1] H.-S. P. Wong et al., Nat. Nanotechnol. 10, 191 (2015)
[2] W. A. Misba et al., IEEE Access 10, 84946 (2022)
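As a rough illustration of the crossbar vector-matrix multiplication with low-resolution, stochastic synapses described in the abstract, the Python sketch below quantizes a weight matrix to a few discrete conductance levels (a stand-in for discrete DW positions) and adds random perturbations before computing the product, as a crossbar array would. The level count, noise model, and layer sizes are illustrative assumptions, not measured device characteristics.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_weights(w, n_levels=8):
    """Map continuous weights to n_levels discrete conductance states
    (stand-in for discrete DW positions along the racetrack)."""
    levels = np.linspace(w.min(), w.max(), n_levels)
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)
    return levels[idx]

def crossbar_vmm(x, w_quant, noise_std=0.02):
    """Vector-matrix multiply as a crossbar performs it: each output is the
    sum of inputs weighted by (stochastically perturbed) conductances."""
    g = w_quant + rng.normal(0.0, noise_std, size=w_quant.shape)  # assumed device stochasticity
    return x @ g

# Hypothetical 4-input, 3-output synaptic layer
w = rng.uniform(-1, 1, size=(4, 3))
x = rng.uniform(0, 1, size=(1, 4))
print(crossbar_vmm(x, quantize_weights(w)))
```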
Presenters
- Walid Al Misba, Virginia Commonwealth University
Authors
- Walid Al Misba, Virginia Commonwealth University
- Dhritiman Bhattacharya, Georgetown University
- Christopher Jensen, Georgetown University
- Gong Chen, Georgetown University
- Daniel B Gopman, National Institute of Standards and Technology
- Damien Querlioz, Université Paris-Saclay
- Kai Liu, Georgetown University
- Jayasimha Atulasimha, Virginia Commonwealth University