High-linearity tuning and evaluation method based on the program-level characteristics of analog memristors
Mapping the weights of an Artificial Neural Network (ANN) onto the resistance values of analog memristors can significantly enhance the throughput and energy efficiency of artificial intelligence (AI) applications, while also supporting AI deployment on edge devices. However, errors introduced by the program-level characteristics of memristors, including non-linear resistance switching and limited precision of resistance setting, reduce the number of configurable program levels and thereby limit the achievable numerical bit precision. These factors can become bottlenecks that degrade ANN accuracy.

In this study, we introduce a feedforward pulse scheme that enhances resistance configuration precision and increases the number of programmable levels. Fig. 1 demonstrates the linearity of a TiO2−x-based memristor programmed with our method [1]. It achieves 512 states with a nonlinearity metric of 4.95×10−3, and a nonlinearity metric of 1.91×10−5 in a 32-state configuration.

Since there is a trade-off between the number of program levels and the setting precision caused by non-linearity, we propose an evaluation method to explore the optimal conditions for configuring program levels to achieve higher ANN accuracy. Figs. 2(a) and (b) show two approaches to benchmarking ANN accuracy. Networks of varying complexity are constructed for image classification on MNIST [Fig. 2(c)], Fashion-MNIST [Fig. 2(d)], and Caltech101 [Fig. 2(e)]. Fig. 3 shows the benchmark results. ANN accuracy improves as the number of program levels increases, and even at 8-bit precision, ResNet-34 with over 20 million parameters achieves 95.5% accuracy through weight transfer. Our findings pave the way for future advances in increasing the number of resistance states, which will enable more complex AI tasks and enhance the in-memory computational capabilities required for AI edge applications.
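The abstract does not spell out how the nonlinearity metric is computed. As a hedged illustration only, the sketch below assumes one common definition: the RMS deviation of the measured conductance staircase from an ideal linear ramp between its first and last levels, normalized by the full conductance window. The conductance data here are synthetic and purely hypothetical; the paper's exact metric and device data may differ.

```python
import numpy as np

def nonlinearity_metric(conductances):
    """RMS deviation of a measured conductance staircase from an ideal
    linear ramp between its first and last levels, normalized by the
    full conductance window (one common definition; the paper's exact
    metric may differ)."""
    g = np.asarray(conductances, dtype=float)
    ideal = np.linspace(g[0], g[-1], len(g))   # ideal linear response
    window = g.max() - g.min()                 # full tuning range
    return np.sqrt(np.mean(((g - ideal) / window) ** 2))

# Hypothetical example: 32 programmed states with slight saturation
pulses = np.arange(32)
g_meas = 1e-4 * (1 - np.exp(-pulses / 40))     # synthetic, saturating curve
print(nonlinearity_metric(g_meas))
```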
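One way to reproduce the weight-transfer style of benchmark described above is to uniformly quantize a trained network's weights to a fixed number of program levels and re-evaluate its accuracy. The minimal sketch below assumes PyTorch, a hypothetical pretrained model and test loader, and a simple uniform per-tensor mapping onto the available levels; it is an illustration of the evaluation idea, not the authors' actual benchmarking code.

```python
import torch

def quantize_to_levels(w, n_levels):
    """Map a weight tensor onto n_levels uniformly spaced values spanning
    its own range, mimicking transfer onto a memristor array with a fixed
    number of program levels (uniform per-tensor mapping is an assumption)."""
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / (n_levels - 1)
    return torch.round((w - w_min) / step) * step + w_min

@torch.no_grad()
def transfer_and_evaluate(model, loader, n_levels, device="cpu"):
    """Quantize every weight in place and report classification accuracy.
    Note: modifies the model; deep-copy it first if it will be reused."""
    model = model.to(device).eval()
    for p in model.parameters():
        p.copy_(quantize_to_levels(p, n_levels))
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1)
        correct += (pred == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

# e.g. 8-bit precision corresponds to 2**8 = 256 program levels:
# acc = transfer_and_evaluate(pretrained_resnet34, test_loader, n_levels=256)
```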