Open access peer-reviewed chapter

Non-Idealities in Memristor Devices and Methods of Mitigating Them

Written By

Muhammad Ahsan Kaleem, Jack Cai, Yao-Feng Chang, Roman Genov and Amirali Amirsoleimani

Submitted: 02 October 2023 Reviewed: 08 November 2023 Published: 12 June 2024

DOI: 10.5772/intechopen.1003837

From the Edited Volume

Memristors - The Fourth Fundamental Circuit Element - Theory, Device, and Applications

Yao-Feng Chang


Abstract

One of the main issues that memristors face, like other hardware components, is non-idealities (that can arise from long-term usage, low-quality hardware, etc.). In this chapter, we discuss some ways of mitigating the effects of such non-idealities. We consider both hardware-based solutions and universal solutions that do not depend on hardware or specific types of non-idealities, specifically in the context of memristive neural networks. We compare such solutions both theoretically and empirically using simulations. We also explore the different non-idealities in depth, such as device faults, endurance, retention, and finite conductance states, considering what causes them and how they can be avoided, and present ways of simulating these non-idealities in software.

Keywords

  • memristors
  • memristor non-idealities
  • non-idealities simulation
  • deep neural networks
  • ensemble methods

1. Introduction

Memristors are electronic components that can speed up vector-matrix multiplication (VMM) operations by performing them entirely in memory, and thus have many applications such as accelerating machine learning. Figure 1a depicts the mapping of memristive VMM to artificial neural networks. One of the main problems hindering the more mainstream use of memristors is non-idealities (Figure 1b). These are phenomena that negatively affect memristor performance, so it is important to find ways to mitigate their effects. We begin this chapter by briefly describing some of the non-idealities commonly present in memristors, follow with a discussion of simulating memristor devices and circuits with non-idealities, and conclude by describing methods of mitigating the effects of non-idealities in memristors.

Figure 1.

(a) Memristive vector matrix multiplication (VMM) and its topological mapping to artificial neural networks. (b) Weight distribution shift introduced by memristor non-idealities.


2. Non-idealities in memristors

2.1 Device faults

Memristors have a low resistance state (LRS) and a high resistance state (HRS); device faults are non-idealities in which individual memristors become stuck in one of these states. A memristor stuck in the HRS is effectively an open circuit, while a memristor stuck in the LRS is effectively a short circuit.
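Stuck-at faults can be injected into a simulated conductance matrix by randomly pinning cells to the LRS or HRS conductance. A minimal numpy sketch; the function name, fault probabilities, and conductance values are illustrative assumptions, not taken from a specific device:

```python
import numpy as np

def inject_stuck_at_faults(G, p_lrs=0.05, p_hrs=0.05,
                           g_on=100e-6, g_off=1e-6, seed=0):
    """Randomly pin a fraction of devices to the LRS (short-like)
    or HRS (open-like) conductance; all other cells are untouched."""
    rng = np.random.default_rng(seed)
    G = G.copy()
    r = rng.random(G.shape)
    G[r < p_lrs] = g_on                              # stuck in the LRS
    G[(r >= p_lrs) & (r < p_lrs + p_hrs)] = g_off    # stuck in the HRS
    return G

G = np.full((4, 4), 50e-6)                  # 50 uS programmed everywhere
G_faulty = inject_stuck_at_faults(G, p_lrs=0.25, p_hrs=0.25)
```

The same mask-based approach extends to any per-cell fault model.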

2.2 Endurance

Endurance refers to wear-out: like any other hardware component, a memristor's performance degrades over time with use, that is, with repeated SET and RESET cycles.

2.3 Random telegraph noise

Random telegraph noise refers to the phenomenon of sudden, random fluctuations in the properties of electronic devices. In the context of memristors, this random fluctuation can affect the resistance, thus resulting in a change in the conductance state of the memristor and causing non-ideal behavior to be exhibited.
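RTN is often modeled as a two-state (telegraph) process in which a trap randomly toggles the device conductance between two levels. A simple sketch, assuming memoryless switching with a fixed per-step probability; all parameter values are illustrative:

```python
import numpy as np

def telegraph_noise(n_steps, p_switch=0.1, delta_g=2e-6, seed=0):
    """Two-state random telegraph signal: the conductance offset
    toggles between 0 and delta_g with probability p_switch per step."""
    rng = np.random.default_rng(seed)
    state = 0
    trace = np.empty(n_steps)
    for t in range(n_steps):
        if rng.random() < p_switch:   # memoryless switching between trap states
            state = 1 - state
        trace[t] = state * delta_g
    return trace

trace = telegraph_noise(1000)
g_read = 50e-6 + trace   # conductance seen during repeated reads of one device
```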

2.4 Retention

Memristors have a finite retention time, meaning that they eventually begin to lose the information that they have stored over time. Retention refers to the non-ideal behavior that arises due to this phenomenon.
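Retention loss is commonly modeled phenomenologically; for PCM-like devices, a power-law conductance drift G(t) = G0 (t/t0)^(-nu) is one typical choice. A sketch with an assumed drift exponent (the parameter values are illustrative):

```python
def drifted_conductance(g0, t, t0=1.0, nu=0.05):
    """Power-law conductance drift G(t) = G0 * (t / t0)^(-nu),
    a common phenomenological retention model for PCM-like devices."""
    return g0 * (t / t0) ** (-nu)

g0 = 50e-6
g_later = drifted_conductance(g0, t=1e4)   # conductance after long storage
```

Sweeping `t` over decades of time reproduces the characteristic slow, log-linear decay of the stored state.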

2.5 Finite conductance states

Ideally, memristors are assumed to have an unlimited number of conductance states. In reality, only a finite number of states are available. This causes non-ideal behavior because a memristor cannot be set to an arbitrary target conductance value; instead, the target must be rounded to the nearest conductance level the device supports.
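This effect can be emulated in software by quantizing target conductances to a fixed grid of levels. A sketch, with the number of states and the conductance range chosen arbitrarily:

```python
import numpy as np

def quantize_conductance(g_target, n_states=32, g_min=1e-6, g_max=100e-6):
    """Round each target conductance to the nearest of n_states
    evenly spaced levels between g_min and g_max."""
    levels = np.linspace(g_min, g_max, n_states)
    idx = np.abs(g_target[..., None] - levels).argmin(axis=-1)
    return levels[idx]

g = np.array([13.7e-6, 55.2e-6, 99.9e-6])   # ideal target conductances
g_q = quantize_conductance(g)               # nearest supported levels
```

Real devices often have non-uniformly spaced levels; the same nearest-level lookup applies with a measured `levels` array.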

2.6 Device-to-device and cycle-to-cycle variability

Overall, different devices exhibit various levels of non-idealities. These non-idealities can be viewed holistically as device-to-device (D2D) variability or cycle-to-cycle (C2C) variability. The coefficient of variation (Cv) is defined as the ratio of the standard deviation of the SET/RESET voltages to their mean, with typical values ranging from less than 1% to around 6% for C2C SET/RESET variability, and up to 35% reported for D2D SET/RESET variability [1, 2, 3, 4]. For example, Kumar et al. [4] reported 0.2%/1.07% (C2C) and 2.64%/10.13% (D2D) SET/RESET variability for an Al/Y2O3/GZO memristor on a 30 × 25 array.
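Computing Cv from measured switching voltages is straightforward; the voltage samples below are hypothetical:

```python
import numpy as np

def coefficient_of_variation(voltages):
    """Cv = standard deviation / mean, reported as a percentage."""
    v = np.asarray(voltages, dtype=float)
    return 100.0 * v.std() / v.mean()

set_voltages = np.array([1.02, 0.98, 1.01, 0.99, 1.00])  # hypothetical C2C SET voltages
cv = coefficient_of_variation(set_voltages)              # about 1.4 percent
```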


3. Simulation of memristor devices and circuits with non-idealities

3.1 Overview

While enabling exciting possibilities in various fields, memristors exhibit non-ideal behaviors that can significantly impact their performance and reliability in practical applications. Therefore, it is crucial to accurately model and simulate memristor devices and circuits, taking these non-idealities into account. In this section, we delve into the complexities of simulating memristor behavior, with a particular focus on addressing non-idealities. Starting with a single memristor device, we can explore the intricacies of its operation and the influence of non-idealities on its behavior. Subsequently, we can scale up our simulations to encompass entire memristor-based systems, providing insights into how non-idealities propagate through interconnected components and affect the overall performance of complex applications.

3.2 Simulation of memristor device

Simulating memristor devices is a fundamental aspect of memristor research. To achieve accurate simulations, one must consider the various non-ideal behaviors that can arise in memristor devices. These non-idealities encompass factors such as variability in resistance states, endurance issues, and parasitic effects. Mathematical models, which include non-linear differential equations and piecewise linear approximations, are commonly used to describe memristor behavior. In 1971, Chua predicted the flux-controlled memristor and the charge-controlled memristor in his work "Memristor – The Missing Circuit Element" [5]; they are described by the following relationships:

$$i(t) = W(\varphi(t))\,v(t), \qquad W(\varphi) = \frac{dq}{d\varphi} \tag{E1}$$

Flux-controlled memristor.

$$v(t) = M(q(t))\,i(t), \qquad M(q) = \frac{d\varphi}{dq} \tag{E2}$$

Charge-controlled memristor.

These differential equations describe the general behavior of a memristor, but many details are missing because a real physical device is more complex. Real memristor devices often exhibit current-gated or voltage-gated behavior, in which voltages and currents below a certain threshold do not change the memristance of the device. To simulate this behavior, more complex mathematical models are used. Notably, the ThrEshold Adaptive Memristor (TEAM) model [6] and the Voltage TEAM (VTEAM) model [7] are widely used in SPICE and custom circuit simulations. These models provide insight into the underlying physical mechanisms and allow extrapolation of device behavior to untested conditions.
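The gated behavior can be sketched as a VTEAM-style state update in which the internal state variable only moves when the applied voltage exceeds the ON/OFF thresholds. The parameter values below are illustrative, not fitted to any device:

```python
import numpy as np

def vteam_step(w, v, dt, v_on=-0.3, v_off=0.3, k_on=-10.0, k_off=10.0,
               alpha_on=3, alpha_off=3, w_min=0.0, w_max=1.0):
    """One Euler step of a VTEAM-style state update: the internal
    state variable w moves only when v exceeds a threshold voltage."""
    if v < v_on:                        # below negative threshold: drive toward ON
        dw = k_on * (v / v_on - 1) ** alpha_on
    elif v > v_off:                     # above positive threshold: drive toward OFF
        dw = k_off * (v / v_off - 1) ** alpha_off
    else:                               # sub-threshold: state is frozen
        dw = 0.0
    return float(np.clip(w + dw * dt, w_min, w_max))

w = vteam_step(0.5, 0.1, 1e-3)   # sub-threshold read leaves the state unchanged
```

A conductance map, e.g. linear interpolation between g_off and g_on as a function of `w`, then converts the state variable into the device conductance.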

Apart from analytical models, phenomenological models are hardware-calibrated and meant to capture the nuances of memristor behavior in practical applications. The phase-change memory (PCM) model by Nandakumar et al. [8] was calibrated on 10,000 PCM devices, and the metal-oxide memristor model by Nili et al. [9] was calibrated on 324 devices at six different temperatures. These models provide accurate, polynomial-time interpolation of device non-idealities, which is ideal for large-scale integration and security applications. In Figure 2, we show the voltage-current relationship of 200 metal-oxide memristor devices initialized at 65 μS, simulated with the Python implementation of Nili's model.

Figure 2.

Voltage-current relationship of 200 metal-oxide memristor devices initialized at 65 μS, operating at 100 MHz and 85°C. Variation is due to device-to-device variability, non-linearity, and operating noise.

3.3 Simulation of memristor system

Memristors are not used in isolation but are often integrated into more extensive systems and circuits. These systems may include neuromorphic computing architectures, non-volatile memory arrays, or analog computing devices. To gain a comprehensive understanding of how memristors behave in practical applications, it is essential to simulate entire memristor-based systems.

Simulating memristor systems involves not only modeling individual memristor devices but also integrating these models with other circuit components. In general, memristors are arranged in a crossbar fashion to perform vector-matrix multiplications (VMM). Fully passive (0T1R) and 1-Transistor-1-Resistor (1T1R) schemes are commonly used architectures, with trade-offs in circuit complexity, energy efficiency, and non-idealities. As shown in Figure 3, in the 0T1R arrangement memristors are sandwiched between word lines and bit lines, whereas the 1T1R arrangement adds select lines that allow specific memristors to be addressed.
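In the ideal case (no line resistance), the crossbar VMM reduces to a single matrix product: word-line voltages times the conductance matrix give the bit-line currents, by Ohm's law and Kirchhoff's current law. The values below are illustrative:

```python
import numpy as np

# Conductance matrix: rows correspond to word lines, columns to bit lines.
G = np.array([[50e-6, 10e-6],
              [20e-6, 80e-6]])

v = np.array([0.2, 0.1])    # read voltages applied to the word lines

# Each bit line sums the currents of its column: i_j = sum_i v_i * G_ij.
i_out = v @ G               # bit-line currents (the VMM result)
```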

Figure 3.

Illustration of (a) 0T1R crossbar architecture and (b) 1T1R crossbar architecture. VMM can be performed with voltage encoding fed in from the word lines (WLs) and output read from the bit lines (BLs). In the 1T1R arrangement, individual devices can be selected using the select lines (SLs).

Compared to the 1T1R arrangement, the 0T1R arrangement is more susceptible to line resistance, which refers to the resistance introduced by the crossbar word lines and bit lines. Its direct impact is voltage drop, as illustrated in Figure 4. To simulate this behavior, Chen proposed an iterative matrix equation solution [10] that is compatible with devices with non-linear I-V relationships. Including line resistance is important, as it may dominate the non-idealities at large-scale integration. To illustrate this, we implemented Nili's device model with Chen's crossbar line resistance model. In Figure 4, we show the impact of line resistance on a 128 × 128 0T1R memristor crossbar under (1) voltage applied from a single side, (2) voltage applied from both sides, and (3) tiling into 16 × 16 arrays.

Figure 4.

Voltage drop due to line resistance on a 64 × 64 0T1R crossbar initialized with 65 μS mean conductance, with 20 Ω line resistance. (a) Cell voltage when input is delivered from a single side of the word lines. (b) Cell voltage when input is delivered from both sides of the word lines. (c) Cell voltage in tiled 16 × 16 arrays with single-sided word line voltage delivery.

3.4 Applications in deep learning

One of the most computationally demanding uses of memristor simulation is deep learning. Unlike other applications, deep learning models contain millions of parameters represented by the conductance states of the memristors; therefore, simulation fidelity, speed, and memory consumption are all important metrics. Instead of creating a simulation framework from scratch, researchers can use established open-source frameworks.

The IBM Analog Hardware Acceleration Kit (IBM AIHWKit) [11] is an open-source Python/C++/CUDA toolkit designed to simulate in-memory computing devices with compatibility with the PyTorch ML framework. IBM AIHWKit supports on-chip training with in-memory gradient computation and inference with real device non-idealities based on statistical models and circuit non-idealities models.

MemTorch [12] is a toolkit similar to IBM AIHWKit, with a simpler VTEAM device model and 1T1R and 0T1R tiled memristor arrays. MemTorch supports simulation of numerous device and circuit non-idealities, including device-to-device variability, finite conductance states, and failure toward high/low resistance states. In contrast to IBM AIHWKit, which aims to provide an accurate simulation, MemTorch allows the exploration of noise levels beyond the usual operating range. In later sections, we evaluate several proposed non-ideality mitigation methods using the MemTorch library to show robustness under hypothetical scenarios.
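Beyond full frameworks, a quick first-order experiment is to inject device-to-device variability directly into a conductance matrix, for example as multiplicative lognormal noise. This is a common phenomenological choice, not a specific framework's model, and the sigma value here is arbitrary:

```python
import numpy as np

def add_d2d_variability(G, sigma=0.1, seed=0):
    """Multiply each programmed conductance by lognormal noise;
    sigma is the standard deviation in log space."""
    rng = np.random.default_rng(seed)
    return G * rng.lognormal(mean=0.0, sigma=sigma, size=G.shape)

G = np.full((128, 128), 65e-6)          # uniformly programmed crossbar
G_noisy = add_d2d_variability(G)        # each device deviates independently
```

The same one-liner can perturb a weight matrix before mapping it to hardware, giving a cheap estimate of accuracy degradation before running a full simulator.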

NeuroSim [13] is another framework for memristive deep neural network simulation built on C++. Compared to the aforementioned toolkits, NeuroSim provides simulation validated through real silicon devices and accurate power and footprint analysis. The framework is interfaced with the PyTorch library through DNN + NeuroSim [14], enabling an end-to-end benchmark framework for memristive deep neural network simulation. We provide a comparison of the three frameworks in Table 1.

| Framework | IBM AIHWKit [11] (2021) | MemTorch [12] (2020) | DNN + NeuroSim [14] (2020) |
|---|---|---|---|
| Prog. lang. | Python, C++, CUDA | Python, C++, CUDA | Python, C++ |
| ML framework | PyTorch | PyTorch | PyTorch, TensorFlow |
| Device model | Statistical | Analytical | Statistical |
| In-situ training | ✓ | | ✓ |
| In-situ grad. comp. | ✓ | | |
| Perf. estimation | | | ✓ |

Table 1.

Comparison of open-source memristive deep neural network simulation frameworks.


4. Methods to mitigate memristor non-idealities

There are two classes of methods that aim to mitigate the effects of non-idealities in memristors. The first is the classical approach: hardware-based solutions that tackle a specific type of non-ideality for a specific type of device. The second is a relatively new but more promising approach that seeks general solutions applicable to all kinds of non-idealities. These are applicable in the context of memristive deep neural networks (MDNNs: DNNs implemented on memristor crossbars) and make use of machine learning techniques such as ensemble methods, which combine the outputs of multiple individual networks to produce a more accurate overall output [15]; see Figure 5 for a depiction of this second class of methods. In this section, we first analyze several methods of the first class, discussing their pros and cons, and then turn to the second class.

Figure 5.

Memristive deep neural network system with ensemble methods.

4.1 Some hardware-based attempts to deal with the effects of non-idealities in memristors

The first solution we will be looking at is from the 2017 paper by Xia et al., which targets the device faults non-ideality in Resistive Random Access Memory (RRAM)-based Computing Systems (RCSs) by using redundancy schemes and a modified mapping algorithm.

This paper presents a fault-tolerant framework for RCSs [16]. It introduces a mapping algorithm with inner fault tolerance (MAO) designed to convert matrix parameters into RRAM conductances, effectively managing Stuck-At-Faults (SAFs) by optimizing the mapping process. Additionally, this paper proposes two baseline redundancy schemes to enhance RCS’s effectiveness when a significant percentage of RRAM cells are faulty. To further adapt to varying SAF distributions and reduce the number of redundant RRAM cells, this paper introduces a distribution-aware redundancy scheme and a re-configurable redundancy scheme. These schemes provide dynamic fault tolerance capabilities. In essence, the paper proposes a comprehensive approach to address SAFs in RRAM Computing Systems, encompassing fault-tolerant mapping algorithms and adaptive redundancy schemes, ultimately improving the reliability and accuracy of RRAM-based computations.

The second solution we will be looking at is from the 2018 paper by Ambrogio et al., which targets the limited dynamic range non-ideality in PCM-based memristive devices by using sets of varying conductances for each synaptic weight [17]. This paper presents mixed hardware-software neural network implementations, incorporating PCM (phase-change memory) for long-term storage, volatile capacitors for near-linear updates, and weight-data transfer with "polarity inversion" to mitigate device variations.

The third solution we will be looking at is from the 2018 paper by Chai et al., which targets the random telegraph noise (RTN) non-ideality in memristive devices by adopting non-filamentary RRAM [18]. Based on their switching mechanisms, memristive devices can be categorized as either filamentary or non-filamentary. The authors find that non-filamentary RRAM devices, specifically TiO2/a-Si (a-VMCO), exhibit a more consistent RTN amplitude distribution and significantly lower RTN occurrence rates compared to their filamentary counterparts, thus suggesting that non-filamentary RRAM devices can be a good choice for mitigating the effects of the RTN non-ideality in memristive systems.

The fourth solution we will be looking at is from the 2018 paper by Fang et al., which targets the device-to-device variability non-ideality in HfOx RRAM memristive devices by introducing an ultra-thin ALD-TiN buffer layer [19]. This paper investigates the impact of oxygen-scavenging metal, specifically titanium (Ti), on the performance of HfOx-based Resistive Random Access Memory (RRAM) devices. Through experimentation, the researchers found that during the physical vapor deposition process, Ti atoms could penetrate randomly into the bulk of HfOx, leading to the creation of oxygen vacancies and subsequent performance degradation in RRAM devices. To address this issue, the paper proposes a novel structure involving the insertion of an atomic layer deposition (ALD) TiN buffer layer between the oxygen-scavenging Ti layer and the HfOx layer. This buffer layer effectively reduces the negative effects of Ti penetration and enhances the vacancy-creating process. Experimental results confirmed that incorporating the TiN buffer layer significantly improved the uniformity of switching parameters, such as forming voltage and resistance, in RRAM devices.

4.2 Ensemble methods for mitigating non-idealities

4.2.1 Ensemble Averaging

The most basic type of ensemble method is known as Ensemble Averaging (EA). In this method, the outputs of multiple neural networks are simply averaged linearly, meaning that each network is weighted the same regardless of individual performance, to produce a combined output. Each of the individual networks comprising the ensemble is trained on the complete dataset. This method can be summarized with the following equation:

$$O(x) = \frac{1}{N}\sum_{i=1}^{N} M_i(x) \tag{E3}$$

Here, $x$ is the input, $M_i(x)$ is the output of network $i$ for input $x$, $N$ is the total number of networks in the ensemble, and $O(x)$ is the output of the ensemble for input $x$.
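In code, EA is a single mean over the networks' output vectors; the class scores below are hypothetical:

```python
import numpy as np

def ensemble_average(outputs):
    """Unweighted mean of the N individual network outputs (Eq. E3)."""
    return np.mean(outputs, axis=0)

# Three hypothetical networks, each emitting class scores for one input.
outputs = np.array([[0.7, 0.3],
                    [0.6, 0.4],
                    [0.8, 0.2]])
O = ensemble_average(outputs)   # close to [0.7, 0.3]
```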

4.2.2 Generalized Ensemble Method

The Generalized Ensemble Method (GEM) is similar to EA, with the main difference being that the average is not uniform: networks are weighted differently. There is no single scheme for determining each network's weight, and the optimal scheme can differ depending on the application and other details. In our work in [21], we found a weighting scheme that worked well for the basic task of image classification on the MNIST dataset; see that paper for more details. The equation summarizing this method is as follows, with notation defined as before, the only addition being that $w_i$ is the weight of network $i$, determined according to whatever weighting scheme is used:

$$O(x) = \frac{\sum_{i=1}^{N} w_i\,M_i(x)}{\sum_{i=1}^{N} w_i} \tag{E4}$$
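GEM likewise reduces to a normalized weighted sum; the outputs and weights below are arbitrary, for illustration only:

```python
import numpy as np

def generalized_ensemble(outputs, weights):
    """Weighted mean of network outputs, normalized by the weight sum (Eq. E4)."""
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * np.asarray(outputs)).sum(axis=0) / w.sum()

outputs = np.array([[0.7, 0.3],
                    [0.6, 0.4],
                    [0.8, 0.2]])
O = generalized_ensemble(outputs, weights=[3, 1, 0])   # third network ignored
```

With all weights equal, GEM reduces exactly to EA.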

4.2.3 Voting

Unlike the previous two ensemble methods, voting is only applicable to classification tasks. The most common type of voting, known as plurality voting, is as follows. Each individual network in the ensemble independently makes a prediction from the input, and the class that is most commonly predicted is chosen as the output of the ensemble. Ties can be broken arbitrarily or using a heuristic. This method can be summarized by the following equation:

$$O(x) = c_{\arg\max_{j} \sum_{i=1}^{N} o_{ij}(x)} \tag{E5}$$

Here, $c_j$ represents the classes and $o_{ij}(x)$ represents the output of network $i$ for class $c_j$ given input $x$: it takes the value 1 if the prediction of network $i$ is $c_j$ and 0 otherwise.
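Plurality voting reduces to a count over the predicted labels; the predictions below are hypothetical:

```python
import numpy as np

def plurality_vote(predictions, n_classes):
    """The winning class is the one predicted by the most networks (Eq. E5).
    np.bincount's argmax breaks ties in favor of the lower class index."""
    counts = np.bincount(predictions, minlength=n_classes)
    return int(counts.argmax())

preds = np.array([2, 0, 2, 1, 2])              # five networks' predicted labels
winner = plurality_vote(preds, n_classes=3)    # class 2 wins with three votes
```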

4.2.4 Weighted Voting

In the same way that GEM is an extension of EA, Weighted Voting is an extension of voting: instead of each network's vote being weighted equally regardless of individual performance, the networks can be weighted differently, so that, for example, the vote of a better-performing network takes precedence over that of a poorer-performing one. There are many possible weighting schemes that can be used for this method. The following is a derivation of the theoretically optimal weighting scheme (given some assumptions) [20]. Let the outputs of the individual networks in the ensemble for an input $x$ be $\mathbf{l} = (l_1, l_2, \ldots, l_N)^T$, where $N$ is the total number of networks in the ensemble and $l_i$ is the label of the class predicted by network $i$. The following is the Bayesian optimal discriminant function for the combined output of the ensemble on the class with label $c_j$:

$$H_j(x) = \log\left[P(c_j)\,P(\mathbf{l} \mid c_j)\right] \tag{E6}$$

Now, if we make the assumption that the outputs of each network are conditionally independent, meaning that $P(\mathbf{l} \mid c_j) = \prod_{i=1}^{N} P(l_i \mid c_j)$, then the Bayesian optimal discriminant function can be simplified as follows:

$$\begin{aligned} H_j(x) &= \log P(c_j) + \log \prod_{i=1}^{N} P(l_i \mid c_j) \\ &= \log P(c_j) + \log\!\left[\prod_{i:\,l_i = c_j} P(l_i \mid c_j) \prod_{i:\,l_i \neq c_j} P(l_i \mid c_j)\right] \\ &= \log P(c_j) + \log\!\left[\prod_{i:\,l_i = c_j} p_i \prod_{i:\,l_i \neq c_j} (1 - p_i)\right] \\ &= \log P(c_j) + \sum_{i:\,l_i = c_j} \log\frac{p_i}{1 - p_i} + \sum_{i=1}^{N} \log(1 - p_i) \end{aligned} \tag{E7}$$

Here, $p_i$ denotes the accuracy of network $i$. Note that the final term in the simplified expression does not depend on $c_j$ and can therefore be ignored. Moreover, the first term does not depend on the individual networks, so from the remaining term we see that the theoretically optimal weighting scheme for Weighted Voting satisfies the following:

$$w_i \propto \log\frac{p_i}{1 - p_i} \tag{E8}$$

In practice, the assumptions made to derive this result (that the outputs of the networks are conditionally independent and that the ground-truth accuracies of the networks are known) do not hold, so other weighting schemes can result in a greater improvement in performance depending on the task at hand; the reader is referred to our work in [21] for more details in the context of handwritten digit recognition on the MNIST dataset.
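Despite these caveats, the weighting scheme of (E8) is easy to apply. In the hypothetical example below, one high-accuracy network outvotes two weaker ones that disagree with it:

```python
import numpy as np

def weighted_vote(predictions, accuracies, n_classes):
    """Weight each network's vote by w_i = log(p_i / (1 - p_i)) (Eq. E8)
    and pick the class with the largest total weight."""
    p = np.asarray(accuracies, dtype=float)
    w = np.log(p / (1.0 - p))
    scores = np.zeros(n_classes)
    for pred, wi in zip(predictions, w):
        scores[pred] += wi
    return int(scores.argmax())

preds = [0, 1, 1]                # two weak networks vote for class 1
accs = [0.95, 0.6, 0.6]          # but the class-0 voter is far more accurate
winner = weighted_vote(preds, accs, n_classes=2)   # class 0 wins
```

Note that plurality voting on the same predictions would have returned class 1.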

4.2.5 Ensemble methods: summary

In summary, we introduced four ensemble methods that mitigate MDNN non-idealities; the four techniques are summarized in Figure 6. We evaluated the performance of ensemble methods on the handwritten digit recognition task with the MNIST dataset, with results gathered over 10 trials; each ensemble method combined outputs from five different experts [21]. As a brief summary of our work, Figure 7 demonstrates the performance of the ensemble method compared to ideal and non-ideal MDNNs under device fault, endurance, retention, and endurance and retention combined non-idealities. Figure 8 compares MDNN accuracy against the baseline and across different ensemble methods. The reader is again referred to our work in [21] for detailed implementation of these ensemble methods.

Figure 6.

A summary of ensemble methods techniques: four different ways to combine outputs of multiple experts introduced in Section 4.2.

Figure 7.

Performance of memristive deep neural networks (MDNN) for the MNIST handwritten digit recognition dataset under various non-idealities: device fault, endurance, retention, and endurance & retention combined. Distribution gathered from 10 trials. Ensemble methods (EM) combined outputs from five experts.

Figure 8.

Performance comparison of four ensemble methods techniques and baseline ideal/non-ideal memristive deep neural networks (MDNNs) for the MNIST handwritten digit recognition dataset. Distribution gathered from 10 trials. Ensemble methods (EM) combined outputs from five experts.


5. Conclusions

In conclusion, this chapter has explored various classes of memristor non-idealities, ways to simulate them, and hardware and universal ways to address them. By conducting simulations, we assessed the effects of diverse non-idealities and proposed strategies for their mitigation with a focus on memristive deep neural networks. This chapter furnishes readers with practical guidance for mitigating non-idealities, facilitating the design of more robust, more efficient memristor circuits and systems.

References

  1. Li S et al. Wafer-scale 2D hafnium diselenide based memristor crossbar array for energy-efficient neural network hardware. Advanced Materials. 2021;34(25):2103376. DOI: 10.1002/adma.202103376 [Accessed: November 5, 2023]
  2. Chen S et al. Wafer-scale integration of two-dimensional materials in high-density memristive crossbar arrays for artificial neural networks. Nature Electronics. 2020;3(10):638-645. DOI: 10.1038/s41928-020-00473-w [Accessed: November 5, 2023]
  3. Kumar S, Gautam MK, Yadav S, Mukherjee S. Memcapacitive to memristive transition in Al/Y2O3/GZO crossbar array. IEEE Transactions on Electron Devices. 2023;70(6):3341-3346. DOI: 10.1109/TED.2023.3265622 [Accessed: November 5, 2023]
  4. Kumar S, Agarwal A, Mukherjee S. Electrical performance of large-area Y2O3 memristive crossbar array with ultralow C2C variability. IEEE Transactions on Electron Devices. 2022;69(7):3660-3666. DOI: 10.1109/TED.2022.3172400 [Accessed: November 5, 2023]
  5. Chua L. Memristor-the missing circuit element. IEEE Transactions on Circuit Theory. 1971;18(5):507-519. DOI: 10.1109/TCT.1971.1083337 [Accessed: September 29, 2023]
  6. Kvatinsky S, Friedman EG, Kolodny A, Weiser UC. TEAM: ThrEshold adaptive memristor model. IEEE Transactions on Circuits and Systems I: Regular Papers. 2013;60(1):211-221. DOI: 10.1109/TCSI.2012.2215714 [Accessed: September 29, 2023]
  7. Kvatinsky S, Ramadan M, Friedman EG, Kolodny A. VTEAM: A general model for voltage-controlled memristors. IEEE Transactions on Circuits and Systems II: Express Briefs. 2015;62(8):786-790. DOI: 10.1109/TCSII.2015.2433536 [Accessed: September 29, 2023]
  8. Nandakumar SR et al. Phase-change memory models for deep learning training and inference. In: 2019 26th IEEE International Conference on Electronics, Circuits and Systems (ICECS). Genoa, Italy: IEEE; 2019. pp. 727-730. DOI: 10.1109/ICECS46596.2019.8964852 [Accessed: September 29, 2023]
  9. Nili H, Vincent AF, Prezioso M, Mahmoodi MR, Kataeva I, Strukov DB. Comprehensive compact phenomenological modeling of integrated metal-oxide memristors. IEEE Transactions on Nanotechnology. 2020;19:344-349. DOI: 10.1109/TNANO.2020.2982128 [Accessed: September 29, 2023]
  10. Chen A. A comprehensive crossbar array model with solutions for line resistance and nonlinear device characteristics. IEEE Transactions on Electron Devices. 2013;60(4):1318-1326. DOI: 10.1109/TED.2013.2246791 [Accessed: September 29, 2023]
  11. Rasch MJ et al. A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays. In: 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS). Washington, DC, USA: IEEE; 2021. pp. 1-4. DOI: 10.1109/AICAS51828.2021.9458494 [Accessed: September 29, 2023]
  12. Lammie C, Azghadi MR. MemTorch: A simulation framework for deep memristive crossbar architectures. In: 2020 IEEE International Symposium on Circuits and Systems (ISCAS). Seville, Spain: IEEE; 2020. pp. 1-5. DOI: 10.1109/ISCAS45731.2020.9180810 [Accessed: September 29, 2023]
  13. Lu A, Peng X, Li W, Jiang H, Yu S. NeuroSim simulator for compute-in-memory hardware accelerator: Validation and benchmark. Frontiers in Artificial Intelligence. 2021;4:659060. DOI: 10.3389/frai.2021.659060 [Accessed: September 29, 2023]
  14. Peng X, Huang S, Jiang H, Lu A, Yu S. DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-chip Training. 2020. Available from: https://arxiv.org/abs/2003.06471 [Accessed: September 29, 2023]
  15. Joksas D, Freitas P, Chai Z, et al. Committee Machines—A Universal Method to Deal with Non-idealities in Memristor-based Neural Networks. 2020. Available from: https://www.nature.com/articles/s41467-020-18098-0 [Accessed: September 30, 2023]
  16. Xia L, Huangfu W, Tang T, Yin X. Stuck-at Fault Tolerance in RRAM Computing Systems. 2017. Available from: https://ieeexplore.ieee.org/ielaam/5503868/8330765/8119491-aam.pdf [Accessed: September 30, 2023]
  17. Ambrogio S, Narayanan P, Tsai H, et al. Equivalent-accuracy Accelerated Neural-Network Training Using Analogue Memory. 2018. Available from: https://europepmc.org/article/med/29875487 [Accessed: September 30, 2023]
  18. Chai Z, Freitas P, Zhang W, Hatem F, Fu Zhang J, Marsland J, et al. Impact of RTN on Pattern Recognition Accuracy of RRAM-Based Synaptic Neural Network. 2018. Available from: https://www.researchgate.net/publication/327484337 [Accessed: September 30, 2023]
  19. Fang Y et al. Improvement of HfOx-Based RRAM Device Variation by Inserting ALD TiN Buffer Layer. 2018. Available from: https://ieeexplore.ieee.org/document/8352781 [Accessed: September 30, 2023]
  20. Zhou Z-H. Ensemble Methods: Foundations and Algorithms. Boca Raton: CRC Press; 2012 [Accessed: September 30, 2023]
  21. Kaleem MA, Cai J, Amirsoleimani A, Genov R. A Survey of Ensemble Methods for Mitigating Memristive Neural Network Non-idealities. 2023. Available from: https://ieeexplore.ieee.org/document/10181553 [Accessed: September 30, 2023]
