Combining CMOS analog spiking neurons with memory synapses, neuromorphic chips can provide the massively parallel processing and density of neural networks, offering a promising approach to brain-inspired computing. This work demonstrates a leaky integrate-and-fire neuron design that implements current integration and synapse driving in a dual-mode operation, enabling in-situ learning on a crossbar of resistive synapses with a single op-amp. The proposed design was implemented in 0.18 µm CMOS technology.
The von Neumann architecture comprises memory, a central processing unit (CPU), and an interconnection between the memory and the CPU. The separation of memory and CPU creates a bottleneck in computation, communication, and memory access, because information must be transferred from memory to the CPU over a parallel data bus. Even though von Neumann processors are capable of performing logic and computation at very high speed, they perform poorly on many tasks such as image recognition and video motion detection, fail to accommodate big-data computing workloads, and are not suitable for the implementation of massive neural networks.
In contrast, the human brain is a very energy-efficient computing system: functions such as vision, object recognition, speech recognition, and language translation demand little effort from the human brain. Modern machines can perform such tasks, but they require orders of magnitude more energy as well as specialized programming; massive parallelism is one of the reasons our brains are so effective at the above-mentioned decision-making tasks. Brain-inspired architectures perform computing tasks by communicating spikes across a large network of neurons, which are connected to one another through synapses and store memory locally in the form of synaptic strength. This mimics the biological neural cell, in which synapses receive spikes from the other connected neurons. Such spiking neural network models are closer to biological neurons in the brain: if a neuron receives current from other neurons and its membrane potential exceeds a threshold voltage, an output spike is generated and delivered to other neurons. Spike timing is therefore an explicit part of the neuron model.
II. RELATED WORKS
Compared with the von Neumann architecture, neuromorphic systems offer unique solutions. One of the first questions is which neural network model to use. The neural network model defines which components make up the network, how those components operate, and how they interact. For example, the common components of a neural network model are neurons and synapses, taking inspiration from biological neural networks.
A. Neuron Models
Neurons can receive chemical or electrical signals from other neurons. The junction between the end of one neuron's axon and the dendrite of another neuron, which allows information or signals to be transmitted between the two, is called a synapse. A generalized neuron architecture, along with IBM's TrueNorth, has been discussed, both of which use synapses and neurons as basic building blocks. The design and implementation of efficient synapses and neurons are therefore critical for an effective neuromorphic processor. Artificial neuron circuits have been implemented using different mathematical models: the Hodgkin–Huxley model, the Morris–Lecar model [6], the CMOS integrate-and-fire model, and the conductance-based silicon neuron model. Among these, the CMOS integrate-and-fire model and the conductance-based silicon neuron model are the most widely adopted, and they are the models explicitly considered in this work.
B. Synapse Models
Learning is an essential criterion for the operation of neuromorphic computing networks: the synapses adapt their weights in accordance with the learning rule. Different learning mechanisms have emerged over the years. Idongesit E. Ebong et al. highlighted two basic learning rules: winner-take-all (WTA) and spike-timing-dependent plasticity (STDP).
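As a rough illustration (a behavioral sketch, not the circuit implementation described in this work), the pair-based STDP rule can be written in a few lines: a synapse is potentiated when the presynaptic spike precedes the postsynaptic spike, and depressed otherwise, with a magnitude that decays exponentially with the spike-time difference. All parameter values below are illustrative assumptions.

```python
import math

# Pair-based STDP: the weight change depends on the spike-time difference
# dt = t_post - t_pre. Amplitudes and time constants are assumptions.
A_PLUS, A_MINUS = 0.05, 0.04      # learning-rate amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(dt_ms):
    """Weight change for a single pre/post spike pair."""
    if dt_ms >= 0:   # pre before post -> potentiation (LTP)
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    else:            # post before pre -> depression (LTD)
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)

# Causal pairing strengthens the synapse, anti-causal pairing weakens it:
print(stdp_dw(5.0) > 0)    # True
print(stdp_dw(-5.0) < 0)   # True
```

In a memristive crossbar, this weight change would correspond to a conductance update driven by the overlap of pre- and post-synaptic pulses across the device.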
III. LIF NEURON STRUCTURE
Fig. 1 shows the block diagram of the CMOS neuron, inspired by the model. It consists of an IFN circuit, a comparator, a phase controller, and an STDP-compatible spike generator. The IFN circuit is designed to generate spikes that match the spiking behavior of biological neurons: it integrates the input synaptic currents and produces the membrane potential Vmem, which is compared with the threshold voltage Vth. Whenever Vmem exceeds the threshold, the comparator asserts the Vcom signal, which drives the spike generator.
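The integrate-compare-fire loop above can be sketched with a simple discrete-time leaky integrate-and-fire model. This is a behavioral analogue of the block diagram, not the transistor-level design; all parameter values are assumptions.

```python
# Discrete-time leaky integrate-and-fire (LIF) neuron: the membrane
# potential integrates input current with leakage; crossing the threshold
# emits a spike and resets the membrane. Values are illustrative
# assumptions, not the fabricated circuit's parameters.
def lif_run(input_current, v_th=1.0, v_reset=0.0, leak=0.95, gain=0.1):
    v = v_reset
    spikes = []
    for i_in in input_current:
        v = leak * v + gain * i_in  # leaky integration of synaptic current
        if v >= v_th:               # comparator: Vmem crosses Vth
            spikes.append(1)        # spike generator fires
            v = v_reset             # membrane resets after the spike
        else:
            spikes.append(0)
    return spikes

# A sufficiently strong constant input periodically drives the membrane
# over threshold; with these parameters, 30 steps yield 2 spikes.
out = lif_run([1.0] * 30)
print(sum(out))
```

The leak factor plays the role of the "leaky" term in the LIF model: with zero input, the membrane potential relaxes back toward its resting value.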
IV. THE DESIGN OF CMOS NEURON
This section discusses the design of CMOS neuron circuit blocks.
1. Operational Amplifier - In earlier works, a class-AB operational amplifier was used, whose drawbacks were larger area and power consumption. Fig. 2 shows the proposed two-stage Op-Amp, which overcomes these problems. With its auxiliary circuits, the Op-Amp can be reconfigured both as an integrator and as a buffer for resistive loads during the firing mode.
2. Comparator - Fig. 3 shows the comparator used in this neuron design. It comprises two cascaded differential amplifiers. The inner amplifier is a gain stage based on a source-coupled differential pair with diode-connected load devices. The output of the differential pair is further enhanced and regenerated by a cross-coupled latch that provides positive feedback. The outer amplifier further boosts the overall gain and converts the intermediate comparison result into a full-scale binary output voltage.
3. Spike Generator - As shown in Fig. 4, the spike generator circuit is designed by selecting voltage reference levels for the positive pulse and an RC charging circuit for the negative tail. The shape of the action potential Vspk influences the STDP learning function. A biologically faithful STDP pulse with exponential rising edges is very difficult to realize in a circuit. However, a bio-inspired STDP pulse can be achieved with a simpler action-potential shape: a short, narrow positive pulse of large amplitude followed by a longer, slowly decaying negative tail.
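The simplified action-potential shape can be sketched numerically: a brief positive pulse followed by a negative tail that relaxes toward zero as an RC discharge would. The amplitudes and time constant below are illustrative assumptions, loosely echoing the Va+ and Va- levels reported in Section V.

```python
import math

def spike_waveform(t_us, pulse_width_us=2.0, v_pos=0.5, v_neg=-0.38, tau_us=20.0):
    """Idealized STDP-compatible action potential (all values assumed):
    a short positive pulse of amplitude v_pos, then a negative tail that
    decays exponentially toward zero like an RC discharge."""
    if t_us < 0:
        return 0.0
    if t_us < pulse_width_us:   # short, narrow positive pulse
        return v_pos
    # negative tail: starts at v_neg and relaxes toward 0 with constant tau
    return v_neg * math.exp(-(t_us - pulse_width_us) / tau_us)

# Sample the waveform every 0.5 us over 50 us:
samples = [spike_waveform(t * 0.5) for t in range(100)]
print(max(samples), min(samples))  # peak and deepest tail values
```

The exponential tail is what gives the STDP window its decaying shape: the later a presynaptic pulse overlaps the tail, the smaller the resulting conductance change.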
V. SIMULATION RESULT
The circuits are designed and simulated using Cadence Virtuoso in 0.18 µm CMOS technology. The layout is carried out in the Cadence Virtuoso Layout XL design environment. The design rule check (DRC) and layout versus schematic (LVS) checks were executed successfully without errors.
1. Simulation results of the Operational Amplifier - Fig. 5 shows the transient response of the two-stage operational amplifier. A sine-wave signal is applied to the two inputs, and the effective output signal is the voltage difference between the two terminals. The simulation uses the parameters Vdd = 1.8 V, V1 and V2 = 1 mV amplitude, frequency = 10 kHz, and bias current Idc = 32 µA.
2. Simulation results of the Comparator - The transient behaviour of the comparator is illustrated in Fig. 6. The membrane potential Vmem is compared with the threshold voltage Vthr; crossing it triggers the spike-generation circuit. The parameters used are VDD = 2.2 V, VPULSE = 1.8 V, pulse width = 2 ms, and period = 4 ms.
3. Simulation results of the Spike Generator - The transient response of the spike generator is shown in Fig. 7. To control spike generation, a digital phase controller generates two non-overlapping control signals, φint and φfire, together with two further signals implemented using pulse circuits: φ1 for the positive pulse and φ2 for the negative tail. For this device, the spike parameters Va+ = 500 mV, Va- = 380 mV, and Vrefr = 460 mV were chosen.
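The role of the phase controller can be illustrated with a toy non-overlapping sequence generator: the two phases take turns, separated by a dead time so they are never active simultaneously. This is a behavioral sketch, not the phase-controller circuit; the period and gap lengths are assumptions.

```python
def nonoverlap_phases(n_steps, half_period=4, gap=1):
    """Generate two non-overlapping control signals (e.g. phi_int, phi_fire):
    each phase is high for (half_period - gap) steps, and a dead time of
    'gap' steps separates them so the two are never high at the same time."""
    phi_int, phi_fire = [], []
    for t in range(n_steps):
        slot = t % (2 * half_period)
        phi_int.append(1 if slot < half_period - gap else 0)
        phi_fire.append(1 if half_period <= slot < 2 * half_period - gap else 0)
    return phi_int, phi_fire

a, b = nonoverlap_phases(16)
# The two phases never overlap:
print(all(not (x and y) for x, y in zip(a, b)))  # True
```

In the actual circuit, this dead time prevents the integrator and firing configurations of the single Op-Amp from being enabled at once during the mode switch.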
4. Simulation results of the LIF Neuron - The LIF neuron was tested with input spike trains from a spike generator, exercising both the integration action and the firing action. When the integrated voltage falls below the set threshold value of 180 mV, an output spike is observed at that instant, and the spike is propagated back to the input, as can be observed in Fig. 8.
The authors would like to acknowledge the help of Nisarga G S, Sohan G Naik, Abhinandan L, Suhas M K, and Nathasha Vepriyana during the design, simulation, and layout generation.
This paper described a spiking neuromorphic system. The CMOS neuron combines its circuit functions in a compact manner around a single Op-Amp, using a dual-mode operation. The design was implemented in a 180 nm CMOS process, and circuit simulations verified the functionality of the proposed neuron.
P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, et al., “A million spiking-neuron integrated circuit with a scalable communication network and interface,” Science, vol. 345, no. 6197, pp. 668–673, Aug. 2014.
X. Wu, V. Saxena, K. Zhu, and S. Balagopal, “A CMOS spiking neuron for brain-inspired neural networks with resistive synapses and in situ learning,” IEEE Trans. Circuits Syst. II: Express Briefs, vol. 62, no. 11, pp. 1088–1092, 2015.
I. E. Ebong and P. Mazumder, “CMOS and memristor-based neural network design for position detection,” Proceedings of the IEEE, vol. 100, no. 6, pp. 2050–2060, 2012.
X. Wu, V. Saxena, and K. Zhu, “Homogeneous spiking neuromorphic system for real-world pattern recognition,” IEEE J. Emerg. Sel. Topics Circuits Syst., vol. 5, no. 2, pp. 254–266, 2015.
A. L. Hodgkin and A. F. Huxley, “A quantitative description of membrane current and its application to conduction and excitation in nerve,” The Journal of Physiology, vol. 117, pp. 500–544, 1952.
A. Borisyuk, “Morris–Lecar model,” in Encyclopedia of Computational Neuroscience. Springer, 2015, pp. 1758–1764.
R. Wang, T. J. Hamilton, et al., “A generalised conductance-based silicon neuron for large-scale spiking neural networks,” in Proc. IEEE International Symposium on Circuits and Systems (ISCAS), 2014, pp. 1564–1567.
R. Gregorian, Introduction to CMOS Op-Amps and Comparators. John Wiley and Sons, 1999.
M. R. Azghadi et al., “Spike-based synaptic plasticity in silicon: Design, implementation, application, and challenges,” Proceedings of the IEEE, vol. 102, no. 5, 2014.
G. Indiveri, “A current-mode hysteretic winner-take-all network with excitatory and inhibitory coupling,” 2000.
S. Saïghi, C. G. Mayr, T. Serrano-Gotarredona, H. Schmidt, et al., “Plasticity in memristive devices for spiking neural networks,” Frontiers in Neuroscience, vol. 9, 2015.
G. Indiveri and S.-C. Liu, “Memory and information processing in neuromorphic systems,” Proceedings of the IEEE, vol. 103, no. 8, pp. 1379–1397, 2015.