On the Use of Quantum-inspired Optimization Techniques for Training Spiking Neural Networks: A New Method Proposed

Maurizio Fiasché and Marco Taisch
Department of Management, Economics and Industrial Engineering, Politecnico di Milano, Italy
maurizio.fiasche@polimi.it

Abstract. Spiking neural networks (SNN) are brain-like connectionist methods, where the output activation function is represented as a train of spikes rather than as a potential. This and other reasons make SNN models biologically closer to brain principles than any of the alternative Artificial Neural Network (ANN) models proposed. In fact, their inherently dynamical representation gives them great potential for solving complicated time-dependent pattern recognition problems defined by time series. Many works on SNN have been presented in the last decade, promoting these models as the third generation of ANN. Nevertheless, several open challenges have been reported in these studies. In this paper we analyze a particular type of SNN, the evolving SNN (eSNN), focusing mainly on the optimization of their weights, parameters and features using a new evolutionary strategy.

Keywords: Spiking Neural Network (SNN), evolving SNN (eSNN), Evolutionary Algorithms (EA), Quantum EA (QEA), Quantum Particle Swarm Optimization (QPSO).

1 Introduction

Spiking Neural Networks (SNN) represent a special class of artificial neural networks (ANN), where neurons communicate by trains of spikes. Networks composed of spiking neurons are able to process information using a relatively small number of spikes [1]. Being functionally very similar to biological neurons, they are used as powerful tools for the analysis of elementary processes in the brain, including neural information processing, plasticity and learning.
At the same time, spiking networks offer solutions to a broad range of specific problems in applied engineering, such as fast signal processing, classification, time-series event prediction, and pattern recognition. It has been demonstrated that SNNs can be applied not only to all problems solvable by non-spiking neural networks, but also that spiking models are computationally more powerful than perceptrons and sigmoidal gates [2]. Several improvements and variants of the spiking neuron model have been developed. An Evolving Spiking Neural Network (ESNN) is proposed by Wysoski et al. [3], where the output neuron evolves based on the input patterns and weight similarity, and the neural model is trained by a fast one-pass learning algorithm [4]. Due to its evolving nature the model can be updated whenever new data becomes available, without requiring the retraining of earlier presented data samples. Promising results have been obtained both on synthetic benchmarks and real-world datasets.

© Springer International Publishing Switzerland 2015. S. Bassis et al. (eds.), Recent Advances of Neural Networks Models and Applications, Smart Innovation, Systems and Technologies 37, DOI: 10.1007/978-3-319-18164-6_35

Like other neural network models, the correct combination of parameters influences the performance of the network. On the other hand, increasing the number of used features does not necessarily translate into higher classification accuracy. In some cases, having fewer significant features can help both to reduce the processing time and to produce good classifications. This study combines the ESNN architecture with a Quantum-inspired Evolutionary Algorithm (QEA), so as to investigate the potential of ESNN when applied to Feature Subset Selection (FSS) problems following the wrapper approach.
The latter is used to identify relevant feature subsets and simultaneously evolve an optimal parameter setting for the ESNN, while the ESNN itself operates as a quality measure for a given feature subset. By optimizing the two search spaces in parallel, we expect to evolve an ESNN configuration specifically generated for the given dataset, while also providing a specific feature subset maximizing classification accuracy. In particular, our aim is twofold: i) to describe the use of evolutionary algorithms for training SNNs; and ii) to propose an algorithm based on Particle Swarm Optimization (PSO) joined with quantum principles for simultaneous feature and parameter optimization of an ESNN.

The paper is organized as follows. In Section 2 an overview of the main components of the proposed model is provided, starting from the adopted spiking model and related optimization issues, followed by a brief review of PSO and its quantum-inspired generalization. In Section 3 the theoretical framework is described, while experimental results are reported in Section 4. Finally, in Sections 5 and 6 results and conclusions are presented, with some hints on possible future trends.

2 Spiking Neural Networks

SNNs represent information as trains of spikes, rather than as single scalars, thus allowing the use of features such as frequency, phase, incremental accumulation of input signals, time of activation, etc. The neuronal dynamics of a spiking neuron are based on the increase in the inner potential of the neuron (post-synaptic potential, PSP) after every input spike arrival. When the PSP reaches a certain threshold, the neuron emits a spike at its output. In biological neural networks, neurons are connected at synapses and electrical signals (spikes) pass information from one neuron to another. SNNs are biologically plausible and offer means for representing time, frequency, phase and other features of the information being processed.
A simplified diagram of a spiking neuron model is shown in Fig. 1(a). Fig. 1(b) shows the mode of operation of a spiking neuron, which emits an output spike when the total spiking input, the post-synaptic potential (u(t) in the figure), exceeds a spiking threshold.

The information in ESNN is represented as spikes; therefore, input information must be encoded in spike pulses. There are several information encoding methods for SNNs. In this paper we have used a well-known encoding technique for ESNN: Population Encoding [5]. Population Encoding distributes a single input value to multiple pre-synaptic neurons. Each pre-synaptic neuron generates a spike at its firing time, which is calculated from the intersection of Gaussian functions. For an input variable with range [I_min, I_max] covered by M receptive fields, the centre of the Gaussian function of neuron i is calculated using Equation (1) and its width using Equation (2); the parameter β controls the width of each Gaussian receptive field:

µ_i = I_min + ((2i − 3)/2) · (I_max − I_min)/(M − 2)    (1)

σ = (1/β) · (I_max − I_min)/(M − 2),  where 1 ≤ β ≤ 2    (2)

Fig. 1. (a) A short visualization of a simple spiking neuron model with its spike representation in time. (b) The representation of a spiking neuron emitting an output spike when the total spiking input, the post-synaptic potential u(t), exceeds a spiking threshold [6].

2.1 Evolving Spiking Neural Networks (ESNNs)

ESNNs evolve/develop their structure and functionality in an incremental way from incoming data, based on the following principles [6]: (i) new spiking neurons are created to accommodate new data, e.g. new patterns belonging to a class or new output classes, such as faces in a face recognition system; (ii) spiking neurons are merged if they represent the same concept (class) and have similar connection weights (defined by a threshold of similarity).
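As a concrete illustration, the population encoding of Eqs. (1)-(2) can be sketched in a few lines of Python. The linear mapping of Gaussian excitation onto firing times in [0, t_max] is a common convention assumed here, not a detail taken from the paper:

```python
import numpy as np

def population_encode(value, M=10, I_min=0.0, I_max=1.0, beta=1.5, t_max=1.0):
    """Encode a scalar into the firing times of M pre-synaptic neurons
    using Gaussian receptive fields (Eqs. 1-2)."""
    i = np.arange(1, M + 1)
    span = (I_max - I_min) / (M - 2)
    mu = I_min + (2.0 * i - 3.0) / 2.0 * span   # centres, Eq. (1)
    sigma = span / beta                         # width, Eq. (2), 1 <= beta <= 2
    excitation = np.exp(-0.5 * ((value - mu) / sigma) ** 2)
    # Convention: stronger excitation -> earlier spike, times in [0, t_max]
    return t_max * (1.0 - excitation)

times = population_encode(0.5)   # one firing time per receptive field
```

The receptive field whose centre lies closest to the input value produces the earliest spike, so the analogue value is converted into a temporal spike pattern.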
In [4,8] an ESNN architecture is proposed where the change in a synaptic weight is achieved through a simple spike-time-dependent plasticity (STDP) learning rule:

Δw_{j,i} = mod^{order(j)}

where w_{j,i} is the weight between neuron j and neuron i, mod ∈ (0,1) is the modulation factor, and order(j) is the order of arrival of the spike produced by neuron j at neuron i. For each training sample, the winner-takes-all approach has been used, where only the neuron with the highest post-synaptic potential (PSP) value updates its weights. The post-synaptic threshold (PSP_Th) of a neuron is calculated as a proportion c ∈ [0,1] of the maximum post-synaptic potential, max(PSP), generated by propagating the training sample through the updated weights:

PSP_Th = c · max(PSP)

The One-Pass Algorithm is the learning algorithm for ESNN, which follows both the STDP learning rule and the time-to-first-spike learning rule [7]. In this algorithm, each training sample creates a new output neuron. The trained threshold value and the weight pattern for that particular sample are stored in the neuron repository. However, if the weight pattern of the trained neuron greatly resembles that of a neuron already in the repository, the two are merged: the weight pattern and the threshold of the merged neurons are set to their average values. Otherwise, the neuron is added to the repository as a newly trained one. The major advantage of this learning algorithm is the ability of the trained network to learn new samples incrementally without retraining. Creating and merging neurons based on both localised incoming information and system performance are the main operations that make the ESNN architecture continuously "evolvable".
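The rank-order weight rule and threshold computation above can be sketched for a single neuron and a single sample. The neuron-merging step is omitted, and obtaining max(PSP) by replaying the same sample through the freshly set weights is a simplifying assumption:

```python
import numpy as np

def train_one_pass_neuron(firing_times, mod=0.9, c=0.7):
    """One-pass training of one output neuron on one sample:
    rank-order weights Delta w_{j,i} = mod**order(j), then PSP_Th = c * max(PSP)."""
    order = np.argsort(np.argsort(firing_times))  # spike arrival rank per input
    w = mod ** order                              # weight pattern for this sample
    psp_max = np.sum(w * mod ** order)            # PSP when the sample is replayed
    theta = c * psp_max                           # firing threshold for this neuron
    return w, theta

w, theta = train_one_pass_neuron(np.array([0.1, 0.3, 0.2]))
```

Earlier spikes get larger weights (mod^0 = 1 for the first arrival), which is what makes the encoding of Section 2, where proximity to a receptive-field centre means an earlier spike, informative for classification.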
2.2 Optimization Challenges

In order to provide an efficient and accurate solution to the simultaneous optimization of features and parameters of an ESNN, a Versatile Quantum-inspired Evolutionary Algorithm (vQEA) [9] was used in [10], chosen for its interesting properties in terms of solution quality and convergence speed. The method evolves in parallel a number of independent probability vectors, which interact with each other at certain time intervals, forming a multi-model Estimation of Distribution Algorithm (EDA) [11]. It has been shown that this approach performs well on epistatic problems, is very robust to noise, and needs only minimal fine-tuning of its parameters. Moreover, the standard setting for vQEA is suitable for a large range of different problem sizes and classes, and in particular fits the feature selection problem under consideration well. For the optimization of general numerical problems, a QEA has been proposed in [12] as a refinement of the technique introduced by Han and Kim [13]. In this paper we introduce another type of QEA for training ESNN, starting from the general optimization technique described in [12].

2.3 Quantum-Inspired Particle Swarm Optimization

Particle Swarm Optimization (PSO) is a population-based optimization technique developed by Eberhart and Kennedy in 1995 [14]. Individual particles work together to solve a given problem by responding to their own performance and to the performance of other particles in the swarm. Each particle computes its own fitness value during the optimization process and stores the best fitness value achieved so far, normally referred to as the personal best or individual best (pbest). Likewise, the overall best fitness value obtained by any particle in the population is called the global best (gbest).
Each particle n moves to a new position x_n(t) by computing its velocity vector v_n(t) from pbest and gbest, according to the following formulas:

v_n(t) = w · v_n(t−1) + c_1 r_1 (gbest − x_n(t−1)) + c_2 r_2 (pbest_n − x_n(t−1))    (3)

x_n(t) = x_n(t−1) + v_n(t)    (4)

where c_1 > 0 and c_2 > 0, called the cognitive and social parameters, control the particle acceleration towards the personal best or global best position, r_1 and r_2 are two uniform random realizations, and w > 0 is a constant called the inertia parameter. Moreover, in a swarm of N particles, at time t each particle i has:

1. a current position vector x_i(t),
2. a current velocity vector v_i(t),
3. a record of its own best positions pbest_i = (pbest_i(1), ..., pbest_i(t)),
4. a record of the best global positions gbest = (gbest(1), ..., gbest(t)).

As previously reported, a QEA was presented in 2002 [13], inspired by the concept of quantum computing. According to the classical computing paradigm, information is represented in bits, where each bit must hold either 0 or 1. In quantum computing, information is instead represented by a qubit, whose value can be 0, 1, or a superposition of both. Superposition allows the possible states to represent both 0 and 1 simultaneously, based on their probability amplitudes. The quantum state lives in a Hilbert space and is defined as

Ψ = α|0⟩ + β|1⟩

where α and β are complex numbers defining the probabilities of collapsing into the corresponding state (for instance, when the qubit is read or measured). Probability fundamentals require

|α|² + |β|² = 1

where |α|² (resp. |β|²) gives the probability that the qubit is found in the OFF (0) (resp. ON (1)) state. The general notation for an individual with m qubits can be written as a string of amplitude pairs [α_1 α_2 ... α_m; β_1 β_2 ... β_m]. The Quantum-inspired PSO (QPSO) was first discussed in [15], and several variants of QPSO [16,17] have been developed thereafter.
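The velocity and position updates of Eqs. (3)-(4) can be sketched as a minimal PSO minimizer. The hyper-parameter values and the sphere test function are illustrative choices, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain PSO implementing Eqs. (3)-(4)."""
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (gbest - x) + c2 * r2 * (pbest - x)  # Eq. (3)
        x = x + v                                                  # Eq. (4)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, val = pso_minimize(lambda z: float(np.sum(z ** 2)))  # sphere function
```

With a continuous search space the particles themselves hold real values; in the QPSO variant discussed next, the same update machinery is applied to quantum angles instead.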
The main idea of QPSO is to use a standard PSO function to update the particle position represented as qubits. In order for PSO to update the probability of a qubit, the quantum angle θ is used, so that a qubit can be represented as the vector [cos(θ), sin(θ)]^T.

3 Theoretical Framework

This paper presents QPSO as a new optimizer for ESNN. Following the well-known wrapper approach, QPSO interacts with an induction method (the ESNN), optimizes the ESNN parameters, namely the modulation factor (Mod), the proportion factor (C), and the neuron similarity (Sim), and simultaneously identifies relevant features. All particles are initialized with a random set of binary values and subsequently interact with each other based on classification accuracy. Since there are two components to be optimized, each particle is divided into two parts. The first part, used for feature optimization, holds the feature mask values (1 for a selected feature, 0 otherwise), while the second part holds binary strings for parameter optimization, where a set of qubits represents each parameter value. Since the information held by each particle is in binary representation, conversion into real values is required. For this task the Gray code method is chosen, since it is a simple and effective way of representing real values from a binary representation. The proposed framework is depicted in Fig. 2.

Fig. 2. A conceptual representation of the QPSO-ESNN framework.

4 Experimentations

The proposed QPSO-ESNN method is tested on the uniform hypercube dataset introduced in [18]. Only two of the ten features created, namely f_1 and f_2, are relevant to determine the output class. The problem consists of 600 samples grouped in two classes: 276 samples belong to class 1 and 324 to class 2; in particular, a sample belongs to class 1 when f_i ≤ α·γ^(i−1) for i = 1 and 2, where γ = 0.8 and α = 0.5.
The irrelevant features consist of four uniform random values and four redundant copies of the two relevant features with the addition of Gaussian noise with zero mean and standard deviation 0.3. To compute performance measures, 10-fold cross-validation has been used. From our preliminary investigation, we found that the location of the relevant features has a direct effect on the threshold value. In this experiment, features 1 to 4 are set as random, 5 to 8 as redundant, while 9 and 10 are defined as relevant. This motivates investigating the ability of QPSO to optimize the value of parameter C, which should be greater than 0.5, since the relevant features are at the end of the ordered features. In the reception blocks described in Fig. 2, receptive fields were used to produce a weight pattern (weight vector) of a particular sample for identifying the output class. During our experiments, different numbers of receptive fields influenced the resulting accuracy for each dataset. In our preliminary experimentation, 10 receptive fields were chosen, and 20 particles were used to explore the solution space. The variables to be optimized by QPSO are three ESNN parameters and 10 features. Since all three parameters range between 0 and 1, six qubits were sufficient to represent each real value. The cognitive and social parameters c_1 and c_2 were set to 0.05, leading to a balanced exploration between gbest and pbest, while the inertia weight w was set to 2.0. We compared an ESNN with feature optimization against an ESNN with all features; in both cases we performed parameter optimization over 15 consecutive runs, computing the average results after 100 iterations. During our experimentation we also compared the ESNN with PSO as optimizer (ESNN-PSO) against the QPSO-ESNN proposed here.
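For readers wishing to reproduce the setup, the hypercube dataset described above can be approximated as follows. The class rule, the inequality direction, and the feature ordering are reconstructions from the text, so the class proportions will not exactly match the quoted 276/324 split:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_hypercube(n=600, gamma=0.8, alpha=0.5, noise_sd=0.3):
    """Sketch of the dataset of [18]: f1, f2 relevant; four uniform random
    features; four noisy redundant copies of f1, f2. Class rule reconstructed."""
    relevant = rng.uniform(0, 1, (n, 2))
    random_feats = rng.uniform(0, 1, (n, 4))                 # features 1-4
    redundant = np.tile(relevant, 2) + rng.normal(0, noise_sd, (n, 4))  # 5-8
    X = np.hstack([random_feats, redundant, relevant])       # relevant last (9, 10)
    thresholds = alpha * gamma ** np.arange(2)               # alpha * gamma**(i-1)
    y = np.where(np.all(relevant <= thresholds, axis=1), 1, 2)
    return X, y

X, y = make_hypercube()
```

Placing the relevant features in the last two columns mirrors the experiment's ordering, which is what makes the learned proportion factor C sensitive to feature position.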
The ESNN-PSO algorithm, although achieving a high classification accuracy, is highly dependent on the parameter optimization, which affected the results, giving a lower accuracy than QPSO-ESNN: the accuracy obtained during the test phase for ESNN-PSO is in fact 80% vs. 90% for QPSO-ESNN.

5 Results

The results of our experimentations show that QPSO can optimize parameters and features in fewer than 80 iterations. The average accuracy for the ESNN with feature optimization is consistently above 90%, compared to the ESNN using all features, whose mean accuracy settles around 60%. In the latter case, the network was unable to cope with the redundancy introduced by the irrelevant features. Moreover, it has been interesting to observe that the average accuracy of the ESNN with feature optimization in the first iteration is already high, around 80%, thanks to the presence of a particle able to reach a very good solution in the early stage of the algorithm. Driven by the best global position gbest, the other particles then updated their positions toward the best solution within a few iterations. During the learning process, the gbest particle is able to reduce the 10 features to an average of six features in the early iterations. Then, the algorithm keeps deleting irrelevant features in order to identify the most relevant ones. The two relevant features are always selected until the end of the learning process in all 15 runs. However, sometimes non-significant features have been selected together with the relevant features until the end of the learning process; in these cases, the proposed method was still able to identify a weight pattern that classifies the output class correctly. In this experiment, QPSO managed to optimize the binary string information representing the ESNN parameter values as expected.
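The decoding of a qubit bit string into a real parameter value, performed via Gray code in Section 3, can be sketched as follows. The mapping of the decoded integer onto an interval [lo, hi] is an assumed convention:

```python
def gray_to_real(bits, lo=0.0, hi=1.0):
    """Decode a Gray-coded bit list into a real value in [lo, hi]."""
    binary = []
    acc = 0
    for b in bits:
        acc ^= b            # Gray -> binary: cumulative XOR of the bits
        binary.append(acc)
    n = int("".join(map(str, binary)), 2)
    return lo + (hi - lo) * n / (2 ** len(bits) - 1)
```

With six qubits per parameter, as in the experiment, this yields 64 evenly spaced candidate values; Gray coding has the useful property that flipping a single collapsed bit moves the decoded value to an adjacent code, smoothing the search.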
Mod is used as a weight value with the objective of having different weight patterns to differentiate output classes; therefore, its value should not be too low. If a low value is selected, only a few connections would carry significant weight values, making it difficult to obtain distinct weight patterns. In contrast, a higher value means most of the weights carry significant connection values, which translates into well-separated weight patterns in accordance with their output classes. Thus, in our preliminary investigation we found that the Mod value should be between 0.6 and 1.0, and QPSO indeed produced Mod values within that range in this experiment. With the relevant features located at the last two positions, the results show that the average C value found in this experiment is around 0.8, which is acceptable. Finally, the average Sim value found is 0.1, which is fairly adequate since the weight patterns of input samples in the same class are quite similar. Overall, all three parameters evolve steadily towards certain optimal values, where the correct combination leads to better classification accuracy.

6 Conclusions

This paper presents a new integration method for the simultaneous optimization of features and parameters in an ESNN using QPSO, starting from a more general QEA. The results have shown that QPSO is able to select the relevant features as well as to optimize the ESNN parameters, generating higher classification accuracy for QPSO-ESNN with feature and parameter optimization than for QPSO-ESNN using all features. Moreover, a comparison with an ESNN using plain PSO as optimizer has been carried out, highlighting that QPSO-ESNN performs better both in terms of classification accuracy and of features selected.
Future work will focus on finding a more effective method for eliminating the less relevant features, also adapting the general QEA for optimization of [12] to ESNN and comparing it with the approach presented in this paper. The optimization of other crucial aspects of an SNN, e.g. the connections and the threshold causing neurons to spike, will also be analyzed. The proposed method will also be tested on different types of datasets, such as string datasets as well as other real-world datasets, with comparison to other classification algorithms like [19,20].

References

1. VanRullen, R., Guyonneau, R., Thorpe, S.: Spike times make sense. Trends Neurosci. 28, 1–4 (2005)
2. Maass, W.: Networks of spiking neurons: The third generation of neural network models. Neural Networks 10, 1659–1671 (1997)
3. Wysoski, S.G., Benuskova, L., Kasabov, N.: On-line learning with structural adaptation in a network of spiking neurons for visual pattern recognition. ICANN (1), 61–70 (2006)
4. Wysoski, S.G., Benuskova, L., Kasabov, N.: Brain-like evolving spiking neural networks for multimodal information processing. In: Hanazawa, A., Miki, T., Horio, K. (eds.) Brain-Inspired Information Technology. SCI, vol. 266, pp. 15–27. Springer, Heidelberg (2010)
5. Bohte, S.M., Kok, J.N., Poutre, H.L.: Error-Backpropagation in Temporally Encoded Networks of Spiking Neurons. Neurocomputing 48(1-4), 17–37 (2002)
6. Kasabov, N.: Evolving Connectionist Systems: The System Engineering Approach, vol. 2. Springer-Verlag New York Inc., Secaucus (2007)
7. Thorpe, S.J.: How Can the Human Visual System Process a Natural Scene in Under 150 ms? Experiments and Neural Network Models. In: Verleysen, M. (ed.) Proceedings of the European Symposium on Artificial Neural Networks, D-Facto, Bruges, Belgium (1997)
8. Soltic, S., Wysoski, S., Kasabov, N.: Evolving spiking neural networks for taste recognition.
In: IEEE World Congress on Computational Intelligence (WCCI), Hong Kong (2008)
9. Defoin-Platel, M., Schliebs, S., Kasabov, N.: A versatile quantum-inspired evolutionary algorithm. In: IEEE Congress on Evolutionary Computation, CEC 2007, pp. 423–430 (2007)
10. Schliebs, S., Defoin-Platel, M., Worner, S., Kasabov, N.: Integrated Feature and Parameter Optimization for an Evolving Spiking Neural Network: Exploring Heterogeneous Probabilistic Models. Neural Networks 22, 623–632 (2009)
11. Defoin-Platel, M., Schliebs, S., Kasabov, N.: Quantum-Inspired Evolutionary Algorithm: A Multimodel EDA. IEEE Transactions on Evolutionary Computation 13(6), 1218–1232 (2009)
12. Fiasché, M.: A Quantum-Inspired Evolutionary Algorithm for Optimization Numerical Problems. In: Huang, T., Zeng, Z., Li, C., Leung, C.S. (eds.) ICONIP 2012, Part III. LNCS, vol. 7665, pp. 686–693. Springer, Heidelberg (2012)
13. Han, K.H., Kim, J.H.: Quantum-inspired evolutionary algorithm for a class of combinatorial optimization. IEEE Transactions on Evolutionary Computation 6(6), 580–593 (2002)
14. Eberhart, R., Kennedy, J.: A new optimizer using particle swarm theory. In: Proc. Sixth International Symposium on Micro Machine and Human Science, pp. 39–43. IEEE Press (1995)
15. Sun, J., Feng, B., Xu, W.B.: Particle swarm optimization with particles having quantum behavior. In: Proc. Congress on Evolutionary Computation, vol. 1, pp. 325–331 (2004)
16. Hamed, H.N.A., Kasabov, N., Shamsuddin, S.M.: Integrated feature selection and parameter optimization for evolving spiking neural networks using quantum inspired particle swarm optimization. In: Soft Computing and Pattern Recognition, SoCPaR 2009, pp. 695–698 (2009)
17. Hamed, H.N.A., Kasabov, N., Shamsuddin, S.M.: Quantum-Inspired Particle Swarm Optimization for Feature Selection and Parameter Optimization in Evolving Spiking Neural Networks for Classification Tasks. In: Kita, E. (ed.) Evolutionary Algorithms. InTech (2011)
18.
Estévez, P., Tesmer, M., Perez, C., Zurada, J.: Normalized mutual information feature selection. IEEE Transactions on Neural Networks 20(2), 189–201 (2009)
19. Kasabov, N.: Integrative probabilistic evolving spiking neural networks utilising quantum inspired evolutionary algorithm: A computational framework. In: Köppen, M., Kasabov, N., Coghill, G. (eds.) ICONIP 2008, Part I. LNCS, vol. 5506, pp. 3–13. Springer, Heidelberg (2009)
20. Kasabov, N.K.: NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data. Neural Networks 52, 62–76 (2014)

Binary Synapse Circuitry for High Efficiency Learning Algorithm Using Generalized Boundary Condition Memristor Models

Jacopo Secco¹, Alessandro Vinassa¹, Valentina Pontrandolfo¹, Carlo Baldassi², and Fernando Corinto¹

¹ Politecnico di Torino, Department of Electronics and Telecommunications (DET), Corso Duca degli Abruzzi 24, 10129 Torino, Italy
² Politecnico di Torino, Department of Applied Science and Technology (DISAT), Corso Duca degli Abruzzi 24, 10129 Torino, Italy
fernando.corinto@polito.it

Abstract. Memristors are memory resistors that promise the efficient implementation of synaptic weights in artificial neural networks [1]. This kind of technology has permitted the implementation of a large number of real-world data in an evolutionary learning artificial system. The human brain is capable of processing such data with standard, always-equal signals, namely the synapses. Our goal is to present a circuit which responds with binary outputs to the signals exiting from the memristors implemented in an artificial neural system that functions through a high-efficiency learning algorithm.

Keywords: Memristor, Memristor-based Circuits, Binary Synapses, Neural Networks, Pattern Recognition.

1 Introduction and Background

1.1 Memristor Model

In 1971 Prof. Leon Chua introduced a theoretical model of the memristor [2].
The memristor is a two-terminal non-linear element capable of changing and maintaining its resistance (memristance), defined as R(q) = dϕ/dq, which is a function of q only. The relation ϕ(q), where ϕ is the flux in the device, is considered the constitutive equation of the memristor. Several models have been proposed to describe the nonlinear behavior of memristor devices. In this manuscript we exploit the Generalized Boundary Condition Memristor model (GBCM), developed by Ascoli et al., which preserves the features of the Boundary Condition Memristor model (BCM) but allows the tuning of the activation-based dynamics and of the data processing and storage capability [3,4,5]. Let D denote the length of the nano-film and x = w/D ∈ [0,1] represent the longitudinal extension of the conductive part of the nano-film. The model equations are [3]

dx(t)/dt = k · i(t) · f_p(x(t), v(t))    (1)

i(t) = W(x(t)) · v(t)    (2)

where k ∈ R is a constant depending on the physical properties of the memristor (its dimension is C⁻¹, thus it is also referred to as the memristor charge normalization factor), and W(x(t)) describes the state-dependent memory conductance. On the other hand, f_p(x(t), v(t)) ∈ [0,1] is a parametrizable window function that describes the activation dynamics of the device under suitable boundary conditions. To perform the learning tests, the GBCM model was built in PSpice as shown in the reference article (see [3] for more details).

© Springer International Publishing Switzerland 2015. S. Bassis et al. (eds.), Recent Advances of Neural Networks Models and Applications, Smart Innovation, Systems and Technologies 37, DOI: 10.1007/978-3-319-18164-6_36

1.2 Learning Algorithm

Baldassi et al. in 2007 developed a learning algorithm called Stochastic Belief-Propagation-Inspired (SBPI) [6].
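A rough numerical sketch of Eqs. (1)-(2) follows. The window function f_p = x(1 − x) and all device parameter values here are illustrative stand-ins, not the actual GBCM model of [3]:

```python
import numpy as np

def simulate_memristor(v, dt=1e-6, k=1e4, R_on=100.0, R_off=16e3, x0=0.1):
    """Forward-Euler integration of Eqs. (1)-(2) with a simple window
    f_p = x(1-x) standing in for the GBCM window (illustrative parameters)."""
    x = x0
    current = np.empty_like(v)
    for n, vn in enumerate(v):
        G = 1.0 / (R_on * x + R_off * (1.0 - x))  # W(x): state-dependent conductance
        i = G * vn
        x += dt * k * i * x * (1.0 - x)           # dx/dt = k * i * f_p(x, v)
        x = min(max(x, 0.0), 1.0)                 # enforce x in [0, 1]
        current[n] = i
    return current, x

t = np.arange(0.0, 1e-2, 1e-6)
i_out, x_final = simulate_memristor(3.3 * np.sin(2 * np.pi * 100 * t))
```

Positive voltage drives x towards 1 (lower resistance, R_on) and negative voltage towards 0 (higher resistance, R_off), which is the activation/deactivation dynamics the circuit exploits.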
This kind of algorithm was designed to simulate biological learning dynamics with binary synapses. Here we preliminarily describe a simplified version of that algorithm (called CP in [6]), and we present a circuit which implements it. The CP algorithm has reduced performance with respect to the SBPI algorithm, but our scheme can be extended rather straightforwardly to SBPI, or a variant thereof (such as the one proposed in [7]), since it is already able to model the most crucial quantities used in the algorithm; the extension to the complete SBPI algorithm will be the subject of future work, currently in preparation. A set ξ of binary patterns is presented to a network of N synapses. The current flowing out of each memristor is a function of the synaptic weight

w_i = (sign(h_i) + 1) / 2

where h_i ∈ [−1,1] is the normalized memristance value (h_i = 0 when R_mem = (R_on + R_off)/2). At the same time the network's (binary) response is compared with the desired binary output σ. Given a set of p = αN patterns ξ^τ (where α ∈ [0,1]), the stability parameter at time τ is Δ = (2σ^τ − 1)(I^τ − θ), where I^τ = Σ_i w_i ξ_i^τ and θ is the threshold to obtain the desired binary response σ^τ. Then (see [6] for a detailed presentation of the SBPI algorithm):

(R1) If Δ ≥ 0, then w_i^(τ+1) = w_i^τ for all i;
(R2) If Δ < 0, then h_i^(τ+1) = h_i^τ + 2ξ_i^τ (2σ^τ − 1),

with i ranging from 1 to N. Each h_i can assume a number K of values (intermediate hidden states) above and below threshold. A greater number K of states can ensure a better performance of the algorithm, enabling the enlargement of the set of p patterns and enhancing the uniqueness of the solution.

2 Methods

2.1 Electronic Circuit

In order to implement a circuit able to perform the tasks required by the CP algorithm, it was necessary to generate a binary response to the ξ patterns presented to the system.
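The CP update rules (R1)-(R2) can be sketched in software before committing to circuitry. The teacher-generated task, the integer state bound K, and the number of epochs are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def cp_train(patterns, targets, theta, K=10, epochs=200):
    """Sketch of the CP rules: binary weights from the sign of hidden states h,
    which take integer values clipped to [-K, K]."""
    N = patterns.shape[1]
    h = rng.integers(1, K + 1, N) * rng.choice([-1, 1], N)  # nonzero initial states
    for _ in range(epochs):
        errors = 0
        for xi, sigma in zip(patterns, targets):
            w = (h > 0).astype(float)                    # w_i = (sign(h_i)+1)/2
            delta = (2 * sigma - 1) * (xi @ w - theta)   # stability parameter
            if delta >= 0:
                continue                                 # rule R1: keep weights
            errors += 1                                  # rule R2: shift hidden states
            h = np.clip(h + 2 * xi * (2 * sigma - 1), -K, K)
        if errors == 0:
            break
    return h, errors

# Toy realizable task: labels produced by a hidden binary teacher (hypothetical setup)
p, N = 20, 50
patterns = rng.integers(0, 2, (p, N))
teacher = rng.integers(0, 2, N)
theta = 0.25 * N
targets = (patterns @ teacher > theta).astype(int)
h, errors = cp_train(patterns, targets, theta)
```

Only the sign of h is visible as a weight; the hidden states play the role of the memristor's analogue memristance, which the circuit below reads out as a binary value.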
Figure 1 shows the Binary Synapse Circuitry (BSC) model that was designed to enable a binary response for each memristor implemented in the system [8]. In order to obtain a binary response by means of the memristance, a CMOS system was designed as a logical NOT gate.

Fig. 1. Binary Synapse Circuitry model.

Fig. 2. Memristor Contribution Sum Unit and Sigma Calculator Unit circuitry scheme.

Fig. 3. Input and output signals of the single BSC system.

In order to perform the CP algorithm, the outputs of all the binary memristor units must be summed and compared with the threshold θ, so as to compare the output of the system with the desired response σ. Figure 2 shows the Memristor Contribution Sum Unit (MCSU), which was implemented with an op-amp. This solution permits summing the binary voltages instead of the currents, which turns out to be more efficient at the circuit design level. The MCSU was combined with the Sigma Calculator Unit (SCU), designed with a double current-mirror comparator. The SCU has the purpose of comparing the current flowing out of the MCSU with the θ threshold, given by a voltage source connected to the nMOS mirror composing the SCU. Both MCSU and SCU are connected to a control unit able to calculate and switch between the "reading mode" and the "writing mode" of the memristors.

3 Results

The GBCM model, the BSC, the MCSU and the SCU were built in PSpice, while a model of the control unit performing the required actions of the CP algorithm was built with Simulink. The two models were then coupled using the Cadence SLPS tool. Several tests were performed to evaluate the activation and deactivation dynamics of the memristors (R_off to R_on and R_on to R_off, respectively), their required binary responses, and the learning efficiency of the algorithm while varying the θ threshold. As a proof of concept, a minimal I_ref was fixed in order to better evaluate the binary response of the BSC system alone.
By applying a 3.3 V and a −3.3 V potential to the memristor, as shown in Figure 3, it was possible to obtain binary responses.

Fig. 4. Learning efficiency of the system (mean α_c) vs. θ

Since the supply voltage of the whole system was set to 3.3 V, for simplicity the ξ patterns and the writing inputs were also set at the same amplitude. The simulations run on the system showed that the memristances did not vary with spikes whose period was shorter than 1 μs. By setting this period for the ξ pattern spikes, the system was able to perform the CP algorithm without changing the synaptic weights at every pattern presentation. Figure 4 shows the learning efficiency of the system performing the CP algorithm for θ values ranging from 0.06N to 0.36N. For θ = 0.16N there is a peak in the efficiency of the system (mean α_c = 68% of the total ξ patterns acquired). The final proof of the efficiency of this memristor-based system was given by the analysis of the same system implemented with CMOS technology only. It was proven that a memristor whose dynamics is divided into K states can be replaced by a number of transistors proportional to N_t = N log_2 K. So for K ≥ 3 the use of memristor technology is preferable also from a design point of view.

4 Conclusion

Neural networks trained with the BPI algorithm are able to learn a number of associations close to the theoretical limit, in a time that is sublinear in the number of inputs. Using binary synapses implemented by memristors, a single-layer perceptron with BPI has been proposed and investigated.

Acknowledgements. This work has been partially supported by the Italian Ministry of Foreign Affairs "Con il contributo del Ministero degli Affari Esteri, Direzione Generale per la Promozione del Sistema Paese".

References

1.
Alibart, F., Zamanidoost, E., Strukov, D.B.: Pattern classification by memristive crossbar circuits using ex situ and in situ training. Nature Communications (2013)
2. Chua, L.O.: Memristor: the missing circuit element. IEEE Transactions on Circuit Theory 18(5), 507–519 (1971)
3. Corinto, F., Ascoli, A.: A boundary condition-based approach to the modeling of memristor nanostructures. IEEE Trans. Circuits Syst. I 59, 2713–2726 (2012), doi:10.1109/TCSI.2012.2190563
4. Corinto, F., Ascoli, A.: Memristive diode bridge with LCR filter. Electronics Letters 48(14), 824–825 (2012)
5. Batas, D., Fiedler, H.: A memristor PSpice implementation and a new approach for magnetic flux-controlled memristor modeling. IEEE Transactions on Nanotechnology (2011)
6. Baldassi, C., Braunstein, A., Brunel, N., Zecchina, R.: Efficient supervised learning in networks with binary synapses. PNAS (2007)
7. Baldassi, C.: Generalization learning in a perceptron with binary synapses. Journal of Statistical Physics (2009)
8. Manem, H., Rajendran, J., Rose, G.S.: Stochastic gradient descent inspired training technique for a CMOS/Nano memristive trainable threshold gate array. IEEE Trans. Circuits Syst. I (2012)

Analogic Realization of a Non-linear Network with Re-configurable Structure as Paradigm for Real Time Analysis of Complex Dynamics

Carlo Petrarca, Soudeh Yaghouti, Lorenza Corti, and Massimiliano de Magistris

Dipartimento di Ingegneria Elettrica e delle Tecnologie dell'Informazione, University of Naples FEDERICO II, Via Claudio 21, I-80125 Napoli, Italy
{carlo.petrarca, soudeh.yaghouti, lorenza.corti, m.demagistris}@unina.it

Abstract. A novel experimental set-up realized for the real-time analysis of reconfigurable complex networks, with chaotic Chua's circuits as nodes, is considered.
It has been designed to easily perform large-scale experiments on networks of chaotic oscillators, possibly exploring in real time the parameter space in terms of topologies, coupling strengths and nodes' dynamics, with potential application in the area of neuro-computing and associative dynamic memories. A sample of the capabilities is given by considering diffusive coupling with a large range of coupling strengths in a set of topologies, and first experiments on a ring of 32 chaotic nodes are reported. Synchronization, chaotic waves and patterns are experimentally observed, and the potential of the realized set-up in terms of accuracy, flexibility and analysis time is fully revealed.

Keywords: Complex networks, nonlinear oscillatory networks, chaotic synchronization, chaotic waves.

1 Introduction

The exploration of complex networks, defined as possibly large ensembles of simple dynamical units with arbitrary and possibly evolving topological structure, is strongly motivated as paradigmatic of very different phenomena, from neurobiology to social science [1, 2]. Collective behavior (also referred to as "emerging dynamics") is much richer than the individual (stand-alone) dynamics, with complexity arising from the combination of non-linearity (local activity [3]) and topological structure. Such studies embrace a vast range of applications; it is well known that Cellular Neural/Nonlinear Networks (CNN) represent revolutionary paradigms for information processing [4, 5]. In particular, modeling neurons as non-linear oscillators has led to bio-inspired paradigms for neurocomputers (see [6] and references therein). In this context electronic systems have been clearly recognized as good candidates for modeling different real systems [7], mainly because of the availability of well-developed simulation tools and, in principle, the possibility of realizing prototypes as integrated structures.
A vast range of theoretical and numerical results is available in the literature, in particular for networks of interconnected non-linear oscillators [7–10].

© Springer International Publishing Switzerland 2015. S. Bassis et al. (eds.), Recent Advances of Neural Networks Models and Applications, Smart Innovation, Systems and Technologies 37, DOI: 10.1007/978-3-319-18164-6_37

Extensive experimental studies are rarer, especially when general-purpose and configurable networks are considered. We describe here the main features of a dedicated setup designed and realized as an analog simulator of configurable networks with high-complexity nodes (Chua's circuits) and complete control over the topology, type and strength of the interconnections. It allows real-time experiments on complex dynamics, with the double advantage of being more realistic than simulation and, at the same time, drastically reducing the time needed to obtain results. In this work the structure and the potential of the realized setup are presented, and a sample of the experimentally observed dynamics is reported.

2 The Experimental Setup

The structure and realization of the experimental setup have already been described in detail in previous papers [11, 12] and will not be repeated here; only the major features are briefly summarized. The network is based on a modular set of Chua's circuits, taken as a paradigmatic case of complex and chaotic dynamic nodes. The stand-alone dynamics of each node can be individually set onto periodic or chaotic trajectories by properly choosing the value of the Chua linear resistor R, allowing different regimes. The network nodes are interconnected via a fully configurable link network, with adjustable topology and settable coupling strength. Figure 1 depicts a schematic drawing of the Chua nodes and the interconnecting links, along with the symbols and parameters used in the text.
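The node dynamics just described are those of the classic Chua circuit. As a minimal sketch, a single uncoupled node can be integrated numerically in the standard dimensionless form; the parameters below are textbook double-scroll values, not the component values of the actual set-up in Fig. 1:

```python
import numpy as np

# Dimensionless Chua equations:
#   x' = alpha * (y - x - f(x)),   y' = x - y + z,   z' = -beta * y
# with the piecewise-linear Chua diode characteristic f(x).
alpha, beta = 9.0, 14.286          # textbook double-scroll parameters (assumed)
m0, m1 = -8.0 / 7.0, -5.0 / 7.0    # inner and outer diode slopes

def f(x):
    # three-segment piecewise-linear Chua diode
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def step(state, dt=1e-3):
    # one forward-Euler integration step
    x, y, z = state
    dx = alpha * (y - x - f(x))
    dy = x - y + z
    dz = -beta * y
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

state = np.array([0.1, 0.0, 0.0])
traj = []
for _ in range(100_000):           # integrate up to t = 100 (dimensionless)
    state = step(state)
    traj.append(state.copy())
traj = np.array(traj)              # bounded, chaotic double-scroll trajectory
```

In the set-up, the analogous regime change is obtained by tuning the physical resistor R rather than the dimensionless parameters.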
Figure 2 shows some of the topologies considered up to now.

Fig. 1. a) Chua's circuit schematic and reference parameters; b) Chua's diode characteristic; c) general schematic of the link network

A modular National Instruments USB multi-channel data acquisition system is used to measure and monitor the variables of interest (the nodes' states) in real time. Up to 64 state variables can be synchronously acquired by a single unit, at a 60 ksample/s rate per channel. The whole network is controlled via a USB interface from a PC running LabVIEW. Two execution modes are possible: a "control mode", which allows setting all the network parameters while displaying the signal waveforms in real time, and a "scan mode", which sweeps preset values of the network parameters (at present the topology is fixed at the beginning of each scan). Scan mode automatically sets, at each step, the chosen parameter (the link resistance value) according to the predefined sequence, lets the system reach its steady regime, then acquires and saves all system variables in records of up to 2500 samples per channel. A typical execution time for an accurate scan of 255 steps in the 32-node system is about 45 minutes, mainly limited by the setting time of the link network values. Aside from the experimental validation of theoretical conjectures and results, this allows very detailed analyses in a time far shorter than any simulation system.

Fig. 2. A set of experimentally explored topologies: ring, array, star, random, hub, near-all and all-to-all configurations with 4, 8 and 16 nodes, plus 16-node double ring, double array and 4×4 matrix

3 Experimental Results

The realized set-up allows in principle a wide exploration of the link topologies and parameter space, with real-time results.
In particular, the node oscillators can be set to different periodicity regimes as well as to chaotic behavior, and the link network can evolve in structure and link strength. Here we briefly report some results on chaotic synchronization [13], spatio-temporal patterns and waves, for the case of nominally identical nodes set in the double-scroll chaotic regime when uncoupled, with bilateral diffusive coupling, as a function of the coupling strength (equally weighted on the links). Other remarkable results, obtained with different versions of the same assembly, are given in [14] (with reference to directed links and pinning control) and [15] (with reference to links with dynamic elements).

Complete synchronization has been considered for all the network configurations shown in Figure 2 in the double-scroll chaotic regime, and a properly defined RMS distance [11] between waveforms is calculated from the measured data. Results are shown in Fig. 3, where this RMS distance (in %) between corresponding variables is plotted as a function of the link resistance values. A strong threshold mechanism for the loss of synchronization is easily distinguished, in extremely good agreement with the Master Stability Function (MSF) theoretical thresholds calculated as described in [16].

Fig. 3. Measured RMS distance for the 8-node topologies (ring, array, star, random, hub, second-near) of Figure 2, as a function of the link resistance R_c; solid vertical lines correspond to the theoretical thresholds calculated by MSF

An N = 32 node ring structure has been realized with the same assembly, and we report here, for the first time, some results on patterns and waves. In Figure 4 a typical output of the experiment is shown, for a low value of the link resistance (R_c = 198 Ω). Figure 4a illustrates the state-space v_C1,k–v_C2,k dynamics of each node; Fig. 4b the synchronization diagrams of v_C1,k, for k = 1..N; Fig. 4c the v_C1,k waveforms of the node system; Fig. 4d an averaged RMS distance between the node states, I_s(i,j). The dynamics of the nodes result in a single-scroll chaotic regime, and a space periodicity of N is recognized from the synchronization plots and from the entries of the I_s(i,j) distance matrix. A time-chaotic travelling (or "rotating") wave is clearly distinguishable from the waveform graphs. The exact space periodicity can easily be noted from the synchronization diagram of node 1 vs. node 32 (bottom right in Figure 4b). A complete scan over the coupling resistance parameter range has been carried out, with dynamical behaviors in very good agreement with the theoretical and numerical predictions given in [17].
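A synchronization index of the kind used above can be sketched as follows; this is a minimal pairwise RMS-distance matrix over measured node waveforms, expressed as a percentage of the overall RMS amplitude (the exact normalization adopted in [11] may differ):

```python
import numpy as np

def rms_distance_matrix(V):
    """V: array of shape (n_nodes, n_samples), one measured waveform per node.
    Returns the pairwise RMS distance in % of the overall RMS amplitude."""
    n = V.shape[0]
    scale = np.sqrt(np.mean(V ** 2))          # overall RMS amplitude
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.sqrt(np.mean((V[i] - V[j]) ** 2)) / scale * 100
    return D

# Example: two identical (synchronized) waveforms and one desynchronized one
t = np.linspace(0, 1, 2500)                   # 2500 samples, as in the scans
V = np.vstack([np.sin(2 * np.pi * 5 * t),
               np.sin(2 * np.pi * 5 * t),
               np.sin(2 * np.pi * 7 * t)])
D = rms_distance_matrix(V)
# D[0, 1] is 0 (complete synchronization); D[0, 2] is large
```

A threshold on the entries of such a matrix is what makes the loss-of-synchronization transition in Fig. 3 directly measurable from the acquired records.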
