associative memory by recurrent neural networks with delay elements

Chemistry report: "Synchronization of nonidentical chaotic neural networks with leakage delay and mixed time-varying delays" pptx

Upload date: 21/06/2014, 02:20
... nonidentical chaotic neural networks with time delays. Neural Netw 2009, 22:869-874. 34. Gan QT, Xu R, Kang XB: Synchronization of chaotic neural networks with mixed time delays. Commun Nonlinear ... of delayed neural networks with infinite gain. IEEE Trans Neural Netw 2005, 16:1449-1463. 2. Chen TP, Lu WL, Chen GR: Dynamical behaviors of a large class of general delayed neural networks. Neural ... bidirectional associative memory networks with time delay. Nonlinear Anal 2007, 66:1558-1572. 6. Xu SY, Lam J: A new approach to exponential stability analysis of neural networks with time-varying delays....
evolving recurrent neural networks are super-Turing

Upload date: 28/04/2014, 10:06
... of recurrent neural networks is intimately related to the nature of their synaptic weights. In particular, neural networks with static rational weights are known to be Turing equivalent, and recurrent networks ... various kinds of neural networks. We will further prove that evolving (rational and real) recurrent neural networks are computationally equivalent to (non-evolving) real recurrent neural networks. Therefore, ... Turing machines and rational recurrent neural networks ensures that the above recursive procedure can indeed be performed by some non-evolving rational recurrent neural sub-network [5]. Since...
interactive evolving recurrent neural networks

Upload date: 28/04/2014, 10:06
... computational power of interactive recurrent neural networks. Submitted to Neural Comput. Cabessa, J. and Siegelmann, H. T. (2011b). Evolving recurrent neural networks are super-Turing. In International ... stream o_s produced by M. Finally, an ω-translation ψ: {0,1}^ω → {0,1}^{≤ω} is said to be realizable by some interactive Turing machine with ... particular I-Ev-RNN[R] by definition. 5 THE COMPUTATIONAL POWER OF INTERACTIVE EVOLVING RECURRENT NEURAL NETWORKS In this section, we prove that interactive evolving recurrent neural networks are computationally...
the expressive power of analog recurrent neural networks on infinite input streams

Upload date: 28/04/2014, 10:06
... direction by providing a characterization of the computational power of analog recurrent neural networks working on infinite input streams. More precisely, we consider analog recurrent neural networks as ... seminal work by Siegelmann and Sontag about the computational power of analog recurrent neural networks [8,10,11]. Hence, the consideration of the same model of synchronous analog neural networks ... 2012. Communicated by J.N. Kok. Keywords: analog neural networks; analog computation; topology; Borel sets; analytic sets; ω-automata; Turing machines. Abstract: We consider analog recurrent neural networks...
An analogue recurrent neural network for trajectory learning and other industrial applications

Upload date: 28/04/2014, 10:16
... In this paper we discussed the application of an analogue recurrent neural network to learn and track the dynamics of an industrial robot. The observations made from this study suggest that RNNs (similar to those in Fig. 1) can be applied to the control of real systems that manifest complex properties - specifically, high dimensionality, non-linearity and the need for continuous action. Examples of these real systems include aircraft control, satellite stabilization, and robot manipulator control. We conclude that robust controllers of partially observable (non-Markov) systems require real-time electronic systems that can be designed as single-chip Integrated Circuits (CMOS IC). This paper explored such techniques and identified suitable circuits. VIII. REFERENCES [1] S. Townley, et al., "Existence and Learning of Oscillations in Recurrent Neural Networks", IEEE Trans. Neural Networks 11: 205-214, 2000. [2] E. Dijk, "Analysis of Recurrent Neural Networks with application to speaker independent phoneme recognition", M.Sc Thesis, University of Twente, June 1999. [3] G. Cauwenberghs, "An Analog VLSI Recurrent Neural Network Learning a Continuous-Time Trajectory", IEEE Trans. Neural Networks 7: 346-361, Mar. 1996. [4] M. Mori et al., "Cooperative and Competitive Network Suitable for Circuit Realization", IEICE Trans. Fundamentals, vol. E85-A, no. 9, 2127-2134, Sept. 2002. [5] H.J. Mattausch, et al., "Compact associative-memory architecture with fully parallel search capability for the minimum Hamming distance", IEEE J. Solid-State Circuits, vol. 37, pp. 218-227, Feb. 2002. [6] G. Indiveri, "A neuromorphic VLSI device for implementing 2-D selective attention systems", IEEE Trans. Neural Networks, vol. 12, pp. 1455-1463, Nov. 2001. [7] C.K. Kwon and K. Lee, "Highly parallel and energy-efficient exhaustive minimum distance search engine using hybrid digital/analog circuit techniques", IEEE Trans. VLSI Syst., vol. 9, pp. 726-729, Oct. 2001. [8] T. Asai, M. Ohtani, and H. Yonezu, "Analog Integrated Circuits for the Lotka-Volterra Competitive Neural Networks", IEEE Trans. Neural Networks, vol. 10, pp. 1222-1231, Sep. 1999. [9] Donckers, et al., "Design of complementary low-power CMOS architectures for loser-take-all and winner-take-all", Proc. of 7th Int. Conf. on Microelectronics for Neural, Fuzzy and Bio-inspired Systems, Spain, Apr. 1999. [10] A. Ruiz, D.H. Owens and S. Townley, "Existence, learning and replication of limit cycles in recurrent neural networks", IEEE Transactions on Neural Networks, vol. 9, pp. 651-661, Sept. 1998. ... adjustable time constants at the level of the synaptic contributions [5-7]. An alternative type of RNN that can be described by the differential equations given below can also be built with the electronic neurons discussed in the next section. We see that the above schematic (Fig. 1) implements the neural network with only two dynamic neurons (the neuron circuit is shown in Fig. 2). The equations of the branch currents (Im1 and Im2) discussed in the next section suggest the synapses are suitable to implement both types of RNN represented by either (1) or (2). The simulated network contained six fully interconnected recurrent neurons with continuous-time dynamics. The simulated neural network can be described by a general set of equations such as the ones given below.
τ dy_i/dt = y_i + W_i − exp(y_i) − λ Σ_j exp(y_j) (1), τ dy_i/dt = y_i + W_i − (1 − λ) exp(y_i) − 2λ Σ_j exp(y_j) (2), with x_i(t) the neuron state variables constituting the outputs of the network, u_i(t) the external inputs to the network, and σ(·) a sigmoidal activation function. The value for τ is kept fixed and uniform in the present implementation. There are several free parameters to be optimally adjusted by the learning process. For example, if we implement a fully interconnected RNN, there will be 36 connection strengths W_ij and 6 thresholds θ_j. The so-called triggering nonlinear function of the neurons associated with this network is taken as tanh(x_i) and is shown in Fig. 1 as V_I(x_i). However, it is likely that a larger class of triggering functions with the same properties of oddness, boundedness, continuity, monotonicity and smoothness could be considered. Such triggering functions include arctan(x), (1 + e^(−x))^(−1), e^(−x^2), etc. In the ... 2005 3rd IEEE International Conference on Industrial Informatics (INDIN). An analogue recurrent neural network for trajectory learning and other industrial applications. Ganesh Kothapalli, Edith Cowan University, School of Engineering and Mathematics, Joondalup, WA 6027, Australia. e-mail: g.kothapalli@ecu.edu.au. Abstract: A real-time analogue recurrent neural network (RNN) can extract and learn the unknown dynamics (and features) of a typical control system such as a robot manipulator. The task at hand is a tracking problem in the presence of disturbances. With reference to the tasks assigned to an industrial robot, one important issue is to determine the motion of the joints and the effector of the robot. In order to model robot dynamics we use a neural network that can be implemented in hardware. The synaptic weights are modelled as variable gain cells that can be implemented with a few MOS transistors. The network output signals portray the periodicity and other characteristics of the input signal in unsupervised mode. For the specific purpose of demonstrating the trajectory learning capabilities, a periodic signal with varying characteristics is used. The developed architecture, however, allows for more general learning tasks typical in applications of identification and control. The periodicity of the input signal ensures convergence of the output to a limit cycle. On-line versions of the synaptic update can be formulated using simple CMOS circuits. Because the architecture depends on the network generating a stable limit cycle, and consequently a periodic solution which is robust over an interval of parameter uncertainties, we currently place the restriction of a periodic format for the input signals. The simulated network contains interconnected recurrent neurons with continuous-time dynamics. The system emulates random-direction descent of the error as a multidimensional extension to the stochastic approximation. To achieve unsupervised learning in recurrent dynamical systems we propose a synapse circuit which has a very simple structure and is suitable for implementation in VLSI. Index Terms - Artificial neural network (ANN), Electronic Synapse, trajectory tracking, Recurrent Neurons. I. INTRODUCTION Recently, interest has been increasing in using neural networks for the identification of dynamic systems. Feedforward neural networks are used to learn static input-output maps. That is, given an input set that is mapped into a corresponding output set by some unknown map, the feedforward net is used to learn this map.
The extensive use of these networks is mainly due to their powerful approximation capabilities. Similarly, recurrent neural networks are natural candidates for learning dynamically varying input-output maps. For instance, one widely used class of recurrent neural networks is the so-called Hopfield networks. In this case, the parameters of the network have a particular symmetric structure and are chosen so that the overall dynamics of the network are asymptotically stable [1]. If the parameters do not have a symmetric structure the analysis of the network dynamics becomes intractable. Despite the complexity of the internal dynamics of recurrent networks, it has been shown empirically that certain configurations are capable of learning non-constant time-varying motions. The capability of RNNs of adapting themselves to learn certain specified periodic motions is due to their highly nonlinear dynamics. So far, certain types of cyclic recurrent neural configurations have been studied. These types of recurrent neural networks are well known, especially in the neurobiology area, where they have been studied for about twenty years. The existence of oscillating behaviour in certain cellular systems has also been documented [1-3,10]. Such cellular systems have the structure of what, in engineering applications, has become known as a recurrent neural network. Thus the neural network behaviour depends not only on the current input (as in feedforward networks) but also on previous operations of the network [4]. II. ANN FOR TRAJECTORY TRACKING In this paper we treat a neural network configuration related to control systems. We describe a class of recurrent neural networks which are able to learn and replicate autonomously a particular class of time-varying periodic signals. Neural networks are used to develop a model-based control strategy for robot position control. In this paper we investigate the feasibility of applying single-chip electronic (CMOS IC) solutions to track robot trajectories. ...
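The excerpt above only sketches the simulated six-neuron continuous-time RNN in fragments. As a minimal illustration, the Python sketch below assumes the standard fully interconnected continuous-time form τ dx_i/dt = −x_i + Σ_j W_ij tanh(x_j) + θ_i + u_i(t) with forward-Euler integration; the function and variable names are hypothetical and this is not the paper's circuit-level model.

```python
import numpy as np

def simulate_ctrnn(W, theta, u, tau=1.0, dt=0.01, steps=5000, x0=None):
    """Euler integration of a fully interconnected continuous-time RNN:
    tau * dx_i/dt = -x_i + sum_j W[i, j] * tanh(x_j) + theta[i] + u_i(t)."""
    n = W.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    trajectory = np.empty((steps, n))
    for k in range(steps):
        drive = W @ np.tanh(x) + theta + u(k * dt)   # synaptic drive plus external input
        x = x + dt * (-x + drive) / tau              # forward-Euler state update
        trajectory[k] = x
    return trajectory

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 6                                   # six fully interconnected neurons, as in the excerpt
    W = rng.normal(scale=1.2, size=(n, n))  # 36 connection strengths W_ij
    theta = rng.normal(scale=0.1, size=n)   # 6 thresholds
    u = lambda t: 0.5 * np.sin(2 * np.pi * t) * np.ones(n)  # periodic input to drive a limit cycle
    traj = simulate_ctrnn(W, theta, u)
    print(traj[-5:])                        # last few network states
```

With a periodic drive, the state trajectory typically settles onto a limit cycle, which is the behaviour the excerpt relies on for trajectory learning.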
programming neural networks with encog 2 in java

Upload date: 29/04/2014, 14:54
... Recurrent neural networks are a special class of neural networks where the layers do not simply flow forward, like the feedforward neural networks that are so common. Chapter 12, Recurrent Neural Networks ... will automatically create such a neural network for you. ...
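The Encog excerpt contrasts recurrent layers, which feed their own previous activation back in, with the strictly forward flow of feedforward networks. Below is a minimal generic sketch of that recurrence pattern as an Elman-style step in Python with NumPy; it is an illustration only, not Encog's Java API, and all names (elman_step, W_rec, etc.) are hypothetical.

```python
import numpy as np

def elman_step(x, h_prev, W_in, W_rec, W_out, b_h, b_y):
    """One step of an Elman-style recurrent layer: the hidden layer sees the
    current input *and* a copy of its own previous activation (the context)."""
    h = np.tanh(W_in @ x + W_rec @ h_prev + b_h)   # recurrent feedback distinguishes this from a feedforward layer
    y = W_out @ h + b_y
    return y, h

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_in, n_hid, n_out = 1, 5, 1
    W_in = rng.normal(size=(n_hid, n_in))
    W_rec = rng.normal(size=(n_hid, n_hid))
    W_out = rng.normal(size=(n_out, n_hid))
    b_h, b_y = np.zeros(n_hid), np.zeros(n_out)

    h = np.zeros(n_hid)                      # context starts empty
    y = np.zeros(n_out)
    for value in np.sin(np.linspace(0, 6, 30)):
        y, h = elman_step(np.array([value]), h, W_in, W_rec, W_out, b_h, b_y)
    print(y)                                 # output after processing the whole sequence
```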
Chemistry report: "Error mapping controller: a closed loop neuroprosthesis controlled by artificial neural networks" doc

Upload date: 19/06/2014, 10:20
... the RMSE with respect to the desired knee ... Figure 4 (EMC vs traditional controllers without fatigue): a comparison of the performance obtained by the ... flexion extension, which is composed of a feedforward inverse model and a feedback controller, both implemented using neural networks. The training of the networks is conceived to avoid the need for a therapist ... E, Ferrarin M, Ferrigno G: Functional electrical stimulation controlled by artificial neural networks: pilot experiments with simple movements are promising for rehabilitation applications. Funct...
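The excerpt describes a controller built from a feedforward inverse model plus a feedback controller, both realized as neural networks. As a rough illustration of that composition only (not the paper's error mapping controller), here is a minimal Python sketch in which the trained inverse model is replaced by a stand-in callable and the feedback part by a proportional term; all names and gains are hypothetical.

```python
def error_mapping_control(desired, measured, inverse_model, kp=2.0):
    """Feedforward term from an (assumed pre-trained) inverse model of the plant,
    plus a simple proportional feedback correction of the tracking error."""
    feedforward = inverse_model(desired)        # e.g. a trained network mapping desired trajectory -> command
    feedback = kp * (desired - measured)        # feedback controller corrects the residual error
    return feedforward + feedback

if __name__ == "__main__":
    inverse_model = lambda angle: 0.8 * angle   # stand-in for the learned inverse dynamics
    u = error_mapping_control(desired=30.0, measured=26.5, inverse_model=inverse_model)
    print(f"command: {u:.2f}")
```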
Chemistry report: "Research Article µ-Stability of Impulsive Neural Networks with Unbounded Time-Varying Delays and Continuously Distributed Delays" doc

Upload date: 21/06/2014, 05:20
... stability analysis for neural networks with time-delay has attracted a large amount of research interest, and many sufficient conditions have been proposed to guarantee the stability of neural networks with various ... concerned with the problem of μ-stability of impulsive neural systems with unbounded time-varying delays and continuously distributed delays. Some μ-stability criteria are derived by using the ... μ-stability of delayed neural networks with or without uncertainties via different approaches. Those results can be applied to neural networks with unbounded time-varying delays. Moreover, few results...
Chemistry report: "Research Article Existence and Stability of Antiperiodic Solution for a Class of Generalized Neural Networks with Impulses and Arbitrary Delays on Time Scales" ppt

Upload date: 21/06/2014, 07:20
... generalized neural networks with impulses and arbitrary delays. This class of generalized neural networks includes many continuous or discrete time neural networks such as Hopfield-type neural networks, ... anti-periodic Cohen-Grossberg neural networks with delays and impulses. 1. Introduction In this paper, we consider the following generalized neural networks with impulses and arbitrary delays on time scales: x^Δ(t) = ... cellular neural networks, Cohen-Grossberg neural networks, and so on. To the best of our knowledge, the known results about the existence of anti-periodic solutions for neural networks are all done by...
Medical report: "Prediction of protein–protein interaction sites in heterocomplexes with neural networks" ppt

Upload date: 21/02/2014, 15:20
... by its position in the ... Prediction of protein–protein interaction sites in heterocomplexes with neural networks (Eur. J. Biochem. 269, © FEBS 2002), Piero ... the conserved HPD motif is implicated in the interaction with DnaK [41], and one of the residues of the motif is also predicted by neural networks. As a whole, the predicted residues indicate the ... feed-forward neural network trained with the standard back-propagation algorithm [35]. The network system is trained/tested to predict whether each surface residue (represented by a Cα atom)...
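The excerpt says each surface residue is classified by a feed-forward network trained with standard back-propagation. As a hedged sketch of that kind of setup, the following Python snippet trains a small scikit-learn MLP on synthetic per-residue feature vectors; the features and labels are random stand-ins, not the paper's sequence-profile inputs, and scikit-learn availability is assumed.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic stand-in data: one feature vector per surface residue
# (the paper uses sequence-profile windows), with a binary interaction label.
X = rng.normal(size=(1000, 20))
y = (X[:, :3].sum(axis=1) + 0.3 * rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Feed-forward network trained with back-propagation, one hidden layer.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```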
Scientific report: "Association-based Natural Language Processing with Neural Networks" ppt

Upload date: 17/03/2014, 08:20
... conversion process reinforced with a neural network handler. The network is used by the neural network handler and word associations are done in parallel with kana-kanji conversion. ... context switches. To avoid these problems without increasing computational costs, we propose the use of the associative functionality of neural networks. The use of association is a natural ... constraints in homonym selection. In the same vein, associative information should be considered a weak constraint because associations by neural networks are not always reliable. Possible conflict...
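The excerpt proposes exploiting the associative functionality of neural networks as a weak constraint in homonym selection. Purely as a generic illustration of neural association (not the paper's kana-kanji handler), here is a minimal Hopfield-style associative memory in Python: patterns are stored with a Hebbian rule and recalled from a corrupted cue; all names and sizes are hypothetical.

```python
import numpy as np

def hebbian_weights(patterns):
    """Store +/-1 patterns with the Hebbian outer-product rule, zero diagonal."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=10):
    """Synchronous updates: each step pulls the cue toward the nearest stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    patterns = rng.choice([-1, 1], size=(3, 64))        # three stored "associations"
    noisy = patterns[0].copy()
    flip = rng.choice(64, size=10, replace=False)       # corrupt 10 of 64 bits
    noisy[flip] *= -1
    restored = recall(hebbian_weights(patterns), noisy)
    print("bits recovered:", int((restored == patterns[0]).sum()), "/ 64")
```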
leg motion classification with artificial neural networks

Upload date: 28/04/2014, 10:01
... systems containing neural networks. IEEE Trans. Neural Netw. 1991, 2, 252–262. 21. Jordan, M.I.; Jacobs, R.A. Learning to control an unstable system with forward modeling. In Advances in Neural Information ... decomposition; feature extraction; pattern recognition; artificial neural networks. 1. Introduction Sensor networks, particularly wireless sensor networks, have received considerable attention since the ... 25. Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice Hall: Upper Saddle River, NJ, USA, 1999. 26. Bishop, C.M. Neural Networks for Pattern Recognition; Oxford...
Chemistry report: "Research Article Class-Based Fair Code Allocation with Delay Guarantees for OVSF-CDMA and VSF-OFCDM in Next-Generation Cellular Networks" ppt

Upload date: 21/06/2014, 11:20
... the window rate degradation, denoted by WRD_{i,j}, for window j, by subtracting the minimum of R_{i,avg} and R_{i,rcv} from R_{i,avg} and then dividing the result by R_{i,avg} (i.e., WRD_{i,j} = (R_{i,avg} − min(R_{i,avg}, R_{i,rcv})) / R_{i,avg}) ... conditions, and (ii) a packet has a delay bound of 1 packet transmission time, which is equal to the size of the packet divided by the requested average rate of the flow. With these assumptions, Figure ... these assumptions, Figure 6 shows the probability of delay violations experienced by real-time flows. In case of CFCA and CFC-DBA, the number of delay violations of real-time packets is 0 at all...
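The excerpt defines the window rate degradation in words, and the definition is simple enough to restate as a small helper. A minimal Python sketch, with hypothetical function and argument names:

```python
def window_rate_degradation(r_avg, r_rcv):
    """WRD = (R_avg - min(R_avg, R_rcv)) / R_avg: zero when the received rate
    meets the requested average rate, approaching 1 as it falls short."""
    return (r_avg - min(r_avg, r_rcv)) / r_avg

if __name__ == "__main__":
    print(window_rate_degradation(384.0, 384.0))  # 0.0, no degradation in this window
    print(window_rate_degradation(384.0, 288.0))  # 0.25, received rate 25% below requested
```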
Chemistry report: "Research Article Random Field Estimation with Delay-Constrained and Delay-Tolerant Wireless Sensor Networks" pptx

Upload date: 21/06/2014, 17:20
... pages doi:10.1155/2010/102460 Research Article Random Field Estimation with Delay-Constrained and Delay-Tolerant Wireless Sensor Networks. Javier Matamoros and Carles Antón-Haro, Centre Tecnològic ... estimation with wireless sensor networks. We consider two encoding strategies, namely, Compress-and-Estimate (C&E) and Quantize-and-Estimate (Q&E), which operate with and without side ... interest: delay-constrained networks, in which the observations collected in a particular timeslot must be immediately encoded and conveyed to the Fusion Center (FC); delay-tolerant (DT) networks, ...
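The excerpt contrasts Quantize-and-Estimate and Compress-and-Estimate encodings for field estimation at a Fusion Center. As a toy, hedged illustration of the Q&E idea only (assuming a single common field value, Gaussian sensing noise, and a uniform scalar quantizer, none of which come from the paper), here is a short Python sketch:

```python
import numpy as np

def quantize(values, step=0.25):
    """Uniform scalar quantizer: each sensor sends only its quantized observation (Q&E)."""
    return np.round(values / step) * step

def fusion_center_estimate(quantized_obs):
    """The FC estimates the underlying field value by averaging the received samples."""
    return float(np.mean(quantized_obs))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    theta = 1.7                                       # field value observed by all sensors
    observations = theta + 0.3 * rng.normal(size=50)  # sensing noise at 50 nodes
    estimate = fusion_center_estimate(quantize(observations))
    print(f"Q&E estimate: {estimate:.3f} (true value {theta})")
```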
