Neural Networks and Fuzzy Systems
Chapter 6  Architecture and Equilibria
Student: 李琦    Advisor: 高新波
6.1 Neural Network as Stochastic Gradient System

Neural network models are classified by their synaptic connection topologies and by how learning modifies those topologies; a short code sketch contrasting supervised and unsupervised encoding follows the lists below.

Synaptic connection topologies:
1. Feedforward: no closed synaptic loops
2. Feedback: closed synaptic loops or feedback pathways

How learning modifies the connection topologies:
1. Supervised learning: uses class-membership information of the training samples
2. Unsupervised learning: uses unlabelled training samples
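To make the encode axis concrete, here is a minimal NumPy sketch (not from the slides; the data, targets, learning rates, and the choice of an LMS rule and an Oja-style Hebbian rule are illustrative assumptions). The supervised update consumes target information, while the unsupervised update sees only the unlabelled samples.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 4))                 # unlabelled training samples
d = x @ np.array([1.0, -2.0, 0.5, 0.0])       # hypothetical targets for the supervised case

# Supervised encoding (LMS): the error (d - y) uses the target information.
w_lms = np.zeros(4)
for xi, di in zip(x, d):
    y = w_lms @ xi
    w_lms += 0.01 * (di - y) * xi             # stochastic gradient descent on squared error

# Unsupervised encoding (Oja-style Hebbian rule): no targets anywhere.
w_heb = rng.normal(size=4)
for xi in x:
    y = w_heb @ xi
    w_heb += 0.01 * y * (xi - y * w_heb)      # Hebbian correlation term with weight decay
```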
6.1 Neural Network as Stochastic Gradient System

Neural network taxonomy (decode: feedforward vs. feedback; encode: supervised vs. unsupervised):

Feedforward, supervised:
  Gradient descent, LMS, BackPropagation, Reinforcement Learning

Feedforward, unsupervised:
  Vector Quantization, Self-Organizing Maps, Competitive Learning, Counter-propagation

Feedback, supervised:
  Recurrent BackPropagation

Feedback, unsupervised:
  RABAM, Brownian annealing, Boltzmann learning, ABAM, ART-2, BAM-Cohen-Grossberg Model, Hopfield circuit, Brain-State-in-a-Box, Masking field, Adaptive Resonance (ART-1, ART-2)
6.2 Global Equilibria: Convergence and Stability

Three dynamical systems operate in a neural network:
1. the synaptic dynamical system $\dot{M}$
2. the neuronal dynamical system $\dot{x}$
3. the joint neuronal-synaptic dynamical system $(\dot{x}, \dot{M})$

Historically, neural engineers have studied the first or second system in isolation: learning in feedforward neural networks, and neuronal stability in nonadaptive feedback neural networks. RABAM and ART networks depend on joint equilibration of the synaptic and neuronal dynamical systems.
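The joint system $(\dot{x}, \dot{M})$ can be pictured with a small simulation. The sketch below assumes an additive activation model and a signal-Hebbian learning law integrated by Euler steps on two time scales; these particular equations, the signal function, and the constants are illustrative choices rather than the chapter's definitions.

```python
import numpy as np

def signal(x):
    """Bounded monotone signal function (logistic sigmoid)."""
    return 1.0 / (1.0 + np.exp(-x))

n = 5
rng = np.random.default_rng(1)
x = rng.normal(size=n)               # neuronal activations (fast variables)
M = 0.1 * rng.normal(size=(n, n))    # synaptic weights (slow variables)
I = rng.normal(size=n)               # constant external input
dt = 0.01

for _ in range(20000):
    S = signal(x)
    dx = -x + M.T @ S + I            # neuronal dynamical system: x-dot
    dM = -M + np.outer(S, S)         # signal-Hebbian synaptic system: M-dot
    x += dt * dx                     # neurons evolve on the fast time scale
    M += 0.01 * dt * dM              # synapses evolve about 100 times more slowly
```

Holding M fixed recovers a nonadaptive feedback network; the joint analysis that RABAM and ART require keeps both equations running at once.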
6.2 Global Equilibria: Convergence and Stability

Equilibrium is steady state (for fixed-point attractors).
Convergence is synaptic equilibrium: $\dot{M} = 0$.
Stability is neuronal equilibrium: $\dot{x} = 0$.
More generally, neural signals reach steady state even though the activations still change. We denote steady state in the neuronal field $F_X$ by $\dot{F}_X = 0$.
Global stability: $\dot{x} = 0$ and $\dot{M} = 0$.

Stability-convergence dilemma: neurons fluctuate faster than synapses fluctuate, and convergence undermines stability.
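As a small illustration (the function name and tolerance are assumptions, not notation from the text), these conditions can be checked numerically by testing whether the neuronal and synaptic derivatives have vanished:

```python
import numpy as np

def equilibrium_status(dx, dM, tol=1e-6):
    """Classify a joint (x, M) state from its current derivatives.

    dx : neuronal derivatives (x-dot); dM : synaptic derivatives (M-dot).
    """
    stable = np.linalg.norm(dx) < tol      # neuronal equilibrium: x-dot ~ 0
    converged = np.linalg.norm(dM) < tol   # synaptic equilibrium: M-dot ~ 0
    if stable and converged:
        return "globally stable"           # x-dot = 0 and M-dot = 0
    if stable:
        return "stable (neurons at rest, synapses still learning)"
    if converged:
        return "converged (synapses at rest, neurons still moving)"
    return "neither"
```

Because neurons fluctuate much faster than synapses, $\dot{x}$ typically vanishes long before $\dot{M}$ does; the dilemma is that continued synaptic change ($\dot{M} \neq 0$) can disturb a neuronal equilibrium that has already formed.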