2003.11.19

6.3 Synaptic Convergence to Centroids: AVQ Algorithms

Competitive learning adaptively quantizes the input pattern space $R^n$. A probability density function $p(x)$ characterizes the continuous distribution of patterns in $R^n$.

We shall prove that the competitive AVQ synaptic vectors $m_j$ converge exponentially quickly to pattern-class centroids and, more generally, that at equilibrium they vibrate about the centroids in a Brownian motion.
Competitive AVQ Stochastic Differential Equations

The pattern classes $D_1, \ldots, D_k$ partition the pattern space:

$$R^n = D_1 \cup D_2 \cup \cdots \cup D_k, \qquad D_i \cap D_j = \emptyset \ \text{if } i \neq j$$

The random indicator function of class $D_j$:

$$I_{D_j}(x) = \begin{cases} 1 & \text{if } x \in D_j \\ 0 & \text{if } x \notin D_j \end{cases}$$

Supervised learning algorithms depend explicitly on the indicator functions; unsupervised learning algorithms do not require this pattern-class information.

Centroid of $D_j$:

$$\bar{x}_j = \frac{\int_{D_j} x\, p(x)\, dx}{\int_{D_j} p(x)\, dx}$$
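The centroid ratio above can be approximated by a sample average: draw patterns from $p(x)$, keep those that fall in $D_j$, and average them. The sketch below assumes, purely for illustration, a standard normal density on $R^2$ and a hypothetical class $D_1$ = left half-plane; neither choice comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw samples from an assumed density p(x) (standard normal here) and
# estimate the centroid of the hypothetical class D1 = {x : x[0] < 0}
# as a sample average -- the Monte Carlo analogue of
#   x_bar_1 = (integral over D1 of x p(x) dx) / (integral over D1 of p(x) dx).
samples = rng.standard_normal((100_000, 2))
in_D1 = samples[:, 0] < 0          # indicator I_D1(x) for every sample
centroid_D1 = samples[in_D1].mean(axis=0)
print(centroid_D1)                 # first coordinate near -0.8, second near 0
```

For this truncated-normal class the exact first coordinate of the centroid is $-\sqrt{2/\pi} \approx -0.80$, so the printed estimate gives a quick sanity check.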
The stochastic unsupervised competitive learning law:

$$\dot{m}_j = S_j(y_j)[x - m_j] + n_j$$

We want to show that at equilibrium $m_j = \bar{x}_j$, or $E(m_j) = \bar{x}_j$.

As discussed in Chapter 4: $S_j \approx I_{D_j}(x)$.

The linear stochastic competitive learning law:

$$\dot{m}_j = I_{D_j}(x)[x - m_j] + n_j$$

The linear supervised competitive learning law:

$$\dot{m}_j = r_j(x) I_{D_j}(x)[x - m_j] + n_j, \qquad r_j(x) = I_{D_j}(x) - \sum_{i \neq j} I_{D_i}(x)$$
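A discrete-time version of the linear competitive law replaces $\dot{m}_j$ with a small step scaled by a learning rate. The following is a minimal sketch under assumptions not in the text: a fixed learning rate, the noise term modeled as optional Gaussian noise, and the function name `ucl_step` chosen for illustration. The `won` argument plays the role of the indicator $I_{D_j}(x)$ (1 if neuron $j$ won the competition for $x$, else 0).

```python
import numpy as np

def ucl_step(m_j, x, won, learning_rate=0.1, noise_scale=0.0, rng=None):
    """One discrete-time step of the linear competitive learning law:
    m_j(t+1) = m_j(t) + c * I_Dj(x) * [x - m_j(t)] + noise,
    where `won` stands in for the indicator I_Dj(x)."""
    noise = 0.0
    if noise_scale > 0.0:
        rng = rng or np.random.default_rng()
        noise = noise_scale * rng.standard_normal(np.shape(m_j))
    return m_j + learning_rate * won * (x - m_j) + noise

m = np.array([0.0, 0.0])
x = np.array([1.0, 2.0])
print(ucl_step(m, x, won=1, learning_rate=0.5))  # [0.5 1. ] -- halfway to x
print(ucl_step(m, x, won=0, learning_rate=0.5))  # [0. 0.] -- losers stay put
```

Note that only the winning synaptic vector moves; losing vectors (indicator 0) are unchanged, which is exactly what makes the law "competitive."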
The linear differential competitive learning law:

$$\dot{m}_j = \dot{S}_j [x - m_j] + n_j$$

In practice:

$$\dot{m}_j = \operatorname{sgn}[\dot{y}_j][x - m_j] + n_j, \qquad \operatorname{sgn}[z] = \begin{cases} 1 & \text{if } z > 0 \\ 0 & \text{if } z = 0 \\ -1 & \text{if } z < 0 \end{cases}$$
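The practical DCL law can be sketched in discrete time as follows; the fixed learning rate and the name `dcl_step` are illustrative assumptions, and the noise term is omitted for clarity. The sign of the change in the competitive signal, $\operatorname{sgn}[\dot{y}_j]$, decides whether the synaptic vector moves toward or away from the pattern.

```python
import numpy as np

def sgn(z):
    # Three-valued signum used by the DCL law: 1, 0, or -1.
    return (z > 0) - (z < 0)

def dcl_step(m_j, x, dy_j, learning_rate=0.1):
    """One discrete-time step of the differential competitive law:
    m_j(t+1) = m_j(t) + c * sgn(dy_j) * [x - m_j(t)]."""
    return m_j + learning_rate * sgn(dy_j) * (x - m_j)

m = np.array([0.0, 0.0])
x = np.array([2.0, 2.0])
print(dcl_step(m, x, dy_j=1.5, learning_rate=0.5))   # [1. 1.] -- toward x
print(dcl_step(m, x, dy_j=-1.5, learning_rate=0.5))  # [-1. -1.] -- away from x
print(dcl_step(m, x, dy_j=0.0, learning_rate=0.5))   # [0. 0.] -- no change
```

So a rising competitive signal attracts the synaptic vector to the pattern, a falling signal repels it, and an unchanged signal leaves it fixed.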
Competitive AVQ Algorithms

1. Initialize the synaptic vectors: $m_i(0) = x(i)$, $i = 1, \ldots, m$.
2. For a random sample $x(t)$, find the closest ("winning") synaptic vector $m_j(t)$:

   $$\|m_j(t) - x(t)\| = \min_i \|m_i(t) - x(t)\|$$

   where $\|x\|^2 = x_1^2 + \cdots + x_n^2$ gives the squared Euclidean norm of $x$.
3. Update the winning synaptic vector $m_j(t)$ by the UCL, SCL, or DCL learning algorithm.
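The three steps above can be sketched as a single training loop. This is a minimal illustration using the UCL update (winner's indicator = 1, fixed learning rate, noise omitted); the function name `avq_ucl`, the synthetic two-cluster data, and all parameter values are assumptions, not from the text.

```python
import numpy as np

def avq_ucl(samples, num_clusters, learning_rate=0.05, seed=0):
    """Sketch of the three-step competitive AVQ procedure with the UCL update.
    `samples` is an (N, n) array of training patterns."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize synaptic vectors with the first `num_clusters` samples.
    m = samples[:num_clusters].astype(float)
    # Present the samples in random order.
    for t in rng.permutation(len(samples)):
        x = samples[t]
        # Step 2: find the winner by minimum (squared) Euclidean distance.
        j = np.argmin(np.sum((m - x) ** 2, axis=1))
        # Step 3: UCL update -- move only the winner toward the pattern.
        m[j] += learning_rate * (x - m[j])
    return m

# Two well-separated Gaussian clusters with means (0, 0) and (5, 5).
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.3, (500, 2)),
                  rng.normal(5.0, 0.3, (500, 2))])
rng.shuffle(data)
centroids = avq_ucl(data, num_clusters=2)
print(np.round(centroids, 1))  # one row near (0, 0), the other near (5, 5)
```

On this synthetic data the two synaptic vectors end up near the two cluster centroids, which is exactly the convergence behavior the section sets out to prove.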