SMCB-E-02252003-0112.R1

A Multi-Agent Genetic Algorithm for Global Numerical Optimization

ZHONG Weicai, LIU Jing, XUE Mingzhi, and JIAO Licheng, Senior Member, IEEE
Abstract—In this paper, multi-agent systems and genetic algorithms are integrated to form a new algorithm, Multi-Agent Genetic Algorithm (MAGA), for solving the global numerical optimization problem. An agent in MAGA represents a candidate solution to the optimization problem in hand. All agents live in a latticelike environment, with each agent fixed on a lattice-point. In order to increase energies, they compete or cooperate with their neighbors, and they can also use knowledge. Making use of these agent-agent interactions, MAGA realizes the purpose of minimizing the objective function value. Theoretical analyses show that MAGA converges to the global optimum. In the first part of the experiments, 10 benchmark functions are used to test the performance of MAGA, and the scalability of MAGA along the problem dimension is studied with great care. The results show that MAGA achieves a good performance when the dimensions are increased from 20 to 10,000. Moreover, even when the dimensions are increased to as high as 10,000, MAGA can still find high-quality solutions at a low computational cost. Therefore, MAGA has good scalability and is a competent algorithm for solving high-dimensional optimization problems. To the best of our knowledge, no researchers have ever optimized functions with 10,000 dimensions by means of evolution. In the second part of the experiments, MAGA is applied to a practical case, the approximation of linear systems, with a satisfactory result.

Index Terms—Genetic algorithms, linear system, multi-agent systems, numerical optimization.

I. INTRODUCTION

GENETIC algorithms (GAs) are stochastic global optimization methods inspired by the biological mechanisms of evolution and heredity, which were first developed by Holland in the 1960s [1]. In recent years, GAs have been widely used for numerical optimization, combinatorial optimization, classifier systems, and many other engineering problems [2]-[4].
Global numerical optimization problems arise in almost every field of science, engineering, and business. Since many of these problems cannot be solved analytically, GAs have become one of the popular methods for addressing them. But the major problem of GAs is that they may be trapped in the local optima of the objective function. Therefore, various new methods have been proposed, such as combined mutation operators [5], the macroevolutionary algorithm [6], the immune genetic algorithm [7], the orthogonal genetic algorithm [8], the microgenetic algorithm [9], and so on. These algorithms proved to be effective and boosted the development of genetic algorithms.

Agent-based computation has been studied for several years in the field of distributed artificial intelligence [10]-[13] and has been widely used in other branches of computer science [14], [15]. Problem solving is an area that many multi-agent-based applications are concerned with. It includes the following subareas: distributed solutions to problems, solving distributed problems, and distributed techniques for problem solving [12], [13].

Manuscript received February 25, 2003. This work was supported by the National Natural Science Foundation of China under Grant 60133010.
ZHONG Weicai is with the National Key Laboratory for Radar Signal Processing and the Institute of Intelligent Information Processing, Xidian University, Xi'an, 710071, China (phone: 86-029-8209786; fax: 86-029-8201023; e-mail: neouma@163.com).
LIU Jing is with the National Key Laboratory for Radar Signal Processing, Xidian University, Xi'an, 710071, China.
XUE Mingzhi is with the National Key Laboratory for Radar Signal Processing, Xidian University, Xi'an, 710071, China, and the Department of Mathematics, Shangqiu Teachers College, Shangqiu, 476000, China.
JIAO Licheng is with the National Key Laboratory for Radar Signal Processing, Xidian University, Xi'an, 710071, China.
Reference [15] introduced an application of distributed techniques for solving constraint satisfaction problems. The authors solved the 7000-queen problem by an energy-based multi-agent model. Enlightened by them, this paper integrates multi-agent systems with GAs to form a new algorithm, Multi-Agent Genetic Algorithm (MAGA), for solving the global numerical optimization problem. In MAGA, all agents live in a latticelike environment. Making use of the search mechanism of GAs, MAGA realizes the ability of agents to sense and act on the environment that they live in. During the process of interacting with the environment and other agents, each agent increases its energy as much as possible, so that MAGA can achieve the ultimate purpose of minimizing the objective function value.

Similar to MAGA, cellular genetic algorithms [16]-[18] also use a lattice-based population. In cellular GAs, each individual is located in a cell of the lattice. All operations of cellular GAs and traditional GAs are identical except that there is a neighborhood structure in the former while there is none in the latter. In essence, cellular GAs are greedy techniques and can present the same problem of premature convergence as traditional GAs [18]. That is, cellular GAs are only a kind of technique for enabling a fine-grained parallel implementation of GAs. But in MAGA, each individual is considered as an agent, which has its own purpose and behaviors. The experimental results show that MAGA achieves a good performance even for functions with 10,000 dimensions, which illustrates that MAGA overcomes the problem of premature convergence of traditional GAs in some
degree.

The rest of this paper is organized as follows: Section II describes MAGA and analyzes its convergence. Sections III and IV describe the experimental studies on the problems of global numerical optimization and the optimal approximation of linear systems, respectively. Finally, conclusions are presented in Section V.

II. MULTI-AGENT GENETIC ALGORITHM AND ITS CONVERGENCE

According to [13], [15], an agent is a physical or virtual entity that essentially has the following properties: (a) it is able to live and act in the environment; (b) it is able to sense its local environment; (c) it is driven by certain purposes; and (d) it has some reactive behaviors. Multi-agent systems are computational systems in which several agents interact or work together in order to achieve goals. As can be seen, the meaning of an agent is very comprehensive, and what an agent represents differs from problem to problem. In general, four elements should be defined when multi-agent systems are used to solve problems. The first is the meaning and the purpose of each agent. The second is the environment where all agents live. Since each agent has only local perceptivity, the third is the definition of the local environment. The last is the behaviors that each agent can take to achieve its purpose. In what follows, the definitions of these elements for global numerical optimization problems are described.

A. The Agent for Numerical Optimization

A global numerical optimization can be formulated as solving the following objective function:

    minimize f(x),  x = (x_1, ..., x_n) ∈ S    (1)

where S ⊆ R^n defines the search space, an n-dimensional space bounded by the parametric constraints x̲_i ≤ x_i ≤ x̄_i, i = 1, ..., n. Thus, S = [x̲, x̄], where x̲ = (x̲_1, ..., x̲_n) and x̄ = (x̄_1, ..., x̄_n). Because many 'x' notations are used throughout this paper, they are explained explicitly to prevent confusion.
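To make the formulation in (1) concrete, the following sketch sets up a bounded minimization problem in Python (the language choice, the sphere objective, and all variable names are our own illustration; the paper does not prescribe an implementation):

```python
import numpy as np

def sphere(x):
    """Illustrative objective for (1): f(x) = sum(x_i^2), minimized at the origin."""
    return float(np.sum(x ** 2))

n = 5                           # problem dimension
x_lower = np.full(n, -5.0)      # lower-bound vector (the underlined x)
x_upper = np.full(n, 5.0)       # upper-bound vector (the overlined x)

# A candidate solution must lie inside the box S = [x_lower, x_upper].
rng = np.random.default_rng(0)
x = rng.uniform(x_lower, x_upper)
assert np.all((x_lower <= x) & (x <= x_upper))
```
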
The 'x' in boldface represents a real-valued vector in the search space, and 'x_i' with a subscript represents a component of the vector 'x'. The boldfaced 'x̲' with an underline represents the vector of the lower bound of the search space, and 'x̲_i' with a subscript and an underline represents a component of the vector 'x̲'. The boldfaced 'x̄' with an overline represents the vector of the upper bound of the search space, and 'x̄_i' with a subscript and an overline represents a component of the vector 'x̄'. An agent for numerical optimization problems is defined as follows:

Definition 1: An agent, a, represents a candidate solution to the optimization problem in hand. The value of its energy is equal to the negative value of the objective function,

    a ∈ S and Energy(a) = -f(a)    (2)

The purpose of a is to increase its energy as much as possible.

As can be seen, each agent carries all variables of the objective function to be optimized. In order to realize the local perceptivity of agents, the environment is organized as a latticelike structure, which is defined as follows:

Definition 2: All agents live in a latticelike environment, L, which is called an agent lattice. The size of L is L_size × L_size, where L_size is an integer. Each agent is fixed on a lattice-point and can only interact with its neighbors. Suppose that the agent located at (i, j) is represented as L_{i,j}, i, j = 1, 2, ..., L_size; then the neighbors of L_{i,j}, Neighbors_{i,j}, are defined as follows:

    Neighbors_{i,j} = { L_{i',j}, L_{i,j'}, L_{i'',j}, L_{i,j''} }    (3)

where

    i'  = i - 1 if i ≠ 1, and i'  = L_size if i = 1;
    j'  = j - 1 if j ≠ 1, and j'  = L_size if j = 1;
    i'' = i + 1 if i ≠ L_size, and i'' = 1 if i = L_size;
    j'' = j + 1 if j ≠ L_size, and j'' = 1 if j = L_size.

Therefore, the agent lattice can be represented as the one in Fig. 1. Each circle represents an agent, the data in a circle represents its position in the lattice, and two agents can interact with each other if and only if there is a line connecting them.
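As a small illustration of Definition 1 and the wrap-around neighborhood of (3), the following Python sketch (function names are our own, not from the paper) computes an agent's energy and its four neighbors, using the paper's 1-based lattice coordinates:

```python
def energy(f, a):
    # Definition 1: Energy(a) = -f(a), so maximizing energy minimizes f.
    return -f(a)

def neighbors(i, j, L_size):
    """Indices of the four neighbors of the agent at (i, j), per (3).
    Coordinates are 1-based as in the paper; the lattice wraps around."""
    i1 = i - 1 if i != 1 else L_size        # i'
    j1 = j - 1 if j != 1 else L_size        # j'
    i2 = i + 1 if i != L_size else 1        # i''
    j2 = j + 1 if j != L_size else 1        # j''
    return [(i1, j), (i, j1), (i2, j), (i, j2)]
```

For a 5×5 lattice, `neighbors(1, 1, 5)` returns `[(5, 1), (1, 5), (2, 1), (1, 2)]`, which is exactly the wrap-around behavior that (3) describes.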
Fig. 1. The model of the agent lattice.

In traditional GAs, those individuals that will generate offspring are usually selected from all individuals according to their fitness. Therefore, the global fitness distribution of a population must be determined. But in nature, a global selection does not exist, and the global fitness distribution cannot be determined either. In fact, real natural selection only occurs in a local environment, and each individual can only interact with those around it. That is, in some phase, natural evolution is just a kind of local phenomenon. The information can be shared globally only after a process of diffusion.

In the aforementioned agent lattice, to achieve their purposes, agents will compete or cooperate with others so that they can gain more resources. Since each agent can only sense its local environment, its behaviors of competition and cooperation can only take place between the agent and its neighbors. There is no
global selection at all, so the global fitness distribution is not required. An agent interacts with its neighbors so that information is transferred to them. In such a manner, the information is diffused to the whole agent lattice. As can be seen, the model of the agent lattice is closer to the real evolutionary mechanism in nature than the model of the population in traditional GAs.

B. Four Evolutionary Operators for Agents

To achieve its purposes, each agent has some behaviors. In addition to the aforementioned behaviors of competition and cooperation, each agent can also increase its energy by using its knowledge. On the basis of such behaviors, four evolutionary operators are designed for the agents. The neighborhood competition operator and the neighborhood orthogonal crossover operator realize the behaviors of competition and cooperation, respectively. The mutation operator and the self-learning operator realize the behaviors of making use of knowledge. Suppose that the four operators are performed on the agent located at (i, j), L_{i,j} = (l_1, l_2, ..., l_n), and that Max_{i,j} = (m_1, m_2, ..., m_n) is the agent with maximum energy among the neighbors of L_{i,j}; namely, Max_{i,j} ∈ Neighbors_{i,j} and ∀a ∈ Neighbors_{i,j}, Energy(a) ≤ Energy(Max_{i,j}).

Neighborhood competition operator: If L_{i,j} satisfies (4), it is a winner; otherwise it is a loser.

    Energy(L_{i,j}) > Energy(Max_{i,j})    (4)

If L_{i,j} is a winner, it can still live in the agent lattice. If L_{i,j} is a loser, it must die, and its lattice-point will be occupied by Max_{i,j}. Max_{i,j} has two strategies to occupy the lattice-point, and it selects between them with probability P_o. If U(0,1) < P_o, occupying strategy 1 is selected; otherwise occupying strategy 2 is selected, where U(·,·) is a uniform random number generator. In both occupying strategies, Max_{i,j} first generates a new agent, New_{i,j} = (e_1, e_2, ..., e_n), and then New_{i,j} is put on the lattice-point.
In occupying strategy 1, New_{i,j} is determined by

    e_k = { x̲_k,                            if m_k + U(-1,1) × (m_k - l_k) < x̲_k
          { x̄_k,                            if m_k + U(-1,1) × (m_k - l_k) > x̄_k
          { m_k + U(-1,1) × (m_k - l_k),    otherwise

    k = 1, ..., n    (5)

In occupying strategy 2, Max_{i,j} is first mapped onto [0,1] according to

    m'_k = (m_k - x̲_k) / (x̄_k - x̲_k),  k = 1, ..., n    (6)

Then, New'_{i,j} = (e'_1, e'_2, ..., e'_n) is determined by

    New'_{i,j} = (m'_1, ..., m'_{i_1-1}, m'_{i_2}, m'_{i_2-1}, ..., m'_{i_1}, m'_{i_2+1}, ..., m'_n)    (7)

where 1 < i_1 < n, 1 < i_2 < n, and i_1 < i_2. Finally, New_{i,j} is obtained by mapping New'_{i,j} back to [x̲_k, x̄_k] according to

    e_k = x̲_k + e'_k · (x̄_k - x̲_k),  k = 1, ..., n    (8)

In this operator, the two strategies play different roles. When L_{i,j} is a loser, it perhaps still has some useful information, so occupying strategy 1, a kind of heuristic crossover, is in favor of reserving some information of a loser. Occupying strategy 2 is similar to the inversion operation in AEA [19]. It has the function of random searching, but is better than random searching in that it makes use of the information of a winner. Therefore, occupying strategy 1 puts emphasis on exploitation while occupying strategy 2 puts emphasis on exploration.

Neighborhood orthogonal crossover operator: The orthogonal crossover operator is a new operator proposed by [8]. It generates new individuals by means of the orthogonal design. Because we usually have no information about the location of the global optimum, and an orthogonal array can specify a small number of individuals that are scattered uniformly over the search space, the orthogonal crossover operator can generate a small but representative sample of potential individuals. In MAGA, this operator is performed on L_{i,j} and Max_{i,j} to achieve the purpose of cooperation. In what follows, its basic concept is introduced; for more details, see [8].
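The competition test (4) and the two occupying strategies (5)-(8) can be sketched together as follows (a Python sketch under our own naming; the 0-based segment endpoints and the illustrative default P_o = 0.2 are our assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def occupy_strategy_1(max_agent, loser, lo, hi):
    """Heuristic crossover of (5): perturb the winner along the direction to
    the loser, clipping each component back into the search bounds."""
    e = max_agent + rng.uniform(-1.0, 1.0, size=max_agent.shape) * (max_agent - loser)
    return np.clip(e, lo, hi)

def occupy_strategy_2(max_agent, i1, i2, lo, hi):
    """Inversion-style move of (6)-(8): map the winner onto [0,1], reverse the
    segment between positions i1 and i2 (0-based here), and map back."""
    m = (max_agent - lo) / (hi - lo)                 # (6)
    m[i1:i2 + 1] = m[i1:i2 + 1][::-1].copy()        # (7): segment reversal
    return lo + m * (hi - lo)                        # (8)

def compete(f, agent, neighbor_agents, lo, hi, p_o=0.2):
    """Neighborhood competition operator: the agent survives if it beats its
    best neighbor per (4); otherwise the best neighbor occupies its point."""
    best = min(neighbor_agents, key=f)               # maximum energy = minimum f
    if f(agent) < f(best):                           # Energy(L_ij) > Energy(Max_ij)
        return agent                                 # winner: stays unchanged
    if rng.uniform() < p_o:
        return occupy_strategy_1(best, agent, lo, hi)
    n = len(best)
    i1, i2 = sorted(rng.choice(np.arange(1, n - 1), size=2, replace=False))
    return occupy_strategy_2(best, i1, i2, lo, hi)
```
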
Suppose that the search space defined by L_{i,j} and Max_{i,j} is [x̲_LM, x̄_LM], where

    x̲_LM = (min(l_1, m_1), min(l_2, m_2), ..., min(l_n, m_n))
    x̄_LM = (max(l_1, m_1), max(l_2, m_2), ..., max(l_n, m_n))    (9)

The domain of the ith dimension is quantized into Q_2 levels, β_{i,1}, β_{i,2}, ..., β_{i,Q_2}, where

    β_{i,j} = { min(l_i, m_i),                                        j = 1
              { min(l_i, m_i) + (j - 1) · (|l_i - m_i| / (Q_2 - 1)),  2 ≤ j ≤ Q_2 - 1
              { max(l_i, m_i),                                        j = Q_2    (10)

Then (F-1) integers, (k_1, k_2, ..., k_{F-1}), are generated randomly such that 1 < k_1 < k_2 < ... < k_{F-1} < n, and the following F factors are created for any agent a = (x_1, x_2, ..., x_n):

    f_1 = (x_1, ..., x_{k_1}),  f_2 = (x_{k_1+1}, ..., x_{k_2}),  ...,  f_F = (x_{k_{F-1}+1}, ..., x_n)    (11)

Q_2 levels for the ith factor f_i are defined as follows:

    f_i(1)   = (β_{k_{i-1}+1, 1},   β_{k_{i-1}+2, 1},   ..., β_{k_i, 1})
    f_i(2)   = (β_{k_{i-1}+1, 2},   β_{k_{i-1}+2, 2},   ..., β_{k_i, 2})
    ...
    f_i(Q_2) = (β_{k_{i-1}+1, Q_2}, β_{k_{i-1}+2, Q_2}, ..., β_{k_i, Q_2})    (12)

The orthogonal array, L_{M_2}(Q_2^F) = [b_{i,j}]_{M_2 × F}, is applied to generate the following M_2 agents:

    (f_1(b_{1,1}),   f_2(b_{1,2}),   ..., f_F(b_{1,F}))
    (f_1(b_{2,1}),   f_2(b_{2,2}),   ..., f_F(b_{2,F}))
    ...
    (f_1(b_{M_2,1}), f_2(b_{M_2,2}), ..., f_F(b_{M_2,F}))    (13)

Finally, the agent with maximum energy among the M_2 agents is selected to replace L_{i,j}. The details about the construction of the orthogonal array were given in [8]. For the sake of simplicity, in MAGA, Q_2, F, and M_2 are fixed at 3, 4, and 9, respectively, which are the same as those of [8]. Therefore, no parameter needs to be
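Under the fixed settings Q_2 = 3, F = 4, M_2 = 9, the sampling scheme of (9)-(13) with the L9(3^4) array can be sketched as below (Python; helper names are our own, and the cut points of (11) are drawn randomly each call):

```python
import numpy as np

# The L9(3^4) orthogonal array; entries are 1-based level indices.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

def orthogonal_crossover(f, l_agent, m_agent, Q2=3, F=4, rng=None):
    """Sample 9 agents from the box spanned by l_agent and m_agent (9), using
    per-dimension quantization (10) and F randomly cut factors (11)-(13);
    return the sampled agent with maximum energy, i.e. minimum f."""
    rng = rng or np.random.default_rng()
    n = len(l_agent)
    lo = np.minimum(l_agent, m_agent)                # (9)
    hi = np.maximum(l_agent, m_agent)
    # (10): Q2 levels per dimension, evenly spaced from lo to hi.
    levels = np.array([lo + j * (hi - lo) / (Q2 - 1) for j in range(Q2)])
    # (11): split the n dimensions into F contiguous factors.
    cuts = np.sort(rng.choice(np.arange(1, n), size=F - 1, replace=False))
    bounds = np.concatenate(([0], cuts, [n]))
    candidates = []
    for row in L9:                                   # (13): one agent per array row
        x = np.empty(n)
        for fac in range(F):
            s, e = bounds[fac], bounds[fac + 1]
            x[s:e] = levels[row[fac] - 1, s:e]       # (12): that factor's level
        candidates.append(x)
    return min(candidates, key=f)
```
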
tuned for this operator, and the orthogonal array is $L_9(3^4)$, which is shown in (14).

$$L_9(3^4) = \begin{bmatrix} 1&1&1&1 \\ 1&2&2&2 \\ 1&3&3&3 \\ 2&1&2&3 \\ 2&2&3&1 \\ 2&3&1&2 \\ 3&1&3&2 \\ 3&2&1&3 \\ 3&3&2&1 \end{bmatrix}$$ (14)

Mutation operator: A new agent, $New_{i,j} = (e_1, e_2, \ldots, e_n)$, is generated as

$$e_k = \begin{cases} l_k, & U(0,1) < \frac{1}{n} \\ l_k + G(0, \frac{1}{t}), & \text{otherwise} \end{cases}, \quad k = 1, \ldots, n$$ (15)

where $G(0, \frac{1}{t})$ is a Gaussian random number generator, and $t$ is the evolution generation. Then, $L_{i,j}$ is replaced by $New_{i,j}$. This operator is similar to the Gaussian mutation operator used in evolutionary programming and evolution strategies, but it performs only a small perturbation on some variables of $L_{i,j}$.

Self-learning operator: Agents have knowledge related to the problems that they are designed to solve. In our experience, integrating local search with GAs improves performance on numerical optimization problems. There are several ways to realize the local search. Reference [9] uses a small-scale GA as the local searcher and obtains a good performance. Enlightened by this idea, we propose the self-learning operator, which uses a small-scale MAGA to realize the behavior of using knowledge. To distinguish them from the other parameters in MAGA, all symbols of the parameters in this operator begin with an 's'.

In the self-learning operator, first of all, an agent lattice, $sL$, is generated. The size of $sL$ is $sL_{size} \times sL_{size}$, and all agents, $sL_{i',j'}$, $i', j' = 1, 2, \ldots, sL_{size}$, are generated according to

$$sL_{i',j'} = \begin{cases} L_{i,j}, & i' = 1 \text{ and } j' = 1 \\ New'_{i',j'}, & \text{otherwise} \end{cases}$$ (16)

where $New'_{i',j'} = (e'_{i',j',1}, e'_{i',j',2}, \ldots, e'_{i',j',n})$ is determined by

$$e'_{i',j',k} = \begin{cases} \underline{x}_k, & l_k \cdot U(1{-}sRadius,\, 1{+}sRadius) < \underline{x}_k \\ \overline{x}_k, & l_k \cdot U(1{-}sRadius,\, 1{+}sRadius) > \overline{x}_k \\ l_k \cdot U(1{-}sRadius,\, 1{+}sRadius), & \text{otherwise} \end{cases}, \quad k = 1, \ldots, n$$ (17)

where $sRadius \in [0, 1]$ represents the search radius. Next, the neighborhood competition operator and the mutation operator are iteratively performed on $sL$.
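The two operators just defined can be sketched as follows. This is an assumption-laden sketch: the function names are hypothetical, and since the paper does not say whether $1/t$ in $G(0, \frac{1}{t})$ is a variance or a standard deviation, the code simply passes $1/t$ as the spread parameter.

```python
import random

def mutate(agent, t, rng=random):
    """Gaussian mutation (15): each variable is kept with probability 1/n,
    otherwise perturbed by G(0, 1/t); the spread shrinks as generations pass.
    (Here 1/t is used as the standard deviation -- an assumption.)"""
    n = len(agent)
    return [x if rng.random() < 1.0 / n else x + rng.gauss(0.0, 1.0 / t)
            for x in agent]

def seed_self_learning_lattice(agent, lo, hi, s_size, s_radius, rng=random):
    """(16)-(17): build an s_size x s_size lattice whose (1,1) cell is the
    agent itself; every other cell scales each coordinate by a factor drawn
    from U(1 - sRadius, 1 + sRadius), clipped to the search bounds."""
    def neighbor():
        out = []
        for k, x in enumerate(agent):
            v = x * rng.uniform(1.0 - s_radius, 1.0 + s_radius)
            out.append(min(max(v, lo[k]), hi[k]))  # the two boundary cases of (17)
        return out
    return [[list(agent) if (i, j) == (0, 0) else neighbor()
             for j in range(s_size)] for i in range(s_size)]
```

Because every seeded neighbor is a multiplicative perturbation of the same agent, the small lattice searches a shrinking region around $L_{i,j}$ controlled entirely by $sRadius$.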
Finally, $L_{i,j}$ is replaced by the agent with maximum energy found during the above process. For more clarity, ALGORITHM 1 describes the details of this operator.

ALGORITHM 1 Self-learning operator
  $sL^r$ represents the agent lattice in the $r$th generation, and $sL^{r+1/2}$ is the mid-lattice between $sL^r$ and $sL^{r+1}$. $sBest^r$ is the best agent among $sL^0, sL^1, \ldots, sL^r$, and $sCBest^r$ is the best agent in $sL^r$. $sP_m$ is the probability of performing the mutation operator, and $sGen$ is the number of generations.
  Step 1: Generate $sL^0$ according to (16) and (17), update $sBest^0$, and $r \leftarrow 0$;
  Step 2: Perform the neighborhood competition operator on each agent in $sL^r$, obtaining $sL^{r+1/2}$;
  Step 3: For each agent in $sL^{r+1/2}$, if $U(0,1) < sP_m$, perform the mutation operator on it, obtaining $sL^{r+1}$;
  Step 4: Find $sCBest^{r+1}$ in $sL^{r+1}$; if $Energy(sCBest^{r+1}) > Energy(sBest^r)$, then $sBest^{r+1} \leftarrow sCBest^{r+1}$; otherwise $sBest^{r+1} \leftarrow sBest^r$, $sCBest^{r+1} \leftarrow sBest^r$;
  Step 5: If $r < sGen$, then $r \leftarrow r+1$, go to Step 2;
  Step 6: $L_{i,j} \leftarrow sBest^r$.

C. The Implementation of MAGA

In MAGA, the neighborhood competition operator is performed on each agent. As a result, the agents with low energy are cleaned out of the agent lattice, which leaves more developing space for the agents with high energy. The neighborhood orthogonal crossover operator and the mutation operator are performed on each agent with probabilities $P_c$ and $P_m$, respectively. To reduce the computational cost, the self-learning operator is performed only on the best agent in each generation, but it has an important effect on the performance of MAGA. In general, the four operators use different methods to simulate the behaviors of agents and play different roles in MAGA. ALGORITHM 2 describes the details of MAGA.

ALGORITHM 2 Multi-Agent Genetic Algorithm
  $L^t$ represents the agent lattice in the $t$th generation, and $L^{t+1/3}$ and $L^{t+2/3}$ are the mid-lattices between $L^t$ and $L^{t+1}$. $Best^t$ is the best agent among $L^0, L^1, \ldots, L^t$, and $CBest^t$ is the best agent in $L^t$.
  $P_c$ and $P_m$ are the probabilities of performing the neighborhood orthogonal crossover operator and the mutation operator.
  Step 1: Initialize $L^0$, update $Best^0$, and $t \leftarrow 0$;
  Step 2: Perform the neighborhood competition operator on each agent in $L^t$, obtaining $L^{t+1/3}$;
  Step 3: For each agent in $L^{t+1/3}$, if $U(0,1) < P_c$, perform the neighborhood orthogonal crossover operator on it, obtaining $L^{t+2/3}$;
  Step 4: For each agent in $L^{t+2/3}$, if $U(0,1) < P_m$, perform the mutation operator on it, obtaining $L^{t+1}$;
  Step 5: Find $CBest^{t+1}$ in $L^{t+1}$, and then perform the self-learning operator on $CBest^{t+1}$;
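The control flow of ALGORITHM 2 can be sketched as a skeleton in which the four operators are passed in as callables. This is a structural sketch only: the operator signatures (`compete`, `crossover`, `mutate`, `self_learn`) and the lattice representation (a list of rows of agents) are assumptions made here, and the operator internals are elided.

```python
import random

def maga(init_lattice, compete, crossover, mutate, self_learn, energy,
         pc, pm, generations, rng=random):
    """Skeleton of ALGORITHM 2: Steps 1-7, including the elitism of Step 6."""
    lattice = init_lattice()                       # Step 1
    best = max((a for row in lattice for a in row), key=energy)
    for t in range(generations):
        lattice = compete(lattice)                 # Step 2: L^{t+1/3}
        lattice = [[crossover(a, i, j, lattice) if rng.random() < pc else a
                    for j, a in enumerate(row)]    # Step 3: L^{t+2/3}
                   for i, row in enumerate(lattice)]
        lattice = [[mutate(a, t + 1) if rng.random() < pm else a
                    for a in row]                  # Step 4: L^{t+1}
                   for row in lattice]
        # Step 5: locate CBest^{t+1} and apply the self-learning operator to it.
        ci, cj = max(((i, j) for i in range(len(lattice))
                      for j in range(len(lattice[0]))),
                     key=lambda p: energy(lattice[p[0]][p[1]]))
        lattice[ci][cj] = self_learn(lattice[ci][cj])
        cbest = lattice[ci][cj]
        if energy(cbest) > energy(best):           # Step 6: elitism
            best = cbest
        else:
            lattice[ci][cj] = list(best)           # CBest^{t+1} <- Best^t
    return best                                    # Step 7
```

The elitist reinjection in Step 6 is what later makes the best-so-far energy monotonically nondecreasing, the key fact used in the convergence proof of Section D.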
  Step 6: If $Energy(CBest^{t+1}) > Energy(Best^t)$, then $Best^{t+1} \leftarrow CBest^{t+1}$; otherwise $Best^{t+1} \leftarrow Best^t$, $CBest^{t+1} \leftarrow Best^t$;
  Step 7: If the termination criteria are reached, output $Best^t$ and stop; otherwise $t \leftarrow t+1$, go to Step 2.

D. The Convergence of MAGA

Obviously, encoding a variable, $x_i$, of the objective function with an $M$-bit sequence is equivalent to quantizing its search space, $[\underline{x}_i, \overline{x}_i]$, into $2^M$ values, with the precision $\varepsilon$ equal to $(\overline{x}_i - \underline{x}_i)/2^M$. Therefore, the convergence of MAGA is analyzed by real coding directly. Suppose that the required precision is $\varepsilon$. Thus, the search space $S$ can be changed to a discrete space. Its number of elements, $|S|$, is equal to $\prod_{i=1}^{n} (\overline{x}_i - \underline{x}_i)/\varepsilon$, and each element is a candidate solution, namely an agent. Let $\mathcal{E} = \{Energy(\boldsymbol{a}) \mid \boldsymbol{a} \in S\}$. It is obvious that $|\mathcal{E}| \le |S|$, and $\mathcal{E}$ can be represented as $\mathcal{E} = \{E_1, E_2, \ldots, E_{|\mathcal{E}|}\}$, where $E_1 > E_2 > \cdots > E_{|\mathcal{E}|}$.

This immediately gives us the opportunity to partition $S$ into a collection of nonempty subsets, $\{S_i \mid i = 1, 2, \ldots, |\mathcal{E}|\}$, where

$$S_i = \{\boldsymbol{a} \mid \boldsymbol{a} \in S \text{ and } Energy(\boldsymbol{a}) = E_i\}$$ (18)

$$\sum_{i=1}^{|\mathcal{E}|} |S_i| = |S|; \quad S_i \ne \emptyset, \ \forall i \in \{1, 2, \ldots, |\mathcal{E}|\}; \quad S_i \cap S_j = \emptyset, \ \forall i \ne j; \quad \bigcup_{i=1}^{|\mathcal{E}|} S_i = S$$ (19)

Obviously, $E_1$ is the global optimum, and $S_1$ consists of all agents whose energies are $E_1$.

In MAGA, the number of agents remains constant throughout the evolutionary process. Let $\mathcal{L}$ stand for the set of all agent lattices. Because an agent lattice may contain multiple copies of one or more agents, the number of elements in $\mathcal{L}$ is

$$|\mathcal{L}| = \binom{|S| + L_{size} \times L_{size} - 1}{L_{size} \times L_{size}}$$

Suppose that the energy of an agent lattice, $L$, is equal to $Energy(L)$, which is determined by

$$Energy(L) = \max\{Energy(L_{i,j}), \ i, j = 1, \ldots, L_{size}\}$$ (20)

Thus $\forall L \in \mathcal{L}$, $E_{|\mathcal{E}|} \le Energy(L) \le E_1$.
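The count $|\mathcal{L}|$ above is the number of multisets of size $L_{size} \times L_{size}$ drawn from $|S|$ candidate agents. A one-line check, with hypothetical helper and argument names:

```python
import math

def lattice_count(num_agents, l_size):
    """Number of distinct agent lattices: multisets of size Lsize*Lsize drawn
    from |S| candidate agents, i.e. C(|S| + Lsize^2 - 1, Lsize^2)."""
    k = l_size * l_size
    return math.comb(num_agents + k - 1, k)

# e.g. 2 candidate agents on a 2x2 lattice: the 5 multisets
# {aaaa, aaab, aabb, abbb, bbbb} -> lattice_count(2, 2) == 5
```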
Therefore, $\mathcal{L}$ can be partitioned into a collection of nonempty subsets $\{\mathcal{L}_i \mid i = 1, 2, \ldots, |\mathcal{E}|\}$, where

$$\mathcal{L}_i = \{L \mid L \in \mathcal{L} \text{ and } Energy(L) = E_i\}$$ (21)

$$\sum_{i=1}^{|\mathcal{E}|} |\mathcal{L}_i| = |\mathcal{L}|; \quad \mathcal{L}_i \ne \emptyset, \ \forall i \in \{1, 2, \ldots, |\mathcal{E}|\}; \quad \mathcal{L}_i \cap \mathcal{L}_j = \emptyset, \ \forall i \ne j; \quad \bigcup_{i=1}^{|\mathcal{E}|} \mathcal{L}_i = \mathcal{L}$$ (22)

$\mathcal{L}_1$ consists of all the agent lattices whose energies are $E_1$. Let $L_{ij}$, $i = 1, 2, \ldots, |\mathcal{E}|$, $j = 1, 2, \ldots, |\mathcal{L}_i|$, stand for the $j$th agent lattice in $\mathcal{L}_i$. In any generation, the four evolutionary operators transform the agent lattice $L_{ij}$ to another one, $L_{kl}$, and this process can be viewed as a transition from $L_{ij}$ to $L_{kl}$. Let $p_{ij.kl}$ be the probability of transition from $L_{ij}$ to $L_{kl}$, $p_{ij.k}$ be the probability of transition from $L_{ij}$ to any agent lattice in $\mathcal{L}_k$, and $p_{i.k}$ be the probability of transition from any agent lattice in $\mathcal{L}_i$ to any agent lattice in $\mathcal{L}_k$. It is obvious that

$$p_{ij.k} = \sum_{l=1}^{|\mathcal{L}_k|} p_{ij.kl}, \quad \sum_{k=1}^{|\mathcal{E}|} p_{ij.k} = 1, \quad p_{i.k} \ge p_{ij.k}$$ (23)

Based on the concepts above and [20], [21], the convergence of MAGA is proved as follows:

Theorem 1 [22]: Let $P$ be an $n' \times n'$ reducible stochastic matrix, which means that by applying the same permutations to rows and columns, $P$ can be brought into the form $\begin{pmatrix} C & 0 \\ R & T \end{pmatrix}$, where $C$ is an $m \times m$ primitive stochastic matrix and $R, T \ne 0$. Then

$$P^\infty = \lim_{k \to \infty} P^k = \lim_{k \to \infty} \begin{pmatrix} C^k & 0 \\ \sum_{i=0}^{k-1} T^i R C^{k-i} & T^k \end{pmatrix} = \begin{pmatrix} C^\infty & 0 \\ R^\infty & 0 \end{pmatrix}$$ (24)

is a stable stochastic matrix with $P^\infty = \mathbf{1}' p^\infty$, where $p^\infty = p^0 P^\infty$ is unique regardless of the initial distribution, and $p^\infty$ satisfies $p_i^\infty > 0$ for $1 \le i \le m$ and $p_i^\infty = 0$ for $m < i \le n'$.

Theorem 2: In the multi-agent genetic algorithm, $\forall i, k \in \{1, 2, \ldots, |\mathcal{E}|\}$, $p_{i.k} > 0$ for $k \le i$, and $p_{i.k} = 0$ for $k > i$.

Proof: $\forall L_{ij} \in \mathcal{L}_i$, $i = 1, 2, \ldots, |\mathcal{E}|$, $j = 1, 2, \ldots, |\mathcal{L}_i|$, $\exists \boldsymbol{a}^* = (x_1, \ldots, x_n) \in L_{ij}$, $Energy(\boldsymbol{a}^*) = E_i$. Suppose that $L_{ij}$ changes to $L_{kl}$ after the four evolutionary operators are performed. If $L_{ij}$ is the agent lattice in the $t$th generation, then $L_{kl}$ is the one in the $(t+1)$th generation. Therefore, $L_{ij}$ and $L_{kl}$ are labeled as $L^t$ and $L^{t+1}$, respectively.
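Theorem 1 can be illustrated numerically on a tiny example. This is a sketch with an arbitrary hypothetical matrix chosen here, not data from the paper: a $3 \times 3$ reducible stochastic matrix already in the form $\begin{pmatrix} C & 0 \\ R & T \end{pmatrix}$ with $m = 1$, whose powers should converge to a matrix placing all probability mass on the first state from every start state.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A reducible stochastic matrix in the form [[C, 0], [R, T]] of Theorem 1,
# with m = 1: C = [1] is primitive, and R, T are nonzero.
P = [[1.0, 0.0, 0.0],
     [0.3, 0.5, 0.2],
     [0.2, 0.3, 0.5]]

Pk = P
for _ in range(10):         # repeated squaring: Pk = P^(2^10)
    Pk = matmul(Pk, Pk)

# Theorem 1 predicts lim P^k = [[C^inf, 0], [R^inf, 0]]: every row of Pk
# should be (approximately) (1, 0, 0), regardless of the initial state.
```

Because the row sums of $T$ are at most 0.8 here, $\|T^k\|$ decays geometrically, which is exactly why the lower-right block of (24) vanishes in the limit.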
Firstly, (25) can be obtained according to Step 6 in ALGORITHM 2:

$$Energy(CBest^{t+1}) = Energy(Best^{t+1}) \ge Energy(Best^t) = Energy(\boldsymbol{a}^*)$$ (25)

Therefore,

$$Energy(L^{t+1}) \ge Energy(L^t) \ \Rightarrow\ k \le i \ \Rightarrow\ \forall k > i,\ p_{ij.kl} = 0 \ \Rightarrow\ \forall k > i,\ p_{ij.k} = \sum_{l=1}^{|\mathcal{L}_k|} p_{ij.kl} = 0 \ \Rightarrow\ \forall k > i,\ p_{i.k} = 0$$ (26)

Secondly, in the neighborhood competition operator, $\boldsymbol{a}^*$ must be a winner because its energy is greater than that of any other agent in $L^t$, so $\boldsymbol{a}^* \in L^{t+1/3}$. The probability of $\boldsymbol{a}^* \in L^{t+2/3}$ is $(1 - P_c)$ because the probability of performing the neighborhood orthogonal crossover operator on $\boldsymbol{a}^*$ is $P_c$. Therefore, the probability of performing the mutation operator on $\boldsymbol{a}^*$ is

$$Pr_1 = (1 - P_c) \cdot P_m > 0$$ (27)

$\exists \boldsymbol{a}' \in L^{t+1}$, $Energy(\boldsymbol{a}') = E_k$. Suppose that there are $n_1$