444 Chapter 10. Minimization or Maximization of Functions

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America).

Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §4.10.

Wilkinson, J.H., and Reinsch, C. 1971, Linear Algebra, vol. II of Handbook for Automatic Computation (New York: Springer-Verlag). [5]

10.9 Simulated Annealing Methods

The method of simulated annealing [1,2] is a technique that has attracted significant attention as suitable for optimization problems of large scale, especially ones where a desired global extremum is hidden among many, poorer, local extrema. For practical purposes, simulated annealing has effectively “solved” the famous traveling salesman problem of finding the shortest cyclical itinerary for a traveling salesman who must visit each of N cities in turn. (Other practical methods have also been found.) The method has also been used successfully for designing complex integrated circuits: The arrangement of several hundred thousand circuit elements on a tiny silicon substrate is optimized so as to minimize interference among their connecting wires [3,4]. Surprisingly, the implementation of the algorithm is relatively simple.

Notice that the two applications cited are both examples of combinatorial minimization.
There is an objective function to be minimized, as usual; but the space over which that function is defined is not simply the N-dimensional space of N continuously variable parameters. Rather, it is a discrete, but very large, configuration space, like the set of possible orders of cities, or the set of possible allocations of silicon “real estate” blocks to circuit elements. The number of elements in the configuration space is factorially large, so that they cannot be explored exhaustively. Furthermore, since the set is discrete, we are deprived of any notion of “continuing downhill in a favorable direction.” The concept of “direction” may not have any meaning in the configuration space.

Below, we will also discuss how to use simulated annealing methods for spaces with continuous control parameters, like those of §§10.4–10.7. This application is actually more complicated than the combinatorial one, since the familiar problem of “long, narrow valleys” again asserts itself. Simulated annealing, as we will see, tries “random” steps; but in a long, narrow valley, almost all random steps are uphill! Some additional finesse is therefore required.

At the heart of the method of simulated annealing is an analogy with thermodynamics, specifically with the way that liquids freeze and crystallize, or metals cool and anneal. At high temperatures, the molecules of a liquid move freely with respect to one another. If the liquid is cooled slowly, thermal mobility is lost. The atoms are often able to line themselves up and form a pure crystal that is completely ordered over a distance up to billions of times the size of an individual atom in all directions. This crystal is the state of minimum energy for this system. The amazing fact is that, for slowly cooled systems, nature is able to find this minimum energy state.
In fact, if a liquid metal is cooled quickly or “quenched,” it does not reach this state but rather ends up in a polycrystalline or amorphous state having somewhat higher energy. So the essence of the process is slow cooling, allowing ample time for redistribution of the atoms as they lose mobility. This is the technical definition of annealing, and it is essential for ensuring that a low energy state will be achieved.
Although the analogy is not perfect, there is a sense in which all of the minimization algorithms thus far in this chapter correspond to rapid cooling or quenching. In all cases, we have gone greedily for the quick, nearby solution: From the starting point, go immediately downhill as far as you can go. This, as often remarked above, leads to a local, but not necessarily a global, minimum. Nature’s own minimization algorithm is based on quite a different procedure. The so-called Boltzmann probability distribution,

Prob(E) ∼ exp(−E/kT)     (10.9.1)

expresses the idea that a system in thermal equilibrium at temperature T has its energy probabilistically distributed among all different energy states E. Even at low temperature, there is a chance, albeit very small, of a system being in a high energy state. Therefore, there is a corresponding chance for the system to get out of a local energy minimum in favor of finding a better, more global, one. The quantity k (Boltzmann’s constant) is a constant of nature that relates temperature to energy. In other words, the system sometimes goes uphill as well as downhill; but the lower the temperature, the less likely is any significant uphill excursion.

In 1953, Metropolis and coworkers [5] first incorporated these kinds of principles into numerical calculations.
Offered a succession of options, a simulated thermodynamic system was assumed to change its configuration from energy E1 to energy E2 with probability p = exp[−(E2 − E1)/kT]. Notice that if E2 < E1, this probability is greater than unity; in such cases the change is arbitrarily assigned a probability p = 1, i.e., the system always took such an option. This general scheme, of always taking a downhill step while sometimes taking an uphill step, has come to be known as the Metropolis algorithm.

To make use of the Metropolis algorithm for other than thermodynamic systems, one must provide the following elements:

1. A description of possible system configurations.

2. A generator of random changes in the configuration; these changes are the “options” presented to the system.

3. An objective function E (analog of energy) whose minimization is the goal of the procedure.

4. A control parameter T (analog of temperature) and an annealing schedule which tells how it is lowered from high to low values, e.g., after how many random changes in configuration is each downward step in T taken, and how large is that step. The meaning of “high” and “low” in this context, and the assignment of a schedule, may require physical insight and/or trial-and-error experiments.

Combinatorial Minimization: The Traveling Salesman

A concrete illustration is provided by the traveling salesman problem. The proverbial seller visits N cities with given positions (xi, yi), returning finally to his or her city of origin. Each city is to be visited only once, and the route is to be made as short as possible. This problem belongs to a class known as NP-complete problems, whose computation time for an exact solution increases with N as exp(const. × N), becoming rapidly prohibitive in cost as N increases. The traveling salesman problem also belongs to a class of minimization problems for which the objective function E
has many local minima. In practical cases, it is often enough to be able to choose from these a minimum which, even if not absolute, cannot be significantly improved upon. The annealing method manages to achieve this, while limiting its calculations to scale as a small power of N.

As a problem in simulated annealing, the traveling salesman problem is handled as follows:

1. Configuration. The cities are numbered i = 1 ... N and each has coordinates (xi, yi). A configuration is a permutation of the numbers 1 ... N, interpreted as the order in which the cities are visited.

2. Rearrangements. An efficient set of moves has been suggested by Lin [6]. The moves consist of two types: (a) A section of path is removed and then replaced with the same cities running in the opposite order; or (b) a section of path is removed and then replaced in between two cities on another, randomly chosen, part of the path.

3. Objective Function. In the simplest form of the problem, E is taken just as the total length of journey,

E = L ≡ Σ (i=1..N) sqrt[ (xi − xi+1)² + (yi − yi+1)² ]     (10.9.2)

with the convention that point N + 1 is identified with point 1.
To illustrate the flexibility of the method, however, we can add the following additional wrinkle: Suppose that the salesman has an irrational fear of flying over the Mississippi River. In that case, we would assign each city a parameter µi, equal to +1 if it is east of the Mississippi, −1 if it is west, and take the objective function to be

E = Σ (i=1..N) { sqrt[ (xi − xi+1)² + (yi − yi+1)² ] + λ(µi − µi+1)² }     (10.9.3)

A penalty 4λ is thereby assigned to any river crossing. The algorithm now finds the shortest path that avoids crossings. The relative importance that it assigns to length of path versus river crossings is determined by our choice of λ. Figure 10.9.1 shows the results obtained. Clearly, this technique can be generalized to include many conflicting goals in the minimization.

4. Annealing schedule. This requires experimentation. We first generate some random rearrangements, and use them to determine the range of values of ∆E that will be encountered from move to move. Choosing a starting value for the parameter T which is considerably larger than the largest ∆E normally encountered, we proceed downward in multiplicative steps each amounting to a 10 percent decrease in T. We hold each new value of T constant for, say, 100N reconfigurations, or for 10N successful reconfigurations, whichever comes first. When efforts to reduce E further become sufficiently discouraging, we stop.

The following traveling salesman program, using the Metropolis algorithm, illustrates the main aspects of the simulated annealing technique for combinatorial problems.
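The schedule in step 4 can be sketched as a driver loop. The code below is a hedged skeleton under stated assumptions, not the book’s `anneal` routine: `attempt` is a caller-supplied stand-in for one reversal-or-transport move followed by a Metropolis test (returning 1 if the move was accepted), and the constants mirror the text, holding each temperature for at most 100N tries or 10N successes, then cooling by 10 percent, and stopping when a whole temperature step yields no accepted moves.

```c
#define TFACTR 0.9f   /* reduce t by this factor on each temperature step */

/* Skeleton of the annealing schedule described above (an illustrative
   sketch, not the book's anneal()).  attempt(t) is a caller-supplied
   move returning 1 if the move was accepted.  Returns the final
   temperature reached. */
float anneal_schedule(float t, int ncity, int maxsteps, int (*attempt)(float))
{
    int j, k, nsucc;
    for (j = 0; j < maxsteps; j++) {
        nsucc = 0;
        for (k = 0; k < 100*ncity; k++) {   /* at most 100N tries per temperature */
            if (attempt(t)) ++nsucc;
            if (nsucc >= 10*ncity) break;   /* ...or 10N successes, whichever first */
        }
        t *= TFACTR;                        /* 10 percent decrease in t */
        if (nsucc == 0) break;              /* frozen: further effort is discouraging */
    }
    return t;
}

/* Demo stand-in that rejects every proposed move. */
int reject_all(float t) { (void)t; return 0; }
```

With `reject_all` as the move, the loop freezes after one temperature step and returns 0.9 times the starting temperature; a real move generator would keep the loop alive until the system cools.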
[Figure 10.9.1 appears here: three unit-square panels (a), (b), and (c), each showing 100 city positions with the annealed tour drawn; panels (b) and (c) also show the dotted river.]

Figure 10.9.1. Traveling salesman problem solved by simulated annealing. The (nearly) shortest path among 100 randomly positioned cities is shown in (a). The dotted line is a river, but there is no penalty in crossing. In (b) the river-crossing penalty is made large, and the solution restricts itself to the minimum number of crossings, two. In (c) the penalty has been made negative: the salesman is actually a smuggler who crosses the river on the flimsiest excuse!
#include <stdio.h>
#include <math.h>
#define TFACTR 0.9      /* Annealing schedule: reduce t by this factor on each step. */
#define ALEN(a,b,c,d) sqrt(((b)-(a))*((b)-(a))+((d)-(c))*((d)-(c)))

void anneal(float x[], float y[], int iorder[], int ncity)
/* This algorithm finds the shortest round-trip path to ncity cities whose coordinates are in
the arrays x[1..ncity], y[1..ncity]. The array iorder[1..ncity] specifies the order in
which the cities are visited. On input, the elements of iorder may be set to any permutation
of the numbers 1 to ncity. This routine will return the best alternative path it can find. */
{
    int irbit1(unsigned long *iseed);
    int metrop(float de, float t);
    float ran3(long *idum);
    float revcst(float x[], float y[], int iorder[], int ncity, int n[]);
    void reverse(int iorder[], int ncity, int n[]);
    float trncst(float x[], float y[], int iorder[], int ncity, int n[]);
    void trnspt(int iorder[], int ncity, int n[]);
    int ans,nover,nlimit,i1,i2;
    int i,j,k,nsucc,nn,idec;
    static int n[7];
    long idum;
    unsigned long iseed;
    float path,de,t;

    nover=100*ncity;        /* Maximum number of paths tried at any temperature. */
    nlimit=10*ncity;        /* Maximum number of successful path changes before continuing. */
    path=0.0;
    t=0.5;
    for (i=1;i<ncity;i++) { /* Calculate initial path length. */
        i1=iorder[i];
        i2=iorder[i+1];
        path += ALEN(x[i1],x[i2],y[i1],y[i2]);
    }
    i1=iorder[ncity];       /* Close the loop by tying path ends together. */
    i2=iorder[1];
    path += ALEN(x[i1],x[i2],y[i1],y[i2]);
    idum = -1;
    iseed=111;
    for (j=1;j<=100;j++) {  /* Try up to 100 temperature steps. */
        nsucc=0;
        for (k=1;k<=nover;k++) {
            do {
                n[1]=1+(int) (ncity*ran3(&idum));      /* Choose beginning of segment... */
                n[2]=1+(int) ((ncity-1)*ran3(&idum));  /* ...and end of segment. */
                if (n[2] >= n[1]) ++n[2];
                nn=1+((n[1]-n[2]+ncity-1) % ncity);    /* nn is the number of cities */
            } while (nn<3);                            /* not on the segment. */
            idec=irbit1(&iseed);    /* Decide whether to do a segment reversal or transport. */
            if (idec == 0) {        /* Do a transport. */
                n[3]=n[2]+(int) (abs(nn-2)*ran3(&idum))+1;
                n[3]=1+((n[3]-1) % ncity);     /* Transport to a location not on the path. */
                de=trncst(x,y,iorder,ncity,n); /* Calculate cost. */
                ans=metrop(de,t);              /* Consult the oracle. */
                if (ans) {
                    ++nsucc;
                    path += de;
                    trnspt(iorder,ncity,n);    /* Carry out the transport. */
                }
            } else {                           /* Do a path reversal. */
                de=revcst(x,y,iorder,ncity,n); /* Calculate cost. */
                ans=metrop(de,t);              /* Consult the oracle. */