Predictions for Scientific Computing Fifty Years From Now

Lloyd N. Trefethen
Oxford University Computing Laboratory
LNT@comlab.ox.ac.uk

This essay is adapted from a talk given June 17, 1998 at the conference "Numerical Analysis and Computers - 50 Years of Progress", held at the University of Manchester in commemoration of the 50th anniversary of the Mark 1 computer.

Fifty years is a long, long time in any technological field. In our own field of scientific computing or numerical analysis, think back to 1950. Around the world, numerical problems in 1950 were solved with slide rules and on paper, or with mechanical calculators that had little in common with today's computers. Some of the algorithms we use today were in existence then, but on the whole, the last fifty years have changed numerical computing beyond recognition. The next fifty will do it again.

My remarks consist of twelve predictions. I did not aim for these to orbit around a unifying theme, but that is nevertheless what happened.

1. We may not be here.

In the 20th century, everything technological seems to be changing exponentially. This raises a problem. Exponentials do not go on for ever; something happens to them. Now in my opinion, many of the exponentials we are sitting on have not yet started to level off. Here at the beginning of the third millennium, biology is just beginning its great explosion, and although electronics got a head start of a few decades, it is hardly slowing down yet.

The presence of exponentials all around us overshadows any attempt to predict the future. I feel I must dwell for a moment on one of the shadows, one that has nothing specifically to do with computing. In my opinion, our position on an exponential trajectory is evidence that technological civilisations do not last very long. I do not claim that our civilisation must end within fifty years, or five hundred, but I do believe there is reason to doubt it can survive for, say, ten thousand years.

My reasoning has nothing to do with any particular cataclysm that may befall us, such as environmental catastrophe or exhaustion of resources or asteroid impact or biological or nuclear war. The argument is more abstract, and it goes like this. The industrial explosion on earth began just two or three hundred years ago. Now if technological civilisations can last tens of thousands of years, how do you explain the extraordinary coincidence that you were born in the first few generations of this one, in the very first century of radio, television, light bulbs, telephones, phonographs, lasers, refrigerators, automobiles, airplanes, spacecraft, computers, nuclear power, nuclear weapons, plastics, antibiotics, and genetic engineering?
I believe that the explanation of our special position in history may be that it is not so special after all, because history tends not to last very long. This argument has been called the Copernican Principle by J. R. Gott of Princeton University.

There is a second line of evidence, sometimes known as Fermi's paradox, that also suggests that technological civilisations are short-lived. The human race is not an outpost of a galactic society; it is a domestic product. How can we explain this, if technological civilisations last tens of thousands of years? An ages-old technological civilisation will expand across its galaxy, simply because it can. Don't ask why, for expanding is what life does; if one species doesn't, another will replace it. Yet in 100,000 years of expanding at one hundredth the speed of light, a civilisation can spread one thousand light years, a distance encompassing millions of stars. Is it plausible that technological civilisations are so rare as to arise on only one star among millions? I believe that the explanation of the emptiness out there may be that technological civilisations perish before they start to spread across their galaxy, or that they start spreading, then perish in a cataclysm so great as to take the galaxy with them.

Suddenly the problem of predicting fifty years of scientific computing begins to look easy. Let's get down to it.

2. We'll talk to computers more often than type to them, and they'll respond with pictures more often than numbers.

A big change in the last twenty years has been the arrival of graphical interfaces. When I was a graduate student at Stanford, we played with some Alto machines donated by Xerox, early workstations featuring windows, icons, mice and pointers, but I thought these were party tricks, too gimmicky to catch on. Today the descendants of the Altos have driven other machines to extinction. It takes no special insight to predict that soon an equally great change will occur as we take to interacting with computers by speech. It has been a long time coming, but this transformation is now around the corner. It is good fun to imagine what computer graphics will be like in fifty years; I hardly dare, except to note that three-dimensional virtual reality will be as ordinary as Velcro.

Curiously, though the development of speech and graphics will make our numerical work ever more human in feel, less obviously numerical, the underlying computations will continue to be based on numbers represented digitally to many digits of precision. The digital idea is what makes everything possible, and it is not going to go away. This is one sense in which the scientists and engineers of the future will be further removed from the details of computing than we are, just as we are further removed than were our parents.

3. Numerical computing will be adaptive, iterative, exploratory, intelligent, and the computational power will be beyond your wildest dreams.

Adaptive numerical computing is one of the glories of the computer age. Gauss quadrature was invented two centuries ago, but adaptive quadrature didn't arrive until the 1960s. Adaptive ODE solvers came soon after, and turned the solution of most ordinary differential equations into the use of a black box. Partial differential equations are not yet boxed in black, but the trend is in that direction. As time goes by, adaptivity managed by the computer's intelligence becomes more and more widespread. Computers are not as wise as people, but they can explore a forest of possibilities faster than we can. In fifty years, this is how most numerical problems will be solved. We will tell the machine what we want, and the machine, an intelligent control system sitting atop an encyclopaedia of numerical methods, will juggle computational options at incomprehensible speed until it has solved the problem to the accuracy required. Then it will give us the answer; and if we insist, it may even tell us something of how it got there.
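To make the adaptive idea concrete, here is a minimal sketch of adaptive quadrature in the spirit described above. It is an illustration of mine, not a routine from the talk; the function names, tolerance-splitting strategy, and error constant are ordinary textbook choices. A Simpson's rule estimate on an interval is compared with the estimate from its two halves, and the interval is subdivided only where the discrepancy suggests the local error is still too large, so the work concentrates itself where the integrand is difficult.

```python
def simpson(f, a, b):
    """Simpson's rule estimate of the integral of f over [a, b]."""
    c = (a + b) / 2.0
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

def adaptive_quad(f, a, b, tol=1e-8):
    """Recursive adaptive Simpson quadrature with a simple error indicator."""
    c = (a + b) / 2.0
    whole = simpson(f, a, b)
    left, right = simpson(f, a, c), simpson(f, c, b)
    # The gap between the coarse and refined estimates indicates the local
    # error; 15 is the classical safety factor for Simpson's rule.
    if abs(left + right - whole) < 15.0 * tol:
        return left + right + (left + right - whole) / 15.0
    return adaptive_quad(f, a, c, tol / 2.0) + adaptive_quad(f, c, b, tol / 2.0)

if __name__ == "__main__":
    from math import pi, sqrt
    print(adaptive_quad(lambda x: 4.0 / (1.0 + x * x), 0.0, 1.0), pi)  # ~pi
    # A peaked integrand: subdivision automatically clusters near x = 0.5.
    print(adaptive_quad(lambda x: 1.0 / sqrt(abs(x - 0.5) + 1e-6), 0.0, 1.0))
```

Adaptive ODE solvers work in the same spirit, adjusting the step size as the integration proceeds so that the user never has to choose it.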
The power unleashed by this kind of computing will be vast. Large parts of physical reality will be simulated in real time before our eyes, with effects so far beyond what the men of 1950 could envision that the word "computation" may begin to seem old-fashioned and drop out of use.

When computations are all intelligent, when everything is embedded in a control loop, the mathematical landscape will change. One distinction that means a great deal to us today is that, broadly speaking, linear problems can be solved in one pass, but nonlinear ones require iteration. In fifty years, when everything is embedded in an iterative loop anyway, this difference will have diminished. For the same reason, today's big distinction between forward and inverse problems will have faded too.

My next prediction is a corollary.

4. Determinism in numerical computing will be gone.

Recently our family rented a car for a holiday. One evening we wanted to look at the stars, which meant turning off the dome light. We couldn't figure out how to do it! A decade ago, closing the doors and flipping a switch would have sufficed, but nowadays cars are more intelligent. In some, the light stays on for a fixed period after you close the doors, and in ours, the situation was even more complicated. There was an interlock with the engine, plus some additional intelligence that we never got to the bottom of. Eventually we got the light off, but we were not quite sure how we had done it, or if we could do it the same way again.

Have you noticed how many of our machines behave this way? Photocopiers used to be deterministic, but nowadays they have complicated arrays of internal states. The first copy may come out in landscape orientation, but the second in portrait, if the machine decides in between that it ought to change modes. Typewriters used to be predictable too: you knew what would happen when you pressed a key. Nowadays, in Word or LaTeX, changing one character of input may alter the whole document in startling ways. Why, at motorway rest stops, even toilets are intelligent devices now whose states of mind we don't fully understand, and when you're finished with the toilet, you have two further negotiations to undertake with the intelligent sink and the intelligent hand drier!

What's true of toilets will be true of numerical computations. In fifty years, though the answers you get will be accurate without fail to the prescribed precision, you will not expect to duplicate them exactly if you solve the problem a second time. I don't see how this loss of determinism can be stopped. Of course, from a technical point of view, it would be easy to make our machines deterministic by simply leaving out all that intelligence. However, we will not do this, for intelligence is too powerful.

In the last fifty years, the great message communicated to scientists and engineers was that it is unreasonable to ask for exactness in numerical computation. In the next fifty, they will learn not to ask for repeatability, either.

5. The importance of floating point arithmetic will be undiminished.

So much will change in fifty years that it is refreshing to predict some continuity. One thing that I believe will last is floating point arithmetic. Of course, the details will change, and in particular, word lengths will continue their progression from 16 to 32 to 64 to 128 bits and beyond, as sequences of computations become longer and require more accuracy to contain accumulation of errors. Conceivably we might even switch to hardware based on a logarithmic representation of numbers. But I believe the two defining features of floating point arithmetic will persist: relative rather than absolute magnitudes, and rounding of all intermediate operations.

Outside the numerical analysis community, some people feel that floating point arithmetic is an anachronism, a 1950s kludge that is destined to be cast aside as machines become more sophisticated. Computers may have been born as number crunchers, the feeling goes, but now that they are fast enough to do arbitrary symbolic manipulations, we must move to a higher plane. In truth, no amount of computer power will change the fact that most numerical problems cannot be solved symbolically. You have to make approximations, and floating point arithmetic is the best general purpose approximation idea ever devised. It will persist, but get hidden deeper in the machine.
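As a small present-day illustration of those two defining features, the snippet below assumes ordinary IEEE double precision arithmetic as it exists now, not any future format; it shows that precision is relative to magnitude and that every intermediate operation is rounded to the nearest representable number.

```python
# Rounding of all intermediate operations: every +, -, *, / returns the
# nearest representable number, so tiny discrepancies appear immediately.
print(0.1 + 0.2 == 0.3)          # False: 0.1, 0.2, and the sum are each rounded
print((1.0 + 1e-20) - 1.0)       # 0.0: the addition was rounded back to 1.0

# Relative rather than absolute magnitudes: machine epsilon is the gap
# between 1.0 and the next representable number, and the same *relative*
# gap holds at every scale.
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0
print(eps)                        # about 2.2e-16 in IEEE double precision
print((1.0e16 + 1.0) - 1.0e16)    # 0.0: an absolute change of 1 is invisible
                                  # at magnitude 1e16, where the spacing is 2.0
```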
6. Linear systems of equations will be solved in O(N^(2+ε)) flops.

Matrix computations as performed on machines around the world typically require O(N^3) floating point operations ("flops"), where N is the dimension of the problem. This statement applies exactly for computing inverses, determinants, and solutions of systems of equations, and it applies approximately for eigenvalues and singular values. But all of these problems involve only O(N^2) inputs, and as machines get faster, it is increasingly aggravating that O(N^3) operations should be needed to solve them.

Strassen showed in 1968 that the O(N^3) barrier could be breached. He devised a recursive algorithm whose running time was O(N^(log2 7)), approximately O(N^2.81), and subsequent improvements by Coppersmith, Winograd and others have brought the exponent down to 2.376. However, the algorithms in question involve constants so large that they are impractical, and they have had little effect on scientific computing. As a result, the problem of speeding up matrix computations is viewed by many numerical analysts as a theoretical distraction. This is a strange attitude to take to the most conspicuous unsolved problem in our field! Of course, it may be that there is some reason why no practical algorithm can ever be found, but we certainly do not know that today. A "fast matrix inverse" may be possible, perhaps one with complexity O(N log N) or O(N^2 log^2 N), and discovering it would change everything.
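For readers who have not seen it, here is a plain textbook sketch of Strassen's recursion, not a practical implementation and not anything proposed in the text: by forming seven half-size products instead of the obvious eight, the recurrence T(N) = 7 T(N/2) + O(N^2) gives a running time of O(N^(log2 7)), roughly O(N^2.81). The power-of-two restriction, the crossover to ordinary multiplication, and the use of NumPy are simplifications of mine.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply two n-by-n matrices, n a power of two, by Strassen's method."""
    n = A.shape[0]
    if n <= cutoff:       # below the crossover, ordinary O(n^3) multiplication wins
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    # Seven half-size products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    top = np.hstack([M1 + M4 - M5 + M7, M3 + M5])
    bottom = np.hstack([M2 + M4, M1 - M2 + M3 + M6])
    return np.vstack([top, bottom])

if __name__ == "__main__":
    n = 256
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    print(np.allclose(strassen(A, B), A @ B))   # True, up to rounding errors
```

The crossover parameter matters in practice: on small blocks the extra additions cost more than the multiplication they save, which is one small taste of why constant factors decide the fate of such schemes.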
In 1985 I made a bet with Peter Alfeld of the University of Utah that a matrix algorithm with complexity O(N^(2+ε)) for any ε > 0 would be found within ten years. None was, and I gave Alfeld a check for $100. We renewed our bet, however, to 2005, and in that year I will renew it again if necessary. One morning, with luck, the headlines will appear. I think fifty years should be long enough.

7. Multipole methods and their descendants will be ubiquitous.

The conjugate gradient and Lanczos algorithms were invented around 1950, and their story is a curious one. Nowadays we have no doubt as to what these methods are good for: they are matrix iterations, which for certain structured matrices bring those O(N^3) operation counts down to O(N^2) or even better. Though there are constants hidden in the "O", these methods are often much faster than Gaussian elimination and its relatives when N is large.

What is curious is that Hestenes, Stiefel, Lanczos and the rest didn't see this coming. In the 1950s, N was too small for conjugate gradients and Lanczos yet to be competitive, but all the mathematical pieces were in place. These men knew something of the convergence properties of their iterations, enough to have been able to predict that eventually, as machines grew faster, they must beat the competition. Yet they seem not to have made this prediction. A numerical analyst writing an essay like this one in 1960 might not have mentioned conjugate gradients at all.

It is with this history in mind that I mention multipole methods, by which I mean methods related to the recent algorithms of Rokhlin and Greengard for N-body problems and integral equations. Times have changed, and we are all asymptotickers. When multipole methods were being invented in the 1980s, they were competitive in 2D but not 3D. Yet Rokhlin and Greengard saw immediately that these techniques reduced operation counts from O(N^2) to O(N), give or take a logarithmic factor, so how could they not win in the long run? And so they will.

The success of multipole methods will exemplify a general trend. As time goes by, large-scale numerical computations rely more on approximate algorithms, even for problems that might in principle be solved exactly in a finite number of steps. Approximate algorithms are more robust than exact ones, and they are also often faster.

8. Breakthroughs will have occurred in matrix preconditioners, spectral methods, and time stepping for partial differential equations.

It is hard not to be optimistic about merely technical hurdles. The business of matrix preconditioners is vitally important, but it is a jungle these days; surely improvements are in store! Spectral methods for PDEs are in a similar state: remarkably powerful, but varying awkwardly from one application to the next. Order is needed here, and it will come. As for time-stepping, this is the old problem of stiffness, reasonably well in hand for ODEs but still unsolved in a general way for PDEs. To this day, the CFL restriction constrains our computations all across the range of science and engineering. To get around this constraint, time steps are taken smaller than we would wish, huge matrix