[Figure 1.2 Baking a cake: the recipe (software); the oven, utensils, and baker (hardware); the ingredients; and the resulting cake.]

Recipes, as just mentioned, are called algorithms here, while the area of human study, knowledge, and expertise that concerns algorithms will be termed algorithmics in this book. The analogy drawn here has been made as exact as possible: the recipe, which is in a sense an abstract entity, is the algorithm; the formal written version of the recipe, such as the one found in a cookbook, is analogous to a computer program. Software actually refers more to programs—precise representations of algorithms written in special computer-readable languages—than to the algorithms themselves. However, until we discuss programming languages in Chapter 3, this distinction is quite immaterial.

We confront algorithms wherever we go. Many everyday processes are governed by algorithms: changing a flat tire, constructing a do-it-yourself cabinet, knitting a sweater, dividing numbers, looking up a telephone number, updating a list of expenses, or filling out an income tax form. Some of these (division, for example) might be more immediately related in our minds to computers than others (cabinet construction, for example), but this is of less concern to us here. Although computers are fundamental to the topic of this book, we shall not concentrate on their physical aspects at all, except implicitly in parts of Chapters 3 and 9. It is with their spirit that we are concerned; with the recipes that make them tick—with their algorithms.

■ Algorithmics vs. Computer Science

Algorithmics is more than a branch of computer science. It is the core of computer science, and, in all fairness, can be said to be relevant to most of science, business, and technology. The very nature of algorithmics renders it particularly applicable to those disciplines that benefit from the use of computers, and these are fast becoming an overwhelming majority.
People have been known to ask: "What really is computer science? Why don't we have submarine science, dishwasher science, or telephone science?" Telephones and dishwashers, it might be argued, are as important to modern life as computers are; perhaps more so. A slightly more focussed question is whether computer science is subsumed by such classical disciplines as mathematics, physics, neuroscience, electrical engineering, linguistics, logic, and philosophy.

This book does not attempt to answer these questions. It is hoped, however, that the book will implicitly convey something of the uniqueness and universality of algorithmics, and hence something of the importance of computer science as an autonomous—albeit young—field of study. Since computers could conceivably restrict the generality of algorithmics, some people view the unavoidable link between the two as unfortunate. In fact, terming the field "computer science," someone once said, is like referring to surgery as "knife science." Be that as it may, it is clear that algorithmics would never have developed the way it has without that link. However, it is generally agreed that the term "computer science" is misleading, and that something like "information science," "process science," or "the science of the discrete" might be better. Again, we only claim that our subject matter, algorithmics, forms the underpinnings of computer science, not that it replaces it.

Some of the topics we discuss in the sequel, such as the existence of problems that computers cannot solve, have philosophical implications, not only for the limits of the wonderful machines we are able to build, but also for our own limits as mortals with finite mass and a finite life span. The profound nature of such implications notwithstanding, the emphasis in this book is on the more pragmatic goal of acquiring a deep understanding of the fundamentals of machine-executable processes, and the recipes, or algorithms, that govern them.

■ ■ ■

Some History

Let us now review several important milestones in the development of computers and computer science, mainly to illustrate that as an orderly scientific discipline the field is extremely young.

Somewhere between 400 and 300 B.C., the great Greek mathematician Euclid invented an algorithm for finding the greatest common divisor (gcd) of two positive integers. The gcd of X and Y is the largest integer that exactly divides both X and Y. For example, the gcd of 80 and 32 is 16. The details of the algorithm itself are of no concern here, but the Euclidian algorithm, as it is called, is considered to be the first non-trivial algorithm ever devised.

The word algorithm is derived from the name of the Persian mathematician Mohammed al-Khowârizmî, who lived during the ninth century, and who is credited with providing the step-by-step rules for adding, subtracting, multiplying, and dividing ordinary decimal numbers. When written in Latin, the name became Algorismus, from which algorithm is but a small step. Clearly, Euclid and al-Khowârizmî were algorithmicians par excellence.
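For the curious reader, a minimal sketch of Euclid's procedure in Python might look as follows (this uses a modern repeated-remainder formulation rather than Euclid's original presentation, and the name gcd is chosen only for illustration; nothing in the sequel depends on it). The idea is that any number dividing both X and Y also divides the remainder of X divided by Y, so the pair can be repeatedly reduced until the remainder vanishes:

    def gcd(x, y):
        # Any number that divides both x and y also divides x mod y,
        # so the pair (x, y) may be replaced by (y, x mod y) without
        # changing the answer; repeat until the second number is zero.
        while y != 0:
            x, y = y, x % y
        return x

    print(gcd(80, 32))   # prints 16, as in the example above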
Turning from software to hardware, one of the earliest machines to carry out a process controlled by what might be called an algorithm was a weaving loom invented in 1801 by a Frenchman, Joseph Jacquard. The pattern woven was determined by cards with holes punched at various locations. These holes, which were sensed by a special mechanism, controlled the selection of threads and other actions of the machine. It is interesting that Jacquard's loom had nothing to do with the narrow numerical connotation of the term "computation."

One of the most important and colorful figures in the history of computer science was Charles Babbage. This English mathematician, after having partially built a machine in 1833, called "the difference engine," for evaluating certain mathematical formulas, conceived and planned a remarkable machine that he called "the analytical engine." In contrast to the difference engine, which was designed to carry out a specific task, the analytical engine was to have been capable of executing algorithms, or programs, encoded by the user as holes punched in cards. Had the analytical engine been built, it would have been the mathematical analogue of Jacquard's loom, which was in fact its inspiration. Needless to say, Babbage's machine was mechanical in nature, based on levers, cogs, and gears, rather than on electronics and silicon. Nevertheless, the ideas present in his design of the analytical engine form the basis of the internal structure and workings of today's computers. Babbage is generally considered to have lived well ahead of his time, and his ideas were not really appreciated until much later.

Ada Byron, Countess of Lovelace, was Babbage's programmer. She is one of the most interesting figures in the history of computing, and is credited with laying the foundations for programming, more than a hundred years before the first working computer was available.

An American engineer by the name of Herman Hollerith invented a machine, also based on punched cards, that was used by the American Census Bureau to help tabulate the 1890 national census. However, the first general-purpose computers were built only in the 1940s, partly as a response to the computational needs of physicists and astronomers, and partly as a natural outgrowth of the availability of the appropriate electromechanical and electronic devices. Ironically, the Second World War, with its bomb-building and code-cracking activities, also helped. Some of the key figures in this crucial and exciting period were the Englishman Alan Turing, the Americans Howard Aiken, John Mauchly, J. Presper Eckert, and Herman Goldstine, and the famous Hungarian/American mathematician John von Neumann.

Returning to software and algorithmics, the mid-1930s witnessed some of the most fundamental work on the theory of algorithms, yielding results that concern the capabilities and limitations of machine-executable algorithms. It is remarkable that this work, parts of which will be described later in the book, predated the actual materialization of the computer. Nevertheless, it is of universal and lasting importance. Some of the key figures here, all mathematicians, are, again, Alan Turing, the German Kurt Gödel, the Russian Andreĭ A. Markov, and the Americans Alonzo Church, Emil Post, and Stephen Kleene.

The 1950s and 1960s witnessed far-reaching and rapid technological advancements in computer design and construction. This can be attributed to the arrival of the era of nuclear research and space exploration on the one hand, and to the boom in large businesses and banks, and diverse government activity on the other. Precise prediction of various nuclear phenomena required very heavy computing power, as did the planning and simulation of space missions. Space exploration also required advances in computer-supported communication, facilitating reliable
analysis and filtering, and even improvement of data that was communicated to and from satellites and spaceships. Business, banking, and government activity required computers to help in the storage, manipulation, and retrieval of information concerning very large numbers of people, inventory items, fiscal details, and so on.

Interesting evidence of the importance of the technological machine-oriented developments during that period can be found in the names of the world's largest computer company, IBM, and one of the world's largest computer-related professional organizations, the ACM. The former name was coined around 1920 and the latter around the late 1940s. In both cases the "M" comes from the word "machine": International Business Machines, and the Association for Computing Machinery. (IBM evolved from a company formed in 1896 by the aforementioned Herman Hollerith to produce his tabulating machines.)

The recognition of computer science as an independent academic discipline occurred around the mid-1960s, when several universities formed computer science departments. In 1968, the ACM published a widely acclaimed recommendation for a curriculum of courses in computer science, which forms the basis of most current computer science programs of study at the undergraduate level. This curriculum is revised periodically. Today, almost every academic institution has a department of computer science, or a computer science group within its mathematics or electrical engineering departments. The 1960s showed a renewed interest in the 1930s work on algorithmics, and the field has been the subject of extensive and far-reaching research ever since.

We shall not dwell any further on the present technological situation: computers are simply everywhere. We use them to surf the internet, which means that we use them to receive and deliver information, to read, to hear, and to see, and, of course, to browse and buy. There are desktop, laptop, and palm-sized computers, so we need never be without one, and the fast-closing gap between cellular phones and computers is heralding the age of wearable computers. Almost every modern appliance is controlled by a computer, and a single modern car, for example, contains dozens of them. Children request, and get, personal computers for their birthdays; students of computer science in most universities are required to have their own computers for homework assignments; and there is no industrial, scientific, or commercial activity that is not crucially assisted by computers.

■ A Strange Dichotomy

Despite all of this (or possibly as a result of it) the general public is strangely divided when it comes to computer literacy. There are still those who know absolutely nothing about computers, and then there are the members of the ever-growing class of computer literates. Starting with the 10-year-old owners of personal computers, this expanding group of people who use computers on a day-to-day basis includes managers, engineers, bankers, technicians, and, of course, professional programmers, system analysts, and members of the computer industry itself.

Why is this strange? Well, here is a science about which some people know nothing, but about which a rapidly increasing number of people apparently know everything! As it happens, however, the really unusual phenomenon is that large and
important parts of the science of computing are not sufficiently known, not only to members of the first group, but to members of the second group as well.

It is one of the purposes of this book to try to illuminate an important facet of the computer revolution by presenting some of the fundamental concepts, results, and trends underlying the science of computation. It is aimed at both the novice and the expert. A reader with no knowledge of computers will (it is hoped) learn about their "spirit" here, and the kind of thinking that goes into making them work, while seeking elsewhere material concerning their "flesh." The computer-knowledgeable reader, who might find the first couple of chapters rather slow going, will (it is hoped) be able to learn much from the later ones.

■ Some Limitations of Computers

Before embarking on our tour, let us contrast the opening paragraph of this chapter with some feats that current computers are as yet incapable of performing. We shall return to these contrasts in the final chapter of the book, which deals with the relationship between computers and human intelligence.

Currently, computers are capable of on-the-spot analysis of an enormous quantity of data resulting from many X-ray pictures of a human patient's brain, taken from gradually increasing angles. The analyzed data is then used by the computer to generate a cross-cut picture of the brain, providing information about the brain's tissue structure, thus enabling precise location of such irregularities as tumors or excess fluids. In striking contrast, no currently available computer can analyze a single, ordinary picture of the very same patient's face and determine the patient's age with an error margin of, say, five years. However, most 12-year-old kids can! Even more striking is the ability of a one-year-old baby to recognize its mother's face in a photograph it has never before seen, a feat computers are nowhere near imitating (and this is not merely because they have no mothers...).

Computers are capable of controlling, in the most precise and efficient way imaginable, extremely sophisticated industrial robots used to construct complex pieces of machinery consisting of hundreds of components. In contrast, today's most advanced computers are incapable of directing a robot to construct a bird's nest from a pile of twigs, a feat any 12-month-old bird can perform!

Today's computers can play chess on the level of an international grand-master, and hence can beat the vast majority of human players. However, on changing the rules of the game very slightly (for example, by allowing a knight two moves at a time, or by limiting the queen's moves to five squares), the best of these computers will not be able to adapt without being reprogrammed or reconstructed by humans. In contrast, a 12-year-old amateur chess player will be able to play a reasonably good game with the new rules in a very short time, and will become better and better with experience.

As mentioned, these dissimilarities are related to the difference between human and computerized intelligence. We shall be in a better position to discuss these matters further in Chapter 15, after having learnt more about algorithms and their properties.