IB Computers
Highly Intelligent Computers - The Future

First generation computer

A computer of the first generation is classified as beginning around 1951, characterized by physically large units using vacuum tube circuitry, stored programs, and mostly magnetic tape for auxiliary storage. The UNIVAC I, commercially available in 1951, was a first generation computer. First generation computers are now museum pieces.

Second generation computer

A computer of the second generation is classified as beginning in the late 1950's and early 1960's. It is characterized by physically smaller units that produced less heat and required less power because they used solid-state transistor circuitry, and that used disks as well as tape for auxiliary storage. The IBM 1401 and the Honeywell 400, from the early 1960's, are examples of second generation computers.

Third generation computer

A computer of the third generation is classified as beginning in the mid 1960's and continuing into the early 1970's, characterized by physically smaller computers using integrated circuits on chips for most of their circuitry, while utilizing disk storage and on-line terminals. The third generation started roughly in 1964 with the advent of the IBM System/360 series.

Fourth generation computer

A computer of the fourth generation is characterized by physically small, lower-cost microcomputers using microprocessors and memory chips. There is disagreement as to whether this is a new generation or merely an advanced stage of the third.

Fifth generation computer

The next generation of computers, predicted to be in use before the year 2000, will be referred to as fifth generation computers. Their increased computational power is expected to come from parallel processing - the ability to process several programs at the same time. They are expected to be true knowledge systems, able to combine one set of facts with other sets to produce sophisticated new solutions; no computer has yet accomplished that task. To play the central role in society that present-day scientists envision, these machines will need to be easier to use and will have to understand spoken, written, and graphic input.

AI, Artificial Intelligence

"AI" is the field of computer science that seeks to understand and implement computer-based technology that can simulate the characteristics of human intelligence. A common goal of artificial intelligence work is developing computer programs capable of learning from experience in order to solve problems. "AI" refers to the development or capability of a machine that can perform functions normally associated with human intelligence, such as learning, adapting, reasoning, self-correction, and automatic improvement. In a more restricted sense, artificial intelligence refers to the study of techniques for the more effective use of digital computers through improved programming practices. PROLOG and Lisp are the most popular programming languages for the development of non-numeric programs, particularly in the field of Artificial Intelligence.

Expert Systems

An expert system is a program which reasons, using an "experience" database and a set of rules (heuristics), to solve complex problems - in line with AI's long-term goals. Expert systems are an attempt to capture the knowledge of an expert in a particular field and to make that knowledge available to other people via a computer program.
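As a point of comparison, here is a minimal sketch of decision rules written directly in Pascal by a programmer. The symptoms, rules, and conclusions are invented for illustration; they have nothing to do with any real system's rule base.

   Program RuleSketch;
   { Decision rules frozen into the program text. Only a     }
   { programmer can change or extend them - exactly the      }
   { situation an expert system tries to avoid.              }
   Var
     fever, cough, rash : Boolean;
   Begin
     fever := True;      { sample patient inputs }
     cough := True;
     rash  := False;
     If fever And cough Then
       WriteLn('Rule 1 fires: consider influenza')
     Else If fever And rash Then
       WriteLn('Rule 2 fires: consider measles')
     Else
       WriteLn('No rule fires: no conclusion reached');
   End.

An expert system keeps such rules as data rather than as program text, so that new rules can be added without reprogramming.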
Capturing an expert's knowledge means that a knowledge-capture (learning) program must be developed, so that the expert (who is NOT a programmer) can tell the computer what it is supposed to do. The learning program allows a consultation to occur between the expert and the program, during which the expert describes his line of reasoning when solving a problem. The program then attempts to apply the "rules" which it has learned from the expert to further problems, and the expert comments on whether the resulting decisions were good or bad. The program modifies its behavior in response to the expert's criticisms and further explanations. Such a program is known as a "development environment." Contrast this situation with a more traditional computer program, where a programmer must write the decision rules in a language such as Fortran or Pascal, as in the sketch above. The other part of the system is the "engine", which executes the rules in response to new inputs such as a set of medical symptoms. One of the first and most successful expert systems was a medical diagnosis system called MYCIN.

Neural networking

Neural networking, or synthetic neurology, is the promising new computer field dealing with the task of using computers to simulate the processing power of the complex human brain. Although this task is extremely complex, the first traditional problems to be addressed involve pattern and speech recognition, object classification, machine learning, dynamic adaptation, and a host of other real-world needs. In the near future, it should be possible to design and train synthetic neural networks to convert written text to speech, understand continuous unbroken speech, read poorly written handwriting, win complex board games, and perform other tasks that require a high degree of associative recognition.
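The units such a network is built from are individually very simple. The following sketch shows a single artificial neuron, the usual building block; the weights here are invented for illustration, whereas a real network connects many such units and adjusts their weights automatically during training.

   Program NeuronSketch;
   { One artificial neuron: weigh the inputs, sum them, and  }
   { "fire" if the sum passes a threshold. With the invented }
   { weights below, the neuron computes a logical AND of its }
   { two inputs.                                             }
   Var
     x1, x2       : Real;    { inputs             }
     w1, w2, bias : Real;    { adjustable weights }
     sum          : Real;
   Begin
     x1 := 1.0;  x2 := 1.0;              { sample input pattern }
     w1 := 0.7;  w2 := 0.7;  bias := -1.0;
     sum := w1*x1 + w2*x2 + bias;
     If sum > 0 Then
       WriteLn('Neuron fires (output 1)')
     Else
       WriteLn('Neuron silent (output 0)');
   End.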
Robotics

Robotics is the area of artificial intelligence dealing with the creation and use of stand-alone hybrid computer systems, called robots, that perform physical and computational activities. Robots are equipped with sensing devices for detecting changes in their operational environment, a calculating mechanism for making decisions, and a guidance mechanism for directing their physical actions. Robots are used extensively in manufacturing, performing such repetitive tasks as welding, painting, and riveting.

Complexity

Some tasks are "intrinsically complex": ANY algorithm which performs such a task MUST be extremely complex, and thus probably too slow to be practical. Some of these problems may simply be awaiting a brilliant insight which will reveal some previously unrecognized simplicity. The following activities are automatic and easy for human beings, but have so far resisted all attempts by programmers to create computer systems with these abilities:

   Natural Language - producing a computer which can use a human language at the same level of competence as a human being.
   Problem Solving - the ability to FIND or CREATE a solution to a NEW problem, without being told what to do.
   Knowledge Representation - the ability to store and retrieve memories, and to use REASONING to produce NEW knowledge from these memories.
   Learning - rote learning, learning from experience, and the ability to design and analyze experiments.
   Vision - not just optically capturing an image; the difficult part is analyzing, understanding, and recognizing the content of that image.

These tasks belong to a set of problems known as "AI-complete." This means that if anyone ever solves one of them, he will be able to solve all of the others in a similar fashion - the problems are "equivalent." New insights may provide more productive ways to attack them.

Fractals

Fractal theory, developed by Benoit Mandelbrot (1977), is a mathematical theory which allows apparently complex geometric forms to be described by astoundingly simple mathematical formulae. This may allow a computer to manipulate models of the physical world according to much simpler rules than was previously believed possible. The major appeal of fractals is that a very short description can reproduce a seemingly vast amount of physical detail. Two computer scientists in the United States have patented a fractal-based data-compression scheme for digital images, allowing animated sequences to be reproduced in real time from a very small database. The AI-complete problems above may benefit from such a compact, fast storage algorithm.

Alan Turing (1912-1954)

Turing was a British mathematician who published a paper in 1937 titled "On Computable Numbers, with an Application to the Entscheidungsproblem." The HALTING PROBLEM, which that paper attacks, is a very abstract, theoretical, fundamental question in computer science: is it possible to write a program which, upon reading another computer program, can DECIDE whether that program will eventually halt or not?

For example, even a novice programmer will quickly point out that the following program will never halt:

   Var x : real;
   Begin
     x := 0;
     Repeat
       WriteLn(x);
     Until x > 2;
   End.

On the other hand, the following program will certainly halt:

   Var x : real;
   Begin
     x := 0;
     Repeat
       WriteLn(x);
       x := x + 1;
     Until x > 2;
   End.

But the following is a lot more difficult to decide:

   Var x, n : real;
   Begin
     x := 0;
     n := 1;
     Repeat
       WriteLn(x);
       x := x + 1/n;
       n := n * 2;
     Until x > 2;
   End.

If you run this program, you see that x gets closer and closer to 2, ever so slowly. At some point, maybe it will make it and cross over the boundary - then again, maybe not. If we RUN the program to find out whether it halts, and it never does, at what point will we DECIDE that we have waited long enough? This is the fundamental question. It is clear that the HALTING-DECISION program cannot simply RUN other programs to make its decision; it will need to employ some sort of complex logic.

Turing proved that it is NOT POSSIBLE to write the HALTING-DECISION program. The proof has nothing to do with the size or speed of the computer used, nor with the choice of a programming language. The HALTING program CANNOT be written, no matter what. It is LOGICALLY impossible!

Turing's Machine

Turing's study of the Halting Problem is all the more impressive when one considers that it was published in 1937, before the first general-purpose computer was ever built. That is, before the computer was ever invented, Turing predicted something that it would not be able to do. Turing's argument is purely theoretical. In order to study the problem, he invented the "Turing Machine", a mythical machine which has a read/write head, an infinitely long tape for storage, and a controlling program consisting of "states." The tape is divided into cells, each of which contains one character, and can move left and right under the read/write head. In each cycle, the Turing Machine reads the current character, changes into a new state, writes a new character in the current cell, and then moves the tape one cell to the left or right.
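A machine this simple can be simulated in a few lines. The sketch below runs the read-write-move cycle just described for an invented machine (it needs only one state, so no state variable appears); a Pascal string stands in for the tape, which in theory is infinite.

   Program TuringSketch;
   { An invented one-state Turing Machine: read the current  }
   { cell, write a new character, move the head one cell.    }
   { This machine flips 0s and 1s and halts at the first     }
   { blank, marked '.' here.                                 }
   Var
     tape : String;
     head : Integer;
   Begin
     tape := '0110.';
     head := 1;
     While tape[head] <> '.' Do
     Begin
       If tape[head] = '0' Then      { read, then write }
         tape[head] := '1'
       Else
         tape[head] := '0';
       head := head + 1;             { move one cell }
     End;
     WriteLn('Halted. Tape now reads: ', tape);
   End.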
Turing proved that such a machine can perform ANY task which a computer can perform. He also proved that the Turing Machine cannot solve the Halting Problem. Hence, no computer can solve the Halting Problem. Turing's Machine is just one example of an abstract automaton - a theoretical incarnation of a digital computer. A spreadsheet can also be viewed as such an automaton, but one with an actual purpose. Another example is the Game of Life.

The Halting Problem is unsolvable, and there are other problems which cannot be programmed, no matter what machine or what language is used. Such problems are called Non-Computable or Non-Algorithmic. It should be noted that it is possible to PROVE that these problems CANNOT be solved; it is not just a matter of waiting for someone with a clever idea.

Computability

The Halting Problem is just one example of a class of problems which are called NON-COMPUTABLE. These are problems which CANNOT be solved by a computer program (or any step-by-step algorithm). A simple example of such a problem is to write out the entire list of all integers: an algorithm exists to start the list and to continue it, but the problem is infinite and therefore cannot be completed. There are many other, less trivial examples. It is not a matter of choosing the right machine - some problems simply CANNOT be solved by a computer.

NP = Non-Polynomial

Some computable problems are doomed to require extremely long amounts of time to finish. Other problems are not so bad. A bubble sort needs an amount of time proportional to N², where N is the number of items being sorted. Calculating 2ⁿ for an integer n requires n multiplications, so the time required is proportional to n. Other problems are not so nice. If you have a deck of 52 cards, how many different orders are possible? The answer is 52*51*50*..., or 52! (52 factorial). If there are N cards, then there are N! different ways to order them, so a computer program which must list all of the orderings of N cards must have an execution time proportional to N!. This is a lot worse than N².

Computer scientists divide problems into two categories: POLYNOMIAL time problems and NON-POLYNOMIAL time problems (NP). (Strictly speaking, in the technical literature "NP" abbreviates "nondeterministic polynomial time"; it is used here informally, for problems with no polynomial-time algorithm.) An NP problem has NO ALGORITHM to solve it in a polynomial amount of time. N! is not a polynomial, so the deck-of-cards problem is NP. It is possible to PROVE that some problems cannot have a fast algorithm - that all algorithms for them will be slow (non-polynomial).

A standard NP problem is the Travelling Salesman problem. A salesman must travel to N different cities, visiting every city exactly once, and he wants to figure out which order will be the fastest - some roads have lower speed limits, some have less traffic, and so on. He has all of the data he needs to figure it out. Unfortunately, it takes an extraordinarily long time to find the fastest route, because this is an NP problem.
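A short sketch makes the gap between these growth rates concrete, printing the bubble sort's N² next to the N! of the card orderings (the loop limit of 12 is arbitrary):

   Program GrowthSketch;
   { Polynomial cost (N*N comparisons) versus non-polynomial }
   { cost (N! orderings). Real is used for the factorial     }
   { because it outgrows integers almost immediately.        }
   Var
     n    : Integer;
     fact : Real;
   Begin
     fact := 1;
     For n := 1 To 12 Do
     Begin
       fact := fact * n;
       WriteLn(n:3, '   N*N = ', n*n:4, '   N! = ', fact:12:0);
     End;
   End.

By N = 12 the square is still only 144, while the factorial has already passed 479 million.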
Turing's Test

Turing did not tell us whether AI will ever be achieved. But he did consider the problem of AI in great detail, and gave us a method for deciding that we are there, if we ever arrive. This is called the Turing Test of artificial intelligence: if you cannot tell the difference between the machine's responses and a real human being's responses, then the machine is exhibiting intelligence.

Imagine that you are sitting at your computer, connected by modem to another computer. You type in a request or a command, and the other computer responds. Can you tell whether the response was generated by a human being or a computer program? Maybe some very bored programmer is sitting there typing all that stuff which is being sent to you. If the transmission is coming in at 14400 baud (1800 bytes per second), then you can be pretty sure that it isn't being typed by a human being. But what if it slows down, and you are getting only 2 characters per second? And it is simple text? And it is asking you about the weather, and your favorite games? And it contains some jokes? And some misspelled words? You answer about your favorite game, and ask for the other person's name. The response is HAL. You start up an electronic conversation. Some of the questions and responses coming in are puzzling, and don't really make sense. You start to wonder, "Is this really a person, or just a very clever imitation?" You start asking questions intended to test the program, to make it give itself away.

The key issue here is that you cannot tell anything about HOW the conversation is being generated - you only have the conversation itself. Now if the computer is just giving standard, pre-programmed answers, we don't like the idea that this is called "intelligence"; there should be some THINKING going on. But Turing's point is that it does not matter HOW the responses are generated. A computer does not need to generate the conversation in THE SAME WAY as a human being; it merely needs to generate a human conversation, even if it does so using peculiar methods. Most people get bogged down in the issues of whether the computer is thinking, whether it is self-aware, whether it has a soul, and so on. Turing's idea was that none of that matters: if it SEEMS intelligent, that is good enough.

ELIZA

The hypothetical modem conversation is not at all far-fetched. It already happened in the mid-1960's. Joseph Weizenbaum, working at M.I.T., was playing around with a program which would ask questions the way a therapist does. This is easily done - whatever the patient says, the therapist asks a question based on that. Thus, the therapist never takes the initiative, which is the ideal situation for a computer program. A typical conversation would be something like this:

   Computer: How are you today?
   Human   : Oh, I'm feeling tired.
   Computer: Have you been feeling that way long?
   Human   : Well, I didn't sleep enough last night.
   Computer: Why do you think that is?
   Human   : I had a lot on my mind.
   Computer: Tell me more about that.
   ........

The computer might give very non-specific responses, or it might key in on common words, like HAPPY, BAD, TIRED, etc.
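A minimal sketch of this style of keyword matching, with invented keywords and canned replies (not Weizenbaum's actual script):

   Program ElizaSketch;
   { One exchange in the ELIZA style: scan the reply for a   }
   { trigger word and answer with a canned question, or fall }
   { back to a neutral prompt.                               }
   Var
     line : String;
     i    : Integer;
   Begin
     Write('How are you today? ');
     ReadLn(line);
     For i := 1 To Length(line) Do     { compare in upper case }
       line[i] := UpCase(line[i]);
     If Pos('TIRED', line) > 0 Then
       WriteLn('Have you been feeling that way long?')
     Else If Pos('HAPPY', line) > 0 Then
       WriteLn('What makes you feel happy?')
     Else If Pos('BAD', line) > 0 Then
       WriteLn('Why do you say things are bad?')
     Else
       WriteLn('Tell me more about that.');
   End.

The full program simply repeats this loop, which is why it never needs to take the initiative.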
The story is that Weizenbaum went home one day, but left his program running by accident. One of his colleagues came into the lab, saw the conversation on the screen, and thought the computer was hooked into another terminal somewhere. He typed in a question, got an answer, and, assuming that Weizenbaum was at the other end, continued the conversation. After the computer continued to ask leading questions but give no useful information, the colleague typed some nasty comments "to Weizenbaum" and left in a huff. ELIZA is now famous because it passed Turing's Test - the user did not realize that he was dealing with a machine! Indeed, after further work on the ELIZA program, it was used with moderate success to do actual therapy.

The Real Thing?

The Turing Test, ELIZA, NP functions, fractals - all of this circles around one key question about artificial intelligence: what is it that we want from intelligent machines? Are we after cute tricks, quick fixes, or do we want the "real thing"? If a computer can diagnose the flu and type a prescription for an antibiotic, will patients take the medicine? Does it matter that the computer was following a set of rules when it made the decisions? Is a set of rules good enough, at least for simple tasks? In the near future we will probably see a good many expert systems which can perform tasks such as simple medical diagnoses as well as or better than human beings, even trained professionals. There is no chance that these systems will actually be thinking in the same way that human beings do. Nevertheless, the machine's work may actually be superior to that of a human being.

A great deal of artificial intelligence research has been done in the area of games - not video games, but board games such as chess and checkers. It is already the case that a computer program is the BEST checkers player in the world, that another computer program is as good as the world champion backgammon player, and that chess programs can defeat 99.9% of all human opponents - the other 0.1% are the grandmasters. These programs are not self-aware, and they don't get angry when they lose or happy when they win, but they perform BETTER than human beings. Robots do better work on assembly lines than human beings, in spite of the fact that human eyes and hands are far superior to their robot counterparts. It is still clear that computers do not REASON, do not UNDERSTAND, do not THINK. They still require human direction - they are not self-motivated, despite many science fiction stories to the contrary.

The extreme difficulty of programming "intelligent" systems is clearly illustrated by the following anecdote. In the 1950's, the U.S. Department of Defense tried to develop an English-Russian language translator. It produced the following result:

   English phrase           : Out of sight, out of mind.
   Russian translation      : Nevidno, soshyol suma.
   Retranslation to English : Invisible idiot.

Impossible or Impractical or Unknown

Computer science THEORY tells us that some problems will NEVER be solvable with a computer, while other problems might be solvable, but only VERY slowly. Still other problems look impossible, but are only waiting for a brilliant idea to surface. Looking back at our list of AI-complete problems, we realize that they might be in any of these categories. That is, it might be impossible to program them on a computer, or it might just be impractical. Even if someone could PROVE that these problems have polynomial (i.e. fast) algorithms, it might still be the case that the machines we can build are simply too slow, both now and in the near future. If they are basically impossible, all of the megahertz and megabytes in the world won't help.
If they are NP, then we might have to settle for very SIMPLE-MINDED thinking machines, which can only talk and reason at a very low level. Right now, no one knows exactly which category the AI-complete problems are in. The current state of affairs is that computers are doing what they are good at, and human beings are still doing the thinking.

The sensible developments in the near future of AI will have more to do with promoting cooperation between computer and human workers than with making computers which can replace human beings. Commercial efforts currently center on making computers easier to talk to and easier to use - improving user friendliness. This has little to do with making more powerful computers which can perform more complex tasks. Hardware and software will need to become more powerful in order to recognize human handwriting or respond to verbal commands, but in the end the computers will still perform the same tasks as before; they will not UNDERSTAND what they hear or read. True "thinking machines" are still not within sight. The human brain is still many orders of magnitude more complex and more versatile than the most powerful computer systems.