Software (Copyright © 2016 Piero Scaruffi)
Software didn't even have a name until 1958. While the word had been used before, it was Princeton statistician John Tukey in his article "The Teaching of Concrete Mathematics" who used it to mean "routines, compilers, and other aspects of automative programming" in contrast with the "hardware of tubes, transistors, wires, tapes and the like". During the 1950s most programs came bundled with the computer. The computer manufacturer was in charge of providing the applications, which typically targeted the industry sectors (banking, manufacturing, retailing, etc.) to which the manufacturer was marketing the machines. In other words, the application program was just one of the many accessories that helped sell the machine. There were also user-written programs, but they were not really "sold". They were made available to the community of users of the same machine. "Share" was the name of the IBM user group (originally formed in 1955 by aerospace companies of the Los Angeles area that used the 701) and "Use" was the name of the Univac user group. It was the manufacturer itself (IBM or Univac) that facilitated the flow of know-how within the group. The largest organizations could also afford to maintain in-house teams of programmers for developing ad-hoc applications; but, again, the programs were not for sale. In 1957 John Backus at IBM in New York wrote the Fortran compiler for the IBM 704. The language was attractive because it was very similar to the language of algebra (of variables and propositions) and the compiler was very efficient in the way it generated machine code. This was the language that de facto invented the job of the software engineer. The Fortran language would remain the main programming language for scientific applications until the invention of C. Robert Bemer adapted the Fortran language to the business world and created the Comtran language.
The following year another seminal programming language was introduced: COBOL, defined by the Conference on Data System Languages (CODASYL) and largely based on Grace Murray Hopper's Flow-matic. Cobol's success was due to a different set of circumstances: the US government opted to standardize its computers on Cobol. It also helped that it was meant to be compatible across computers, and that it allowed longer variable names (so that the name of a variable indirectly explained what it represented). Other languages appeared in the following years: the European ALGOL (1958), which used the Backus-Naur Form, LISP (invented by John McCarthy in 1958), Honeywell's FACT (1959), and MAD (the Michigan Algorithmic Decoder) that was based on ALGOL (1959). In 1961 the mathematician George Forsythe at Stanford started an influential "Division of Computer Science". But software was not taught in college in the 1960s. The first master's program in software engineering would only come in 1979 (Seattle University). Nonetheless, both manufacturers and customers were beginning to realize that computers were powerful hardware but their usefulness depended on the software applications. However, given the limitations of the hardware, it was not possible to create large sophisticated applications even if one wanted to. SAGE was the notable exception, a task originally estimated at one million instructions. And SAGE, in fact, did spawn the first software company: RAND Corporation, selected in 1955 to write the code for SAGE, created a new System Development Division that in 1957 was spun off as an independent software company, the System Development Corporation (SDC). This company (a nonprofit until 1969) created from scratch a whole new profession by training about two thousand programmers at a time when most countries had zero and only a few had more than 100.
In 1953 Simon Ramo and Dean Wooldridge, former engineers at Hughes in the Los Angeles area, started a company to work on military projects. In 1958 Ramo-Wooldridge was acquired by Thompson and became TRW, which became a giant supplier of computer services to shady but very lucrative government projects. In the process it mysteriously ended up building a database of virtually all US citizens ranked by their "credit score". John Sheldon and Elmer Kubie of IBM's scientific software division started Computer Usage Company in March 1955 in New York specifically to provide software consulting. The biggest software consulting company in the late 1950s was C-E-I-R (Corporation for Economic and Industrial Research), originally created by the Air Force in 1952 in Washington, and transformed into a consulting firm in 1954 by Herbert Robinson. It had a software division headed from 1956 on by William Orchard-Hays, who had worked with George Dantzig (the guru of linear programming, who had moved in 1952 from the Air Force to the RAND Corporation) and in 1953 had programmed the simplex algorithm on an IBM CPC, a program (largely coded in assembly language by Orchard-Hays' wife) later renamed RSLP1, one of the first open-source programs (in 1958 CEIR renamed it SCROL and in 1967 CEIR was acquired by CDC). Fletcher Jones, one of the founders of the Share user group, started Computer Sciences in Los Angeles in 1959. The "labs" of these software firms became valuable training grounds for self-taught programmers. Incidentally, in 1956 Robert Patrick at General Motors designed a batch processing system (called GM-NAA I/O) for the IBM 704 computer in collaboration with North American Aviation. The Share Operating System of 1959 would be based on this batch system but run on the 709.
Artificial Intelligence
In the 1950s the media began to talk about the "electronic brains".
In 1951 David Shepard, a former cryptographer during World War II, and his friend Harvey Cook, both employed at AFSA (the government agency that would later be renamed the National Security Agency, or NSA), experimenting in the attic of Shepard's house in Arlington (Virginia), devised a computer program capable of recognizing printed text, and went on to build a "reading machine" called "Gismo". Shepard founded Intelligent Machines Research Corporation in 1952 and the first commercial system was installed at Reader's Digest in 1955. When IBM introduced a similar product in 1959, it named it Optical Character Recognition (OCR). In 1951 Princeton University's student Marvin Minsky built SNARC (Stochastic Neural Analog Reinforcement Calculator), a machine that simulated the way the neural network of the brain learns. In 1954 he graduated with a thesis on reinforcement learning.
During the war Warren Weaver, the head of the Applied Mathematics Panel of the US Office of Scientific Research and Development, had been impressed by the success of cryptography in deciphering the Japanese secret code with the simple technique of having machines analyze letter patterns. Having been appointed director of the Natural Sciences Division of the Rockefeller Foundation, he wondered if the same technique could be used to translate languages. In 1946 he discussed the idea with British computer pioneer Andrew Booth. In March 1947 he wrote about it in a letter to Norbert Wiener, and finally, in July 1949, while in New Mexico, Weaver sent a "memorandum" simply titled "Translation" to about 30 friends about using the digital electronic computer (which had just been invented) for language translation. As he wrote: "It is very tempting to say that a book written in Chinese is simply a book written in English which was coded into the Chinese code". That memorandum was influential in convincing the US military to fund research in machine translation. Meanwhile, Harry Huskey at UCLA had used the SWAC for machine translation. In fact, the first article ever published on machine translation came out in May 1949 in the New York Times, written by a journalist who had just visited his lab. It described "a new type of electric brain calculating machine capable not only of performing complex mathematical problems but even of translating foreign languages". One consequence of Weaver's memorandum was that the MIT appointed the Israeli philosopher Yehoshua Bar-Hillel to lead research on machine translation.
Bar-Hillel toured all the labs in 1951, notably Abraham Kaplan's lab at the RAND Corporation, which had published the first paper on resolving ambiguity ("An Experimental Study of Ambiguity and Context", 1950), and in 1952 a "Conference on Mechanical Translation" was organized at the MIT, attended in particular by Leon Dostert, who in 1949 had built (on machines donated by IBM) a computer-based system for language interpretation at the Nuremberg trial of Nazi officers and subsequently had established the Institute of Languages and Linguistics at Georgetown University in Washington. In January 1954 Cuthbert Hurd's team at IBM gave the first public demonstration of a machine-translation system developed jointly with Dostert's group at Georgetown University and running on a 701 (in reality, a bluff to obtain funding from the Armed Forces). Yehoshua Bar-Hillel became convinced that machine translation was impossible without common-sense knowledge and in 1958 published a scathing report. In 1959 the philosopher Silvio Ceccato started a project in Italy funded by the US military, and in 1961 published his theory in the book "Linguistic Analysis and Programming for Mechanical Translation". Unfortunately, Ceccato's machine was destroyed in 1965 by communist demonstrators. In 1954 George Devol, a former radar specialist who had joined Remington Rand, designed the first industrial robotic arm, Unimate; manufactured by Joseph Engelberger at Consolidated Controls Corp, it was first delivered to a General Motors factory in New Jersey in 1959.
But the robots of the time were mostly publicity stunts, and Garco, built in 1953 by Harvey Chapman in his Los Angeles garage out of six aircraft servo systems, was the most popular because it was "employed" by Disney. In 1959 Edwin Shelley at US Industries in Maryland designed the TransfeRobot, a programmable robotic arm (an evolution of numerical control rather than of Hollywood robots).
While most of these machines had little if any intelligence, some young scientists were serious in claiming that intelligent machines were possible. The first influential conference for designers of intelligent machines took place in 1955 in Los Angeles: the Western Joint Computer Conference. At this conference Newell and Simon presented the "Logic Theory Machine", Newell also presented his "Chess Machine", Oliver Selfridge gave a talk on "Pattern Recognition and Modern Computers", and Wesley Clark and Belmont Farley described the first artificial neural network ("Generalization of Pattern Recognition in a Self-organizing System"). In 1956 John McCarthy, then at Dartmouth College, organized there the first conference on Artificial Intelligence.
In 1956 McCarthy also co-edited with Claude Shannon a volume on "Automata Studies" that included papers continuing the mission of the McCulloch-Pitts neuron. Artificial Intelligence quickly split into two competing groups. Allen Newell at RAND Corporation in Los Angeles and Carnegie Mellon University's psychologist Herbert Simon in Pittsburgh unveiled the "Logic Theorist" in 1956 (written by Clifford Shaw at RAND in IPL on the Johnniac computer) and in 1957 the "General Problem Solver". Logic Theorist effortlessly proved 38 of the first 52 theorems in chapter 2 of Bertrand Russell's "Principia Mathematica". A proof that differed from Russell's original was submitted to the Journal of Symbolic Logic, the first case of a paper co-authored by a computer program. In 1958 the trio also presented a program to play chess, NSS (the initials of the three), written in the high-level language IPL that Shaw had invented (Newell had published in 1955 a paper titled "The Chess Machine"). These were computer programs that represented another step in abstracting a Turing machine: not only separating data and instructions, but even knowledge (about the domain of the problem) and inference (the logical methods that lead to solutions). Simon and Newell basically conceived intelligence as mathematical logic, and their programs were therefore symbolic processors. In 1958 John McCarthy published the seminal paper "Programs with Common Sense". Machines can easily perform many repetitive tasks better than humans, but "common sense", i.e. knowledge of the real world, is what really makes us "intelligent". His article spawned the discipline of "knowledge representation". Simon and Newell had focused on the "inference" part of intelligence. McCarthy emphasized that inference is useless unless you also have knowledge from which to infer important facts.
The knowledge-based approach was indirectly sponsored also by the linguistic revolution that started at the MIT, thanks to Noam Chomsky, who had arrived in 1955 at the MIT from the University of Pennsylvania. In 1957 his book "Syntactic Structures" introduced the transformational grammar that worked with logical rules. Chomsky speculated that our mastery of language is due to logical rules that represent our knowledge of the language. In 1959 his review of a book by Burrhus Skinner (perhaps the most influential psychologist of the era) marked the end of the domination of behaviorism and resurrected cognitivism.
In 1959 Arthur Samuel at IBM in New York wrote a program to play checkers, the world's first self-learning program. In 1959 McCarthy and Minsky founded the Artificial Intelligence Lab at the MIT, that would mostly invest in the knowledge-based approach. In parallel, though, another school of thought emerged. The field of "neural networks" (or "connectionism") aimed at simulating the way the brain works: a network of neurons, Warren McCulloch's and Walter Pitts' old idea. One of the most influential books in the early years of neuroscience was "Organization of Behavior" (1949), written by the psychologist Donald Hebb at McGill University in Montreal (Canada). Hebb described how the brain learns by changing the strength in the connections between its neurons. In 1954 Wesley Clark and Belmont Farley at the MIT simulated Hebbian learning on a computer, i.e. created the first artificial neural network (a two-layer network). In 1956 Hebb collaborated with IBM's research laboratory in Poughkeepsie to produce another computer model, programmed by Nathaniel Rochester's team (that included a young John Holland).
In 1957 Frank Rosenblatt at Cornell University designed the "Perceptron", a system that simulated the neural structure of the brain and that could learn by trial and error. The Perceptron was simulated on an IBM 704 at Cornell Aeronautical Laboratory.
In 1959 Norbert Wiener's pupil Oliver Selfridge at the MIT unveiled "Pandemonium", a computer program modeled after the neural networks of the brain and capable of machine learning, which could be used for problems of pattern recognition that eluded existing computers. In 1960 Bernard Widrow at Stanford and his student Ted Hoff (the future inventor of the Intel microprocessor) created ADALINE (Adaptive Linear Neuron, later Adaptive Linear Element), a neural network of McCulloch-Pitts neurons that debuted an important algorithm, the "delta rule" (a precursor of "backpropagation"). The age of theoretical studies on machine vision opened with Larry Roberts' 1960 thesis at the MIT. The issue was how to turn the 2D image captured by a machine into 3D geometric information. It remained purely theoretical speculation because digital imaging was not feasible. Meanwhile, in 1957 Morton Heilig invented the "Sensorama Machine", a pioneering virtual-reality environment.
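The delta rule that ADALINE debuted is simple enough to sketch in a few lines. The following Python snippet is a modern illustration, not Widrow and Hoff's original formulation; the function names and the toy task (learning the logical AND) are mine:

```python
# A minimal sketch of the Widrow-Hoff "delta rule" (least-mean-squares
# learning), the algorithm ADALINE debuted: after each sample, the weights
# are adjusted in proportion to the error between the desired output and
# the neuron's actual linear output.

def train_adaline(samples, targets, lr=0.05, epochs=200):
    """samples: list of input tuples; targets: desired outputs (+1 or -1)."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = sum(w * xi for w, xi in zip(weights, x)) + bias  # linear output
            error = t - y                                        # the "delta"
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Teaching the neuron the logical AND function:
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [-1, -1, -1, 1]
w, b = train_adaline(data, targets)
predictions = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
               for x in data]
```

After training, the thresholded outputs reproduce the AND targets, and the weights settle near the least-squares solution (roughly w = (1, 1), b = -1.5 for this toy task).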
DEC
During the first decade of the computer industry the common assumption had been that size mattered: the bigger the computer, the more "intelligent" it must be; but someone in Boston started working from a different assumption. In 1957 Georges Doriot's ARDC (the American Research and Development Corporation) made a modest investment in a computer company started near Boston by a former SAGE and Whirlwind engineer, Ken Olsen: the Digital Equipment Corporation (DEC).
Its "mini-computer" PDP-1 (Programmed Data Processor), designed by Ben Gurley (who had worked on the TX-0 with Olsen), basically a streamlined version of the TX-0 using a new Philco transistor, was introduced in 1960 at a price of $125,000. The PDP-1 came with a keyboard and a monitor in a much smaller (and cheaper) package than the IBM and Univac computers. It could process 100,000 instructions per second and had a memory of 4,000 words (each word being made of 18 bits). It also came with the first commercially-available graphics display, the Type 30. Its input-output architecture completely changed the design of computers. The first customers included Bolt Beranek & Newman in Boston (a firm founded in 1948 by MIT professors Richard Bolt and Leo Beranek to design acoustic spaces such as symphony halls and auditoria) and the Lawrence Livermore Laboratory in California, while DEC donated one to the MIT's Project MAC. DEC only sold 50 units but created a loyal audience. DEC divulged the inner workings of the machine to motivate the users to contribute hardware and software add-ons. This was the beginning of the "original equipment manufacturer" (OEM), a third party that used the PDP to build a special-purpose system.
IBM
IBM had failed to convince its customers to ditch the (electromechanical) accounting machines in favor of the (electronic) computers, but the 1401 of 1959, a decimal computer engineered by Francis Underwood at the Endicott labs, changed many minds. Its success (12,000 units sold in a decade) was not due solely to the increased speed, smaller size and lower cost: IBM decisively removed the issues that caused hesitation in its customer base. First of all, it was fully transistorized (using the transistor circuits being developed for "Project Stretch"), finally getting rid of the unreliable vacuum tubes and allowing for a shrunken size. Secondly, the software. Very few customers had in-house programmers capable of programming a computer. IBM delivered a whole library of software applications for free. For those who wanted to program the machine, IBM also delivered a new programming language, RPG (Report Program Generator), that emulated the way operators used punched-card tabulators. RPG, in fact, was not a programming language but a new kind of system to produce programs: it used forms, not statements, to describe the desired behavior. The 1401 also included a Fortran compiler and soon added a compiler for Cobol, the "standard" programming language for business applications defined in 1959 by the US government and six computer manufacturers (IBM, Burroughs, Honeywell, RCA, Sperry Rand, Sylvania) at the University of Pennsylvania, basically an evolution of Grace Murray Hopper's Flow-matic (the first Cobol compiler appeared on the RCA 501). In 1962 IBM adopted Cobol as the primary programming language for all its computers. Thirdly, IBM made sure that customers buying the computer instead of the tabulating/accounting machine would reap secondary benefits, and the main one turned out to be the new printer, the 1403, four times faster than the printer of the old 407 accounting machines (IBM's all-time best-seller).
Finally, Eliot Noyes, whom IBM had hired in 1956 to design all its products, let Edgar Kaufmann design the machine's look, and Kaufmann gave it a sleek architecture, hiding (instead of emphasizing) the internal workings of the machine and projecting a feeling of harmony of parts and harmony with the office environment. Kaufmann liked the light blue color, and IBM got a new nickname: "Big Blue". In January 1956 two government agencies had sponsored a project for a supercomputer code-named "Project Stretch": one was the National Security Agency, which needed to process strings (characters), and the other one was the Atomic Energy Commission (AEC), which needed to carry out calculations (numbers). IBM assigned the project to Steve Dunwell at Ralph Palmer's Poughkeepsie laboratory, with Erich Bloch as the chief engineer.
In April 1961 Poughkeepsie completed the new transistorized computer, now renamed IBM 7030, which was good at performing both tasks. The machine was first delivered to the Los Alamos National Laboratory. It was on this project that a German-born scientist, Werner Buchholz, coined the term "byte" (eight bits). Only nine 7030s were built, sold only to government agencies and large research centers, but its architecture laid the foundations for the forthcoming 360, a computer for both the scientific and the business communities (i.e., numbers and strings). The 7030 remained the fastest computer in the world until 1964 and remained in use at the NSA until 1976, but it was not as successful as it should have been because it was not backward-compatible. Nonetheless, the Stretch computer represented a major leap forward in computer architectures: luckily for IBM, it had to please two customers with completely different requirements, and the result was the first computer designed to excel at both number crunching (i.e., scientific applications) and alphanumeric processing (the main task of business applications), a fourth kind of computer after the IAS (scientific applications), the Univac (business applications) and the Whirlwind/SAGE (real-time applications). More successful was the slower IBM 7090 of 1959, which was simply a fully-transistorized version of the 709 of 1957, developed for and first delivered to an Air Force base in Greenland (and used in 1962 by NASA to monitor its first manned orbital flight). It cost $3 million but IBM sold several hundred of them. The core memory coupled with the hard disk and magnetic tapes created a standard for the high end of computers. Visually, the 7090 series introduced the notion of the huge metal boxes connected via invisible cables and spread into a big room to which only specialized personnel had access.
The programmer (now mostly a male) delivered his deck of punched cards to the operators of the room and hours or days later received a print-out with the results of his program. The programmer was now separated from the expensive machine. The operators were the high priests of the computing room. The processor and the memory of the high-end machines of this generation were enclosed in a metal cabinet called the "mainframe", and somehow the term came to identify that generation of computers. IBM had not been the first to commercialize the computer (the Univac had come first) nor the first to transistorize the computer (the NCR 304 had come out a few months before the 1401 in 1959), but IBM had managed to leverage its experience in business machines and the know-how from military projects. After the successful introduction in 1960 of the 7000 series, which fully transistorized the 700 series, IBM owned more than 81% of the computer market. The market wasn't that big anyway. Computers were cumbersome and expensive. In 1960 the price of an IBM 7030 was $13.5 million. They were even more expensive to run. Only governments and a few large corporations could operate one. However, IBM had struck some lucrative contracts. One of them was for automating American Airlines' reservation system. Someone at IBM had noticed that an airline's reservation system was very similar to the monitoring problem solved by SAGE. IBM set up a team at American Airlines' headquarters in Manhattan, with senior programmers such as Robert Head, hired from General Electric's ERMA team, under the direction of John Siegfried. This team basically adapted SAGE to the airline business, and the result was a system called SABRE, the first online transaction processing system. It was first demonstrated in 1960 but began handling all of American's reservations only in 1964, running on a transistorized IBM 7090 and on a multi-tasking real-time operating system.
IBM's disk drive 1301 (made in San Jose and introduced in 1962 for the 7030) was tested on SABRE. It dramatically improved storage density and access time. IBM achieved an important milestone in 1962: the removable disk pack, invented at the San Jose laboratories by Thomas Leary, and commercialized as the IBM Disk Pack (better known as the IBM 1316), first available for use with the 1311 disk drive, which in turn was used as data storage for small business and scientific machines such as the IBM 1401. The removable disk pack offered the convenience of the magnetic tape and the speed of the hard disk at an affordable price. It was the first kind of data storage that truly made the punched card obsolete as a medium for data storage.
The BUNCH
Not many computer startups survived. Control Data (CDC) was formed in 1957 in Minneapolis by disillusioned engineers of Sperry Rand headed by ERA's cofounder William Norris, and soon joined by Seymour Cray. CDC started selling in 1959 what was basically a transistorized 48-bit version of the Univac 1103, the CDC 1604 (first delivered to the navy in 1960), which later became the CDC 3000. A 24-bit version of the 48-bit 1604, the CDC 924, was used at NASA. As a by-product of that project, Cray designed a much smaller and cheaper machine, the CDC 160, which was de facto a minicomputer, introduced in 1960.
In 1955 a Minneapolis company that had become rich during World War II working on technology for bombers and submarines, Honeywell, decided to enter the computer market via a joint venture with Raytheon called Datamatic, which first sold a computer in 1957. A group of computer engineers including Max Palevsky from Packard Bell (an electronics company that manufactured the last computer ever based on Turing's ACE design) formed Scientific Data Systems (SDS) in 1961 in Los Angeles, raising $1 million from Arthur Rock and the Rosenwald family (heirs to the Sears Roebuck fortune). They introduced their first model, the 24-bit silicon computer SDS 910, in 1962: for the time it was basically a mini-computer, meant to challenge IBM and the other mainframe manufacturers. Their first customer was NASA, which would remain SDS' main customer. In 1965 the 910 was upgraded to integrated circuits (the 925), the second computer to use them after RCA's Spectra 70. The SDS 940 was built in April 1966 for the time-sharing system at U.C. Berkeley, funded by DARPA's Project Genie. This machine boasted Atlas-style virtual memory. In 1966 SDS also introduced the 32-bit mainframe computer Sigma 7 to compete directly with the IBM 360. SDS rapidly passed DEC in revenues but its success didn't last long. SDS would be acquired by Xerox in 1969 (when NASA's purchases started declining).
The big computer manufacturers of the USA quickly fell behind IBM, and the press coined the expression "the seven dwarves" for the other seven computer manufacturers: Burroughs, Univac (Sperry Rand), NCR, Control Data Corporation, Honeywell, RCA and General Electric. Some of IBM's rivals actually had more advanced technology. In 1959 RCA launched its fully-transistorized 501, which featured the first COBOL compiler, and in 1964 it introduced the Spectra 70, the first clone of the IBM/360, faster and cheaper, two of whose models already used integrated circuits (manufactured in house). In 1959 NCR introduced the fully-transistorized 304 and in 1960 the smaller and cheaper 315 that could read data from NCR's cash registers. In 1960 Sperry delivered the Livermore Automatic Research Computer (LARC), a multi-processor supercomputer that came out before the Stretch and before the Manchester Atlas (but only two were built). In 1962 Burroughs built a military computer, the D-825, another multi-processing computer, and then the B5000, which used ALGOL as the main programming language. Cray at CDC (or, better, at his Wisconsin laboratory) used faster transistors (developed by Fairchild Semiconductor) to create his first supercomputer, the 6600, delivered in 1964 to the Lawrence Livermore Laboratory in California.
In 1963 Honeywell's Datamatic division introduced a computer, the Model 200, that was compatible with the IBM 1401: it was faster (because it used more up-to-date transistors) and it could run IBM software. General Electric's 635, introduced in 1963, one of the earliest multi-processor computers, was chosen by the MIT and others for their time-sharing systems. GE was the largest user of IBM mainframes but had until then refrained from entering the market, despite building the sophisticated computer that the US Air Force employed for the MISTRAM (MISsile TRAjectory Measurement) project to track the trajectory of rockets. In 1966 General Electric introduced one of the first commercial computers made with integrated circuits, a process computer for industrial control named GEPAC 4020. General Electric, the top electronic company in the world, killed its internal computer projects (despite its success in time-sharing systems) and in 1970 sold its computer business to Honeywell. After RCA sold its computer business to Univac in 1971, and Philco pulled out of the computer market after being acquired by Ford in 1961, the press coined the acronym "BUNCH" (Burroughs, Univac, NCR, Control Data, and Honeywell) for the survivors.
Software Services
In November 1961 Fernando Corbato at the MIT demonstrated the first working time-sharing system, CTSS (Compatible Time Sharing System), which allowed many users to share and remotely access the same computer, an IBM 709 (upgraded to a 7094 in 1962); the idea of time sharing spread to both academic and industrial centers as a way to minimize the cost per user of using a computer.
In 1961 Charles Bachman at General Electric in New York developed the first database management system, Integrated Data Store (IDS), released by GE in 1964 on its 200-series computers. Steve Russell and others at the MIT implemented the computer game "Spacewar" on a PDP-1 (1962), a sign that enthusiastic computer engineers were beginning to see the computer as a vehicle for entertainment. Another influential project was underway at the University of Manchester under the direction of Tom Kilburn: the Atlas Computer. Physically manufactured by Ferranti in 1962, it employed a new technique to vastly expand the memory capacity of the computer: virtual memory. Data and instructions were automatically transferred from RAM to drum memory and back in a manner hidden from the programmer. In 1963 MIT's student Ivan Sutherland demonstrated "Sketchpad" on the MIT's transistorized computer TX-2. Subtitled "A Man-Machine Graphical Communication System", it was the first computer program ever with a Graphical User Interface (GUI) and the first computer program to display three-dimensional objects on a two-dimensional screen. The user was able to point and move a lightpen and create engineering drawings directly on the CRT display.
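The Atlas idea of hidden transfers between fast and slow memory survives today as demand paging, and the principle can be sketched as a toy model. This Python snippet is an illustration of the concept, not the Atlas design; the page size, the RAM capacity and all names are invented:

```python
# Toy model of an Atlas-style "one-level store" (demand paging): the program
# addresses one large flat memory; a small fast RAM holds a few pages, and
# pages are swapped in from the slow drum automatically on a miss.

PAGE_SIZE = 512       # words per page (illustrative value)
RAM_PAGES = 4         # fast-store capacity, in pages

drum = {}             # backing store: page number -> list of words
ram = {}              # fast store: page number -> list of words
lru = []              # page numbers, least recently used first

def _load(page):
    """Bring a page into RAM, evicting the least recently used if full."""
    if page not in ram:
        if len(ram) >= RAM_PAGES:
            victim = lru.pop(0)
            drum[victim] = ram.pop(victim)   # write evicted page back to drum
        ram[page] = drum.pop(page, [0] * PAGE_SIZE)
    if page in lru:
        lru.remove(page)
    lru.append(page)                         # mark as most recently used

def read(addr):
    page, offset = divmod(addr, PAGE_SIZE)
    _load(page)
    return ram[page][offset]

def write(addr, value):
    page, offset = divmod(addr, PAGE_SIZE)
    _load(page)
    ram[page][offset] = value

# The program just reads and writes a big flat address space;
# the paging machinery stays invisible to it, as on the Atlas.
write(10_000, 42)
assert read(10_000) == 42
```

The programmer sees only `read` and `write` on a large address space, which is precisely the abstraction the Atlas pioneered.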
Using the same ARPA funding that funded Sketchpad, in 1964 Tom Ellis at RAND Corporation developed the RAND tablet, a desktop device equipped with a graphical user interface, capable of capturing human gestures and recognizing handwriting (the recognition was carried out remotely on an IBM 360). Two major software ventures were created in Texas. Ross Perot, a former IBM salesman, founded Electronic Data Systems (EDS) in 1962 and de facto invented the business of outsourcing: EDS would gladly implement custom software for any mainframe customer that did not have the in-house resources to do it (many of them). University Computing Company (later renamed Uccel) was founded in 1963 on the campus of Southern Methodist University (one of the earliest college spin-offs) by Sam and Charles Wyly with funding from Swiss billionaire Walter Haefner. Their claim to fame was the packaged product Tape Management System (TMS). Within a decade EDS and Uccel would become two of the largest software companies in the world.
The LASER
A Stanford graduate, Ted Maiman, working at Hughes Research Laboratories in Los Angeles had demonstrated the first laser (a ruby laser) in May 1960, beating the more famous teams of Charles Townes at Columbia University and Arthur Schawlow at Bell Labs, not to mention the very inventor of the laser, Gordon Gould, who had moved to the firm TRG (Technical Research Group) from Columbia (in 1959 Gould had coined the term, which stands for "Light Amplification by Stimulated Emission of Radiation"). The laser was a formidable invention. No other invention would be integrated so quickly into society and become so pervasive in such a short time (bar-code scanners, compact discs, cutting and welding, holography, precision surgery).
Digitizing the Environment
In 1962 Wesley Clark, the creator of the first artificial neural network and of the first transistorized computer at the MIT, designed the small, inexpensive, interactive and user-friendly LINC (Laboratory INstrument Computer), the first minicomputer, specifically to encourage biologists to use computers. Built by his student Charles Molnar, it was used in 1963 by Arnold Starr at the National Institutes of Health (NIH) in Maryland to process the brain waves of cats when they heard sounds, another pioneering neuroscience application. DEC and others were engaged by MIT in 1964 to manufacture LINCs for dozens of biology laboratories around the country.
A "modem" (modulator-demodulator) is a device that turns the analog continuous signal of the telephone line into a series of zeros and ones, i.e. into a digital signal that can be manipulated by a computer. The first modems were built in 1958 by AT&T for the SAGE project, to connect terminals located around the USA to the SAGE computers. The Bell 101 was a big box, the size of a teenager, transmitting bits at a rate of 110 per second. In 1962 AT&T introduced the first commercial modem, the Bell 103, which digitized the signal at a rate of 300 bits per second, and all future modems would run data rates in multiples of 300. In 1963 a standards committee worked out the "American Standard Code for Information Interchange" or "ASCII" to encode character symbols in the digital format of the computer. ASCII remained a rarity on keyboards until 1965, but became the standard after 1967. ASCII terminated the long life of the "Murray code" (the ITA2) and made the Flexowriter obsolete. In 1963 Teletype introduced the Model 33 (originally built for the navy), which was not much faster than the Flexowriter but used ASCII.
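The flavor of 7-bit ASCII with a parity bit, as sent over such serial links, can be sketched in a few lines. This is a simplified illustration: real teletypes also framed each character with start and stop bits.

```python
def ascii_frame(ch):
    """Return the 7 ASCII data bits of `ch` plus an even-parity eighth bit."""
    code = ord(ch)
    assert code < 128, "ASCII is a 7-bit code"
    bits = format(code, '07b')
    parity = str(bits.count('1') % 2)   # makes the total number of 1-bits even
    return bits + parity

print(ascii_frame('A'))   # 10000010: 'A' is 1000001, two 1-bits, so parity 0
```

The receiver recounts the 1-bits: an odd total means a single-bit transmission error.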
NASA
During the 1960s NASA was in charge of the plan to "put a man on the Moon". Computers quickly entered the picture, both for controlling the spacecraft and for "checking out" the parts before the flight. In 1958 IBM had delivered an ASC-15 computer (a magnetic-drum computer) to the Air Force for the guidance system of the Titan missile. That computer was used for the Titan II rocket of 1962, which served both as an intercontinental ballistic missile and as a space launcher for NASA (it was used by NASA for launching the manned Gemini spacecraft in April 1964). IBM expanded the architecture of the ASC-15 to produce the Launch Vehicle Digital Computer (LVDC) that in January 1964 became the first onboard computer to guide a rocket into space (the Saturn SA-5), i.e. the first "autopilot". The Saturn V rocket would take astronauts to the Moon five years later. The biggest problem on the ground was to "check out" the spacecraft as it was built, deployed and readied for takeoff. This involved procedures at multiple sites: the Marshall Space Center in Alabama, the Kennedy Space Center in Florida, and flight control in Texas (the famous Houston center, later renamed Johnson Space Center). The Alabama center had an RCA 110A to communicate with the ASC-15 used in the Saturn rockets, and IBM wrote on that computer the Saturn Operating System and the applications for automating the checkout procedure. All the NASA contractors for the Saturn project were asked to purchase a Control Data Corporation CDC 924A and install it at their factories, while RCA 110A computers were used for the assembled spacecraft at the launch site. The Automatic Checkout System or Acceptance Checkout Equipment (ACE) was online in 1963. In 1964 a more sophisticated ACE was set in place for the Apollo mission, whose spacecraft had as many as eight million parts.
The ACE now involved computers at the sites of the various manufacturers (such as North American Aviation), plus computers in Alabama (assembly), Florida (pre-flight), and Texas (control). It became obvious that the large IBM computers, designed for batch processing, were not ideal for real-time processing, and NASA settled on the CDC-168 minicomputers to interface the multiple RCA 110As and the Apollo guidance computers. This was therefore a testbed for networking multiple different computers and for online data processing. It was also one of the first applications involving the online data transfer of megabytes. Wernher von Braun said: "I personally attribute, as far as the launch vehicle is concerned, the greatest reason [for success] in this field to our automatic checkout procedure".
The 360
Something truly monumental happened in 1964: IBM introduced the System/360, the ultimate result of Project Stretch. IBM's chief architect, Gene Amdahl, had designed a family of computers that were software-compatible and, to some extent, modular, so that customers could migrate and upgrade their software applications at will. IBM understood the value of software. Compatibility across a wide range of computers (all named 360 but actually quite different from each other) was achieved by deploying a read-only microprogram in the control unit of each machine that implemented the exact same set of instructions, although each microprogram was different from the others. Microprograms could even be written to provide compatibility with machines of the past, i.e. to "emulate" other computers. It was an early concept of "virtualization". The compatibility extended to the input-output devices (the "peripherals"), a variety of storage drives, key punches and teletypes that worked with every computer of the 360 series. IBM had two very successful lines of computers: the business 1401 and the scientific 7090. Both were based on technology that was rapidly becoming obsolete. And the very notion of separating scientific and business applications was becoming obsolete too, because business applications increasingly required massive computation while scientific applications also required data management. The 360 series upgraded IBM's technology and unified the two lines of products. Each could be emulated on the 360 by writing the appropriate microprogram. The architecture of the 360 machines was truly inspired by binary logic: there were 16 registers (2 to the power of 4), the word length was 32 bits (2 to the power of 5), and a character was encoded by 8 bits (for a total of 256 characters). Characters were not encoded in ASCII but in IBM's own EBCDIC (Extended Binary Coded Decimal Interchange Code).
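The difference between the two encodings is easy to see with Python's built-in "cp037" codec, a common EBCDIC variant, used here only as an approximation of the 360's original EBCDIC:

```python
# Encode the same text in ASCII and in EBCDIC (via Python's cp037 codec).
text = "IBM 360"
ascii_bytes = text.encode("ascii")
ebcdic_bytes = text.encode("cp037")
print(ascii_bytes.hex())    # 49424d20333630
print(ebcdic_bytes.hex())   # c9c2d440f3f6f0
```

Every byte differs: for instance the letter "I" is 0x49 in ASCII but 0xC9 in EBCDIC, which is why data interchange between IBM mainframes and ASCII machines always required translation.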
The 360 came with a graphics display, the 2250, the second commercially-available graphics display after the PDP-1 display of 1961; its pointing device and drawing tool was a "light pen" very similar to SAGE's "light gun", further refined after the collaboration with General Motors on the DAC-1. In 1965 IBM and SHARE developed the programming language PL/I, which blended features of Fortran, Cobol and Algol, thereby unifying scientific and business applications. Despite IBM's marketing push, PL/I failed to dislodge Fortran and Cobol from their loyal bases. In 1967 IBM introduced the long-promised full-fledged "operating system" developed at Poughkeepsie, the OS/360, which allowed multiple programs to run at the same time on the same machine. It had been written by thousands of engineers (under the supervision of Fred Brooks) but it was full of mistakes. This was one of the factors that prompted the convening of the conference on "Software Engineering" in Germany in 1968. It was becoming clear that software had to become a science. The 360 attached the 2311 disk drive (which used the removable 1316 disk pack and had a total capacity of 7 megabytes) via a standard interface. In 1961 Laurence Spitters, a Wall Street investment banker who had moved to San Francisco and joined Ampex in 1958, founded Memorex in Santa Clara, taking three Ampex engineers with him to work on computer magnetic tape. Memorex started making hard-disk drives in 1966 and in 1968 focused on developing a clone of the IBM 2311 disk drive (the Memorex 630). That marked the birth of the IBM-compatible disk storage industry.
In 1961 IBM had launched two supercomputing projects to exceed the speed of the Stretch computer: "Project X", assigned to Poughkeepsie, that in 1964 became the 360 Model 92; and "Project Y", initially assigned to the Watson Research Center in Yorktown Heights (where Jack Bertram was setting up an "Experimental Computers and Programming" laboratory) but, after CDC introduced the 6800 in 1964, relocated in 1965 to Menlo Park in California to be near Lawrence Livermore Laboratory. Here, under the direction of Max Paley, Project Y was renamed Advanced Computing Systems, or ACS-1, and Stretch veteran John Cocke started experimenting with the architecture that would be called RISC a few years later. The computer industry had gotten its start from government contracts. Fundamentally, computer manufacturers designed computers in collaboration with research laboratories for use in research laboratories. Then the computer manufacturer would give the computer a business name, publicize it as a general-purpose machine and educate large customers on what that machine could do for them. That changed in the 1960s. Increasingly the specifications were driven by the customer. The successful computer manufacturers learned to listen to their customers. The importance of software became more obvious: software represented the application, i.e. the very reason that a customer purchased or rented a computer. Software caused a trend away from the general-purpose computer towards specialized vertical-market machines. In turn, vertical markets increased the demand for specialized software. Transistors had made a difference. In 1960 there were a little over 5,000 computers in the USA, and not many more in the rest of the world. In 1965 there were probably already 25,000 in the USA alone, mostly transistorized models. While computers were still enormous and enormously expensive, the transistor had lowered the threshold for owning one of these monsters.
Prehistory of the Internet
The Internet owes its existence to the fear of Soviet cybernetics. Norbert Wiener's cybernetics became very popular in the Soviet Union as the correct way to create the new "communist" person. In 1960 Wiener was invited to Moscow for the first International Congress on Control and Automation. In 1961 a CIA agent in charge of spying on the Soviet cybernetics program, John Ford, wrote a top-secret 126-page report that greatly alarmed the John Kennedy administration. In fact in 1962 a mathematician, Viktor Glushkov, wrote "The All-State Automated System for the Gathering and Processing of Information for the Accounting, Planning and Governance of the National Economy of the USSR" (OGAS) and established (near Kiev) the Institute of Cybernetics. At the same time Joseph Licklider, an MIT professor of psychology and a SAGE alumnus who was part of the club that met on Tuesday nights at Wiener's house, had become vice-president (in 1957) at Boston's consulting firm Bolt Beranek and Newman (BBN) and was preaching about the power of computer networks: in 1959 BBN pioneered time-sharing, and in 1960, when BBN had grown to 189 employees and was recognized as a major center for computer science, Licklider published the first of his seminal papers, "Man-Computer Symbiosis". In 1962 the Polish-born mathematician Paul Baran at the RAND Corporation proposed that a distributed network of computers was the form of communication least vulnerable to a nuclear strike, a highly sensitive topic during the Cold War ("On Distributed Communications Networks", 1962). The Department of Defense had established the Advanced Research Projects Agency (ARPA) in 1958. Within a few years this agency was providing the largest funding for computer engineering research. In 1962 it created a specific office devoted to computers, the Information Processing Techniques Office (IPTO), and hired the visionary Licklider from the MIT and BBN to be its first director.
This was happening at the peak of the Cuban missile crisis. One of the first papers that Licklider wrote as director was titled "Intergalactic Computer Network" (1963) and envisioned a global network of computers. In his new capacity Licklider sponsored the pioneering time-sharing systems Project MAC (Machine Aided Cognition) at the MIT and Project Genie at U.C. Berkeley. In 1965 Harvard student Ted Nelson coined the word "hypertext" to refer to nonsequential navigation of a document. In 1965 Donald Davies, a mathematician at the National Physical Laboratory in Britain who during WWII had worked as a cryptographer with Alan Turing, built the first "packet switching" network. The idea was similar to Baran's: transmit data in blocks ("packets") that can travel independently, following whichever route makes sense at the time, and reassemble them when they arrive at the destination. In case of transmission errors the sender doesn't need to resend the entire message but just the blocks that failed. Licklider also dispatched money to the budding research centers in the Bay Area: Stanford University, U.C. Berkeley (neither of which had a graduate program in computer science yet) and especially Douglas Engelbart's team at the SRI. Another benefactor of Engelbart's project had been NASA's Office of Advanced Research and Technology in the person of Bob Taylor (who had joined NASA in 1961 after working for Maryland-based defense contractor Martin Marietta). NASA was interested in using computers for flight control and flight simulation, not purely "number crunching". Licklider was succeeded at IPTO in 1963 by Ivan Sutherland, who in 1965 hired Bob Taylor away from NASA, and Taylor became the new director of IPTO in 1966. Taylor used ARPA to promote his philosophy: he wanted computers to be useful for more than just rapid large-scale arithmetic, and one way was to connect them in a network.
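Packet switching as Baran and Davies conceived it can be sketched in miniature: number the blocks, let them arrive out of order, reassemble at the destination, and resend only what was lost. This is a toy illustration, not their actual protocols.

```python
import random

def to_packets(message, size=4):
    """Split a message into numbered blocks ("packets")."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets, expected_count):
    """Reorder received packets; report which sequence numbers are missing."""
    received = dict(packets)
    missing = [seq for seq in range(expected_count) if seq not in received]
    if missing:
        return None, missing     # receiver asks only for the failed blocks
    return ''.join(received[seq] for seq in range(expected_count)), []

packets = to_packets("ATTACK AT DAWN")
count = len(packets)
random.shuffle(packets)          # packets may travel along different routes...
lost = packets.pop()             # ...and one of them is lost in transit
text, missing = reassemble(packets, count)
assert text is None and missing == [lost[0]]
packets.append(lost)             # resend just the missing block, not the message
text, missing = reassemble(packets, count)
print(text)                      # ATTACK AT DAWN
```

The sequence numbers make the network's routing and reordering invisible to the receiver, which is the essence of the idea.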
Taylor was a crucial person in the US government, funding key projects in computer science research. In February 1966 Taylor launched an ARPA project to create a computer network, later named Arpanet. Wes Clark of LINC fame, now at Washington University in St. Louis, suggested that, instead of having each node write its own software on its own mainframe in order to connect to such a network, they should hand each node a small "gateway" computer in charge of networking functions, the same one at each node, letting the local mainframe do what it normally did. Bolt Beranek and Newman (BBN) won the contract to develop the Interface Message Processor (IMP), a customized version of the Honeywell DDP-516 (a descendant of the first 16-bit minicomputer); basically, the first router.
Time Sharing
In 1963 the most influential time-sharing systems were probably CTSS at the MIT, which used an IBM 7094, and TSS at the System Development Corporation (SDC) in Santa Monica, which used an IBM AN/FSQ-32. MIT's Project MAC (actually a laboratory that used a General Electric computer), established in 1963 by Roberto Fano, was devoted mainly to Artificial Intelligence, featuring Marvin Minsky as the director and John McCarthy as the visionary, and to time-sharing, because Fernando Corbato envisioned a successor to CTSS, eventually named MULTICS (Multiplexed Information and Computing Service) and initially a collaboration among the MIT, General Electric and Bell Labs, first deployed in 1965.
Multics was indirectly important for the development of the hacker community: those who didn't like the Multics direction, mostly in the A.I. Lab, started working on a different time-sharing operating system, called ITS (Incompatible Timesharing System). These were the original hackers, or at least the first ones who delivered something significant, working initially with an old PDP-6 and later a PDP-10. Computer hackers were computer programmers who loved to program just for fun, not because they were told to. The first organized group of hackers were the ones who emerged from the Project MAC laboratory in the 1960s. The ethos of these hackers was communal: all software was available to everybody in the community. Another influential time-sharing system was developed at nearby Dartmouth College in New Hampshire under the direction of the Hungarian-born mathematician Gyorgy Kemeny and Thomas Kurtz: the Dartmouth Time-Sharing System (DTSS). It was inaugurated in 1964, running on a General Electric 235 (a faster version of the 225). For this system Kemeny and Kurtz developed a new programming language called BASIC (Beginners All-purpose Symbolic Instruction Code), originally conceived for educational purposes. Licklider's funds established a West Coast counterpart to Project MAC, Project Genie, started in 1964 at U.C. Berkeley, whose main achievement would be a public-domain time-sharing system designed by Harry Huskey and completed by David Evans, an engineer who had worked with Huskey on the Bendix G-15 (before being recruited in 1965 by the University of Utah). Several team members (notably Charles Thacker) started a company, Berkeley Computer Corporation (BCC), to commercialize it. As the costs of owning and operating a mainframe were prohibitive for most companies, time-sharing became a lucrative business.
General Electric (thanks to its collaborations with the MIT and Dartmouth) was ahead of IBM in time-sharing (the OS/360 had not been designed with time-sharing in mind). In 1963 Cliff Shaw at RAND Corporation (the programmer who had written the Logic Theorist for Allen Newell and Herbert Simon) developed JOSS (the Johnniac Open Shop System). Funded by the Air Force and running on the Johnniac computer, JOSS boasted an influential user-friendly interface. Keith Uncapher designed the graphical successor to JOSS, with the flowchart-based Graphic Input Language (GRAIL). It was on this project that RAND developed its tablet.
Both BBN and the Dallas-based University Computing Company (UCC) offered nation-wide services. BBN's was called Telcomp, inspired by JOSS, launched in 1964 and originally running on the PDP-1. In July 1966 Tymshare, founded by two General Electric engineers (Tom O'Rourke and Dave Schmidt), using its own version of UC Berkeley's Genie software and its own SDS 940, started one of the most popular time-sharing services out of Los Altos, in the heart of the future Silicon Valley. In 1968 Hewlett-Packard introduced the HP 2000A Time-Shared BASIC System, which had a BASIC interpreter. An interpreter does not compile the program into machine language: it executes the instructions directly. Mark Bramhall at DEC used BASIC in 1970 to write DEC's time-sharing system RSTS-11 for the PDP-11, and in the process further improved the language so that it became a staple of computer education. The importance of time-sharing systems for the spreading of software skills cannot be overstated. Before time-sharing systems, only a small elite had access to computers. Time-sharing allowed students to program all they wanted. It multiplied by an order of magnitude the number of hours of programming around the world. Indirectly, it also enabled the concept that software can be a hobby, just like reading comics or playing the guitar. It helped not only computer lovers in high-tech cities like Boston but also and especially computer buffs in low-tech parts of the world like the Midwest.
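The interpreter idea can be illustrated with a toy BASIC-like interpreter that walks numbered lines and executes each statement directly, with no compilation step. This is a sketch in the spirit of Dartmouth BASIC, not its actual implementation.

```python
def run(program):
    """Execute a list of (line number, statement) pairs, BASIC-style."""
    lines = sorted(program)
    env, output, pc = {}, [], 0
    while pc < len(lines):
        number, stmt = lines[pc]
        op, _, rest = stmt.partition(' ')
        if op == 'LET':                          # e.g. "LET X = X + 1"
            name, _, expr = rest.partition('=')
            env[name.strip()] = eval(expr, {}, env)
        elif op == 'PRINT':
            output.append(eval(rest, {}, env))
        elif op == 'IF':                         # e.g. "IF X < 3 THEN 10"
            cond, _, target = rest.partition(' THEN ')
            if eval(cond, {}, env):
                pc = [n for n, _ in lines].index(int(target))
                continue                         # jump to the target line
        pc += 1
    return output

# Count from 1 to 3 by jumping back to line 10 while X < 3.
program = [(5, 'LET X = 0'), (10, 'LET X = X + 1'),
           (20, 'PRINT X'), (30, 'IF X < 3 THEN 10')]
print(run(program))   # [1, 2, 3]
```

Nothing is translated to machine code in advance: each statement is decoded anew every time the "program counter" reaches it, which is exactly what made interpreters slow but interactive.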
Information Retrieval
As the size of databases grew, the need for systems to retrieve information became obvious. Mortimer Taube, who had worked at the Library of Congress in Washington, started his own company, Documentation Inc., in 1952, and in 1953 he introduced the "uniterm" system of indexing a library of texts using cards. In 1954 Harley Tillit's team at the Naval Ordnance Test Station (NOTS) in Southern California (China Lake) implemented the uniterm system on an IBM 701 computer, probably the first computerized "search system" that wasn't just an experiment. Search programs multiplied after the introduction of the IBM RAMAC, which greatly increased "random" access to data. These systems were all run as batch programs: the user did not get a real-time answer. (For the record, in 1961 Taube wrote a scathing critique of Artificial Intelligence titled "Computers and Common Sense - The Myth of Thinking Machines"). Hans-Peter Luhn at IBM wrote the paper "A Business Intelligence System" (1958) in which he envisioned a way to bypass the information overload caused by data processing: to automatically route by phone, telefax, display or printout the new journal articles to the individuals who "subscribed" to a service, a precursor of RSS. The System Development Corporation (SDC) had a Strategic Air Command Control System Department based in New Jersey and Santa Monica. In 1960 John Roach in New Jersey developed the online retrieval system SATIRE (Semi-Automatic Information Retrieval) using punched cards and in 1963 demonstrated remote access to SATIRE running on an IBM 1401. The Santa Monica group (which included psychologist Robert Simmons and linguist Sheldon Klein) had already started work in 1959 on an ambitious Artificial Intelligence project to "teach computers to read and write in English": Synthex.
In 1961 Jules Schwartz, a former SAGE programmer now at the Santa Monica lab, unveiled JOVIAL (Jules' Own Version of the International Algebraic Language), a programming language similar to ALGOL (which was originally supposed to be called International Algorithmic Language). Protosynthex, a prototype of Synthex that only offered information retrieval, written in JOVIAL by Keren McConlogue, debuted in 1963 and went online in 1964. It ran on the colossal military computer AN/FSQ-32, built in 1961 by IBM and based on the transistorized IBM 4020. (This military machine was never deployed except for the unit at SDC). In 1964 Synthex added a relevance-ranking algorithm to the search program. The Q-32 was used by many other laboratories via the Time-Sharing System (TSS) developed at SDC in 1963.
At the same time, Charles Bourne at the Stanford Research Institute (SRI) was working on his own information retrieval technology. In 1963 his programmers remotely used SDC's Q-32 computer in Santa Monica from a CDC-160 installed at SRI. David Evans' group at UC Berkeley (which included a young Ed Feigenbaum) did the same: in 1963 they demonstrated an information retrieval system running remotely on SDC's Q-32. In 1964 MIT adopted a system developed by Ukraine-born radar engineer Myer Kessler, the Technical Information Project (TIP), for online search. The IBM 360, with its huge potential for "random" access to data, made a difference. In 1966 Roger Summit at Lockheed in Palo Alto used a 360 to develop Dialog, an online information retrieval system that was first used in 1967, via a leased telephone line, by nearby NASA Ames to search a database of 200,000 article citations hosted on their IBM 1410 (a variation of the 1401 with five times more storage). In 1965 a pupil of Howard Aiken, Gerry Salton (born Gerhard Sahlmann in Germany), founded the department of Computer Science at Cornell University, bringing from Harvard an information retrieval project commonly known as "Salton's Magical Retriever of Text" or SMART (officially "System for the Mechanical Analysis and Retrieval of Text") and running on an IBM 7090, which established the field of what one day would be called "search engines". Joseph Rocchio of Harvard developed one of the most famous linear classifier algorithms for this project. Computerized dating originated at Stanford in 1959 with a matchmaking program running on an IBM 650 mainframe computer, designed by math students Jim Harvey and Phil Fialer for the Happy Families Planning Service; but the first commercial computer-based dating system was Operation Match, launched in 1965 by Compatibility Research, i.e. Harvard students Jeff Tarr and Dave Crump, and run from a dormitory of the university.
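The uniterm idea of indexing texts by their terms, combined with a simple relevance ranking of the kind Synthex added, can be sketched as an inverted index. This is a toy illustration with hypothetical documents, not any of the historical systems.

```python
from collections import defaultdict

docs = {
    1: "guidance computer for the saturn rocket",
    2: "rocket propulsion systems",
    3: "library card indexing systems",
}

# Build the inverted index: each term points to the documents containing it.
index = defaultdict(set)
for doc_id, words in docs.items():
    for term in words.split():
        index[term].add(doc_id)

def search(query):
    """Rank documents by how many of the query terms they contain."""
    scores = defaultdict(int)
    for term in query.split():
        for doc_id in index.get(term, set()):
            scores[doc_id] += 1
    return sorted(scores, key=lambda d: -scores[d])

print(search("rocket guidance"))   # [1, 2]: doc 1 matches both terms, doc 2 one
```

The uniterm cards played the role of the index's term-to-document sets; relevance ranking simply orders the intersection by match count instead of returning an unordered pile.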
Progress would be slow, if not non-existent, until two decades later, when the Internet would come to the rescue with newsgroups and bulletin board systems.
Computers in Manufacturing
Two important steps toward automating factories were taking place: numerical control and computer-aided design. Numerical Control (the computer-based automation of machine tools) was enabled by improvements in electrical servomechanisms, and these improvements were due to the development of automated anti-aircraft guns during World War II. The first robotic weapon was perhaps the combination of the SCR-584 radar, made at the MIT, and the electrical director M9, made by Bell Labs. A "gun director" is a device that feeds firing tables to a weapon. During the 1930s these directors were mechanical cams, like the ones built by Sperry. The M9 was conceived at Bell Labs as an "electrical predictor for automatic control, calculation, and pointing of a small anti-aircraft gun" (in the words of the man who first envisioned such a system, David Parkinson of Bell Labs). A servomechanism is a device that uses feedback to correct the performance of the mechanism it is attached to. The T-10 (as the project was originally named) was completed in 1941 and consisted of four servomechanisms. This was the machine sending firing data to the gun; but the tracking of the enemy aircraft was done by humans with telescopes connected to the T-10. The scientists at Bell Labs conceived the T-10 director as a feedback system at every level: it was basically a servo of servos. All this progress in automated weapons was due to the electrical servo. Electrical servos had existed since the invention of the electrical motor, but were not practical until the introduction of the "amplidyne", an electromechanical direct-current amplifier invented during the war by Swedish-born General Electric engineer Ernst Alexanderson (the same man who in 1924 had transmitted the first image across the Atlantic). After the Japanese attack on Pearl Harbor, the US army quickly adopted the T-10 and renamed it M9 Director. 
It was not surprising that Bell Labs, a center for research on communications, came up with the first electromechanical director. First of all, Bell Labs' president Frank Jewett had been a founding member of Vannevar Bush's NDRC in 1940, and the NDRC was supervising the research on directors; but, more importantly, there are similarities between the problem of calculating trajectories and that of communications engineering: they both require feedback algorithms. It was to improve the calculations of the M9 that MIT scientists Norbert Wiener and Claude Shannon refined the ideas that would lead, respectively, to Cybernetics and to Information Theory. The other founder of Information Theory, Warren Weaver, had been hired from the Rockefeller Foundation to head the Section D-2 of the NDRC, the one in charge of research on directors. It was in this context that Weaver started talking about signal and noise. In fact, Clarence Lovell (Parkinson's boss) emphasized the fact that servomechanisms could be used to solve simultaneous systems of equations, and realized early on that Bell Labs' electrical servomechanisms (viewed as computing devices) could implement an electrical version of Bush's differential analyzer. The M9 was basically an analog computer. Once the T10/M9 was operational, it was a matter of integrating it with the radar developed in 1942 by the MIT Servomechanisms Laboratory in order to obtain an automatic-tracking weapon: the combined system was capable of tracking a target automatically with the radar and of continuously calculating the firing tables for such a moving target. It debuted in 1944 during the invasion of Italy.
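The feedback principle behind these servomechanisms can be reduced to a single loop: measure the error between the goal and the current output, and apply a correction proportional to that error. This is a toy proportional controller, not the M9's actual equations.

```python
def track(target, position=0.0, gain=0.5, steps=20):
    """Drive `position` toward `target` by repeatedly correcting the error."""
    for _ in range(steps):
        error = target - position    # feedback: compare the output to the goal
        position += gain * error     # actuate in proportion to the error
    return position

print(round(track(100.0), 3))        # 100.0: the error shrinks at every step
```

However large the initial error, each pass through the loop halves it (with gain 0.5), which is why a feedback device can track a moving target without ever "knowing" its trajectory in advance.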
These military applications improved the electrical servo to the point that it provided accurate measurement of movement and accurate control of movement. It then became natural to think of using it in combination with computers to control machinery. In 1942 Connecticut-based helicopter manufacturer Sikorsky Aircraft, an aviation company founded in 1925 by Ukrainian immigrant Igor Sikorsky, introduced the first practical commercial helicopter (the R4). In 1948 John Parsons, whose Michigan company made rotor blades for Sikorsky, came up with the idea of using punched-card machines to control factory machinery. In 1949 the Air Force funded a project by Parsons in collaboration with the MIT's Servomechanisms Laboratory, which had worked on the SCR-584 radar (as well as on the Boeing B-29 "superfortress"), to build "Card-a-matic Milling Machines". The first machine was demonstrated by the MIT in 1952, using a punched tape for input (instead of punched cards): it contained 250 vacuum tubes and 175 relays. In 1955 several engineers of the MIT team formed Concord Controls, which made the Numericord controller, this time using a General Electric magnetic-tape reader. The Numericord was first installed at Giddings and Lewis, a machine-tool maker in Wisconsin. In 1953 Honeywell too pioneered numerical control with the Automatic Master Sequence Selector, developed for the Air Force, a device to control the airplane's autopilot via instructions punched on a tape. In 1958 Patrick Hanratty at General Electric (who had worked on ERMA) unveiled PRONTO, a numerical-control programming language that allowed computers to control machine tools, while the similar Automatically Programmed Tool (APT) was still being developed by Douglas Ross' team at the MIT (mostly by Raynold George). Although it came out a few months later, in 1959, APT would become the industry standard. What numerical control did was to introduce the concept of "real time" in commercial computing.
It placed the computer inside a "living" system, the factory. The computer was no longer an isolated machine that could compute an exotic scientific problem in isolation and at leisure, but a cog in a complex clockwork that had to interact with other machines on the fly. For these applications the sensor was as important as the processor. The ultimate goal of Computer-Aided Design (CAD) is to build a system that can take a two-dimensional paper sketch and turn it into a three-dimensional object. The development of CAD automation is entangled with the story of spy satellites. When he was in the Air Force, Kodak's executive Richard Leghorn had proposed to build spy satellites to fly over the Soviet Union. In 1957 Leghorn was in the unique position of knowing the top-secret plans for such spy satellites and of knowing the state of the art in photography. He obtained money from Laurance Rockefeller and founded Itek near Boston to build reconnaissance cameras, just in time to sell them to the CIA for its Corona project (1959), beating the competition of camera leader Fairchild Camera. With that money in 1960 Itek funded the Electronic Drafting Machine (EDM) at the MIT, a machine to automate the design process of engineering departments, the first true CAD system. This required the ability to display graphics, and not only text, on monitors, something that had been experimented with by the Whirlwind/SAGE project at the MIT. In fact, the brains behind the project were two MIT gurus who had worked on the Whirlwind/SAGE project: Norman Taylor, who had been hired by Itek, and Jack Gilmore, who had started one of the earliest software companies. They decided to use a PDP-1, just released by another SAGE alumnus, Ken Olsen, and in 1962 delivered the first Digigraphics (as the EDM had been renamed), the forerunner of graphic workstations.
In fact, the Air Force was funding a similar project at the MIT itself, precisely in the old Servomechanisms Laboratory (now renamed Electronic Systems Laboratory): AED, engineered as an extension of Algol 60, was released for free to the CAD community in 1965. At the end of 1963 General Motors and IBM debuted DAC-1 (Design Augmented by Computer), another pioneering CAD system (the team included Patrick Hanratty of PRONTO fame). The marriage of numerical control (CNC) and CAD was consummated when Lockheed used Digigraphics (sold by CDC since 1964) to design and build parts for the military transport airplane C-5 Galaxy, deployed in 1968. The first major application of CAD to semiconductor manufacturing came in 1967 when Fairchild introduced the Micromosaic, the first "gate array" chip, a chip that is not finished but that can be customized: that was the beginning of the ASIC (Application-Specific Integrated Circuit) industry. In 1965 Harold Bradley's team at Lockheed's Burbank laboratory launched Project Design, written in Fortran and assembly language on an IBM 360 mainframe using IBM 2250 graphic display terminals. It took nine years of improvements, but in 1974 this system became a commercial product called CADAM (Computer Aided Design and Manufacturing), mainly sold by IBM in the USA and by Fujitsu in Japan. This was the high end of CAD. Cheaper CAD systems used cheaper terminals. Tektronix had been formed in 1946 in Oregon to commercialize the oscilloscope designed by Howard Vollum, an indispensable instrument for the electronics industry. In 1969 Tektronix claimed to own about 75% of the world's market for oscilloscopes. In 1970 it applied its CRT technology to computer terminals and started selling graphic displays. Its series 4000 became extremely popular, and by 1975 Tektronix owned 50% of the market for graphic displays. The history of manufacturing applications is tied to the Midwest, where many of the USA's manufacturing giants resided. 
The problem for them was to guarantee the smooth flow of materials through the several stages of a production process. The structure used to list all the components of a product was called the "bill of materials" (BOM). It was in the Midwest that IBM first tried to automate the management of the BOM. These firms were aware of Japan's "lean manufacturing", designed by Taiichi Ohno at Toyota, which made factories much more efficient. In 1961 Joe Orlicky at J.I. Case, a manufacturer of tractors, implemented such an automated system on an IBM RAMAC 305. IBM engineers led by Gene Thomas in Wisconsin expanded that idea into more general systems named BOMP ("Bill of Material Processor") in 1963, LAMP ("Labor & Material Planning") in 1964 and PICS ("Production Information and Control System") in 1966. Eventually the field got the name MRP (initially from "Material Requirements Planning" but later renamed "Manufacturing Resource Planning"). Each generation integrated more functions to further optimize the plant. In 1966 IBM was asked to cooperate with Rockwell and Caterpillar in designing a BOM system for the Apollo mission to the Moon. Vern Watts at IBM's Aerospace Division in Los Angeles led the effort. The result was the Information Control System and Data Language/Interface (ICS/DL/I), a hierarchical database completed in August 1968, the natural evolution of the hierarchical BOMP, renamed Information Management System (IMS) after the Moon landing of 1969, and destined to become IBM's main software product: forty years later more than 95% of the Fortune 1000 corporations would be using IMS to process more than 50 billion transactions a day.
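The core computation behind these BOM systems was the "explosion" of a bill of materials: walking the product hierarchy to total up how many of each raw component a finished product requires. A minimal modern sketch of that idea, using an entirely hypothetical product structure (the part names and quantities are illustrative, not taken from any historical system):

```python
# Minimal sketch of a bill-of-materials "explosion", the computation at the
# heart of MRP: recursively total the raw components needed for one product.
# The BOM below is hypothetical, for illustration only.
from collections import defaultdict

# Each assembly maps to a list of (subpart, quantity-per-assembly) pairs.
BOM = {
    "tractor": [("engine", 1), ("wheel", 4)],
    "engine": [("piston", 6), ("bolt", 20)],
    "wheel": [("bolt", 8)],
}

def explode(part, qty=1, totals=None):
    """Accumulate total quantities of leaf-level (raw) components."""
    if totals is None:
        totals = defaultdict(int)
    for sub, n in BOM.get(part, []):
        if sub in BOM:            # sub-assembly: recurse with scaled quantity
            explode(sub, qty * n, totals)
        else:                     # raw component: add to the running total
            totals[sub] += qty * n
    return totals

print(dict(explode("tractor")))   # 6 pistons; 20 + 4*8 = 52 bolts
```

The hierarchical shape of this data, a tree of parts within parts, is also why the database that grew out of BOMP (and eventually became IMS) was hierarchical rather than tabular.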