Aug 09

A Crucial Decision

I’m sure we can all recall at least one turning point in our lives, where if we had made a different decision our lives probably would be quite different. If you could go back in time and make a different choice, would you want to do so? Here is the story of my crucial decision. It happened during my Freshman year in college.

Since the age of thirteen I knew I was attracted to men sexually, but back then (I was a Freshman in 1954) I was unwilling to admit, even to myself, that I was gay. I did not come out as gay until I was in my forties.

One day late in my Freshman year a classmate I knew, I’ll call him David, called me up and invited me to his rooms for supper. We were not supposed to cook in our rooms, but apparently he did. David was handsome and I liked him. He was also quite effeminate, and either I suspected or someone told me that he was homosexual. Nonetheless, I accepted his invitation, but at the last minute I got cold feet and asked him if I could bring a friend. He said yes. I dragged my friend Jeff along with me. David was visibly disappointed. He had a spacious private room. He had prepared a very romantic meal, with wine, candles, two steaks, and salad. David split the two steaks into three portions. Jeff and I ate quickly, embarrassed, and left.

I have often wondered what would have happened if I had kept that dinner date alone. Would David have been able to break through my wall of fear? I really liked him a lot. Or would it have been traumatic for both of us? Would I have fallen in love with David (it would have been easy to do)? If so, I might have come out to myself and others as gay. Would I then have gone to graduate school? I’ve often suspected that I channelled my energies into my academic life because I was denying my emotional life.

If I had come out as gay, even if I had continued my academic career I probably would have ended up near New York, as in fact I did. Had I been openly gay I would have been gay in New York in the 70s when AIDS was rampant but unrecognized. Would I still be alive? Do I wish I had gone on that date alone? I don’t know.

- Rudd Canaday (ruddcanaday.com)
    – Start of blog: My adventures in software



Jun 29

Heathkit and EICO

I spent ninth grade with my parents in Europe while my father was on sabbatical. Fascinated by electrical things but away from my workshop, all I could do was study, and so I taught myself about electronics. I describe this in another post. By the time I got home I was raring to go. I started by building devices from kits. I think my first electronic kit was a vacuum tube EICO FM tuner.

I’m happy about the tremendous advances in electronics, except for one thing. Modern devices are built by robots using tiny surface-mounted integrated circuits. An entire computer central processing unit can be on a chip smaller than a fingernail, with hundreds of contacts on its underside. These contacts have to be aligned precisely with corresponding contacts on the circuit board, and then soldered in place with precisely the correct amount of heat. I have seen instructions on how to do this at home, using a toaster oven, but the procedure is so difficult that I am not willing to tackle it. And, even if I did, the interesting design challenges are inside the integrated circuits, not in the assembly of them into a device.

In 1952, when I built my first kit, transistors were a rarity. Most circuits were built with vacuum tubes and other components. Sophisticated kits used some printed circuit boards, but most interconnections were made by soldering wires from pin to pin. Building these kits was lots of fun. The neater you were, the better the device functioned.

To build a kit you followed detailed step-by-step instructions, for example, “Connect a blue wire from pin 8 on vacuum tube V5 to one end of resistor R12.” You checked off each step as you performed it. When I was in college my roommates and I made a bit of money by building EICO tuner kits for a local Hi-Fi store. We would set the latest tuner kit up on a table in our dorm living room and any one of the five of us who had a few minutes would perform and check off a few of the steps in the instruction manual. This way we finished a kit in short order. The owner of the Hi-Fi store told us that the kits we constructed were the best built and best performing ones he had.

One kit company, Heathkit, had a very complete line of electronic test instruments at reasonable prices. Back then, unlike now, it was much cheaper to build it yourself. In tenth grade this was heaven for me. Over the next couple of years I amassed a fairly complete electronic workshop. My pride and joy was my Heathkit oscilloscope. I also had signal generators, analyzers, a tube tester, and everything else I needed to design, build, and repair most vacuum tube circuits. By the time I entered college I was beginning to build circuits from transistors, so I built a transistor tester too. Still, though, everything I built was hand wired. It was not until much later that I started designing circuit boards.

After college and graduate school I was still building Heathkits. About 1965 I built a Heathkit color television, my first color set. I also built simpler kits with my two sons. I remember a Heathkit digital clock that particularly fascinated my younger son.

My parents were not wealthy, so in high school and college there were many Heathkits I could not afford. In particular I lusted after the Heathkit analog computer, but at almost $700 it was way beyond my means. I think this is fortunate, because it meant that I had to design things for myself, and scrounge for surplus and discarded parts to build them. Building Heathkits was a lot of fun, but not at all intellectually challenging.

The most challenging kit I built was not a Heathkit. It was the full size Schober “Recital” organ which I built in my thirties. Probably the most challenging device I ever built was a computer I designed from scratch, as I describe in another post.

- Rudd Canaday (ruddcanaday.com)
    – Start of blog: My adventures in software



Jun 22

Building DIR/ECT II

In 1975, when I was 37, I got my first and only job at Bell Labs that was not in research. Long before, Bell Labs had written an elaborate software system, named DIR/ECT, that was used in printing white pages phone books. This sounds like a simple job, but the rules for how listings are arranged, alphabetized, and displayed in the white pages were arcane, dating back to the turn of the century. This software was obsolete, using batch processing with data on magnetic tapes. It was hard to use and error prone. I was asked to look at it to see what could be done.

I worked with Ron, the head of the department maintaining the DIR/ECT system. We looked at the system and decided that it was not practical to improve it enough to bring it into the modern on-line age. We decided that a new system should be built. Officially named “DIR/ECT II,” we called it “the upgrade” to emphasize that it would be fully compatible with the old system. The work of maintaining the old system, and of building a new one, was funded by the “operating companies,” the telephone companies that were part of AT&T. So Ron and I had to convince the operating companies that a new system was needed and persuade them to fund it. This was not hard, since the old system was so difficult to use. We estimated that the job would take three years.

I formed a new department, which ended up with 50 people under seven supervisors, to build the new system while next door Ron’s department of about 30 people maintained the old system.

DIR/ECT II (the “upgrade”), my first and only Bell Labs job outside of research, was by far my largest department, and my least successful job. Since it was being paid for by the operating companies (the AT&T telephone companies) I had the job of presenting our status in a twice-yearly meeting of the companies, to convince them to continue funding the project. At first these meetings were easy, but as we missed our deadlines they became quite difficult. Meanwhile, back at the ranch, we realized that the job was much more difficult than we had thought, and much more expensive in computer time. It took us almost five years to complete DIR/ECT II, not the expected three.

I had decided early on that we would build the system on UNIX using a DEC (Digital Equipment Corp.) minicomputer. DIR/ECT II was the first AT&T product built on UNIX, and perhaps the first on a minicomputer. The decision to use UNIX was controversial, largely because UNIX only ran on minicomputers. It turned out to be an unwise decision. The computational demands of the system were much higher than we expected. At that time DEC dominated the minicomputer market. I remember us using a DEC VAX machine, but Wikipedia tells me that the VAX machine was introduced in 1977, so perhaps we started with a DEC PDP-11. In any case, a couple of years into the project we realized that we needed a faster machine. Fortunately DEC was about to introduce a new machine, and the salesman assured me it would be much faster. Each previous new DEC machine had doubled the speed of its predecessor, but this time the new machine, when it became available, was not much faster. So we spent quite a bit of time trying to make our system more efficient. When we finally went to trial, in Florida, performance was marginal.

Meanwhile Ron, who headed the old department, had a difficult job. How was he going to motivate his people to work on the dinosaur while next door people were working on a sexy new system? Ron decided to challenge his team to improve the old system so drastically that by the time my new system was available the operating companies would not need it. Neither Ron nor I thought this was possible. The old system was just too cumbersome. But we agreed that the challenge would be good for his team. Ron invented the slogan “Obviate the Upgrade,” and threw down the challenge. To our surprise, by the time my team finished the upgrade, the old system had been transformed into a modern, on-line system. And, indeed, after the new system had successfully passed its trial period, none of the operating companies wanted it and it was abandoned. Ron and his team had done a terrific job.

Moral: competition can lead people to accomplish remarkable things.

- Rudd Canaday (ruddcanaday.com)
    – Start of blog: My adventures in software



Jun 15

My Second Computer


Ohio Scientific SBC

In 1971 the first personal computer, the Kenbak-1, became available. Only 40 were sold. The real personal computer revolution started, I think, with the availability of the Altair computer kit in 1975 and the Apple 1 in 1976. Also, in 1971, the Intel 4004, one of the first microprocessors, appeared. These integrated circuit chips incorporated a complete central processing unit (CPU) for a computer, albeit quite limited in performance. Then in 1976 RCA introduced the CDP 1802 microprocessor. This device was used in the Galileo spacecraft because it can run in a very low power mode. It is still made and you can buy one new for $4.95. In a previous post I talked about using the CDP 1802 to build my first computer.

I had decided that when a decent personal computer cost less than a car I would buy one. But I missed by a lot. My first real computer cost much less than a car.

A company named Ohio Scientific came out with a single board computer. This machine consisted of a single fairly large circuit board (perhaps 12 X 18 inches) which included a keyboard and the electronics needed to drive a CRT (a TV monitor). It cost about $280 ($1250 in today’s money). It had no case, although perhaps one could be bought separately. On the surplus market I found a CRT (a display screen) complete with the necessary electronics, but no case. I also found a pair of high speed magnetic tape drives that had been designed for a Wang word processor. With these, plus power supplies I designed, I finally had a complete, useful computer. I don’t remember if I had a printer, but I suppose I did. The final setup occupied most of the surface of my desk, and could not easily be moved. In essence it was built into the desk.

The single board computer had the Basic language built in. Basic, which stands for “Beginner’s All-purpose Symbolic Instruction Code,” is a very interesting language. Professors Kemeny and Kurtz at Dartmouth thought all students should know how to use computers, so they designed Basic to be easy to program. I think it is one of the first interpreted languages; you can execute Basic statements immediately, without going through a compiler. Basic ran on the Dartmouth time sharing system, and was made available to all Dartmouth students. Also, Basic was one of the first pieces of software to be offered to everyone free of charge. It shipped with almost all early personal computers.

To make use of my elegant tape drives I needed an operating system. I love designing operating systems. I did not want to build it in Basic, so in Basic I built an assembler, a program that translates symbolic instructions into the numeric codes the machine actually executes. Then, using the assembler, I built an operating system. Now I had a complete, fully functional computer. I can’t remember what I used it for; the fun was in building it.
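
To give a flavor of what an assembler does, here is a minimal sketch in Python rather than Basic. It is only an illustration: the mnemonics, opcodes, and word layout are invented, and this is not the assembler I actually wrote.

    # A toy assembler: translate mnemonic instructions into numeric machine
    # words. The opcodes and word layout are invented for illustration.
    OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3, "JUMP": 0x4, "HALT": 0xF}

    def assemble(source):
        """Turn lines like 'LOAD 10' into 16-bit words: opcode in the high
        byte, operand address in the low byte."""
        words = []
        for line in source.splitlines():
            line = line.split(";")[0].strip()   # drop comments and blank lines
            if not line:
                continue
            parts = line.split()
            opcode = OPCODES[parts[0]]
            operand = int(parts[1]) if len(parts) > 1 else 0
            words.append((opcode << 8) | (operand & 0xFF))
        return words

    program = """
    LOAD 10    ; fetch the value at address 10
    ADD 11     ; add the value at address 11
    STORE 12   ; store the result at address 12
    HALT
    """
    print([hex(w) for w in assemble(program)])   # ['0x10a', '0x20b', '0x30c', '0xf00']

A real assembler also handles labels and symbolic addresses, but its core job is this kind of table-driven translation.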

This story has two morals: It’s more fun to build than to buy, and bugs are almost always in the software, not the hardware.

- Rudd Canaday (ruddcanaday.com)

    – Start of blog: My adventures in software



Jun 08

Artificial Intelligence


Marvin Minsky

After college I went to M.I.T. for my graduate work. I started at M.I.T. in 1959. I had to spend the first year and a half completing coursework for the Ph.D. qualifying exams and taking the exams. With that behind me it was time to choose my M.S. thesis topic. I decided that I wanted my thesis to be in the area of artificial intelligence.

In 1961 Artificial Intelligence was just coming into its own, and M.I.T. was the leader. Two men, Marvin Minsky and John McCarthy, both born in 1927, founded the M.I.T. Computer Science and Artificial Intelligence Laboratory in 1959, the year I entered M.I.T. These two men are the acknowledged leaders in the field. Minsky is still at M.I.T. McCarthy went to Stanford in 1962, where he remained until his death in 2011.

The C.S. & A.I. lab in 1961 was a wild place. Many of the A.I. graduate students had their desks in the same room, and it seemed to me noisy and chaotic. Darts were always being thrown (at a dartboard) as were wadded up pieces of paper (at each other), all of this amidst lively discussions and arguments about A.I. I don’t know how anyone got anything done in that atmosphere, but many of the groundbreaking advances in A.I. happened there.

In 1950 Alan Turing had proposed a test, now called the “Turing test,” to determine whether a machine was intelligent. In the Turing test you sit at a teletypewriter and converse with a person out of sight at another teletypewriter, or is it a machine? If it is a machine that can fool you into thinking it is a person, then the machine is intelligent. Unfortunately, Turing introduced the idea by describing a party game in which you are trying to determine if the unseen person is a man or a woman, thus complicating his explanation by introducing the notion of sex, which for a while obscured the simplicity of his test.

The common belief in the C.S. & A.I. Lab was that we would achieve true machine intelligence, a machine that could pass the Turing test, probably within five years, certainly within ten years. Many others believed it also. At M.I.T. during 1964–66 Joseph Weizenbaum wrote a program called ELIZA to analyze natural English sentences. One of the scripts he wrote for ELIZA, DOCTOR, simulated a Rogerian psychotherapist. This was an easy target, since Rogerian therapists typically work by just asking questions. ELIZA was not at all intelligent. Weizenbaum was focusing only on analyzing English sentences. However, many people, including many psychotherapists, saw it as much more and thought that machines could revolutionize the field of psychotherapy.

ELIZA DOCTOR is still available on the internet. It is fun. If it does not understand your sentence it typically replies “Tell me more about your father.” It is hard to believe, today, that anyone could have thought it intelligent.
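
Here is a minimal sketch, in Python, of the kind of keyword matching ELIZA relied on. It is a hypothetical illustration, not Weizenbaum’s code; the patterns and replies are invented.

    # A tiny ELIZA-style responder: match a keyword pattern, echo part of the
    # user's sentence back as a question, or fall back to a stock reply.
    import random
    import re

    RULES = [
        (r"\bI am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
        (r"\bI feel (.*)", ["Why do you feel {0}?", "Tell me more about feeling {0}."]),
        (r"\bmy (mother|father)\b", ["Tell me more about your {0}."]),
    ]
    FALLBACKS = ["Tell me more about your father.", "Please go on.", "Why do you say that?"]

    def reply(sentence):
        for pattern, responses in RULES:
            match = re.search(pattern, sentence, re.IGNORECASE)
            if match:
                return random.choice(responses).format(*match.groups())
        return random.choice(FALLBACKS)

    print(reply("I am unhappy"))         # e.g. "Why do you say you are unhappy?"
    print(reply("The weather is nice"))  # no keyword matches, so a stock reply

There is no understanding anywhere in that loop, which is why the program’s apparent insight said more about the people reading its replies than about the machine.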

I’ve often thought about why machine intelligence, which we have yet to achieve, is so much harder than we thought fifty years ago. I think that a central issue is world view. When we talk with anyone else, we depend on the fact that they have a view of the world similar to ours. That is what makes communication across cultures sometimes difficult. Since we share a vast amount of common information with other people, we communicate in shorthand, taking for granted that shared information. Computers lack that comprehensive world view.

One of the most impressive recent advances in A.I. is the IBM program Watson, which recently won the game of Jeopardy playing against two human experts. Jeopardy is a very interesting challenge because it uses colloquial English, but more importantly it depends on having an excellent world view. Watson got its world view by mining the internet. So the world view shared by all humans is now available to machines. How far are we now from true machine intelligence?

- Rudd Canaday (ruddcanaday.com)

    – Start of blog: My adventures in software



Jun 01

Early Turing-complete Computers


ENIAC Computer

Before and during World War II all computers were special purpose machines, each designed to solve a particular war-related problem. After the war computers continued to be invented at a rapid pace, but the focus shifted to general purpose (Turing-complete) machines.

In 1945 Turing himself turned his attention to designing a practical general purpose machine. He was still a Fellow of Cambridge University (which he remained until his death) and could have joined Cambridge’s effort, then underway, to design a computer, but instead he chose to form a group funded by the British government, building on the work he had done at Bletchley Park. Unfortunately he failed to take into account how the world had changed with the end of the war.

At Bletchley Park Turing and his team had been given whatever they needed. At one point when they were not being allocated enough resources Turing and his colleagues wrote directly to Winston Churchill. The day he received the letter Churchill ordered that Bletchley Park’s needs be given top priority. But after the war Turing found his requests for funding and assistance mired in governmental red tape. The machine he designed in 1945, the Turing ACE, would have been one of the first Turing-complete machines, but it was never completed. Discouraged, Turing returned to Cambridge.

Several vacuum tube computers were built following the Bletchley Park Colossus, most notably the ENIAC in 1946, the first electronic general purpose computer. Although designed to calculate artillery firing tables, it was the first Turing-complete machine, capable of solving any problem any computer can solve. It was called in the press a “giant brain,” and it was giant. It contained 17,468 vacuum tubes, so as you can imagine it was out of service a lot with a burned-out tube. It weighed 30 tons. It was 100 feet long. Unlike modern computers, it did not store its programs in memory. Instead, it was programmed using patch cords, much like a huge telephone switchboard. One of its first programs was to study the feasibility of the hydrogen bomb.

ENIAC was built at the University of Pennsylvania, designed by Professors Eckert and Mauchly. One big problem with the ENIAC machine was the difficulty of programming it. Even before the machine was finished, Eckert and Mauchly came up with a better design. Their EDVAC computer stored its program in memory, a “stored program computer.” It was not completed until August 1949.

Unfortunately for Eckert and Mauchly, John von Neumann, who was a consultant on the project, wrote a description of this new design, and a colleague on the project, Herman Goldstine, distributed it, removing all references to Eckert and Mauchly. This had two unfortunate (for Eckert and Mauchly) consequences. They were not able to patent their design, and two teams in Great Britain, using this paper, beat them to the first electronic stored program computers.

It is not clear who on the EDVAC project thought of storing the program in memory. The idea may have been kicked around in many places. But, based on his paper, von Neumann is widely credited with the invention, and computers with stored programs (which includes all modern computers) are called “von Neumann architecture” computers.

In 1948 a team at Manchester University built the first electronic stored program computer, the Manchester Baby. Turing joined this effort and participated in the design. The Manchester Baby was only a demonstration of feasibility, but it led to the Manchester Mark 1 in April 1949. This was followed by the EDSAC in May 1949, designed at Cambridge University. The Manchester Mark 1 was the prototype for the first successful commercial computer, the British Ferranti Mark 1, while EDVAC became the prototype for the first U.S. commercial computer, the Univac 1, delivered a few months after the Ferranti.

- Rudd Canaday (ruddcanaday.com)

    – Start of blog: My adventures in software



May 25

Lessons in Management 1

Looking back over my career in graduate school and at Bell Labs, I am surprised to realize that even though I set out to be a software designer, most of my jobs involved teaching people or leading teams. In the process I have learned quite a bit about leadership.

One very common mistake that managers make, I believe, is to underestimate their people. One of the reasons I have been successful is that I have hired very competent people and then trusted them to do their jobs, working with them more as a colleague than as a boss. However, my first lesson taught me that one also must not overestimate people’s abilities.

I spent five years in graduate school, working half time as a teaching assistant while taking classes. In my third year I took a famous class in active circuit theory, taught by the legendary Professor S. J. Mason. It was one of the most difficult classes I took. I struggled through the semester, barely able to understand the difficult material. To my surprise, I got an A in the course.

The summer after I took Prof. Mason’s course he asked me to come to his office. He told me that he was going to stop teaching the active circuit theory course, which he had taught for many years, because he was tired of teaching it. However, when students found out that the course was going to be discontinued they clamored for it to be taught one more time so they could take it. So the administration decided to offer it. Prof. Mason told me that he wanted me to teach it. I was astounded. Not only had I found the material very difficult, but also I had never been responsible for teaching an entire course.

“Why me?” I asked. “Because,” he replied, “you are the only student in the class who really understood the material.” (Really???) So I agreed to teach it.

I spent the last month of the summer preparing to teach Prof. Mason’s course. At that time I did not yet have my M.S. degree. When, at the start of the term, I looked over the enrollment for the class I discovered that the class was full and all of the students had at least an M.S. degree. Many were post-docs who already had their Ph.D. degrees.

I was petrified. I was convinced that most of the students were much smarter than me. I was afraid that I would bore them. At the first meeting of the class, on Monday, I plunged right into the material. I gave what I thought was a reasonably good lecture. The next meeting, on Wednesday, I continued my rapid progress through the material, hoping I would not bore these very bright and accomplished students.

On Friday, as I was about to start my third lecture, one student, obviously very nervous, came up to my desk and said “Professor Canaday, I’ve been delegated by the class to tell you that none of us understand a word you’ve been saying.”

After a moment to digest this, I turned to the class and said “I understand that you have not been understanding anything I’ve been teaching.” A chorus of affirmation met this statement. “Perhaps I have been going a bit too fast,” I said. Again affirmation. “OK,” I said, “let’s start over at a slower pace.”

So I started over from the beginning and spent almost two weeks covering the material I had covered in the first two days. From that point on the class went quite well.

- Rudd Canaday (ruddcanaday.com)

    – Start of blog: My adventures in software



May 18

Why were early computers so late?


Bell Labs Relay Computer #1

When I was born, in 1938, there were no computers. The word “computer” meant a person who used a calculator. I don’t know why computers did not exist then. The seeds had been planted well before. The first general-purpose computer was designed in 1837 by Charles Babbage. He planned to build it, but was not able to get funding. In any case his computer really was not practical given the technology of his day, gears and levers. His machine would have been the size of a room, driven by a steam engine, very slow, and terribly expensive to build.

Ada, Countess of Lovelace, Lord Byron’s daughter, worked with Babbage to document his machine. She is often called the first programmer. She was the first person to see the potential of computers to do more than arithmetic. She foresaw that computers could, for example, write music. However, there is no evidence that the work of Babbage or of Ada Lovelace had much impact on the world. Apparently most of the twentieth-century computer pioneers did not know of them.

The second person to design a general purpose computer had a profound impact on the world. Alan Turing, who started out as a pure mathematician, made major contributions to our understanding of computers and to the field of artificial intelligence as well as being instrumental in breaking German encrypted messages during World War II. After the war Winston Churchill said that Turing’s work was the greatest single contribution to victory in the Second World War.

In 1936, the year my parents married, Turing published a paper in which he answered a question in theoretical mathematics called the Entscheidungsproblem. In doing so he invented a computer, although he did not call it that. This machine is completely impractical and he never intended that it be built, but in proving that his machine can compute any computable mathematical function, Turing gave us an unequivocal definition of what it means to call a computer “general purpose.” It is easy to show whether any given real computer can imitate a Turing machine. If it can, it is called “Turing complete.” Any Turing complete computer can do anything any other computer can do. All present-day computers are Turing complete. My smartphone can do anything an IBM mainframe computer can do. Except for limitations in memory size and attachments, so can my microwave oven.
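
In fact a Turing machine is simple enough that a few lines of code can simulate one, which is essentially what “Turing complete” means for a real computer, setting aside the unlimited tape. Here is a minimal sketch in Python; the example machine is invented for illustration.

    # A minimal Turing machine simulator. The "program" is a table mapping
    # (state, symbol read) to (symbol to write, head move, next state).
    def run_turing_machine(program, tape, state="start", blank="_"):
        cells = dict(enumerate(tape))        # the tape, stored sparsely
        head = 0
        while state != "halt":
            symbol = cells.get(head, blank)
            write, move, state = program[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Example machine: flip every bit on the tape, halting at the first blank.
    flip_bits = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run_turing_machine(flip_bits, "10110"))   # prints 01001_

Any machine that can run a simulator like this one, given enough memory, can in principle imitate any Turing machine.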

Babbage’s computer was Turing-complete, but none of the several computers built during my early childhood were. The first computer that was Turing-complete in practice did not come along until 1946 (the ENIAC) when I was eight.

Work on the first computer ever built started in 1938, my birth year, and was completed at Bell Labs in New York in 1939. It was the brainchild of George Stibitz, a Bell Labs engineer.

As a hobby Stibitz took home some surplus telephone relays (electrically operated switches used in dial telephone switching equipment). He played with these, discovering how to use them to construct the building blocks of a computer. When his boss heard about this he asked Stibitz to build a computer to be used in designing telephone systems, the Bell Labs Relay Computer #1. It was not a general purpose computer, really more like a very elaborate calculator for doing arithmetic on complex numbers, but the techniques it incorporated were applicable to more capable computers and led to four more machines at Bell Labs.

All the technology used in early machines had existed before 1920. Dial telephones, which had been in major cities for years, came to my home town when I was eight. I remember being fascinated by our new telephone with its dial. The ’phone was installed before the switching system was operational, so we still had to wait for an operator. Since the telephone company was letting the operators retire or transfer to other jobs in preparation for the new system, the wait for an operator to answer the phone got longer and longer. One day, waiting for the operator, I could not resist the new dial, so I dialed ‘0’ for ‘Operator.’ Apparently the operator came on the line just as I dialed, and the sound of the dial pulses in her headphones was painfully loud. “Don’t EVER do that again!” she scolded.

The early machines were quite expensive. Except for the first Bell Labs machine, all early computers were built as part of the war effort (World War II). Perhaps that is why computers were not built until I was born.

- Rudd Canaday (ruddcanaday.com)

    – Start of blog: My adventures in software



May 04

Burroughs E101 – a weird computer

As a Sophomore at Harvard, in 1956-7, I audited an introductory course on computers. That is where I wrote my first program, for the Univac 1. In this course we studied a number of machines, from punched card tabulating machines to the Univac. The most unusual of these machines was the Burroughs E101.

The E101 was a weird amalgam of manual calculator and computer. It was a desk-sized machine with what looked like an ordinary four-function electromechanical calculator in the center of the desk, and a set of pin boards set into the surface of the desk to the right of the calculator. The machine was programmed by inserting pins into holes on the pin boards. There were 8 pin boards, each holding up to 16 instructions, giving a total capacity of only 128 instructions. Each of the 8 pin boards could be invoked manually from the calculator keyboard to perform some calculation, or all 128 instructions could be used for a program to run automatically. The photo shows an operator programming one of the pin boards.

Data memory was a magnetic drum with a capacity of 100 numbers. The instruction set included conditional branching so the machine was Turing-complete.

The E101 was electronic, with 160 vacuum tubes, 1,500 diodes, and about 20 relays. The calculator-like input-output device appears to have been electromechanical. The E101 consumed 3 kW of power.

The E101 was in prototype in 1954, so it probably hit the market in 1955, the year I entered Harvard as a Freshman. It cost $32,500, which is $285,000 in today’s money, much less than a real computer cost then. It was slow. It could do 20 additions or 4 multiplications per second, and could print two words of 12 digits each per second.

The E101 was not a success commercially. Why am I not surprised?

- Rudd Canaday (ruddcanaday.com)

    – Start of blog: My adventures in software



Apr 20

My Home Built Computer

The first commercial microprocessor was the Intel 4004, introduced in 1971. It cost $60, which is $350 in today’s money. These devices fascinated me, but $350 was too much to spend on something of no practical value. Then in 1976 RCA introduced the CDP 1802 microprocessor. This device was used in the Galileo spacecraft because it can run in a very low power mode. It is still made and you can buy one new for $4.95.

This device interested me for two reasons. First, it was relatively inexpensive. I don’t remember the price, although it was more than $5. Second, it had a built-in capability called DMA (Direct Memory Access) which made it possible to input information into the memory with very little additional hardware. So I decided to build a computer.

While in graduate school I had a job one summer working for Honeywell. My assignment was to write a program to minimize the total wire length in a wire-wrapped board. Wire wrap is a technique used to interconnect electronic components. Commercially, wire wrap circuit boards are assembled by automated machines, but the technique can be used for hand-made boards. It is a very painstaking and exacting job, because every one of the hundreds of wires interconnecting the components must be individually cut to length, stripped of insulation at each end, and then attached to the correct two pins of the electronic component sockets using a wrapping device (a “wire wrap gun”). Only three or four wires can be attached to each pin, so when building a complex circuit you have to plan the wiring carefully.

The computer I built consisted of a main circuit board which was about 6” X 12” plus three auxiliary boards about 4” X 8” each. My computer had about 60 integrated circuit devices, each with 14 or more pins. Many had 24 pins. The CDP 1802 was the largest chip with 40 pins. So, in all, there were well over 1000 pins that had to be interconnected, and a single error would render the computer inoperable.

Like most computers, my computer had to have lots of things in addition to the microprocessor. It needed two kinds of memory: the volatile memory (RAM) which held programs and data while the computer was running, but which forgot everything when the computer was turned off, and the read-only memory (ROM) which was pre-loaded with information that remained unchanged. Modern read-only memory can be altered easily, but back then programming the data into the ROM was difficult. First the memory had to be erased completely and then the new data programmed into it, using much higher voltages than were normally used in the computer. This ROM programming usually was done separately from the computer using a special device.

Fortunately my job at Bell Labs gave me access to several things I needed. I could use the computers and equipment at work to program data into my ROM memory. And I could borrow a wire wrap gun.

In addition to the RAM and ROM, my computer needed some sort of input device, some sort of output device, and some way to get it started. I can’t remember what I used for input. Probably I used a keyboard. For output I used an oscilloscope, usually used for displaying electric waveforms. An oscilloscope was essential when building electronic devices, to allow one to see what a circuit is doing, so of course I had one. I built into my computer the ability to display text on my oscilloscope, in 16 lines of 16 characters each.

Finally, I needed a way to tell the computer how to start running. Here the built-in DMA capability of the CDP 1802 was crucially important. My computer had a row of 8 toggle switches, representing one word of data. It also had two more toggle switches and a pushbutton. The two toggle switches allowed me to set the computer to run, stop, or load. In the load position I could use the eight switches and the pushbutton to load data into the machine’s memory. Just a few words input manually were sufficient to tell the computer to start running the program stored in the ROM.

After designing my computer and checking the design carefully, I wrote out a list of each of the hundreds of wire wrap connections to be made. I checked everything carefully, because I knew that a single error probably would render the computer inoperable. I carefully made each of the connections, and finally turned the computer on. To my surprise, it worked!

However, as I ran the computer I discovered that occasionally, at random, the computer would fail. I was sure that I had made some mistake in wiring the machine. I spent hours checking everything, but could not find the mistake. Finally I turned my attention to the software I had written. Since the error seemed to occur randomly, I didn’t think the software was at fault. But it was. From this I learned that if a computer does not work, the problem is probably the software, not the hardware.

- Rudd Canaday (ruddcanaday.com)

    – Start of blog: My adventures in software

