Recently, I wrote a post about what’s the same and what’s different about heading off to university 50 years ago versus now. The 50th anniversary of my graduation from McGill had been the catalyst for that thought process. Shortly after I wrote that post, I received an email out of the blue from a former colleague of mine at IBM in the UK from some 47 years ago. We hadn’t communicated since then; wow, that’s a lot of catching up to do! That got me thinking about what computing was like in the late 60s, and how much has changed since then. Those changes make the changes in university life seem pretty trivial by comparison.
When getting ready to leave home for university, one of the things my mother determined I needed was a portable typewriter. She was convinced that I wouldn’t be allowed to submit the many term papers I’d be writing in hand-written form and that a typewriter would be essential. As it turned out, I didn’t have many term papers, being in science, and those I did have could be submitted in hand-written form, but my university readiness kit did indeed include a typewriter. Also among my going-away must-haves were my booklets of log tables and trig tables, critical for making calculations for math and for science experiments. Oh my, how things have changed! Both of those “technical” aids have gone the way of the dodo.
With respect to computers and technology, what is the same and what’s different from when I left home to start university in 1963?
What’s the same:
- The desire to communicate with others through all available technology remains the same.
- The need to make complicated mathematical calculations and the desire to do so using the most convenient, fastest, and easiest available technology remains the same.
- Pretty well everything!
It’s impossible to realize that you are part of the beginning of truly revolutionary change while you are in the midst of it. It’s just new and exciting. What’s impossible to grasp at the beginning of a “revolution” is that the changes you are experiencing are not going to be incremental, the way most changes are. They are going to be … well, ground-breaking, radical, advancing by quantum leaps, impacting how we do everything. You just can’t see that when you’re at the beginning of transformational change, in this case the dawning of the information and telecommunications revolution.
When I left high school and started university in 1963 I had never heard of a computer. I’m pretty sure I had literally never heard the word “computer”. Sure, the Russians had sent a satellite into space in 1957, 6 years earlier. That must have taken lots of complicated calculations. But the complex machines being developed that could make these kinds of calculations remarkably quickly weren’t the story; Sputnik and the Russians’ space success were the story.
One of my first big purchases at McGill after starting courses in math and physics was my first (and last) slide rule. Oh my, I was very excited about that acquisition. I felt like a real scientist. The slide rule, which I could use for multiplication, division, square roots, logs, and trig calculations, replaced my need for my log tables. And having one was very cool. Yes, there were mechanical calculators in labs, but they were anything but portable and could never replace your own slide rule.
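The reason log tables and slide rules worked at all is a single mathematical fact: adding logarithms multiplies the underlying numbers, since log(a) + log(b) = log(a·b). A slide rule’s two scales are marked logarithmically, so sliding one against the other performs that addition mechanically. Here is a minimal sketch of the same idea in code (the function names are my own, purely for illustration):

```python
import math

def slide_rule_multiply(a: float, b: float) -> float:
    """Multiply two positive numbers the slide-rule way:
    add their base-10 logs, then take the antilog."""
    return 10 ** (math.log10(a) + math.log10(b))

def slide_rule_divide(a: float, b: float) -> float:
    """Divide by subtracting logs: log(a) - log(b) = log(a/b)."""
    return 10 ** (math.log10(a) - math.log10(b))

# A user of printed log tables did exactly these steps by hand:
# look up two logs, add (or subtract) them, then look up the antilog.
print(slide_rule_multiply(3.0, 4.0))   # ≈ 12
print(slide_rule_divide(12.0, 4.0))    # ≈ 3
```

In practice a slide rule gave about three significant figures of precision, which is why the answers above are “approximately” right: the device, like floating-point arithmetic, trades exactness for speed.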
I continued to use my slide rule for all 4 of my undergraduate years, but the first handheld calculators were starting to appear. They weren’t cheap in those early days, to say the least, but they could do all the calculations you could do with a slide rule just by pushing buttons. Amazing.
To recap so far:
- Early 1600s: The first log tables were published to assist with complex calculations.
- 1900s: The modern slide rule came into use. Wernher von Braun brought one with him to the U.S. after WW II and used it at NASA in the 60s. A slide rule was de rigueur for an engineer in the 1950s and 60s.
- 1967: Texas Instruments invented the first electronic handheld calculator. Believe it or not, this small device represented an enormous step forward. It was almost like having your own computer, because mathematical calculations were pretty well all a computer could do! Bye-bye, slide rule.
Courses to teach programming were starting to be introduced in some universities in the mid-1960s. I had my introduction to computers and programming in 1965 when, after taking a week-long workshop at McGill’s computing centre, I used McGill’s mainframe to write small FORTRAN programs for my crystallography professor to calculate crystal structure measurements. Those were the days!
I started my first job as a computer programmer in May 1967. Let’s stop and think about what computers were like then. I started work on a GE 415 mainframe in Montreal, programming in FORTRAN and COBOL, and then moved to an IBM 360/50 in the UK. Computers in the home, or computers that could communicate with each other, weren’t even a glimmer in their inventors’ eyes yet. What were these old mainframes like?
- These computers were enormous in physical size and required air-conditioned rooms to counter the heat they generated.
- They were stand-alone machines; they had no way of communicating with other machines or sharing data. Think of an extraordinarily large, remarkably expensive, extremely primitive PC that cannot be hooked up to the Internet (which didn’t exist). And cannot display graphics, except for some line graphs. And no colour!
- The computer received input (programs and data) from punch card readers or big magnetic tapes, and then from more advanced products like disk drives. Their output was either paper listings, tape, or other storage devices like disk drives.
- Their core memory, where the calculations were done, was on the order of 8-64K bytes. That’s 8K to 64K for a computer that took up an entire room, as opposed to the immense power our smartphones and iPads provide us with today. That’s K (kilobytes) for thousands, not M (megabytes) for millions or G (gigabytes) for billions, which is standard fare now on the lowest-priced laptops. The amount of memory and the speeds available at the time allowed for impressive advances in scientific calculations and in transactional business applications like payroll and accounting. But that amount of memory and processing power had no way of supporting graphics, video, or nearly any of the applications we take for granted now.
From when I started my life as a programmer in the mid-60s until the late 70s or early 80s, people “wrote” their programs by keying each line of code onto a punch card. Those of us who did this for a living actually wrote out our code (programs) on coding sheets, which were then submitted to the keypunch operators, who keyed our written work into a deck of punch cards. (Students keyed their own decks of cards!) The punch card deck was then submitted to the computer operators, who fed batches of card decks into the computer and then retrieved the printed output (computer listing), wrapped it around your card deck, secured it all with a rubber band, and then placed it in an output area for pickup. You then retrieved your output, reviewed the printout for errors, wrote out your corrections on a new coding sheet, resubmitted, and waited again. With skillful debugging and a little luck, your program worked right after a few tries. You looked it over pretty darn carefully, because this process could take from several hours to several days!
Needless to say, the arrival of networked remote computer terminals, linked to the mainframe by miles of wiring, was a welcome step forward. You could type your program directly on the terminal, just like on your home Commodore 64 or TRS-80 (which first came out in 1977), submit it from there, and wait for the results on the screen or on a computer listing you retrieved from the computer room. This was a huge advance. It also changed the nature of many jobs, eliminating, among others, the need for keypunch operators.
Prior to networking computers together so they could share data – and long before the Internet – stand-alone computers really meant standing alone. My earliest experience in data-sharing was in London in 1969, when an IBM operation in one part of town wanted to run its data on a larger, faster machine at another site. I was given the task of transporting a large disk pack of data of interest via London cab from Croydon to the City. That was the closest thing to networking in 1969!
The changes since then have been continual, enormous, and transformational. We all know how pervasive computer technology is in all aspects of our lives. We carry small devices that provide us with instant access to a phone, email, social media, encyclopaedias, language translators, dictionaries, a calculator, a camera and video recorder, games, newspapers, address book, personal calendar, maps, music, and even TV. That’s how far we’ve come in 50 years; a far cry from a pay phone in the stairwell, a typewriter, a slide rule, and a TV in the lounge with 2 channels. Commonplace applications that we take for granted today, like word processing, Photoshop, and spreadsheets, have transformed the way we do all sorts of everyday activities. Technology has transformed how we interact with each other right across the globe, how we work, and how we play.
When I left home for university I had never heard of a computer. As it turned out, I was introduced to the emerging world of computing earlier than most … and the rest, as they say, is history. I had absolutely no way of knowing what career paths were about to open up. As our current generation of high-school leavers ponders what possibilities lie ahead, it is worth remembering that sometimes you just can’t know in advance. Sometimes you just have to let it all unfold and be open to unexpected opportunities. Here’s hoping for many unexpected opportunities!
Photo credits: Wikipedia, Wikimedia