Bugs - the bane of a programmer's life. Life would be just great if we coded like wizards and everything worked the first time. Or maybe the process just wouldn't be as rewarding without those pesky obstacles.
It may sound rich to suggest that we take for granted how easy debugging is these days - but it's true. We have all sorts of tools at our disposal, not least sophisticated IDEs with intelligent built-in debuggers, plus a vast community knowledge base on places like Stack Overflow that can solve almost anything your debugger doesn't pick up.
Compare this to what our computer science forefathers had at the dawn of computing and you'll thank your lucky stars for what you have today - read on to find out where the name 'bug' comes from and how the first debuggers went about their work.
A literal bug in the system
The first recorded use of the term 'bug' tells us just how different the world of computing was in its early days. If you don't believe us, check this out - the first recorded bug was an actual moth that got stuck in a computer relay.
The written entry in the log is not very descriptive, but the computer scientists who made it give us a very clear picture of what happened. Grace Murray Hopper, who was part of the team operating the Harvard Mark II computer in 1947, reported that the team had discovered a moth inside one of their relays - Panel F, relay 70, to be exact - and taped the actual bug to the page to prove it!
Why they felt the need to immortalize the poor moth by taping it into their logbook can only be speculated upon - perhaps they knew the significance of what would become one of the most commonly used terms in programming, or perhaps they just found it amusing. We'll never know.
Here's our suggestion - next time you solve a bug, print out the report and stick it in a scrapbook. You should be as proud of your efforts as your programming ancestors were.
Debugging in the 40s
That original 'debugging' session in 1947 gives us a glimpse of how different programming and debugging used to be. Back then, computer design and programs were more or less the same thing - you designed circuits to do a specific task.
Computers were far less compact than today, and their components were larger and more fragile, so working out why they weren't working - 'debugging' - was as much a case of finding broken or missing pieces as it was tracking down errors in logic. A literal bug in the system was just one of many practical problems that had to be accounted for.
Typical debugging in the 40s
Computers developed hugely during the forties, going from mechanical calculators through to machines that could actually read written programs - the makings of modern computing.
The Second World War drove the practical invention of computers, turning early 20th-century academic theory into devices that could break German ciphers - starting with the famous 'Bombe' machines, built by the British under Alan Turing to attack the Enigma, and culminating in the British-built Colossus, an early programmable electronic computer.
These devices went from simple electromechanical rotors to fully fledged electronic computers with complex, interrelated circuitry.
Because these computers had their programming hardwired into their physical design, debugging was critical at the planning stage. It was more akin to structural engineers troubleshooting a prospective piping network or a bridge, working out through logical prediction what could go wrong.
With time and money in short supply, the engineers of these devices needed their designs to be pinpoint perfect. Even when they were, things could still go wrong - with a spaghetti mesh of wires and hundreds of moving, electrically firing parts, debugging was as much about technique and a sharp eye for component failure and displacement as it was about working out logically where things were going wrong.
Engineers of this period became experts at making these machines work and at troubleshooting the typical problems that arose with them.
Debugging the first stored programs
By the end of the decade we see the first instances of what we can truly call a computer running stored programs. These machines were fed programs on punched tape, which set memory components to their on or off states.
Debugging these programs was already quite similar to modern-day debugging, as the principles behind manipulating Boolean logic have not changed.
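To see what we mean, here's a toy sketch (our own illustration, not period code) of how the Boolean logic a 1940s relay computed is exactly the logic a modern program manipulates - which is why a jammed relay and a logic error look the same to the machine:

```python
# Two relays wired in series behave like a modern Boolean AND:
# current flows only if both contacts close.
def relay_and(a: bool, b: bool) -> bool:
    return a and b

# Two relays wired in parallel behave like a Boolean OR:
# current flows if either contact closes.
def relay_or(a: bool, b: bool) -> bool:
    return a or b

# A moth jamming a relay open is, logically, an input stuck at False -
# the circuit fails in exactly the way a software logic bug does.
stuck_open = False
print(relay_and(True, stuck_open))  # → False
```

The relay names and the stuck-contact scenario are our invention for illustration, but the underlying truth tables are the same ones relay engineers worked with.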
However, with these machines it was far more important to get the code right the first time: machine time was scarce and programs had to be punched onto tape and fed in by hand, so errors critically had to be removed at the planning stage.
The first practical example of these computers, Cambridge's EDSAC of 1949, was able to show, for the first time, where a 'bug' might be happening. If the program worked it would print its output - if it didn't, it would print the location in memory where the program stopped. Scientists could then inspect that location to see if, for example, two numbers were converging during an operation - a classic 'bug'.
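The idea behind that EDSAC-style diagnostic still underpins debugging today: run the program, and when something fails, report where it stopped. A minimal sketch, with the function, the 'locations', and the failing program all invented by us for illustration:

```python
# Toy model of "print the location where the program stopped".
# program: a list of (location, operation) pairs; an operation may raise.
def run(program):
    for location, op in program:
        try:
            op()
        except Exception:
            return f"stopped at location {location}"
    return "program completed"

program = [
    (68, lambda: 2 + 2),     # fine
    (70, lambda: 1 / 0),     # the 'moth' - this operation fails
    (72, lambda: 4 * 4),     # never reached
]
print(run(program))  # → stopped at location 70
```

A modern stack trace is essentially this report with vastly more detail attached.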
The makings of modern programming, and alongside it debugging, appeared less than ten years after the first Bombe was designed - pretty good going, no?
So we do have it easy
So there you have it - a very brief rundown of debugging in the 40s. We said you had it easy, and hopefully now you agree: from working out mathematical and mechanical failures through to inspecting the physical circuitry for actual bugs, debugging the first computers required an entirely different skill set to debugging the programs of today. Although, do let us know if a program of yours isn't working because you found a moth inside your laptop - we'd love to cover it in our next newsletter.