Did you ever get that feeling that the whole world has it out for you, that everyone is plotting and conspiring to make your life a misery? Don’t worry, you aren’t alone. Google gets you. Google understands.
Trust Nobody - Not Even Yourself...
So one time our lovable search engine giant concluded that every website on the internet was up to no good.
Had you been surfing the world wide web in the early hours of the 31st of January 2009, you may have run into some trouble trying to access your favourite online content - every page on the internet was flagged with an ominous ‘This site may harm your computer’ warning, and users were prevented from visiting them.
Google obviously thought that every website was conspiring to harm its users. This included Google’s own webpages. I mean, fair enough to Google. How can you trust your friends when you can’t even trust...yourself?
In all seriousness, think about it though. This was January 2009 - you needed Google.
You probably sat there unsure what to do with yourself for the 40 minutes the bug lasted. iPhone 3G in hand, you were probably looking for news about Barack Obama’s first days in office, just about to play ‘Single Ladies’ by Beyoncé on YouTube. You probably tried to go on Twitter to complain that you couldn’t book tickets online to see the box-office-topping Clint Eastwood film ‘Gran Torino’. Probably.
It may surprise you to learn that I Googled all that outdated pop culture. See how useful Google is?
Noble Causes, Major Bugs
The error was this: there’s a non-profit organisation that works with Google called StopBadware.org. It researches and compiles a list of websites known to sneakily upload and install malware onto your computer, and publishes that list - a noble cause, to be sure.
Periodically StopBadware.org sends Google a file containing the list of URLs to flag as dangerous, which Google then uses to warn people before they click through. Now the error here was human, as always - on January 31st the list sent to Google contained an errant ‘/’ on one of its lines.
I don’t know if you’ve ever seen a URL before, but forward slashes are in all of them. Google’s code read this forward slash, and any URL that contained it, as a URL to be worried about.
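The actual pipeline Google used has never been published, but the failure mode is easy to sketch. Assuming a simple substring match against each entry in the feed (the function and variable names below are invented for illustration), a lone ‘/’ entry matches every URL in existence:

```python
# Hypothetical sketch of the errant-slash bug. The real Google/StopBadware
# pipeline is not public; names and the matching strategy are assumptions.

def load_flagged_patterns(feed_lines):
    """Parse the badware feed: one URL pattern per non-empty line."""
    return [line.strip() for line in feed_lines if line.strip()]

def is_flagged(url, patterns):
    # Substring match: a pattern consisting of just "/" matches every
    # URL, because every URL contains a forward slash somewhere.
    return any(pattern in url for pattern in patterns)

feed = [
    "http://malware.example.com/",  # a legitimate entry
    "/",                            # the errant lone slash
]
patterns = load_flagged_patterns(feed)

print(is_flagged("https://www.google.com/", patterns))   # flagged
print(is_flagged("https://www.youtube.com/", patterns))  # flagged too
```

With the ‘/’ line removed from the feed, only genuinely listed sites would match - which is presumably why deleting one character fixed the whole internet.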
Thus, on the morning of the last day of January 2009, every single site on the world wide web, as far as Google’s code was concerned, was diseased and dangerous, and it put its warning measures in place accordingly.
It’s Actually Quite Scary If You Think About It
For forty minutes nobody could browse via Google - you could only reach a site by typing its URL in directly.
It may not seem like a long time, but think about the implications for a second. One simple bug and the largest website in the world, on which almost every other one depends, was rendered useless.
Imagine if it had been longer. Imagine if it had taken Google three days to spot where the problem lay. Or even a week. I don’t know if I could live that long without cat videos and Facebook.
Or more seriously, if I were an online business owner, I don’t know how I could live that long without...you know...money.
It’s roughly estimated that digital advertising spending across the board will be around $335bn for the year 2020. If we assume that all that revenue, at least to some degree, is tied to Google being up and running, then we can roughly say that 40 minutes of downtime would equal approx. $25m wasted. Three days would be $2.7bn. And that isn’t even considering the revenue loss from e-commerce sites like Amazon, which depend on organic traffic to function.
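The back-of-envelope maths above is easy to check yourself - spread the projected annual ad spend evenly over the year and multiply by the downtime (the even-spread assumption is, of course, a deliberate simplification):

```python
# Rough downtime-cost estimate, assuming ad revenue accrues evenly
# over the year. The $335bn figure is the projection quoted in the text.
AD_SPEND_2020 = 335e9  # USD per year

minutes_per_year = 365 * 24 * 60          # 525,600 minutes
per_minute = AD_SPEND_2020 / minutes_per_year

loss_40_min = per_minute * 40             # roughly $25m
loss_3_days = per_minute * 3 * 24 * 60    # roughly $2.7-2.8bn

print(f"Per minute:  ${per_minute/1e6:.2f}m")
print(f"40 minutes:  ${loss_40_min/1e6:.1f}m")
print(f"3 days:      ${loss_3_days/1e9:.2f}bn")
```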
Now that we’re talking big numbers, reader, it doesn’t seem like such a trivial matter any more.
A Bug For The Better?
Despite my doomsaying, in the end this could be one of those bugs that was actually more useful in the long term than harmful. The problem was swiftly solved, Google quickly apologised, and thankfully not too much money and productivity was lost.
But how can it be useful, you ask?
Here we can be reminded of Y2K, another bug we have covered here. It can be argued that the Y2K disaster never materialised purely because it was worked against so hard, with so many measures put in place to stop it happening.
Similarly, it’s good that Google got a ‘trial’ run like this - one scary enough to push it into implementing the more ‘robust checks’ Mayer mentioned in her apology.
The 2009 bug could have been harder to find, it could have been bigger and it could have been more damaging.
Nearly ten years on, the number of companies that depend on Google for their revenue is far greater than it was in 2009, and continues to grow. Less and less is it acceptable for a giant like Google to have any sort of unreliability.
The fact that we had this bug, then, is probably a force for good. It has probably averted all kinds of subsequent bugs.
The lesson is clear though - it can never hurt to do a bit more testing.