Hi folks,
I remember one of Jerry Seinfeld's stand-up moments where he was perplexed by the fact that women, who can basically take boiling hot wax, pour it on their legs and rip the hair out by the roots, can still be afraid of bugs. Well, maybe the same paradox can somehow be extrapolated to software developers and software bugs too. But why, and what are these software "bugs" anyway?

In one of our older articles we told you how the word "bug" came to express computer malfunction. It originally applied to hardware-related malfunctions, but the short-and-catchy term quickly came to designate software-related malfunctions as well, and inaugurated the gallery of creatures in the software domain, which also includes viruses, worms, spiders and other crawlers. But unlike those, which are created on purpose, bugs are created by mistake.

Bugs are errors (or defects, flaws, faults) in software code that, depending on the magnitude of the mistake in design or writing, can produce effects on a system's behavior ranging from merely annoying or inconvenient to almost catastrophic.

Probably the most famous bug is the Millennium Bug, aka "Y2K". It was a design flaw: for the sake of simplicity, many professional programmers in the decades preceding the year 2000 expressed the year within a date using 2 digits only. So, for example, the year 1986 was stored as "86" by many computer programs. A quite rational thing to do, actually, except that when the year 2000 came, it would be stored as "00": indistinguishable from the year 1900, and breaking the increasing-numbers logic (i.e., 96, 97, 98, 99, 00). Well, believe it or not, this bug was so serious that it prompted a coordinated worldwide response (as if it were an alien bug from space coming to destroy humankind, like in a Hollywood B movie).

An International Y2K Cooperation Center was established, uniting 120 countries and funded by the World Bank. It is estimated that the global cost of avoiding a potential disaster exceeded 400 billion USD in today's money. But disaster prevention efforts can pay off in unpredictable ways: when the September 11th 2001 tragedy happened, New York's infrastructure remained operational thanks to network redundancy and backup plans originally devised for worst-case Y2K scenarios. Needless to say, after hundreds of billions of bucks spent and a global emotional warming, nothing bad happened (except for the usual stuff, of course).

Thing is, if you think such a ridiculous situation is unique, you'd better think again: have you ever heard of the Year 2038 Problem?

Well, this problem is related to the C programming language and the way it handles time values. The standard C time library uses a 4-byte (that is, 32-bit) format to handle (calculate, convert and store) time values as seconds. This format assumes that time started on January 1st 1970 at 00:00 UTC, so all time/date values are conveniently (for a computer) calculated as seconds elapsed since that date. The problem, however, is that the largest representable value is the biggest 31-bit binary number (the 32nd bit expresses the sign, "+" or "-"), which in decimal means 2,147,483,647 seconds (2^31 - 1); added to the "zero moment", this translates into 19 January 2038. Most 32-bit Unix-like systems store and manipulate time in this format, so the Year 2038 bug mainly concerns the numerous Unix installations and embedded systems that control lots of devices: phones, cars, avionics, medical equipment and so on. Moving from 32-bit to 64-bit time values is one of the proposed solutions but, even if it is a natural step already underway for general-purpose computers, it is not as simple for embedded systems. Still, even though the bug's first manifestation came to attention quite late (in May 2006, in the AOLserver open-source software), the problem is being addressed and will probably be easier to overcome than Y2K.

Well folks, the good news is that the crises generated by these bugs had (or will have) a happy ending, regardless of the costs. The bad news is that some other bugs had tragic outcomes: for example, the Therac-25 radiation therapy machine (1985-1987), which, because of software bugs in both its design and its development practices, caused the death of at least 4 patients by delivering radiation doses 100 times higher than intended.

Another example of a bug turning tragic happened in February 1991, when, because of a bug in its tracking software (a timing error that accumulated over long hours of continuous operation), a Patriot missile battery failed to intercept an incoming Iraqi Scud missile, allowing it to hit its target, killing 28 soldiers and injuring a further 98. There are other examples too, but we really don't want to turn this article into a blacklist of casualties and damages caused by software bugs.

Instead, we are going to finish by mentioning two bugs discovered very recently which, although not necessarily life-threatening, are nonetheless serious.

One of them, nicknamed "Heartbleed", is a security bug in OpenSSL, an open-source implementation of the Transport Layer Security (TLS) protocol.

OpenSSL is deployed almost everywhere, being used in web servers (like Apache and nginx), operating systems (including Linux distributions such as Red Hat Enterprise Linux, Oracle Linux, Amazon Linux and Ubuntu, as well as Android 4.1.1), various software applications and firmware.

In other words, most major websites and online services in the world were affected, not to mention the client-side vulnerabilities and the less impressive names on the list. Even though the OpenSSL Project was founded in 1998, and despite the code being open-source, the bug was only discovered by Neel Mehta of Google in April 2014! A fix was developed by Bodo Moeller and Adam Langley of Google (or was it Adam Google from Langley?), but the truth is that the vast majority of specialists seem to agree that the damage produced by this extremely serious security bug is almost impossible to track and estimate.

But wait, we're not finished with internet security protocol bugs yet. Last month a new vulnerability was discovered and reported.

It was nicknamed POODLE (which stands for "Padding Oracle On Downgraded Legacy Encryption"), and even though it is a flaw in the design of the SSL 3.0 protocol (long since replaced by TLS, as told above), most browsers still support SSL, and here's the thing: whenever a connection fails, browsers automatically retry with a downgraded security protocol... like the vulnerable SSL 3.0. So unlike Heartbleed (which concerned one particular implementation), the POODLE bug concerns all implementations, open-source or proprietary alike, because the vulnerability resides in the very design of the protocol, which was always public, by the way.

A long-lived bug, considering it took 18 years to be discovered. And even if it's less dangerous and quite easy to fix, it still raises two important questions. One is how seriously public specifications and open-source code are actually reviewed by the crowds of contributors. The other is how many other such bugs are still at large, hidden in plain sight.

Well folks, regardless of the answers, we think the one conclusion anybody can draw from this article is that all bugs in software are actual manifestations of the bugs in our heads, of course.

And, as it seems we're making fast progress in designing software able to rewrite itself (so presumably also able to debug and fix itself), we'd better hurry up and pay some more attention to the bugs in our own thinking too. Before it's too late.