Are Your Innovations Safe?

Today’s pace of life can make you feel like you are strapped to the top of a rocket. With more and more things screaming for our attention, we barely have time to send that long-forgotten birthday card, let alone sit down and really think about the long-term effects of our innovations. But what if your latest and greatest innovation turned out to damage the lives of millions instead of improving them as planned? What if your proudest moment was also your most heinous? We are now entering the era where this might just be the case.

At the start of the dual Oscar-winning documentary “An Inconvenient Truth”, Al Gore draws our attention to two important things. Although both have been around for thousands of years, advances in technology mean they can now detrimentally affect the lives of millions, even billions, of people on the planet. The first is weapons innovation: think nuclear arms. The second is the effect our advances are having on the climate. When improper checks and balances are used, we can now add a third to this list: information technology.

When it comes to the climate and weapons of mass destruction, we either have a grip on the problem or are at least starting to try to control it. With the third, information technology, we are in completely unknown territory, so we can only guess at the outcomes. It is also an area of innovation used and exploited by practically everyone pushing the innovation agenda. The problem is that, if we’re not careful, it may cause more damage than climate change and armaments combined.

Why? To answer this, let’s first take a step back and divide serious information technology disasters into two types. The first type is major and seriously affects a limited number of people. The second type is catastrophic and affects exponentially more. And the scariest thing about both types? They occur in the same way; only the number of people affected differs.

An example of the first type, a major disaster caused by our data-driven lifestyles, is the London Ambulance Service chaos of 1992. Like most innovation projects, the disarray tragically grew out of the government’s pursuit of an admirable goal: ambulances reaching 95% of emergencies within 15 minutes. The plan was to achieve this via a computer-aided dispatch system. However, when the system was partially implemented as a semi-automatic scheme, the number of emergencies reached within 15 minutes dropped from the usual 65% to 30%. Ignoring these poor results, the fully automated system was introduced, and after the first day fewer than 20% of crisis situations were reached, with the percentage falling again the next day.

Within six hours of the system going live, staff were taking up to 10 minutes to answer incoming phone calls. Call volumes increased as people rang to ask why ambulances weren’t coming. Management decided to switch back to the semi-automated system, but one week later the whole system crashed, flatlining 1.5 million pounds along with it.

Why did this happen? Because management was determined to introduce the new technology in 14 months. It was estimated after the disaster that a proper implementation, with all the necessary testing and quality controls, would have taken around five years. This is a point I will return to later.

An example of the second type is the health care scare in California in 2007. A malfunction in the Department of Health Services’ new automated computer system cut off the entitlements of thousands of poor seniors and people with disabilities, and Medicare promptly cancelled their healthcare coverage as a result. Unlike the first type of disaster, failures like this affect not just a city but whole states or countries.

Although not many disasters of the second type have occurred, society is edging closer and closer to such catastrophes, as the constant stream of headline technology problems (credit card fraud, identity theft, etc.) shows. Worse, systems which purport to protect us can sometimes cause more harm than good. One such system is the one used to classify people as terrorists at airports. These systems are so inaccurate that each week around 1,500 ill-fated airline travellers are incorrectly classified as terrorists. Examples include “a four year old boy, former Army majors and an American Airline pilot who was detained 80 times over the course of a single year.”
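To get a feel for how a screening system can look accurate on paper and still snare so many innocent travellers, consider a rough back-of-the-envelope calculation. The sketch below is purely illustrative: the weekly passenger volume and false-positive rate are assumptions chosen to match the order of magnitude of the figure above, not numbers from any official source.

```python
# Illustrative sketch: why even a tiny false-positive rate flags
# large numbers of innocent people once a system operates at scale.
# Both figures below are assumptions for illustration only.

weekly_passengers = 10_000_000   # assumed number of travellers screened per week
false_positive_rate = 0.00015    # assumed 0.015% chance an innocent traveller is flagged

expected_false_alarms = weekly_passengers * false_positive_rate
print(f"Innocent travellers flagged per week: {expected_false_alarms:,.0f}")
# With these assumptions, roughly 1,500 people per week are wrongly
# classified, the same order of magnitude as the figure cited above.
```

The point of the exercise is simply that error rates which sound negligible in isolation become thousands of affected people once millions pass through the system.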

As more and more services enter the realm of code, examples like this are becoming more common. And it is not just errors: intentionally designed features can also cause unintended harm, physical, psychological or both. Google’s powerful ad-serving machine, AdWords, has been shown to serve adverts such as “Have you ever been arrested?” alongside Google searches for names commonly associated with African-Americans.

According to Marc Andreessen, co-founder of one of Silicon Valley’s most powerful venture capital firms, software is eating the world. As software relentlessly devours more services with more hastily implemented code, we can only pray for the best. These looming disasters should prompt us to stop and reflect on our new innovation tools even as we proselytize innovation’s benefits. If our idea is to be released upon millions, are we sure what we’ve built has the quality controls behind it to prevent a snowballing disaster?

Don’t get me wrong. I develop such tools myself and am a strong supporter of their use; I have implemented as many innovations in my career as the next person. However, as the pace of innovation picks up, there is a clear need to make sure our checks and balances are appropriate for the millions affected by our ideas. In this respect, many 20th-century risk mitigation tools may no longer be adequate. We need to carefully consider a set of quality checks suited to the extreme reach of our innovations. The 2010 Dow Jones Flash Crash, which annihilated a trillion dollars of wealth in just 300 seconds, is a violent reminder of what happens when proper quality controls are lacking.

This is one of those rare occasions where we need to step back and look at what we’re doing. As the people defining the innovation agenda of some of the largest and most influential companies in the world, we have a responsibility to think about the tools and techniques we are developing. Even if you aren’t interested in an ethical debate, it is a matter of professionalism, because it would be tragic to become a victim of your own proud innovative idea. You just need to ask yourself: am I the master of my innovations, or is my drive to innovate mastering me?

Image credits: Flickr Gerry Lauzon

Evan Shellshear is a technology and software expert working as the Point Cloud Manager at the Fraunhofer-Chalmers Centre in Sweden. His work focuses on turning cutting-edge research into successful industrial solutions across numerous industries. Connect with him @eshellshear
