The Mortal Risk of Riding Shotgun in an Autonomous Vehicle

We live in strange times. And in interesting and amusing times.

A recent article I read discussed how most automotive manufacturers are misleading their customers (or are confused themselves) when they claim to offer autonomous driving features in their vehicles.

Their mindset seems hugely flawed, if not shocking. Article here

Don Norman could have a field day ripping this mindset apart.

I have heard numerous stories, since I was a teen, of people falling asleep while driving to or from work in the US. It never made sense to me. In the years since, however, I have seen and personally experienced fatigue while driving.

I worked in Pune in the manufacturing sector for a year and a half. Work largely involved workday trips to relatively far-off industrial sectors and, every other weekend, trips back home. I was mostly driving alone.

Then there were outstation trips, where I would leave early one morning, pick up one or two colleagues, and drive to another city to attend meetings at companies spread across a large industrial sector. The next few days would involve more meetings all day, before either driving back to Pune or driving on to the next city for an encore. In all, over 33,000 km in under 18 months.

What auto manufacturers apparently offer as autonomous driving is a range of driving systems that take care of driving for you: identifying and staying within lanes, measuring vehicular distance and braking safely, and using GPS to drive you to your destination.

You would assume you could completely disconnect and do your thing, as your car takes you places. However, auto manufacturers still expect you to be as alert as if you were driving, in case a sudden manual intervention is needed.

That expectation of theirs is absurd at best.

Humans are either engaged or not. As my Statistics professor would often put it, quoting the popular idiom, ‘she’s either pregnant or not; there is no somewhat pregnant’.

If you have someone drive a car, you can hope they are awake and alert. And yet there’s no guarantee, proof being the numerous accidents that occur due to distracted driving.

But the moment you are not driving, your brain switches off, or switches to something else. Unless you are a professional rally car navigator, or in the armed forces.

On most long distance drives, be it with friends, family or work colleagues, the person in the passenger seat eventually nods off, and I’m almost certain it is not because of the company.

So, expecting someone not to drive, but have the alertness and rapid response times of someone who is, is asking for a lot!

Of course, the biggest reason for this expectation is not so much the flaws in technology, but rather human behaviour again. Many autonomous vehicle accidents are due to unanticipated human errors – be it pedestrians or other human-driven vehicles.

So the effort should go into reducing that unpredictability in erratic human driving before rolling out technology that could cause fatal harm to customers who come with a very different expectation of the technology than what the manufacturer actually offers.

Look at the quality revolution and process improvement. They took industry by storm several decades ago, and their impact on our machines and automated processes is unquestionable. But are we humans more efficient today, or are we far more distracted and poorer managers of our time than we once were? Phones, entertainment and noise are to blame.

Maybe manufacturers are explaining the gaps in the tech to customers before the purchase. Maybe they are even spelling out the risks and precautions. But there is only so much human behaviour you can change in a short period of time.

And finally, it was amusing how this potentially life-threatening flaw got reported.
The article was titled, “..a UX risk!”
Why dilute a crucial message?
It’s a f@€k!^¢ risk to life! Far more than a risk to the customer experience.
Can’t have a bad experience if you’re dead. Why not highlight that?

If you own, manage or work at a company, and are grappling with a complex challenge or are in need of innovation for growth, get in touch. More here.

And you might find my book, ‘Design the Future’ interesting. It demystifies the mindset of Design Thinking. Ebook’s on Amazon, and paperbacks at leading online bookstores including Amazon & Flipkart.

Moral Dilemmas from the Future

I came across this extremely interesting article on the future of healthcare that gives us a peek into the near future. It also highlights the increasing complexity and the moral high seas that businesses will have to navigate in the years to come.

Google has been able to predict regional flu trends since 2008 or earlier. Most people share more with her (I refer to her as Ms. Google) than they share with close friends and family. Thanks to this, Google has been getting increasingly good at predicting whether someone may have a certain condition or illness, based on their searches and perhaps the mention of some symptoms which ordinarily might not raise any red flags.

The article essentially asks whether, in such a situation, Google should be responsible for telling the user that they might be ill, or should just go about business as usual, providing search results and nothing more.

Most of us might have a direct, personal answer to the question: either a ‘most certainly Google should tell me’, or a ‘hell no!’. The problem, however, gets more complicated with the large number of false positives (false alarms), the astronomical medical costs associated with those false alarms, and the number of angry users who might consider suing Google for medical expenses incurred because of incorrect information it gave them out of a moral obligation it may have felt towards them.

The problem (and article) doesn’t stop with Google. It also touches upon an older but extremely important topic: self-driving cars and the choices they’d make on our behalf. Imagine a situation where you, the owner of an autonomous car, are being driven. You are heading toward a group of people who suddenly jump irresponsibly onto the road. Would you rather your car hit them, or avoid them but end up hitting a wall that kills you? Or consider the choice your car might one day have to make between two similarly unavoidable eventualities.

Coming back to the Google problem: its predictive accuracy has only been getting better with time and with more searches, and the question touches everything from user reactions to health insurance coverage. All of which makes it a very interesting and complex question to answer.

You should really read this one!

Here’s the article link.

***

Look forward to your views. And if you liked this one, consider following/subscribing to my blog (top right of the page). You can also connect with me on LinkedIn and on Twitter.