
Exploitative Businesses & Divine (and Tech) Interventions

When I wrote ‘Design the Future’, which was about Design Thinking, it included a brief overview of the behavioural aspects of innovation – from both an innovator’s and a user’s perspective.
 
There was a mention of nudges (not sure I used the term though).
I have been of the (possibly obvious) view that as companies get increasingly sneaky, especially when selling ill-health or stuff we don’t necessarily need, customers keep pace by becoming resistant to the nudges, however creative the marketing gets.
 
I also think the Ben Franklin Effect probably wears off, and that people aren’t exactly suckers who will keep giving. Of course, this varies with people, their preferences, value trade-offs, etc.
 
Unless businesses are sincerely trying to benefit customers or create a good habit in them, I’ve personally never been a fan of exploitative nudges. Which is why, while some soft drink or fast food ads and initiatives are creative and impressive, you know they aren’t promoting anything great in customers.
 
Two recent events seemed like a sort of divine intervention against nudges and business practices that aren’t exactly in customers’ best interests.
 
First, Cristiano Ronaldo removing Coca-Cola bottles during a press conference at the Euros, which coincided with a $4bn fall in the company’s market value. Nothing against the company in particular, but I’m not a fan of global giants that proudly continue to promote ill-health.
 
Second, email marketing. While useful to businesses, including mine, for spreading the word, it has also become increasingly sneaky in that marketers closely track numerous user interactions. I recently got an email about an offer. I opened it because the subject line was interesting, but immediately realized I didn’t need it. Instantly, the next mail appeared, asking if something was missing in the offer (the previous mail). That was pushing it.
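For the curious, open tracking of this kind is typically built on a tiny invisible image embedded in the email’s HTML. Here is a minimal sketch of the idea in Python; the server, port, and the ‘r’ token parameter are all hypothetical, invented purely for illustration:

```python
# A minimal sketch of how email "open tracking" typically works:
# the sender embeds a 1x1 transparent image whose URL carries a
# per-recipient token; when the mail client fetches the image,
# the server logs the open. All names here are hypothetical.
import base64
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# A 1x1 transparent GIF, base64-encoded
PIXEL = base64.b64decode(
    b"R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

class TrackingPixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        token = query.get("r", ["unknown"])[0]  # per-recipient token
        # In a real system this would go to an analytics store, not stdout.
        ts = datetime.now(timezone.utc).isoformat()
        print(f"{ts} open recorded for recipient token={token}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    # The email HTML would contain something like:
    #   <img src="http://tracker.example.com:8000/open?r=abc123"
    #        width="1" height="1">
    HTTPServer(("", 8000), TrackingPixelHandler).serve_forever()
```

Everything hinges on the mail client fetching that single image; block or proxy the request, and the ‘open’ signal disappears.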
 
As per developments discussed at the Worldwide Developers Conference (WWDC), which ended about a week ago, Apple will be putting more limitations on email marketing and in-app advertising. Among other features, it will likely prevent marketers from knowing when users have opened their emails.
 
During a recent project with a company trying to create a positive habit in customers, the analytics team had a list of around 140 data points/actions to track on the app. I found some more, taking the tally to 200. While the overarching service benefits customers, I wasn’t overly proud of my contribution, and faced the moral dilemma of whether we should track so much, or simply create a more effective user experience that might achieve both objectives: one for the customer and one for the company.
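To make the scale of that tracking concrete, here is a hedged sketch of what instrumenting such data points often looks like in app code. The client class, event names, and properties below are all hypothetical, not from the actual project:

```python
# A hypothetical sketch of in-app event tracking, the kind of
# instrumentation behind a 200-point tracking plan. Every tap,
# scroll, and screen view becomes a named event with properties.
import json
import time
from dataclasses import dataclass, field

@dataclass
class AnalyticsClient:
    user_id: str
    events: list = field(default_factory=list)

    def track(self, event_name: str, **properties):
        """Record one user action as a structured event."""
        self.events.append({
            "user_id": self.user_id,
            "event": event_name,
            "ts": time.time(),
            "properties": properties,
        })

    def flush(self):
        """A real SDK would batch-send these to an analytics backend."""
        print(json.dumps(self.events, indent=2))
        self.events.clear()

# A handful of the ~200 tracked actions might look like:
client = AnalyticsClient(user_id="u-42")
client.track("screen_viewed", screen="home")
client.track("offer_card_tapped", offer_id="summer-promo", position=3)
client.track("habit_checkin_completed", streak_days=12)
client.flush()
```

Multiply those last few lines by 200 and you get a sense of how much behavioural detail a single app session can yield.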
 
Interesting how some companies sell invasive tech to businesses, while others offer defences against such tech in the form of new features on their products.
 
 

Moral Dilemmas from the Future


I came across an extremely interesting article on the future of healthcare that gives us a peek into the near future. It also highlights the increasing complexity and the moral high seas that businesses will have to navigate in the years to come.

Google has been able to predict regional flu trends since 2008 or earlier. Most people share more with her (I refer to her as Ms. Google) than they do with close friends and family. And thanks to this, Google has been getting increasingly good at predicting whether someone may have a certain condition or illness, based on their searches and perhaps the mention of some symptoms that ordinarily might not raise any red flags.

The article asks whether, in such a situation, Google should be (or is) responsible for telling users that they might be ill, or should just go about business as usual, providing search results and nothing more.

Most of us might have a direct, personal answer to the question: either a ‘most certainly Google should tell me’ or a ‘hell no!’. The problem, however, gets more complicated with the large number of false positives (false alarms). That, and the astronomical medical costs associated with those false alarms. Not to mention the number of angry users who might consider suing Google for medical expenses, all because of incorrect information it gave them out of a moral obligation it may have felt towards them.

The problem (and the article) doesn’t stop with Google. It also touches upon an older but extremely important topic: self-driving cars and the choices they’d make on our behalf. Imagine a situation where you, the owner of an autonomous car, are being driven, and are heading toward a group of people who suddenly jump irresponsibly onto the road. Would you rather your car hit them, or avoid them but crash into a wall, killing you? Or consider the choice your car might one day have to make between two similar, unavoidable eventualities.

Coming back to the Google problem: its accuracy has only been getting better with time and search volume, and the question involves everything from user reactions to health insurance coverage. All of which makes it a very interesting and complex one to answer.

You should really read this one!

Here’s the article link.

***

Look forward to your views. And if you liked this one, consider following/subscribing to my blog (top right of the page). You can also connect with me on LinkedIn and on Twitter.
