
Sample Size of One: Towards a Possible Solution

This post explores a possible way to address the replication crisis, particularly in behavioural science and economics (and, where relevant, in other fields too). It continues an earlier post titled Sample Size of One: The Rose Negotiations; it would help to read that one first.
 
What can we do about our human urge to create or find patterns and thumb rules for how we function? Or to find keys to getting humans to behave in a particular manner, be it to drive a healthier culture or to improve financial sense among populations? Especially when few such patterns exist, and when our urge to simplify may flatten a phenomenon that is far broader than the rule we would like it to fit.
 
Here is a broad suggestion of what a possible solution looks like, at least in my head.
Consider a “hypothetical” scenario where a group of researchers wants to find the effects of an ‘opt-in/opt-out response’ for organ donation.
Until now, behavioural economists or scientists would identify a large, diverse group of volunteers and conduct the experiment. Let's suppose that at the end of the study, they found that 70% of respondents opt into the organ donation program when the form requires them to physically opt out of organ donation.
Now, a non-profit elsewhere in the world tries this tactic on its local population, but perhaps sees a less than encouraging outcome, with far less than a 70% success rate. This leads to questioning of the research findings, and to the broader hue and cry around the reproducibility and replicability of such studies and experiments.
 
For a moment, consider currencies. They are always fluctuating, and yet there is a definite exchange rate to convert between any two currencies at a given point in time.
Or consider the diverse marketplaces across the world. Any single product would be priced differently in these different marketplaces. Within a single market, the price variance might not be much, but the price might change again if you went to a market in the next village.
 
Coming back to the search for an alternative to traditional experiments, which try to find thumb rules and then apply them to social or business causes.
 
The alternative I see is one where the economist or scientist creates a simple experiment or study around the hypothesis they would like to test, puts it on an online platform, and shares it with colleagues and counterparts across the world, who then deploy it among their local populations.
The experiment would be introduced via a website. Deployment could be done online, with voluntary participants or randomly approached people. The experiment would run in perpetuity (hence online), and its results would keep evolving across time and geographies.
 
The outcome for the same opt-in/ opt-out hypothesis with this alternate deployment might look something like this:
The experiment is designed to be unbiased, simple (easy to deploy without the original team of researchers being physically present), and yet robust enough to provide meaningful data.
The results of this experiment would not be captured as a single value (like 70% in the first hypothetical scenario), but rather as a function of (age/sex/location/study response/point in time).
As a result, the outcome would look like a collection of diverse data points from across the world, captured at diverse points in time.
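To make the "function of (age/sex/location/study response/point in time)" idea concrete, here is a minimal sketch of what a single captured response could look like. This is just one possible shape for the data; every field name here is a hypothetical choice, not part of any existing platform.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class StudyResponse:
    """One participant's response to one deployment of an experiment.

    Hypothetical schema: the point is that the outcome is stored together
    with its context, rather than collapsed into a single headline number.
    """
    experiment_id: str     # e.g. "organ-donation-opt-out-default"
    age: int
    sex: str
    location: str          # e.g. "Mumbai, India" (could be finer-grained)
    opted_in: bool         # the study response itself
    recorded_at: datetime  # the point in time the response was captured

# One hypothetical captured response
response = StudyResponse(
    experiment_id="organ-donation-opt-out-default",
    age=34,
    sex="F",
    location="Mumbai, India",
    opted_in=True,
    recorded_at=datetime.now(timezone.utc),
)
```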
It is possible that patterns will emerge in localized groups, or even at a national level for some experiments, since respondents or the general population might share a similar national history, the same current political and economic environment, and similar fears and concerns, whether about inflation, unemployment, or a multitude of other variables that were possibly being ignored when a research study focused on finding a single thumb rule.
With a global, perpetual study of the same opt-in/opt-out experiment, we might get results like an average of about 65% in Mumbai, India, but around 20% on the outskirts of Mangalore, India, and maybe even 80% in Itanagar, India.
That way, researchers and anyone trying to use these research findings would be mindful that there is no one-size-fits-all number. Rather, one might (cautiously) expect a similar response to an organ donation campaign in a town in Country 1 and a city in Country 2, because their outcome values over a particular period of time have been similar.
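As a rough sketch of how such region-level, time-bounded figures could be computed from records like the one above, and how two places could be flagged as behaving alike over a period: the place names, the example rates, and the 5% tolerance below are illustrative assumptions, not findings.

```python
from collections import defaultdict
from datetime import datetime, timezone

def opt_in_rate_by_location(responses, start, end):
    """Opt-in rate per location, using only responses recorded in [start, end]."""
    counts = defaultdict(lambda: [0, 0])  # location -> [opt-ins, total]
    for r in responses:
        if start <= r.recorded_at <= end:
            counts[r.location][0] += int(r.opted_in)
            counts[r.location][1] += 1
    return {loc: opted / total for loc, (opted, total) in counts.items()}

def behave_alike(rate_a, rate_b, tolerance=0.05):
    """Crude similarity check: if two places' rates over the same period fall
    within `tolerance` of each other, a campaign might (cautiously) expect
    comparable responses in both. The 5% tolerance is an arbitrary choice."""
    return abs(rate_a - rate_b) <= tolerance

# e.g. rates = opt_in_rate_by_location(
#          all_responses,
#          datetime(2024, 1, 1, tzinfo=timezone.utc),
#          datetime(2025, 1, 1, tzinfo=timezone.utc))
# might come back as something like {"Mumbai, India": 0.65, "Itanagar, India": 0.80, ...}
```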
 
And these values that emerge across individuals and locations are not fixed. They keep evolving, reflecting how people in a particular society change along with its sociopolitical and socioeconomic landscape, among other variables. So perhaps the same individuals could participate in the same experiment multiple times over the years, with different results each time. In that sense, it would be similar to retaking an IQ test or an MBTI test.
Which means the same non-profit that is driving an organ donation exercise in a particular country in a particular year would refer to the current result outcomes for different parts of that country, to determine what strategies it might have to employ (government intervention, financial incentives, etc.) towards driving a more successful change effort.
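Building on the sketches above, that lookup might be as simple as restricting the data to one country and a recent window before computing the per-location rates. The 180-day window and the crude suffix-based country filter are assumptions for illustration only.

```python
from datetime import datetime, timedelta, timezone

def current_rates_for_country(responses, country, window_days=180):
    """Per-location opt-in rates within `country`, using only the last
    `window_days` of responses, so the figures reflect the current context
    rather than an all-time average. Reuses opt_in_rate_by_location above."""
    now = datetime.now(timezone.utc)
    cutoff = now - timedelta(days=window_days)
    in_country = [r for r in responses if r.location.endswith(country)]
    return opt_in_rate_by_location(in_country, cutoff, now)

# e.g. current_rates_for_country(all_responses, "India")
# might return something like {"Mumbai, India": 0.65, "Itanagar, India": 0.80, ...}
```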
 
An obvious extension of this proposed solution will follow in a post soon.
In the meantime, it was still challenging to convey the problem and my solution concept to people, so I started working on a simpler way to do that, and here it is.
 
#SampleSizeOfOne #BehaviouralScience #BehaviouralEconomics
