What is this? MisDisMal-Information (Misinformation, Disinformation and Malinformation) aims to track information disorder and the information ecosystem largely from an Indian perspective. It will also look at some global campaigns and research.
What is this not? A fact-checking newsletter. Organisations like Altnews, Boomlive, etc., already do some great work in that space. This newsletter may feature some of their fact-checks periodically.
Welcome to Edition 37 of MisDisMal-Information
By the time you read this edition, MisDisMal-Information will have turned one year old. In saner times, this is a milestone I would have been thrilled about. I considered skipping this edition altogether because, really, who is thinking or reading about information disorder right now? Ultimately, I decided to power through, just in case some people could do with a distraction - I know I could. But I haven't been able to think as deeply as I would have liked about some of the developments I am writing about in this edition.
Cooperation of the crowds, madness of the feeds, and we-flip
Over the last few days, scrolling on Twitter has literally been 'doomscrolling' as timelines are flooded with SOS calls, stories of suffering, and death. One redeeming feature, though, has been the scores and scores of volunteers who have been collating information, then trying to verify and amplify these sources. Some characteristics became more apparent to me as I put myself in the shoes of someone navigating this environment. These are general qualities of social media feeds/timelines; they are just more pronounced in a low-knowledge/high-uncertainty information environment.
Without Context: Disparate pieces of information need to be pieced together. But you don't know how one fits in with another. A broader aspect of this phenomenon is called Context-Collapse.
Evolving: As more information comes through, individual pieces of content are consolidated into threads or lists, then shared documents with all combinations of nesting possible.
Flattened: Amid so many SOS calls, queries tagging different people that carry very different levels of urgency (how do I get from A to B for work during a curfew v/s which hospital has beds or oxygen supply) get clubbed and consumed together.
Latency: How long has it been since this particular piece of information was put together? Is it still relevant? Is it still accurate?
Integrity: Just because it says 'verified' in some form, typically in caps, as a hashtag or both, or says that they heard it from a doctor - does that mean it is true?
Provenance: Where has a particular piece of content come from? Yes, you can see who posted it, but where did they get it from? How many people have seen it before you? How many have acted on it? What happened when they acted on it?
In other words, WE-FLIP(out?).
Given that social media feeds (because we are specifically talking about those here) typically feel like a low-trust environment, one that is most vulnerable during crises, what keeps these efforts going?
These volunteer-driven information flows, however, remain fragile. And as more time passes, it becomes harder to ensure they are updated and accurate.
Ideally, you would want 'official' or 'authoritative' sources conveying this information, but these hierarchical models of communication just don't work in the instant information environment we have today. Even daily update frequencies on bed/ICU/oxygen availability may not be good enough.
Some questions this situation brings up (to think about in terms of consequences, not necessarily information):
At what point would a low signal-to-noise ratio invert the utility they offer?
Are they always a net benefit no matter how noisy the information streams get?
How much of this is a result of pushing out the local from our information diets?
As a quote in ThePrint's story about these volunteers notes:
Pandey said the problem is that people tag PM Narendra Modi or their respective states’ chief ministers for help, as most of them don’t know who their local administrators are.
“We need to hold the local officers and politicians accountable for the crisis. They are also more accessible. Most of the time, all my volunteers need to do is to call them,” she added.
A parting thought on the integrity aspect:
A study published in Nature titled 'Assessing the risks of ‘infodemics’ in response to COVID-19 epidemics' attempted to represent how people can be exposed to misleading information diagrammatically.
Human (circles) and non-human (squares) accounts participate in the spread of news across a social network. Some users (A and B) create unreliable content, such as false or untrustworthy news or unsupported claims, while others (C) create content informed by reliable sources. When the topic attracts worldwide attention as in the case of COVID-19, the volume of information circulating makes it difficult to orientate oneself and to identify reliable sources. Indeed, some users (D) might be exposed to unreliable information only, while others (E and F) might receive contradictory information and become uncertain as to what information to trust. This is exacerbated when multiple spreading processes co-occur, and some users might be exposed multiple times to the same content or to different contents generated by distinct accounts.
Har(dly)vard(th) Fact checking?
Ok, this heading is a stretch and is meant to be a play on whether certain Harvard studies are hardly worth the effort of fact-checking. Have you guessed where I am going with this?
In the first few weeks of April, there were reports of two studies about the UP government, one from Harvard and the other from Johns Hopkins University. The Harvard study supposedly praised its handling of the migrant crisis, while the Johns Hopkins University study was portrayed as claiming that the state government was among the best managers of pandemic responses around the world.
Attempts to fact-check these claims involved a lot of hard work (not Harvard) and went into excruciating detail (I've only selected one example for each):
The Harvard Study [Altnews]
The report seems to have been published by a Gurgaon-based affiliate that should not have used Harvard's logo.
It also does not say what the headlines claimed. The following quote is attributed to IFC's honorary chairman:
“Contrary to media reports our study doesn’t conclude UP government handled the migrant crisis more effectively than other states. The document is not a comparative statement on the handling of the crisis by different states. It is documentation pertaining to the effort of the Uttar Pradesh government and extracting insights from the same.”
On the larger question, what a 'Harvard Study' even is:
Smriti Iyer, an alumnus of Harvard Kennedy School, told Alt News, “To the best of my knowledge there is no such thing as a ‘Harvard study’. There are studies by institutions/ professors/ centres associated or housed at Harvard. In most cases, it’s made clear that the study is attributable to the authors and not the university. In order to create legitimacy, they are often referred to as Harvard studies, which is misleading.”
The Johns Hopkins Study [Boomlive]
It is not a comparative study.
It was prepared by faculty members from Johns Hopkins School of Public Health and officers from the UP Government.
Attributed to Brandon Howard (not Harvard):
In a statement to BOOM, Howard stated that the case study covered activities in UP from January 30, 2020, to January 15, 2021, and aimed to document the range of actions taken in Uttar Pradesh in response to COVID-19, and to identify lessons for how to respond in resource constrained settings.
None of this is new. It would also be a stretch to label this as anything but motivated. Fact-checkers have tirelessly debunked several such items in the past, yet they keep coming. These items also have a specific audience in mind.
Note: The paragraphs that follow are not meant to imply that fact-checking is ineffective. They are to examine the gap between fact-checking being necessary and it being both necessary and sufficient. It certainly has signalling benefits. It can also stem the flow of people into influence circles that rely on motivated information. It can even arm community-led corrective efforts with information - how these communities choose to intervene is a different matter.
As Amber Sinha wrote in Networked Public: How Social Media Changed Democracy
First, fact-checking is based on the belief that when informed of the ‘fakeness’ of a political issue, people will change their opinions about it. Even more fundamentally, it assumes that online discussions are a form of a deliberative process where people engage in informing, convincing and debating with others. Neither of these may be true of online consumption and dissemination of news.
In You Are Here, Whitney Phillips and Ryan Milner list three effects that play a role in what they define as the fact-check fallacy:
But the principle is straightforward enough: if the truth doesn’t matter to the person speaking, then facts won’t work as counterarguments.
Illusory Truth Effect: “that repeated claims seem more true than new claims”.
Continued Influence Effect: “that belief in misinformation can persist even when countered with clear corrections” after a coherent causal explanation is established.
Boomerang Effect: “when a person mistrusts the source of a fact check, and as a result, comes away from the correction more convinced of the falsehood than before”.
Then, there's also the often-quoted 'backfire' effect, which implies that people double down on their pre-existing views or beliefs when faced with contradictory information.
Rasmus Kleis Nielsen put out a very informative post in August 2020, with links to supporting research.
TLDR: Subsequent research has improved our understanding of this phenomenon. It has plenty of nuance and should not be taken as a given, i.e. not every ideologically rooted person will react to information that contradicts their worldview by doubling down.
One thing you'll notice, though, is that this is underexplored in the Indian context. We can assume some of these findings will carry through globally, but that ignores a lot of local context.
A draft paper by Sumitra Badrinathan titled 'Educative Interventions to Combat Misinformation' studied the effects of a single in-person media literacy training session on ~1200 people in semi-urban Bihar. One of the things the study uncovered was that the ability of participants who identified as pro-BJP to determine the accuracy of anti-BJP stories actually worsened. Now, let's be clear: this is far from saying that all BJP supporters will exhibit this behaviour, or that supporters of other political parties wouldn't behave the same way in a different context. What it does seem to indicate is that:
These findings point to the resilience of misinformation in India and the presence of motivated reasoning in a traditionally non-ideological party system.
To round this out, these false items seek to create the Illusory Truth and Continued Influence effects. In a section of the audience, they also result in a Boomerang effect of sorts (just look at responses to fact-checkers on social media, even if you concede that a number of these responses are themselves motivated attempts to create the first two effects).