What is this? This newsletter aims to track information disorder largely from an Indian perspective. It will also look at some global campaigns and research.
What is this not? A fact-check newsletter. There are organisations like Altnews, Boomlive, etc., that already do some great work. It may feature some of their fact-checks periodically.
Welcome to Edition 34 of MisDisMal-Information
Hex, Big Lies and Consequential Facts
Warning: This portion contains some spoilers for Disney's WandaVision. After the events of Avengers: Endgame, Wanda Maximoff, unable to come to terms with the death of her romantic partner, The Vision, places an entire town under a spell and resurrects a version of him. I'm less interested in the main plot for the point I am getting to, so let's shift focus to the side characters: regular people who, against their will, were cast into alternate roles such as boss, coworker, neighbour, delivery person, etc.
Another character calls this entire setup a 'Hex' - an obvious witchcraft pun, since Maximoff's character also goes by the moniker The Scarlet Witch. Now, here's where my interest in the Hex comes from. For all the 'regular people' inside it, there were two simultaneous versions of the truth: their real lives and the ones the Hex had thrust upon them.
Wait, Prateek, the second one isn't really the truth, no?
You're right. This second truth was, in fact, fictional. But it also reminded me of a portion from an essay I read recently, [The Value of Truth] by Michael Patrick Lynch (emphasis added):
... imagine that during a football game, a player runs into the stands but declares, in the face of reality and instant replay, that he nonetheless scored a touchdown. If he persists, he’d normally be ignored, or even penalized. But if he—or his team—hold some power (perhaps he owns the field), then he may be able to compel the game to continue as if his lie were true. And if the game continues, then his lie will have succeeded—even if most people (even his own fans) don’t “really” believe he was in bounds. That’s because the lie functions not just to deceive, but to show that power matters more than truth. It is a lesson that won’t be lost on anyone should the game go on. He has shown, to both teams, that the rules no longer really matter, because the liar has made people treat the lie as true.
The objective truth didn't matter in the face of the enforced truth. Now, let me back up and add some context. The essay I am referring to is trying to make the case that philosophy can "contribute to our most urgent cultural questions about how we come to believe what we think we know" - in the context of information dysfunction.
Lynch describes knowledge polarisation:
Indeed, a striking feature of our current political landscape is that we disagree not just over values (which is healthy in a democracy), and not just over facts (which is inevitable), but over our very standards for determining what the facts are. Call this knowledge polarization, or polarization over who knows—which experts to trust, and what is rational and what isn’t.
The essay goes on to say that this polarisation creates mistrust, which leads to mutual scepticism. That mutual scepticism, in turn, erodes trust in institutional expertise and encourages people to dig in. It then introduces the 'Big Lies':
This brings us to the most obvious epistemic threat to democracy, one that feeds and is fed by the others: conspiracy theories and what historian Timothy D. Snyder has called Big Lies. There is often debate about whether people saying and sharing such things “really” believe them, and to what degree endorsing them is a form of partisan identity expression. But this may be the wrong question entirely.
So, what is the right question?
what we really need to understand is how big political lies turn into convictions. A conviction is an identity-reflecting commitment. It embodies the kind of person you aspire to be, the kind of group you aspire to be a part of. Convictions inspire and they inflame.
Keep the point about convictions in mind while I go off on a little side quest.
Lynch also mentions 'arrogant ideologies':
To those in the grip of arrogant ideologies, convinced that only they know and everyone else is a moron, it is unclear, at best, that just lobbing more facts at them is going to help at all—if by “help” we mean “change their minds.” To this point, we must be clear: in such cases, what matters is not changing their minds but keeping them from power.
This 'limiting the damage' frame is something Francis Fukuyama proposed in a recent blog post too. He refers to the control over discourse that social media platforms have and proposes a 'middleware' approach (software that sits between the platform and the user, and allows the latter to control the kinds of information served up by the platform. Rather than being determined by the platform’s non-transparent algorithm, the user’s feed will be customizable through the outsourcing of content curation to a competitive layer of middleware companies).
He concludes (emphasis added):
If the middleware idea were to take off, it would not solve the problem of fake news and conspiracy theories. There would be anti-vaxxer middleware providers, or perhaps a QAnon-based one that would keep users locked up in narrow filter bubbles. But the objective of public policy should not be, to repeat, to eliminate such constitutionally-protected speech. Rather, it should follow a public health model of reducing the incidence of infection. By restricting bad information to clearly labeled channels, we might be able to get to a world in which the disease can be contained. The patient may still be sick, but at least will still be alive.
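To make the middleware idea a little more concrete, here is a minimal, hypothetical sketch (in Python) of a user-selected curation layer sitting between a platform's raw feed and what the user actually sees. The Post fields, the "hyperpartisan" label and the ranking rule are illustrative assumptions of mine, not anything Fukuyama or any real provider has specified.

```python
# A minimal, hypothetical sketch of the 'middleware' idea: a filter layer the
# *user* chooses sits between the platform's raw feed and what they see.
# The labelling and ranking logic below is a stand-in, not a real provider's method.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    source_label: str  # e.g. "mainstream", "hyperpartisan", "unlabelled"

# A middleware provider is just a user-selected function that reorders or filters the feed.
Middleware = Callable[[List[Post]], List[Post]]

def label_channel_middleware(feed: List[Post]) -> List[Post]:
    """Removes nothing (the speech stays up), but pushes clearly labelled
    low-credibility channels to the bottom of the feed."""
    return sorted(feed, key=lambda p: p.source_label == "hyperpartisan")

def render_feed(platform_feed: List[Post], middleware: Middleware) -> List[Post]:
    # The platform supplies the raw feed; the user's chosen middleware curates it.
    return middleware(platform_feed)

if __name__ == "__main__":
    raw = [
        Post("a", "breaking claim!!", "hyperpartisan"),
        Post("b", "city council minutes", "mainstream"),
    ]
    for post in render_feed(raw, label_channel_middleware):
        print(post.source_label, "-", post.text)
```

The point of the design, as Fukuyama frames it, is that curation moves to a competitive layer the user picks, rather than being baked into one opaque platform algorithm; the sketch above only gestures at that separation of concerns.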
But let's stick with this point about 'arrogant ideologies'. I think it's relevant because, if we take the definition to be 'only they know and everyone else is a moron', many of us could be bucketed into the category of 'arrogant ideologues'.
In a write-up in Discourse Magazine, Daniel Rothschild seems to describe this as a 'factional narrative' (I say 'seems' because that is my interpretation of it; others may differ).
But the commitment to factional narratives as all-encompassing through-lines in our lives is new, and we should be deeply skeptical of them. These narratives are not confined just to politics but touch almost all aspects of our lives; they’re present in all our institutions. There’s no set of events or facts that cannot be placed in service of, or summarily ignored, because of a narrative.
He goes on to make the point that liberals are not really liberals anymore, and that this is leading to a 'culture war'. Of course, I am being reductive, since I am condensing a long-ish piece; it is more nuanced than that one line might imply, and I don't entirely disagree with the broader point about the need to break out of 'all-encompassing narratives'. I am zooming in on that point for a specific reason: that's often where the debate derails, due to an unequal equivalence (note: I did not say false).
That's where consequentiality needs to come in. Which facts are consequential, or which sets of lies have consequences? It's a point that Kathleen Hall Jamieson, co-founder of FactCheck.org, addressed in a recent interview with Politico (emphasis added).
But Jamieson is optimistic—chronically so, in her words—and her advice on misinformation is simple: Focus on those facts that are most consequential.
“With a lot of things, whether or not they’re factual doesn’t really affect anybody. I mean, they’re useful to know at a cocktail party, but they’re not consequential,” says Jamieson. “That takes a whole lot of worry out of my life, because most things that people worry about where there’s dispute over the fact just don’t make any difference to me.”
There’s a dividing line, says Jamieson: “When people start acting on misinformation, when they start acting on misconceptions and endanger others, now you’re in territory where suddenly that becomes a consequential fact.”
This seems to tie into the convictions aspect. The problem, however, remains in determining what may or may not be consequential. For social media platforms specifically, Susan Benesch proposes that they pay attention to how language by influential political figures is interpreted, at least to start with.
How should companies decide whether a particular drop of petrol is “actionable,” as they put it? At what point does the risk of harm outweigh the right of a political figure to speak and the right of an audience to read or hear what they want to say, wherever and however they choose to say it?
...
The words are typically equivocal, as the politicians’ readers and followers know just as well as the moderators.
...
What really matters for preventing violence is how content is understood by its audience, especially people who might commit or condone violence, as I’ve learned studying rhetoric that increases the risk of violence at the Dangerous Speech Project. Content moderation staff should focus on real-world potential impacts and consequences, not unknowable states of mind or hypothetical meanings.
...
To better determine the risk of violence, and also to demand more of those who have an extra measure of power and influence, social media platforms should hold such people accountable when the content they post is understood by their followers to call for crime.
...
A major advantage of online public discourse is that it’s quite easy and quick to discover how large numbers of people understand particular content from the way they discuss it — especially if you’re at a company in possession of the data.
...
Before billions of people became accustomed to expressing themselves in writing online, the only practical way to find out what large numbers of them thought was polling, which is slow and sometimes unreliable. But now, a company could easily identify which of its users are spreading disinformation and using language that is threatening or tends to increase fear and a sense of grievance.
Yes, this opens up many questions.
Do we really want platforms to be taking more control of public discourse?
How do you classify 'influential political figures'?
How accurate are the sentiment classification methods that platforms will have to employ? And how will they work across platforms?
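On that last question, here is a deliberately crude illustration of the kind of audience-interpretation signal Benesch is pointing to: classifying how replies read a post, rather than guessing the poster's intent. The cue phrases, categories and function names are hypothetical stand-ins of mine; a real system would need trained multilingual classifiers and human review.

```python
# A toy illustration (not any platform's actual method) of reading the audience:
# instead of divining a politician's intent, look at how the replies interpret
# the post. The keyword cues below are invented purely for illustration.
from collections import Counter
from typing import Iterable

# Hypothetical cue lists, for illustration only.
CALL_TO_HARM_CUES = ("teach them a lesson", "take matters into our own hands")
GRIEVANCE_CUES = ("we are under attack", "they are destroying", "enemies of")

def classify_reply(reply: str) -> str:
    text = reply.lower()
    if any(cue in text for cue in CALL_TO_HARM_CUES):
        return "condones/calls for harm"
    if any(cue in text for cue in GRIEVANCE_CUES):
        return "fear/grievance framing"
    return "other"

def audience_reading(replies: Iterable[str]) -> Counter:
    """Aggregate how the audience appears to understand the original post."""
    return Counter(classify_reply(r) for r in replies)

if __name__ == "__main__":
    sample = [
        "Time to teach them a lesson once and for all",
        "We are under attack and nobody cares",
        "Interesting speech, nothing new though",
    ]
    print(audience_reading(sample))
```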
So, what about those arrogant ideologues?
Ah, yes, those arrogant ideologues, who cannot see beyond their worldview and share only content that aligns with their ideologies - content they genuinely believe in.
A recent study by Gordon Pennycook, Ziv Epstein, Mohsen Mosleh, Antonio A. Arechar, Dean Eckles & David G. Rand, though, seems to indicate
that people often share misinformation because their attention is focused on factors other than accuracy—and therefore they fail to implement a strongly held preference for accurate sharing. Our results challenge the popular claim that people value partisanship over accuracy
They observed a dissociation between accuracy judgements and intentions to share, suggesting that the latter is performative and based on gaining social currency in one's networks.
Why, then, were the participants in study 1—along with millions of other American people in recent years—willing to share misinformation? In answer, we advance the inattention-based account, in which (i) people do care more about accuracy than other content dimensions, but accuracy nonetheless often has little effect on sharing, because (ii) the social media context focuses their attention on other factors such as the desire to attract and please followers/friends or to signal one’s group membership
An associated aspect - that we pay attention to content that already has relatively high levels of engagement - is demonstrated by an example in Sinan Aral's The Hype Machine [extract]. He describes an experiment in which neuroscientists at UCLA observed how people's brains (adolescents' in this case) reacted to content while scrolling through Instagram; the researchers manipulated the types of photos, the amount of engagement they received, and so on.
seeing photographs with more likes was associated with more activity in brain regions responsible for social cognition, rewards (the dopamine system), and attention (the visual cortex). When participants saw photos with more likes, they experienced greater overall brain activity, and their visual cortex lit up. When the visual cortex lights up, we are concentrating more on what we are looking at, paying more attention to it, and zooming in to look at it in greater detail.
They were able to replicate this by randomising the number of likes and the types of photos.
In short, when we see social media images with more likes, we zoom in and inspect them in greater detail. We pay more attention to online information when it is valued more highly by others. You might think, Well, the photos that get more likes are probably more interesting. But the researchers randomly assigned the likes, which means it was the likes themselves, not the photos, that were triggering the activation of the visual cortex.
And likes on one’s own photos activate the dopamine reward system.
more likes on one’s own photos activated the dopamine reward system, which controls pleasure, motivation, and Pavlovian responses. The dopamine system makes us crave rewards by stimulating feelings of joy, euphoria, and ecstasy.
Some positives, though: the results from the research by Pennycook and colleagues suggest that prompting people to think about accuracy does lead to a reduction in the sharing of false information. They tested this by DM-ing a set of Twitter users who had shared content from untrustworthy, hyperpartisan sites, and observed a shift towards sharing content from more trustworthy sites over a 24-hour period.
They also point to the tendency of social media feeds to flatten context as a reason people reflect less on accuracy.
Our results suggest that the current design of social media platforms—in which users scroll quickly through a mixture of serious news and emotionally engaging content, and receive instantaneous quantified social feedback on their sharing—may discourage people from reflecting on accuracy.
So, maybe things aren't as polarised as our social media feeds indicate?
Media-aaaaah! No, not like this!
Akshay Deshmane has an intriguing story in TheMorningContext (paywall) about legacy media organisations' efforts to lobby the government for a level playing field with digital news publishers and aggregators - not by asking for less regulation for themselves, but for more regulation for the latter. Unsurprisingly, attempts were made to link this lighter regulation to 'fake news'.
The minutes then describe what the DPIIT officials and DNPA members agreed upon as the problem to be resolved and its possible solution: “It was agreed that the lack of regulation or standards for content on such platforms was contributing to the proliferation of fake news and misinformation, and therefore necessary steps to bridge the regulatory gap need to be taken. Accordingly, MeitY and I&B may identify such regulatory gaps and take necessary action…"
I guess they don't watch TV news, where the mere existence of regulation has not done much to stem the flow of false information.
Framing that belongs in a hall of (in)fame(?) somewhere (Hat-tip to TheKen's Rohin)
(not just) Video Deepfakes
There's a good chance you've already come across the Tom Cruise deepfakes that horrified many people a few weeks ago (if you missed them, here is an article in TheGuardian featuring the person who created them).
And if you're done panicking about deepfakes (or cheapfakes), I have another set of fakes for you to worry about. OK, they're technically not so much fakes as synthetic content.
Back in January, Will Knight wrote about a project that submitted auto-generated comments to a call for public feedback [Wired]. In this particular case, the call for feedback received ~1000 responses, and half came from this project.
The project was the work of Max Weiss, a tech-savvy student at Harvard, but it received little attention at the time. Now, with AI language systems advancing rapidly, some say the government, and internet companies, need to rethink how they solicit and screen feedback to guard against deepfake text manipulation and other AI-powered interference.
The article also claims that in an experiment to differentiate between generated comments and human-written ones, volunteers 'did no better than random guessing.' *gulp*
This is particularly interesting because of the implications it could have for public feedback mechanisms. Either it can be used to manipulate such processes (which presupposes that the feedback is taken seriously to begin with), or it becomes an excuse to delegitimise genuine feedback - “It's all manipulated by bots and trolls”. It may even become just another reason to go after the idea of anonymity, or to demand more stringent mechanisms for participating in public consultations.
But I should point out that simply using templates can have a similar effect - the only difference being that identifying those identical responses will probably be easier.
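To illustrate that last point, here is a rough sketch of why near-identical template submissions are easier to flag than model-generated ones: simple similarity checks cluster them immediately, while varied synthetic text slips past such checks. The similarity threshold and the example comments are arbitrary assumptions, chosen only for illustration.

```python
# A small sketch of why template-driven campaigns are easier to flag than
# generated ones: identical (or near-identical) submissions cluster together,
# while language-model output varies enough to dodge simple duplicate checks.
from difflib import SequenceMatcher
from typing import List

def near_duplicate_groups(comments: List[str], threshold: float = 0.9) -> List[List[str]]:
    """Greedily group comments whose similarity to a group's first member exceeds the threshold."""
    groups: List[List[str]] = []
    for comment in comments:
        for group in groups:
            if SequenceMatcher(None, comment, group[0]).ratio() >= threshold:
                group.append(comment)
                break
        else:
            groups.append([comment])
    return groups

if __name__ == "__main__":
    submissions = [
        "I oppose this rule because it hurts small businesses.",
        "I oppose this rule because it hurts small business.",   # template with a tiny edit
        "This proposal ignores the burden on independent shops entirely.",  # varied phrasing
    ]
    for group in near_duplicate_groups(submissions):
        print(len(group), "->", group[0])
```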
Aside: See IFF’s post on their Tandav RTI.