Of shrinking roads from fear to hate, effects of 'Votes'App groups, and national security by platform
The Information Ecologist 55
What is this? Surprise, surprise. This publication no longer goes by the name MisDisMal-Information. After 52 editions (and the 52nd edition, which was centred around the theme of expanding beyond the true/false frame), it felt like it was time the name reflected that vision, too.
The Information Ecologist (About page) will try to explore the dynamics of the Information Ecosystem (mainly) from the perspective of Technology Policy in India (but not always). It will look at topics and themes such as misinformation, disinformation, performative politics, professional incentives, speech and more.
Welcome to The Information Ecologist 55
Yes, I’ve addressed this image in the new about page.
In this edition:
The ‘Dharam Sansad’ and the shrinking road from fear speech to hate speech
A field experiment during the Tamil Nadu legislative assembly elections that attempts to understand the effect of WhatsApp groups on participants
National Security roles for DCN firms
The shrinking road from fear to hate
I can’t say I was too surprised when @zoo_bear posted a series of videos from Haridwar’s “Dharam Sansad” containing horrible, despicable hate speech and calls for violence against Muslims. Shocked? Yes. Alarmed? Yes. But not surprised. And, if you’re reading this, you probably aren’t either (because you’ve self-selected into subscribing). Because this hasn’t happened overnight. In my opinion, we’re likely living through one of the largest known periods of mass radicalisation (in terms of absolute numbers). A complete deconstruction is outside my area of expertise.1
Since the thread, several things have happened (simplistic representation below):
Thread → Outrage + Amplification (why no police action? why isn’t the media covering this? why aren’t politicians condemning this? etc.) → A counter-thread that posted edited/out-of-context video snippets (I will not link to it) → Press coverage (both domestic and international) → Police finally lodged FIRs [as I am tracking here] (FWIW, this appears to be selective) → There are going to be more such events [Alishan Jafri - Twitter].
What makes this so complicated is that the proverbial sunlight (of ‘sunlight is the best disinfectant’ fame) appears to be feeding into a system that is emboldening (or worse, to an extent, rewarding) them. I got a similar sense when I read some of the statements/quotes in these important stories by Kunal Purohit [Article14] (on groups targeting Munawar Faruqui’s comedy shows) and Pavneet Singh Chadha [IndianExpress] (on groups disrupting Friday namaaz in Gurgaon). I am not suggesting that we shouldn’t cover these stories. But that we need to collectively figure out how to tell/hear/share them without rewarding the people perpetrating harms in the first place. Especially given that we’re living in times where they are more likely to end up with fame (rather than infamy) among the target groups they are performing for. It is also worth pointing out that they will do this regardless (through social media and various grassroots activities) and that a more significant share of the reward likely comes from favourable coverage - and not from calling it out. There also isn’t a single formula that will apply. E.g. the response to a local political aspirant should be different from that to a sitting Member of Parliament. One deterrent could be (non-arbitrary) law enforcement action sending clear signals. However, the ‘non-arbitrary’ bit is unlikely to be a reality, even if the degrees may vary under different political parties/elected representatives and the manner in which they exercise control over law enforcement agencies. Like I said, complicated. But this isn’t the point I am trying to address in this section.
It is instructive to look at a February 2021 paper by Punyajoy Saha, Binny Mathew, Kiran Garimella and Animesh Mukherjee on fear speech in public WhatsApp groups in India (the sub-heading of this section is a play on the title of the paper) [arXiv]. Quoting the main insights (I’ve broken them up into bullet points):
We observed that the fear speech messages have a higher spread and larger lifetime when compared to non fear speech messages.
The fear speech messages talk about topics such as ‘aggression’, ‘crime’, ‘hate’, ‘fighting’ and ‘negative emotions’ in general. Using topic modeling, we found that there are concerted narratives which drive fear speech, focused on already debunked conspiracy theories showcasing Muslims to be criminals and Hindus to be victims.
We showcased the prevalence and use of various emojis to emphasize the several aspects of the message and dehumanize Muslims.
Finally, when compared to hate speech, fear speech is found to be significantly less toxic.
We then looked at users who posted fear speech messages and found that these users are popular and occupy central positions in the network, which in part explains the popularity of the fear speech content, allowing them to disseminate such messages much more easily.
Using a survey of these users, we show that fear speech users are more likely to believe and share fear speech related statements and significantly believe or support in anti-Muslim issues.
As always, I recommend reading the paper rather than relying solely on my interpretation of it.
But, what is fear speech? (emphasis added)
due to the strict laws punishing hate speech in India, many users refrain from a direct call for violence on social media, and instead prefer subtle ways of inciting the readers against a particular community. According to Buyse [15], this kind of speech is categorized as ‘fear speech’, which is defined as “an expression aimed at instilling (existential) fear of a target (ethnic or religious) group”.
And how does it differ from hate speech?
In my mind, it appears to be ‘malign creativity’ (See 47:En-Gendering Disinformation) or leaving the hateful part either unsaid or implicitly suggested.
We didn’t do enough to stem the tide in its ‘malign creativity phase’ (I would argue that malign creativity is still strategically deployed, but more explicit messaging is also required as mass-radicalisation accelerates). I wonder how far past the point-of-no-return we are. And whether the ‘shrinking road’ between fear and hate is now just a series of overlapping paths.
Ad break (no private information was used in targeting this ad)
If you are interested in the topics I write about, then you may want to look at Takshashila’s 12-week public policy courses.
Effects of Party Whats(votes?)App Groups
53: Participatory dysfunction was based on a paper about coordinated efforts across platforms to manipulate Twitter’s trends before the 2019 general elections in India [ACM Digital Library]. Recently, I came across a Job Market Paper by Kevin Carney about the effects of social media on voters during the Tamil Nadu legislative assembly elections in 2021. Given how integral WhatsApp has become to the campaigning process, I’m surprised we haven’t started calling it VotesApp (Sorry! It was right there).
~1500 participants (from an initial sample of ~3000) were assigned to one of three arms: a full group (a group created by a political party in which anyone could post), a party-content-only group (a group in which posts by admins were auto-forwarded and users couldn’t post, i.e. only direct party messaging), and a control group (not assigned to any group). Note that a participant didn’t necessarily need to be a supporter of the party to be assigned to its group.
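For concreteness, the three-arm design can be sketched as a simple randomisation. This is an illustrative sketch only — the arm names are mine, and the paper’s actual assignment procedure may stratify or allocate differently:

```python
# Illustrative three-arm random assignment for ~1500 participants.
# Arm names and the equal-allocation scheme are assumptions for this sketch.
import random
from collections import Counter

ARMS = ["full_group", "party_content_only", "control"]

def assign_arms(participant_ids, seed=2021):
    """Shuffle participants with a fixed seed, then deal them
    round-robin into the three arms (equal-sized allocation)."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: ARMS[i % len(ARMS)] for i, pid in enumerate(ids)}

assignment = assign_arms(range(1500))
print(Counter(assignment.values()))  # 500 participants per arm
```

Because assignment is random and independent of party support, differences in outcomes across arms can be attributed to the treatment rather than to who chose to join which group — which is what licenses the causal claims quoted below.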
My first result is that political WhatsApp groups increase knowledge about political news. Participants in full groups were better able to distinguish true from false news.
I have a (minor) gripe with this one because I think ‘knowledge about political news’ is too broad a description of what appears to have happened. But the findings are intriguing, nevertheless. Participants got better at identifying accurate headlines about the party whose WhatsApp group they were assigned to (again, not necessarily one they supported). The reduction in belief in rumours and false headlines was, apparently, not statistically significant. There also wasn’t a significant difference when it came to accurate headlines about the other party (though there was some effect). Also, there was a reduction in belief in rumours about the BJP.
My second result is that the full groups have a significant average effect on political preferences, pushing participants toward the assigned party. This effect comes mainly from participants who identified as moderate at the baseline.
Again, interesting because this implies (as the author states) that ‘backlash’/backfire effects were not strong (since group assignment was independent of party support). But does this also mean that moderates who already supported the party were pushed towards being more partisan? I wasn’t able to determine whether this was observed. The paper does mention, though, that there was no ‘increase’ in affective polarisation. Again, I’m not sure whether the baseline levels themselves would be considered high or low.
Also, while there was some self-reported increase in the likelihood of voting for the randomly assigned party, the effect was not significant enough to be read as influencing voting decisions.
My third result is that horizontal communication between group members is key to the groups’ treatment effects. Across all main outcomes, the treatment effects of party messaging alone are consistently smaller and less significant than those of the full groups. Party messaging alone has no significant effect on knowledge or political preferences.
This highlights the difference that the structure of a communication network can make, i.e. top-down vs participatory. Despite a higher volume of messages, full groups had a higher likelihood of messages being viewed and higher self-reported time spent.
National Security by Digital Communication Networks?
In August 2021, the actions that the likes of Facebook and Twitter were about to take (or not take, or had not taken in the months/years before) in the aftermath of the Taliban’s takeover of Afghanistan were the subject of intense scrutiny and debate. This was yet another reminder of how entangled DCN firms are in decisions with significant geopolitical implications, as well as for the national security of individual states.
Literature on the role of DCNs primarily invokes the lenses of competition, privacy and speech. However, a recent paper, ‘National Security by Platform’ by Elena Chachko, proposes a framework for analysing their role in the privatisation of national security functions.
Before going into the framework, there are some key points the paper makes which are worth considering upfront. I’ve paraphrased my interpretation here:
Ad hoc developments: The growing role of DCN firms in geopolitics and national security wasn’t the product of a deliberate, consensus-building exercise. Instead, it grew out of ad hoc, piecemeal and incremental steps in response to significant events such as terrorist attacks, concerns over election integrity, etc.
Contradiction with Competition: While competitive markets envisage many private firms taking part, a market with a limited set of large-scale operators is better suited for cooperation with the national security apparatus and rapid, uniform responses/actions.
DCN capabilities and intent: Are DCN firms capable of meeting national security challenges, and are they likely to prioritise addressing them over profits?
Chachko makes the following points about the relationships between DCN firms and governments (I’ve separated them into bullet points for better readability):
… Involve threat analysis and policy development cooperation, information sharing, and platforms replicating government practices and methods.
A mutually beneficial, at times even symbiotic, relationship has emerged between platforms and government agencies in addressing certain important national security and geopolitical challenges.
On other fronts, however, platforms and government have clashed.
Chachko characterises these trends as forms of ‘indirect, informal national security privatization’ and proposes the following categories:
A. Hard Structural Constraints
There can be institutional or constitutional limitations/constraints on state actors. E.g. state actors likely have neither the capability to detect/respond to sophisticated disinformation operations nor the authority to control what information can or cannot be posted/shared in other jurisdictions. DCN firms, on the other hand, exercise more control over these spaces (at least the ones they operate) and have the tools/capabilities/expertise to understand these threats better than state actors. Thus creating a need for state actors to rely on private actors.
B. Bureaucratic Workarounds
Even in the absence of ‘hard constraints’, state actors may choose to rely on or cooperate with DCN firms to work around legal/administrative requirements and/or political opposition, speed up response times, limit the visibility of their own role, etc.
Both A and B require varying degrees of cooperation between state actors and DCN firms. They are also not mutually exclusive.
C. Platforms as Substitutes
In cases of inaction by state actors, or when DCN firms’ desired/preferred actions are at odds with government policy prescriptions/direction, the firms may resort to acting unilaterally, essentially substituting for state actors.
And while the privatisation of national security, both formal and informal, is not unique to platforms, Chachko argues that the change in scope is significant:
The breadth of security and geopolitical policy and execution discretion that platforms currently exercise is striking. Questions such as what to do about genocide in Myanmar, what kinds of coordinated behavior constitute security threats and require enforcement, what foreign government blowback might ensue following such enforcement, what is necessary to secure the Indian election and protect its integrity, how to respond to Turkish demands to silence opposition, or what constitutes credible information about COVID-19 are complex and open-ended. They require far broader and more diverse expertise and greater exercise of policy discretion than identifying individual terrorism suspects or monitoring violent groups, finding breaches of computer systems, exposing zero-day vulnerabilities, or even attributing computer breaches to perpetrators.