Networked Misinformation, Between Hate and a Hard Place, En-gendering disinformation
MisDisMal-Information Edition 47
What is this? MisDisMal-Information (Misinformation, Disinformation and Malinformation) aims to track information disorder and the information ecosystem largely from an Indian perspective. It will also look at some global campaigns and research.
What this is not? A fact-check newsletter. There are organisations like Altnews, Boomlive, etc., who already do some great work. It may feature some of their fact-checks periodically.
Welcome to Edition 47 of MisDisMal-Information
In this edition:
Networked Misinformation and Content Cartels
Between hate and a hard place
En-gendering disinformation
Networked Misinformation and Content Cartels
As Renee DiResta (2021) writes in The Atlantic, “Misinformation is networked; content moderation is not.” In this fragmented moderation landscape, content whose spread is limited on one platform can have wide reach on another.
I don’t usually start with a quote, but this one seemed apt. It is from a recent paper that analysed how Donald Trump’s tweets that were flagged/labelled [visual indicator only] or restricted [restricted forms of engagement such as viewing, retweeting, replying, etc.] spread across Twitter, and how the same messages spread across Facebook, Instagram and Reddit (HKS Misinformation Review - Zeve Sanderson, Megan A. Brown, Richard Bonneau, Jonathan Nagler and Joshua A. Tucker).
Some key findings from the paper:
On Twitter: Tweets that were labelled spread further than those that were neither labelled nor restricted.
On other networks: In general, for posts containing the same ‘messages’, those that were restricted on Twitter spread further than those that were labelled or not labelled. But there are some subtleties to highlight:
Facebook: Messages with/without labels had a similar “average number of posts on public Facebook pages and groups”. Messages that were restricted had “a higher average number of posts, were posted to pages with a higher average number of page subscribers, and received a higher average total number of engagements.”
Instagram: On the average number of posts, the pattern was similar to Facebook. However, with engagement, there was a difference in that “posts with a hard intervention received the fewest engagements, while posts with no interventions received the most engagements.”
Reddit: Reddit doesn’t report engagement numbers in the same way as other platforms, so researchers had to use subreddit size (users) and frequency of posts: “messages that received a hard intervention on Twitter were posted more frequently and on pages with over five times as many followers as pages in which the other two message types were posted.”
The authors are careful to point out that these results don’t suggest that the “Streisand effect” is in action since the nature of the messages themselves could have played a part.
In conclusion, they say:
Here, we show how content moderation policies on one platform may fail to contain the spread of misinformation when not equally enforced on other platforms. When considering moderation policies, both technologists and public officials should understand that moderation decisions currently operate in a fractured online information environment structured by private platforms with divergent community standards and enforcement protocols.
I think this is a crucial point. And recognising this ecosystemic nature of mis/disinformation was one of the reasons we proposed the term Digital Communication Networks (DCNs) with 3 components: capabilities, operators and networks. In fact, the networked nature of the information ecosystem means there are implications beyond just mis/disinformation (also something we highlighted in the paper). It is also important to move away from the binary of treating users as either passive consumers of information and narratives or active disinformers, and from the assumption that a limited set of actors exercises control over the information ecosystem. For this last bit, I find Kate Starbird’s Participatory Disinformation model pretty useful (I wrote about it in 39: Of Polarisation, propaganda, backfire and participatory disinformation) because it identifies the presence of closed feedback loops, varying incentives and challenges with control. Note: challenges with control don’t necessarily make these dynamics fragile.
But returning to the idea of misinformation being networked and siloed content moderation not being an adequate response: we are likely to see regulatory forces pushing DCN operators towards ‘more cooperation’. At this point, we’d do well to recap some of the costs of ‘Content Cartels’ that Evelyn Douek wrote about (Knight First Amendment Institute):
Compounding accountability deficits
Creating a false patina of legitimacy
Augmenting the power of the powerful
These are all sub-heads from the essay but pretty self-explanatory, so I won’t elaborate.
The GIFCT [Global Internet Forum to Counter Terrorism] represents an interesting case study (Emma Llansó explains this very well on an episode of the Lawfare podcast). Also worth reading is this Erin Saltman interview with Issie Lapowsky on the GIFCT’s struggles to expand the definition of ‘violent extremism’ (The Protocol)
Aside: Incidentally, the 177-page report referred to in that article included the BJP as an example of a Level 1 (Fringe Group) engaged in non-violent extremism (see page 51 for the framework and page 62 for the table). See for yourself who actually seems to have covered this story (Google Dork Link). Spoiler: not a very long list.
Between hate and a hard place
Speaking of politics in India, let’s go back to Renée DiResta’s article referenced in the first quote (written in the context of right-wing election fraud narratives in the U.S.):
Misinformation has entered its industrial era. It has always existed, but now it is girded by structures, moves via clear pathways, and can be redirected at new targets. It is no longer the province of conspiracist novitiates and social-media amateurs. Yesterday’s “election fraud” is today’s “dangerous vaccines.” The dynamic is predictable, but seemingly unpreventable.
Earlier this month, as news broke that Taliban forces had gained control of Kabul, the term ‘Indian Muslims’ was trending on Twitter in India (this phenomenon is neither new nor isolated, as we saw in 17 - Hate is trending, trends are not).
I extracted ~22K tweets containing this term using Twitter’s API. The image below includes the tweets that were retweeted the most (with handles removed). Apart from the first tweet, which drew attention to this very phenomenon, the others targeted Muslims and, to some extent, opposition voices in India.
No surprises here, to be honest.
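For those curious about the mechanics, here is a minimal sketch (not my actual pipeline) of how such an extraction can be done with the Tweepy library against Twitter’s v2 recent-search endpoint. The bearer token, query string and page limit are placeholders, and recent search only covers roughly the last seven days of tweets.

```python
# Minimal sketch: pull recent tweets for a search term via Twitter's v2 API
# using Tweepy. The bearer token, query and page limit are placeholders.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder token

tweets, users = [], {}
# ~220 pages x 100 tweets ≈ 22K tweets; recent search covers only ~7 days
for page in tweepy.Paginator(
    client.search_recent_tweets,
    query='"Indian Muslims" -is:retweet',  # illustrative query
    tweet_fields=["created_at", "public_metrics", "author_id"],
    expansions=["author_id"],
    user_fields=["created_at", "username"],
    max_results=100,
    limit=220,
):
    tweets.extend(page.data or [])
    for user in page.includes.get("users", []):
        users[user.id] = user  # author profiles, incl. account creation dates

# Most-retweeted tweets in the sample
top = sorted(tweets, key=lambda t: t.public_metrics["retweet_count"], reverse=True)[:10]
for t in top:
    print(t.public_metrics["retweet_count"], users[t.author_id].username)
```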
I also looked at account creation dates (note: the chart goes in descending order of accounts created per day from Left to Right). Look at the first date on the bottom left. That’s a day before these tweets were extracted.
Let’s look at the top 10 dates. Five of these are from the week preceding 16th August.
I also looked into the activity specifically from accounts that were created after 9th August. And here’s where I noticed something disturbing. The most active accounts were split between those that posted Islamophobic content and those that engaged with trends supporting Imran Khan and/or PTI.
The account with the most tweets
2nd most tweets
4th highest number of tweets (account with 3rd highest number of tweets no longer existed)
5th highest number of tweets
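For reference, here is a minimal sketch of the kind of grouping described above, assuming the extracted tweets have been flattened into a pandas DataFrame with one row per tweet. The file name, column names and cutoff date are illustrative, not from my actual analysis.

```python
# Minimal sketch: account-creation-date counts and most active new accounts.
# File name, column names and the cutoff date are illustrative placeholders.
import pandas as pd

df = pd.read_csv("indian_muslims_tweets.csv")  # one row per extracted tweet
df["author_created_at"] = pd.to_datetime(df["author_created_at"], utc=True)

# Accounts created per day: de-duplicate to one row per account, then count
accounts = df.drop_duplicates(subset="author_username")
created_per_day = accounts["author_created_at"].dt.date.value_counts()
print(created_per_day.head(10))  # the "top 10" creation dates

# Tweet counts for accounts created after a cutoff date (e.g. 9th August)
cutoff = pd.Timestamp("2021-08-09", tz="UTC")
new_account_tweets = df[df["author_created_at"] > cutoff]
print(new_account_tweets["author_username"].value_counts().head(5))  # most active
```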
I should make 3 things clear, though:
1) There is no way to attribute who was actually operating these accounts. There is also limited activity history to draw preliminary inferences from since the accounts are relatively new.
2) Saying that an account is active (i.e. posting content) is very different from saying that such content received engagement, or that it actually had any impact. While engagement, as defined by Twitter in terms of Likes and Retweets, is still measurable, impact is not.
3) Despite this activity, the tweets that received the most retweets were largely directed against Muslims in India using a range of tropes.
Coming back to the disturbing bit: there is a high likelihood of an ‘industrial element’ (organisation + financial incentives) to this pattern of activity. And this intermingling of domestic politics and geopolitics makes things harder for voices in the domestic opposition. I touched on this in 43: The many influences of influence.
Imagine hypothetical states A and B that have an adversarial relationship. Party X is in power in State A. Domestic opponents in State A criticise/oppose a number of Party X’s actions. State B opposes and criticises a certain subset of these (those relevant to State B). Also note that Domestic opponents’ interest in Party X is significantly higher than State B’s, unless some form of overt aggression from State B is in the picture. Thus, it is inevitable that there will be some convergence between the issues, arguments and narratives employed by Domestic opponents and State B. For simplicity, I haven’t represented internal dynamics within State B.
This presents 2 challenges for Domestic opponents in State A:
Avoid being co-opted/misused by State B operatives.
Avoid being characterised as State B agents or ‘speaking the same language’ by Party X and its allies.
At a time when we’re seeing (once again) increasing instances of “sachet-ised communal violence” (borrowing this framing from a podcast I heard a few years ago, though I can’t remember which one), this places on those speaking out against it the burden of, at the very least, ensuring the following:
Avoid getting manipulated at the input stage.
Prevent distortion/co-option after output.
Caught between hate and a hard place.
En-Gendering Disinformation
In a sobering opinion piece appearing in The Hindu on August 10th, Pooja Chaudhuri wrote about misinformation through a feminist lens (the phrase is a direct quote of the piece’s title).
The active participation of vocal women, especially from minority communities, is resisted by those who do not wish the social order to be disrupted. This isn’t to say that men are not targeted online, but the attacks faced by both sexes are vastly different. Misinformation/disinformation also targets men and women differently and unsurprisingly so, especially in India where gender disparity among Internet users is high.
A report by The Wilson Center earlier this year defined gendered disinformation:
Gendered and sexualized disinformation is a phenomenon distinct from broad-based gendered abuse and should be defined as such to allow social media platforms to develop effective responses. The research team defines it as “a subset of online gendered abuse that uses false or misleading gender and sex-based narratives against women, often with some degree of coordination, aimed at deterring women from participating in the public sphere. It combines three defining characteristics of online disinformation: falsity, malign intent, and coordination.”
The intersectional challenge is something both highlight.
Pooja Chaudhuri:
But misinformation like other forms of abuse has intersectional challenges. While actor Swara Bhaskar receives some of the most sexist troll attacks, activist Safoora Zargar is targeted for being a woman as well as a Muslim. After her arrest for participating in protests against the Citizenship (Amendment) Act, pornographic videos were shared in Ms. Zargar’s name on social media. Organised disinformation and sexism intersect with Islamophobia, casteism, religious bigotry and other forms of discrimination to threaten vocal women from minority communities.
The Wilson Center report (authored by Nina Jankowicz, Jillian Hunchak, Alexandra Pavliuc, Celia Davies, Shannon Pierson and Zoë Kaufmann):
Over half of the research subjects were targeted with gendered or sexualized disinformation narratives, with women of color subjected to compounded, intersectional narratives also targeting their race or ethnicity.
Related: This SunoIndia podcast about the disturbing service that ‘auctioned’ Muslim women is an important listen.
Maria Giovanna Sessa, in an analysis of misogyny and misinformation related to COVID-19, listed this 👇 as one of the main takeaways:
Misogynistic narratives tend to produce either a negative representation of women as enemies and opponents in public debate or a pitiful depiction of women as victims, often in order to push a social or political agenda.
Let’s also look at the perpetrators.
The Wilson Center report highlighted the use of ‘malign creativity’:
the use of coded language, iterative, context-based visual and textual memes, and other tactics to avoid detection.
For example, the word “bitch” may be represented using spaces or special characters. This makes these abusive terms and the narratives they support difficult to detect automatically. Furthermore, in the context of specialized narratives or nicknames, such as “Stretchin’ Gretchen” or “HeelsUpHarris,” a human content moderator may lack the context to understand and take action against abusive content when faced with a one-off, target-generated report about one of these coded narratives.
E.g. 2 of the 5 most active accounts, created just a few days before the hashtag ArrestSwaraBhasker was trending (if you’re feeling a sense of déjà vu, then yeah, it happens quite often; see Edition 7), also posted a hashtag (in Hindi) that can be considered an example of malign creativity.
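To make the detection problem concrete, here is a minimal sketch of why a naive keyword filter misses such coded spellings, and one common counter-measure: normalising away spaces, punctuation and look-alike characters before matching a lexicon. The lexicon and substitution map are illustrative placeholders, and this approach still cannot catch context-dependent nicknames of the “Stretchin’ Gretchen” variety.

```python
# Minimal sketch: naive keyword matching vs. matching after normalisation.
# LEXICON and SUBSTITUTIONS are illustrative placeholders.
import re

LEXICON = {"slur"}  # stand-in for an abusive term
SUBSTITUTIONS = {"1": "i", "0": "o", "$": "s", "@": "a"}  # look-alike characters

def normalise(text: str) -> str:
    text = text.lower()
    text = "".join(SUBSTITUTIONS.get(ch, ch) for ch in text)
    return re.sub(r"[^a-z]", "", text)  # drop spaces, dots, special characters

def naive_match(text: str) -> bool:
    return any(term in text.lower() for term in LEXICON)

def normalised_match(text: str) -> bool:
    return any(term in normalise(text) for term in LEXICON)

coded = "s.l u r"                 # coded spelling of a lexicon term
print(naive_match(coded))         # False: the keyword filter is evaded
print(normalised_match(coded))    # True: normalisation recovers the term
# Context-based nicknames still require human/contextual review.
```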
In a Global Humanities edition on Identity and Nationhood, Amrita De writes about Masculinities in Digital India. One takeaway for me was that gendered disinformation is both a means and an end.
A policy brief titled ‘Misogyny: The Extremist Gateway?’, developed by the UNDP Oslo Governance Centre, addresses the overlaps between misogyny and violent extremism.
Related: A recent Global Disinformation Index report on ads for UK brands appearing alongside misogynistic disinformation stories observed the following false narratives:
The feminist movement as destroying men’s masculinity.
Questioning gender is against human nature.
Feminism censors, bans and legislates.
Successful women will not find partners.
Feminism is to blame for men’s “toxic” masculinity.
Feminism harms women.
Women in the military are a national security risk.