Of Amplification Mechanics, Disinformationception, WHO let the bad advice out and Asides
MisDisMal-Information 16
What is this? This newsletter aims to track information disorder, largely from an Indian perspective. It will also look at some global campaigns and research.
What is this not? A fact-check newsletter. There are organisations like AltNews, BoomLive etc. that already do some great work. It may feature some of their fact-checks periodically.
Welcome to Edition 16 of MisDisMal-Information
Hate Unlimited and Mechanics of Amplification
The Negative Positive Loop
Many years ago, I read an article that…er…spoke to me. I couldn’t find the link (thanks, Google’s algorithm, which now makes it very difficult to search for anything more than a year or two old), but the basic premise was this - our capacity to hate is infinite, our capacity to care is finite.
Sorry, didn’t mean to throw you into a gif-wall to start with (ok, maybe I did, just a little bit), but if you look around, it is tempting to believe that is the case.
I tracked 2 outrage cycles this week.
1) The book about the Delhi riots that was (launch event), then almost wasn’t (publisher backed out) and finally was again (new publisher came on board).
2) A news anchor engaging in outright dangerous (and hateful) speech.
We seem to be forgetting about 1), so I’ll come back to that. Indicative popularity of the hashtag #bloomsburyindia, using OSoMe’s Trends tool:
Let’s talk about the second case now. Said news anchor insinuated that an apparent increase in the number of Muslim citizens clearing the Union Public Service Commission (UPSC) exams was part of some nefarious plot to take over the bureaucracy. I will refrain from commenting on specific numbers since this isn’t something I track, but qualitatively the assertion seems absurd.
Anyway, he tweeted a teaser video for an upcoming show where he promised to reveal all. Understandably, people were incensed (there was also no shortage of people who agreed with him), but let’s focus on the former group first. He was called out. The mechanism that many accounts chose to use was a Quote Tweet. In some cases, they also chose to speak out against the hashtag he used and happened to include it in their tweets. You would have observed that I am tiptoeing with language here, not because I want to avoid offending anyone but because these mechanics are extremely complex - there are no defined dos and don’ts that we can happily stick to.
The next 2 images indicate how the hashtag and the tweet itself spread. Note that I used Hoaxy to generate these, so they should be considered indicative at best.
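For the curious, here is roughly the kind of structure these diffusion visualisations encode: a directed graph where an edge means “amplified”. The sketch below is mine, with invented accounts and edge labels (it does not use Hoaxy’s data or API); it only illustrates why a critic’s Quote Tweet still extends the original post’s reach.

```python
# Minimal sketch of a diffusion graph of the sort Hoaxy visualises.
# All accounts and edges below are invented for illustration.
import networkx as nx

g = nx.DiGraph()
edges = [
    ("anchor", "supporter_1", "retweet"),
    ("anchor", "supporter_2", "retweet"),
    ("anchor", "critic_1", "quote_tweet"),  # pushback still links back...
    ("critic_1", "critic_2", "retweet"),    # ...and carries the post onward
]
for src, dst, kind in edges:
    g.add_edge(src, dst, kind=kind)

# Accounts reachable from the original post through any chain of shares;
# note that critic_2 is reached only via the critic's quote tweet.
print(nx.descendants(g, "anchor"))  # {'supporter_1', 'supporter_2', 'critic_1', 'critic_2'}
```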
I did a short thread regarding this, the gist of which was that even the act of calling him out seems to be aiding its spread.
To be fair, I did see people use screenshots too, but I don’t have a way to estimate how those spread unless I look tweet by tweet. And since Twitter is not really suited for nuance, I had to shed a lot of it.
I should categorically state that at no point am I advocating not pushing back. ‘Ignore them’ is often raised as a possible course of action - but the reach of this anchor puts him far above that threshold.
Calling out such acts has the potential to send a signal about what is acceptable and what isn’t.
It creates awareness.
It can recruit more to the cause.
But, the way platforms operate, the mechanism of calling out can give the content an *algorithmic reward*, leading to further amplification.
Aside: Kate Starbird did a thread (+ post) on a Trump tweet that Twitter took action against. Why am I talking about this? When I pointed this case out to her, she replied saying that RTs of QTs gave the Trump tweet much more reach than RTs of the original did.
Since I ruled out ‘Ignore’ in this case, I attempted to map out how interactions may play out between Group A (his supporters) and Group B (those calling him out). I ended up with a bunch of feedback loops (mostly positive). Irrespective of whether enough pressure is generated on the platform, law enforcement and any private entities he may be associated with - polarisation wins. How do you tame the beast without feeding the monster? (A toy sketch of this loop follows below.)
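To make that loop concrete, here is a minimal toy model. Everything in it is an assumption of mine - the rates, the boost factor, and especially the premise that the ranking system counts supportive RTs and critical QTs as the same engagement signal. It sketches the dynamic, not any platform’s actual algorithm.

```python
# Toy model of the calling-out feedback loop. Assumption (mine): the
# ranking algorithm treats any engagement - supportive retweets or
# critical quote tweets - as one signal that boosts the post's reach.

def simulate_reach(steps, supporter_rate, critic_rate, boost=0.05):
    """Reach after `steps` ranking cycles; engagements are a fixed
    fraction of current reach, and each one nudges reach upward."""
    reach = 1_000.0  # starting audience, arbitrary units
    for _ in range(steps):
        engagements = reach * (supporter_rate + critic_rate)
        reach += boost * engagements  # the 'algorithmic reward'
    return round(reach)

print(simulate_reach(24, supporter_rate=0.10, critic_rate=0.00))  # ~1127
print(simulate_reach(24, supporter_rate=0.10, critic_rate=0.10))  # ~1270
```

With these made-up numbers, the critics’ engagement roughly doubles the growth in reach - which is precisely the bind: the pushback feeds the loop it is trying to shut down.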
I should repeat that I am not advocating against pushback. I’ll just borrow Whitney Phillips’ phrasing again - pushback is important; it is also deeply fraught.
Now, in this case, Twitter may or may not choose to take action despite multiple users claiming to have reported him. He also posted a screenshot indicating that Twitter did not find any issues with the tweets reported. Assuming it is authentic, and they don’t react to additional pressure, it looks like no action from them will follow. While people have filed multiple police reports, it remains to be seen what action will be taken. If there is any, one can be sure that Freedom of Expression will be invoked, just as it was in the context of Bloomsbury withdrawing from publishing the book about the Delhi riots - and more positive feedback loops will follow, loops that are anything but positive.
Aside again (you were warned, it is in the title, after all): An article by Samantha North, Lukasz Piwek and Adam Joinson analysed political tribalism in Twitter discussions of Brexit. In a section titled Mechanisms of Group Polarisation, they assert:
Polarization is defined as “members of the deliberating group predictably moving towards a more extreme point in the direction indicated by their predeliberation tendencies” (Sunstein, 2002, p. 176).
One especially concerning factor is the rise of affective polarization, defined as hostility towards the rival side. Affective polarization can be as equally potent for political party as for race (Iyengar & Westwood, 2015), and can spill over into discrimination and “open animus” even in nonpolitical settings.
For example, researchers have found that mutual dislike between Republicans and Democrats is primarily driven not by policy attitudes, but instead by exposure to messages that attack the outgroup and reinforce negative views (Iyengar, Sood, & Lelkes, 2012).
Their analysis also revealed:
1) Increased tribalism over time.
2) Interaction between online and real-world events.
Gulp
Regular programming resumes
And yes, he does seem to be basking in the attention, inviting people to debate him on the show. His follower base has also grown by roughly 2% since this started. Is the proverbial monster being fed?
Another aside: read this piece on right-wing ‘debate me’ culture.
P.S. As I write this, BarandBench is reporting that the Delhi HC has stayed the broadcast of the episode. Cue: workarounds and FoE paeans.
I want to come back to this concept of algorithmic reward for a bit because it has led to some interesting conversations during the week. It seems crazy to me that so much energy was spent on debating whether the act of calling out dangerous speech would inadvertently give it greater reach on the platform. Platforms need to be more explicit about what leads to algorithmic amplification and what doesn’t. I’ll even go so far as to say that there should be ways to interact with content (especially in the context of calling out hate or dangerous speech) without giving it an ‘algorithmic reward’.
Oh, I promised I’d come back to the book. There was a lot of division over whether opposing it amounted to censorship or whether it was a self-goal. These arguments make some valid points if you strip away the context. But the question is - can you?
If you use the Dangerous Speech Framework (which I referenced in Edition 11), then this thread covers it best:
This also clearly applies in said journalist’s case:
(Image Source)
Remember the image with the hashtag? Well, here’s the representation for the Devanagari hashtag he used. Limited inadvertent amplification + pushback.
Prateek, you promised me infinite hate!
Yes, yes. Don’t worry. More hate you shall have! But because I have to eventually hit send, it will be of the finite variety.
Kunal Purohit writes about the rise of Hatemongers on YouTube in India. Read this along with Rebecca Lewis’ post on how YouTube amplifies far-right content.
From 1 - trigger warning
Phatak and Mishra created their own style of content: videos of themselves in still cars opining on various matters—from feminism to militant attacks to contentious government policies. Most of them are angry abuse-filled rants in Hindi at their targets. Both Phatak and Mishra often issue calls to action—Mishra, when he issued the rape threat against Joshua, also asked his army of followers to abuse her on social media.
The response from social media has been desultory. Facebook is facing a major crisis over claims that it deliberately ignored hate speech from BJP leaders. When asked about the issues, YouTube said, “We have strict policies prohibiting harassment on YouTube, and terminate any channel that repeatedly or egregiously violates those policies, and in accordance with our strike system.” But in practice, the systems are opaque and uncertain.
From 2 (emphasis added)
… my research has indicated that users don’t always just stumble upon more and more extremist content — in fact, audiences often demand this kind of content from their preferred creators. If an already-radicalized audience asks for more radical content from a creator, and that audience is collectively paying the creator through their viewership, creators have an incentive to meet that need. Thus, the incentives of YouTube audiences and creators form a feedback loop that drives more and more extremist content.
Suzanne van Geuns and Corinne Cath-Speth talk about Cloudflare’s decision to de-platform DailyStormer and 8chan because of their hate-filled content, raising the question: what is the political process behind these content moderation decisions?
Thinking of a company like Cloudflare as an internet sheriff is especially ironic given the increasingly common comparison between the internet and public utilities. When the pandemic shifted education, work, and social life online, the need for working internet infrastructures became acute enough to make it feel like an essential good.
Archit Mehta on the link between information disorder and radicalisation.
A syndicated piece from NYT, in Indian Express (no paywall), about a monk who had to flee Cambodia after a series of targeted disinformation campaigns (emphasis and comment added):
Facebook said it had nearly tripled its human content moderators in Cambodia, although it would not say how many people worked in Khmer, the local language. From January to March, Facebook said, it took down 1.7 billion fake accounts worldwide.
But none of the tripwires appear to have been triggered in the case of the monk *(keep this in mind)*, even as his fate was front-page news in the government-controlled media. Over the years, Luon Sovath said, he has been the repeated victim of fake Facebook accounts set up in his name and reported them to the company. “I want to say to Facebook, you should help to restore and defend human rights and democracy in Cambodia,” he said.
Russell Brandom, in The Verge, on the Kenosha Guard’s Facebook page, which was reported by multiple users (2 are explicitly mentioned in the piece; it is difficult to know for sure how many others reported it unless Facebook reveals that) but not taken down by Facebook before the shooting.
We don’t have to go as far as Cambodia for examples of people being forced to relocate or go into hiding. *(h/t to Venkat Ananth for this story)* Srishti Jaswal on her experience after a post was shared out of context, which resulted in her becoming a victim of targeted harassment and threats. She also chronicles the experience of others who have been in similar situations.
Wicked Problems or just Insurmountable?
Have you hit the gif-wall yet? Don’t worry - we’re not done.
While addressing the Irish Parliament in November 2019, Áine Kerr described information disorder as a Wicked Problem.
We are in the midst of a Wicked Problem and running out of time.
A wicked problem, by its very definition, is something that’s difficult or almost impossible to solve because of incomplete, contradictory, and changing requirements. It cannot be solved through one course of action.
Our Wicked Problem today is an Information Disorder.
Content Moderation
Content Moderation at scale is hard… very hard.
Remember the Kate Starbird tweet I linked earlier? Well, that was part of a study of how Twitter’s action against a tweet by Donald Trump, which was flagged as misleading, affected its spread.
Remember, this is in the election/civic integrity domain, something that all platforms are taking very seriously in the lead-up to the American elections later this year, to the extent that they are working on contingencies in case Trump resists a transfer of power.
Facebook is mulling a kill-switch for political ads, which Nina Jankowicz argues is addressing the wrong problem.
Yet, as this tweet (and the thread) by Jason Kint points out, Twitter did take action after it hit around 8K RTs, while Facebook appeared to have taken none at all.
Judging by the screenshot, it looks like it took Twitter somewhere between 7 and 8 hours to act. In this context, 7-8 hours was probably just about ok. On the day of an election - probably not. In a tense situation, likely to lead to violence: most definitely not (a back-of-the-envelope sketch below shows why delay is so costly). There’s so much that can go wrong. Business Insider reports, in the context of Kenosha, that one of the reasons for Facebook not acting against the page in question was that the issue didn’t go to the right team.
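That back-of-the-envelope sketch: assume, purely for illustration, that a post’s hourly audience doubles every hour it stays up. None of these numbers comes from any platform; they only make the cost of delay visible.

```python
# How many views accrue before a delayed takedown, assuming (for
# illustration only) that the post's hourly audience doubles each hour.

def views_before_takedown(delay_hours, seed_views=100):
    """Cumulative views accrued before the post is removed."""
    total, audience = 0, seed_views
    for _ in range(delay_hours):
        total += audience
        audience *= 2  # assumed hourly doubling
    return total

for delay in (1, 4, 8):
    print(f"takedown after {delay}h -> ~{views_before_takedown(delay):,} views")
```

Under that assumed doubling, acting at hour 8 instead of hour 1 means roughly 255 times as many views before removal.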
Oh, and the shooter is being celebrated by some people on Facebook.
Now, if this is the case with elections and violence in the US, I am not particularly optimistic when it comes to dangerous speech in a Global South country.
Disinformationception
Alex Stamos has famously said that disinformation about disinformation is still disinformation. And Thomas Rid, in Active Measures, observed that it is possible for agencies to overstate the effects of their active measures and become victims of their own disinformation. A corollary to this would be that if you overestimate the effect disinformation has had on you, you are disinforming yourself. I guess we can use a collective term to refer to these situations: Disinformationception.
Gabrielle Lim writes about the Risks of Exaggerating Foreign Influence Operations.
Shape-shifting
Shayan Sardarizadeh with a thread about how QAnon has been affected since Facebook took action.
Brandy Zadrozny and Ben Collins on a number of ‘Save Our Children’ rallies in the US as a means to ‘amplify and co-opt’ and how local news channels seemed to be under-prepared to report on such movements. Something Zarine Kharazian was tracking.
For OneZero, Will Oremus on Why Facebook can’t crush QAnon.
And to make the problem even more wicked, Reuters claims that there is ‘small but growing Russian support for QAnon conspiracies’.
Given the shape-shifting nature of disinformation actors, it seems like cross-platform norms/agreements/standards are the way to go. But Emma Llanso writes in Techdirt that cross-platform collaboration shouldn’t be a backdoor to censorship.
Meanwhile in India
Union Minister for Information and Broadcasting Prakash Javadekar stressed the importance of media literacy to ‘combat fake news’. He also pointed out that there are now fact-check units in every state.
In Kerala, the state government’s decision to expand its fact-checking initiatives beyond COVID-19 related news to all categories is being opposed.
Himanshi Dahiya chronicles Madhu Kishwar’s repeated trysts with Information Disorder.
Zee News reports that the ISI is using Khalistani groups to run disinformation campaigns against India. DailyO has a post calling on India to give Pakistan ‘a taste of its own medicine’ via Information Warfare. At this point, I want to remind you that in Active Measures, Thomas Rid insisted that “It is impossible to excel at disinformation and at democracy at the same time.”
Around the world
WHO let the bad advice out?
Financial Times points out missteps at The Lancet, The New England Journal and WHO with regard to COVID-19 misinformation.
If there is to be a postmortem of the pandemic information ecosystem, conspiracy theorists and snake-oil salesmen should not be the only targets for criticism. A real cure for the “infodemic” would include some honest introspection from gatekeepers as well.
Bellingcat’s summary of a CCP meme-inspired attack on Tedros Adhanom Ghebreyesus.
Let’s say China
(Search for that term on YouTube, you won’t be disappointed)
Graphika’s report on Chinese disinformation in Taiwan, which concludes that the latter is a successful model of mobilisation against disinformation.
Aside (ha!): You really should check out Audrey Tang’s (Taiwan’s Digital Minister) pinned tweet.
Paul Mozur covers police action against activists and dissidents after the passage of the National Security Law in Hong Kong.
Keep-it-on
Is there any scope for telecom companies to challenge internet shutdown orders? David Sullivan, in Lawfare, lists 5 ways for them to do so. I am a bit skeptical, to be honest. At least in India, it doesn’t seem like there is any will or stomach to fight them.
When will Bangladesh restore internet connectivity to Rohingya refugees in camps? ‘Very Soon’ if this article is to be believed.
Also…
One of those rare instances (interview with the executive director of the Association of Independent Press of Moldova) where someone acknowledges that the threat of domestic disinformation outweighs that from foreign sources.
Tomiwa Ilori makes the case that Content Moderation in African countries is even harder.
Given that authoritarian governments are on the rise in Africa, platforms might also have to deal with even more state-sponsored coordinated attacks like we have already seen in Nigeria and Zimbabwe. While those sorts of attacks are happening around the world, they pose extra threat here given the already shrinking civic space in Africa, as governments tend to combine these bad laws and authoritarian practices to pressure platforms.
Trade deals or not, New Zealand appears to be importing Brexit variety populism.
Facebook is legally challenging an order by Thailand’s government that asked it to block a group critical of the monarchy.
A report by T. Colley, F. Granelli and J. Althuis concluded that conservatives in the UK were more liberal with their employment of disinformation.
Some solutions…sort of
On harmful speech: Susan Benesch’s proposals for improved regulation of harmful online content.
Samantha North on 3 ways to spot coordinated inauthentic behaviour online.
Ray Serrato on the tools he uses for OSINT work.
Amy Yee on identifying reliable information, especially for ‘seniors’.
Study Corner
Cory Doctorow’s open access book “How to Destroy Surveillance Capitalism”.
This is over a year old, but I stumbled on it recently - Gabrielle Lim has painstakingly put together an annotated bibliography for disinformation.
Rita Singh writes about the role of the human voice in the communication of digital disinformation.
ASPI on vaccine politics and disinformation.
Disinformation on disinformation is still disinformation. In effect, 2 wrongs don't make a right. Savvy.