Of DisAgrimants, Pusback, De-platforming trains and WhoOp(ed)sy daisies
MisDisMal-Information Edition #10
What is this? This newsletter aims to track information disorder largely from an Indian perspective. It will also look at some global campaigns and research
What this is not? A fact-check newsletter. There are organisations like Altnews, Boomlive etc., who already do some great work. It may feature some of their fact-checks periodically
Welcome to Edition #10 of MisDisMal-Information.
Dis-Agrima-nts and Pushback
You know where I am going with this. Just in case you don't, here is a quick recap. A long long time ago, standup comic Agrima Joshua made a few jokes about the rumoured superpowers of a statue and uploaded that video to YouTube. Since then, there were laughs (at the jokes) and crickets (no, not for the jokes, but to represent the time that passed in between) until sometime last week, when someone somewhere discovered their outrage bones and ...well... outraged about it.
Ok Prateek, but isn't this like Tuesday for women and other gendered minorities on Social Media? Yes, unfortunately, it is. But just because it is normalised doesn't mean we don't try to understand what happened.
Among all the abuse, there was also one particularly worrying piece of content - a video where the creator essentially threatened her with sexual violence, among a host of other abuses.
Let's consider how this played out in different strands of the information ecosystem (sides and colours used for representation are purely coincidental, almost).
(Note: representing the different strands as singular entities does not mean to imply that all activities are coordinated - the purpose of the abstraction is to make the diagram clearer. And the boxes appearing on either side are not necessarily all that different, even though we judge them differently.
Also, it is not bad handwriting, I prefer to think of it as weak encryption)
The green rectangular blob represents the threat I mentioned above. Understandably, many people were outraged. The video was amplified along with calls to restrict his online presence as well as real-world consequences for this act.
Now, based on Edition 9, you can probably tell that I will likely disagree with the amplification aspect that 1 entails. And when this was pointed out to Swara Bhaskar, her response was (I am paraphrasing, of course) that it was important to highlight the kind of abuse prevalent on social media platforms. As someone who faces no abuse, I cannot argue with where she comes from, so we'll leave it at that today. It is important to note that this call to action does not advocate doxxing anyone: "if you know where he lives, File (sic) a police complaint". Ultimately, despite Mr. Mishra deleting the video and attempting to "clarify" his position, he was booked by the local police. Unfortunately, though, as The Wire reports, the arrest has meant that Agrima Joshua is now facing more threats.
And even while all this was unfolding, older content posted by other stand-up comics kept surfacing. In fact, this article by Shoaib Daniyal traces these events back to Kenny Sebastian and the Chinese app ban and lists others.
Again, this isn't unique or new. When Safoora Zargar was finally granted bail, hashtags accusing her of being a terrorist and working with ISIS were active.
In the subsequent weeks, when Vikas Dubey and Kenny Sebastian were in the news, they were both "helpfully" advised by Twitter users to "get pregnant" so that they could 'get relief from arrest on humanitarian grounds'. When the Tamil Nadu government decided to refer to the Tablighi Jamaat gathering as a 'single source' - that term itself was coopted and used to signal "minority pandering". Or when the 'Stop Funding Hate' movement called out OpIndia, the latter ascribed various ulterior motives to it and then actually reported an increase in voluntary contributions.
We are happy to inform you that due to this campaign by soft Islamists, we have seen over 700% jump in our daily revenues which comes in the form of voluntary payments, while ad revenues have seen no dip.
The underlying point here is that the Information Ecosystem is, well, an ecosystem. So it is essential to consider actions through the lens of consequentialism in addition to morality. I am not suggesting that there should be no pushback, but we should be aware that it will also be used to fuel more information pollution. As Whitney Phillips sums it up - "Pushback is important; it is also deeply fraught"
Aside: This ecosystem approach is something Jane Lytvynenko talks about in the latest episode of Arbiters of Truth on the Lawfare podcast too.
Aboard the de-platforming train
Writing for Quillette, Nathan Cofnas lists 3 reasons that de-platforming may not work (he said "will", I say "may" because... see previous section):
It draws attention to the person/content being banned - the Streisand effect.
A ban may reduce audience in the short term, but the audience that does stay becomes more dedicated.
Censorship fuels more conspiracy theories.
And, as is a frequent occurrence now, most of these instances are in response to moral outrages - which means that once a platform does it, others are under more pressure to follow suit. Alternatively, they come to their own judgement that it is lower risk than not taking any action.
Becca Lewis alludes to this:
WhoOp(-ed)sy daisies
Adam Rawnsley writes in the Daily Beast that a number of conservative media outlets ran opinion pieces from what turned out to be fake personas. First spotted by Marc Owen Jones, this network shared some behavioural patterns, as described in the article:
had Twitter accounts created in March or April 2020;
presented themselves as political consultants and freelance journalists mostly based in European capitals;
lied about their academic or professional credentials in phony LinkedIn accounts; used fake or stolen avatars manipulated to defeat reverse image searches;
and linked to or amplified each others’ work.
As Kate Starbird points out:
Most of these articles have now been removed, but they entered the information ecosystem at a different time. What happens to the pollution they may have created?
Meanwhile, Aviv Ovadya has been calling on those who generate synthetic image datasets to prevent their misuse.
What's App India?
A study by Kiran Garimella and Dean Eckles looked into 'Images and misinformation in political groups' on WhatsApp.
The study was based on a sample of 2500 images (2000 popular and 500 random).
They found that:
Image related information disorder is prevalent in public political WhatsApp groups.
The 3 most common categories were: out-of-context images, memes, and photoshopped (essentially synthetic) images.
Automated verification of information disorder is very hard to do.
The study also tried to determine if there were any attributes that could be used to predict whether an image "contains misinformation or not".
We hypothesize that the following category specific features would work well.
(i) Out-of-context images: Web domains. Web domains indicate the domains which are returned on a Google reverse image search for the image. Out-of-context images are typically shared by low quality domains or by domains which might have fact checked the image
(ii) Manipulated images: We used a state of the art computer vision technique, which detects pixels which might have been manipulated (Wu et al. 2019). From this, we computed the fraction of pixels in an image which could have been manipulated. The intuition was that this fraction would be higher for manipulated images. However, the technique fails in our case since a lot of images which do not fall into this category are also falsely labeled as manipulated.
(iii) Memes: text on the image. Memes contain a lot of text on them. This category mostly contains memes with false quotes and statistics. The idea was that using text related features (e.g. tf-idf vectors), we could identify such false text.
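To make the third hypothesis concrete, here is a minimal sketch of what tf-idf features over meme text look like. This is not the authors' code; the meme strings and labels below are invented for illustration, and the tf-idf computation is a bare-bones stdlib version of what a library would normally do.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute simple tf-idf vectors for a list of tokenised documents."""
    n = len(docs)
    df = Counter()  # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # term frequency scaled by inverse document frequency
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse tf-idf vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical meme texts: the first two repeat the same false quote,
# the third is unrelated.
memes = [
    "gandhi said be the change quote",
    "gandhi be the change fake statistic",
    "cute cat doing cute cat things",
]
vecs = tfidf_vectors([m.split() for m in memes])

# A meme repeating the debunked text scores closer to the known-false
# example than the unrelated caption does.
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # True
```

The idea is just that memes recycling the same false quote share vocabulary, so their tf-idf vectors cluster together - which is why the study hoped text features could flag this category.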
Oh, and since we're on the topic of memes, another study published on the same site looked into meme factories in Singapore and Malaysia and how their strategies have evolved in response to COVID-19.
Book them all
A news portal was booked for spreading fake news.
In Goa, owners of some restaurants and bakeries filed a complaint against 'unknown persons' for circulating messages that these outlets were unsafe and should be avoided.
As it turns out, someone was running an inauthentic Twitter account pretending to be Maharashtra's Cyber Cell.
In a story that evokes a strong sense of deja-vu, 3 journalists were booked for a report saying that Telangana CM KCR had COVID-19.
We're not done with the deja-vu yet: in Mizoram too, a 54-year-old man was arrested for spreading "fake news".
Mumbai Police blocked 1816 pieces of objectionable content between April and June. I wonder if we'll ever get to know: what content? What grounds were used to evaluate it? I am not holding my breath.
In an interview with the New Indian Express, Mr. Javadekar said "Fake news more dangerous than paid news". Aside: the interview also covered the topic of the draft Environment Impact Assessment report. Conveniently, some of the NGOs who mobilised around this issue found their websites blocked. See IFF's post on the subject.
Is it all academic?
Good money gone bad:
Global Disinformation Index published a report indicating that Ad Tech was funding COVID-19 information disorder sites based on an analysis of 500 sites.
Wink Wink Nudge Nudge
Poynter's Factually newsletter talks about a study which indicated that a nudge to judge the veracity of a headline before sharing content made a material difference to how frequently it was shared.
Influencer campaigns
A Carnegie study highlights the challenges of countering influence operations: they defy easy categorisation, and social media platforms' content standards are applied only to individual pieces of content rather than to the whole campaign.
Self Study
A civil rights audit that Facebook commissioned itself essentially skewered it for failing to act adequately against hate speech and disinformation. Casey Newton's major gripes against the audit are that it ignores the company's size and does not seem to call out its business model.
Not-Bad-News
Enough Prateek, you're giving me PTSD with all the bad news
Hey, I warned you all the way back in Edition 2, but fine - here are some things that cannot be considered bad news.
Mumbai Mirror's story on Clyde Crasto's effort to counter information disorder.
Dr. Netha Hussain's work against COVID-19 was featured by the UN's social media handles (No, it wasn't UNESCO)
Twitter's Terrible, Horrible, No Good, Very Bad Day
Joseph Cox covered how the wave of account takeovers that hit the Twitter accounts of Elon Musk, Bill Gates, Uber, Apple and many others was not the result of a technical vulnerability but the outcome of an employee being compromised.
What's interesting is that, with all these accounts at their disposal, the attacker chose to propagate a crypto-scam and not start WW3 (maybe this belongs in the not-bad-news section, then).
Among the steps Twitter took to mitigate this was preventing tweets that included anything resembling a hash from being published. Of course, this was overkill: many tweets were inadvertently blocked, resulting in censorship claims - just like Facebook's spam detection feature going into overdrive shortly after they announced that they would increase reliance on algorithmic content moderation.
By Twitter's own admission around 130 accounts were affected and not all of them were compromised.
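To see why a blunt "looks like a hash" filter over-blocks, here is a hypothetical sketch - emphatically not Twitter's actual rule, just an assumed stand-in - that flags any long unbroken alphanumeric string, which is roughly what Bitcoin addresses and hex digests look like.

```python
import re

# Hypothetical rule (NOT Twitter's real implementation): treat any
# unbroken alphanumeric run of 26+ characters as a scam indicator,
# since Bitcoin addresses and hex hashes have that shape.
SUSPICIOUS = re.compile(r"\b[0-9a-zA-Z]{26,}\b")

def would_block(tweet: str) -> bool:
    """Return True if the naive filter would stop this tweet."""
    return bool(SUSPICIOUS.search(tweet))

# The scam tweet with a Bitcoin-style address is caught...
print(would_block("Send BTC to 1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2"))

# ...but so is an innocent tweet quoting a git commit id.
print(would_block("Fixed in commit 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b"))
```

The second tweet is perfectly legitimate, yet it matches the same pattern - which is exactly the kind of collateral blocking that fuelled the censorship claims.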
To 350 and beyond
If you are keeping count, by next week, the Union Territory of J&K will have crossed 350 days with internet restrictions. And based on what happened in court this week, it looks like the 1 year mark will almost certainly be breached.
In June, Logically HQ published a two-part investigation on a campaign that was used to 'pink-wash' atrocities in J&K.
Meanwhile, J&K Bank filed an FIR against unscrupulous elements for circulating fake news.
Sagrika Kissu writes about the new Media Policy in J&K.