What is this? This newsletter aims to track information disorder, largely from an Indian perspective. It will also look at some global campaigns and research.
What this is not? A fact-check newsletter. There are organisations like Altnews, Boomlive, etc. who already do some great work. It may feature some of their fact-checks periodically.
Welcome to Edition #5 of MisDisMal-Information.
Achievement unlocked: misinformation via an Information Disorder newsletter. An ErRoR iN EdiTioN 4
Apparently, people type like this. Which is why, in the YouTubers v/s TikTokers section of Edition 4, the percentages of potentially Islamophobic and anti-China tweets were higher than the 3.5% each that I reported. It turns out the query I was using was case-sensitive. I have updated Edition #4 to reflect these changes. Sorry about that.
However, ~7.3% of the tweets contained references to religiously loaded terms like hindu, muslim, jihad, lovejihad, religion, culture, etc. And another ~5.7% had references to china/chinese, revenge, virus, some statements incorrectly attributed to TikTok's founder, etc.
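Since the bug above came down to case sensitivity, here is a minimal sketch of the kind of case-insensitive keyword check that would have avoided it. The term list and function names are illustrative, not my actual query:

```python
# Illustrative subset of the loaded terms mentioned above.
TERMS = {"hindu", "muslim", "jihad", "china", "chinese"}

def matches(tweet, terms=TERMS):
    # Lowercase the tweet before comparing, so "JiHaD" and "jihad"
    # both count. The original case-sensitive query missed the former.
    words = tweet.lower().split()
    return any(term in words for term in terms)

def pct_matching(tweets, terms=TERMS):
    # Share of tweets (as a percentage) containing at least one term.
    return 100 * sum(matches(t, terms) for t in tweets) / len(tweets)
```

Real matching would also need to handle punctuation, hashtags and transliteration, but lowercasing before comparing is the relevant fix here.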
The discourse continues
Unfortunately, I wasn't able to spend a lot of time analysing Twitter last week because I was trying to publish research on the assessment of COVID-19 tech interventions (subtle plug).
I did manage one more extract of the BanTikTokInIndia hashtag on 19th May, though. Using the now-improved parameters, I looked for similar content again. There seemed to have been a drop in the overall percentage of 'charged' tweets: around 5.3% could be judged Islamophobic, while ~4% could be judged anti-China. I do not recommend looking up this trend on Twitter at the moment; it contains a fair amount of graphic animal cruelty and implied sexual-crime-related content.
Another set of trends I did notice but wasn't able to spend a great amount of time on pointed to a concerted effort around reservation/article 30. As usual, I won't post the actual content here, but I hope to spend some time looking into it this week.
To Bot or Not To Bot?
That is the question to which nobody seems to have an answer. Sorry, some background first. NPR ran a story with the headline "Nearly Half of Accounts Tweeting About Coronavirus Are Likely Bots". Now, as one eagle-eyed Twitter user pointed out (self-pat), this story had surfaced back in April too, but did not garner the same amount of attention. Why? I have no idea.
The article references researchers at Carnegie Mellon University who analysed ~200 million tweets since January and "found that about 45% were sent by accounts that behave more like computerized robots than humans." Unfortunately, not much else is known about the methodology at this time.
There is a fair amount of skepticism about this number.
But that hasn't stopped a number of other publications from running it with very similar headlines. And if it seems like it was just last week that I was complaining about how a study was covered, well, it was.
A study in BMJ Global Health, "YouTube as a source of information on COVID-19: a pandemic of misinformation?", looked into exactly that. Now, I had an issue with the way this study was covered. The researchers started with a pool of 150 videos and whittled that down to 69, of which a quarter contained misleading content and accounted for 62 million views. For comparison, YouTube claims that one billion hours of content are watched daily on its platform, generating 'billions' of views. Meanwhile, the study itself was covered by some publications with headlines claiming that 1 in 4 videos on YouTube are factually inaccurate. From the starting point of this study, that's quite an extrapolation.
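Even setting aside the bigger problem (those 69 videos were not a random sample of YouTube), a sample that small carries wide statistical uncertainty. A rough sketch using the Wilson score interval, which is my choice of illustration and not something the study computed:

```python
from math import sqrt

def wilson_interval(p_hat, n, z=1.96):
    # 95% Wilson score confidence interval for a sample proportion.
    denom = 1 + z**2 / n
    centre = p_hat + z**2 / (2 * n)
    margin = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom, (centre + margin) / denom

# ~25% misleading, out of the 69 videos the study ended up with.
lo, hi = wilson_interval(0.25, 69)
# The interval spans roughly 16% to 36%, and that is before accounting
# for the sample not being representative of YouTube at large.
```

So even on its own terms, "1 in 4" is a point estimate from a small, non-random sample, not a property of the platform.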
Evelyn Douek has a thread compiling a lot of the pushback around this story.
The bottom line is that identifying such activity is hardly binary. Various tools, while indicative, should not be treated as the single source of truth. That's not to say we shouldn't use them; we just shouldn't accept what they say as gospel.
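To make the "hardly binary" point concrete: bot-detection tools typically produce a score from a bundle of weak signals, and a headline like "45% are bots" only appears once someone picks a cutoff. A toy sketch, where the features, weights and thresholds are entirely made up (the CMU methodology has not been published):

```python
def bot_score(account):
    # Toy heuristic: each weak signal nudges the score up.
    score = 0.0
    if account.get("tweets_per_day", 0) > 50:
        score += 0.4   # unusually high posting volume
    if account.get("account_age_days", 9999) < 30:
        score += 0.3   # very new account
    if account.get("followers", 0) < 10:
        score += 0.2   # almost no audience
    if not account.get("has_profile_image", True):
        score += 0.1   # default avatar
    return score

accounts = [
    # fresh, hyperactive, faceless account
    {"tweets_per_day": 120, "account_age_days": 5,
     "followers": 2, "has_profile_image": False},
    # prolific but otherwise ordinary account
    {"tweets_per_day": 60, "account_age_days": 400,
     "followers": 500, "has_profile_image": True},
    # long-standing casual account
    {"tweets_per_day": 8, "account_age_days": 2400,
     "followers": 340, "has_profile_image": True},
]

# Same accounts, different cutoff, very different headline number.
flagged_at_05 = sum(bot_score(a) >= 0.5 for a in accounts)
flagged_at_03 = sum(bot_score(a) >= 0.3 for a in accounts)
```

The choice of threshold, not just the data, determines how many accounts get called "bots".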
You get the point now. Jumping to conclusions: bad, healthy skepticism:... healthy. Hmmm, I guess that applies to more than just this story.
Throwing the book at a tough problem won't solve it
Maneesh Chhibber writes in ThePrint that "Twitter, Facebook profited a lot from India’s hate agenda. Time to pull the plug with a law."
In the midst of the coronavirus pandemic, the French parliament has passed a law that makes it mandatory for social media and technology companies such as Twitter, Facebook, and Google to remove hateful content within 24 hours of being flagged by users. If the content pertains to terrorism and child pornography, it must be removed “within one hour”. Failure to comply could end in these companies facing fines of up to $1.36 million. It is time India does something similar. I suggest this knowing fully well that such a law is prone to misuse. Governments in India could use it to throttle free speech, censor unfavourable content, and stifle rightful dissent. At the very least, most law enforcement agencies have stopped functioning independently and now work instead like an extension of the Bharatiya Janata Party (BJP), which is in power at the Centre and in many states.
I certainly understand where this comes from. It is hard to be comfortable with the amount of power that these private players have amassed. One of the suggestions was to weed out anonymous accounts. Incidentally, a BJP member has also petitioned the Supreme Court because of the 'seditious content' on social media. This plea also asked for a KYC of all social media handles. Now, you'll probably recall that the current version of the Draft Personal Data Protection Bill also referenced 'voluntary' social media verification. And while this may sound like a good idea in theory, as with most things, it will disproportionately impact vulnerable people first. Don't take my word for it (you should never do that); listen to this episode featuring Danielle Citron on Feminism and National Security, where she touches on this aspect.
We shouldn't moral-panic ourselves into a situation that we cannot get out of, because we know states will not easily concede powers they acquire (why would they?). ForeignPolicy has a round-up of how this is being used to silence critics in Southeast Asia. "Actions speak louder than words," goes the cliché; with that in mind, read about the case of this PIB Fact Check.
Order, Order!
Since we're on the subject of courts: the Delhi HC "issued notice to social media platforms to remove illegal groups for the safety and security of children in cyberspace". The next hearing is scheduled for mid-July. Keep an eye on this case.
Let's talk about TikTok
As I alluded to earlier in this edition, a lot of disturbing content has indeed surfaced on TikTok. To the extent that the chairperson of the NCW stated her preference that the platform be banned. Let's put this in context. Sure, the platform could do more, but disturbing content is not just a TikTok problem. Just read more than 5 comments on any Twitter trend, or for that matter the comments section on TOI, or YouTube, or Facebook, or Instagram, or Reddit. Banning any or all of these isn't going to fix that (nor is it going to count as revenge against China, as many tweets suggest). I apologise for being extra sermon-y in this edition, but this is a problem that goes deeper than just one platform. And wider than just one country. Shubhangi Mishra points out that it is harder to change culture in schools than it is to lower TikTok's rating on the Play Store.
Cause and effect
Last Friday, an "erroneous message" led to many migrants gathering at Palace Grounds in Bengaluru in the hope of returning home to Odisha. There was a similar incident in Kannur last week too.
On Friday, scores of migrants who had registered to return home, received a message on their phones, informing them that a train would be leaving from Bengaluru to Puri in Odisha on Saturday. The message was received by many migrant workers who had registered to go back home – and was forwarded on WhatsApp to hundreds of others. However, it turned out that the message was an ‘error’ – there was no train waiting for the migrants, only chaos as thousands of people gathered.
In Mizoram, a 42-year-old man was arrested for a Facebook post regarding people returning to the state.
On being interrogated, the man confessed that his post was purely based on hearsay and did not even verify whether it was true or not and believed those persons mentioned in the posts may mistakenly be taken as the ones cleared for community quarantine facility of home quarantine.
In Nagpur, a blogger was warned for posting a doctored photo of a tiger sighting.
In Haryana, a Congress leader was arrested over an objectionable tweet for allegedly hurting religious sentiments.
Unfortunately, tragic events are extremely attractive for information disorder. A PIA plane crash resulted in multiple misleading claims about the final moments leading up to the crash.
Only partially related, but read this essay by Freedom House on how the "The health crisis has provided both motivation and cover for increased persecution of minority faith groups."
Content Moderation is HARD
In this article for BuzzFeed, Alex Kantrowitz poses a valid question about Facebook that a lot of people have been asking: "The social media platform is siding with scientists to stop the spread of harmful misinformation about the pandemic. If it can do it now, why wasn't it doing it all along?" And an opinion piece in AlJazeera argues that tech companies need to act on election-time disinformation in the same way they have done for COVID-19.
While it is true that they are willing to be more interventionist when it comes to COVID-19, Facebook and other social media platforms are finding out that even COVID-19 is deeply political, and many of the same seemingly impossible trade-offs apply during a pandemic as well.
Just ask Twitter, who are unsure of what to do about a certain leader of the free world. Who, incidentally, wants to form a panel to review complaints of anti-right bias.
Or Evan Greer who was flagged by Facebook for sharing a post that an independent fact-checker classified as partly false.
But it appears that Facebook has applied the “partly false” flag not just to that page’s post, but to anyone who posts the same article.
Meanwhile, Mark Weinstein (CEO of a social networking company called MeWe) claims that social media censorship is worse than useless.
Food for thought on content moderation:
The Plandemic Hydra Grows On
This one really is quite the hydra. Deleting anything off the face of the internet is hard, so it is no surprise that this keeps popping back up. NYT attempted to trace how it is spreading. (Is it just me, or does everything sound like a virus reference these days?) WashPo ran a story about services like Google Drive, Internet Archive, etc. being used as vehicles to keep this content alive. Does this mean that we should treat all services that deal with UGC the same way? Alex Stamos offers some insight.
And Kate Starbird published a fascinating "data memo" tracing some of the activity around the documentary's protagonist in early-to-mid April. She also specifically focuses on a right-wing conspiracy theory website whose pseudonymous owner was retweeted by Donald Trump.
Sarah Zhang highlights how misinformation about potential vaccines is already spreading.
In other news
The thinking in Australia seems to be that a boycott of Google and Facebook could be an effective way to get them to share advertising revenue. As Rasmus Kleis Nielsen points out, such experiments haven't succeeded in the past. But who knows, 2020 is a strange year.
I think it is a rule that you cannot do an edition of an information disorder newsletter without mentioning China and Russia, so here goes. Chinese diplomatic and state media accounts have reportedly tweeted 90,000 times since April. I get that this is a nice hook since China is involved, but we should hold off on our tin-foil hats until there is a reasonable comparison with other countries. Again, I don't mean to be preachy, but it would be bad karma for an information disorder letter to cause more hysteria. The Russian Embassy in the US, meanwhile, hit out at the State Department on Twitter for allegedly offering a grant of $250,000 for exposing Russian health disinformation.
Burundi reportedly resorted to an internet disruption on election day. Now, internet shutdowns are not directly related to information disorder, but the stated intent is generally to "curb the flow of misinformation".
Ok, whoever said information disorder is a high-growth industry was right. I had set aside more than 50 links for this week's edition, but since I am already past 2000 words - for your sanity and mine - I will end this edition here.
Sorry, last preachy thing for today. As you read this on what I expect is a blazingly fast internet connection (for the most part), spare a thought for a part of India that has had disrupted connectivity for nearly 300 days now.