Of Things that aren’t there, Midinformation, Belarus and Twitter offices
MisDisMal-Information Edition 14
*What is this?* This newsletter aims to track information disorder largely from an Indian perspective. It will also look at some global campaigns and research.
*What this is not?* A fact-check newsletter. There are organisations like Altnews, Boomlive etc. who already do some great work. It may feature some of their fact-checks periodically.
Welcome to Edition 14 of MisDisMal-Information
Things that aren’t there
The fans that weren’t and the laws that aren’t
Earlier this week, Mumbai Mirror carried a story about Badshah alias Aditya Prateek Singh Sisodia (turns out that he is my namesake, sort of) claiming that he had “bought around 7.2 crore views for Rs 72 lakh for one of his songs, called Pagal Hai, in a bid to set a world record.” The police went so far as to say he confessed, but the rapper himself states that he ‘categorically denied all allegations’.
But as Karthik S points out, the business of ‘fake likes and views’ may be unethical, but it is not illegal, yet. And Bhanuj Kappal, writing in Livemint, notes that he isn’t the only one.
*Aside: New York Times published an interactive story on follower factories in early 2018.*
Indeed, a report on Government Responses to Disinformation on Social Media Platforms indicates that while some countries (Denmark, Sweden) have referenced the use of bots in the context of elections, none of them have taken any legislative action so far. A draft version of the Interstate Broadcasting Treaty in Germany proposes instituting an obligation on social media intermediaries ‘to identify social bots’. The primary considerations are electoral. The other related thread comes from the direction of linking social media profiles to real-world identities. China and Belarus enforce this. India and Kyrgyzstan have proposals that reference this. Brazil did too, but subsequent drafts of the ‘disinformation law’ appear to have relaxed this requirement.
So while the ‘influencer industry’ may be relieved for now, the rest of us probably shouldn’t be. Jenna Hand, writing for First Draft, covers government overreach during the pandemic.
Hungary, Romania, Algeria, Thailand and the Philippines are among the countries that have instituted new laws or invoked emergency decrees giving authorities the power to block websites, issue fines or imprison people for producing or spreading false information during the pandemic. In Cambodia and Indonesia, social media users have been arrested after allegedly posting false news about the coronavirus. In Egypt, a journalist who had been critical of the government’s response to the pandemic and was detained for “spreading fake news” contracted the virus in custody and died before he could be tried. Even in South Africa, where freedom of expression is a constitutional right, politicians criminalized the publication of any statement made “with the intention to deceive any other person” about Covid-19, government measures to address the disease or — in a sign of the country’s grim experience with HIV/AIDS — a person’s infection status.
And the International Press Institute is maintaining a tracker of countries that have passed ‘fake news laws’ during the pandemic. My sense is that it understates matters since countries have also used existing laws instead of passing new ones.
The Google searches that weren’t there…
Ok, mini rant time. News18 Buzz ran a story titled ‘“Is Kamala Harris Hindu?” What Many Indians Searched for After Biden Picked US Vice President Candidate’. I was intrigued to see what numbers they would back it up with. Well, here they are:
After her nomination was announced, many in India started to look her up, but instead of looking at her achievements, the most common search terms were ‘Kamala Harris religion’ and ‘Kamala Harris Hindu’
That’s it. If only Google had a tool that indicated relative search interest and search trends…
I guess they do, and it indicated that searches for her religion (red) or for whether she is a Hindu (blue) were likely far lower than searches for her name in general (yellow). It is possible that some of those searching for her by name were interested in her religion, and even a small percentage of us (Indians) would count as “many”, but I will let the artist formerly known as Marky Mark sum up how I feel.
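For anyone who wants to sanity-check such claims themselves, the key detail is that Google Trends doesn’t report raw counts: every compared series is scaled so that the single busiest term-week across all terms equals 100. A minimal sketch of that scaling, using made-up weekly counts (illustrative only, not real Trends data):

```python
def trends_normalise(series):
    """Scale weekly search counts the way Google Trends does: the largest
    value across ALL compared terms becomes 100, and everything else is
    expressed relative to that peak."""
    peak = max(max(counts) for counts in series.values())
    return {term: [round(100 * c / peak) for c in counts]
            for term, counts in series.items()}

# Hypothetical weekly search counts (invented for illustration)
raw = {
    "Kamala Harris":          [50000, 900000, 400000],
    "Kamala Harris religion": [200, 30000, 9000],
    "Kamala Harris Hindu":    [100, 27000, 8000],
}
scaled = trends_normalise(raw)
```

With numbers like these, the religion queries flatline near zero once scaled against interest in the name itself, which is exactly the shape the Trends chart showed.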
The enemy that isn’t there?
If you have been following discourse around information disorder, you will be all too familiar with the tendency to blame as much as possible on foreign interference. Thankfully, the conversation seems to be moving towards domestic disinformation too.
Writing in Foreign Policy, Seva Gunitsky:
Treating disinformation as an alien disease ignores the fact that it is perfectly compatible with democratic norms and thrives inside democratic states.
The same factors that promote healthy democracies also promote the spread of disinformation. Democratic deliberation requires free flows of information and multiple competing narratives.
What we see emerging now is the “democrat’s dilemma”—controlling information is inimical to democracy, but allowing it to spread unchecked creates disinformation that can undermine democratic discourse. Increasingly, this trade-off appears hardwired into modern democratic regimes.
Some of this realisation (some, not all) is being driven by a growing understanding of QAnon.
This thread by Deepa Seetharaman rounds up a number of QAnon stories.
I’ve covered QAnon substantially in the last few editions, and a lot of that has been with a sense of déjà vu. This may partly be because I recently read Rohit Chopra’s book analysing politics in the Indian social media sphere [goodreads link].
This understanding, though, is also troubling because it becomes difficult to envision a way out when you can’t even have a civil conversation. Anne Applebaum, in The Atlantic, writes about what you can do when the facts don’t get through. For the record, I am not really a fan of the Lincoln Project’s content which she references, but I can see why it appeals to many.
The Post that wasn’t there
Earlier in the week, Bangalore witnessed some violence in reaction to a post by an MLA’s nephew. By the following morning, the hashtag ‘#BangaloreRiots’ was trending on Twitter, and everyone and their second cousins were being asked to ‘unequivocally’ condemn the violence. I am going to encourage you to consider this beyond the obvious, though. In the information ecosystem, things rarely happen in isolation. If dangerous speech exists, it isn’t going to pause just because someone (or in this case, many someones) did something we don’t agree with.
There are 3 scenarios that could play out (ok, there are more, but stay with me):
A - Stay quiet. Some people will get called out for not saying something (most won’t). There will be some attacks on these people (squiggly blue lines) in addition to the existing chorus (solid blue lines).
B - Qualified criticism. The existing chorus will continue. Some will get called out, and may choose to defend themselves. This may lead to clashes with the existing chorus(criss-crossing squiggly green and blue lines), or even mobilise new voices that may have stayed quiet otherwise.
C - Unequivocal criticism. Everyone joins the existing chorus.
Now in reality, the entire ecosystem will be a sum of myriad such choices playing out. They will all create tension and information pollution, as well as have long-term effects, although degrees may vary.
But this isn’t why this sub-section exists in this section of the edition. It exists because something else didn’t exist. What? Pooja Chaudhuri from Altnews investigates the claim that the post in question was a response to another post denigrating a Hindu deity. It wasn’t.
In response to the events in Bangalore:
Telangana’s DGP and Hyderabad’s Police Commissioner simultaneously urged caution and threatened strict action.
Also linked to the Bangalore incident, a report in TOI states that Kolkata’s police commissioner also “cautioned social media users and asked them to refrain from posting fake news.” It further stated that ~200 people were prosecuted in April and May for “fake posts”.
Information that isn’t anywhere
Ok, I may be getting a little carried away with this theme, this is the last one, I promise.
An Xiao Mina writes about missing information or midinformation, which applies itself rather well to the whole COVID-19 situation:
In the case of emerging knowledge, it might be helpful to think not just about misinformation but midinformation. We know a little now, we’ll know more later, and we may never know everything ever. In other words, information stands in the middle, and we’re trying. Scientists are gaining some clarity, but it’s going to take some time for scientific consensus to build, and for public understanding to catch up.
Midinformation, in other words, is the sort of information crisis that happens when not all the facts are known. In that vacuum of knowledge, all kinds of rumors, conspiracies, misunderstandings and misconceptions can emerge, because it’s comforting to have an anchor that feels true and reliable.
To my untrained mind, this is reminiscent of the concept of data voids, which Michael Golebiewski and Danah Boyd defined in the context of search engines.
“There are many search terms for which the available relevant data is limited, nonexistent, or deeply problematic. … We call these low-quality data situations ‘data voids.’”
And, in another extremely interesting post, Tommy Shane extends the concept of data voids beyond search engines to social media platforms, asserting that they are search engines too, given the way people interact with them. He has three asks of platforms:
1) A Google trends equivalent for social media platforms.
2) More precision from Google trends.
3) A connection between interest and results.
Meanwhile in India
In editions 11 and 12, I touched upon Prashant Bhushan and the Supreme Court. As I was writing this edition, the court’s 108-page judgement holding him in contempt came out. Anyway, it turns out Twitter may have benefitted (hard to say for sure) from disabling Mr. Bhushan’s tweets, or so says the judgement. LiveLaw’s Twitter account was kind enough to tweet the specific page.
I have a joke about this situation but Twitter disabled it.
As I was wrapping up this edition, WSJ broke (paywall) a story about Facebook favouring the ruling party in policy-enforcement decisions in India.
“A core problem at Facebook is that one policy org is responsible for both the rules of the platform and keeping governments happy,” said Facebook’s former chief security officer, Alex Stamos.
Bonus points for dropping it late at night on a Friday, after all the years of Facebook’s Friday news dumps.
Two people were arrested in Bhubaneswar for “allegedly spreading misinformation on the pandemic on social media”.
A number of journalists tweeted that Ex-President Pranab Mukherjee had passed away. His family refuted these claims - in contrasting ways.
Pallavi Pundir writes for Vice about an organisation called AMPAK cares, with a minimal digital footprint, that was responsible for the Kashmir-related ads at Times Square, New York on August 5th.
Staying with J&K, a fake Twitter handle was created impersonating the UT’s new Lieutenant Governor, Manoj Sinha. An FIR was registered against ‘unknown persons’.
India’s curious Twiplomacy continues. In the screenshots below:
Exhibit A: India’s Syrian embassy retweeting a handle dunking on what is possibly an account from Pakistan impersonating someone from China. I say that because a Twitter search for its tweets from 2017 indicates the use of Hindi/Urdu in Roman script, and going through a lot of its tweets suggests it could be from Pakistan. (Warning: sexist content)
Aside: This handle was also participating in activity on the TamilsNotHindus hashtag, commenting on which female actor from Tamil cinema is the “best-looking”. 🤦♂️
Exhibit B: India in Iceland with a mini thread on Pakistan.
Exhibit C: India in Eritrea retweeting a dunk-tweet on Pakistan.
Ragamalika Karthikeyan writes for The News Minute about Nandini Jammi - who tracks brands funding websites with hateful content.
I don’t explicitly cover cybersecurity very often, but there is an overlap between information operations and cyber-offensive operations. This post by Gunjan Chawla and Vagisha Srivastava defines ‘offensive cyber capabilities’.
Around the world
Belarus
Belarus held presidential elections this week, the aftermath of which was marked by protests over strong suspicions of rigging. The government allegedly resorted to shutting down the internet.
The government denied this and claimed the disruptions were the result of DDoS attacks, but information put out by Cloudflare suggested otherwise.
The Hybrid Warfare Analytical Group also claimed to have tracked information operations on Russian TV about the protests in Belarus.
There did seem to be some divergence between TV coverage and other press coverage, though.
Pakistan
Female journalists and commentators in Pakistan put out a joint statement against trolling and attacks by accounts affiliated to the ruling party. Where else have we seen this? 🤔
Of course, we know this is all too common in India too. Approximately a month ago, DW published a story about two Al Jazeera journalists who were the target of coordinated harassment.
I also noticed some rather interesting activity pertaining to Twitter. Over the last few days, there has been some buzz around the hashtag #WeWantTwitterOfficeInPak. This is due to a perceived pro-India stance that Twitter has taken, and some of the tweets seem to indicate that a local presence would result in Twitter being held accountable.
Aug 10 - 11: Minimal activity during this period
Many posts reference the suspension of 200 accounts for posting Kashmir-related content. Except, this doesn’t seem to be a recent event; news reports point to August 2019, though some of the tweets were worded as if it were recent.

USA
After reports that antifa.com redirected to Joe Biden’s website, Mashable put out an explainer on why this didn’t necessarily mean that Joe Biden owned the domain.
With Kamala Harris now confirmed as the VP candidate, Jane Lytvynenko put out a thread rounding up some of the disinformation that surfaced around the time she started her presidential campaign.
UK
A 12-week consultation aiming to address funding of political ads was announced. It was criticised for coming “with no deadline for implementing it or hint at the scale of the punishments planned.”
Meghan Markle’s husband spoke about redesigning social media which is currently ‘dividing us’.
Big Tech Watch
Facebook released its Community Standards Enforcement Report. I concur with this article by Sonal Khetarpal, which raises the point of country-specific filters. There are a lot of numbers in there, which give me a ton of information but very little knowledge. For example:
The amount of content (hate speech) we took action on increased from 9.6 million in Q1 to 22.5 million in Q2.
While we’re on hate speech - Reuters ran a story about Facebook not providing evidence pertaining to the Rohingya genocide in Myanmar, yet.
The other thing to watch out for in the report is the impact of a shift to algorithmic moderation. Protocol’s Issie Lapowsky has a thread. The impact was most felt in the self-harm and CSAM categories.
Daphne Keller makes a related point.
Action on fake accounts actually decreased. Also, Instagram doesn’t have a fake account category in its section yet.
Algorithmic bias/racism was in the spotlight again after Instagram deleted a photo and threatened to suspend the model’s account.
Also on Instagram: Vox details the ‘PowerPoint activism’ that is evident all over the platform. Pity their algorithms didn’t get it.
Online activism, coupled with in-person organizing, reached a zenith in June, as daily Black Lives Matter protests erupted across the country. Instagram, once an apolitical din, reflected that change. It no longer felt appropriate — even for celebrities and influencers, who tend to exist unfazed by current events — to skip over politics and resume regular programming.
Twitter rolled out its modified conversation settings to all users allowing them to set limits on which groups of users can reply to a tweet. I am cautiously optimistic, but only time will tell.
Extremely interesting takeaways in this post by @suzannexie. I was especially concerned about quote-tweeting. (Colour coding = my subjective gut-feel: green = initial data gives grounds for cautious optimism; orange = I am skeptical; red = I found it a little concerning/disconcerting.) As Mahima Kaul (@misskaul) tweeted: “There is a whole blog explaining it: https://t.co/lHC05t75um”
Study Corner
Early detection of internet trolls: Introducing an algorithm based on word pairs / single words multiple repetition ratio - A study by Sergei Monakhov “suggest(s) a quantitative measure for identifying troll messages which is based on taking into account certain sociolinguistic limitations of troll speech, and discuss(es) two algorithms that both require as few as 50 tweets to establish the true nature of the tweets, whether ‘genuine’ or ‘troll-like’.”
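I haven’t reproduced Monakhov’s exact method here, but the underlying intuition, that trolls recycling talking points repeat whole word pairs far more often than ordinary users repeat individual words, can be sketched in a few lines (the function name and the raw-ratio output are my own illustration, not the paper’s algorithm):

```python
from collections import Counter

def repetition_ratio(tweets):
    """Toy sketch of the word-pair vs. single-word repetition idea:
    count how many distinct word PAIRS (bigrams) recur across a set of
    messages, relative to how many distinct single words recur. Accounts
    recycling canned talking points reuse whole phrases, so their bigram
    repetition is unusually high relative to single-word repetition."""
    words, bigrams = Counter(), Counter()
    for tweet in tweets:
        tokens = tweet.lower().split()
        words.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    repeated_words = sum(1 for c in words.values() if c > 1)
    repeated_bigrams = sum(1 for c in bigrams.values() if c > 1)
    if repeated_words == 0:
        return 0.0
    return repeated_bigrams / repeated_words
```

Under this toy heuristic, a higher ratio over a batch of ~50 tweets would flag the account as more ‘troll-like’; the actual study works out the sociolinguistic limits that make such a threshold meaningful.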
Global Disinformation Index documented ads sponsored by NGOs/Charities which appeared alongside disinformation. A number of charities then wrote a letter to Google:
“Charities, which depend on online advertising for donations to keep the lights on and serve their constituencies, have their ads placed on websites that openly spread hate speech, promote dangerous falsehoods about the spread and prevention of COVID-19, and even encourage violence,” the letter reads. “In too many cases, this means the charities are paying Google, only for Google to damage their reputation and undermine their mission.”
Part 1 of a Bellingcat investigation of Yevgeny Prigozhin:
Now, a long-running investigation by Bellingcat, The Insider and Der Spiegel has uncovered that Yevgeny Prigozhin’s disinformation, political interference and military operations are tightly integrated with Russia’s Defense Ministry and its intelligence arm, the GRU. Prigozhin’s private infrastructure – along with that of other government-dependent entrepreneurs, like Kostantin Malofeev – it appears serves as a deniable veneer and a round-tripping money laundering channel for government-mandated overseas operations.