Of Unbundling social media, fake news, importing conspiracies

MisDisMal-Information Edition 44

What is this? MisDisMal-Information (Misinformation, Disinformation and Malinformation) aims to track information disorder and the information ecosystem largely from an Indian perspective. It will also look at some global campaigns and research.

What this is not? A fact-check newsletter. There are organisations like Altnews, Boomlive, etc., who already do some great work. It may feature some of their fact-checks periodically.

Welcome to Edition 44 of MisDisMal-Information

In this edition

  • Various proposals to ‘unbundle’ social media platform features.

  • Fake engagement and its negative externalities

  • … Meanwhile, in India - Twitter’s travails, Conspiracy theories, India-centric misinformation research.

Unbundling Social Media

Published in early June, an analysis of social media engagement with Donald Trump’s statements before and after his accounts were banned indicated that though his agenda-setting powers had waned, some statements still received significant engagement, because they were amplified by (mostly) supporters and by opponents ridiculing them [Davey Alba, Ella Koeze and Jacob Silver - NYTimes]

Before the ban, the social media post with the median engagement generated 272,000 likes and shares. After the ban, that dropped to 36,000 likes and shares. Yet 11 of his 89 statements after the ban attracted as many likes or shares as the median post before the ban, if not more.

Building on the question of ‘how this happened’, Richard Reisman writes [Tech Policy Press]:

Understanding how that happens sheds light on the growing controversy over whether “deplatforming” is effective in moderating extremism, or just temporarily drives it out of view, to intensify and potentially cause even more harm.  It also illuminates the more fundamental question: is there a better way to leverage how social networks work to manage harmful speech in a way that is less draconian and more supportive of free expression? Should we really continue down this road toward “platform law” — restraints on speech applied by private companies (even if under “oversight” by others) — when it is inevitably “both overbroad and underinclusive” — especially as these companies provide increasingly essential services. 

There are a few proposals that aim to occupy the space between this ‘platform law’ regime and a free-speech free-for-all. The general idea is to wrest control over the flow of information on platforms away from the platforms themselves - with approaches as varied as mandating interoperability, creating a marketplace of content-sorting algorithms, etc.

In no particular order, some of these are:

Magic APIs - Daphne Keller

  1. PLATFORM CONTENT REGULATION – SOME MODELS AND THEIR PROBLEMS (magic APIs section is towards the end)

  2. If Lawmakers Don't Like Platforms' Speech Rules, Here's What They Can Do About It. Spoiler: The Options Aren't Great. (essentially an updated version of 1 for 2020)

Protocols not Platforms - Mike Masnick

Competitive Compatibility / Adversarial Interoperability - EFF (I didn’t find a single document that outlines this, so the analysis for this is pieced together from various sources - mostly Cory Doctorow’s work)

  1. Competitive Compatibility: Year in Review 2020

  2. Why it’s easier to move country than switch social media

Middleware - Stanford Working Group on Platform Scale

  1. Report of the Working Group on Platform Scale

  2. Was Twitter Right To Have Booted Trump? (Francis Fukuyama and Jillian C. York)

  3. Fake News and Conspiracy Theories - Francis Fukuyama (you may remember this from 34 - Of Hex, Liers and (not just)Video deepfakes)

You may have realised that I’ve excluded BlueSky. For now, I’m considering it a Twitter solution (technically, it is funded by Twitter but supposed to be independent), but it is worth keeping an eye on.

A full description of each of these is outside the scope of this edition. A good frame is to think of them as variations of solutions that let others build on top of the existing platforms, with the resultant competition (hopefully) addressing some of the problems arising out of the centralisation of power in platforms. E.g. there could be different algorithms for sorting feeds on Twitter/Facebook, allowing people to choose between, say, ones designed by news producers or entertainment content producers, etc.
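To make that idea concrete, here is a minimal sketch of what such an unbundled ranking layer could look like. None of the proposals above specifies an API; every name below is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Post:
    author: str
    text: str
    timestamp: float


# A ranking "middleware" is just a function from the platform's
# uncurated firehose to an ordered feed; users pick which one runs.
RankingMiddleware = Callable[[List[Post]], List[Post]]


def chronological(posts: List[Post]) -> List[Post]:
    """Baseline: newest first, no curation."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)


def link_first(posts: List[Post]) -> List[Post]:
    """A hypothetical news-producer ranker that surfaces posts with links."""
    return sorted(posts, key=lambda p: ("http" in p.text, p.timestamp), reverse=True)


def render_feed(firehose: List[Post], middleware: RankingMiddleware) -> List[Post]:
    # The platform supplies the (legal) content; the user-chosen
    # middleware decides ordering and filtering.
    return middleware(firehose)
```

The proposals differ mainly in who runs this function and on whose infrastructure - that, roughly, is what the comparison below tries to tease apart.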

I will try to compare/contrast them based on 6 parameters (based on my own understanding). I do recommend reading through the material I linked to earlier.

Aim(s)

Magic APIs

  • Licensing hard-to-duplicate resources to newcomers in markets subject to network effects (give users more choices among competing rulesets or rule makers).

Protocols not Platforms

  • Reverting to an internet powered by a system of protocols instead of centralised platforms would do a better job of protecting user privacy and free speech.

  • Most current solutions will lead to outcomes that will leave us worse off.

Competitive Compatibility

  • Encourage interoperability and block anticompetitive mergers with or without the cooperation of the platform(s).

Middleware

  • Reduce political threats posed by platform control

Intended Target(s)

Magic APIs

  • Not explicitly prescribed, but Keller has alluded to challenges making rules specifically targeting 'bigness'.

Protocols not Platforms

  • Envisioned as a universal practice.

Competitive Compatibility

  • Targets 'bigness' but does not necessarily limit it to that.

  • Increased focus on lock-in compared to network effects.

Middleware

  • Targets 'bigness'. It explicitly names Facebook, Amazon, Apple and Twitter but doesn't necessarily exclude others.

Depth of unbundling

Magic APIs

  • Not explicitly prescribed. Indicative example:

    In the platform context, this would mean that Google or Facebook opens up access to the “uncurated” version of its service, including all legal user-generated content, as the foundation for competing user-facing services. Competitors would then offer users some or all of the same content, via a new user interface with their own new content ranking and removal policies.

Protocols not Platforms

  • Not explicitly prescribed.

  • It goes to the extent of moving data out of platforms to user-controlled blobs.

Competitive Compatibility

  • It does not specify any upper limit but maintains that a user should be able to delegate all aspects of interaction to a third party.

Middleware

  • It varies between performing 'essential functions' and 'supplemental filters'.

  • Advocates an intermediate role:

    provides filters for specific news stories and develops ranking and labeling algorithms, which are then integrated into the main platform

User Data

Magic APIs

  • No prescription, but it lists this explicitly as a sticking point in terms of ownership: does an individual own their friends’ data, or their friends’ interactions with their posts?

Protocols not Platforms

  • In its most ambitious version - every user will manage their own data via 'blobs', but that's not a prerequisite.

Competitive Compatibility

  • Address harms with privacy law

  • Limit commercial use of data

Middleware

  • No specific prescription. A preference for the status quo can be inferred from its reference to platforms being able to retain their business models.

Degree(s) of regulation

Magic APIs

  • Not explicit, but reference to 'unbundling requirements analogous to telecom' implies that it would follow from some sort of regulation.

  • Approaches it from ‘bigness’.

Protocols not Platforms

  • It is meant to be market-driven.

Competitive Compatibility

  • Suggests regulation to block platforms’ anti-circumvention efforts.

  • Approaches it from ‘smallness’.

Middleware

  • Interoperability/opening up APIs may happen by consent or by decree, though it considers legislation likely to be required to open up APIs.

  • Prescribes standards/guidelines that middleware companies will need to adhere to, which can be outlined by a regulator or the platforms themselves.

Business Models

Magic APIs

  • No prescription, but it indicates that allocating a revenue split (for ads) will be complex.

Protocols not Platforms

  • Move away from inter-platform competition. Though some are viewing Twitter’s Bluesky as a means to compete with/differentiate from Facebook.

  • Agents that interface between blobs in data stores and services.

  • With data and privacy controlled by users, data-hungry models may not thrive. So a return to intent-based or brand advertising is likely.

  • There will be competition between business models.

  • Token-based

Competitive Compatibility

  • No specific prescription.

Middleware

  • Revenue Sharing

  • Directly selling subscriptions or ads

Questions… Questions…

All 4 are interesting solutions, and we should, perhaps, engage with them more deeply. As I was reading through some of these proposals, many questions/thoughts came to mind. I don’t have good answers today.

  1. Why would platforms change/cooperate?

  2. How are these approaches better than user controls provided by platforms?

  3. Will the increased complexity for users hamper adoption?

  4. Should this be limited to 'bigness'? If yes, on what principles (that do not seem arbitrary)?

  5. Larger systemic incentives remain unchanged (e.g. media will still report on egregious content, etc.)

  6. Does this mean ‘bad’ content stays up? Even if we can’t agree on what ‘bad’ is.

  7. Could this mean more filter bubbles?

  8. Network effects could still apply to middleware solutions, and one of them could accrue significant influence over narratives.

  9. How do we avoid being in a similar position again? E.g. Limited transparency, dominant solutions, etc.

  10. How will this impact different layers of the moderation stack? Should it? (See 27: Of ‘Antivaxtionals’ and D(r)ump(f)ing Donald > The Content Moderation Stack)

Related:

  • In December, Mike Masnick, Daphne Keller and Cory Doctorow laid out some of these in an insightful podcast [TechDirt].


Fake News

No, No, I haven’t joined the ‘fake news’ bandwagon (I still avoid using the term). Instead, this section looks at recent news/stories around fake engagement.

▶️ Sophie Zhang, writing in RestOfWorld, describes how personal vanity pushes users to use Facebook ‘autolikers’ to drive up engagement.

  • Using ‘autolikers’ requires users to give them access to their accounts, which are then abused to create fake engagement for others - in exchange for receiving fake engagement. She contends that users assume their accounts cannot be misused since no passwords are shared as part of the process.

  • There were some interesting data points in there too.

    • While people assume most fake engagement is political, it apparently makes up less than 1% of fake engagement on Facebook.

    • Most accounts that engage in inauthentic activity are not fake.

      In the first half of 2019, we knew internally that there were roughly 3 million known fake engagers on Facebook, of whom only 17% were believed to be fake accounts. We believed the other 83% were largely self-compromised — many of them through autolikers.

    • Why is this bad? Well, negative externalities, as economists would say.

      This arrangement seems to deliver benefits to both themselves and the autolike business — but only because the costs are borne by others. They do not realize that they are contributing to the gradual erosion of trust in their fellow users and organizations, and corrupting the civic discourse in their nation.

  • I also liked the distinction between misinformation and inauthentic activity, which, as she rightly pointed out, people tend to conflate.

    Another element to keep in mind: Observers commonly conflate the use of inauthentic accounts with misinformation, two separate and largely unrelated problems. Misinformation is a function of what the person is saying, and does not depend on who the person is. If someone said the moon is made of cheese, this is misinformation, regardless of who's saying it. Inauthenticity depends only on the identity of the user, regardless of what they are saying. If I have 50 fake accounts telling the world "cats are adorable," this is still inauthentic activity, even though there's nothing wrong with saying that.

Related:

▶️ Amazon is blaming social media companies for fake reviews [Alex Hern - TheGuardian]

This year a Which? investigation found companies claiming to be able to guarantee “Amazon’s Choice” status on products – an algorithmically assigned badge of quality that can push products to the top of search results – within two weeks, and others claiming to have armies of reviewers numbering in the hundreds of thousands.

Amazon says the blame for those organisations should lie with social media companies, who it says are slow to act when warned that fake reviews are being solicited on their platforms.

▶️ ‘Rightwing firm posed as a leftist group on Facebook to divide Democrats’ for the 2018 midterm elections [Julia Carrie Wong - TheGuardian].


… Meanwhile, in India

▶️ Twitter’s terrible, horrible, no good, very bad year… continues

Is this thing on? Why, I am referring to Twitter’s Intermediary Status, of course.

  • Twitter has also been summoned by the Parliamentary Standing Committee on Information Technology [Aihik Sur - MediaNama]

  • Twitter’s India head was questioned by the Delhi Police in connection with the “Congress Toolkit” case. This story, filed on 17th June, still claims without any caveats that Twitter has lost its intermediary status 🤷‍♂️ [HindustanTimes]

▶️ FIR se F.I.R

This is all playing out in parallel with Twitter Inc. and Twitter Communications India being named in an FIR by the Ghaziabad Police, along with TheWire, multiple individual journalists, and a Congress spokesperson.

This is very much a developing story, with the Ghaziabad police contesting the victim’s claim of a communal angle [Bismee Taskin - ThePrint] and organisations like the International Press Institute speaking out against the FIR. Meanwhile, the UP Government has said it will take strict action against those ‘spreading fake news on social media’ [RepublicWorld] [Archive link] (Go and read the lede, I promise you won’t be disappointed). The Delhi Police claims to have received complaints against Swara Bhaskar and Twitter’s India MD [EconomicTimes].

And the story will likely have evolved between the time I schedule this edition and the time it hits your inbox.

Recommended Reading about the Intermediary Rules:

▶️ Conspiracy Theories

▶️ Follow-up on Time Magazine’s story on HJS and SS (edition 43)

▶️ Content Moderation by Courts

  • The Delhi High Court directed ‘TheCognate’ to block social media posts against India Today “alleging to show that there has been a contrasting and biased approach in its reporting against Muslim community concerning COVID protocol violations relating to religious gatherings at Kumbh Mela and Mecca Masjid” [LiveLaw]

▶️ Misinformation research about India

  • Educative Interventions to Combat Misinformation: Evidence from a Field Experiment in India - Sumitra Badrinathan [Cambridge University] I wrote about the draft back in 37 - Of Madness of the feeds, Har(dly)vard(th) Fact-checking, and it points to the presence of motivated reasoning.

  • Misinformation on covid-19 Pandemic in Youtube and its Impact on Viewers in Kerala - Lakshmy Ravindran, Dr. S. Dinesh Babu [Annals of RSCB]

    An Online survey was conducted on 325 samples to measure the impact of the misinformation videos on the general public of Kerala. The video analysis revealed that 28% of the videos contain misinformation. The online survey disclosed that the impact of such misinformation is not significant on the people of Kerala. It is concluded that the ill effects of misinformation can be countered through increased awareness on health and hygiene among the people. This study also suggests the need for using media to promote health literacy, effective cyber laws to curb the propagation of fake news as areas that have scope for improvement.

    Incidentally, a study published last week by the American Psychological Association suggested that “generic warnings about online misinformation, such as those used by governments and social media companies, are unlikely to be effective”. [Quantifying the effects of fake news on behaviour: Evidence from a study of COVID-19 misinformation - Ciara M. Greene and Gillian Murphy]

  • Tiplines to Combat Misinformation on Encrypted Platforms: A Case Study of the 2019 Indian Election on WhatsApp - Ashkan Kazemi, Kiran Garimella, Gautam Kishore Shahi, Devin Gaffney, and Scott A. Hale. [arXiv]

  • Claim Matching Beyond English to Scale Global Fact-Checking - Ashkan Kazemi, Kiran Garimella, Devin Gaffney, and Scott A. Hale. [arXiv]

    I’ll repeat what I’ve said many times before (maybe not here, but just ask the poor people who have to talk to me) - we need a lot more such research in India. Keep it coming…


Of the many influences of influence

MisDisMal-Information Edition 43

What is this? MisDisMal-Information (Misinformation, Disinformation and Malinformation) aims to track information disorder and the information ecosystem largely from an Indian perspective. It will also look at some global campaigns and research.

What this is not? A fact-check newsletter. There are organisations like Altnews, Boomlive, etc., who already do some great work. It may feature some of their fact-checks periodically.

Welcome to Edition 43 of MisDisMal-Information

The influence of Influence

This was prompted by recent investigations on influence operations from India (DFRLab - covered in 42) and Pakistan (Graphika Labs).

I’ve divided the section into 3 parts:

  • The first part is adapted from my contribution to Technopolitik and delves into how we perceive the effectiveness of influence operations.

  • Parts 2 and 3 are based on analyses by the Partnership for Countering Influence Operations, which catalogued both how platform policies approach them and the various interventions they’ve enacted.

How influential are influence operations?

In its May 2021 Coordinated Inauthentic Behaviour Report, Facebook disclosed that it had taken down a network that originated in Pakistan and targeted domestic audiences in Pakistan as well as global audiences with content in English, Arabic and Pashto. An accompanying report by Graphika Labs identified 5 kinds of narratives, one of which consisted of content that was ‘anti-India’. 

Across the network’s various assets/accounts on Facebook and Instagram, it had ~800,000 followers across 40 accounts and 25 pages, 1200 Facebook group members across 6 groups, and 2400 followers across 28 Instagram accounts. While interesting, these numbers don’t tell us much about how effective the activities of such networks are. Commenting on the era of disinformation operations since the 2010s in Active Measures, Thomas Rid categorised them as more active, less measured, high tempo, disjointed, low-skilled and remote. In contrast, he said, earlier generations were slow-moving, highly skilled, labour intensive and close-range [Aside: It is worth clarifying that Graphika Labs attributed the network to a Public Relations firm and not to a state actor]. 

Coming back to the question of effectiveness, in Hype Machine, Sinan Aral refers to the concept of “lift” or the change in behaviour caused by a message/series of messages. The word ‘change’ is crucial since it implies the necessity of determining causality, not just establishing correlation. Even more so, when it comes to voting behaviours, political opinions, etc. For this reason, assessments based just on the number of impressions, followers or engagement are incomplete and ignore the ‘selection effect’ of targeting messages to a user who was predisposed towards a certain course of action already. And while lift has not yet been quantified in the context of influence operations due to the complexities of reconciling offline behavioural change with online information consumption, Aral suggests such targeting is most effective when directed towards ‘new and infrequent’ recipients. Or in the electoral context, at undecided voters or those unfamiliar with a certain political issue. In other words, ‘change happens at the margins’.
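As a back-of-the-envelope illustration of the idea (my own toy construction, not Aral’s formulation): lift only has a causal reading if exposure was randomly assigned, which is exactly what raw impression/engagement counts lack.

```python
def absolute_lift(exposed_acted: int, exposed_total: int,
                  control_acted: int, control_total: int) -> float:
    """Change in behaviour attributable to the message.

    Assumes exposure was randomly assigned. Without randomisation,
    the difference also captures the 'selection effect' - people
    targeted precisely because they were already likely to act.
    """
    return exposed_acted / exposed_total - control_acted / control_total


# 500 of 10,000 exposed users acted vs 450 of 10,000 in the holdout:
# engagement looks healthy, but the causal lift is only 0.5 percentage points.
print(absolute_lift(500, 10_000, 450, 10_000))  # 0.005
```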

At least some operations appear to be adapting. Facebook highlights a shift from ‘wholesale’ (high volume operations that broadcast messages at a large scale) to ‘retail’ (fewer assets to focus on a narrow set of targets) operations in a report on the State of Influence Operations 2017-2020. We should expect to see both kinds of operations by different actors based on their capabilities. The thing to note, though, is that there appears to be, at some level, a convergence with an earlier era of disinformation operations.

Postscript: I should add that even though the impression-based method cannot exclude the ‘selection effect’, we don’t seem to have a good way to measure the long-term effects of repeated exposure yet. Yes, we do know of the ‘illusory truth effect’ (repeated statements are perceived to be more truthful than new statements) and the ‘continued influence effect’ (belief in false information persists even after the underlying information is corrected/fact-checked) from a fact-checking perspective (and, I would argue, from the lens of an individual). What is the effect on this self-selected cohort? And what are the knock-on effects in the context of polarisation (affective and knowledge-based)?

How do platforms approach them?

In April, Jon Bateman, Natalie Thompson and Victoria Smith analysed how various platforms approach Influence Operations based on their Community Standards.

Broadly, they concluded, there are 2 types of approaches:

Generalised approaches include the use of short, sweeping language to describe prohibited activity, which enables platforms to exercise discretion.

Particularised approaches include the use of many distinct and detailed policies for specific types of prohibited activity, which provides greater clarity and predictability.

These approaches have implications that go beyond the mere framing of Community Standards.

Generalised

- Based on standards: Loose guides that may require significant judgement

- Give platforms flexibility to enforce in spirit

- Take less time and effort to craft, tweak (potentially better for smaller, newer platforms)

Particularised

- Based on rules: Specified set of necessary and sufficient conditions leading to defined outcomes

- More transparent and predictable for users.

- Can help defuse perceptions of arbitrary decision-making.

Of course, no platform falls completely in either of these buckets, but Facebook and Twitter tend towards the ‘particularised model’.

There’s also a tendency not to use commonly used terms like misinformation, disinformation, influence operations, etc. Instead, the approach has been to try and break them down into sub-categories such as spam, harassment, etc. And, rely on generalised terms and/or self-coined terms like Coordinated Inauthentic Behaviour (Facebook), Coordinated Harmful Activity (Twitter), I-C-BATMAN?NOPE (Damn, I thought I could slip that one past you), etc.

The section that I was most fascinated with was the one about the various elements of platform policies and the ABCDE framework.

A - Actors

B - Behaviours

C - Content

D - Distribution

E - Effects.

Aside: You might recall Camille Francois’ ABC framework from Edition 1 or the ABC(D) modification suggested by Alexandre Alaphilippe in the context of disinformation. The E appears to have gained currency since COVID-19 and (possibly) the 2020 U.S. Elections.

While much of the public conversation is focused on ‘actors’ (content too, to be honest), the policies focus the most on behaviours.

(Image Source)

How ‘influential’ are their responses?

I should point out that this sub-heading should include the word effective instead of influential, but I wanted to throw in some wordplay. The other important question is what kind of responses platforms end up formulating and how they actually fare.

Kamya Yadav wrote about this in January 2021.

  • 83 of 92 platform announcements regarding interventions happened in 2019 (17) and 2020 (66). Potential reasons include:

    • Real-world events (COVID, US elections) could have triggered a new wave of influence operations.

    • More demands for accountability from governments, media, users, etc.

    • Malicious actors evolve, necessitating new countermeasures.

    • Experts have learned more about how influence ops work, resulting in new interventions.

  • Redirection (53) and Labelling (24) accounted for 77 of the 104 interventions identified. There are also some astute observations about the growing prevalence of these kinds of interventions.

    • They counter-balance ‘wholesale’ bans and takedowns.

    • Conversely, they place a greater burden on individual actors to choose how to respond. This is not a value-judgement since this also implies that users have more choice compared with takedowns/bans.

  • Also notable that most of these were user interface/user experience tweaks. I couldn’t confirm whether the ‘nicer’ News Feed changes that were reversed in December 2020 were considered or not.

  • Only 8% of the initial announcements stated whether or not the various interventions had been tested for effectiveness before a mass rollout.

Internal Influences

Ok, let’s zoom out a little bit. There’s another aspect of this we should consider - how can it impact domestic politics?

Imagine hypothetical states A and B that have an adversarial relationship. Party X is in power in State A. Domestic opponents in State A criticise/oppose a number of Party X’s actions. State B opposes and criticises a certain subset of these (those relevant to State B). Note also that Domestic opponents’ interest in Party X is significantly higher than State B’s, unless some form of overt aggression from State B is in the picture. Thus, it is inevitable that there will be some convergence between the issues, arguments and narratives employed by Domestic opponents and State B. For simplicity, I haven’t represented internal dynamics within State B.

This presents 2 challenges for Domestic opponents in State A:

  • Avoid being co-opted/misused by State B operatives.

  • Avoid being characterised as State B agents or ‘speaking the same language’ by Party X and its allies.

This is not necessarily unique to the Information Age. However, it does present new opportunities for State B to co-opt and misuse legitimate arguments raised by Domestic opponents - which also widens the scope of issues it can use beyond just those that are directly relevant to State B. This was one of the trends Facebook highlighted in its State of IO report:

Blurring of the lines between authentic public debate and manipulation: Both foreign and domestic campaigns attempt to mimic authentic voices and co-opt real people into amplifying their operations.

And, it also gives Party X allies additional opportunities to delegitimise/stigmatise Domestic opponents. Remember, ‘there are no internal affairs’ affects Domestic opponents too.


… Meanwhile in India

▶️ It has been hard to miss the spate of ‘legal request’ notifications that Twitter has been sending various users (just search for mentions of @TwitterIndia). Pranesh Prakash points out that this has been the practice ‘for long’. Twitter’s transparency reports for India (available till Jan-Jun 2020 as of writing this) certainly indicate an upward trajectory. We’ll have to wait till January 2022 to see similar numbers for Jan-Jun 2021. Jul-Dec 2020 should be coming out soon, so it will be interesting to see if the upward trajectory holds.

Related:

  • Twitter restricts accounts in India to comply with government legal request [Manish Singh - TechCrunch]

  • If you head over to Right Wing Twitter, you’ll chance upon similar screenshots of emails from Twitter due to ‘legal requests’ doing the rounds.

▶️ Ayushman Kaul’s investigation [DFRLab] into Facebook pages operated by Hindu Janjagruti Samiti and Sanatan Sanstha.

[The network on] Facebook comprised of at least 46 pages and 75 groups to promote hostile narratives targeting the country’s religious minority populations. Leveraging a potential reach of as many as 9.8 million Facebook users, the organization has published written posts, professionally edited graphics, and video clippings from right-wing and state-affiliated media outlets to demonize India’s religious minorities and stoke fear and misperception among India’s majority Hindu community.

As you read this, keep in mind 2 (potentially contradictory) things from the first part of section 1 - impressions/reach may not directly correlate with effects, and we don’t yet have a way to measure the long-term impact of repeated messaging, even on audiences already predisposed to/bought into the message(s).

Another report on these groups indicated that Facebook ‘quietly banned’ the pages in September 2020. It points out that the ban was partial and that an additional ‘32 pages with more than 2.7 million followers between them remained active on Facebook until April.’ [Billy Perrigo - Time]

Ayushman’s investigation includes a press release published in Sanatan Prabhat indicating that 4 pages of Sanatan Sanstha were ‘closed’.

Aside: If anyone has figured out how Facebook defines the ‘militarised’ in the context of militarised social movements, please let me know.

▶️ TeamSaath’s Twitter account was suspended for a few hours on Friday, 11th June. The account, which, as its profile suggests, stands against "Abuse Troll Harassment", highlights abusive Twitter accounts and encourages users to report them. Now, Twitter does not seem to have an explicit policy against encouraging mass reporting. It is also unknown if the account’s suspension was itself a result of mass reporting 🤷‍♂️.

Aside #1: From a Katie Notopoulos post in 2017 [BuzzFeedNews] on her experience with being mass-reported.

But for now, Twitter is getting played. They’re trying to crack down on the worst of Twitter by applying the rules to everyone, seemingly without much context. But by doing that, they’re allowing those in bad faith to use Twitter’s reporting system and tools against those operating in good faith. Twitter’s current system relies on a level playing field. But as anyone who understands the internet knows all too well, the trolls are always one step ahead.

Aside #2: In August 2020, Facebook did suspend a network from Pakistan that encouraged mass reporting through a browser extension (they called it ‘coordinated reporting of content and people’).

Ok, getting back. This is where the part about policies focusing heavily on behaviours (from part 2 of the first section) came back to me.

(Image Source)

You’ll see that Actors were addressed the least in policies - so it could be that TeamSaath’s stated motivations mattered less. I’ll repeat here, we don’t know why Twitter initially suspended the account.

There’s a growing conversation about context-specific enforcement [Jordan Wildon - Logically]. While I completely understand the motivation behind that approach, I am sceptical of platforms’ abilities to get this right and worry about the type of outcomes it will actually lead to. For example, here’s a random sample of Tweets that were marked ‘sensitive’, and I have not understood why.

Please excuse the shoddy image editing.

[Screenshots #1, #2, #3]


Of Algo-Read'ems, (Col)Lapses in Time, Moderating Moderation

MisDisMal-Information Edition 42

What is this? MisDisMal-Information (Misinformation, Disinformation and Malinformation) aims to track information disorder and the information ecosystem largely from an Indian perspective. It will also look at some global campaigns and research.

What this is not? A fact-check newsletter. There are organisations like Altnews, Boomlive, etc., who already do some great work. It may feature some of their fact-checks periodically.

Welcome to Edition 42 of MisDisMal-Information

Curate and Algo-Read’em

Last weekend, I ran a little poll on Twitter to try and find out how people sorted their feeds - algorithmically or reverse-chronologically. This was driven by an unscientific observation that most people around me tended to use the algorithmic or ‘Home’ option instead of the ‘Latest Tweets’ option that I primarily rely on, and massive FOMO over what I could be missing out on.

What, I said it was little, didn’t I?

Needless to say, that ridiculously small sample didn’t really help me in any way. In April, though, Jack Bandy and Nicholas Diakopoulos had published a study on how Twitter’s algorithmic curation affects what users see in their timelines (there’s a Medium version too).

Wait, Prateek, Twitter has a very limited and privileged set of users, why do I care?

Fair point. I’m using Twitter as an example, but it is a broader point. As Sinan Aral writes in Hype Machine:

“the Hype Machine’s content-curation algorithms reduce consumption diversity through a Hype Loop that nudges us toward polarization: Friend-suggestion algorithms connect us with people like ourselves. So the content shared by our contacts is biased toward our own perspectives. Newsfeed algorithms further reduce content diversity by narrowing our reading options to the items that most directly match our preferences. We then choose to read an even narrower subset of this content, feeding biased choices back into the machine intelligence that infers what we want, creating a cycle of polarization that draws us into factionalized information bubbles.”

So there are multiple levels of curation/filtration with varying degrees of human-algorithm interaction at work:

  • Who platforms recommend we connect with.

  • Who we ultimately connect with on platforms.

  • What newsfeeds recommend to us, based on what we do.

  • What we finally choose to click on/read.

Bandy and Diakopoulos had to rely on eight “automated puppet accounts”, comparing their chronological and algorithmic feeds. Another post describing their process specifies that they basically cloned/emulated a set of accounts from an identified set of left- and right-leaning communities and sampled 50 tweets from their timelines twice a day. Note that the puppets didn’t actually click on links, which Twitter does use as a signal for personalisation, so that’s a potential limitation. Anyway, here’s what they found (a toy sketch of how such audit metrics can be computed follows the list):

  • Twitter’s algorithm showed fewer external links:

    On average, 51% of tweets in chronological timelines contained an external link, compared to just 18% in the algorithmic timelines

  • Many Suggested Tweets

    On average, “suggested” tweets (from non-followed accounts) made up 55% of the algorithmic timeline.

  • Increased Source Diversity

    the algorithm almost doubled the number of unique accounts in the timeline

    …algorithm also reined in accounts that tweeted frequently: on average, the ten most-tweeting accounts made up 52% of tweets in the chronological timeline, but just 24% of tweets in the algorithmic timeline

  • Shift in topics

    They clustered tweets containing political, health and economic information, and information about fatalities. Except for the political cluster, the other categories saw reduced exposure in the algorithmic timelines.

  • ‘Slight’ partisan echo chamber effect

    Tagging accounts as Influencers on the Left, Niche Left, Bipartisan, Niche Right and Influencers on the Right, here’s what happened:

    For left-leaning puppets, 43% of their chronological timelines came from bipartisan accounts (purple in the figure below), decreasing to 22% in their algorithmic timelines

    Right-leaning puppets also saw a drop. 20% of their chronological timelines were from bipartisan accounts, but only 14% of their algorithmic timelines

    Notably, the representation of Influencers on the Left and Right remained more or less constant across both types of timelines and left/right-leaning accounts.
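As promised above, a minimal sketch of how audit metrics like these could be computed from a pair of collected timelines. The data shapes and field names are my own assumptions, not the study’s actual pipeline.

```python
from collections import Counter
from typing import List, NamedTuple


class Tweet(NamedTuple):
    author: str
    has_external_link: bool
    followed: bool  # False => a "suggested" tweet from a non-followed account


def audit(timeline: List[Tweet]) -> dict:
    """Compute summary metrics of the kind reported in the study."""
    n = len(timeline)
    by_author = Counter(t.author for t in timeline)
    top10_share = sum(count for _, count in by_author.most_common(10)) / n
    return {
        "external_link_share": sum(t.has_external_link for t in timeline) / n,
        "suggested_share": sum(not t.followed for t in timeline) / n,
        "unique_accounts": len(by_author),
        "top10_account_share": top10_share,
    }


# Collect both feeds for the same puppet account, then compare, e.g.:
# audit(chronological_timeline) vs audit(algorithmic_timeline)
```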

Yes, there are limitations to this study, and we need to learn more about these kinds of effects. Disappointingly, for those of us who love a good villain, this neither indicts algorithms nor exonerates them.

Also Read:

Benedict Evans’ 2018 post about ‘The death of the newsfeed’:

according to Facebook, its average user is eligible to see at least 1,500 items per day in their newsfeed. Rather like the wedding with 200 people, this seems absurd. But then, it turns out, that over the course of a few years you do ‘friend’ 200 or 300 people. And if you’ve friended 300 people, and each of them post a couple of pictures, tap like on a few news stories or comment a couple of times, then, by the inexorable law of multiplication, yes, you will have something over a thousand new items in your feed every single day.

This overload means it now makes little sense to ask for the ‘chronological feed’ back. If you have 1,500 or 3,000 items a day, then the chronological feed is actually just the items you can be bothered to scroll through before giving up, which can only be 10% or 20% of what’s actually there. This will be sorted by no logical order at all except whether your friends happened to post them within the last hour. It’s not so much chronological in any useful sense as a random sample, where the randomiser is simply whatever time you yourself happen to open the app. ’What did any of the 300 people that I friended in the last 5 years post between 16:32 and 17:03?’ Meanwhile, giving us detailed manual controls and filters makes little more sense - the entire history of the tech industry tells us that actual normal people would never use them, even if they worked. People don't file. 

And yet, I’ll be sticking with my combination of Latest Tweets + Lists + Advanced Search on keywords and lists, filtered by engagement.

They will curate, and I’ll go read’em.


(col)Lapses in time

A video made by a 5-year-old complaining about excessive online classes, uploaded as a WhatsApp status message on Sunday, took on a life of its own, resulting in the district administration announcing a reduction in the duration and number of classes on Tuesday [Naveed Iqbal - Indian Express].

Imagine how long this would have taken through ‘official channels’ and ‘established procedure’.

In a very different example of this ‘time collapse’, Kashmir Hill, writing about Emily Wilder being fired by The Associated Press after old posts were dug up, noted:

Part of the problem is how time itself has been warped by the internet. Everything moves faster than before. Accountability from an individual’s employer or affiliated institutions is expected immediately upon the unearthing of years-old content. Who you were a year ago, or five years ago, or decades ago, is flattened into who you are now. Time has collapsed and everything is in the present because it takes microseconds to pull it up online. 

The way I see it, there are 2 distinct ‘time collapses’ to consider here.

  1. The time elapsed between something from the past (a lapse in judgement, a problematic position/opinion, etc.) and now. Referred to as 1.

  2. Now that this ‘something’ has been brought up, the time to respond/react. Referred to as 2.

We’ve seen this countless times in India too: old tweets/posts are pulled up, and an employer is then (typically) pressurised to take action, who then (again, typically) caves and obliges.

1 (also, typically - not always) centres around an individual and seems to have increased thanks to the internet. 2 (generally) attains significance when enacted by some sort of collective - a community/group of people, an institution, etc. and has significantly shrunk in our modern information ecosystem.

There’s a lot to consider about 1, the calling out it results in and the wide range of outcomes it can have. Indeed, there have been many notable examples over the last few weeks alone [stand-up comics’ past tweets/jokes being brought up].

As someone who thinks about both public discourse and public policy, 2 grabs my interest more. Note that I am considering the pressure to respond/react faster and not the ability to respond/react faster.

  • What kind of institutional responses does this time-collapse lead to? Do we have the luxury to really think things through? Is the need to be ‘seen’ to be responding quickly eclipsing the need to put together ‘quality’ responses?

  • The kind of responses communities/institutions craft in 2 can determine how much 1 remains a factor. As Kashmir Hill quotes Krystal Ball:

    “The less successful it is, the less that it works,” she said, “the less interest in it people are ultimately going to have.”

Also Read:

Charlie Warzel - The Internet is Flat


Moderating Content Moderation

“Content Moderation is the essence of what platforms do” - (heavily paraphrased) Tarleton Gillespie. - Yes, true.

“We need platforms to content-moderate the hell out of content on platforms” - Simplistic version of the argument made by many people on the internet - Err, let’s talk about this.

Back when platforms started playing a more active role with COVID-19 information disorder, Rohan Seth and I speculated in a document about their responses.

The bits about calls for more moderation by society and a more interventionist role by platforms seem to have played out.

As Evelyn Douek writes in Wired in an aptly titled piece, ‘More Content Moderation Is Not Always Better’:

The internet is sitting at a crossroads, and it’s worth being thoughtful about the path we choose for it. More content moderation isn’t always better moderation, and there are trade-offs at every step. Maybe those trade-offs are worth it, but ignoring them doesn’t mean they don’t exist.

As companies develop ever more types of technology to find and remove content in different ways, there becomes an expectation they should use it. Can moderate implies ought to moderate. After all, once a tool has been put into use, it’s hard to put it back in the box. 

Oh, and look, India finds a mention too:

Authoritarian and repressive governments around the world have pointed to the rhetoric of liberal democracies in justifying their own censorship. This is obviously a specious comparison. Shutting down criticism of the government’s handling of a public health emergency, as the Indian government is doing, is as clear an affront to free speech as it gets. But there is some tension in yelling at platforms to take more down here but stop taking so much down over there. So far, Western governments have refused to address this.

Aside: As I was writing this edition, cartoonist @MANJUtoons tweeted that his account was the subject of a legal request from India.

Anyway, there are 2 big takeaways here.

  1. Content Moderation at scale is impossible to do well, as Mike Masnick says. More moderation almost certainly means more false positives. Just ask Palestinian users who have been facing this for years. Perhaps we need to moderate how much content we moderate, and

  2. Just deleting content is not going to solve the complex politico-socio-economic problems we have to fix. So maybe we need to moderate our expectations about outcomes from simply increasing how much content we moderate.

Also Read:

I realise this is the 2nd Benedict Evans post in this edition - Is content moderation a dead end?

However, it often now seems that content moderation is a Sisyphean task, where we can certainly reduce the problem, but almost by definition cannot solve it. The internet is people: all of society is online now, and so all of society’s problems are expressed, amplified and channeled in new ways by the internet. We can try to control that, but perhaps a certain level of bad behaviour on the internet and on social might just be inevitable, and we have to decide what we want, just as we did for cars or telephones - we require seat belts and safety standards, and speed limits, but don’t demand that cars be unable to exceed the speed limit. 

Hence, I wonder how far the answers to our problems with social media are not more moderators, just as the answer to PC security was not virus scanners, but to change the model - to remove whole layers of mechanics that enable abuse. So, for example, Instagram doesn’t have links, and Clubhouse doesn’t have replies, quotes or screenshots. Email newsletters don’t seem to have virality. Some people argue that the problem is ads, or algorithmic feeds (both of which ideas I disagree with pretty strongly - I wrote about newsfeeds here), but this gets at the same underlying point: instead of looking for bad stuff, perhaps we should change the paths that bad stuff can abuse.

Also Read #2


… Meanwhile in India

▶️ A book with 56 blank pages titled ‘Masterstroke’ made it onto Amazon’s listings [Nivedita Niranjankumar - BoomLive]

Related:

  • This is not surprising, since we have seen past coverage about conspiracy theory books on Amazon [BuzzFeedNews], how it puts ‘misinformation at the top of your reading list’ [TheGuardian] and even a study by ISD Global. What was the implication? I’ll go read ’em… er… the algorithm.

  • (This is the last Benedict Evans link in this edition) Does Amazon know what it sells?

    There’s an old cliché that ecommerce has infinite shelf space, but that’s not quite true for Amazon. It would be more useful to say that it has one shelf that’s infinitely long. Everything it sells has to fit on the same shelf and be treated in the same way - it has to fit into the same retailing model and the same logistics model. That’s how Amazon can scale indefinitely to new products and new product categories

▶️ Ayushman Kaul investigates India v/s Disinformation and Press Monitor [DFRLab]

This instance is the latest in a series of cases in which PR companies and digital communications firms operate online publishing networks claiming to fight disinformation while simultaneously aligning with individual politicians or governments. The DFRLab has reported on prior examples of so-called “disinformation-as-a-service” providers, including our coverage of Operation Carthage and Archimedes Group.

In a written response to questions posed by the DFRLab, the company confirmed that it had contracts with the Indian government and some of the country’s foreign embassies to conduct media monitoring services but claimed that India Vs. Disinformation was a separate independent initiative not related to these contracts. Press Monitor also told the DFRLab that it created the websites as a means of gaining the favor of and receiving further commercial contracts from the Indian government.

Related:

  • If you’ve been a subscriber for a while, you may recall that India v/s Disinformation made an appearance in edition 13 > New India, New Twiplomacy.

  • And, as I said in edition 29 > Eww Disinfo, expect to see more investigations focusing on India.

    There’s also a broader point here. Influence operations can be found wherever you look for them (if you look hard enough and are smart enough, obviously) - we just seem to hear a lot more about Russian, Chinese and Iranian ones today because a lot of resources are focused on looking there. In fact, Mahsa Alimardani, who spoke about Iranian influence operations before the EUDisinfo team said (in the context of an overemphasis on Iran) [from ~14:22:30 in the video]

▶️ An intriguing piece: “Covid19 Is the First Global Tragedy in a Post‑Truth Age. Can We Preserve an Authentic Record of What Happened?” [Saumya Kalia - TheSwaddle]

▶️ The Telangana police is ‘closely watching’ social media [Aihik Sur - Medianama]


Around the world in 10 points

▶️ Twitter appears to be gearing up for a wider rollout of Birdwatch, its crowdsourced fact-checking programme. [Lucas Matney - TechCrunch].

  • Read Poynter’s analysis on issues with Birdwatch

▶️ Advertisers want to audit platform transparency efforts

▶️ The Truth Brigade will counter right-wing disinformation on social networks. [Cat Zakrzewski - Washington Post]

▶️ EU wants to strengthen its Code of Practice on Disinformation.

▶️ Self-regulation 2:0? A critical reflection of the European fight against disinformation - Ethan Shattock

▶️ In Pakistan, pro-government fact-checkers are trolling and targeting journalists. [Ramsha Jahangir - Coda]

▶️ Australia’s drug regulator is considering referring COVID vaccine misinformation to law enforcement. [Paul Karp - TheGuardian]

▶️ Misinformation and the Mainstream Media

▶️ Do You See What I See? Capabilities and Limits of automated multimedia content analysis - Carey Shenkman, Dhanaraj Thakur, Emma Llansó.

▶️ #Scamdemic, #Plandemic, or #Scaredemic: What Parler Social Media Platform Tells Us about COVID-19 Vaccine - Annalise Baines, Muhammad Ittefaq and Mauryne Abwao.


Of Spaghetti on the wall, one nation one internet, fact(un)checked

MisDisMal-Information Edition 41

What is this? MisDisMal-Information (Misinformation, Disinformation and Malinformation) aims to track information disorder and the information ecosystem largely from an Indian perspective. It will also look at some global campaigns and research.

What this is not? A fact-check newsletter. There are organisations like Altnews, Boomlive, etc., who already do some great work. It may feature some of their fact-checks periodically.

Welcome to Edition 41 of MisDisMal-Information

Around 100 years ago, in early April in edition 35 - I had promised you a less ‘doom and gloom’y edition. I may not deliver fully on that, but for a change, I’ll start off with something positive.

Not spaghetti on the wall

In a study titled ‘Combining interventions to reduce the spread of viral misinformation’, a team comprising Joseph B. Bak-Coleman, Ian Kennedy, Morgan Wack, Andrew Beers, Joseph S Schafer, Emma S. Spiro, Kate Starbird and Jevin D. West attempted to compare individual interventions to curb the spread of false information (deplatforming, virality circuit-breakers, etc.) with a combination of different types of interventions. The results?

we reveal that commonly proposed interventions–including removal of content, virality circuit breakers, nudges, and account banning—are unlikely to be effective in isolation without extreme censorship. However, our framework demonstrates that a combined approach can achieve a substantial (~50%) reduction in the prevalence of misinformation. Our results challenge claims that combating misinformation will require new ideas or high costs to user expression. Instead, we highlight a practical path forward as misinformation online continues to threaten vaccination efforts, equity, and democratic processes around the globe.

Let’s dig in a little. Here’s what happened when they looked at individual interventions:

  • Content removal: With the assumption that platforms have the ability to perfectly remove all instances of a particular piece of content, outright removal resulted in a ~93% (median) reduction in engagement (tweets, replies, quote-tweets and retweets) if done within 30 minutes, and a ~50% reduction if done after a 4-hour delay.

  • Virality circuit breakers: A 10% reduction in virality implemented after 4 hours can lead to a 33% reduction in the spread of false information.

  • Nudges + reduced reach: I’ll quote here: ‘Nudges that reduce sharing by 5, 10, 20, and 40% result(ed) in a 6.6, 12.4, 22.6, and 38.9% reduction in cumulative engagement, respectively’.

  • Account bans:

    • For ~1500 accounts removed in early 2021, engagement with false information dropped by 12%.

    • Then they considered a 3 strikes scenario. For verified accounts, this resulted in a ~8% reduction in engagement. It appeared to make a significant difference when the threshold for removal was set at having 10K followers.

With combination interventions, they considered 2 levels.

  • Modest: ~36% reduction in the volume of misinformation.

    • Reducing Virality: Applied to 5% of the content, reducing virality by 10% and enforced after 2 hours.

    • 20% of this content was removed after 4 hours.

    • Nudges resulted in 10% less sharing of false information.

    • 3 strikes rule for account bans applied to those with >100K followers.

  • Aggressive: ~49% reduction

    • Reducing Virality: Applied to 10% of the content, reducing virality by 20% and enforced after 1 hour.

    • 20% of this content was removed after 2 hours. (This isn’t explicitly mentioned; I’ve inferred it from how the paper was worded.)

    • Nudges resulted in 20% less sharing of false information.

    • 3 strikes rule for account bans applied to those with >50K followers.

I’ve obviously simplified heavily here, so as usual, I will recommend reading the actual paper too. Nevertheless, what the analysis (relying on simulations) does point to is the need to explore using multiple kinds of interventions simultaneously. I will caution that any such combinations should be tested, and the results should be published before we start throwing spaghetti on the wall to see what sticks. Because experiences of many around the world tell us that it is the marginalised and vulnerable who are the most adversely impacted by arbitrary steps.
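As a toy illustration of why combining helps (this is emphatically not the paper’s model, which simulates actual sharing cascades; it only shows how modest, independent reduction factors compound):

```python
def combined_reduction(*reductions: float) -> float:
    """Overall fraction of engagement removed if each intervention
    independently removes its stated fraction of what remains.
    (A strong simplifying assumption; real interventions interact.)
    """
    remaining = 1.0
    for r in reductions:
        remaining *= 1.0 - r
    return 1.0 - remaining


# A nudge (-10%), mild virality damping (-15%) and limited removals (-20%)
# jointly cut engagement by ~39% - more than any one of them alone.
print(f"{combined_reduction(0.10, 0.15, 0.20):.0%}")  # 39%
```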

Related: Facebook announced it would notify people if a page they visit has repeatedly shared false information, reduce the reach of people who share false information repeatedly (beyond the offending posts, which it already claimed to do), as well as show a redesigned prompt when people are about to post debunked false claims. Notably, it didn’t say how many strikes it would take to reduce the reach of all posts by a user. What are the odds we’ll see this applied to any prominent Indian accounts any time soon?


One earth one internet v/s One nation one internet

Whether it is the Indian state's renewed face-off with Twitter, or Russian threats to throttle Google's traffic, or even Canada doing an ‘atmanirbhar’ internet (with a bill that wants to prioritise Canadian content) - such events always raise the questions about models of internet governance and power balance (or imbalance) between various participants.

In 'Four Internets', Kieron O’Hara and Wendy Hall identified 5 types of digital governance model: Silicon Valley's Open Internet, driven by technology; Brussels' Bourgeois Internet, with its focus on peace, prosperity and cohesion through rules; Beijing's Authoritarian Internet, with its emphasis on control and surveillance; and DC's Commercial Internet, which places the interests of private actors at the centre. Moscow's spoiler model, exploiting an open, decentralised internet, featured as an addendum.

18 months later, in 'India, Jio, and the Four Internets', Ben Thompson highlighted an 'increased splintering in the non-China model', with a 'U.S. Model' (similar to a combination of O'Hara and Hall's open and commercial internets), a European model (similar to the bourgeois internet), and an Indian model characterised by 'unencumbered' foreign participation in 'digital goods' and a 'tighter leash' over the physical layer. Jack Balkin's 2018 essay 'Free Speech is a triangle' depicted dyadic interactions between states, corporations and societies via an inverted triangle. The ability to speak is an outcome of the power struggle between these participants.

A recent paper by Demos provides a framework to visualise these models and relative power. It identifies four powers that will determine rules for and shape the internet - states, corporations, individuals and machines, with each of them enjoying some degree of control/power.

Image Source - Page 9

The reality of the internet as a 'network of networks' and a 'borderless entity' also implies each model and its four powers will interact with other models/powers and influence them. Such models are not a perfect representation of reality but provide useful frameworks to think about the future of the internet.


Fact (un) checked

In ‘Checking Facts Even If One Can't’, a public post on her newsletter, Zeynep Tufekci writes, in the context of fact-checks around the lab-leak theory last year:

The cluster of “lab leak” theories itself needs unpacking, as it includes claims that seem to range from plausible but uncertain to what I’d consider unlikely and distracting.  But nonetheless, it’s useful to walk through one example of a “fact-check” from Politifact from last year that has recently been “archived”:

An honest evaluation in September 2020—before the WHO investigative trip and everything that has been revealed since—would be something along the lines of this: “We don’t know and there are a lot of conflicting opinions about this, and the evidence base is incomplete and different groups of scientists have different views. We are not in a position to assign plausibility levels because that’s what scientific debate is about and we are not scientists or investigative journalists, and we are supposed to fact-check things that are clear facts, not resolve complex scientific debates taking place in a politically-charged landscape”.

I’ve been thinking about this extensively in the context of the claims attributed to Luc Montagnier that those who have taken the vaccine will die within 2 years. Now, one part of debunking this claim has been clarifying that the statement was misattributed. But let’s suppose he did actually say that. How do you debunk the ‘2 years’ part of the claim, since no one received the vaccine more than 2 years ago? This is not analogous to the example in Zeynep Tufekci’s post, of course. The question I am trying to get at is: how do you square the need for authoritative public messaging with the importance of conveying the underlying complexity?

Related: “Facebook will no longer take down posts claiming that Covid-19 was man-made or manufactured” [Cristiano Lima - Politico].


Meanwhile in India

GoI’s ‘feuds’ with Twitter, Facebook and Whatsapp are all escalating and will likely have evolved further between the time I write this edition and when it hits your inboxes.

  • On the Twitter front, both the Delhi Police [ANI Twitter thread] and MEITY [GOI_Meity tweet] have put out strongly (and oddly) worded press releases after Twitter’s statement earlier on Thursday, which, among other things, expressed concern for the safety of its employees [Soumyarendra Barik - Entrackr]. Remember Vittoria Elliot’s story on ‘hostage-taking laws’ from edition 39?

  • On the Whatsapp and Facebook front, there are few better sources to follow for the traceability debate in India than Aditi Agrawal’s reportage. Her most recent piece (at the time of writing), based on the petitions filed by Facebook and Whatsapp, is also a must-read.

Ok, now back to the stuff I had planned for this section.

  • In edition 1, I wrote about an analysis of patterns in COVID-19 misinformation in India by Syeda Zainab Akbar, Divyanshu Kukreti, Somya Sagarika and Joyojeet Pal (go back and read it, if you haven’t). Now, Syeda Zainab Akbar and Joyojeet Pal have put out another analysis based on the 2nd wave.

    Read this along with Meghna Rao’s RestofWorld piece on misinformation in a family Whatsapp group.

  • FirstDraft’s Carlotta Dotto and Lucy Swinnen investigated Islamophobic tweets from India in the context of social media conversations around Palestinians. This is certainly a trend: I had done a preliminary analysis of a similar phenomenon around violence in Sweden and Norway in September 2020 [TheWire], and noticed a similar pattern when the BLM protests started around a year ago [Edition 6]. Sections of ‘Indian Twitter’ just seem to be waiting for a trigger to jump on.

  • IndianExpress has a round-up of Mr. Ramdev’s ‘controversial remarks’. Related: Alishan Jafri on the contrast between GoI’s attempts to curb the usage of ‘Indian variant’ and its inaction in response to other narratives [TheWire]

  • 2 arrests that received some coverage:

    • A YouTuber from Ludhiana was handed over to the Arunachal Pradesh police for ‘racial remarks against a Congress MLA from Arunachal Pradesh and for bearing ill will towards the people of the state.’

    • Yasser Arafat, for posting ‘a pro-Palestine image and comment on his social media page’.

      Arafat’s post on his Facebook page, which he calls Azamgarh Express and on which he usually posts local news that is sometimes combined with his own views, had simply noted that in Gaza the coming Friday, every house and every vehicle would fly the Palestinian flag.

      But some readers among the Azamgarh Express page’s 17 lakh followers appeared to have misread the post as an appeal by Arafat for every Muslim in Azamgarh to raise the Palestinian flag in their home and on their vehicle on the coming Friday.


Around the world

Of Voice, Loyalty, Sticks, Flag-elation, Flagellation and Seeing Clearly

MisDisMal-Information 40


Welcome to Edition 40 of MisDisMal-Information

Voice, Loyalty and Stick

The situation in Gaza is quite grim. Even with the ceasefire, there’s no way around that. One thing that has been notable to watch, though, is the mobilisation of support for Palestinians. 

From ‘Social Media is the Mass Protest’ (NYTimes):

“It feels different this time, it definitely does,” said Amani Al-Khatahtbeh, 29, the Palestinian-Jordanian-American founder of MuslimGirl.com, whose posts on the topic have been ubiquitous across social media over the past week. “I wasn’t expecting this to happen so quickly, and for the wave to shift this fast. You don’t see many people out on the streets in protest these days, but I would say that social media is the mass protest.”

In a recently published paper about the Farmer Protests in India, based on a framework that looks at mobilisations through the lens of identity, network and immediate causes, I wrote about a network based on ‘liberal bonds’. For context, the network aspect is relevant because it helps a movement scale.

Third, an existing ecosystem of groups/individuals that self-identify as liberals. This network is held together by ‘liberal bonds’. The bonds within this grouping can vary from weak to strong in terms of strength and have formed over years of iterative engagement on and off social media platforms. Nevertheless, interaction over social media platforms is essential in this network’s width and mobilising potential. Boundaries of this network are not well-defined as participants can voice support or exit depending on causes, though they are hardening over time. 

Here’s how I had attempted to define ‘liberal’ since it is used in many ways in regular discourse. This is not comprehensive by any stretch, but I think you’ll get an idea of where I am coming from.

The third identity relates to individuals/groups/entities who self-identify as ‘Liberal’. This label is often attributed/misattributed in contemporary discourse. However, in this paper, the term is not meant as a complimentary or pejorative descriptor. It aims to capture a broad identity-type that classifies its own political identity as favouring individual rights and social justice in opposition to what it considers state support for majoritarianism, exclusion, chauvinism, and discrimination. This identity group broadened the support base for the movement.

There are shades of this around the support for the Palestinian cause as well: many of the same voices that supported BlackLivesMatter, MeToo and StopAsianHate (in the U.S.), and the anti-CAA and Farmer protests (in India), have called out Israeli aggression. The fluidity of boundaries is evident, though, since not all voices are unequivocal. From the same NYT article:

Perhaps an even more telling measure of the online fervor was the backlash awaiting the singer Rihanna, who, under normal circumstances, can do no wrong in fans’ eyes, when she condemned “the violence I’m seeing displayed between Israel and Palestine!” drawing accusations that she was equating the two sides’ actions and the consequences. Sample reply: “You sounded like ‘all lives matter.’”

And the backlash seems to be an indication of the hardening, or a call for consistency, depending on how you look at it (i.e. if you don’t stick together, you may get the stick; this is also why I dropped ‘exit’ from exit, voice and loyalty, and replaced it with a stick).

There’s certainly an upside to this dynamic. A loose coalition of sorts that comes together for a ‘just’ cause. Going back to the paper:

With the identity-based networks transcending international boundaries due to various interacting identity-types and global information flows enabled in particular by social media platforms, encrypted messaging services, such networked protests immediately capture attention across the world. Thus, states should expect greater scrutiny and sharper criticism of their response to it, as well as their track record of dealing with similar movements in the past.

Such mobilisation is extraordinary. Yet, the fact that one needs to rely on the extraordinary is an indication that the ordinary is failing for any number of reasons. 

And, just as there is a coalition for a just cause, there can be an opposing one for a cause it believes to be ‘just’ too. Another thing that seemed evident during the farmer protests was a counter-movement that sprang up in response to support for the protests. There are shades of that here, too, as right-wing groups (even in India) have supported (and cheer-led) Israel’s actions. This creates a dynamic conflict between these networks. From the paper (sorry, second-last time):

The existing political identity-based networks are always ‘ready-to-go’ and quickly enter into a state of conflict comprised of many simultaneously occurring engagements ranging from well-reasoned, good-faith arguments to whataboutery, ad hominem attacks, sealioning, overstating or minimising perceived harms and outright fabrication/falsification of information. These conflicts often overlap resulting in perpetual cycles of mobilisation and counter-mobilisation even as immediate causes shift.

And, there is a risk of flattening too:

Conversely, repeated eruptions of protests with short/no intervals over time can also lead to a flattening of global responses and attention. In which case, the costs of relying on attrition may gradually decrease. 

Post-script

If you’re troubled by the reports of tech platforms’ moderation decisions and ‘glitches’ disproportionately affecting Palestinian users, I recommend reading Jillian York’s book Silicon Values, where she documents a pattern of marginalised populations around the world being affected, across aptly titled chapters: Offline Repression Is Replicated Online, Profit over People and Extremism Calls for Extreme Measures.


Flag-elation and Flagellation 

Another ‘toolkit’ saga is unfolding, as several right-wing handles posted screenshots of a document allegedly made by the INC. An Altnews fact-check by Pooja Chaudhuri and Pratik Sinha contends that one set of screenshots was forged.


On 21st May, Twitter labelled/flagged some tweets containing these screenshots under its ‘manipulated media’ policy. This included what appears to be one of the earliest tweets to get significant attention, by the handle ‘teambharat_’, as well as some by Sambit Patra and other prominent right-wing accounts. I’ve added these to the Labeled Tweet Repository [Notion] I’ve been manually maintaining since December 2020. After this, the hashtag ‘ManipulatedMedia’ trended on Twitter for ~12 hours, as per Trendinalia [link]. If you hop over and look at the tweets, some were obviously elated (sorry, I had to close out that pun).
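For context, the repository is just a hand-maintained table. A minimal sketch of the kind of record such a log might keep per labelled tweet (in Python; the field names are my own illustration, not the actual Notion schema):

```python
from dataclasses import dataclass
from datetime import date

# Illustrative record for a manually maintained labelled-tweet log.
# Field names are assumptions for this sketch, not the real schema.
@dataclass
class LabeledTweet:
    tweet_url: str        # link to the labelled tweet
    handle: str           # account that posted it
    label_text: str       # label shown by the platform
    policy: str           # platform policy invoked
    date_labeled: date
    notes: str = ""

entry = LabeledTweet(
    tweet_url="https://twitter.com/...",  # placeholder, not a real link
    handle="teambharat_",
    label_text="Manipulated media",
    policy="manipulated media policy",
    date_labeled=date(2021, 5, 21),
    notes="Among the earliest high-engagement tweets with the screenshots",
)
```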

The Union government, though, has asked Twitter to remove the labels. We know this only because ‘sources’ have spoken to the media. [Surabhi Agarwal - Economic Times]:

“Such tagging by Twitter appears prejudged, prejudiced and a deliberate attempt to colour the investigation by local law enforcement agencies,” according to official sources who termed Twitter’s action as a “clear overreach, which is totally unwarranted”. 

You will recall that it was not so long ago (~2 weeks) that the Union Government asked social media platforms to ‘control misinformation and discourage fake news’ in the context of COVID-19 [Economic Times].

MediaNama published the advisory (paywall). I am paraphrasing from the call-to-action section:

1- Run awareness campaigns for users not to upload/circulate false news/misinformation related to COVID-19, which can create panic, disturb public order and social tranquillity

2- Take immediate action to disable/remove such content

3- Promote dissemination of authentic information related to COVID-19 as far as possible.

4- Issue warnings to those misusing platforms by ‘indulging’ in such fraudulent activities.

In this particular case, the Internet Freedom Foundation has said that it will be filing an RTI to determine which laws were used as the basis for this.

It will be interesting to see what sort of response they get. In the meanwhile, we’ll probably have to brace ourselves for another round of sabre-rattling between the Indian state and Twitter. And oh, ‘ban Twitter’ is trending on Koo [archive link] (the archive link has it trending much lower than its no. 3 position on the explore page). There was also some activity on the hashtag ‘BanTwitterInIndia’ on, (rubs eyes), Twitter. Not the first time, tbh.

—————

Now, let’s move from flags and elation to flagellation. You’re probably aware that Arvind Kejriwal tweeted about a ‘new form of coronavirus’ in Singapore - for which he faced some backlash [PTI - Economic Times]. Singapore also appears to have invoked its ‘anti-disinformation’ law, POFMA (Protection from Online Falsehoods and Manipulation Act) [ANI - Business Standard]:

The MOH instructed the POFMA Office to issue General Correction Directions to Facebook, Twitter and SPH Magazines Pte Ltd (HardwareZone forum), read MOH statement. Facebook, Twitter and SPH Magazines are required to carry the Correction Notice to all end-users in Singapore who use Facebook, Twitter and HardwareZone.com.

As TheWire reports, Singapore also seems to have considered charging Arvind Kejriwal under the law.

The Singapore envoy then cautioned that his government has considered bringing charges against Kejriwal under a domestic act targeting fake news.

“So, indeed in Singapore, there is an act called the Protection from Online Falsehoods and Manipulation Act known as POFMA. It is meant to mitigate the spread of misinformation, so we reserve the right to invoke POFMA on some of the comments and assertions made by the honourable chief minister on this topic,” Wong said.

The article also points out that there is a clause for extra-territorial application:

There is an extra-territoriality clause in POFMA that allows for action to be taken against a person outside Singapore in case of communication that is “prejudicial to public health, public safety, public tranquillity or public finances” and “incite feelings of enmity, hatred or ill‑will between different groups of persons”, among others.

Now here’s where this gets interesting. In our current information ecosystem, domestic politics and international relations meld together in ways that seem to completely change incentives. There are no internal affairs, or TANIA? Anyway, the union government probably does not want to be seen supporting or defending Arvind Kejriwal in any way (at least, not publicly). Indeed, the MEA and the foreign minister have said that the Delhi CM has ‘no competence to pronounce on COVID variants or aviation policy’ and that ‘[Kejriwal] does not speak for India’.

Politics was always about performance. But social media dynamics make politics more performative than ever before.

Related: Listen to this Social Media Politics episode on The Cultural Sociology of Political Performance, Icons, and Social Media.


Can we see clearly now…

I am going to end this edition on a sobering note. Evgeny Morozov wrote an opinion piece in The Guardian titled ‘Privacy activists are winning fights with tech giants. Why does victory feel hollow?’.

He makes two observations, on the strategy of using privacy transgressions as the centrepiece, that really stuck out to me:

That strategy presumed that such legal transgressions would continue in perpetuity. Now that Alphabet – and soon, perhaps, Facebook – are rushing to leverage machine learning to create personalized ads that are also privacy-preserving, one begins to wonder if putting so many critical eggs into the proverbial privacy basket was a wise choice. Terrorized by the ubiquity and eternity of “surveillance capitalism”, have we made it all too easy for technology companies to actually live up to our expectations? And have we wasted a decade of activism that should have been focused on developing alternative accounts of why we should fear big tech?

Something similar is likely to happen in other domains marked by recent moral panics over digital technologies. The tech industry will address mounting public anxieties over fake news and digital addiction by doubling down on what I call “solutionism”, with digital platforms mobilizing new technologies to offer their users a bespoke, secure and completely controllable experience.

Coalescing around issues strategically becomes even more important as the coalition I referenced in the first section gets stronger. There have been sporadic successes in getting tech companies to respond. And as these networks form stronger bonds and get better at working together, they can effect more change.

Just as Morozov referenced privacy, I have concerns about ‘transparency’ as an end in itself. Now, I am not saying that it shouldn’t be a goal. But I can foresee it easily being distorted into something counterproductive if what comes out of it are (extremely) large data dumps that either no one can make sense of, or that require a significant amount of investment/effort to do so. And that’s not the only way it can be counterproductive. Think about how live-streaming certain proceedings seems to have robbed them of serious/meaningful deliberation and turned them into a contest for social-media-shareable power snippets (performative politics from the second section, ftw). We need to arrive at some sort of agreement on what constitutes ‘meaningful transparency’. It is in quotes because I don’t know what ‘meaningful’ means in this context.

Post email update:

I had a few thoughts after I scheduled the email, that I felt should go up on the web version of this edition.

Another example of this situation is when we argue that states/companies are doing ‘xyz’ without a legal basis. And while that’s often a legitimate concern, I think it can fall short if the discourse fixates on just having a law, rather than on what said law would allow/disallow. (For example, I’ve read countless articles with a passing reference to the fact that India doesn’t have a data protection law, where there isn’t space to get into the nuances of what the bill in its current form will not address, what it will make worse and what it will explicitly allow.)

Because, and we’ve seen this, we can always get a badly written law (either intentionally or unintentionally), or it can be interpreted in a way that leads to adverse outcomes, or we can get yet another law that isn’t implemented/implementable. To be fair, when most people advocate for a legal basis for something, they tend to have some idea of what they would want to see in such a law, though that can get lost in a public discourse which, as I said earlier, can become singularly focused on just having one, or be nullified by the way it is implemented - and that’s the point I’m zeroing in on.

