What is this? MisDisMal-Information (Misinformation, Disinformation and Malinformation) aims to track information disorder and the information ecosystem largely from an Indian perspective. It will also look at some global campaigns and research.
What this is not? A fact-check newsletter. There are organisations like Altnews, Boomlive, etc., who already do some great work. It may feature some of their fact-checks periodically.
Welcome to Edition 44 of MisDisMal-Information
In this edition
Various proposals to ‘unbundle’ social media platform features.
Fake engagement and its negative externalities
… Meanwhile, in India - Twitter’s travails, Conspiracy theories, India-centric misinformation research.
Unbundling Social Media
Published in early June, an analysis of social media engagement with Donald Trump’s statements before and after his accounts were banned indicated that though his agenda-setting powers had waned, some statements still received significant engagement because they were amplified, mostly by supporters, but also by opponents ridiculing them [Davey Alba, Ella Koeze and Jacob Silver - NYTimes]
Before the ban, the social media post with the median engagement generated 272,000 likes and shares. After the ban, that dropped to 36,000 likes and shares. Yet 11 of his 89 statements after the ban attracted as many likes or shares as the median post before the ban, if not more.
Building on the question of ‘how this happened’, Richard Reisman writes [Tech Policy Press]:
Understanding how that happens sheds light on the growing controversy over whether “deplatforming” is effective in moderating extremism, or just temporarily drives it out of view, to intensify and potentially cause even more harm. It also illuminates the more fundamental question: is there a better way to leverage how social networks work to manage harmful speech in a way that is less draconian and more supportive of free expression? Should we really continue down this road toward “platform law” — restraints on speech applied by private companies (even if under “oversight” by others) — when it is inevitably “both overbroad and underinclusive” — especially as these companies provide increasingly essential services?
There are a few proposals that aim to operate between the binary of this ‘platform law’ regime and a free-speech free-for-all. The general idea is to wrest control over the flow of information on platforms from the platforms themselves, with approaches as varied as mandating interoperability, creating a marketplace of content-sorting algorithms, etc.
In no particular order, some of these are:
Magic APIs - Daphne Keller
PLATFORM CONTENT REGULATION – SOME MODELS AND THEIR PROBLEMS (magic APIs section is towards the end)
If Lawmakers Don't Like Platforms' Speech Rules, Here's What They Can Do About It. Spoiler: The Options Aren't Great. (essentially an updated version of the previous piece, from 2020)
Protocols not Platforms - Mike Masnick
Competitive Compatibility / Adversarial Interoperability - EFF (I didn’t find a single document that outlines this, so analysis for this is pieced together from various sources - mostly Cory Doctorow’s work)
Middleware - Stanford Working Group on Platform Scale
Was Twitter Right To Have Booted Trump? (Francis Fukuyama and Jillian C. York)
Fake News and Conspiracy Theories - Francis Fukuyama (you may remember this from 34 - Of Hex, Liers and (not just)Video deepfakes)
You may have realised that I’ve excluded Bluesky. For now, I’m considering it a Twitter solution (technically, it is funded by Twitter but is supposed to be independent), but it is worth keeping an eye on.
A full description of each of these is outside the scope of this edition. A good frame is to think of them as variations on solutions that let others build on top of the existing platforms, with the resultant competition (hopefully) addressing some of the problems arising out of the centralisation of power in platforms. E.g., there could be different algorithms for sorting feeds on Twitter/Facebook, allowing people to choose between, say, ones designed by news producers or entertainment content producers (a minimal sketch of what that might look like follows below).
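To make the feed-sorting example concrete, here is a minimal sketch in Python of what a pluggable ranking interface might look like. To be clear, this is my own illustration: the `Post` fields, the `FeedRanker` type and both rankers are hypothetical, and none of the proposals prescribes a specific interface.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    # Hypothetical fields; a real platform API would expose far more.
    author: str
    text: str
    likes: int
    is_news_source: bool

# A "ranker" is any function that orders the raw, uncurated feed.
FeedRanker = Callable[[List[Post]], List[Post]]

def news_first_ranker(feed: List[Post]) -> List[Post]:
    # A ranker a news producer might design: surface news sources first,
    # then order by engagement within each group.
    return sorted(feed, key=lambda p: (not p.is_news_source, -p.likes))

def engagement_ranker(feed: List[Post]) -> List[Post]:
    # A ranker closer to the status quo: pure engagement ordering.
    return sorted(feed, key=lambda p: -p.likes)

def render_feed(raw_feed: List[Post], ranker: FeedRanker) -> List[Post]:
    # The platform supplies the raw feed; the user-chosen ranker orders it.
    return ranker(raw_feed)
```

The point of the sketch is the separation of roles: the platform only supplies uncurated content, and users pick whichever ranker they trust. That, roughly, is the dynamic all of these proposals gesture at.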
I will try to compare and contrast them along six parameters, per my own understanding. I do recommend reading through the material linked above.
Aim(s)
Magic APIs
Licensing hard-to-duplicate resources to newcomers in markets subject to network effects (give users more choices among competing rulesets or rule makers).
Protocols not Platforms
Reverting to an internet powered by a system of protocols instead of centralised platforms would do a better job of protecting user privacy and free speech.
Most current solutions will lead to outcomes that will leave us worse off.
Competitive Compatibility
Encourage interoperability and block anticompetitive mergers with or without the cooperation of the platform(s).
Middleware
Reduce political threats posed by platform control
Intended Target(s)
Magic APIs
Not explicitly prescribed, but Keller has alluded to the challenges of making rules that specifically target 'bigness'.
Protocols not Platforms
Envisioned as a universal practice.
Competitive Compatibility
Targets 'bigness' but does not necessarily limit it to that.
Increased focus on lock-in compared to network effects.
Middleware
Targets 'bigness'. It explicitly names Facebook, Amazon, Apple and Twitter but doesn't necessarily exclude others.
Depth of unbundling
Magic APIs
Not explicitly prescribed. Indicative example:
In the platform context, this would mean that Google or Facebook opens up access to the “uncurated” version of its service, including all legal user-generated content, as the foundation for competing user-facing services. Competitors would then offer users some or all of the same content, via a new user interface with their own new content ranking and removal policies.
Protocols not Platforms
Not explicitly prescribed.
It goes to the extent of moving data out of platforms to user-controlled blobs.
Competitive Compatibility
It does not specify any upper limit but maintains that a user should be able to delegate all aspects of interaction to a third party.
Middleware
It varies between performing 'essential functions' and providing 'supplemental filters'.
Advocates an intermediate role:
provides filters for specific news stories and develops ranking and labeling algorithms, which are then integrated into the main platform
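As an illustration of that intermediate role, here is a minimal sketch of a labeling middleware, in the same hypothetical Python style as the ranking sketch above: the middleware only attaches labels, and the platform decides how to render them. The function name and the 'disputed' label are my own assumptions, not anything the working group specifies.

```python
from typing import Dict, List

def label_posts(posts: List[str], disputed_phrases: List[str]) -> Dict[str, List[str]]:
    # Hypothetical middleware: attach labels, leaving removal and rendering to the platform.
    labels: Dict[str, List[str]] = {}
    for post in posts:
        matched = any(phrase.lower() in post.lower() for phrase in disputed_phrases)
        labels[post] = ["disputed"] if matched else []
    return labels

# The platform integrates the labels but keeps control of how they are shown:
posts = ["The moon is made of cheese", "Cats are adorable"]
for post, tags in label_posts(posts, ["made of cheese"]).items():
    print(post + (" [disputed]" if "disputed" in tags else ""))
```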
User Data
Magic APIs
No prescription, but it lists this explicitly as a sticking point in terms of ownership. Does an individual own their friends’ data, or their friends’ interactions with their own posts?
Protocols not Platforms
In its most ambitious version, every user would manage their own data via 'blobs', though that's not a prerequisite (a minimal sketch of the idea follows at the end of this 'User Data' comparison).
Competitive Compatibility
Address harms with privacy law
Limit commercial use of data
Middleware
No specific prescription. One can infer a preference for the status quo from its reference to platforms being able to retain their business models.
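Since user-controlled 'blobs' recur across these proposals, here is a minimal sketch of what one might look like, assuming a deliberately simplified model where the user stores records and issues revocable read-grants to services. The class and method names are my own illustration; Masnick's paper does not specify an interface.

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class DataBlob:
    # A user-controlled store of personal data; services must be granted access.
    owner: str
    records: Dict[str, str] = field(default_factory=dict)      # e.g. {"posts": ..., "contacts": ...}
    grants: Dict[str, Set[str]] = field(default_factory=dict)  # service -> record keys it may read

    def grant(self, service: str, keys: Set[str]) -> None:
        # The user, not the platform, decides what each service can read.
        self.grants.setdefault(service, set()).update(keys)

    def revoke(self, service: str) -> None:
        # Revocability is the key departure from today's platform-held data.
        self.grants.pop(service, None)

    def read(self, service: str, key: str) -> str:
        if key not in self.grants.get(service, set()):
            raise PermissionError(f"{service} has no grant for '{key}'")
        return self.records[key]
```

Under a model like this, an advertising service would only ever see what it has been granted, which is why the 'Business Models' section below anticipates a shift away from data-hungry targeting.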
Degree(s) of regulation
Magic APIs
Not explicit, but a reference to 'unbundling requirements analogous to telecom' implies that it would follow from some form of regulation.
Approaches it from ‘bigness’.
Protocols not Platforms
It is meant to be market-driven.
Competitive Compatibility
Suggests regulation to limit the effect of anti-circumvention laws, which incumbents use to block interoperability.
Approaches it from ‘smallness’.
Middleware
Interoperability/opening up APIs may happen by consent or by decree, though it considers it likely that legislation will be required to open them up.
Prescribes standards/guidelines that middleware companies will need to adhere to, which can be outlined by a regulator or the platforms themselves.
Business Models
Magic APIs
No prescription, but it indicates that allocating a revenue split (for ads) will be complex.
Protocols not Platforms
Move away from inter-platform competition, though some view Twitter’s Bluesky as a means to compete with/differentiate from Facebook.
Agents that interface between blobs in data stores and services.
With data and privacy controlled by users, data-hungry models may not thrive. So a return to intent-based or brand advertising is likely.
There will be competition for business models.
Token-based models.
Competitive Compatibility
No specific prescription.
Middleware
Revenue Sharing
Directly selling subscriptions or ads.
Questions… Questions…
All four are interesting solutions, and we should, perhaps, engage with them more deeply. As I was reading through these proposals, many questions came to mind. I don’t have good answers today.
Why would platforms change/cooperate?
How are these approaches better than user controls provided by platforms?
Will the increased complexity for users hamper adoption?
Should this be limited to 'bigness'? If yes, on what principles (that do not seem arbitrary)?
Larger systemic incentives remain unchanged (e.g. media will still report on egregious content, etc.)
Does this mean ‘bad’ content stays up? Even if we can’t agree on what ‘bad’ is.
Could this mean more filter bubbles?
Network effects could still apply to middleware solutions, and one of them could accrue significant influence over narratives.
How do we avoid being in a similar position again? E.g. Limited transparency, dominant solutions, etc.
How will this impact different layers of the moderation stack? Should it? (See 27: Of ‘Antivaxtionals’ and D(r)ump(f)ing Donald > The Content Moderation Stack)
Related:
In December, Mike Masnick, Daphne Keller and Cory Doctorow laid out some of these in an insightful podcast [TechDirt].
Fake News
No, no, I haven’t joined the ‘fake news’ bandwagon (I still avoid using the term). Instead, this section looks at recent news/stories around fake engagement.
▶️ Sophie Zhang, writing in RestOfWorld, describes how personal vanity pushes users to use Facebook ‘autolikers’ to drive up engagement.
Using ‘autolikers’ requires users to give them access to their accounts, which is then abused to create fake engagement for others, in exchange for receiving fake engagement themselves. She contends that users believe their accounts cannot be misused since no passwords are shared as part of the process.
There were some interesting data points in there too.
While people assume most fake engagement is political, political content apparently makes up less than 1% of fake engagement on Facebook.
Most accounts that engage in inauthentic activity are not fake.
In the first half of 2019, we knew internally that there were roughly 3 million known fake engagers on Facebook, of whom only 17% were believed to be fake accounts. We believed the other 83% were largely self-compromised — many of them through autolikers.
Why is this bad? Well, negative externalities, as economists would say.
This arrangement seems to deliver benefits to both themselves and the autolike business — but only because the costs are borne by others. They do not realize that they are contributing to the gradual erosion of trust in their fellow users and organizations, and corrupting the civic discourse in their nation.
I also liked the distinction between misinformation and inauthentic activity, which, as she rightly pointed out, people tend to conflate.
Another element to keep in mind: Observers commonly conflate the use of inauthentic accounts with misinformation, two separate and largely unrelated problems. Misinformation is a function of what the person is saying, and does not depend on who the person is. If someone said the moon is made of cheese, this is misinformation, regardless of who's saying it. Inauthenticity depends only on the identity of the user, regardless of what they are saying. If I have 50 fake accounts telling the world "cats are adorable," this is still inauthentic activity, even though there's nothing wrong with saying that.
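If it helps, the orthogonality of the two properties can be restated as two independent checks on two different objects, content versus identity. A toy sketch, entirely my own illustration, using Zhang's own examples:

```python
def is_misinformation(claim: str) -> bool:
    # Depends only on WHAT is said; a stand-in for actual fact-checking.
    return claim.lower() == "the moon is made of cheese"

def is_inauthentic(operator: str, claimed_identity: str) -> bool:
    # Depends only on WHO is saying it, regardless of the content.
    return operator != claimed_identity

# "Cats are adorable" posted from 50 fake accounts: inauthentic, but not misinformation.
print(is_misinformation("cats are adorable"))             # False
print(is_inauthentic("one person", "50 distinct users"))  # True
```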
Related:
In India's 'click factory', a follower for Re 1, a 'like' for 44 paise [Chandrima Banerjee - Times of India]
▶️ Amazon is blaming social media companies for fake reviews [Alex Hern - TheGuardian]
This year a Which? investigation found companies claiming to be able to guarantee “Amazon’s Choice” status on products – an algorithmically assigned badge of quality that can push products to the top of search results – within two weeks, and others claiming to have armies of reviewers numbering in the hundreds of thousands.
Amazon says the blame for those organisations should lie with social media companies, which it says are slow to act when warned that fake reviews are being solicited on their platforms.
▶️ ‘Rightwing firm posed as a leftist group on Facebook to divide Democrats’ for the 2018 midterm elections [Julia Carrie Wong - TheGuardian].
… Meanwhile, in India
▶️ Twitter’s terrible, horrible, no good, very bad year… continues
Is this thing on? Why, I am referring to Twitter’s Intermediary Status, of course.
On 16th June, several stories claimed that Twitter had ‘lost’ its Intermediary Status, implying safe-harbour provisions would no longer apply. Random sample - TimesNow, HindustanTimes, FinancialExpress. Most of these reports attributed it to ‘Government sources’.
However, the Internet Freedom Foundation noted that this status is not something the executive can unilaterally lift; it needs to be determined in court [that seems to be the general consensus among people who’ve received formal legal education, as of the evening of 16th June].
Twitter has also been summoned by the Parliamentary Standing Committee on Information and Technology [Aikik Sur - MediaNama]
Twitter’s India head was questioned by the Delhi Police in connection with the “Congress Toolkit” case. This story, filed on 17th June, still claims without any caveats that Twitter has lost its intermediary status 🤷♂️ [HindustanTimes]
▶️ FIR se F.I.R
This is all playing out in parallel with Twitter Inc. and Twitter Communications India being named in an FIR by the Ghaziabad Police, along with TheWire, multiple individual journalists, and a Congress spokesperson.
This is very much a developing story, with the Ghaziabad police contesting the victim’s claim of a communal angle [Bismee Taskin - ThePrint] and organisations like the International Press Institute speaking out against the FIR. Meanwhile, the UP Government has said it will take strict action against those ‘spreading fake news on social media’ [RepublicWorld] [Archive link] (go and read the lede, I promise you won’t be disappointed). The Delhi Police claims to have received complaints against Swara Bhaskar and Twitter’s India MD [EconomicTimes].
And the story will likely have evolved between the time I schedule this edition and the time it hits your inbox.
Recommended Reading about the Intermediary Rules:
From Google to Whatsapp, and Twitter to Koo, assessing the compliance status of Intermediaries [Aditi Agrawal - ForbesIndia]
What does it mean to lose safe harbour? [Aditi Agrawal - ForbesIndia]
▶️ Conspiracy Theories
QAnon-esque activities on Telegram, as per Dr. Sumaiya Shaikh. Continuing to import tropes. Also, where is George Soros?
Omkar Khandekar writes about the Justice for Sushant Singh Rajput campaign [Article14].
Venkat Ananth, who wrote an in-depth piece (paywall) on this phenomenon last year, on its current state.
▶️ Follow-up on Time Magazine’s story on HJS and SS (edition 43)
▶️ Content Moderation by Courts
The Delhi High Court directed ‘TheCognate’ to block social media posts against India Today “alleging to show that there has been a contrasting and biased approach in its reporting against Muslim community concerning COVID protocol violations relating to religious gatherings at Kumbh Mela and Mecca Masjid” [LiveLaw]
▶️ Misinformation research about India
Educative Interventions to Combat Misinformation: Evidence from a Field Experiment in India - Sumitra Badrinathan [Cambridge University]. I wrote about the draft back in 37 - Of Madness of the feeds, Har(dly)vard(th) Fact-checking; it points to the presence of motivated reasoning.
Misinformation on covid-19 Pandemic in Youtube and its Impact on Viewers in Kerala - Lakshmy Ravindran, Dr. S. Dinesh Babu [Annals of RSCB]
An online survey was conducted on 325 samples to measure the impact of the misinformation videos on the general public of Kerala. The video analysis revealed that 28% of the videos contain misinformation. The online survey disclosed that the impact of such misinformation is not significant on the people of Kerala. It is concluded that the ill effects of misinformation can be countered through increased awareness on health and hygiene among the people. This study also suggests the need for using media to promote health literacy, effective cyber laws to curb the propagation of fake news as areas that have scope for improvement.
Incidentally, a study published last week by the American Psychological Association suggested that “generic warnings about online misinformation, such as those used by governments and social media companies, are unlikely to be effective”. [Quantifying the effects of fake news on behaviour: Evidence from a study of COVID-19 misinformation - Ciara M. Greene and Gillian Murphy]
Tiplines to Combat Misinformation on Encrypted Platforms: A Case Study of the 2019 Indian Election on WhatsApp - Ashkan Kazemi, Kiran Garimella, Gautam Kishore Shahi, Devin Gaffney, and Scott A. Hale. [arXiv]
Claim Matching Beyond English to Scale Global Fact-Checking - Ashkan Kazemi, Kiran Garimella, Devin Gaffney, and Scott A. Hale. [arXiv]
I’ll repeat what I’ve said many times before (maybe not here, but just ask the poor people who have to talk to me) - we need a lot more such research in India. Keep it coming…