What is this? MisDisMal-Information (Misinformation, Disinformation and Malinformation) aims to track information disorder and the information ecosystem largely from an Indian perspective. It will also look at some global campaigns and research.
What this is not? A fact-check newsletter. There are organisations like Altnews, Boomlive, etc., who already do some great work. It may feature some of their fact-checks periodically.
Welcome to Edition 36 of MisDisMal-Information
I know I promised a more optimistic edition after 35; unfortunately, 36 is not quite that edition. On the contrary, I’ve gone full dystopian.
Must-Carry Water and Gettin’ Twitchy at Internet Scores
Last week, in a case pertaining to whether Donald Trump, as President, blocking people on Twitter violated the U.S. First Amendment, Justice Clarence Thomas filed a rather interesting concurring opinion making a case for digital platforms like Twitter, Facebook and Google to be regulated as public utilities (the case itself was dismissed as moot since Donald Trump was no longer President). The fact that he was no longer on Twitter loomed large in Justice Thomas’ argument, highlighting the disparity in control between Twitter and Donald Trump. The argument runs to 12 pages and essentially makes the ‘must-carry’ case for digital platforms: that they are public squares, are obliged to carry all speech, and must be regulated as such (as public utilities). If you’ve been following related discourse, then this cleaving along the lines of ‘platforms must moderate more’ versus ‘platforms must be neutral and carry everything’ is not new. The must-carry position seems to find a basis in the U.S. First Amendment’s protection of all lawful speech (I am not an expert), but scholars also point out that platforms have First Amendment rights of their own that protect their speech and expression (there’s also Section 230).
Aside: In this post from September 2020, Daphne Keller lists the various models being thrown around [highly recommended read].
But sitting here in India, all this seems rather quaint. Where one stands between ‘platforms must moderate more’ (must-remove) and ‘platforms must protect freedom of speech’ (must-carry) often depends on whether a politically aligned ally is affected or not (I am guilty of this myself, too). Our very own First Amendment actually clarified restrictions on Free Speech. From Gautam Bhatia’s book on Freedom of Speech in India (Offend, Shock or Disturb):
So, I think it is fair to say that the must-carry content regime is hardly a serious contender here. And with the Intermediary Guidelines and Digital Media Ethics Code, the must-remove regime has been formalised. But the one kind of must-carry that can find purchase here is when companies end up carrying water for the state due to regulation, fear or the need for self-preservation. Hold that thought.
Another interesting development last week (with limited impact in India, for now) was Twitch instituting its ‘Off-Service Conduct Policy’. From their post announcing the policy:
we believe that the occurrence of severe offenses committed by Twitch users that may take place entirely off-service can create a substantial safety risk to the Twitch community. As a result, we will issue enforcements against the relevant accounts, up to an indefinite suspension on the first offense for some behaviors, which can take place offline or on other internet services, including:
Deadly violence and violent extremism
Terrorist activities or recruiting
Explicit and/or credible threats of mass violence (i.e. threats against a group of people, event, or location where people would gather).
Leadership or membership in a known hate group
Carrying out or deliberately acting as an accomplice to non-consensual sexual activities and/or sexual assault
Sexual exploitation of children, such as child grooming and solicitation/distribution of underage sexual materials
Actions that would directly and explicitly compromise the physical safety of the Twitch community
Explicit and/or credible threats against Twitch, including Twitch staff
Now, platforms taking action for behaviour that happened off their services is not new; it was prevalent even before Donald Trump was deplatformed by multiple platforms pretty much simultaneously.
Casey Newton, on Platformer (paywall):
In practice, most of the big platforms will remove an account if the person behind it has committed a truly heinous crime — perpetrate a mass shooting, and Facebook and YouTube will pull your account down pretty quick these days. But the mechanics behind this process remain opaque, and to my knowledge almost no platforms have laid out comprehensive public guidelines for how they address what the companies call off-platform or off-service behavior.
As Newton later points out, the eight categories fall under two buckets: physical violence and sexual exploitation. This is narrow, specific, and not something many people will disagree with (let’s set aside concerns about false reporting, etc., for now; something Twitch has actually tried to address). Another thing you’ll be aware of if you follow information-age discourse is that such mechanisms tend to evolve both horizontally (across companies) and vertically (in scope).
Now, there are two possible objections here: first, that a lot of what we do is already scored in some way or another (credit scores, user ratings in various apps, etc.); second, that this idea of ‘social media credit scores’ is too far-fetched. In ‘The Rise of Content Cartels’, Evelyn Douek traces how mechanisms first built to address child sexual abuse material (CSAM) spread to ‘terrorist content’, coordinated inauthentic behaviour and synthetic media (not in the paper, but I’ve also seen suggestions of a database of fact-checked content along similar lines). Or, if you look at it differently, they spread from narrowly defined categories where there is some sort of consensus to harder-to-define categories where we are not just ‘not close to any sort of consensus’ but pretty polarised. So, in my opinion, it is not outside the realm of possibility that this could be the first step towards something like an internet credit score somewhere down the road.
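The ‘content cartel’ mechanism Douek describes, industry-wide hash-sharing databases, can be sketched in miniature: one company fingerprints flagged content, and every participant then matches new uploads against the shared set. A hypothetical sketch in Python (real systems such as the PhotoDNA-based CSAM database use perceptual hashes that survive re-encoding; SHA-256 here is just a stand-in):

```python
import hashlib

# Shared database of fingerprints of previously flagged content.
# In real deployments these are perceptual hashes; SHA-256 is a stand-in.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Return a fingerprint for a piece of content."""
    return hashlib.sha256(content).hexdigest()

def flag_content(content: bytes) -> None:
    """One company flags content; every participant can now match it."""
    shared_hash_db.add(fingerprint(content))

def is_flagged(content: bytes) -> bool:
    """Any participating platform checks uploads against the shared set."""
    return fingerprint(content) in shared_hash_db

# Company A flags a piece of content...
flag_content(b"some flagged video bytes")
# ...and Company B's upload check now catches the identical file.
print(is_flagged(b"some flagged video bytes"))  # True
print(is_flagged(b"unrelated content"))         # False
```

The horizontal and vertical creep described above is precisely about what gets added to that shared set, and by whom, with little outside visibility.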
Picture this: booting trollish handles off commonly used platforms without having to wait for them to engage in harmful behaviour on that particular platform. Where do I sign?
Ok, now I know I asked you to hold that thought about carrying water, so let’s return to that. In a sobering personal essay of what it is like to cover Technology as a beat in India, Pranav Dixit writes about the Intermediary Rules and Digital Media Ethics Code:
When the rules were announced, experts around the country cried foul. The Internet Freedom Foundation, an organization in New Delhi that fights for digital rights, said that the new rules would “fundamentally change the way the internet will be experienced in India” and termed them “unconstitutional.” Editors of digital news operations have said that the new rules “run us down” and have called them “an attempt to kill digital democracy.”
But so far, American technology companies have been silent.
Netflix, Amazon, and WhatsApp declined my requests to comment on the new rules. Facebook and Google did not respond.
A Twitter spokesperson said, “Twitter supports a forward-looking approach to regulation that protects the Open Internet, drives universal access, and promotes competition and innovation. We believe regulation is beneficial when it safeguards citizens’ fundamental rights and reinforces online freedoms. We are studying the updated Intermediary Guidelines and engaging with a range of organizations and entities impacted by them. We look forward to continued engagement with the Government of India and hope a balance across transparency, freedom of expression, and privacy is promoted.”
But this doesn’t have to be limited to American companies, or even to platforms. Content moderation in the stack (where companies at deeper layers of the internet are also forced to make decisions about platforming/de-platforming content - see Edition 21: The Content Moderation Stack) is here to stay.
Image Credit: Navigating the Tech Stack, Joan Donovan
In Parler’s case, Amazon and Apple cited their lack of content moderation practices as reasons to deny it their respective services (hosting and the app store).
Picture this: kicking platforms off the stack when they either enable dangerous speech or maintain political neutrality in the face of it. Where do I sign?
And, content moderation in the stack happens in India already. Some examples from layer 6. There’s some level of speculation here, which I’ll address.
In March, Karan Saini published a list of ~2700 websites blocked by the ISP ACT.
And in a paper (How India Censors the Web) published in 2019 and then updated in 2020 - Kushagra Singh, Gurshabad Grover, and Varun Bansal looked at 4379 potentially blocked websites sourced from Government orders, court orders and user reports.
Ok, but Prateek, how is this moderation? Aren’t they just responding to Government orders?
Possible, but the paper also notes that only 1115 websites (out of 4379) were blocked on all 6 of the ISPs they covered (ACT, Airtel, BSNL, Jio, MTNL and Vodafone), which indicates there is a significant amount of discretion at work.
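The kind of measurement behind those numbers can be approximated with a crude probe: fetch a URL from a given ISP’s connection and look for the tell-tale block page many Indian ISPs serve. A hypothetical sketch (the marker phrases and classification logic below are illustrative guesses, not the paper’s actual methodology):

```python
import urllib.error
import urllib.request

# Phrases commonly seen on Indian ISP block pages (illustrative guesses,
# not the heuristics used in the 'How India Censors the Web' paper).
BLOCK_PAGE_MARKERS = [
    "blocked as per the instructions",
    "department of telecommunications",
    "competent authority",
]

def classify_response(body: str) -> str:
    """Classify a fetched page body as 'blocked' or 'reachable'."""
    lowered = body.lower()
    if any(marker in lowered for marker in BLOCK_PAGE_MARKERS):
        return "blocked"
    return "reachable"

def probe(url: str, timeout: float = 5.0) -> str:
    """Fetch a URL over the current network connection and classify it."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify_response(resp.read().decode(errors="replace"))
    except urllib.error.URLError:
        # Connection resets and timeouts are also a common censorship
        # signature, but are ambiguous without further evidence.
        return "error"
```

Running a probe like this from vantage points on each ISP, against the same candidate list, is roughly how a 1115-out-of-4379 discrepancy would surface.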
Plus, you may also recall that in July 2020, the websites of FridaysForFuture, LetIndiaBreathe, ThereIsNoEarthB and DuckDuckGo were blocked. And while there were Government orders in those cases, in at least one instance, the block was also at the domain service provider instead of being limited to ISPs.
For all its defiance in letting tweets flow, Twitter, in February, reportedly took action against 97% of accounts flagged by the Government of India (it would be unfair of me not to reiterate that reporting at the time suggested there were threats of penal action against employees). This was attributed to unnamed official sources, but the company never contradicted or denied those claims. And so we are left wondering what really happened (note that they did ultimately add these actions to the Lumen Database).
And with growing pressure from states to regulate the internet within their territories, Ben Thompson notes (based on conversations with companies in the stack - Stripe, Microsoft Azure, Google Cloud and Cloudflare, mainly layers 3 and 4 in the figure):
It’s a bad mix, and public clouds in particular would be better off preparing for geographically-distinct policies in the long run, even as they deliver on their commitment to predictability and process in the meantime, with a strong bias towards being hands-off. That will mean some difficult decisions, which is why it’s better to make a commitment to neutrality and due process now.
Picture this: companies across the stack working together to define rules or guidelines for cross-platform and off-platform behaviour. Experience suggests that while a lot of lip service is paid to transparency, it doesn’t always pan out that way. Governments can get companies to carry water for them through coercion, threats or co-option, and even confidentially, thanks to the existing legal regime. Wait a minute, did I sign up for this?
I guess you can say I am gettin’ twitchy about this.
Meanwhile in India
I was rooting for something along the lines of the Election Integrity Partnership in India. Meedan announced that 6 fact-checking groups have formed a consortium called Ekta (unity).
Ekta brings together AFP Fact Check, BOOM Live, Factly, India Today Fact Check, Vishvas News and WebQoof. All the participating groups are part of Facebook’s third-party fact-checking program with Meedan. This program integrates the Check fact-checking tool with the WhatsApp Business API to receive and respond to messages at scale.
The post states that one of the aims is to create an ‘impactful consortium model that continues to collectively address mis/disinformation beyond elections’. It will be interesting to see what that’s like and if it will lead to some sort of content partnerships so that a wider net can be cast versus overlapping fact-checks. Note, the different distribution networks each of them may have built means there is some merit to even the overlapping fact-checks.
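If such content partnerships do emerge, one low-tech way to cut down on overlapping fact-checks would be a shared index of normalized claims that each member consults before publishing. A purely hypothetical sketch (Ekta’s actual workflow runs on Meedan’s Check tool, and nothing below describes it):

```python
import re
from typing import Optional

# Hypothetical shared index mapping normalized claim text to the member
# organisation that already fact-checked it.
claim_index = {}

def normalize(claim: str) -> str:
    """Lowercase, strip punctuation and collapse whitespace so that
    near-identical phrasings of the same claim collide."""
    claim = re.sub(r"[^\w\s]", "", claim.lower())
    return re.sub(r"\s+", " ", claim).strip()

def register_factcheck(org: str, claim: str) -> Optional[str]:
    """Record a fact-check; return the earlier org if it is a duplicate."""
    key = normalize(claim)
    if key in claim_index:
        return claim_index[key]  # another member already covered this
    claim_index[key] = org
    return None

register_factcheck("BOOM Live", "Video shows 5G towers being burnt in India!")
dup = register_factcheck("Factly", "video shows 5G towers being burnt in India")
print(dup)  # BOOM Live: an overlapping fact-check was caught
```

Exact-match normalization is crude; a real system would need fuzzy or semantic matching, and in any case the distinct distribution networks each member has built mean some overlap remains useful.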
Anti-vax groups are proliferating on Telegram in India. I wonder if there was a way for TOI and TheQuint to report on them without naming so many of them and citing their content. I counted eight Telegram channels and one YouTube channel between them.
YouTube suspended a channel belonging to Millat Times for 90 days for violating its medical misinformation policy. The channel had reported on a protest by daily wage workers against a lockdown. [Ismat Ara - TheWire]
This is a tricky one, to be honest. On the one hand, platforms have had to institute policies to deal with anti-lockdown protests and content in many parts of the world. But, on the other hand, after the humanitarian crisis that last year’s lockdown in India sparked off, you can understand why daily wage workers would want to protest. Of course, this is all further complicated by the fact it will be incredibly difficult for platforms to allow some anti-lockdown content and not others because of how narratives and content can morph and interact.
Central Railway registered a case against an unidentified person for circulating ‘fake videos’ of overcrowded trains [TOI].
Taberez Ahmed Neyazi, Antonis Kalogeropoulos and Rasmus K. Nielsen published a study on misinformation concerns and participation in sharing online news among internet users in India. Note that this was limited to English-language users. Some observations:
There didn’t seem to be a correlation between concern about misinformation and partisanship.
There was possibly ‘less active distrust’ of Facebook and Twitter compared to WhatsApp.
Facebook and Twitter users were more likely to engage with news than WhatsApp users, which the authors point out appears to contrast with rising cynicism and declining intention to engage among users in the West.
Concern about misinformation was not a ‘significant predictor’ of engagement with news.
And yes, applications for Takshashila’s May 2021 cohort for courses in Public Policy, Technology and Policy, Defence and Foreign Affairs, and Health and Life Sciences are open.