Of The Model and Dark Pool Sales Houses, Vaccinationals and Urban(anti)vaxxals
What is this? This newsletter aims to track information disorder largely from an Indian perspective. It will also look at some global campaigns and research.
What this is not? A fact-check newsletter. There are organisations like Altnews, Boomlive, etc. who already do some great work. It may feature some of their fact-checks periodically.
Welcome to Edition 28 of MisDisMal-Information
I may have gone overboard with images/gifs in this edition. So if you are blocking images in your mail client, I recommend viewing this in a browser (or downloading content for this edition).
The Model, and Deep-See diving in Dark Pool Sales Houses
ooh, sounds ominous! What does all this mean?
Sorry, I am going to throw a lot of mumbo jumbo at you. But I’ll build it up in a way that makes sense.
First, let’s talk about the model..
No, not that one. Business Model
Not that either.
Ok, now we’re getting somewhere. The Business Model
In Edition 26 - Doom Profits, using Tim Hwang’s book - Subprime Attention Crisis, we covered how standardisation and commodification made opacity a huge problem in Digital Advertising (screenshot recap).
Well, digital ad spends now seem to have surpassed other forms (in the U.S. at least). Yay?
And as Tim Hwang had highlighted, this ad-based model underpins the economy of the internet as we know it today - and shapes incentives, across the board. See this Tow Center report on platforms funding journalism and fact-checking during the pandemic.
Whether it is social media platforms trying to grab as much of your attention as possible - such as Facebook rolling back changes that were supposed to boost news from authoritative sources.
Or media/news websites that drown you in ads the moment you land on them, or any website that ‘helpfully’ auto-loads the next page/segment as you scroll towards the bottom of a page. (Aside: Have you noticed how painful it is to get to any ‘About Us’ sections that are near the bottom of the page now?)
Or using ‘content partnerships’ through the likes of Taboola/Outbrain to increase website traffic. Josh Sternberg recently wrote in Vice about how these partnerships push misleading and false content onto pages belonging to news publications (these widgets are called chumboxes, and the practice is common across Indian publications too - ‘See how this person made x amount of money sitting at home’). See an example from The New Indian Express (picked at random).
“The ‘content discovery widgets’ on the page can not only make discovery of false info websites easier, those widgets can also directly carry programmatic ads, resulting in ad revenue for the disinfo site,” said Augustine Fou, a digital marketer and independent fraud auditor. “This way content discovery tech platforms' role in spreading disinfo is two-fold—they facilitate the discovery of false information, and they help disinfo sites make money so they can continue spreading more misinformation.”
And while a lot of people agree that many of the problems caused by social media platforms are a consequence of their business models, it is surprising how little mainstream attention beats like adtech tend to get (you’ll see variants like ‘ad tech’, ‘ad-tech’, ‘ad:tech’ too).
P.S. It gets plenty of coverage as ‘trade’ beat. But that doesn’t always result in connecting it to the larger issues being debated in more mainstream coverage.
This is slowly starting to change, some of it as a consequence of campaigns pressuring companies by targeting their business models.
Ok, Prateek, we get it - pay more attention to the *workings* of the business model. Easy peasy, right? Now, what?
Phew, you’re with me so far. But no, not easy - just look at Luma Partners’ illustration of the display ads ecosystem - they call it the Display LumaScape. This one makes my head spin.
Ok, but where are the Deep Seas and Dark Pools?
Fine, you asked for it. Now this is where things may get a little jargon-y.
Back in June/July 2020 - Nandini Jammi and Claire Atkin’s newsletter Branded published an edition based on their research, along with Zach Edwards’ (his technical post), theorising how Breitbart could potentially still be making money even if advertisers chose to block their ads from serving on it. I do recommend reading at least the first post, but let me include a few excerpts here.
Every website has a number of account IDs to identify them on ad exchanges.
There are two types of account IDs: DIRECT and RESELLER.
DIRECT IDs tell advertisers that they’re bidding directly on one website.
RESELLER IDs tell advertisers that they’re bidding on inventory across multiple websites.
Here’s an example from, say, NDTV (as part of a standard defined in 2017, a publisher is required to publish this information at example.com/ads.txt - so in this case: ndtv.com/ads.txt):
Sometimes, media conglomerates share the same account ID across their owned websites. If Condé Nast wanted to, they could do this with Vanity Fair, WIRED, and Teen Vogue. To make it clear that they’re sharing account IDs, they label one website with a DIRECT label, and the others with a RESELLER label. This is called pooling, also known as a ‘sales house,’ and it’s generally acceptable because at the end of the day, it’s all done within the same organization
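To make the labels concrete: each ads.txt line follows the format defined in the IAB spec - ad system domain, account ID, relationship, and an optional certification authority ID. A hypothetical file might look like this (all the IDs below are made up for illustration):

```
# <ad system domain>, <account ID>, <DIRECT|RESELLER>, [certification authority ID]
examplead.com, 1001, DIRECT
examplead.com, 2002, RESELLER
othernet.com, pub-9999, DIRECT, abc123
```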
And here’s the key (italicised emphasis mine):
What outlets are not supposed to do, though, is share their DIRECT account ID with websites and companies that are completely unrelated to them. It’s not a direct sale, it mislabels the inventory, and it funnels advertiser money towards shared advertising accounts owned by unknown entities. That’s why we’re calling this dark pooling.
And finally, the dark pool sales house:
The mislabelling of DIRECT account IDs across websites means that these sites are sharing data (good for retargeting!) and ad revenues. One way to describe this grouping of DIRECT account IDs is a “sales house.” That makes these groupings “dark pool sales houses.”
I should clarify, that there was some pushback, which they linked to in their follow-up post.
I’ll summarise (along with some paraphrasing based on my own understanding):
1) The IAB (the body that develops industry standards) came out defending ads.txt, stating that publishing this information in the ads.txt surfaces any malpractice rather than obscuring it. The tone was a little tongue-in-cheek and led to a not-so-friendly Twitter exchange between them and Zach Edwards. This is all linked in their post.
2) As part of the standard, Ad Networks are also required to publish a file (called sellers.json) that makes their associations clear. This means that someone can go to this file and verify whether a publisher ID belongs to a certain company or not - and then determine if it has been mislabeled on a publisher’s ads.txt file. (Credit to Jay Pinho, whose tweet I discovered this through.)
3) Even these mislabeled entries are not evidence of fraud in themselves. They need to be accompanied by a back-room deal of sorts (so that this ID is shared across multiple sites and they split the revenues). Why, you ask? Well, if you’re pushing out content where advertisers are going to be pressured to block you, it is better to add as much opacity as possible, no? But, as I said, this needs to be established. Until then, the reasons can vary on a scale from ‘intern copying whatever the adtech partner sent them’ to ‘oops!’ and ‘Let’s be evil’.
Now, to the Deep-See.
Wait, didn’t you say Deep Sea?
No! That was you in the italics, remember? Anyway, DeepSee is a firm that sells services to combat ad-fraud (since this is, at heart, a policy-focused newsletter, just keep that in mind for the sake of tracking incentives). They published a deeper investigation into this phenomenon.
Globally, 10% of ads.txt enabled sites have 71 or more non-unique DIRECT entries.
“That’s pretty crazy,” we thought.
“Maybe this is an artifact of poorly ranked sites dragging us down(?)” we mused.
But, the data showed quite the opposite.
Ok, now it gets even more interesting.
As part of this research, they also published a file with the ‘top 15,000 most non-unique DIRECT ads.txt entries encountered globally’. From an admittedly not very thorough look, here are some sites I saw on it. Many will seem familiar. And a more thorough search will probably uncover many more.
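DeepSee’s ‘non-unique DIRECT entries’ metric can be sketched roughly as follows - this is toy data and my own reconstruction of the idea, so their actual methodology may well differ:

```python
from collections import Counter

def non_unique_direct_counts(ads_txt_by_site):
    """For each site, count how many of its DIRECT (ad_system, account_id)
    pairs also appear as DIRECT entries on at least one other site."""
    # Tally how many distinct sites declare each DIRECT pair
    seen_on = Counter()
    direct_by_site = {}
    for site, entries in ads_txt_by_site.items():
        pairs = {(sys, acct) for (sys, acct, rel) in entries if rel == "DIRECT"}
        direct_by_site[site] = pairs
        for pair in pairs:
            seen_on[pair] += 1
    # A pair seen on more than one site is "non-unique"
    return {
        site: sum(1 for p in pairs if seen_on[p] > 1)
        for site, pairs in direct_by_site.items()
    }

# Toy data: two unrelated sites sharing the same DIRECT account ID
sites = {
    "siteA.example": [("examplead.com", "1001", "DIRECT"),
                      ("othernet.com", "9", "RESELLER")],
    "siteB.example": [("examplead.com", "1001", "DIRECT")],
    "siteC.example": [("examplead.com", "5555", "DIRECT")],
}
print(non_unique_direct_counts(sites))
# -> {'siteA.example': 1, 'siteB.example': 1, 'siteC.example': 0}
```

The shared DIRECT ID across siteA and siteB is exactly the pattern Branded called a potential dark pool - which, per point 3 above, still needs a back-room deal to count as fraud.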
I went a little further. I picked a few news sites at random, then looked up a bunch of entries with DIRECT labels and cross-verified with what the Ad platforms put on their sellers.json files. (Yes, yes, I know. I have a healthy information diet!)
I am going to include some screenshots for now, and there is a viewable link to an Evernote note here, where I jotted down some of this as I went through it. And while this is just a tiny sample, I do expect this to be consistent across most news sites. (Reminder to view the web version or download images, if you haven’t done so already.)
If you want to run these checks on your own, I have also listed the steps on the Evernote link mentioned earlier [obviously, this is based on compiling the efforts of the posts referenced earlier]. This cursory check tells us - mislabeling is non-zero.
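Those steps boil down to a simple cross-check, which can be sketched like this (all entries below are made up for illustration; in practice you would fetch a publisher’s /ads.txt and each ad system’s /sellers.json):

```python
def parse_ads_txt(text):
    """Parse ads.txt lines into (ad_system, account_id, relationship) tuples."""
    entries = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            entries.append((fields[0].lower(), fields[1], fields[2].upper()))
    return entries

def check_direct_entries(ads_txt_text, sellers_json_by_system, publisher_domain):
    """Flag DIRECT entries whose account ID is missing from, or registered to a
    different domain in, the ad system's sellers.json."""
    flagged = []
    for system, account_id, rel in parse_ads_txt(ads_txt_text):
        if rel != "DIRECT":
            continue
        sellers = sellers_json_by_system.get(system, {}).get("sellers", [])
        match = next((s for s in sellers if s.get("seller_id") == account_id), None)
        if match is None or match.get("domain") != publisher_domain:
            flagged.append((system, account_id,
                            match.get("domain") if match else None))
    return flagged

# Made-up example: one well-labelled entry, one suspicious one
ads_txt = """
examplead.com, 1001, DIRECT
examplead.com, 2002, DIRECT
othernet.com, 3003, RESELLER
"""
sellers = {
    "examplead.com": {"sellers": [
        {"seller_id": "1001", "domain": "mynews.example", "seller_type": "PUBLISHER"},
        {"seller_id": "2002", "domain": "unrelated.example", "seller_type": "PUBLISHER"},
    ]}
}
print(check_direct_entries(ads_txt, sellers, "mynews.example"))
# -> [('examplead.com', '2002', 'unrelated.example')]
```

A flagged entry is only a starting point - as noted above, mislabeling alone is not evidence of fraud.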
Again, I will reiterate - this is not evidence of fraud in itself. Though, someone should reach out to them to understand why they may have mislabeled entries. Maybe someone from Newslaundry? Or The Ken? [who else doesn’t run ads?]
But, if this is not outright fraud, why does anyone care?
Well, this doesn’t mean there are no bad faith actions here. And because of the opacity and incentives - we neither know where in the ecosystem the deviations from the proposed standards are coming from, nor is there a reason for anyone else to particularly care.
And secondly, whether you agree with them or not, ‘defund’ efforts are getting attention. While it is mainly directed at TV news media for now, I do expect it to make its way to the web as well. In which case, transparency and compliance are relevant. Otherwise, we could very well be looking at these Dark Pool Sales Houses in India too.
Some more reads about ‘The Model’
This report by the Alliance for Securing Democracy - The Weaponised Web: Levers in the Digital Advertising Ecosystem.
Joan Donovan in Harvard Business Review on how social media platforms’ hunger for scale resulted in supercharging disinformation.
Daisuke Wakabayashi and Tiffany Hsu, for The New York Times, about a preferential deal that Google may have given Facebook, based on a draft version of the antitrust complaint against Google filed by 10 states in the U.S. Google’s rebuttal to the complaint here.
And, Shoshana Wodinsky for Gizmodo on the ‘Butt Pyjamas’ and the ad network behind it.
Vaccinationals, Antivaxtionals and urban(anti)vaxals
Ok, I will try to stop the vaccine puns before I get jabbed in the shoulder. The first one is borrowed, and the other 2 are the type of epithets I expect to be conferred on those raising questions (if you’ve seen them in the wild, please let me know!).
But since the last edition, India’s vaccination drive kicked off (on 16th Jan).
Nearly 8 Lakh people have been vaccinated.
While there have been 4 deaths, authorities have not linked these to the vaccine; they were said to be the result of co-morbidities (same link as pt 1).
The National Health minister has asked state and local authorities to refute "rumours and disinformation campaigns” (Livemint).
Over the past few days, virologist Dr T Jacob John has been receiving calls from the medical fraternity questioning the efficacy of the vaccine. “The hesitancy is because of a lack of information, trust and transparency. Some doctors want to know why there is no choice on which vaccine is administered,” he says.
So naturally, it hasn’t helped that advisories about when not to take a particular vaccine came out AFTER the vaccination drive started, or that health workers were given a consent form about a vaccine being in ‘clinical trial’ mode. Now couple this with news of adverse events and deaths (whether authorities link them to the vaccine or not) from India and around the world - some people will start assuming correlation = causation.
Again, I have little locus standi to comment on the science and the process, so I am going to reference another article by my colleague Dr. Shambhavi Naik, very aptly titled ‘Will Indian Scientific Temper Survive 2021’.
The COVID-19 vaccine approval process has been opaque, ambiguous and questionable. But the Drug Controller General of India refused to answer questions and the decision-making remains shrouded in secrecy. We don’t know who all constituted the subject expert committee that recommended the vaccine candidates for approval or the data that the candidates’ makers presented to secure it.
The process by itself has been shoddy – but well-educated experts labelling those demanding transparency as “anti-nationals” or “anti-vaxxers” has really hit the nail in the scientific temper coffin. Instead of building trust by releasing information about the approval process, the government and scientists have asked people to blindly trust them, and have been offended when that trust was not given.
Somewhere, we missed an opportunity to educate people about vaccine side-effects and adverse events as well as instil faith in the ability of the healthcare system to handle these events. As a rudimentary dipstick-analysis of sorts, let’s look at tweets from the Ministry of Health and Family Welfare mentioning side-effects or adverse reactions in the context of COVID-19.
Till 31-Dec-2020. No Tweets specific to COVID-19 vaccinations.
From 01-Jan-2021 to 07-Jan-2021: No Tweets.
From 08-Jan-2021 to 15-Jan-2021: 4 Tweets.
This isn’t straightforward, of course, and there was always going to be a tradeoff to be made between priming people about the realities surrounding the rollout of vaccines with contracted development periods, and potentially scoring a self-goal by sowing doubt early on - I acknowledge that. But making public health a nationalism issue was never going to be the right call, because reality, when it hits, knows no nationality.
Meanwhile, I’ve set up some custom searches on Twitter to look for tweets pertaining to:
Vaccines, effects and India - link. The intent behind these filters was to track narratives around side-effects and adverse events.
Vaccines, religious references and India - link. These filters were meant to track narratives with religious/nationalistic references. Definitely, the uglier of the two.
Meanwhile in India
I guess I missed this last year - but some time in March 2020, people in Maharashtra were arrested for spreading rumours about bird flu. Now, the minister of animal husbandry and dairy development has warned that it could happen again. [TOI]
FirstDraft put out a Hindi version of its guide on reporting on disinformation. This is good news, and I believe we need this in more languages.
In Andhra Pradesh, 5 people were arrested for ‘spreading false news of vandalisation of idols on the arch at the Singarayakonda Lakshmi Narasimha Swamy temple on social media.’ [New Indian Express]
Remember the ‘CensorWebSeries’ hashtags I analysed in Edition 6? Well, they can claim another victim, since the makers of “Tandav” have said that they will make ’suggested’ changes to the show [TOI]. Of course, it wasn’t just the hashtags. The UP Chief Minister’s media advisor had told the crew to prepare for arrest after an FIR was registered in the state for hurting religious sentiments [The Free Press Journal]. And the Information and Broadcasting Ministry sent them a notice as well [Medianama]. The Ministry is also working on regulation for OTT platforms [Hindustan Times].
According to government officials familiar with the matter, the issue of self-regulation in digital media was taken up at the highest levels this month and ministry of Information and Broadcasting has decided to frame an overarching statute under which digital media can regulate itself.
Benjamin Strick struck again (running joke, I tweet this at him often). No, but seriously, check out his investigation of a ‘copypasta’ campaign [meaning they copy-paste the same content again 🤷♂️] targeting the TMC in West Bengal. Aside: There’s going to be a lot more in the run up to elections in the state.
Benjamin Struck #3
Benjamin Strick @BenDoBrownA copypasta campaign in India is targeting politically-sensitive tags #TMCHataoBanglaBachao & #KrishokSurokhaAbhijan on Twitter. I captured data over the past week for analysis. Report: https://t.co/1kubd3ctZ6 CC @TwitterSafety Here's some findings in this #OSINT thread 🧵👇 https://t.co/QUPib6i8Hi
Related: Ishaana Aiyanna and Priyam Nayak wrote about disinformation and the West Bengal elections in December, 2020 (A Prelude to the West Bengal Election Disinformation Campaigns).
In an unfortunate case of real-life ‘minority report’ meets ’thoughtcrimes’ - Kunal Purohit’s story on the arrests of Munawar Faruqui, Prakhar Vyas, Nalin Yadav, Pratik Vyas and Edwin Anthony [Article 14].
“They were going to do it, anyway. All of their jokes were about Hindu gods and goddesses. It isn’t as if they would have not cracked these jokes if there was no hungama."
And the follow-up story, about the U.P. Police registering an FIR around 9 months after a complaint based on a YouTube video.
YouTube and Whatsapp are on their way to the 500 million users mark in India, reports Manish Singh [TechCrunch]. YouTube should be opening itself more to information disorder research.
Nayantara Ranganathan writes about the link between an emoji and majoritarianism in India. [Under a Blood-red Flag - Logicmag]
The central danger of the emoji is that it is as generative of people’s ideas about what counts as Hindu or Indian as it is reductive of Hinduism’s and India’s complexities. … What the consortium has helped give permanence to in this case is the Rashtriya Swayamsevak Sangh’s vision for Hindu supremacism in India and worldwide. It is a small but potent example of the way that Silicon Valley has chosen to make money from hate.
Leaked WhatsApp chats once again dominated the news..er.. internet outrage cycle as conversations between Arnab Goswami and Partho Dasgupta (former CEO of the Broadcast Audience Research Council - BARC) became available in the public domain. I won’t go into the details.
Here’s a set of tweets from Altnews’ @zoo_bear using the hashtag Arnab, that has many excerpts.
Not surprisingly, TV channels were late to this. And for some reason, TimesNow is now promoting their tweets related to this story. What did I do to get targeted?
Around the world
About three-quarters of Australians who responded to a poll believed that the Prime Minister has a responsibility to "clearly and publicly criticize" members of his governing Liberal National Party coalition who spread misinformation about the pandemic. [BusinessWorld via ANI/Xinhua]
Twitter locked the account of China’s U.S. embassy on the grounds that it dehumanised a group of people [Hindustan Times].
Related 1: Kabir Taneja writes about the intersection of Social Media, Foreign Policy and Extreme Narratives [GNET]. The conclusion:
The gaps between domestic politics, posturing, and international affairs being blurred can cause strains in foreign policy outreach beyond just government-to-government affairs. Going forward, deeper understanding of where social media fits in between foreign policy and public diplomacy will require clarity
Related 2: If you want to search through tweets by Indian embassies for certain keywords - here is a custom search that will help you do just that. I took the extreme narratives bit seriously and designed this one around China, Pakistan and minorities - but you can edit the terms easily enough.
Google announced that it will give USD 3M to news and fact-checking orgs to combat vaccine related misinformation [Nieman Lab].
Google’s new fund is open to news organizations of any size, as long as they can demonstrate experience with debunking false information or form a partnership with a recognized fact-checking organization. Projects that demonstrate “clear ways to measure success” and aim to reach groups “disproportionately affected by misinformation” will be prioritized, Google’s news and information credibility lead, Alexios Mantzarlis, wrote in Tuesday’s announcement.
In the run up to elections in Uganda, Facebook shut down Uganda government-linked accounts [DW]. And then the government shut down the internet itself (on Jan 13). It restored the internet on 18th January, only to block social media platforms again [Quartz Africa]. Embedded tweets showed how the shutdown looked across Google products - YouTube got blocked a little before the complete shutdown (analysis: kaggle.com/vinifortuna/ug…) - and that the blackout was still going after 20+ hours.
Doug Madory @DougMadoryAlmost exactly 10 years after internet shutdown in Egypt, the govt of Uganda has ordered the "suspension of the operation of all internet gateways" blacking out internet service across country during national election. #KeepItOn #UgandaDecides2021 https://t.co/8rTpUVfEmI
An AP wire story identifies podcasts as a loophole in social media content moderation (Hello Clubhouse, Twitter Voice, and just about every platform that lets you voice (and video) chat) [Hindustan Times]. Alex Kantrowitz for OneZero, on the moderation wars coming to Spotify, Substack and Clubhouse.
As smaller platforms take off and fill with content, the cost to moderate can be overwhelming. “That’s why you see so much emphasis on automation,” the former Twitter employee said. “Taken to its logical conclusion, you could have a full federal jobs program moderating content on a platform like Facebook or Twitter.”
Trump, Parler and their terrible, horrible, no-good, very bad week
Reactions/Views/Tweets about Donald Trump and Parler’s deplatforming continued to pour in.
FirstDraft has a Google Doc compiling platform reactions to the events of 06-Jan-2021.
3 takes to read:
Heidi Tworek [CIGI] : The Dangerous Inconsistencies of Digital Platform Policies
But platforms affect billions around the world beyond the United States. Policies cannot simply be made based on when horrific events occur at the US Capitol. For all the problems of platforms not reacting until after January 6, they barely react at all to similar threats in other countries. As Dia Kayyali tweeted on January 10, “Dear (most white, “western”) people exclaiming over the de-platforming of Trump: the rest of the world is watching and shaking their heads, knowing unless something massive changes they’ll continue to be ignored as states use social media to incite atrocities. When platforms weigh priorities, are 5 dead people in Washington DC heavier than all the bodies in India or Myanmar or the many other places states use social media to incite violence?”
In 2020, platforms did things that executives had often said could not be done. It is crucial to debate whether these actions addressed the right problems in the right places or the right ways. But another important question is why platforms acted when they did. The when tells the rest of the world everything it needs to know about who really counts for platforms. Changing these dynamics will be a crucial challenge in 2021.
As social media companies have gotten more involved as intermediaries in news and political coverage, the difference between how they present themselves and how they actually function has been reaching a breaking point.
This is why, in the past few years, we have begun to see platforms make decisions that implicitly, if not explicitly, acknowledge their roles as media companies.
If they acknowledge it too openly, that would put them at risk of increased regulation and oversight, and it could potentially put them on the hook for more costly and robust moderation decisions. It would also force them to develop a more rigorous and consistent approach to the difficult decisions about which voices deserve to be amplified.
After Trump, there were calls for suspension of the accounts of the Brazilian President and Indian Prime Minister. The tag #ModiNext was active for some time too. I thought it was because of this:
It would seem logical to ban from Twitter for inciting the mass murder of more than 2,000 Muslims in 2002 and then another 50 in 2019, not to mention the way in which he has mass radicalized Indian society to pre-genocidal levels.
I wasn’t able to do a full tweet analysis, but a quick Hoaxy run indicated that it likely was not. The biggest cluster was due to OpIndia. Here’s a gif for your viewing pleasure.
Meanwhile, Ryan Mac and John Paczkowski’s Buzzfeed News article about Parler and AWS says this:
In an email obtained by BuzzFeed News, an AWS Trust and Safety team told Parler Chief Policy Officer Amy Peikoff that the calls for violence propagating across the social network violated its terms of service. Amazon said it was unconvinced that the service’s plan to use volunteers to moderate calls for violence and hate speech would be effective.
“Recently, we’ve seen a steady increase in this violent content on your website, all of which violates our terms," the email reads. "It’s clear that Parler does not have an effective process to comply with the AWS terms of service.”
I can think of so many others that would run afoul of this. Even comments sections on newspapers.
P.S. Has anyone looked at Splint-tok’s content moderation policies and practices?
Ethan Zuckerman and Chand Rajendra-Nicolucci on deplatforming our way to the alt-tech ecosystem:
by deplatforming toxic communities and sending them towards the alt-tech ecosystem, we may be reducing their influence, but also losing our ability to study their conversations.
This points to a larger lesson. Building a healthy social media ecosystem will be full of tradeoffs, and it’s important to understand and highlight them, not because the changes are necessarily wrong, but because examining and responding to tradeoffs will be crucial to ensuring well-meaning changes don’t cause us to take one step forward and two steps back. Alt-tech presents powerful questions about speech online. Is it better to exile toxic speech from popular platforms if it risks making communities even more extreme?
Post midnight update: I would like to thank Nick Clegg for publishing his post after I went to bed. But passive aggressive barbs aside - Facebook has referred Trump’s suspension to the Oversight Board. Which could be really interesting.
Evelyn Douek was calling for this, and somehow already has a post written up about what this could mean.
evelyn douek @evelyndouekOne interesting tidbit: because Facebook suspended Trump's *account* and not any individual piece of content, there's no appeal to the @OversightBoard. Unless Facebook refers it. But for some reason we haven't heard much about it this time... 🤔
Read Daphne Keller’s tweets on this subject too:
Anyone know if they invoked the expedited review?
There could be additional updates on this story between the time the edition gets scheduled and reaches your inboxes.
This should have been in last week’s edition, but here it is anyway. Joyojeet Pal and Ankur Sharma on the use of ‘antinational’ on Indian Politics Twitter [The Wire].
that while both the BJP and INC attack each other using ‘anti-national’, the INC focuses on the party, the RSS and a few key leaders including Narendra Modi, Sambit Patra and Amit Shah, or uses terms to target the BJP.
For the BJP, however, the most-used term is not the name of another party, but rather JNU, referring to the Jawaharlal Nehru University
2 reports based on gender and disinformation.
Oxford Internet Institute released the 2020 edition of its ‘Industrialised Disinformation’ report. The 2019 edition was called ‘Global Disinformation Disorder’.
organized social media manipulation campaigns operate in 81 countries, up from 70 countries in 2019, with global misinformation being produced on an industrial scale by major governments, public relations firms and political parties.
India has moved from being classified as ‘medium’ capacity in 2019 to ‘high’ capacity in 2020. Yay?
How are they defined?
High cyber troop capacity involves large numbers of staff, and large budgetary expenditure on psychological operations or information warfare. There might also be significant funds spent on research and development, as well as evidence of a multitude of techniques being used. These teams do not only operate during elections but involve full-time staff dedicated to shaping the information space. High-capacity cyber troop teams focus on foreign and domestic operations. They might also dedicate funds to state-sponsored media for overt propaganda campaigns.
Medium cyber troop capacity involves teams that have a much more consistent form and strategy, involving full-time staff members who are employed year-round to control the information space. These medium-capacity teams often coordinate with multiple actor types, and