The Trolls from Olgino, the Saboteurs from Menlo Park

There's a lot in the news this morning about online influence campaigns conducted by the Internet Research Agency, a propaganda firm with close ties to the Russian government. Two reports were prepared for the Senate Intelligence Committee: one by an Austin-based security firm called New Knowledge, and the other by the Oxford Internet Institute's Computational Propaganda Project.

As of today, both are freely available online. Here's the full New Knowledge report; here's the full Oxford Internet Institute report.

This is the first time we've really heard about Instagram being used for an influence campaign, but it shouldn't be a surprise: if I say the word "influencer", it's probably the first platform that you think of. Like any decent digital advertising campaign, this one was cross-platform, recognizing that different demographics and communities engage on different sites. In a world where 44% of users aged 18 to 29 have logged out of the Facebook mothership, any campaign hoping to reach young people would have to include Instagram. And of course, that's why Facebook bought the service to begin with.

News stories continue to paint this as some kind of highly sophisticated propaganda program masterminded by the Russian government. And it does seem like the Russian government was involved in this influence campaign. But this is how modern digital campaigns are run. People have been building Facebook Pages to gain as many likes as possible since the feature was released, precisely so they can monetize their posts and potentially sell them on to people who need to reach a large audience quickly. Influencers - people who are paid to seed opinions online - will represent $10 billion in advertising spending by 2020.

It is, of course, deeply problematic that a foreign influence campaign was so widespread and successful in the 2016 election - I have no desire to downplay this, particularly in our current, dire political environment. But I also think we're only skimming the surface: because of America's place in the world, it's highly likely that there were many other parallel influence campaigns, from both foreign and domestic sources. And all of us are subject to an insidious kind of targeted marketing for all kinds of things - from soft drinks to capitalism itself - from all kinds of sources.

The Iowa Writers' Workshop is one of the most influential artistic hubs of the twentieth century. Over half of the creative writing programs established after its creation were founded by Iowa graduates; it helped spur the incredible creative boom in American literature over the next few decades. And its director, Paul Engle, funded it by convincing American institutions - like the CIA and the Rockefeller Foundation - that literature from an American, capitalist perspective would help fight communism. It could be argued that much of the literature that emerged from the Workshop's orbit was an influence campaign. More subtle and independent than the social media campaigns we see today, for sure, but with a similar intent: influence the opinions of the public in the service of a political goal.

And of course, Joseph Goebbels was heavily influenced in his approach by Edward Bernays, the American founder of modern public relations, who realized he could apply the principles of propaganda to marketing. Even today, that murderous legacy lives on: the Facebook misinformation campaigns around the genocide in Myanmar are its spiritual successor.

So political influence campaigns are not new, and they have the potential to do great harm. The Russian influence campaign is probably not even the most recent event in the long history of information warfare. While it's important to identify that this happened, and certainly to root out collusion with American politicians who may have illegally used this as a technique to win elections, I think it's also important to go a level deeper and untangle the transmission vector as well as this particular incident.

Every social network subsists on influence campaigns to different degrees. There's no doubt that Facebook's $415 billion market cap is fuelled by companies who want to influence the feed where half of adults - disproportionately from lower incomes - get their news. That's Facebook's economic engine; it's how it was designed to work. The same is true of Instagram, Twitter, and the rest, with the caveat that a feed with a lower population density is less valuable, and less able to have a measurable impact on the public discourse at large. There's one exception: while Twitter has significantly lower user numbers, it is heavily used by journalists and educators, who are then liable to share information gleaned there. Consider the number of news stories of the form, "here's what Trump tweeted today," which are then read by people who have never logged on to Twitter and wouldn't otherwise have seen the tweets.

The root cause of these misinformation campaigns is that people will do whatever they can to obtain, or hold onto, power. I don't think solving this is going to be possible during the entire remaining span of human civilization. So instead, let's think about how we can limit the "whatever they can" portion of the sentence. If people are going to use every means at their disposal to obtain power, how can we safety-check the tools we make in order to inhibit people from using them for this purpose?

Moving on from targeted advertising is a part of the answer. So is limiting the size of social networks: Facebook's 2.27 billion monthly active users are a disease vector for misinformation. As I've written before, its effective monopoly is directly harmful. Smaller communities, loosely joined, running different software and monetized in different ways, would make it much harder for a single campaign to spread to a wide audience. Influence campaigns would continue to run, but they would encounter community borders much more quickly.

A final piece is legislation. It's time for both privacy and transparency rules to be enacted around online advertising, and around user accounts. For their protection, users need to know if a message was posted by a human; they also need to know who placed an advertisement. And advertising for any kind of political topic in an election period should be banned outright, no matter who placed it, as it was in the UK. You can't have a democratic society without free and open debate - and you can't have free and open debate if one side is weighted with the force of millions of dollars, Facebook's market cap be damned.


Photo by Jakob Owens on Unsplash
