On the eve of another election back home – June 2 is voting day in Ontario – spare a thought for artificial intelligence (AI) as it creeps into democracies here and around the world.
When Twitter drives people crazy, mind-control algorithms are blamed for spreading extremist tweets.
When Facebook is blamed for electoral losses, Russian AI is accused of distorting democratic discourse.
Will all these algorithms, which feed on fear and loathing, continue to polarize our politics and undermine democracies?
These are the questions we tackled at a conference on the risks — and realities — of AI at the Munk School of Global Affairs & Public Policy. The first thing you learn about machine learning, the dominant form of AI, is not to let its daunting complexity scare you away.
AI is not rocket science, nor is it reinventing political science. It’s a data tool that can be used and abused, depending on how humans — more specifically, politicians — handle its powers.
The panel I moderated — on AI and the future of democracy — produced some mildly terrifying but also surprisingly reassuring predictions (full disclosure: I’m a senior fellow at Munk).
Our artificial intelligence guru – Henry Farrell, a political scientist at the Johns Hopkins School of Advanced International Studies – shared his research on how AI interacts with human and political intelligence (he also runs the “Monkey Cage” blog on democracy at the Washington Post).
Farrell tackled head-on the proposition that AI weakens the West while strengthening the rest. The fear is that algorithms are flooding democracies with “destructive nonsense, while undemocratic regimes are stabilized by the combination of machine learning and surveillance.”
There are growing predictions that a future AI autocracy like China could “beat democracy at its own game” and supplant our system of government. But his research suggests that dictators who rely on AI tools for surveillance and repression are sowing the seeds of their own destruction through isolation. By suppressing individual expression and reinforcing their own top-down styles of governance, they are flying blind – without the benefit of crowd-sourcing, whether at street level or online.
Yet democracies still face undeniable challenges in coping with the rapid iterations and gyrations of machine learning; no one disputes that AI can codify and amplify our worst impulses.
The algorithms that underpin Twitter and Facebook are quantitative but also predictive: they analyze a large number of past decisions in order to anticipate or influence future impulses. Amazon’s website makes money by recognizing patterns in your purchase history to recommend books or boots it thinks you’ll buy next; TikTok studies your penchant for cat videos (or NDP leader Jagmeet Singh’s dance moves) before flagging more to keep you engaged for advertisers.
In fact, political operators have been exploiting similar marketing research tools for decades – drawing on the statistical power of public opinion polls, the magic of focus groups and the dark arts of attack ads. More recently, census data and other valuable databases have been harvested and mined, sliced and diced, to micro-target sub-demographic groups with uncanny quantitative precision and predictability, but without accountability.
It’s far from just licking envelopes, and it came long before AI. Algorithms only reinforce the old objectives of political combat with new weapons.
The best evidence suggests that these platforms – from embryonic search engines to early social media – were well on their way to coarsening social discourse even before they armed themselves with algorithms that prioritized controversy over chronology in their online feeds (the old Facebook displayed posts in sequence; the new algorithms rank content by likes and shares, pushing the most engaging posts to the top of your feed). All of this suggests that the central, pre-existing issue is how social media brings out our unfiltered, uninhibited inner selves – rather than just the AI technology that relies on retweets for rankings.
The possibility that AI techniques can “reshape people’s political views” remains unproven: “It’s really, really hard to persuade people of things they don’t want to believe in the first place,” Farrell noted.
Rather than simply scapegoating AI for the decline of democracy, we also need to look at the already entrenched trends that have been undermining our discourse for decades.
“If we didn’t have machine learning, if we were back in the mid-1990s, we probably wouldn’t be in a very different world,” Farrell explained. “It would probably happen anyway.”
Which reminds me of how opponents of gun control like to assert, seductively, that guns don’t kill people, people kill people. By analogy, AI doesn’t drive people crazy, people drive people crazy (crazy as in angry, but also as in mad).
The thing is, machine learning isn’t entirely innocent – it’s an accomplice: by weaponizing whispers of disagreement into mass discord, AI turns knife fights into gunfights.
My biggest concern is that the mass media continue to publicize social media, amplifying its algorithm-driven content to an even wider audience. Major newspapers obsessively shine a spotlight on the dark recesses of Twitter – a platform where relatively few real people (beyond bots) spend much time tweeting.
And then there is American television. Despite all the attention paid to Twitter’s secret algorithms – the black box of machine learning – Fox News’ open manipulation of American public opinion is unfolding in plain sight.
So who is the bigger influencer (or manipulator) – TV or AI?
Farrell’s answer is that it is a symbiotic relationship. Fox harvests extremist tweets from outliers on Twitter – trawling for trolls – in order to feature them on its shows, acting as a “conveyor belt” from social media to mass media.
This means that AI is not so much controlling minds as influencing how the body politic views itself. By wielding a distorted “mirror” that primarily reflects what Twitter trolls say, rather than what society quietly thinks, algorithms cloud our discourse.
That said, much of the anger captured by social media can be real, even if virtual. It appears online in real time, rather than at election time; and even as people dig deeper into echo chambers and rabbit holes, their grievances may still have real roots.
“Internal flaws within democracy are perhaps exacerbated by social media, but have more fundamental causes,” Farrell concludes. “People get angry when they don’t see any more opportunities for their children.”
A timely message for politicians across the province as they prepare to embark on the campaign trail: don’t blame the algorithm for the anger.
But here is another trap of polarization. People can start giving up on elections if they think too many heads are in the sand – or stuck in a rabbit hole:
“If you think your fellow citizens are brain-controlled zombies unresponsive to argument, you won’t be so interested in democracy,” Farrell said.
It turns out that the lingering danger to democratic idealism is not the algorithm but cynicism. Tune down the noise, but don’t tune out the elections.