AI-filtered humanity produces a bitter brew.
Normally, my next sentence would tell you why. Hook, then sinker: the standard opening formula of writers everywhere.
But I’m breaking that rule today because that’s not what I originally wrote, and I want you to know why.
The original sentence, which came to me fully formed as I opened my eyes Sunday morning, was this: Humanity unfiltered produces a bitter brew.
Yes, I thought, I’ll write about the challenge of online engagements that seem to have no end, building off the research I’d done for my latest CBC Radio appearance about the debate swirling around the streaming hit Beef and the casting of agent provocateur David Choe.
So, I hopped online to do some research and fell down a rabbit hole about rape culture, Hollywood nepotism, Asian representation, and the problematic Friday afternoon news drop.
It fried my brain and sent me running to the couch and Ted Lasso, the cure for my latest overdose of Internet outrage.
I’m no stranger to angry people.
I’ve interviewed hundreds of them, inspired many angry letters to the editor, including one from a former prime minister, and been the recipient of public callouts, including from a neighbour who once yelled as I walked by her house, ‘Why can’t you ever write anything nice?’ Hello to you too, Mae.
But despite my familiarity with outrage, what we’re experiencing online is different.
It’s a phenomenon known as the filter bubble, a term coined by Eli Pariser in 2010 to describe the intellectual isolation caused by the algorithmic personalization of Internet searches, websites and social media.
Over the years, I’ve read, written and thought a lot about filter bubbles and the unique challenge they pose for democracy, public discourse and people-centred innovation.
Yet, despite my knowledge of their effects, I am, as I realized this week, not immune to their influence.
Filter bubbles are a by-product of digital networks.
They are created by artificial intelligence that uses our individual and collective digital engagements, including data from smartphones that passively listen to real-life conversations, to pre-select, highlight and filter online sources that reinforce our existing views, desires and habits.
We don’t control our filter bubbles; some distant corporation’s AI does.
When Pariser first posited his theory, digital media evangelists pushed back, arguing that search engines and social media simply enabled people to behave online as they did in real life.
We choose who we hang out with, who we listen to and where we get our news, a set of habits collectively known as an echo chamber. All Google and Facebook have done, the argument went, is enable us to extend our echo chambers into the digital space.
Yes, but there is one clear difference: we control our echo chambers and can choose to step outside them; we don’t control our filter bubbles, and as I experienced this week, it is tough to travel beyond them.
The vastness of the Internet combined with AI’s power makes it next to impossible for us to discover ideas that significantly diverge from our own.
It’s why we could each google ‘Beef’ and ‘David Choe’ and get a different first page of links, and why we continue to hold such strong opinions about ‘Justin Trudeau,’ ‘Meghan Markle,’ ‘Donald Trump,’ ‘Ukraine,’ ‘vaccine,’ and ‘pineapple on pizza.’
We click on a story, and that choice prompts the AI to offer similar stories that take us deeper into a singular way of thinking.
Every day we make trivial and substantive choices via Internet searches and social media scrolling powered by AI engines that use our interests to further the commercial and political interests of others.
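For the technically curious, here is a minimal sketch of that feedback loop, with invented story titles and a toy tag-overlap score standing in for the vastly more complex signals real platforms use; it illustrates the dynamic, not anyone’s actual algorithm.

```python
# A toy illustration of the feedback loop described above: a naive
# recommender that scores stories by similarity to what we've already
# clicked. Every click narrows the pool of what we see next.
# Hypothetical data and scoring; real platforms use far more signals.

from collections import Counter

STORIES = {
    "Choe casting controversy, explained": {"beef", "controversy", "hollywood"},
    "Timeline of the David Choe backlash": {"beef", "controversy", "timeline"},
    "Inside David Choe's mural work": {"art", "profile"},
    "Beef and Asian representation on TV": {"beef", "representation"},
}

def recommend(click_history, stories, k=2):
    """Rank unread stories by tag overlap with everything clicked so far."""
    seen_tags = Counter(tag for title in click_history for tag in stories[title])
    def score(title):
        return sum(seen_tags[tag] for tag in stories[title])
    unread = [t for t in stories if t not in click_history]
    return sorted(unread, key=score, reverse=True)[:k]

clicks = ["Choe casting controversy, explained"]
for _ in range(3):
    top = recommend(clicks, STORIES)
    if not top:
        break
    clicks.append(top[0])  # we click the top suggestion, deepening the loop

print(clicks)
```

Run it, and the critique-adjacent pieces surface first on every pass; the art profile, sharing no tags with the first click, comes last. That is the bubble in miniature.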
This is why when I, a woman who regularly reads stories about feminism, social movements, and Hollywood, Googled ‘David Choe,’ the Internet’s front page fed me a series of think pieces, commentaries and timelines detailing why I should question his casting. It took me a few tries to locate examples of his art, his articles and the defence of his casting; I kept being fed the critique of it instead.
AI-generated filter bubbles don’t build trust; they raise doubt.
By reinforcing our biases, filter bubbles harden our resistance to other ways of thinking, working and living.
It is a form of digital power, harnessed to influence and direct public sentiment, shifting economic and political advantage toward those who can afford to command it.
The consequence of our passive acquiescence to AI is an angry, anxious and increasingly risk-averse world.
One of the public’s misperceptions about AI is that it is only just starting to arrive, courtesy of SaaS products such as ChatGPT, Microsoft Azure and Adobe’s AI-powered design tools.
While it is true that 2023 marks the year of widespread public engagement with AI, we’ve been living with its influence for close to 20 years, via search engines, big data, machine learning and IBM’s Watson winning that game of Jeopardy! Call it what you want; it’s all AI.
That’s long enough to step back and consider what it’s done to us and what we want to do about it.
Because if we’re ever to reduce economic and political volatility, we need the next generation of AI to produce a more balanced brew.