Contra Pariser on Filter Bubbles
A look at the empirical evidence on the effects of algorithmic personalization
According to Eli Pariser in his acclaimed 2011 book The Filter Bubble, algorithmic personalization locks us in, well, filter bubbles. He writes: “Personalization filters serve a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown.”
Is it really that bad?
It’s certainly true that the internet creates a high-choice media environment. We have access to news and political information from a diverse array of media and news sources, more so than in the pre-digitization days. Now, Pariser fears that, in such a high-choice media environment, we’ll pick only those media and content that reinforce our existing beliefs: “Consuming information that conforms to our ideas of the world is easy and pleasurable; consuming information that challenges us to think in new ways or question our assumptions is frustrating and difficult.”
In a second step, as platform algorithms learn from the user’s choices, and users make those choices predominantly from the options promoted by the algorithms, a self-reinforcing feedback loop gradually curtails choice to an increasingly narrow and homogeneous set of options.
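To make that loop concrete, here’s a toy simulation of my own construction (not Pariser’s; the topic list and the `personalization` weight are invented for illustration). The recommender mostly re-serves topics the user has clicked before, the user clicks whatever is served, and the diet narrows:

```python
# Toy model of the personalization feedback loop (my construction, not
# Pariser's): the recommender mostly re-serves topics the user clicked
# before, and the user clicks everything it serves.
import random
from collections import Counter

random.seed(0)  # reproducible run

TOPICS = ["politics-left", "politics-right", "sports", "science", "culture"]

def recommend(history, k=10, personalization=0.8):
    """Return k items; with probability `personalization`, sample from the
    user's past clicks (weighted by frequency), otherwise uniformly."""
    recs = []
    for _ in range(k):
        if history and random.random() < personalization:
            topics, weights = zip(*Counter(history).items())
            recs.append(random.choices(topics, weights=weights)[0])
        else:
            recs.append(random.choice(TOPICS))
    return recs

history = ["politics-left"]        # a single initial click
for _ in range(50):                # fifty rounds of recommend-and-click
    history.extend(recommend(history))

print(Counter(history))            # skewed well past the 20% baseline
```

After fifty rounds, the seed topic dominates the history well beyond its one-in-five baseline; dial `personalization` down and the distribution stays close to uniform. Whether real platforms sit nearer the high or the low end of that dial is exactly the empirical question this piece takes up.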
This is bad, the worry goes, because it increases polarization and threatens democracy by limiting political information and discussion.
Of course, the reverse is also possible: in a diverse media environment, chances are the information we’re exposed to is also diverse. But, with Pariser, many commentators seem to believe that this feedback loop of self-selected and algorithmic personalization makes more control over opinion exposure mean less exposure to opinion-challenging information. Or in plainer words, as Bill Gates put it, “[Technology such as social media] lets you go off with like-minded people, so you’re not mixing and sharing and understanding other points of view … It’s super important. It’s turned out to be more of a problem than I, or many others, would have expected.”
The unbearable sameness of search results
So that’s the theory, but what do the data say?
In his famous book, Pariser offers mainly anecdotal evidence. His favorite factoid in support of his hypothesis is that Google now personalizes search results according to 57 different signals. He had some friends Google some terms, and they got different results.
But since it’s just one case, this is hardly compelling evidence. In fact, Google already claimed a decade ago that Pariser’s experience with divergent search results was atypical, and follow-up research seems to support that interpretation. Studies show that only around 11% of Google search results differ due to personalization; in other words, results are roughly 90% identical across users.
Personalization on news websites is still in its infancy, and even less prevalent than in search. For example, Haim et al. (2018) tested Google News by establishing a small number of fake user profiles with different thematic interests, expressed both explicitly through personalization settings and implicitly through simulated search and browsing histories. After an initial training phase, they used each profile to query identical terms and assessed differences in the results. The study found minor or no effects on the diversity of content and sources in search results.
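The comparison such audits run is simple to sketch. A minimal version (with hypothetical URLs standing in for real search output) just measures how much the result lists of two profiles overlap for the same query:

```python
# A minimal sketch of the comparison such audits run. The URLs are
# hypothetical placeholders, not real search output.
def jaccard(a, b):
    """Share of results two lists have in common (rank ignored)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Top-5 results two trained profiles might get for the same query.
profile_a = ["url1", "url2", "url3", "url4", "url5"]
profile_b = ["url1", "url2", "url3", "url5", "url6"]

print(f"overlap: {jaccard(profile_a, profile_b):.0%}")  # overlap: 67%
```

By measures of this kind, the studies cited above put results at roughly 90% similarity across users.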
Such findings disprove Eli Pariser’s foundational premise that search results are highly personalized.
The irrelevance of social media
Rather than search engines, you may point to social media as facilitating echo chambers. The evidence on this point is limited because most data is proprietary. In the most notable study, Facebook researchers estimated that the News Feed algorithm reduced exposure to cross-cutting material for users who self-identify as conservative or liberal by 5 and 8 percent, respectively. However, only about 4 percent of users include their political preferences in their profile, making it difficult to generalize.
Perhaps counterintuitively, I don’t think social media can be an important source of echo chambers, because it’s not where most people get their information in the first place. Few people even have a Twitter account, for example. And only 11% of Americans say they get news from social media “often”; for the top news sites, social media referrals represent only about 10 percent of total traffic.
In his TED Talk, Pariser suggests that using Facebook means “you’ll tend to see significantly more news that’s popular among people who share your political beliefs … The Facebook news feed algorithm in particular will tend to amplify news that your political compadres favor.” The mistake here is to assume that people are on Facebook to talk about politics, or that they make their friend and follower connections primarily because of shared political ideology. Surveys indicate this is a minority approach: “Only 23% of Facebook users and 17% of Twitter users say [that] most of the people in their network hold political beliefs similar to theirs.” This means the vast majority of Facebook and Twitter users are exposed to a variety of political perspectives, if their connections talk about politics at all.
All in all, if filter bubbles exist, it’s not because of what our Facebook timelines and search engine results (don’t) show us. Either those don’t show enough variation to create echo chambers, or they hardly factor into what information we receive about the state of the world in the first place.
Self-selected personalization?
So it seems new media technologies aren’t pushing us into disconnected informational spaces. But maybe human psychology is enough to create them? After all, we prefer reading something we agree with over reading something uncongenial, right?
Actually, empirical studies tend to offer little support for the notion that we avoid or seek political information depending on its anticipated content.
On television, for example, media outlets with a significant partisan or ideological slant simply do not reach most of the population.
In fact, usage of isolated media outlets wouldn’t even be evidence of being in an echo chamber, because it turns out that those who get a lot of partisan information also consume an above-average amount of mainstream news.
Accordingly, in the current fragmented media landscape, people can and in fact do access an abundance of news sources. In surveys, only around 8% of participants score so low on measures of media diversity that they could be considered at risk of living in an echo chamber, visiting just one or two news services without other perspectives.
If filter bubbles exist, the empirical evidence suggests they are a reality for relatively few people.
In reality, most people by far still get their news via traditional sources, in Europe most notably via public-service television. And even among partisans in the US, the media diets of Republicans and Democrats are, apart from a small minority of extremists, actually quite similar.
Even granting that filter bubbles exist, I want to add, the filter-bubble theory in its full glory needs more than evidence that some group on some social media platform appears disconnected from the wider world. That’s not enough, because real people do not participate in only a single social media space or a single platform. Someone who lives in an apparent filter bubble on Facebook while consuming a broad diet of mainstream news through other channels does not actually live in an information cocoon.
The rareness of information bubbles is not, I think, surprising. The entire premise that we prefer reinforcement over being in the know strikes me as false. Living in a filter bubble makes for lonely coffee breaks.