Trouble in the bubble

How technology made it difficult to understand the world

Per Grankvist
21 min read · Feb 6, 2018

Being well-informed is especially important if you have the power to unleash fire and fury upon another nation with the intent of totally destroying that nation. Which is why the President of the United States receives a daily brief containing important and classified documents, early each morning, irrespective of where in the world he might be. One of the folders included in the daily brief contains the latest analysis of issues related to national security and global conflicts from the Office of the Director of National Intelligence. Another folder contains documents with background information on the day’s meetings, who is to be involved, and any topics likely to be raised. A third folder contains a summary of public opinion and press clippings exemplifying how the President and the agenda he is pursuing are currently being portrayed in the media. The purpose of these folders is to offer the President a complete overview of the situation at home and abroad, based on robust documentation, so that he or she is able to make balanced decisions with the nation’s best interests at heart.

Naturally, different presidents have had varying wishes as to what should be included in the brief and how often it should be presented. John F. Kennedy preferred a concise briefing in a format that allowed him to carry the most essential information in the inner pocket of his jacket. George W. Bush did not want to be briefed on Sundays. Barack Obama liked to receive all folders and information digitally. Donald Trump has his own opinions about the content of the briefing folders themselves. He only wants positive news.

White House staff is, therefore, tasked with sorting through the media stream to identify the most positive news reports on the President and his policies. Their task is to find 20–30 items, twice a day. The content is a mixture of positive assessments from TV morning show hosts, admiring tweets, articles reprinting favorable quotes about the President from people on the street, appreciative social media posts, homages in the conservative or right-wing press, and the occasional image of Mr. Trump looking imposing in a television appearance. On days when positive news is thin on the ground, staff in charge of putting together the folder have been known to wander the White House in search of favorable images of the President for inclusion in what is known internally as “the propaganda folder”. The purpose of the folder is to keep the famously ill-tempered and egocentric President in good spirits by enabling him to maintain a distorted image of himself and the world.

How do you go about obtaining a correct image of the world? One way is to follow public discourse to see what various pundits think about current events. If you’re like me, this means that you read the newspapers or use a news app on your phone. You may scroll through what people are saying on Twitter or your feed on Facebook.

Listening to the arguments of both sides in a debate is, however, not the same as understanding what is true and what is not. If person A claims climate change is man-made and person B claims it is not, the truth is not somewhere in between. Besides, what you’re getting in your feed might only be voices that sound like person B, declaring that the talk of climate change is exaggerated or that it’s all a conspiracy by the left.

During an election campaign, getting an objective view of the world is particularly difficult. Partisan news and targeted ads only confirm what supporters of a particular candidate or political party already believe to be true and reinforce their biases about their opponents. Each side seems to convey an image of “the other side” as a group of people that always provides an exaggerated or false view of reality.

One used to be able to rely on media to get an objective view of the world. Impartiality is a basic tenet of journalism, and being impartial is what the craft is supposed to be about. Even if some media outlets occupy a position more to the left or right in their attitudes to the world and to power, they still share the same basic approach to the craft. An editor might choose to tell a story from a particular angle, but they will attempt to avoid picking sides, being perceived as biased, or appearing to propagandize for one point of view or the other. Truthfulness, accuracy, objectivity, impartiality, fairness, and public accountability have been part of the journalistic creed for more than a century.

The opinions of both sides in a conflict should be reported and the reader/listener/viewer should always be able to trust the facts. By comparing how different media outlets report an event, you can also see whether a news story is true (several reports independently stating the same basic facts) and obtain a somewhat more in-depth understanding of what has occurred (thanks to the various angles the outlets have chosen to adopt).

At least, that’s what it used to be like.

Over the course of a few decades, public trust in media has slowly deteriorated. Accusations that media withhold information, or refrain from reporting some news for political motives, have become more common in recent years. Some media outlets have even been publicly accused of publishing news that is completely made up in order to help one party advance its political agenda. For the average reader, listener, or viewer it is not easy to discern whether a news story is correct. Some guidance is available, such as giving due consideration to who has published the story. To complicate things, news may be published on websites that appear confusingly similar to highly reliable news sources but are actually faked for the sole purpose of fooling visitors into believing one thing or another, and then disseminating it.

One method of checking the facts, in the past and in theory at least, was to use an encyclopedia. Nowadays, we “google” instead. Search engines, as we all know, work quite differently from an encyclopedia. The results displayed when you search are not sorted based on their veracity, but rather on their relevance. Or, more precisely, on what others searching for the same thing consider relevant. If the question is a simple one, such as who is the current President of the United States, the answer will generally be correct. If the question is more complicated, such as whether or not Donald Trump is considered to be a good president, there will seldom be a uniform answer, which makes it difficult to check whether something is true or false. In addition, it is often said that Google adapts its search results to the individual user, making the work of fact-finding even harder, as the answer depends on who is asking.
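To make the distinction concrete, here is a toy sketch of popularity-based ranking. It is not a description of any real search engine: the pages and click counts below are invented, and real engines weigh a great many more signals. The point survives the simplification, though: the ordering is driven by what other searchers clicked, not by whether a page is accurate.

```python
# Toy search ranking. Pages, click counts, and the "accurate" flag
# are all invented for illustration.
results = [
    {"page": "climate-hoax-blog.example", "clicks": 9400, "accurate": False},
    {"page": "nasa.gov/climate", "clicks": 7100, "accurate": True},
    {"page": "encyclopedia.example/climate", "clicks": 800, "accurate": True},
]

# Rank purely by popularity; the "accurate" flag plays no part in the order.
for result in sorted(results, key=lambda r: r["clicks"], reverse=True):
    print(result["page"], result["clicks"])
```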

If you are like me, and many others, you will obtain a great deal of your news via Facebook. Mixed in with updates from friends and acquaintances, you will find links to the news currently engaging people the most. Once upon a time, Facebook was just one app among many on our phones. Today, it is a political and cultural force with worldwide influence and, for many people, the effects of this change first became apparent in conjunction with the US presidential election of 2016. When the companies that created the platforms we call “social media” talk about their users, they are prone to rhetoric. Users are “members” and are described as being part of a digital “community.”

Once upon a time, Facebook was just one app among many on our phones. Today, it is a political and cultural force with worldwide influence.

When these companies first appeared, they promised to provide platforms unparalleled in human history: a forum where all stakeholders — citizens, politicians, media, NGOs (non-governmental organizations), and businesses alike — would be able to congregate and discuss the issues close to their hearts. In this new public sphere, the lonely were to find soulmates, engagement was to be born and flourish, and all voices were to be heard and given the chance to participate in a common discussion about the things that meant something to them.

It felt like a new kind of freedom, and the social media platforms seemed to deliver what they promised, but, in reality, this was never anything but a commercial opportunity. Companies behaved like liberal thinkers and used language reminiscent of democratic institutions, despite the fact that they determined all of the rules. To behave like a quasi-state may sound cynical, but cynicism is not an adequate explanation for how the current state of affairs came to be. It is not a trait that I have come across particularly often among entrepreneurs during the almost twenty years I have been closely following the development of different digital services. It is, however, common to harbor a naive belief in the digitalization of society as solely a positive development for citizens.

Of course, in many cases, users have proved to be equally starry-eyed. During the development phase of these privately-owned digital societies, the image of the platforms as tools enhancing democracy has served corporations well. It is easier for users to share large parts of their private lives with a business that lives by capitalizing on its users’ data if they believe that the company is somehow something more. As these large online platforms have grown, thereby influencing an ever-increasing proportion of the public sphere, their influence over that sphere — both positive and negative — has become more obvious.

Nowadays, it is meaningless to divide reality into the physical and the virtual. The digitalization of everyday processes has fused these together. Atoms and bits occur helter-skelter. Digitalization has resulted in what was once static becoming elastic, often adapted to whoever is using it. The intention is good. The optimistic view of the effects is often extremely naive. As an example, it sounds handy that news services customize and filter content based on user preferences, but the effect is simultaneously to reduce our ability to mentally collate a folder of news clippings when we want to obtain a true picture of reality. Like it or not, technology is making it harder to understand the world as it creates a big bubble around us.

It is meaningless to divide reality into the physical and the virtual. The digitalization of everyday processes has fused these together. Atoms and bits occur helter-skelter.

Algorithms, filter bubbles, and fake news

In order to obtain new knowledge, we must trust others who know more. Human knowledge is essentially social. This means that the more people we have contact with, the more likely we are to obtain new knowledge. From a historical perspective, people in general have always benefited from as many people as possible having a forum to share their knowledge. The flipside of the democratization of the public sphere that the Internet has contributed to is that it becomes difficult to decide who has useful knowledge and who is peddling falsehoods, whether accidentally or deliberately.

In order to assist their users in navigating the broad but fragmented information stream, Facebook, Instagram, Twitter, and many other companies use algorithms. “Algorithm” is simply a fancy way of describing an equation that takes a number of factors into consideration in order to produce a certain result. When you look at your Facebook feed, you will find posts from friends that appear to be in random order. Their composition is actually a selection made by an algorithm that guesses which of all your friends’ posts are the most relevant to you and prioritizes the order of that information.

Many news sites do exactly the same thing, adapting their front pages to the preferences of the visitor. The algorithms have learned that I have little or no interest in sports (I never click on links to sports-related articles) and, therefore, sports news is rarely shown in my feed. I do, however, occasionally see articles linked to horse show jumping, as I have a friend who is an accomplished equestrian, and I obviously read articles about him. Another parameter is that social media algorithms prioritize posts from those we have close social relationships with.
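A minimal sketch may make this concrete. The factors below (guessed topic interest, closeness to the author, and recency) come straight from the description above, but the names, numbers, and formula are invented for illustration; they are certainly not Facebook’s actual model.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    age_hours: float

# Invented, illustrative signals: how interested I seem in each topic,
# and how close I am to each author. Real platforms use far more factors.
TOPIC_INTEREST = {"politics": 0.9, "show jumping": 0.6, "sports": 0.05}
FRIEND_CLOSENESS = {"equestrian_friend": 0.9, "acquaintance": 0.2}

def relevance(post: Post) -> float:
    """Guess how relevant a post is: interest x closeness x recency."""
    interest = TOPIC_INTEREST.get(post.topic, 0.1)
    closeness = FRIEND_CLOSENESS.get(post.author, 0.1)
    recency = 1.0 / (1.0 + post.age_hours)  # newer posts score higher
    return interest * closeness * recency

posts = [
    Post("acquaintance", "sports", 1.0),
    Post("equestrian_friend", "show jumping", 5.0),
    Post("acquaintance", "politics", 2.0),
]

# The feed is not chronological; it is ordered by the algorithm's guess.
# Here the friend's older show-jumping post outranks a newer sports post.
for post in sorted(posts, key=relevance, reverse=True):
    print(f"{post.author}: {post.topic} ({relevance(post):.3f})")
```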

In their haste to compile feeds that are as relevant as possible, algorithms filter out everything we appear to dislike until we are surrounded solely by things we will, in all likelihood, enjoy and want to read. This is true for topics as well as for sources: if you read the New York Times, you will see more articles from that source than from others (such as Fox News). The effect is that we are left in a filter bubble in which we see only confirmation of what we already believe. This bubble grows ever more constrictive over time as the algorithm, by showing us only things we like, reinforces its own assumptions about what we like and narrows the results further. Everything else is filtered out. The result is an echo chamber in which only certain opinions resound.
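This self-reinforcing narrowing is easy to simulate. In the hypothetical sketch below, a user only ever clicks on one topic; the feed’s sampling weights drift accordingly, and within a few simulated days that topic crowds out everything else. The update factors (1.3 and 0.9) are arbitrary, chosen only to make the feedback loop visible.

```python
import random

random.seed(42)

TOPICS = ["politics", "sports", "culture", "science"]
weights = {t: 1.0 for t in TOPICS}  # the algorithm starts out neutral
user_clicks_on = {"politics"}       # the user only ever clicks on politics

def sample_feed(k=10):
    """Show k items; higher-weight topics appear more often."""
    return random.choices(TOPICS, weights=[weights[t] for t in TOPICS], k=k)

for day in range(1, 6):
    for topic in sample_feed():
        if topic in user_clicks_on:
            weights[topic] *= 1.3  # clicked: assume we want more of this
        else:
            weights[topic] *= 0.9  # ignored: assume we want less of it
    share = weights["politics"] / sum(weights.values())
    print(f"day {day}: politics makes up {share:.0%} of the feed's weight")
```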

To have knowledge implies the possession of information, facts, and skills, acquired through experience or education, that form the basis of healthy, well-founded convictions on which we can base our worldview.

However, sometimes we fool ourselves. If we are convinced that something is true, we have a tendency to seek confirmation of our own beliefs. This phenomenon is called “confirmation bias” and has been recognized by psychologists for decades. Instead of exposing our convictions to critical review, we do all we can to confirm them, ignoring information that conflicts with or challenges them. Confirmation bias, then, is the human equivalent of the algorithm and plays a central role in how people form an idea of the world: just as the algorithm compiles our social media feeds, confirmation bias compiles our picture of the world. Thus, in social media, an unfortunate reinforcement effect arises when human confirmation bias meets the algorithms’ filtering and prioritization of information. The effect is that it becomes even more difficult to construct an accurate view of reality.

Confirmation bias is the human equivalent of the algorithm and plays a central role in understanding people’s ability to form an idea of the world.

Donald Trump has repeatedly used the term “fake news” whenever he hears something that doesn’t fit with or challenges his view of the world. A folder in his morning brief containing nothing but positive news is a filter bubble in miniature, one that excludes all negativity. The effect is a well-developed immunity to facts (which in Mr. Trump’s case began to develop long before he became president). It is, however, also an example of “politically motivated reasoning,” to use Dan Kahan’s term. Mr. Kahan is a professor of law and psychology at Yale Law School who, together with his colleagues, has studied our tendency to seek confirmation of the opinions associated with our own ideological and cultural group. Their research shows that, when an issue is politically charged, we often base our convictions on emotions instead of evidence. With his frequent habit of ending his tweets with “SAD!”, Donald Trump offers a practical illustration of this phenomenon. The result of politically motivated reasoning is a polarization and negation of the facts. What we believe about various issues becomes an effect of our political thinking rather than of any well-founded knowledge or objective facts. It’s the reason why political debates are so boring to watch nowadays: the opponents are mainly occupied with expressing what they believe and think and feel, often without any connection to facts.

Confirmation bias and politically motivated reasoning create excellent conditions for those who wish to disseminate fake news. By creating headlines that appear to confirm existing convictions, the creators of fake news lead us to click on links to learn more, in the hope of having those convictions confirmed. One should keep in mind that there is money to be made in the advertising associated with these stories: the more titillating and inflammatory the headline, the more money its creators make. This is a point worth making, as some of them don’t care about the ideology at all and are only in it for the money.

One of the most notorious examples of fake news during the 2016 US presidential campaign was the Pope’s supposed support for Donald Trump. Over one million people liked and shared this false article. (The Pope does not endorse political candidates.)

If we compare the 20 most popular news articles about the US election on Facebook in the four months preceding election day in November 2016 with the 20 most popular “fake news” articles during the same period, we get an idea of just how strong an impulse confirmation bias is. The total number of reactions (likes, comments) to the fake news amounted to 8.7 million, compared to 7.3 million for the verified news!

The underlying purpose of social media, and of all other digital services that offer personalization, is to persuade users to use the service in question as much as possible. The problem is that users have neither insight into, nor the ability to influence, the weight that algorithms attach to various types of personal information and behavior. There is no “off” button for the algorithms: even if I would prefer to view my friends’ posts in chronological order, that is not possible today.

Over recent years, algorithms have been developed in an attempt to include things that I had no idea I liked. This is done by comparing my individual profile with other profiles that are in some way similar to mine, and using information about what these other users like in order to suggest the same things to me. Put simply, the algorithm creates a stereotype that can be used to make assumptions about what I ought to like, and makes recommendations or suggestions accordingly. This is presumably why I never see articles about yoga in my feed, despite my interest in the subject. I am, after all, a man, and the stereotypical yoga practitioner is a woman. This is also a sign that an algorithm is never completely neutral. The biases and values of those who create algorithms affect the code. Normally we are oblivious to this, simply because the programmers themselves give no thought to it and we have no insight into the biases introduced on our behalf.
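The mechanism described here is a form of what is commonly called collaborative filtering. Below is a minimal, hypothetical sketch of the user-based variant (the profiles, likes, and choice of similarity measure are all invented for illustration) showing how a “stereotype” built from similar users decides what is recommended, and what never surfaces.

```python
# Invented profiles: sets of things each user has "liked".
likes = {
    "me":       {"jazz", "news", "cooking"},
    "neighbor": {"jazz", "news", "golf"},
    "stranger": {"metal", "gaming", "yoga"},
}

def similarity(a: set, b: set) -> float:
    """Jaccard similarity: shared likes relative to all likes combined."""
    return len(a & b) / len(a | b)

def recommend(target: str) -> list:
    mine = likes[target]
    scores: dict[str, float] = {}
    for user, theirs in likes.items():
        if user == target:
            continue
        sim = similarity(mine, theirs)
        if sim == 0:
            continue  # nothing in common: this profile is ignored entirely
        for item in theirs - mine:  # things they like that I don't, yet
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

# "golf" is suggested because my profile resembles the neighbor's;
# "yoga" never surfaces, because the only yoga fan looks nothing like me.
print(recommend("me"))  # -> ['golf']
```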

Thus, there are many interacting factors that make it difficult to create an accurate worldview. Add to this the fact that the media landscape is considerably more fragmented than it was a few decades ago. It is easy to understand those who blame digital media in general for this development and who point out that social media contributes to a polarization of public discourse. Their proposed solution is to hold these social networks and platforms accountable for their users’ worldviews: by altering the algorithms to suggest news articles that challenge the user’s worldview, the thinking goes, the formation of filter bubbles could be prevented. I have several reservations about such a proposal.

First, the proposal is based on the assumption that our worldview is solely formed by our social media feeds — as if we had never thumbed through a newspaper, never seen television news, never picked something up from the radio, or listened to our friends express their opinions on current affairs. Digital media alone cannot be blamed for the creation of filter bubbles. If you choose to read only a certain type of newspaper — perhaps with a more conservative bent — you also risk becoming trapped in a filter bubble.

Second, social networks and platforms are not democratic institutions. On joining, we accept terms and conditions that clearly stipulate that we can neither exercise influence over how the service is designed nor demand accountability for any undesirable effects that we may experience as a regular user.

Third, I am opposed to the idea that someone, or an algorithm, would diagnose my feed in order to ascertain whether my worldview is balanced and, thereafter, “medicate” me until such time as balance is restored to my news consumption. I understand that the intention is good, but the solution remains absurd.

The fourth reservation is simply to state the obvious: all attempts to correct someone’s self-image against their express wishes are in opposition to all liberal ideas about the individual’s right to self-determination. (“My delusions may lack credence, but they are still mine.”) Over the course of several decades, in many countries, the state’s responsibility for — and, therefore, its ability to decide on behalf of — its citizens has decreased in favor of increased opportunities for citizens to take their own responsibility and make their own decisions. The idea that a corporation should take responsibility for ensuring that its users are generally educated, much in the way that the European welfare states sought to do for citizens in the mid-1950s, is as outdated as it is absurd.

A fifth reservation takes the form of a reminder about our ability to, consciously or unconsciously, create our own filter bubbles by searching for information that affirms our own worldview — our confirmation bias. Even if algorithms were to provide us with a balanced diet of news, we would still give greater weight to any news that confirms our understanding than to that which challenges it. (And we would be annoyed that irrelevant or uninteresting news is cluttering our news feeds.)

Nevertheless, I think I understand why this proposal arose. When we view filter bubbles as a technical challenge, we automatically search for a technical solution on which to pin our hopes. However, the problem is more complex than we can hope to solve by simply altering some algorithms. The problem is intrinsically linked to our inability to take responsibility for our own news diet, and to our ignorance of our inherent confirmation bias and of the resulting confining filter bubble. If we are to solve the problem, we need general education about where the responsibility for the existence of filter bubbles lies — and that is with each and every one of us. Not with social media or the platforms, and most certainly not with the state, but with you and me and everyone trying to obtain a balanced view of the world. The solution is to educate people about this responsibility, to help them take control of their information gathering, and to teach them to perform basic source criticism. The realization that one exists within a big bubble of some kind is an important first step.

Deceptive filter bubbles

It seems to me that both Donald Trump’s and Hillary Clinton’s campaign organizers were unaware of the extent to which filter bubbles were polarizing the presidential election. Instead of an election campaign in which all citizens were able to listen to or read the candidates’ arguments on important social issues, it would be fairer to describe the Trump and Clinton campaigns as entirely separate parallel competitions aimed at getting the largest possible percentage of their own constituency into the polling station to vote. Rather than discussing policies or giving more thought to which arguments might seem attractive to their rival’s supporters at campaign rallies, both sides resorted to describing their adversary in emotional terms as mentally unstable, or in other unflattering terms.

One must remember that everyone who votes in an election does so because their choice seems somehow logical. To dismiss one’s adversary in emotional terms as morally corrupt, and their supporters as ideologically extreme, poorly educated numbskulls, or as a self-righteous elite, is to subconsciously filter out facts that challenge our beliefs and restrict the opportunity to understand their world. This is an example of the politically motivated reasoning we spoke of earlier. Emotions affect our ability to make an objective judgement.

I once considered myself to be reasonably well-informed about US domestic politics. Since working as a volunteer on Barack Obama’s first presidential campaign in 2008, I have followed US politics from a distance, with interest and, admittedly, through a liberal filter. This confirmation bias left me unable to understand how anyone could consider voting for Donald Trump. Fascinated by my own inability to grasp the idea, I manipulated Twitter’s algorithm into recommending like-minded accounts so that I could create two separate bubbles fed by Republican and Democratic Twitter users, respectively. This allowed me to study the bubbles from within, hearing the arguments as if I were an entirely normal Trump or Clinton supporter. In character, the bubbles were extremely similar to one another. Each had its influential voices, its fair share of fake news, its rowdy supporters, and loyal media outlets routinely ridiculing the other candidate. In both bubbles, astonishment was rife as to how the hell any reasonable person could even consider voting for the other bubble’s dimwit candidate. Both bubbles seemed to scent victory. And yet, both sides expressed great surprise when the election results were in.

No one expected Donald Trump to win. Someone so politically inexperienced, with such a tarnished reputation as a New York real estate billionaire, should not have been able to win a primary, should not have been able to become a presidential candidate, and should not have been able to be elected president. If you compare him to previous candidates, to the scrutiny they were subjected to, and to the conduct expected of them before they could win the people’s trust and their votes, the result was unprecedented.

A surprised winner.

Filter bubbles are insufficient as an explanation for why Donald Trump won the presidential election. They do, however, explain why so many people guessed wrong despite the signals that a Clinton victory was far from secure. When we see only what confirms our own worldview, it is inevitable that we will misinterpret signs to the contrary. Faced with the facts and with the demand to explain how they came about, it is perhaps unworthy of a political expert to point the finger of blame elsewhere, however human that may be. If you believe in your own objectivity, confirmation bias itself will ensure that the idea of being influenced by confirmation bias is dismissed before you even have time to consider the possibility.

Filter bubbles are insufficient as an explanation for why Donald Trump won the presidential election. They do, however, explain why so many people didn’t see the victory coming.

After the unexpected Yes vote on Brexit in June 2016, many pundits laid the blame for the prognosticative fiasco squarely at the feet of pollsters. After the unexpected result in the US presidential election, criticism was primarily aimed at deceptive algorithms and the filter bubbles they create. Why had Facebook failed to enlighten its users of this possibility? Why did no alarms sound indicating that the message in the feed mirrored the monotonous reverberation of an echo chamber with false information swirling in the mix?

Audible echo chambers

Day-to-day behavior — such as how we use media, obtain news, or consume music — takes time to change; it is difficult to change overnight. However, if change takes place in increments, we perceive it as a simplification, or as something people we know are already doing, and we adapt surprisingly quickly and without giving it much thought. To show how much our media habits can change in only a few decades, we need only look at how the music industry has changed over the same period. The algorithms that control Spotify, Apple Music, and other streaming services offer us suggestions for new artists and new songs based on what we already like. Initially this seems great, as the algorithms make our search for new music so much easier. However, as the suggested artists never deviate from a given genre, the feeling sneaks up on us that we have been consigned to an echo chamber.

One day, not so long ago, it dawned on me that all of the artists suggested to me by Spotify were singer-songwriters. All that remained were slow, sonorous voices accompanied by guitar or piano. Norah Jones, James Blunt, Joni Mitchell, Ellie Goulding, Tracy Chapman. The problem is, that’s not what I like. Not only, anyway. My musical taste is considerably more varied than that, but, for some reason, the algorithms appear to have given more weight to these particular artists, something that first irritated me and then made me angry. Why weren’t the algorithms fulfilling their function? And if that was the case, why couldn’t I reset the algorithms, or ask them to re-index the information in my listening history to correctly reflect my tastes?

If you compare the music feed with the news feed, there is no real difference; they both work in the same way and according to the same logic. The major difference is, of course, that you can hear when your music feed has become an echo chamber, quite literally. At certain points in this book, we will, therefore, take a diversion via music. By understanding how certain artists broke through and how the public reacted to the death of an artist, we can compare media habits at various points in time — how news is communicated and how it is received.

Donald Trump’s relationship with music is largely unknown. However, in an interview with the BBC he did admit that he liked to listen to Frank Sinatra, Tony Bennett, Elton John, and Eminem. “If you love a certain kind of music, don’t let other people’s tastes influence your own,” Mr. Trump wrote in his book Think Like a Billionaire (2004). “Whatever’s the best for you is the best. Never forget that.” This advice, to create an echo chamber filled with music you already like, is reminiscent of his desire to receive only positive news in his file of press clippings.

The problem is perhaps not that bubbles seem to appear around us, both analogue and digital, but that we don’t inhabit enough bubbles. As long as you vary your social circle to include different people in different contexts, read different media outlets from time to time, and discuss various subjects in different forums, you’re probably going to be okay. When you find yourself in many bubbles you gain many perspectives, while at the same time each individual bubble becomes less significant. If you constantly move in the same circles, discussing roughly the same news and watching roughly the same videos on YouTube, it’s time for a change. Take a trip, seek adventure! Or stay at home and join a choir. Or how about following someone new on Snapchat, subscribing to a new podcast, or going to a new digital source for news? Do anything, as long as it takes you out of your ordinary routine. The world is unknown and unknowable in so many ways. The choice is between withdrawing in fear of the unknown or meeting the world with curiosity. Those who lose their inquisitiveness run the greatest risk of getting stuck in a bubble of things they already know and opinions they already have.

An unwillingness to accept new information or influences from new sources is a worrying sign, implying as it does that you already have the necessary knowledge. Indeed, doubting your own understanding of the world is a sign of a sound mind.

This is an excerpt from Per Grankvist’s new book The Big Bubble — how technology is making it harder to understand the world, published by IngramSpark.


Per Grankvist

Exploring storytelling as a tool to get us to a sustainable future even quicker @viablecities