The New Mind Control
The internet has spawned subtle forms of influence that can flip elections and manipulate everything we say, think and do.
By Robert Epstein
March 02, 2016
"Information
Clearing House"
- "Aeon"
- Over the past century, more than a few great
writers have expressed concern about humanity’s
future. In The Iron Heel (1908), the
American writer Jack London pictured a world in
which a handful of wealthy corporate titans – the
‘oligarchs’ – kept the masses at bay with a brutal
combination of rewards and punishments. Much of
humanity lived in virtual slavery, while the
fortunate ones were bought off with decent wages
that allowed them to live comfortably – but without
any real control over their lives.
In We
(1924), the brilliant Russian writer Yevgeny
Zamyatin, anticipating the excesses of the emerging
Soviet Union, envisioned a world in which people
were kept in check through pervasive monitoring. The
walls of their homes were made of clear glass, so
everything they did could be observed. They were
allowed to lower their shades an hour a day to have
sex, but both the rendezvous time and the lover had
to be registered first with the state.
In Brave
New World (1932), the British author Aldous
Huxley pictured a near-perfect society in which
unhappiness and aggression had been engineered out
of humanity through a combination of genetic
engineering and psychological conditioning. And in
the much darker novel 1984 (1949), Huxley’s
compatriot George Orwell described a society in
which thought itself was controlled; in Orwell’s
world, children were taught to use a simplified form
of English called Newspeak in order to assure that
they could never express ideas that were dangerous
to society.
These are all
fictional tales, to be sure, and in each the leaders
who held the power used conspicuous forms of control
that at least a few people actively resisted and
occasionally overcame. But in the non-fiction
bestseller The Hidden Persuaders (1957) –
recently released in a 50th-anniversary edition –
the American journalist Vance Packard described a
‘strange and rather exotic’ type of influence that
was rapidly emerging in the United States and that
was, in a way, more threatening than the fictional
types of control pictured in the novels. According
to Packard, US corporate executives and politicians
were beginning to use subtle and, in many cases,
completely undetectable methods to change
people’s thinking, emotions and behaviour based on
insights from psychiatry and the social sciences.
Most of us
have heard of at least one of these methods:
subliminal stimulation, or what Packard called
‘subthreshold effects’ – the presentation of short
messages that tell us what to do but that are
flashed so briefly we aren’t aware we have seen
them. In 1958, propelled by public concern about a
theatre in New Jersey that had supposedly hidden
messages in a movie to increase ice cream sales, the
National Association of Broadcasters – the
association that set standards for US television –
amended its code to prohibit the use of subliminal
messages in broadcasting. In 1974, the Federal
Communications Commission opined that the use of
such messages was ‘contrary to the public interest’.
Legislation to prohibit subliminal messaging was
also introduced in the US Congress but never
enacted. Both the UK and Australia have strict laws
prohibiting it.
Subliminal
stimulation is probably still in wide use in the US
– it’s hard to detect, after all, and no one is
keeping track of it – but it’s probably not worth
worrying about. Research suggests that it has only a
small impact, and that it mainly influences people
who are already motivated to follow its dictates;
subliminal directives to drink affect people only if
they’re already thirsty.
Packard had
uncovered a much bigger problem, however – namely
that powerful corporations were constantly looking
for, and in many cases already applying, a wide
variety of techniques for controlling people without
their knowledge. He described a kind of cabal in
which marketers worked closely with social
scientists to determine, among other things, how to
get people to buy things they didn’t need and how to
condition young children to be good consumers –
inclinations that were explicitly nurtured and
trained in Huxley’s Brave New World. Guided
by social science, marketers were quickly learning
how to play upon people’s insecurities, frailties,
unconscious fears, aggressive feelings and sexual
desires to alter their thinking, emotions and
behaviour without any awareness that they were being
manipulated.
By the early
1950s, Packard said, politicians had got the message
and were beginning to merchandise themselves using
the same subtle forces being used to sell soap.
Packard prefaced his chapter on politics with an
unsettling quote from the British economist Kenneth
Boulding: ‘A world of unseen dictatorship is
conceivable, still using the forms of democratic
government.’ Could this really happen, and, if so,
how would it work?
The
forces that Packard described have become more
pervasive over the decades. The soothing music we
all hear overhead in supermarkets causes us to walk
more slowly and buy more food, whether we need it or
not. Most of the vacuous thoughts and intense
feelings our teenagers experience from morning till
night are carefully orchestrated by highly skilled
marketing professionals working in our fashion and
entertainment industries. Politicians work with a
wide range of consultants who test every aspect of
what the politicians do in order to sway voters:
clothing, intonations, facial expressions, makeup,
hairstyles and speeches are all optimised, just like
the packaging of a breakfast cereal.
Fortunately,
all of these sources of influence operate
competitively. Some of the persuaders want us to buy
or believe one thing, others to buy or believe
something else. It is the competitive nature of our
society that keeps us, on balance, relatively free.
But what would
happen if new sources of control began to emerge
that had little or no competition? And what if new
means of control were developed that were far more
powerful – and far more invisible – than
any that have existed in the past? And what if new
types of control allowed a handful of people to
exert enormous influence not just over the citizens
of the US but over most of the people on Earth?
It might
surprise you to hear this, but these things have
already happened.
To understand
how the new forms of mind control work, we need to
start by looking at the search engine – one in
particular: the biggest and best of them all, namely
Google. The Google search engine is so good and so
popular that the company’s name is now a commonly
used verb in languages around the world. To ‘Google’
something is to look it up on the Google search
engine, and that, in fact, is how most computer
users worldwide get most of their information about
just about everything these days. They Google
it. Google has become the main gateway to virtually
all knowledge, mainly because the search engine is
so good at giving us exactly the information we are
looking for, almost instantly and almost always in
the first position of the list it shows us after we
launch our search – the list of ‘search results’.
That ordered
list is so good, in fact, that about 50 per cent of
our clicks go to the top two items, and more than 90
per cent of our clicks go to the 10 items listed on
the first page of results; few people look at other
results pages, even though they often number in the
thousands, which means they probably contain lots of
good information. Google decides which of the
billions of web pages it is going to include in our
search results, and it also decides how to rank
them. How it decides these things is a deep, dark
secret – one of the best-kept secrets in the world,
like the formula for Coca-Cola.
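To see how lopsided that distribution is, here is a toy model in Python. The per-rank click-through shares below are invented assumptions, chosen only to match the rough figures just quoted; they are not real Google data.

```python
# Toy model of rank-biased clicking. The per-rank click shares are invented
# assumptions chosen only to match the rough "50 per cent to the top two,
# 90 per cent to page one" figures quoted above.
ctr_by_rank = [0.33, 0.17, 0.11, 0.08, 0.06, 0.05, 0.04, 0.03, 0.02, 0.02]

top_two = sum(ctr_by_rank[:2])    # share of all clicks going to ranks 1-2
first_page = sum(ctr_by_rank)     # share going to ranks 1-10 (page one)

print(f"Top two results: {top_two:.0%} of clicks")     # ~50%
print(f"First page:      {first_page:.0%} of clicks")  # ~91%
```

Whatever sits below rank 10 competes for the remaining sliver of attention, which is why a one-notch boost matters so much.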
Because people
are far more likely to read and click on
higher-ranked items, companies now spend billions of
dollars every year trying to trick Google’s search
algorithm – the computer program that does the
selecting and ranking – into boosting them another
notch or two. Moving up a notch can mean the
difference between success and failure for a
business, and moving into the top slots can be the
key to fat profits.
Late in 2012,
I began to wonder whether highly ranked search
results could be impacting more than consumer
choices. Perhaps, I speculated, a top search result
could have a small impact on people’s opinions about
things. Early in 2013, with my associate Ronald E
Robertson of the
American
Institute for Behavioral Research and Technology
in Vista, California, I put this idea to a test by
conducting an experiment in which 102 people from
the San Diego area were randomly assigned to one of
three groups. In one group, people saw search
results that favoured one political candidate – that
is, results that linked to web pages that made this
candidate look better than his or her opponent. In a
second group, people saw search rankings that
favoured the opposing candidate, and in the third
group – the control group – people saw a mix of
rankings that favoured neither candidate. The same
search results and web pages were used in each
group; the only thing that differed for the three
groups was the ordering of the search results.
To make our
experiment realistic, we used real search results
that linked to real web pages. We also used a real
election – the 2010 election for the prime minister
of Australia. We used a foreign election to make
sure that our participants were ‘undecided’. Their
lack of familiarity with the candidates assured
this. Through advertisements, we also recruited an
ethnically diverse group of registered voters over a
wide age range in order to match key demographic
characteristics of the US voting population.
All
participants were first given brief descriptions of
the candidates and then asked to rate them in
various ways, as well as to indicate which candidate
they would vote for; as you might expect,
participants initially favoured neither candidate on
any of the five measures we used, and the vote was
evenly split in all three groups. Then the
participants were given up to 15 minutes in which to
conduct an online search using ‘Kadoodle’, our mock
search engine, which gave them access to five pages
of search results that linked to web pages. People
could move freely between search results and web
pages, just as we do when using Google. When
participants completed their search, we asked them
to rate the candidates again, and we also asked them
again who they would vote for.
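A minimal sketch of that design follows; it is a reconstruction from the description above, not the experimenters' actual code, and the result pool and helper names are invented. The key point is that all three groups draw on the same fixed pool of results, and only the ordering changes:

```python
import random

# Reconstruction of the three-group design described above. Each result is
# (title, bias): +1 if the linked page favours candidate A, -1 for B.
pool = [(f"page {i}", +1 if i % 2 == 0 else -1) for i in range(30)]

def make_ranking(results, condition):
    """Order the SAME result pool differently for each group."""
    if condition == "favour_A":
        return sorted(results, key=lambda r: -r[1])  # A-pages ranked first
    if condition == "favour_B":
        return sorted(results, key=lambda r: r[1])   # B-pages ranked first
    mixed = results[:]          # control: a rough stand-in for a neutral mix
    random.shuffle(mixed)
    return mixed

# Simple randomisation stands in here for balanced random assignment.
for participant in range(102):
    condition = random.choice(["favour_A", "favour_B", "control"])
    ranking = make_ranking(pool, condition)  # identical content, new order
```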
We predicted
that the opinions and voting preferences of 2 or 3
per cent of the people in the two bias groups – the
groups in which people were seeing rankings
favouring one candidate – would shift toward that
candidate. What we actually found was astonishing.
The proportion of people favouring the search
engine’s top-ranked candidate increased by 48.4
per cent, and all five of our measures shifted
toward that candidate. What’s more, 75 per cent of
the people in the bias groups seemed to have been
completely unaware that they were viewing biased
search rankings. In the control group, opinions did
not shift significantly.
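The arithmetic behind a figure like '48.4 per cent' is worth spelling out; the counts below are invented for illustration, since only the resulting percentage was reported:

```python
# Invented pre/post counts illustrating how an increase of this kind is
# computed; the study reports only the resulting percentage.
favoured_before = 31  # participants preferring candidate A before searching
favoured_after = 46   # participants preferring candidate A after searching

increase = (favoured_after - favoured_before) / favoured_before
print(f"Increase in support: {increase:.1%}")  # 48.4%
```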
This
seemed to be a major discovery. The shift we had
produced, which we called the Search Engine
Manipulation Effect (or SEME, pronounced ‘seem’),
appeared to be one of the largest behavioural
effects ever discovered. We did not immediately
uncork the Champagne bottle, however. For one thing,
we had tested only a small number of people, and
they were all from the San Diego area.
Over the next
year or so, we replicated our findings three more
times, and the third time was with a sample of more
than 2,000 people from all 50 US states. In that
experiment, the shift in voting preferences was 37.1
per cent and even higher in some demographic groups
– as high as 80 per cent, in fact.
We also
learned in this series of experiments that by
reducing the bias just slightly on the first page of
search results – specifically, by including one
search item that favoured the other
candidate in the third or fourth position of the
results – we could mask our manipulation so
that few or even no people were aware that
they were seeing biased rankings. We could still
produce dramatic shifts in voting preferences, but
we could do so invisibly.
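A sketch of that masking trick, again a reconstruction rather than the experimenters' code (the ten-item page and the item labels are assumptions):

```python
def masked_biased_ranking(pro_items, con_items, mask_slot=3):
    """Build a first page that favours candidate A but plants one B-item
    near the top to hide the bias (a reconstruction of the manipulation
    described above)."""
    page = pro_items[:9]                  # nine items favouring candidate A
    page.insert(mask_slot, con_items[0])  # one item for B, fourth position
    return page[:10]

page = masked_biased_ranking([f"pro-A result {i}" for i in range(1, 10)],
                             [f"pro-B result {i}" for i in range(1, 10)])
print(page)  # still nine-to-one for A, but it no longer looks one-sided
```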
Still no
Champagne, though. Our results were strong and
consistent, but our experiments all involved a
foreign election – that 2010 election in Australia.
Could voting preferences be shifted with real voters
in the middle of a real campaign? We were sceptical.
In real elections, people are bombarded with
multiple sources of information, and they also know
a lot about the candidates. It seemed unlikely that
a single experience on a search engine would have
much impact on their voting preferences.
To find out,
in early 2014, we went to India just before voting
began in the largest democratic election in the
world – the Lok Sabha election for prime minister.
The three main candidates were Rahul Gandhi, Arvind
Kejriwal, and Narendra Modi. Making use of online
subject pools and both online and print
advertisements, we recruited 2,150 people from 27 of
India’s 35 states and territories to participate in
our experiment. To take part, they had to be
registered voters who had not yet voted and who were
still undecided about how they would vote.
Participants
were randomly assigned to three search-engine
groups, favouring, respectively, Gandhi, Kejriwal or
Modi. As one might expect, familiarity levels with
the candidates were high – between 7.7 and 8.5 on a
scale of 10. We predicted that our manipulation
would produce a very small effect, if any, but
that’s not what we found. On average, we were able
to shift the proportion of people favouring any
given candidate by more than 20 per cent overall and
more than 60 per cent in some demographic groups.
Even more disturbing, 99.5 per cent of our
participants showed no awareness that they were
viewing biased search rankings – in other words,
that they were being manipulated.
SEME’s
near-invisibility is curious indeed. It means that
when people – including you and me – are looking at
biased search rankings, they look just fine.
So if right now you Google ‘US presidential
candidates’, the search results you see will
probably look fairly random, even if they happen
to favour one candidate. Even I have trouble
detecting bias in search rankings that I know
to be biased (because they were prepared by my
staff). Yet our randomised, controlled experiments
tell us over and over again that when higher-ranked
items connect with web pages that favour one
candidate, this has a dramatic impact on the
opinions of undecided voters, in large part for the
simple reason that people tend to click only on
higher-ranked items. This is truly scary: like
subliminal stimuli, SEME is a force you can’t see;
but unlike subliminal stimuli, it has an enormous
impact – like Casper the ghost pushing you down a
flight of stairs.
We published a
detailed
report about our first five experiments on SEME
in the prestigious Proceedings of the National
Academy of Sciences (PNAS) in August 2015. We
had indeed found something important, especially
given Google’s dominance over search. Google has a
near-monopoly on internet searches in the US, with
83 per cent of Americans specifying Google as the
search engine they use most often, according to the
Pew Research Center. So if Google favours one
candidate in an election, its impact on undecided
voters could easily decide the election’s outcome.
Keep in mind
that we had had only one shot at our participants.
What would be the impact of favouring one candidate
in searches people are conducting over a period of
weeks or months before an election? It would almost
certainly be much larger than what we were seeing in
our experiments.
Other types of
influence during an election campaign are balanced
by competing sources of influence – a wide variety
of newspapers, radio shows and television networks,
for example – but Google, for all intents and
purposes, has no competition, and people trust its
search results implicitly, assuming that the
company’s mysterious search algorithm is entirely
objective and unbiased. This high level of trust,
combined with the lack of competition, puts Google
in a unique position to impact elections. Even more
disturbing, the search-ranking business is entirely
unregulated, so Google could favour any candidate it
likes without violating any laws.
Some courts have even ruled that Google’s right
to rank-order search results as it pleases is
protected as a form of free speech.
Does the
company ever favour particular candidates? In the
2012 US presidential election, Google and its top
executives donated more than $800,000 to President
Barack Obama and just $37,000 to his opponent, Mitt
Romney. And in 2015, a team of researchers from the
University of Maryland and elsewhere
showed that Google’s search results routinely
favoured Democratic candidates. Are Google’s search
rankings really biased? An
internal report issued by the US Federal Trade
Commission in 2012 concluded that Google’s search
rankings routinely put Google’s financial interests
ahead of those of their competitors, and anti-trust
actions currently under way against Google in both
the
European Union and
India are based on similar findings.
In most
countries, 90 per cent of online search is conducted
on Google, which gives the company even more power
to flip elections than it has in the US and, with
internet penetration increasing rapidly worldwide,
this power is growing. In our PNAS article,
Robertson and I calculated that Google now has the
power to flip upwards of 25 per cent of the
national elections in the world with no one
knowing this is occurring. In fact, we estimate
that, with or without deliberate planning on the
part of company executives, Google’s search rankings
have been impacting elections for years, with
growing impact each year. And because search
rankings are ephemeral, they leave no paper trail,
which gives the company complete deniability.
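The reasoning behind an estimate like that can be written as a back-of-the-envelope test. Every parameter value below is an illustrative assumption, not an input from the PNAS article; the logic is simply that an election is vulnerable when the votes a biased ranking could move among undecided searchers exceed the expected winning margin:

```python
# Back-of-the-envelope flippability test. All parameter values are
# illustrative assumptions, not the PNAS article's inputs.
electorate      = 10_000_000
undecided_share = 0.20    # fraction of voters still undecided
search_users    = 0.80    # fraction who research candidates via search
seme_shift      = 0.20    # fraction of exposed undecideds who shift
expected_margin = 0.02    # expected winning margin: two points

movable = electorate * undecided_share * search_users * seme_shift
needed  = electorate * expected_margin

print(f"Votes a biased ranking could move: {movable:,.0f}")  # 320,000
print(f"Votes needed to flip the outcome:  {needed:,.0f}")   # 200,000
print("Flippable" if movable > needed else "Not flippable")
```

Run a test of this kind across the world's national elections and their historical margins, and some fraction of them come out flippable; the 25 per cent figure is our estimate of that fraction.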
Power on this
scale and with this level of invisibility is
unprecedented in human history. But it turns out
that our discovery about SEME was just the tip of a
very large iceberg.
Recent
reports suggest that the Democratic presidential
candidate Hillary Clinton is making heavy use of
social media to try to generate support – Twitter,
Instagram, Pinterest, Snapchat and Facebook, for
starters. At this writing, she has 5.4 million
followers on Twitter, and her staff is tweeting
several times an hour during waking hours. The
Republican frontrunner, Donald Trump, has 5.9
million Twitter followers and is tweeting just as
frequently.
Is social
media as big a threat to democracy as search
rankings appear to be? Not necessarily. When new
technologies are used competitively, they present no
threat. Even though the platforms are new, they are
generally being used the same way as billboards and
television commercials have been used for decades:
you put a billboard on one side of the street; I put
one on the other. I might have the money to erect
more billboards than you, but the process is still
competitive.
What happens,
though, if such technologies are misused by the
companies that own them? A
study by Robert M Bond, now a political science
professor at Ohio State University, and others
published in Nature in 2012 described an
ethically questionable experiment in which, on
election day in 2010, Facebook sent ‘go out and
vote’ reminders to more than 60 million of its
users. The reminders caused about 340,000 people to
vote who otherwise would not have. Writing in the
New Republic in 2014, Jonathan Zittrain,
professor of international law at Harvard
University, pointed out that, given the massive
amount of information it has collected about its
users, Facebook could easily send such messages only
to people who support one particular party or
candidate, and that doing so could easily flip a
close election – with no one knowing that this
has occurred. And because advertisements, like
search rankings, are ephemeral, manipulating an
election in this way would leave no paper trail.
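The arithmetic behind Zittrain's point is short. The 61 million and 340,000 figures come from the study as described above; the size of the targeted audience is an assumption added for illustration:

```python
# Figures from the Bond study as described above, plus one assumption.
reminded    = 61_000_000   # users shown the 'go out and vote' reminder
extra_votes = 340_000      # additional voters the reminder produced

lift = extra_votes / reminded
print(f"Turnout lift per reminded user: {lift:.2%}")  # ~0.56%

# Assumption (illustrative): the reminder is instead sent only to the
# 20 million users whose profiles flag them as supporters of one party.
targeted = 20_000_000
net_partisan_votes = targeted * lift
print(f"Net extra votes for one side: {net_partisan_votes:,.0f}")  # ~111,000
```

In a contest decided by a few tens of thousands of votes, a nudge of that size is decisive, and nothing about it is visible from the outside.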
Are there laws
prohibiting Facebook from sending out ads
selectively to certain users? Absolutely not; in
fact, targeted advertising is how Facebook makes its
money. Is Facebook currently manipulating elections
in this way? No one knows, but in my view it would
be foolish and possibly even improper for Facebook
not to do so. Some candidates are better
for a company than others, and Facebook’s executives
have a fiduciary responsibility to the company’s
stockholders to promote the company’s interests.
The Bond study
was largely ignored, but
another Facebook experiment, published in 2014
in PNAS, prompted protests around the
world. In this study, for a period of a week,
689,000 Facebook users were sent news feeds that
contained either an excess of positive terms, an
excess of negative terms, or neither. Those in the
first group subsequently used slightly more positive
terms in their communications, while those in the
second group used slightly more negative terms in
their communications. This was said to show that
people’s ‘emotional states’ could be deliberately
manipulated on a massive scale by a social media
company, an idea that many people found disturbing.
People were also upset that a large-scale experiment
on emotion had been conducted without the explicit
consent of any of the participants.
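A minimal sketch of that feed manipulation follows; the word lists, sample posts and omission rule here are all invented for illustration, and the real study identified emotional posts with standard sentiment word lists:

```python
import random

# Reconstruction of the feed manipulation described above; word lists,
# posts, and the omission rule are invented for illustration.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def filter_feed(posts, suppress, omit_prob=0.5):
    """Omit each post containing a suppressed emotion word with some
    probability (the study omitted a percentage of matching posts),
    leaving a feed skewed toward the opposite emotional tone."""
    return [p for p in posts
            if not (set(p.lower().split()) & suppress
                    and random.random() < omit_prob)]

feed = ["traffic was awful today", "what a wonderful day", "i love this song"]
print(filter_feed(feed, suppress=NEGATIVE))  # the negativity-reduced condition
```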
Facebook’s
consumer profiles are undoubtedly massive, but they
pale in comparison with those maintained by Google,
which is collecting information about people 24/7,
using
more than 60 different observation platforms –
the search engine, of course, but also Google
Wallet, Google Maps, Google Adwords, Google
Analytics, Chrome, Google Docs, Android, YouTube,
and on and on. Gmail users are generally oblivious
to the fact that Google stores and analyses every
email they write, even the drafts they never send –
as well as all the incoming email they
receive from both Gmail and non-Gmail users.
According to
Google’s
privacy policy – to which one assents whenever
one uses a Google product, even when one has not
been informed that he or she is using a Google
product – Google can share the information it
collects about you with almost anyone, including
government agencies. But never with you.
Google’s privacy is sacrosanct; yours is
nonexistent.
Could Google
and ‘those we work with’ (language from the privacy
policy) use the information they are amassing about
you for nefarious purposes – to manipulate or
coerce, for example? Could inaccurate information in
people’s profiles (which people have no way to
correct) limit their opportunities or ruin their
reputations?
Certainly, if
Google set about to fix an election, it could first
dip into its massive database of personal
information to identify just those voters who are
undecided. Then it could, day after day, send
customised rankings favouring one candidate to
just those people. One advantage of this
approach is that it would make Google’s manipulation
extremely difficult for investigators to detect.
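What makes this hypothetical scheme so potent is how little machinery it needs. In the sketch below, every profile field and function name is invented; the point is that only the flagged users ever see the biased ordering, so outside auditors comparing their own search results would find nothing amiss:

```python
# Hypothetical targeting pipeline; every profile field and function name
# is invented for illustration.
def serve_results(user, results):
    if user["leaning"] == "undecided":   # inferred from profile data
        return biased_ranking(results, favour="candidate_X")
    return normal_ranking(results)       # everyone else sees the usual list

def biased_ranking(results, favour):
    # Stable sort: pages favouring the chosen candidate float to the top.
    return sorted(results, key=lambda r: r["favours"] != favour)

def normal_ranking(results):
    return sorted(results, key=lambda r: r["relevance"], reverse=True)

results = [{"title": "B", "favours": "candidate_Y", "relevance": 0.9},
           {"title": "A", "favours": "candidate_X", "relevance": 0.7}]
print([r["title"] for r in serve_results({"leaning": "undecided"}, results)])
# ['A', 'B'] - the undecided user gets the X-favouring page first
```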
Extreme forms
of monitoring, whether by the KGB in the Soviet
Union, the Stasi in East Germany, or Big Brother in
1984, are essential elements of all
tyrannies, and technology is making both monitoring
and the consolidation of surveillance data easier
than ever. By 2020, China will have put in place the
most ambitious government monitoring system ever
created – a single database called the
Social Credit System, in which multiple ratings
and records for all of its 1.3 billion citizens are
recorded for easy access by officials and
bureaucrats. At a glance, they will know whether
someone has plagiarised schoolwork, was tardy in
paying bills, urinated in public, or blogged
inappropriately online.
As Edward
Snowden’s revelations made clear, we are rapidly
moving toward a world in which both governments and
corporations – sometimes working together – are
collecting massive amounts of data about every one
of us every day, with few or no laws in place that
restrict how those data can be used. When you
combine the data collection with the desire to
control or manipulate, the possibilities are
endless, but perhaps the most frightening
possibility is the one expressed in Boulding’s
assertion that an ‘unseen dictatorship’ was possible
‘using the forms of democratic government’.
Since
Robertson and I submitted our initial report on SEME
to PNAS early in 2015, we have completed a
sophisticated series of experiments that have
greatly enhanced our understanding of this
phenomenon, and other experiments will be completed
in the coming months. We have a much better sense
now of why SEME is so powerful and how, to some
extent, it can be suppressed.
We have also
learned something very disturbing – that search
engines are influencing far more than what people
buy and whom they vote for. We now have evidence
suggesting that on virtually all issues where people
are initially undecided, search rankings are
impacting almost every decision that people make.
They are having an impact on the opinions, beliefs,
attitudes and behaviours of internet users worldwide
– entirely without people’s knowledge that this is
occurring. This is happening with or without
deliberate intervention by company officials; even
so-called ‘organic’ search processes regularly
generate search results that favour one point of
view, and that in turn has the potential to tip the
opinions of millions of people who are undecided on
an issue. In one of our recent experiments, biased
search results shifted people’s opinions about the
value of fracking by 33.9 per cent.
Perhaps even
more disturbing is that the handful of people who do
show awareness that they are viewing biased search
rankings shift even further in the
predicted direction; simply knowing that a list is
biased doesn’t necessarily protect you from SEME’s
power.
Remember what
the search algorithm is doing: in response to your
query, it is selecting a handful of
webpages from among the billions that are available,
and it is ordering those webpages using
secret criteria. Seconds later, the decision you
make or the opinion you form – about the best
toothpaste to use, whether fracking is safe, where
you should go on your next vacation, who would make
the best president, or whether global warming is
real – is determined by that short list you are
shown, even though you have no idea how the list was
generated.
The technology
has made possible undetectable and untraceable
manipulations of entire populations that are beyond
the scope of existing regulations and laws
Meanwhile,
behind the scenes, a consolidation of search engines
has been quietly taking place, so that more people
are using the dominant search engine even when they
think they are not. Because Google is the best
search engine, and because crawling the rapidly
expanding internet has become prohibitively
expensive, more and more search engines are drawing
their information from the leader rather than
generating it themselves. The most recent deal,
revealed in a
Securities and Exchange Commission filing in
October 2015, was between Google and Yahoo! Inc.
Looking ahead
to the November 2016 US presidential election, I see
clear signs that Google is backing Hillary Clinton.
In April 2015, Clinton hired
Stephanie Hannon away from Google to be her
chief technology officer and, a few months ago, Eric
Schmidt, chairman of the holding company that
controls Google,
set up a semi-secret company – The Groundwork –
for the specific purpose of putting Clinton in
office. The formation of The Groundwork prompted
Julian Assange, founder of Wikileaks, to dub Google
Clinton’s ‘secret
weapon’ in her quest for the US presidency.
We now
estimate that Hannon’s old friends have the power to
drive between 2.6 and 10.4 million votes to Clinton
on election day with no one knowing that this is
occurring and without leaving a paper trail. They
can also help her win the nomination, of course, by
influencing undecided voters during the primaries.
Swing voters have always been the key to winning
elections, and there has never been a more powerful,
efficient or inexpensive way to sway them than SEME.
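One set of illustrative assumptions reproduces that range: apply the low and high shift rates seen in our experiments to a pool of undecided voters who research candidates online. The pool size below is an invented input, not the published model:

```python
# One illustrative route to a "2.6 to 10.4 million" range. The pool size
# is an invented assumption; the shift rates echo the experimental range
# quoted earlier (roughly 20 to 80 per cent in some demographic groups).
undecided_searchers = 13_000_000        # undecided voters who search online
low_shift, high_shift = 0.20, 0.80

low  = undecided_searchers * low_shift
high = undecided_searchers * high_shift
print(f"Movable votes: {low/1e6:.1f} to {high/1e6:.1f} million")  # 2.6-10.4
```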
We are living
in a world in which a handful of high-tech
companies, sometimes working hand-in-hand with
governments, are not only monitoring much of our
activity, but are also invisibly controlling more
and more of what we think, feel, do and say. The
technology that now surrounds us is not just a
harmless toy; it has also made possible undetectable
and untraceable manipulations of entire populations
– manipulations that have no precedent in human
history and that are currently well beyond the scope
of existing regulations and laws. The new hidden
persuaders are bigger, bolder and badder than
anything Vance Packard ever envisioned. If we choose
to ignore this, we do so at our peril.
Robert Epstein is a senior research psychologist
at the American Institute for Behavioral
Research and Technology in California. He is the
author of 15 books, and the former
editor-in-chief of
Psychology Today. This article is a
preview of his forthcoming book,
The New Mind Control.