The vicious cycle has been spinning for years: Websites post lurid, unverified complaints about supposed cheaters, sexual predators, deadbeats and scammers. People slander their enemies. The anonymous posts appear high in Google results for the victims' names. Then the websites charge the victims thousands of dollars to take the posts down.
This cycle of slander has been lucrative for the websites and their associated middlemen, and devastating for the victims. Now Google is trying to break the loop.
The company plans to change its search algorithm to prevent websites operating under domains like BadGirlReport.date and PredatorsAlert.us from appearing in the list of results when someone searches for a person’s name.
Google also recently created a concept it calls "known victims." When people report to the company that they have been attacked on sites that charge to remove posts, Google will automatically suppress similar content when their names are searched for. "Known victims" also includes people whose nude photos have been published online without their consent, allowing them to request that explicit results be suppressed for their names.
The changes, some already made by Google and others planned for the coming months, are a response to recent New York Times articles documenting how the slander industry preys on victims with Google's unwitting help.
"I doubt it will be a perfect solution, certainly not right off the bat. But I think it really should have a significant and positive impact," said David Graff, Google's vice president for global policy and standards and trust and safety. "We can't police the whole internet, but we can be responsible citizens."
This represents a momentous change for victims of online slander. Google, which handles an estimated 90 percent of the world's online searches, has historically resisted letting human judgment play a role in its search engine, though in recent years it has bowed to mounting pressure to fight misinformation and abuse appearing at the top of its results.
Google's founders initially saw its algorithm as an unbiased reflection of the internet itself. It relies on an analysis called PageRank, named after co-founder Larry Page, which gauges a website's value by how many other sites link to it, and by the quality of those linking sites, judged in turn by how many sites link to them.
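The recursive idea behind PageRank — a page is important if important pages link to it — can be sketched with a simple power iteration. This is an illustrative toy, not Google's production algorithm; the three-page link graph and the standard 0.85 damping factor are assumptions for the example.

```python
# Minimal PageRank sketch: a page's score depends on how many pages link
# to it, weighted by the scores of those linking pages. The tiny link
# graph below is invented purely for illustration.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline share (the "random surfer").
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                # Each link passes on a share of the linking page's rank.
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(graph)
# "c" scores highest: both "a" and "b" link to it.
```

The iteration converges because each round redistributes a fixed total amount of rank; after enough passes the scores stop changing, and pages with more (and better-ranked) inbound links end up on top.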
The philosophy was: "We never touch search, no way, no how. If we start touching search results, it's a one-way street to a curated internet and we're no longer neutral," said Danielle Citron, a law professor at the University of Virginia. A decade ago, Professor Citron pressed Google to block so-called revenge porn from surfacing in searches for a person's name. The company initially resisted.
In a 2004 statement, Google laid out its hands-off view in explaining why its search engine surfaced anti-Semitic websites in response to a search for "Jew."
“Our search results are generated completely objectively and are independent of the beliefs and preferences of those who work at Google,” said the company’s statement, which it deleted a decade later. “The only sites we leave out are those we are legally required to remove or those that maliciously try to manipulate our results.”
Google's early interventions in search results were limited to things like web spam, pirated movies and music (as required by copyright law), and sensitive personal information such as Social Security numbers. Only recently has the company, reluctantly, taken a more proactive role in cleaning up its results.
The most notable case came in 2014, when European courts established the "right to be forgotten." Residents of the European Union can request that information about them which they consider inaccurate or irrelevant be removed from search engines.
Google fought the court ruling, unsuccessfully. The company said its job was to make existing information accessible, and that it did not want to get into the business of regulating the content that appears in search results. Since the rule took effect, Google has been required to remove millions of links from search results for people's names.
After Donald J. Trump was elected president, the pressure to change intensified. Following the election, one of the top Google results for "2016 Final Voting Count" was a link to an article falsely claiming that Mr. Trump, the winner of the Electoral College, had also won the popular vote.
A few months later, Google announced an initiative to provide “algorithmic updates to display more meaningful content” to prevent intentionally misleading, inaccurate, or offensive information from being displayed in search results.
It was around this time that Google's reluctance to scrub harassing content from its results began to wane.
The Wayback Machine's archive of Google's guidelines for removing items from search results traces the company's evolution. First, Google agreed to remove nude photos posted online without the subject's consent. Then it began to delete medical information. Next came fake pornography, followed by sites with "exploitative removal" practices, and then so-called doxxing content, which Google defined as "exposing contact information with an intent to harm."
According to Google, the removal request forms are visited millions of times each year, but many victims are unaware they exist. That has allowed "reputation managers" and others to charge people for removing content from their results — removals they could have requested for free.
Pandu Nayak, the head of Google's search quality team, said the company began fighting websites that charge people to remove defamatory content a few years ago, in response to the rise of a thriving industry that posted people's mug shots and then charged for their deletion.
Google started ranking such exploitative websites lower in its results, but the change didn't help people who have little other information about them online. Because Google's algorithm abhors a vacuum, posts accusing such people of being drug abusers or pedophiles could still feature prominently in their results.
The slander websites have relied on that prominence. They couldn't demand thousands of dollars for removing content if the posts didn't damage people's reputations.
Mr. Nayak and Mr. Graff said Google had been unaware of the scale of the problem until the Times articles exposed it earlier this year. They said the changes to Google's algorithm and the creation of the "known victims" classification would help solve it. In particular, the changes will blunt one of the sites' favorite methods for gaining a foothold on Google: copying and reposting defamatory content from other websites.
Google recently tested the changes, with contractors comparing the new and old search results side by side.
The Times had previously compiled a list of 47,000 people featured on the slander sites. In searches for a handful of people whose results were once littered with defamatory posts, the effects of Google's changes were already apparent. For some, the posts had disappeared from their first page of results and from their image results. For others, the posts were mostly gone, with the exception of one from a newer smear site, CheaterArchives.com.
CheaterArchives.com may illustrate the limits of Google's new protections. Because it is relatively new, it is unlikely to have drawn complaints from victims — and such complaints are one way Google finds defamatory websites. In addition, CheaterArchives.com does not explicitly advertise post removal as a service, which may make it harder for victims to get its posts out of their results.
Google executives said the company was not motivated solely by sympathy for victims of online slander. Rather, the move is part of Google's long-running effort to fight websites that game their way higher in search results than they deserve.
"These sites are frankly gaming our system," Mr. Graff said.
Still, Google's move is likely to raise questions about the company's effective monopoly on deciding what information is publicly findable and what is not. Indeed, that is one reason Google has been so reluctant to intervene in individual search results in the past.
"You should be able to find anything that is legal to find," said Daphne Keller, who was a lawyer at Google from 2004 to 2015, working on the search product team during that time, and who now studies how platforms should be regulated, at Stanford. Google, she said, "is just flexing its own muscles and deciding which information should go away."
Ms. Keller was not criticizing her former employer so much as lamenting that lawmakers and law enforcement agencies have largely ignored the slander industry and its extortionate practices, leaving it to Google to clean up the mess.
That Google may be able to solve this problem with a policy change and some adjustments to its algorithm is "the benefit of centralization," said Ms. Citron, the University of Virginia professor, who has argued that technology platforms have more power than governments to fight online abuse.
Professor Citron was impressed by Google's changes, particularly the "known victims" designation. She said such victims were often posted about repeatedly, and that the sites compounded the damage by scraping content from one another.
"I applaud their efforts," she said. "Can they do better? Yes, they can."
Aaron Krolik contributed reporting.