Yelp puts trust and safety in the spotlight – TechCrunch

Yelp released its first Trust and Safety Report this week, with the aim of explaining the work it does to combat fraudulent and otherwise inaccurate or unhelpful content.

With its focus on local business reviews and information, Yelp might seem relatively free of the misinformation problems that other social media platforms are struggling with. But of course, Yelp reviews carry high stakes of their own, since they can have a huge impact on a business's bottom line.

Like other online platforms, Yelp uses a mix of software and human curation. On the software side, one of the main tasks is to sort reviews into recommended and not recommended. Sudheer Someshwara, product manager for the Trust and Safety Group, told me that a review might not be recommended because it appears to have been written by someone with a conflict of interest or solicited by the business, or because it was written by a user who hasn't posted many reviews yet, and "we just don't know enough about the user to recommend those reviews to our community."

“We take fairness and integrity very seriously,” Someshwara said. “No one at Yelp can override decisions made by the software. That even includes the engineers.”

He added, “We treat every business the same, whether they advertise with us or not.”

Image Credits: Yelp

The company says that more than 18.1 million reviews were posted in the past year, of which 4.6 million (about 25%) were not recommended by the software. Someshwara noted that even if a review isn’t recommended, it isn’t removed entirely – users just have to click through to a separate section to find it.

The software will miss things, and that’s one of the places where the User Operations team comes in. “We make it easy for both businesses and consumers to flag reviews,” said Aaron Schur, vice president of legal, trust, and safety. “Any content flagged that way will be reviewed by a live human, who decides whether it violates our guidelines and should be removed.”

Last year, about 710,000 reviews (4%) were removed entirely for violating company policies, according to Yelp. Of these, more than 5,200 were removed for violating the platform’s COVID-19 guidelines (which, among other things, prohibit reviewers from claiming they contracted COVID-19 from a business, complaining about mask requirements, or asserting that a business should shut down for violating safety regulations). Another 13,300 were removed between May 25 and the end of the year for threats, indecency, hate speech or other harmful content.

“Every current event that happens finds its way onto Yelp,” said Noorie Malik, vice president of user operations. “People are turning to Yelp and other social media platforms for a voice.”

However, expressing political beliefs can run counter to what Malik called Yelp’s “guiding principle,” which is “real firsthand experience.” So Yelp has developed software to detect unusual activity on a business’s page, and it will also add a consumer alert when it believes there are “egregious attempts to manipulate reviews and ratings.” The company says the number of such media-fueled incidents increased 206% year over year.

It’s not that you can’t express political opinions in your reviews, but the review needs to come from firsthand experience, rather than being prompted by reading a negative article or an angry tweet about the business. Sometimes, she added, that means the team is “removing content with a point of view that we agree with.”

One example that illustrates this distinction: Yelp will remove reviews that appear to be driven by media coverage suggesting a business owner or employee has acted in a racist way. At the same time, in December 2020, two businesses were labeled with a “Business Accused of Racist Behavior” alert, reflecting “clear evidence of egregious, racist actions by a business owner or employee.”

In addition to looking at individual reviews and activity spikes, Someshwara said Yelp will also perform “sting operations” to find groups posting fraudulent reviews.

In the past year, his team closed about 1,200 user accounts associated with review rings and reported nearly 200 such groups to other platforms. Yelp also recently rolled out an updated algorithm to better identify reviews coming from these groups and not recommend them.