For what I believe is the first time, Google has provided detail on how it handles review spam policy enforcement for Google Maps. The search company published a blog post, accompanied by a short video, describing what it does to tackle problematic reviews in Google Maps.
The blog post covers the policies themselves, how they are enforced, how Google moderates reviews with a mix of machine learning and human reviewers, and more.
Here is the video if you want to watch it first:
Google said “when governments and businesses started requiring proof of COVID-19 vaccine before entering certain places, we put extra protections in place to remove Google reviews that criticize a business for its health and safety policies or for complying with a vaccine mandate.” So I asked some local SEO experts about this, and here is what I heard:
I can offer one perspective. My hospital has seen a handful of antivaxxer review bombs automatically removed by Google following several incidents that received widespread attention.
— Jared Caraway (@jaredcarawaytx) February 2, 2022
I haven’t read Google’s piece yet but it is conceivable that rants against masking are proactively removed.
That being said their whole moderation system is flawed, under trained and under staffed and all too often misses obvious abuses
— Mike Blumenthal (@mblumenthal) February 2, 2022
Google added that it uses both humans and algorithms to fight review spam, with machine learning serving as its “first line of defense.” Google’s algorithms look at reviews from these angles (a rough sketch of how such signals might combine follows the list):
- The content of the review: Does it contain offensive or off-topic content?
- The account that left the review: Does the Google account have any history of suspicious behavior?
- The place itself: Has there been uncharacteristic activity — such as an abundance of reviews over a short period of time? Has it recently gotten attention in the news or on social media that would motivate people to leave fraudulent reviews?
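Google does not publish how these signals are weighted or combined, but to make the idea concrete, here is a minimal toy sketch of how the three angles above could feed a single risk score. Every field name, keyword list and threshold below is hypothetical, not something Google has disclosed; a real system would rely on learned models rather than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical keyword lists purely for illustration; Google's actual
# content checks are machine-learned, not simple term matches.
OFF_TOPIC_TERMS = {"vaccine mandate", "mask policy"}
OFFENSIVE_TERMS = {"scam", "fraud"}

@dataclass
class Review:
    text: str
    account_flagged: bool      # the account has prior suspicious behavior
    place_review_spike: bool   # unusual review volume at the place
    place_in_news: bool        # recent news or social-media attention

def risk_score(review: Review) -> int:
    """Count how many of the three signal groups raise a concern."""
    text = review.text.lower()
    content_hit = any(term in text for term in OFF_TOPIC_TERMS | OFFENSIVE_TERMS)
    account_hit = review.account_flagged
    place_hit = review.place_review_spike or review.place_in_news
    return sum([content_hit, account_hit, place_hit])

# A review scoring 2 or more might be routed to human operators for a closer look.
example = Review("Total scam, they demand proof of a vaccine mandate!",
                 account_flagged=True, place_review_spike=False, place_in_news=True)
print(risk_score(example))  # -> 3
```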
Google’s automated methods also look for patterns. Google disclosed a couple of the patterns it watches for, including (1) a group of people leaving reviews on the same cluster of Business Profiles and (2) a single place receiving an unusually high number of 1- or 5-star reviews over a short period of time.
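Again purely as an illustration, and not Google’s actual method, the second pattern (a burst of 1- or 5-star reviews at one place) could be approximated with a simple sliding-window count; the 24-hour window and 20-review threshold here are made-up numbers.

```python
from datetime import datetime, timedelta

def has_review_burst(reviews, window=timedelta(hours=24), threshold=20):
    """Return True if more than `threshold` 1- or 5-star reviews
    fall inside any single `window`-long span."""
    extremes = sorted(ts for stars, ts in reviews if stars in (1, 5))
    start = 0
    for end, ts in enumerate(extremes):
        # Shrink the window from the left until it spans at most `window`.
        while ts - extremes[start] > window:
            start += 1
        if end - start + 1 > threshold:
            return True
    return False

# Example: 25 five-star reviews within 25 minutes trips the check.
now = datetime(2022, 2, 2, 12, 0)
burst = [(5, now + timedelta(minutes=i)) for i in range(25)]
print(has_review_burst(burst))  # -> True
```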
Google said its “human operators regularly run quality tests and complete additional training to remove bias from the machine learning models.”
But while the automated methods act quickly, some cases still need a human eye, and users can flag reviews in the platform for review. Google said its human operators “work around the clock to review flagged content. When we find reviews that violate our policies, we remove them from Google and, in some cases, suspend the user account or even pursue litigation.”
Anyway, we all know Google has its issues, especially with map spam and review spam – but it is nice to see Google trying to be more transparent about it – on some level.
Forum discussion at Twitter.