Google Maps: how review moderation works

In this article, we are going to learn how Google moderates and fights inappropriate content on Maps.

Today, online reviews are an integral part of how Internet users shop and consume. Indeed, according to the European Consumer Centre France (ECC), nearly 90% of consumers rely on online customer reviews before ordering a product or booking a service.

At the same time, the growth of online reviews has given rise to widespread fraud: fake positive reviews, negative reviews posted by competitors, deletion of genuine negative reviews… Review platforms must therefore be all the more vigilant. Google, which publishes reviews on Google Maps, has now explained how it moderates this continuous flow of content.

Strict rules imposed by Google

To combat abusive content posted by Internet users, Google has established a strict policy, which may evolve with the societal context (such as a pandemic or presidential elections).

The rules laid down by the Mountain View firm serve two main objectives:

  • reviews must be based on real experiences,
  • illegal, irrelevant or disrespectful content must not be published.

Types of reviews that violate Google’s policies

Google has published a list of the types of reviews that violate these policies:

  • deliberately false content and spam,
  • copied or stolen photos,
  • off-topic reviews,
  • defamatory language,
  • personal attacks,
  • unnecessary or incorrect content.

It should also be noted that these rules apply to all user-generated content, namely:

  • the texts of reviews,
  • ratings,
  • images and videos,
  • questions and answers,
  • captions,
  • hashtags,
  • tags,
  • links,
  • metadata.

Google uses a machine learning system to moderate reviews

To streamline the handling of abusive reviews, Google uses an automated moderation system that assists the teams in charge of this task.

As Google explains: “Because we receive a large number of reviews, we realized that we needed both human analysis, for its fine-grained judgment, and machine learning, for the volume it can handle.”

The machine learning system can thus identify recurring patterns and remove undesirable content before it even goes online, reducing the workload of the teams involved.
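
To make the idea of “recurring patterns” concrete, here is a minimal, purely illustrative sketch in Python: it flags an incoming review whose text is a near-duplicate of one already seen, one plausible signal among many. The function names and the similarity threshold are assumptions made for this sketch, not details of Google’s actual system.

```python
from difflib import SequenceMatcher

# Hypothetical near-duplicate detector: one simple way to surface
# "recurring patterns" such as copy-pasted spam reviews.
# The 0.9 threshold is an arbitrary assumption for illustration.
SIMILARITY_THRESHOLD = 0.9

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't hide a copy."""
    return " ".join(text.lower().split())

def is_near_duplicate(text: str, seen_texts: list[str]) -> bool:
    """Return True if `text` closely matches a previously seen review."""
    candidate = normalize(text)
    return any(
        SequenceMatcher(None, candidate, seen).ratio() >= SIMILARITY_THRESHOLD
        for seen in seen_texts
    )

seen: list[str] = []
for review in [
    "Great service, highly recommend!",
    "Great service,  highly recommend !!",  # near-duplicate spam
    "Food was cold and the staff was rude.",
]:
    if is_near_duplicate(review, seen):
        print(f"blocked before publication: {review!r}")
    else:
        seen.append(normalize(review))
        print(f"accepted: {review!r}")
```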

The machine learning system evaluates several factors (see the sketch after this list):

  • the content: the algorithm analyzes whether the content is prohibited or subject to restrictions, based on the rules defined by Google,
  • the account: the system inspects the user’s profile and detects potentially suspicious activity,
  • the place: the tool checks the number of reviews received over a given period of time, and whether the establishment has previously encouraged users to post fake reviews, notably via social networks or the press.
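
As a purely hypothetical illustration of how such checks might be combined (every field name, rule and threshold below is an assumption made for this sketch, not a signal documented by Google), each factor can return a verdict, and a single failed check is enough to hold the review back for human analysis:

```python
from dataclasses import dataclass

# All fields, rules and thresholds are illustrative assumptions,
# not Google's actual moderation signals.

@dataclass
class Review:
    text: str
    author_flagged_before: bool    # prior policy violations on the account
    place_reviews_last_hour: int   # recent review volume at this place
    place_solicited_fakes: bool    # known calls for fake reviews

BANNED_TERMS = {"scam", "idiot"}   # stand-in for the content rules
MAX_REVIEWS_PER_HOUR = 50          # stand-in for a volume threshold

def content_ok(r: Review) -> bool:
    """Factor 1 (content): no prohibited language in the text."""
    return not any(term in r.text.lower() for term in BANNED_TERMS)

def account_ok(r: Review) -> bool:
    """Factor 2 (account): no suspicious history on the profile."""
    return not r.author_flagged_before

def place_ok(r: Review) -> bool:
    """Factor 3 (place): no signs of review manipulation."""
    return (r.place_reviews_last_hour <= MAX_REVIEWS_PER_HOUR
            and not r.place_solicited_fakes)

def moderate(r: Review) -> str:
    if all(check(r) for check in (content_ok, account_ok, place_ok)):
        return "publish"                 # no violation: live within seconds
    return "hold for human review"

review = Review(
    text="Friendly staff and quick service.",
    author_flagged_before=False,
    place_reviews_last_hour=3,
    place_solicited_fakes=False,
)
print(moderate(review))  # -> publish
```

In reality, Google combines far richer signals with human review, but the overall structure, independent factor checks feeding one decision, mirrors the description above.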

If the algorithms do not detect any violations, Google says the review can be posted in just a few seconds.

Google encourages users to report malicious reviews

Even though the machine learning moderation system is constantly being optimized, an inappropriate review occasionally slips through the net. To address this, Google makes it easy for users to report reviews.

If the review concerns your own establishment, simply go to the dedicated help page, which lets you manage your reviews and your removal requests. As a consumer, you can also report a review you deem inappropriate.

Beyond the deletion of the malicious review, its author faces two risks: having their account suspended, and being subject to legal proceedings.

Peter