The 0.1-Star Math: What Your Google Rating Actually Does to Revenue
How a 0.1-star difference moves real revenue, worked through restaurants, home services, and salons. The Harvard Business School paper, the rounding-threshold problem, and why review count multiplies the rating effect.
A 0.1-star difference does not sound like much. On a Google profile it looks identical to most users. But in a much-cited Harvard Business School study of Yelp ratings and restaurant revenue, Michael Luca found that one extra star produced a 5 to 9 percent revenue lift. Per 0.1 star, that is roughly 0.5 to 1 percent of revenue. For a restaurant doing $1.2 million in annual revenue, a 0.1-star move is $6,000 to $12,000 a year. Sustained.
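That back-of-envelope conversion is simple enough to sketch. A minimal calculator using the 0.5-to-1-percent-per-0.1-star range quoted above (the function name and interface are ours, not from any cited study):

```python
def lift_per_tenth_star(annual_revenue, pct_range=(0.005, 0.01)):
    """(low, high) estimated annual dollar lift from a 0.1-star move,
    assuming the 0.5-1% of revenue per 0.1 star range in the text."""
    low, high = pct_range
    return annual_revenue * low, annual_revenue * high

# The $1.2M restaurant: roughly $6,000 to $12,000 a year
print(lift_per_tenth_star(1_200_000))
```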
Owners look at me funny when I say this. The reaction is usually some version of "no way, my customers do not look at stars that closely." They do not need to. Google does the looking for them, by ranking businesses with higher star averages higher in the local pack. The customer never has to compare 4.0 vs. 4.1; the algorithm does that, and the customer just clicks on whoever is on top.
This piece walks through the actual math, worked across three industries, and the two threshold effects that make some 0.1-star moves matter more than others.
The Harvard paper, in plain English
Luca used a clever natural experiment. Yelp rounds the displayed rating to the nearest half-star, so a restaurant with a true 3.74 displays as 3.5 stars while one at 3.76 displays as 4.0 stars. Two restaurants with essentially identical service quality, separated by 0.02 stars in the database, get displayed as different ratings on the public profile.
Comparing revenue on either side of the rounding threshold, restaurants that crossed up to the next half-star saw an estimated 5 to 9 percent revenue lift, holding everything else constant; restaurants that stayed just below did not. UC Berkeley economists Michael Anderson and Jeremy Magruder ran the same design on 328 San Francisco restaurants and found that crossing a half-star threshold made a restaurant markedly more likely to sell out its prime-time seats.
The paper has been cited 1,400 times since publication. It has been replicated in adjacent industries:
- Hotels (Cornell, 2014): Roughly $1.42 in nightly room rate per 0.1-star change on Tripadvisor, holding location and amenities constant. For a 200-room hotel running at 70 percent occupancy, that is around $73,000 a year.
- Home services (BrightLocal, 2024): Customers who saw a 4.5-star contractor were 2.4 times more likely to call than those who saw a 4.0-star contractor with otherwise-identical profile data.
- Doctors (JAMA, 2018): Patients were 1.6 times more likely to book a 4.5-star primary care physician than a 4.0-star one in the same insurance network.
The size of the effect varies by industry, but the direction is consistent. Higher star ratings produce more revenue at the same level of foot traffic.
Worked example 1: a restaurant
Mid-size restaurant in a US city. Annual revenue $1.4 million. Currently has 187 Google reviews and a 4.1-star average. Roughly 60 percent of new diners come from Google (search and Maps).
The Google profile displays 4.1 as "4 stars" with the 0.1 hidden behind a tooltip. Diners scrolling the local pack see "4 stars" and assume average. Move the rating to 4.3 and the profile starts showing "4.5 stars" (4.3 sits above the 4.25 half-star boundary, so most surfaces round it up, though the rule changes occasionally).
A 0.2-star move from 4.1 to 4.3 corresponds, per the Harvard estimate, to a 1 to 2 percent revenue lift. On $1.4 million that is $14,000 to $28,000 a year. The math gets more interesting when you account for the display threshold: crossing the 4.25 boundary (where Google starts showing 4.5 stars instead of 4) adds an additional step-change effect, since the visual signal to the customer changes from "average" to "very good."
How many new reviews does a 0.1-star move take? It depends on existing review count. At 187 reviews, moving from 4.1 to 4.2 requires the next 24 reviews to all be 5 stars. From 4.2 to 4.3 requires roughly another 31. The math gets harder as the count grows, which is why owners with mature profiles often see slower star-rating gains than newer ones.
Practical implication: small businesses with under 50 reviews can move their average significantly with disciplined collection. Businesses over 200 reviews need either disciplined collection at higher volume or careful response-and-flag work to remove off-topic or fake reviews.
Worked example 2: a home services contractor
Two-truck HVAC company in a US suburb. Annual revenue $890,000. 64 Google reviews, 3.9-star average. About 40 percent of new customers come from Google search; the rest is referrals.
The 3.9-star average displays as "4 stars" on most surfaces but reads as below-average to comparison-shopping customers. BrightLocal's conversion data for home services shows that customers comparing two contractors pick the higher-rated one 78 percent of the time when ratings differ by 0.5 stars or more, and 61 percent of the time at 0.3 stars or more.
If the company moves its rating from 3.9 to 4.4 (0.5 stars), the model from BrightLocal predicts roughly a 30 percent lift in call-out conversion among Google-sourced leads. Applied to $356,000 of Google-sourced revenue (40 percent of $890,000), that is around $107,000 in additional revenue annually.
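The revenue arithmetic behind that $107,000 figure is one multiplication chain. A sketch, with the 30 percent conversion lift taken from the BrightLocal-derived estimate above (it is not a universal constant):

```python
def channel_revenue_lift(annual_revenue, channel_share, conversion_lift):
    """Extra annual revenue from lifting conversion on one
    acquisition channel (here, Google-sourced leads)."""
    return annual_revenue * channel_share * conversion_lift

# $890k revenue, 40% Google-sourced, ~30% conversion lift → about $107k
print(channel_revenue_lift(890_000, 0.40, 0.30))
```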
The number sounds enormous because home services is one of the highest-impact industries for review-driven conversion. The data backs it up. We wrote a longer piece on this dynamic in our home services reviews guide.
Worked example 3: a hair salon
Single-location salon. Annual revenue $310,000. 41 Google reviews, 4.7-star average. About 70 percent of new clients come through Instagram and Google together.
This salon is in a different position than the previous two. At 4.7 stars from 41 reviews, the conversion ceiling is more about review count than rating. Customers do not differentiate strongly between 4.7 and 4.9 in salon discovery; both read as "very good." But customers do differentiate between 41 reviews and 200+ reviews, because they intuit that more reviews means lower variance and more reliable signal.
Moving from 41 to 200 reviews at the current 4.7 average corresponds, in salon-industry data from Booksy and Fresha (booking platforms that publish aggregate conversion data), to roughly a 12 to 18 percent lift in new-client booking conversion. On $217,000 of booking-driven revenue (70 percent of $310,000), that is $26,000 to $39,000 annually.
The lever for this salon is volume, not rating. Pushing for one more 5-star review per month is wasted; pushing for the next 50 is the right move.
Two threshold effects to know
The display rounding threshold
Google rounds displayed ratings to the nearest 0.1 or 0.5 depending on surface. The 4.25 threshold (where the rounding flips from "4" to "4.5" on some surfaces) is the most consequential. A move from 4.24 to 4.26 looks identical in your dashboard but creates a step-change in the customer's visual perception of your profile.
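The step change is easy to see if you model the half-star rounding directly. This mirrors the article's description of the 4.25 boundary; Google's actual per-surface rounding rules are undocumented and change over time:

```python
def displayed_half_stars(rating):
    """Round a rating to the nearest 0.5, ties rounding up
    (so 4.25 crosses to 4.5)."""
    return int(rating * 2 + 0.5) / 2

print(displayed_half_stars(4.24))  # → 4.0
print(displayed_half_stars(4.26))  # → 4.5
```

Two ratings 0.02 apart in the dashboard land on opposite sides of the visual "4 stars" vs. "4.5 stars" signal.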
Practical implication: if your rating is at 4.1 to 4.2, moving up by 0.1 to 0.2 captures the full threshold benefit. If you are at 4.0 or lower, work toward 4.5 (the next visible threshold) rather than incremental gains. If you are already at 4.5 or above, focus on review count rather than further rating gains.
The trust-with-volume threshold
Customers discount star ratings backed by very few reviews. A 5.0-star rating from 3 reviews reads as untested. A 4.6-star rating from 240 reviews reads as proven. Most customers cannot articulate this, but it is observable in click-through and booking data.
The volume threshold sits around 50 to 100 reviews in most industries. Below 50 reviews, every additional review materially improves trust. Between 50 and 150, the effect attenuates. Above 150, additional reviews have minimal marginal effect on trust signal but continue to help with the rating math.
Practical implication: small businesses under 50 reviews should prioritize volume over rating optimization. Above 150 reviews, additional volume helps less and rating moves matter more.
What this means for review collection priority
Three rules of thumb based on the math above:
- If you have under 50 reviews: the priority is volume. Collect any genuine review at any rating. Worry about the rating after you cross 80.
- If you have 50 to 150 reviews and a rating of 4.0 to 4.3: the priority is moving the rating to cross the 4.25 display threshold. A 0.2-star move at this stage produces the largest revenue lift.
- If you have 150+ reviews and a rating of 4.5 or above: the priority is sustaining velocity. Review recency matters for local-pack ranking (we wrote about this in the local SEO ranking factors piece). Stagnant profiles lose ranking even if their rating is high.
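The three rules above can be sketched as a triage function. The thresholds (50 and 150 reviews, the 4.0-to-4.3 band) are this article's rules of thumb, not anything published by Google:

```python
def review_priority(count, rating):
    """Suggest a review-collection priority from profile stats,
    per the three rules of thumb above."""
    if count < 50:
        return "volume: collect any genuine review"
    if count <= 150 and 4.0 <= rating <= 4.3:
        return "rating: push past the 4.25 display threshold"
    if count > 150 and rating >= 4.5:
        return "velocity: keep fresh reviews coming for ranking"
    return "mixed: balance volume and rating work"

print(review_priority(41, 4.7))   # the salon → "volume: ..."
print(review_priority(100, 4.1))  # → "rating: ..."
```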
The temptation at any stage is to chase fake reviews to inflate the numbers. We covered why that backfires in our piece on review gating and FTC enforcement. The math above only works for genuine reviews; faked or filtered review pipelines collapse the moment Google's spam filters catch up, which currently runs on a 60-to-90-day detection window.