Local Services Ads Lead Quality Is a Machine Learning Decision, Not a Human One
If you have been running Google Local Services Ads for a while, you probably remember the old rhythm: a questionable lead comes in, you flag it, you dispute it, and somewhere on the other side there is a human editorial review that decides whether you get credited.
That era is fading fast.
Google now describes Local Services Ads lead evaluation as an automated, machine-learning-based system that determines whether a lead is valid and high quality, including whether it is a good match for your business. Leads are assessed when the customer first makes contact, and Google says leads determined to be invalid or low quality are not charged. Even if you are charged, Google says a lead can be reassessed over time and credited automatically if it is later determined to be low quality.
What Google Actually Says About the Machine Learning Review Process
Google’s clearest statement is in its documentation about automated lead credits. They say they have trained machine learning models to understand which leads are high quality, that leads are first assessed at initial contact, that invalid or low-quality leads are not charged, and that charged leads may be reassessed over time with automatic credits issued if the model later decides the lead was low quality.
For businesses in Portland and everywhere else, that shift changes how you should think about lead quality, lead disputes, and the signals you feed into your account.
Why “In Review” Exists and What It Really Means
In a machine learning pipeline, “in review” is usually a confidence problem.
Some leads are obvious. A clean match, correct service, correct location, normal behavior, normal context. Those are easy for an automated system to classify quickly.
Other leads are ambiguous. The customer intent is unclear. The service requested is close to what you offer but not quite. The location is borderline. The call pattern looks odd. The contact details seem suspicious. A human might resolve that ambiguity in seconds after hearing the full context, but a model often needs more signals or more time.
Google’s documentation supports the idea that lead evaluation is not always a one-and-done decision, because it explicitly says charged leads can be reassessed over time and credited later if determined to be low quality. So when you see “in review,” interpret it as: the system is not confident yet.
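One way to picture that confidence problem is a classifier that only finalizes high-confidence decisions and parks everything else. Google publishes none of its thresholds or internals, so the sketch below is purely illustrative; the function name, score field, and cutoffs are all assumptions:

```python
# Hypothetical illustration of confidence-gated lead triage.
# None of these thresholds or names come from Google; they only
# show why an automated system would need an "in review" state.

def triage_lead(quality_score: float) -> str:
    """Route a lead based on the model's confidence in its quality.

    quality_score: estimated probability (0..1) that the lead is
    valid and high quality.
    """
    if quality_score >= 0.9:   # confident it's good -> billable
        return "charge"
    if quality_score <= 0.1:   # confident it's junk -> filtered out
        return "do_not_charge"
    return "in_review"         # ambiguous -> needs more signals or time

print(triage_lead(0.95))  # charge
print(triage_lead(0.50))  # in_review
print(triage_lead(0.05))  # do_not_charge
```

The middle band is the point: the wider your gray area of borderline leads, the more of them land in that third bucket.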
A Practical Model of How Google’s ML Lead Evaluation Works
Google does not publish its feature list or its thresholds. Still, the workflow they describe maps to a pattern you see across many modern automated quality systems.
There is typically a first-pass classifier at the moment of contact, because Google needs to decide whether to charge.
Then there is a second-pass reassessment, because new information becomes available later, or because the system runs deeper analysis when it is not under strict real-time constraints.
That is why you can see a lead look billable today and then be credited later.
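The two-pass pattern can be sketched in a few lines. Again, this is a generic model of the workflow Google describes, not Google's actual implementation; every name, score, and threshold here is hypothetical:

```python
# Hypothetical two-pass pipeline: a fast decision at contact time,
# then a later reassessment that can issue an automatic credit.
# Field names, scores, and cutoffs are illustrative only.

from dataclasses import dataclass

@dataclass
class Lead:
    lead_id: str
    charged: bool = False
    credited: bool = False

def first_pass(lead: Lead, initial_score: float) -> None:
    # At contact time the system must decide quickly whether to charge.
    lead.charged = initial_score >= 0.5

def reassess(lead: Lead, later_score: float) -> None:
    # Later, richer signals are available; a charged lead that now
    # looks low quality is credited automatically.
    if lead.charged and not lead.credited and later_score < 0.5:
        lead.credited = True

lead = Lead("abc123")
first_pass(lead, 0.6)   # looked billable at contact time
reassess(lead, 0.2)     # deeper analysis says low quality -> credit
print(lead.charged, lead.credited)  # True True
```

The takeaway for advertisers is simply that "charged" and "final" are not the same state.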
What the ML System Is Probably Trying to Predict
Even without knowing the exact inputs, you can still think clearly about the outputs.
Google is trying to predict whether a lead should be charged.
In plain English, that usually breaks down into questions like these:
- Was this a real person with real intent to hire a provider, or was it spam, solicitation, or accidental?
- Was this inquiry actually for the services the advertiser offers?
- Was it actually in the advertiser’s service area and within reasonable fulfillment expectations?
- Was this lead duplicative or otherwise not meaningful as a new opportunity?
Google’s public language focuses on “high quality,” “invalid,” “low quality,” and “good match,” which are broad buckets, but they point to the same practical reality: the system is attempting to protect the pay-per-lead model from garbage inputs while still monetizing legitimate demand.
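Restated as code, those questions amount to a composite validity check. The real system presumably learns these judgments from data rather than applying hand-written rules, so treat the booleans below as a mental model only, with every field name made up for illustration:

```python
# Illustrative only: restates the four questions above as explicit
# boolean checks. A production ML system would learn these judgments
# from data, not hard-code them.

def looks_chargeable(lead: dict, advertiser: dict, seen_ids: set) -> bool:
    real_intent   = lead["intent"] == "hire"                    # not spam or solicitation
    service_match = lead["service"] in advertiser["services"]   # service actually offered
    in_area       = lead["zip"] in advertiser["service_area"]   # inside the service area
    not_duplicate = lead["id"] not in seen_ids                  # a genuinely new opportunity
    return real_intent and service_match and in_area and not_duplicate

advertiser = {"services": {"drain cleaning"}, "service_area": {"97201", "97202"}}
lead = {"id": "L1", "intent": "hire", "service": "drain cleaning", "zip": "97201"}
print(looks_chargeable(lead, advertiser, set()))  # True
```

If any one of those checks fails, or is merely ambiguous, the lead becomes exactly the kind of borderline case the system struggles to classify quickly.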
Why This Matters More in 2026 Than It Did Before
Portland businesses are dealing with the same trend you see across Google Ads: more automation, fewer manual levers, and more dependence on signal quality.
LSAs are a perfect example because the platform is intentionally simplified. You are not building huge keyword lists. You are not micromanaging match types. You are feeding Google a set of services, a service area, a schedule, and a reputation profile, and then you are letting automation decide who gets matched and what gets charged.
If you want a broader overview of how we approach Google Ads strategy and performance tuning beyond LSAs, you can also check out our dedicated Google Ads services page here: Google Adwords.
The Hidden SEO Lesson Inside LSA Machine Learning
LSAs are not “SEO,” but the logic is familiar. Both organic search and LSAs are ranking and matching systems.
In organic, you win by aligning intent, relevance, and trust signals.
In LSAs, you win by aligning intent, relevance, and trust signals, and then you also need the lead-quality system to classify your leads correctly and credit the junk when it slips through.
How to Reduce “Gray Area” Leads That Trigger Reviews and Billing Friction
The goal here is not to outsmart Google. The goal is to make your business profile and your targeting so unambiguous that the model has fewer borderline cases to classify.
Here are a few high-impact ways to tighten things up:
- Tighten service categories and job types. Over-broad service lists increase the odds that leads look “possibly relevant” even when they are not.
- Audit service area boundaries. Edge-of-boundary leads are classic ambiguity generators.
- Make your business hours accurate. Clear hours reduce weird patterns and missed expectations.
- Respond quickly and consistently. Even when lead quality is strong, slow response can make good leads look unproductive in aggregate.
- Use lead feedback consistently. If your dashboard offers lead rating or feedback fields, treat it like long-term signal hygiene.
Lead Grading and Feedback: What It Can and Cannot Do
You will see a lot of marketing content claiming that grading leads trains the system and improves future targeting. The honest version is more nuanced.
Google emphasizes the machine learning review and the automated crediting mechanism, but it does not promise a direct, immediate “your ratings will retrain the model for your account next week” outcome.
Still, from a systems perspective, consistent feedback is valuable, because large-scale ML systems often incorporate aggregated feedback as training labels or calibration signals over time. If you can do it consistently, it is worth doing.
Where This Is Headed Next
My bet is that lead quality evaluation becomes even more automated, more continuous, and more context-aware. That could mean better filtering and fairer billing over time. It could also mean that sloppy configuration gets punished faster, because the system has less tolerance for ambiguity when it is making decisions at scale.
Either way, the businesses that win will be the ones that treat LSAs like an intent-matching engine that needs clean inputs, not like a vending machine that randomly spits out leads.
Want Help Cleaning Up LSA Lead Quality?
If you are getting too many “in review” leads, too many wrong-service calls, or too many charges that do not feel aligned with real opportunities, that is usually fixable, but it starts with an audit of your LSA inputs and how they map to actual customer intent.
If you want me to take a look, head over to the contact page on this site and send me what vertical you are in, what you are seeing most often in “in review,” and a screenshot of your lead types breakdown.


