Bad data nearly cost me my job less than two years into my career. Well, technically, my poor response to a territory deployment built on bad data nearly cost me my job. But that’s a less fun hook for a blog post.
Here’s the story: We were growing fast and needed to completely redesign territories. The approach of “reps grab whatever accounts they want” wasn’t going to work anymore. SalesOps built a strong scoring algorithm around criteria like industry, revenue, and employee count. They pulled all our customer data to define what a “great fit” looks like, and the model predicted potential spend nearly perfectly when applied to current customers.
The approach, weighting, and algorithm were well designed and well tested. The next step was creating geographic territories based on zip codes. They’d take every account in a zip code, score each one, then sum those scores to give the zip code a total score. The logic was sound: if we know the score of a zip code, we know how much revenue to expect from that area.
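The roll-up itself is simple. Here’s a minimal sketch of that logic in Python; the field names and weights are my own placeholders, not the actual SalesOps model. Notice that nothing in it questions whether the inputs are true, which is exactly where things went wrong.

```python
from collections import defaultdict

def score_account(account):
    # Placeholder scoring: weight by industry, then blend revenue and
    # headcount. The real model's criteria and weights are unknown.
    industry_weight = {"software": 1.0, "manufacturing": 0.6}.get(
        account["industry"], 0.3
    )
    return industry_weight * (
        account["revenue"] / 1_000_000 + account["employees"] * 0.1
    )

def score_zip_codes(accounts):
    # Sum account scores within each zip code to get a territory score.
    # Bad inputs (inflated revenue, wrong headcount) flow straight through.
    zip_scores = defaultdict(float)
    for account in accounts:
        zip_scores[account["zip"]] += score_account(account)
    return dict(zip_scores)

accounts = [
    # What the data said: 750 employees, $300M in revenue...
    {"zip": "49503", "industry": "software", "revenue": 300_000_000, "employees": 750},
    # ...versus what many accounts actually were: 14 employees, tiny revenue.
    {"zip": "49503", "industry": "manufacturing", "revenue": 2_000_000, "employees": 14},
]
print(score_zip_codes(accounts))  # one inflated record dominates the zip's total
```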
My territory ended up as suburban Grand Rapids, Michigan — all the zip codes in the area that weren’t in the metro/city. When I dug in, I immediately found a massive issue: I had high-scoring accounts that actually sucked.
The Territory Deployment That Almost Ended My Career
The problem wasn’t the algorithm used to score the data. It wasn’t the zip code-based approach. The problem was the data powering the model.
I’d look at accounts that our data said had 750 employees and $300 million in revenue. In reality, they had 14 employees and not enough revenue to build a working website. Out of my 2,000+ accounts, maybe 20 were actually a good fit for what we sold.
So I threw a fit and nearly got fired. Ended up suspended without pay for a day — bless up merciful sales leaders!
But that’s not the point of the story. The point is that great algorithms and strong strategies fail when the underlying data isn’t accurate.
Why Great Algorithms Fail With Bad Data
Bad data quality can destroy even the best sales algorithms. Most territory scoring systems rely heavily on revenue and employee count because these seem like obvious indicators of account quality. But this approach has two critical flaws.
First, data quality for privately held companies is usually garbage. I’ve never spoken to a salesperson who thought they had great CRM data. The impact is painful: ops will highlight accounts for you to work based on numbers that are often completely wrong.
Second, even with perfect data, quantitative measures don’t tell the whole story. Which would you rather prospect into: a $500 million company with 2,000 employees but no clear use case for your product, or a $50 million company with 500 employees and multiple clear use cases?
The better approach is what I call the Use Case + Budget framework. Use Case means you can identify where and how your product would be used. Budget means the company likely invests, or would invest, in solutions like yours. You’re looking for potential budget — not already allocated budget.
My two favorite signals for potential budget are competitive solution usage and headcount growth in target departments. If they’re spending on a competitor, there’s potential budget for you to capture. If they’re hiring in departments that use your type of solution, they’re investing in growth.
This framework would have saved me from that disastrous territory deployment. Instead of relying on inflated revenue numbers, I would have looked for companies that actually built software (use case) and were hiring developers (budget signal).
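To make the framework concrete, here’s a rough sketch of what Use Case + Budget scoring could look like. Every field name and point value here is a hypothetical stand-in for whatever signals your enrichment data actually provides; the point is that fit signals, not raw company size, drive the score.

```python
def use_case_budget_score(account):
    # Hypothetical scoring: all field names and weights are illustrative.
    # Note that revenue and employee count never enter the score.
    score = 0
    # Use case: can you point to where and how the product would be used?
    if account.get("builds_software"):
        score += 2
    # Budget signal 1: they're already spending on a competing solution.
    if account.get("uses_competitor"):
        score += 2
    # Budget signal 2: headcount growth in the department you sell into.
    if account.get("open_dev_roles", 0) > 0:
        score += 1
    return score

# The $50M company with clear use cases outranks the $500M company without one.
big_no_fit = {"revenue": 500_000_000, "employees": 2_000, "builds_software": False}
small_fit = {
    "revenue": 50_000_000,
    "employees": 500,
    "builds_software": True,
    "uses_competitor": True,
    "open_dev_roles": 3,
}
assert use_case_budget_score(small_fit) > use_case_budget_score(big_no_fit)
```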
The AI Amplification Problem: Making Bad Decisions Faster
Now, with AI, companies can make really bad decisions even faster, because their fancy AI tooling and strategy are powered by incomplete or inaccurate data.
AI tools amplify existing data problems rather than solving them. When you feed an AI system bad data about account revenue, employee count, or industry classification, it doesn’t magically fix those errors — it just makes confident predictions based on garbage inputs.
The risk is even higher now because AI can process thousands of accounts instantly. Where manual territory planning might have resulted in a few hundred mis-scored accounts, AI-powered systems can mis-score entire territories in seconds. The speed advantage becomes a massive liability when the underlying data is wrong.
This is why sales teams need to proactively address the risks in their data before implementing AI territory planning tools. The same discipline that prevents deal risks applies to data quality: identify problems early, before they compound.
What Sales Teams Should Do Before Implementing AI Territory Planning
Manual account validation should precede any AI territory planning implementation. Before you let algorithms make territory decisions, you need to understand what good data looks like in your system.
Start by manually scoring a sample of your Priority 1 accounts using the Use Case + Budget framework. For each account, research whether they actually have a use case for your solution and signals that indicate budget availability. Compare your manual scores to what your current system shows.
If you find significant discrepancies — like accounts scored as high-value that have no use case, or small companies with strong use cases scored as low priority — you have a data quality problem that needs fixing before AI can help.
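One way to operationalize that comparison: put your manual Use Case + Budget scores and the system’s scores on the same scale, then flag the accounts where they diverge. A minimal sketch, assuming both scores are simple integers and using an arbitrary gap threshold:

```python
def find_discrepancies(accounts, threshold=2):
    # Flag accounts where manual research disagrees with the system score.
    # Field names are illustrative; both scores must share a scale.
    flagged = []
    for account in accounts:
        gap = account["system_score"] - account["manual_score"]
        if abs(gap) >= threshold:
            reason = (
                "inflated: system likes it, research says no fit"
                if gap > 0
                else "hidden gem: strong fit the system is missing"
            )
            flagged.append((account["name"], gap, reason))
    return flagged

sample = [
    {"name": "Acme Corp", "system_score": 5, "manual_score": 1},   # bad data
    {"name": "Startup Co", "system_score": 1, "manual_score": 4},  # under-scored
]
for name, gap, reason in find_discrepancies(sample):
    print(f"{name}: gap={gap:+d} ({reason})")
```

Even a small sample will tell you quickly whether you’re looking at noise or a systemic data problem.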
The manual validation process also helps you understand why AEs shouldn’t rely solely on RevOps data for territory decisions. RevOps teams do excellent work, but they’re often working with the same incomplete data sources that caused my Grand Rapids disaster.
Once you’ve validated your data quality and established a baseline of accurate account scoring, then you can implement AI tools to scale the process. But skip the validation step, and you’re just automating bad decisions at unprecedented speed.
The lesson from my near-firing experience still holds: great algorithms can’t save you from bad data. In the AI era, that lesson matters more than ever.