I’ve been experimenting with smart lead filtering lately, and I’m genuinely confused about whether I’m optimizing or sabotaging myself. I set up filters for company size, revenue, job title keywords, and a few other signals. My outreach volume dropped by about 40%, but my reply rate went up. Sounds good, right?
Here’s the problem: I can’t tell if that 40% I’m filtering out is actually dead weight or if I’m just being too aggressive with my criteria. Like, I’m filtering out companies under 50 employees, but what if some of those smaller companies are actually high-intent? Or what if I’m missing founder-types who don’t have typical titles?
I also wonder about the trade-off between quality and quantity. In B2B sales, volume matters, but so does conversion. Has anyone actually measured the ROI of aggressive filtering vs. casting a wider net and relying more on your message copy to do the heavy lifting? Because right now, I’m getting better metrics on paper, but my actual revenue pipeline feels thinner.
Here’s the uncomfortable truth: filtering is sexy because it feels like you’re being smart, but 90% of the time, people filter too aggressively. They’re trying to find the “perfect” prospect, when the reality is that there are way more imperfect prospects who still have budget and pain than there are perfect ones.
My take? Don’t filter on what they are; filter on what they do. Instead of filtering out companies under 50 employees, filter for those that have recent hiring spikes, product launches, or funding announcements. Those signals matter way more than headcount. And job titles? Forget exact matches. Look for roles that suggest decision-making power or buying authority. An “Operations Manager” at a 30-person company might be more valuable than a “VP of Sales” at a 500-person enterprise, depending on your product.
You need data to answer this, not intuition. Here’s what I do: I tag every lead I filter out with a reason—“under 50 employees”, “no LinkedIn presence”, “tech-only company”, whatever. Then quarterly, I sample 100 of those filtered-out leads, manually reach out to them with a different message, and track results separately.
I’ve found that usually, 15-25% of filtered-out leads actually respond and book calls. That tells me my filters are maybe 10-15% too aggressive. So I adjust. The key is having this feedback loop. Without it, you’re flying blind. Also, pro tip: use a CRM webhook to log which filters excluded each lead. Then build a dashboard showing filter performance. This takes 30 minutes to set up and saves you months of guessing.
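If it helps, the tagging-and-sampling loop above fits in a few lines of Python. The lead records and reason strings here are made-up placeholders; in practice they'd come from your CRM export:

```python
import random
from collections import Counter

# Made-up filtered-out leads, each tagged with the reason it was excluded.
filtered_out = [
    {"id": 1, "reason": "under 50 employees"},
    {"id": 2, "reason": "no LinkedIn presence"},
    {"id": 3, "reason": "under 50 employees"},
    # ...in practice, thousands exported from your CRM
]

def filter_report(leads):
    """How many leads did each filter reason exclude?"""
    return Counter(lead["reason"] for lead in leads)

def quarterly_sample(leads, n=100, seed=42):
    """Random sample of filtered-out leads to re-test manually with a different message."""
    rng = random.Random(seed)
    return rng.sample(leads, min(n, len(leads)))
```

Run `filter_report` on the full filtered-out pool for your dashboard, and `quarterly_sample` to pick the 100 leads you'll manually re-test and track separately.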
In recruiting, aggressive filtering usually means losing access to high-potential candidates who don’t fit the template. A developer might not have the exact tech stack you filtered for, but they could learn it in two weeks and be perfect for your team.
What I’ve learned is to use filtering as a sorting mechanism, not a gate. Your first pass filters should move prospects into tiers—Tier 1 (perfect fit), Tier 2 (good fit), Tier 3 (possible fit). Then you personalize differently for each tier: deep personalization for Tier 1, lighter personalization for Tier 2, conversational but less specific for Tier 3. This way, you’re not losing leads; you’re just optimizing your effort allocation. The volume stays roughly the same, but your ROI per message might actually improve.
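Here's a rough sketch of that tiering idea in Python. The criteria and point weights are placeholders, not a prescription; the point is that every prospect lands in a tier instead of getting dropped:

```python
def assign_tier(prospect):
    """Sort a prospect into a tier instead of discarding it.
    The criteria and weights are placeholders -- swap in your own ICP signals."""
    score = 0
    if prospect.get("title_match"):
        score += 2
    if prospect.get("size_match"):
        score += 1
    if prospect.get("industry_match"):
        score += 1
    if score >= 3:
        return "Tier 1"  # deep personalization
    if score == 2:
        return "Tier 2"  # lighter personalization
    return "Tier 3"      # conversational, less specific
```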
From a safety perspective, aggressive filtering is actually your friend. When you’re sending fewer, more targeted messages, you’re less likely to trigger LinkedIn’s spam detection. But here’s the flip side: if your filters are too aggressive, you end up sending so few messages that your account looks dormant, which can also be flagged as suspicious.
The sweet spot I’ve seen is: filter down to about 150-300 prospects per week that you actually want to reach out to. Then send to about 40-50 of them per day, spread across the day. This keeps your activity looking natural without overwhelming the system. If your current filter brings you down to less than 150 per week, you might want to loosen criteria slightly just to keep account activity healthy.
Here’s what I learned the hard way: I was filtering for companies with 50-500 employees and specific industries. Felt good, right? But I was leaving huge money on the table. I had a friend who basically ignored all my filters and just reached out to literally anyone who had the job title I was targeting, regardless of company size or industry. His reply rate was lower, but his deal volume was way higher because he was reaching out to like 500+ people per week vs. my 150.
Eventually, I met somewhere in the middle. I loosened my filters, but I got smarter with my messaging. Like, instead of one generic message, I wrote three different versions—one for enterprises, one for mid-market, one for small companies. Each one addressed their specific challenges. Suddenly, the volume worked in my favor again. So maybe the issue isn’t your filter; it’s that you need to message differently to people in different segments.
Great question! I actually want to point out something about how smart lead filtering works in LiSeller. When you set multiple filters, they compound—so filtering for company size and industry and job title significantly reduces your pool. But our system also lets you create scoring rules instead of hard filters. Like, a prospect gets points for matching criteria, and you can set a minimum score threshold.
This is way more flexible. A founder with the wrong job title but perfect company profile might score 75 points and still make the cut. A VP of Sales at the wrong company size but in your industry might score 70 points. Both get through, even though neither fits every criterion perfectly. This usually gives you 10-15% more volume while keeping quality high. Have you explored scoring rules, or are you using hard filters only?
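For anyone who wants to prototype this outside the tool, here's roughly what scoring rules boil down to. The point values and threshold are illustrative, not LiSeller's actual internals:

```python
# Illustrative scoring rules: each matched criterion adds points,
# and a minimum total score replaces a hard pass/fail filter.
RULES = [
    ("company_size_match", 30),
    ("industry_match", 25),
    ("title_match", 25),
    ("growth_signal", 20),  # hiring spike, funding, product launch
]
THRESHOLD = 60

def score(prospect):
    return sum(points for key, points in RULES if prospect.get(key))

def passes(prospect):
    return score(prospect) >= THRESHOLD

# A founder with the "wrong" title but a strong company profile still clears the bar.
founder = {"company_size_match": True, "industry_match": True, "growth_signal": True}
```

With these weights, the founder scores 75 and gets through, while a title-only match scores 25 and doesn't — exactly the flexibility a hard filter can't give you.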
I’ve worked with teams running both strategies, and here’s what I’ve seen: aggressive filtering works only if your messaging is genuinely compelling and your ICP is very tight. If either of those isn’t true, you’re leaving money on the table.
Here’s my recommendation: measure your “Cost Per Opportunity” (total time invested divided by opportunities created), not just reply rate. If your reply rate is 12% but you’re sending 60 messages, that’s 7 replies. If you loosen filters to send 200 messages at a 5% reply rate, that’s 10 replies. The second might be more valuable even with a lower percentage. Track this for two weeks with both strategies and compare. The data will tell you if your filtering is actually optimizing for revenue or just for vanity metrics.
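To make that concrete, here's the math as a tiny script. The minutes-per-message and reply-to-opportunity numbers are my assumptions; plug in your own:

```python
def cost_per_opportunity(messages, minutes_per_message, reply_rate, reply_to_opp_rate):
    """Total time invested divided by opportunities created.
    reply_to_opp_rate = fraction of replies that turn into real opportunities."""
    opportunities = messages * reply_rate * reply_to_opp_rate
    return (messages * minutes_per_message) / opportunities

# Tight filter: fewer, heavily personalized messages (assume 6 min each).
tight = cost_per_opportunity(60, 6, 0.12, 0.5)   # 100 minutes per opportunity
# Loose filter: more, lighter messages (assume 2 min each).
loose = cost_per_opportunity(200, 2, 0.05, 0.5)  # 80 minutes per opportunity
```

Under these assumptions the looser filter actually wins on cost per opportunity despite the worse reply rate, which is the whole point of measuring it.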
One final tactical point: I’d recommend testing filter combinations instead of assuming. Like, keep company size loose (10-500 employees) but tighten job title keywords. Or keep industry broad but filter for recent growth signals (hiring, funding, job postings). Run these combinations in parallel and track results separately. You’ll quickly figure out which filters actually predict high-intent prospects vs. which ones are just reducing noise for the sake of it. My bet is you’ll find two or three filters that matter, and the rest are just making you feel productive without adding real value.
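If you want to track those parallel arms without building a dashboard first, something this simple works. The arm names are just examples of the combinations described above:

```python
from collections import defaultdict

# One counter per filter-combination "arm"; arm names are just examples.
results = defaultdict(lambda: {"sent": 0, "replies": 0})

def log(arm, replied):
    results[arm]["sent"] += 1
    if replied:
        results[arm]["replies"] += 1

def reply_rate(arm):
    r = results[arm]
    return r["replies"] / r["sent"] if r["sent"] else 0.0

# Log each outreach under the filter combination that produced it.
log("loose size + tight titles", True)
log("loose size + tight titles", False)
log("broad industry + growth signals", True)
```

After a couple of weeks, comparing `reply_rate` per arm tells you which filters actually predict intent.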