Am i scheduling follow-ups too soon? testing 2-day vs. 5-day gaps to avoid looking desperate

okay, so i’ve been running structured a/b tests on follow-up timing, and i’m seeing some weird results that don’t match what i expected or what i’ve read in a lot of outreach guides.

the conventional wisdom says “follow up after 3-5 days to stay top of mind without being annoying.” but i’ve been testing different intervals, and my data is suggesting something different. when i follow up after 2 days, i’m getting more opens (people are checking my message), but not necessarily more replies. when i follow up after 5-7 days, i’m getting fewer opens, but a higher percentage of those opens convert to actual replies.

my theory is that the 2-day follow-up catches them while they’re in their inbox, but they’re not ready to commit. the 5-7 day follow-up hits them when they’ve had time to think, or maybe they’re reconsidering and suddenly it makes sense.

right now i’m running about 500 outreach sequences per week, and i’ve isolated follow-up timing as the only variable (same message templates, same lead filter, same everything; just the timing changes). the sample feels big enough that the pattern looks real, but i’d like a sanity check.

my concern is: am i just seeing noise in the data, or is there actually something to this? has anyone else noticed that waiting longer actually improves reply quality, even if it means fewer overall opens? i’m trying to figure out if the 5-7 day gap is actually better for conversion, or if i should stick with the 3-day sweet spot that everyone talks about.
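one quick way to answer the “noise vs. signal” question is a two-proportion z-test on reply counts from the two timing arms. this is a generic sketch with made-up counts (28 replies out of 1,000 sends vs. 39 out of 1,000 are hypothetical numbers, not anyone’s actual data; swap in your own):

```python
import math

def two_proportion_ztest(replies_a, n_a, replies_b, n_b):
    """Two-sided two-proportion z-test (normal approximation)."""
    p_a, p_b = replies_a / n_a, replies_b / n_b
    p_pool = (replies_a + replies_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF, built from math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical: 2.8% vs 3.9% reply rate at 1,000 sends per arm
z, p = two_proportion_ztest(28, 1000, 39, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # at these made-up numbers, p ≈ 0.17: not yet significant
```

the takeaway: at 500 sequences a week split between two arms, a gap like 2.8% vs. 3.9% can take several weeks of data before it clears p < 0.05, so “the sample is big enough” is worth actually checking.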

this makes total sense from a psychology angle. the 2-day follow-up is interrupting their thought process. they see a new message, they mentally file it under “deal with later,” and then they’re back to work. you’re fighting their momentum.

the 5-7 day follow-up hits after they’ve moved on to other priorities. when your message lands, it’s in a new context, and if your hook is different from your first message (which it should be), it reads like a fresh conversation starter.

so the “better quality” replies at 5-7 days aren’t magic. you’re seeing replies from people who actually engaged with your first message, thought about it, and needed a reminder. that’s real interest.

i’d test a 3-day follow-up with a completely different angle vs. a 5-day follow-up with a reinforcement angle. that might solve the problem—keep the momentum with 3 days, but change the hook so it doesn’t feel like a repetition.

just to nerd out on this—are you measuring “opens” and “replies” through liseller’s tracking, or are you manually checking? because there’s a huge difference. liseller’s open tracking is based on read receipts (when they actually view your message), but it’s not 100% reliable because linkedin’s tracking has limitations.

if you’re relying on open data, it might be misleading. like, someone might have your message “open” in their notifications but never actually read it. whereas a reply is a concrete action—you know they engaged.

i’d honestly ignore “opens” and focus purely on reply rate for your test. that’s the real metric. opens are a noisy proxy for attention; replies are the actual outcome you care about.

from my recruiting experience, timing is super context-dependent. if someone just changed jobs or got promoted, a 2-day follow-up hits them in chaos mode—not gonna happen. a 5-7 day follow-up hits them when they’re a bit more settled and actually thinking about their career.

but if someone is actively job searching, a 2-3 day follow-up is perfect because they’re in “actively looking” mode and your message is fresh in their mind.

so the real question might be: do you have enough data broken down by prospect intent level? like, are prospects who already looked at your profile getting different results than cold prospects? that might explain the pattern you’re seeing.
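breaking results down by intent doesn’t need anything fancy; a tally of replies per segment is enough. a minimal sketch, where the segment names and the records are hypothetical:

```python
from collections import defaultdict

# hypothetical log: (segment, replied) per prospect
log = [
    ("viewed_profile", True), ("viewed_profile", False), ("viewed_profile", True),
    ("cold", False), ("cold", False), ("cold", True), ("cold", False),
]

def reply_rate_by_segment(records):
    """Tally replies and sends per segment, then return each segment's reply rate."""
    tally = defaultdict(lambda: [0, 0])  # segment -> [replies, sends]
    for segment, replied in records:
        tally[segment][0] += int(replied)
        tally[segment][1] += 1
    return {seg: replies / sends for seg, (replies, sends) in tally.items()}

print(reply_rate_by_segment(log))  # viewed_profile: 2/3, cold: 1/4
```

if the rates differ a lot between segments, the “5-7 days is better” pattern may really be a mix of two different populations.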

also, what’s your first message doing? if your first message is too salesy or asks for too much commitment, a 5-7 day follow-up has to soften the ask or change the angle. if your first message is just a question or social proof, 2-3 days is better. timing and message type are linked.

timing matters for account safety too. if you follow up after 2 days on every sequence, linkedin’s algorithm might flag that as bot-like behavior. follow-ups sent at the same interval across thousands of accounts look suspicious.

i’d recommend varying your follow-up timing—some 2 days, some 4, some 6—so it looks more organic. even better, tie it to their engagement. if they viewed your profile, follow up faster. if they ignored you, wait longer or skip the follow-up entirely.

this also solves your testing problem: natural variation makes for better data anyway.
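the “vary the interval, and tie it to engagement” idea is easy to implement. a sketch; the 2-3 day and 5-7 day bands are assumptions taken from the advice above, not a tested policy:

```python
import random

def followup_delay_days(viewed_profile: bool) -> int:
    """Jittered follow-up delay: faster for engaged prospects, slower for cold ones."""
    if viewed_profile:
        return random.randint(2, 3)   # engaged: follow up while it's warm
    return random.randint(5, 7)       # cold: wait longer, so the cadence doesn't look scripted
```

the jitter doubles as the “natural variation” mentioned above, so your timing data stops being one fixed interval per arm.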

dude, i tested this exact thing last quarter. went with the 5-day follow-up, and reply rate went from 2.8% to 3.9%. the lift was consistent enough that i switched my whole operation over to that timing.

my theory: first message is an interrupt. they’re busy, they see it, they don’t know you, they’re skeptical. 5 days later, they’ve thought about it (maybe subconsciously), or they’re just less defensive. when your follow-up lands, it’s not an interruption—it’s a nudge.

i’m now testing two follow-ups on the 5-day model—first message, then follow-up at day 5, then another at day 12. and that’s performing even better. but you have to nail the angle on each one so it doesn’t feel repetitive.
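the day-0 / day-5 / day-12 cadence described above is just a date calculation; a sketch (the 5- and 7-day gaps come straight from that schedule):

```python
from datetime import date, timedelta

def sequence_dates(start: date, gaps=(5, 7)) -> list[date]:
    """First message at `start`, then follow-ups 5 and 7 days later (day 5 and day 12)."""
    dates = [start]
    for gap in gaps:
        dates.append(dates[-1] + timedelta(days=gap))
    return dates

print(sequence_dates(date(2025, 3, 3)))  # day 0, day 5, day 12
```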

so in liseller, you can actually set up smart follow-up sequences that trigger based on activity, not just time. like, you can say “follow up 2 days after they view your profile, but wait 5 days if they don’t.” that might address exactly what you’re seeing.

try building a sequence with conditional timing: if they engage (profile view), follow up fast. if they don’t, wait longer. that should naturally optimize for the best response without you having to manually manage intervals.

have you explored conditional follow-ups in liseller yet, or are you just setting static intervals?

also, consider your industry. b2b software? 5-7 days makes sense—decision makers are in back-to-back meetings. recruitment? 2-3 days because people are actively scrolling. service-based? depends if they’re in-market or not. your rule might not be universal.