How to decide what to build next for your SaaS (without guessing)

If you’re building a SaaS, the question “What should we build next?” never goes away. It just changes clothes:

  • “Should we do this feature request or fix the churn?”
  • “Are we missing a competitor checkbox?”
  • “Should we rebuild onboarding or add an integration?”
  • “Do we need a roadmap… or do roadmaps make us worse?”

And if you’re honest, the scary part isn’t that you don’t know what to build next. The scary part is that every option sounds reasonable and you don’t have a reliable way to choose.

This guide is a decision system you can actually use. It’s designed for the reality most founders live in: limited time, incomplete data, loud feedback, quiet churn, and a product that is never “done enough” to stop learning.

The real goal (what “what to build next” is secretly about)

Most teams treat “what to build next” like a backlog problem. It’s not. It’s a strategy + learning problem.

When you choose what to build next, you’re choosing:

  • Which customers you’re saying “yes” to (and which you’re saying “not now”)
  • Which risks you’re trying to reduce next (activation, retention, pricing, positioning, distribution, reliability)
  • Which narrative you’ll be able to sell (internally and externally)
  • Which trade-off you’ll live with (speed vs polish, breadth vs depth, flexibility vs opinionated UX)

So the right framing isn’t “what should we build?” It’s:

What is the most valuable thing we can learn or unlock in the next 2–6 weeks that changes our trajectory?

That’s the bar. Not “did we ship something.” Not “did users ask.” Not “is a competitor doing it.” Trajectory.

Why most prioritization fails (even when it looks structured)

Let’s call out the common failure modes, because they keep repeating in new tools and new jargon.

1) You optimize for loudness, not leverage

Support tickets, Slack DMs, and feature requests are high volume and emotionally persuasive. They feel like certainty. But they’re often:

  • about edge cases
  • about “my workflow” not “our market”
  • about symptoms, not causes

The worst part: you can ship what they asked for and still not fix the underlying problem.

2) You treat “users want it” as the same as “it will move the business”

Users can want something and still not churn if you don’t build it.

Or they can want something and never pay for it.

Or they can want it because they don’t understand your product.

Desire is signal. It’s not a decision.

3) You can’t separate product work from company work

“Build X” sometimes really means:

  • Fix positioning
  • Fix pricing
  • Fix onboarding messaging
  • Fix distribution
  • Fix expectations

If you don’t separate “build” from “change,” you keep building to solve non-building problems.

4) You don’t write down the reasoning, so you can’t learn

If your decisions live in your head (or in a chaotic thread), you can’t:

  • evaluate whether your assumptions were right
  • notice the same debate happening again
  • scale decisions beyond one person

And in 6 weeks you’ll say, “Why did we build this?” and you’ll have no answer.

A practical framework: Decide based on constraints, risks, and outcomes

Here’s the core model:

  1. Pick the constraint (what is currently limiting growth)
  2. Name the risk you’re reducing next (what could kill the business)
  3. Define the outcome (how you’ll know it worked)
  4. Choose the smallest bet that can change your mind

You can use this whether you have 10 users or 10,000.

Step 1: Identify your current constraint (choose one)

Most SaaS businesses have one constraint that matters most at a time. Pick the one you’re actually living.

  • Activation constraint: People sign up but don’t hit value fast enough.
  • Retention constraint: People get value once but don’t come back or don’t renew.
  • Expansion constraint: Customers stay but you can’t grow revenue per customer.
  • Positioning constraint: You don’t have a crisp story; the right people don’t self-select.
  • Distribution constraint: The product is good enough, but growth is bottlenecked by acquisition.
  • Reliability constraint: Bugs/performance issues are eroding trust and referrals.

If you try to solve two at once, you usually solve neither.

Quick diagnostic (no analytics required)

Answer these honestly:

  • Are signups happening but “Aha” is rare? → activation
  • Are people using it but leaving? → retention
  • Are people staying but capped on plan? → expansion or packaging
  • Are you getting wrong-fit users constantly? → positioning
  • Is the product good but no one knows? → distribution
  • Are you embarrassed by stability? → reliability

Pick the constraint. Write it down.

Step 2: Convert your constraint into a “risk you must reduce”

Constraints are abstract. Risks are actionable.

Examples:

  • Activation constraint → risk: “New users don’t understand the product in the first 3 minutes.”
  • Retention constraint → risk: “We’re not part of a weekly workflow.”
  • Distribution constraint → risk: “We can’t reliably create intent; we only get random spikes.”

Write the risk as a sentence that could be false. That matters because you need to test it.

Step 3: Decide what outcome you’re optimizing for (not a feature)

If you pick features, you’ll argue about features. If you pick outcomes, you’ll argue about truth.

Good outcomes are:

  • measurable (even if approximate)
  • time-bounded
  • linked to the constraint
  • realistic for your stage

Examples:

  • Activation: “Increase first-session ‘Aha’ from 18% to 28% in 4 weeks.”
  • Retention: “Increase week-4 active rate from 22% to 30%.”
  • Reliability: “Cut support tickets tagged ‘sync bug’ by 60%.”

If you don’t have instrumentation yet: choose a proxy you can observe (support tags, manual cohort checks, user interviews). Don’t wait for perfect analytics to start making better decisions.
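If you can export raw signup and activity dates, even the manual cohort check is a few lines. A rough sketch in TypeScript; the type names and the week-4 window are illustrative assumptions, not a standard:

```typescript
// Rough manual cohort check: share of a signup cohort with any
// activity during week 4 (days 21-27 after signup).
interface CohortUser {
  signedUpAt: Date;
  activityDates: Date[]; // one entry per day the user showed up
}

function week4ActiveRate(cohort: CohortUser[]): number {
  const DAY_MS = 86_400_000;
  const active = cohort.filter((u) =>
    u.activityDates.some((d) => {
      const days = (d.getTime() - u.signedUpAt.getTime()) / DAY_MS;
      return days >= 21 && days < 28;
    })
  );
  return cohort.length ? active.length / cohort.length : 0;
}
```

Crude, but it answers the retention question well enough to make a decision this month.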

Step 4: Build the smallest bet that can change your mind

This is where most roadmaps go wrong. They assume the “real” work is big.

Small bets aren’t “tiny features.” They’re decisive experiments:

  • A new onboarding path (not a full redesign)
  • One integration that unlocks a workflow (not 10 integrations)
  • A reliability sprint focused on one failure mode
  • A pricing page + packaging change (not a full reposition)
  • A single “killer” report/dashboard (not a whole analytics suite)

The question is:

What is the smallest thing we can do that would either meaningfully improve the outcome or give us strong evidence we’re wrong?

If it can’t change your mind, it’s not a bet. It’s busywork.

How to handle feature requests without becoming a feature factory

Feature requests are useful, but only if you treat them as evidence, not instructions.

When someone asks for a feature, capture these four things:

  1. Who is asking (segment, plan, job-to-be-done)
  2. What they were trying to do (the goal)
  3. What blocked them (the friction)
  4. What happens if they fail (the consequence)

If you only capture the requested feature, you lose the context that helps you design the right solution.
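If you log requests anywhere structured (a spreadsheet, a tool, a repo), those four things map to a small record. A sketch in TypeScript; the field names are assumptions, not a prescribed schema:

```typescript
// Illustrative shape for one captured feature request.
interface FeatureRequestCapture {
  requestedFeature: string; // what they literally asked for
  requester: {
    segment: string;        // e.g. "agency", "solo founder"
    plan: string;           // e.g. "free", "pro"
    jobToBeDone: string;    // the job they hired your product for
  };
  goal: string;             // what they were trying to do
  friction: string;         // what blocked them
  consequence: string;      // what happens if they fail
}
```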

The “three interpretations” trick

For any feature request, write three interpretations:

  • Interpretation A: They need the feature as described.
  • Interpretation B: They need a workflow outcome, and the feature is one possible way.
  • Interpretation C: They’re confused about your existing product, and the real fix is clarity.

Then decide which interpretation is most likely. You’ll often find the highest leverage path is B or C.

A prioritization score that doesn’t lie (as much)

If you want a scoring model, keep it brutally simple. Complex scoring creates fake precision.

Score each candidate initiative 1–5 on:

  • Constraint alignment: does it attack the current constraint?
  • Reach: how many of the right users will it affect?
  • Impact on outcome: if it works, does it meaningfully move the metric?
  • Confidence: do we have evidence (not vibes)?
  • Effort: how many focused days will it take (more days = higher score)?

Then compute:

Priority = (Alignment × Reach × Impact × Confidence) / Effort

This is not math worship. It’s a forcing function: it makes you talk about effort, confidence, and alignment explicitly.

Two rules:

  • If confidence is 1, you need discovery, not build.
  • If effort is huge, split it into smaller bets.
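The whole model, rules included, fits in a few lines. A minimal sketch in TypeScript, assuming all five inputs are scored 1–5 (names are illustrative):

```typescript
// Candidate initiative, every input scored 1-5.
interface Candidate {
  name: string;
  alignment: number;  // attacks the current constraint?
  reach: number;      // how many right-fit users it touches
  impact: number;     // if it works, does the metric move?
  confidence: number; // evidence, not vibes
  effort: number;     // focused days, bucketed into 1-5
}

function priority(c: Candidate): number {
  return (c.alignment * c.reach * c.impact * c.confidence) / c.effort;
}

// The two rules, encoded as guards rather than score tweaks:
function nextAction(c: Candidate): "build" | "discover" | "split" {
  if (c.confidence <= 1) return "discover"; // rule 1: no evidence yet
  if (c.effort >= 5) return "split";        // rule 2: break it into smaller bets
  return "build";
}

// Rank what's left:
const ranked = (candidates: Candidate[]) =>
  candidates
    .filter((c) => nextAction(c) === "build")
    .sort((a, b) => priority(b) - priority(a));
```

The point of writing it out isn't the number at the end; it's that the guards fire before you ever argue about the score.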

The “decision doc” (the fastest way to stop repeating debates)

Before you commit to building, write a short decision doc. One page. No fluff.

Decision doc template

  • Constraint: (activation/retention/etc.)
  • Risk: (what might be false)
  • Outcome: (metric/proxy + timeframe)
  • Bet: (what we’re building/changing)
  • Why this, why now: (evidence)
  • What we’re not doing: (explicit trade-off)
  • How we’ll evaluate: (what data/interviews)

This makes decisions legible. It also becomes your future learning archive.
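If you keep these docs in a repo or shared folder, the template also translates into a small data shape, which makes the archive searchable later. A sketch with illustrative names:

```typescript
// One decision doc, mirroring the template above. Names are illustrative.
type Constraint =
  | "activation" | "retention" | "expansion"
  | "positioning" | "distribution" | "reliability";

interface DecisionDoc {
  constraint: Constraint;
  risk: string;         // a sentence that could be false
  outcome: string;      // metric or proxy + timeframe
  bet: string;          // what we're building or changing
  whyNow: string[];     // the evidence
  notDoing: string[];   // explicit trade-offs
  evaluation: string[]; // data/interviews we'll check
  decidedOn: string;    // ISO date, so future-you can audit it
}
```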

Example: making a “what to build next” decision in real life

Let’s say you have a SaaS with 1,200 signups/month and low conversion to paid.

Symptoms:

  • People sign up, click around, then leave.
  • Support emails say: “Looks cool but not sure how to use it.”
  • Paid users are happy, but new users don’t “get it.”

Decision:

  • Constraint: activation
  • Risk: users don’t reach value fast enough
  • Outcome: increase first-session “Aha” from 18% → 28% in 4 weeks
  • Bet: a guided first-run flow + one opinionated default template (not a full redesign)
  • Not doing: integrations sprint (tempting but not the bottleneck)
  • Evaluate: event tracking + 8 user calls + cohort check

Now you have an explicit hypothesis and a measurable target. Even if you fail, you learn something specific.
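In the doc shape sketched earlier, this decision would read like so (values pulled straight from the example above; the date is a placeholder):

```typescript
// Uses the DecisionDoc type sketched in the previous section.
const activationBet: DecisionDoc = {
  constraint: "activation",
  risk: "New users don't reach value fast enough in their first session.",
  outcome: "First-session 'Aha' from 18% to 28% in 4 weeks.",
  bet: "Guided first-run flow + one opinionated default template (not a full redesign).",
  whyNow: [
    "Support emails: 'Looks cool but not sure how to use it.'",
    "Paid users are happy; new users don't 'get it.'",
  ],
  notDoing: ["Integrations sprint (tempting, but not the bottleneck)"],
  evaluation: ["Event tracking", "8 user calls", "Cohort check"],
  decidedOn: "2025-01-01", // placeholder
};
```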

The part nobody wants to hear: sometimes the right “next build” is not building

Sometimes the highest leverage move is:

  • removing options
  • changing pricing/packaging
  • rewriting onboarding copy
  • narrowing your ICP
  • fixing a reliability issue that makes people distrust you

If your current constraint is trust, shipping more features can make it worse.

If you’re unsure, ask:

Will this change increase clarity, reduce friction, or increase trust for the users we want?

If the answer is no, it’s probably not the right next build.

FAQ

“What if users are asking for lots of different things?”

That usually means one of:

  • your ICP is too broad (wrong-fit users)
  • your product promise is unclear
  • you’re missing a core workflow step that makes the product feel incomplete

Look for patterns in outcomes, not in features. Many different requests can point to the same underlying job.

“What if competitors have a feature we don’t?”

Ask two questions:

  1. Is that feature actually driving their growth, or is it table stakes?
  2. Would building it improve our constraint right now?

Competitor parity is a tax. Pay it only when it reduces churn, removes sales friction, or unlocks distribution.

“What if we have no data?”

You always have data. It’s just messy:

  • sales calls
  • support tickets
  • churn reasons
  • activation confusion
  • win/loss notes
  • user interviews

Write it down, categorize it, and make decisions from patterns—not from the last conversation you had.

A simple way to make this easier (and more consistent)

The hard part isn’t having ideas. It’s keeping all the raw inputs—captures, feedback, quotes, churn reasons, “we should…” thoughts—organized enough that patterns emerge.

That’s the problem Caret is built for: it’s an AI product brain that turns messy captures into structured insights, highlights recurring opportunities, and helps you decide what to build next (with the reasoning attached).

If you want, start by capturing everything for a week—then use it to generate a clean set of opportunities and pick the next smallest bet that actually moves your constraint.