MVP  •  Feb 24, 2026  •  24 min read

How I Launch MVPs That Actually Ship — And Actually Work

By C B Mishra  •  Strategic Technical Project Manager  •  Multiple 0-to-1 SaaS, Mobile & Enterprise Launches  •  cbmishra.com

Most products never ship. Not because they run out of money, not because the market doesn’t want them, and not because the team isn’t talented enough. They don’t ship because no one ever defined what ‘done’ meant clearly enough to stop building and start launching.

I have led 0-to-1 product builds across SaaS platforms, mobile applications, enterprise tools, health-tech, fintech, edtech, and IoT dashboards. In almost every engagement, the founders or clients arrived with one of two problems: either they had a vision with no execution structure, or they had an execution structure with no strategic clarity about what they were actually building and why.

This blog is the complete framework I use to take a product from a conversation to a live, validated, user-facing reality — with a methodology that scales from a 6-week MVP to a 6-month enterprise platform launch. Every step is drawn from projects I have personally run. Every mistake listed is one I have personally made or watched someone else make on my watch.

📌   What this playbook covers: The complete 0-to-1 lifecycle — from idea validation and naming to BRD, MVP scoping, sprint delivery, go-to-market, and post-launch iteration. This is not a startup theory blog. It is a practitioner’s delivery guide with templates, decision frameworks, and real war stories from the field.

1. The Fundamental Mistake: Confusing Activity With Progress

The graveyard of 0-to-1 products is littered with Figma files, Notion docs, and half-built Jira boards belonging to products that were in ‘development’ for 18 months and never launched. The teams working on them weren’t lazy. They were busy. They were designing, debating, refining, pivoting, re-scoping, and building — constantly in motion, rarely making decisions, never converging on a shippable product.

This is the fundamental 0-to-1 trap: mistaking activity for progress. The antidote is a framework that forces decision-making at every stage — that makes it impossible to move forward without answering the questions that need to be answered before the next phase begins.

The Three Questions That Determine Whether a 0-to-1 Product Ships

  1. What is the one problem this product solves, for whom, and how do we know that problem is real? (Validation question — must be answered before a single wireframe is drawn)
  2. What is the minimum set of features that proves this solution works for the target user? (Scope question — must be answered before a single sprint is planned)
  3. What does success look like at 30, 60, and 90 days post-launch, and who is accountable for each metric? (Accountability question — must be answered before go-live)

If a founding team or client cannot answer all three questions clearly, the product is not ready to enter development. Starting development before these questions are answered is not ‘moving fast.’ It is building the wrong thing efficiently.

📖   Real example: A client came to me in 2022 with a CRM product idea and a ₹30L budget for development. They had a 40-page feature list and a 6-month timeline. In our first 2-hour session, I asked the three questions above. They could not answer Question 1 specifically — they had three different target customers in mind, each with fundamentally different workflows. We spent 3 weeks on validation before touching a wireframe. Those 3 weeks eliminated 60% of the feature list, changed the target customer entirely, and reduced the build timeline from 6 months to 11 weeks. The product launched, got paying customers in Month 2, and is still running today.

2. Phase 0 — Idea Validation: The Work That Happens Before the Work

Phase 0 is the most skipped and the most valuable phase in any 0-to-1 product journey. It is the period before a single design is drawn, a single API is chosen, or a single developer is hired. Its purpose is simple and non-negotiable: prove that the problem is real, that users will pay to have it solved, and that your proposed solution is the right one.

The Validation Stack I Use

Step 1: Problem Interview (not Product Interview)

The worst validation research asks users about the product. The best validation research asks users about the problem. ‘Would you use an app that does X?’ is a product question — and users will say yes to almost any product question because they don’t want to disappoint you. ‘Walk me through the last time you had to deal with X. How did you handle it? What did you wish was different?’ — that is a problem question, and it reveals real pain.

I conduct a minimum of 12–15 problem interviews before signing off on a product concept. Not surveys. Not polls. 30-minute conversations, recorded with permission, with people who match the target profile. I look for three signals: emotional intensity (do they light up or grimace when describing the problem?), current workarounds (are they solving it anyway, badly?), and willingness to pay (have they already paid something — money or time — to solve it?).

Step 2: Competitive Landscape Map

If no one else is trying to solve this problem, one of two things is true: either you’ve found a genuine market gap, or the problem isn’t real enough for anyone to have tried. Both are possible. You need to know which one you’re in. I map every direct and indirect competitor — not to copy them, but to understand the solution space, identify what users hate about existing options, and find the wedge where our product can win.

Step 3: The Riskiest Assumption Test

Every product idea rests on a set of assumptions. Some are safe (users have smartphones). Some are risky (users will share financial data with a third-party app). The Riskiest Assumption Test identifies the single assumption that, if wrong, makes the entire product concept invalid — and designs a cheap, fast experiment to test it before committing to development.

For a B2B SaaS product I was scoping in 2022, the riskiest assumption was that operations managers — our target users — would have budget authority to purchase an ₹8,000/month tool without a procurement cycle. I tested this with three conversations and a mock pricing page. Result: they didn’t. Procurement required CFO sign-off above ₹5,000/month. We adjusted the pricing model to ₹4,500/month with a premium tier, and changed the sales strategy to target department managers instead of individual operations managers. Zero code written. Maximum learning.

Step 4: The ‘Fake Door’ or Landing Page Test

Before building, build a description of what you’re going to build. A single landing page with a clear value proposition, a target customer statement, and a ‘Join the Waitlist’ or ‘Book a Demo’ CTA. Run ₹5,000–₹15,000 in targeted ads to it. Measure the conversion rate. If 3–5% of visitors take the CTA action, you have signal. If 0.5% do, you need to revisit the value proposition before you build anything.
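The thresholds above are rough rules of thumb, but the arithmetic is worth making explicit. A minimal sketch (thresholds and traffic numbers are illustrative, not fixed rules — adjust per channel and CTA):

```python
# Classify a fake-door / landing-page test against the rough thresholds
# in the text (~3-5% CTA conversion = signal, well under 1% = weak).

def classify_fake_door(visitors: int, cta_actions: int,
                       strong: float = 0.03, weak: float = 0.01) -> tuple[float, str]:
    """Return (conversion_rate, verdict) for a landing-page test."""
    if visitors <= 0:
        raise ValueError("need at least one visitor")
    rate = cta_actions / visitors
    if rate >= strong:
        verdict = "signal: proceed to scoping"
    elif rate >= weak:
        verdict = "ambiguous: iterate on value proposition, re-test"
    else:
        verdict = "weak: revisit positioning before building"
    return rate, verdict

rate, verdict = classify_fake_door(visitors=1200, cta_actions=54)
print(f"{rate:.1%} -> {verdict}")  # 4.5% -> signal: proceed to scoping
```

The point is not the script — it is that the pass/fail bar is written down before the ads run, so enthusiasm cannot move the goalposts afterwards.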

🚨   The most expensive validation mistake: substituting team enthusiasm for user research. I have been in rooms where every person on the founding team was completely convinced the product was needed — and every single one of them matched the target customer profile, which created a massive confirmation bias bubble. They built for themselves, not for the market. Validate with strangers. Your team’s excitement is not market validation.

3. Phase 1 — Brand, Naming, and Product Identity: The Decisions That Follow You Forever

Brand and naming feel like marketing problems. They are actually architecture problems. The name of your product, the positioning statement, and the visual identity make structural decisions about your target market, your pricing tier, and your competitive positioning that will take 18 months to change if you get them wrong.

The Naming Framework I Use

A good product name clears six tests. I run every candidate name through this list before recommending it to a client:

| Test | What It Checks | How to Test |
|---|---|---|
| Pronounceability | Can target users say it correctly on first read? | Say it aloud to 5 people. Count mispronunciations. |
| Spell-ability | Can users type it correctly after hearing it once? | Say it over the phone. Ask them to type it. |
| Trademark availability | Does it conflict with existing trademarks in your class? | IP India search + USPTO if international scope. |
| Domain availability | Is the .com (or .in) available, or a credible variant? | Instant check. No .com = credibility friction in B2B. |
| Search neutrality | Does a Google search for the name return your product? | Search the name. If 10 other things show up, rethink. |
| Category clarity | Does the name signal what the product does or who it’s for? | Ask 5 strangers what they think the product does. |

Positioning: The One-Sentence Test

Every product needs a positioning statement that passes the one-sentence test: ‘[Product name] is the [category] for [target customer] who needs [core value proposition] — unlike [key competitor] which [key limitation].’

This isn’t marketing copy. It is a forcing function for strategic clarity. If you cannot fill in every blank precisely, you do not have a clear enough product definition to start building. I have ended Phase 0 sessions early because a founding team could not agree on who the target customer was. That disagreement, resolved in a 2-hour workshop, saves 4 months of building the wrong product.

💡   Positioning clarity test: Put your one-sentence positioning in front of 5 people who match your target customer profile. Ask: ‘Would this be useful to you? Would you pay for it?’ If fewer than 3 of the 5 say yes with conviction, your positioning needs work before your product does.

4. Phase 2 — The MVP Scoping Session: The Most Important Meeting in a Product’s Life

The MVP scoping session is the meeting where a product idea becomes a buildable specification. It typically takes 4–6 hours across one or two sessions. Done well, it is the most valuable 6 hours in the entire product lifecycle. Done badly, it produces a scope so large and undefined that the product will never ship.

What an MVP Actually Is (and Isn’t)

The word ‘minimum’ in Minimum Viable Product has been so severely abused that it no longer means anything to most founders. I use a specific definition:

An MVP is the smallest set of features that (a) solves the core problem for the target user, (b) can be built and deployed within the committed timeline and budget, and (c) generates enough user feedback to make the next product decision with confidence.

An MVP is not a prototype. It is not a demo. It is not a ‘beta with all the features but rough edges.’ It is a real product that real users can use to solve a real problem — with nothing else in it.

| In an MVP ✅ | NOT in an MVP ❌ |
|---|---|
| The core workflow that solves the primary problem | Secondary workflows (‘nice to have’ features) |
| User authentication and basic access control | Advanced permission systems with role hierarchies |
| The minimum data model to support the core workflow | Reporting, analytics, and data export |
| Basic error states and failure messages | Polished animations and micro-interactions |
| One payment method if monetisation is core | Multiple payment gateways and currencies |
| Manual admin processes where automation isn’t critical | Automation of every background process |
| Mobile-responsive web OR native mobile — not both | Simultaneous web + iOS + Android launch |

The MoSCoW Scoping Method — How I Run the Session

I run every MVP scoping session using the MoSCoW prioritisation method: Must Have, Should Have, Could Have, Won’t Have (this time). But I run it with a strict constraint that most teams ignore: the Must Have list must fit within 60% of the available development budget. The remaining 40% is held in reserve for the Should Have items that prove necessary, the unexpected technical complexity, and the iteration cycle after first user feedback.

My rules for the Must Have list:

  • Each item must be independently testable. If you can’t write a pass/fail test case for it, it’s not specific enough.
  • Each item must be traceable to a validated user pain point. If it didn’t come from user research, it goes to Should Have at minimum.
  • The entire Must Have list must be deliverable by a team of 3–5 engineers in the committed timeline. If it isn’t, you need to cut — not expand the team.
  • No item can be ‘the feature that makes us different.’ The differentiator belongs in V1.1, not the MVP. Differentiation requires user feedback to validate. The MVP generates that feedback.
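The 60% budget gate is simple enough to encode as a check the team runs at the end of the scoping session. A sketch, with hypothetical feature names and estimates (any consistent unit — person-days or currency — works):

```python
# Illustrative gate from the MoSCoW session above: the Must Have list
# must fit within 60% of the total development budget; the rest is held
# in reserve for Should Haves, surprises, and post-feedback iteration.

def check_must_have_budget(estimates: dict[str, float],
                           total_budget: float,
                           must_have_cap: float = 0.60) -> dict:
    cap = total_budget * must_have_cap
    committed = sum(estimates.values())
    return {
        "cap": cap,
        "committed": committed,
        "reserve": total_budget - committed,
        "fits": committed <= cap,
    }

must_haves = {  # hypothetical estimates, in person-days
    "auth + access control": 8,
    "core workflow": 25,
    "minimum data model": 10,
    "basic error states": 6,
}
result = check_must_have_budget(must_haves, total_budget=100)
print(result["committed"], result["fits"])  # 49 True
```

If `fits` comes back false, the answer is to cut items from Must Have — not to raise the cap.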
📖   The scope negotiation I have every time: The founding team presents a Must Have list with 47 items. I ask them to mark the items that, if removed, would make the product unlaunchable. They struggle to identify more than 8. We build the 8. We launch in 10 weeks. Users tell us 3 of the remaining 39 items actually matter. We build those 3 in the next sprint cycle. 36 items are permanently removed from the backlog because users never asked for them. This happens on almost every project.

5. Phase 3 — The BRD and Technical Architecture Sprint

Once MVP scope is locked, the next two weeks are the most document-intensive of the entire project. This is where I produce the BRD and technical architecture spec in parallel — because the technical architecture decisions must be made before sprint planning begins, and they must be made in the context of the full BRD, not the features in isolation.

The Technical Architecture Decision Framework for 0-to-1 Products

For a 0-to-1 product, every technical architecture decision should be evaluated against one primary question: does this choice optimise for speed to first user feedback, or does it optimise for long-term scale? For an MVP, speed wins — with explicit, documented exceptions for decisions that would be catastrophically expensive to reverse (database schema, auth architecture, API contract design).

| Architecture Decision | MVP-Optimised Choice | Scale-Optimised Choice (for later) |
|---|---|---|
| Backend framework | Node.js / Django — fast to build, large talent pool | Microservices — after you know your domain boundaries |
| Database | PostgreSQL — relational, flexible schema evolution | Sharding / NoSQL — after you know your query patterns |
| Mobile platform | Flutter — single codebase, fast iteration | Native iOS + Android — after you know platform-specific needs |
| Authentication | Firebase Auth or Auth0 — no-build, production-ready | Custom auth — after you have specific compliance requirements |
| File storage | AWS S3 or GCP Cloud Storage — commodity, scalable | CDN + edge caching — after you know traffic geography |
| Background jobs | Simple task queue (Bull / Celery) — single queue | Message broker (Kafka / RabbitMQ) — after you need pub/sub |
| Payments | Razorpay or Stripe — fastest integration, broad coverage | Custom payment orchestration — after you have volume leverage |
| Deployment | AWS Lightsail or Railway — simple, managed, cheap | Kubernetes / ECS — after you need auto-scaling |

The dangerous middle ground is choosing scale-optimised architecture for an MVP timeline. I have seen founding teams spend 6 weeks setting up Kubernetes for an application that had 12 users. The infrastructure was technically excellent. The product never got to the 12th user because the team ran out of budget before the first user could provide feedback.

The Stack Decision I Push Back On Most: ‘We Need Microservices’

No, you don’t. Not yet. Microservices solve a problem you don’t have at 0-to-1 — the problem of multiple large teams needing to deploy independently at high frequency without affecting each other. A monolith built with clean domain separation is the right architecture for every MVP I have ever delivered. It can be decomposed into services later, once you know where the domain boundaries actually are — which you don’t know before users tell you.

The teams that insist on microservices from Day 1 are optimising for a future complexity they haven’t earned yet. Start simple. Scale deliberately. The refactoring cost of moving from a clean monolith to services is far lower than the development tax of building microservices prematurely.
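What ‘clean domain separation’ means in practice is that each domain hides behind a small interface, so it can be extracted into a service later without its callers changing. A toy illustration (not from any specific project — module and class names are hypothetical):

```python
# A modular-monolith sketch: domains talk to each other only through
# narrow service interfaces, injected as dependencies. Extracting a
# domain into its own service later means swapping the implementation,
# not rewriting the callers.

class BillingService:
    """Billing domain: the only entry point other domains may call."""
    def __init__(self):
        self._invoices: dict[str, float] = {}  # in-memory stand-in for a DB table

    def create_invoice(self, user_id: str, amount: float) -> str:
        invoice_id = f"inv-{len(self._invoices) + 1}"
        self._invoices[invoice_id] = amount
        return invoice_id

class OnboardingService:
    """Onboarding domain: depends on billing only via its public interface."""
    def __init__(self, billing: BillingService):
        self._billing = billing  # injected, so it could later be an HTTP client

    def activate_trial(self, user_id: str) -> str:
        # A zero-amount invoice marks trial start; replacing BillingService
        # with a remote client later would leave this method unchanged.
        return self._billing.create_invoice(user_id, amount=0.0)

billing = BillingService()
onboarding = OnboardingService(billing)
print(onboarding.activate_trial("user-42"))  # inv-1
```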

6. Phase 4 — Sprint Delivery: The Engine Room of 0-to-1

With BRD locked, architecture decided, and team assembled, the sprint delivery phase begins. For a 0-to-1 product, I run a modified Agile framework — shorter sprints, more flexible ceremonies, and a bias toward shipped features over process compliance.

My 0-to-1 Sprint Structure

| Weeks | Sprint | Focus | Duration |
|---|---|---|---|
| 1–2 | Foundation Sprint | Project setup, auth, DB schema, CI/CD pipeline, core data models, empty screens wired up. Nothing user-facing ships, but the skeleton is complete. | 2 weeks |
| 3–6 | Core Feature Sprints | Must Have features built in priority order. Each sprint ships a testable slice of the product. Internal demos at the end of each sprint — no hidden progress. | 2 weeks each |
| 7–8 | Integration + Error State Sprint | Third-party integrations completed, all error states implemented, loading states and empty states designed. The product must handle failure gracefully before beta users see it. | 2 weeks |
| 9–10 | Beta + Feedback Sprint | Closed beta with 10–20 target users. Structured feedback sessions. Critical bug fixes. No new features — only stability and critical UX friction removal. | 2 weeks |
| 11 | Go-Live Prep | Final QA sweep, App Store submission (if mobile), production infrastructure validation, end-to-end onboarding flow test, team go-live briefing. | 1 week |
| 12+ | Hypercare + Iteration | Active support window, user behaviour analysis, prioritised backlog for V1.1 based on real user data. | Ongoing |

The Rules I Enforce During 0-to-1 Sprints

Rule 1: No scope additions during Weeks 1–8

New feature ideas go into a ‘parking lot’ Notion doc. They are reviewed at the start of the Beta Sprint (Week 9) and only the ones validated by beta user feedback enter the backlog. Every other idea waits for V1.1. The fastest way to miss a launch date is to add features after the first sprint. Every addition creates downstream dependencies. Three ‘small’ additions in three consecutive sprints can push a launch by 6 weeks.

Rule 2: Error states and empty states are not optional

Most developers build the happy path first and treat error/empty states as polish. I treat them as core functionality — scheduled in the Integration Sprint, not deferred to ‘later.’ A product that shows a blank white screen when an API fails, or an empty list with no guidance, is not ready for users. Users judge a product’s quality most harshly when something goes wrong. A graceful error message builds trust. A white screen destroys it.

Rule 3: Internal demos every sprint, no exceptions

At the end of every sprint, the team demos what shipped to the full stakeholder group — founders, investors if present, key internal users. Not a status update. A working demo. This serves three purposes: it forces completeness (you can’t demo broken functionality), it surfaces stakeholder concerns early, and it builds momentum. Stakeholders who see progress every two weeks stay engaged. Stakeholders who get status emails for three months get anxious and start micromanaging.

Rule 4: Definition of Done includes QA sign-off, not just developer sign-off

A feature is not done when the developer says it is done. It is done when a QA pass on the feature’s acceptance criteria returns zero critical or high-severity bugs. Developer-only ‘done’ definitions produce products that work perfectly on the developer’s machine and break immediately in the hands of a non-technical user. Budget for QA from Sprint 1. Do not add it at the end.

7. Phase 5 — The Beta Programme: Your Most Valuable Two Weeks

The closed beta is not a formality. It is the highest-information-density phase of the entire product development lifecycle. Twenty structured user sessions in two weeks will tell you more about your product than six months of internal debate. But only if you run it correctly.

Who to Recruit for Beta

Beta users must be recruited from your validated target customer profile — not from your personal network, your investors’ portfolio companies, or your team’s friends and family. Friendly beta users are the least useful beta users. They want you to succeed. They will overlook friction. They will not tell you the product is confusing because they don’t want to hurt your feelings.

Recruit beta users who have the problem your product solves and have no personal relationship with your team. Offer them something valuable — early access, a discount, a gift card, genuine status as a founding user. But recruit them for their fit with the problem, not their likelihood to be positive.

The Beta Session Structure I Use

  • 5-minute intro: What the product is, what you’re testing, that their honest feedback is the only valuable feedback
  • 15-minute unmoderated task completion: Give them 3 core tasks. Don’t help. Watch where they get stuck, what they misread, where they click and find nothing.
  • 10-minute structured interview: What did you expect that wasn’t there? What was confusing? What was the one thing you most wanted to do that you couldn’t?
  • 5-minute rating: NPS score and one open text: ‘What would make you recommend this to a colleague?’

The unmoderated task completion is the most valuable part. Where users get stuck is more informative than anything they say. Users will tell you the product is ‘fine’ while visibly struggling to complete a basic task. Watch the behaviour, not just the words.
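The NPS number from the 5-minute rating step is simple arithmetic worth getting right: promoters (scores of 9–10) minus detractors (0–6), as a percentage of respondents. A sketch with made-up beta scores:

```python
# Standard NPS calculation for the beta session's rating step:
# NPS = %promoters (9-10) - %detractors (0-6), on a -100..+100 scale.

def nps(scores: list[int]) -> int:
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

beta_scores = [10, 9, 8, 7, 9, 6, 10, 4, 8, 9]  # hypothetical 10-person beta
print(nps(beta_scores))  # 30
```

With a 10–20 person beta the score is noisy; treat it as a directional signal alongside the open-text answers, not a precise benchmark.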

What to Fix Before Launch vs What to Defer

| Feedback Type | Fix Before Launch ✅ | Defer to V1.1 ⏭️ |
|---|---|---|
| Core workflow blocked | Critical bug or UX failure blocking task completion | Feature works but needs polish |
| Confusing label / copy | Rename if 3+ users misread the same element | Single user preference on wording |
| Missing error message | Any unhandled failure showing blank or crash | Improved error message copy |
| Performance issue | Load time > 5 seconds on any core screen | Minor optimisation on secondary screens |
| Missing feature request | Feature that 8+ of 20 users asked for unprompted | Feature that 1–2 users mentioned once |
| Visual inconsistency | Elements that look broken or misaligned | Design preferences and aesthetic opinions |

8. Phase 6 — Go-to-Market: The Launch Is Not the End, It’s the Beginning

Most 0-to-1 product teams treat go-to-market as an afterthought — something to figure out after the product is built. This is the second most common reason good products fail to gain traction. (The first is building the wrong product. The second is building the right product and not getting it in front of the right people at the right moment.)

The GTM Framework I Use for 0-to-1 Products

Step 1: Define the GTM Motion Before Launch Day

Is this product sold top-down (enterprise sales, account executives, procurement cycles) or bottom-up (product-led growth, self-service signup, viral spread)? The answer determines your entire GTM infrastructure — the pricing model, the onboarding flow, the sales enablement collateral, the customer success process, and the success metrics.

A B2B SaaS with a ₹40,000/month enterprise tier needs a sales-led motion: demos, proposals, legal reviews, pilots. A B2C app priced at ₹299/month needs a product-led motion: frictionless signup, instant value, referral mechanics. Building a sales-led product with product-led distribution (or vice versa) is a common, expensive mismatch.

Step 2: The Launch Cohort Strategy

I always recommend launching to a defined cohort of 50–100 users before opening to general availability. Not a soft launch. Not a quiet launch. A deliberate, structured cohort launch where every user is personally onboarded, every experience is tracked, and every churn event is investigated.

The cohort launch gives you: a controlled feedback loop before scale, a group of engaged early users who become case studies and referrers, and the ability to fix critical issues before they affect thousands of users. The cohort is your most valuable asset in the first 90 days — more valuable than any marketing campaign.

Step 3: The Activation Metric, Not the Acquisition Metric

Most early-stage products measure acquisition (sign-ups, downloads, installs). The metric that actually predicts product success is activation — the percentage of users who reach the ‘aha moment’ where the product’s value becomes undeniable. For a project management tool, activation might be ‘created their first project and invited a team member.’ For a fintech app, it might be ‘completed KYC and made a first transaction.’

Define your activation event before launch. Instrument it in your analytics. Optimise for it obsessively in the first 90 days. A product with 1,000 sign-ups and 12% activation is in far better shape than a product with 5,000 sign-ups and 2% activation.
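Instrumenting the activation event can be as simple as a query over the raw event log. A hedged sketch, using the project-management-tool example above (the event names `project_created` and `member_invited` are hypothetical — substitute your own):

```python
# Measuring activation rate from a raw (user_id, event_name) log.
# A user counts as activated once they have fired every event in the
# activation set, which is defined BEFORE launch, not after.

from collections import defaultdict

ACTIVATION_EVENTS = {"project_created", "member_invited"}  # define pre-launch

def activation_rate(events: list[tuple[str, str]]) -> float:
    seen: defaultdict[str, set] = defaultdict(set)
    users = set()
    for user_id, name in events:
        users.add(user_id)
        if name in ACTIVATION_EVENTS:
            seen[user_id].add(name)
    activated = sum(1 for u in users if seen[u] >= ACTIVATION_EVENTS)
    return activated / len(users) if users else 0.0

log = [  # made-up event stream
    ("u1", "signup"), ("u1", "project_created"), ("u1", "member_invited"),
    ("u2", "signup"), ("u2", "project_created"),
    ("u3", "signup"),
]
print(f"{activation_rate(log):.0%}")  # 33%
```

In production this would run against your analytics warehouse rather than an in-memory list, but the definition — a fixed event set, checked per user — is the part that matters.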

Step 4: The 30-60-90 Day Success Framework

| Timeframe | Primary Focus | Key Metric |
|---|---|---|
| Day 1–30 | Activation and retention of launch cohort | Activation rate > 40%; Day-7 retention > 25% |
| Day 31–60 | Identify and fix top 3 friction points from cohort | Support ticket volume decreasing week-on-week |
| Day 61–90 | First expansion: cohort referrals + channel test | NPS > 30; at least 1 referral acquisition per 10 users |
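The Day-7 retention figure in the first row is another metric worth defining precisely before launch. One simple reading — the share of the cohort with any activity at or beyond day 7 after signup — can be sketched as follows (dates and user IDs are invented; stricter definitions, such as activity on day 7 exactly, are also common):

```python
# Day-N retention for a launch cohort: fraction of signups with any
# recorded activity N or more days after their signup date.

from datetime import date

def day_n_retention(signups: dict[str, date],
                    activity: list[tuple[str, date]], n: int = 7) -> float:
    retained = {
        user for user, day in activity
        if user in signups and (day - signups[user]).days >= n
    }
    return len(retained) / len(signups) if signups else 0.0

cohort = {"u1": date(2026, 3, 1), "u2": date(2026, 3, 1),
          "u3": date(2026, 3, 2), "u4": date(2026, 3, 2)}
events = [("u1", date(2026, 3, 9)), ("u2", date(2026, 3, 3)),
          ("u3", date(2026, 3, 10))]
print(f"{day_n_retention(cohort, events):.0%}")  # 50%
```

Whichever definition you pick, pick it once, write it down, and use the same one for every cohort so the trend line is comparable.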

9. The 0-to-1 Failures I’ve Watched (And What They Actually Cost)

In every consulting engagement and enterprise delivery project I’ve run, I have had a front-row seat to the ways 0-to-1 products fail. Here are the most common failure modes — with the real cost attached.

Failure Mode 1: The Pivot That Never Ends

A founding team that pivots before getting to 100 users is almost always pivoting away from discomfort, not toward insight. Pivoting is only valid data if you have user feedback that clearly shows the current direction is wrong. Pivoting because you’re anxious, because a competitor launched something different, or because an advisor suggested a different market — those are not data-driven pivots. They are expensive distractions.

Real cost: One client I worked with pivoted three times in eight months. Total development budget consumed: ₹45L. Users served: 0. Product launched: 0. The fourth direction is what they should have built in Month 1 — which was clearly implied by the original validation research that they had conducted but not listened to.

Failure Mode 2: Launching to an Audience of Zero

A technically perfect product with no distribution strategy is a tree falling in an empty forest. I have seen teams spend 8 months building and 2 weeks on go-to-market — and then wonder why no one signed up. Distribution is not a post-launch problem. It is a pre-build decision. Who are the first 100 users, where do they live online, how will you reach them, and what will make them tell 10 friends?

Failure Mode 3: The Feature Factory After Launch

After launch, the temptation to build more features is overwhelming — especially when users request them. But adding features before understanding why users churn is the most common way to turn a recoverable product into an irretrievable one. More features increase complexity, slow onboarding, and fragment the user experience. The right response to post-launch user requests is to understand the underlying problem, not to immediately build the stated solution.

Failure Mode 4: Premature Scaling

Scaling infrastructure, team, and marketing spend before achieving repeatable, measurable product-market fit is one of the most destructive things a founding team can do. Product-market fit is a feeling AND a number. The feeling: you’re struggling to keep up with demand. The number: for B2B SaaS, approximately 40% of users say they would be ‘very disappointed’ if the product went away (the Sean Ellis test). Until both are true, every rupee spent on scaling is a rupee spent before the product is ready to be scaled.
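The ‘number’ half of that definition — the Sean Ellis test — is a single survey question and a single percentage. A sketch with a hypothetical 20-user survey (the ~40% bar is the commonly cited B2B SaaS rule of thumb, not a law):

```python
# Sean Ellis test: share of surveyed users answering "very disappointed"
# to "How would you feel if you could no longer use this product?".
# ~40%+ is the conventional product-market-fit signal.

def sean_ellis_score(responses: list[str]) -> float:
    if not responses:
        raise ValueError("no survey responses")
    return sum(1 for r in responses if r == "very disappointed") / len(responses)

survey = (["very disappointed"] * 9 + ["somewhat disappointed"] * 8
          + ["not disappointed"] * 3)  # hypothetical 20-user survey
score = sean_ellis_score(survey)
print(f"{score:.0%}", "PMF signal" if score >= 0.40 else "keep iterating")
# 45% PMF signal
```

As with NPS, survey only users who have actually used the product past activation — asking sign-ups who never reached the core workflow inflates the denominator and hides the real signal.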

🚨   The most expensive 0-to-1 mistake I have ever witnessed: A client raised ₹2.5 Cr in angel funding on the back of a prototype and immediately hired 12 people and rented office space. They had 0 paying users. By the time they had a launchable product, 9 months later, they had spent ₹1.8 Cr on team and infrastructure. The remaining ₹70L lasted 3 months post-launch — not enough time to reach sustainable unit economics. The product was genuinely good. The sequencing killed it.

The 0-to-1 Checklist: The Questions That Must Be Answered at Each Phase

Phase 0: Validation

  • Have I conducted 12+ problem interviews with target users — not product demos?
  • Have I identified and tested the riskiest assumption with a real experiment?
  • Have I run a landing page or fake-door test and measured CTA conversion?
  • Can I answer all three fundamental questions (problem, minimum feature set, success metrics)?

Phase 1: Brand and Positioning

  • Does the product name pass all six naming tests?
  • Can I state the one-sentence positioning with every blank filled in?
  • Has the positioning been validated with 5 target customers?

Phase 2–3: Scoping and Architecture

  • Is the Must Have list independently testable, user-research-backed, and within 60% of budget?
  • Has the tech stack been chosen for MVP speed, with scale decisions explicitly deferred?
  • Is the BRD signed by the business owner and the FRD signed by the technical lead?

Phase 4: Sprint Delivery

  • Is there a scope freeze in place for Weeks 1–8?
  • Are error states and empty states scheduled in the sprint plan — not deferred?
  • Is QA sign-off included in the Definition of Done for every feature?

Phase 5–6: Beta and GTM

  • Are beta users recruited from the target customer profile — not from personal network?
  • Has the activation event been defined and instrumented in analytics?
  • Is there a 30-60-90 day post-launch success framework with named accountability?
  • Has the GTM motion (sales-led vs product-led) been explicitly decided before launch?

Every successful 0-to-1 product I have shipped passed all of these gates. Every failed product I have inherited missed at least three. The checklist is not a bureaucracy — it is the map. Use it.

C B Mishra  |  Strategic Technical Project Manager 

Available for 0-to-1 product consulting, MVP delivery, and GTM strategy  •  Book a free call

Tagged in: MVP Delivery, Product Strategy

C B Mishra

Strategic Technical Project Manager

7+ years directing enterprise-scale digital transformations, ERP implementations, and high-performing engineering teams globally.