Every founder I’ve worked with has had the same database panic moment.
You’re starting your MVP. You open your schema file. And suddenly you’re paralyzed by questions:
SQL or NoSQL? Postgres or MongoDB? Should I normalize? How many tables? What about scaling? What if I choose wrong and have to rewrite everything?
So you spend a week reading database design articles, watching YouTube tutorials about sharding strategies, and designing a 47-table schema for an app that doesn’t have users yet.
Stop. Your MVP database needs to do one thing: not get in your way.
Here’s the no-BS guide to database design for MVPs — from someone who’s built databases for fleet tracking systems processing millions of GPS pings, manufacturing ERPs, and tiny SaaS products alike.
The Only Database Decision That Matters
Use Postgres.
That’s it. That’s the section.
Okay, let me explain why:
- Postgres handles everything. Relational data, JSON documents, full-text search, geospatial queries, time-series data. Whatever your MVP needs, Postgres can do it.
- Every hosting platform supports it. Supabase, Neon, Railway, Render, AWS RDS, PlanetScale (now supports Postgres) — you’ll never be locked out.
- It scales far beyond your MVP. Instagram famously scaled to hundreds of millions of users on Postgres. You’ll hit $10M ARR before you hit Postgres limits.
- The ecosystem is massive. ORMs, migration tools, monitoring, backups — everything works with Postgres.
“But what about MongoDB?” Use it if your data is genuinely document-shaped (content management, product catalogs with wildly varying attributes). For 90% of MVPs — user accounts, orders, subscriptions, content — Postgres is better.
“But what about Firebase/Firestore?” Fine for prototypes. Painful for anything with relationships. You’ll regret it at 1,000 users when you need a join.
“But what about Supabase?” Supabase IS Postgres. With a nice UI, auth, and real-time subscriptions bolted on. Great choice for MVPs.
Decision made. Moving on.
The MVP Schema Pattern
Every MVP, regardless of what it does, has the same core tables:
-- The Universal MVP Schema
-- 1. Users (every app has these)
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
email TEXT UNIQUE NOT NULL,
name TEXT,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- 2. The Core Thing (whatever your app is about)
-- For a project tool: projects
-- For a marketplace: listings
-- For a CMS: posts
CREATE TABLE [core_thing] (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
title TEXT NOT NULL,
-- your 3-5 domain-specific columns here
status TEXT DEFAULT 'active',
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- 3. The Activity (what users do with the core thing)
-- For a project tool: tasks
-- For a marketplace: orders
-- For a CMS: comments
CREATE TABLE [activity] (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
[core_thing]_id UUID REFERENCES [core_thing](id),
-- your 2-3 activity-specific columns
created_at TIMESTAMPTZ DEFAULT now()
);
That’s it. Three tables. Users, the thing your app is about, and what users do with it.
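To make the shape concrete, here’s the kind of query this three-table layout supports — using projects/tasks as illustrative stand-ins for [core_thing]/[activity]:

```sql
-- A user's projects with a count of incomplete tasks, newest first.
-- (projects/tasks are placeholder names for [core_thing]/[activity].)
SELECT p.id, p.title, count(t.id) AS open_tasks
FROM projects p
LEFT JOIN tasks t ON t.project_id = p.id AND t.completed = false
WHERE p.user_id = $1
GROUP BY p.id
ORDER BY p.created_at DESC;
```

Grouping by the primary key alone is enough in Postgres — the other `projects` columns are functionally dependent on it.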
UTMStamp at launch:
- users — who signed up
- signatures — email signatures they created
- clicks — tracking events on those signatures
Three tables. Shipped in 13 days. Still works.
5 Rules for MVP Database Design
Rule 1: Start With 3-5 Tables, Max
If your initial schema has more than 5 tables, you’re overdesigning.
The urge to model everything upfront is strong. Resist it. You don’t know what your product will actually become until users touch it. Every table you create now is a table you might have to migrate later.
Start minimal. Add tables when you feel pain, not when you imagine future pain.
Bad (day 1 schema for a todo app):
users, profiles, teams, team_members, roles, permissions,
projects, project_settings, tasks, subtasks, comments,
attachments, tags, task_tags, notifications, notification_preferences,
integrations, webhooks, audit_logs
Good (day 1 schema for a todo app):
users, projects, tasks
You can add teams in week 3 when someone actually asks for collaboration. You can add tags in month 2 when users complain about finding things.
Rule 2: Use UUIDs, Not Auto-Increment IDs
-- Don't do this
id SERIAL PRIMARY KEY
-- Do this
id UUID PRIMARY KEY DEFAULT gen_random_uuid()
Why:
- Security: Sequential IDs leak information. /api/users/47 tells attackers you have ~47 users.
- Merging: If you ever need to combine databases (acquisitions, migrations), UUIDs don’t collide.
- Frontend: UUIDs can be generated client-side, enabling optimistic UI updates.
The performance difference is negligible at MVP scale. Use UUIDs.
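The client-side point is worth seeing in code: because a UUID doesn’t need a round trip to the database, the UI can render a new record immediately with the same id the server will store. A minimal sketch — the `Task` shape and `newTask` helper are illustrative:

```typescript
import { randomUUID } from "node:crypto"; // in browsers: crypto.randomUUID()

type Task = { id: string; title: string; createdAt: string };

// Build the row locally, render it optimistically, then persist it.
// The id the server stores is the one the UI is already showing.
function newTask(title: string): Task {
  return { id: randomUUID(), title, createdAt: new Date().toISOString() };
}

const task = newTask("Write the schema");
console.log(task.id); // a v4 UUID, generated with no server round trip
```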
Rule 3: Always Add created_at and updated_at
Every single table. No exceptions.
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
You will need these for:
- Debugging (“when did this record change?”)
- Analytics (“signups this week”)
- Sorting (“most recent first”)
- Business logic (“inactive for 30 days”)
Use TIMESTAMPTZ (timestamp with timezone), not TIMESTAMP. Timezone-naive timestamps will haunt you the moment you have users in different timezones.
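For example, the “signups this week” query falls straight out of created_at:

```sql
-- Signups in the last 7 days. Correct across timezones because
-- created_at is TIMESTAMPTZ (stored internally as UTC).
SELECT count(*) AS signups
FROM users
WHERE created_at >= now() - interval '7 days';
```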
Set up an auto-update trigger for updated_at:
CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = now();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Apply to each table
CREATE TRIGGER set_updated_at
BEFORE UPDATE ON users
FOR EACH ROW EXECUTE FUNCTION update_updated_at();
Rule 4: Use TEXT, Not VARCHAR(n)
-- Don't do this
name VARCHAR(100),
email VARCHAR(255),
bio VARCHAR(500)
-- Do this
name TEXT,
email TEXT,
bio TEXT
In Postgres, TEXT and VARCHAR have identical performance. The only difference is VARCHAR(n) adds a length check — which you should handle in your application layer anyway.
Why this matters for MVPs:
- No migration needed when you realize 100 chars isn’t enough for names
- Application validation is more flexible than database constraints
- One less thing to think about
Add length constraints in your API validation (Zod, Joi, etc.), not in your schema.
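In practice that looks something like this — shown as a dependency-free sketch with illustrative limits; Zod or Joi express the same checks declaratively:

```typescript
// App-layer length limits: trivial to change, no ALTER TABLE required.
// The specific limits here are illustrative, not prescriptive.
const LIMITS = { name: 100, email: 255, bio: 500 } as const;

function validateLengths(input: Record<string, string>): string[] {
  const errors: string[] = [];
  for (const [field, max] of Object.entries(LIMITS)) {
    const value = input[field];
    if (value !== undefined && value.length > max) {
      errors.push(`${field} must be at most ${max} characters`);
    }
  }
  return errors;
}

const errs = validateLengths({ name: "x".repeat(150), email: "a@b.co" });
```

When a limit changes, you edit one constant and redeploy — no migration, no locked table.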
Rule 5: Use a JSON Column for “Everything Else”
This is the MVP secret weapon:
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
email TEXT UNIQUE NOT NULL,
name TEXT,
metadata JSONB DEFAULT '{}',
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
That metadata JSONB column is your escape valve. When you need to store something but don’t want to add a column or run a migration:
-- Store onboarding preferences
UPDATE users SET metadata = metadata || '{"onboarding_step": 3, "referral_source": "twitter"}'
WHERE id = '...';
-- Query it
SELECT * FROM users WHERE metadata->>'referral_source' = 'twitter';
-- Index it if you query it often
CREATE INDEX idx_users_referral ON users ((metadata->>'referral_source'));
The rule: Columns you query frequently → proper columns. Everything else → JSONB metadata.
This lets you iterate fast without migrations. When a metadata field becomes important enough, promote it to a proper column.
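Promotion is itself just a small forward migration. A sketch, assuming referral_source has earned its own column:

```sql
-- 1. Add the real column
ALTER TABLE users ADD COLUMN referral_source TEXT;

-- 2. Backfill from the JSONB escape valve
UPDATE users SET referral_source = metadata->>'referral_source';

-- 3. Optionally remove the old key so there is one source of truth
UPDATE users SET metadata = metadata - 'referral_source';
```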
Migrations: Keep Them Simple
Use whatever migration tool comes with your ORM:
- Prisma: prisma migrate dev (best DX)
- Drizzle: drizzle-kit push for dev, drizzle-kit generate for production
- Knex: knex migrate:make / knex migrate:latest
- Raw SQL: a numbered folder of .sql files (honestly fine for MVPs)
Rules for MVP migrations:
- Forward-only. Don’t write down migrations (rollback scripts). If something goes wrong, write a new forward migration to fix it.
- One change per migration. Don’t bundle “add users table + add orders table + add index” into one file.
- Always test locally before running in production: docker compose up db → run migration → verify.
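The raw-SQL option needs almost no tooling. A sketch of a numbered-folder runner — it assumes DATABASE_URL is set and that filenames sort in execution order (001_..., 002_...), so glob order is apply order:

```shell
# Apply every migration in filename order; stop on the first error.
# Naming convention (assumed): migrations/001_create_users.sql,
# migrations/002_add_projects.sql, ...
set -eu
run_migrations() {
  for f in "$1"/*.sql; do
    echo "applying $f"
    psql "$DATABASE_URL" -v ON_ERROR_STOP=1 -f "$f"
  done
}
# run_migrations migrations   # uncomment to run against $DATABASE_URL
```

ON_ERROR_STOP makes psql exit nonzero on the first failed statement, so a broken migration halts the run instead of half-applying.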
Common Mistakes (And How to Avoid Them)
Mistake 1: Premature Normalization
Bad: Creating a separate addresses table with address_type, country_id referencing a countries lookup table, state_id referencing states…
Good: A shipping_address TEXT column on the orders table. Or a JSONB column with the full address object.
Normalize when you have the same address referenced by 5 different tables. Not before.
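Concretely, assuming an orders table, the denormalized version is one column:

```sql
-- One JSONB column instead of addresses/countries/states lookup tables.
ALTER TABLE orders ADD COLUMN shipping_address JSONB DEFAULT '{}';

UPDATE orders
SET shipping_address =
  '{"line1": "123 Main St", "city": "Austin", "country": "US", "zip": "78701"}'
WHERE id = '...';
```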
Mistake 2: Soft Deletes From Day One
-- Don't start with this
deleted_at TIMESTAMPTZ,
is_deleted BOOLEAN DEFAULT false
Soft deletes add complexity to every query (WHERE deleted_at IS NULL everywhere). At the MVP stage, just… delete things. If you’re worried about accidental deletion, set up daily database backups (which you should do anyway).
Add soft deletes when you have a legal or business reason to keep deleted records.
Mistake 3: Over-Indexing
Every index speeds up reads but slows down writes. At MVP scale, you need exactly these indexes:
- Primary keys (automatic)
- Foreign keys (add these)
- Columns you filter/sort on frequently (add as needed)
That’s it. Don’t index every column “just in case.” Add indexes when queries get slow, which won’t happen until you have real data volume.
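When a query does get slow, Postgres will tell you why. The workflow is: explain, spot the sequential scan, add the one index that fixes it (column names illustrative):

```sql
-- 1. See what the planner is actually doing
EXPLAIN ANALYZE
SELECT * FROM tasks
WHERE project_id = '...'
ORDER BY created_at DESC;

-- 2. If the plan shows "Seq Scan on tasks" at real data volume,
--    add the index that matches the filter + sort
CREATE INDEX idx_tasks_project_created
  ON tasks (project_id, created_at DESC);
```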
Mistake 4: Choosing a Database for Scale You Don’t Have
“We need Cassandra for our write-heavy workload.” Your workload is 50 users creating 3 records per day.
“We should use a graph database for the social features.” You have a follows table with 200 rows.
Postgres handles thousands of requests per second out of the box. You will not outgrow it at the MVP stage. I promise.
Mistake 5: No Backups
This isn’t a schema mistake — it’s an existential one.
Set up automated daily backups. Every managed Postgres provider (Supabase, Neon, Railway) does this automatically. If you’re self-hosting, set up pg_dump on a cron job.
# Daily backup (add to crontab)
0 2 * * * pg_dump $DATABASE_URL | gzip > /backups/db-$(date +\%Y\%m\%d).sql.gz
Losing your database = losing your company. This takes 5 minutes to set up. Do it.
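And test the restore at least once — a backup you’ve never restored is a hope, not a backup. A sketch, with an illustrative filename and a scratch-database URL you’d supply yourself:

```shell
# Restore a compressed pg_dump into a scratch database to verify the
# backup actually works. SCRATCH_DATABASE_URL and the filename are
# illustrative — point them at a throwaway database, never production.
restore_backup() {
  gunzip -c "$1" | psql "$SCRATCH_DATABASE_URL"
}
# restore_backup /backups/db-20260101.sql.gz
```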
The Best MVP Database Setup in 2026
Here’s what I’d use for any new MVP today:
Database: Supabase (free tier → Postgres with auth, real-time, and a dashboard built in)
ORM: Prisma or Drizzle
- Prisma: Better for teams, incredible DX, auto-generated types
- Drizzle: Lighter, closer to SQL, better performance
Schema: 3-5 tables with JSONB metadata columns
Migrations: Prisma Migrate or Drizzle Kit
Backups: Automatic via Supabase (or daily pg_dump if self-hosted)
Monitoring: Supabase dashboard for basics, upgrade to pganalyze when you hit scale
Total setup time: 30 minutes to first query.
Schema Template You Can Copy
Here’s a production-ready MVP schema. Copy it, rename the tables, and start building:
-- Extensions
CREATE EXTENSION IF NOT EXISTS "pgcrypto"; -- gen_random_uuid() is built in since Postgres 13; pgcrypto covers older versions
-- Users
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
email TEXT UNIQUE NOT NULL,
name TEXT,
avatar_url TEXT,
metadata JSONB DEFAULT '{}',
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Your core entity (rename this)
CREATE TABLE projects (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
title TEXT NOT NULL,
description TEXT,
status TEXT DEFAULT 'active' CHECK (status IN ('active', 'archived', 'deleted')),
settings JSONB DEFAULT '{}',
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Activity/items within the core entity (rename this)
CREATE TABLE tasks (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
user_id UUID NOT NULL REFERENCES users(id),
title TEXT NOT NULL,
completed BOOLEAN DEFAULT false,
metadata JSONB DEFAULT '{}',
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Indexes (only what you need)
CREATE INDEX idx_projects_user ON projects(user_id);
CREATE INDEX idx_tasks_project ON tasks(project_id);
CREATE INDEX idx_tasks_user ON tasks(user_id);
-- Updated_at trigger
CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN NEW.updated_at = now(); RETURN NEW; END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER users_updated BEFORE UPDATE ON users
FOR EACH ROW EXECUTE FUNCTION update_updated_at();
CREATE TRIGGER projects_updated BEFORE UPDATE ON projects
FOR EACH ROW EXECUTE FUNCTION update_updated_at();
CREATE TRIGGER tasks_updated BEFORE UPDATE ON tasks
FOR EACH ROW EXECUTE FUNCTION update_updated_at();
Copy. Rename. Ship.
The Bottom Line
MVP database design is about making decisions that are:
- Good enough to build on
- Simple enough to change
- Standard enough that any developer can understand
Use Postgres. Start with 3-5 tables. Add JSONB metadata columns as escape valves. Don’t normalize until it hurts. Don’t optimize until it’s slow.
Your database isn’t a cathedral — it’s a tent. It needs to keep the rain out while you figure out if anyone wants to camp here. You can build the cathedral later, when you know people are staying.
Ship the schema. Ship the product. Iterate when you have data.