Why we ship Supabase RLS instead of a custom /api/auth
We were three commits into a custom auth API when we deleted all of it. Here's what changed our mind and what we shipped instead.
Sprint 4 of WidgetCraft needed persistence. The plan was straightforward: a POST /api/config that validates input and writes to a database, a GET /api/config/:id that returns a row. Auth to come in a later sprint. We wrote the first two routes. Then we wrote the third. Then we realized we were writing an auth system, not a persistence layer.
Every route was growing the same boilerplate: validate the JWT, extract the user id, check they own the row, return 403 if not, return the row if so. Each new route meant a new place where that logic could drift. We'd seen this pattern age badly in a previous codebase.
The alternative was to push authorization into the database itself. Postgres has Row Level Security (RLS) — a policy layer that runs before any query, enforcing per-row access based on the authenticated caller. Supabase exposes this cleanly: the client SDK adds the caller's auth context to every query, and RLS policies do the rest.
What the custom API would have looked like
// /api/config/[id]/route.ts
import { NextResponse } from "next/server";

export async function GET(req: Request, { params }: { params: { id: string } }) {
  const user = await requireUser(req);
  if (!user) return NextResponse.json({ error: "auth" }, { status: 401 });
  const config = await db.config.findUnique({ where: { id: params.id } });
  if (!config) return NextResponse.json({ error: "not found" }, { status: 404 });
  if (config.disabled) return NextResponse.json({ error: "disabled" }, { status: 410 });
  // Public configs anyone can read. Private configs only the owner.
  if (config.owner && config.owner !== user.id) {
    return NextResponse.json({ error: "forbidden" }, { status: 403 });
  }
  return NextResponse.json(config);
}

That logic gets copied into every route that touches the config table. POST, PUT, DELETE, GET — each one re-implements the same ownership check. Miss one check in one place and you ship an IDOR vulnerability.
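The danger isn't that the check is hard to write — it's that every route carries its own copy, and copies drift. The whole thing reduces to a few lines; here it is as a hypothetical pure helper (`canRead` is our name for illustration, not a function from the codebase):

```typescript
// Hypothetical helper capturing the check every route must repeat.
type Config = { id: string; owner: string | null; disabled: boolean };

// Returns the HTTP status the route should respond with.
function canRead(config: Config, userId: string | null): number {
  if (config.disabled) return 410;                          // tombstoned row
  if (config.owner && config.owner !== userId) return 403;  // private to owner
  return 200;                                               // public or own row
}
```

Extracting it into a shared helper reduces the drift, but each route still has to remember to call it — which is exactly the step that gets forgotten.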
What the RLS version looks like
One migration, five policies:
-- Anyone can read non-disabled rows
CREATE POLICY configs_select_public ON configs
  FOR SELECT USING (disabled = false);

-- Anon clients can create ownerless rows
CREATE POLICY configs_insert_anon ON configs
  FOR INSERT TO anon
  WITH CHECK (owner IS NULL);

-- Authenticated users can create rows only with themselves as owner
CREATE POLICY configs_insert_auth ON configs
  FOR INSERT TO authenticated
  WITH CHECK (owner = auth.uid());

-- Authenticated users can update/delete only their own rows
CREATE POLICY configs_update_own ON configs
  FOR UPDATE TO authenticated
  USING (owner = auth.uid());

CREATE POLICY configs_delete_own ON configs
  FOR DELETE TO authenticated
  USING (owner = auth.uid());

And then the application code becomes zero-lines-of-auth-logic:
const { data } = await supabase.from("configs").insert({ widget_type, config_json });

If the user is anonymous, the configs_insert_anon policy accepts the write (owner defaults to null). If authenticated, the configs_insert_auth policy accepts it only when owner = auth.uid(). We never write an ownership check in TypeScript again.
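It's worth spelling out how Postgres combines those two INSERT policies: permissive policies for the same command are OR'd together, and each applies only to the role named in its TO clause. A toy model of that decision logic in TypeScript (not Supabase code, just the semantics):

```typescript
// Toy model of Postgres combining the two permissive INSERT policies.
type Role = "anon" | "authenticated";
type NewRow = { owner: string | null };

function insertAllowed(role: Role, uid: string | null, row: NewRow): boolean {
  // configs_insert_anon: applies only TO anon, requires owner IS NULL
  const anonOk = role === "anon" && row.owner === null;
  // configs_insert_auth: applies only TO authenticated, requires owner = auth.uid()
  const authOk = role === "authenticated" && uid !== null && row.owner === uid;
  return anonOk || authOk; // permissive policies are OR'd
}
```

An authenticated user trying to insert someone else's uid as owner fails both policies, so the write is rejected before any application code runs.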
What this buys us
1. Defense at the database, not the perimeter. An INSERT from a misbehaving edge function, a misconfigured MCP tool, a leaked anon key — all of them still hit the policy, because only the service-role key bypasses RLS. The usual failure mode where authorization gets bypassed because someone added a new route and forgot the check? Gone. The check lives in the table.
2. Every client looks the same to the database. The Next.js app calls supabase.from(). The MCP server calls supabase.from(). A future mobile app would call supabase.from(). None of them need to know the authorization rules; the rules are in one place.
3. The service-role key is the explicit escape hatch. When we *do* need to bypass RLS — admin operations, system-level maintenance, the MCP server writing ownerless configs on behalf of any caller — we use the service-role key, which Supabase explicitly treats as RLS-bypass. The key lives in Vercel env, never on the client. Using it is an intentional choice, visible in the code, greppable.
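The wiring for that escape hatch is a few lines of server-only config. A sketch (env var names are ours; the guard is an assumption, not something Supabase requires):

```typescript
import { createClient } from "@supabase/supabase-js";

// Server-only module: the service-role key bypasses RLS entirely,
// so it must never end up in a browser bundle.
if (typeof window !== "undefined") {
  throw new Error("admin client imported in a browser context");
}

export const admin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!, // set in Vercel env, never NEXT_PUBLIC_*
  { auth: { persistSession: false } }
);
```

Because every privileged query goes through this one `admin` client, a grep for it is a complete audit of the code paths that skip the policies.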
The gotcha
RLS isn't free. Two things to watch:
Debugging is different. A row that "doesn't exist" often does exist — you just don't have policy access to see it. Supabase's query log shows when a policy blocks a read; we got into the habit of running queries as postgres first, then as anon, to distinguish a "real 404" from a "policy 404".
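That habit can be sketched as a toy model (plain TypeScript, not real Supabase calls): look with RLS bypassed first, then through the policy, and compare the two answers.

```typescript
// Toy model: separate "real 404" from "policy 404" for the configs table.
type Row = { id: string; disabled: boolean };

function diagnose(table: Row[], id: string): "visible" | "policy 404" | "real 404" {
  const asPostgres = table.find((r) => r.id === id) ?? null; // postgres role bypasses RLS
  if (!asPostgres) return "real 404";                        // row truly absent
  const asAnon = asPostgres.disabled ? null : asPostgres;    // SELECT policy: disabled = false
  return asAnon ? "visible" : "policy 404";                  // exists, but the policy hides it
}
```

If the privileged read finds the row and the anon read doesn't, the bug is in your policies (or your expectations of them), not in your data.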
Policies add query cost. Every policy is a predicate Postgres applies before returning rows. For our configs table (5 policies, ~1 row per widget per user), the cost is imperceptible. For a large analytics table with 10 policies, you'd want to profile.
When would we NOT use RLS?
Rare cases. If the authorization rules were genuinely complex — say, "users in org A can see rows in org B only if both orgs have opted into cross-sharing AND the row has an age between 30 and 90 days" — we'd probably factor that into a stored procedure or a view with its own policy. The RLS policy language is good but not unlimited.
For the normal case — "users see their own rows, some rows are public" — RLS is strictly better than a hand-rolled auth API. Less code, fewer footguns, one place to audit.
We deleted /api/config/* at the end of Sprint 4 and haven't missed it.
*WidgetCraft's Supabase schema, policies, and migrations are open-source:* github.com/JidAiLabs/widgetcraft/tree/main/supabase