Part 1 of 2 in Redirect Lifeguard

Building a Lean SEO Tool: Architecture Decisions That Keep Costs Near Zero

Part 1 of the Redirect Lifeguard series. How we designed an SEO redirect analysis tool with minimal infrastructure, cheap overhead, and a path to marketplace viability.

Every SEO professional has the same experience at least once: you inherit a site with hundreds of broken redirects, canonical mismatches, and chain loops that have been silently destroying search performance for months. Existing tools such as Screaming Frog, Ahrefs, and Semrush can detect these issues, but they’re expensive, designed for broad crawling rather than focused redirect analysis, and built for agencies with large portfolios.

We needed something different. A lean, focused tool that does one thing well: analyze redirect health, detect problems, and provide evidence-based recommendations. And we needed to build it without the infrastructure overhead that makes most SaaS tools expensive before they have a single customer.

This is the Redirect Lifeguard project, and this series documents the architecture decisions that keep the whole thing running on a near-zero budget while maintaining production-grade reliability.

Design Principles

We started with four constraints that shaped every subsequent decision.

Minimal infrastructure. No Kubernetes cluster. No microservices. No Redis. No message queue. The tool runs on a single deployment target with a single database. Complexity gets added only when measurable load demands it, not because it’s architecturally fashionable.

Cheap overhead. The monthly cost of running this tool at zero customers should be negligible. At moderate scale (a few hundred active users), it should remain profitable on a low subscription price. This means managed hosting, serverless where appropriate, and aggressive optimization of the only expensive resource: database operations.

Speed to deploy. We want to go from concept to live product in weeks, not months. This means choosing a stack we’re deeply familiar with, avoiding novel infrastructure, and building the simplest version that delivers real value.

Technical stability. The tool will be handling data that people make business decisions on. Incorrect redirect analysis could lead to changes that damage a site’s SEO. Every analysis result needs to be backed by stored evidence: the actual HTTP responses, headers, and redirect chains that led to the conclusion.

The Stack

TypeScript end to end. Next.js with App Router for both the frontend and API layer. Prisma ORM for database access. PostgreSQL for storage. That’s it.

The decision to use Next.js for the API isn’t purely about convenience. It’s about deployment simplicity. A single Vercel deployment serves both the application and the API, with managed Postgres from Neon or Supabase as the database. No separate backend service to deploy, monitor, or scale independently.

Prisma gives us schema-managed migrations, type-safe database queries, and a clean abstraction layer that would let us swap database providers if needed. The schema includes tables for users, projects, API keys, jobs, job runs, artifacts, and snapshots: the minimum set needed for a functional analysis tool with evidence storage.
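A rough sketch of how the job and evidence tables might relate, in Prisma’s schema language. Model names, field names, and relations here are illustrative assumptions based on the tables listed above, not the actual production schema:

```prisma
// Illustrative sketch only — names and relations are assumptions.
model Job {
  id        String   @id @default(cuid())
  status    String   @default("pending") // pending | running | failed | completed
  attempts  Int      @default(0)
  runs      JobRun[]
  createdAt DateTime @default(now())
}

model JobRun {
  id        String     @id @default(cuid())
  job       Job        @relation(fields: [jobId], references: [id])
  jobId     String
  snapshots Snapshot[]
}

model Snapshot {
  id            String @id @default(cuid())
  jobRun        JobRun @relation(fields: [jobRunId], references: [id])
  jobRunId      String
  inputPayload  Json
  outputPayload Json
  headers       Json
}
```

The key structural point is that every snapshot hangs off a job run, so evidence is always traceable to the execution that produced it.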

Evidence-First Architecture

The most unconventional decision in the architecture is the evidence storage model. Every redirect analysis produces two categories of output: the conclusions (redirect chains detected, canonical mismatches found, loops identified) and the raw evidence (HTTP headers, response bodies, parsed metadata, timing data).

Most tools store the conclusions and discard the evidence. We store both, permanently.

This costs more in storage but provides three critical capabilities. Reproducibility: if a customer questions an analysis result, we can show them exactly what our tool saw. Debugging: when our analysis logic has a bug, we can replay stored evidence through fixed logic without re-crawling. And auditability: for enterprise customers who need to document why they made SEO changes, the stored evidence chain provides a complete audit trail.

The evidence is stored as structured snapshots: input payloads, output payloads, HTTP headers, and parsed metadata, all linked to the job run that produced them. This creates a complete forensic record of every analysis without any additional implementation effort beyond the initial storage design.
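To make the snapshot idea concrete, here is a minimal sketch of the record shape and a builder for it. The interface and field names are hypothetical (the real schema may differ); the point is that input, output, and raw headers travel together under a job-run ID:

```typescript
// Hypothetical snapshot shape — field names are illustrative, not the real schema.
interface EvidenceSnapshot {
  jobRunId: string;
  inputPayload: unknown;   // e.g. the URL and fetch options that were analyzed
  outputPayload: unknown;  // e.g. the redirect chain the analysis concluded
  headers: Record<string, string>; // raw response headers, kept verbatim
  capturedAt: string;      // ISO timestamp of when the evidence was captured
}

function buildSnapshot(
  jobRunId: string,
  input: unknown,
  output: unknown,
  headers: Record<string, string>,
): EvidenceSnapshot {
  return {
    jobRunId,
    inputPayload: input,
    outputPayload: output,
    headers,
    capturedAt: new Date().toISOString(),
  };
}

const snap = buildSnapshot(
  "run_123",
  { url: "https://example.com/old-page" },
  { chain: ["https://example.com/old-page", "https://example.com/new-page"], hops: 1 },
  { location: "https://example.com/new-page", "cache-control": "no-store" },
);
console.log(snap.jobRunId, snap.capturedAt);
```

Because conclusions and raw headers are stored side by side, a later bugfix can re-derive `outputPayload` from the stored evidence without re-fetching anything.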

Job Processing Without a Queue

Most tools that do asynchronous work reach for Redis, RabbitMQ, or a dedicated job queue. We use the database.

The jobs table has a status column with four states: pending, running, failed, completed. A worker process polls for pending jobs, claims them with an atomic UPDATE, processes them, and writes results. Failed jobs get a retry count with exponential backoff.
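The claim-and-retry mechanics can be sketched in memory. This is an illustrative simulation of the semantics, not the production code: in production the claim is a single atomic SQL `UPDATE ... RETURNING` (typically guarded with `FOR UPDATE SKIP LOCKED` so concurrent workers never grab the same row), and the names below are assumptions:

```typescript
// In-memory sketch of the claim + backoff semantics. In production the claim
// is one atomic UPDATE in Postgres; single-threaded JS makes this version atomic.
type JobStatus = "pending" | "running" | "failed" | "completed";

interface Job {
  id: string;
  status: JobStatus;
  attempts: number;
}

// Claim the first pending job: flip it to "running" and return it,
// or null when nothing is pending.
function claimNextJob(jobs: Job[]): Job | null {
  const job = jobs.find((j) => j.status === "pending");
  if (!job) return null;
  job.status = "running";
  return job;
}

// Exponential backoff with a cap: 1s, 2s, 4s, 8s, ... up to 60s.
function backoffMs(attempt: number, baseMs = 1000, capMs = 60_000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

const queue: Job[] = [
  { id: "a", status: "completed", attempts: 0 },
  { id: "b", status: "pending", attempts: 0 },
];
const claimed = claimNextJob(queue);
console.log(claimed?.id, backoffMs(3)); // claims "b"; delay before the 4th retry is 8000 ms
```

The cap matters: without it, a job that keeps failing would back off into hours-long delays instead of surfacing as permanently failed.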

This is explicitly not a scalable architecture. A database-backed job queue tops out at maybe a few hundred concurrent jobs before polling becomes a bottleneck. But at our expected scale of tens to hundreds of analysis jobs per hour, it works perfectly and eliminates an entire infrastructure dependency.

The explicit plan is to migrate to a proper queue (SQS, or possibly Postgres-native LISTEN/NOTIFY) when and only when monitoring shows the database queue becoming a bottleneck. Until then, the simpler architecture wins.

In Part 2, we’ll get into the redirect analysis engine itself: how we detect chains, loops, and canonical mismatches, and why the analysis model is designed around URL inspection rather than full-site crawling.

Adam Bishop

Veteran, entrepreneur, and independent researcher. Writing about formal methods, AI governance, production systems, and the operational discipline that connects them. Every project here demonstrates hard thinking on simple infrastructure.