How to Do Web Development in 2026
Learn What Takes Years in a Week - The secret isn't learning more. It's starting with the right foundation. ZeroStarter teaches you practices that take years to learn—in a week.
Here's an uncomfortable truth about web development in 2026: The gap between junior and senior developers isn't about knowing more syntax or frameworks. It's about understanding practices—the invisible systems that separate hobby projects from production software.
How do you structure a codebase that scales? How do you ensure type safety across your entire stack? How do you manage environment variables without production surprises? How do you set up a development workflow that catches problems early? How do you automate releases so changelogs write themselves?
These practices take years to learn through painful mistakes. Or you can absorb them by working with a codebase that already implements them.
ZeroStarter is practices as a service. It's a living reference implementation. Study it, modify it, make it yours. The patterns matter more than the specific choices.
Ready to dive in? Get started with ZeroStarter or explore the documentation.
Project Architecture
The principle: Code should be organized with clear boundaries. Shared logic should live in shared packages, not be copy-pasted across projects.
Why it matters: When authentication logic exists in one place, fixing a bug fixes it everywhere. When it's duplicated, you fix it in three places and miss the fourth. When your frontend and backend live in separate repos with no shared types, they drift apart.
How ZeroStarter approaches this
ZeroStarter uses a monorepo—multiple applications and packages in a single repository:
```
├── api/hono/      # Backend API
├── web/next/      # Frontend application
└── packages/
    ├── auth/      # Authentication logic (used by API)
    ├── db/        # Database schema (single source of truth)
    ├── env/       # Environment variable validation
    └── tsconfig/  # Shared TypeScript configuration
```

The API imports auth logic from @packages/auth. The database schema in @packages/db is the single source of truth. TypeScript configuration is consistent across all packages.
Turborepo manages the build graph. Change packages/db? It rebuilds dependent packages. Unchanged packages use cache. Builds stay fast as the codebase grows.
What you can learn
Monorepos aren't the only answer:
- Turborepo, Nx, Lerna: Different monorepo tools with different trade-offs
- Separate repositories: Simpler for small teams, harder to keep in sync
- npm packages: Publish shared code to a registry
- Polyrepo with shared types: Keep repos separate but share type definitions
The practice is clear boundaries and code reuse. How you achieve it depends on your team size, deployment needs, and how much shared code you have.
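If you go the monorepo route, the wiring can be as small as a workspaces field in the root manifest. This is a hypothetical minimal sketch, not ZeroStarter's actual file:

```jsonc
// Hypothetical root package.json -- the workspaces globs are what let
// @packages/* imports resolve locally instead of from a registry.
{
  "name": "my-monorepo",
  "private": true,
  "workspaces": ["api/*", "web/*", "packages/*"]
}
```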
The Tech Stack
Before diving into practices, here's what ZeroStarter uses—and why these choices work together:
| Layer | Choice | Why |
|---|---|---|
| Runtime | Bun | Fast installs, native TypeScript, modern tooling |
| Build | Turborepo | Incremental builds, dependency-aware caching |
| Frontend | Next.js | App Router, Server Components, mature ecosystem |
| Backend | Hono | Ultra-fast, edge-compatible, type-safe RPC |
| Database | PostgreSQL/Drizzle | Type-safe ORM, SQL-like syntax, zero runtime overhead |
| Auth | Better Auth | Self-hosted, flexible, TypeScript-first |
| Styling | Tailwind + Shadcn | Utility CSS, accessible components, copy-paste friendly |
| Validation | Zod | Runtime + compile-time types from one schema |
| Linting | Oxlint + Oxfmt | 50-100x faster than ESLint/Prettier |
| Docs | Fumadocs | MDX-based, generates AI-friendly documentation |
| Analytics | PostHog | Product analytics, feature flags, session replay |
These aren't the only valid choices. Next.js could be Remix or SvelteKit. Hono could be Express or Fastify. Drizzle could be Prisma. The practices that follow apply regardless of your specific stack.
See it in action: Check out the Architecture documentation to understand how these pieces fit together.
Type-Safe API Communication
The principle: The contract between frontend and backend should be enforced at compile time. If the backend changes a response shape, the frontend should know immediately—not when users report errors.
Why it matters: The backend returns { userName: "John" }. The frontend expects { username: "john" }. Different casing, different property name. Without type safety, this becomes a runtime error—possibly only for some users, making it hard to reproduce.
How ZeroStarter approaches this
ZeroStarter uses Hono RPC to share types between backend and frontend without code generation:
Backend defines routes with types:
```ts
// api/hono/src/routers/v1.ts
import { Hono } from "hono"
import { zValidator } from "@hono/zod-validator"
import { z } from "zod"

const itemSchema = z.object({
  name: z.string().min(1),
  price: z.number().positive(),
})

export const v1Router = new Hono()
  .get("/items", async (c) => {
    const items = await db.query.items.findMany()
    return c.json(items)
  })
  .post("/items", zValidator("json", itemSchema), async (c) => {
    const data = c.req.valid("json") // Typed from Zod schema
    const item = await db.insert(items).values(data).returning()
    return c.json(item[0])
  })
```

Frontend infers types automatically:
```ts
// web/next/src/lib/api/client.ts
import type { AppType } from "@api/hono"
import { hc } from "hono/client"

export const apiClient = hc<AppType>(config.api.url).api
```

Change the backend response? TypeScript errors appear in the frontend immediately. Rename a field? Every call site is flagged.
What you can learn
Multiple approaches achieve this:
- Hono RPC: Types inferred from route definitions, no codegen
- tRPC: Similar approach, tight React Query integration
- GraphQL + codegen: Schema-first, generates TypeScript types
- OpenAPI + codegen: REST with generated clients from spec
- Shared types package: Manual but explicit control
The practice is type safety across the network boundary. Whether you infer types or generate them, the goal is the same: catch mismatches at compile time.
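The "shared types package" option from the list above can be sketched in a few lines: one interface imported by both sides, plus a thin typed fetch wrapper. The names here are illustrative, not ZeroStarter's actual code.

```typescript
// Shared interface -- in a shared-types setup this would live in a
// package imported by both frontend and backend.
interface Item {
  name: string
  price: number
}

// The wrapper pins the response to the shared type, so renaming a field
// in the interface flags every call site at compile time. The network
// payload itself is still unchecked -- that's the trade-off versus
// runtime validation.
async function getItems(fetchImpl: typeof fetch, baseUrl: string): Promise<Item[]> {
  const res = await fetchImpl(`${baseUrl}/items`)
  if (!res.ok) throw new Error(`GET /items failed: ${res.status}`)
  return (await res.json()) as Item[]
}
```

Injecting the fetch implementation keeps the wrapper testable without a running server.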
Learn more: See the Type-Safe API documentation for complete examples and patterns.
API Observability
The principle: Every request should be traceable. Errors should be standardized. You should know what happened, when, and how long it took.
Why it matters: A user reports "something went wrong." Without request IDs, timestamps, and structured errors, you're searching through logs hoping to find the needle in the haystack. With proper observability, you search by request ID and see exactly what happened.
How ZeroStarter approaches this
Request ID middleware assigns a unique ID to every request:
```ts
import { requestId } from "hono/request-id"

app.use("*", requestId())
```

Metadata middleware injects timing and context into responses:
```ts
const metadata = {
  requestId: c.get("requestId"),
  timestamp: new Date().toISOString(),
  method: c.req.method,
  path: c.req.path,
  duration: Math.round(performance.now() - start),
}
```

Global error handler standardizes error responses:
```ts
.onError((error, c) => {
  if (error instanceof z.ZodError) {
    return c.json({
      error: {
        code: "VALIDATION_ERROR",
        message: "Invalid request payload",
        issues: error.issues,
      },
    }, 400)
  }
  return c.json({
    error: {
      code: "INTERNAL_SERVER_ERROR",
      message: isLocal(env.NODE_ENV) ? error.message : "An unexpected error occurred",
    },
  }, 500)
})
```

Every error follows the same shape. Zod validation errors include the specific issues. Request context is always available for debugging.
What you can learn
Observability tools vary:
- Request IDs: Built-in middleware or custom
- Structured logging: Pino, Winston, console with JSON
- Error tracking: Sentry, Bugsnag, LogRocket
- APM: Datadog, New Relic, OpenTelemetry
The practice is consistent, traceable request handling. When something fails, you should be able to trace it end-to-end.
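Framework aside, the core of the practice fits in one function: every handler call gets a request ID and timing metadata attached to its result, and failures log the same context. This is a hypothetical dependency-free sketch; ZeroStarter does this with Hono middleware.

```typescript
import { randomUUID } from "node:crypto"

type Handler<T> = () => Promise<T> | T

// Wrap any handler so its result (or its failure) carries a request ID,
// the path, and the elapsed time -- everything needed to trace one
// request end-to-end in the logs.
async function withRequestContext<T>(path: string, handler: Handler<T>) {
  const requestId = randomUUID()
  const start = performance.now()
  try {
    const body = await handler()
    return {
      body,
      meta: { requestId, path, duration: Math.round(performance.now() - start) },
    }
  } catch (error) {
    // Errors log the same context, so a user-reported failure can be
    // found by request ID instead of by guesswork.
    console.error({ requestId, path, error: String(error) })
    throw error
  }
}
```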
Authentication
The principle: Authentication should be secure by default, flexible enough to extend, and simple enough to understand. Users shouldn't think about security—it should just work.
Why it matters: Auth bugs are security bugs. A session that doesn't expire, a token that leaks, a callback URL that's not validated—these become vulnerabilities. Using a battle-tested library with secure defaults protects you from mistakes.
How ZeroStarter approaches this
ZeroStarter uses Better Auth with multiple providers:
```ts
// packages/auth/src/index.ts
export const auth = betterAuth({
  baseURL: env.HONO_APP_URL,
  trustedOrigins: env.HONO_TRUSTED_ORIGINS,
  database: drizzleAdapter(db, {
    provider: "pg",
    schema: { user, session, account, verification },
  }),
  socialProviders: {
    github: {
      clientId: env.GITHUB_CLIENT_ID,
      clientSecret: env.GITHUB_CLIENT_SECRET,
    },
    google: {
      clientId: env.GOOGLE_CLIENT_ID,
      clientSecret: env.GOOGLE_CLIENT_SECRET,
    },
  },
  plugins: [openAPI()], // Auto-generated API docs
})
```

Frontend auth client with magic link support:
```ts
import { createAuthClient } from "better-auth/react"
import { magicLinkClient } from "better-auth/client/plugins"

export const authClient = createAuthClient({
  baseURL: `${config.api.url}/api/auth`,
  plugins: [magicLinkClient()],
})

export const { useSession, signIn, signUp, signOut } = authClient
```

Cross-subdomain cookies for multi-app authentication:
```ts
...(cookieDomain && {
  advanced: {
    crossSubDomainCookies: {
      enabled: true,
      domain: cookieDomain,
    },
  },
})
```

What you can learn
Auth approaches vary:
- Better Auth: Self-hosted, flexible, TypeScript-first
- Auth.js: Popular, many providers, Next.js integration
- Lucia: Lightweight, database-agnostic
- Clerk, Auth0: Managed services, less control
- Custom: Full control, full responsibility
The practice is secure, tested authentication. Whether self-hosted or managed, the auth system should be secure by default and auditable.
Try it yourself: Clone ZeroStarter and run `bun dev`. Authentication with GitHub and Google OAuth is ready to configure. See the Installation guide.
Environment Variable Management
The principle: Environment variables should be validated at startup and typed throughout your code. A missing variable should crash the app immediately with a clear error—not fail silently during a request.
Why it matters: process.env.DATABASE_URL returns string | undefined. If it's undefined, when do you find out? Maybe immediately. Maybe five minutes later when the first database query runs. Maybe in production when a specific code path executes.
How ZeroStarter approaches this
ZeroStarter uses @t3-oss/env-core with Zod schemas:
```ts
// packages/env/src/api-hono.ts
export const env = createEnv({
  server: {
    NODE_ENV: z.enum(["local", "development", "test", "staging", "production"]),
    DATABASE_URL: z.url(),
    PORT: z.coerce.number().default(4000),
    TRUSTED_ORIGINS: z.string().transform((s) => s.split(",")),
  },
  runtimeEnv: {
    NODE_ENV: process.env.NODE_ENV,
    DATABASE_URL: process.env.DATABASE_URL,
    PORT: process.env.PORT,
    TRUSTED_ORIGINS: process.env.TRUSTED_ORIGINS,
  },
})
```

Each package has its own env file, importing only what it needs. The auth package can't accidentally access database credentials. The frontend can't access server secrets.
What you can learn
Validation approaches vary:
- t3-env: Zod-based, popular in Next.js projects
- envalid: Similar concept, different API
- Infisical: Secrets management with syncing
- Platform validation: Vercel, Railway have built-in checks
The practice is validated, typed environment configuration. The specific library matters less than having validation at all. Fail fast, fail clearly.
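The fail-fast idea is small enough to hand-roll. This is a dependency-free sketch with a hypothetical `requireEnv` helper; ZeroStarter itself uses @t3-oss/env-core with Zod.

```typescript
// Validate required environment variables at startup. Crashing here,
// with every missing key named, beats failing five minutes later when
// the first query runs.
function requireEnv(
  source: Record<string, string | undefined>,
  keys: string[],
): Record<string, string> {
  const missing = keys.filter((k) => !source[k])
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`)
  }
  // After validation the values can safely be narrowed to string.
  return Object.fromEntries(keys.map((k) => [k, source[k] as string]))
}
```

Called once at module load with `process.env`, every downstream read is a plain `string` instead of `string | undefined`.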
Deep dive: Learn how ZeroStarter organizes environment variables across packages in the Environment documentation.
Development Workflow
The principle: Your workflow should prevent mistakes, not just catch them. Commits should be meaningful. Branches should protect production.
Why it matters: Direct pushes to production cause outages. Commits like "fix stuff" and "wip" make debugging impossible. A workflow with guardrails prevents these problems without slowing you down.
Branch Strategy
ZeroStarter uses canary (development) and main (production) branches. When you push to canary, a draft PR targeting main is created automatically. This ensures every production change goes through review.
Branch protection as code via GitHub rulesets:
```jsonc
// .github/rulesets/canary.json
{
  "name": "canary",
  "enforcement": "active",
  "rules": [
    { "type": "deletion" },
    { "type": "non_fast_forward" },
    {
      "type": "pull_request",
      "parameters": {
        "required_approving_review_count": 1,
        "dismiss_stale_reviews_on_push": true
      }
    }
  ]
}
```

Rulesets are version-controlled. Branch protection is documented. New team members understand the workflow by reading the config.
Commit Conventions
ZeroStarter enforces Conventional Commits:
feat(auth): add Google OAuth support
fix(api): handle null user in session
chore(deps): update dependencies
refactor(web): extract form validation hookBut your team might prefer ticket references, emoji conventions, or freeform messages. The practice is meaningful, consistent commit messages. The format is a team decision.
Code Quality Gates
The principle: Catch problems before they enter the codebase. A bug caught at commit time costs minutes. The same bug in production costs hours—plus customer trust.
Why it matters: Code review is for logic and architecture. Humans shouldn't spend time catching formatting issues, linting errors, or type mismatches—that's what automation is for.
How ZeroStarter approaches this
Lefthook runs checks before every commit:
```yaml
pre-commit:
  piped: true # Run in sequence, stop on first failure
  commands:
    audit:
      run: bun audit --audit-level high
    lint-staged:
      run: bunx lint-staged --verbose
      stage_fixed: true # Auto-stage formatted files
    build:
      run: bun run build

commit-msg:
  commands:
    commitlint:
      run: bunx commitlint --edit {1}
```

IDE configuration is shared via .vscode/settings.json:
```json
{
  "editor.defaultFormatter": "oxc.oxc-vscode",
  "editor.formatOnSave": true
}
```

Everyone uses the same formatter. Formatting debates are over. Code looks the same regardless of who wrote it.
What you can learn
The specific checks vary by project. The practice is automated quality gates. Where you place them (pre-commit, pre-push, CI only) depends on how fast they run and how much friction you'll accept.
See the full setup: The Code Quality documentation explains every tool, config file, and why they're configured that way.
Code Review Automation
The principle: Automate the tedious parts of code review. Let humans focus on logic, architecture, and edge cases.
Why it matters: Reviewers have limited attention. If they're spending time on "this file needs a newline at the end," they're not catching "this query could return stale data under load."
How ZeroStarter approaches this
CodeRabbit for AI-powered code review:
```yaml
# .coderabbit.yaml
reviews:
  collapse_walkthrough: false
  issue_enrichment:
    auto_enrich:
      enabled: true
  labeling:
    auto_apply_labels: true
```

Auto-labeling based on changed files:
```
# PR touches api/hono/          → gets @api/hono label
# PR touches package.json       → gets @dependencies label
# PR touches .github/workflows/ → gets @workflows label
```

Labels are created automatically. Reviews can be filtered by area. You know at a glance what a PR affects.
In-repo code reviews as reference:
```
.github/reviews/
└── 2025-12-26-cursor-composer-1.md # Detailed review with findings
```

Past reviews are documented. Patterns are captured. New team members learn from history.
What you can learn
Review automation varies:
- CodeRabbit, Codium: AI-powered review
- Danger.js: Custom automation rules
- CODEOWNERS: Route reviews to experts
- Labeler actions: Tag PRs by path
The practice is systematic, consistent code review. Automate what can be automated. Free humans for what requires judgment.
CI/CD Pipelines
The principle: Every change should pass automated checks. Machines verify formatting, types, and tests. Humans review logic and architecture.
Why it matters: Reviewers have limited attention. Don't waste it on "you forgot a semicolon" when automation can catch that.
How ZeroStarter approaches this
GitHub Actions runs on every PR:
```yaml
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: oven-sh/setup-bun@v2
      - run: bun install
      - run: bun audit --audit-level high
      - run: bun run lint
      - run: bun run build
```

PRs are automatically labeled based on changed files. Approval status is tracked (0/1 → APPROVED). You always know the state of every PR at a glance.
What you can learn
CI platforms abound. Common checks include linting, type checking, tests, builds, security scanning, and preview deployments. The practice is automated verification before merge.
Halfway there! You've learned the core development practices. Ready to see them in action? Clone ZeroStarter and explore the workflows firsthand.
SEO and Social Sharing
The principle: Your app should be discoverable. Search engines and social platforms should understand your content. This shouldn't require manual work for every page.
Why it matters: A blog post with no OG image gets ignored on Twitter. A site with no sitemap gets indexed slowly. Manual meta tags are forgotten. Automation ensures consistency.
How ZeroStarter approaches this
Auto-generated sitemap from content:
```ts
// web/next/src/app/sitemap.ts
export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const docsPages = docsSource.getPages()
  const blogPages = blogSource.getPages()
  return [
    { url: baseUrl, priority: 1 },
    ...docsPages.map((page) => ({ url: `${baseUrl}${page.url}`, priority: 0.9 })),
    ...blogPages.map((page) => ({ url: `${baseUrl}${page.url}`, priority: 0.9 })),
  ]
}
```

Add a doc or blog post? It's automatically in the sitemap. No manual updates needed.
robots.txt configuration:
```ts
export default function robots(): MetadataRoute.Robots {
  return {
    rules: [{ userAgent: "*", allow: "/", disallow: ["/api/", "/dashboard/"] }],
    sitemap: `${baseUrl}/sitemap.xml`,
  }
}
```

Dynamic OG images generated per page:
```tsx
// web/next/src/app/api/og/home/route.tsx
export async function GET() {
  return new ImageResponse(
    <div style={{ /* ... */ }}>
      <div>{config.app.name}</div>
      <div>{config.app.description}</div>
    </div>,
    { width: 1200, height: 630 },
  )
}
```

Share a page on social media? It has a proper preview image. No manual image creation.
What you can learn
SEO approaches vary:
- Framework built-ins: Next.js Metadata API, Nuxt SEO
- OG image libraries: @vercel/og, satori
- Sitemap generators: next-sitemap, manual
- Structured data: JSON-LD for rich snippets
The practice is automated SEO. Sitemaps, meta tags, and social images should generate themselves.
Dependency Management
The principle: Dependencies should be consistent and secure. The same package should have the same version everywhere.
Why it matters: Package A uses zod@3.22. Package B uses zod@3.23. They share types. Now you have subtle incompatibilities that only appear in specific combinations.
How ZeroStarter approaches this
Bun's catalog feature centralizes versions:
```jsonc
// Root package.json
{
  "catalog": {
    "zod": "^3.25.30",
    "hono": "^4.7.10"
  }
}

// Any workspace package.json
{
  "dependencies": {
    "zod": "catalog:",
    "hono": "catalog:"
  }
}
```

A script runs automatically on commits, migrating consistent versions to the catalog.
Shadcn update script keeps UI components current:
```bash
# .github/scripts/shadcn-update.sh
bunx shadcn@latest add -a   # Update all components
git restore package.json    # Keep our config
bun run format
```

UI components stay updated. Accessibility improvements flow in. Security patches are applied.
What you can learn
The practice is version consistency and automated updates. How you achieve it depends on your package manager.
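Detecting drift is mechanical: scan every workspace manifest and flag any dependency pinned at more than one version. This is a hypothetical sketch of that check; Bun's catalog solves the problem declaratively instead.

```typescript
type Manifest = { name: string; dependencies: Record<string, string> }

// Build a map of dependency -> set of version specifiers seen across
// workspaces, then report any dependency with more than one version.
function findVersionConflicts(manifests: Manifest[]): string[] {
  const seen = new Map<string, Set<string>>()
  for (const m of manifests) {
    for (const [dep, version] of Object.entries(m.dependencies)) {
      const versions = seen.get(dep) ?? new Set<string>()
      versions.add(version)
      seen.set(dep, versions)
    }
  }
  return [...seen.entries()]
    .filter(([, versions]) => versions.size > 1)
    .map(([dep]) => dep)
}
```

A check like this can run in CI and fail the build before subtle incompatibilities ship.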
Release Management
The principle: Releases should document themselves. Changelogs should be accurate. Version numbers should be meaningful.
Why it matters: Manual changelogs are incomplete. When something breaks in production, you need to know exactly what changed.
How ZeroStarter approaches this
When code merges to main, changelogen automatically generates a changelog:
```md
## v0.2.0

### 🚀 Features

- **auth**: add Google OAuth provider (a1b2c3d)

### 🐛 Bug Fixes

- **api**: handle null user in session (d4e5f6g)

### Contributors

@nrjdalal
```

The workflow bumps the version, updates CHANGELOG.md, and creates a PR to sync back.
What you can learn
The practice is automated, accurate release documentation. Whether you auto-generate or write by hand, the goal is knowing exactly what shipped.
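The changelog above falls out of the commit convention mechanically: parse each subject line, group by type, render sections. This is a simplified sketch of the idea, not how changelogen actually works.

```typescript
// Map commit types to changelog section titles; types not listed
// (chore, ci, ...) are omitted from the changelog.
const SECTIONS: Record<string, string> = { feat: "🚀 Features", fix: "🐛 Bug Fixes" }

function renderChangelog(version: string, commits: string[]): string {
  const grouped = new Map<string, string[]>()
  for (const msg of commits) {
    // "feat(auth): add X" -> type "feat", subject "add X"
    const match = /^(\w+)(?:\([^)]*\))?: (.+)/.exec(msg)
    const title = match ? SECTIONS[match[1]] : undefined
    if (!match || !title) continue
    grouped.set(title, [...(grouped.get(title) ?? []), match[2]])
  }
  const body = [...grouped.entries()]
    .map(([title, items]) => `### ${title}\n${items.map((i) => `- ${i}`).join("\n")}`)
    .join("\n")
  return `## ${version}\n${body}`
}
```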
Deployment
The principle: Deployments should be reproducible. "Works on my machine" shouldn't be an excuse.
Why it matters: Environment differences cause production-only bugs. Containerization eliminates these surprises.
How ZeroStarter approaches this
Docker Compose for local production testing:
```yaml
services:
  api:
    build:
      context: .
      dockerfile: api/hono/Dockerfile
    environment:
      - INTERNAL_API_URL=http://api:4000
  web:
    build:
      context: .
      dockerfile: web/next/Dockerfile
    environment:
      - INTERNAL_API_URL=http://api:4000
```

Services communicate via Docker's internal network. Server-side rendering connects to the API container directly.
Vercel for production: Frontend and backend deploy as separate projects from the same repository.
What you can learn
The practice is reproducible, automated deployments. Whether containers, serverless, or traditional—deployments should be consistent.
Deploy today: ZeroStarter includes ready-to-use Docker and Vercel configurations. See the Docker and Vercel deployment guides.
Developer Experience
The principle: Development should be fast and informative. Errors should be clear. The feedback loop should be tight.
Why it matters: Slow builds kill productivity. Cryptic errors waste hours. Good DX keeps developers in flow.
How ZeroStarter approaches this
Dev tools in non-production environments:
```tsx
// Shows viewport size, breakpoints, deployment links
{!isProduction(env.NODE_ENV) && <DevTools />}
```

The component shows the current breakpoint (SM, MD, LG), viewport dimensions, and quick links to Vercel deployments. Responsive debugging is instant.
React Query DevTools for data debugging:
```tsx
<ReactQueryDevtools buttonPosition="top-right" />
```

See every query, its state, cache status, and refetch triggers. Data issues are visible.
Centralized config with feature flags:
```ts
// web/next/src/lib/config.ts
export const config = {
  app: {
    name: "ZeroStarter",
    url: env.NEXT_PUBLIC_APP_URL,
  },
  features: {
    // enableAnalytics: env.NEXT_PUBLIC_ENABLE_ANALYTICS === "true",
  },
  sidebar: {
    groups: [
      /* navigation structure */
    ],
  },
}
```

App name in one place. Feature flags centralized. Navigation structure defined in code.
What you can learn
DX tools vary:
- Dev tools: Custom components, browser extensions
- Hot reloading: Vite, Next.js Fast Refresh
- Error overlays: Better error messages in development
- Mock servers: MSW for API mocking
The practice is optimized developer experience. Fast feedback. Clear errors. Tools that help rather than hinder.
AI-Assisted Development
The principle: AI tools are only as good as the context you give them. A well-structured codebase produces better AI suggestions.
Why it matters: Ask AI to "add an API endpoint" in a messy codebase, and you get messy code. Ask the same in a structured codebase with consistent patterns, and the AI follows those patterns.
How ZeroStarter approaches this
AGENTS.md tells AI tools about project structure.
.skills/ contains structured instructions:
```
.skills/
├── git/commit/SKILL.md                  # How to write commits
└── turbo/generate-build-graph/SKILL.md
```

llms.txt auto-generates documentation optimized for AI:
| Endpoint | Content |
|---|---|
| /llms.txt | Documentation index with links |
| /llms-full.txt | Complete docs in one file |
Point an AI at the full docs and it understands your entire project.
What you can learn
The practice is intentional context for AI tools. Whether .skills/, AGENTS.md, or well-structured code—the goal is helping AI understand your patterns.
AI-ready from day one: ZeroStarter's documentation is available at /llms.txt - point your AI assistant there and it understands the entire project.
Getting Started
```bash
# Clone the template
bunx gitpick https://github.com/nrjdalal/zerostarter/tree/main
cd zerostarter

# Install dependencies
bun install

# Set up environment
cp .env.example .env # Edit with your values

# Initialize database
bun run db:generate
bun run db:migrate

# Start development
bun dev
```

Frontend runs on localhost:3000. Backend runs on localhost:4000. Types flow end-to-end. Commits are validated. Quality gates are active.
Explore the code. Read the patterns. Modify them for your needs.
This is just the beginning. Every practice in this article is implemented in ZeroStarter, fully documented, and ready for you to learn from. The complete documentation covers installation, architecture, deployment, and more.
- Documentation: zerostarter.dev/docs
- AI-Ready Docs: zerostarter.dev/llms.txt
- GitHub: github.com/nrjdalal/zerostarter
- Updates: @nrjdalal
MIT Licensed. Learn from it. Modify it. Make it yours.