Vibe coding — writing software by describing intent to an AI and iterating on its output — has gone from Reddit meme to mainstream professional workflow in 18 months. Cursor crossed 1 million developers. GitHub Copilot passed 2 million subscribers. But the gap between developers who use these tools productively and those who create AI-generated spaghetti is enormous, and it comes down to a small number of learnable patterns.
⚡ TL;DR: Effective vibe coding requires a .cursorrules file that encodes your architecture conventions, prompts that specify constraints as well as goals, reading every line before committing, and a "distrust, then verify" mental model. The AI is a fast junior dev who has read every StackOverflow answer but never shipped to production.
The .cursorrules file — your architecture encoded in AI instructions
# .cursorrules (place in project root — Cursor reads this automatically)
# This is the single highest-leverage thing you can do for AI-assisted development
## Project Architecture
- This is a Node.js 20 / TypeScript 5.3 monorepo
- API layer: Fastify (NOT Express) — use Fastify plugins, not Express middleware
- Database: PostgreSQL via Prisma ORM — never write raw SQL unless explicitly asked
- Auth: JWT with refresh tokens — see /lib/auth for patterns
## Code Style Rules
- Functions max 40 lines — extract helpers if longer
- No any types — use unknown and narrow, or define proper interfaces
- Error handling: always use Result pattern from /lib/result.ts
- No console.log in production code — use logger from /lib/logger.ts
## What NOT to do
- Never use require() — this is ESM only
- Never hardcode configuration — use /config/index.ts
- Never throw errors in async functions — return Result type
- Never import directly from node_modules in tests — use dependency injection
## Testing
- Jest for unit tests, Supertest for integration
- Test files: *.test.ts co-located with source
- Coverage target: 80% for /services/, 60% for /routes/
## When I say "add feature X":
1. Check if similar patterns exist in /src — follow them
2. Add types first, implementation second
3. Write the test before the implementation
4. Update /docs/api.md if adding a new endpoint
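The rules above lean heavily on a Result pattern from /lib/result.ts. That file is project-specific and not shown in this post, but a minimal sketch of the idea (hypothetical names, not the actual file) looks like this:

```typescript
// Hypothetical minimal version of the Result pattern the rules reference.
// The real /lib/result.ts in your project may differ.
type Ok<T> = { ok: true; value: T };
type Err<E> = { ok: false; error: E };
type Result<T, E> = Ok<T> | Err<E>;

const ok = <T>(value: T): Ok<T> => ({ ok: true, value });
const err = <E>(error: E): Err<E> => ({ ok: false, error });

// Per the rules: async functions return a Result instead of throwing.
async function findUser(id: string): Promise<Result<{ id: string }, string>> {
  if (id.length === 0) return err("missing id");
  return ok({ id });
}
```

Because the failure case is in the type signature, neither the AI nor the compiler can silently ignore it, which is exactly why the rules ban throwing from async functions.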
Prompting for constraints, not just goals
# Bad prompt — goal only, no constraints
"Add user authentication to the API"
# Result: AI picks bcrypt + JWT + whatever Express patterns it has seen most
# Problem: conflicts with your existing architecture, introduces 3 new dependencies
# Good prompt — goal + constraints + context
"Add email/password authentication to POST /api/auth/login using:
- The existing User model in /prisma/schema.prisma
- JWT signing via /lib/auth.ts (already set up)
- Return Result<{token: string, user: UserDTO}, AuthError> (see /lib/result.ts)
- Rate limit: max 5 attempts per IP per 15 minutes (use existing rate limiter in /middleware/)
- Do NOT add new dependencies
- Tests in /src/routes/auth.test.ts"
# The AI now has specific guardrails:
# - Existing architecture to follow
# - Explicit list of what NOT to do
# - Test requirement forces it to think about interface first
# - No new deps removes the most common source of AI-generated bloat
Context management in Cursor — what to include
# Cursor @ mentions — what to include in context for different tasks
# Adding a new feature:
@file existing-similar-feature.ts # Show the pattern to follow
@file types.ts # Relevant type definitions
@file schema.prisma # Database schema for context
# Prompt: "Add [feature] following the exact same pattern as [existing-similar-feature]"
# Debugging a specific error:
@file broken-file.ts # The file with the error
@terminal # Include the error message from terminal
# Prompt: "Fix the TypeScript error on line 47. The error is: [paste error]"
# Refactoring:
@folder /src/services/ # All files to refactor
# Prompt: "Extract the email sending logic into a separate EmailService class.
# The service should be injectable (see UserService for the pattern).
# Do not change any tests."
# What NOT to include:
# - Your entire codebase (context pollution — quality drops dramatically)
# - Unrelated files (AI will try to "be helpful" and change them)
# - node_modules (waste of context window)
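The refactoring prompt above asks for an "injectable" EmailService. If your codebase does not yet have that pattern, a minimal constructor-injection sketch (hypothetical names; the UserService it mirrors is not shown here) looks like:

```typescript
// Constructor injection: the transport is passed in, so tests can supply a fake
// instead of a real mail provider.
interface MailTransport {
  send(to: string, subject: string, body: string): Promise<void>;
}

class EmailService {
  constructor(private readonly transport: MailTransport) {}

  async sendWelcome(to: string): Promise<void> {
    await this.transport.send(to, "Welcome!", "Thanks for signing up.");
  }
}
```

In tests, you pass a fake transport that records calls instead of sending mail, which is what makes "do not change any tests" a realistic constraint.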
The review workflow — never commit AI code without this
# Required review checklist before committing AI-generated code:
# 1. Run: git diff --stat
# If AI touched files you did not expect — investigate before accepting
# 2. For every new function/method, ask:
# - Does this handle the null/undefined case?
# - Does this handle the async error case?
# - Is there an N+1 query hiding in a loop?
# 3. Check for AI hallucination patterns:
# - Import from a path that does not exist
# - Using an API that does not exist in this version
# - Type assertion (as SomeType) that hides a real type error
# - Copying a pattern from a different framework incorrectly
# 4. Run the tests:
# npm test -- --testPathPattern=affected-file
#    If AI wrote the tests, occasionally verify them with mutation testing
#    (e.g. StrykerJS): AI-written tests that never fail catch nothing
# 5. Check for security issues AI commonly misses:
# - SQL injection via template literals
# - Missing authentication on new routes
# - Secrets hardcoded (AI sometimes uses example values)
# - Missing input validation on new request handlers
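Hallucinated import paths (check #3) are mechanical enough to catch with a script. Real tooling (tsc, eslint-plugin-import) does this properly; this toy sketch just shows the idea, checking relative imports against a known file list:

```typescript
// Toy check: flag relative imports that do not match any known project file.
// Illustrative only; a compiler run is the authoritative version of this check.
function findPhantomImports(source: string, knownFiles: Set<string>): string[] {
  const importRe = /from\s+["'](\.{1,2}\/[^"']+)["']/g;
  const phantoms: string[] = [];
  for (const match of source.matchAll(importRe)) {
    if (!knownFiles.has(match[1])) phantoms.push(match[1]);
  }
  return phantoms;
}
```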
The tools ranked for different tasks
- ✅ Cursor — best for greenfield features, refactoring across files, and complex multi-file edits. Composer mode is exceptional for “make this whole thing work” tasks.
- ✅ Claude.ai / API — best for architecture decisions, explaining complex code, writing tests for legacy code, and anything requiring real reasoning about your system.
- ✅ GitHub Copilot — best for line-by-line completion and boilerplate in an existing codebase you know well.
- ✅ Claude Code (CLI) — best for tasks that span multiple files and need shell access: refactoring, running tests, fixing CI failures.
- ❌ None of these replace code review — they generate plausible-looking code that may be subtly wrong.
Vibe coding at scale pairs naturally with TypeScript generics — strong types are the best guardrail against AI generating subtly incorrect code. For generating boilerplate at the infrastructure level, the Lambda Layers guide shows the kind of repetitive infrastructure code that AI handles extremely well. External reference: Cursor rules documentation.
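As a tiny illustration of the generics-as-guardrail point (this example is mine, not from the linked guide): a generic constraint pins down a relationship between types that AI-generated call sites cannot fudge.

```typescript
// The constraint forces every element to carry an id, so an AI-generated call
// with the wrong shape fails at compile time rather than at runtime.
function mapIds<T extends { id: string }>(items: T[]): string[] {
  return items.map((item) => item.id);
}

const ids = mapIds([{ id: "a" }, { id: "b" }]);
// mapIds([{ name: "a" }]);  // <- would be a compile-time error
```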