Make AI Do the Boring Stuff (Tests and Docs Are Free Now)

By Justin Quaintance on Nov 29, 2025

Tests and documentation are important. Everyone knows this.

They're also boring as hell to write. Which is why most of us skip them.

But here's what changed: AI writes tests and documentation for free now. Like, actually free. You're already paying $20/month for Claude or Cursor to write code. Making it write tests and docs costs you one extra prompt.

So I stopped skipping the boring stuff. I just make my tools do it.

Here's What I Actually Do

Every time AI writes code for me, I make it write two more things. Takes maybe 30 seconds of prompting total.

Two types of guard rails:

  • Tests - Verify the code actually works
  • Documentation - Explain what it does and why

Both are basically free when AI generates them. Both save you later when you're trying to figure out what this code does and whether it's working correctly.

What You Get

  • Code that provably works (tests verify it immediately)
  • Understanding of what AI built (docs explain the approach)
  • Faster debugging (failing tests show exactly what broke)
  • Context for future sessions (you or AI can read the docs later)
  • Confidence to ship (you know it works, not just hope)

Real Examples of Guard Rails

Test Suites That Actually Help

When AI writes a function, ask it to write tests immediately:

Me: "Write tests for this function. Cover edge cases."

AI: *generates comprehensive test suite with cases you didn't think of*

The tests catch bugs before production. They also document what the function is supposed to do.

Example prompt I use:

"Write a test suite for this code. Include:
- Happy path tests
- Edge cases (empty input, null, boundary conditions)
- Error handling tests
- Integration test if it touches external systems"

AI is surprisingly good at finding edge cases. It thinks of scenarios you forgot about - empty arrays, null values, timezone edge cases, stuff you would have discovered in production.
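Here's roughly the shape of what comes back from a prompt like that. The function and names are invented for illustration (a hypothetical `parse_quantity` helper), but the structure - happy path, edge cases, error handling - is the point:

```python
# Hypothetical function under test: parses a quantity string like "3" into an int.
def parse_quantity(text):
    if text is None or text.strip() == "":
        raise ValueError("empty input")
    value = int(text)  # int() raises ValueError on malformed input like "abc"
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

# Happy path
def test_parses_plain_number():
    assert parse_quantity("3") == 3

# Edge cases: whitespace, boundary value
def test_strips_whitespace():
    assert parse_quantity(" 7 ") == 7

def test_zero_is_allowed():
    assert parse_quantity("0") == 0

# Error handling: empty, None, and negative input all raise
def test_rejects_empty_and_none():
    for bad in ("", "   ", None):
        try:
            parse_quantity(bad)
            assert False, "expected ValueError"
        except ValueError:
            pass

def test_rejects_negative():
    try:
        parse_quantity("-1")
        assert False, "expected ValueError"
    except ValueError:
        pass

for test in (test_parses_plain_number, test_strips_whitespace, test_zero_is_allowed,
             test_rejects_empty_and_none, test_rejects_negative):
    test()
print("all tests passed")
```

Notice the None and whitespace-only cases. Those are exactly the ones I tend to forget.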

Documentation That Makes Sense

Not the "this function adds two numbers" stuff that passes linters but helps nobody. Useful documentation that explains decisions:

Me: "Document this module. Explain the design decisions and gotchas."

AI: *writes explanation of why the code works this way and what to watch for*

Types of docs I request:

  • Architecture docs - How the system fits together
  • Decision logs - Why we chose this approach over alternatives
  • Gotchas - Things that will trip people up
  • Usage examples - Working code snippets you can copy

Passing Context Between Sessions

Here's where documentation becomes especially useful. AI doesn't remember previous conversations. But it CAN read documentation.

This means you can have different AI sessions build on each other's work:

Session 1: AI writes code + documentation
Session 2: Different AI instance reads the docs, continues building
Session 3: Tests verify nothing broke along the way

The documentation provides context across sessions. Tests verify the code still works.

Example I use regularly:

"Read the documentation from the auth module.
Now implement the rate limiting feature mentioned in the TODO.
Write tests first, then the implementation."

AI reads its previous documentation and builds on it. Guard rails (tests and docs) let you maintain continuity across sessions instead of re-explaining everything every time.

Test-First Development with AI

This feels backwards but works incredibly well:

Me: "Write tests for a function that validates email addresses.
Include tests for common invalid formats."

AI: *writes comprehensive test suite*

Me: "Now write the function that makes these tests pass."

AI: *implements exactly what the tests require*

The tests define the contract. The AI just fills in the implementation. You get exactly what you specified.
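Here's a minimal sketch of that flow. Everything in it is hypothetical (the `is_valid_email` name, the regex, the specific cases), and real email validation is far messier than this, but it shows tests defining the contract before the implementation exists:

```python
import re

# Step 1: tests written first. They are the contract for a hypothetical
# is_valid_email() that doesn't exist yet.
VALID = ["user@example.com", "first.last@sub.domain.org", "a+tag@example.co"]
INVALID = ["", "no-at-sign", "@missing-local.com", "user@",
           "user@@double.com", "spaces in@example.com", "user@no-tld"]

def run_tests():
    for email in VALID:
        assert is_valid_email(email), f"should accept {email!r}"
    for email in INVALID:
        assert not is_valid_email(email), f"should reject {email!r}"

# Step 2: implementation written second, shaped entirely by the tests above.
# Deliberately simple; full RFC 5322 email validation is much hairier.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$")

def is_valid_email(text):
    return bool(EMAIL_RE.match(text))

run_tests()
print("contract satisfied")
```

If the implementation passes every case, you got what you asked for. If it doesn't, the failing test tells you exactly where the conversation continues.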

Documentation for Future Sessions

AI doesn't remember previous conversations (unless you're in the same session). Documentation bridges the gap:

// docs/api-design.md
## Rate Limiting Strategy

We use a token bucket algorithm because:
1. Allows burst traffic (better UX)
2. Simple to implement
3. Works with distributed systems

Implementation in src/ratelimit.ts

TODO: Add Redis backend for multi-server deployments

Next time you (or AI) touch this code, context is right there. No re-explaining the entire system.
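For illustration, here's a minimal sketch of what that documented strategy might look like. This is an assumption, not the actual src/ratelimit.ts implementation (it's Python, and the names are invented), but it matches the doc's reasoning: a full bucket allows bursts, and tokens refill at a steady rate:

```python
import time

class TokenBucket:
    """Sketch of the token bucket described in the doc above.
    A full bucket allows bursts; tokens refill at a steady per-second rate."""

    def __init__(self, capacity, refill_per_sec, clock=time.monotonic):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)  # start full so bursts work immediately
        self.clock = clock             # injectable clock makes tests deterministic
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Deterministic check with a fake clock: burst of 3, then blocked, then refill.
t = 0.0
bucket = TokenBucket(capacity=3, refill_per_sec=1, clock=lambda: t)
assert [bucket.allow() for _ in range(4)] == [True, True, True, False]
t = 2.0  # two seconds "pass": two tokens refill
assert bucket.allow() and bucket.allow() and not bucket.allow()
```

The injectable clock is the guard-rail-friendly design choice here: time-based code is miserable to test against the real clock, trivial to test against a fake one.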

"But I Don't Have Time for Tests and Documentation"

That used to be true. Writing tests and docs took time you didn't have.

But now? AI writes them in seconds. You're already waiting for AI to generate code anyway. Adding tests and docs to that same request costs you basically nothing.

Here's the thing: When you write code yourself, you have a decent sense of whether it works. You typed every line and thought through the logic.

When AI writes code, you're reviewing someone else's work. Tests tell you immediately if the code actually works. Documentation tells you what it's supposed to do.

You're already paying for AI tools. Making them write tests and docs costs one extra sentence in your prompt. Not using that is leaving free value on the table.

How to Make This Automatic

Option 1: Always ask for tests

Every time AI writes code, immediately follow up:

"Write tests for this."

That's it. Make it a habit.

Option 2: Tests first, then code

Describe what you want in tests, then have AI implement:

"Write tests that verify [behavior]. Then implement the code."

AI writes the contract (tests), then the implementation. Everything is verified from the start.

Option 3: Documentation-driven development

Have AI write the documentation first, then implement from the docs:

"Write documentation explaining how a [feature] should work.
Include examples and edge cases. Then implement it."

Thinking through the docs forces clarity. Implementation follows naturally.
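A tiny sketch of what docs-first looks like in practice. The `slugify` function is invented for illustration; the point is that the spec in the docstring came first, and the implementation just satisfies it:

```python
import re

def slugify(title):
    """Turn a post title into a URL slug.

    The spec below was written first (documentation-driven), then implemented:
    - lowercase everything:                  "Hello World" -> "hello-world"
    - collapse non-alphanumeric runs to one hyphen: "a & b" -> "a-b"
    - trim leading/trailing hyphens:         "  hi!  " -> "hi"
    - edge case: empty or all-punctuation input -> "" (caller picks a fallback)
    """
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The examples in the docstring double as tests.
assert slugify("Hello World") == "hello-world"
assert slugify("a & b") == "a-b"
assert slugify("  hi!  ") == "hi"
assert slugify("!!!") == ""
```

Writing the examples in the docstring first forced decisions (what happens to all-punctuation input?) before any code existed.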

Guard Rails Make AI Better

Here's what I've noticed: AI with guard rails is WAY more effective than AI without them.

Without guard rails:

  • AI generates code
  • You review it manually
  • Maybe it works, maybe it doesn't
  • Bugs show up later
  • You forget what the code does

With guard rails:

  • AI generates code + tests + docs
  • Tests verify it works immediately
  • Docs explain what and why
  • Next session reads the docs and continues
  • Tests catch regressions automatically

The guard rails create a feedback loop. The AI understands its own work better. You understand it better. Future AI sessions understand it better.

It's like having a team that documents and tests everything - except the team is you + AI, and the overhead is basically zero.

What This Actually Looks Like

Real example from my codebase:

Me: "Build a markdown parser that handles code blocks and links."

AI: *generates implementation*

Me: "Write tests covering edge cases - nested blocks, malformed links,
     empty input, mixed content."

AI: *generates comprehensive test suite*

*Tests fail on edge case: nested code blocks*

Me: "Fix the implementation to handle nested code blocks."

AI: *updates code, tests pass*

Me: "Document how the parser handles nesting and why."

AI: *writes clear explanation in comments and README*

Total time: Maybe 5 minutes. Result: Working code with tests and documentation. No bugs in production.
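To make the "edge cases" step concrete, here's a toy version. The `extract_links` helper is hypothetical and far simpler than a real markdown parser, but the asserts show the kind of cases worth naming explicitly in the prompt: empty input, malformed links, links hiding inside code blocks:

```python
import re

# Hypothetical minimal extractor: pulls [text](url) links out of markdown,
# ignoring anything inside fenced code blocks. A toy, not a real parser.
FENCE_RE = re.compile(r"`{3}.*?`{3}", re.S)  # matches fenced code blocks
LINK_RE = re.compile(r"\[([^\]]*)\]\(([^)\s]+)\)")

def extract_links(md):
    outside_code = FENCE_RE.sub("", md)  # strip fenced blocks first
    return LINK_RE.findall(outside_code)

# The kinds of edge cases worth asking for explicitly:
fence = "`" * 3  # three backticks, built up to avoid nesting fences here
assert extract_links("") == []                                  # empty input
assert extract_links("[a](x) text [b](y)") == [("a", "x"), ("b", "y")]
assert extract_links("[broken](no-close") == []                 # malformed link
assert extract_links(fence + "\n[in code](u)\n" + fence) == []  # link in code
print("edge cases covered")
```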

Why This Actually Matters

Some people worry AI will make them worse developers. Like they're cutting corners by letting AI help.

That's backwards.

AI with guard rails makes you more rigorous, not less. Because tests and docs are so cheap to generate now that you actually create them.

Before AI: "I'll write tests when I have time" (never). Documentation gets outdated. Code mostly works.

With AI + guard rails: Tests and docs are free, so you actually make them. Your code is more reliable because you verify everything.

The developers who get the most from AI are the ones who use it to add structure, not skip steps. Guard rails let you move fast without breaking things.

You're paying for these tools anyway. Use them for the boring stuff. Your future self will thank you when the tests catch a bug at 2pm instead of 2am.

Start Right Now

Takes 2 minutes:

1. Next time AI writes code, ask it to write tests too
2. Run the tests, see if they pass
3. If tests fail, have AI fix the code
4. Ask for documentation explaining what you just built

That's it. You now have code that works (proven by tests) and that you understand (explained by docs).

You're already paying for these tools. Make them do the boring work. Your tests will catch bugs. Your documentation will save time. Both are free.

Be effective. Not proud.

Photo by Romain Dancre on Unsplash

Content on this blog was created using human and AI-assisted workflows described here. Original ideas and editorial decisions by Justin Quaintance.