I spent way too long crafting prompts. Adding context, tweaking tone, writing examples, then getting frustrated when the AI still misunderstood what I wanted.
Then I realized something obvious: AI models are really good at prompt engineering. They know exactly what context they need, what format works best, and how to structure instructions for their own processing.
So I started asking them to write their own prompts.
Here's What I Actually Do
Instead of spending 20 minutes crafting instructions, I spend 30 seconds asking the AI to help me write better ones:
"I need a prompt for [specific task]. What information would you need, and how should I structure this request?"
Then:
"Write me the actual prompt I should use."
The results are immediately better than anything I used to write myself.
Examples of AI-Generated Prompts
Case 1: After a Long Conversation
Imagine spending 30 minutes debugging a complex Nix configuration with an AI assistant. Tons of back-and-forth, error messages, system details. When you need to do something similar later, you can distill all that knowledge into a better prompt.
What I asked:
"Based on everything we just figured out about my Nix setup, write me a prompt I can use next time to get straight to the solution without all this debugging."
What Claude gave me:
I need help debugging my configuration with these details:
Context: [Language/Framework] project configuration
Issue: Circular dependency errors when loading modules
Project Structure:
- Main config file at root
- Module system with nested imports
- Dependencies managed through [package manager]
What's working:
- Direct imports function correctly
- Single-level modules load without issues
- Package versions are locked and consistent
Need: Configuration pattern that avoids circular dependencies while maintaining modular structure.
This kind of prompt saves hours of debugging. All the context distilled into exactly what matters.
Case 2: Capturing Your Working Style
After working with AI to write several blog posts, you can capture that developed style for future posts.
What I asked:
"You've learned how I write blog posts - the ADHD-friendly style, personal hooks, practical focus. Write a prompt that would make you write in this exact style again."
The result becomes a reusable style guide:
Write technical content with these characteristics:
- Personal hook addressing common frustration
- Conversational, accessible tone
- Short paragraphs and sentences
- Include time estimates for tasks
- Number steps clearly (1 of 3, etc.)
- Practical examples with code snippets
- Alternative approaches for different setups
- Clear next-steps conclusion
Topic: [technical subject]
Problem: [specific pain point]
Solution: [approach or tool]
Now every post can hit the same authentic voice without explaining it each time.
Case 3: Complex Code Generation
Say you're building a CLI tool with specific error-handling patterns developed over multiple iterations.
Instead of vague instructions:
"Write Python code with good error handling"
Ask the AI to write its own prompt:
"I need Python CLI code that follows these patterns: early returns, specific error types, user-friendly messages. Write the prompt you'd need to generate code in this style."
AI-generated prompt:
Create a function with these specifications:
Function: Process and validate user input
Language: [Python/JavaScript/etc.]
Requirements:
- Handle multiple input formats
- Validate according to rules
- Return standardized output
- Provide clear error messages
Edge cases to clarify:
- How to handle ambiguous inputs?
- Should partial/incomplete data be accepted?
- What's the default behavior if options aren't specified?
- Should the function be permissive or strict?
Include examples showing:
- Basic usage
- Error handling
- Edge case behavior
Notice those edge-case questions? AI models are great at surfacing ambiguities like dates written 01/02/03. You can then add your decisions directly to the prompt:
IMPORTANT clarifications:
- What should happen with ambiguous dates like 01/02/03? → Assume US format (MM/DD/YY)
- Should partial dates (just year, or year-month) be accepted? → Yes, default to first day
- Default output format if none specified? → ISO format (YYYY-MM-DD)
- Should relative dates like "yesterday" work? → No, explicit dates only
As you discover more edge cases, keep adding to the prompt. It evolves with your understanding. Every function can follow the pattern perfectly.
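Once those decisions are pinned down, generated code can encode them directly. Here's a minimal Python sketch of what a function following those rules might look like (my own illustration of the pattern, not actual model output):

```python
from datetime import date
import re

def parse_date(text: str) -> str:
    """Parse a date string into ISO format (YYYY-MM-DD).

    Encodes the clarified edge-case decisions:
    - ambiguous slash dates are treated as US MM/DD/YY(YY)
    - partial dates (year, or year-month) default to the first day
    - relative dates like "yesterday" are rejected
    """
    text = text.strip()
    # Relative dates: explicitly rejected per the clarifications.
    if text.lower() in {"today", "yesterday", "tomorrow"}:
        raise ValueError(f"Relative dates not supported: {text!r}")
    # Year only -> first day of that year.
    if re.fullmatch(r"\d{4}", text):
        return date(int(text), 1, 1).isoformat()
    # Year-month -> first day of that month.
    m = re.fullmatch(r"(\d{4})-(\d{1,2})", text)
    if m:
        return date(int(m.group(1)), int(m.group(2)), 1).isoformat()
    # Slash dates: assume US order, MM/DD/YY or MM/DD/YYYY.
    m = re.fullmatch(r"(\d{1,2})/(\d{1,2})/(\d{2,4})", text)
    if m:
        month, day, year = int(m.group(1)), int(m.group(2)), int(m.group(3))
        if year < 100:
            year += 2000
        return date(year, month, day).isoformat()
    raise ValueError(f"Unrecognized date format: {text!r}")
```

Every decision from the prompt shows up as a branch with a comment, so the code and the prompt stay in sync as new edge cases appear.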
Case 4: Research Prompts for Perplexity
For deep research on technical topics, instead of crafting search prompts manually, AI can write ones optimized for research.
What I asked:
"I need to research WebRTC implementation patterns. Write me a Perplexity prompt that will get comprehensive, technical results with actual code examples."
AI-optimized research prompt:
Technical research query: [Technology] implementation patterns
Focus areas:
1. Production architecture patterns
2. Performance optimization strategies
3. Real-world scaling approaches
4. Security best practices
Include:
- Code examples from production systems
- Benchmark data and metrics
- Common problems and solutions
- Library/framework comparisons
Exclude: Beginner tutorials, basic definitions
The search results were exactly what I needed - no fluff, all substance.
Case 5: Optimizing for Different Model Sizes
This is a powerful technique for when you're using smaller, faster models on quick tasks that need very specific prompting.
Ask a larger model:
"I'm using a smaller, faster model with limited context. Help me write prompts that work well within these constraints for code tasks."
The AI provides this template:
Format: [language]
Task: [one specific thing]
Input: [code]
Style: [2-3 word description]
Example for Python formatting:
Format: Python
Task: Add type hints
Input: def calculate(x, y): return x + y
Style: Modern Python
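With a prompt that constrained, even a small model has little room to wander. Its reply might look like this (a plausible response, not verbatim output from any particular model):

```python
# "Add type hints" applied to the one-liner from the template.
def calculate(x: int, y: int) -> int:
    return x + y
```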
Now smaller models give exactly what you want, every time. Use larger models to craft prompts for smaller ones.
Case 6: Cross-Model Templates
Different models have different strengths. Keep prompt templates for each:
For code models:
"Write me a prompt for Codestral that leverages its strength in [specific language/framework]"
For research models:
"Write me a Perplexity prompt that will find recent, technical information about [topic]"
For fast local models:
"Write me a prompt for [small model] that works within its token limits but gets [specific result]"
Use the smartest model you have access to as your prompt engineering assistant. It's like having a translator who speaks both human and AI.
Case 7: Working with Limited Context Windows
Running models locally? Using a model with limited context? Bigger models understand how to work within constraints.
Ask a model with larger context:
"I'm using a local model with limited context window (4K tokens). I need to process a large codebase. Write prompts that work within this limitation."
AI's context-aware approach:
For limited context models, use this chunking strategy:
Prompt 1 - Analysis:
"Analyze this code section for refactoring opportunities. List only function names and issues. Be extremely concise.
[code chunk]"
Prompt 2 - Specific refactor:
"Refactor only the function 'processData()' with these goals:
1. Extract validation logic
2. Improve error handling
Return only the refactored function."
Prompt 3 - Integration check:
"Given this refactored function, write minimal tests:
[new function]"
Instead of trying to fit everything in one prompt, the AI designs a workflow that respects the model's limits.
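To make the chunking concrete, here's a rough Python sketch of stage 1: splitting a codebase at line boundaries so each piece fits the budget, then wrapping each chunk in the terse analysis prompt. The helper names are mine and purely illustrative.

```python
def chunk_lines(source: str, max_chars: int = 3000) -> list[str]:
    """Split source code into chunks under a character budget,
    breaking only at line boundaries so functions stay readable."""
    chunks: list[str] = []
    current = ""
    for line in source.splitlines(keepends=True):
        if current and len(current) + len(line) > max_chars:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks

def analysis_prompt(chunk: str) -> str:
    """Stage 1 prompt: terse analysis for a limited-context model."""
    return (
        "Analyze this code section for refactoring opportunities. "
        "List only function names and issues. Be extremely concise.\n\n"
        + chunk
    )
```

You'd feed each `analysis_prompt(chunk)` to the local model in turn, collect the function names it flags, and only then run the targeted stage 2 refactor prompts.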
The Iteration Advantage
Here's the thing - bigger models are better at understanding your messy human explanations. They get tone, context, and what you really mean even when you explain it poorly.
After a few iterations with advanced models, they learn:
- Your coding style preferences
- How technical you want responses
- Your specific use cases
- What "good" looks like to you
Then they can encode all of that understanding into prompts for other models. It's like having a senior developer write documentation for juniors.
Why This Works
Bigger models have three key advantages for prompt writing:
- Context Understanding - They grasp nuance, tone, and implicit requirements
- Pattern Recognition - They've seen millions of prompts and know what works
- Translation Ability - They can convert human intent into model-speak
It's like asking a chef to write their own recipe instead of guessing what ingredients they need.
The Go-To Meta-Prompt
For complex tasks, use this:
I want to accomplish [goal]. Write a prompt that gives you:
1. All context you need to understand the task
2. Exact output format I want
3. Examples that improve results
4. Common edge cases to avoid
Then explain why you structured it that way.
Works for code reviews, content creation, data analysis, system design - anything where you need nuanced understanding.
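If you end up reusing the meta-prompt a lot, it's easy to wrap as a tiny helper so the only thing you type is the goal (trivial sketch, illustrative only):

```python
def meta_prompt(goal: str) -> str:
    """Build the go-to meta-prompt for an arbitrary goal."""
    return (
        f"I want to accomplish {goal}. Write a prompt that gives you:\n"
        "1. All context you need to understand the task\n"
        "2. Exact output format I want\n"
        "3. Examples that improve results\n"
        "4. Common edge cases to avoid\n"
        "Then explain why you structured it that way."
    )
```

Send the result to whichever model you're using, then save its reply as your reusable template.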
What You Get
- Prompts that actually work first try
- Better results with less effort
- Templates you can reuse
- Understanding of what makes prompts effective
This works with any AI assistant. GPT, Claude, Gemini, whatever you're using.
Try It Right Now
Next time you need AI help:
1. Describe what you want in plain English
2. Ask the AI to write a better prompt for that goal
3. Use what it gives you
Stop trying to outsmart AI. Just ask it to help you communicate better with it.
Photo by Logan Voss on Unsplash
Content on this blog was created using human and AI-assisted workflows described here. Original ideas and editorial decisions by Justin Quaintance.