7 Prompt Patterns That Transformed How I Use LLMs
I use these techniques every day, and my prompts are on steroids...
Today, as every day this week, I'm posting and sharing my knowledge on Reddit. I want to keep a 7-day streak there, as I see a lot of people find that content extremely valuable.
I've posted 4 Prompt Patterns That Transformed How I Use LLMs on Reddit, but since you're part of my community, I'd love to share more knowledge with you.
AI30 v2 is coming soon. I'm going to ship that by the end of this week, where you can access resources, see guides and watch tutorials. Here's a sneak peek of how it will look 👀
I'll be offering both free and paid memberships, with the paid tier at $39.99/month. Or, you can grab a founding member seat now (only 9 left) for $99.99, which gives you lifetime access to all my pro content on AI30 as well as access to the private community.

7 Prompt Patterns That Transformed How I Use LLMs
Ever felt like you're not getting the most out of LLMs? You're not alone. Many users interact with LLMs as simple Q&A tools. With the right prompting techniques, you can transform models into expert collaborators tailored to your specific needs.
In this post, I would like to share 7 powerful patterns (for subscribers only) that inject steroids into LLMs and transform them from general assistants to domain experts. Each pattern includes practical examples you can copy, paste, and modify immediately for your own use cases (I'm mostly against copy/pasted prompts, but I just want you to experiment with them so you can see the real POWER).
1. The Chain of Thought (CoT) Pattern
Chain of Thought (CoT) prompting is a technique that guides models to break down complex tasks/problems into sequential steps. By explicitly asking the model to work through problems step by step, you can dramatically improve its reasoning and accuracy.
What's the magic behind it?
LLMs, like humans, perform better on complex tasks when they think methodically rather than jumping right to a conclusion. This pattern reduces errors by making each step of the reasoning process transparent and reviewable.
Example prompt:
I need to solve this probability question: In a group of 70 people, 40 like chocolate, 35 like vanilla, and 20 like both. How many people don't like either flavor?
Please solve this step by step, showing all of your work and reasoning before providing the final answer.
The response is mind-blowing. Try it yourself as well.
💡Pro tip: Use Claude Console or OpenAI Playground to test your prompts. They don't have shared memory, so you get a clean chat state and can test prompts without side effects.
Response:
I need to solve this step-by-step to find how many people don't like either chocolate or vanilla.
First, I'll identify what we know:
Total people in the group: 70
People who like chocolate: 40
People who like vanilla: 35
People who like both flavors: 20
To solve this problem, I'll use the formula for the union of two sets:
n(A ∪ B) = n(A) + n(B) - n(A ∩ B)
Where:
n(A) = number of people who like chocolate = 40
n(B) = number of people who like vanilla = 35
n(A ∩ B) = number of people who like both = 20
n(A ∪ B) = number of people who like at least one flavor
Step 1: Calculate how many people like at least one flavor.
n(A ∪ B) = 40 + 35 - 20 = 55
Step 2: Subtract this from the total to find those who like neither flavor.
People who like neither = Total - People who like at least one
People who like neither = 70 - 55 = 15
Therefore, 15 people don't like either chocolate or vanilla.
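The arithmetic in that response is easy to check for yourself; a quick sketch of the same inclusion-exclusion step:

```python
# Inclusion-exclusion: n(A ∪ B) = n(A) + n(B) - n(A ∩ B)
total = 70
chocolate = 40
vanilla = 35
both = 20

at_least_one = chocolate + vanilla - both  # people who like at least one flavor
neither = total - at_least_one             # people who like neither

print(at_least_one, neither)  # 55 15
```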
But we're not quite there yet. We can enhance reasoning by providing instructions on what our mental model is and how we would like it to be solved. You can think of it as giving a model your reasoning framework.
How to adapt it:
- Add "Think step by step" or "Work through this systematically" to your prompts.
- For math and logic problems, say "Show all your work". This discourages shortcuts and lets you see whether the model's calculation failed, and at what stage.
- For complex decisions, ask the model to "Consider each factor in sequence".
Improved Prompt Example:
<general_goal>
I need to determine the best location for our new retail store.
</general_goal>
We have the following data:
<data>
- Location A: 2,000 sq ft, $4,000/month, 15,000 daily foot traffic
- Location B: 1,500 sq ft, $3,000/month, 12,000 daily foot traffic
- Location C: 2,500 sq ft, $5,000/month, 18,000 daily foot traffic
</data>
<instruction>
Analyze this decision step by step. First calculate the cost per square foot, then the cost per potential customer (based on foot traffic), then consider qualitative factors like visibility and accessibility. Show your reasoning at each step before making a final recommendation.
</instruction>
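The quantitative steps in that prompt are simple enough to verify yourself once the model answers; a quick sketch using the numbers from the data block:

```python
# Cost per square foot and per daily visitor for each candidate location.
locations = {
    "A": {"sq_ft": 2000, "rent": 4000, "traffic": 15000},
    "B": {"sq_ft": 1500, "rent": 3000, "traffic": 12000},
    "C": {"sq_ft": 2500, "rent": 5000, "traffic": 18000},
}

for name, loc in locations.items():
    per_sq_ft = loc["rent"] / loc["sq_ft"]
    per_visitor = loc["rent"] / loc["traffic"]
    print(f"Location {name}: ${per_sq_ft:.2f}/sq ft, ${per_visitor:.4f}/visitor")
```

All three locations cost exactly the same per square foot, so the cost-per-visitor step and the qualitative factors are where the model's step-by-step reasoning actually earns its keep.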
Note: I've tried this prompt on both Claude and ChatGPT. Adding XML tags made no noticeable difference in Claude, but in ChatGPT I had the impression it produced more data-driven answers (tried a couple of times). I've included them here mainly to show and highlight the structure of the prompt from my perspective.
2. The Expertise Persona Pattern
This pattern involves asking a model to adopt the mindset and knowledge of a specific expert when responding to your questions. It's remarkably effective at accessing the model's specialized knowledge in particular domains.
When you shift the model's perspective, it accesses more domain-specific knowledge and applies the frameworks, terminology, and approaches relevant to that field. The simplest perspective-shifting prompt can start with "Act as a Senior DevOps engineer from FAANG".
Example prompt:
I'd like you to respond as an experienced data scientist with expertise in natural language processing. I'm trying to build a sentiment analysis model for customer reviews.
What approach would you recommend for a small team with limited ML experience, and what are the key considerations we should keep in mind?
That's not a terrible prompt, but it's written without much structure. I would enhance it with an exact title such as "Senior ML Specialist" or "Senior ML Engineer". Adding credentials like "with 15+ years of experience" signals competence and makes the model respond as an experienced ML engineer would. Last but not least, I would include details about the expert's approach, like "who takes a practical, implementation-focused approach".
Improved Prompt Example:
I'd like you to respond as a senior software architect with 20+ years of experience in scalable systems and a track record of migrating legacy applications to cloud infrastructure.
You take a pragmatic approach that balances technical debt reduction with business continuity.
My company has a 15 year old Java monolith application handling our core business processes. We need to modernize it while keeping it operational.
What migration strategy would you recommend, what pitfalls should we watch for, and how would you structure the team to execute this transition?
Keep in mind there's also a difference between "Act as a senior software architect..." and "I'd like you to respond as a senior software architect...". With the latter, the model tries to give you a more humanized answer; with "Act as", it just gives you a functional response.
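To reuse this pattern without retyping it, you can assemble the pieces with a small helper (build_persona_prompt is my own naming, purely a sketch, not any library's API):

```python
def build_persona_prompt(role, credentials, approach, question):
    """Assemble an Expertise Persona prompt from its three ingredients."""
    return (
        f"I'd like you to respond as a {role} {credentials}. "
        f"You take {approach}.\n\n"
        f"{question}"
    )

prompt = build_persona_prompt(
    role="senior software architect",
    credentials="with 20+ years of experience in scalable systems",
    approach="a pragmatic approach that balances technical debt reduction "
             "with business continuity",
    question="What migration strategy would you recommend for our "
             "15-year-old Java monolith?",
)
print(prompt)
```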
Here's the response I got from the above prompt:
Modernizing Your Java Monolith: A Pragmatic Approach
After leading numerous legacy modernization projects over my career, I can tell you this is as much about organizational change as technical transformation. Let me outline a pragmatic strategy that's worked well for similar situations.
Recommended Migration Strategy: Incremental Strangler Pattern
The "big bang" approach almost always fails for systems of this age and complexity. Instead, I recommend:
Assess and map the monolith - Before touching anything, document system boundaries, dependencies, and business workflows. Identify the "hot spots" (frequently changed code) and "stable zones."
Establish a unified deployment pipeline - Implement CI/CD for the existing monolith first. This creates the infrastructure for your modernized components.
Create an API gateway/facade - Place a service gateway in front of your monolith to intercept and route traffic, enabling gradual migration of functionality.
Extract services strategically - Begin with non-core, well-bounded contexts that provide high business value but relatively lower risk.
Implement event-driven communication - Gradually introduce asynchronous messaging between components to reduce direct dependencies.
Common Pitfalls to Avoid
Rewriting without understanding - I've seen teams rewrite components without fully understanding edge cases in the original system.
Over-engineering microservices - Not everything needs to be a microservice. Sometimes "right-sized services" are better than numerous tiny ones.
Neglecting operational readiness - New architecture requires new monitoring, logging, and support procedures.
Database coupling - The shared database is often the hardest to decouple. Consider using data virtualization or CDC patterns.
Underestimating testing complexity - Integration testing becomes exponentially more difficult. Invest in automated testing early.
Team Structure
I recommend a "pioneer-settler-town planner" model:
Platform team: Builds tooling, CI/CD, cloud infrastructure
Extraction teams: Cross-functional teams focused on migrating specific business domains
Core team: Maintains the monolith and supports extraction
Ensure you have a dedicated architect who understands both the legacy system and modern patterns to maintain consistency across teams.
First Steps
Run a technical discovery workshop to map dependencies
Implement comprehensive monitoring of your current system
Choose a small, valuable component for first extraction as proof-of-concept
Build your deployment pipeline and API gateway infrastructure
Would you like me to elaborate on any particular aspect of this approach?
3. The Working Memory Technique
This technique helps the model maintain and refer back to information across a conversation, creating a makeshift working memory that improves continuity and context awareness.
While modern models have generous context windows (especially Gemini), explicitly defining key information as important to remember signals that certain details should be prioritized and referenced throughout the conversation.
Example prompt:
I'm planning a marketing campaign with the following constraints:
- Budget: $15,000
- Timeline: 6 weeks (Starting April 10, 2025)
- Primary audience: SME business founders and CEOs, ages 25-40
- Goal: 200 qualified leads
Please keep these details in mind throughout our conversation. Let's start by discussing channel selection based on these parameters.
It's not bad, let's agree, but there's room for improvement. We can structure the important information as a bulleted list, ordered top to bottom by priority, and explicitly state "Remember these details for our conversation". (Keep in mind you need an interface that retains context, like the Claude, ChatGPT, or Gemini web UIs, or you need to manage the conversation history yourself when using an API.)
Now you can refer back to that information in subsequent messages, like "Based on the budget we established...".
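When you're working against an API rather than a web UI, "working memory" just means re-sending the key details with every turn; a minimal sketch (the helper name is my own, and the message dicts follow the common chat-message convention):

```python
CONSTRAINTS = """Remember these details throughout our conversation:
- Goal: 200 qualified leads
- Budget: $15,000
- Timeline: 6 weeks (starting April 10, 2025)
- Primary audience: SME business founders and CEOs, ages 25-40"""

def with_memory(history, user_message):
    """Prepend the constraints as a system message on every request."""
    return [
        {"role": "system", "content": CONSTRAINTS},
        *history,
        {"role": "user", "content": user_message},
    ]

messages = with_memory([], "Let's discuss channel selection based on these parameters.")
print(messages[0]["content"].splitlines()[0])
```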
Improved Prompt Example:
I'm planning a marketing campaign. Remember these details for our conversation, in order of priority:
1. Goal: 200 qualified leads
2. Budget: $15,000
3. Timeline: 6 weeks (starting April 10, 2025)
4. Primary audience: SME business founders and CEOs, ages 25-40
Refer back to these constraints whenever you make a recommendation. Let's start by discussing channel selection based on these parameters.
4. Using Decision Trees for Nuanced Choices
The Decision Tree pattern guides the model through complex decision-making by establishing a clear framework of if/else scenarios. This is particularly valuable when multiple factors influence a decision.
Decision trees provide models with a structured approach to navigate complex choices, ensuring all relevant factors are considered in a logical sequence.
Example prompt:
I need help deciding which blog platform/system to use for my small media business. Please create a decision tree that considers:
1. Budget (under $100/month vs over $100/month)
2. Daily visitors (under 10k vs over 10k)
3. Primary need (share freemium content vs paid content)
4. Technical expertise available (limited vs substantial)
For each branch of the decision tree, recommend specific blogging solutions that would be appropriate.
Now let's improve this one by clearly enumerating key decision factors, specifying the possible values or ranges for each factor, and then asking the model for reasoning at each decision point.
Improved Prompt Example:
I need help selecting the optimal blog platform for my small media business. Please create a detailed decision tree that thoroughly analyzes:
DECISION FACTORS:
1. Budget considerations
- Tier A: Under $100/month
- Tier B: $100-$300/month
- Tier C: Over $300/month
2. Traffic volume expectations
- Tier A: Under 10,000 daily visitors
- Tier B: 10,000-50,000 daily visitors
- Tier C: Over 50,000 daily visitors
3. Content monetization strategy
- Option A: Primarily freemium content distribution
- Option B: Subscription/membership model
- Option C: Hybrid approach with multiple revenue streams
4. Available technical resources
- Level A: Limited technical expertise (no dedicated developers)
- Level B: Moderate technical capability (part-time technical staff)
- Level C: Substantial technical resources (dedicated development team)
For each pathway through the decision tree, please:
1. Recommend 2-3 specific blog platforms most suitable for that combination of factors
2. Explain why each recommendation aligns with those particular requirements
3. Highlight critical implementation considerations or potential limitations
4. Include approximate setup timeline and learning curve expectations
Additionally, provide a visual representation of the decision tree structure to help visualize the selection process.
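The if/else framework the prompt asks the model to build is the same structure you'd write in code; a toy sketch (the platform names are illustrative placeholders, not recommendations):

```python
def recommend_platform(budget_tier, traffic_tier, tech_level):
    """Walk a tiny decision tree. Tiers run from 'A' (lowest) to 'C' (highest).

    Returned platform names are illustrative examples only.
    """
    if tech_level == "A":
        # Limited technical expertise: fully hosted platforms.
        return "Substack" if budget_tier == "A" else "Squarespace"
    if traffic_tier == "C":
        # High traffic plus technical resources: self-hosted for control.
        return "Self-hosted WordPress"
    # Moderate traffic with some technical capability.
    return "Ghost"

print(recommend_platform("A", "A", "A"))  # Substack
print(recommend_platform("C", "C", "C"))  # Self-hosted WordPress
```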
Key improvements here include expanded decision factors, more granular tiers for each factor, a clear visual structure, descriptive labels, and a comprehensive output request with implementation context.
5. The "Self-Evaluation" Pattern
This pattern prompts a model to review and critique its own responses, leading to higher-quality output and more nuanced analysis.
Why it works: By asking an LLM to evaluate its own work, you trigger an additional layer of quality control that catches potential issues, identifies limitations, and results in more comprehensive answers.
Example prompt:
I need a cold email template for reaching out to potential clients for my web design business.
After you create the template, please review it and evaluate:
1. How persuasive it is
2. Potential weaknesses or objections recipients might have
3. Specific improvements that could make it more effective
Then provide an improved version based on your evaluation.
How to adapt it:
Ask the model to rate its response on specific criteria (clarity, comprehensiveness, etc.)
Request identification of weaknesses or limitations in the initial response
Always ask for an improved version based on the self-critique
Improved prompt example:
I need a cold email template for reaching out to potential clients for my web design business.
Create an initial template that includes:
- An attention-grabbing subject line
- A personalized introduction paragraph
- A clear value proposition section
- Social proof/credibility elements
- A specific call-to-action
After crafting the initial template, please perform a comprehensive self-evaluation by:
EVALUATION CRITERIA:
1. Persuasiveness (rate 1-10 with specific justification)
2. Personalization flexibility (rate 1-10)
3. Clarity of unique value proposition (rate 1-10)
4. Potential objections or concerns recipients might have
5. Risk of being filtered as spam (rate 1-10)
6. Balance between professional tone and conversational engagement
For each criterion where your rating is below 8/10, identify specific weaknesses and provide tactical improvement suggestions.
Then, create a revised template that implements all your recommended improvements. Include annotations [in brackets] explaining how each modification addresses a specific weakness identified in your evaluation.
Finally, provide 3-5 subject line variations that could be A/B tested, with a brief explanation of the psychological principle each one leverages.
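If you script this pattern against an API, self-evaluation is just a second call over the first draft; a minimal sketch of the critique prompt (the function name is my own, and the actual LLM call is left out):

```python
def critique_prompt(draft):
    """Second pass: ask the model to score and revise its own draft."""
    return (
        "Here is a cold email template you wrote:\n\n"
        f"{draft}\n\n"
        "Rate it from 1 to 10 on persuasiveness, clarity of value "
        "proposition, and spam risk, justifying each score. Then produce "
        "a revised template that fixes every criterion scoring below 8."
    )

# In practice: draft = llm("Write a cold email template ..."); then
# improved = llm(critique_prompt(draft))
prompt = critique_prompt("Subject: Quick question about your website ...")
print(prompt.splitlines()[0])
```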
6. The "Structured Output" Framework
This pattern specifically requests information in a predefined format, ensuring the model's responses are organized exactly the way you need them.
Why it works: By defining the exact structure you need upfront, you eliminate the need for follow-up clarifications and get immediately usable responses formatted for your specific use case.
Example prompt:
Please analyze these three project management software options. For each one, provide information in the following structure:
<name>Software Name</name>
<price>Pricing details</price>
<pros>
- Pro 1
- Pro 2
- Pro 3
</pros>
<cons>
- Con 1
- Con 2
</cons>
<best_for>Type of user or company this is ideal for</best_for>
The three options are: Asana, Monday.com, and ClickUp.
How to adapt it:
Use XML-style tags to define custom sections
Be precise about formatting within each section (bullets, paragraphs, etc.)
Provide an example of the structure if it's complex
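A side benefit of XML-style tags is that the response becomes trivial to parse afterwards. A minimal sketch with the standard library (the response text is a made-up stand-in, and real model output may need more defensive handling):

```python
import re

response = """<name>ExampleTool</name>
<price>Placeholder pricing details</price>
<best_for>Small teams that want simple task tracking</best_for>"""

def extract(tag, text):
    """Pull the content of the first <tag>...</tag> pair, or None."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
    return match.group(1).strip() if match else None

print(extract("name", response))      # ExampleTool
print(extract("best_for", response))  # Small teams that want simple task tracking
```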
Improved prompt example:
I need to create job descriptions for three open positions at our tech startup. Please format each job description with the following structure:
<position>Job Title</position>
<summary>2-3 sentence overview of the role</summary>
<responsibilities>
- Responsibility 1
- Responsibility 2
- Responsibility 3
- Responsibility 4
</responsibilities>
<requirements>
- Required skill/qualification 1
- Required skill/qualification 2
- Required skill/qualification 3
</requirements>
<nice_to_have>
- Preferred skill/qualification 1
- Preferred skill/qualification 2
</nice_to_have>
<benefits>
Brief paragraph on company benefits and culture
</benefits>
The three positions are: Full Stack Developer, UX/UI Designer, and DevOps Engineer. Each position requires at least 3 years of experience in their respective fields and familiarity with agile development practices.
7. The "Comparative Analysis" Pattern
This pattern directs LLMs to systematically compare multiple options across consistent dimensions, resulting in a balanced assessment.
Why it works: Forcing a structured comparison prevents bias toward any particular option and ensures all alternatives receive equal analytical treatment.
Example prompt:
I'm trying to decide between React, Vue, and Angular for a new web application project. Which one would you recommend?
How to adapt it:
Define clear comparison categories relevant to your decision
Ask for specific rankings within each category
Request explicit reasoning for each evaluation
Improved prompt example:
I'm trying to decide between React, Vue, and Angular for a new web application project. Please compare these three frameworks across the following dimensions:
1. Learning curve for developers
2. Performance and speed
3. Community support and libraries
4. Long-term maintenance considerations
5. Suitability for a medium-sized e-commerce application
For each dimension, rank the frameworks from best to worst and explain your reasoning.
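Since the comparison dimensions are just a list, the prompt itself is easy to generate, which keeps the structure consistent across decisions (the helper naming is mine, a sketch):

```python
def comparison_prompt(options, dimensions, context):
    """Build a Comparative Analysis prompt from options and dimensions."""
    opts = ", ".join(options)
    dims = "\n".join(f"{i}. {d}" for i, d in enumerate(dimensions, 1))
    return (
        f"I'm trying to decide between {opts} for {context}. "
        f"Please compare them across the following dimensions:\n{dims}\n"
        "For each dimension, rank the options from best to worst "
        "and explain your reasoning."
    )

print(comparison_prompt(
    ["React", "Vue", "Angular"],
    ["Learning curve", "Performance", "Community support"],
    "a new web application project",
))
```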
Combining Patterns for Maximum Impact
While each pattern is powerful on its own, the real magic happens when you combine them. For example:
Pair the "Expertise Persona" with "Chain of Thought" to get expert-level reasoning with transparent logic
Combine "Self-Evaluation" with "Working Memory" to maintain consistency while improving quality
Use "Decision Trees" with "Structured Output" to get complex decision support in an easily digestible format
Getting Started
The best way to master these patterns is to experiment with them on your own tasks. Start with the example prompts provided, then gradually modify them to fit your specific needs. Pay attention to how the model's responses change as you refine your prompting technique.
Remember that effective prompting is an iterative process. Don't be afraid to refine your approach based on the results you get.
Hope you enjoyed this post 🙌
I've worked hard to compile all the information I wanted to share with you, and I think it will be valuable to you as well.
Have a great day!
George