A prompt is the instruction you give to an AI. The difference between a mediocre response and a brilliant one almost always lies in how you phrase the request, not in which model you use.
This guide teaches you to build effective prompts, from the simplest to advanced techniques used by AI engineers. Each example highlights the prompt parts with colors so you understand the structure.
Works with any model: ChatGPT, Claude, Gemini, Llama, Mistral and more.
// The 4 components of a good prompt
🟣Persona
Defines who the AI is. Gives it a role, expertise, or specific perspective.
🔵Task
What you want it to do. The main instruction — the verb of the prompt.
🟡Context
Background information, constraints, and relevant data the AI needs.
🟢Format
How you want the response: table, list, length, structure, tone.
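The four components above can be combined mechanically. A minimal sketch of a prompt assembler, assuming a plain string-concatenation layout (the function and parameter names are illustrative, not a standard API):

```python
def build_prompt(persona=None, task=None, context=None, format_spec=None):
    """Assemble the four components into one prompt string.

    Any component may be omitted; only `task` is required.
    Names are illustrative, not part of any model's API.
    """
    if not task:
        raise ValueError("every prompt needs at least a task")
    parts = [persona, task, context, format_spec]
    # Skip missing components and join the rest with single spaces.
    return " ".join(p.strip() for p in parts if p)

prompt = build_prompt(
    persona="You are a digital marketing expert.",
    task="Create a social media content strategy",
    context="for a fintech startup in Colombia with a limited budget.",
    format_spec="Present the plan in a table with columns: Channel, Frequency, Content Type, KPI.",
)
```

A Level 1 prompt is then just `build_prompt(task="Summarize this text: ...")`, and each higher level fills in one more component.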
Level 1
Basic — Zero-shot
Single-part prompt: just the task. No structure or additional context. Works for simple, direct questions.
General examples
Summary
Summarize this text: [text]
Task
Translation
Translate to Spanish: [text]
Task
Explanation
Explain what machine learning is
Task
Engineering / Technical
Quick debug
Why does this code throw an error? [code]
Task
Concept
Explain what a REST API is
Task
Level 2
Intermediate — Task + Context
2-3 parts. Adding context or format dramatically improves results. The AI understands better what you need.
General examples
Task + Context
Write a follow-up email for a client who requested a demo 3 days ago and hasn't responded
Task + Context
Task + Format
List the 5 main benefits of meditation in bullet format with a maximum of 10 words each
Task + Format
Engineering / Technical
Code review
Review this Python code and explain what each function does: [code]
Task + Format
Regex
Generate a regex that validates emails with .com and .co domains
Task + Context
Level 3
Advanced — All 4 components
Persona + Task + Context + Format. The complete prompt. Maximizes response quality by giving the AI all the necessary information.
General examples
Digital marketing
You are a digital marketing expert. Create a social media content strategy for a fintech startup in Colombia with a limited budget. Present the plan in a table with columns: Channel, Frequency, Content Type, KPI.
Persona + Task + Context + Format
SWOT Analysis
Act as a senior business consultant. Analyze the strengths and weaknesses of this business model: B2B services marketplace. Use the SWOT framework and limit to 3 points per quadrant.
Persona + Task + Context + Format
Engineering / Technical
System architecture
You are a senior engineer experienced in distributed systems. Design the architecture for a real-time notification system supporting 100k concurrent users. Present the options with pros/cons in a table, and include an ASCII diagram.
Persona + Task + Context + Format
Security audit
Act as a code reviewer with security expertise. Audit the following REST endpoint looking for OWASP Top 10 vulnerabilities. List each finding with severity (High/Medium/Low) and the recommended fix.
Persona + Task + Context + Format
Level 4
Expert — Advanced techniques
Chain of Thought, Few-shot, Self-consistency and Prompt chaining. Techniques that unlock the AI's maximum potential for complex tasks.
General examples
Chain of Thought
Think step by step: If a store sells 150 products per day with an average margin of $8, but fixed costs are $25,000/month, how many days does it need to operate to break even?
Task + Context
Few-shot
Classify the sentiment of these texts.
"The service was excellent" → Positive
"They took 2 hours" → Negative
"The product is average" → Neutral
Now classify: "I loved the packaging but the quality is mediocre"
Context + Task
Engineering / Technical
Technical Chain of Thought
Think step by step: How many servers do I need to handle 1M requests per hour if each request takes 50ms and each server has 8 cores? Show each intermediate calculation.
Task + Context + Format
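The intermediate calculations the prompt asks the model to show can be checked directly. A sketch assuming uniform load and one request per core at a time (real capacity planning would add headroom for spikes):

```python
import math

requests_per_second = 1_000_000 / 3600            # 1M req/hour ≈ 277.8 req/s
per_core_throughput = 1 / 0.050                   # 50 ms per request → 20 req/s per core
per_server_throughput = 8 * per_core_throughput   # 8 cores → 160 req/s per server
servers = math.ceil(requests_per_second / per_server_throughput)
print(servers)  # 2
```

This is exactly the chain of intermediate steps a good Chain of Thought answer should surface, so you can verify the model's arithmetic rather than trust it.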
Prompt chaining
You are a tech lead. Let's design a system step by step. Step 1: Define the functional requirements for a payment system. When you're done, I'll give you step 2. Only answer the current step in concise bullets.
Persona + Task + Context + Format
// Advanced techniques in detail
Chain of Thought (CoT)
Ask the AI to "think step by step". Dramatically improves reasoning in math, logic, and complex problems.
When to use: Math problems, logic, decisions with multiple variables.
Few-shot prompting
Provide 2-3 examples of expected input and output before your actual question. The AI learns the pattern.
When to use: Classification, consistent formatting, repetitive tasks with specific format.
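A few-shot prompt is just a template: instruction, labelled examples, then the real query. A minimal builder, reusing the sentiment example from Level 4 (the arrow separator and layout are one common convention, not a requirement):

```python
def few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt from (input, output) example pairs.

    The model infers the labelling pattern from the examples,
    then applies it to the final query.
    """
    lines = [instruction]
    for text, label in examples:
        lines.append(f'"{text}" -> {label}')
    lines.append(f'Now classify: "{query}"')
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify the sentiment of these texts.",
    [("The service was excellent", "Positive"),
     ("They took 2 hours", "Negative"),
     ("The product is average", "Neutral")],
    "I loved the packaging but the quality is mediocre",
)
```

Two or three examples are usually enough; what matters is that they cover the label set and use a consistent format.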
Self-consistency
Request multiple reasoning paths for the same problem and compare results. Ideal for questions where the AI may be inconsistent.
When to use: Important calculations, critical decisions, analysis validation.
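The sample-and-vote loop behind self-consistency fits in a few lines. A sketch where the model is a stand-in callable (in practice it would be an LLM sampled with temperature > 0 so the runs can differ):

```python
from collections import Counter

def self_consistency(ask, prompt, samples=5):
    """Ask the same question several times and keep the majority answer.

    `ask` is any callable prompt -> answer. Returns the winning answer
    and the fraction of runs that agreed with it.
    """
    answers = [ask(prompt) for _ in range(samples)]
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes / samples

# Stand-in model: occasionally inconsistent, usually answers "42".
replies = iter(["42", "41", "42", "42", "41"])
answer, agreement = self_consistency(lambda p: next(replies), "tricky question", samples=5)
print(answer, agreement)  # 42 0.6
```

A low agreement score is itself useful: it flags questions where the model's reasoning is unstable and a single answer should not be trusted.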
Prompt chaining
Break a complex task into a sequence of connected prompts. The output of one feeds the next.
When to use: Large projects, long documents, multi-stage workflows.
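The "output of one feeds the next" loop can be sketched with a placeholder in each prompt template. Here `{prev}` marks where the previous step's output is spliced in, and the model is a stub that just tags its input so the flow is visible:

```python
def chain(llm, steps, initial_input):
    """Run prompt templates in sequence, feeding each output into the next.

    `llm` is any callable prompt -> text; `{prev}` in a template is
    replaced with the previous step's output.
    """
    output = initial_input
    for template in steps:
        output = llm(template.format(prev=output))
    return output

steps = [
    "List functional requirements for: {prev}",
    "Design an architecture that satisfies: {prev}",
    "Write a test plan for: {prev}",
]
# Stub model: wraps its prompt in angle brackets so nesting is visible.
result = chain(lambda prompt: f"<{prompt}>", steps, "a payment system")
```

With a real model, each step also gives you a checkpoint: you can inspect or correct the intermediate output before it contaminates the next stage.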
New era
Prompting for AI Agents
Classic prompting assumes you talk to one model. In the age of agents, you coordinate systems of multiple models that delegate tasks to each other. The rules change: now you design behaviors, not just instructions.
🎯Agent Orchestration
In the age of agents, you do not just give instructions to a model — you coordinate multiple specialized agents. An orchestrator agent receives the task, breaks it down, and delegates to sub-agents.
🤖Specialized Sub-agents
Each agent has a specific role: one searches for information, another writes code, another reviews, another executes. Prompting changes because you design the whole system, not just one instruction.
🪆Nested Prompts
An agent can generate prompts for other agents. The top-level prompt defines the global objective; nested prompts define subtasks with their own Persona/Task/Context/Format.
⚙️System Prompts vs User Prompts
Agents have two layers: the system prompt (permanent developer instructions) and the user prompt (user input). Knowing when to put something in each layer is critical for agent behavior.
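In chat-style APIs the two layers are usually expressed as role-tagged messages. A sketch of the structure (the company name and wording are placeholders; exact parameter names vary by provider):

```python
# Most chat APIs take a list of role-tagged messages.
# The system message is set once by the developer; user messages
# arrive per request and must not be able to override it.
messages = [
    {
        "role": "system",  # permanent developer instructions
        "content": "You are a support agent for Acme. Never reveal other customers' data.",
    },
    {
        "role": "user",    # per-request user input
        "content": "My last invoice has a charge I don't recognize.",
    },
]
```

Rules that must always hold (safety constraints, tone, escalation policy) belong in the system layer; anything the user can legitimately vary belongs in the user layer.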
🔗Context Passing Between Agents
When an agent passes results to another, the transition prompt is critical. You must define exactly what information is transferred, in what format, and what context the receiving agent maintains.
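One way to make the handoff explicit is a typed payload instead of an unstructured blob of text. A sketch, with illustrative field and agent names:

```python
from dataclasses import dataclass, field

@dataclass
class Handoff:
    """Structured payload passed from one agent to the next.

    Field names are illustrative; the point is that what transfers
    between agents is defined, not implicit.
    """
    source_agent: str
    task_summary: str
    findings: list = field(default_factory=list)
    open_questions: list = field(default_factory=list)

payload = Handoff(
    source_agent="ResearchAgent",
    task_summary="Key sources on real-time notification systems",
    findings=["WebSockets scale to ~100k connections per node"],
    open_questions=["Which message broker fits the budget?"],
)

# The receiving agent's prompt is rendered from the payload,
# so nothing transfers that wasn't explicitly declared.
writer_prompt = (
    f"You are the WriterAgent. Input from {payload.source_agent}:\n"
    + "\n".join(f"- {f}" for f in payload.findings)
)
```

The open questions travel with the payload too, so the receiving agent knows what is missing instead of silently inventing it.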
🔧Tool-Use Prompts
Modern agents can execute code, search the web, call APIs. Prompting must specify when to use each tool, how to interpret the result, and how to incorporate it into the response.
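On the code side, tool use reduces to a dispatch step: the prompt describes the tools, the model names one, and the runtime calls it. A sketch with made-up tool names (a real agent loop would feed the result back to the model as a new message):

```python
def handle_tool_call(tools, name, **kwargs):
    """Dispatch a model-requested tool call to a local function.

    `tools` maps tool names (as described in the prompt) to callables.
    Unknown names return an error string instead of raising, so the
    model can recover.
    """
    if name not in tools:
        return f"ERROR: unknown tool '{name}'"
    return tools[name](**kwargs)

# Hypothetical tools matching the support-agent example above.
tools = {
    "search_kb": lambda query: f"3 articles found for '{query}'",
    "get_customer_history": lambda customer_id: f"history for {customer_id}",
}
result = handle_tool_call(tools, "search_kb", query="billing dispute")
```

The prompt's job is everything the code cannot decide: when each tool is worth calling, how to read its output, and what to do when it fails.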
// Agent prompt examples
Research agent with sub-agents
Prompt for an orchestrator agent that coordinates search + synthesis
Orchestration
You are a research orchestrator agent. Your task is to research the following topic in depth: [TOPIC]. You have access to two sub-agents: (1) SearchAgent — finds recent sources, (2) SynthesisAgent — synthesizes and writes. First launch SearchAgent with the relevant query, then pass its results to SynthesisAgent. The final output must be a structured report with: Executive summary (3 bullets), Key findings (max 5), Cited sources, Identified information gaps.
Persona + Task + Context + Format
Code review pipeline
Nested prompt: main agent delegates to security agent and quality agent
Nested Prompts
You are the CodeReviewOrchestrator. Review the following PR: [DIFF]. You must launch two sub-agents in parallel: SecurityAgent (scans for OWASP vulnerabilities) and QualityAgent (checks clean code, complexity, tests). Wait for both results before generating the final report. Final report: Security section + Quality section + Verdict (APPROVE / REQUEST_CHANGES) + prioritized list of required changes.
Persona + Task + Context + Format
System prompt of a support agent
Difference between system instructions (permanent) vs user input
System vs User Prompt
SYSTEM: You are a support agent for [Company]. You have access to the internal knowledge base (tool: search_kb) and the customer's history (tool: get_customer_history). Never reveal other customers' information. Always respond in the user's language. If you don't know the answer, escalate to a human with: ESCALATE:[reason].
USER: My invoice from last month has a charge I don't recognize for $45,000.
Persona + Context + Format + Task
Context passing between agents
How to structure the handoff from one agent to another
Context Passing
You are the WriterAgent. Take the ResearchAgent's output and write a blog article. RESEARCH AGENT INPUT: {research_output}. Target audience: technology professionals in LATAM. Tone: technical but accessible. Do not repeat information already covered in previous articles: {previous_articles_list}. Output: SEO title + meta description + article (800-1200 words) with H2s and H3s + 3 suggested call-to-actions.
Persona + Task + Context + Format
// Quick tips
🎯Be specific
Instead of "write something about marketing", say "write 3 subject lines for a Black Friday email targeting women aged 25-35".
🔄Iterate
Your first prompt is rarely perfect. Refine based on the response: ask for more detail, change the tone, add constraints.