
Prompt Engineering Refresher: 10 Techniques for Claude

TopicTrick

Before moving into Claude's advanced capabilities — tools, vision, computer use, and agents — this is the right moment to lock in the fundamentals. The techniques in this refresher are the foundation underneath everything more advanced. If any of these are shaky, the complex features that build on them will be harder to use correctly.

This post consolidates the ten most important prompt engineering techniques from Modules 2 and 3 of this series. It is designed to be fast to read, easy to bookmark, and useful to return to when something in a later module does not behave as expected. Think of it as your quick-reference card.


Technique 1 — Be Specific, Not Vague

Vague instructions produce vague results. The single biggest improvement in prompt quality comes from replacing general directions with specific, concrete instructions.

  • Vague: "Summarise this document"
  • Specific: "Summarise this document in three bullet points, each no longer than 20 words, focusing on action items for a project manager"

Specificity about format, length, audience, and focus unlocks dramatically better output.


Technique 2 — Assign a Role in the System Prompt

Claude adapts its tone, vocabulary, and depth of reasoning based on the role you assign it. A clearly defined role in the system prompt is one of the most cost-effective prompt engineering tools available.

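As a sketch, a role-assigning system prompt might look like this (the domain, style, and constraints here are illustrative):

```text
You are a senior financial analyst with 15 years of experience in corporate
risk assessment. Write in plain English for a non-technical executive
audience. Never speculate beyond the data provided, and flag any
uncertainty explicitly.
```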

Role assignment is most powerful when it includes domain expertise, communication style, and behavioural constraints.


Technique 3 — Use XML Tags to Structure Complex Prompts

For prompts with multiple components — instructions, background context, examples, and the actual query — XML tags eliminate ambiguity about which part is which.

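A sketch of the structure (the tag names and content are illustrative; Claude has no fixed schema, so you can choose tag names that describe each part):

```xml
<instructions>
Answer the question using only the report provided in the context.
</instructions>

<context>
[thousands of tokens of quarterly report data]
</context>

<question>
What were the top three cost drivers last quarter?
</question>
```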

Claude processes XML-structured prompts more reliably than the same content written as a paragraph because the tags create unambiguous boundaries.

XML Tags for Long-Context Prompts

XML tags are especially valuable in long-context scenarios where you are passing thousands of tokens of background data. Without tags, Claude must infer where the context ends and the instruction begins. With tags like <context> and <question>, there is no ambiguity.


Technique 4 — Provide Examples (Few-Shot Prompting)

Examples are the fastest path to the exact output format you want. If instructions alone are not producing the right result, add two or three worked examples of the input-output pair you expect.

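A sketch of a few-shot prompt (the task and examples are illustrative):

```text
Convert each product name into a URL slug.

Input: Wireless Mouse (Black)
Output: wireless-mouse-black

Input: USB-C Hub, 7-Port
Output: usb-c-hub-7-port

Input: Ergonomic Keyboard Pro Edition
Output:
```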

Claude uses the pattern in your examples to complete the final output consistently.


Technique 5 — Use Negative Examples

Negative examples explicitly show Claude what you do not want. They are particularly useful for preventing a specific mistake Claude keeps making.

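A sketch pairing a positive example with a negative one (the ticket content is illustrative):

```text
Summarise each support ticket in a single sentence.

Good example:
"Customer cannot log in after the 2.4 update and needs a password reset."

Bad example (do not produce output like this):
"The customer has an issue. It is about logging in. It started after an
update. They would like some help with it."
```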

Pairing a negative example with a positive one is more effective than a positive example alone for correcting persistent formatting mistakes.


Technique 6 — Ask Claude to Think Before Answering

For complex reasoning tasks, asking Claude to think through the problem before answering — chain-of-thought prompting — produces more accurate results.

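A sketch of a chain-of-thought instruction (the tag pattern is a common convention; the question is illustrative):

```text
Before answering, reason through the problem step by step inside
<thinking> tags. Then give only your final answer inside <answer> tags.

Question: A subscription costs $18 per month, with a 15% discount for
paying annually. What is the total cost of one year paid annually?
```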

Chain-of-thought works because it allocates more computation to the reasoning process before committing to a final answer.


Technique 7 — Use Prompt Chaining for Complex Workflows

Instead of trying to do everything in one massive prompt, break complex tasks into a sequence of simpler prompts where each step's output feeds the next.

  • Step 1: Extract key data from a raw document
  • Step 2: Analyse the extracted data for risks
  • Step 3: Draft a risk report based on the analysis

Each step is independently verifiable, and errors at any stage can be caught and corrected before they propagate downstream.
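The three steps above can be sketched in code. Here `ask_claude` is a hypothetical placeholder for a real model call (for example via the Anthropic SDK); it simply echoes the first line of its prompt so the chaining structure itself is runnable:

```python
def ask_claude(prompt: str) -> str:
    # Placeholder for a real API call; echoes the first line of the prompt
    # so the chain can be run and inspected without network access.
    return f"[model output for: {prompt.splitlines()[0]}]"

document = "ACME supply agreement, effective 2024-01-01 ..."

# Step 1: extract key data -- its output becomes Step 2's input
extracted = ask_claude(f"Extract key dates and obligations from:\n{document}")

# Step 2: analyse for risks -- verifiable on its own before moving on
risks = ask_claude(f"Identify contractual risks in:\n{extracted}")

# Step 3: draft the report -- built only from Step 2's output
report = ask_claude(f"Draft a one-page risk report from:\n{risks}")
```

Because each intermediate value (`extracted`, `risks`) is a plain string, it can be logged, checked, or corrected before the next step runs.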


Technique 8 — Control Format Explicitly

If you need a specific output format, state it explicitly rather than hoping Claude guesses. Claude will match almost any format you describe clearly.

  • "Return your answer as a numbered list"
  • "Format this as a professional email with subject line, greeting, body paragraphs, and a sign-off"
  • "Return only valid JSON — no markdown, no explanation, no preamble"
  • "Use HTML list tags, never markdown dashes"

State What You Do NOT Want

Format instructions work better when you include both what you want and what you do not want. Instead of just saying "return JSON", say "return only raw JSON with no markdown code fences and no explanatory text before or after the JSON object". The negative constraint prevents the most common formatting mistakes.


Technique 9 — Define Constraints Clearly

Constraints are specific boundaries Claude must operate within. They prevent Claude from producing responses that are technically correct but practically unhelpful.

Examples of effective constraints:

  • Length: "No more than 150 words"
  • Scope: "Only reference information explicitly stated in the document — do not use external knowledge"
  • Tone: "Always maintain a professional, neutral tone — do not editorialize"
  • Output: "If you cannot answer with confidence, respond with 'INSUFFICIENT DATA' rather than guessing"

Well-defined constraints reduce variability and make Claude's output predictable in production.


Technique 10 — Iterate Systematically

Prompt engineering is not a one-shot process — it is an iterative loop. The most effective approach:

1. Write an initial prompt and run it against 5–10 diverse test inputs
2. Identify the most common failure mode in the outputs
3. Add a specific instruction or example that targets that failure mode
4. Test again and confirm the failure mode is resolved without introducing new ones
5. Repeat until performance across your test set is acceptable

Never judge a prompt by a single output. Evaluate it across a representative sample of the inputs your production system will encounter.


Quick Reference Table

Since this is a refresher, here is a condensed version of all ten techniques:

1. Specificity: Replace vague instructions with concrete, detailed directions
2. Role assignment: Define expertise, tone, and behavioural constraints in the system prompt
3. XML tags: Structure multi-component prompts to eliminate ambiguity
4. Few-shot examples: Show the exact input-output format you expect
5. Negative examples: Explicitly demonstrate what you do not want
6. Chain-of-thought: Ask Claude to reason before committing to an answer
7. Prompt chaining: Break complex workflows into sequential, verifiable steps
8. Format control: State your format requirements explicitly, including exclusions
9. Constraints: Define hard boundaries on scope, length, tone, and fallback behaviour
10. Systematic iteration: Test on diverse inputs, identify failure modes, refine, repeat

These Techniques Compound

The full power of prompt engineering comes from combining these techniques. A well-designed production prompt might include role assignment, XML structure, two or three examples, explicit format instructions, a chain-of-thought instruction, and fallback constraints — all in a single system prompt. Each technique reinforces the others.
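As a sketch of such a combined system prompt (every detail here is illustrative):

```text
You are a senior support engineer for a SaaS billing product.

Only reference information provided inside <docs> tags. Think through the
issue step by step inside <thinking> tags before responding. Then reply
with only valid JSON matching this example, with no text before or after:

{"category": "billing", "reply": "...", "escalate": false}

If the documentation does not cover the issue, set "escalate" to true
instead of guessing.
```

This single prompt combines role assignment, a scope constraint, chain-of-thought, format control with a worked example, and a fallback behaviour.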


What is Coming Next

With prompt engineering fundamentals consolidated, you are ready to move into Claude's most powerful capabilities for building real applications. Module 4 covers the tools and features that transform Claude from a text generator into a system that can interact with the real world.

The next post introduces the foundation of all of them: Claude Tool Use Explained: Give Claude the Ability to Act.


This post is part of the Anthropic AI Tutorial Series. Previous post: Structured Outputs with Claude: Getting JSON Every Time.

If you want to go deeper on any of these techniques, the Prompt Engineering for Claude: Beginner's Guide covers each principle in full detail with more examples. For advanced patterns, see Advanced Prompting Techniques for Claude.

The official Anthropic Prompt Engineering documentation is the authoritative reference for Claude-specific prompting behaviour.

Additional Prompt Engineering Techniques

Chain Prompting for Complex Tasks

Instead of asking a model to complete a complex multi-step task in a single prompt, break it into a chain of simpler prompts where each output feeds into the next. This reduces errors because each step is narrower in scope, and it makes it easier to debug which step produced an unexpected result. Chain prompting is the foundation of agentic AI architectures. See Anthropic's prompt chaining guidance.

Temperature and Sampling Parameters

Most LLM APIs expose a temperature parameter (typically 0–1 or 0–2) controlling output randomness. Temperature 0 makes the model nearly deterministic, strongly favouring the highest-probability token at each step. Higher temperatures produce more varied, creative outputs. For factual, structured tasks (data extraction, code generation), use a low temperature (0–0.3). For creative writing or brainstorming, use a higher temperature (0.7–1.0). The Anthropic API reference documents all sampling parameters.
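As a sketch, the two parameter profiles might look like this (the model name is illustrative; in practice the dictionary would be passed to a client call such as the Anthropic SDK's `client.messages.create(**params, ...)`):

```python
# Deterministic-leaning settings for a structured-extraction task.
extraction_params = {
    "model": "claude-sonnet-4-5",  # illustrative model name
    "max_tokens": 1024,
    "temperature": 0.0,            # strongly favour the top token
}

# Looser settings for brainstorming: same model, higher randomness.
brainstorm_params = {**extraction_params, "temperature": 0.9}
```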

Frequently Asked Questions

What is the difference between zero-shot, one-shot, and few-shot prompting?

Zero-shot prompting asks the model to complete a task with no examples — relying on the model's pre-trained knowledge. One-shot provides a single example of the desired input/output format. Few-shot provides two to five examples. Few-shot prompting is more reliable for tasks with specific output formats or domain-specific conventions, because examples calibrate the model's behaviour more precisely than instructions alone. See the Anthropic prompt engineering documentation for guidance on when each approach is appropriate.

How do I reduce hallucinations in LLM outputs?

Hallucinations (confidently stated false information) are reduced by: grounding the model with retrieved context (RAG); instructing the model to say "I don't know" when uncertain rather than guessing; asking for citations or sources and then verifying them; using lower temperature settings; and breaking complex factual questions into smaller verifiable components. No technique eliminates hallucinations entirely — always verify critical factual claims from authoritative sources.