Instruction-Based Contextualization

Well-crafted prompts determine how effectively the model incorporates retrieved information into its answers:

In-Context Learning (ICL):

  • Puts retrieved information directly in the prompt as examples
  • Tells the AI to use these examples when answering
  • Clearly distinguishes retrieved context from the instructions themselves
  • Example: 'Here are some passages about diabetes. Use this information to answer the question: What are the common symptoms of Type 2 diabetes?'
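The ICL pattern above can be sketched as a simple prompt-assembly function. This is a minimal illustration, not a fixed template; the passage text and question are placeholders taken from the example:

```python
# Minimal sketch of in-context learning for RAG: retrieved passages are
# embedded directly in the prompt, and the model is told to use them.

def build_icl_prompt(passages, question):
    """Assemble a prompt that places retrieved passages before the question."""
    # Label each passage so the model can reference them
    context = "\n\n".join(
        f"Passage {i + 1}: {p}" for i, p in enumerate(passages)
    )
    return (
        "Here are some passages about the topic. "
        "Use this information to answer the question.\n\n"
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

passages = [
    "Type 2 diabetes often presents with increased thirst and frequent urination.",
    "Fatigue and blurred vision are also common symptoms.",
]
prompt = build_icl_prompt(
    passages, "What are the common symptoms of Type 2 diabetes?"
)
```

The resulting string would then be sent to the model as a single prompt.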

Chain-of-Thought (CoT) Prompting:

  • Asks the AI to think step-by-step through its reasoning
  • Directs the AI to examine the retrieved facts before drawing conclusions
  • Improves accuracy on complex questions that require multi-step reasoning
  • Example: 'First, review the information about climate change in these documents. Then, identify the key factors mentioned. Finally, explain how these factors contribute to rising sea levels.'
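A CoT prompt for retrieved documents can be built the same way, with the reasoning steps spelled out before the final instruction. The step wording and document text below are illustrative assumptions:

```python
# Minimal sketch of chain-of-thought prompting over retrieved documents:
# the prompt walks the model through explicit steps before it concludes.

def build_cot_prompt(documents, topic, conclusion_task):
    """Prepend documents, then list reasoning steps the model should follow."""
    context = "\n\n".join(
        f"Document {i + 1}: {d}" for i, d in enumerate(documents)
    )
    return (
        f"{context}\n\n"
        f"First, review the information about {topic} in these documents.\n"
        "Then, identify the key factors mentioned.\n"
        f"Finally, {conclusion_task}\n"
        "Think step by step."
    )

docs = [
    "Thermal expansion of seawater raises sea levels as oceans warm.",
    "Melting land ice adds water volume to the oceans.",
]
prompt = build_cot_prompt(
    docs,
    "climate change",
    "explain how these factors contribute to rising sea levels.",
)
```

Splitting the task into named steps nudges the model to examine the evidence before committing to a conclusion.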

Retrieval-Augmented Prompting Patterns:

  • Context segregation: Clearly separating found information from instructions (Example: 'CONTEXT: [retrieved documents] QUESTION: [user query]')
  • Source attribution: Keeping track of where information came from for citations (Example: 'According to document #2 from the company handbook...')
  • Relevance assessment: Asking the AI to first check if the information is helpful before using it (Example: 'Review these passages and determine which are relevant to the question before answering')
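The three patterns compose naturally into one prompt. Below is a minimal sketch combining context segregation (CONTEXT/QUESTION blocks), source attribution (numbered, labeled documents), and a relevance-assessment instruction; the document labels and contents are hypothetical:

```python
# Sketch combining the three retrieval-augmented prompting patterns:
# segregation, attribution, and relevance assessment.

def build_rag_prompt(documents, question):
    """documents: list of (source_label, text) pairs."""
    # Source attribution: number each document and keep its origin label
    context = "\n".join(
        f"[Document #{i + 1}, {label}] {text}"
        for i, (label, text) in enumerate(documents)
    )
    # Context segregation: keep retrieved text and the query in
    # clearly marked blocks, then add a relevance-assessment step.
    return (
        f"CONTEXT:\n{context}\n\n"
        f"QUESTION: {question}\n\n"
        "Review the documents above and determine which are relevant "
        "to the question before answering. Cite documents by number "
        "(e.g. 'According to document #2...') in your answer."
    )

docs = [
    ("company handbook", "Employees accrue 1.5 vacation days per month."),
    ("IT policy", "Passwords must be rotated every 90 days."),
]
prompt = build_rag_prompt(
    docs, "How many vacation days do employees earn per year?"
)
```

Because the citation format is specified up front, the model's answer can be checked against the numbered sources afterward.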

How you structure the prompt strongly influences how well the model grounds its final answer in the retrieved information.