What are some common prompt techniques?
Technique | When to Use |
---|---|
Zero-shot prompting | When an LLM has enough existing knowledge to respond accurately without requiring any prior coaching. As training methods for LLMs improve, particularly with techniques like instruction tuning and reinforcement learning from human feedback (RLHF), these models can often generate desired responses without the need for any examples. |
Few-shot prompting | When zero-shot prompting fails, you can provide examples or demonstrations directly in the prompt to guide the model (a zero-shot vs. few-shot sketch follows this table). |
Chain of thought (CoT) prompting | When an LLM is tasked with complex reasoning or problem-solving. Encouraging the model to explain its thinking step by step improves accuracy and makes its reasoning transparent, so an incorrect intermediate step is easier to spot than a bare wrong answer that the model fails to recognize as a mistake (a CoT sketch follows this table). |
Prompt chaining | When a task has multiple stages, it can be more efficient or precise to break the work into smaller, manageable steps. Unlike CoT, which focuses on incremental reasoning within a single response, prompt chaining splits the task into distinct prompts, each handling one step, and links them together. Instead of instructing the model to think step by step, you first ask it to analyze the situation, then use that analysis as input for a second prompt that produces the final, straightforward answer (a chaining sketch follows this table). |
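To make the zero-shot/few-shot distinction concrete, here is a minimal Python sketch. It assumes a hypothetical `llm_complete(prompt)` helper standing in for whatever model client you use; the review strings and labels are made up for illustration.

```python
# Minimal sketch of escalating from zero-shot to few-shot prompting.
# `llm_complete` is a hypothetical helper, not a real library call.

def llm_complete(prompt: str) -> str:
    """Placeholder: swap in a real call to whatever LLM API you use."""
    return "<model response>"

# Zero-shot: rely on the model's existing knowledge; no examples provided.
zero_shot = llm_complete(
    "Classify the sentiment of this review as positive or negative:\n"
    "'The battery died after two days.'"
)

# Few-shot: if the zero-shot answer is unreliable, prepend labeled
# demonstrations so the model can infer the task and the expected format.
few_shot = llm_complete(
    "Classify the sentiment of each review as positive or negative.\n\n"
    "Review: 'Arrived quickly and works perfectly.'\nSentiment: positive\n\n"
    "Review: 'The screen cracked within a week.'\nSentiment: negative\n\n"
    "Review: 'The battery died after two days.'\nSentiment:"
)
```

A chain-of-thought prompt mainly differs in the instruction it adds. The sketch below reuses the same hypothetical `llm_complete` helper; the word problem and the `Answer:` formatting convention are illustrative choices, not a fixed recipe.

```python
# Minimal chain-of-thought sketch: the prompt explicitly asks for
# step-by-step reasoning before the final answer.

def llm_complete(prompt: str) -> str:
    """Placeholder: swap in a real call to whatever LLM API you use."""
    return "<model response>"

cot_prompt = (
    "A store sells pens in packs of 12 for $3.00; single pens cost $0.30. "
    "What is the cheapest way to buy exactly 30 pens, and what does it cost?\n\n"
    "Work through the problem step by step, showing your reasoning, then give "
    "the final answer on its own line starting with 'Answer:'."
)
response = llm_complete(cot_prompt)

# Because the reasoning is written out, a wrong intermediate step (say,
# mispricing the 6 leftover single pens) is visible instead of hidden
# behind a single wrong number.
```

Prompt chaining is easiest to see as two separate calls, where the first call's output is spliced into the second call's prompt. Again, `llm_complete` and the support-ticket scenario are assumptions made for illustration.

```python
# Minimal prompt-chaining sketch: the first prompt asks only for analysis,
# and its output becomes the input of a second prompt that writes the reply.

def llm_complete(prompt: str) -> str:
    """Placeholder: swap in a real call to whatever LLM API you use."""
    return "<model response>"

ticket = "My order arrived with the wrong item and I was charged twice."

# Step 1: analysis only -- no final answer yet.
analysis = llm_complete(
    "Analyze this customer ticket and list each distinct problem it raises:\n\n"
    + ticket
)

# Step 2: the analysis from step 1 is passed into the next prompt.
reply = llm_complete(
    "Using the analysis below, draft a brief, polite reply that addresses "
    "each problem it identifies:\n\nAnalysis:\n" + analysis
)

print(reply)
```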