A New Prompting Method for Large Language Models - Active-Prompt
The Chain-of-Thought (CoT) method relies on a fixed set of manually annotated examples, but a fixed set may not contain the most effective demonstrations for every task. To address this, a new prompting method called Active-Prompt has recently been proposed: it adapts large language models (LLMs) to different tasks by selecting task-specific example prompts (annotated with human-designed CoT reasoning).
The method works as follows:
- Query the LLM, with or without a few CoT examples, to generate k candidate answers for each question in a set of training questions.
- Calculate an uncertainty metric for each question (e.g., the inconsistency among its k answers).
- Select the most uncertain questions for human annotation.
- Use the newly annotated examples as demonstrations to perform inference on each question.
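The selection step above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the LLM sampling is replaced with toy answer lists, and `disagreement` and `select_uncertain` are hypothetical helpers that score uncertainty as the fraction of distinct answers among the k samples and pick the top-n questions for annotation.

```python
def disagreement(answers):
    """Uncertainty score: fraction of distinct answers among k samples."""
    return len(set(answers)) / len(answers)

def select_uncertain(sample_answers, n):
    """Rank questions by disagreement over their sampled answers and
    return the n most uncertain ones for human annotation."""
    scored = sorted(sample_answers,
                    key=lambda q: disagreement(sample_answers[q]),
                    reverse=True)
    return scored[:n]

# Toy example: k = 5 sampled answers per training question.
samples = {
    "Q1": ["12", "12", "12", "12", "12"],  # consistent -> low uncertainty
    "Q2": ["7", "9", "7", "11", "3"],      # inconsistent -> high uncertainty
    "Q3": ["4", "4", "5", "4", "4"],
}
print(select_uncertain(samples, 2))  # -> ['Q2', 'Q3']
```

In practice, the answers would come from sampling the LLM k times per question, and the selected questions would then receive human-written CoT rationales before being used as demonstrations.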