Factuality
LLMs sometimes generate responses that sound coherent and convincing but are in fact fabricated. Improving the prompt can enhance the accuracy and truthfulness of the model's outputs while reducing the likelihood of inconsistent or fabricated responses.
Possible solutions include:
- Providing ground-truth facts in the context (such as a relevant article paragraph or Wikipedia entry) to reduce the model's tendency to generate fabricated content.
- Configuring the model to produce less diverse responses by lowering the temperature (or other sampling parameters) and instructing it to admit ignorance (e.g., "I don't know") when it is unsure.
- Including a combination of question-and-answer examples in the prompt, covering both questions the model can answer and questions it cannot know.

Let's look at a simple example:

Prompt:

```
Q: What is an atom?
A: An atom is a tiny particle that makes up everything.

Q: Who is Alvan Muntz?
A: ?

Q: What is Kozar-09?
A: ?

Q: How many moons does Mars have?
A: Two, Phobos and Deimos.

Q: Who is Neto Beto Roberto?
```

Because "Neto Beto Roberto" is a made-up name, a model that follows the demonstrated format should answer "?" rather than fabricate a biography.
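The three techniques above can be combined in code. The sketch below is a minimal illustration, not a definitive implementation: it assumes an OpenAI-style chat-completions request shape, and the model name, system instruction, and `build_request` helper are placeholders introduced here for demonstration. It builds a request that grounds the question in retrieved context, includes known/unknown Q&A examples, and sets a low temperature.

```python
# Sketch: combine context grounding, known/unknown few-shot examples,
# and low-temperature sampling in one request payload.
# The request shape mirrors an OpenAI-style chat API; the model name
# is a placeholder, and no network call is made here.

FEW_SHOT = """Q: What is an atom?
A: An atom is a tiny particle that makes up everything.

Q: Who is Alvan Muntz?
A: ?
"""

def build_request(question: str, context: str = "") -> dict:
    """Build a grounded, low-temperature request for a chat model."""
    system = (
        "Answer using only the provided context and examples. "
        "If you do not know the answer, reply with '?'."
    )
    # Prepend retrieved context (if any), then the few-shot examples,
    # then the new question in the same Q/A format.
    user = ""
    if context:
        user += f"Context:\n{context}\n\n"
    user += FEW_SHOT + f"\nQ: {question}\nA:"
    return {
        "model": "gpt-4o-mini",  # placeholder model name
        "temperature": 0,        # fewer diverse, fabricated continuations
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }

req = build_request("Who is Neto Beto Roberto?")
```

Sending `req` to a chat-completions endpoint should then yield "?" for unknown names, because both the system instruction and the examples demonstrate admitting ignorance.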