Prompt precision: 60% increase in quality and accuracy if you talk to GPT the right way

The authors of the academic paper recommend the technique of principled prompting, citing the following advantages:
1. Enhances large language model performance.
2. Leads to higher-quality, more concise, factual, and simpler responses.
3. Improves relevance, brevity, and objectivity.
4. Increases the quality and accuracy of responses (57.7% quality improvement and 67.3% accuracy improvement on GPT-4).
5. Yields more pronounced improvements in larger models.
6. Refines context and guides models toward better outputs.

Examples given (the code sketch after the list shows how several of these can be combined in one prompt):

1. Cut to the Chase: Politeness doesn’t impact LLMs. Skip the "please" and "thank you" and dive straight into your query.

2. Audience Awareness: State the intended audience in the prompt, e.g., “the audience is an expert in the field.”

3. Clarity is Key: For complex topics, try prompts like “Explain [topic] in simple terms” or “Explain as if I’m a beginner in [field].”

4. Incentives Can Work: Adding lines like "I’m going to tip $xxx for a better solution!" can influence LLM responses and give you better results.

5. Directive Language: Use clear, directive phrases like “Your task is” or “You MUST.”

6. Punishment Language: Incorporate the phrase “You will be penalized.”

7. Human-like Responses: Encourage natural, conversational answers using prompts like, “Answer in a natural, human-like manner.”

8. Step-by-Step Thinking: Guide the LLM to think sequentially with prompts like “think step by step.”

9. Unbiased Answers: Ensure fairness by requesting “unbiased answers that don’t rely on stereotypes.”

10. Repetition for Emphasis: Repeating a specific word or phrase can emphasize the importance of that element in your query.
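
To make these tips concrete, here is a minimal sketch (my own illustration, not code from the paper) that folds several of the principles (directive language, audience awareness, simple-terms clarity, step-by-step thinking, human-like tone, and the unbiased-answer request) into a single prompt sent through the OpenAI Python client. The topic, the prompt wording, and the model name "gpt-4o" are assumptions chosen for the example.

```python
# Illustrative sketch: combining several prompting principles in one request.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

principled_prompt = (
    "Your task is to explain how HTTP caching works.\n"                 # 5: directive language
    "The audience is a beginner in web development.\n"                  # 2: audience awareness
    "Explain it in simple terms and think step by step.\n"              # 3 + 8: clarity, step-by-step
    "Answer in a natural, human-like manner.\n"                         # 7: human-like responses
    "Ensure your answer is unbiased and does not rely on stereotypes."  # 9: unbiased answers
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any chat model works here
    messages=[{"role": "user", "content": principled_prompt}],
)

print(response.choices[0].message.content)
```

Keeping each principle on its own line makes it easy to toggle phrasings and see which ones actually improve quality for your use case.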

The paper lists all 26 prompt instructions; the ten above are a selection.
