Zero-Shot Prompting
Pre-Flight Briefing
The Power of Zero-Shot
Large language models (LLMs) today are trained on massive amounts of data and tuned to follow instructions. This large-scale training enables them to perform many tasks in a 'zero-shot' manner.
Zero-shot prompting means that the prompt used to interact with the model directly instructs it to perform a task without containing any prior examples or demonstrations.
This capability is largely powered by 'instruction tuning': fine-tuning models on collections of tasks described via instructions. RLHF (Reinforcement Learning from Human Feedback) is then used to scale this tuning further, aligning the model with human preferences.
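To make the distinction concrete, here is a minimal sketch in Python (the helper names are illustrative, not from any particular library): a zero-shot prompt is just the task instruction plus the input, while a few-shot prompt would prepend labeled demonstrations.

```python
def build_zero_shot_prompt(instruction: str, text: str) -> str:
    """Compose a zero-shot prompt: the instruction and the input,
    with no example demonstrations included."""
    return f"{instruction}\nText: {text}\nSentiment:"


def build_few_shot_prompt(instruction: str,
                          examples: list[tuple[str, str]],
                          text: str) -> str:
    """For contrast: a few-shot prompt prepends labeled demonstrations
    before the input to be classified."""
    demos = "\n".join(f"Text: {t}\nSentiment: {s}" for t, s in examples)
    return f"{instruction}\n{demos}\nText: {text}\nSentiment:"


prompt = build_zero_shot_prompt(
    "Classify the text into neutral, negative or positive.",
    "I think the vacation is okay.",
)
```

The zero-shot prompt contains exactly one `Text:`/`Sentiment:` pair: the input itself, with no demonstrations before it.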
Reference Examples
Zero-Shot Text Classification
Prompt:
Classify the text into neutral, negative or positive.
Text: I think the vacation is okay.
Sentiment:

Output: Neutral