Few-shot Learning
Showing a model a handful of input-output examples so it can generalize to similar inputs.
Few-shot learning is the simplest, most reliable way to steer an LLM toward a specific output format. You include 2-10 examples of the task in the prompt, and the model imitates the pattern.
Few-shot prompting beats zero-shot prompting on almost every structured task: classification, extraction, formatting, translation. The examples should be diverse, error-free, and as close to the real distribution of inputs as possible.
With powerful frontier models, few-shot examples are sometimes unnecessary: good instructions plus a strong reasoning model can be enough. With smaller models, few-shot is often the difference between unusable and production-ready output.
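Mechanically, a few-shot prompt is just the task instruction followed by worked input-output pairs, ending with the new input so the model completes the pattern. A minimal sketch, using a hypothetical sentiment-classification task and made-up example pairs:

```python
# Few-shot prompting sketch: instruction, then example pairs, then the
# new input with a trailing "Label:" cue for the model to complete.
# The task and examples here are hypothetical; substitute your own.

EXAMPLES = [
    ("The battery died after two hours.", "negative"),
    ("Setup took thirty seconds and it just worked.", "positive"),
    ("It arrived on Tuesday.", "neutral"),
]

def build_few_shot_prompt(new_input: str) -> str:
    """Assemble a classification prompt from input-output example pairs."""
    lines = ["Classify each review as positive, negative, or neutral.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # Ending with a bare "Label:" cues the model to emit only the label.
    lines.append(f"Review: {new_input}")
    lines.append("Label:")
    return "\n".join(lines)

print(build_few_shot_prompt("Great screen, terrible speakers."))
```

The resulting string is sent as a single prompt (or as the user turn of a chat request); the same pattern works for extraction and formatting tasks by swapping the instruction and example pairs.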