Grounding
Connecting AI outputs to verified external data sources so answers are based on facts, not training-time guesses.
Grounding is the practice of constraining a model's output to information from sources you trust (retrieved documents, live APIs, internal databases) rather than relying purely on its training data. Done well, it dramatically reduces hallucination and makes outputs auditable.
The two most common grounding patterns are RAG (retrieval) and tool use (live API calls). Both feed real data into the model's context just before it answers.
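Here is a minimal sketch of the retrieval side of that idea: fetch relevant records, then place them in the prompt just before asking the question. The in-memory document list, the toy keyword retriever, and the `build_grounded_prompt` helper are hypothetical stand-ins, not any particular library's API; in practice you would swap in a vector store and your model client.

```python
# Hypothetical example: ground a question in a small document store.
DOCUMENTS = [
    {"id": "policy-001", "text": "Refunds are available within 30 days of purchase."},
    {"id": "policy-002", "text": "Shipping to EU countries takes 5 to 7 business days."},
]

def retrieve(query: str, docs: list[dict], k: int = 2) -> list[dict]:
    """Toy keyword retriever: rank documents by word overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(d["text"].lower().split())), d) for d in docs]
    return [d for score, d in sorted(scored, key=lambda s: -s[0])[:k] if score > 0]

def build_grounded_prompt(question: str) -> str:
    """Feed retrieved sources into the model's context just before it answers."""
    sources = retrieve(question, DOCUMENTS)
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in sources)
    return (
        "Answer using ONLY the sources below. Cite source ids in brackets.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    # This prompt is what the model would receive; send it with your model client.
    print(build_grounded_prompt("How long do I have to request a refund?"))
```

The same shape applies to tool use: instead of a retriever, a live API call supplies the context block.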
Good grounded outputs cite their sources. Great products surface those citations to users so they can verify the claims themselves. The trend in 2026 is for AI features to be 'grounded by default'; the question is no longer whether to ground, but how rigorously.
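As a rough illustration of surfacing citations, the sketch below pulls bracketed source ids out of a model's answer and maps them back to the source records so a UI can render them alongside the response. The `SOURCES` list and the bracketed-id convention are assumptions carried over from the previous sketch, not a standard format.

```python
import re

# Hypothetical source records the answer may cite.
SOURCES = [
    {"id": "policy-001", "text": "Refunds are available within 30 days of purchase."},
    {"id": "policy-002", "text": "Shipping to EU countries takes 5 to 7 business days."},
]

def extract_citations(answer: str, sources: list[dict]) -> list[dict]:
    """Collect the source records whose ids the model cited in brackets."""
    cited_ids = set(re.findall(r"\[([\w-]+)\]", answer))
    return [s for s in sources if s["id"] in cited_ids]

# Example model answer citing one source; the product shows the citation so
# the user can check the claim against the original text.
answer = "You can request a refund within 30 days of purchase [policy-001]."
for src in extract_citations(answer, SOURCES):
    print(f"Source {src['id']}: {src['text']}")
```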