AI & ML
Hallucination
When an AI model generates plausible-sounding but factually incorrect information. LLMs hallucinate because they predict likely token sequences, not verified facts. In blockchain development, hallucinations can be dangerous: incorrect API usage, nonexistent functions, or wrong program addresses. Mitigation: RAG for grounding, code verification, testing, and using models with lower hallucination rates.
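One lightweight form of code verification is checking that a function an AI suggested actually exists before relying on it. This is a minimal sketch (the helper name `verify_symbol` is illustrative, not a standard API):

```python
import importlib


def verify_symbol(module_name: str, symbol: str) -> bool:
    """Guard against hallucinated APIs: confirm the module imports
    and the named attribute is really present on it."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, symbol)


# json.dumps is a real function; json.serialize_fast is a made-up
# name of the kind a model might hallucinate.
print(verify_symbol("json", "dumps"))           # True
print(verify_symbol("json", "serialize_fast"))  # False
```

Checks like this catch nonexistent functions cheaply, but they do not validate semantics; grounding with RAG and running tests are still needed for correctness.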
Related terms
AI & ML
LLM (Large Language Model)
A neural network trained on vast text corpora to understand and generate human language. LLMs (GPT-4, Claude, Llama, Gem...
AI & ML
RAG (Retrieval-Augmented Generation)
An AI architecture that combines LLMs with external knowledge retrieval. Instead of relying solely on training data, RAG...