AI Hallucination in AEC

The phenomenon where AI systems confidently generate plausible but incorrect information—fabricated code citations, invented specifications, or wrong project details—with potentially serious consequences in construction.

Definition

AI hallucination refers to a language model generating confident, coherent-sounding outputs that are factually incorrect, a particular hazard in AEC, where inaccurate information about building codes, structural requirements, or contract terms can have legal and life-safety consequences. In construction, hallucinations manifest as fabricated code section references (citing IBC sections that don't exist), incorrect material specifications (stating a product meets a standard it doesn't), invented project history, and plausible but wrong structural calculations.

Research on LLM validation in automated compliance checking reports success rates of 94-95%, meaning 5-6% of AI outputs still require human oversight even in well-designed systems.

Several mitigation strategies address this: retrieval-augmented generation (RAG) grounds responses in verified source documents with citations; multi-agent architectures use verification agents to cross-check outputs; code execution validates numerical calculations; and multi-stage verification frameworks prioritize accuracy over completeness. A 2025 multi-agent approach achieved 100% accuracy in 18 of 20 structural modeling benchmark cases by eliminating single-model hallucination failure modes.
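To make the RAG grounding strategy concrete, here is a minimal sketch in Python. The retriever, LLM client, `CodePassage` type, and the `min_score` threshold are all illustrative assumptions, not a specific vendor API; the point is that the system answers only from retrieved, verified code text and returns an explicit "no code section found" response rather than letting the model improvise.

```python
from dataclasses import dataclass

@dataclass
class CodePassage:
    section: str   # e.g. a code section label (illustrative only)
    text: str
    score: float   # retriever similarity score

def answer_code_question(question: str, retriever, llm, min_score: float = 0.75) -> dict:
    """Answer a building-code question only from retrieved source excerpts.

    `retriever` and `llm` are assumed interfaces:
      retriever(question, k) -> list[CodePassage]
      llm(prompt) -> str
    """
    passages = retriever(question, k=5)
    grounded = [p for p in passages if p.score >= min_score]

    # Refuse instead of hallucinating when nothing in the corpus supports an answer.
    if not grounded:
        return {"answer": "No code section found for this question.",
                "citations": [], "needs_human_review": True}

    context = "\n\n".join(f"[{p.section}] {p.text}" for p in grounded)
    prompt = (
        "Answer strictly from the excerpts below and cite the section label for every claim. "
        "If the excerpts do not answer the question, say so.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return {"answer": llm(prompt),
            "citations": [p.section for p in grounded],
            "needs_human_review": False}
```

The design choice worth noting is the explicit refusal path: a grounded system that can say "not found" trades a small amount of completeness for a large reduction in fabricated citations.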

Examples

1. An AI citing a plausible but non-existent egress width table, requiring mandatory human verification.

2. A RAG system returning "no code section found" instead of fabricating an answer for an unusual occupancy combination.

3. Multi-agent verification catching an incorrect moment connection capacity before it reaches the engineer (see the sketch after this list).
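As a simplified stand-in for the verification step in example 3, the sketch below shows how a checker can recompute a claimed flexural capacity numerically and escalate disagreements to a human engineer instead of passing them through. The functions, the 2% tolerance, and the W18x50 values are illustrative assumptions, not Nomic's implementation or a complete connection check.

```python
def plastic_moment_capacity_kip_ft(zx_in3: float, fy_ksi: float, phi: float = 0.90) -> float:
    """Design flexural strength phi*Mp = phi * Fy * Zx, converted from kip-in to kip-ft."""
    return phi * fy_ksi * zx_in3 / 12.0

def verify_capacity_claim(claimed_kip_ft: float, zx_in3: float, fy_ksi: float,
                          tol: float = 0.02) -> dict:
    """Cross-check a drafting agent's claimed capacity by recomputing it.

    Returns the recomputed value and whether the claim falls within `tol`
    relative error; disagreements are flagged for human review.
    """
    recomputed = plastic_moment_capacity_kip_ft(zx_in3, fy_ksi)
    relative_error = abs(claimed_kip_ft - recomputed) / recomputed
    return {"claimed": claimed_kip_ft,
            "recomputed": round(recomputed, 1),
            "accepted": relative_error <= tol,
            "needs_human_review": relative_error > tol}

# Illustrative check: a W18x50 (Zx = 101 in^3, Fy = 50 ksi) gives phi*Mp of about 379 kip-ft,
# so a hallucinated claim of 450 kip-ft is rejected and routed to the engineer.
print(verify_capacity_claim(claimed_kip_ft=450.0, zx_in3=101.0, fy_ksi=50.0))
```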

Nomic Use Cases

See how Nomic applies this in production AEC workflows:

Automated Code Compliance: Check drawings against 380+ building codes and standards with cited answers.

Project Research: Instantly access all project-critical information from a single search interface.

See AI Hallucination in AEC in action

Nomic is purpose-built AI for architecture, engineering, and construction. Connect your project data and start getting answers in minutes.