ASE 2025
Sun 16 - Thu 20 November 2025 Seoul, South Korea

Large Language Models (LLMs) have revolutionized AI, yet their inherent uncertainties pose significant challenges to reliable deployment. This paper presents a systematic review of uncertainty in LLMs, bridging theoretical foundations and state-of-the-art methodologies. We analyze over 80 papers from leading venues, including ASE, NeurIPS, ICML, and Nature, to trace the evolution of uncertainty quantification (UQ). We categorize uncertainty into aleatoric and epistemic types and detail probabilistic modeling, confidence estimation, and calibration techniques. Through illustrative case studies in high-stakes domains such as medical diagnosis and code generation, we demonstrate the pivotal role of UQ in enhancing reliability. We further discuss limitations, ethical considerations, and future directions, emphasizing the need for fine-grained interpretability and human-AI collaboration. This work advances the understanding of LLM uncertainty to enable safer, more trustworthy, and responsible real-world integration.
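To make the confidence-estimation and calibration techniques named above concrete, the following minimal Python sketch illustrates two common building blocks discussed in the UQ literature: maximum softmax probability as a confidence score, and temperature scaling as a simple post-hoc calibration knob, with predictive entropy as an uncertainty measure. The logits and temperature values are invented for demonstration; this is not a method from any specific reviewed paper.

import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature scaling: T > 1 flattens the distribution (less confident),
    # T < 1 sharpens it (more confident); T = 1 leaves the logits unchanged.
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def predictive_entropy(probs):
    # Shannon entropy of the next-token distribution, in nats.
    # Higher entropy indicates greater predictive uncertainty.
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Hypothetical next-token logits from an LLM head (illustrative values only).
logits = [4.2, 1.1, 0.3, -0.5]

for T in (0.5, 1.0, 2.0):
    p = softmax(logits, temperature=T)
    # Maximum softmax probability is a common (often overconfident) confidence score.
    print(f"T={T}: confidence={p.max():.3f}  entropy={predictive_entropy(p):.3f} nats")

In practice, the temperature is typically fit on a held-out validation set to minimize negative log-likelihood or expected calibration error, and entropy-style scores are aggregated over generated tokens or sampled completions when attempting to separate aleatoric from epistemic effects.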
