Noam Michael

Hallucination Detection in LLMs through Confidence Calibration

As Large Language Models (LLMs) like ChatGPT become more widespread, their tendency to generate inaccurate or fabricated information has emerged as a significant concern. To reduce the risk of harmful or misleading outputs, it is critical for these models to provide well-calibrated estimates of their confidence in completing a given task. This project aims to develop a methodology for evaluating the calibration of current foundation models and identifying areas where their self-assessments are most likely to fail.
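
One standard way to quantify calibration is the expected calibration error (ECE): bin the model's stated confidences, then measure the gap between average confidence and actual accuracy within each bin. The Python sketch below is purely illustrative of that general metric; the function name, binning scheme, and toy data are assumptions for demonstration and do not describe this project's actual methodology.

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        # ECE: size-weighted average gap between stated confidence
        # and observed accuracy across confidence bins.
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if not in_bin.any():
                continue
            avg_confidence = confidences[in_bin].mean()
            accuracy = correct[in_bin].mean()
            ece += in_bin.mean() * abs(avg_confidence - accuracy)
        return ece

    # Hypothetical example: self-reported confidences vs. whether each
    # answer turned out to be correct (toy data, not project results).
    confs = [0.9, 0.8, 0.95, 0.6, 0.7, 0.99]
    right = [1, 1, 0, 1, 0, 1]
    print(f"ECE = {expected_calibration_error(confs, right):.3f}")

A perfectly calibrated model would have an ECE near zero: when it reports 80% confidence, it is correct about 80% of the time.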

Message To Sponsor

Thank you so much for your generosity! Your donation gives researchers like me the opportunity to meaningfully advance our scientific fields through our research. The value of your donation goes beyond immediate financial support—it helps lay the foundation for a long-term career in academia. Because of your generosity, I’ve been able to focus fully on my current research project without the added burden of financial stress on myself or my family.
Major: Data Science
Mentor: Don Moore, Haas School of Business
Sponsor: Chandra Research Fellows - Chandra Fund