Some right-wing parties want us to become just like Trump's America, while I consider it a great good that people are allowed to say terrible things that I find appalling and that go against all my values. Our law determines where the limit of that freedom lies. #JA21
Diving into Mitigating Hallucinations from a Vision Perspective for Large Vision-Language Models
Weihang Wang, Xinhao Li, Ziyue Wang, Yan Pang, Jielei Zhang, Peiyi Li, Qiang Zhang, Longwen Gao
https://arxiv.org/abs/2509.13836
#SteadyCommunityContent
Why do #Sprachmodelle (language models) often give confidently wrong answers?
When guessing is rewarded more highly than honesty, trust is at risk. How can this be changed, and why has "I don't know" been a problem for AI models so far? A new…
EGOILLUSION: Benchmarking Hallucinations in Egocentric Video Understanding
Ashish Seth, Utkarsh Tyagi, Ramaneswaran Selvakumar, Nishit Anand, Sonal Kumar, Sreyan Ghosh, Ramani Duraiswami, Chirag Agarwal, Dinesh Manocha
https://arxiv.org/abs/2508.12687
Geometric Uncertainty for Detecting and Correcting Hallucinations in LLMs
Edward Phillips, Sean Wu, Soheila Molaei, Danielle Belgrave, Anshul Thakur, David Clifton
https://arxiv.org/abs/2509.13813
"In theory, AI model makers could eliminate hallucinations by using a dataset that contains no errors."
I think someone has fundamentally misunderstood the technology. Training a model on a 100% correct dataset does not mean the resulting AI will correctly answer questions that were not in the training data.
Over-fitting is a thing.
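A toy sketch of that point (my own illustration, assuming sin(x) as the "ground truth" the data was drawn from): fit a polynomial exactly to six error-free samples, then query it at an input it never saw.

```python
# Toy illustration: a model fit perfectly to error-free data can still
# be badly wrong on inputs outside its training data (over-fitting /
# failure to generalize). Hypothetical setup, not from any paper above.
import numpy as np

xs = np.linspace(0, 3, 6)           # 6 training inputs
ys = np.sin(xs)                     # 100% correct labels, zero noise
coeffs = np.polyfit(xs, ys, deg=5)  # degree-5 poly interpolates all 6 points

# On the training data the model is essentially perfect...
train_err = np.max(np.abs(np.polyval(coeffs, xs) - ys))

# ...but at an unseen input (x = 6) the answer is far from sin(6).
test_err = abs(np.polyval(coeffs, 6.0) - np.sin(6.0))

print(f"max training error: {train_err:.2e}")  # near zero: memorized
print(f"error at x = 6:     {test_err:.2f}")   # large: confidently wrong
```

The training set here contains no errors at all, yet the fitted model still produces a wrong answer for a question outside its data, which is exactly the gap the quoted claim overlooks.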
Hallucination in LLM-Based Code Generation: An Automotive Case Study
Marc Pavel, Nenad Petrovic, Lukasz Mazur, Vahid Zolfaghari, Fengjunjie Pan, Alois Knoll
https://arxiv.org/abs/2508.11257
Librarians Are Being Asked to Find AI-Hallucinated Books https://www.404media.co/librarians-are-being-asked-to-find-ai-hallucinated-books/
DSCC-HS: A Dynamic Self-Reinforcing Framework for Hallucination Suppression in Large Language Models
Xiao Zheng
https://arxiv.org/abs/2509.13702 https://ar…