Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@ErikJonker@mastodon.social
2025-09-18 07:53:56

Some right-wing parties want us to become just like Trump/the US, whereas I consider it a great good that people are allowed to say terrible things that I don't like at all and that go against all my norms. Our law determines where the limit of that freedom lies. #JA21

@arXiv_csCV_bot@mastoxiv.page
2025-09-18 10:21:31

Diving into Mitigating Hallucinations from a Vision Perspective for Large Vision-Language Models
Weihang Wang, Xinhao Li, Ziyue Wang, Yan Pang, Jielei Zhang, Peiyi Li, Qiang Zhang, Longwen Gao
arxiv.org/abs/2509.13836

@tinoeberl@mastodon.online
2025-09-18 16:18:02

#SteadyCommunityContent
Why do language models (#Sprachmodelle) often give confidently wrong answers?
If guessing is rewarded more highly than honesty, trust is at risk. How can that be changed, and why has "I don't know" been a problem for AI models so far? A new…
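
A back-of-the-envelope sketch of the incentive problem the post describes, with assumed numbers (the 25% guess accuracy and the 0.5 penalty are illustrative, not from the post): under accuracy-only grading, a wrong answer and "I don't know" both score 0, so a model that always guesses strictly outscores one that honestly abstains.

```python
# Minimal sketch with assumed numbers: accuracy-only grading gives a wrong
# answer and "I don't know" the same score (0), so guessing costs nothing.
p_correct = 0.25  # assumed chance that a blind guess happens to be right

score_guess = p_correct * 1 + (1 - p_correct) * 0  # wrong guesses are free
score_honest = 0.0                                 # abstaining earns nothing

print(score_guess)   # 0.25 -> always guessing wins under this metric
print(score_honest)  # 0.00 -> honesty is strictly dominated

# One way to flip the incentive: penalize confident wrong answers, so that
# abstaining beats guessing whenever the model is sufficiently unsure.
penalty = 0.5
score_guess_penalized = p_correct * 1 - (1 - p_correct) * penalty
print(score_guess_penalized)  # -0.125 -> now "I don't know" is the better move
```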

@arXiv_csAI_bot@mastoxiv.page
2025-08-19 10:46:20

EGOILLUSION: Benchmarking Hallucinations in Egocentric Video Understanding
Ashish Seth, Utkarsh Tyagi, Ramaneswaran Selvakumar, Nishit Anand, Sonal Kumar, Sreyan Ghosh, Ramani Duraiswami, Chirag Agarwal, Dinesh Manocha
arxiv.org/abs/2508.12687

@arXiv_csCL_bot@mastoxiv.page
2025-09-18 09:57:11

Geometric Uncertainty for Detecting and Correcting Hallucinations in LLMs
Edward Phillips, Sean Wu, Soheila Molaei, Danielle Belgrave, Anshul Thakur, David Clifton
arxiv.org/abs/2509.13813

@drgeraint@glasgow.social
2025-09-18 09:26:56

"In theory, AI model makers could eliminate hallucinations by using a dataset that contains no errors."
I think someone has fundamentally misunderstood the technology. Developing a model using a 100% correct training dataset does not mean that the resulting AI will be able to correctly answer questions that were not in the training data.
Over-fitting is a thing.
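
A toy illustration of the same point, as a hypothetical curve-fitting sketch (the sine curve and degree-9 polynomial are assumptions for illustration, not anything from the quoted article): a model that fits an error-free training set exactly can still be badly wrong on inputs that were not in the training data.

```python
# Minimal sketch, assuming a toy regression setup: a "100% correct" training
# set does not guarantee correct answers outside the training data.
import numpy as np

# Error-free training data: exact samples of sin(x), no noise at all.
x_train = np.linspace(0, 3, 10)
y_train = np.sin(x_train)

# A degree-9 polynomial interpolates the 10 clean points essentially exactly...
coeffs = np.polyfit(x_train, y_train, deg=9)
train_err = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))
print(f"max training error: {train_err:.1e}")  # ~0: the clean data is nailed

# ...but its answers to questions outside the training data diverge badly,
# which is exactly the over-fitting failure mode the post points at.
x_new = np.array([4.0, 5.0, 6.0])
print("model predictions:", np.polyval(coeffs, x_new))
print("true values:      ", np.sin(x_new))
```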

@arXiv_csSE_bot@mastoxiv.page
2025-08-18 08:40:10

Hallucination in LLM-Based Code Generation: An Automotive Case Study
Marc Pavel, Nenad Petrovic, Lukasz Mazur, Vahid Zolfaghari, Fengjunjie Pan, Alois Knoll
arxiv.org/abs/2508.11257

@newsie@darktundra.xyz
2025-09-18 13:31:32

Librarians Are Being Asked to Find AI-Hallucinated Books 404media.co/librarians-are-bei

@shanmukhateja@social.linux.pizza
2025-07-19 12:20:35

I watched 3 episodes of Hellsing Ultimate and turned it off after Millennium was introduced.
I really thought it was going to take a different direction.
#anime #hellsing #netflix

@arXiv_csCL_bot@mastoxiv.page
2025-09-18 09:42:31

DSCC-HS: A Dynamic Self-Reinforcing Framework for Hallucination Suppression in Large Language Models
Xiao Zheng
arxiv.org/abs/2509.13702