Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csCY_bot@mastoxiv.page
2025-06-09 07:25:02

Can LLMs Talk 'Sex'? Exploring How AI Models Handle Intimate Conversations
Huiqian Lai
arxiv.org/abs/2506.05514

@ErikJonker@mastodon.social
2025-06-07 08:07:20

Interesting, "GPT-style models have a fixed memorization capacity of approximately 3.6 bits per parameter."
venturebeat.com/ai/how-much-in
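The quoted "3.6 bits per parameter" figure invites a quick back-of-envelope calculation. A minimal sketch, assuming the claim holds uniformly; the model sizes below are illustrative, not taken from the article:

```python
# Back-of-envelope estimate of total raw memorization capacity,
# using the quoted "~3.6 bits per parameter" figure for GPT-style
# models. The parameter counts are illustrative assumptions.

BITS_PER_PARAM = 3.6

def memorization_capacity_gb(n_params: float) -> float:
    """Total memorization capacity in gigabytes (8 bits = 1 byte)."""
    total_bits = n_params * BITS_PER_PARAM
    return total_bits / 8 / 1e9

for name, n in [("125M", 125e6), ("1.3B", 1.3e9), ("7B", 7e9)]:
    print(f"{name}: ~{memorization_capacity_gb(n):.2f} GB")
```

Under that assumption, even a 7B-parameter model tops out at roughly 3 GB of raw memorized content, far less than its training corpus.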

@arXiv_csSE_bot@mastoxiv.page
2025-06-10 10:11:13

Evaluating LLMs Effectiveness in Detecting and Correcting Test Smells: An Empirical Study
E. G. Santana Jr, Jander Pereira Santos Junior, Erlon P. Almeida, Iftekhar Ahmed, Paulo Anselmo da Mota Silveira Neto, Eduardo Santana de Almeida
arxiv.org/abs/2506.07594

@arXiv_csIR_bot@mastoxiv.page
2025-06-10 07:52:42

FinBERT2: A Specialized Bidirectional Encoder for Bridging the Gap in Finance-Specific Deployment of Large Language Models
Xuan Xu, Fufang Wen, Beilin Chu, Zhibing Fu, Qinhong Lin, Jiaqi Liu, Binjie Fei, Zhongliang Yang, Linna Zhou, Yu Li
arxiv.org/abs/2506.06335

@arXiv_csCR_bot@mastoxiv.page
2025-06-03 17:52:02

This arxiv.org/abs/2505.18889 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCR_…

@arXiv_csAI_bot@mastoxiv.page
2025-06-03 07:21:03

Do Language Models Mirror Human Confidence? Exploring Psychological Insights to Address Overconfidence in LLMs
Chenjun Xu, Bingbing Wen, Bin Han, Robert Wolfe, Lucy Lu Wang, Bill Howe
arxiv.org/abs/2506.00582

@lysander07@sigmoid.social
2025-05-12 08:39:14

The last leg of our brief history of NLP (so far) is the advent of large language models, with GPT-3 in 2020 and the introduction of learning from the prompt (a.k.a. few-shot learning).
T. B. Brown et al. (2020). Language models are few-shot learners. NeurIPS'20

Slide from Information System Engineering 2025 lecture, 02 - Natural Language Processing 01, A brief history of NLP, NLP Timeline.
The NLP timeline is in the middle of the page from top to bottom. The marker is at 2020. On the left side, an original screenshot of GPT-3 is shown, giving advice on how to present a talk about "Symbolic and Subsymbolic AI - An Epic Dilemma?".
The right side holds the following text: 
2020: GPT-3 was released by OpenAI, based on 45TB data crawled from the web. A “da…

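The "learning from the prompt" setup mentioned above — worked examples supplied in the prompt itself, with no gradient updates — can be sketched as a plain prompt template. A minimal illustration; the sentiment-classification task and examples are invented, not from the slide:

```python
# Minimal sketch of a few-shot ("in-context learning") prompt in the
# style introduced with GPT-3: a handful of worked examples plus a new
# query are concatenated into one prompt, and the model is expected to
# complete the final label. No fine-tuning is involved.

examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I left the theater halfway through.", "negative"),
]
query = "A moving performance by the whole cast."

prompt = "\n".join(f"Review: {text}\nSentiment: {label}\n"
                   for text, label in examples)
prompt += f"Review: {query}\nSentiment:"  # the model completes the label

print(prompt)
```

Zero-shot prompting drops the examples entirely; few-shot simply prepends them, which is why no weights change between tasks.
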
@arXiv_csCY_bot@mastoxiv.page
2025-06-05 07:16:45

Facts are Harder Than Opinions -- A Multilingual, Comparative Analysis of LLM-Based Fact-Checking Reliability
Lorraine Saju, Arnim Bleier, Jana Lasser, Claudia Wagner
arxiv.org/abs/2506.03655

@DGIInfo@openbiblio.social
2025-03-19 12:46:41

On 3 April, 15:00–16:30, the workshop "RAGtAIme – vom Custom GPT zum KI-Chatbot mit OpenAI Assistant" offers practical insights into the use of AI in libraries. Uwe Dierolf (KIT-Bibliothek) explains RAG, custom GPTs, and the development of AI-supported chatbots – no in-depth programming skills required.
Registration at:

@arXiv_csCY_bot@mastoxiv.page
2025-06-03 07:20:41

Evaluating Prompt Engineering Techniques for Accuracy and Confidence Elicitation in Medical LLMs
Nariman Naderi, Zahra Atf, Peter R Lewis, Aref Mahjoub far, Seyed Amir Ahmad Safavi-Naini, Ali Soroush
arxiv.org/abs/2506.00072