CORE-BEHRT: A Carefully Optimized and Rigorously Evaluated BEHRT
Mikkel Odgaard, Kiril Vadimovic Klein, Sanne Møller Thysen, Espen Jimenez-Solem, Martin Sillesen, Mads Nielsen
https://arxiv.org/abs/2404.15201
InstructNav: Zero-shot System for Generic Instruction Navigation in Unexplored Environment
Yuxing Long, Wenzhe Cai, Hongcheng Wang, Guanqi Zhan, Hao Dong
https://arxiv.org/abs/2406.04882
Do Large Language Models Understand Conversational Implicature -- A Case Study with a Chinese Sitcom
Shisen Yue, Siyuan Song, Xinyuan Cheng, Hai Hu
https://arxiv.org/abs/2404.19509
Abstract: Understanding the non-literal meaning of an utterance is critical for large language models (LLMs) to become human-like social communicators. In this work, we introduce SwordsmanImp, the first Chinese multi-turn-dialogue dataset targeting conversational implicature, sourced from dialogues in the Chinese sitcom My Own Swordsman. It includes 200 carefully handcrafted questions, each annotated with the Gricean maxims that have been violated. We test eight closed-source and open-source LLMs on two tasks: a multiple-choice question task and an implicature explanation task. Our results show that GPT-4 attains human-level accuracy (94%) on the multiple-choice questions, followed by CausalLM at 78.5%. Other models, including GPT-3.5 and several open-source models, achieve lower accuracies, ranging from 20% to 60%. Human raters scored the implicature explanations generated by the LLMs for reasonability, logic, and fluency. While all models produce largely fluent and self-consistent text, their explanations score low on reasonability except for GPT-4's, suggesting that most LLMs cannot produce satisfactory explanations of the implicatures in the conversations. Moreover, we find that performance does not vary significantly across Gricean maxims, suggesting that LLMs do not process implicatures derived from different maxims differently. Our data and code are available at https://github.com/sjtu-compling/llm-pragmatics.
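The abstract reports accuracy both overall and broken down by violated Gricean maxim. As a rough illustration of that scoring step, here is a minimal Python sketch; the record fields ("answer", "maxim") and the letter-based answer format are assumptions for illustration, not the actual schema of the authors' repository at https://github.com/sjtu-compling/llm-pragmatics.

```python
# Minimal sketch of scoring multiple-choice predictions on a SwordsmanImp-style
# dataset: overall accuracy plus a per-maxim breakdown.
# NOTE: the "answer" and "maxim" field names are hypothetical placeholders.
from collections import defaultdict

def score_mcq(items, model_answers):
    """Return (overall_accuracy, per_maxim_accuracy) for aligned predictions.

    items: list of dicts, each with a gold "answer" letter and the violated "maxim".
    model_answers: list of predicted letters, one per item, in the same order.
    """
    correct = 0
    by_maxim = defaultdict(lambda: [0, 0])  # maxim -> [num_correct, num_total]
    for item, pred in zip(items, model_answers):
        hit = pred.strip().upper() == item["answer"].strip().upper()
        correct += hit
        by_maxim[item["maxim"]][0] += hit
        by_maxim[item["maxim"]][1] += 1
    overall = correct / len(items)
    per_maxim = {m: c / t for m, (c, t) in by_maxim.items()}
    return overall, per_maxim

if __name__ == "__main__":
    # Toy records mirroring the paper's setup: each question is annotated
    # with the Gricean maxim its implicature violates.
    items = [
        {"answer": "A", "maxim": "quantity"},
        {"answer": "C", "maxim": "relevance"},
    ]
    preds = ["A", "B"]  # letters a model would return for each question
    overall, per_maxim = score_mcq(items, preds)
    print(f"overall accuracy: {overall:.2%}", per_maxim)
```

A per-maxim breakdown like this is what lets the authors observe that accuracy does not vary significantly across maxims.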
Satellite Drag Analysis During the May 2024 Geomagnetic Storm
William E. Parker, Richard Linares
https://arxiv.org/abs/2406.08617
Optimally Improving Cooperative Learning in a Social Setting
Shahrzad Haddadan, Cheng Xin, Jie Gao
https://arxiv.org/abs/2405.20808
This https://arxiv.org/abs/2406.05763 has been replaced.
Thermodynamics of the most generalized form of Holographic Dark Energy and some particular cases with Corrected Entropies
Sanghati Saha, Ertan Güdekli, Surajit Chattopadhyay
https://arxiv.org/abs/2405.20783