Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_condmatsuprcon_bot@mastoxiv.page
2024-04-26 07:01:36

From weak to strong-coupling superconductivity tuned by substrate in TiN films
Yixin Liu, Zulei Xu, Aobo Yu, Xiaoni Wang, Wei Peng, Yu Wu, Gang Mu, Zhi-Rong Lin
arxiv.org/abs/2404.16469

@arXiv_mathAG_bot@mastoxiv.page
2024-04-24 08:34:25

This arxiv.org/abs/2303.02066 has been replaced.
initial toot: mastoxiv.page/@arXiv_mat…

@laxsill@social.spejset.org
2024-05-20 12:43:15

[Translated from Swedish:] If this means what I think it means, it is an exceptionally reasonable, smart, and well-balanced decision by the ICC. They respond to Israel's referral of three members of the Hamas leadership for the October 7 attacks by requesting, in addition to the accused, the arrest of two members of the Israeli government (Bibi and Gallant) for their war crimes.

@shuttle@mastodon.online
2024-05-16 00:32:57

Interested in LLMs?
Get started with building your first AI tool in 10 minutes by following our article 👀
shuttle.rs/blog/2024/04/29/bui

@metacurity@infosec.exchange
2024-06-05 14:21:04

Don't miss today's seriously packed Metacurity for the most critical infosec developments you should know, including
--London hospitals grind to a halt after Qilin ransomware group hits pathology provider Synnovis,
--TikTok fixes zero-day after two high-profile accounts targeted,
--MediSecure slides into bankruptcy following ransomware attack,
--Medibank faces theoretical trillions in fines for 2022 cyberattack,
--Important rare-earth mine hit by Bian Lian ransomware group,
--GhostR claims ransomware attack on Oz freight logistics firm,
--Judge orders Canadian insurance firm to pay $15,000 per customer for cyberattack,
--US seeks extradition of hack-for-hire private investigator,
--Four people busted for seeking to sabotage Interpol system,
--Russian supermarket chain hit by a cyberattack,
--Ethical hacker releases tool to extract data collected by Microsoft's Recall,
--so much more
metacurity.com/p/london-hospit

@maxheadroom@hub.uckermark.social
2024-05-05 13:44:00

This weekend it's "days of the open atelier" in the Brandenburg countryside. We visited the wonderful place of Silke Schmidt. It's quite magical #Uckermark

Four-panel image showing sketches of a girl with flowing hair and birds flying around on the first two panels, and a sketch of a seated woman reflected in a mirror on the third panel. The fourth panel depicts the reflection of the girl sketch blurred as if
Artwork depicting a line drawing of a young girl's face with a bird perched to the right, set against a blue-gradient background, displayed on a wall.
A framed painting depicting an interior scene with a chandelier is displayed on a surface, with another artwork partially visible in the background.
@arXiv_mathGT_bot@mastoxiv.page
2024-06-12 06:56:22

A survey on the Le-Murakami-Ohtsuki invariant for closed 3-manifolds
Benjamin Enriquez, Anderson Vera
arxiv.org/abs/2406.06857

@arXiv_csCL_bot@mastoxiv.page
2024-05-01 06:48:59

Do Large Language Models Understand Conversational Implicature -- A case study with a Chinese sitcom
Shisen Yue, Siyuan Song, Xinyuan Cheng, Hai Hu
arxiv.org/abs/2404.19509 arxiv.org/pdf/2404.19509
arXiv:2404.19509v1 Announce Type: new
Abstract: Understanding the non-literal meaning of an utterance is critical for large language models (LLMs) to become human-like social communicators. In this work, we introduce SwordsmanImp, the first Chinese multi-turn-dialogue-based dataset aimed at conversational implicature, sourced from dialogues in the Chinese sitcom $\textit{My Own Swordsman}$. It includes 200 carefully handcrafted questions, all annotated on which Gricean maxims have been violated. We test eight close-source and open-source LLMs under two tasks: a multiple-choice question task and an implicature explanation task. Our results show that GPT-4 attains human-level accuracy (94%) on multiple-choice questions. CausalLM demonstrates a 78.5% accuracy following GPT-4. Other models, including GPT-3.5 and several open-source models, demonstrate a lower accuracy ranging from 20% to 60% on multiple-choice questions. Human raters were asked to rate the explanation of the implicatures generated by LLMs on their reasonability, logic and fluency. While all models generate largely fluent and self-consistent text, their explanations score low on reasonability except for GPT-4, suggesting that most LLMs cannot produce satisfactory explanations of the implicatures in the conversation. Moreover, we find LLMs' performance does not vary significantly by Gricean maxims, suggesting that LLMs do not seem to process implicatures derived from different maxims differently. Our data and code are available at github.com/sjtu-compling/llm-p.
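The maxim-wise accuracy comparison the abstract mentions can be sketched as follows. This is a minimal illustration with hypothetical records and field names — the actual SwordsmanImp data format and evaluation code are not shown in the toot:

```python
from collections import defaultdict

# Hypothetical records mimicking the setup described in the abstract:
# each multiple-choice question is annotated with the Gricean maxim violated,
# and a model prediction is compared against the gold answer.
predictions = [
    {"maxim": "quantity", "gold": "B", "pred": "B"},
    {"maxim": "quality",  "gold": "A", "pred": "C"},
    {"maxim": "relation", "gold": "D", "pred": "D"},
    {"maxim": "manner",   "gold": "A", "pred": "A"},
]

def accuracy_by_maxim(records):
    """Return overall accuracy and per-maxim accuracy for a list of records."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["maxim"]] += 1
        hits[r["maxim"]] += int(r["gold"] == r["pred"])
    overall = sum(hits.values()) / len(records)
    return overall, {m: hits[m] / totals[m] for m in totals}

overall, per_maxim = accuracy_by_maxim(predictions)
print(overall)    # 0.75 on this toy sample
print(per_maxim)  # e.g. {'quantity': 1.0, 'quality': 0.0, ...}
```

Grouping accuracy by the violated maxim is what would reveal (or, as the paper reports, fail to reveal) systematic differences in how models handle implicatures from different maxims.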

@arXiv_csIR_bot@mastoxiv.page
2024-05-09 06:50:14

LLMs Can Patch Up Missing Relevance Judgments in Evaluation
Shivani Upadhyay, Ehsan Kamalloo, Jimmy Lin
arxiv.org/abs/2405.04727

@arXiv_condmatstrel_bot@mastoxiv.page
2024-06-04 09:25:12

This arxiv.org/abs/2405.14811 has been replaced.
initial toot: mastoxiv.page/@arX…