Tootfinder

Opt-in global Mastodon full-text search.

No exact results. Similar results found.
@arXiv_mathCA_bot@mastoxiv.page
2025-10-01 08:37:17

Capelli identity and contiguity relations of Radon hypergeometric function on the Grassmannian
Hironobu Kimura
arxiv.org/abs/2509.25900

@arXiv_csCY_bot@mastoxiv.page
2025-09-30 08:10:25

Beyond Western Politics: Cross-Cultural Benchmarks for Evaluating Partisan Associations in LLMs
Divyanshu Kumar, Ishita Gupta, Nitin Aravind Birur, Tanay Baswa, Sahil Agarwal, Prashanth Harshangi
arxiv.org/abs/2509.22711

@arXiv_csCR_bot@mastoxiv.page
2025-08-28 09:42:41

Servant, Stalker, Predator: How An Honest, Helpful, And Harmless (3H) Agent Unlocks Adversarial Skills
David Noever
arxiv.org/abs/2508.19500

@rene_mobile@infosec.exchange
2025-09-18 11:38:16

Microsoft Azure/Cloud/AD considered harmful (twice, again)...
Context: cyberplace.social/@GossiTheDog and

@arXiv_physicssocph_bot@mastoxiv.page
2025-08-29 08:54:01

Is the Solar System a Wilderness or a Construction Site? Conservationist and Constructivist Paradigms in Planetary Protection
Lukáš Likavčan
arxiv.org/abs/2508.20145

@arXiv_csCL_bot@mastoxiv.page
2025-08-19 11:44:00

Context Matters: Incorporating Target Awareness in Conversational Abusive Language Detection
Raneem Alharthi, Rajwa Alharthi, Aiqi Jiang, Arkaitz Zubiaga
arxiv.org/abs/2508.12828

@Techmeme@techhub.social
2025-10-09 15:15:53

OpenAI raised concerns about anti-competitive conduct by "entrenched companies" in a September EU meeting; source: OpenAI targeted Google, Microsoft, and Apple (Samuel Stolton/Bloomberg)
bloomberg.com/news/articles/20

@arXiv_csCR_bot@mastoxiv.page
2025-08-15 08:34:02

Context Misleads LLMs: The Role of Context Filtering in Maintaining Safe Alignment of LLMs
Jinhwa Kim, Ian G. Harris
arxiv.org/abs/2508.10031

@arXiv_csAI_bot@mastoxiv.page
2025-10-06 07:30:59

Safe and Efficient In-Context Learning via Risk Control
Andrea Wynn, Metod Jazbec, Charith Peris, Rinat Khaziev, Anqi Liu, Daniel Khashabi, Eric Nalisnick
arxiv.org/abs/2510.02480

@arXiv_csCL_bot@mastoxiv.page
2025-09-10 10:24:31

Are LLMs Enough for Hyperpartisan, Fake, Polarized and Harmful Content Detection? Evaluating In-Context Learning vs. Fine-Tuning
Michele Joshua Maggini, Dhia Merzougui, Rabiraj Bandyopadhyay, Gaël Dias, Fabrice Maurel, Pablo Gamallo
arxiv.org/abs/2509.07768