Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_qbioPE_bot@mastoxiv.page
2026-03-30 09:21:07

Crosslisted article(s) found for q-bio.PE. arxiv.org/list/q-bio.PE/new
[1/1]:
- Braess's paradox in tandem-running ants: When shortest path is not the quickest
Joy Das Bairagya, Udipta Chakraborti, Sumana Annagiri, Sagar Chakraborty
arxiv.org/abs/2603.26226 mastoxiv.page/@arXiv_physicsbi

@BBC3MusicBot@mastodonapp.uk
2026-01-29 19:12:58

🇺🇦 #NowPlaying on BBCRadio3's #ClassicalMixtape
Andrea Casarrubios, Andrea Casarrubios & Chicago Symphony Orchestra:
🎵 Lullaby (Piano quintet, 2nd mvt)
#AndreaCasarrubios #ChicagoSymphonyOrchestra

@gwire@mastodon.social
2026-02-24 12:59:13

I got a fence down the ends for that sweet CDM.
bbc.co.uk/news/articles/ce3gqr

@ErikJonker@mastodon.social
2026-03-03 16:44:17

Technological dependence on American software and cloud services: an assessment of the economic consequences in Europe - Cigref
cigref.fr/technological-depend

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:44:51

Why Pass@k Optimization Can Degrade Pass@1: Prompt Interference in LLM Post-training
Anas Barakat, Souradip Chakraborty, Khushbu Pahwa, Amrit Singh Bedi
arxiv.org/abs/2602.21189 arxiv.org/pdf/2602.21189 arxiv.org/html/2602.21189
arXiv:2602.21189v1 Announce Type: new
Abstract: Pass@k is a widely used performance metric for verifiable large language model tasks, including mathematical reasoning, code generation, and short-answer reasoning. It counts a task as solved if any of k independently sampled solutions passes a verifier. This multi-sample inference metric has motivated inference-aware fine-tuning methods that directly optimize pass@k. However, prior work reports a recurring trade-off: pass@k improves while pass@1 degrades under such methods. This trade-off is practically important because pass@1 often remains a hard operational constraint due to latency and cost budgets, imperfect verifier coverage, and the need for a reliable single-shot fallback. We study the origin of this trade-off and provide a theoretical characterization of when pass@k policy optimization can reduce pass@1 through gradient conflict induced by prompt interference. We show that pass@k policy gradients can conflict with pass@1 gradients because pass@k optimization implicitly reweights prompts toward low-success prompts; when these prompts are what we term negatively interfering, their upweighting can rotate the pass@k update direction away from the pass@1 direction. We illustrate our theoretical findings with large language model experiments on verifiable mathematical reasoning tasks.
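The abstract defines pass@k as success when any of k independently sampled solutions passes a verifier. This paper does not give an estimator, but pass@k is commonly computed with the standard unbiased estimator (from the Codex evaluation literature, not from this work); a minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: the probability that at least one of k
    solutions, drawn without replacement from n generated samples of which
    c passed the verifier, is correct."""
    if n - c < k:
        # Fewer than k failures exist, so any k-subset contains a pass.
        return 1.0
    # P(all k draws fail) = C(n-c, k) / C(n, k); pass@k is its complement.
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with n=2 samples of which c=1 passes, pass@1 is 0.5; as k grows toward n, the estimate rises toward 1 whenever at least one sample passes, which is why optimizing pass@k alone can tolerate a lower single-shot (pass@1) success rate.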