Tootfinder

Opt-in global Mastodon full text search.

@mgorny@social.treehouse.systems
2025-07-05 15:24:22

A while ago, I followed the example set by #Fedora and unbundled the ensurepip wheels from #Python in #Gentoo (just checked — "a while ago" was 3 years ago). This had the important advantage that it let us update these wheels along with the actual pip and setuptools packages, meaning new virtual environments would get fresh versions rather than whatever CPython happened to bundle at the time of release.
I had considered using our system packages to prepare these wheels, but since we were already unbundling dependencies back then, that couldn't work. So I just went with fetching upstream wheels from PyPI. Why not build them from source instead? Well, besides feeling unnecessary (it's not like the PyPI wheels are actually binary packages), we probably didn't have the right kind of eclass support for that at the time.
Inspired by @…, today I tried preparing new revisions of the ensurepip packages that actually do build everything from source. So what changed, and why should building from source matter now? Firstly, as part of the wheel reuse patches, we now have a reasonably clean architecture for grabbing the wheels created during the PEP 517 build. Secondly, since we're unbundling dependencies from pip and setuptools, we're effectively testing different packages than those installed as the ensurepip wheels — so it would be meaningful to test both variants. Thirdly, building from source will make patching easier, and at the very least enable user patching.
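To make the first point concrete, here's a minimal sketch (mine, not the actual eclass code) of what "grab the wheel produced by the PEP 517 build" amounts to, assuming the `build` frontend is available; the paths and version in it are illustrative only.

```python
#!/usr/bin/env python3
# Minimal sketch, not the Gentoo eclass logic: run a PEP 517 build of an
# unpacked source tree via the "build" frontend and pick up the resulting
# wheel, roughly the step a from-source ensurepip package performs instead
# of fetching a prebuilt wheel from PyPI.
import pathlib
import subprocess
import sys


def build_wheel(source_dir: pathlib.Path, out_dir: pathlib.Path) -> pathlib.Path:
    """Build a wheel with `python -m build --wheel` and return its path."""
    subprocess.run(
        [sys.executable, "-m", "build", "--wheel",
         "--outdir", str(out_dir), str(source_dir)],
        check=True,
    )
    wheels = sorted(out_dir.glob("*.whl"))
    if len(wheels) != 1:
        raise RuntimeError(f"expected exactly one wheel in {out_dir}, found {len(wheels)}")
    return wheels[0]


if __name__ == "__main__":
    # e.g. an unpacked pip source tree (hypothetical version); the resulting
    # wheel would then be installed into the directory ensurepip bundles.
    wheel = build_wheel(pathlib.Path("pip-25.0"), pathlib.Path("dist"))
    print(f"built {wheel}")
```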
While I was at it, I refreshed the test suite runs in all three regular packages (pip, setuptools and wheel — we need an "ensurepip" wheel for the last one because of the test suites). And of course, I hit some test failures when testing the versions with bundled dependencies, and discovered a random bug in #PyPy.
github.com/gentoo/gentoo/pull/ (yes, we haven't moved yet)
github.com/pypy/pypy/issues/53

@arXiv_csDS_bot@mastoxiv.page
2025-06-03 07:20:25

BWT for string collections
Davide Cenzato, Zsuzsanna Lipták, Nadia Pisanti, Giovanna Rosone, Marinella Sciortino
arxiv.org/abs/2506.01092

@floheinstein@chaos.social
2025-05-26 05:59:37

Unfortunately, many people will not have filed an objection with Meta against the AI use of their Facebook/Instagram data, because they thought "Ugh, yet another one of those useless I-hereby-object posts that doesn't work."
I'm probably not entirely blameless here: 11 years ago I posted a joke that many people didn't recognize as one (see from line 10 onwards)
#Meta

In response to the Facebook guidelines and under articles L.111, 112 and 113 of the code of intellectual property, I declare that my rights are attached to all my personal data, drawings, paintings, photos, texts etc... published on my profile. For commercial use of the foregoing my written consent is required at all times. Those reading this text can copy it and paste it on their Facebook wall. This will allow them to place themselves under the protection of copyright. By this release, I tell …
@frankel@mastodon.top
2025-05-22 16:15:07

Don't Unwrap Options: There Are Better Ways #Rust
corrode.dev/blog/rust-option-h

@arXiv_eessAS_bot@mastoxiv.page
2025-06-30 07:51:10

HighRateMOS: Sampling-Rate Aware Modeling for Speech Quality Assessment
Wenze Ren, Yi-Cheng Lin, Wen-Chin Huang, Ryandhimas E. Zezario, Szu-Wei Fu, Sung-Feng Huang, Erica Cooper, Haibin Wu, Hung-Yu Wei, Hsin-Min Wang, Hung-yi Lee, Yu Tsao
arxiv.org/abs/2506.21951

@arXiv_csCL_bot@mastoxiv.page
2025-06-27 09:58:19

Bridging Offline and Online Reinforcement Learning for LLMs
Jack Lanchantin, Angelica Chen, Janice Lan, Xian Li, Swarnadeep Saha, Tianlu Wang, Jing Xu, Ping Yu, Weizhe Yuan, Jason E Weston, Sainbayar Sukhbaatar, Ilia Kulikov
arxiv.org/abs/2506.21495 arxiv.org/pdf/2506.21495 arxiv.org/html/2506.21495
arXiv:2506.21495v1 Announce Type: new
Abstract: We investigate the effectiveness of reinforcement learning methods for finetuning large language models when transitioning from offline to semi-online to fully online regimes for both verifiable and non-verifiable tasks. Our experiments cover training on verifiable math as well as non-verifiable instruction following with a set of benchmark evaluations for both. Across these settings, we extensively compare online and semi-online Direct Preference Optimization and Group Reward Policy Optimization objectives, and surprisingly find similar performance and convergence between these variants, which all strongly outperform offline methods. We provide a detailed analysis of the training dynamics and hyperparameter selection strategies to achieve optimal results. Finally, we show that multi-tasking with verifiable and non-verifiable rewards jointly yields improved performance across both task types.
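For context (my addition, not part of the abstract): the offline Direct Preference Optimization objective that the online and semi-online variants build on is usually written as

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$

where $y_w$ and $y_l$ are the preferred and rejected responses and $\pi_{\mathrm{ref}}$ is the frozen reference policy; the semi-online and fully online regimes the abstract compares differ mainly in how often the preference pairs are re-sampled from the current policy instead of being drawn from a fixed offline dataset.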

@CerstinMahlow@mastodon.acm.org
2025-06-09 14:43:52

TIL (on the IC5): you don't get your make-up tips from Italian Vogue anymore, but from TikTok! Same for suggestions for little summer restaurants to visit.
Next to me, a grandma (about 75) and her granddaughter (about 25) are sitting together, very sweet.
Next topic: relatives and friends in the film business, and in general. After that: where which pub used to be, good for a cozy celebration, but with proper chairs! Next: settling the taxi money (grandma to granddaughter, obviously)
And right now I really miss my grandma

@vrandecic@mas.to
2025-05-07 20:21:12

It seems that chatting with AIs can either cause or trigger psychotic episodes
Please be careful. My suggestion, if you're using AI frequently, would be to tell someone you trust that you're using AI frequently, ask them to keep an eye on you, and even allow them to access your chat log.