Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@kexpmusicbot@mastodonapp.uk
2026-01-26 04:51:41

🇺🇦 #NowPlaying on KEXP's #SundaySoul
Chuck Carbo:
🎵 Can I Be Your Squeeze
#ChuckCarbo
tuffcity.com/track/can-i-be-yo
open.spotify.com/track/4XSI9oi

@gwire@mastodon.social
2026-02-24 12:59:13

I got a fence down the ends for that sweet CDM.
bbc.co.uk/news/articles/ce3gqr

@kexpmusicbot@mastodonapp.uk
2025-12-24 10:38:40

🇺🇦 #NowPlaying on KEXP's #VarietyMix
Vampire Weekend:
🎵 Holiday
#VampireWeekend
thechillestcellist.bandcamp.co
open.spotify.com/track/4cYZReb

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:44:51

Why Pass@k Optimization Can Degrade Pass@1: Prompt Interference in LLM Post-training
Anas Barakat, Souradip Chakraborty, Khushbu Pahwa, Amrit Singh Bedi
arxiv.org/abs/2602.21189 arxiv.org/pdf/2602.21189 arxiv.org/html/2602.21189
arXiv:2602.21189v1 Announce Type: new
Abstract: Pass@k is a widely used performance metric for verifiable large language model tasks, including mathematical reasoning, code generation, and short-answer reasoning. It counts a task as solved if any of k independently sampled solutions passes a verifier. This multi-sample inference metric has motivated inference-aware fine-tuning methods that directly optimize pass@k. However, prior work reports a recurring trade-off: pass@k improves while pass@1 degrades under such methods. This trade-off is practically important because pass@1 often remains a hard operational constraint due to latency and cost budgets, imperfect verifier coverage, and the need for a reliable single-shot fallback. We study the origin of this trade-off and provide a theoretical characterization of when pass@k policy optimization can reduce pass@1 through gradient conflict induced by prompt interference. We show that pass@k policy gradients can conflict with pass@1 gradients because pass@k optimization implicitly reweights prompts toward low-success prompts; when these prompts are what we term negatively interfering, their upweighting can rotate the pass@k update direction away from the pass@1 direction. We illustrate our theoretical findings with large language model experiments on verifiable mathematical reasoning tasks.
toXiv_bot_toot
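As background for the abstract above: pass@k is usually not measured by literally drawing k samples, but with the unbiased estimator from n generations of which c pass the verifier. A minimal sketch (the estimator is standard; the function name is my own):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: probability that at least one of k
    solutions drawn without replacement from n samples (c correct)
    passes the verifier. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # must include a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With this form, the trade-off the paper studies is easy to see: pass@k can rise because a few extra low-success prompts gain one correct sample each, even while pass@1 (the k = 1 case) falls.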

@matematico314@social.linux.pizza
2025-12-09 17:11:38

#LB Hahaha, AI in magic actually makes sense. Magic is nothing but hallucination, and AI is a specialist in that, haha.
laserdisc.party/@checkervest/1

@Techmeme@techhub.social
2025-12-18 05:10:40

India-listed RRP Semiconductor's stock surged 55,000% in the 20 months through Dec. 17, despite negligible revenue; source: India's SEBI is examining the surge (Chiranjivi Chakraborty/Bloomberg)
bloomberg.com/news/articles/20

@toxi@mastodon.thi.ng
2026-01-16 11:27:56

New #ThingUmbrella example to create a parametric, grid layout-based calibration sheet for black and white photography development. The sheet includes different swatches and gradients to measure results/responses of different exposure times and developer solutions/processes. The sheet also includes a placeholder for a custom test image to be added later...
All sheet components are pa…

Generated grayscale calibration sheet as discussed in the post. The top part of the image has three rows of grayscale swatches in different gradations, followed by a row of opposing grayscale gradients with 10% markings. Below is a column of two radial gradients: the first one white to black, the other inverted, both with 10% markings (as hemi-arcs). On the right, four bands of vertical gradients, each with a superimposed low contrast checkerboard pattern. The remaining space is reserved for an…
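The swatch rows described above are straightforward to generate parametrically. A minimal, hypothetical stand-in (plain Python emitting a PGM step wedge, not the actual thi.ng/umbrella TypeScript example) for one row of grayscale calibration swatches:

```python
def step_wedge(steps: int = 11, swatch_w: int = 20, h: int = 16) -> str:
    """Render one row of `steps` grayscale swatches, white to black,
    as a plain-text PGM (P2) image string."""
    width = steps * swatch_w
    rows = []
    for _ in range(h):
        row = []
        for x in range(width):
            # Quantize x into a swatch index, then map linearly 255 -> 0.
            level = 255 - (x // swatch_w) * 255 // (steps - 1)
            row.append(str(level))
        rows.append(" ".join(row))
    return f"P2\n{width} {h}\n255\n" + "\n".join(rows) + "\n"
```

An 11-step wedge in 10% increments matches the "10% markings" mentioned in the sheet description; the gradients and checkerboard overlays would follow the same per-pixel pattern.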
@simon_brooke@mastodon.scot
2026-02-05 10:09:59

"While the rest of us headed into years of immiseration, the filthy rich carried on regardless – and they did so with the willing aid of the centre-left elite, whether Peter #Mandelson or the French Socialists or the US Democrats." -- Aditya Chakrabortty
#Kleptocracy

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:45

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[3/5]:
- Look-Ahead Reasoning on Learning Platforms
Haiqing Zhu, Tijana Zrnic, Celestine Mendler-Dünner
arxiv.org/abs/2511.14745 mastoxiv.page/@arXiv_csLG_bot/
- Deep Gaussian Process Proximal Policy Optimization
Matthijs van der Lende, Juan Cardenas-Cartagena
arxiv.org/abs/2511.18214 mastoxiv.page/@arXiv_csLG_bot/
- Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory
Akira Tamamori
arxiv.org/abs/2511.23083 mastoxiv.page/@arXiv_csLG_bot/
- xGR: Efficient Generative Recommendation Serving at Scale
Sun, Liu, Zhang, Wu, Yang, Liang, Li, Ma, Liang, Ren, Zhang, Liu, Zhang, Qian, Yang
arxiv.org/abs/2512.11529 mastoxiv.page/@arXiv_csLG_bot/
- Credit Risk Estimation with Non-Financial Features: Evidence from a Synthetic Istanbul Dataset
Atalay Denknalbant, Emre Sezdi, Zeki Furkan Kutlu, Polat Goktas
arxiv.org/abs/2512.12783 mastoxiv.page/@arXiv_csLG_bot/
- The Semantic Illusion: Certified Limits of Embedding-Based Hallucination Detection in RAG Systems
Debu Sinha
arxiv.org/abs/2512.15068 mastoxiv.page/@arXiv_csLG_bot/
- Towards Reproducibility in Predictive Process Mining: SPICE -- A Deep Learning Library
Stritzel, Hühnerbein, Rauch, Zarate, Fleischmann, Buck, Lischka, Frey
arxiv.org/abs/2512.16715 mastoxiv.page/@arXiv_csLG_bot/
- Differentially private Bayesian tests
Abhisek Chakraborty, Saptati Datta
arxiv.org/abs/2401.15502 mastoxiv.page/@arXiv_statML_bo
- SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning
Paul Mangold, Sergey Samsonov, Safwan Labbi, Ilya Levin, Reda Alami, Alexey Naumov, Eric Moulines
arxiv.org/abs/2402.04114
- Adjusting Model Size in Continual Gaussian Processes: How Big is Big Enough?
Guiomar Pescador-Barrios, Sarah Filippi, Mark van der Wilk
arxiv.org/abs/2408.07588 mastoxiv.page/@arXiv_statML_bo
- Non-Perturbative Trivializing Flows for Lattice Gauge Theories
Mathis Gerdes, Pim de Haan, Roberto Bondesan, Miranda C. N. Cheng
arxiv.org/abs/2410.13161 mastoxiv.page/@arXiv_heplat_bo
- Dynamic PET Image Prediction Using a Network Combining Reversible and Irreversible Modules
Sun, Zhang, Xia, Sun, Chen, Yang, Liu, Zhu, Liu
arxiv.org/abs/2410.22674 mastoxiv.page/@arXiv_eessIV_bo
- Targeted Learning for Variable Importance
Xiaohan Wang, Yunzhe Zhou, Giles Hooker
arxiv.org/abs/2411.02221 mastoxiv.page/@arXiv_statML_bo
- Refined Analysis of Federated Averaging and Federated Richardson-Romberg
Paul Mangold, Alain Durmus, Aymeric Dieuleveut, Sergey Samsonov, Eric Moulines
arxiv.org/abs/2412.01389 mastoxiv.page/@arXiv_statML_bo
- Embedding-Driven Data Distillation for 360-Degree IQA With Residual-Aware Refinement
Abderrezzaq Sendjasni, Seif-Eddine Benkabou, Mohamed-Chaker Larabi
arxiv.org/abs/2412.12667 mastoxiv.page/@arXiv_csCV_bot/
- 3D Cell Oversegmentation Correction via Geo-Wasserstein Divergence
Peter Chen, Bryan Chang, Olivia A Creasey, Julie Beth Sneddon, Zev J Gartner, Yining Liu
arxiv.org/abs/2502.01890 mastoxiv.page/@arXiv_csCV_bot/
- DHP: Discrete Hierarchical Planning for Hierarchical Reinforcement Learning Agents
Shashank Sharma, Janina Hoffmann, Vinay Namboodiri
arxiv.org/abs/2502.01956 mastoxiv.page/@arXiv_csRO_bot/
- Foundation for unbiased cross-validation of spatio-temporal models for species distribution modeling
Diana Koldasbayeva, Alexey Zaytsev
arxiv.org/abs/2502.03480
- GraphCompNet: A Position-Aware Model for Predicting and Compensating Shape Deviations in 3D Printing
Juheon Lee, Lei (Rachel) Chen, Juan Carlos Catana, Hui Wang, Jun Zeng
arxiv.org/abs/2502.09652 mastoxiv.page/@arXiv_csCV_bot/
- LookAhead Tuning: Safer Language Models via Partial Answer Previews
Liu, Wang, Luo, Yuan, Sun, Liang, Zhang, Zhou, Hooi, Deng
arxiv.org/abs/2503.19041 mastoxiv.page/@arXiv_csCL_bot/
- Constraint-based causal discovery with tiered background knowledge and latent variables in single...
Christine W. Bang, Vanessa Didelez
arxiv.org/abs/2503.21526 mastoxiv.page/@arXiv_statML_bo
toXiv_bot_toot

@toxi@mastodon.thi.ng
2026-02-16 09:42:04

Made new test prints on some off-cuts, using a slightly stronger developer solution than usual to see impact on max. depth. The main image (Eagle Creek, Oregon) is using 18% sodium acetate (curve corrected negative), the test strips are of 20% and 15% solutions (both uncorrected). The phone capture doesn't really show the differences too well, but I think I will go for the 18-20% from now on...
(Btw. The original image is here:

iPhone photo of three printed pieces of paper on a table, the largest showing a kallitype print of a moody scene of a mountain creek flowing through a wintry forest canyon. Below the main image are two smaller test strips showing gradients and checkerboard patterns (partially cut off)