Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@burger_jaap@mastodon.social
2026-02-23 13:06:21

Italy may be the first EU country to impose requirements on private charging points in its transposition of the EU REDIII into national law: newly installed private charging points must be able to communicate with smart meters from 30 June 2026 onwards.
normattiva.it/uri-res/N2Ls?u…

Art. 23
Inserimento dell'articolo 45-bis al decreto legislativo
8 novembre 2021, n. 199
1. Dopo l'articolo 45 del decreto legislativo 8 novembre 2021, n. 199, è aggiunto il
seguente:
«Art. 45-bis (Funzionalità di ricarica intelligente). - 1. A partire dal 30 giugno 2026, al
fine di garantire funzionalità di ricarica intelligente e di comunicazione diretta con i
sistemi di misurazione intelligenti, tutti i punti di ricarica di potenza standard, nuovi e
sostituiti, non accessibili al pubblico, in…
Article 23
Insertion of Article 45-bis into Legislative Decree
8 November 2021, no. 199
1. After Article 45 of Legislative Decree No. 199 of 8 November 2021, the following is
added: "Art. 45-bis (Smart charging functionality). - 1. From 30 June 2026, in order to
ensure smart charging functionality and direct communication with smart metering
systems, all new and replaced standard power charging points that are not accessible to
the public and installed on national territory shall be certified i…
@andycarolan@social.lol
2026-02-23 09:22:31

Can't we all just give up on Discord already and spin up a few self hosted forums instead?
I mean, IRC is fine, but I would argue that the onboarding isn't easy for everyone.

Here was the vice president defending the administration’s vile immigration policies
in a way that fundamentally degrades the experiences and traditions of his own family,
of people he is bound by vows—vows that should be sacred to a Christian—to love and protect.
It sums up Vance’s journey into public life and politics:
There is nobody he won’t betray,
and no principle he won’t cast aside, in his quest to accrue more fame and power.
This has been clear…

@qurlyjoe@mstdn.social
2026-02-22 17:33:53

What’s with all the grey cars and trucks lately? Why would anyone deliberately spend that much money to buy a vehicle that is nearly invisible in low-light conditions? Is this evidence of a death wish, or just another manifestation of the millennial grey fashion meme in clothes and interior decorating? Disclosure: my car is orange—highly visible. Official Subaru marketing name: Sunshine Orange. My unofficial name for it: Orange Sunshine, to commemorate some fun times in my youth.

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:44:51

Why Pass@k Optimization Can Degrade Pass@1: Prompt Interference in LLM Post-training
Anas Barakat, Souradip Chakraborty, Khushbu Pahwa, Amrit Singh Bedi
arxiv.org/abs/2602.21189 arxiv.org/pdf/2602.21189 arxiv.org/html/2602.21189
arXiv:2602.21189v1 Announce Type: new
Abstract: Pass@k is a widely used performance metric for verifiable large language model tasks, including mathematical reasoning, code generation, and short-answer reasoning. It defines success if any of k independently sampled solutions passes a verifier. This multi-sample inference metric has motivated inference-aware fine-tuning methods that directly optimize pass@k. However, prior work reports a recurring trade-off: pass@k improves while pass@1 degrades under such methods. This trade-off is practically important because pass@1 often remains a hard operational constraint due to latency and cost budgets, imperfect verifier coverage, and the need for a reliable single-shot fallback. We study the origin of this trade-off and provide a theoretical characterization of when pass@k policy optimization can reduce pass@1 through gradient conflict induced by prompt interference. We show that pass@k policy gradients can conflict with pass@1 gradients because pass@k optimization implicitly reweights prompts toward low-success prompts; when these prompts are what we term negatively interfering, their upweighting can rotate the pass@k update direction away from the pass@1 direction. We illustrate our theoretical findings with large language model experiments on verifiable mathematical reasoning tasks.
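Pass@k as defined in the abstract (success if any of k samples passes the verifier) is in practice usually estimated with the standard unbiased combinatorial formula from the code-generation literature; this sketch is illustrative and not taken from the paper itself:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that a random size-k
    subset of n generated samples, of which c passed the verifier,
    contains at least one passing sample."""
    if n - c < k:
        # fewer than k failing samples exist, so every subset succeeds
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples and 2 verified-correct, pass@1 is 0.2,
# while pass@5 is considerably higher.
print(pass_at_k(10, 2, 1))  # 0.2
print(pass_at_k(10, 2, 5))
```

Note that pass@1 with a single sample per prompt reduces to the plain success rate, which is why the paper treats the two metrics as distinct optimization targets.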
toXiv_bot_toot

@hex@kolektiva.social
2025-12-20 23:22:58

So in another dream I just woke up from, I was talking to someone about "the idea problem" (that it's becoming harder to monetize ideas, from a Vox article written by an AI-cooked reporter).
iheart.com/podcast/105-it-coul
Basically, I was arguing that the majority of inventions target men because patriarchy puts economic control in men's hands. As men have started to help more with childcare, there have been more inventions related to childcare. (I don't have any idea if this is true. Seems legit, but I'm just relating my dream. I think I was also oversimplifying a bit to "men" and "women" because of my audience, but anyway it was a dream.) There's actually more low-hanging fruit, I pointed out, related to making care work easier.
So I argued that the real problem was a failure to invest in research into solving that problem. Today there are all these boondoggles built around killing people. What if, instead of all this government research into killing people, we dumped a ton of money into making it easier to support a household? That would be great for the economy. (Being asleep, I seem to have forgotten that working people need money.)
In the blur of being just awake I started thinking about how you could kickstart the US economy by taking the money from the AI boondoggle and other autonomous murder bots and creating something like a program to build robots for housekeepers. You'd still be funding tech with government money, so the same horrible people get paid, but you're now actually solving real problems. It wouldn't even matter if it was a boondoggle, honestly. Just dumping money into something other than murdering people is good enough.
I first imagined a program to fund a robot housecleaner, like a robot dog with AI for laundry pickup, that would be provided, free of charge, to help people with children. It would work the same as the military boondoggle where a private company makes the government buy a piece of hardware from them and then also pay them to service it for some number of years. But instead of that hardware sitting around waiting to kill someone, it would be getting brought to people's houses to help them.
Then I thought, hey, you could even boost the economy more if you just had government funding for doulas and housecleaners and paid them a living wage. Hey, you could really kickstart the economy by nationalizing healthcare and including doula support as part of all births. Oh, and you could also just include the optional household help for families with children until the kids turn 18.
None of this is perfect (I don't actually think most of this is possible from any state), but the point is that it's actually wildly easy to figure out all kinds of ways to invest in the economy and monetize ideas as long as you aren't entirely focused on the same old "make money from spying on people and killing them." Funny that. Like they said in the podcast, maybe "finding ideas" isn't the problem.
Hope you enjoyed the weird semi-awake brain dump/rant.

@theodric@social.linux.pizza
2025-12-24 18:30:25

I made nog again

# My eggnog recipe (iterated over a synthesis of historic American recipes)

Ingredients for about 1-1.5 liters of nog (depending on booze)

- 6 yolks
- Optional: 1-2 whites, beaten to stiff peaks
- 100g sugar
- 2x vanilla beans
- About 0.75x nutmeg, grated fine
- 1x 250ml small pot double cream
- 750ml full-fat++ milk (6% is optimal)
- Optional, but delicious: 500ml Napoleon brandy, white rum, vodka, or similar. Don't go too oaky.

Destructions

- Whisk egg yolks and sugar together until cream…
(DA NOG)

-continuing...-

- If using raw milk, hell yeah brother. Up to you if you want to pasteurize the lot of it: hold at 72°C for a couple minutes, then put the whole pan in the freezer and chill, stirring every 30-45 minutes until desired temperature is reached
- Optional: beat whites to stiff peaks, then fold through nog to add foamy fluffiness
- Add booze to taste. Also perfectly drinkable without alcohol!

Serves one, if I have anything to say about it.

@raysofred@discordian.social
2026-02-23 00:11:17

Suicide jokes are only funny when I make them.

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:07:37

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/6]:
- Towards Attributions of Input Variables in a Coalition
Xinhao Zheng, Huiqi Deng, Quanshi Zhang
arxiv.org/abs/2309.13411
- Knee or ROC
Veronica Wendt, Jacob Steiner, Byunggu Yu, Caleb Kelly, Justin Kim
arxiv.org/abs/2401.07390
- Rethinking Disentanglement under Dependent Factors of Variation
Antonio Almudévar, Alfonso Ortega
arxiv.org/abs/2408.07016 mastoxiv.page/@arXiv_csLG_bot/
- Minibatch Optimal Transport and Perplexity Bound Estimation in Discrete Flow Matching
Etrit Haxholli, Yeti Z. Gurbuz, Ogul Can, Eli Waxman
arxiv.org/abs/2411.00759 mastoxiv.page/@arXiv_csLG_bot/
- Predicting Subway Passenger Flows under Incident Situation with Causality
Xiannan Huang, Shuhan Qiu, Quan Yuan, Chao Yang
arxiv.org/abs/2412.06871 mastoxiv.page/@arXiv_csLG_bot/
- Characterizing LLM Inference Energy-Performance Tradeoffs across Workloads and GPU Scaling
Paul Joe Maliakel, Shashikant Ilager, Ivona Brandic
arxiv.org/abs/2501.08219 mastoxiv.page/@arXiv_csLG_bot/
- Universality of Benign Overfitting in Binary Linear Classification
Ichiro Hashimoto, Stanislav Volgushev, Piotr Zwiernik
arxiv.org/abs/2501.10538 mastoxiv.page/@arXiv_csLG_bot/
- Safe Reinforcement Learning for Real-World Engine Control
Julian Bedei, Lucas Koch, Kevin Badalian, Alexander Winkler, Patrick Schaber, Jakob Andert
arxiv.org/abs/2501.16613 mastoxiv.page/@arXiv_csLG_bot/
- A Statistical Learning Perspective on Semi-dual Adversarial Neural Optimal Transport Solvers
Roman Tarasov, Petr Mokrov, Milena Gazdieva, Evgeny Burnaev, Alexander Korotin
arxiv.org/abs/2502.01310
- Improving the Convergence of Private Shuffled Gradient Methods with Public Data
Shuli Jiang, Pranay Sharma, Zhiwei Steven Wu, Gauri Joshi
arxiv.org/abs/2502.03652 mastoxiv.page/@arXiv_csLG_bot/
- Using the Path of Least Resistance to Explain Deep Networks
Sina Salek, Joseph Enguehard
arxiv.org/abs/2502.12108 mastoxiv.page/@arXiv_csLG_bot/
- Distributional Vision-Language Alignment by Cauchy-Schwarz Divergence
Wenzhe Yin, Zehao Xiao, Pan Zhou, Shujian Yu, Jiayi Shen, Jan-Jakob Sonke, Efstratios Gavves
arxiv.org/abs/2502.17028 mastoxiv.page/@arXiv_csLG_bot/
- Armijo Line-search Can Make (Stochastic) Gradient Descent Provably Faster
Sharan Vaswani, Reza Babanezhad
arxiv.org/abs/2503.00229 mastoxiv.page/@arXiv_csLG_bot/
- Semantic Parallelism: Redefining Efficient MoE Inference via Model-Data Co-Scheduling
Yan Li, Zhenyu Zhang, Zhengang Wang, Pengfei Chen, Pengfei Zheng
arxiv.org/abs/2503.04398 mastoxiv.page/@arXiv_csLG_bot/
- A Survey on Federated Fine-tuning of Large Language Models
Wu, Tian, Li, Sun, Tam, Zhou, Liao, Xiong, Guo, Li, Xu
arxiv.org/abs/2503.12016 mastoxiv.page/@arXiv_csLG_bot/
- Towards Trustworthy GUI Agents: A Survey
Yucheng Shi, Wenhao Yu, Jingyuan Huang, Wenlin Yao, Wenhu Chen, Ninghao Liu
arxiv.org/abs/2503.23434 mastoxiv.page/@arXiv_csLG_bot/
- CONTINA: Confidence Interval for Traffic Demand Prediction with Coverage Guarantee
Chao Yang, Xiannan Huang, Shuhan Qiu, Yan Cheng
arxiv.org/abs/2504.13961 mastoxiv.page/@arXiv_csLG_bot/
- Regularity and Stability Properties of Selective SSMs with Discontinuous Gating
Nikola Zubić, Davide Scaramuzza
arxiv.org/abs/2505.11602 mastoxiv.page/@arXiv_csLG_bot/
- RECON: Robust symmetry discovery via Explicit Canonical Orientation Normalization
Alonso Urbano, David W. Romero, Max Zimmer, Sebastian Pokutta
arxiv.org/abs/2505.13289 mastoxiv.page/@arXiv_csLG_bot/
- RefLoRA: Refactored Low-Rank Adaptation for Efficient Fine-Tuning of Large Models
Yilang Zhang, Bingcong Li, Georgios B. Giannakis
arxiv.org/abs/2505.18877 mastoxiv.page/@arXiv_csLG_bot/
- SuperMAN: Interpretable and Expressive Networks over Temporally Sparse Heterogeneous Data
Bechler-Speicher, Zerio, Huri, Vestergaard, Gilad-Bachrach, Jess, Bhatt, Sazonovs
arxiv.org/abs/2505.19193 mastoxiv.page/@arXiv_csLG_bot/

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:07:47

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/6]:
- Performance Asymmetry in Model-Based Reinforcement Learning
Jing Yu Lim, Rushi Shah, Zarif Ikram, Samson Yu, Haozhe Ma, Tze-Yun Leong, Dianbo Liu
arxiv.org/abs/2505.19698 mastoxiv.page/@arXiv_csLG_bot/
- Towards Robust Real-World Multivariate Time Series Forecasting: A Unified Framework for Dependenc...
Jinkwan Jang, Hyungjin Park, Jinmyeong Choi, Taesup Kim
arxiv.org/abs/2506.08660 mastoxiv.page/@arXiv_csLG_bot/
- Wasserstein Barycenter Soft Actor-Critic
Zahra Shahrooei, Ali Baheri
arxiv.org/abs/2506.10167 mastoxiv.page/@arXiv_csLG_bot/
- Foundation Models for Causal Inference via Prior-Data Fitted Networks
Yuchen Ma, Dennis Frauen, Emil Javurek, Stefan Feuerriegel
arxiv.org/abs/2506.10914 mastoxiv.page/@arXiv_csLG_bot/
- FREQuency ATTribution: benchmarking frequency-based occlusion for time series data
Dominique Mercier, Andreas Dengel, Sheraz Ahmed
arxiv.org/abs/2506.18481 mastoxiv.page/@arXiv_csLG_bot/
- Complexity-aware fine-tuning
Andrey Goncharov, Daniil Vyazhev, Petr Sychev, Edvard Khalafyan, Alexey Zaytsev
arxiv.org/abs/2506.21220 mastoxiv.page/@arXiv_csLG_bot/
- Transfer Learning in Infinite Width Feature Learning Networks
Clarissa Lauditi, Blake Bordelon, Cengiz Pehlevan
arxiv.org/abs/2507.04448 mastoxiv.page/@arXiv_csLG_bot/
- A hierarchy tree data structure for behavior-based user segment representation
Liu, Kang, Iyer, Malik, Li, Wang, Lu, Zhao, Wang, Liu, Liu, Liang, Yu
arxiv.org/abs/2508.01115 mastoxiv.page/@arXiv_csLG_bot/
- One-Step Flow Q-Learning: Addressing the Diffusion Policy Bottleneck in Offline Reinforcement Lea...
Thanh Nguyen, Chang D. Yoo
arxiv.org/abs/2508.13904 mastoxiv.page/@arXiv_csLG_bot/
- Uncertainty Propagation Networks for Neural Ordinary Differential Equations
Hadi Jahanshahi, Zheng H. Zhu
arxiv.org/abs/2508.16815 mastoxiv.page/@arXiv_csLG_bot/
- Learning Unified Representations from Heterogeneous Data for Robust Heart Rate Modeling
Zhengdong Huang, Zicheng Xie, Wentao Tian, Jingyu Liu, Lunhong Dong, Peng Yang
arxiv.org/abs/2508.21785 mastoxiv.page/@arXiv_csLG_bot/
- Monte Carlo Tree Diffusion with Multiple Experts for Protein Design
Liu, Cao, Jiang, Luo, Duan, Wang, Sosnick, Xu, Stevens
arxiv.org/abs/2509.15796 mastoxiv.page/@arXiv_csLG_bot/
- From Samples to Scenarios: A New Paradigm for Probabilistic Forecasting
Xilin Dai, Zhijian Xu, Wanxu Cai, Qiang Xu
arxiv.org/abs/2509.19975 mastoxiv.page/@arXiv_csLG_bot/
- Why High-rank Neural Networks Generalize?: An Algebraic Framework with RKHSs
Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
arxiv.org/abs/2509.21895 mastoxiv.page/@arXiv_csLG_bot/
- From Parameters to Behaviors: Unsupervised Compression of the Policy Space
Davide Tenedini, Riccardo Zamboni, Mirco Mutti, Marcello Restelli
arxiv.org/abs/2509.22566 mastoxiv.page/@arXiv_csLG_bot/
- RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility
Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang
arxiv.org/abs/2509.23115 mastoxiv.page/@arXiv_csLG_bot/
- Polychromic Objectives for Reinforcement Learning
Jubayer Ibn Hamid, Ifdita Hasan Orney, Ellen Xu, Chelsea Finn, Dorsa Sadigh
arxiv.org/abs/2509.25424 mastoxiv.page/@arXiv_csLG_bot/
- Recursive Self-Aggregation Unlocks Deep Thinking in Large Language Models
Siddarth Venkatraman, et al.
arxiv.org/abs/2509.26626 mastoxiv.page/@arXiv_csLG_bot/
- Cautious Weight Decay
Chen, Li, Liang, Su, Xie, Pierse, Liang, Lao, Liu
arxiv.org/abs/2510.12402 mastoxiv.page/@arXiv_csLG_bot/
- TeamFormer: Shallow Parallel Transformers with Progressive Approximation
Wei Wang, Xiao-Yong Wei, Qing Li
arxiv.org/abs/2510.15425 mastoxiv.page/@arXiv_csLG_bot/
- Latent-Augmented Discrete Diffusion Models
Dario Shariatian, Alain Durmus, Umut Simsekli, Stefano Peluchetti
arxiv.org/abs/2510.18114 mastoxiv.page/@arXiv_csLG_bot/
- Predicting Metabolic Dysfunction-Associated Steatotic Liver Disease using Machine Learning Method...
Mary E. An, Paul Griffin, Jonathan G. Stine, Ramakrishna Balakrishnan, Soundar Kumara
arxiv.org/abs/2510.22293 mastoxiv.page/@arXiv_csLG_bot/