It's really great that #Gentoo no longer has to rely on such uncultured solutions as using scp / rsync to push a bunch of distfiles to a public_html directory that's then exposed on an HTTP server. Now I just have to… [checks notes]
1. Use 'kup putraw' to put the kernel patchset on the distfiles mirror.
2. Wait an hour for mirrors to sync.
3. Start building new kernels.
4. Rsync kernels from build hosts to the local machine.
5. Use a complex script combining 'kup ls ... | grep' (yes, there is no way to check whether a file exists, and most kup commands don't even set exit statuses) and `kup putraw ...` to upload the built kernels.
6. Wait an hour for mirrors to sync.
7. Check if all kernels were uploaded correctly.
8. Push new kernels.
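The existence check in step 5 really does come down to grepping `kup ls` output, since kup gives you nothing better to work with. A minimal sketch of that hack (the helper name `kup_has_file` is my own, and real use would pass it the captured output of `kup ls <dir>`):

```shell
# kup offers no "does this file exist?" query, and most of its subcommands
# don't set meaningful exit statuses, so existence has to be inferred by
# grepping a directory listing.
# $1 = captured listing, e.g. "$(kup ls /pub/linux/kernel/...)"
# $2 = exact filename to look for
kup_has_file() {
  printf '%s\n' "$1" | grep -qF -- "$2"
}
```

A wrapper loop can then skip files the listing already contains and `kup putraw` the rest, though without exit statuses even the upload's success ultimately has to be verified by listing again.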
Currently exploring Origami Linux. Bootc, Fedora Atomic, Cosmic, Cachy LTO kernel. The included tooling is all Rust-based; even my friend Yazi is there. Very nice! Shoutout to @… for pointing me to this distro.
#origami
Might need to take another break from social media. Not from Mastodon, because everyone on here is chill.
Every time I open social media, it reminds me I have to be freaked out, angry, hopeless, and lost.
Yes, things are extremely dark right now. I will continue to fight for a future without fear, scarcity and war. But, telling me I have to panic buy gas right now or telling me where the nukes might drop isn't helping. It reminds me of the fear cycle around the pandemic.
One of my pet peeves of recent years has been the mainstream press:
(1) adopting “populism” as a euphemism for fascism (e.g. “Trump’s populist rhetoric”), which is at best a blurring of the word’s actual meaning (yes, fascists can use populist language, but populism is not their defining feature), and then
(2) applying the word in its original meaning (i.e. mass appeal to ordinary people) to leftists in direct comparison to fascists as if this constitutes a useful insight (e.g. “politicians with a populist message, such as Donald Trump and Bernie Sanders”).
Replaced article(s) found for cs.CL. https://arxiv.org/list/cs.CL/new
[3/5]:
- Can Small Language Models Handle Context-Summarized Multi-Turn Customer-Service QA? A Synthetic D...
Lakshan Cooray, Deshan Sumanathilaka, Pattigadapa Venkatesh Raju
https://arxiv.org/abs/2602.00665 https://mastoxiv.page/@arXiv_csCL_bot/116006686092324902
- SEAD: Self-Evolving Agent for Multi-Turn Service Dialogue
Dai, Gao, Zhang, Wang, Luo, Wang, Wang, Wu, Wang
https://arxiv.org/abs/2602.03548
- OmniRAG-Agent: Agentic Omnimodal Reasoning for Low-Resource Long Audio-Video Question Answering
Yifan Zhu, Xinyu Mu, Tao Feng, Zhonghong Ou, Yuning Gong, Haoran Luo
https://arxiv.org/abs/2602.03707
- GreekMMLU: A Native-Sourced Multitask Benchmark for Evaluating Language Models in Greek
Zhang, Konomi, Xypolopoulos, Divriotis, Skianis, Nikolentzos, Stamou, Shang, Vazirgiannis
https://arxiv.org/abs/2602.05150
- Using LLMs for Knowledge Component-level Correctness Labeling in Open-ended Coding Problems
Zhangqi Duan, Arnav Kankaria, Dhruv Kartik, Andrew Lan
https://arxiv.org/abs/2602.17542 https://mastoxiv.page/@arXiv_csCL_bot/116102514058414603
- MetaState: Persistent Working Memory Enhances Reasoning in Discrete Diffusion Language Models
Kejing Xia, Mingzhe Li, Lixuan Wei, Zhenbang Du, Xiangchi Yuan, Dachuan Shi, Qirui Jin, Wenke Lee
https://arxiv.org/abs/2603.01331 https://mastoxiv.page/@arXiv_csCL_bot/116165314672421581
- A Browser-based Open Source Assistant for Multimodal Content Verification
Milner, Foster, Karmakharm, Razuvayevskaya, Roberts, Porcellini, Teyssou, Bontcheva
https://arxiv.org/abs/2603.02842 https://mastoxiv.page/@arXiv_csCL_bot/116170368271004704
- Nwāchā Munā: A Devanagari Speech Corpus and Proximal Transfer Benchmark for Nepal Bhasha ASR
Sharma, Shrestha, Poudel, Tiwari, Shrestha, Ghimire, Bal
https://arxiv.org/abs/2603.07554 https://mastoxiv.page/@arXiv_csCL_bot/116204797995674104
- Model Merging in the Era of Large Language Models: Methods, Applications, and Future Directions
Mingyang Song, Mao Zheng
https://arxiv.org/abs/2603.09938 https://mastoxiv.page/@arXiv_csCL_bot/116210189810004206
- AgentDrift: Unsafe Recommendation Drift Under Tool Corruption Hidden by Ranking Metrics in LLM Ag...
Zekun Wu, Adriano Koshiyama, Sahan Bulathwela, Maria Perez-Ortiz
https://arxiv.org/abs/2603.12564 https://mastoxiv.page/@arXiv_csCL_bot/116237800898328349
- GhanaNLP Parallel Corpora: Comprehensive Multilingual Resources for Low-Resource Ghanaian Languages
Gyamfi, Azunre, Moore, Budu, Asare, Owusu, Asiamah
https://arxiv.org/abs/2603.13793 https://mastoxiv.page/@arXiv_csCL_bot/116243544688031749
- sebis at ArchEHR-QA 2026: How Much Can You Do Locally? Evaluating Grounded EHR QA on a Single Not...
Ibrahim Ebrar Yurt, Fabian Karl, Tejaswi Choppa, Florian Matthes
https://arxiv.org/abs/2603.13962 https://mastoxiv.page/@arXiv_csCL_bot/116243646346563497
- ExPosST: Explicit Positioning with Adaptive Masking for LLM-Based Simultaneous Machine Translation
Yuzhe Shang, Pengzhi Gao, Yazheng Yang, Jiayao Ma, Wei Liu, Jian Luan, Jinsong Su
https://arxiv.org/abs/2603.14903 https://mastoxiv.page/@arXiv_csCL_bot/116243711232778054
- BanglaSocialBench: A Benchmark for Evaluating Sociopragmatic and Cultural Alignment of LLMs in Ba...
Tanvir Ahmed Sijan, S. M Golam Rifat, Pankaj Chowdhury Partha, Md. Tanjeed Islam, Md. Musfique Anwar
https://arxiv.org/abs/2603.15949 https://mastoxiv.page/@arXiv_csCL_bot/116249122231759766
- EngGPT2: Sovereign, Efficient and Open Intelligence
G. Ciarfaglia, et al.
https://arxiv.org/abs/2603.16430 https://mastoxiv.page/@arXiv_csCL_bot/116249228411487178
- HypeLoRA: Hyper-Network-Generated LoRA Adapters for Calibrated Language Model Fine-Tuning
Bartosz Trojan, Filip Gębala
https://arxiv.org/abs/2603.19278 https://mastoxiv.page/@arXiv_csCL_bot/116277612915482857
- Automatic Analysis of Collaboration Through Human Conversational Data Resources: A Review
Yi Yu, Maria Boritchev, Chloé Clavel
https://arxiv.org/abs/2603.19292 https://mastoxiv.page/@arXiv_csCL_bot/116277620779254916
- Alignment Whack-a-Mole: Finetuning Activates Verbatim Recall of Copyrighted Books in Large Langu...
Xinyue Liu, Niloofar Mireshghallah, Jane C. Ginsburg, Tuhin Chakrabarty
https://arxiv.org/abs/2603.20957 https://mastoxiv.page/@arXiv_csCL_bot/116283538317671552
- KG-Hopper: Empowering Compact Open LLMs with Knowledge Graph Reasoning via Reinforcement Learning
Shuai Wang, Yinan Yu
https://arxiv.org/abs/2603.21440 https://mastoxiv.page/@arXiv_csCL_bot/116283595007808076
toXiv_bot_toot
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[2/6]:
- Performance Asymmetry in Model-Based Reinforcement Learning
Jing Yu Lim, Rushi Shah, Zarif Ikram, Samson Yu, Haozhe Ma, Tze-Yun Leong, Dianbo Liu
https://arxiv.org/abs/2505.19698 https://mastoxiv.page/@arXiv_csLG_bot/114578810521008766
- Towards Robust Real-World Multivariate Time Series Forecasting: A Unified Framework for Dependenc...
Jinkwan Jang, Hyungjin Park, Jinmyeong Choi, Taesup Kim
https://arxiv.org/abs/2506.08660 https://mastoxiv.page/@arXiv_csLG_bot/114664238967892509
- Wasserstein Barycenter Soft Actor-Critic
Zahra Shahrooei, Ali Baheri
https://arxiv.org/abs/2506.10167 https://mastoxiv.page/@arXiv_csLG_bot/114675175949432731
- Foundation Models for Causal Inference via Prior-Data Fitted Networks
Yuchen Ma, Dennis Frauen, Emil Javurek, Stefan Feuerriegel
https://arxiv.org/abs/2506.10914 https://mastoxiv.page/@arXiv_csLG_bot/114675529854402158
- FREQuency ATTribution: benchmarking frequency-based occlusion for time series data
Dominique Mercier, Andreas Dengel, Sheraz Ahmed
https://arxiv.org/abs/2506.18481 https://mastoxiv.page/@arXiv_csLG_bot/114738421450807709
- Complexity-aware fine-tuning
Andrey Goncharov, Daniil Vyazhev, Petr Sychev, Edvard Khalafyan, Alexey Zaytsev
https://arxiv.org/abs/2506.21220 https://mastoxiv.page/@arXiv_csLG_bot/114754764750730849
- Transfer Learning in Infinite Width Feature Learning Networks
Clarissa Lauditi, Blake Bordelon, Cengiz Pehlevan
https://arxiv.org/abs/2507.04448 https://mastoxiv.page/@arXiv_csLG_bot/114818005803079705
- A hierarchy tree data structure for behavior-based user segment representation
Liu, Kang, Iyer, Malik, Li, Wang, Lu, Zhao, Wang, Liu, Liu, Liang, Yu
https://arxiv.org/abs/2508.01115 https://mastoxiv.page/@arXiv_csLG_bot/114975999992144374
- One-Step Flow Q-Learning: Addressing the Diffusion Policy Bottleneck in Offline Reinforcement Lea...
Thanh Nguyen, Chang D. Yoo
https://arxiv.org/abs/2508.13904 https://mastoxiv.page/@arXiv_csLG_bot/115060568241390847
- Uncertainty Propagation Networks for Neural Ordinary Differential Equations
Hadi Jahanshahi, Zheng H. Zhu
https://arxiv.org/abs/2508.16815 https://mastoxiv.page/@arXiv_csLG_bot/115094785677272005
- Learning Unified Representations from Heterogeneous Data for Robust Heart Rate Modeling
Zhengdong Huang, Zicheng Xie, Wentao Tian, Jingyu Liu, Lunhong Dong, Peng Yang
https://arxiv.org/abs/2508.21785 https://mastoxiv.page/@arXiv_csLG_bot/115128450608548173
- Monte Carlo Tree Diffusion with Multiple Experts for Protein Design
Liu, Cao, Jiang, Luo, Duan, Wang, Sosnick, Xu, Stevens
https://arxiv.org/abs/2509.15796 https://mastoxiv.page/@arXiv_csLG_bot/115247429156900905
- From Samples to Scenarios: A New Paradigm for Probabilistic Forecasting
Xilin Dai, Zhijian Xu, Wanxu Cai, Qiang Xu
https://arxiv.org/abs/2509.19975 https://mastoxiv.page/@arXiv_csLG_bot/115264498084813952
- Why High-rank Neural Networks Generalize?: An Algebraic Framework with RKHSs
Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
https://arxiv.org/abs/2509.21895 https://mastoxiv.page/@arXiv_csLG_bot/115287261047939306
- From Parameters to Behaviors: Unsupervised Compression of the Policy Space
Davide Tenedini, Riccardo Zamboni, Mirco Mutti, Marcello Restelli
https://arxiv.org/abs/2509.22566 https://mastoxiv.page/@arXiv_csLG_bot/115287379672141023
- RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility
Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang
https://arxiv.org/abs/2509.23115 https://mastoxiv.page/@arXiv_csLG_bot/115293273559547106
- Polychromic Objectives for Reinforcement Learning
Jubayer Ibn Hamid, Ifdita Hasan Orney, Ellen Xu, Chelsea Finn, Dorsa Sadigh
https://arxiv.org/abs/2509.25424 https://mastoxiv.page/@arXiv_csLG_bot/115298579764580635
- Recursive Self-Aggregation Unlocks Deep Thinking in Large Language Models
Siddarth Venkatraman, et al.
https://arxiv.org/abs/2509.26626 https://mastoxiv.page/@arXiv_csLG_bot/115298789487177431
- Cautious Weight Decay
Chen, Li, Liang, Su, Xie, Pierse, Liang, Lao, Liu
https://arxiv.org/abs/2510.12402 https://mastoxiv.page/@arXiv_csLG_bot/115377759317818093
- TeamFormer: Shallow Parallel Transformers with Progressive Approximation
Wei Wang, Xiao-Yong Wei, Qing Li
https://arxiv.org/abs/2510.15425 https://mastoxiv.page/@arXiv_csLG_bot/115405933861293858
- Latent-Augmented Discrete Diffusion Models
Dario Shariatian, Alain Durmus, Umut Simsekli, Stefano Peluchetti
https://arxiv.org/abs/2510.18114 https://mastoxiv.page/@arXiv_csLG_bot/115417332500265972
- Predicting Metabolic Dysfunction-Associated Steatotic Liver Disease using Machine Learning Method...
Mary E. An, Paul Griffin, Jonathan G. Stine, Ramakrishna Balakrishnan, Soundar Kumara
https://arxiv.org/abs/2510.22293 https://mastoxiv.page/@arXiv_csLG_bot/115451746201804373