Tootfinder

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:07:47

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/6]:
- Performance Asymmetry in Model-Based Reinforcement Learning
Jing Yu Lim, Rushi Shah, Zarif Ikram, Samson Yu, Haozhe Ma, Tze-Yun Leong, Dianbo Liu
arxiv.org/abs/2505.19698 mastoxiv.page/@arXiv_csLG_bot/
- Towards Robust Real-World Multivariate Time Series Forecasting: A Unified Framework for Dependenc...
Jinkwan Jang, Hyungjin Park, Jinmyeong Choi, Taesup Kim
arxiv.org/abs/2506.08660 mastoxiv.page/@arXiv_csLG_bot/
- Wasserstein Barycenter Soft Actor-Critic
Zahra Shahrooei, Ali Baheri
arxiv.org/abs/2506.10167 mastoxiv.page/@arXiv_csLG_bot/
- Foundation Models for Causal Inference via Prior-Data Fitted Networks
Yuchen Ma, Dennis Frauen, Emil Javurek, Stefan Feuerriegel
arxiv.org/abs/2506.10914 mastoxiv.page/@arXiv_csLG_bot/
- FREQuency ATTribution: benchmarking frequency-based occlusion for time series data
Dominique Mercier, Andreas Dengel, Sheraz Ahmed
arxiv.org/abs/2506.18481 mastoxiv.page/@arXiv_csLG_bot/
- Complexity-aware fine-tuning
Andrey Goncharov, Daniil Vyazhev, Petr Sychev, Edvard Khalafyan, Alexey Zaytsev
arxiv.org/abs/2506.21220 mastoxiv.page/@arXiv_csLG_bot/
- Transfer Learning in Infinite Width Feature Learning Networks
Clarissa Lauditi, Blake Bordelon, Cengiz Pehlevan
arxiv.org/abs/2507.04448 mastoxiv.page/@arXiv_csLG_bot/
- A hierarchy tree data structure for behavior-based user segment representation
Liu, Kang, Iyer, Malik, Li, Wang, Lu, Zhao, Wang, Liu, Liu, Liang, Yu
arxiv.org/abs/2508.01115 mastoxiv.page/@arXiv_csLG_bot/
- One-Step Flow Q-Learning: Addressing the Diffusion Policy Bottleneck in Offline Reinforcement Lea...
Thanh Nguyen, Chang D. Yoo
arxiv.org/abs/2508.13904 mastoxiv.page/@arXiv_csLG_bot/
- Uncertainty Propagation Networks for Neural Ordinary Differential Equations
Hadi Jahanshahi, Zheng H. Zhu
arxiv.org/abs/2508.16815 mastoxiv.page/@arXiv_csLG_bot/
- Learning Unified Representations from Heterogeneous Data for Robust Heart Rate Modeling
Zhengdong Huang, Zicheng Xie, Wentao Tian, Jingyu Liu, Lunhong Dong, Peng Yang
arxiv.org/abs/2508.21785 mastoxiv.page/@arXiv_csLG_bot/
- Monte Carlo Tree Diffusion with Multiple Experts for Protein Design
Liu, Cao, Jiang, Luo, Duan, Wang, Sosnick, Xu, Stevens
arxiv.org/abs/2509.15796 mastoxiv.page/@arXiv_csLG_bot/
- From Samples to Scenarios: A New Paradigm for Probabilistic Forecasting
Xilin Dai, Zhijian Xu, Wanxu Cai, Qiang Xu
arxiv.org/abs/2509.19975 mastoxiv.page/@arXiv_csLG_bot/
- Why High-rank Neural Networks Generalize?: An Algebraic Framework with RKHSs
Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
arxiv.org/abs/2509.21895 mastoxiv.page/@arXiv_csLG_bot/
- From Parameters to Behaviors: Unsupervised Compression of the Policy Space
Davide Tenedini, Riccardo Zamboni, Mirco Mutti, Marcello Restelli
arxiv.org/abs/2509.22566 mastoxiv.page/@arXiv_csLG_bot/
- RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility
Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang
arxiv.org/abs/2509.23115 mastoxiv.page/@arXiv_csLG_bot/
- Polychromic Objectives for Reinforcement Learning
Jubayer Ibn Hamid, Ifdita Hasan Orney, Ellen Xu, Chelsea Finn, Dorsa Sadigh
arxiv.org/abs/2509.25424 mastoxiv.page/@arXiv_csLG_bot/
- Recursive Self-Aggregation Unlocks Deep Thinking in Large Language Models
Siddarth Venkatraman, et al.
arxiv.org/abs/2509.26626 mastoxiv.page/@arXiv_csLG_bot/
- Cautious Weight Decay
Chen, Li, Liang, Su, Xie, Pierse, Liang, Lao, Liu
arxiv.org/abs/2510.12402 mastoxiv.page/@arXiv_csLG_bot/
- TeamFormer: Shallow Parallel Transformers with Progressive Approximation
Wei Wang, Xiao-Yong Wei, Qing Li
arxiv.org/abs/2510.15425 mastoxiv.page/@arXiv_csLG_bot/
- Latent-Augmented Discrete Diffusion Models
Dario Shariatian, Alain Durmus, Umut Simsekli, Stefano Peluchetti
arxiv.org/abs/2510.18114 mastoxiv.page/@arXiv_csLG_bot/
- Predicting Metabolic Dysfunction-Associated Steatotic Liver Disease using Machine Learning Method...
Mary E. An, Paul Griffin, Jonathan G. Stine, Ramakrishna Balakrishnan, Soundar Kumara
arxiv.org/abs/2510.22293 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot

@metacurity@infosec.exchange
2026-01-24 12:17:56

Happy Saturday! Metacurity offers our free and premium subscribers a weekly digest of the best long-form (and longish) infosec-related pieces we couldn't properly fit into our daily news crush.
This week's selection covers
--The untouchable hacker god who destroyed psychotherapy patients,
--AI prompt injection is an unsolvable problem,
--Deepfakes are messing up Canada's justice system,
--What the hack of Russia's Unified Military Registry revea…

@cowboys@darktundra.xyz
2026-03-24 22:16:14

Erin Andrews Has Strong Words for Dak Prescott After Infidelity Accusations heavy.com/sports/nfl/dallas-co

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:37:11

Exploring the Impact of Parameter Update Magnitude on Forgetting and Generalization of Continual Learning
JinLi He, Liang Bai, Xian Yang
arxiv.org/abs/2602.20796 arxiv.org/pdf/2602.20796 arxiv.org/html/2602.20796
arXiv:2602.20796v1 Announce Type: new
Abstract: The magnitude of parameter updates is considered a key factor in continual learning. However, most existing studies focus on designing diverse update strategies, while a theoretical understanding of the underlying mechanisms remains limited. We therefore characterize a model's forgetting from the perspective of parameter update magnitude and formalize it as knowledge degradation induced by task-specific drift in the parameter space, which previous studies have not fully captured because they assume a unified parameter space. By deriving the optimal parameter update magnitude that minimizes forgetting, we unify two representative update paradigms, frozen training and initialized training, within an optimization framework for constrained parameter updates. Our theoretical results further reveal that task sequences with small parameter distances exhibit better generalization and less forgetting under frozen training than under initialized training. These theoretical insights inspire a novel hybrid parameter update strategy that adaptively adjusts the update magnitude based on gradient directions. Experiments on deep neural networks demonstrate that this hybrid approach outperforms standard training strategies, offering new theoretical perspectives and practical guidance for designing efficient and scalable continual learning algorithms.
toXiv_bot_toot
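
The abstract above does not spell out the adaptive rule. A minimal sketch of one plausible reading, in which the step size is scaled by how well the current gradient aligns with a stored reference direction (the function name, the reference direction, and the alignment-to-scale mapping are assumptions, not taken from the paper):

```python
import torch

def hybrid_update(param: torch.Tensor, grad: torch.Tensor,
                  ref_dir: torch.Tensor, lr: float = 1e-3) -> None:
    """Hypothetical hybrid step: interpolate between frozen training
    (no update) and initialized training (full update) using the cosine
    alignment of the current gradient with a reference direction, e.g.
    an averaged gradient direction from earlier tasks."""
    cos = torch.nn.functional.cosine_similarity(
        grad.flatten(), ref_dir.flatten(), dim=0)
    scale = 0.5 * (1.0 + cos)              # maps [-1, 1] to [0, 1]
    param.data.add_(grad, alpha=-lr * float(scale))
```

Under this reading, aligned gradients take a full step while conflicting gradients freeze the parameter, treating frozen and initialized training as the two endpoints of one constrained-update family, consistent with the abstract's framing.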

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:42:31

ProxyFL: A Proxy-Guided Framework for Federated Semi-Supervised Learning
Duowen Chen, Yan Wang
arxiv.org/abs/2602.21078 arxiv.org/pdf/2602.21078 arxiv.org/html/2602.21078
arXiv:2602.21078v1 Announce Type: new
Abstract: Federated Semi-Supervised Learning (FSSL) aims to collaboratively train a global model across clients by leveraging partially annotated local data in a privacy-preserving manner. In FSSL, data heterogeneity is a challenging issue that exists both across clients and within them. External heterogeneity refers to the discrepancy in data distribution across different clients, while internal heterogeneity is the mismatch between labeled and unlabeled data within a client. Most FSSL methods design fixed or dynamic parameter aggregation strategies to collect client knowledge on the server (external) and/or filter out low-confidence unlabeled samples to reduce mistakes on local clients (internal). However, the former struggles to precisely fit the ideal global distribution through direct weight aggregation, and the latter leaves less data participating in FL training. To this end, we propose a proxy-guided framework, ProxyFL, that mitigates external and internal heterogeneity simultaneously via a unified proxy: we treat the learnable classifier weights as a proxy that simulates the category distribution both locally and globally. For external heterogeneity, we explicitly optimize the global proxy against outliers instead of aggregating weights directly; for internal heterogeneity, we re-include discarded samples in training through a positive-negative proxy pool, mitigating the impact of potentially incorrect pseudo-labels. Extensive experiments and theoretical analysis demonstrate the method's strong performance and convergence in FSSL.
toXiv_bot_toot
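
The abstract describes using classifier weights as class proxies without giving the mechanics. A minimal sketch of proxy-based pseudo-labelling under that idea (the function name, the confidence threshold, and the softmax gating are assumptions, not ProxyFL's actual procedure):

```python
import torch
import torch.nn.functional as F

def proxy_pseudo_label(features: torch.Tensor,
                       classifier_weight: torch.Tensor,
                       tau: float = 0.9):
    """Assign pseudo-labels by cosine similarity to class proxies.

    Each row of the (C x D) classifier weight matrix is treated as a
    class proxy; low-confidence samples are flagged rather than
    discarded, so a positive-negative proxy loss can still use them.
    """
    proxies = F.normalize(classifier_weight, dim=1)  # (C, D) unit rows
    feats = F.normalize(features, dim=1)             # (N, D) unit rows
    sims = feats @ proxies.t()                       # (N, C) cosine sims
    conf, labels = sims.softmax(dim=1).max(dim=1)
    confident = conf >= tau                          # (N,) boolean mask
    return labels, confident, sims
```

In this sketch the `confident` mask routes samples: confident ones would receive standard pseudo-label training, while the rest could contribute through similarity to their assigned proxy (positive) versus the other proxies (negatives), matching the abstract's goal of keeping discarded samples in training.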

Trump threatens to deploy ICE to airports as TSA shortages drive delays

Spring break travel is set to strain airports as rising callouts and resignations among unpaid TSA officers stretch security more than a month into the funding standoff

@cjust@infosec.exchange
2026-03-16 22:47:38

#USPol #USpolitics #network1976
I'm not saying that the 1976 movie "Network" was prescient... but...

Shopkeepers keep a gun under the counter. Punks are running wild in the street, and there's nobody anywhere who seems to know what to do, and there's no end to it. We know the air is unfit to breathe, and our food is unfit to eat.

We sit watching our TVs, while some local newscaster tells us that today we had 15 homicides and 63 violent crimes, as if that's the way it's supposed to be. We know things are bad, worse than bad. They're crazy.

It's like everything everywhere is going crazy, so we…

@Techmeme@techhub.social
2026-02-20 23:50:55

India joins Pax Silica, a US-led initiative that aims to build secure supply chains for semiconductors, advanced manufacturing, and critical technologies (Rajesh Roy/Associated Press)
apnews.com/article/pax-silica-

@Techmeme@techhub.social
2026-03-19 15:40:45

Alphabet's X spins out Anori, which seeks to streamline building approvals through a unified platform for developers and city regulators, with $26M in funding (Connie Loizos/TechCrunch)
techcrunch.com/2026/03/19/alph

@inthehands@hachyderm.io
2026-02-11 20:32:24

Has somebody started a single unified Quisling Database yet?
thehill.com/homenews/media/573