Crosslisted article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[2/3]:
- Diffusion Modulation via Environment Mechanism Modeling for Planning
Hanping Zhang, Yuhong Guo
https://arxiv.org/abs/2602.20422 https://mastoxiv.page/@arXiv_csAI_bot/116130110576555049
- Heterogeneity-Aware Client Selection Methodology For Efficient Federated Learning
Nihal Balivada, Shrey Gupta, Shashank Shreedhar Bhatt, Suyash Gupta
https://arxiv.org/abs/2602.20450 https://mastoxiv.page/@arXiv_csDC_bot/116130191233002036
- Prior-Agnostic Incentive-Compatible Exploration
Ramya Ramalingam, Osbert Bastani, Aaron Roth
https://arxiv.org/abs/2602.20465 https://mastoxiv.page/@arXiv_csGT_bot/116130245628406144
- PhyGHT: Physics-Guided HyperGraph Transformer for Signal Purification at the HL-LHC
Mohammed Rakib, Luke Vaughan, Shivang Patel, Flera Rizatdinova, Alexander Khanov, Atriya Sen
https://arxiv.org/abs/2602.20475 https://mastoxiv.page/@arXiv_hepex_bot/116130242350426528
- ActionEngine: From Reactive to Programmatic GUI Agents via State Machine Memory
Zhong, Faisal, França, Leesatapornwongsa, Szekeres, Rong, Nath
https://arxiv.org/abs/2602.20502 https://mastoxiv.page/@arXiv_csAI_bot/116130180718734838
- Inner Speech as Behavior Guides: Steerable Imitation of Diverse Behaviors for Human-AI coordination
Rakshit Trivedi, Kartik Sharma, David C Parkes
https://arxiv.org/abs/2602.20517 https://mastoxiv.page/@arXiv_csAI_bot/116130223344095649
- Stop-Think-AutoRegress: Language Modeling with Latent Diffusion Planning
Lovelace, Belardi, Zalouk, Polavaram, Kundurthy, Weinberger
https://arxiv.org/abs/2602.20528 https://mastoxiv.page/@arXiv_csCL_bot/116130628998822849
- Standard Transformers Achieve the Minimax Rate in Nonparametric Regression with $C^{s,\lambda}$ T...
Yanming Lai, Defeng Sun
https://arxiv.org/abs/2602.20555 https://mastoxiv.page/@arXiv_statML_bot/116130512372759166
- Personal Information Parroting in Language Models
Nishant Subramani, Kshitish Ghate, Mona Diab
https://arxiv.org/abs/2602.20580 https://mastoxiv.page/@arXiv_csCL_bot/116130630309564204
- Characterizing Online and Private Learnability under Distributional Constraints via Generalized S...
Moïse Blanchard, Abhishek Shetty, Alexander Rakhlin
https://arxiv.org/abs/2602.20585 https://mastoxiv.page/@arXiv_statML_bot/116130525452248337
- Amortized Bayesian inference for actigraph time sheet data from mobile devices
Daniel Zhou, Sudipto Banerjee
https://arxiv.org/abs/2602.20611 https://mastoxiv.page/@arXiv_statML_bot/116130543144314661
- Knowing the Unknown: Interpretable Open-World Object Detection via Concept Decomposition Model
Xueqiang Lv, Shizhou Zhang, Yinghui Xing, Di Xu, Peng Wang, Yanning Zhang
https://arxiv.org/abs/2602.20616 https://mastoxiv.page/@arXiv_csCV_bot/116130795466851481
- On the Convergence of Stochastic Gradient Descent with Perturbed Forward-Backward Passes
Boao Kong, Hengrui Zhang, Kun Yuan
https://arxiv.org/abs/2602.20646 https://mastoxiv.page/@arXiv_mathOC_bot/116130476952419594
- DANCE: Doubly Adaptive Neighborhood Conformal Estimation
Feng, Reich, Beaglehole, Luo, Park, Yoo, Huang, Mao, Boz, Kim
https://arxiv.org/abs/2602.20652 https://mastoxiv.page/@arXiv_statML_bot/116130551664144143
- Vision-Language Models for Ergonomic Assessment of Manual Lifting Tasks: Estimating Horizontal an...
Mohammad Sadra Rajabi, Aanuoluwapo Ojelade, Sunwook Kim, Maury A. Nussbaum
https://arxiv.org/abs/2602.20658 https://mastoxiv.page/@arXiv_csCV_bot/116130809228818544
- F10.7 Index Prediction: A Multiscale Decomposition Strategy with Wavelet Transform for Performanc...
Xuran Ma, et al.
https://arxiv.org/abs/2602.20712 https://mastoxiv.page/@arXiv_astrophIM_bot/116130530693731576
- Communication-Inspired Tokenization for Structured Image Representations
Davtyan, Sahin, Haghighi, Stapf, Acuaviva, Alahi, Favaro
https://arxiv.org/abs/2602.20731 https://mastoxiv.page/@arXiv_csCV_bot/116130824303022936
- SibylSense: Adaptive Rubric Learning via Memory Tuning and Adversarial Probing
Yifei Xu, et al.
https://arxiv.org/abs/2602.20751 https://mastoxiv.page/@arXiv_csCL_bot/116130739757479992
- Assessing the Impact of Speaker Identity in Speech Spoofing Detection
Anh-Tuan Dao, Driss Matrouf, Nicholas Evans
https://arxiv.org/abs/2602.20805 https://mastoxiv.page/@arXiv_csSD_bot/116130218074059060
- Don't Ignore the Tail: Decoupling top-K Probabilities for Efficient Language Model Distillation
Sayantan Dasgupta, Trevor Cohn, Timothy Baldwin
https://arxiv.org/abs/2602.20816 https://mastoxiv.page/@arXiv_csCL_bot/116130753521420972
- DRESS: A Continuous Framework for Structural Graph Refinement
Eduar Castrillo Velilla
https://arxiv.org/abs/2602.20833 https://mastoxiv.page/@arXiv_csDS_bot/116130545112457981
toXiv_bot_toot
I explained something to a friend in a simple way, and I think it's worth paraphrasing again here.
You cannot create a system that constrains itself. Any constraint on a system must be external to it, or that constraint can be ignored or removed. That's just how systems work. Every constitution for every country claims to do this impossible thing, a thing shown to be impossible nearly 80 years ago: Gödel's loophole has been known to exist since 1947.
Every constitution in the world, every "separation of powers" and set of "checks and balances," attempts to do something that is categorically impossible. Every government is always, at best, a few steps away from authoritarianism. From this, we would expect governments to trend toward authoritarianism. Which, of course, is what we see historically.
Constraints on power are a formality, because no real controls can possibly exist. Democratic processes then become a kind of collective classifier that tries to select only people who won't plunge the country into a dictatorship. Again, because this claim of restrictions on power is a lie (willful or ignorant, a lie regardless), that classifier has to be correct 100% of the time (even assuming a best-case scenario). That's statistically unlikely.
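A quick back-of-the-envelope sketch of that last point: if each transfer of power is an independent trial with even a small chance p of the "classifier" failing, the odds of never failing decay geometrically. The 5%-per-cycle rate and 50-cycle horizon below are illustrative assumptions, not data.

```python
def survival_probability(p: float, cycles: int) -> float:
    """Probability of the classifier never failing across
    `cycles` independent transfers of power, each with
    failure probability p."""
    return (1.0 - p) ** cycles

# Assumed numbers for illustration only: 5% risk per cycle, 50 cycles.
print(round(survival_probability(0.05, 50), 3))  # ≈ 0.077
```

Even a 95%-accurate selector survives 50 rounds less than 8% of the time; "correct 100% of the time" is doing a lot of work in the claim above.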
So as long as you have a system of concentrated power, you will have the worst people attracted to it, and that power will inevitably fall into the hands of one of the worst possible people.
Fortunately, there is an alternative: don't centralize power. In the security world we try to design systems that assume compromise and minimize impact, rather than assuming we will be right 100% of the time. If you build systems that maximally distribute power, you minimize the impact of one horrible person.
Now, I didn't mention this because we're both already under enough stress, but...
Almost 90% of the nuclear weapons deployed around the world are in the hands of ghoulish dictators. Only two of the countries with nuclear weapons are not straight-up authoritarian, and even they're not far off. We're one crashout away from sterilizing the surface of the Earth with nuclear hellfire. Maybe countries shouldn't exist, and *definitely* multiple thousands of nuclear weapons shouldn't exist and shouldn't all be wired together to launch as soon as one of these assholes goes a bit too far sideways.
FBI serves search warrants at Los Angeles school district headquarters and superintendent's home (Associated Press)
https://apnews.com/article/los-angeles-schools-fbi-search-warrants-f7ffc6853a6c0b228c50cf5fe596ce66
http://www.memeorandum.com/260225/p80#a260225p80
The idea for the program started back in 2021, as severe drought conditions enveloped agricultural powerhouse states across the country. The $400 million, according to Montaño Greene, was set to be distributed through the Commodity Credit Corporation, a financial institution used to implement specific agricultural programs established by the federal government. By the close of 2024, she said, the Biden administration had entered final agreements with selected r…
A photo of the High Park area near where my daughter and son-in-law live. Snow still coming down.
So far, Belle Ewart has dodged the brunt of this storm.
#Snowmageddon #OntarioSnowStorm
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[1/5]:
- Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization a...
Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert Nowak, Yixuan Li
https://arxiv.org/abs/2306.09158
- Sparse, Efficient and Explainable Data Attribution with DualXDA
Galip Ümit Yolcu, Moritz Weckbecker, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
https://arxiv.org/abs/2402.12118 https://mastoxiv.page/@arXiv_csLG_bot/111962593972369958
- HGQ: High Granularity Quantization for Real-time Neural Networks on FPGAs
Sun, Que, Årrestad, Loncar, Ngadiuba, Luk, Spiropulu
https://arxiv.org/abs/2405.00645 https://mastoxiv.page/@arXiv_csLG_bot/112370274737558603
- On the Identification of Temporally Causal Representation with Instantaneous Dependence
Li, Shen, Zheng, Cai, Song, Gong, Chen, Zhang
https://arxiv.org/abs/2405.15325 https://mastoxiv.page/@arXiv_csLG_bot/112511890051553111
- Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications
Yang Li, Daniel Agyei Asante, Changsheng Zhao, Ernie Chang, Yangyang Shi, Vikas Chandra
https://arxiv.org/abs/2405.15877 https://mastoxiv.page/@arXiv_csLG_bot/112517547424098076
- Privacy Bias in Language Models: A Contextual Integrity-based Auditing Metric
Yan Shvartzshnaider, Vasisht Duddu
https://arxiv.org/abs/2409.03735 https://mastoxiv.page/@arXiv_csLG_bot/113089789682783135
- Low-Rank Filtering and Smoothing for Sequential Deep Learning
Joanna Sliwa, Frank Schneider, Nathanael Bosch, Agustinus Kristiadi, Philipp Hennig
https://arxiv.org/abs/2410.06800 https://mastoxiv.page/@arXiv_csLG_bot/113283021321510736
- Hierarchical Multimodal LLMs with Semantic Space Alignment for Enhanced Time Series Classification
Xiaoyu Tao, Tingyue Pan, Mingyue Cheng, Yucong Luo, Qi Liu, Enhong Chen
https://arxiv.org/abs/2410.18686 https://mastoxiv.page/@arXiv_csLG_bot/113367101100828901
- Fairness via Independence: A (Conditional) Distance Covariance Framework
Ruifan Huang, Haixia Liu
https://arxiv.org/abs/2412.00720 https://mastoxiv.page/@arXiv_csLG_bot/113587817648503815
- Data for Mathematical Copilots: Better Ways of Presenting Proofs for Machine Learning
Simon Frieder, et al.
https://arxiv.org/abs/2412.15184 https://mastoxiv.page/@arXiv_csLG_bot/113683924322164777
- Pairwise Elimination with Instance-Dependent Guarantees for Bandits with Cost Subsidy
Ishank Juneja, Carlee Joe-Wong, Osman Yağan
https://arxiv.org/abs/2501.10290 https://mastoxiv.page/@arXiv_csLG_bot/113859392622871057
- Towards Human-Guided, Data-Centric LLM Co-Pilots
Evgeny Saveliev, Jiashuo Liu, Nabeel Seedat, Anders Boyd, Mihaela van der Schaar
https://arxiv.org/abs/2501.10321 https://mastoxiv.page/@arXiv_csLG_bot/113859392688054204
- Regularized Langevin Dynamics for Combinatorial Optimization
Shengyu Feng, Yiming Yang
https://arxiv.org/abs/2502.00277
- Generating Samples to Probe Trained Models
Eren Mehmet Kıral, Nurşen Aydın, Ş. İlker Birbil
https://arxiv.org/abs/2502.06658 https://mastoxiv.page/@arXiv_csLG_bot/113984059089245671
- On Agnostic PAC Learning in the Small Error Regime
Julian Asilis, Mikael Møller Høgsgaard, Grigoris Velegkas
https://arxiv.org/abs/2502.09496 https://mastoxiv.page/@arXiv_csLG_bot/114000974082372598
- Preconditioned Inexact Stochastic ADMM for Deep Model
Shenglong Zhou, Ouya Wang, Ziyan Luo, Yongxu Zhu, Geoffrey Ye Li
https://arxiv.org/abs/2502.10784 https://mastoxiv.page/@arXiv_csLG_bot/114023667639951005
- On the Effect of Sampling Diversity in Scaling LLM Inference
Wang, Liu, Chen, Light, Liu, Chen, Zhang, Cheng
https://arxiv.org/abs/2502.11027 https://mastoxiv.page/@arXiv_csLG_bot/114023688225233656
- How to use score-based diffusion in earth system science: A satellite nowcasting example
Randy J. Chase, Katherine Haynes, Lander Ver Hoef, Imme Ebert-Uphoff
https://arxiv.org/abs/2505.10432 https://mastoxiv.page/@arXiv_csLG_bot/114516300594057680
- PEAR: Equal Area Weather Forecasting on the Sphere
Hampus Linander, Christoffer Petersson, Daniel Persson, Jan E. Gerken
https://arxiv.org/abs/2505.17720 https://mastoxiv.page/@arXiv_csLG_bot/114572963019603744
- Train Sparse Autoencoders Efficiently by Utilizing Features Correlation
Vadim Kurochkin, Yaroslav Aksenov, Daniil Laptev, Daniil Gavrilov, Nikita Balagansky
https://arxiv.org/abs/2505.22255 https://mastoxiv.page/@arXiv_csLG_bot/114589956040892075
- A Certified Unlearning Approach without Access to Source Data
Umit Yigit Basaran, Sk Miraj Ahmed, Amit Roy-Chowdhury, Basak Guler
https://arxiv.org/abs/2506.06486 https://mastoxiv.page/@arXiv_csLG_bot/114658421178857085
Metropolitana VII - 🆙 🆙 🆙
City VII - 🆙 🆙 🆙
📷 Pentax MX
🎞️ Ilford Pan 100
#filmphotography #Photography #blackandwhite
ProxyFL: A Proxy-Guided Framework for Federated Semi-Supervised Learning
Duowen Chen, Yan Wang
https://arxiv.org/abs/2602.21078 https://arxiv.org/pdf/2602.21078 https://arxiv.org/html/2602.21078
arXiv:2602.21078v1 Announce Type: new
Abstract: Federated Semi-Supervised Learning (FSSL) aims to collaboratively train a global model across clients by leveraging partially annotated local data in a privacy-preserving manner. In FSSL, data heterogeneity is a challenging issue that exists both across clients and within clients. External heterogeneity refers to the data distribution discrepancy across different clients, while internal heterogeneity is the mismatch between labeled and unlabeled data within a client. Most FSSL methods design fixed or dynamic parameter aggregation strategies to collect client knowledge on the server (external) and/or filter out low-confidence unlabeled samples to reduce mistakes on the local client (internal). However, the former struggles to precisely fit the ideal global distribution through direct weight aggregation, and the latter leaves less data participating in FL training. To this end, we propose a proxy-guided framework, ProxyFL, that mitigates external and internal heterogeneity simultaneously via a unified proxy: we treat the learnable classifier weights as proxies that simulate the category distribution both locally and globally. For external heterogeneity, we explicitly optimize the global proxy against outliers instead of aggregating weights directly; for internal heterogeneity, we re-include discarded samples in training through a positive-negative proxy pool to mitigate the impact of potentially incorrect pseudo-labels. Extensive experiments and theoretical analysis demonstrate strong performance and convergence in FSSL.
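As I read the abstract, the core "proxy" idea is that each row of the classifier's weight matrix stands in for a class, and low-confidence unlabeled samples are routed into a negative pool rather than discarded. A minimal numpy sketch of that reading, with hypothetical names and thresholds (not the authors' implementation):

```python
import numpy as np

def pseudo_label_with_proxies(features, classifier_weights, tau=0.8):
    """Treat each row of the classifier weight matrix as a class proxy
    and pseudo-label unlabeled features by cosine similarity to it.
    Low-confidence samples go into a 'negative' pool instead of being
    discarded (a simplification of the positive-negative proxy pool)."""
    # L2-normalize features and proxies so the dot product is cosine similarity
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = classifier_weights / np.linalg.norm(classifier_weights, axis=1, keepdims=True)
    sims = f @ w.T                      # (n_samples, n_classes)
    labels = sims.argmax(axis=1)        # nearest proxy = pseudo-label
    conf = sims.max(axis=1)
    positive = conf >= tau              # confident: train on pseudo-label
    negative = ~positive                # low-confidence: kept, not discarded
    return labels, positive, negative

# Toy example: 2 classes with proxies along the coordinate axes.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
X = np.array([[0.9, 0.1],   # close to class 0 -> positive pool
              [0.5, 0.5]])  # ambiguous -> negative pool
labels, pos, neg = pseudo_label_with_proxies(X, W, tau=0.8)
```

The paper's actual method additionally optimizes the global proxy against outliers on the server; this sketch only illustrates the client-side routing of unlabeled samples.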
I was shot point-blank.
At sixteen years old, I chased down and tackled a man who had stolen a woman’s purse.
In the struggle, the thief shot me at point-blank range.
The bullet tore through my gut, lodging in my liver.
Doctors weren’t sure I would survive.
Hi, it’s Tony Box,
candidate for Texas Attorney General to replace Ken Paxton.
That day, I got a second chance at life.
I vowed to dedicate the time God had given me to the servi…