This is as good a time as any for a thought experiment.
You're in Nazi Germany. You know about the camps, you know what they do; you see the ash fall, you smell it. People who resist alone are killed; some are sent to the camps too. You're afraid even to talk to people about it for fear that they'll turn you in.
You think back to when the camps were being built. You had all the warning signs, but you didn't know how to interpret them. You couldn't believe it would happen. You thought you'd have a chance to vote him out. You thought there might be another way. You thought maybe things would turn out differently if you just sat tight, kept your head down, kept yourself safe.
You see a family being dragged from their home. You know they will be killed. You want to fight, not just for them but for yourself. You opposed Hitler, and you know you could end up on the list at any point, even if you do nothing.
You wish you could rise up, shoot the SS, open the gates, fight it all. You know you aren't alone, but you don't know how to connect with the people who want the same thing.
Using the knowledge we have now, what should you have done in the preceding months and years to connect, to build a community that would open up all paths of resistance?
There were people who resisted. We know it wasn't enough.
Gun laws in Nazi Germany were very similar to US laws in that Nazis were largely free to own guns and everyone else was not. Unlike the US, where "others" have historically been controlled by the fear that they might be randomly executed, Germany codified it outright. Red flag laws were one more step in the US toward that codification, and there will be more.
When Nazis were taking away those guns, the social networks didn't exist to make resistance possible for most folks. But some Jews were able to resist.
It wasn't the guns that made the Warsaw Ghetto Uprising possible, though they definitely helped. The Warsaw Ghetto Uprising was made possible by labor organizing in the preceding years.
If there had been more uprisings like that, the Holocaust could have been stopped, if not prevented. Social networks make resistance possible. Guns are only useful tools to resist authoritarianism *after* you build a community able to support that resistance, and they are only one of many tools made useful by that community.
Getting guns is easy, and not always necessary. Building community is hard. Guns won't keep you safe. Community will.
Single acts of resistance may slow the machine down, but to actually bring down a monster you need to be able to attack more than once. You need a society of resistance. If you are afraid now, build that. Talk to people while it's still safe to do so. Ask them where their red line is. Talk to neighbors. Figure out your network.
Take the steps you need now to keep your neighbors safe, to keep yourself safe.
#USPol
On Electric Vehicle Energy Demand Forecasting and the Effect of Federated Learning
Andreas Tritsarolis, Gil Sampaio, Nikos Pelekis, Yannis Theodoridis
https://arxiv.org/abs/2602.20782 https://arxiv.org/pdf/2602.20782 https://arxiv.org/html/2602.20782
arXiv:2602.20782v1 Announce Type: new
Abstract: The widespread adoption of new energy resources, smart devices, and demand side management strategies has motivated several analytics operations, from infrastructure load modeling to user behavior profiling. Energy Demand Forecasting (EDF) of Electric Vehicle Supply Equipments (EVSEs) is one of the most critical operations for ensuring efficient energy management and sustainability, since it enables utility providers to anticipate energy/power demand, optimize resource allocation, and implement proactive measures to improve grid reliability. However, accurate EDF is a challenging problem due to external factors, such as the varying user routines, weather conditions, driving behaviors, unknown state of charge, etc. Furthermore, as concerns and restrictions about privacy and sustainability have grown, training data has become increasingly fragmented, resulting in distributed datasets scattered across different data silos and/or edge devices, calling for federated learning solutions. In this paper, we investigate different well-established time series forecasting methodologies to address the EDF problem, from statistical methods (the ARIMA family) to traditional machine learning models (such as XGBoost) and deep neural networks (GRU and LSTM). We provide an overview of these methods through a performance comparison over four real-world EVSE datasets, evaluated under both centralized and federated learning paradigms, focusing on the trade-offs between forecasting fidelity, privacy preservation, and energy overheads. Our experimental results demonstrate, on the one hand, the superiority of gradient boosted trees (XGBoost) over statistical and NN-based models in both prediction accuracy and energy efficiency and, on the other hand, an insight that Federated Learning-enabled models balance these factors, offering a promising direction for decentralized energy demand forecasting.
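The federated side of the abstract can be illustrated with a minimal FedAvg sketch. This is my own toy example, not the paper's code: each "site" fits a local linear demand model on its private data, and a server averages the weights proportionally to each site's data share. All names (`local_update`, `fedavg`) and the synthetic data are assumptions for illustration.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=20):
    # a few local gradient steps on mean squared error, on private data only
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(client_data, dim, rounds=20):
    # server broadcasts w, clients train locally, server averages by data share
    w = np.zeros(dim)
    total = sum(len(y) for _, y in client_data)
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in client_data]
        w = sum((len(y) / total) * u for (_, y), u in zip(client_data, updates))
    return w

# synthetic "EVSE demand": linear in two lagged features, split across 3 sites
rng = np.random.default_rng(0)
true_w = np.array([0.6, 0.3])
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 2))
    y = X @ true_w + 0.01 * rng.normal(size=200)
    clients.append((X, y))

w = fedavg(clients, dim=2)
print(np.round(w, 2))  # ≈ [0.6 0.3]: raw demand data never leaves a site
```

The design point is that only model weights cross the network, which is the privacy trade-off the abstract weighs against centralized training.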
toXiv_bot_toot
Localized Dynamics-Aware Domain Adaption for Off-Dynamics Offline Reinforcement Learning
Zhangjie Xia, Yu Yang, Pan Xu
https://arxiv.org/abs/2602.21072 https://arxiv.org/pdf/2602.21072 https://arxiv.org/html/2602.21072
arXiv:2602.21072v1 Announce Type: new
Abstract: Off-dynamics offline reinforcement learning (RL) aims to learn a policy for a target domain using limited target data and abundant source data collected under different transition dynamics. Existing methods typically address dynamics mismatch either globally over the state space or via pointwise data filtering; these approaches can miss localized cross-domain similarities or incur high computational cost. We propose Localized Dynamics-Aware Domain Adaptation (LoDADA), which exploits localized dynamics mismatch to better reuse source data. LoDADA clusters transitions from source and target datasets and estimates cluster-level dynamics discrepancy via domain discrimination. Source transitions from clusters with small discrepancy are retained, while those from clusters with large discrepancy are filtered out. This yields a fine-grained and scalable data selection strategy that avoids overly coarse global assumptions and expensive per-sample filtering. We provide theoretical insights and extensive experiments across environments with diverse global and local dynamics shifts. Results show that LoDADA consistently outperforms state-of-the-art off-dynamics offline RL methods by better leveraging localized distribution mismatch.
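As a loose illustration of the cluster-then-filter idea (not the authors' implementation), one can cluster pooled source and target transitions and use each cluster's domain imbalance as a crude stand-in for a learned domain discriminator, keeping source data only from well-mixed clusters. The function names and synthetic data here are my own assumptions.

```python
import numpy as np

def kmeans(X, k, iters=30):
    # deterministic farthest-point init so each mode gets a center
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None] - np.array(centers)) ** 2).sum(-1).min(1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def select_source(source, target, k=2, threshold=0.3):
    # cluster pooled transitions; a cluster's distance from a 50/50 domain
    # mix is a toy proxy for cluster-level dynamics discrepancy
    X = np.vstack([source, target])
    domain = np.array([0] * len(source) + [1] * len(target))
    labels = kmeans(X, k)
    keep = np.zeros(len(source), dtype=bool)
    for j in range(k):
        mask = labels == j
        if mask.any() and abs(domain[mask].mean() - 0.5) < threshold:
            keep |= labels[: len(source)] == j  # well-mixed: retain
    return keep

rng = np.random.default_rng(1)
target = rng.normal(0.0, 1.0, size=(100, 2))
source_near = rng.normal(0.0, 1.0, size=(100, 2))  # similar local dynamics
source_far = rng.normal(8.0, 1.0, size=(100, 2))   # mismatched dynamics
keep = select_source(np.vstack([source_near, source_far]), target)
print(keep[:100].mean(), keep[100:].mean())
```

The matched source mode lands in a mixed cluster and is retained, while the mismatched mode forms a source-only cluster and is filtered out, which is the coarse shape of the selection strategy the abstract describes.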
Glad to see someone major endorsing McMorrow.
I got the sense that el-Sayed jumped in because he saw a lane for a non-woman progressive candidate, not because he wants to be a Senator or because he believes he’d be good at it. Stevens is a neolib disaster. McMorrow is an experienced *legislator* with the right attitude for the time. @…
Life Treats 🥇
Life Rewards 🥇
📷 Nikon FE
🎞️ Ilford HP5 Plus 400, expired 1993
If you like my work, buy me a coffee via PayPal. #filmphotography
On the spatial structure and intermittency of soot in a lab-scale gas turbine combustor: Insights from large-eddy simulations
Leonardo Pachano, Daniel Mira, Abhijit Kalbhor, Jeroen van Oijen
https://arxiv.org/abs/2602.23155 https://arxiv.org/pdf/2602.23155 https://arxiv.org/html/2602.23155
arXiv:2602.23155v1 Announce Type: new
Abstract: This work presents a numerical investigation of soot formation in the Cambridge lab-scale gas turbine combustor. Large-eddy simulations (LES) of a swirl-stabilized ethylene flame are performed using the flamelet generated manifold method coupled with a discrete sectional model to account for soot formation, growth, and oxidation. The study aims to elucidate the mechanism governing the spatial structure and intermittency of soot, supported by comparisons with experimental data. The predicted soot distribution agrees well with measurements, with peak concentrations near the bluff body. Flow recirculation is identified as the key mechanism driving soot accumulation in fuel-rich regions, where surface reactions dominate soot mass growth. Soot intermittency arises from fluctuations in the flow field driven by interactions between the flame front and the recirculation vortex. Two soot modeling approaches are evaluated, differing in their treatment of soot model quantities: the first approach employs on-the-fly computation of source terms (FGM-C), while the second uses fully pre-tabulated source terms (FGM-T). Their predictive performance and computational cost are compared in the context of unsteady, sooting flames in swirl-stabilized combustors.
Not All Subjectivity Is the Same! Defining Desiderata for the Evaluation of Subjectivity in NLP
Urja Khurana, Michiel van der Meer, Enrico Liscio, Antske Fokkens, Pradeep K. Murukannaiah
https://arxiv.org/abs/2603.28351 https://arxiv.org/pdf/2603.28351 https://arxiv.org/html/2603.28351
arXiv:2603.28351v1 Announce Type: new
Abstract: Subjective judgments are part of several NLP datasets and recent work is increasingly prioritizing models whose outputs reflect this diversity of perspectives. Such responses allow us to shed light on minority voices, which are frequently marginalized or obscured by dominant perspectives. It remains a question whether our evaluation practices align with these models' objectives. This position paper proposes seven evaluation desiderata for subjectivity-sensitive models, rooted in how subjectivity is represented in NLP data and models. The desiderata are constructed in a top-down approach, keeping in mind the user-centric impact of such models. We scan the experimental setup of 60 papers and show that various aspects of subjectivity are still understudied: the distinction between ambiguous and polyphonic input, whether subjectivity is effectively expressed to the user, and a lack of interplay between different desiderata, amongst other gaps.
Modeling the mutational dynamics of very short tandem repeats
Amos Onn (Chair of Experimental Medicine and Therapy Research, University of Regensburg, Bioinformatics Group, Faculty of Mathematics and Computer Science, and Interdisciplinary Center for Bioinformatics, University of Leipzig), Tzipy Marx (Department of Computer Science and Applied Mathematics, Weizmann Institute of Science), Liming Tao (Cellular Tissue Genomics, Genentech), Tamir Biezuner (Department of Computer Science and Applied Mathematics, Weizmann Institute of Science), Ehud Shapiro (Department of Computer Science and Applied Mathematics, Weizmann Institute of Science), Christoph A. Klein (Chair of Experimental Medicine and Therapy Research, University of Regensburg, Fraunhofer Institute for Toxicology and Experimental Medicine Regensburg), Peter F. Stadler (Bioinformatics Group, Faculty of Mathematics and Computer Science, and Interdisciplinary Center for Bioinformatics, University of Leipzig, Max Planck Institute for Mathematics in the Sciences, Institute for Theoretical Chemistry, University of Vienna, Facultad de Ciencias, Universidad Nacional de Colombia, Center for non-coding RNA in Technology and Health, University of Copenhagen, Santa Fe Institute)
https://arxiv.org/abs/2603.25628 https://arxiv.org/pdf/2603.25628 https://arxiv.org/html/2603.25628
arXiv:2603.25628v1 Announce Type: new
Abstract: Short tandem repeats (STRs) are low-entropy regions in the genome, consisting of a short (1-6 bp) unit that is consecutively repeated multiple times. They are known for high mutational instability, due to so-called stutter-mutations, in which the number of units in the run increases or decreases. In particular, STRs with repeat unit length of 1-2 bp are prone to mutate even within several cell divisions. The extremely rapid accumulation of variation makes them interesting phylogenetic markers for retrospective single-cell lineage reconstruction. Here we model their mutational dynamics at the level of individual repeat unit type and then aggregate length variations over many STR loci with the aim of obtaining a very fast "molecular clock". We calibrate our model based on several datasets with known lineage structure prepared from cultured cells. We find that the mutational dynamics of STRs are reasonably consistent for a given cell line, but vary among different ones. This suggests that the dynamics are not entirely explained by mutations in caretaker genes; rather, various other factors play a role -- possibly tissue origin and differentiation state. Further data and research are necessary to assess their relative effects.
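The stutter-mutation dynamics in the abstract can be mimicked with a toy random walk. This is my own sketch with an assumed per-division mutation rate, not the paper's calibrated model: each cell division the repeat count shifts by one unit, up or down, with probability p, so the variance of aggregated repeat lengths grows roughly linearly with divisions, which is the "molecular clock" behavior.

```python
import random

def simulate_str(n0, p, divisions, rng):
    # random-walk stutter model: each division the repeat count gains or
    # loses one unit with probability p (rate p is an assumed toy value)
    n = n0
    for _ in range(divisions):
        if rng.random() < p:
            n += rng.choice((-1, 1))
    return n

rng = random.Random(0)
p, divisions = 0.01, 200
lengths = [simulate_str(30, p, divisions, rng) for _ in range(5000)]
mean = sum(lengths) / len(lengths)
var = sum((x - mean) ** 2 for x in lengths) / len(lengths)
# under this model the length variance grows like p * divisions = 2.0,
# so variance across loci reads out elapsed divisions, i.e. a clock
print(round(var, 2))
```

Inverting that relationship (divisions ≈ variance / p) is what lets aggregated STR length variation date lineage splits, once p is calibrated per cell line as the paper does with known-lineage cultures.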
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[4/6]:
- Neural Proposals, Symbolic Guarantees: Neuro-Symbolic Graph Generation with Hard Constraints
Chuqin Geng, Li Zhang, Mark Zhang, Haolin Ye, Ziyu Zhao, Xujie Si
https://arxiv.org/abs/2602.16954 https://mastoxiv.page/@arXiv_csLG_bot/116102434757760085
- Multi-Probe Zero Collision Hash (MPZCH): Mitigating Embedding Collisions and Enhancing Model Fres...
Ziliang Zhao, et al.
https://arxiv.org/abs/2602.17050 https://mastoxiv.page/@arXiv_csLG_bot/116102517335590034
- MASPO: Unifying Gradient Utilization, Probability Mass, and Signal Reliability for Robust and Sam...
Fu, Lin, Fang, Zheng, Hu, Shao, Qin, Pan, Zeng, Cai
https://arxiv.org/abs/2602.17550 https://mastoxiv.page/@arXiv_csLG_bot/116102581561441103
- A Theoretical Framework for Modular Learning of Robust Generative Models
Corinna Cortes, Mehryar Mohri, Yutao Zhong
https://arxiv.org/abs/2602.17554 https://mastoxiv.page/@arXiv_csLG_bot/116102582216715527
- Multi-Round Human-AI Collaboration with User-Specified Requirements
Sima Noorani, Shayan Kiyani, Hamed Hassani, George Pappas
https://arxiv.org/abs/2602.17646 https://mastoxiv.page/@arXiv_csLG_bot/116102592047544971
- NEXUS: A compact neural architecture for high-resolution spatiotemporal air quality forecasting i...
Rampunit Kumar, Aditya Maheshwari
https://arxiv.org/abs/2602.19654 https://mastoxiv.page/@arXiv_csLG_bot/116125610403473755
- Augmenting Lateral Thinking in Language Models with Humor and Riddle Data for the BRAINTEASER Task
Mina Ghashami, Soumya Smruti Mishra
https://arxiv.org/abs/2405.10385 https://mastoxiv.page/@arXiv_csCL_bot/112472190479013167
- Watermarking Language Models with Error Correcting Codes
Patrick Chao, Yan Sun, Edgar Dobriban, Hamed Hassani
https://arxiv.org/abs/2406.10281 https://mastoxiv.page/@arXiv_csCR_bot/112636307340218522
- Learning to Control Unknown Strongly Monotone Games
Siddharth Chandak, Ilai Bistritz, Nicholas Bambos
https://arxiv.org/abs/2407.00575 https://mastoxiv.page/@arXiv_csMA_bot/112715733875586837
- Classification and reconstruction for single-pixel imaging with classical and quantum neural netw...
Sofya Manko, Dmitry Frolovtsev
https://arxiv.org/abs/2407.12506 https://mastoxiv.page/@arXiv_quantph_bot/112806295477530195
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation
Weichen Wu, Gen Li, Yuting Wei, Alessandro Rinaldo
https://arxiv.org/abs/2410.16106 https://mastoxiv.page/@arXiv_statML_bot/113350611306532443
- Big data approach to Kazhdan-Lusztig polynomials
Abel Lacabanne, Daniel Tubbenhauer, Pedro Vaz
https://arxiv.org/abs/2412.01283 https://mastoxiv.page/@arXiv_mathRT_bot/113587812663608119
- MoEMba: A Mamba-based Mixture of Experts for High-Density EMG-based Hand Gesture Recognition
Mehran Shabanpour, Kasra Rad, Sadaf Khademi, Arash Mohammadi
https://arxiv.org/abs/2502.17457 https://mastoxiv.page/@arXiv_eessSP_bot/114069047434302054
- Tightening Optimality gap with confidence through conformal prediction
Miao Li, Michael Klamkin, Russell Bent, Pascal Van Hentenryck
https://arxiv.org/abs/2503.04071 https://mastoxiv.page/@arXiv_statML_bot/114120074927291283
- SEED: Towards More Accurate Semantic Evaluation for Visual Brain Decoding
Juhyeon Park, Peter Yongho Kim, Jiook Cha, Shinjae Yoo, Taesup Moon
https://arxiv.org/abs/2503.06437 https://mastoxiv.page/@arXiv_csCV_bot/114142690988862508
- How much does context affect the accuracy of AI health advice?
Prashant Garg, Thiemo Fetzer
https://arxiv.org/abs/2504.18310 https://mastoxiv.page/@arXiv_econGN_bot/114414380916957986
- Reproducing and Improving CheXNet: Deep Learning for Chest X-ray Disease Classification
Daniel J. Strick, Carlos Garcia, Anthony Huang, Thomas Gardos
https://arxiv.org/abs/2505.06646 https://mastoxiv.page/@arXiv_eessIV_bot/114499319986528625
- Sharp Gaussian approximations for Decentralized Federated Learning
Soham Bonnerjee, Sayar Karmakar, Wei Biao Wu
https://arxiv.org/abs/2505.08125 https://mastoxiv.page/@arXiv_statML_bot/114505047719395949
- HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning
Chuhao Zhou, Jianfei Yang
https://arxiv.org/abs/2505.17645 https://mastoxiv.page/@arXiv_csCV_bot/114572928659057348
- A Copula Based Supervised Filter for Feature Selection in Diabetes Risk Prediction Using Machine ...
Agnideep Aich, Md Monzur Murshed, Sameera Hewage, Amanda Mayeaux
https://arxiv.org/abs/2505.22554 https://mastoxiv.page/@arXiv_statML_bot/114589983451462525
- Synthesis of discrete-continuous quantum circuits with multimodal diffusion models
Florian Fürrutter, Zohim Chandani, Ikko Hamamura, Hans J. Briegel, Gorka Muñoz-Gil
https://arxiv.org/abs/2506.01666 https://mastoxiv.page/@arXiv_quantph_bot/114618420761346125
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[2/6]:
- Performance Asymmetry in Model-Based Reinforcement Learning
Jing Yu Lim, Rushi Shah, Zarif Ikram, Samson Yu, Haozhe Ma, Tze-Yun Leong, Dianbo Liu
https://arxiv.org/abs/2505.19698 https://mastoxiv.page/@arXiv_csLG_bot/114578810521008766
- Towards Robust Real-World Multivariate Time Series Forecasting: A Unified Framework for Dependenc...
Jinkwan Jang, Hyungjin Park, Jinmyeong Choi, Taesup Kim
https://arxiv.org/abs/2506.08660 https://mastoxiv.page/@arXiv_csLG_bot/114664238967892509
- Wasserstein Barycenter Soft Actor-Critic
Zahra Shahrooei, Ali Baheri
https://arxiv.org/abs/2506.10167 https://mastoxiv.page/@arXiv_csLG_bot/114675175949432731
- Foundation Models for Causal Inference via Prior-Data Fitted Networks
Yuchen Ma, Dennis Frauen, Emil Javurek, Stefan Feuerriegel
https://arxiv.org/abs/2506.10914 https://mastoxiv.page/@arXiv_csLG_bot/114675529854402158
- FREQuency ATTribution: benchmarking frequency-based occlusion for time series data
Dominique Mercier, Andreas Dengel, Sheraz Ahmed
https://arxiv.org/abs/2506.18481 https://mastoxiv.page/@arXiv_csLG_bot/114738421450807709
- Complexity-aware fine-tuning
Andrey Goncharov, Daniil Vyazhev, Petr Sychev, Edvard Khalafyan, Alexey Zaytsev
https://arxiv.org/abs/2506.21220 https://mastoxiv.page/@arXiv_csLG_bot/114754764750730849
- Transfer Learning in Infinite Width Feature Learning Networks
Clarissa Lauditi, Blake Bordelon, Cengiz Pehlevan
https://arxiv.org/abs/2507.04448 https://mastoxiv.page/@arXiv_csLG_bot/114818005803079705
- A hierarchy tree data structure for behavior-based user segment representation
Liu, Kang, Iyer, Malik, Li, Wang, Lu, Zhao, Wang, Liu, Liu, Liang, Yu
https://arxiv.org/abs/2508.01115 https://mastoxiv.page/@arXiv_csLG_bot/114975999992144374
- One-Step Flow Q-Learning: Addressing the Diffusion Policy Bottleneck in Offline Reinforcement Lea...
Thanh Nguyen, Chang D. Yoo
https://arxiv.org/abs/2508.13904 https://mastoxiv.page/@arXiv_csLG_bot/115060568241390847
- Uncertainty Propagation Networks for Neural Ordinary Differential Equations
Hadi Jahanshahi, Zheng H. Zhu
https://arxiv.org/abs/2508.16815 https://mastoxiv.page/@arXiv_csLG_bot/115094785677272005
- Learning Unified Representations from Heterogeneous Data for Robust Heart Rate Modeling
Zhengdong Huang, Zicheng Xie, Wentao Tian, Jingyu Liu, Lunhong Dong, Peng Yang
https://arxiv.org/abs/2508.21785 https://mastoxiv.page/@arXiv_csLG_bot/115128450608548173
- Monte Carlo Tree Diffusion with Multiple Experts for Protein Design
Liu, Cao, Jiang, Luo, Duan, Wang, Sosnick, Xu, Stevens
https://arxiv.org/abs/2509.15796 https://mastoxiv.page/@arXiv_csLG_bot/115247429156900905
- From Samples to Scenarios: A New Paradigm for Probabilistic Forecasting
Xilin Dai, Zhijian Xu, Wanxu Cai, Qiang Xu
https://arxiv.org/abs/2509.19975 https://mastoxiv.page/@arXiv_csLG_bot/115264498084813952
- Why High-rank Neural Networks Generalize?: An Algebraic Framework with RKHSs
Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
https://arxiv.org/abs/2509.21895 https://mastoxiv.page/@arXiv_csLG_bot/115287261047939306
- From Parameters to Behaviors: Unsupervised Compression of the Policy Space
Davide Tenedini, Riccardo Zamboni, Mirco Mutti, Marcello Restelli
https://arxiv.org/abs/2509.22566 https://mastoxiv.page/@arXiv_csLG_bot/115287379672141023
- RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility
Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang
https://arxiv.org/abs/2509.23115 https://mastoxiv.page/@arXiv_csLG_bot/115293273559547106
- Polychromic Objectives for Reinforcement Learning
Jubayer Ibn Hamid, Ifdita Hasan Orney, Ellen Xu, Chelsea Finn, Dorsa Sadigh
https://arxiv.org/abs/2509.25424 https://mastoxiv.page/@arXiv_csLG_bot/115298579764580635
- Recursive Self-Aggregation Unlocks Deep Thinking in Large Language Models
Siddarth Venkatraman, et al.
https://arxiv.org/abs/2509.26626 https://mastoxiv.page/@arXiv_csLG_bot/115298789487177431
- Cautious Weight Decay
Chen, Li, Liang, Su, Xie, Pierse, Liang, Lao, Liu
https://arxiv.org/abs/2510.12402 https://mastoxiv.page/@arXiv_csLG_bot/115377759317818093
- TeamFormer: Shallow Parallel Transformers with Progressive Approximation
Wei Wang, Xiao-Yong Wei, Qing Li
https://arxiv.org/abs/2510.15425 https://mastoxiv.page/@arXiv_csLG_bot/115405933861293858
- Latent-Augmented Discrete Diffusion Models
Dario Shariatian, Alain Durmus, Umut Simsekli, Stefano Peluchetti
https://arxiv.org/abs/2510.18114 https://mastoxiv.page/@arXiv_csLG_bot/115417332500265972
- Predicting Metabolic Dysfunction-Associated Steatotic Liver Disease using Machine Learning Method...
Mary E. An, Paul Griffin, Jonathan G. Stine, Ramakrishna Balakrishnan, Soundar Kumara
https://arxiv.org/abs/2510.22293 https://mastoxiv.page/@arXiv_csLG_bot/115451746201804373