Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[3/5]:
- Look-Ahead Reasoning on Learning Platforms
Haiqing Zhu, Tijana Zrnic, Celestine Mendler-Dünner
https://arxiv.org/abs/2511.14745 https://mastoxiv.page/@arXiv_csLG_bot/115575981129228810
- Deep Gaussian Process Proximal Policy Optimization
Matthijs van der Lende, Juan Cardenas-Cartagena
https://arxiv.org/abs/2511.18214 https://mastoxiv.page/@arXiv_csLG_bot/115610315210502140
- Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory
Akira Tamamori
https://arxiv.org/abs/2511.23083 https://mastoxiv.page/@arXiv_csLG_bot/115644325602130493
- xGR: Efficient Generative Recommendation Serving at Scale
Sun, Liu, Zhang, Wu, Yang, Liang, Li, Ma, Liang, Ren, Zhang, Liu, Zhang, Qian, Yang
https://arxiv.org/abs/2512.11529 https://mastoxiv.page/@arXiv_csLG_bot/115723008170311172
- Credit Risk Estimation with Non-Financial Features: Evidence from a Synthetic Istanbul Dataset
Atalay Denknalbant, Emre Sezdi, Zeki Furkan Kutlu, Polat Goktas
https://arxiv.org/abs/2512.12783 https://mastoxiv.page/@arXiv_csLG_bot/115729287232895097
- The Semantic Illusion: Certified Limits of Embedding-Based Hallucination Detection in RAG Systems
Debu Sinha
https://arxiv.org/abs/2512.15068 https://mastoxiv.page/@arXiv_csLG_bot/115740048142898391
- Towards Reproducibility in Predictive Process Mining: SPICE -- A Deep Learning Library
Stritzel, Hühnerbein, Rauch, Zarate, Fleischmann, Buck, Lischka, Frey
https://arxiv.org/abs/2512.16715 https://mastoxiv.page/@arXiv_csLG_bot/115745910810427061
- Differentially private Bayesian tests
Abhisek Chakraborty, Saptati Datta
https://arxiv.org/abs/2401.15502 https://mastoxiv.page/@arXiv_statML_bot/111843467510507382
- SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning
Paul Mangold, Sergey Samsonov, Safwan Labbi, Ilya Levin, Reda Alami, Alexey Naumov, Eric Moulines
https://arxiv.org/abs/2402.04114
- Adjusting Model Size in Continual Gaussian Processes: How Big is Big Enough?
Guiomar Pescador-Barrios, Sarah Filippi, Mark van der Wilk
https://arxiv.org/abs/2408.07588 https://mastoxiv.page/@arXiv_statML_bot/112965266196097314
- Non-Perturbative Trivializing Flows for Lattice Gauge Theories
Mathis Gerdes, Pim de Haan, Roberto Bondesan, Miranda C. N. Cheng
https://arxiv.org/abs/2410.13161 https://mastoxiv.page/@arXiv_heplat_bot/113327593338897860
- Dynamic PET Image Prediction Using a Network Combining Reversible and Irreversible Modules
Sun, Zhang, Xia, Sun, Chen, Yang, Liu, Zhu, Liu
https://arxiv.org/abs/2410.22674 https://mastoxiv.page/@arXiv_eessIV_bot/113401026110345647
- Targeted Learning for Variable Importance
Xiaohan Wang, Yunzhe Zhou, Giles Hooker
https://arxiv.org/abs/2411.02221 https://mastoxiv.page/@arXiv_statML_bot/113429912435819479
- Refined Analysis of Federated Averaging and Federated Richardson-Romberg
Paul Mangold, Alain Durmus, Aymeric Dieuleveut, Sergey Samsonov, Eric Moulines
https://arxiv.org/abs/2412.01389 https://mastoxiv.page/@arXiv_statML_bot/113588027268311334
- Embedding-Driven Data Distillation for 360-Degree IQA With Residual-Aware Refinement
Abderrezzaq Sendjasni, Seif-Eddine Benkabou, Mohamed-Chaker Larabi
https://arxiv.org/abs/2412.12667 https://mastoxiv.page/@arXiv_csCV_bot/113672538318570349
- 3D Cell Oversegmentation Correction via Geo-Wasserstein Divergence
Peter Chen, Bryan Chang, Olivia A. Creasey, Julie Beth Sneddon, Zev J. Gartner, Yining Liu
https://arxiv.org/abs/2502.01890 https://mastoxiv.page/@arXiv_csCV_bot/113949981686723660
- DHP: Discrete Hierarchical Planning for Hierarchical Reinforcement Learning Agents
Shashank Sharma, Janina Hoffmann, Vinay Namboodiri
https://arxiv.org/abs/2502.01956 https://mastoxiv.page/@arXiv_csRO_bot/113949997485625086
- Foundation for unbiased cross-validation of spatio-temporal models for species distribution modeling
Diana Koldasbayeva, Alexey Zaytsev
https://arxiv.org/abs/2502.03480
- GraphCompNet: A Position-Aware Model for Predicting and Compensating Shape Deviations in 3D Printing
Juheon Lee, Lei (Rachel) Chen, Juan Carlos Catana, Hui Wang, Jun Zeng
https://arxiv.org/abs/2502.09652 https://mastoxiv.page/@arXiv_csCV_bot/114017924551186136
- LookAhead Tuning: Safer Language Models via Partial Answer Previews
Liu, Wang, Luo, Yuan, Sun, Liang, Zhang, Zhou, Hooi, Deng
https://arxiv.org/abs/2503.19041 https://mastoxiv.page/@arXiv_csCL_bot/114227502448008352
- Constraint-based causal discovery with tiered background knowledge and latent variables in single...
Christine W. Bang, Vanessa Didelez
https://arxiv.org/abs/2503.21526 https://mastoxiv.page/@arXiv_statML_bot/114238919468512990
Higher in prices, lower in quality.
From: @…
https://mas.to/@alternativeto/115266493151400548
Mesh of Spatiotemporal Optical Vortices with Programmable Intensity Nulls
Jinxin Wu, Dan Wang, Qingqing Liang, Jianhua Hu, Jiahao Dong, Jijun Feng, Yi Liu
https://arxiv.org/abs/2511.18087 https://arxiv.org/pdf/2511.18087 https://arxiv.org/html/2511.18087
arXiv:2511.18087v1 Announce Type: new
Abstract: Light carrying transverse orbital angular momentum (T-OAM) in the form of spatiotemporal optical vortices (STOVs) is opening new degrees of freedom for structured light manipulation. Such spatiotemporal wavepackets hold significant potential for optical trapping, analog optical computing, and the study of photonic symmetry and topology, among other applications. Up to now, the synthesis of such vortices has been limited to one dimension, either the temporal or the spatial domain. In this work, we propose and experimentally demonstrate a flexible two-dimensional mesh of spatiotemporal optical vortices (M-STOV) with programmable intensity nulls, and analyze their diffraction patterns for detection. Furthermore, we extend the spectral range of the M-STOV via second-harmonic generation while examining the transfer of OAM in this nonlinear process. This study establishes a foundational framework for designing higher-dimensional spatiotemporal vortex fields and promises a high-capacity information carrier based on spatiotemporal optical vortices.
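For orientation (this is an illustrative textbook form, not an expression taken from the paper), a single spatiotemporal optical vortex of topological charge \ell is commonly modeled as a phase singularity in the space-time (x, t) plane of the pulse envelope:

E(x, y, t) \propto \left( \frac{x}{w_x} + i \, \mathrm{sgn}(\ell) \, \frac{t}{\tau} \right)^{|\ell|} \exp\!\left( -\frac{x^2}{w_x^2} - \frac{y^2}{w_y^2} - \frac{t^2}{\tau^2} \right)

where w_x and \tau set the spatial and temporal widths; the phase winds by 2\pi\ell around the intensity null at (x, t) = (0, 0), which is what endows the pulse with transverse OAM. A two-dimensional mesh such as the M-STOV described above would correspond to an array of such programmable nulls distributed across the spatiotemporal profile.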
“Additional direct manufacturing costs do not fully explain the higher prices of electric cars outside China. [..] in Germany, the retail price difference is more than double the manufacturing cost difference.”
https://www.iea.org/reports/what-next-for-the…
Suspicions in the crypto community point to AI-supported hackers carrying out a concentrated campaign to steal around $5 million from old and sometimes abandoned DeFi projects.
Is an AI hacker targeting old DeFi projects in $5M spree?
https://protos.com/is-an-ai-hacker-tar
An international group is organizing an observing campaign through the Citizen Science Working Group of the #LUMIO mission for the #Geminid meteoroid stream, 13-15 Dec 2025. LUMIO is an ESA space mission to observe lunar #impact flashes (LIFs) from space, on the lunar far side. During the maximum of the stream, the number of visible impact flashes will be higher than during non-shower times, so there is a good chance of detecting at least some impact flashes.
Observations can be made using moderately sized telescopes and a video camera. The website https://lif.mi.imati.cnr.it/home_page.php now hosts a recording of a thorough talk about the project, with its slides at https://lif.mi.imati.cnr.it/open_item_page.php?item_idk=LDB-000000001, and slides about the preferred analysis software ALFI at https://lif.mi.imati.cnr.it/open_item_page.php?item_idk=LDB-000000002. If you want to join the LGC, please sign up by 21 November.
Faculty? Staff? Student? Submit your project to the new “Open Source in Higher Education” track at SCaLE 23x!
#SCaLE23x
US volunteer fire departments are scrambling to find software amid shrinking options and higher costs, as companies backed by private equity dominate the market (Mike Baker/New York Times)
https://www.nytimes.com/2025/12/14/us/fire-department-software-priva…
“Items that I have bought regularly have gone up in price steadily
From hair dye to baby formula,
our grocery list has gotten smaller while our budget has had to increase.
Meats like steak are a no-go for our household.”
https://www.theguardian.com/us-news…
Convergence Guarantees for Federated SARSA with Local Training and Heterogeneous Agents
Paul Mangold, Eloïse Berthier, Eric Moulines
https://arxiv.org/abs/2512.17688 https://arxiv.org/pdf/2512.17688 https://arxiv.org/html/2512.17688
arXiv:2512.17688v1 Announce Type: new
Abstract: We present a novel theoretical analysis of Federated SARSA (FedSARSA) with linear function approximation and local training. We establish convergence guarantees for FedSARSA in the presence of heterogeneity, both in local transitions and rewards, providing the first sample and communication complexity bounds in this setting. At the core of our analysis is a new, exact multi-step error expansion for single-agent SARSA, which is of independent interest. Our analysis precisely quantifies the impact of heterogeneity, demonstrating the convergence of FedSARSA with multiple local updates. Crucially, we show that FedSARSA achieves linear speed-up with respect to the number of agents, up to higher-order terms due to Markovian sampling. Numerical experiments support our theoretical findings.
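As a rough illustration of the FedSARSA setting (not the authors' implementation), the sketch below runs SARSA(0) with linear function approximation locally on heterogeneous agents and periodically averages the parameter vectors on a server. The random MDPs, feature map, step sizes, and epsilon-greedy policy are illustrative assumptions.

# Minimal sketch of federated SARSA(0) with linear function approximation
# and local training; environments and hyperparameters are assumptions.
import numpy as np

N_STATES, N_ACTIONS, DIM = 5, 3, 8
GAMMA, ALPHA, EPS = 0.9, 0.05, 0.1

rng = np.random.default_rng(0)
# Fixed feature map phi(s, a), shared by all agents.
PHI = rng.normal(size=(N_STATES, N_ACTIONS, DIM)) / np.sqrt(DIM)

def make_agent(seed):
    """Heterogeneous agent: its own transition kernel and reward table."""
    r = np.random.default_rng(seed)
    P = r.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))
    R = r.uniform(0.0, 1.0, size=(N_STATES, N_ACTIONS))
    return P, R

def eps_greedy(theta, s, r):
    """Epsilon-greedy action from the linear Q-values phi(s, .) @ theta."""
    if r.random() < EPS:
        return int(r.integers(N_ACTIONS))
    return int(np.argmax(PHI[s] @ theta))

def local_sarsa(theta, P, R, steps, r):
    """Run `steps` on-policy SARSA(0) updates starting from shared theta."""
    theta = theta.copy()
    s = int(r.integers(N_STATES))
    a = eps_greedy(theta, s, r)
    for _ in range(steps):
        s_next = int(r.choice(N_STATES, p=P[s, a]))
        a_next = eps_greedy(theta, s_next, r)
        td = R[s, a] + GAMMA * PHI[s_next, a_next] @ theta - PHI[s, a] @ theta
        theta += ALPHA * td * PHI[s, a]
        s, a = s_next, a_next
    return theta

agents = [make_agent(k) for k in range(10)]
theta = np.zeros(DIM)
for rnd in range(50):  # communication rounds
    local = [local_sarsa(theta, P, R, steps=20,
                         r=np.random.default_rng(rnd * 100 + k))
             for k, (P, R) in enumerate(agents)]
    theta = np.mean(local, axis=0)  # server averages the local iterates
print("||theta|| after training:", np.linalg.norm(theta))

Averaging only after several local steps is what introduces the heterogeneity-driven drift that the paper's analysis quantifies; with a single agent the loop reduces to standard single-agent SARSA(0).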