Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:45

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[3/5]:
- Look-Ahead Reasoning on Learning Platforms
Haiqing Zhu, Tijana Zrnic, Celestine Mendler-Dünner
arxiv.org/abs/2511.14745 mastoxiv.page/@arXiv_csLG_bot/
- Deep Gaussian Process Proximal Policy Optimization
Matthijs van der Lende, Juan Cardenas-Cartagena
arxiv.org/abs/2511.18214 mastoxiv.page/@arXiv_csLG_bot/
- Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory
Akira Tamamori
arxiv.org/abs/2511.23083 mastoxiv.page/@arXiv_csLG_bot/
- xGR: Efficient Generative Recommendation Serving at Scale
Sun, Liu, Zhang, Wu, Yang, Liang, Li, Ma, Liang, Ren, Zhang, Liu, Zhang, Qian, Yang
arxiv.org/abs/2512.11529 mastoxiv.page/@arXiv_csLG_bot/
- Credit Risk Estimation with Non-Financial Features: Evidence from a Synthetic Istanbul Dataset
Atalay Denknalbant, Emre Sezdi, Zeki Furkan Kutlu, Polat Goktas
arxiv.org/abs/2512.12783 mastoxiv.page/@arXiv_csLG_bot/
- The Semantic Illusion: Certified Limits of Embedding-Based Hallucination Detection in RAG Systems
Debu Sinha
arxiv.org/abs/2512.15068 mastoxiv.page/@arXiv_csLG_bot/
- Towards Reproducibility in Predictive Process Mining: SPICE -- A Deep Learning Library
Stritzel, H\"uhnerbein, Rauch, Zarate, Fleischmann, Buck, Lischka, Frey
arxiv.org/abs/2512.16715 mastoxiv.page/@arXiv_csLG_bot/
- Differentially private Bayesian tests
Abhisek Chakraborty, Saptati Datta
arxiv.org/abs/2401.15502 mastoxiv.page/@arXiv_statML_bo
- SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning
Paul Mangold, Sergey Samsonov, Safwan Labbi, Ilya Levin, Reda Alami, Alexey Naumov, Eric Moulines
arxiv.org/abs/2402.04114
- Adjusting Model Size in Continual Gaussian Processes: How Big is Big Enough?
Guiomar Pescador-Barrios, Sarah Filippi, Mark van der Wilk
arxiv.org/abs/2408.07588 mastoxiv.page/@arXiv_statML_bo
- Non-Perturbative Trivializing Flows for Lattice Gauge Theories
Mathis Gerdes, Pim de Haan, Roberto Bondesan, Miranda C. N. Cheng
arxiv.org/abs/2410.13161 mastoxiv.page/@arXiv_heplat_bo
- Dynamic PET Image Prediction Using a Network Combining Reversible and Irreversible Modules
Sun, Zhang, Xia, Sun, Chen, Yang, Liu, Zhu, Liu
arxiv.org/abs/2410.22674 mastoxiv.page/@arXiv_eessIV_bo
- Targeted Learning for Variable Importance
Xiaohan Wang, Yunzhe Zhou, Giles Hooker
arxiv.org/abs/2411.02221 mastoxiv.page/@arXiv_statML_bo
- Refined Analysis of Federated Averaging and Federated Richardson-Romberg
Paul Mangold, Alain Durmus, Aymeric Dieuleveut, Sergey Samsonov, Eric Moulines
arxiv.org/abs/2412.01389 mastoxiv.page/@arXiv_statML_bo
- Embedding-Driven Data Distillation for 360-Degree IQA With Residual-Aware Refinement
Abderrezzaq Sendjasni, Seif-Eddine Benkabou, Mohamed-Chaker Larabi
arxiv.org/abs/2412.12667 mastoxiv.page/@arXiv_csCV_bot/
- 3D Cell Oversegmentation Correction via Geo-Wasserstein Divergence
Peter Chen, Bryan Chang, Olivia A Creasey, Julie Beth Sneddon, Zev J Gartner, Yining Liu
arxiv.org/abs/2502.01890 mastoxiv.page/@arXiv_csCV_bot/
- DHP: Discrete Hierarchical Planning for Hierarchical Reinforcement Learning Agents
Shashank Sharma, Janina Hoffmann, Vinay Namboodiri
arxiv.org/abs/2502.01956 mastoxiv.page/@arXiv_csRO_bot/
- Foundation for unbiased cross-validation of spatio-temporal models for species distribution modeling
Diana Koldasbayeva, Alexey Zaytsev
arxiv.org/abs/2502.03480
- GraphCompNet: A Position-Aware Model for Predicting and Compensating Shape Deviations in 3D Printing
Juheon Lee, Lei (Rachel) Chen, Juan Carlos Catana, Hui Wang, Jun Zeng
arxiv.org/abs/2502.09652 mastoxiv.page/@arXiv_csCV_bot/
- LookAhead Tuning: Safer Language Models via Partial Answer Previews
Liu, Wang, Luo, Yuan, Sun, Liang, Zhang, Zhou, Hooi, Deng
arxiv.org/abs/2503.19041 mastoxiv.page/@arXiv_csCL_bot/
- Constraint-based causal discovery with tiered background knowledge and latent variables in single...
Christine W. Bang, Vanessa Didelez
arxiv.org/abs/2503.21526 mastoxiv.page/@arXiv_statML_bo
toXiv_bot_toot

@randy_@social.linux.pizza
2025-09-26 05:21:27

Higher in price, lower in quality.
From: @…
mas.to/@alternativeto/11526649

@arXiv_physicsoptics_bot@mastoxiv.page
2025-11-25 09:56:33

Mesh of Spatiotemporal Optical Vortices with Programmable Intensity Nulls
Jinxin Wu, Dan Wang, Qingqing Liang, Jianhua Hu, Jiahao Dong, Jijun Feng, Yi Liu
arxiv.org/abs/2511.18087 arxiv.org/pdf/2511.18087 arxiv.org/html/2511.18087
arXiv:2511.18087v1 Announce Type: new
Abstract: Light carrying transverse orbital angular momentum (T-OAM) in the form of spatiotemporal optical vortices (STOVs) is opening new degrees of freedom for structured light manipulation. Such spatiotemporal wavepackets hold significant potential for optical trapping, analog optical computing, and the study of photonic symmetry and topology, among other applications. Up to now, the synthesis of such vortices has been limited to one dimension, in either the temporal or the spatial domain. In this work, we propose and experimentally demonstrate a two-dimensional, flexible mesh of spatiotemporal optical vortices (M-STOV) with programmable intensity nulls, and analyze their diffraction patterns for detection. Furthermore, we extend the spectral range of M-STOV via second-harmonic generation while examining the transfer of OAM in this nonlinear process. This study establishes a foundational framework for designing higher-dimensional spatiotemporal vortex fields and promises a high-capacity information carrier based on ST optical vortices.
toXiv_bot_toot
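
For context only (this equation is not from the abstract above; the symbols w_x, \tau_p, and \ell are illustrative): a single first-order STOV is commonly modeled as a wavepacket with a phase singularity in the space-time (x, \tau) plane,

E(x,\tau) \propto \left( \frac{x}{w_x} + i\,\mathrm{sgn}(\ell)\,\frac{\tau}{\tau_p} \right)^{|\ell|} \exp\!\left( -\frac{x^2}{w_x^2} - \frac{\tau^2}{\tau_p^2} \right),

where \ell is the transverse topological charge, w_x the spatial width, and \tau_p the pulse duration; the intensity null sits at the singularity x = \tau = 0. The M-STOV described in the abstract generalizes this single null to a programmable two-dimensional mesh of such nulls.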

@burger_jaap@mastodon.social
2025-11-21 14:05:40

“Additional direct manufacturing costs do not fully explain the higher prices of electric cars outside China. [..] in Germany, the retail price difference is more than double the manufacturing cost difference.”
iea.org/reports/what-next-for-

@metacurity@infosec.exchange
2025-12-19 12:34:49

Suspicions in the crypto community point to AI-supported hackers carrying out a concentrated campaign to steal around $5 million from old and sometimes abandoned DeFi projects.
Is an AI hacker targeting old DeFi projects in $5M spree?
protos.com/is-an-ai-hacker-tar

@cosmos4u@scicomm.xyz
2025-11-11 22:11:00

An international group is organizing an observing campaign through the Citizen Science Working Group of the #LUMIO mission: LUMIO is an ESA space mission to observe lunar #impact flashes (LIFs) from space, on the lunar far side, during the #Geminid meteoroid stream (13-15 Dec 2025). During the maximum of the stream, the number of visible impact flashes will be higher than during non-shower times, so there is a good chance of detecting at least a few.
Observations can be made with moderately sized telescopes and a video camera. The website lif.mi.imati.cnr.it/home_page. now has a recording of a thorough talk about the project, its slides at lif.mi.imati.cnr.it/open_item_ and slides about the preferred analysis software ALFI at lif.mi.imati.cnr.it/open_item_ . If you want to join the LGC, please sign up by 21 November.

@socallinuxexpo@social.linux.pizza
2025-10-22 19:35:01

Faculty? Staff? Student? Submit your project to the new “Open Source in Higher Education” track at SCaLE 23x!
#SCaLE23x

@Techmeme@techhub.social
2025-12-15 09:55:36

US volunteer fire departments are scrambling to find software amid shrinking options and higher costs, as companies backed by private equity dominate the market (Mike Baker/New York Times)
nytimes.com/2025/12/14/us/fire

“Items that I have bought regularly have gone up in price steadily, from hair dye to baby formula. Our grocery list has gotten smaller while our budget has had to increase. Meats like steak are a no-go for our household.”
theguardian.com/us-news…

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:40

Convergence Guarantees for Federated SARSA with Local Training and Heterogeneous Agents
Paul Mangold, Elo\"ise Berthier, Eric Moulines
arxiv.org/abs/2512.17688 arxiv.org/pdf/2512.17688 arxiv.org/html/2512.17688
arXiv:2512.17688v1 Announce Type: new
Abstract: We present a novel theoretical analysis of Federated SARSA (FedSARSA) with linear function approximation and local training. We establish convergence guarantees for FedSARSA in the presence of heterogeneity, both in local transitions and rewards, providing the first sample and communication complexity bounds in this setting. At the core of our analysis is a new, exact multi-step error expansion for single-agent SARSA, which is of independent interest. Our analysis precisely quantifies the impact of heterogeneity, demonstrating the convergence of FedSARSA with multiple local updates. Crucially, we show that FedSARSA achieves linear speed-up with respect to the number of agents, up to higher-order terms due to Markovian sampling. Numerical experiments support our theoretical findings.
toXiv_bot_toot
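
To make the setting of the abstract above concrete, here is a minimal sketch (not the authors' implementation) of federated SARSA(0) with linear function approximation and local training: each agent performs several on-policy SARSA updates against its own, possibly heterogeneous, environment, and a central server then averages the returned weight vectors. The environment interface (reset()/step()), the feature map phi, and all hyperparameters are illustrative assumptions.

import numpy as np

# Hedged sketch only: a generic FedAvg-style wrapper around SARSA(0) with linear
# function approximation. Environments are assumed to expose reset() -> state and
# step(action) -> (next_state, reward, done); phi(state, action) returns a feature vector.

def eps_greedy(theta, phi, n_actions, state, eps, rng):
    # Epsilon-greedy action selection under the linear value estimate phi(s, a) @ theta.
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax([phi(state, a) @ theta for a in range(n_actions)]))

def local_sarsa(theta, env, phi, n_actions, n_local_steps, alpha, gamma, eps, rng):
    # One agent's local training round: n_local_steps on-policy SARSA(0) updates
    # on its own (possibly heterogeneous) environment, starting from the server weights.
    theta = theta.copy()
    s = env.reset()
    a = eps_greedy(theta, phi, n_actions, s, eps, rng)
    for _ in range(n_local_steps):
        s_next, r, done = env.step(a)
        a_next = eps_greedy(theta, phi, n_actions, s_next, eps, rng)
        target = r + (0.0 if done else gamma * (phi(s_next, a_next) @ theta))
        td_error = target - phi(s, a) @ theta
        theta = theta + alpha * td_error * phi(s, a)
        if done:
            s = env.reset()
            a = eps_greedy(theta, phi, n_actions, s, eps, rng)
        else:
            s, a = s_next, a_next
    return theta

def fed_sarsa(envs, phi, dim, n_actions, n_rounds, n_local_steps,
              alpha=0.05, gamma=0.95, eps=0.1, seed=0):
    # Server loop: broadcast the current weights, let every agent train locally,
    # then average the returned weights (FedAvg-style aggregation).
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    for _ in range(n_rounds):
        local_weights = [local_sarsa(theta, env, phi, n_actions, n_local_steps,
                                     alpha, gamma, eps, rng)
                         for env in envs]
        theta = np.mean(local_weights, axis=0)
    return theta

The averaging step mirrors FedAvg; the paper's analysis quantifies how agent heterogeneity and multiple local updates affect the convergence of exactly this kind of scheme.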