Tootfinder

Opt-in global Mastodon full text search. Join the index!

@netzschleuder@social.skewed.de
2025-09-27 05:00:04

edit_wikiquote: Wikiquote edits (2010)
A bipartite user-page network extracted from Wikiquotes. A user connects to a page if that user edited that page. Edits (edges) are timestamped. Edge weights represent counts of the number of edits.
This network has 3834 nodes and 12574 edges.
Tags: Informational, Web graph, Multigraph, Timestamps

edit_wikiquote: Wikiquote edits (2010). 3834 nodes, 12574 edges. https://networks.skewed.de/net/edit_wikiquote#ml
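For anyone who wants to poke at the dataset described above, here is a minimal sketch, assuming a recent graph-tool build with the Netzschleuder collection interface and network access; the "edit_wikiquote/ml" key mirrors the link above, and property names are not assumed.

```python
# Minimal sketch: load the edit_wikiquote bipartite user-page network from
# Netzschleuder via graph-tool's collection interface (downloads on first use).
import graph_tool.all as gt

g = gt.collection.ns["edit_wikiquote/ml"]   # bipartite user-page multigraph

print(g.num_vertices(), "nodes,", g.num_edges(), "edges")

# Per the description, edges carry timestamps and weights counting edits;
# inspect the actual property names rather than assuming them.
g.list_properties()
```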
@bourgwick@heads.social
2025-10-27 03:59:05

happily escaping into this 1980 issue of the UK zine dark star i scored from a street vendor last week. heavy old wave vibes but also a fun intersection of way serious british deadheads with alan moore (who contributes a curt vile page) & the soft boys (with a rapturous underwater moonlight review).

magazine called Dark Star with Grace Slick on the cover
@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:35

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/5]:
- The Diffusion Duality
Sahoo, Deschenaux, Gokaslan, Wang, Chiu, Kuleshov
arxiv.org/abs/2506.10892 mastoxiv.page/@arXiv_csLG_bot/
- Multimodal Representation Learning and Fusion
Jin, Ge, Xie, Luo, Song, Bi, Liang, Guan, Yeong, Song, Hao
arxiv.org/abs/2506.20494 mastoxiv.page/@arXiv_csLG_bot/
- The kernel of graph indices for vector search
Mariano Tepper, Ted Willke
arxiv.org/abs/2506.20584 mastoxiv.page/@arXiv_csLG_bot/
- OptScale: Probabilistic Optimality for Inference-time Scaling
Youkang Wang, Jian Wang, Rubing Chen, Xiao-Yong Wei
arxiv.org/abs/2506.22376 mastoxiv.page/@arXiv_csLG_bot/
- Boosting Revisited: Benchmarking and Advancing LP-Based Ensemble Methods
Fabian Akkerman, Julien Ferry, Christian Artigues, Emmanuel Hebrard, Thibaut Vidal
arxiv.org/abs/2507.18242 mastoxiv.page/@arXiv_csLG_bot/
- MolMark: Safeguarding Molecular Structures through Learnable Atom-Level Watermarking
Runwen Hu, Peilin Chen, Keyan Ding, Shiqi Wang
arxiv.org/abs/2508.17702 mastoxiv.page/@arXiv_csLG_bot/
- Dual-Distilled Heterogeneous Federated Learning with Adaptive Margins for Trainable Global Protot...
Fatema Siddika, Md Anwar Hossen, Wensheng Zhang, Anuj Sharma, Juan Pablo Mu\~noz, Ali Jannesari
arxiv.org/abs/2508.19009 mastoxiv.page/@arXiv_csLG_bot/
- STDiff: A State Transition Diffusion Framework for Time Series Imputation in Industrial Systems
Gary Simethy, Daniel Ortiz-Arroyo, Petar Durdevic
arxiv.org/abs/2508.19011 mastoxiv.page/@arXiv_csLG_bot/
- EEGDM: Learning EEG Representation with Latent Diffusion Model
Shaocong Wang, Tong Liu, Yihan Li, Ming Li, Kairui Wen, Pei Yang, Wenqi Ji, Minjing Yu, Yong-Jin Liu
arxiv.org/abs/2508.20705 mastoxiv.page/@arXiv_csLG_bot/
- Data-Free Continual Learning of Server Models in Model-Heterogeneous Cloud-Device Collaboration
Xiao Zhang, Zengzhe Chen, Yuan Yuan, Yifei Zou, Fuzhen Zhuang, Wenyu Jiao, Yuke Wang, Dongxiao Yu
arxiv.org/abs/2509.25977 mastoxiv.page/@arXiv_csLG_bot/
- Fine-Tuning Masked Diffusion for Provable Self-Correction
Jaeyeon Kim, Seunggeun Kim, Taekyun Lee, David Z. Pan, Hyeji Kim, Sham Kakade, Sitan Chen
arxiv.org/abs/2510.01384 mastoxiv.page/@arXiv_csLG_bot/
- A Generic Machine Learning Framework for Radio Frequency Fingerprinting
Alex Hiles, Bashar I. Ahmad
arxiv.org/abs/2510.09775 mastoxiv.page/@arXiv_csLG_bot/
- A Second-Order SpikingSSM for Wearables

Kartikay Agrawal, Abhijeet Vikram, Vedant Sharma, Vaishnavi Nagabhushana, Ayon Borthakur
arxiv.org/abs/2510.14386 mastoxiv.page/@arXiv_csLG_bot/
- Utility-Diversity Aware Online Batch Selection for LLM Supervised Fine-tuning
Heming Zou, Yixiu Mao, Yun Qu, Qi Wang, Xiangyang Ji
arxiv.org/abs/2510.16882 mastoxiv.page/@arXiv_csLG_bot/
- Seeing Structural Failure Before it Happens: An Image-Based Physics-Informed Neural Network (PINN...
Omer Jauhar Khan, Sudais Khan, Hafeez Anwar, Shahzeb Khan, Shams Ul Arifeen
arxiv.org/abs/2510.23117 mastoxiv.page/@arXiv_csLG_bot/
- Training Deep Physics-Informed Kolmogorov-Arnold Networks
Spyros Rigas, Fotios Anagnostopoulos, Michalis Papachristou, Georgios Alexandridis
arxiv.org/abs/2510.23501 mastoxiv.page/@arXiv_csLG_bot/
- Semi-Supervised Preference Optimization with Limited Feedback
Seonggyun Lee, Sungjun Lim, Seojin Park, Soeun Cheon, Kyungwoo Song
arxiv.org/abs/2511.00040 mastoxiv.page/@arXiv_csLG_bot/
- Towards Causal Market Simulators
Dennis Thumm, Luis Ontaneda Mijares
arxiv.org/abs/2511.04469 mastoxiv.page/@arXiv_csLG_bot/
- Incremental Generation is Necessary and Sufficient for Universality in Flow-Based Modelling
Hossein Rouhvarzi, Anastasis Kratsios
arxiv.org/abs/2511.09902 mastoxiv.page/@arXiv_csLG_bot/
- Optimizing Mixture of Block Attention
Guangxuan Xiao, Junxian Guo, Kasra Mazaheri, Song Han
arxiv.org/abs/2511.11571 mastoxiv.page/@arXiv_csLG_bot/
- Assessing Automated Fact-Checking for Medical LLM Responses with Knowledge Graphs
Shasha Zhou, Mingyu Huang, Jack Cole, Charles Britton, Ming Yin, Jan Wolber, Ke Li
arxiv.org/abs/2511.12817 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot

@penguin42@mastodon.org.uk
2025-11-27 16:34:05

If the weather page says it's 13c out, wth does it not feel that much warmer than when it was in the low single digits?

@arXiv_condmatmtrlsci_bot@mastoxiv.page
2025-11-27 10:05:46

Coupled Structural and Electronic Requirements in Alpha-FASnI3 Imposed by the Sn(II) Lone Pair
Mridhula Venkatanarayanan, Vladislav Slama, Madhubanti Mukherjee, Andrea Vezzosi, Ursula Rothlisberger, Virginia Carnevali
arxiv.org/abs/2511.21254

@arXiv_physicsatomph_bot@mastoxiv.page
2025-11-26 11:58:05

Crosslisted article(s) found for physics.atom-ph. arxiv.org/list/physics.atom-ph
[1/1]:
- Infrared absorption spectroscopy of a single polyatomic molecular ion
Wu, Duka, Isaza-Monsalve, Kautzky, \v{S}varc, Turci, Nardi, Gronowski, Tomza, Furey, Schindler

@metacurity@infosec.exchange
2025-10-24 11:02:18

This is a strange warning published in a Chinese state outlet. It basically warns that all forms of online shopping are subject to foreign espionage, and vendors should engage in data minimization.
globaltimes.cn/page/202510/134

@Sustainable2050@mastodon.energy
2025-10-25 12:31:39

The outgoing Dutch government failed to take the additional measures needed to meet the Climate Law goal of 55% emission reduction by 2030.
Although it's an 'aspirational goal', civil servants are already considering the risk of losing a new court case, and how to fund expensive emergency measures.

Front page of Trouw newspaper. Headline Government already preparing for consequences of new climate court cases
@scott@carfree.city
2025-10-25 05:01:51

I'm currently reading The Long Heat by Wim Carton and Andreas Malm and highly recommend it. Don't be daunted by the 704-page count: it has so many notes at the back that, without the back matter, it's only ~420 pages. It's hard to read emotionally, though. It paints a stark picture of earth's near future and the massive destruction of human and other-than-human life ahead, and the insane brinkmanship of elite consensus.

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 11:50:19

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/3]:
- Optimizing Text Search: A Novel Pattern Matching Algorithm Based on Ukkonen's Approach
Xinyu Guan, Shaohua Zhang
arxiv.org/abs/2512.16927 mastoxiv.page/@arXiv_csDS_bot/
- SpIDER: Spatially Informed Dense Embedding Retrieval for Software Issue Localization
Shravan Chaudhari, Rahul Thomas Jacob, Mononito Goswami, Jiajun Cao, Shihab Rashid, Christian Bock
arxiv.org/abs/2512.16956 mastoxiv.page/@arXiv_csSE_bot/
- MemoryGraft: Persistent Compromise of LLM Agents via Poisoned Experience Retrieval
Saksham Sahai Srivastava, Haoyu He
arxiv.org/abs/2512.16962 mastoxiv.page/@arXiv_csCR_bot/
- Colormap-Enhanced Vision Transformers for MRI-Based Multiclass (4-Class) Alzheimer's Disease Clas...
Faisal Ahmed
arxiv.org/abs/2512.16964 mastoxiv.page/@arXiv_eessIV_bo
- Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows
Wanghan Xu, et al.
arxiv.org/abs/2512.16969 mastoxiv.page/@arXiv_csAI_bot/
- PAACE: A Plan-Aware Automated Agent Context Engineering Framework
Kamer Ali Yuksel
arxiv.org/abs/2512.16970 mastoxiv.page/@arXiv_csAI_bot/
- A Women's Health Benchmark for Large Language Models
Elisabeth Gruber, et al.
arxiv.org/abs/2512.17028 mastoxiv.page/@arXiv_csCL_bot/
- Perturb Your Data: Paraphrase-Guided Training Data Watermarking
Pranav Shetty, Mirazul Haque, Petr Babkin, Zhiqiang Ma, Xiaomo Liu, Manuela Veloso
arxiv.org/abs/2512.17075 mastoxiv.page/@arXiv_csCL_bot/
- Disentangled representations via score-based variational autoencoders
Benjamin S. H. Lyo, Eero P. Simoncelli, Cristina Savin
arxiv.org/abs/2512.17127 mastoxiv.page/@arXiv_statML_bo
- Biosecurity-Aware AI: Agentic Risk Auditing of Soft Prompt Attacks on ESM-Based Variant Predictors
Huixin Zhan
arxiv.org/abs/2512.17146 mastoxiv.page/@arXiv_csCR_bot/
- Application of machine learning to predict food processing level using Open Food Facts
Arora, Chauhan, Rana, Aditya, Bhagat, Kumar, Kumar, Semar, Singh, Bagler
arxiv.org/abs/2512.17169 mastoxiv.page/@arXiv_qbioBM_bo
- Systemic Risk Radar: A Multi-Layer Graph Framework for Early Market Crash Warning
Sandeep Neela
arxiv.org/abs/2512.17185 mastoxiv.page/@arXiv_qfinRM_bo
- Do Foundational Audio Encoders Understand Music Structure?
Keisuke Toyama, Zhi Zhong, Akira Takahashi, Shusuke Takahashi, Yuki Mitsufuji
arxiv.org/abs/2512.17209 mastoxiv.page/@arXiv_csSD_bot/
- CheXPO-v2: Preference Optimization for Chest X-ray VLMs with Knowledge Graph Consistency
Xiao Liang, Yuxuan An, Di Wang, Jiawei Hu, Zhicheng Jiao, Bin Jing, Quan Wang
arxiv.org/abs/2512.17213 mastoxiv.page/@arXiv_csCV_bot/
- Machine Learning Assisted Parameter Tuning on Wavelet Transform Amorphous Radial Distribution Fun...
Deriyan Senjaya, Stephen Ekaputra Limantoro
arxiv.org/abs/2512.17245 mastoxiv.page/@arXiv_condmatmt
- AlignDP: Hybrid Differential Privacy with Rarity-Aware Protection for LLMs
Madhava Gaikwad
arxiv.org/abs/2512.17251 mastoxiv.page/@arXiv_csCR_bot/
- Practical Framework for Privacy-Preserving and Byzantine-robust Federated Learning
Baolei Zhang, Minghong Fang, Zhuqing Liu, Biao Yi, Peizhao Zhou, Yuan Wang, Tong Li, Zheli Liu
arxiv.org/abs/2512.17254 mastoxiv.page/@arXiv_csCR_bot/
- Verifiability-First Agents: Provable Observability and Lightweight Audit Agents for Controlling A...
Abhivansh Gupta
arxiv.org/abs/2512.17259 mastoxiv.page/@arXiv_csMA_bot/
- Warmer for Less: A Cost-Efficient Strategy for Cold-Start Recommendations at Pinterest
Saeed Ebrahimi, Weijie Jiang, Jaewon Yang, Olafur Gudmundsson, Yucheng Tu, Huizhong Duan
arxiv.org/abs/2512.17277 mastoxiv.page/@arXiv_csIR_bot/
- LibriVAD: A Scalable Open Dataset with Deep Learning Benchmarks for Voice Activity Detection
Ioannis Stylianou, Achintya kr. Sarkar, Nauman Dawalatabad, James Glass, Zheng-Hua Tan
arxiv.org/abs/2512.17281 mastoxiv.page/@arXiv_csSD_bot/
- Penalized Fair Regression for Multiple Groups in Chronic Kidney Disease
Carter H. Nakamoto, Lucia Lushi Chen, Agata Foryciarz, Sherri Rose
arxiv.org/abs/2512.17340 mastoxiv.page/@arXiv_statME_bo
toXiv_bot_toot

@sauer_lauwarm@mastodon.social
2025-10-25 14:05:56

booksinsardinia.it/?ver=en

@wandklex@mastodon.art
2025-12-23 23:02:05

[sound of an advent-calendar door opening]
74 listings, hundreds of pieces hand-painted over months of preparation: the #klexadventskalender 2025 just opened at
wandklex.art/page/klexadventsk for the last time. Today with the annual exception to the rule.☺
Thanks for being part of it, and I wish you all happy holidays!

@netzschleuder@social.skewed.de
2025-12-26 22:00:03

webkb: WebKB graphs (1998)
Web graphs crawled from four Computer Science departments in 1998, with each page manually classified into one of 7 categories: course, department, faculty, project, staff, student, or other. All graphs included in a single .zip; also included are 'co-citation' graphs, which links i and j if they both point to some k. Edge weights count the number of links from i to j.
This network has 286 nodes and 1002 edges.
Tags: Informational, Web gra…

webkb: WebKB graphs (1998). 286 nodes, 1002 edges. https://networks.skewed.de/net/webkb#webkb_texas_link1
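As a rough illustration of how a 'co-citation' graph like the one described above can be derived from the directed link graph, two pages i and j are co-citing when they point to at least one common page k, which is the off-diagonal part of A Aᵀ. The dataset's own co-citation weighting may differ; this is just the textbook construction.

```python
# Sketch: build a co-citation graph from a directed adjacency matrix A,
# where A[i, k] = 1 if page i links to page k. (A @ A.T)[i, j] counts how
# many link targets pages i and j share.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

C = A @ A.T                      # shared-target counts
np.fill_diagonal(C, 0)           # drop self-co-citation
print(C)                         # C[0, 1] == 1: pages 0 and 1 both link to page 2
```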
@grahamperrin@bsd.cafe
2025-10-25 07:17:05

@… exactly; a talking point … thank you for being part of a gift.
What is FreeBSD? The Foundation's page no longer makes a distinction between FreeBSD and Linux. Before and after:
― <

@arXiv_physicsatomph_bot@mastoxiv.page
2025-12-26 07:58:05

[2025-12-26 Fri (UTC), no new articles found for physics.atom-ph Atomic Physics]
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:55

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[4/5]:
- Sample, Don't Search: Rethinking Test-Time Alignment for Language Models
Gon\c{c}alo Faria, Noah A. Smith
arxiv.org/abs/2504.03790 mastoxiv.page/@arXiv_csCL_bot/
- A Survey on Archetypal Analysis
Aleix Alcacer, Irene Epifanio, Sebastian Mair, Morten M{\o}rup
arxiv.org/abs/2504.12392 mastoxiv.page/@arXiv_statME_bo
- The Stochastic Occupation Kernel (SOCK) Method for Learning Stochastic Differential Equations
Michael L. Wells, Kamel Lahouel, Bruno Jedynak
arxiv.org/abs/2505.11622 mastoxiv.page/@arXiv_statML_bo
- BOLT: Block-Orthonormal Lanczos for Trace estimation of matrix functions
Kingsley Yeon, Promit Ghosal, Mihai Anitescu
arxiv.org/abs/2505.12289 mastoxiv.page/@arXiv_mathNA_bo
- Clustering and Pruning in Causal Data Fusion
Otto Tabell, Santtu Tikka, Juha Karvanen
arxiv.org/abs/2505.15215 mastoxiv.page/@arXiv_statML_bo
- On the performance of multi-fidelity and reduced-dimensional neural emulators for inference of ph...
Chloe H. Choi, Andrea Zanoni, Daniele E. Schiavazzi, Alison L. Marsden
arxiv.org/abs/2506.11683 mastoxiv.page/@arXiv_statML_bo
- Beyond Force Metrics: Pre-Training MLFFs for Stable MD Simulations
Maheshwari, Tang, Ock, Kolluru, Farimani, Kitchin
arxiv.org/abs/2506.14850 mastoxiv.page/@arXiv_physicsch
- Quantifying Uncertainty in the Presence of Distribution Shifts
Yuli Slavutsky, David M. Blei
arxiv.org/abs/2506.18283 mastoxiv.page/@arXiv_statML_bo
- ZKPROV: A Zero-Knowledge Approach to Dataset Provenance for Large Language Models
Mina Namazi, Alexander Nemecek, Erman Ayday
arxiv.org/abs/2506.20915 mastoxiv.page/@arXiv_csCR_bot/
- SpecCLIP: Aligning and Translating Spectroscopic Measurements for Stars
Zhao, Huang, Xue, Kong, Liu, Tang, Beers, Ting, Luo
arxiv.org/abs/2507.01939 mastoxiv.page/@arXiv_astrophIM
- Towards Facilitated Fairness Assessment of AI-based Skin Lesion Classifiers Through GenAI-based I...
Ko Watanabe, Stanislav Frolov, Aya Hassan, David Dembinsky, Adriano Lucieri, Andreas Dengel
arxiv.org/abs/2507.17860 mastoxiv.page/@arXiv_csCV_bot/
- PASS: Probabilistic Agentic Supernet Sampling for Interpretable and Adaptive Chest X-Ray Reasoning
Yushi Feng, Junye Du, Yingying Hong, Qifan Wang, Lequan Yu
arxiv.org/abs/2508.10501 mastoxiv.page/@arXiv_csAI_bot/
- Unified Acoustic Representations for Screening Neurological and Respiratory Pathologies from Voice
Ran Piao, Yuan Lu, Hareld Kemps, Tong Xia, Aaqib Saeed
arxiv.org/abs/2508.20717 mastoxiv.page/@arXiv_csSD_bot/
- Machine Learning-Driven Predictive Resource Management in Complex Science Workflows
Tasnuva Chowdhury, et al.
arxiv.org/abs/2509.11512 mastoxiv.page/@arXiv_csDC_bot/
- MatchFixAgent: Language-Agnostic Autonomous Repository-Level Code Translation Validation and Repair
Ali Reza Ibrahimzada, Brandon Paulsen, Reyhaneh Jabbarvand, Joey Dodds, Daniel Kroening
arxiv.org/abs/2509.16187 mastoxiv.page/@arXiv_csSE_bot/
- Automated Machine Learning Pipeline: Large Language Models-Assisted Automated Dataset Generation ...
Adam Lahouari, Jutta Rogal, Mark E. Tuckerman
arxiv.org/abs/2509.21647 mastoxiv.page/@arXiv_condmatmt
- Quantifying the Impact of Structured Output Format on Large Language Models through Causal Inference
Han Yuan, Yue Zhao, Li Zhang, Wuqiong Luo, Zheng Ma
arxiv.org/abs/2509.21791 mastoxiv.page/@arXiv_csCL_bot/
- The Generation Phases of Flow Matching: a Denoising Perspective
Anne Gagneux, S\'egol\`ene Martin, R\'emi Gribonval, Mathurin Massias
arxiv.org/abs/2510.24830 mastoxiv.page/@arXiv_csCV_bot/
- Data-driven uncertainty-aware seakeeping prediction of the Delft 372 catamaran using ensemble Han...
Giorgio Palma, Andrea Serani, Matteo Diez
arxiv.org/abs/2511.04461 mastoxiv.page/@arXiv_eessSY_bo
- Generalized infinite dimensional Alpha-Procrustes based geometries
Salvish Goomanee, Andi Han, Pratik Jawanpuria, Bamdev Mishra
arxiv.org/abs/2511.09801 mastoxiv.page/@arXiv_statML_bo
toXiv_bot_toot

@TobiasFrech@ijug.social
2025-11-24 09:58:05

In software development, progress sometimes means getting a different error message.

502 error web page from Cloudflare showing nvd.nist.gov is not reachable
@arXiv_csLG_bot@mastoxiv.page
2025-12-22 11:50:31

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/3]:
- Sharp Structure-Agnostic Lower Bounds for General Functional Estimation
Jikai Jin, Vasilis Syrgkanis
arxiv.org/abs/2512.17341 mastoxiv.page/@arXiv_statML_bo
- Timely Information Updating for Mobile Devices Without and With ML Advice
Yu-Pin Hsu, Yi-Hsuan Tseng
arxiv.org/abs/2512.17381 mastoxiv.page/@arXiv_csNI_bot/
- SWE-Bench : A Framework for the Scalable Generation of Software Engineering Benchmarks from Open...
Wang, Ramalho, Celestino, Pham, Liu, Sinha, Portillo, Osunwa, Maduekwe
arxiv.org/abs/2512.17419 mastoxiv.page/@arXiv_csSE_bot/
- Perfect reconstruction of sparse signals using nonconvexity control and one-step RSB message passing
Xiaosi Gu, Ayaka Sakata, Tomoyuki Obuchi
arxiv.org/abs/2512.17426 mastoxiv.page/@arXiv_statML_bo
- MULTIAQUA: A multimodal maritime dataset and robust training strategies for multimodal semantic s...
Jon Muhovi\v{c}, Janez Per\v{s}
arxiv.org/abs/2512.17450 mastoxiv.page/@arXiv_csCV_bot/
- When Data Quality Issues Collide: A Large-Scale Empirical Study of Co-Occurring Data Quality Issu...
Emmanuel Charleson Dapaah, Jens Grabowski
arxiv.org/abs/2512.17460 mastoxiv.page/@arXiv_csSE_bot/
- Behavioural Effects of Agentic Messaging: A Case Study on a Financial Service Application
Olivier Jeunen, Schaun Wheeler
arxiv.org/abs/2512.17462 mastoxiv.page/@arXiv_csIR_bot/
- Linear Attention for Joint Power Optimization and User-Centric Clustering in Cell-Free Networks
Irched Chafaa, Giacomo Bacci, Luca Sanguinetti
arxiv.org/abs/2512.17466 mastoxiv.page/@arXiv_eessSY_bo
- Translating the Rashomon Effect to Sequential Decision-Making Tasks
Dennis Gross, J{\o}rn Eirik Betten, Helge Spieker
arxiv.org/abs/2512.17470 mastoxiv.page/@arXiv_csAI_bot/
- Alternating Direction Method of Multipliers for Nonlinear Matrix Decompositions
Atharva Awari, Nicolas Gillis, Arnaud Vandaele
arxiv.org/abs/2512.17473 mastoxiv.page/@arXiv_eessSP_bo
- TwinSegNet: A Digital Twin-Enabled Federated Learning Framework for Brain Tumor Analysis
Almustapha A. Wakili, Adamu Hussaini, Abubakar A. Musa, Woosub Jung, Wei Yu
arxiv.org/abs/2512.17488 mastoxiv.page/@arXiv_csCV_bot/
- Resource-efficient medical image classification for edge devices
Mahsa Lavaei, Zahra Abadi, Salar Beigzad, Alireza Maleki
arxiv.org/abs/2512.17515 mastoxiv.page/@arXiv_eessIV_bo
- PathBench-MIL: A Comprehensive AutoML and Benchmarking Framework for Multiple Instance Learning i...
Brussee, Valkema, Weijer, Doeleman, Schrader, Kers
arxiv.org/abs/2512.17517 mastoxiv.page/@arXiv_csCV_bot/
- HydroGym: A Reinforcement Learning Platform for Fluid Dynamics
Christian Lagemann, et al.
arxiv.org/abs/2512.17534 mastoxiv.page/@arXiv_physicsfl
- When De-noising Hurts: A Systematic Study of Speech Enhancement Effects on Modern Medical ASR Sys...
Chondhekar, Murukuri, Vasani, Goyal, Badami, Rana, SN, Pandia, Katiyar, Jagadeesh, Gulati
arxiv.org/abs/2512.17562 mastoxiv.page/@arXiv_csSD_bot/
- Enabling Disaggregated Multi-Stage MLLM Inference via GPU-Internal Scheduling and Resource Sharing
Lingxiao Zhao, Haoran Zhou, Yuezhi Che, Dazhao Cheng
arxiv.org/abs/2512.17574 mastoxiv.page/@arXiv_csDC_bot/
- SkinGenBench: Generative Model and Preprocessing Effects for Synthetic Dermoscopic Augmentation i...
N. A. Adarsh Pritam, Jeba Shiney O, Sanyam Jain
arxiv.org/abs/2512.17585 mastoxiv.page/@arXiv_eessIV_bo
- MAD-OOD: A Deep Learning Cluster-Driven Framework for an Out-of-Distribution Malware Detection an...
Tosin Ige, Christopher Kiekintveld, Aritran Piplai, Asif Rahman, Olukunle Kolade, Sasidhar Kunapuli
arxiv.org/abs/2512.17594 mastoxiv.page/@arXiv_csCR_bot/
- Confidence-Credibility Aware Weighted Ensembles of Small LLMs Outperform Large LLMs in Emotion De...
Menna Elgabry, Ali Hamdi
arxiv.org/abs/2512.17630 mastoxiv.page/@arXiv_csCL_bot/
- Generative Multi-Objective Bayesian Optimization with Scalable Batch Evaluations for Sample-Effic...
Madhav R. Muthyala, Farshud Sorourifar, Tianhong Tan, You Peng, Joel A. Paulson
arxiv.org/abs/2512.17659 mastoxiv.page/@arXiv_statML_bo
toXiv_bot_toot

@barijaona@mastodon.mg
2025-10-20 01:31:48

A useful little reminder (I knew this existed, but 48 hours ago I found myself wondering how to actually do it).
Bookmarklet to install.
Linking to a text fragment in a web page – Le carnet de Joachim fourbi.eu/billet/2024-12-03-li

@aardrian@toot.cafe
2025-11-18 21:35:00

My favorite thing about infinite scroll is when your connection hiccups and nothing else will load so you have to reload the entire page and scroll a dozen times just to get to where you were so you can start the fuck over and hope your connection doesn’t hiccup again.

@kubikpixel@chaos.social
2025-12-22 06:05:36

End of the Internet
Congratulations! — You have finally reached the end of the internet! There's nothing more to see, no more links to visit. You've done it all. This is the very last page on the very last server at the very far end of the internet […]
🌐 hmpg.net

Animated GIF image: Download from the Internet window in the old Windows 98 that fails.
@NFL@darktundra.xyz
2025-11-21 13:05:22

Our guide to every Week 12 NFL game: Matchup previews, predictions, picks and nuggets espn.com/nfl/story/_/page/view

@midtsveen@social.linux.pizza
2025-12-14 22:06:28

I’ve moved my “Entry to Left Wing Anarchist Reading” to my personal website.
It’s styled to look a lot like Google Docs; that was intentional. Regardless, it’s hosted in a more private, self-controlled space.
⭐ Read more here: midtsveen.codeberg.page/resour ⭐<…

@whitequark@mastodon.social
2025-11-16 02:25:53

> Keep in mind that certain details differ between GitHub and Forgejo.
(page does not say a single word about what actually differs)

@Dragofix@veganism.social
2025-11-17 21:11:02

Say No to China’s Cruel New “Monkey Business” #AnimalRights

@LillyHerself@Mastodon.social
2025-12-21 05:18:28

Who do they think they're fooling?

Epstein (Redacted) page 22987
No one ——
— has —
ever —
— seen
anyone—
— more ---
innocent —
— than —
— Trump

by goris
@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:45

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[3/5]:
- Look-Ahead Reasoning on Learning Platforms
Haiqing Zhu, Tijana Zrnic, Celestine Mendler-D\"unner
arxiv.org/abs/2511.14745 mastoxiv.page/@arXiv_csLG_bot/
- Deep Gaussian Process Proximal Policy Optimization
Matthijs van der Lende, Juan Cardenas-Cartagena
arxiv.org/abs/2511.18214 mastoxiv.page/@arXiv_csLG_bot/
- Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory
Akira Tamamori
arxiv.org/abs/2511.23083 mastoxiv.page/@arXiv_csLG_bot/
- xGR: Efficient Generative Recommendation Serving at Scale
Sun, Liu, Zhang, Wu, Yang, Liang, Li, Ma, Liang, Ren, Zhang, Liu, Zhang, Qian, Yang
arxiv.org/abs/2512.11529 mastoxiv.page/@arXiv_csLG_bot/
- Credit Risk Estimation with Non-Financial Features: Evidence from a Synthetic Istanbul Dataset
Atalay Denknalbant, Emre Sezdi, Zeki Furkan Kutlu, Polat Goktas
arxiv.org/abs/2512.12783 mastoxiv.page/@arXiv_csLG_bot/
- The Semantic Illusion: Certified Limits of Embedding-Based Hallucination Detection in RAG Systems
Debu Sinha
arxiv.org/abs/2512.15068 mastoxiv.page/@arXiv_csLG_bot/
- Towards Reproducibility in Predictive Process Mining: SPICE -- A Deep Learning Library
Stritzel, H\"uhnerbein, Rauch, Zarate, Fleischmann, Buck, Lischka, Frey
arxiv.org/abs/2512.16715 mastoxiv.page/@arXiv_csLG_bot/
- Differentially private Bayesian tests
Abhisek Chakraborty, Saptati Datta
arxiv.org/abs/2401.15502 mastoxiv.page/@arXiv_statML_bo
- SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning
Paul Mangold, Sergey Samsonov, Safwan Labbi, Ilya Levin, Reda Alami, Alexey Naumov, Eric Moulines
arxiv.org/abs/2402.04114
- Adjusting Model Size in Continual Gaussian Processes: How Big is Big Enough?
Guiomar Pescador-Barrios, Sarah Filippi, Mark van der Wilk
arxiv.org/abs/2408.07588 mastoxiv.page/@arXiv_statML_bo
- Non-Perturbative Trivializing Flows for Lattice Gauge Theories
Mathis Gerdes, Pim de Haan, Roberto Bondesan, Miranda C. N. Cheng
arxiv.org/abs/2410.13161 mastoxiv.page/@arXiv_heplat_bo
- Dynamic PET Image Prediction Using a Network Combining Reversible and Irreversible Modules
Sun, Zhang, Xia, Sun, Chen, Yang, Liu, Zhu, Liu
arxiv.org/abs/2410.22674 mastoxiv.page/@arXiv_eessIV_bo
- Targeted Learning for Variable Importance
Xiaohan Wang, Yunzhe Zhou, Giles Hooker
arxiv.org/abs/2411.02221 mastoxiv.page/@arXiv_statML_bo
- Refined Analysis of Federated Averaging and Federated Richardson-Romberg
Paul Mangold, Alain Durmus, Aymeric Dieuleveut, Sergey Samsonov, Eric Moulines
arxiv.org/abs/2412.01389 mastoxiv.page/@arXiv_statML_bo
- Embedding-Driven Data Distillation for 360-Degree IQA With Residual-Aware Refinement
Abderrezzaq Sendjasni, Seif-Eddine Benkabou, Mohamed-Chaker Larabi
arxiv.org/abs/2412.12667 mastoxiv.page/@arXiv_csCV_bot/
- 3D Cell Oversegmentation Correction via Geo-Wasserstein Divergence
Peter Chen, Bryan Chang, Olivia A Creasey, Julie Beth Sneddon, Zev J Gartner, Yining Liu
arxiv.org/abs/2502.01890 mastoxiv.page/@arXiv_csCV_bot/
- DHP: Discrete Hierarchical Planning for Hierarchical Reinforcement Learning Agents
Shashank Sharma, Janina Hoffmann, Vinay Namboodiri
arxiv.org/abs/2502.01956 mastoxiv.page/@arXiv_csRO_bot/
- Foundation for unbiased cross-validation of spatio-temporal models for species distribution modeling
Diana Koldasbayeva, Alexey Zaytsev
arxiv.org/abs/2502.03480
- GraphCompNet: A Position-Aware Model for Predicting and Compensating Shape Deviations in 3D Printing
Juheon Lee, Lei (Rachel) Chen, Juan Carlos Catana, Hui Wang, Jun Zeng
arxiv.org/abs/2502.09652 mastoxiv.page/@arXiv_csCV_bot/
- LookAhead Tuning: Safer Language Models via Partial Answer Previews
Liu, Wang, Luo, Yuan, Sun, Liang, Zhang, Zhou, Hooi, Deng
arxiv.org/abs/2503.19041 mastoxiv.page/@arXiv_csCL_bot/
- Constraint-based causal discovery with tiered background knowledge and latent variables in single...
Christine W. Bang, Vanessa Didelez
arxiv.org/abs/2503.21526 mastoxiv.page/@arXiv_statML_bo
toXiv_bot_toot

@sillon_fictionnel@paperbay.org
2025-11-23 17:44:08

"À droite, une poignée d’hommes bouscule les philosophes, les contraignant vers le brasier où, Š gauche, les livres déjŠ brûlent — comme si l’incendie devait achever la pensée." de Marco Dente (1515–27)
Le nouveau marque page du Sillon vient de trouver sa gravure...
#art #gravure

"À droite, une poignée d’hommes bouscule les philosophes, les contraignant vers le brasier où, à gauche, les livres déjà brûlent — comme si l’incendie devait achever la pensée."
@netzschleuder@social.skewed.de
2025-11-22 22:00:05

edit_wikibooks: Wikipedia book edits (2010)
Two bipartite user-page networks extracted from Wikipedia, about books. A user connects to a page if that user edited that page. Edits (edges) are timestamped. Edge weights represent counts of the number of edits.
This network has 29946 nodes and 213603 edges.
Tags: Informational, Web graph, Multigraph, Timestamps

edit_wikibooks: Wikipedia book edits (2010). 29946 nodes, 213603 edges. https://networks.skewed.de/net/edit_wikibooks#nl
@cosmos4u@scicomm.xyz
2025-11-11 22:11:00

An international group is organizing an observing campaign through the Citizen Science Working Group of the #LUMIO mission. LUMIO is an ESA space mission to observe lunar #impact flashes (LIFs) from space, on the lunar far side, during the #Geminid meteoroid stream (13-15 Dec 2025). During the maximum of the stream the number of visible impact flashes is higher than during non-shower times, so there is a good chance of detecting at least some of them.
Observations can be made using moderately sized telescopes and a video camera. The website lif.mi.imati.cnr.it/home_page now has a recording of a thorough talk about the project, its slides at lif.mi.imati.cnr.it/open_item_, and slides about the preferred analysis software ALFI at lif.mi.imati.cnr.it/open_item_. If you want to join the LGC, please sign up by 21 November.

@ruari@velocipederider.com
2025-10-20 07:27:25

This reads like a warning. So is cURL like nuts? Are people allergic to cURL!?
@… my watch might contain your software, maybe. They aren't 100% sure though. 🤷 🤣
[Note to reader, it almost certainly does contain cURL. I would be shocked if it did not!]
#WristCheck

A Garmin Venu 3 watch on wrist showing the about page information, which includes the line, "This product may contain Curl, distributed under the MIT/X License."
@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:24

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/5]:
- Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization a...
Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert Nowak, Yixuan Li
arxiv.org/abs/2306.09158
- Sparse, Efficient and Explainable Data Attribution with DualXDA
Galip \"Umit Yolcu, Moritz Weckbecker, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
arxiv.org/abs/2402.12118 mastoxiv.page/@arXiv_csLG_bot/
- HGQ: High Granularity Quantization for Real-time Neural Networks on FPGAs
Sun, Que, {\AA}rrestad, Loncar, Ngadiuba, Luk, Spiropulu
arxiv.org/abs/2405.00645 mastoxiv.page/@arXiv_csLG_bot/
- On the Identification of Temporally Causal Representation with Instantaneous Dependence
Li, Shen, Zheng, Cai, Song, Gong, Chen, Zhang
arxiv.org/abs/2405.15325 mastoxiv.page/@arXiv_csLG_bot/
- Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications
Yang Li, Daniel Agyei Asante, Changsheng Zhao, Ernie Chang, Yangyang Shi, Vikas Chandra
arxiv.org/abs/2405.15877 mastoxiv.page/@arXiv_csLG_bot/
- Privacy Bias in Language Models: A Contextual Integrity-based Auditing Metric
Yan Shvartzshnaider, Vasisht Duddu
arxiv.org/abs/2409.03735 mastoxiv.page/@arXiv_csLG_bot/
- Low-Rank Filtering and Smoothing for Sequential Deep Learning
Joanna Sliwa, Frank Schneider, Nathanael Bosch, Agustinus Kristiadi, Philipp Hennig
arxiv.org/abs/2410.06800 mastoxiv.page/@arXiv_csLG_bot/
- Hierarchical Multimodal LLMs with Semantic Space Alignment for Enhanced Time Series Classification
Xiaoyu Tao, Tingyue Pan, Mingyue Cheng, Yucong Luo, Qi Liu, Enhong Chen
arxiv.org/abs/2410.18686 mastoxiv.page/@arXiv_csLG_bot/
- Fairness via Independence: A (Conditional) Distance Covariance Framework
Ruifan Huang, Haixia Liu
arxiv.org/abs/2412.00720 mastoxiv.page/@arXiv_csLG_bot/
- Data for Mathematical Copilots: Better Ways of Presenting Proofs for Machine Learning
Simon Frieder, et al.
arxiv.org/abs/2412.15184 mastoxiv.page/@arXiv_csLG_bot/
- Pairwise Elimination with Instance-Dependent Guarantees for Bandits with Cost Subsidy
Ishank Juneja, Carlee Joe-Wong, Osman Ya\u{g}an
arxiv.org/abs/2501.10290 mastoxiv.page/@arXiv_csLG_bot/
- Towards Human-Guided, Data-Centric LLM Co-Pilots
Evgeny Saveliev, Jiashuo Liu, Nabeel Seedat, Anders Boyd, Mihaela van der Schaar
arxiv.org/abs/2501.10321 mastoxiv.page/@arXiv_csLG_bot/
- Regularized Langevin Dynamics for Combinatorial Optimization
Shengyu Feng, Yiming Yang
arxiv.org/abs/2502.00277
- Generating Samples to Probe Trained Models
Eren Mehmet K{\i}ral, Nur\c{s}en Ayd{\i}n, \c{S}. \.Ilker Birbil
arxiv.org/abs/2502.06658 mastoxiv.page/@arXiv_csLG_bot/
- On Agnostic PAC Learning in the Small Error Regime
Julian Asilis, Mikael M{\o}ller H{\o}gsgaard, Grigoris Velegkas
arxiv.org/abs/2502.09496 mastoxiv.page/@arXiv_csLG_bot/
- Preconditioned Inexact Stochastic ADMM for Deep Model
Shenglong Zhou, Ouya Wang, Ziyan Luo, Yongxu Zhu, Geoffrey Ye Li
arxiv.org/abs/2502.10784 mastoxiv.page/@arXiv_csLG_bot/
- On the Effect of Sampling Diversity in Scaling LLM Inference
Wang, Liu, Chen, Light, Liu, Chen, Zhang, Cheng
arxiv.org/abs/2502.11027 mastoxiv.page/@arXiv_csLG_bot/
- How to use score-based diffusion in earth system science: A satellite nowcasting example
Randy J. Chase, Katherine Haynes, Lander Ver Hoef, Imme Ebert-Uphoff
arxiv.org/abs/2505.10432 mastoxiv.page/@arXiv_csLG_bot/
- PEAR: Equal Area Weather Forecasting on the Sphere
Hampus Linander, Christoffer Petersson, Daniel Persson, Jan E. Gerken
arxiv.org/abs/2505.17720 mastoxiv.page/@arXiv_csLG_bot/
- Train Sparse Autoencoders Efficiently by Utilizing Features Correlation
Vadim Kurochkin, Yaroslav Aksenov, Daniil Laptev, Daniil Gavrilov, Nikita Balagansky
arxiv.org/abs/2505.22255 mastoxiv.page/@arXiv_csLG_bot/
- A Certified Unlearning Approach without Access to Source Data
Umit Yigit Basaran, Sk Miraj Ahmed, Amit Roy-Chowdhury, Basak Guler
arxiv.org/abs/2506.06486 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot

@mia@hcommons.social
2025-12-05 15:17:00

Lightning talks include IIIF's AI/ML working group presented by the excellent Martin Kalfatovic conftool.org/fantastic-futures

@arXiv_csCV_bot@mastoxiv.page
2025-10-14 22:05:05

Replaced article(s) found for cs.CV. arxiv.org/list/cs.CV/new
[7/8]:
- MultiCOIN: Multi-Modal COntrollable Video INbetweening
Tanveer, Zhou, Niklaus, Amiri, Zhang, Singh, Zhao

Israeli politicians cast doubt on government’s commitment to ceasefire
Israel reportedly halts supply of aid into Gaza as it launches ‘massive wave of attacks’ – Middle East crisis live
Following Israeli airstrikes on Gaza earlier today, more senior ministers have made remarks casting doubt on the government’s commitment to the ceasefire deal
Amichai Chikli, Israel’s diaspora affairs minister and a vocal hardliner, said:
“As long as Hamas exists, there will be war.”

@memeorandum@universeodon.com
2025-10-15 00:05:45

Facebook removes Chicago-area page dedicated to ICE sightings after Justice Department intervenes (Lauryn Azu/Chicago Tribune)
chicagotribune.com/2025/10/14/
memeorandum.com/251014/p147#a2

@thomastraynor@social.linux.pizza
2025-10-22 12:32:17

More 'news' sites muted. Their popups let me either accept cookies or go to a screen to turn off what I don't want; the only problem is they want me to manually turn off hundreds of 'partners'. They don't present the third option: reject ALL cookies.
Probably a strange concept for them. Just present the page and display generic ads. No tracking cookies, no invasive scripts doing their damnedest to identify me.

@arXiv_mathph_bot@mastoxiv.page
2025-10-14 19:05:05

Replaced article(s) found for math-ph. arxiv.org/list/math-ph/new
[1/2]:
- Asymptotic expansion of the hard-to-soft edge transition
Luming Yao, Lun Zhang

@axbom@axbom.me
2025-10-18 05:47:50
@… @… I have this page for embedding YouTube in a privacy-enhanced way. Don't know if that works for you but just wanted to let you know.

I'm also encouraging people to use somethin…
@rasterweb@mastodon.social
2025-12-18 16:57:54

Is there a web hosting provider today that does not promote AI shit on their home page?
You know, for those of us who just want to host a fucking web site and do not need or want any AI?

@seeingwithsound@mas.to
2025-12-13 13:44:21

(PDF) Multi-algorithmic software for visual-to-auditory sensory substitution (VASS) #NWP2025, FUMN…

Developed multi-algorithmic software subsystem for VASS
@arXiv_csLO_bot@mastoxiv.page
2025-10-13 11:08:05

Crosslisted article(s) found for cs.LO. arxiv.org/list/cs.LO/new
[1/1]:
- An MDL-Style Cost Functional KC, Distribution-Preserving Reductions ($A2^d$), and an $AC^0$ log L...
Marko Lela

@newsie@darktundra.xyz
2025-12-23 18:38:21

SEC sues crypto firms for defrauding investors out of $14 million therecord.media/sec-sues-crypt

@arXiv_csCR_bot@mastoxiv.page
2025-10-14 12:05:38

DITTO: A Spoofing Attack Framework on Watermarked LLMs via Knowledge Distillation
Hyeseon Ahn, Shinwoo Park, Yo-Sub Han
arxiv.org/abs/2510.10987

@YaleDivinitySchool@mstdn.social
2025-12-18 20:34:48

FASPE and the Center for Public Theology/Public Policy at YDS recently hosted a day-long clergy symposium on Public Theology in a Time of Authoritarianism. Videos of the main sessions now available, including this recording of Session 4! youtube.com/watch?v=4fURT5QnOt4

A video screen with title and names, repeated in the video page.
@wandklex@mastodon.art
2025-12-21 08:00:22

[sound of an advent-calendar door opening]
Day 21 of the #klexadventskalender is online at wandklex.art/page/klexadventsk! (now also with the correct link 😬)
There are no new shippable pieces left: orders are still possible, but I won't be shipping anything before Christmas.
But I do have a gift! 🎁🧑‍🎄

@netzschleuder@social.skewed.de
2025-12-23 18:00:04

edit_wikiquote: Wikiquote edits (2010)
A bipartite user-page network extracted from Wikiquotes. A user connects to a page if that user edited that page. Edits (edges) are timestamped. Edge weights represent counts of the number of edits.
This network has 1438 nodes and 3450 edges.
Tags: Informational, Web graph, Multigraph, Timestamps

edit_wikiquote: Wikiquote edits (2010). 1438 nodes, 3450 edges. https://networks.skewed.de/net/edit_wikiquote#af
@arXiv_hepth_bot@mastoxiv.page
2025-10-14 19:34:05

Replaced article(s) found for hep-th. arxiv.org/list/hep-th/new
[2/3]:
- Limits of Symmetry in Schwarzschild: CKVs and BRST Triviality in the Kerr-Schild Double Copy
Brandon Holton

@arXiv_csAI_bot@mastoxiv.page
2025-10-14 22:05:36

Replaced article(s) found for cs.AI. arxiv.org/list/cs.AI/new
[14/14]:
- Local MAP Sampling for Diffusion Models
Shaorong Zhang, Rob Brekelmans, Greg Ver Steeg

@NFL@darktundra.xyz
2025-10-17 20:44:08

Cincy hopes benching gets Taylor-Britt 'to respond' espn.com/nfl/story/_/id/466292

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 11:50:43

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[3/3]:
- Fraud detection in credit card transactions using Quantum-Assisted Restricted Boltzmann Machines
Jo\~ao Marcos Cavalcanti de Albuquerque Neto, Gustavo Castro do Amaral, Guilherme Penello Tempor\~ao
arxiv.org/abs/2512.17660 mastoxiv.page/@arXiv_quantph_b
- Vidarc: Embodied Video Diffusion Model for Closed-loop Control
Feng, Xiang, Mao, Tan, Zhang, Huang, Zheng, Liu, Su, Zhu
arxiv.org/abs/2512.17661 mastoxiv.page/@arXiv_csRO_bot/
- Imputation Uncertainty in Interpretable Machine Learning Methods
Pegah Golchian, Marvin N. Wright
arxiv.org/abs/2512.17689 mastoxiv.page/@arXiv_statML_bo
- Revisiting the Broken Symmetry Phase of Solid Hydrogen: A Neural Network Variational Monte Carlo ...
Shengdu Chai, Chen Lin, Xinyang Dong, Yuqiang Li, Wanli Ouyang, Lei Wang, X. C. Xie
arxiv.org/abs/2512.17703 mastoxiv.page/@arXiv_condmatst
- Breast Cancer Neoadjuvant Chemotherapy Treatment Response Prediction Using Aligned Longitudinal M...
Rahul Ravi, Ruizhe Li, Tarek Abdelfatah, Stephen Chan, Xin Chen
arxiv.org/abs/2512.17759 mastoxiv.page/@arXiv_eessIV_bo
- MedNeXt-v2: Scaling 3D ConvNeXts for Large-Scale Supervised Representation Learning in Medical Im...
Roy, Kirchhoff, Ulrich, Rokuss, Wald, Isensee, Maier-Hein
arxiv.org/abs/2512.17774 mastoxiv.page/@arXiv_eessIV_bo
- Domain-Aware Quantum Circuit for QML
Gurinder Singh, Thaddeus Pellegrini, Kenneth M. Merz, Jr
arxiv.org/abs/2512.17800 mastoxiv.page/@arXiv_quantph_b
- Visually Prompted Benchmarks Are Surprisingly Fragile
Feng, Lian, Dunlap, Shu, Wang, Wang, Darrell, Suhr, Kanazawa
arxiv.org/abs/2512.17875 mastoxiv.page/@arXiv_csCV_bot/
- Learning vertical coordinates via automatic differentiation of a dynamical core
Tim Whittaker, Seth Taylor, Elsa Cardoso-Bihlo, Alejandro Di Luca, Alex Bihlo
arxiv.org/abs/2512.17877 mastoxiv.page/@arXiv_physicsao
- RadarGen: Automotive Radar Point Cloud Generation from Cameras
Tomer Borreda, Fangqiang Ding, Sanja Fidler, Shengyu Huang, Or Litany
arxiv.org/abs/2512.17897 mastoxiv.page/@arXiv_csCV_bot/
- Distributionally Robust Imitation Learning: Layered Control Architecture for Certifiable Autonomy
Gahlawat, Aboudonia, Banik, Hovakimyan, Matni, Ames, Zardini, Speranzon
arxiv.org/abs/2512.17899 mastoxiv.page/@arXiv_eessSY_bo
- Re-Depth Anything: Test-Time Depth Refinement via Self-Supervised Re-lighting
Ananta R. Bhattarai, Helge Rhodin
arxiv.org/abs/2512.17908 mastoxiv.page/@arXiv_csCV_bot/
toXiv_bot_toot

@arXiv_physicsatomph_bot@mastoxiv.page
2025-10-24 12:39:05

Replaced article(s) found for physics.atom-ph. arxiv.org/list/physics.atom-ph
[1/1]:
- Calibration-free Rydberg Atomic Receiver for Sub-MHz Wireless Communications and Sensing
Chen, Mao, Xiao, Wu, Li, Cui, Zeng, Zheng, Huang, Wang

@mia@hcommons.social
2025-12-05 09:53:33

We began #FF2025 with two long papers - Developing Archival AI chatbots and AI to Improve AV Metadata in a Legacy System conftool.org/fantastic-futures<…

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:55:06

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[5/5]:
- CLAReSNet: When Convolution Meets Latent Attention for Hyperspectral Image Classification
Asmit Bandyopadhyay, Anindita Das Bhattacharjee, Rakesh Das
arxiv.org/abs/2511.12346 mastoxiv.page/@arXiv_csCV_bot/
- Safeguarded Stochastic Polyak Step Sizes for Non-smooth Optimization: Robust Performance Without ...
Dimitris Oikonomou, Nicolas Loizou
arxiv.org/abs/2512.02342 mastoxiv.page/@arXiv_mathOC_bo
- Predictive Modeling of I/O Performance for Machine Learning Training Pipelines: A Data-Driven App...
Karthik Prabhakar, Durgamadhab Mishra
arxiv.org/abs/2512.06699 mastoxiv.page/@arXiv_csPF_bot/
- Minimum Bayes Risk Decoding for Error Span Detection in Reference-Free Automatic Machine Translat...
Lyu, Song, Kamigaito, Ding, Tanaka, Utiyama, Funakoshi, Okumura
arxiv.org/abs/2512.07540 mastoxiv.page/@arXiv_csCL_bot/
- In-Context Learning for Seismic Data Processing
Fabian Fuchs, Mario Ruben Fernandez, Norman Ettrich, Janis Keuper
arxiv.org/abs/2512.11575 mastoxiv.page/@arXiv_csCV_bot/
- Journey Before Destination: On the importance of Visual Faithfulness in Slow Thinking
Rheeya Uppaal, Phu Mon Htut, Min Bai, Nikolaos Pappas, Zheng Qi, Sandesh Swamy
arxiv.org/abs/2512.12218 mastoxiv.page/@arXiv_csCV_bot/
- Non-Resolution Reasoning (NRR): A Computational Framework for Contextual Identity and Ambiguity P...
Kei Saito
arxiv.org/abs/2512.13478 mastoxiv.page/@arXiv_csCL_bot/
- Stylized Synthetic Augmentation further improves Corruption Robustness
Georg Siedel, Rojan Regmi, Abhirami Anand, Weijia Shao, Silvia Vock, Andrey Morozov
arxiv.org/abs/2512.15675 mastoxiv.page/@arXiv_csCV_bot/
- mimic-video: Video-Action Models for Generalizable Robot Control Beyond VLAs
Jonas Pai, Liam Achenbach, Victoriano Montesinos, Benedek Forrai, Oier Mees, Elvis Nava
arxiv.org/abs/2512.15692 mastoxiv.page/@arXiv_csRO_bot/
toXiv_bot_toot

@midtsveen@social.linux.pizza
2025-12-14 11:03:26

📌 Pinned Post
I'm an anarcho-syndicalist non-binary person who uses they/them pronouns, is bisexual, from Bergen, Norway, and whose posts automatically delete after seven days regardless of likes or boosts, because I have autism and say random shit sometimes.
I talk a lot about free software, and I use Secureblue, GrapheneOS and Debian as part of my software setup. I also spend a lot of time experimenting with Linux, especially Debian Testing/Sid, GNOME, and KDE.
As an …

@netzschleuder@social.skewed.de
2025-10-21 13:00:05

edit_wikiquote: Wikiquote edits (2010)
A bipartite user-page network extracted from Wikiquotes. A user connects to a page if that user edited that page. Edits (edges) are timestamped. Edge weights represent counts of the number of edits.
This network has 27896 nodes and 184257 edges.
Tags: Informational, Web graph, Multigraph, Timestamps

edit_wikiquote: Wikiquote edits (2010). 27896 nodes, 184257 edges. https://networks.skewed.de/net/edit_wikiquote#fr
@seeingwithsound@mas.to
2025-11-16 07:45:32

It looks like Nano Retina is back, or are they just trying to sell off their IP? There has been no news on the nano-retina.com retinal implant since 2022.
The clinical study

@metacurity@infosec.exchange
2025-11-17 09:35:25

RE: infosec.exchange/@metacurity/1
Check out the ICIJ's page posting a series of articles and videos on their investigation. Kudos to everyone.

@arXiv_csCV_bot@mastoxiv.page
2025-10-14 22:05:25

Replaced article(s) found for cs.CV. arxiv.org/list/cs.CV/new
[8/8]:
- TC-GS: A Faster Gaussian Splatting Module Utilizing Tensor Cores
Liao, Ding, Cui, Gong, Hu, Wang, Li, Zhang, Wang, Fu

@arXiv_csAI_bot@mastoxiv.page
2025-10-14 22:05:15

Replaced article(s) found for cs.AI. arxiv.org/list/cs.AI/new
[13/14]:
- Learning to Reason as Action Abstractions with Scalable Mid-Training RL
Shenao Zhang, Donghan Yu, Yihao Feng, Bowen Jin, Zhaoran Wang, John Peebles, Zirui Wang

@mia@hcommons.social
2025-12-05 11:39:17

I'm lucky to be chairing a great session with lots of brilliant short papers, from 'The Politics of AI Training Data' (with people who critiqued the KB NL in the audience!), vision language models, work with newspapers and more! conftool.org/fa…

@arXiv_csAI_bot@mastoxiv.page
2025-10-15 14:20:33

Replaced article(s) found for cs.AI. arxiv.org/list/cs.AI/new
[6/6]:
- ICL-Router: In-Context Learned Model Representations for LLM Routing
Wang, Li, Zhang, Chen, Chen, Jian, Ye, Zhang, Hu

@arXiv_physicsatomph_bot@mastoxiv.page
2025-10-22 09:05:20

Probing Nuclear Interactions Through Isotope Shift Spectroscopy of Mercury
Thorsten Groh, Felix Affeld, Simon Stellmer
arxiv.org/abs/2510.18514

@arXiv_physicsatomph_bot@mastoxiv.page
2025-10-21 17:23:09

Replaced article(s) found for physics.atom-ph. arxiv.org/list/physics.atom-ph
[1/1]:
- Fault-tolerant dynamically-decoupled hyper-Ramsey spectroscopy of ultra-narrow clock transitions
T. Zanon-Willette, B. Ilikj, D. Wilkowski, B. Darqui\'e, N. V. Vitanov

@netzschleuder@social.skewed.de
2025-12-20 23:00:08

wiki_talk: Wikipedia talk networks
Interactions among users of 10 language-specific Wikipedias: Arabic, Chinese, Dutch, English, French, German, Italian, Portuguese, Russian, and Spanish. Nodes are registered wiki editors, and an edge represents a user i having written a message on user j's talk page. Edges are timestamped. The precise dates of the snapshots are uncertain.
This network has 103068 nodes and 312837 edges.
Tags: Social, Communication, Unweighted, Multigrap…

wiki_talk: Wikipedia talk networks. 103068 nodes, 312837 edges. https://networks.skewed.de/net/wiki_talk#sr
@arXiv_physicsatomph_bot@mastoxiv.page
2025-10-21 14:41:43

Crosslisted article(s) found for physics.atom-ph. arxiv.org/list/physics.atom-ph
[1/1]:
- Infrared Absorption and Laser Spectroscopy of Ho$^{3+}$ Doped K$_2$YF$_5$ Microparticles
Pakwan Chanprakhon, Michael F. Reid, Jon-Paul R. Wells

@arXiv_physicsatomph_bot@mastoxiv.page
2025-10-21 09:26:56

Theory for the Rydberg states of helium: Results for $2 \le n \le 35$ and comparison with experiment for the singlet and triplet $P$-states
G. W. F. Drake, Aaron T. Bondy, Oliver P. Hallett, Benjamin C. Najem
arxiv.org/abs/2510.17495

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:50

Regularized Random Fourier Features and Finite Element Reconstruction for Operator Learning in Sobolev Space
Xinyue Yu, Hayden Schaeffer
arxiv.org/abs/2512.17884 arxiv.org/pdf/2512.17884 arxiv.org/html/2512.17884
arXiv:2512.17884v1 Announce Type: new
Abstract: Operator learning is a data-driven approximation of mappings between infinite-dimensional function spaces, such as the solution operators of partial differential equations. Kernel-based operator learning can offer accurate, theoretically justified approximations that require less training than standard methods. However, they can become computationally prohibitive for large training sets and can be sensitive to noise. We propose a regularized random Fourier feature (RRFF) approach, coupled with a finite element reconstruction map (RRFF-FEM), for learning operators from noisy data. The method uses random features drawn from multivariate Student's $t$ distributions, together with frequency-weighted Tikhonov regularization that suppresses high-frequency noise. We establish high-probability bounds on the extreme singular values of the associated random feature matrix and show that when the number of features $N$ scales like $m \log m$ with the number of training samples $m$, the system is well-conditioned, which yields estimation and generalization guarantees. Detailed numerical experiments on benchmark PDE problems, including advection, Burgers', Darcy flow, Helmholtz, Navier-Stokes, and structural mechanics, demonstrate that RRFF and RRFF-FEM are robust to noise and achieve improved performance with reduced training time compared to the unregularized random feature model, while maintaining competitive accuracy relative to kernel and neural operator tests.
toXiv_bot_toot
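A very rough sketch of the random-feature ingredient the abstract above describes: frequencies drawn from a Student's t distribution and a ridge penalty that grows with frequency magnitude. The specific constants and the toy target are assumptions for illustration, not the paper's implementation.

```python
# Sketch of regularized random Fourier feature (RRFF) regression: Student's t
# frequencies plus frequency-weighted Tikhonov regularization to damp
# high-frequency noise. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
m, d, N = 200, 1, 400                      # samples, input dim, number of features

X = rng.uniform(-1, 1, size=(m, d))
y = np.sin(3 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(m)   # noisy toy target

omega = rng.standard_t(df=3, size=(N, d)) * 3.0   # Student's t frequencies
b = rng.uniform(0, 2 * np.pi, size=N)
Phi = np.sqrt(2.0 / N) * np.cos(X @ omega.T + b)  # random Fourier features

lam = 1e-3
penalty = lam * (1.0 + np.linalg.norm(omega, axis=1) ** 2)   # frequency-weighted ridge
c = np.linalg.solve(Phi.T @ Phi + np.diag(penalty), Phi.T @ y)

print("train RMSE:", np.sqrt(np.mean((Phi @ c - y) ** 2)))
```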

@netzschleuder@social.skewed.de
2025-11-08 05:00:04

edit_wikiquote: Wikiquote edits (2010)
A bipartite user-page network extracted from Wikiquotes. A user connects to a page if that user edited that page. Edits (edges) are timestamped. Edge weights represent counts of the number of edits.
This network has 5297 nodes and 27934 edges.
Tags: Informational, Web graph, Multigraph, Timestamps

edit_wikiquote: Wikiquote edits (2010). 5297 nodes, 27934 edges. https://networks.skewed.de/net/edit_wikiquote#sl
@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:40

Weighted Stochastic Differential Equation to Implement Wasserstein-Fisher-Rao Gradient Flow
Herlock Rahimi
arxiv.org/abs/2512.17878 arxiv.org/pdf/2512.17878 arxiv.org/html/2512.17878
arXiv:2512.17878v1 Announce Type: new
Abstract: Score-based diffusion models currently constitute the state of the art in continuous generative modeling. These methods are typically formulated via overdamped or underdamped Ornstein--Uhlenbeck-type stochastic differential equations, in which sampling is driven by a combination of deterministic drift and Brownian diffusion, resulting in continuous particle trajectories in the ambient space. While such dynamics enjoy exponential convergence guarantees for strongly log-concave target distributions, it is well known that their mixing rates deteriorate exponentially in the presence of nonconvex or multimodal landscapes, such as double-well potentials. Since many practical generative modeling tasks involve highly non-log-concave target distributions, considerable recent effort has been devoted to developing sampling schemes that improve exploration beyond classical diffusion dynamics.
A promising line of work leverages tools from information geometry to augment diffusion-based samplers with controlled mass reweighting mechanisms. This perspective leads naturally to Wasserstein--Fisher--Rao (WFR) geometries, which couple transport in the sample space with vertical (reaction) dynamics on the space of probability measures. In this work, we formulate such reweighting mechanisms through the introduction of explicit correction terms and show how they can be implemented via weighted stochastic differential equations using the Feynman--Kac representation. Our study provides a preliminary but rigorous investigation of WFR-based sampling dynamics, and aims to clarify their geometric and operator-theoretic structure as a foundation for future theoretical and algorithmic developments.
toXiv_bot_toot
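As a toy illustration of the "transport plus reweighting" idea in the abstract above: Langevin steps for the Wasserstein part, multiplicative Feynman-Kac-style particle reweighting for the Fisher-Rao part, and resampling when weights degenerate. This is a generic weighted-particle sketch under those assumptions, not the paper's correction terms.

```python
# Toy Wasserstein-Fisher-Rao style sampler on a double-well potential:
# Langevin transport + multiplicative particle reweighting + resampling.
import numpy as np

rng = np.random.default_rng(0)
V = lambda x: (x**2 - 1.0) ** 2            # double-well potential
gradV = lambda x: 4.0 * x * (x**2 - 1.0)

n, dt, steps = 2000, 1e-2, 2000
x = rng.normal(0.0, 2.0, size=n)           # particle positions
w = np.full(n, 1.0 / n)                    # particle weights

for _ in range(steps):
    # Wasserstein part: overdamped Langevin step
    x += -gradV(x) * dt + np.sqrt(2 * dt) * rng.standard_normal(n)
    # Fisher-Rao part: reweight by the centred potential (Feynman-Kac style)
    w *= np.exp(-dt * (V(x) - np.sum(w * V(x))))
    w /= w.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(w**2) < n / 2:
        idx = rng.choice(n, size=n, p=w)
        x, w = x[idx], np.full(n, 1.0 / n)

print("weighted mean of V:", np.sum(w * V(x)))
```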

@arXiv_physicsatomph_bot@mastoxiv.page
2025-10-21 09:20:06

Evaluation of Quantum Offset in Velocity Imaging-Based Electron Spectrometry
Rui Zhang, Shuaiting Yan, Wenru Jie, Jiayi Chen, Qihan Liu, Chuangang Ning
arxiv.org/abs/2510.17204

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:10

Exploiting ID-Text Complementarity via Ensembling for Sequential Recommendation
Liam Collins, Bhuvesh Kumar, Clark Mingxuan Ju, Tong Zhao, Donald Loveland, Leonardo Neves, Neil Shah
arxiv.org/abs/2512.17820 arxiv.org/pdf/2512.17820 arxiv.org/html/2512.17820
arXiv:2512.17820v1 Announce Type: new
Abstract: Modern Sequential Recommendation (SR) models commonly utilize modality features to represent items, motivated in large part by recent advancements in language and vision modeling. To do so, several works completely replace ID embeddings with modality embeddings, claiming that modality embeddings render ID embeddings unnecessary because they can match or even exceed ID embedding performance. On the other hand, many works jointly utilize ID and modality features, but posit that complex fusion strategies, such as multi-stage training and/or intricate alignment architectures, are necessary for this joint utilization. However, underlying both these lines of work is a lack of understanding of the complementarity of ID and modality features. In this work, we address this gap by studying the complementarity of ID- and text-based SR models. We show that these models do learn complementary signals, meaning that either should provide performance gain when used properly alongside the other. Motivated by this, we propose a new SR method that preserves ID-text complementarity through independent model training, then harnesses it through a simple ensembling strategy. Despite this method's simplicity, we show it outperforms several competitive SR baselines, implying that both ID and text features are necessary to achieve state-of-the-art SR performance but complex fusion architectures are not.
toXiv_bot_toot
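A minimal sketch of the "train independently, then ensemble" recipe the abstract above describes; the z-normalisation and convex mixing weight are illustrative assumptions, not the paper's exact strategy.

```python
# Sketch: blend per-item scores from an independently trained ID-based model
# and a text-based sequential recommender via a simple convex combination.
import numpy as np

def ensemble_scores(id_scores: np.ndarray, text_scores: np.ndarray, alpha: float = 0.5):
    """Blend per-item scores from two models; alpha is a tunable mixing weight."""
    def z(s):                              # z-normalise so the scales are comparable
        return (s - s.mean()) / (s.std() + 1e-8)
    return alpha * z(id_scores) + (1 - alpha) * z(text_scores)

id_scores = np.array([2.1, 0.3, 1.7, -0.5])      # hypothetical logits per candidate item
text_scores = np.array([0.9, 1.2, 1.5, -1.0])
blended = ensemble_scores(id_scores, text_scores, alpha=0.6)
print("recommended item:", int(np.argmax(blended)))
```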

@arXiv_physicsatomph_bot@mastoxiv.page
2025-10-21 09:07:46

Perturbation-assisted Observation of the Lowest Vibrational Level of the $\mathrm{b}^{3}\Pi_{0}$ State of Ultracold LiK Molecules
Anbang Yang, Xiaoyu Nie, Hao Lin Yu, Yiming Liu, Victor Avalos, Canming He, Jacek Klos, Svetlana Kotochigova, Kai Dieckmann
arxiv.org/abs/2510.17166

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:33:50

Calibratable Disambiguation Loss for Multi-Instance Partial-Label Learning
Wei Tang, Yin-Fang Yang, Weijia Zhang, Min-Ling Zhang
arxiv.org/abs/2512.17788 arxiv.org/pdf/2512.17788 arxiv.org/html/2512.17788
arXiv:2512.17788v1 Announce Type: new
Abstract: Multi-instance partial-label learning (MIPL) is a weakly supervised framework that extends the principles of multi-instance learning (MIL) and partial-label learning (PLL) to address the challenges of inexact supervision in both instance and label spaces. However, existing MIPL approaches often suffer from poor calibration, undermining classifier reliability. In this work, we propose a plug-and-play calibratable disambiguation loss (CDL) that simultaneously improves classification accuracy and calibration performance. The loss has two instantiations: the first one calibrates predictions based on probabilities from the candidate label set, while the second one integrates probabilities from both candidate and non-candidate label sets. The proposed CDL can be seamlessly incorporated into existing MIPL and PLL frameworks. We provide a theoretical analysis that establishes the lower bound and regularization properties of CDL, demonstrating its superiority over conventional disambiguation losses. Experimental results on benchmark and real-world datasets confirm that our CDL significantly enhances both classification and calibration performance.
toXiv_bot_toot
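For context on the abstract above, here is the classical partial-label disambiguation baseline that losses in this family refine: maximise the total probability the model places on the candidate label set. This is not the paper's CDL, just the standard starting point it builds calibration terms on top of.

```python
# Sketch: classical candidate-set loss for partial-label learning, i.e. the
# negative log of the summed probabilities over candidate labels.
import torch
import torch.nn.functional as F

def candidate_set_loss(logits: torch.Tensor, candidate_mask: torch.Tensor) -> torch.Tensor:
    """logits: (batch, num_classes); candidate_mask: (batch, num_classes) in {0, 1}."""
    log_probs = F.log_softmax(logits, dim=-1)
    # log of the probability mass restricted to the candidate set
    cand_logprob = torch.logsumexp(log_probs + torch.log(candidate_mask + 1e-12), dim=-1)
    return -cand_logprob.mean()

logits = torch.randn(4, 5, requires_grad=True)
mask = torch.tensor([[1, 1, 0, 0, 0],
                     [0, 1, 1, 1, 0],
                     [1, 0, 0, 0, 1],
                     [0, 0, 1, 0, 0]], dtype=torch.float32)
loss = candidate_set_loss(logits, mask)
loss.backward()
print(float(loss))
```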

@arXiv_physicsatomph_bot@mastoxiv.page
2025-10-21 08:59:16

Numerical modeling of laser cooling in molecules: From simple diatomics to polyatomics and radioactive species
Felix Kogel, Tatsam Garg, Phillip Gro{\ss}, Lukas Leczek, Marian Rockenh\"auser, Neil Shah, Jakob Wei{\ss}, Andreas Schindewolf, Tim Langen
arxiv.org/abs/2510.16203

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:33:40

Easy Adaptation: An Efficient Task-Specific Knowledge Injection Method for Large Models in Resource-Constrained Environments
Dong Chen, Zhengqing Hu, Shixing Zhao, Yibo Guo
arxiv.org/abs/2512.17771 arxiv.org/pdf/2512.17771 arxiv.org/html/2512.17771
arXiv:2512.17771v1 Announce Type: new
Abstract: While the enormous parameter scale endows Large Models (LMs) with unparalleled performance, it also limits their adaptability across specific tasks. Parameter-Efficient Fine-Tuning (PEFT) has emerged as a critical approach for effectively adapting LMs to a diverse range of downstream tasks. However, existing PEFT methods face two primary challenges: (1) High resource cost. Although PEFT methods significantly reduce resource demands compared to full fine-tuning, it still requires substantial time and memory, making it impractical in resource-constrained environments. (2) Parameter dependency. PEFT methods heavily rely on updating a subset of parameters associated with LMs to incorporate task-specific knowledge. Yet, due to increasing competition in the LMs landscape, many companies have adopted closed-source policies for their leading models, offering access only via Application Programming Interface (APIs). Whereas, the expense is often cost-prohibitive and difficult to sustain, as the fine-tuning process of LMs is extremely slow. Even if small models perform far worse than LMs in general, they can achieve superior results on particular distributions while requiring only minimal resources. Motivated by this insight, we propose Easy Adaptation (EA), which designs Specific Small Models (SSMs) to complement the underfitted data distribution for LMs. Extensive experiments show that EA matches the performance of PEFT on diverse tasks without accessing LM parameters, and requires only minimal resources.
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:33:20

Can You Hear Me Now? A Benchmark for Long-Range Graph Propagation
Luca Miglior, Matteo Tolloso, Alessio Gravina, Davide Bacciu
arxiv.org/abs/2512.17762 arxiv.org/pdf/2512.17762 arxiv.org/html/2512.17762
arXiv:2512.17762v1 Announce Type: new
Abstract: Effectively capturing long-range interactions remains a fundamental yet unresolved challenge in graph neural network (GNN) research, critical for applications across diverse fields of science. To systematically address this, we introduce ECHO (Evaluating Communication over long HOps), a novel benchmark specifically designed to rigorously assess the capabilities of GNNs in handling very long-range graph propagation. ECHO includes three synthetic graph tasks, namely single-source shortest paths, node eccentricity, and graph diameter, each constructed over diverse and structurally challenging topologies intentionally designed to introduce significant information bottlenecks. ECHO also includes two real-world datasets, ECHO-Charge and ECHO-Energy, which define chemically grounded benchmarks for predicting atomic partial charges and molecular total energies, respectively, with reference computations obtained at the density functional theory (DFT) level. Both tasks inherently depend on capturing complex long-range molecular interactions. Our extensive benchmarking of popular GNN architectures reveals clear performance gaps, emphasizing the difficulty of true long-range propagation and highlighting design choices capable of overcoming inherent limitations. ECHO thereby sets a new standard for evaluating long-range information propagation, also providing a compelling example for its need in AI for science.
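
For intuition, a small generator for an ECHO-style synthetic task (node eccentricity on a sparse random graph, a label that requires propagating information from the farthest node). This illustrates the task type only, not the benchmark's actual construction:

```python
import networkx as nx
import numpy as np

def make_eccentricity_sample(n_nodes=64, p=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Resample until the sparse graph is connected, so eccentricity is well
    # defined and shortest-path distances (hence the receptive field a GNN
    # needs) are long.
    while True:
        g = nx.gnp_random_graph(n_nodes, p, seed=int(rng.integers(1 << 31)))
        if nx.is_connected(g):
            break
    ecc = nx.eccentricity(g)                                  # {node: eccentricity}
    edge_index = np.array(list(g.edges()), dtype=np.int64).T  # (2, num_edges)
    y = np.array([ecc[i] for i in range(n_nodes)], dtype=np.float32)
    x = np.ones((n_nodes, 1), dtype=np.float32)               # featureless input nodes
    return x, edge_index, y
```
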
toXiv_bot_toot

@arXiv_physicsatomph_bot@mastoxiv.page
2025-10-21 08:25:46

Light shift suppression in a CPT magnetometer using linear polarization and double frequency interrogation
M. A. Maldonado, Yang Li, James A. McKelvy, Andrey Matsko, Irina Novikova, Eugeniy E. Mikhailov, John Kitching, Ying-Ju Wang
arxiv.org/abs/2510.16159

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:33:00

Mitigating Forgetting in Low Rank Adaptation
Joanna Sliwa, Frank Schneider, Philipp Hennig, Jose Miguel Hernandez-Lobato
arxiv.org/abs/2512.17720 arxiv.org/pdf/2512.17720 arxiv.org/html/2512.17720
arXiv:2512.17720v1 Announce Type: new
Abstract: Parameter-efficient fine-tuning methods, such as Low-Rank Adaptation (LoRA), enable fast specialization of large pre-trained models to different downstream applications. However, this process often leads to catastrophic forgetting of the model's prior domain knowledge. We address this issue with LaLoRA, a weight-space regularization technique that applies a Laplace approximation to Low-Rank Adaptation. Our approach estimates the model's confidence in each parameter and constrains updates in high-curvature directions, preserving prior knowledge while enabling efficient target-domain learning. By applying the Laplace approximation only to the LoRA weights, the method remains lightweight. We evaluate LaLoRA by fine-tuning a Llama model for mathematical reasoning and demonstrate an improved learning-forgetting trade-off, which can be directly controlled via the method's regularization strength. We further explore different loss landscape curvature approximations for estimating parameter confidence, analyze the effect of the data used for the Laplace approximation, and study robustness across hyperparameters.
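
A rough sketch of the kind of weight-space regularizer the abstract describes, assuming a diagonal curvature estimate restricted to the LoRA weights (the paper's exact Laplace approximation and curvature estimator may differ):

```python
import torch

def lalora_penalty(lora_params, prior_params, fisher_diag, strength=1.0):
    """Laplace-style penalty on LoRA weights only: moving away from the prior
    weights is penalized proportionally to a diagonal curvature estimate
    (e.g. squared gradients accumulated on prior-task data). Illustrative sketch."""
    penalty = 0.0
    for p, p0, f in zip(lora_params, prior_params, fisher_diag):
        penalty = penalty + (f * (p - p0) ** 2).sum()
    return 0.5 * strength * penalty

# Usage sketch: total_loss = task_loss + lalora_penalty(lora_ps, lora_ps_prior, fisher)
```
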
toXiv_bot_toot

@arXiv_physicsatomph_bot@mastoxiv.page
2025-10-21 08:18:06

Unravelling inter-channel quantum interference in below-threshold nonsequential double ionization with statistical measures
S. Hashim, C. Figueira de Morisson Faria
arxiv.org/abs/2510.16135

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:50

Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
arxiv.org/abs/2512.17696 arxiv.org/pdf/2512.17696 arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
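
A minimal sketch of covariance-biased self-attention, assuming an exponential distance kernel with a learnable length scale added to the attention logits (the paper's kernel parameterization and prior/residual decomposition may differ):

```python
import torch
import torch.nn as nn

class CovarianceBiasedAttention(nn.Module):
    """Scaled dot-product attention plus an additive bias from a learnable
    stationary distance kernel over sensor coordinates (illustrative sketch)."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.log_lengthscale = nn.Parameter(torch.zeros(()))

    def forward(self, x, coords):
        # x: (batch, num_sensors, dim); coords: (num_sensors, 2) sensor locations.
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5      # data-driven term
        # Log of an exponential covariance kernel: distant sensors get a negative bias.
        kernel_bias = -torch.cdist(coords, coords) / self.log_lengthscale.exp()
        attn = torch.softmax(scores + kernel_bias, dim=-1)         # prior + residual
        return attn @ v
```
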
toXiv_bot_toot

@arXiv_physicsatomph_bot@mastoxiv.page
2025-10-21 08:17:36

[2025-10-21 Tue (UTC), 6 new articles found for physics.atom-ph Atomic Physics]
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:40

Convergence Guarantees for Federated SARSA with Local Training and Heterogeneous Agents
Paul Mangold, Elo\"ise Berthier, Eric Moulines
arxiv.org/abs/2512.17688 arxiv.org/pdf/2512.17688 arxiv.org/html/2512.17688
arXiv:2512.17688v1 Announce Type: new
Abstract: We present a novel theoretical analysis of Federated SARSA (FedSARSA) with linear function approximation and local training. We establish convergence guarantees for FedSARSA in the presence of heterogeneity, both in local transitions and rewards, providing the first sample and communication complexity bounds in this setting. At the core of our analysis is a new, exact multi-step error expansion for single-agent SARSA, which is of independent interest. Our analysis precisely quantifies the impact of heterogeneity, demonstrating the convergence of FedSARSA with multiple local updates. Crucially, we show that FedSARSA achieves linear speed-up with respect to the number of agents, up to higher-order terms due to Markovian sampling. Numerical experiments support our theoretical findings.
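
An illustrative sketch of the FedSARSA loop with linear function approximation, assuming a toy environment interface `env.reset()` / `env.step(a) -> (next_state, reward, done)` and a user-supplied feature map (not the paper's exact algorithm, step sizes, or communication scheme):

```python
import numpy as np

def fed_sarsa(envs, featurize, n_rounds=50, local_steps=20,
              gamma=0.99, alpha=0.1, eps=0.1, dim=8, n_actions=2, seed=0):
    """Each agent runs SARSA on its own (heterogeneous) environment for
    `local_steps` updates; a server then averages the weight vectors."""
    rng = np.random.default_rng(seed)
    w = np.zeros((n_actions, dim))                       # shared global weights

    def eps_greedy(weights, phi):
        if rng.random() < eps:
            return int(rng.integers(n_actions))
        return int(np.argmax(weights @ phi))

    for _ in range(n_rounds):
        local = []
        for env in envs:                                 # one local pass per agent
            w_i = w.copy()
            phi = featurize(env.reset())
            a = eps_greedy(w_i, phi)
            for _ in range(local_steps):
                s2, r, done = env.step(a)
                phi2 = featurize(s2)
                a2 = eps_greedy(w_i, phi2)
                target = r + (0.0 if done else gamma * (w_i[a2] @ phi2))
                w_i[a] += alpha * (target - w_i[a] @ phi) * phi   # SARSA TD update
                if done:
                    phi2 = featurize(env.reset())
                    a2 = eps_greedy(w_i, phi2)
                phi, a = phi2, a2
            local.append(w_i)
        w = np.mean(local, axis=0)                       # server aggregation
    return w
```
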
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:30

You Only Train Once: Differentiable Subset Selection for Omics Data
Daphn\'e Chopard, Jorge da Silva Gon\c{c}alves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
arxiv.org/abs/2512.17678 arxiv.org/pdf/2512.17678 arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovering gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
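
A compact sketch of end-to-end differentiable subset selection in the spirit of YOTO, using Gumbel-softmax selector units so that only the chosen genes ever reach the prediction head (the paper's gating mechanism and multi-task heads may differ; all names below are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableGeneSelector(nn.Module):
    """k selector units each pick one gene via a straight-through Gumbel-softmax;
    the prediction head sees only the selected genes, so selection and prediction
    are trained jointly in a single pass."""

    def __init__(self, n_genes, k_selected, n_classes, temperature=0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(k_selected, n_genes))
        self.head = nn.Sequential(nn.Linear(k_selected, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))
        self.temperature = temperature

    def forward(self, x):
        # x: (batch, n_genes) expression matrix.
        if self.training:
            sel = F.gumbel_softmax(self.logits, tau=self.temperature, hard=True)
        else:
            sel = F.one_hot(self.logits.argmax(dim=-1), x.shape[-1]).float()
        z = x @ sel.t()              # (batch, k_selected): only selected genes pass
        return self.head(z)

    def selected_genes(self):
        return self.logits.argmax(dim=-1)   # indices of the chosen gene subset
```
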
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:10

Polyharmonic Cascade
Yuriy N. Bakhvalov
arxiv.org/abs/2512.17671 arxiv.org/pdf/2512.17671 arxiv.org/html/2512.17671
arXiv:2512.17671v1 Announce Type: new
Abstract: This paper presents a deep machine learning architecture, the "polyharmonic cascade" -- a sequence of packages of polyharmonic splines, where each layer is rigorously derived from the theory of random functions and the principles of indifference. This makes it possible to approximate nonlinear functions of arbitrary complexity while preserving global smoothness and a probabilistic interpretation. For the polyharmonic cascade, a training method alternative to gradient descent is proposed: instead of directly optimizing the coefficients, one solves a single global linear system on each batch with respect to the function values at fixed "constellations" of nodes. This yields synchronized updates of all layers, preserves the probabilistic interpretation of individual layers and theoretical consistency with the original model, and scales well: all computations reduce to 2D matrix operations efficiently executed on a GPU. Fast learning without overfitting on MNIST is demonstrated.
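
A small sketch of the linear-system training idea for a single polyharmonic-spline layer, assuming the standard polyharmonic basis phi(r) = r^k for odd k and r^k log r for even k; the cascade's layer coupling and node "constellations" are not reproduced here:

```python
import numpy as np

def fit_polyharmonic_layer(nodes, values, k=2, reg=1e-8):
    """Fit spline coefficients at fixed nodes by one global linear solve
    (no gradient descent). Returns a callable that evaluates the layer."""
    r = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    if k % 2 == 0:
        A = np.where(r > 0, r ** k * np.log(np.maximum(r, 1e-12)), 0.0)
    else:
        A = r ** k
    A = A + reg * np.eye(len(nodes))          # small ridge term for stability
    coeffs = np.linalg.solve(A, values)       # single global linear system

    def predict(x):
        rx = np.linalg.norm(x[:, None, :] - nodes[None, :, :], axis=-1)
        if k % 2 == 0:
            B = np.where(rx > 0, rx ** k * np.log(np.maximum(rx, 1e-12)), 0.0)
        else:
            B = rx ** k
        return B @ coeffs

    return predict
```
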
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:31:40

Estimating Spatially Resolved Radiation Fields Using Neural Networks
Felix Lehner, Pasquale Lombardo, Susana Castillo, Oliver Hupe, Marcus Magnor
arxiv.org/abs/2512.17654 arxiv.org/pdf/2512.17654 arxiv.org/html/2512.17654
arXiv:2512.17654v1 Announce Type: new
Abstract: We present an in-depth analysis of how to build and train neural networks that estimate the spatial distribution of scattered radiation fields for radiation protection dosimetry in medical radiation fields, such as those found in Interventional Radiology and Cardiology. To this end, we present three synthetically generated datasets of increasing complexity for training, produced with a Monte-Carlo simulation application based on Geant4. On these datasets, we evaluate convolutional and fully connected network architectures to determine which design decisions work well for reconstructing the fluence and spectrum distributions over the spatial domain of such radiation fields. All datasets used, as well as our training pipeline, are published as open source in separate repositories.
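
For a concrete picture of the simplest variant, a fully connected network that maps a spatial position to a discretized fluence spectrum at that point (a hypothetical minimal architecture for illustration, not one of the architectures evaluated in the paper):

```python
import torch
import torch.nn as nn

class FluenceFieldMLP(nn.Module):
    """Map an (x, y, z) position in the room to a non-negative fluence value
    per energy bin; illustrative sketch only."""

    def __init__(self, n_energy_bins=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_energy_bins),
            nn.Softplus(),                      # fluence per bin is non-negative
        )

    def forward(self, xyz):
        return self.net(xyz)
```
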
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:19:42

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[14/14]:
- The Contingencies of Physical Embodiment Allow for Open-Endedness and Care
Christov-Moore, Juliani, Kiefer, Reggente, Rousse, Safron, Hinrichs, Polani, Damasio

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:19:32

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[13/14]:
- Class-Invariant Test-Time Augmentation for Domain Generalization
Zhicheng Lin, Xiaolin Wu, Xi Zhang

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:19:22

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[12/14]:
- Can Prompt Difficulty be Online Predicted for Accelerating RL Finetuning of Reasoning Models?
Yun Qu, Qi Wang, Yixiu Mao, Vincent Tao Hu, Bj\"orn Ommer, Xiangyang Ji

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:19:11

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[11/14]:
- mCLM: A Modular Chemical Language Model that Generates Functional and Makeable Molecules
Edwards, Han, Lee, Nguyen, Szymku\'c, Prasad, Jin, Han, Diao, Liu, Peng, Grzybowski, Burke, Ji

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:19:01

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[10/14]:
- MGPATH: Vision-Language Model with Multi-Granular Prompt Learning for Few-Shot WSI Classification
Nguyen, Nguyen, Diep, Nguyen, Ho, Metsch, Maurer, Sonntag, Bohnenberger, Hauschild

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:18:51

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[9/14]:
- Hyper-STTN: Hypergraph Augmented Spatial-Temporal Transformer Network for Trajectory Prediction
Weizheng Wang, Baijian Yang, Sungeun Hong, Wenhai Sun, Byung-Cheol Min

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:18:40

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[8/14]:
- Early-Warning of Thunderstorm-Driven Power Outages with a Two-Stage Machine Learning Model
Iryna Stanishevska

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:18:30

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[7/14]:
- Prompt Optimization Meets Subspace Representation Learning for Few-shot Out-of-Distribution Detec...
Faizul Rakib Sayem, Shahana Ibrahim

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:18:20

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[6/14]:
- Robust Causal Discovery in Real-World Time Series with Power-Laws
Matteo Tusoni, Giuseppe Masi, Andrea Coletta, Aldo Glielmo, Viviana Arrigoni, Novella Bartolini

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:18:09

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[5/14]:
- Load Balancing Mixture of Experts with Similarity Preserving Routers
Nabil Omi, Siddhartha Sen, Ali Farhadi

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:17:59

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[4/14]:
- Evolving Machine Learning: A Survey
Martin, Mukherjee, Baimagambetov, Vanschoren, Polatidis

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:17:48

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[3/14]:
- OrbitZoo: Multi-Agent Reinforcement Learning Environment for Orbital Dynamics
Alexandre Oliveira, Katarina Dyreby, Francisco Caldas, Cl\'audia Soares

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:17:38

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/14]:
- Learning-based Sketches for Frequency Estimation in Data Streams without Ground Truth
Xinyu Yuan, Yan Qiao, Meng Li, Zhenchun Wei, Cuiying Feng, Zonghui Wang, Wenzhi Chen

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:17:27

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/14]:
- Meta-Learning Adaptive Loss Functions
Christian Raymond, Qi Chen, Bing Xue, Mengjie Zhang