Crosslisted article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[3/3]:
- Functional Continuous Decomposition
Teymur Aghayev
https://arxiv.org/abs/2602.20857 https://mastoxiv.page/@arXiv_eessSP_bot/116130499236089653
- SpatiaLQA: A Benchmark for Evaluating Spatial Logical Reasoning in Vision-Language Models
Xie, Zhang, Shan, Zhu, Tang, Wei, Song, Wan, Song
https://arxiv.org/abs/2602.20901 https://mastoxiv.page/@arXiv_csCV_bot/116130845273808954
- Some Simple Economics of AGI
Christian Catalini, Xiang Hui, Jane Wu
https://arxiv.org/abs/2602.20946 https://mastoxiv.page/@arXiv_econGN_bot/116130470423837005
- Multimodal MRI Report Findings Supervised Brain Lesion Segmentation with Substructures
Yubin Ge, Yongsong Huang, Xiaofeng Liu
https://arxiv.org/abs/2602.20994 https://mastoxiv.page/@arXiv_eessIV_bot/116130212832138624
- MIP Candy: A Modular PyTorch Framework for Medical Image Processing
Tianhao Fu, Yucheng Chen
https://arxiv.org/abs/2602.21033 https://mastoxiv.page/@arXiv_csCV_bot/116130864279556063
- Empirically Calibrated Conditional Independence Tests
Milleno Pan, Antoine de Mathelin, Wesley Tansey
https://arxiv.org/abs/2602.21036 https://mastoxiv.page/@arXiv_statME_bot/116130690605113562
- Is Multi-Distribution Learning as Easy as PAC Learning: Sharp Rates with Bounded Label Noise
Rafael Hanashiro, Abhishek Shetty, Patrick Jaillet
https://arxiv.org/abs/2602.21039 https://mastoxiv.page/@arXiv_statML_bot/116130572661848449
- Position-Aware Sequential Attention for Accurate Next Item Recommendations
Timur Nabiev, Evgeny Frolov
https://arxiv.org/abs/2602.21052 https://mastoxiv.page/@arXiv_csIR_bot/116130263323086316
- Motivation is Something You Need
Mehdi Acheli, Walid Gaaloul
https://arxiv.org/abs/2602.21064 https://mastoxiv.page/@arXiv_csAI_bot/116130680774678580
- An Enhanced Projection Pursuit Tree Classifier with Visual Methods for Assessing Algorithmic Impr...
Natalia da Silva, Dianne Cook, Eun-Kyung Lee
https://arxiv.org/abs/2602.21130 https://mastoxiv.page/@arXiv_statML_bot/116130610674573081
- Complexity of Classical Acceleration for $\ell_1$-Regularized PageRank
Kimon Fountoulakis, David Martínez-Rubio
https://arxiv.org/abs/2602.21138 https://mastoxiv.page/@arXiv_mathOC_bot/116130547076073836
- LUMEN: Longitudinal Multi-Modal Radiology Model for Prognosis and Diagnosis
Jiang, Yang, Nath, Parida, Kulkarni, Xu, Xu, Anwar, Roth, Linguraru
https://arxiv.org/abs/2602.21142 https://mastoxiv.page/@arXiv_csCV_bot/116130871488694585
- A Benchmark for Deep Information Synthesis
Debjit Paul, et al.
https://arxiv.org/abs/2602.21143 https://mastoxiv.page/@arXiv_csAI_bot/116130692571594706
- Scaling State-Space Models on Multiple GPUs with Tensor Parallelism
Anurag Dutt, Nimit Shah, Hazem Masarani, Anshul Gandhi
https://arxiv.org/abs/2602.21144 https://mastoxiv.page/@arXiv_csDC_bot/116130520888343997
- Not Just How Much, But Where: Decomposing Epistemic Uncertainty into Per-Class Contributions
Mame Diarra Toure, David A. Stephens
https://arxiv.org/abs/2602.21160 https://mastoxiv.page/@arXiv_statML_bot/116130618512594211
- Aletheia tackles FirstProof autonomously
Tony Feng, et al.
https://arxiv.org/abs/2602.21201 https://mastoxiv.page/@arXiv_csAI_bot/116130705679345625
- Squint: Fast Visual Reinforcement Learning for Sim-to-Real Robotics
Abdulaziz Almuzairee, Henrik I. Christensen
https://arxiv.org/abs/2602.21203 https://mastoxiv.page/@arXiv_csRO_bot/116130765974498223
toXiv_bot_toot
Ugh, I'm really getting tired of these. Most of them aren't even slightly relevant but fall in the "A package is on its way to you" category, asking whether you want anything done with it other than having it delivered to the door. Of course there are also notifications from the app, on top of the fact that I could manage it all from there if I wanted to. It should be illegal to send automated emails that can't be unsubscribed from.
I got the demo of this game but I am too moronic to pass the very first level.
https://store.steampowered.com/app/2707490/Tower_Factory/
One could comment on this post with:
What a sad sack!
Or one could say: Mark Carney, you did everything right!
Or one can analyze it seriously as politics: then one would have to conclude that Trump has thereby sealed the future of this fantasy committee. As long as membership depends on the whims of an operetta king, it will have no relevance whatsoever.
As far as I understand (granted, I don't understand that much, but...) there is a legitimate and actively debated position in philosophy of mind and cognitive science regarding ant colonies.
That is, colony-level cognition may be real, not metaphorical. Ant colonies:
- integrate information over time
- exhibit memory (via pheromone landscapes)
- solve optimisation problems
- adapt flexibly to novel conditions
- show something like attention (resource …
Some City Some Nature 🏙️
📷 Nikon F4E
🎞️ ERA 100, expired 1993
#filmphotography #Photography #blackandwhite
Estimation of Confidence Bounds in Binary Classification using Wilson Score Kernel Density Estimation
Thorbjørn Mosekjær Iversen, Zebin Duan, Frederik Hagelskjær
https://arxiv.org/abs/2602.20947 https://arxiv.org/pdf/2602.20947 https://arxiv.org/html/2602.20947
arXiv:2602.20947v1 Announce Type: new
Abstract: The performance and ease of use of deep learning-based binary classifiers have improved significantly in recent years. This has opened up the potential for automating critical inspection tasks that have traditionally been trusted only to manual inspection. However, applying binary classifiers in critical operations depends on estimating reliable confidence bounds so that system performance can be ensured up to a given statistical significance. We present Wilson Score Kernel Density Classification, a novel kernel-based method for estimating confidence bounds in binary classification. At its core is the Wilson Score Kernel Density Estimator, a function estimator for confidence bounds in binomial experiments with conditionally varying success probabilities. Our method is evaluated in the context of selective classification on four different datasets, illustrating its use as a classification head for any feature extractor, including vision foundation models. The proposed method shows performance similar to Gaussian Process Classification, but at lower computational complexity.
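The Wilson score interval the abstract builds on is a standard bound for a binomial proportion; a minimal sketch in plain Python (the function name and defaults are ours, not the paper's):

```python
import math

def wilson_bounds(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 gives ~95%)."""
    if n == 0:
        return (0.0, 1.0)  # no observations: vacuous bounds
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom          # shrinks p toward 0.5
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - half, center + half)
```

The paper's estimator presumably replaces the raw counts with kernel-weighted ones so the bound can vary with a conditioning feature; that extension is not shown here.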
T1: One-to-One Channel-Head Binding for Multivariate Time-Series Imputation
Dongik Park, Hyunwoo Ryu, Suahn Bae, Keondo Park, Hyung-Sin Kim
https://arxiv.org/abs/2602.21043 https://arxiv.org/pdf/2602.21043 https://arxiv.org/html/2602.21043
arXiv:2602.21043v1 Announce Type: new
Abstract: Imputing missing values in multivariate time series remains challenging, especially under diverse missing patterns and heavy missingness. Existing methods suffer from suboptimal performance as corrupted temporal features hinder effective cross-variable information transfer, amplifying reconstruction errors. Robust imputation requires both extracting temporal patterns from sparse observations within each variable and selectively transferring information across variables--yet current approaches excel at one while compromising the other. We introduce T1 (Time series imputation with 1-to-1 channel-head binding), a CNN-Transformer hybrid architecture that achieves robust imputation through Channel-Head Binding--a mechanism creating one-to-one correspondence between CNN channels and attention heads. This design enables selective information transfer: when missingness corrupts certain temporal patterns, their corresponding attention pathways adaptively down-weight based on remaining observable patterns while preserving reliable cross-variable connections through unaffected channels. Experiments on 11 benchmark datasets demonstrate that T1 achieves state-of-the-art performance, reducing MSE by 46% on average compared to the second-best baseline, with particularly strong gains under extreme sparsity (70% missing ratio). The model generalizes to unseen missing patterns without retraining and uses a consistent hyperparameter configuration across all datasets. The code is available at https://github.com/Oppenheimerdinger/T1.
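One way to read the abstract's "one-to-one channel-head binding" is that attention head h operates only on the features of channel h. A minimal NumPy sketch of that reading (the function name and tensor layout are our assumptions, not the paper's API; the actual T1 architecture is in the linked repo):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def channel_bound_attention(feats):
    """feats: (C, T, d) per-channel temporal features, e.g. CNN outputs.
    Head h attends over time using only channel h (one-to-one binding)."""
    C, T, d = feats.shape
    out = np.empty_like(feats)
    for h in range(C):
        q = k = v = feats[h]                   # (T, d): head h bound to channel h
        attn = softmax(q @ k.T / np.sqrt(d))   # (T, T) temporal attention weights
        out[h] = attn @ v
    return out
```

Under this reading, corrupted channels only pollute their own head's pathway, which is one way the selective down-weighting described in the abstract could be realized.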