Theory of Scaling Laws for In-Context Regression: Depth, Width, Context and Time
Blake Bordelon, Mary I. Letey, Cengiz Pehlevan
https://arxiv.org/abs/2510.01098 https://
Assumption-lean Inference for Network-linked Data
Wei Li, Nilanjan Chakraborty, Robert Lunde
https://arxiv.org/abs/2510.00287 https://arxiv.org/pdf/2510.00…
Mathematical Theory of Collinearity Effects on Machine Learning Variable Importance Measures
Kelvyn K. Bladen, D. Richard Cutler, Alan Wisler
https://arxiv.org/abs/2510.00557 ht…
Guaranteed Noisy CP Tensor Recovery via Riemannian Optimization on the Segre Manifold
Ke Xu, Yuefeng Han
https://arxiv.org/abs/2510.00569 https://arxiv.org…
Information-Computation Tradeoffs for Noiseless Linear Regression with Oblivious Contamination
Ilias Diakonikolas, Chao Gao, Daniel M. Kane, John Lafferty, Ankit Pensia
https://arxiv.org/abs/2510.10665
On function-on-function linear quantile regression
Muge Mutis, Ufuk Beyaztas, Filiz Karaman, Han Lin Shang
https://arxiv.org/abs/2510.10792 https://arxiv.o…
An efficient algorithm for kernel quantile regression
Shengxiang Deng, Xudong Li, Yangjing Zhang
https://arxiv.org/abs/2510.07929 https://arxiv.org/pdf/251…
Cellular Learning: Scattered Data Regression in High Dimensions via Voronoi Cells
Shankar Prasad Sastry
https://arxiv.org/abs/2510.03810 https://arxiv.org/…
Learning Linear Regression with Low-Rank Tasks in-Context
Kaito Takanami, Takashi Takahashi, Yoshiyuki Kabashima
https://arxiv.org/abs/2510.04548 https://a…
Uncertainty in Machine Learning
Hans Weytjens, Wouter Verbeke
https://arxiv.org/abs/2510.06007 https://arxiv.org/pdf/2510.06007
Robust Functional Logistic Regression
Berkay Akturk, Ufuk Beyaztas, Han Lin Shang
https://arxiv.org/abs/2510.12048 https://arxiv.org/pdf/2510.12048
Locally Linear Convergence for Nonsmooth Convex Optimization via Coupled Smoothing and Momentum
Reza Rahimi Baghbadorani, Sergio Grammatico, Peyman Mohajerin Esfahani
https://arxiv.org/abs/2511.10239 https://arxiv.org/pdf/2511.10239 https://arxiv.org/html/2511.10239
arXiv:2511.10239v1 Announce Type: new
Abstract: We propose an adaptive accelerated smoothing technique for nonsmooth convex optimization problems in which the smoothing update rule is coupled with the momentum parameter. We also extend the setting to the case where the objective is the sum of two nonsmooth functions. We establish a global sublinear convergence guarantee of $O(1/k)$, which is provably optimal for the studied class of functions, along with a local linear rate when the nonsmooth term satisfies a so-called local strong convexity condition. We validate the performance of our algorithm on several problem classes, including $\ell_1$-regularized regression (the Lasso problem), sparse semidefinite programming (the MaxCut problem), nuclear-norm minimization with an application to model-free fault diagnosis, and $\ell_1$-regularized model predictive control, to showcase the benefits of the coupling. Interestingly, although the global result guarantees $O(1/k)$ convergence, we consistently observe a practical transient rate of $O(1/k^2)$, followed by asymptotic linear convergence as anticipated by the theory. This two-phase behavior can also be explained in view of the proposed smoothing rule.
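A minimal sketch of the coupling idea described in the abstract (not the paper's exact algorithm): an accelerated gradient method on a Huber-smoothed Lasso objective, where the smoothing parameter mu_k shrinks along the same index that drives the momentum schedule, and the step size adapts to the smoothing-dependent Lipschitz constant. The schedule mu0/(k+1) and the FISTA-style momentum are illustrative assumptions.

```python
# Sketch only: accelerated gradient on a Huber-smoothed Lasso,
# with the smoothing parameter tied to the momentum index.
import numpy as np

def huber_grad(x, mu):
    # Gradient of the Huber smoothing of ||x||_1 (coordinate-wise).
    return np.clip(x / mu, -1.0, 1.0)

def smoothed_accel_lasso(A, b, lam, iters=500, mu0=1.0):
    n = A.shape[1]
    x = y = np.zeros(n)
    L0 = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the least-squares part
    t = 1.0
    for k in range(iters):
        mu = mu0 / (k + 1)                  # smoothing decays with the momentum index (assumed rule)
        L = L0 + lam / mu                   # Lipschitz constant of the smoothed objective
        grad = A.T @ (A @ y - b) + lam * huber_grad(y, mu)
        x_new = y - grad / L
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x
```

Because the smoothing error and the momentum aggressiveness shrink together, such schemes can exhibit the fast $O(1/k^2)$ transient of smooth accelerated methods before the smoothing bias dominates, consistent with the two-phase behavior the abstract reports.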
Bayesian Transfer Learning for High-Dimensional Linear Regression via Adaptive Shrinkage
Parsa Jamshidian, Donatello Telesca
https://arxiv.org/abs/2510.03449 https://
Crosslisted article(s) found for physics.geo-ph. https://arxiv.org/list/physics.geo-ph/new
[1/1]:
- Rethinking deep learning: linear regression remains a key benchmark in predicting terrestrial wat...
Nie, Kumar, Chen, Zhao, Skulovich, Yoo, Pflug, Ahmad, Konapala
A mesh-free, derivative-free, matrix-free, and highly parallel localized stochastic method for high-dimensional semilinear parabolic PDEs
Shuixin Fang, Changtao Sheng, Bihao Su, Tao Zhou
https://arxiv.org/abs/2510.02635
Risk Phase Transitions in Spiked Regression: Alignment Driven Benign and Catastrophic Overfitting
Jiping Li, Rishi Sonthalia
https://arxiv.org/abs/2510.01414 https://
Bayesian Profile Regression with Linear Mixed Models (Profile-LMM) applied to Longitudinal Exposome Data
Matteo Amestoy, Mark van de Wiel, Jeroen Lakerveld, Wessel van Wieringen
https://arxiv.org/abs/2510.08304
Optimality and computational barriers in variable selection under dependence
Ming Gao, Bryon Aragam
https://arxiv.org/abs/2510.03990 https://arxiv.org/pdf/…
Fitting sparse high-dimensional varying-coefficient models with Bayesian regression tree ensembles
Soham Ghosh, Saloni Bhogale, Sameer K. Deshpande
https://arxiv.org/abs/2510.08204
dHPR: A Distributed Halpern Peaceman--Rachford Method for Non-smooth Distributed Optimization Problems
Zhangcheng Feng, Defeng Sun, Yancheng Yuan, Guojun Zhang
https://arxiv.org/abs/2511.10069 https://arxiv.org/pdf/2511.10069 https://arxiv.org/html/2511.10069
arXiv:2511.10069v1 Announce Type: new
Abstract: This paper introduces the distributed Halpern Peaceman--Rachford (dHPR) method, an efficient algorithm for solving distributed convex composite optimization problems with non-smooth objectives that achieves a non-ergodic $O(1/k)$ iteration complexity with respect to the Karush--Kuhn--Tucker residual. By leveraging the symmetric Gauss--Seidel decomposition, dHPR effectively decouples the linear operators in the objective functions and the consensus constraints while maintaining parallelizability and avoiding additional large proximal terms, leading to a decentralized implementation with provably fast convergence. The superior performance of dHPR is demonstrated through comprehensive numerical experiments on distributed LASSO, group LASSO, and $L_1$-regularized logistic regression problems.
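A minimal, centralized sketch of the underlying Halpern-anchored Peaceman--Rachford iteration on a single-agent Lasso, min_x 0.5*||Ax-b||^2 + lam*||x||_1. The paper's dHPR additionally decouples consensus constraints across agents via the symmetric Gauss--Seidel decomposition, which this sketch omits; the anchoring weight 1/(k+2) is the standard Halpern choice, assumed here.

```python
# Sketch only: Halpern-anchored Peaceman--Rachford splitting on the Lasso.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def halpern_pr_lasso(A, b, lam, gamma=1.0, iters=500):
    n = A.shape[1]
    M = np.eye(n) + gamma * (A.T @ A)       # resolvent system for the quadratic term
    Atb = gamma * (A.T @ b)
    z0 = z = np.zeros(n)
    for k in range(iters):
        x = np.linalg.solve(M, z + Atb)               # prox of 0.5*||Ax-b||^2 at z
        u = soft_threshold(2 * x - z, gamma * lam)    # prox of lam*||.||_1 at the reflection
        Tz = z + 2 * (u - x)                # Peaceman--Rachford operator (2*prox_g - I)(2*prox_f - I)
        beta = 1.0 / (k + 2)                # Halpern anchoring weight
        z = beta * z0 + (1 - beta) * Tz     # anchored fixed-point step
    return np.linalg.solve(M, z + Atb)      # recover the primal iterate
```

The Halpern anchor is what converts the merely nonexpansive Peaceman--Rachford operator into an iteration with a non-ergodic $O(1/k)$ residual rate, matching the complexity claimed in the abstract.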
High-dimensional Analysis of Synthetic Data Selection
Parham Rezaei, Filip Kovacevic, Francesco Locatello, Marco Mondelli
https://arxiv.org/abs/2510.08123 https://
Inference in pseudo-observation-based regression using (biased) covariance estimation and naive bootstrapping
Simon Mack, Morten Overgaard, Dennis Dobler
https://arxiv.org/abs/2510.06815
Crosslisted article(s) found for stat.CO. https://arxiv.org/list/stat.CO/new
[1/1]:
- Bayesian Transfer Learning for High-Dimensional Linear Regression via Adaptive Shrinkage
Parsa Jamshidian, Donatello Telesca
Adaptive randomized pivoting and volume sampling
Ethan N. Epperly
https://arxiv.org/abs/2510.02513 https://arxiv.org/pdf/2510.02513
Repro Samples Method for Model-Free Inference in High-Dimensional Binary Classification
Xiaotian Hou, Peng Wang, Minge Xie, Linjun Zhang
https://arxiv.org/abs/2510.01468 https:/…