On function-on-function linear quantile regression
Muge Mutis, Ufuk Beyaztas, Filiz Karaman, Han Lin Shang
https://arxiv.org/abs/2510.10792 https://arxiv.o…
Information-Computation Tradeoffs for Noiseless Linear Regression with Oblivious Contamination
Ilias Diakonikolas, Chao Gao, Daniel M. Kane, John Lafferty, Ankit Pensia
https://arxiv.org/abs/2510.10665
Robust Functional Logistic Regression
Berkay Akturk, Ufuk Beyaztas, Han Lin Shang
https://arxiv.org/abs/2510.12048 https://arxiv.org/pdf/2510.12048
Locally Linear Convergence for Nonsmooth Convex Optimization via Coupled Smoothing and Momentum
Reza Rahimi Baghbadorani, Sergio Grammatico, Peyman Mohajerin Esfahani
https://arxiv.org/abs/2511.10239 https://arxiv.org/pdf/2511.10239 https://arxiv.org/html/2511.10239
arXiv:2511.10239v1 Announce Type: new
Abstract: We propose an adaptive accelerated smoothing technique for nonsmooth convex optimization in which the smoothing update rule is coupled with the momentum parameter. We also extend the setting to the case where the objective is the sum of two nonsmooth functions. Regarding convergence rate, we establish a global sublinear guarantee of $O(1/k)$, which is provably optimal for the studied class of functions, along with a local linear rate when the nonsmooth term satisfies a so-called local strong convexity condition. We validate the performance of our algorithm on several problem classes, including $\ell_1$-regularized regression (the Lasso problem), sparse semidefinite programming (the MaxCut problem), nuclear-norm minimization with an application in model-free fault diagnosis, and $\ell_1$-regularized model predictive control, to showcase the benefits of the coupling. An interesting observation is that although the global result guarantees $O(1/k)$ convergence, we consistently observe a practical transient rate of $O(1/k^2)$, followed by asymptotic linear convergence as anticipated by the theory. This two-phase behavior can also be explained in view of the proposed smoothing rule.
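The coupling idea can be illustrated on the Lasso example the abstract mentions. Below is a minimal sketch, assuming Huber-type (Moreau) smoothing of the $\ell_1$ term and an illustrative decay rule mu_k = mu0/k tied to the Nesterov momentum schedule; the paper's actual adaptive coupling rule is not reproduced here, and all function and parameter names are hypothetical.

import numpy as np

def huber_grad(x, mu):
    # Gradient of the Moreau (Huber) smoothing of |x| with parameter mu:
    # x/mu where |x| <= mu, sign(x) otherwise.
    return np.clip(x / mu, -1.0, 1.0)

def smoothed_accel_lasso(A, b, lam, mu0=1.0, iters=500):
    # Accelerated gradient on f(x) = 0.5*||Ax - b||^2 + lam * h_mu(x),
    # where h_mu is the Huber smoothing of ||x||_1 and mu is decreased
    # jointly with the momentum buildup (illustrative rule mu_k = mu0/k).
    n = A.shape[1]
    x = np.zeros(n); y = x.copy(); t = 1.0
    L0 = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the quadratic part
    for k in range(1, iters + 1):
        mu = mu0 / k                           # smoothing decays as momentum grows (assumed coupling)
        L = L0 + lam / mu                      # Lipschitz constant of the smoothed objective
        grad = A.T @ (A @ y - b) + lam * huber_grad(y, mu)
        x_new = y - grad / L                   # gradient step on the smoothed objective
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)   # Nesterov momentum step
        x, t = x_new, t_new
    return x

Since the smoothed gradient is exact once |y_i| > mu, shrinking mu alongside the momentum parameter is one plausible way to reproduce the two-phase behavior the abstract describes: fast accelerated progress while mu is large, then near-exact subgradient behavior as mu vanishes.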
Crosslisted article(s) found for physics.geo-ph. https://arxiv.org/list/physics.geo-ph/new
[1/1]:
- Rethinking deep learning: linear regression remains a key benchmark in predicting terrestrial wat...
Nie, Kumar, Chen, Zhao, Skulovich, Yoo, Pflug, Ahmad, Konapala
Accelerating Regression Tasks with Quantum Algorithms
Chenghua Liu, Zhengfeng Ji
https://arxiv.org/abs/2509.24757 https://arxiv.org/pdf/2509.24757
dHPR: A Distributed Halpern Peaceman--Rachford Method for Non-smooth Distributed Optimization Problems
Zhangcheng Feng, Defeng Sun, Yancheng Yuan, Guojun Zhang
https://arxiv.org/abs/2511.10069 https://arxiv.org/pdf/2511.10069 https://arxiv.org/html/2511.10069
arXiv:2511.10069v1 Announce Type: new
Abstract: This paper introduces the distributed Halpern Peaceman--Rachford (dHPR) method, an efficient algorithm for solving distributed convex composite optimization problems with non-smooth objectives, which achieves a non-ergodic $O(1/k)$ iteration complexity with respect to the Karush--Kuhn--Tucker residual. By leveraging the symmetric Gauss--Seidel decomposition, dHPR effectively decouples the linear operators in the objective functions and the consensus constraints while maintaining parallelizability and avoiding additional large proximal terms, leading to a decentralized implementation with provably fast convergence. The superior performance of dHPR is demonstrated through comprehensive numerical experiments on distributed LASSO, group LASSO, and $L_1$-regularized logistic regression problems.
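To make the Halpern anchoring concrete, here is a minimal single-machine sketch on the Lasso, assuming the standard Peaceman--Rachford operator built from two reflected resolvents and the Halpern step size 1/(k+2); the distributed consensus constraints and the symmetric Gauss--Seidel decomposition that define dHPR are not shown, and all names are hypothetical.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (coordinatewise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def halpern_pr_lasso(A, b, lam, gamma=1.0, iters=1000):
    # Halpern-anchored Peaceman--Rachford iteration for
    # min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # Halpern-type anchoring of a nonexpansive operator is what gives
    # the non-ergodic O(1/k) residual rate the abstract refers to.
    n = A.shape[1]
    M = np.linalg.inv(np.eye(n) + gamma * A.T @ A)   # resolvent of the quadratic term
    Atb = A.T @ b
    z0 = np.zeros(n); z = z0.copy()
    for k in range(iters):
        x = M @ (z + gamma * Atb)            # prox of the smooth-part term at z
        rf = 2.0 * x - z                      # reflected resolvent of f
        y = soft_threshold(rf, gamma * lam)   # prox of the l1 term at rf
        Tz = 2.0 * y - rf                     # Peaceman--Rachford operator T(z)
        z = z0 / (k + 2) + (k + 1) / (k + 2) * Tz   # Halpern anchoring toward z0
    return M @ (z + gamma * Atb)              # recover x = prox_f(z)

Plain Peaceman--Rachford would iterate z <- T(z) directly; the anchoring step averages T(z) with the fixed initial point z0, which is the mechanism behind the non-ergodic rate guarantee.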
Closed-form $\ell_r$ norm scaling with data for overparameterized linear regression and diagonal linear networks under $\ell_p$ bias
Shuofeng Zhang, Ard Louis
https://arxiv.org/abs/2509.21181
Cellular Learning: Scattered Data Regression in High Dimensions via Voronoi Cells
Shankar Prasad Sastry
https://arxiv.org/abs/2510.03810 https://arxiv.org/…
Risk Comparisons in Linear Regression: Implicit Regularization Dominates Explicit Regularization
Jingfeng Wu, Peter L. Bartlett, Jason D. Lee, Sham M. Kakade, Bin Yu
https://arxiv.org/abs/2509.17251
Learning Linear Regression with Low-Rank Tasks in-Context
Kaito Takanami, Takashi Takahashi, Yoshiyuki Kabashima
https://arxiv.org/abs/2510.04548 https://a…
An efficient algorithm for kernel quantile regression
Shengxiang Deng, Xudong Li, Yangjing Zhang
https://arxiv.org/abs/2510.07929 https://arxiv.org/pdf/251…
Bayesian Profile Regression with Linear Mixed Models (Profile-LMM) applied to Longitudinal Exposome Data
Matteo Amestoy, Mark van de Wiel, Jeroen Lakerveld, Wessel van Wieringen
https://arxiv.org/abs/2510.08304
Generalized Nonnegative Structured Kruskal Tensor Regression
Xinjue Wang, Esa Ollila, Sergiy A. Vorobyov, Ammar Mian
https://arxiv.org/abs/2509.19900 https://
Bayesian Transfer Learning for High-Dimensional Linear Regression via Adaptive Shrinkage
Parsa Jamshidian, Donatello Telesca
https://arxiv.org/abs/2510.03449 https://
Theory of Scaling Laws for In-Context Regression: Depth, Width, Context and Time
Blake Bordelon, Mary I. Letey, Cengiz Pehlevan
https://arxiv.org/abs/2510.01098 https://
Optimal estimation for regression discontinuity design with binary outcomes
Takuya Ishihara, Masayuki Sawada, Kohei Yata
https://arxiv.org/abs/2509.18857 https://
Uncertainty in Machine Learning
Hans Weytjens, Wouter Verbeke
https://arxiv.org/abs/2510.06007 https://arxiv.org/pdf/2510.06007
Optimality and computational barriers in variable selection under dependence
Ming Gao, Bryon Aragam
https://arxiv.org/abs/2510.03990 https://arxiv.org/pdf/…
Use multilevel models with {parsnip}: http://multilevelmod.tidymodels.org/ #rstats #ML
A mesh-free, derivative-free, matrix-free, and highly parallel localized stochastic method for high-dimensional semilinear parabolic PDEs
Shuixin Fang, Changtao Sheng, Bihao Su, Tao Zhou
https://arxiv.org/abs/2510.02635
Linear Regression under Missing or Corrupted Coordinates
Ilias Diakonikolas, Jelena Diakonikolas, Daniel M. Kane, Jasper C. H. Lee, Thanasis Pittas
https://arxiv.org/abs/2509.19242
Crop Spirals: Re-thinking the field layout for future robotic agriculture
Lakshan Lavan, Lanojithan Thiyagarasa, Udara Muthugala, Rajitha de Silva
https://arxiv.org/abs/2509.25091
Fitting sparse high-dimensional varying-coefficient models with Bayesian regression tree ensembles
Soham Ghosh, Saloni Bhogale, Sameer K. Deshpande
https://arxiv.org/abs/2510.08204
High-dimensional Analysis of Synthetic Data Selection
Parham Rezaei, Filip Kovacevic, Francesco Locatello, Marco Mondelli
https://arxiv.org/abs/2510.08123 https://
A Type 2 Fuzzy Set Approach for Building Linear Linguistic Regression Analysis under Multi Uncertainty
Junzo Watada, Pei-Chun Lin, Bo Wang, Jeng-Shyang Pan, Jose Guadalupe Flores Muniz
https://arxiv.org/abs/2509.10498
Inference in pseudo-observation-based regression using (biased) covariance estimation and naive bootstrapping
Simon Mack, Morten Overgaard, Dennis Dobler
https://arxiv.org/abs/2510.06815
Risk Phase Transitions in Spiked Regression: Alignment Driven Benign and Catastrophic Overfitting
Jiping Li, Rishi Sonthalia
https://arxiv.org/abs/2510.01414 https://
A note on the relation between one-step, outcome regression and IPW-type estimators of parameters with the mixed bias property
Andrea Rotnitzky, Ezequiel Smucler, James M. Robins
https://arxiv.org/abs/2509.22452
Vulnerability Patching Across Software Products and Software Components: A Case Study of Red Hat's Product Portfolio
Jukka Ruohonen, Sani Abdullahi, Abhishek Tiwari
https://arxiv.org/abs/2509.13117
Preventing Model Collapse Under Overparametrization: Optimal Mixing Ratios for Interpolation Learning and Ridge Regression
Anvit Garg, Sohom Bhattacharya, Pragya Sur
https://arxiv.org/abs/2509.22341
Assisting the Grading of a Handwritten General Chemistry Exam with Artificial Intelligence
Jan Cvengros, Gerd Kortemeyer
https://arxiv.org/abs/2509.10591 https://
Crosslisted article(s) found for stat.CO. https://arxiv.org/list/stat.CO/new
[1/1]:
- Bayesian Transfer Learning for High-Dimensional Linear Regression via Adaptive Shrinkage
Parsa Jamshidian, Donatello Telesca
Data coarse graining can improve model performance
Alex Nguyen, David J. Schwab, Vudtiwat Ngampruetikorn
https://arxiv.org/abs/2509.14498 https://arxiv.org…
The Impact of AI Adoption on Retail Across Countries and Industries
Yunqi Liu
https://arxiv.org/abs/2509.15885 https://arxiv.org/pdf/2509.15885
Mathematical Theory of Collinearity Effects on Machine Learning Variable Importance Measures
Kelvyn K. Bladen, D. Richard Cutler, Alan Wisler
https://arxiv.org/abs/2510.00557 ht…
Incorporating priors in learning: a random matrix study under a teacher-student framework
Malik Tiomoko, Ekkehard Schnoor
https://arxiv.org/abs/2509.22124 https://
Assumption-lean Inference for Network-linked Data
Wei Li, Nilanjan Chakraborty, Robert Lunde
https://arxiv.org/abs/2510.00287 https://arxiv.org/pdf/2510.00…
On the Rate of Gaussian Approximation for Linear Regression Problems
Marat Khusainov, Marina Sheshukova, Alain Durmus, Sergey Samsonov
https://arxiv.org/abs/2509.14039 https://
Some Simplifications for the Expectation-Maximization (EM) Algorithm: The Linear Regression Model Case
Daniel A. Griffith
https://arxiv.org/abs/2509.19461 https://
Optimal Nuisance Function Tuning for Estimating a Doubly Robust Functional under Proportional Asymptotics
Sean McGrath, Debarghya Mukherjee, Rajarshi Mukherjee, Zixiao Jolene Wang
https://arxiv.org/abs/2509.25536
Adaptive randomized pivoting and volume sampling
Ethan N. Epperly
https://arxiv.org/abs/2510.02513 https://arxiv.org/pdf/2510.02513
Pretrain-Test Task Alignment Governs Generalization in In-Context Learning
Mary I. Letey, Jacob A. Zavatone-Veth, Yue M. Lu, Cengiz Pehlevan
https://arxiv.org/abs/2509.26551 htt…
KOO Method-based Consistent Clustering for Group-wise Linear Regression with Graph Structure
M. Ohishi, R. Oda
https://arxiv.org/abs/2509.11103 https://arx…
Fast Estimation of Wasserstein Distances via Regression on Sliced Wasserstein Distances
Khai Nguyen, Hai Nguyen, Nhat Ho
https://arxiv.org/abs/2509.20508 https://
Guaranteed Noisy CP Tensor Recovery via Riemannian Optimization on the Segre Manifold
Ke Xu, Yuefeng Han
https://arxiv.org/abs/2510.00569 https://arxiv.org…
Repro Samples Method for Model-Free Inference in High-Dimensional Binary Classification
Xiaotian Hou, Peng Wang, Minge Xie, Linjun Zhang
https://arxiv.org/abs/2510.01468 https:/…
Least squares-based methods to bias adjustment in scalar-on-function regression model using a functional instrumental variable
Xiwei Chen, Ufuk Beyaztas, Caihong Qin, Heyang Ji, Gilson Honvoh, Roger S. Zoh, Lan Xue, Carmen D. Tekwe
https://arxiv.org/abs/2509.12122
A Random Matrix Perspective of Echo State Networks: From Precise Bias--Variance Characterization to Optimal Regularization
Yessin Moakher, Malik Tiomoko, Cosme Louart, Zhenyu Liao
https://arxiv.org/abs/2509.22011