Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:45:01

Statistical Query Lower Bounds for Smoothed Agnostic Learning
Ilias Diakonikolas, Daniel M. Kane
arxiv.org/abs/2602.21191 arxiv.org/pdf/2602.21191 arxiv.org/html/2602.21191
arXiv:2602.21191v1 Announce Type: new
Abstract: We study the complexity of smoothed agnostic learning, recently introduced in [CKKMS24], in which the learner competes with the best classifier in a target class under slight Gaussian perturbations of the inputs. Specifically, we focus on the prototypical task of agnostically learning halfspaces under subgaussian distributions in the smoothed model. The best known upper bound for this problem relies on $L_1$-polynomial regression and has complexity $d^{\tilde{O}(1/\sigma^2) \log(1/\epsilon)}$, where $\sigma$ is the smoothing parameter and $\epsilon$ is the excess error. Our main result is a Statistical Query (SQ) lower bound providing formal evidence that this upper bound is close to best possible. In more detail, we show that (even for Gaussian marginals) any SQ algorithm for smoothed agnostic learning of halfspaces requires complexity $d^{\Omega(1/\sigma^{2} \log(1/\epsilon))}$. This is the first non-trivial lower bound on the complexity of this task and nearly matches the known upper bound. Roughly speaking, we show that applying $L_1$-polynomial regression to a smoothed version of the function is essentially best possible. Our techniques involve finding a moment-matching hard distribution by way of linear programming duality. This dual program corresponds exactly to finding a low-degree approximating polynomial to the smoothed version of the target function (which turns out to be the same condition required for the $L_1$-polynomial regression to work). Our explicit SQ lower bound then comes from proving lower bounds on this approximation degree for the class of halfspaces.
toXiv_bot_toot

@hex@kolektiva.social
2026-01-24 23:08:38

I've listened to enough episodes of Revolutions to have a rough idea of what's coming next. If you're in the US, now is a good time to do some more long-term (the system will be down for months or years) kind of disaster planning.
That means building connections with your community, because you can't do everything yourself and you may have to figure out how to get things without state/capital functioning to facilitate that.

@robpike@hachyderm.io
2026-01-17 20:40:43

Yesterday was one of those good days doing hard stuff, playing with Chebyshev polynomials, binary coefficients, factorials of half integers, and other fun. All in the interest of a pointless decision to (re)implement the general 𝚪 function using the Lanczos approximation, but totally worth it for the experience alone.
The internet, the good old internet, is a treasure trove for this kind of work, especially but not exclusively wikipedia@wikis.world.
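For readers curious what that project involves: below is a minimal Python sketch of the Lanczos approximation to the Γ function, using the widely reproduced g = 7, n = 9 coefficient set. This is an illustration of the technique, not Rob Pike's implementation (which is in a different language and presumably handles the general complex case).

```python
import math

# Standard Lanczos coefficients for g = 7, n = 9 (a widely reproduced set;
# accurate to roughly 15 significant digits for real arguments).
_G = 7
_COEF = [
    0.99999999999980993,
    676.5203681218851,
    -1259.1392167224028,
    771.32342877765313,
    -176.61502916214059,
    12.507343278686905,
    -0.13857109526572012,
    9.9843695780195716e-6,
    1.5056327351493116e-7,
]

def lanczos_gamma(z: float) -> float:
    """Gamma(z) via the Lanczos approximation, for real non-pole arguments."""
    if z < 0.5:
        # Euler's reflection formula extends the approximation to z < 1/2.
        return math.pi / (math.sin(math.pi * z) * lanczos_gamma(1.0 - z))
    z -= 1.0
    a = _COEF[0]
    for i in range(1, len(_COEF)):
        a += _COEF[i] / (z + i)
    t = z + _G + 0.5
    return math.sqrt(2.0 * math.pi) * t ** (z + 0.5) * math.exp(-t) * a

# Sanity checks against known values: Gamma(5) = 4! and Gamma(1/2) = sqrt(pi).
print(lanczos_gamma(5.0))
print(lanczos_gamma(0.5))
```

Note the "factorials of half integers" connection from the post: Γ(1/2) = √π, and the reflection formula is what makes the half-plane z < 1/2 (including negative non-integers) reachable from the series.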

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:35:21

WeirNet: A Large-Scale 3D CFD Benchmark for Geometric Surrogate Modeling of Piano Key Weirs
Lisa Lüddecke, Michael Hohmann, Sebastian Eilermann, Jan Tillmann-Mumm, Pezhman Pourabdollah, Mario Oertel, Oliver Niggemann
arxiv.org/abs/2602.20714 arxiv.org/pdf/2602.20714 arxiv.org/html/2602.20714
arXiv:2602.20714v1 Announce Type: new
Abstract: Reliable prediction of hydraulic performance is challenging for Piano Key Weir (PKW) design because discharge capacity depends on three-dimensional geometry and operating conditions. Surrogate models can accelerate hydraulic-structure design, but progress is limited by the scarcity of large, well-documented datasets that jointly capture geometric variation, operating conditions, and functional performance. This study presents WeirNet, a large 3D CFD benchmark dataset for geometric surrogate modeling of PKWs. WeirNet contains 3,794 parametric, feasibility-constrained rectangular and trapezoidal PKW geometries, each scheduled at 19 discharge conditions using a consistent free-surface OpenFOAM workflow, resulting in 71,387 completed simulations that form the benchmark, with complete discharge coefficient labels. The dataset is released in multiple modalities (compact parametric descriptors, watertight surface meshes, and high-resolution point clouds), together with standardized tasks and in-distribution and out-of-distribution splits. Representative surrogate families are benchmarked for discharge coefficient prediction. Tree-based regressors on parametric descriptors achieve the best overall accuracy, while point- and mesh-based models remain competitive and offer parameterization-agnostic inference. All surrogates evaluate in milliseconds per sample, providing orders-of-magnitude speedups over CFD runtimes. Out-of-distribution results identify geometry shift as the dominant failure mode compared to unseen discharge values, and data-efficiency experiments show diminishing returns beyond roughly 60% of the training data. By publicly releasing the dataset together with simulation setups and evaluation pipelines, WeirNet establishes a reproducible framework for data-driven hydraulic modeling and enables faster exploration of PKW designs during the early stages of hydraulic planning.

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:08:08

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[4/6]:
- Neural Proposals, Symbolic Guarantees: Neuro-Symbolic Graph Generation with Hard Constraints
Chuqin Geng, Li Zhang, Mark Zhang, Haolin Ye, Ziyu Zhao, Xujie Si
arxiv.org/abs/2602.16954 mastoxiv.page/@arXiv_csLG_bot/
- Multi-Probe Zero Collision Hash (MPZCH): Mitigating Embedding Collisions and Enhancing Model Fres...
Ziliang Zhao, et al.
arxiv.org/abs/2602.17050 mastoxiv.page/@arXiv_csLG_bot/
- MASPO: Unifying Gradient Utilization, Probability Mass, and Signal Reliability for Robust and Sam...
Fu, Lin, Fang, Zheng, Hu, Shao, Qin, Pan, Zeng, Cai
arxiv.org/abs/2602.17550 mastoxiv.page/@arXiv_csLG_bot/
- A Theoretical Framework for Modular Learning of Robust Generative Models
Corinna Cortes, Mehryar Mohri, Yutao Zhong
arxiv.org/abs/2602.17554 mastoxiv.page/@arXiv_csLG_bot/
- Multi-Round Human-AI Collaboration with User-Specified Requirements
Sima Noorani, Shayan Kiyani, Hamed Hassani, George Pappas
arxiv.org/abs/2602.17646 mastoxiv.page/@arXiv_csLG_bot/
- NEXUS: A compact neural architecture for high-resolution spatiotemporal air quality forecasting i...
Rampunit Kumar, Aditya Maheshwari
arxiv.org/abs/2602.19654 mastoxiv.page/@arXiv_csLG_bot/
- Augmenting Lateral Thinking in Language Models with Humor and Riddle Data for the BRAINTEASER Task
Mina Ghashami, Soumya Smruti Mishra
arxiv.org/abs/2405.10385 mastoxiv.page/@arXiv_csCL_bot/
- Watermarking Language Models with Error Correcting Codes
Patrick Chao, Yan Sun, Edgar Dobriban, Hamed Hassani
arxiv.org/abs/2406.10281 mastoxiv.page/@arXiv_csCR_bot/
- Learning to Control Unknown Strongly Monotone Games
Siddharth Chandak, Ilai Bistritz, Nicholas Bambos
arxiv.org/abs/2407.00575 mastoxiv.page/@arXiv_csMA_bot/
- Classification and reconstruction for single-pixel imaging with classical and quantum neural netw...
Sofya Manko, Dmitry Frolovtsev
arxiv.org/abs/2407.12506 mastoxiv.page/@arXiv_quantph_b
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation
Weichen Wu, Gen Li, Yuting Wei, Alessandro Rinaldo
arxiv.org/abs/2410.16106 mastoxiv.page/@arXiv_statML_bo
- Big data approach to Kazhdan-Lusztig polynomials
Abel Lacabanne, Daniel Tubbenhauer, Pedro Vaz
arxiv.org/abs/2412.01283 mastoxiv.page/@arXiv_mathRT_bo
- MoEMba: A Mamba-based Mixture of Experts for High-Density EMG-based Hand Gesture Recognition
Mehran Shabanpour, Kasra Rad, Sadaf Khademi, Arash Mohammadi
arxiv.org/abs/2502.17457 mastoxiv.page/@arXiv_eessSP_bo
- Tightening Optimality gap with confidence through conformal prediction
Miao Li, Michael Klamkin, Russell Bent, Pascal Van Hentenryck
arxiv.org/abs/2503.04071 mastoxiv.page/@arXiv_statML_bo
- SEED: Towards More Accurate Semantic Evaluation for Visual Brain Decoding
Juhyeon Park, Peter Yongho Kim, Jiook Cha, Shinjae Yoo, Taesup Moon
arxiv.org/abs/2503.06437 mastoxiv.page/@arXiv_csCV_bot/
- How much does context affect the accuracy of AI health advice?
Prashant Garg, Thiemo Fetzer
arxiv.org/abs/2504.18310 mastoxiv.page/@arXiv_econGN_bo
- Reproducing and Improving CheXNet: Deep Learning for Chest X-ray Disease Classification
Daniel J. Strick, Carlos Garcia, Anthony Huang, Thomas Gardos
arxiv.org/abs/2505.06646 mastoxiv.page/@arXiv_eessIV_bo
- Sharp Gaussian approximations for Decentralized Federated Learning
Soham Bonnerjee, Sayar Karmakar, Wei Biao Wu
arxiv.org/abs/2505.08125 mastoxiv.page/@arXiv_statML_bo
- HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning
Chuhao Zhou, Jianfei Yang
arxiv.org/abs/2505.17645 mastoxiv.page/@arXiv_csCV_bot/
- A Copula Based Supervised Filter for Feature Selection in Diabetes Risk Prediction Using Machine ...
Agnideep Aich, Md Monzur Murshed, Sameera Hewage, Amanda Mayeaux
arxiv.org/abs/2505.22554 mastoxiv.page/@arXiv_statML_bo
- Synthesis of discrete-continuous quantum circuits with multimodal diffusion models
Florian Fürrutter, Zohim Chandani, Ikko Hamamura, Hans J. Briegel, Gorka Muñoz-Gil
arxiv.org/abs/2506.01666 mastoxiv.page/@arXiv_quantph_b

@arXiv_csDS_bot@mastoxiv.page
2026-02-10 10:15:16

Neighborhood-Aware Graph Labeling Problem
Mohammad Shahverdikondori, Sepehr Elahi, Patrick Thiran, Negar Kiyavash
arxiv.org/abs/2602.08098 arxiv.org/pdf/2602.08098 arxiv.org/html/2602.08098
arXiv:2602.08098v1 Announce Type: new
Abstract: Motivated by optimization oracles in bandits with network interference, we study the Neighborhood-Aware Graph Labeling (NAGL) problem. Given a graph $G = (V,E)$, a label set of size $L$, and local reward functions $f_v$ accessed via evaluation oracles, the objective is to assign labels to maximize $\sum_{v \in V} f_v(x_{N[v]})$, where each term depends on the closed neighborhood of $v$. Two vertices co-occur in some neighborhood term exactly when their distance in $G$ is at most $2$, so the dependency graph is the squared graph $G^2$ and $\mathrm{tw}(G^2)$ governs exact algorithms and matching fine-grained lower bounds. Accordingly, we show that this dependence is inherent: NAGL is NP-hard even on star graphs with binary labels and, assuming SETH, admits no $(L-\varepsilon)^{\mathrm{tw}(G^2)}\cdot n^{O(1)}$-time algorithm for any $\varepsilon>0$. We match this with an exact dynamic program on a tree decomposition of $G^2$ running in $O\!\left(n\cdot \mathrm{tw}(G^2)\cdot L^{\mathrm{tw}(G^2)+1}\right)$ time. For approximation, unless $\mathsf{P}=\mathsf{NP}$, for every $\varepsilon>0$ there is no polynomial-time $n^{1-\varepsilon}$-approximation on general graphs even under the promise $\mathrm{OPT}>0$; without the promise $\mathrm{OPT}>0$, no finite multiplicative approximation ratio is possible. In the nonnegative-reward regime, we give polynomial-time approximation algorithms for NAGL in two settings: (i) given a proper $q$-coloring of $G^2$, we obtain a $1/q$-approximation; and (ii) on planar graphs of bounded maximum degree, we develop a Baker-type polynomial-time approximation scheme (PTAS), which becomes an efficient PTAS (EPTAS) when $L$ is constant.
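To make the NAGL objective concrete, here is a hypothetical toy instance in Python (the graph, labels, and reward functions are invented for illustration and are not from the paper). It brute-forces all labelings of a star with binary labels to maximize $\sum_v f_v(x_{N[v]})$; the exponential enumeration is consistent with the abstract's NP-hardness even on stars.

```python
from itertools import product

# Hypothetical toy NAGL instance (illustration only, not from the paper).
# Graph: the star K_{1,3} with center 0 and leaves 1, 2, 3; binary labels.
V = [0, 1, 2, 3]
N = {0: (0, 1, 2, 3), 1: (1, 0), 2: (2, 0), 3: (3, 0)}  # closed neighborhoods
LABELS = (0, 1)

def f(v, local):
    """Toy local reward f_v, evaluated on the labels of N[v] in N's order."""
    if v == 0:
        # The center is rewarded only if its whole closed neighborhood agrees.
        return 2.0 if len(set(local)) == 1 else 0.0
    # A leaf is rewarded for disagreeing with the center.
    return 1.0 if local[0] != local[1] else 0.0

# Brute force over all L^{|V|} labelings.
best_val, best_x = -1.0, None
for lab in product(LABELS, repeat=len(V)):
    x = dict(zip(V, lab))
    val = sum(f(v, tuple(x[u] for u in N[v])) for v in V)
    if val > best_val:
        best_val, best_x = val, x

print(best_val)
```

Here the optimum (value 3.0, flipping every leaf against the center) beats the all-agree labeling (value 2.0) — the tension between overlapping neighborhood terms is exactly what makes the problem hard even on stars.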

@arXiv_csDS_bot@mastoxiv.page
2026-02-10 21:08:46

Replaced article(s) found for cs.DS. arxiv.org/list/cs.DS/new
[1/1]:
- Fully Dynamic Adversarially Robust Correlation Clustering in Polylogarithmic Update Time
Vladimir Braverman, Prathamesh Dharangutte, Shreyas Pai, Vihan Shah, Chen Wang
arxiv.org/abs/2411.09979 mastoxiv.page/@arXiv_csDS_bot/
- A Simple and Combinatorial Approach to Proving Chernoff Bounds and Their Generalizations
William Kuszmaul
arxiv.org/abs/2501.03488 mastoxiv.page/@arXiv_csDS_bot/
- The Structural Complexity of Matrix-Vector Multiplication
Emile Anand, Jan van den Brand, Rose McCarty
arxiv.org/abs/2502.21240 mastoxiv.page/@arXiv_csDS_bot/
- Clustering under Constraints: Efficient Parameterized Approximation Schemes
Sujoy Bhore, Ameet Gadekar, Tanmay Inamdar
arxiv.org/abs/2504.06980 mastoxiv.page/@arXiv_csDS_bot/
- Minimizing Envy and Maximizing Happiness in Graphical House Allocation
Anubhav Dhar, Ashlesha Hota, Palash Dey, Sudeshna Kolay
arxiv.org/abs/2505.00296 mastoxiv.page/@arXiv_csDS_bot/
- Fast and Simple Densest Subgraph with Predictions
Thai Bui, Luan Nguyen, Hoa T. Vu
arxiv.org/abs/2505.12600 mastoxiv.page/@arXiv_csDS_bot/
- Compressing Suffix Trees by Path Decompositions
Becker, Cenzato, Gagie, Kim, Koerkamp, Manzini, Prezza
arxiv.org/abs/2506.14734 mastoxiv.page/@arXiv_csDS_bot/
- Improved sampling algorithms and functional inequalities for non-log-concave distributions
Yuchen He, Zhehan Lei, Jianan Shao, Chihao Zhang
arxiv.org/abs/2507.11236 mastoxiv.page/@arXiv_csDS_bot/
- Deterministic Lower Bounds for $k$-Edge Connectivity in the Distributed Sketching Model
Peter Robinson, Ming Ming Tan
arxiv.org/abs/2507.11257 mastoxiv.page/@arXiv_csDS_bot/
- Optimally detecting uniformly-distributed $\ell_2$ heavy hitters in data streams
Santhoshini Velusamy, Huacheng Yu
arxiv.org/abs/2509.07286 mastoxiv.page/@arXiv_csDS_bot/
- Uncrossed Multiflows and Applications to Disjoint Paths
Chandra Chekuri, Guyslain Naves, Joseph Poremba, F. Bruce Shepherd
arxiv.org/abs/2511.00254 mastoxiv.page/@arXiv_csDS_bot/
- Dynamic Matroids: Base Packing and Covering
Tijn de Vos, Mara Grilnberger
arxiv.org/abs/2511.15460 mastoxiv.page/@arXiv_csDS_bot/
- Branch-width of connectivity functions is fixed-parameter tractable
Tuukka Korhonen, Sang-il Oum
arxiv.org/abs/2601.04756 mastoxiv.page/@arXiv_csDS_bot/
- CoinPress: Practical Private Mean and Covariance Estimation
Sourav Biswas, Yihe Dong, Gautam Kamath, Jonathan Ullman
arxiv.org/abs/2006.06618
- The Ideal Membership Problem and Abelian Groups
Andrei A. Bulatov, Akbar Rafiey
arxiv.org/abs/2201.05218
- Bridging Classical and Quantum: Group-Theoretic Approach to Quantum Circuit Simulation
Daksh Shami
arxiv.org/abs/2407.19575 mastoxiv.page/@arXiv_quantph_b
- Young domination on Hamming rectangles
Janko Gravner, Matjaž Krnc, Martin Milanič, Jean-Florent Raymond
arxiv.org/abs/2501.03788 mastoxiv.page/@arXiv_mathCO_bo
- On the Space Complexity of Online Convolution
Joel Daniel Andersson, Amir Yehudayoff
arxiv.org/abs/2505.00181 mastoxiv.page/@arXiv_csCC_bot/
- Universal Solvability for Robot Motion Planning on Graphs
Anubhav Dhar, Pranav Nyati, Tanishq Prasad, Ashlesha Hota, Sudeshna Kolay
arxiv.org/abs/2506.18755 mastoxiv.page/@arXiv_csCC_bot/
- Colorful Minors
Evangelos Protopapas, Dimitrios M. Thilikos, Sebastian Wiederrecht
arxiv.org/abs/2507.10467
- Learning fermionic linear optics with Heisenberg scaling and physical operations
Aria Christensen, Andrew Zhao
arxiv.org/abs/2602.05058

@arXiv_csDS_bot@mastoxiv.page
2026-02-10 10:40:45

Submodular Maximization over a Matroid $k$-Intersection: Multiplicative Improvement over Greedy
Moran Feldman, Justin Ward
arxiv.org/abs/2602.08473 arxiv.org/pdf/2602.08473 arxiv.org/html/2602.08473
arXiv:2602.08473v1 Announce Type: new
Abstract: We study the problem of maximizing a non-negative monotone submodular objective $f$ subject to the intersection of $k$ arbitrary matroid constraints. The natural greedy algorithm guarantees $(k+1)$-approximation for this problem, and the state-of-the-art algorithm only improves this approximation ratio to $k$. We give a $\frac{2k\ln 2}{1+\ln 2}+O(\sqrt{k})<0.819k+O(\sqrt{k})$ approximation for this problem. Our result is the first multiplicative improvement over the approximation ratio of the greedy algorithm for general $k$. We further show that our algorithm can be used to obtain roughly the same approximation ratio also for the more general problem in which the objective is not guaranteed to be monotone (the sublinear term in the approximation ratio becomes $O(k^{2/3})$ rather than $O(\sqrt{k})$ in this case).
All of our results hold also when the $k$-matroid intersection constraint is replaced with a more general matroid $k$-parity constraint. Furthermore, unlike the case in many of the previous works, our algorithms run in time that is independent of $k$ and polynomial in the size of the ground set. Our algorithms are based on a hybrid greedy local search approach recently introduced by Singer and Thiery (STOC 2025) for the weighted matroid $k$-intersection problem, which is a special case of the problem we consider. Leveraging their approach in the submodular setting requires several non-trivial insights and algorithmic modifications since the marginals of a submodular function $f$, which correspond to the weights in the weighted case, are not independent of the algorithm's internal randomness. In the special weighted case studied by Singer and Thiery, our algorithms reduce to a variant of their algorithm with an improved approximation ratio of $k\ln 2+1-\ln 2<0.694k+0.307$, compared to an approximation ratio of $\frac{k+1}{2\ln 2}\approx 0.722k+0.722$ guaranteed by Singer and Thiery.
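The greedy baseline the abstract starts from is simple to state: repeatedly add the feasible element with the largest marginal gain. Below is a hedged Python sketch of that classical $(k+1)$-approximate greedy for $k = 2$ partition matroids with a toy coverage objective; the ground set, matroids, and data are invented for illustration, and this is the baseline the paper improves on, not the paper's algorithm.

```python
# Illustrative greedy for monotone submodular maximization over the
# intersection of k matroids (here k = 2 partition matroids).
# All data below is made up for the example.

# Ground set: candidate "sensors", each covering a set of targets.
COVER = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
    "d": {1, 6},
    "e": {2, 5},
}

def f(S):
    """Monotone submodular coverage objective: number of targets covered."""
    return len(set().union(*(COVER[e] for e in S)) if S else set())

# Two partition matroids: at most one element per group, in each partition.
PART1 = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 2}
PART2 = {"a": 0, "b": 1, "c": 0, "d": 1, "e": 2}

def independent(S):
    """S is feasible iff it is independent in both partition matroids."""
    for part in (PART1, PART2):
        groups = [part[e] for e in S]
        if len(groups) != len(set(groups)):
            return False
    return True

def greedy():
    S = set()
    while True:
        gains = [
            (f(S | {e}) - f(S), e)
            for e in COVER
            if e not in S and independent(S | {e})
        ]
        if not gains or max(gains)[0] <= 0:
            return S
        S.add(max(gains)[1])

sol = greedy()
print(sol, f(sol))
```

On this instance greedy returns {"b", "c", "e"} covering 5 of the 6 targets. The point of the abstract is that, in the worst case, this simple rule can lose a factor of about $k+1$, and that beating even a factor of $k$ multiplicatively (to roughly $0.819k$) required the hybrid greedy local search machinery.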