2026-01-12 12:42:03
from my link log —
A unique performance optimization for a 3D geometry language.
https://cprimozic.net/notes/posts/persistent-expr-memo-optimization-for-geoscript/
saved 2026-01-11
"AI-Powered Price Optimization": US-Zustelldienst Instacart manipuliert Preise
Sogar bei Selbstabholung setzt Instacart für bestellte Lebensmittel höhere Preise an, individuell unterschiedlich. Selbes Produkt, selbe Zeit, selbe Filiale.
Convergence analysis of inexact MBA method for constrained upper-$\mathcal{C}^2$ optimization problems
Ruyu Liu, Shaohua Pan
https://arxiv.org/abs/2511.09940 https://arxiv.org/pdf/2511.09940 https://arxiv.org/html/2511.09940
arXiv:2511.09940v1 Announce Type: new
Abstract: This paper concerns a class of constrained optimization problems in which the objective and constraint functions are both upper-$\mathcal{C}^2$. For such nonconvex and nonsmooth optimization problems, we develop an inexact moving balls approximation (MBA) method with a workable inexactness criterion for solving the subproblems. By leveraging a global error bound for the strongly convex program associated with parametric optimization problems, we establish the full convergence of the iterate sequence under the partial bounded multiplier property (BMP) and the Kurdyka-{\L}ojasiewicz (KL) property of the constructed potential function, and achieve local convergence rates for the iterate and objective value sequences if the potential function satisfies the KL property of exponent $q\in[1/2,1)$. A verifiable condition is also provided to check whether the potential function satisfies the KL property of exponent $q\in[1/2,1)$ at a given critical point. To the best of our knowledge, this is the first implementable inexact MBA method with a full convergence certificate for constrained nonconvex and nonsmooth optimization problems.
toXiv_bot_toot
CloudX, which uses Anthropic's Claude to automate mobile ad pricing and inventory optimization for publishers, raised a $30M Series A led by Addition (Trishla Ostwal/Adweek)
https://www.adweek.com/media/adtech-founders-raise-30-million…
Primer to get you started with Optimization and Mathematical Programming in R #rstats
i’m reviewing a paper on reducing energy costs in large model training and it keeps slinging words like optimize and optimization around and calling other approaches suboptimal and i feel like i would be kind of an old crank if i were to ask if optimality is on the table here (it is not)
EDIT: hold on, maybe it is
Verification of Sequential Convex Programming for Parametric Non-convex Optimization
Rajiv Sambharya, Nikolai Matni, George Pappas
https://arxiv.org/abs/2511.10622 https://arxiv.org/pdf/2511.10622 https://arxiv.org/html/2511.10622
arXiv:2511.10622v1 Announce Type: new
Abstract: We introduce a verification framework to exactly verify the worst-case performance of sequential convex programming (SCP) algorithms for parametric non-convex optimization. The verification problem is formulated as an optimization problem that maximizes a performance metric (e.g., the suboptimality after a given number of iterations) over parameters constrained to be in a parameter set and iterate sequences consistent with the SCP update rules. Our framework is general, extending the notion of SCP to include both conventional variants such as trust-region, convex-concave, and prox-linear methods, and algorithms that combine convex subproblems with rounding steps, as in relaxing and rounding schemes. Unlike existing analyses that may only provide local guarantees under limited conditions, our framework delivers global worst-case guarantees--quantifying how well an SCP algorithm performs across all problem instances in the specified family. Applications in control, signal processing, and operations research demonstrate that our framework provides, for the first time, global worst-case guarantees for SCP algorithms in the parametric setting.
toXiv_bot_toot
'This is the dark side and true meaning of "business optimization." The optimal business pays its suppliers and workers nothing, and charges its customers everything it can. Obviously, businesses need to settle for suboptimal outcomes, because workers won't show up if they don't get paid, and customers won't buy things that cost everything they have⹋.
⹋ Unless, of course, you are an academic publisher, in which case this is just how you do business.'
…
S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
https://arxiv.org/abs/2511.10133 https://arxiv.org/pdf/2511.10133 https://arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, motivated by a broad range of applications including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or entail substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to 2--3$\times$ speedup compared to state-of-the-art baselines, while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
toXiv_bot_toot
We all favor efficiency, right? Efficiency is what drives profit, growth, share value, prosperity, etc, right? And localized sub-optimization drives whole systems optimization, right? Well, except when…
https://doctorow.medium.com/https-plur
NeuroSketch: An Effective Framework for Neural Decoding via Systematic Architectural Optimization
Gaorui Zhang, Zhizhang Yuan, Jialan Yang, Junru Chen, Li Meng, Yang Yang
https://arxiv.org/abs/2512.09524 https://arxiv.org/pdf/2512.09524 https://arxiv.org/html/2512.09524
arXiv:2512.09524v1 Announce Type: new
Abstract: Neural decoding, a critical component of Brain-Computer Interface (BCI), has recently attracted increasing research interest. Previous research has focused on leveraging signal processing and deep learning methods to enhance neural decoding performance. However, model architectures themselves remain underexplored, despite their proven importance in other tasks such as energy forecasting and image classification. In this study, we propose NeuroSketch, an effective framework for neural decoding via systematic architecture optimization. Starting with a study of basic architectures, we find that CNN-2D outperforms other architectures in neural decoding tasks and explore its effectiveness from temporal and spatial perspectives. Building on this, we optimize the architecture from the macro- to the micro-level, achieving performance improvements at each step. The exploration process and model validation span over 5,000 experiments across three distinct modalities (visual, auditory, and speech), three types of brain signals (EEG, SEEG, and ECoG), and eight diverse decoding tasks. Experimental results indicate that NeuroSketch achieves state-of-the-art (SOTA) performance across all evaluated datasets, positioning it as a powerful tool for neural decoding. Our code and scripts are available at https://github.com/Galaxy-Dawn/NeuroSketch.
toXiv_bot_toot
(LinkedIn) Job position: #Paradromics is looking for a heterogeneous integration engineer https://www.linkedin.com/feed/update/urn:li:activity:7415473562809102337/
The only thing more reliable than my Guardian's heals is my Linux OS. 🛡️🐧
Back in Tyria tonight streaming #GW2 from my trusty Linux PC! My machine has zero bloatware and 100% dragon-slaying optimization.
Come hang out! 👇 https://www.twitch.t…
Global Solutions to Non-Convex Functional Constrained Problems with Hidden Convexity
Ilyas Fatkhullin, Niao He, Guanghui Lan, Florian Wolf
https://arxiv.org/abs/2511.10626 https://arxiv.org/pdf/2511.10626 https://arxiv.org/html/2511.10626
arXiv:2511.10626v1 Announce Type: new
Abstract: Constrained non-convex optimization is fundamentally challenging, as global solutions are generally intractable and constraint qualifications may not hold. However, in many applications, including safe policy optimization in control and reinforcement learning, such problems possess hidden convexity, meaning they can be reformulated as convex programs via a nonlinear invertible transformation. Typically, such transformations are implicit or unknown, making a direct link with the convex program impossible. On the other hand, (sub-)gradients with respect to the original variables are often accessible or can be easily estimated, which motivates algorithms that operate directly in the original (non-convex) problem space using standard (sub-)gradient oracles. In this work, we develop the first algorithms to provably solve such non-convex problems to global minima. First, using a modified inexact proximal point method, we establish global last-iterate convergence guarantees with $\widetilde{\mathcal{O}}(\varepsilon^{-3})$ oracle complexity in the non-smooth setting. For smooth problems, we propose a new bundle-level type method based on linearly constrained quadratic subproblems, improving the oracle complexity to $\widetilde{\mathcal{O}}(\varepsilon^{-1})$. Surprisingly, despite non-convexity, our methodology does not require any constraint qualifications, can handle hidden convex equality constraints, and achieves complexities matching those for solving unconstrained hidden convex optimization.
toXiv_bot_toot
Weekend Reads
* EDNS client subnet in practice
https://farrokhi.net/posts/2025/10/edns-client-subnet-in-practice-evaluating-public-resolver-behaviors/
* BGP-based DDoS scrubbing services survey
More details about Surrogation:
https://en.wikipedia.org/wiki/Surrogation
"[...]managers tend to use measures as surrogates for strategy, acting as if measures were in fact the strategy when making optimization decisions. This appears to occur even if a measure-maximizing cho…
Fun game idea for a very specific crowd: optimization race.
Given a large compute heavy codebase, say ngscopeclient, each player picks a block that they think has significant potential for speedups. Everything is in play from algorithmic restructuring to porting to GPU.
After the time limit the player with the largest percent speedup relative to baseline is declared the winner.
I need to read it properly, but this looks 🔥 https://arxiv.org/abs/2511.16652
#Kubernetes 1.35 Released with In-Place Pod Resize and AI-Optimized Scheduling
https://www.infoq.com/news/2025/12/kubernetes-1-35/
Before I go writing my own, is anyone aware of a python package for solving employee scheduling type problems? Perhaps backed by Google's OR-Tools?
They have all the math covered, but the interface is a bit unwieldy. So I'd like to use or write something that is written with the domain in mind, not abstract optimization.
#LazyWeb
Locally Linear Convergence for Nonsmooth Convex Optimization via Coupled Smoothing and Momentum
Reza Rahimi Baghbadorani, Sergio Grammatico, Peyman Mohajerin Esfahani
https://arxiv.org/abs/2511.10239 https://arxiv.org/pdf/2511.10239 https://arxiv.org/html/2511.10239
arXiv:2511.10239v1 Announce Type: new
Abstract: We propose an adaptive accelerated smoothing technique for nonsmooth convex optimization problems in which the smoothing update rule is coupled with the momentum parameter. We also extend the setting to the case where the objective function is the sum of two nonsmooth functions. Regarding convergence rate, we provide a global sublinear convergence guarantee of O(1/k), which is provably optimal for the studied class of functions, along with a local linear rate if the nonsmooth term fulfills a so-called local strong convexity condition. We validate the performance of our algorithm on several problem classes, including regression with the l1-norm (the Lasso problem), sparse semidefinite programming (the MaxCut problem), nuclear norm minimization with application to model-free fault diagnosis, and l_1-regularized model predictive control, to showcase the benefits of the coupling. An interesting observation is that although our global convergence result guarantees O(1/k) convergence, we consistently observe a practical transient convergence rate of O(1/k^2), followed by asymptotic linear convergence as anticipated by the theory. This two-phase behavior can also be explained in view of the proposed smoothing rule.
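The coupling idea in the abstract can be illustrated with a toy sketch: run a FISTA-style accelerated gradient method on a Huber-smoothed Lasso objective, shrinking the smoothing parameter as the momentum builds. The schedule mu = 1/(k+2), the step-size rule, and the problem data below are illustrative assumptions, not the paper's actual update rule.

```python
import numpy as np

def huber_grad(x, mu):
    # Gradient of the Huber (Nesterov) smoothing of |x| with parameter mu.
    return np.clip(x / mu, -1.0, 1.0)

def smoothed_accel_lasso(A, b, lam, iters=500):
    # FISTA-style accelerated gradient on
    #   f_mu(x) = 0.5*||Ax - b||^2 + lam * huber_mu(x),
    # with the smoothing parameter mu shrunk alongside the momentum update.
    n = A.shape[1]
    x = np.zeros(n)
    y = x.copy()
    t = 1.0
    L0 = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the quadratic part
    for k in range(iters):
        mu = 1.0 / (k + 2)                    # smoothing shrinks as momentum builds
        L = L0 + lam / mu                     # Lipschitz constant of grad f_mu
        g = A.T @ (A @ y - b) + lam * huber_grad(y, mu)
        x_new = y - g / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

As mu decreases, the smoothed objective tracks the true nonsmooth one ever more closely, at the price of a growing Lipschitz constant; the coupling trades these two effects off.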
toXiv_bot_toot
Fastbreak AI, which provides AI-based sports scheduling optimization tools to the NBA, NHL, and others, raised $40M led by Greycroft and GTMfund (Kim Bhasin/Bloomberg)
https://www.bloomberg.com/news/articles/2025-11-07/nba-in…
`brew install netlify-cli` also installs `gcc` ... and `systemd` ?
uhhh...
- netlify-cli uses npm package ipx (image optimization)
- ipx uses npm package sharp (fast image processing)
- sharp has prebuilt binaries that use libvips (image processing)
- netlify-cli brew formula removes those and instead uses the brew for vips
- vips requires poppler (pdf renderer)
- poppler requires gpgme, requires gnupg, requires libusb, requires systemd
(avoid this wi…
dHPR: A Distributed Halpern Peaceman--Rachford Method for Non-smooth Distributed Optimization Problems
Zhangcheng Feng, Defeng Sun, Yancheng Yuan, Guojun Zhang
https://arxiv.org/abs/2511.10069 https://arxiv.org/pdf/2511.10069 https://arxiv.org/html/2511.10069
arXiv:2511.10069v1 Announce Type: new
Abstract: This paper introduces the distributed Halpern Peaceman--Rachford (dHPR) method, an efficient algorithm for solving distributed convex composite optimization problems with non-smooth objectives, which achieves a non-ergodic $O(1/k)$ iteration complexity with respect to the Karush--Kuhn--Tucker residual. By leveraging the symmetric Gauss--Seidel decomposition, dHPR effectively decouples the linear operators in the objective functions and consensus constraints while maintaining parallelizability and avoiding additional large proximal terms, leading to a decentralized implementation with provably fast convergence. The superior performance of dHPR is demonstrated through comprehensive numerical experiments on distributed LASSO, group LASSO, and $L_1$-regularized logistic regression problems.
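For orientation, the classical centralized, exact Peaceman--Rachford splitting that dHPR builds on can be sketched in a few lines; the Halpern anchoring, symmetric Gauss--Seidel decomposition, and distributed structure of the paper are not reproduced here, and the example problem is an assumption for illustration.

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding, the prox of t*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def peaceman_rachford(prox_f, prox_g, z, iters=100):
    # Classical Peaceman--Rachford splitting for min_x f(x) + g(x):
    #   z_{k+1} = R_g(R_f(z_k)),  where R_h = 2*prox_h - I.
    for _ in range(iters):
        x = prox_f(z)
        r = 2.0 * x - z                  # R_f(z)
        z = 2.0 * prox_g(r) - r          # R_g(R_f(z))
    return prox_f(z)

# Toy split: f(x) = 0.5*||x - b||^2 (strongly convex), g(x) = lam*||x||_1;
# the joint minimizer is the soft-threshold of b.
b = np.array([2.0, -0.3, 5.0])
lam = 1.0
prox_f = lambda z: (z + b) / 2.0         # prox of f with unit step size
prox_g = lambda z: soft(z, lam)          # prox of g with unit step size
x_star = peaceman_rachford(prox_f, prox_g, np.zeros(3))
```

Unlike the averaged Douglas--Rachford variant, the pure reflection composition above needs strong convexity of one term (as in this toy f) to converge, which is why splitting papers treat it with extra care.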
toXiv_bot_toot
Finite-Temperature $\textit{ab initio}$ Structural Optimization of the Bilayer Nickelate Superconductor La$_3$Ni$_2$O$_7$
Ryoma Asai, Ryotaro Arita, Takumi Chida, Ryota Masuki, Kazuhiko Kuroki, Terumasa Tadano
https://arxiv.org/abs/2512.08251
⚓ Research on Integrated Modularization of Supercritical Carbon Dioxide System for Aircraft Carrier Nuclear Power
#energy
(PhD thesis, 2024) Prototyping phosphene vision: Simulation-based optimization of visual neuroprosthetics using deep learning #BCI
Riccati-ZORO: An efficient algorithm for heuristic online optimization of internal feedback laws in robust and stochastic model predictive control
Florian Messerer, Yunfan Gao, Jonathan Frey, Moritz Diehl
https://arxiv.org/abs/2511.10473 https://arxiv.org/pdf/2511.10473 https://arxiv.org/html/2511.10473
arXiv:2511.10473v1 Announce Type: new
Abstract: We present Riccati-ZORO, an algorithm for tube-based optimal control problems (OCP). Tube OCPs predict a tube of trajectories in order to capture predictive uncertainty. The tube induces a constraint tightening via additional backoff terms. This backoff can significantly affect the performance, and thus implicitly defines a cost of uncertainty. Optimizing the feedback law used to predict the tube can significantly reduce the backoffs, but its online computation is challenging.
Riccati-ZORO jointly optimizes the nominal trajectory and uncertainty tube based on a heuristic uncertainty cost design. The algorithm alternates between two subproblems: (i) a nominal OCP with fixed backoffs, (ii) an unconstrained tube OCP, which optimizes the feedback gains for a fixed nominal trajectory. For the tube optimization, we propose a cost function informed by the proximity of the nominal trajectory to constraints, prioritizing reduction of the corresponding backoffs. These ideas are developed in detail for ellipsoidal tubes under linear state feedback. In this case, the decomposition into the two subproblems yields a substantial reduction of the computational complexity with respect to the state dimension from $\mathcal{O}(n_x^6)$ to $\mathcal{O}(n_x^3)$, i.e., the complexity of a nominal OCP.
We investigate the algorithm in numerical experiments, and provide two open-source implementations: a prototyping version in CasADi and a high-performance implementation integrated into the acados OCP solver.
toXiv_bot_toot
Predictive Modeling of I/O Performance for Machine Learning Training Pipelines: A Data-Driven Approach to Storage Optimization
Karthik Prabhakar
https://arxiv.org/abs/2512.06699
I'm sitting at Zurich Airport waiting to pick someone up. Next to me sits a German person who has propped up their iPad on their suitcase. They are in a video conference with several people from banks and legal, discussing tax "optimization" from Germany by buying real estate in different US states and pretending to live elsewhere. On loudspeaker.
"Don't make a bank account, don't change your drivers license. We have checklists we can provide you."
Replaced article(s) found for cs.GT. https://arxiv.org/list/cs.GT/new
[1/1]:
- Cumulative Games: Who is the current player?
Urban Larsson, Reshef Meir, Yair Zick
https://arxiv.org/abs/2005.06326
- Contest Design with Threshold Objectives
Edith Elkind, Abheek Ghosh, Paul W. Goldberg
https://arxiv.org/abs/2109.03179
- Deep Learning Meets Mechanism Design: Key Results and Some Novel Applications
V. Udaya Sankar, Vishisht Srihari Rao, Y. Narahari
https://arxiv.org/abs/2401.05683 https://mastoxiv.page/@arXiv_csGT_bot/111741115483021453
- Charting the Shapes of Stories with Game Theory
Daskalakis, Gemp, Jiang, Leme, Papadimitriou, Piliouras
https://arxiv.org/abs/2412.05747 https://mastoxiv.page/@arXiv_csGT_bot/113627246220336424
- Computing Evolutionarily Stable Strategies in Multiplayer Games
Sam Ganzfried
https://arxiv.org/abs/2511.20859 https://mastoxiv.page/@arXiv_csGT_bot/115620508246637361
- Autodeleveraging: Impossibilities and Optimization
Tarun Chitra
https://arxiv.org/abs/2512.01112 https://mastoxiv.page/@arXiv_csGT_bot/115649040881525135
- Static Pricing Guarantees for Queueing Systems
Jacob Bergquist, Adam N. Elmachtoub
https://arxiv.org/abs/2305.09168 https://mastoxiv.page/@arXiv_csDS_bot/110382625621173269
- Game of arrivals at a two queue network with heterogeneous customer routes
Agniv Bandyopadhyay, Sandeep Juneja
https://arxiv.org/abs/2310.18149 https://mastoxiv.page/@arXiv_csPF_bot/111322112226936579
- Characterization of Priority-Neutral Matching Lattices
Clayton Thomas
https://arxiv.org/abs/2404.02142 https://mastoxiv.page/@arXiv_econTH_bot/112205968984928881
- Seven kinds of equivalent models for generalized coalition logics
Zixuan Chen, Fengkui Ju
https://arxiv.org/abs/2501.05466 https://mastoxiv.page/@arXiv_csLO_bot/113819715349259373
- Matching Markets Meet LLMs: Algorithmic Reasoning with Ranked Preferences
Hadi Hosseini, Samarth Khanna, Ronak Singh
https://arxiv.org/abs/2506.04478 https://mastoxiv.page/@arXiv_csAI_bot/114635186215388479
toXiv_bot_toot
Replaced article(s) found for math.OC. https://arxiv.org/list/math.OC/new
[1/1]:
- A robust BFGS algorithm for unconstrained nonlinear optimization problems
Yaguang Yang
https://arxiv.org/abs/1212.5929
- Quantum computing and the stable set problem
Alja\v{z} Krpan, Janez Povh, Dunja Pucher
https://arxiv.org/abs/2405.12845 https://mastoxiv.page/@arXiv_mathOC_bot/112483516437815686
- Mean Field Game with Reflected Jump Diffusion Dynamics: A Linear Programming Approach
Zongxia Liang, Xiang Yu, Keyu Zhang
https://arxiv.org/abs/2508.20388 https://mastoxiv.page/@arXiv_mathOC_bot/115111048711698998
- Differential Dynamic Programming for the Optimal Control Problem with an Ellipsoidal Target Set a...
Sungjun Eom, Gyunghoon Park
https://arxiv.org/abs/2509.07546 https://mastoxiv.page/@arXiv_mathOC_bot/115179281556444440
- On the Moreau envelope properties of weakly convex functions
Marien Renaud, Arthur Leclaire, Nicolas Papadakis
https://arxiv.org/abs/2509.13960 https://mastoxiv.page/@arXiv_mathOC_bot/115224514482363803
- Automated algorithm design via Nevanlinna-Pick interpolation
Ibrahim K. Ozaslan, Tryphon T. Georgiou, Mihailo R. Jovanovic
https://arxiv.org/abs/2509.21416 https://mastoxiv.page/@arXiv_mathOC_bot/115286533597711930
- Optimal Control of a Bioeconomic Crop-Energy System with Energy Reinvestment
Othman Cherkaoui Dekkaki
https://arxiv.org/abs/2510.11381 https://mastoxiv.page/@arXiv_mathOC_bot/115372322896073250
- Point Convergence Analysis of the Accelerated Gradient Method for Multiobjective Optimization: Co...
Yingdong Yin
https://arxiv.org/abs/2510.26382 https://mastoxiv.page/@arXiv_mathOC_bot/115468018035252078
- History-Aware Adaptive High-Order Tensor Regularization
Chang He, Bo Jiang, Yuntian Jiang, Chuwen Zhang, Shuzhong Zhang
https://arxiv.org/abs/2511.05788
- Equivalence of entropy solutions and gradient flows for pressureless 1D Euler systems
Jos\'e Antonio Carrillo, Sondre Tesdal Galtung
https://arxiv.org/abs/2312.04932 https://mastoxiv.page/@arXiv_mathAP_bot/111560077272113052
- Kernel Modelling of Fading Memory Systems
Yongkang Huo, Thomas Chaffey, Rodolphe Sepulchre
https://arxiv.org/abs/2403.11945 https://mastoxiv.page/@arXiv_eessSY_bot/112121123836064435
- The Maximum Theoretical Ground Speed of the Wheeled Vehicle
Altay Zhakatayev, Mukatai Nemerebayev
https://arxiv.org/abs/2502.15341 https://mastoxiv.page/@arXiv_physicsclassph_bot/114057765769441123
- Hessian stability and convergence rates for entropic and Sinkhorn potentials via semiconcavity
Giacomo Greco, Luca Tamanini
https://arxiv.org/abs/2504.11133 https://mastoxiv.page/@arXiv_mathPR_bot/114346453424694503
- Optimizing the ground state energy of the three-dimensional magnetic Dirichlet Laplacian with con...
Matthias Baur
https://arxiv.org/abs/2504.21597 https://mastoxiv.page/@arXiv_mathph_bot/114431404740241516
- A localized consensus-based sampling algorithm
Arne Bouillon, Alexander Bodard, Panagiotis Patrinos, Dirk Nuyens, Giovanni Samaey
https://arxiv.org/abs/2505.24861 https://mastoxiv.page/@arXiv_mathNA_bot/114612580684567066
- A Novel Sliced Fused Gromov-Wasserstein Distance
Moritz Piening, Robert Beinert
https://arxiv.org/abs/2508.02364 https://mastoxiv.page/@arXiv_csLG_bot/114976243138728278
- Minimal Regret Walras Equilibria for Combinatorial Markets via Duality, Integrality, and Sensitiv...
Alo\"is Duguet, Tobias Harks, Martin Schmidt, Julian Schwarz
https://arxiv.org/abs/2511.09021 https://mastoxiv.page/@arXiv_csGT_bot/115541243299714775
toXiv_bot_toot
from my link log —
Optimization countermeasures: inline asm value barriers for constant-time cryptography.
https://mcyoung.xyz/2025/12/15/value-barriers/
saved 2025-12-16
I'm going to have to screen record this, I can't believe I got it working this well.
2x 50M point differential Ethernet waveform into subtract filter, CDR, and eye pattern.
Refreshing at 8.3 Hz. With just a little bit more optimization or faster hardware this will be real time.
Then I can start working on getting protocol decodes to run at full rate too.
Huawei just launched FusionSolar 9.0, and it's a game-changer for renewable energy grids.
The platform combines solar-plus-storage with AI-driven management and grid-forming inverters that actively support weak grids instead of just feeding into them. It includes real-time optimization and fault detection, transforming solar plants into stability assets for grids with high renewable penetration.
Next up at #12Clouds: Joel at Pepper Data talking about GPU workload optimization in k8s
Crosslisted article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[2/3]:
- Sharp Structure-Agnostic Lower Bounds for General Functional Estimation
Jikai Jin, Vasilis Syrgkanis
https://arxiv.org/abs/2512.17341 https://mastoxiv.page/@arXiv_statML_bot/115762312049963700
- Timely Information Updating for Mobile Devices Without and With ML Advice
Yu-Pin Hsu, Yi-Hsuan Tseng
https://arxiv.org/abs/2512.17381 https://mastoxiv.page/@arXiv_csNI_bot/115762180316858485
- SWE-Bench : A Framework for the Scalable Generation of Software Engineering Benchmarks from Open...
Wang, Ramalho, Celestino, Pham, Liu, Sinha, Portillo, Osunwa, Maduekwe
https://arxiv.org/abs/2512.17419 https://mastoxiv.page/@arXiv_csSE_bot/115762487015279852
- Perfect reconstruction of sparse signals using nonconvexity control and one-step RSB message passing
Xiaosi Gu, Ayaka Sakata, Tomoyuki Obuchi
https://arxiv.org/abs/2512.17426 https://mastoxiv.page/@arXiv_statML_bot/115762346108219997
- MULTIAQUA: A multimodal maritime dataset and robust training strategies for multimodal semantic s...
Jon Muhovi\v{c}, Janez Per\v{s}
https://arxiv.org/abs/2512.17450 https://mastoxiv.page/@arXiv_csCV_bot/115762717053353674
- When Data Quality Issues Collide: A Large-Scale Empirical Study of Co-Occurring Data Quality Issu...
Emmanuel Charleson Dapaah, Jens Grabowski
https://arxiv.org/abs/2512.17460 https://mastoxiv.page/@arXiv_csSE_bot/115762500123147574
- Behavioural Effects of Agentic Messaging: A Case Study on a Financial Service Application
Olivier Jeunen, Schaun Wheeler
https://arxiv.org/abs/2512.17462 https://mastoxiv.page/@arXiv_csIR_bot/115762430673347625
- Linear Attention for Joint Power Optimization and User-Centric Clustering in Cell-Free Networks
Irched Chafaa, Giacomo Bacci, Luca Sanguinetti
https://arxiv.org/abs/2512.17466 https://mastoxiv.page/@arXiv_eessSY_bot/115762336277179643
- Translating the Rashomon Effect to Sequential Decision-Making Tasks
Dennis Gross, J{\o}rn Eirik Betten, Helge Spieker
https://arxiv.org/abs/2512.17470 https://mastoxiv.page/@arXiv_csAI_bot/115762556506696539
- Alternating Direction Method of Multipliers for Nonlinear Matrix Decompositions
Atharva Awari, Nicolas Gillis, Arnaud Vandaele
https://arxiv.org/abs/2512.17473 https://mastoxiv.page/@arXiv_eessSP_bot/115762580078964235
- TwinSegNet: A Digital Twin-Enabled Federated Learning Framework for Brain Tumor Analysis
Almustapha A. Wakili, Adamu Hussaini, Abubakar A. Musa, Woosub Jung, Wei Yu
https://arxiv.org/abs/2512.17488 https://mastoxiv.page/@arXiv_csCV_bot/115762726884307901
- Resource-efficient medical image classification for edge devices
Mahsa Lavaei, Zahra Abadi, Salar Beigzad, Alireza Maleki
https://arxiv.org/abs/2512.17515 https://mastoxiv.page/@arXiv_eessIV_bot/115762459510336799
- PathBench-MIL: A Comprehensive AutoML and Benchmarking Framework for Multiple Instance Learning i...
Brussee, Valkema, Weijer, Doeleman, Schrader, Kers
https://arxiv.org/abs/2512.17517 https://mastoxiv.page/@arXiv_csCV_bot/115762741957639051
- HydroGym: A Reinforcement Learning Platform for Fluid Dynamics
Christian Lagemann, et al.
https://arxiv.org/abs/2512.17534 https://mastoxiv.page/@arXiv_physicsfludyn_bot/115762391350754768
- When De-noising Hurts: A Systematic Study of Speech Enhancement Effects on Modern Medical ASR Sys...
Chondhekar, Murukuri, Vasani, Goyal, Badami, Rana, SN, Pandia, Katiyar, Jagadeesh, Gulati
https://arxiv.org/abs/2512.17562 https://mastoxiv.page/@arXiv_csSD_bot/115762423443170715
- Enabling Disaggregated Multi-Stage MLLM Inference via GPU-Internal Scheduling and Resource Sharing
Lingxiao Zhao, Haoran Zhou, Yuezhi Che, Dazhao Cheng
https://arxiv.org/abs/2512.17574 https://mastoxiv.page/@arXiv_csDC_bot/115762425409322293
- SkinGenBench: Generative Model and Preprocessing Effects for Synthetic Dermoscopic Augmentation i...
N. A. Adarsh Pritam, Jeba Shiney O, Sanyam Jain
https://arxiv.org/abs/2512.17585 https://mastoxiv.page/@arXiv_eessIV_bot/115762479150695610
- MAD-OOD: A Deep Learning Cluster-Driven Framework for an Out-of-Distribution Malware Detection an...
Tosin Ige, Christopher Kiekintveld, Aritran Piplai, Asif Rahman, Olukunle Kolade, Sasidhar Kunapuli
https://arxiv.org/abs/2512.17594 https://mastoxiv.page/@arXiv_csCR_bot/115762509298207765
- Confidence-Credibility Aware Weighted Ensembles of Small LLMs Outperform Large LLMs in Emotion De...
Menna Elgabry, Ali Hamdi
https://arxiv.org/abs/2512.17630 https://mastoxiv.page/@arXiv_csCL_bot/115762575512981257
- Generative Multi-Objective Bayesian Optimization with Scalable Batch Evaluations for Sample-Effic...
Madhav R. Muthyala, Farshud Sorourifar, Tianhong Tan, You Peng, Joel A. Paulson
https://arxiv.org/abs/2512.17659 https://mastoxiv.page/@arXiv_statML_bot/115762554519447500
toXiv_bot_toot
Low-Discrepancy Set Post-Processing via Gradient Descent
Fran\c{c}ois Cl\'ement, Linhang Huang, Woorim Lee, Cole Smidt, Braeden Sodt, Xuan Zhang
https://arxiv.org/abs/2511.10496 https://arxiv.org/pdf/2511.10496 https://arxiv.org/html/2511.10496
arXiv:2511.10496v1 Announce Type: new
Abstract: The construction of low-discrepancy sets, used for uniform sampling and numerical integration, has recently seen great improvements based on optimization and machine learning techniques. However, these methods are computationally expensive, often requiring days of computation or access to GPU clusters. We show that simple gradient descent-based techniques allow for comparable results when starting with a reasonably uniform point set. Not only is this method much more efficient and accessible, but it can be applied as post-processing to any low-discrepancy set generation method for a variety of standard discrepancy measures.
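As a toy version of the post-processing idea, one can evaluate the L2 star discrepancy via Warnock's closed-form formula and run plain gradient descent on the point coordinates. The finite-difference gradient, step size, and iteration count below are illustrative choices, not the paper's method.

```python
import numpy as np

def l2_star_discrepancy_sq(P):
    # Warnock's closed-form expression for the squared L2 star discrepancy
    # of a point set P in [0,1]^d (one point per row).
    n, d = P.shape
    term1 = 3.0 ** (-d)
    term2 = (2.0 ** (1 - d) / n) * np.sum(np.prod(1.0 - P ** 2, axis=1))
    pairwise = np.prod(1.0 - np.maximum(P[:, None, :], P[None, :, :]), axis=2)
    term3 = pairwise.sum() / n ** 2
    return term1 - term2 + term3

def post_process(P, steps=50, lr=0.05, eps=1e-5):
    # Finite-difference gradient descent on the discrepancy, projecting
    # the points back onto the unit cube after each step.
    P = P.copy()
    for _ in range(steps):
        G = np.zeros_like(P)
        for idx in np.ndindex(*P.shape):
            Pp = P.copy(); Pp[idx] += eps
            Pm = P.copy(); Pm[idx] -= eps
            G[idx] = (l2_star_discrepancy_sq(Pp) - l2_star_discrepancy_sq(Pm)) / (2 * eps)
        P = np.clip(P - lr * G, 0.0, 1.0)
    return P
```

Sanity check: for a single point in 1D the formula gives 1/3 - (1 - x^2) + (1 - x), minimized at x = 0.5 with value 1/12, matching the known optimum.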
toXiv_bot_toot
Adobe plans to acquire NYSE-listed Semrush, which helps companies run search engine optimization as AI use rises, for $1.9B in cash, paying $12 per share (Lauren Thomas/Wall Street Journal)
https://www.wsj.com/business/d…
Halpern Acceleration of the Inexact Proximal Point Method of Rockafellar
Liwei Zhang, Fanli Zhuang, Ning Zhang
https://arxiv.org/abs/2511.10372 https://arxiv.org/pdf/2511.10372 https://arxiv.org/html/2511.10372
arXiv:2511.10372v1 Announce Type: new
Abstract: This paper investigates a Halpern acceleration of the inexact proximal point method for solving maximal monotone inclusion problems in Hilbert spaces. The proposed Halpern inexact proximal point method (HiPPM) is shown to be globally convergent, and a unified framework is developed to analyze its worst-case convergence rate. Under mild summability conditions on the inexactness tolerances, HiPPM achieves an $\mathcal{O}(1/k^{2})$ rate in terms of the squared fixed-point residual. Furthermore, under an additional mild condition, the method retains a fast linear convergence rate. Building upon this framework, we further extend the acceleration technique to constrained convex optimization through the augmented Lagrangian formulation. In analogy to Rockafellar's classical results, the resulting accelerated inexact augmented Lagrangian method inherits the convergence rate and complexity guarantees of HiPPM. The analysis thus provides a unified theoretical foundation for accelerated inexact proximal algorithms and their augmented Lagrangian extensions.
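The Halpern anchoring underlying HiPPM can be sketched in a few lines: each step averages the proximal step with the starting point x0, with an anchor weight beta_k = 1/(k+2) that vanishes over time. This toy version uses an exact prox (the paper's method tolerates inexact evaluations), and the test problem f(x) = ||x - a||_1 is an assumption for illustration.

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding, the prox of t*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def halpern_ppm(prox, x0, iters=2000):
    # Halpern-anchored proximal point iteration:
    #   x_{k+1} = beta_k * x0 + (1 - beta_k) * prox(x_k),  beta_k = 1/(k+2).
    x = x0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)
        x = beta * x0 + (1.0 - beta) * prox(x)
    return x

# Toy problem: f(x) = ||x - a||_1, whose prox is a shifted soft-threshold
# and whose unique minimizer (the fixed point of the prox) is a.
a = np.array([1.0, -2.0, 0.5])
prox_f = lambda x: a + soft(x - a, 1.0)
x_star = halpern_ppm(prox_f, np.zeros(3))
```

The anchor pulls every iterate slightly back toward x0, which is what yields the non-asymptotic O(1/k) bound on the fixed-point residual for general nonexpansive operators.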
Linear programs find optimal solutions subject to a set of constraints. I used {ompr} before, but the new package {tidyLP} looks promising and integrates with the tidyverse. #rstats #linearprograms #optimization
MOCLIP: A Foundation Model for Large-Scale Nanophotonic Inverse Design
S. Rodionov, A. Burguete-Lopez, M. Makarenko, Q. Wang, F. Getman, A. Fratalocchi
https://arxiv.org/abs/2511.18980 https://arxiv.org/pdf/2511.18980 https://arxiv.org/html/2511.18980
arXiv:2511.18980v1 Announce Type: new
Abstract: Foundation models (FM) are transforming artificial intelligence by enabling generalizable, data-efficient solutions across different domains for a broad range of applications. However, the lack of large and diverse datasets limits the development of FM in nanophotonics. This work presents MOCLIP (Metasurface Optics Contrastive Learning Pretrained), a nanophotonic foundation model that integrates metasurface geometry and spectra within a shared latent space. MOCLIP employs contrastive learning to align geometry and spectral representations using an experimentally acquired dataset with a sample density comparable to ImageNet-1K. The study demonstrates MOCLIP's inverse-design capabilities for high-throughput zero-shot prediction at a rate of 0.2 million samples per second, enabling the design of a full 4-inch wafer populated with high-density metasurfaces in minutes. It also shows generative latent-space optimization reaching 97 percent accuracy. Finally, we introduce an optical information storage concept that uses MOCLIP to achieve a density of 0.1 Gbit per square millimeter at the resolution limit, exceeding commercial optical media by a factor of six. These results position MOCLIP as a scalable and versatile platform for next-generation photonic design and data-driven applications.
It seems that both AI and climate anxiety share a common root: the current structure of capitalism.
Our system, which demands constant, unrestricted growth, shapes everything from our economy to our psychology.
In the climate crisis, it manifests as overproduction and overconsumption, while in AI, it drives the replacement of human labor with cheaper machine labor and the optimization of attention economies.
Measuring dissimilarity between convex cones by means of max-min angles
Welington de Oliveira, Valentina Sessa, David Sossa
https://arxiv.org/abs/2511.10483 https://arxiv.org/pdf/2511.10483 https://arxiv.org/html/2511.10483
arXiv:2511.10483v1 Announce Type: new
Abstract: This work introduces a novel dissimilarity measure between two convex cones, based on the max-min angle between them. We demonstrate that this measure is closely related to the Pompeiu-Hausdorff distance, a well-established metric for comparing compact sets. Furthermore, we examine cone configurations where the measure admits simplified or analytic forms. For the specific case of polyhedral cones, a nonconvex cutting-plane method is deployed to compute, at least approximately, the measure between them. Our approach builds on a tailored version of Kelley's cutting-plane algorithm, which involves solving a challenging master program per iteration. When this master program is solved locally, our method yields an angle that satisfies certain necessary optimality conditions of the underlying nonconvex optimization problem that defines the dissimilarity measure between the cones. As an application of the proposed mathematical and algorithmic framework, we address the image-set classification task under limited data conditions, a task that falls within the scope of the \emph{Few-Shot Learning} paradigm. In this context, image sets belonging to the same class are modeled as polyhedral cones, and our dissimilarity measure proves useful for understanding whether two image sets belong to the same class.
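The abstract does not spell the measure out; one natural way to formalize a max-min-angle dissimilarity between closed convex cones $P, Q \subseteq \mathbb{R}^n$ (a sketch of the idea only — the symmetrized max-of-one-sided-deviations shape mirrors the Pompeiu-Hausdorff distance the abstract relates it to; the paper's exact definition may differ) is

```latex
\delta(P, Q) = \max\Big\{
  \max_{u \in P \cap \mathbb{S}} \; \min_{v \in Q \cap \mathbb{S}} \angle(u, v),\;\;
  \max_{v \in Q \cap \mathbb{S}} \; \min_{u \in P \cap \mathbb{S}} \angle(u, v)
\Big\},
```

where $\mathbb{S}$ is the unit sphere and $\angle(u, v) = \arccos\langle u, v\rangle$; each inner min-max over a sphere section is the nonconvex program the cutting-plane method targets.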
Optimization of experimental parameters for laser-slowing and magneto-optical trapping of MgF molecules
Dongkyu Lim, Eunmi Chae
https://arxiv.org/abs/2511.16022 https://<…
On fundamental properties of high-order forward-backward envelope
Alireza Kabgani, Masoud Ahookhosh
https://arxiv.org/abs/2511.10421 https://arxiv.org/pdf/2511.10421 https://arxiv.org/html/2511.10421
arXiv:2511.10421v1 Announce Type: new
Abstract: This paper studies the fundamental properties of the high-order forward-backward splitting mapping (HiFBS) and its associated forward-backward envelope (HiFBE) through the lens of high-order regularization for nonconvex composite functions. Specifically, we (i) establish the boundedness and uniform boundedness of HiFBS, along with the H\"older and Lipschitz continuity of HiFBE; (ii) derive an explicit form for the subdifferentials of HiFBE; and (iii) investigate necessary and sufficient conditions for the differentiability and weak smoothness of HiFBE under suitable assumptions. By leveraging the prox-regularity of $g$ and the concept of $p$-calmness, we further demonstrate the local single-valuedness and continuity of HiFBS, which in turn guarantee the differentiability of HiFBE in neighborhoods of calm points. This paves the way for the development of gradient-based algorithms tailored to nonconvex composite optimization problems.
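As background for readers unfamiliar with the envelope (the paper's exact normalization may differ): for a composite $f + g$ with smooth $f$, the classical forward-backward envelope is a partial linearization of $f$ regularized by a quadratic, and a high-order envelope replaces the quadratic with a $p$-th power term:

```latex
\varphi_{\gamma}^{p}(x) = \inf_{y}\Big\{ f(x) + \langle \nabla f(x),\, y - x \rangle
  + g(y) + \tfrac{1}{p\gamma}\|y - x\|^{p} \Big\}, \qquad p > 1,
```

recovering the standard FBE at $p = 2$, with the (possibly set-valued) argmin mapping playing the role of HiFBS.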
Sources: Adobe nears a $1.9B deal to acquire NYSE-listed Semrush, which helps companies run search engine optimization as AI use increases, paying $12 per share (Lauren Thomas/Wall Street Journal)
https://www.wsj.com/bus…
What happens when you pair solar panels with mini nuclear reactors? Chinese researchers just cracked the code.
Their new microgrid framework combines photovoltaics with small modular reactors, using AI to balance both in real time. The results are striking: 18.7% lower costs, 37.1% fewer emissions, and 98% reliability.
The secret? Smart coordination between battery storage and hydrogen production that adapts on the fly.
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[3/5]:
- Look-Ahead Reasoning on Learning Platforms
Haiqing Zhu, Tijana Zrnic, Celestine Mendler-Dünner
https://arxiv.org/abs/2511.14745 https://mastoxiv.page/@arXiv_csLG_bot/115575981129228810
- Deep Gaussian Process Proximal Policy Optimization
Matthijs van der Lende, Juan Cardenas-Cartagena
https://arxiv.org/abs/2511.18214 https://mastoxiv.page/@arXiv_csLG_bot/115610315210502140
- Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory
Akira Tamamori
https://arxiv.org/abs/2511.23083 https://mastoxiv.page/@arXiv_csLG_bot/115644325602130493
- xGR: Efficient Generative Recommendation Serving at Scale
Sun, Liu, Zhang, Wu, Yang, Liang, Li, Ma, Liang, Ren, Zhang, Liu, Zhang, Qian, Yang
https://arxiv.org/abs/2512.11529 https://mastoxiv.page/@arXiv_csLG_bot/115723008170311172
- Credit Risk Estimation with Non-Financial Features: Evidence from a Synthetic Istanbul Dataset
Atalay Denknalbant, Emre Sezdi, Zeki Furkan Kutlu, Polat Goktas
https://arxiv.org/abs/2512.12783 https://mastoxiv.page/@arXiv_csLG_bot/115729287232895097
- The Semantic Illusion: Certified Limits of Embedding-Based Hallucination Detection in RAG Systems
Debu Sinha
https://arxiv.org/abs/2512.15068 https://mastoxiv.page/@arXiv_csLG_bot/115740048142898391
- Towards Reproducibility in Predictive Process Mining: SPICE -- A Deep Learning Library
Stritzel, Hühnerbein, Rauch, Zarate, Fleischmann, Buck, Lischka, Frey
https://arxiv.org/abs/2512.16715 https://mastoxiv.page/@arXiv_csLG_bot/115745910810427061
- Differentially private Bayesian tests
Abhisek Chakraborty, Saptati Datta
https://arxiv.org/abs/2401.15502 https://mastoxiv.page/@arXiv_statML_bot/111843467510507382
- SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning
Paul Mangold, Sergey Samsonov, Safwan Labbi, Ilya Levin, Reda Alami, Alexey Naumov, Eric Moulines
https://arxiv.org/abs/2402.04114
- Adjusting Model Size in Continual Gaussian Processes: How Big is Big Enough?
Guiomar Pescador-Barrios, Sarah Filippi, Mark van der Wilk
https://arxiv.org/abs/2408.07588 https://mastoxiv.page/@arXiv_statML_bot/112965266196097314
- Non-Perturbative Trivializing Flows for Lattice Gauge Theories
Mathis Gerdes, Pim de Haan, Roberto Bondesan, Miranda C. N. Cheng
https://arxiv.org/abs/2410.13161 https://mastoxiv.page/@arXiv_heplat_bot/113327593338897860
- Dynamic PET Image Prediction Using a Network Combining Reversible and Irreversible Modules
Sun, Zhang, Xia, Sun, Chen, Yang, Liu, Zhu, Liu
https://arxiv.org/abs/2410.22674 https://mastoxiv.page/@arXiv_eessIV_bot/113401026110345647
- Targeted Learning for Variable Importance
Xiaohan Wang, Yunzhe Zhou, Giles Hooker
https://arxiv.org/abs/2411.02221 https://mastoxiv.page/@arXiv_statML_bot/113429912435819479
- Refined Analysis of Federated Averaging and Federated Richardson-Romberg
Paul Mangold, Alain Durmus, Aymeric Dieuleveut, Sergey Samsonov, Eric Moulines
https://arxiv.org/abs/2412.01389 https://mastoxiv.page/@arXiv_statML_bot/113588027268311334
- Embedding-Driven Data Distillation for 360-Degree IQA With Residual-Aware Refinement
Abderrezzaq Sendjasni, Seif-Eddine Benkabou, Mohamed-Chaker Larabi
https://arxiv.org/abs/2412.12667 https://mastoxiv.page/@arXiv_csCV_bot/113672538318570349
- 3D Cell Oversegmentation Correction via Geo-Wasserstein Divergence
Peter Chen, Bryan Chang, Olivia A Creasey, Julie Beth Sneddon, Zev J Gartner, Yining Liu
https://arxiv.org/abs/2502.01890 https://mastoxiv.page/@arXiv_csCV_bot/113949981686723660
- DHP: Discrete Hierarchical Planning for Hierarchical Reinforcement Learning Agents
Shashank Sharma, Janina Hoffmann, Vinay Namboodiri
https://arxiv.org/abs/2502.01956 https://mastoxiv.page/@arXiv_csRO_bot/113949997485625086
- Foundation for unbiased cross-validation of spatio-temporal models for species distribution modeling
Diana Koldasbayeva, Alexey Zaytsev
https://arxiv.org/abs/2502.03480
- GraphCompNet: A Position-Aware Model for Predicting and Compensating Shape Deviations in 3D Printing
Juheon Lee, Lei (Rachel) Chen, Juan Carlos Catana, Hui Wang, Jun Zeng
https://arxiv.org/abs/2502.09652 https://mastoxiv.page/@arXiv_csCV_bot/114017924551186136
- LookAhead Tuning: Safer Language Models via Partial Answer Previews
Liu, Wang, Luo, Yuan, Sun, Liang, Zhang, Zhou, Hooi, Deng
https://arxiv.org/abs/2503.19041 https://mastoxiv.page/@arXiv_csCL_bot/114227502448008352
- Constraint-based causal discovery with tiered background knowledge and latent variables in single...
Christine W. Bang, Vanessa Didelez
https://arxiv.org/abs/2503.21526 https://mastoxiv.page/@arXiv_statML_bot/114238919468512990
New article on the blog!
This time, it's about how I optimized an algorithm that turns byte offsets into line/column numbers and UTF-16 offsets.
Most of the performance improvement came from using SIMD to count ASCII characters efficiently.
#rust #RustLang #SIMD #optimization #blog
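The counting trick behind posts like this can be sketched without the blog's actual code: in UTF-8, ASCII bytes are exactly the bytes with the high bit clear, so counting ASCII characters reduces to counting clear high bits across wide words. A minimal SWAR-style sketch (a hypothetical `count_ascii` helper, not the post's implementation; a real SIMD kernel widens the same idea to 16- or 32-byte registers):

```rust
// Count ASCII bytes (< 0x80) in `s`, 8 bytes at a time.
// In UTF-8, every non-ASCII lead/continuation byte has the high bit set,
// so one set bit per byte in (word & HIGH_BITS) marks one non-ASCII byte.
fn count_ascii(s: &[u8]) -> usize {
    const HIGH_BITS: u64 = 0x8080_8080_8080_8080;
    let mut chunks = s.chunks_exact(8);
    let mut ascii = 0usize;
    for chunk in &mut chunks {
        let mut bytes = [0u8; 8];
        bytes.copy_from_slice(chunk);
        let word = u64::from_le_bytes(bytes);
        // popcount of the masked high bits = non-ASCII bytes in this chunk
        let non_ascii = (word & HIGH_BITS).count_ones() as usize;
        ascii += 8 - non_ascii;
    }
    // Tail of fewer than 8 bytes, handled scalar.
    ascii + chunks.remainder().iter().filter(|&&b| b < 0x80).count()
}

fn main() {
    assert_eq!(count_ascii("hello".as_bytes()), 5);
    assert_eq!(count_ascii("héllo, wörld".as_bytes()), 10); // 'é', 'ö' are 2 bytes each
    println!("ok");
}
```

This scalar-word version is branch-free per chunk and already auto-vectorizes well; explicit SIMD mainly buys wider registers and fused mask-plus-popcount steps.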
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[5/5]:
- CLAReSNet: When Convolution Meets Latent Attention for Hyperspectral Image Classification
Asmit Bandyopadhyay, Anindita Das Bhattacharjee, Rakesh Das
https://arxiv.org/abs/2511.12346 https://mastoxiv.page/@arXiv_csCV_bot/115570753208147835
- Safeguarded Stochastic Polyak Step Sizes for Non-smooth Optimization: Robust Performance Without ...
Dimitris Oikonomou, Nicolas Loizou
https://arxiv.org/abs/2512.02342 https://mastoxiv.page/@arXiv_mathOC_bot/115654870924418771
- Predictive Modeling of I/O Performance for Machine Learning Training Pipelines: A Data-Driven App...
Karthik Prabhakar, Durgamadhab Mishra
https://arxiv.org/abs/2512.06699 https://mastoxiv.page/@arXiv_csPF_bot/115688618582182232
- Minimum Bayes Risk Decoding for Error Span Detection in Reference-Free Automatic Machine Translat...
Lyu, Song, Kamigaito, Ding, Tanaka, Utiyama, Funakoshi, Okumura
https://arxiv.org/abs/2512.07540 https://mastoxiv.page/@arXiv_csCL_bot/115689532163491162
- In-Context Learning for Seismic Data Processing
Fabian Fuchs, Mario Ruben Fernandez, Norman Ettrich, Janis Keuper
https://arxiv.org/abs/2512.11575 https://mastoxiv.page/@arXiv_csCV_bot/115723040285820239
- Journey Before Destination: On the importance of Visual Faithfulness in Slow Thinking
Rheeya Uppaal, Phu Mon Htut, Min Bai, Nikolaos Pappas, Zheng Qi, Sandesh Swamy
https://arxiv.org/abs/2512.12218 https://mastoxiv.page/@arXiv_csCV_bot/115729165330908574
- Non-Resolution Reasoning (NRR): A Computational Framework for Contextual Identity and Ambiguity P...
Kei Saito
https://arxiv.org/abs/2512.13478 https://mastoxiv.page/@arXiv_csCL_bot/115729234145554554
- Stylized Synthetic Augmentation further improves Corruption Robustness
Georg Siedel, Rojan Regmi, Abhirami Anand, Weijia Shao, Silvia Vock, Andrey Morozov
https://arxiv.org/abs/2512.15675 https://mastoxiv.page/@arXiv_csCV_bot/115740141862163631
- mimic-video: Video-Action Models for Generalizable Robot Control Beyond VLAs
Jonas Pai, Liam Achenbach, Victoriano Montesinos, Benedek Forrai, Oier Mees, Elvis Nava
https://arxiv.org/abs/2512.15692 https://mastoxiv.page/@arXiv_csRO_bot/115739947869830764
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[1/5]:
- Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization a...
Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert Nowak, Yixuan Li
https://arxiv.org/abs/2306.09158
- Sparse, Efficient and Explainable Data Attribution with DualXDA
Galip \"Umit Yolcu, Moritz Weckbecker, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
https://arxiv.org/abs/2402.12118 https://mastoxiv.page/@arXiv_csLG_bot/111962593972369958
- HGQ: High Granularity Quantization for Real-time Neural Networks on FPGAs
Sun, Que, Årrestad, Loncar, Ngadiuba, Luk, Spiropulu
https://arxiv.org/abs/2405.00645 https://mastoxiv.page/@arXiv_csLG_bot/112370274737558603
- On the Identification of Temporally Causal Representation with Instantaneous Dependence
Li, Shen, Zheng, Cai, Song, Gong, Chen, Zhang
https://arxiv.org/abs/2405.15325 https://mastoxiv.page/@arXiv_csLG_bot/112511890051553111
- Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications
Yang Li, Daniel Agyei Asante, Changsheng Zhao, Ernie Chang, Yangyang Shi, Vikas Chandra
https://arxiv.org/abs/2405.15877 https://mastoxiv.page/@arXiv_csLG_bot/112517547424098076
- Privacy Bias in Language Models: A Contextual Integrity-based Auditing Metric
Yan Shvartzshnaider, Vasisht Duddu
https://arxiv.org/abs/2409.03735 https://mastoxiv.page/@arXiv_csLG_bot/113089789682783135
- Low-Rank Filtering and Smoothing for Sequential Deep Learning
Joanna Sliwa, Frank Schneider, Nathanael Bosch, Agustinus Kristiadi, Philipp Hennig
https://arxiv.org/abs/2410.06800 https://mastoxiv.page/@arXiv_csLG_bot/113283021321510736
- Hierarchical Multimodal LLMs with Semantic Space Alignment for Enhanced Time Series Classification
Xiaoyu Tao, Tingyue Pan, Mingyue Cheng, Yucong Luo, Qi Liu, Enhong Chen
https://arxiv.org/abs/2410.18686 https://mastoxiv.page/@arXiv_csLG_bot/113367101100828901
- Fairness via Independence: A (Conditional) Distance Covariance Framework
Ruifan Huang, Haixia Liu
https://arxiv.org/abs/2412.00720 https://mastoxiv.page/@arXiv_csLG_bot/113587817648503815
- Data for Mathematical Copilots: Better Ways of Presenting Proofs for Machine Learning
Simon Frieder, et al.
https://arxiv.org/abs/2412.15184 https://mastoxiv.page/@arXiv_csLG_bot/113683924322164777
- Pairwise Elimination with Instance-Dependent Guarantees for Bandits with Cost Subsidy
Ishank Juneja, Carlee Joe-Wong, Osman Yağan
https://arxiv.org/abs/2501.10290 https://mastoxiv.page/@arXiv_csLG_bot/113859392622871057
- Towards Human-Guided, Data-Centric LLM Co-Pilots
Evgeny Saveliev, Jiashuo Liu, Nabeel Seedat, Anders Boyd, Mihaela van der Schaar
https://arxiv.org/abs/2501.10321 https://mastoxiv.page/@arXiv_csLG_bot/113859392688054204
- Regularized Langevin Dynamics for Combinatorial Optimization
Shengyu Feng, Yiming Yang
https://arxiv.org/abs/2502.00277
- Generating Samples to Probe Trained Models
Eren Mehmet Kıral, Nurşen Aydın, Ş. İlker Birbil
https://arxiv.org/abs/2502.06658 https://mastoxiv.page/@arXiv_csLG_bot/113984059089245671
- On Agnostic PAC Learning in the Small Error Regime
Julian Asilis, Mikael Møller Høgsgaard, Grigoris Velegkas
https://arxiv.org/abs/2502.09496 https://mastoxiv.page/@arXiv_csLG_bot/114000974082372598
- Preconditioned Inexact Stochastic ADMM for Deep Model
Shenglong Zhou, Ouya Wang, Ziyan Luo, Yongxu Zhu, Geoffrey Ye Li
https://arxiv.org/abs/2502.10784 https://mastoxiv.page/@arXiv_csLG_bot/114023667639951005
- On the Effect of Sampling Diversity in Scaling LLM Inference
Wang, Liu, Chen, Light, Liu, Chen, Zhang, Cheng
https://arxiv.org/abs/2502.11027 https://mastoxiv.page/@arXiv_csLG_bot/114023688225233656
- How to use score-based diffusion in earth system science: A satellite nowcasting example
Randy J. Chase, Katherine Haynes, Lander Ver Hoef, Imme Ebert-Uphoff
https://arxiv.org/abs/2505.10432 https://mastoxiv.page/@arXiv_csLG_bot/114516300594057680
- PEAR: Equal Area Weather Forecasting on the Sphere
Hampus Linander, Christoffer Petersson, Daniel Persson, Jan E. Gerken
https://arxiv.org/abs/2505.17720 https://mastoxiv.page/@arXiv_csLG_bot/114572963019603744
- Train Sparse Autoencoders Efficiently by Utilizing Features Correlation
Vadim Kurochkin, Yaroslav Aksenov, Daniil Laptev, Daniil Gavrilov, Nikita Balagansky
https://arxiv.org/abs/2505.22255 https://mastoxiv.page/@arXiv_csLG_bot/114589956040892075
- A Certified Unlearning Approach without Access to Source Data
Umit Yigit Basaran, Sk Miraj Ahmed, Amit Roy-Chowdhury, Basak Guler
https://arxiv.org/abs/2506.06486 https://mastoxiv.page/@arXiv_csLG_bot/114658421178857085
Crosslisted article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[1/3]:
- Optimizing Text Search: A Novel Pattern Matching Algorithm Based on Ukkonen's Approach
Xinyu Guan, Shaohua Zhang
https://arxiv.org/abs/2512.16927 https://mastoxiv.page/@arXiv_csDS_bot/115762062326187898
- SpIDER: Spatially Informed Dense Embedding Retrieval for Software Issue Localization
Shravan Chaudhari, Rahul Thomas Jacob, Mononito Goswami, Jiajun Cao, Shihab Rashid, Christian Bock
https://arxiv.org/abs/2512.16956 https://mastoxiv.page/@arXiv_csSE_bot/115762248476963893
- MemoryGraft: Persistent Compromise of LLM Agents via Poisoned Experience Retrieval
Saksham Sahai Srivastava, Haoyu He
https://arxiv.org/abs/2512.16962 https://mastoxiv.page/@arXiv_csCR_bot/115762140339109012
- Colormap-Enhanced Vision Transformers for MRI-Based Multiclass (4-Class) Alzheimer's Disease Clas...
Faisal Ahmed
https://arxiv.org/abs/2512.16964 https://mastoxiv.page/@arXiv_eessIV_bot/115762196702065869
- Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows
Wanghan Xu, et al.
https://arxiv.org/abs/2512.16969 https://mastoxiv.page/@arXiv_csAI_bot/115762050529328276
- PAACE: A Plan-Aware Automated Agent Context Engineering Framework
Kamer Ali Yuksel
https://arxiv.org/abs/2512.16970 https://mastoxiv.page/@arXiv_csAI_bot/115762054461584205
- A Women's Health Benchmark for Large Language Models
Elisabeth Gruber, et al.
https://arxiv.org/abs/2512.17028 https://mastoxiv.page/@arXiv_csCL_bot/115762049873946945
- Perturb Your Data: Paraphrase-Guided Training Data Watermarking
Pranav Shetty, Mirazul Haque, Petr Babkin, Zhiqiang Ma, Xiaomo Liu, Manuela Veloso
https://arxiv.org/abs/2512.17075 https://mastoxiv.page/@arXiv_csCL_bot/115762077400293945
- Disentangled representations via score-based variational autoencoders
Benjamin S. H. Lyo, Eero P. Simoncelli, Cristina Savin
https://arxiv.org/abs/2512.17127 https://mastoxiv.page/@arXiv_statML_bot/115762251753966702
- Biosecurity-Aware AI: Agentic Risk Auditing of Soft Prompt Attacks on ESM-Based Variant Predictors
Huixin Zhan
https://arxiv.org/abs/2512.17146 https://mastoxiv.page/@arXiv_csCR_bot/115762318582013305
- Application of machine learning to predict food processing level using Open Food Facts
Arora, Chauhan, Rana, Aditya, Bhagat, Kumar, Kumar, Semar, Singh, Bagler
https://arxiv.org/abs/2512.17169 https://mastoxiv.page/@arXiv_qbioBM_bot/115762302873829397
- Systemic Risk Radar: A Multi-Layer Graph Framework for Early Market Crash Warning
Sandeep Neela
https://arxiv.org/abs/2512.17185 https://mastoxiv.page/@arXiv_qfinRM_bot/115762275982224870
- Do Foundational Audio Encoders Understand Music Structure?
Keisuke Toyama, Zhi Zhong, Akira Takahashi, Shusuke Takahashi, Yuki Mitsufuji
https://arxiv.org/abs/2512.17209 https://mastoxiv.page/@arXiv_csSD_bot/115762341541572505
- CheXPO-v2: Preference Optimization for Chest X-ray VLMs with Knowledge Graph Consistency
Xiao Liang, Yuxuan An, Di Wang, Jiawei Hu, Zhicheng Jiao, Bin Jing, Quan Wang
https://arxiv.org/abs/2512.17213 https://mastoxiv.page/@arXiv_csCV_bot/115762574180736975
- Machine Learning Assisted Parameter Tuning on Wavelet Transform Amorphous Radial Distribution Fun...
Deriyan Senjaya, Stephen Ekaputra Limantoro
https://arxiv.org/abs/2512.17245 https://mastoxiv.page/@arXiv_condmatmtrlsci_bot/115762447037143855
- AlignDP: Hybrid Differential Privacy with Rarity-Aware Protection for LLMs
Madhava Gaikwad
https://arxiv.org/abs/2512.17251 https://mastoxiv.page/@arXiv_csCR_bot/115762396593872943
- Practical Framework for Privacy-Preserving and Byzantine-robust Federated Learning
Baolei Zhang, Minghong Fang, Zhuqing Liu, Biao Yi, Peizhao Zhou, Yuan Wang, Tong Li, Zheli Liu
https://arxiv.org/abs/2512.17254 https://mastoxiv.page/@arXiv_csCR_bot/115762402470985707
- Verifiability-First Agents: Provable Observability and Lightweight Audit Agents for Controlling A...
Abhivansh Gupta
https://arxiv.org/abs/2512.17259 https://mastoxiv.page/@arXiv_csMA_bot/115762225538364939
- Warmer for Less: A Cost-Efficient Strategy for Cold-Start Recommendations at Pinterest
Saeed Ebrahimi, Weijie Jiang, Jaewon Yang, Olafur Gudmundsson, Yucheng Tu, Huizhong Duan
https://arxiv.org/abs/2512.17277 https://mastoxiv.page/@arXiv_csIR_bot/115762214396869930
- LibriVAD: A Scalable Open Dataset with Deep Learning Benchmarks for Voice Activity Detection
Ioannis Stylianou, Achintya kr. Sarkar, Nauman Dawalatabad, James Glass, Zheng-Hua Tan
https://arxiv.org/abs/2512.17281 https://mastoxiv.page/@arXiv_csSD_bot/115762361858560703
- Penalized Fair Regression for Multiple Groups in Chronic Kidney Disease
Carter H. Nakamoto, Lucia Lushi Chen, Agata Foryciarz, Sherri Rose
https://arxiv.org/abs/2512.17340 https://mastoxiv.page/@arXiv_statME_bot/115762446402738033
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[2/5]:
- The Diffusion Duality
Sahoo, Deschenaux, Gokaslan, Wang, Chiu, Kuleshov
https://arxiv.org/abs/2506.10892 https://mastoxiv.page/@arXiv_csLG_bot/114675526577078472
- Multimodal Representation Learning and Fusion
Jin, Ge, Xie, Luo, Song, Bi, Liang, Guan, Yeong, Song, Hao
https://arxiv.org/abs/2506.20494 https://mastoxiv.page/@arXiv_csLG_bot/114749113025183688
- The kernel of graph indices for vector search
Mariano Tepper, Ted Willke
https://arxiv.org/abs/2506.20584 https://mastoxiv.page/@arXiv_csLG_bot/114749118923266356
- OptScale: Probabilistic Optimality for Inference-time Scaling
Youkang Wang, Jian Wang, Rubing Chen, Xiao-Yong Wei
https://arxiv.org/abs/2506.22376 https://mastoxiv.page/@arXiv_csLG_bot/114771735361664528
- Boosting Revisited: Benchmarking and Advancing LP-Based Ensemble Methods
Fabian Akkerman, Julien Ferry, Christian Artigues, Emmanuel Hebrard, Thibaut Vidal
https://arxiv.org/abs/2507.18242 https://mastoxiv.page/@arXiv_csLG_bot/114913322736512937
- MolMark: Safeguarding Molecular Structures through Learnable Atom-Level Watermarking
Runwen Hu, Peilin Chen, Keyan Ding, Shiqi Wang
https://arxiv.org/abs/2508.17702 https://mastoxiv.page/@arXiv_csLG_bot/115095014405732247
- Dual-Distilled Heterogeneous Federated Learning with Adaptive Margins for Trainable Global Protot...
Fatema Siddika, Md Anwar Hossen, Wensheng Zhang, Anuj Sharma, Juan Pablo Muñoz, Ali Jannesari
https://arxiv.org/abs/2508.19009 https://mastoxiv.page/@arXiv_csLG_bot/115100269482762688
- STDiff: A State Transition Diffusion Framework for Time Series Imputation in Industrial Systems
Gary Simethy, Daniel Ortiz-Arroyo, Petar Durdevic
https://arxiv.org/abs/2508.19011 https://mastoxiv.page/@arXiv_csLG_bot/115100270137397046
- EEGDM: Learning EEG Representation with Latent Diffusion Model
Shaocong Wang, Tong Liu, Yihan Li, Ming Li, Kairui Wen, Pei Yang, Wenqi Ji, Minjing Yu, Yong-Jin Liu
https://arxiv.org/abs/2508.20705 https://mastoxiv.page/@arXiv_csLG_bot/115111565155687451
- Data-Free Continual Learning of Server Models in Model-Heterogeneous Cloud-Device Collaboration
Xiao Zhang, Zengzhe Chen, Yuan Yuan, Yifei Zou, Fuzhen Zhuang, Wenyu Jiao, Yuke Wang, Dongxiao Yu
https://arxiv.org/abs/2509.25977 https://mastoxiv.page/@arXiv_csLG_bot/115298721327100391
- Fine-Tuning Masked Diffusion for Provable Self-Correction
Jaeyeon Kim, Seunggeun Kim, Taekyun Lee, David Z. Pan, Hyeji Kim, Sham Kakade, Sitan Chen
https://arxiv.org/abs/2510.01384 https://mastoxiv.page/@arXiv_csLG_bot/115309690976554356
- A Generic Machine Learning Framework for Radio Frequency Fingerprinting
Alex Hiles, Bashar I. Ahmad
https://arxiv.org/abs/2510.09775 https://mastoxiv.page/@arXiv_csLG_bot/115372387779061015
- A Second-Order SpikingSSM for Wearables
Kartikay Agrawal, Abhijeet Vikram, Vedant Sharma, Vaishnavi Nagabhushana, Ayon Borthakur
https://arxiv.org/abs/2510.14386 https://mastoxiv.page/@arXiv_csLG_bot/115389079527543821
- Utility-Diversity Aware Online Batch Selection for LLM Supervised Fine-tuning
Heming Zou, Yixiu Mao, Yun Qu, Qi Wang, Xiangyang Ji
https://arxiv.org/abs/2510.16882 https://mastoxiv.page/@arXiv_csLG_bot/115412243355962887
- Seeing Structural Failure Before it Happens: An Image-Based Physics-Informed Neural Network (PINN...
Omer Jauhar Khan, Sudais Khan, Hafeez Anwar, Shahzeb Khan, Shams Ul Arifeen
https://arxiv.org/abs/2510.23117 https://mastoxiv.page/@arXiv_csLG_bot/115451891042176876
- Training Deep Physics-Informed Kolmogorov-Arnold Networks
Spyros Rigas, Fotios Anagnostopoulos, Michalis Papachristou, Georgios Alexandridis
https://arxiv.org/abs/2510.23501 https://mastoxiv.page/@arXiv_csLG_bot/115451942159737549
- Semi-Supervised Preference Optimization with Limited Feedback
Seonggyun Lee, Sungjun Lim, Seojin Park, Soeun Cheon, Kyungwoo Song
https://arxiv.org/abs/2511.00040 https://mastoxiv.page/@arXiv_csLG_bot/115490555013124989
- Towards Causal Market Simulators
Dennis Thumm, Luis Ontaneda Mijares
https://arxiv.org/abs/2511.04469 https://mastoxiv.page/@arXiv_csLG_bot/115507943827841017
- Incremental Generation is Necessary and Sufficient for Universality in Flow-Based Modelling
Hossein Rouhvarzi, Anastasis Kratsios
https://arxiv.org/abs/2511.09902 https://mastoxiv.page/@arXiv_csLG_bot/115547587245365920
- Optimizing Mixture of Block Attention
Guangxuan Xiao, Junxian Guo, Kasra Mazaheri, Song Han
https://arxiv.org/abs/2511.11571 https://mastoxiv.page/@arXiv_csLG_bot/115564541392410174
- Assessing Automated Fact-Checking for Medical LLM Responses with Knowledge Graphs
Shasha Zhou, Mingyu Huang, Jack Cole, Charles Britton, Ming Yin, Jan Wolber, Ke Li
https://arxiv.org/abs/2511.12817 https://mastoxiv.page/@arXiv_csLG_bot/115570877730326947