LED lighting (350-650nm) undermines human visual performance unless supplemented by wider-spectrum light (400-1500nm) such as daylight https://www.nature.com/articles/s41598-026-35389-6
“Expanding the Supreme Court is no different than redistricting in California and Virginia.
It is a proportionate response to Republican attempts to degrade liberal democracy and move America toward a post-liberal order.”
https://open.…
RE: #Lisp! OK, it's about as sophisticated as the ed…
A Novel Explicit Filter for the Approximate Deconvolution in Large-Eddy Simulation on General Unstructured Grids: A posteriori tests on highly stretched grids
Mohammad Bagher Molaei, Ehsan Amani, Morteza Ghorbani
https://arxiv.org/abs/2602.21166 https://arxiv.org/pdf/2602.21166 https://arxiv.org/html/2602.21166
arXiv:2602.21166v1 Announce Type: new
Abstract: Explicit filters play a pivotal role in the scale separation and numerical stability of advanced Large Eddy Simulation (LES) closures, such as dynamic eddy-viscosity or Approximate Deconvolution (AD) methods. In the present study, it is demonstrated that the performance of commonly used explicit filters applicable to general unstructured grids highly depends on the grid configuration, specifically the cell aspect ratio, which can result in poor filter spectral properties, ultimately leading to large errors and even solution divergence. This study introduces a novel, efficient explicit filter for general unstructured grids, addressing this shortcoming through a combination of a face-averaging technique and recursive filtering. The filter parameters are then determined through a constrained multi-objective optimization, ensuring desirable spectral properties, including high-wavenumber attenuation, filter-width precision, filter stability and positivity, and minimized dispersion and commutation errors. The AD-LES of turbulent channel flow benchmarks using the new filter demonstrate a noticeable improvement in turbulent flow predictions on highly stretched boundary-layer-type grids, particularly in reducing the log-layer mean velocity profile mismatch, compared to simulations using conventional filters. The analyses show that this enhancement is mainly attributed to the sufficient level of attenuation near the Nyquist wavenumber achieved by the new filter in all spatial directions across various grid configurations, among other factors. The new filter was also successfully tested on unstructured prism grids for the 3D Taylor-Green vortex benchmark.
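The role of Nyquist-wavenumber attenuation described in the abstract can be illustrated with a minimal sketch. This is my own 1D illustration, not the paper's filter: a three-point weighted average applied recursively (the recursive-filtering idea named in the abstract), where the weight `a` and pass count are free parameters standing in for the paper's optimized filter parameters.

```python
import numpy as np

def explicit_filter(u, a=0.25, passes=2):
    """Periodic 3-point filter: u_f[i] = a*u[i-1] + (1-2a)*u[i] + a*u[i+1],
    applied `passes` times (recursive filtering)."""
    for _ in range(passes):
        u = a * np.roll(u, 1) + (1.0 - 2.0 * a) * u + a * np.roll(u, -1)
    return u

# Transfer function of one pass: G(k) = 1 - 2a*(1 - cos(k*dx)).
# At the Nyquist wavenumber (k*dx = pi), G = 1 - 4a, so a = 0.25
# annihilates the highest resolvable mode while barely touching
# well-resolved low-wavenumber content.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
nyquist_mode = np.cos((n // 2) * x)   # grid-scale oscillation (-1)^i
smooth_mode = np.cos(x)               # well-resolved large scale

print(np.max(np.abs(explicit_filter(nyquist_mode, a=0.25, passes=1))))  # ~0
print(np.max(np.abs(explicit_filter(smooth_mode, a=0.25, passes=1))))   # ~1
```

On stretched or unstructured grids the effective weights vary by direction and cell, which is exactly why a fixed stencil like this one can lose its attenuation properties there.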
toXiv_bot_toot
Extending $\mu$P: Spectral Conditions for Feature Learning Across Optimizers
Akshita Gupta, Marieme Ngom, Sam Foreman, Venkatram Vishwanath
https://arxiv.org/abs/2602.20937 https://arxiv.org/pdf/2602.20937 https://arxiv.org/html/2602.20937
arXiv:2602.20937v1 Announce Type: new
Abstract: Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization $(\mu$P$)$ is a set of scaling rules which aims to make the optimal HPs independent of the model size, thereby allowing the HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger, target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor programming approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo and Muon. We implement our $\mu$P derivations on multiple benchmark models and demonstrate zero-shot learning rate transfer across increasing model width for the above optimizers. Further, we provide empirical insights into depth-scaling parameterization for these optimizers.
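The spectral condition the abstract builds on can be sketched concretely. Assuming the condition that both the weights and their updates satisfy a spectral norm on the order of sqrt(fan_out/fan_in), an Adam-like optimizer (whose raw update entries are O(1)) needs per-layer learning rates that shrink with width. The helper below is hypothetical, not from the paper; it shows the resulting zero-shot LR transfer rule for a base LR tuned at a small width.

```python
def mup_adam_lr(base_lr, base_width, width, layer):
    """Hypothetical helper: rescale a learning rate tuned at `base_width`
    so that update spectral norms stay O(sqrt(fan_out/fan_in)) at `width`."""
    if layer in ("hidden", "output"):
        # Adam-style updates have O(1) entries, so the LR must carry the
        # 1/fan_in width-scaling to keep the update spectral norm in check.
        return base_lr * base_width / width
    # Input/embedding layers: fan_in is width-independent, LR stays O(1).
    return base_lr

# LR tuned at width 256 transfers to width 1024 by scaling down 4x:
print(mup_adam_lr(1e-3, 256, 1024, "hidden"))  # ≈ 2.5e-4
```

Under this framing, deriving muP for a new optimizer reduces to working out how that optimizer's update magnitude scales with width, then choosing the LR exponent that restores the spectral condition.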
Dutch drivers cross the border with jerrycans, police try to curb traffic disruption: 'Belgians are fed up' | Foreign | De Gelderlander.nl
#benzineprijs
WeirNet: A Large-Scale 3D CFD Benchmark for Geometric Surrogate Modeling of Piano Key Weirs
Lisa Lüddecke, Michael Hohmann, Sebastian Eilermann, Jan Tillmann-Mumm, Pezhman Pourabdollah, Mario Oertel, Oliver Niggemann
https://arxiv.org/abs/2602.20714 https://arxiv.org/pdf/2602.20714 https://arxiv.org/html/2602.20714
arXiv:2602.20714v1 Announce Type: new
Abstract: Reliable prediction of hydraulic performance is challenging for Piano Key Weir (PKW) design because discharge capacity depends on three-dimensional geometry and operating conditions. Surrogate models can accelerate hydraulic-structure design, but progress is limited by scarce large, well-documented datasets that jointly capture geometric variation, operating conditions, and functional performance. This study presents WeirNet, a large 3D CFD benchmark dataset for geometric surrogate modeling of PKWs. WeirNet contains 3,794 parametric, feasibility-constrained rectangular and trapezoidal PKW geometries, each scheduled at 19 discharge conditions using a consistent free-surface OpenFOAM workflow, resulting in 71,387 completed simulations, each with complete discharge-coefficient labels, that form the benchmark. The dataset is released in multiple modalities (compact parametric descriptors, watertight surface meshes, and high-resolution point clouds) together with standardized tasks and in-distribution and out-of-distribution splits. Representative surrogate families are benchmarked for discharge coefficient prediction. Tree-based regressors on parametric descriptors achieve the best overall accuracy, while point- and mesh-based models remain competitive and offer parameterization-agnostic inference. All surrogates evaluate in milliseconds per sample, providing orders-of-magnitude speedups over CFD runtimes. Out-of-distribution results identify geometry shift as the dominant failure mode compared to unseen discharge values, and data-efficiency experiments show diminishing returns beyond roughly 60% of the training data. By publicly releasing the dataset together with simulation setups and evaluation pipelines, WeirNet establishes a reproducible framework for data-driven hydraulic modeling and enables faster exploration of PKW designs during the early stages of hydraulic planning.
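The surrogate workflow in the abstract (parametric descriptors in, discharge coefficient out, millisecond inference) can be sketched on synthetic data. WeirNet's real descriptors and labels are not reproduced here, and as a dependency-free stand-in for the tree-based regressors the paper benchmarks, this sketch uses k-nearest-neighbor regression on the descriptor space; the descriptor count and label model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins: 6 hypothetical geometry+flow descriptors per sample,
# with discharge-coefficient labels generated by a random linear map.
X_train = rng.uniform(size=(500, 6))
y_train = X_train @ rng.uniform(size=6)

def knn_surrogate(x, k=5):
    """Predict by averaging the labels of the k nearest training descriptors."""
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argsort(d)[:k]].mean()

x_query = rng.uniform(size=6)
pred = knn_surrogate(x_query)  # evaluates in well under a millisecond,
print(pred)                    # versus hours for a free-surface CFD run
```

The geometry-shift failure mode the abstract reports has a natural reading in this picture: a query descriptor far from every training descriptor forces the surrogate to extrapolate, whereas an unseen discharge value at a familiar geometry stays near the training manifold.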
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[1/6]:
- Towards Attributions of Input Variables in a Coalition
Xinhao Zheng, Huiqi Deng, Quanshi Zhang
https://arxiv.org/abs/2309.13411
- Knee or ROC
Veronica Wendt, Jacob Steiner, Byunggu Yu, Caleb Kelly, Justin Kim
https://arxiv.org/abs/2401.07390
- Rethinking Disentanglement under Dependent Factors of Variation
Antonio Almudévar, Alfonso Ortega
https://arxiv.org/abs/2408.07016 https://mastoxiv.page/@arXiv_csLG_bot/112959235461894530
- Minibatch Optimal Transport and Perplexity Bound Estimation in Discrete Flow Matching
Etrit Haxholli, Yeti Z. Gurbuz, Ogul Can, Eli Waxman
https://arxiv.org/abs/2411.00759 https://mastoxiv.page/@arXiv_csLG_bot/113423933393275133
- Predicting Subway Passenger Flows under Incident Situation with Causality
Xiannan Huang, Shuhan Qiu, Quan Yuan, Chao Yang
https://arxiv.org/abs/2412.06871 https://mastoxiv.page/@arXiv_csLG_bot/113632934357523592
- Characterizing LLM Inference Energy-Performance Tradeoffs across Workloads and GPU Scaling
Paul Joe Maliakel, Shashikant Ilager, Ivona Brandic
https://arxiv.org/abs/2501.08219 https://mastoxiv.page/@arXiv_csLG_bot/113831081884570770
- Universality of Benign Overfitting in Binary Linear Classification
Ichiro Hashimoto, Stanislav Volgushev, Piotr Zwiernik
https://arxiv.org/abs/2501.10538 https://mastoxiv.page/@arXiv_csLG_bot/113872351652969955
- Safe Reinforcement Learning for Real-World Engine Control
Julian Bedei, Lucas Koch, Kevin Badalian, Alexander Winkler, Patrick Schaber, Jakob Andert
https://arxiv.org/abs/2501.16613 https://mastoxiv.page/@arXiv_csLG_bot/113910356206562660
- A Statistical Learning Perspective on Semi-dual Adversarial Neural Optimal Transport Solvers
Roman Tarasov, Petr Mokrov, Milena Gazdieva, Evgeny Burnaev, Alexander Korotin
https://arxiv.org/abs/2502.01310
- Improving the Convergence of Private Shuffled Gradient Methods with Public Data
Shuli Jiang, Pranay Sharma, Zhiwei Steven Wu, Gauri Joshi
https://arxiv.org/abs/2502.03652 https://mastoxiv.page/@arXiv_csLG_bot/113961314098841096
- Using the Path of Least Resistance to Explain Deep Networks
Sina Salek, Joseph Enguehard
https://arxiv.org/abs/2502.12108 https://mastoxiv.page/@arXiv_csLG_bot/114023706252106865
- Distributional Vision-Language Alignment by Cauchy-Schwarz Divergence
Wenzhe Yin, Zehao Xiao, Pan Zhou, Shujian Yu, Jiayi Shen, Jan-Jakob Sonke, Efstratios Gavves
https://arxiv.org/abs/2502.17028 https://mastoxiv.page/@arXiv_csLG_bot/114063477202397951
- Armijo Line-search Can Make (Stochastic) Gradient Descent Provably Faster
Sharan Vaswani, Reza Babanezhad
https://arxiv.org/abs/2503.00229 https://mastoxiv.page/@arXiv_csLG_bot/114103018985567633
- Semantic Parallelism: Redefining Efficient MoE Inference via Model-Data Co-Scheduling
Yan Li, Zhenyu Zhang, Zhengang Wang, Pengfei Chen, Pengfei Zheng
https://arxiv.org/abs/2503.04398 https://mastoxiv.page/@arXiv_csLG_bot/114120014622063602
- A Survey on Federated Fine-tuning of Large Language Models
Wu, Tian, Li, Sun, Tam, Zhou, Liao, Xiong, Guo, Li, Xu
https://arxiv.org/abs/2503.12016 https://mastoxiv.page/@arXiv_csLG_bot/114182234054681647
- Towards Trustworthy GUI Agents: A Survey
Yucheng Shi, Wenhao Yu, Jingyuan Huang, Wenlin Yao, Wenhu Chen, Ninghao Liu
https://arxiv.org/abs/2503.23434 https://mastoxiv.page/@arXiv_csLG_bot/114263024618476521
- CONTINA: Confidence Interval for Traffic Demand Prediction with Coverage Guarantee
Chao Yang, Xiannan Huang, Shuhan Qiu, Yan Cheng
https://arxiv.org/abs/2504.13961 https://mastoxiv.page/@arXiv_csLG_bot/114380404041503229
- Regularity and Stability Properties of Selective SSMs with Discontinuous Gating
Nikola Zubić, Davide Scaramuzza
https://arxiv.org/abs/2505.11602 https://mastoxiv.page/@arXiv_csLG_bot/114538965060456498
- RECON: Robust symmetry discovery via Explicit Canonical Orientation Normalization
Alonso Urbano, David W. Romero, Max Zimmer, Sebastian Pokutta
https://arxiv.org/abs/2505.13289 https://mastoxiv.page/@arXiv_csLG_bot/114539124884913788
- RefLoRA: Refactored Low-Rank Adaptation for Efficient Fine-Tuning of Large Models
Yilang Zhang, Bingcong Li, Georgios B. Giannakis
https://arxiv.org/abs/2505.18877 https://mastoxiv.page/@arXiv_csLG_bot/114578778213033886
- SuperMAN: Interpretable and Expressive Networks over Temporally Sparse Heterogeneous Data
Bechler-Speicher, Zerio, Huri, Vestergaard, Gilad-Bachrach, Jess, Bhatt, Sazonovs
https://arxiv.org/abs/2505.19193 https://mastoxiv.page/@arXiv_csLG_bot/114578790124778172