Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@seeingwithsound@mas.to
2026-01-25 13:05:57

LED lighting (350-650 nm) undermines human visual performance unless supplemented by wider spectra (400-1500 nm) like daylight nature.com/articles/s41598-026

“Expanding the Supreme Court is no different than redistricting in California and Virginia.
It is a proportionate response to Republican attempts to degrade liberal democracy and move America toward a post-liberal order.”
open.…

@simon_brooke@mastodon.scot
2026-04-25 09:59:15

RE: #Lisp! OK, it's about as sophisticated as the ed…

@mgorny@social.treehouse.systems
2026-04-25 07:35:51

#TIL that if you quote the "EOF" delimiter word of a #bash here-document, the shell performs no substitutions inside the here-doc, i.e. it treats the body like a single-quoted rather than a double-quoted string.
cat > ... <<-'EOF'
${foo}
EOF
will get you raw '${foo}', no escaping needed.
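A minimal sketch contrasting the two behaviors (the variable name `foo` is just illustrative; quoting any part of the delimiter, e.g. 'EOF', "EOF", or \EOF, has the same effect):

```shell
#!/bin/sh
foo="expanded"

# Unquoted delimiter: the shell performs parameter expansion in the body.
unquoted=$(cat <<EOF
${foo}
EOF
)

# Quoted delimiter: the body is taken literally, like a single-quoted
# string, so ${foo} survives as raw text with no escaping needed.
quoted=$(cat <<'EOF'
${foo}
EOF
)

echo "unquoted: $unquoted"   # -> unquoted: expanded
echo "quoted: $quoted"       # -> quoted: ${foo}
```

The `<<-` variant in the toot additionally strips leading tab characters from each body line, which is orthogonal to the quoting behavior.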

@laxsill@social.spejset.org
2026-03-23 13:45:26

gonna tell my kids this was piratpartiet

RE: Unauthorized Use of Dreamworks SKG Properties

As you may or may not be aware, Sweden is not a state in the United States
of America. Sweden is a country in northern Europe.
Unless you figured it out by now, US law does not apply here.
For your information, no Swedish law is being violated.

Please be assured that any further contact with us, regardless of medium,
will result in
a) a suit being filed for harassment
b) a formal complaint lodged with the bar of your legal counsel, for
sending…

@arXiv_physicsfludyn_bot@mastoxiv.page
2026-02-25 08:52:01

A Novel Explicit Filter for the Approximate Deconvolution in Large-Eddy Simulation on General Unstructured Grids: A posteriori tests on highly stretched grids
Mohammad Bagher Molaei, Ehsan Amani, Morteza Ghorbani
arxiv.org/abs/2602.21166 arxiv.org/pdf/2602.21166 arxiv.org/html/2602.21166
arXiv:2602.21166v1 Announce Type: new
Abstract: Explicit filters play a pivotal role in the scale separation and numerical stability of advanced Large Eddy Simulation (LES) closures, such as dynamic eddy-viscosity or Approximate Deconvolution (AD) methods. In the present study, it is demonstrated that the performance of commonly used explicit filters applicable to general unstructured grids highly depends on the grid configuration, specifically the cell aspect ratio, which can result in poor filter spectral properties, ultimately leading to large errors and even solution divergence. This study introduces a novel, efficient explicit filter for general unstructured grids, addressing this shortcoming through a combination of a face-averaging technique and recursive filtering. The filter parameters are then determined through a constrained multi-objective optimization, ensuring desirable spectral properties, including high-wavenumber attenuation, filter-width precision, filter stability and positivity, and minimized dispersion and commutation errors. AD-LES simulations of turbulent channel flow benchmarks using the new filter demonstrate a noticeable improvement in turbulent flow predictions on highly stretched boundary-layer-type grids, particularly in reducing the log-layer mean velocity profile mismatch, compared to simulations using conventional filters. The analyses show that this enhancement is mainly attributed, among other factors, to the sufficient level of attenuation near the Nyquist wavenumber achieved by the new filter in all spatial directions across various grid configurations. The new filter was also successfully tested on unstructured prism grids for the 3D Taylor-Green vortex benchmark.
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:39:11

Extending $\mu$P: Spectral Conditions for Feature Learning Across Optimizers
Akshita Gupta, Marieme Ngom, Sam Foreman, Venkatram Vishwanath
arxiv.org/abs/2602.20937 arxiv.org/pdf/2602.20937 arxiv.org/html/2602.20937
arXiv:2602.20937v1 Announce Type: new
Abstract: Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization ($\mu$P) is a set of scaling rules that aim to make the optimal HPs independent of the model size, thereby allowing HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger, target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor programming approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo, and Muon. We implement our $\mu$P derivations on multiple benchmark models and demonstrate zero-shot learning rate transfer across increasing model width for the above optimizers. Further, we provide empirical insights into depth-scaling parameterization for these optimizers.
toXiv_bot_toot
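A generic illustration of the $\mu$P idea the abstract builds on: choose per-layer learning-rate scalings so the optimal base learning rate stays stable as hidden width grows, letting HPs tuned on a small proxy model transfer zero-shot to a wide one. The 1/width rule for hidden layers is the commonly cited prescription for Adam-style optimizers; this sketch is NOT the paper's spectral-condition framework, and the function name is ours.

```python
def mup_hidden_lr(base_lr: float, base_width: int, width: int) -> float:
    """Width-adjusted learning rate for hidden-layer weights: lr scales as 1/width."""
    return base_lr * base_width / width

# Tune once on a cheap 128-wide proxy model, then transfer zero-shot:
base_lr = 1e-3
lr_small = mup_hidden_lr(base_lr, 128, 128)    # 0.001 (unchanged at base width)
lr_large = mup_hidden_lr(base_lr, 128, 1024)   # 0.000125 (scaled down 8x)
```

Input and output layers get separate multipliers under $\mu$P, and the exact rules differ by optimizer, which is precisely the gap the paper's spectral conditions address.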

@vosje62@mastodon.nl
2026-03-18 13:48:42

Dutch drivers cross the border with jerrycans, police try to limit traffic disruption: 'The Belgians are fed up' | Buitenland | De Gelderlander.nl
#benzineprijs

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:35:21

WeirNet: A Large-Scale 3D CFD Benchmark for Geometric Surrogate Modeling of Piano Key Weirs
Lisa Lüddecke, Michael Hohmann, Sebastian Eilermann, Jan Tillmann-Mumm, Pezhman Pourabdollah, Mario Oertel, Oliver Niggemann
arxiv.org/abs/2602.20714 arxiv.org/pdf/2602.20714 arxiv.org/html/2602.20714
arXiv:2602.20714v1 Announce Type: new
Abstract: Reliable prediction of hydraulic performance is challenging for Piano Key Weir (PKW) design because discharge capacity depends on three-dimensional geometry and operating conditions. Surrogate models can accelerate hydraulic-structure design, but progress is limited by the scarcity of large, well-documented datasets that jointly capture geometric variation, operating conditions, and functional performance. This study presents WeirNet, a large 3D CFD benchmark dataset for geometric surrogate modeling of PKWs. WeirNet contains 3,794 parametric, feasibility-constrained rectangular and trapezoidal PKW geometries, each scheduled at 19 discharge conditions using a consistent free-surface OpenFOAM workflow, resulting in 71,387 completed simulations, each with complete discharge-coefficient labels, that form the benchmark. The dataset is released in multiple modalities (compact parametric descriptors, watertight surface meshes, and high-resolution point clouds), together with standardized tasks and in-distribution and out-of-distribution splits. Representative surrogate families are benchmarked for discharge coefficient prediction. Tree-based regressors on parametric descriptors achieve the best overall accuracy, while point- and mesh-based models remain competitive and offer parameterization-agnostic inference. All surrogates evaluate in milliseconds per sample, providing orders-of-magnitude speedups over CFD runtimes. Out-of-distribution results identify geometry shift as the dominant failure mode compared to unseen discharge values, and data-efficiency experiments show diminishing returns beyond roughly 60% of the training data. By publicly releasing the dataset together with simulation setups and evaluation pipelines, WeirNet establishes a reproducible framework for data-driven hydraulic modeling and enables faster exploration of PKW designs during the early stages of hydraulic planning.
toXiv_bot_toot
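A toy sketch of what "geometric surrogate modeling" means here: learn a fast map from parametric weir descriptors to a discharge coefficient, replacing a CFD run with a millisecond prediction. The data below is synthetic and the feature names and response formula are invented for illustration only; the tree-based regressor mirrors the family the abstract reports as most accurate.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
# Invented parametric descriptors standing in for e.g. key width ratio,
# weir height, and upstream head.
X = rng.uniform(0.1, 1.0, size=(n, 3))
# Invented smooth response standing in for a discharge coefficient label.
y = 0.6 * X[:, 0] + 0.3 * np.sqrt(X[:, 1]) - 0.2 * X[:, 2] ** 2

# Fit on a training split, evaluate held-out fit quality (R^2).
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:1500], y[:1500])
r2 = model.score(X[1500:], y[1500:])
```

Once trained, `model.predict` on new descriptor vectors takes milliseconds, which is the speedup over per-design CFD that the abstract describes.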

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:07:37

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/6]:
- Towards Attributions of Input Variables in a Coalition
Xinhao Zheng, Huiqi Deng, Quanshi Zhang
arxiv.org/abs/2309.13411
- Knee or ROC
Veronica Wendt, Jacob Steiner, Byunggu Yu, Caleb Kelly, Justin Kim
arxiv.org/abs/2401.07390
- Rethinking Disentanglement under Dependent Factors of Variation
Antonio Almudévar, Alfonso Ortega
arxiv.org/abs/2408.07016 mastoxiv.page/@arXiv_csLG_bot/
- Minibatch Optimal Transport and Perplexity Bound Estimation in Discrete Flow Matching
Etrit Haxholli, Yeti Z. Gurbuz, Ogul Can, Eli Waxman
arxiv.org/abs/2411.00759 mastoxiv.page/@arXiv_csLG_bot/
- Predicting Subway Passenger Flows under Incident Situation with Causality
Xiannan Huang, Shuhan Qiu, Quan Yuan, Chao Yang
arxiv.org/abs/2412.06871 mastoxiv.page/@arXiv_csLG_bot/
- Characterizing LLM Inference Energy-Performance Tradeoffs across Workloads and GPU Scaling
Paul Joe Maliakel, Shashikant Ilager, Ivona Brandic
arxiv.org/abs/2501.08219 mastoxiv.page/@arXiv_csLG_bot/
- Universality of Benign Overfitting in Binary Linear Classification
Ichiro Hashimoto, Stanislav Volgushev, Piotr Zwiernik
arxiv.org/abs/2501.10538 mastoxiv.page/@arXiv_csLG_bot/
- Safe Reinforcement Learning for Real-World Engine Control
Julian Bedei, Lucas Koch, Kevin Badalian, Alexander Winkler, Patrick Schaber, Jakob Andert
arxiv.org/abs/2501.16613 mastoxiv.page/@arXiv_csLG_bot/
- A Statistical Learning Perspective on Semi-dual Adversarial Neural Optimal Transport Solvers
Roman Tarasov, Petr Mokrov, Milena Gazdieva, Evgeny Burnaev, Alexander Korotin
arxiv.org/abs/2502.01310
- Improving the Convergence of Private Shuffled Gradient Methods with Public Data
Shuli Jiang, Pranay Sharma, Zhiwei Steven Wu, Gauri Joshi
arxiv.org/abs/2502.03652 mastoxiv.page/@arXiv_csLG_bot/
- Using the Path of Least Resistance to Explain Deep Networks
Sina Salek, Joseph Enguehard
arxiv.org/abs/2502.12108 mastoxiv.page/@arXiv_csLG_bot/
- Distributional Vision-Language Alignment by Cauchy-Schwarz Divergence
Wenzhe Yin, Zehao Xiao, Pan Zhou, Shujian Yu, Jiayi Shen, Jan-Jakob Sonke, Efstratios Gavves
arxiv.org/abs/2502.17028 mastoxiv.page/@arXiv_csLG_bot/
- Armijo Line-search Can Make (Stochastic) Gradient Descent Provably Faster
Sharan Vaswani, Reza Babanezhad
arxiv.org/abs/2503.00229 mastoxiv.page/@arXiv_csLG_bot/
- Semantic Parallelism: Redefining Efficient MoE Inference via Model-Data Co-Scheduling
Yan Li, Zhenyu Zhang, Zhengang Wang, Pengfei Chen, Pengfei Zheng
arxiv.org/abs/2503.04398 mastoxiv.page/@arXiv_csLG_bot/
- A Survey on Federated Fine-tuning of Large Language Models
Wu, Tian, Li, Sun, Tam, Zhou, Liao, Xiong, Guo, Li, Xu
arxiv.org/abs/2503.12016 mastoxiv.page/@arXiv_csLG_bot/
- Towards Trustworthy GUI Agents: A Survey
Yucheng Shi, Wenhao Yu, Jingyuan Huang, Wenlin Yao, Wenhu Chen, Ninghao Liu
arxiv.org/abs/2503.23434 mastoxiv.page/@arXiv_csLG_bot/
- CONTINA: Confidence Interval for Traffic Demand Prediction with Coverage Guarantee
Chao Yang, Xiannan Huang, Shuhan Qiu, Yan Cheng
arxiv.org/abs/2504.13961 mastoxiv.page/@arXiv_csLG_bot/
- Regularity and Stability Properties of Selective SSMs with Discontinuous Gating
Nikola Zubić, Davide Scaramuzza
arxiv.org/abs/2505.11602 mastoxiv.page/@arXiv_csLG_bot/
- RECON: Robust symmetry discovery via Explicit Canonical Orientation Normalization
Alonso Urbano, David W. Romero, Max Zimmer, Sebastian Pokutta
arxiv.org/abs/2505.13289 mastoxiv.page/@arXiv_csLG_bot/
- RefLoRA: Refactored Low-Rank Adaptation for Efficient Fine-Tuning of Large Models
Yilang Zhang, Bingcong Li, Georgios B. Giannakis
arxiv.org/abs/2505.18877 mastoxiv.page/@arXiv_csLG_bot/
- SuperMAN: Interpretable and Expressive Networks over Temporally Sparse Heterogeneous Data
Bechler-Speicher, Zerio, Huri, Vestergaard, Gilad-Bachrach, Jess, Bhatt, Sazonovs
arxiv.org/abs/2505.19193 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot