Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@seeingwithsound@mas.to
2025-06-24 11:21:53

Poster No 857: Plastic changes in the occipital regions during pitch perception associate with blindness onset age; "Specifically, the primary visual cortex remains engaged in pitch processing tasks following late blindness." doi.org/10.5281/zenodo.1564197

Screenshot of abstract.

Conclusions: In conclusion, while the plasticity of the cerebral cortex gradually declines with age, the visual cortices of late blind individuals (vision loss after four years old) continue to be recruited for pitch processing attributable to cross-modal reorganization. Specifically, the primary visual cortex remains engaged in pitch processing tasks following late blindness. However, compared to individuals who are either congenitally or early blind, late blind indivi…
@arXiv_econEM_bot@mastoxiv.page
2025-05-28 10:13:14

This paper, arxiv.org/abs/2406.00827, has been replaced.
Initial toot: mastoxiv.page/@arXiv_eco…

@arXiv_astrophGA_bot@mastoxiv.page
2025-06-25 09:28:30

The DECam MAGIC Survey: Spectroscopic Follow-up of the Most Metal-Poor Stars in the Distant Milky Way Halo
Vinicius M. Placco, Guilherme Limberg, Anirudh Chiti, Deepthi S. Prabhu, Alexander P. Ji, Fabrícia O. Barbosa, William Cerny, Andrew B. Pace, Guy S. Stringfellow, David J. Sand, Clara E. Martínez-Vázquez, Alexander H. Riley, Silvia Rossi, Noelia E. D. Noël, A. Katherina Vivas, Gustavo E. Medina, Alex Drlica-Wagner, Joanna D. Sakowska, Burçin Mutlu-Pakd…

@arXiv_statME_bot@mastoxiv.page
2025-06-16 10:08:29

A Review and Comparison of Different Sensitivity Analysis Techniques in Practice
Devin Francom, Abigael Nachtsheim
arxiv.org/abs/2506.11471

@arXiv_csGR_bot@mastoxiv.page
2025-06-23 08:18:10

FLUX.1 Kontext: Flow Matching for In-Context Image Generation and Editing in Latent Space
Black Forest Labs, Stephen Batifol, Andreas Blattmann, Frederic Boesel, Saksham Consul, Cyril Diagne, Tim Dockhorn, Jack English, Zion English, Patrick Esser, Sumith Kulal, Kyle Lacey, Yam Levi, Cheng Li, Dominik Lorenz, Jonas Müller, Dustin Podell, Robin Rombach, Harry Saini, Axel Sauer, Luke Smith

@arXiv_eessSY_bot@mastoxiv.page
2025-07-17 08:48:30

Advantages of Feedback in Distributed Data-Gathering for Accurate and Power-Efficient State-Estimation
Hyeongmin Choe, Soojean Han
arxiv.org/abs/2507.11924

@arXiv_mathMG_bot@mastoxiv.page
2025-06-17 10:29:29

Geometric Convergence to an Extreme Limit Space with Nonnegative Scalar Curvature
Christina Sormani, Wenchuan Tian, Wai-Ho Yeung
arxiv.org/abs/2506.12491

@arXiv_csLG_bot@mastoxiv.page
2025-07-14 08:15:52

Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions
Simon Matrenok, Skander Moalla, Caglar Gulcehre
arxiv.org/abs/2507.08068 arxiv.org/pdf/2507.08068 arxiv.org/html/2507.08068
arXiv:2507.08068v1 Announce Type: new
Abstract: Aligning large language models with pointwise absolute rewards has so far required online, on-policy algorithms such as PPO and GRPO. In contrast, simpler methods that can leverage offline or off-policy data, such as DPO and REBEL, are limited to learning from preference pairs or relative signals. To bridge this gap, we introduce \emph{Quantile Reward Policy Optimization} (QRPO), which learns from pointwise absolute rewards while preserving the simplicity and offline applicability of DPO-like methods. QRPO uses quantile rewards to enable regression to the closed-form solution of the KL-regularized RL objective. This reward yields an analytically tractable partition function, removing the need for relative signals to cancel this term. Moreover, QRPO scales with increased compute to estimate quantile rewards, opening a new dimension for pre-computation scaling. Empirically, QRPO consistently achieves top performance on chat and coding evaluations -- reward model scores, AlpacaEval 2, and LeetCode -- compared to DPO, REBEL, and SimPO across diverse datasets and 8B-scale models. Finally, we find that training with robust rewards instead of converting them to preferences induces less length bias.
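The key transformation the abstract describes is mapping a raw pointwise reward to its quantile under a reference distribution, which is what makes the partition function analytically tractable. A minimal sketch of that quantile step (an empirical estimate from sampled reference rewards; function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def quantile_reward(r, reference_rewards):
    """Empirical quantile of a raw reward r under a reference
    distribution, estimated from rewards of sampled completions.
    Returns a value in [0, 1]."""
    reference_rewards = np.asarray(reference_rewards)
    return float(np.mean(reference_rewards <= r))

# Toy example: four rewards sampled from the reference policy.
ref = [0.1, 0.4, 0.5, 0.9]
print(quantile_reward(0.5, ref))  # 0.75: three of four reference rewards are <= 0.5
```

More reference samples sharpen the quantile estimate, which is the "pre-computation scaling" dimension the abstract mentions; the regression to the closed-form KL-regularized solution is not shown here.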
toXiv_bot_toot

@arXiv_physicsoptics_bot@mastoxiv.page
2025-07-17 09:18:00

Optical and electrical probing of plasmonic metal-molecule interactions
Andrei Stefancu, Wenxuan Tang, Ming Fu, Jordan Edwards, Naomi J. Halas, Ross C. Schofield, Toby Severs Millard, Peter Nordlander, Johannes Lischner, Pilar Carro, Rupert Oulton, Emiliano Cortes
arxiv.org/abs/2507.12128

@arXiv_csLG_bot@mastoxiv.page
2025-07-14 09:13:22

Physics-Informed Neural Networks with Hard Nonlinear Equality and Inequality Constraints
Ashfaq Iftakher, Rahul Golder, M. M. Faruque Hasan
arxiv.org/abs/2507.08124 arxiv.org/pdf/2507.08124 arxiv.org/html/2507.08124
arXiv:2507.08124v1 Announce Type: new
Abstract: Traditional physics-informed neural networks (PINNs) do not guarantee strict constraint satisfaction. This is problematic in engineering systems where minor violations of governing laws can significantly degrade the reliability and consistency of model predictions. In this work, we develop KKT-Hardnet, a PINN architecture that enforces both linear and nonlinear equality and inequality constraints up to machine precision. It leverages a projection onto the feasible region through solving Karush-Kuhn-Tucker (KKT) conditions of a distance minimization problem. Furthermore, we reformulate the nonlinear KKT conditions using log-exponential transformation to construct a general sparse system with only linear and exponential terms, thereby making the projection differentiable. We apply KKT-Hardnet on both test problems and a real-world chemical process simulation. Compared to multilayer perceptrons and PINNs, KKT-Hardnet achieves higher accuracy and strict constraint satisfaction. This approach allows the integration of domain knowledge into machine learning towards reliable hybrid modeling of complex systems.