Tootfinder

Opt-in global Mastodon full text search. Join the index!

So Bad Bunny
isn't a good "role model"
but Kid Rock is?❓
[Verse 3: Kid Rock & Joe-C]
On my cell phone I'm paid, G,
can't call me, just page me
👉Young ladies, young ladies,
I like 'em underage, see
Some say that's statutory
💥(But I say it's mandatory)

@arXiv_physicsfludyn_bot@mastoxiv.page
2026-02-27 08:32:10

From synthetic turbulence to true solutions: A deep diffusion model for discovering periodic orbits in the Navier-Stokes equations
Jeremy P Parker, Tobias M Schneider
arxiv.org/abs/2602.23181 arxiv.org/pdf/2602.23181 arxiv.org/html/2602.23181
arXiv:2602.23181v1 Announce Type: new
Abstract: Generative artificial intelligence has shown remarkable success in synthesizing data that mimic complex real-world systems, but its potential role in the discovery of mathematically meaningful structures in physical models remains underexplored. In this work, we demonstrate how a generative diffusion model can be used to uncover previously unknown solutions of a nonlinear partial differential equation: the two-dimensional Navier-Stokes equations in a turbulent regime. Trained on data from a direct numerical simulation of turbulence, the model learns to generate time series that resemble physically plausible trajectories. By carefully modifying the temporal structure of the model and enforcing the symmetries of the governing equations, we produce synthetic trajectories that are periodic in time, despite the fact that the training data did not contain periodic trajectories. These synthetic trajectories are then refined into true solutions using an iterative solver, yielding 111 new periodic orbits (POs) with very short periods. Our results reveal a previously unobserved richness in the PO structure of this system and suggest a broader role for generative AI: not as a replacement for simulation and existing solvers, but as a complementary tool for navigating the complex solution spaces of nonlinear dynamical systems.
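The refinement step the abstract describes — turning an approximately periodic trajectory into a true periodic orbit with an iterative solver — can be illustrated on a toy system. This is not the paper's solver or equations: it is a minimal Newton iteration on the Hopf normal form (a hypothetical stand-in for Navier-Stokes), solving for an initial condition and period such that the flow map returns to its starting point.

```python
import numpy as np

def f(s):
    """Hopf normal form: attracting limit cycle on the unit circle, period 2*pi."""
    x, y = s
    r2 = x * x + y * y
    return np.array([x - y - x * r2, x + y - y * r2])

def flow(s0, T, n=400):
    """Integrate the ODE for time T with classical RK4."""
    s = np.array(s0, float)
    h = T / n
    for _ in range(n):
        k1 = f(s)
        k2 = f(s + 0.5 * h * k1)
        k3 = f(s + 0.5 * h * k2)
        k4 = f(s + h * k3)
        s = s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

def residual(u):
    """Periodicity condition: flow for time T must return to the start.
    The phase is pinned by fixing y0 = 0, leaving unknowns (x0, T)."""
    x0, T = u
    s0 = np.array([x0, 0.0])
    return flow(s0, T) - s0

def newton_po(u0, tol=1e-10, maxit=50):
    """Newton iteration with a finite-difference Jacobian."""
    u = np.array(u0, float)
    for _ in range(maxit):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        J = np.zeros((2, 2))
        eps = 1e-7
        for j in range(2):
            du = np.zeros(2)
            du[j] = eps
            J[:, j] = (residual(u + du) - r) / eps
        u = u - np.linalg.solve(J, r)
    return u

# Start from a rough guess (as the paper starts from a synthetic trajectory)
x0, T = newton_po([1.1, 6.0])
```

The converged values recover the known orbit (x0 near 1, period near 2*pi), mirroring how an inexact generated trajectory can seed convergence to an exact solution.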
toXiv_bot_toot

@me@mastodon.peterjanes.ca
2025-12-21 06:10:03

Immediate unfollow. aljazeera.com/news/2025/12/21/

@Techmeme@techhub.social
2025-12-17 11:50:37

Source: Tencent tells staff that Yao Shunyu, an ex-OpenAI researcher who joined in September, is now its chief AI scientist, reporting to President Martin Lau (Juro Osawa/The Information)
theinformation.com/briefings/t

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:36:41

Understanding the Role of Rehearsal Scale in Continual Learning under Varying Model Capacities
JinLi He, Liang Bai, Xian Yang
arxiv.org/abs/2602.20791 arxiv.org/pdf/2602.20791 arxiv.org/html/2602.20791
arXiv:2602.20791v1 Announce Type: new
Abstract: Rehearsal is one of the key techniques for mitigating catastrophic forgetting and has been widely adopted in continual learning algorithms due to its simplicity and practicality. However, the theoretical understanding of how rehearsal scale influences learning dynamics remains limited. To address this gap, we formulate rehearsal-based continual learning as a multidimensional effectiveness-driven iterative optimization problem, providing a unified characterization across diverse performance metrics. Within this framework, we derive a closed-form analysis of adaptability, memorability, and generalization from the perspective of rehearsal scale. Our results uncover several intriguing and counterintuitive findings. First, rehearsal can impair a model's adaptability, in sharp contrast to its traditionally recognized benefits. Second, increasing the rehearsal scale does not necessarily improve memory retention. When tasks are similar and noise levels are low, the memory error exhibits a diminishing lower bound. Finally, we validate these insights through numerical simulations and extended analyses on deep neural networks across multiple real-world datasets, revealing statistical patterns of rehearsal mechanisms in continual learning.
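The "rehearsal scale" the abstract analyzes is, in practice, the capacity of a replay memory. A minimal sketch (my illustration, not the paper's method) of a fixed-capacity rehearsal buffer using reservoir sampling, where the scale is a single capacity parameter:

```python
import random

class RehearsalBuffer:
    """Fixed-capacity rehearsal memory via reservoir sampling: every
    example seen so far has equal probability of being retained, so the
    rehearsal scale is controlled solely by `capacity`."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            # Replace a stored item with probability capacity / n_seen
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = item

    def sample(self, k):
        """Draw a rehearsal minibatch to mix with current-task data."""
        return self.rng.sample(self.data, min(k, len(self.data)))

# Stream two hypothetical tasks of 1000 examples each through the buffer
buf = RehearsalBuffer(capacity=50)
for task in range(2):
    for i in range(1000):
        buf.add((task, i))
```

After the stream, the buffer holds a bounded, roughly uniform mix of both tasks; a training loop would interleave `buf.sample(k)` with each new-task batch.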

@UP8@mastodon.social
2025-12-10 09:07:03

🧬 Largest RNA language model to date offers new way to predict behavior and boost drug discovery
#rna

@tiotasram@kolektiva.social
2026-01-19 21:51:49

Just finished "Match Point!" by Maddie Gallegos, an excellent graphic novel about racquetball, dumpster diving, best friends, and pressure from Dad. The characters and their romance are super cute, and while I'm sure some might find the ending too happy, I'm usually fine with seeing the aspirational version of relationships because it can serve as a good role model, while other narratives can help explain how to handle worse outcomes.
#AmReading #ReadingNow

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:38:41

On the Generalization Behavior of Deep Residual Networks From a Dynamical System Perspective
Jinshu Huang, Mingfei Sun, Chunlin Wu
arxiv.org/abs/2602.20921 arxiv.org/pdf/2602.20921 arxiv.org/html/2602.20921
arXiv:2602.20921v1 Announce Type: new
Abstract: Deep neural networks (DNNs) have significantly advanced machine learning, with model depth playing a central role in their successes. The dynamical system modeling approach has recently emerged as a powerful framework, offering new mathematical insights into the structure and learning behavior of DNNs. In this work, we establish generalization error bounds for both discrete- and continuous-time residual networks (ResNets) by combining Rademacher complexity, flow maps of dynamical systems, and the convergence behavior of ResNets in the deep-layer limit. The resulting bounds are of order $O(1/\sqrt{S})$ with respect to the number of training samples $S$, and include a structure-dependent negative term, yielding depth-uniform and asymptotic generalization bounds under milder assumptions. These findings provide a unified understanding of generalization across both discrete- and continuous-time ResNets, helping to close the gap in both the order of sample complexity and assumptions between the discrete- and continuous-time settings.
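The abstract's headline result is a sample-complexity bound of order $O(1/\sqrt{S})$ with a structure-dependent negative term. A purely schematic shape of such a bound (my illustration only, not the paper's statement; $C_{\mathrm{flow}}$ and $\Delta_{\mathrm{struct}}$ are placeholder constants) is:

```latex
\mathbb{E}\big[R(f)\big] - \widehat{R}_S(f)
  \;\lesssim\; \frac{C_{\mathrm{flow}}}{\sqrt{S}} \;-\; \Delta_{\mathrm{struct}},
\qquad C_{\mathrm{flow}},\, \Delta_{\mathrm{struct}} \ge 0,
```

where $R$ is the population risk, $\widehat{R}_S$ the empirical risk over $S$ samples, $C_{\mathrm{flow}}$ collects flow-map and Rademacher-complexity factors, and the negative $\Delta_{\mathrm{struct}}$ term is what makes the bound depth-uniform rather than growing with depth.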

@arXiv_physicsfludyn_bot@mastoxiv.page
2026-02-26 11:51:36

Crosslisted article(s) found for physics.flu-dyn. arxiv.org/list/physics.flu-dyn
[1/1]:
- Physics Constrained Neural Collision Operators for Variable Hard Sphere Surrogates and Ab Initio ...
Ehsan Roohi, Ahmad Shoja-Sani, Stefan Stefanov
arxiv.org/abs/2602.21244 mastoxiv.page/@arXiv_physicsco
- Chapman-Enskog expansion for chirally colliding disks
Ruben Lier, Paweł Matus
arxiv.org/abs/2602.21367 mastoxiv.page/@arXiv_condmatso
- Passive freeze-out of the Richtmyer-Meshkov instability
J. Strucka, et al.
arxiv.org/abs/2602.21375 mastoxiv.page/@arXiv_physicspl
- A CFD-Based Investigation of Local Luminal Curvature as a Primary Determinant of Hemodynamic Envi...
Marcella P. A. Dallavanzi, José L. Gasche, Iago L. Oliveira
arxiv.org/abs/2602.21409 mastoxiv.page/@arXiv_physicsme
- Unstable magnetic reconnection self-generates turbulence
Nick Williams, Alessandro De Rosis, Alex Skillen
arxiv.org/abs/2602.21422 mastoxiv.page/@arXiv_physicspl
- Out-of-time-ordered correlators for turbulent fields: a quantum-classical correspondence
Motoki Nakata
arxiv.org/abs/2602.21710 mastoxiv.page/@arXiv_physicspl
- Particle, kinetic and hydrodynamic models for sea ice floes. Part II: Rotating floes with nonline...
Quanling Deng, Seung-Yeal Ha, Jaemoon Lee
arxiv.org/abs/2602.21972 mastoxiv.page/@arXiv_mathph_bo
- A consistent phase-averaged model of the interactions between surface gravity waves and currents
Jacques Vanneste, William R. Young
arxiv.org/abs/2602.21976 mastoxiv.page/@arXiv_physicsao
- Hydrodynamics of Dense Active Fluids: Turbulence-Like States and the Role of Advected Activity
Sandip Sahoo, Siddhartha Mukherjee, Samriddhi Sankar Ray
arxiv.org/abs/2602.22044 mastoxiv.page/@arXiv_condmatso
- Surrogate models for Rock-Fluid Interaction: A Grid-Size-Invariant Approach
Pinheiro, Guo, Menke, Joshi, Heaney, ElSheikh, Pain
arxiv.org/abs/2602.22188 mastoxiv.page/@arXiv_csLG_bot/