Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@NFL@darktundra.xyz
2025-07-15 22:20:55

Rams' Puka Nacua feels like 'kid in the candy store' getting to learn from Davante Adams nfl.com/news/rams-puka-nacua-f

@arXiv_csLG_bot@mastoxiv.page
2025-07-14 08:19:51

Low-rank Momentum Factorization for Memory Efficient Training
Pouria Mahdavinia, Mehrdad Mahdavi
arxiv.org/abs/2507.08091 arxiv.org/pdf/2507.08091 arxiv.org/html/2507.08091
arXiv:2507.08091v1 Announce Type: new
Abstract: Fine-tuning large foundation models presents significant memory challenges due to stateful optimizers like AdamW, often requiring several times more GPU memory than inference. While memory-efficient methods like parameter-efficient fine-tuning (e.g., LoRA) and optimizer state compression exist, recent approaches like GaLore bridge these by using low-rank gradient projections and subspace moment accumulation. However, such methods may struggle with fixed subspaces or computationally costly offline resampling (e.g., requiring full-matrix SVDs). We propose Momentum Factorized SGD (MoFaSGD), which maintains a dynamically updated low-rank SVD representation of the first-order momentum, closely approximating its full-rank counterpart throughout training. This factorization enables a memory-efficient fine-tuning method that adaptively updates the optimization subspace at each iteration. Crucially, MoFaSGD leverages the computed low-rank momentum factors to perform efficient spectrally normalized updates, offering an alternative to subspace moment accumulation. We establish theoretical convergence guarantees for MoFaSGD, proving it achieves an optimal rate for non-convex stochastic optimization under standard assumptions. Empirically, we demonstrate MoFaSGD's effectiveness on large language model alignment benchmarks, achieving a competitive trade-off between memory reduction (comparable to LoRA) and performance compared to state-of-the-art low-rank optimization methods. Our implementation is available at github.com/pmahdavi/MoFaSGD.
toXiv_bot_toot
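The abstract above describes keeping a low-rank SVD factorization of the first-order momentum and using it for spectrally normalized updates. A minimal NumPy sketch of that idea follows; it is not the paper's algorithm (see the linked repo for that): the function name is invented, and for clarity it re-factorizes with a full SVD each step, which is exactly the cost MoFaSGD avoids by updating the factors incrementally.

```python
import numpy as np

def low_rank_momentum_step(W, G, U, S, Vt, lr=0.01, beta=0.9, rank=4):
    """Conceptual sketch: refresh a rank-`rank` factorization U diag(S) Vt
    of the momentum with the new gradient G, then apply a spectrally
    normalized update U @ Vt to the weight matrix W."""
    # Reconstruct the current low-rank momentum and blend in the gradient.
    M = beta * (U * S) @ Vt + (1.0 - beta) * G
    # Re-factorize and truncate back to the target rank.
    # (Illustrative only: a full SVD per step is what MoFaSGD avoids.)
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    U, S, Vt = U[:, :rank], S[:rank], Vt[:rank, :]
    # Spectrally normalized update: U @ Vt has all singular values equal
    # to 1, so every retained direction moves at the same scale.
    W = W - lr * (U @ Vt)
    return W, U, S, Vt
```

The memory saving comes from storing only U, S, and Vt (O((m+n)·rank) numbers) instead of a dense m×n momentum matrix, while the subspace spanned by the factors adapts every iteration.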

@arXiv_condmatmtrlsci_bot@mastoxiv.page
2025-07-14 08:46:02

Sensitive infrared surface photovoltage in quasi-equilibrium in a layered semiconductor at low-intensity low-temperature condition
Qiang Wan, Keming Zhao, Guohao Dong, Enting Li, Tianyu Yang, Hao Wang, Yaobo Huang, Yao Wen, Yiwei Li, Jun He, Youguo Shi, Hong Ding, Nan Xu
arxiv.org/abs/2507.08279

@arXiv_grqc_bot@mastoxiv.page
2025-07-14 08:34:32

Highly accurate simulations of asymmetric black-hole scattering and cross validation of effective-one-body models
Oliver Long, Harald P. Pfeiffer, Alessandra Buonanno, Gustav Uhre Jakobsen, Gustav Mogull, Antoni Ramos-Buades, Hannes R. Rüter, Lawrence E. Kidder, Mark A. Scheel
arxiv.org/abs/2507.08071

Acting FAA Administrator Chris Rocheleau told the House Appropriations Committee that the Federal Aviation Administration plans to replace its aging air traffic control systems, which still rely on floppy disks and Windows 95 computers!
The agency has issued a Request for Information to gather proposals from companies willing to tackle the massive infrastructure overhaul.

@arXiv_physicsoptics_bot@mastoxiv.page
2025-07-14 08:26:12

Massively parallel and universal approximation of nonlinear functions using diffractive processors
Md Sadman Sakib Rahman, Yuhang Li, Xilin Yang, Shiqi Chen, Aydogan Ozcan
arxiv.org/abs/2507.08253

@Techmeme@techhub.social
2025-06-02 13:11:08

Filing: neobank Chime plans to sell 26M shares in its IPO at $24 to $26, giving it a valuation between $10.3B and $11.1B; its two co-founders own 4% to 5% each (Cory Weinberg/The Information)
theinformation.com/briefings/c

@BBC6MusicBot@mastodonapp.uk
2025-07-14 09:26:50

🇺🇦 #NowPlaying on #BBC6Music's #LaurenLaverne
Unknown Mortal Orchestra:
🎵 Swim and Sleep (Like A Shark)
#UnknownMortalOrchestra
unknown-mortal-orchestra.bandc
open.spotify.com/track/265ehI4

@kcase@mastodon.social
2025-07-08 22:39:06

In last week's roadmap update, I mentioned that we were just about ready for folks to take OmniFocus 4.7 through its paces in public test builds.
Well, now we're ready! The OmniFocus 4.7 public test introduces Planned dates, mutually exclusive tags, repeat counts and end dates, time-sensitive notifications, and more.
We look forward to your feedback!

@arXiv_grqc_bot@mastoxiv.page
2025-07-14 08:44:32

A model-agnostic gravitational-wave background characterization algorithm
Taylor Knapp, Patrick M. Meyers, Arianna I. Renzini
arxiv.org/abs/2507.08095