Tootfinder

Opt-in global Mastodon full-text search. Join the index!

No exact results. Similar results found.
@thijs_lucas@norden.social
2025-06-15 17:52:39

It should be mandatory to use parking space for vehicles efficiently.
One in a garage, or eight in half of one.
#Stellplatzsatzung2_0 #Effizienz

In one garage, four bicycles hang on the wall. Four more stand on the floor between them.

This way, only the rear part of the garage is used.
@heiseonline@social.heise.de
2025-08-12 06:19:00

Efficient perovskite solar cells replace disposable batteries in indoor devices
Modified perovskite solar cells work more efficiently indoors and can power small electronics, making disposable batteries unnecessary.

@arXiv_quantph_bot@mastoxiv.page
2025-07-14 09:47:32

Enhancing Decoding Performance using Efficient Error Learning
Pavithran Iyer, Aditya Jain, Stephen D. Bartlett, Joseph Emerson
arxiv.org/abs/2507.08536

@Techmeme@techhub.social
2025-08-14 18:05:49

Google announces Gemma 3 270M, a compact model designed for task-specific fine-tuning with strong capabilities in instruction following and text structuring (Google Developers Blog)
developers.googleblog.com/en/i

@arXiv_csCV_bot@mastoxiv.page
2025-08-15 10:23:12

An Efficient Model-Driven Groupwise Approach for Atlas Construction
Ziwei Zou, Bei Zou, Xiaoyan Kui, Wenqi Lu, Haoran Dou, Arezoo Zakeri, Timothy Cootes, Alejandro F Frangi, Jinming Duan
arxiv.org/abs/2508.10743

@arXiv_csIT_bot@mastoxiv.page
2025-08-14 09:01:22

Non-Orthogonal Affine Frequency Division Multiplexing for Spectrally Efficient High-Mobility Communications
Qin Yi, Zilong Liu, Leila Musavian, Zeping Sui
arxiv.org/abs/2508.09782

@arXiv_csLG_bot@mastoxiv.page
2025-07-14 08:19:51

Low-rank Momentum Factorization for Memory Efficient Training
Pouria Mahdavinia, Mehrdad Mahdavi
arxiv.org/abs/2507.08091 arxiv.org/pdf/2507.08091 arxiv.org/html/2507.08091
arXiv:2507.08091v1 Announce Type: new
Abstract: Fine-tuning large foundation models presents significant memory challenges due to stateful optimizers like AdamW, often requiring several times more GPU memory than inference. While memory-efficient methods like parameter-efficient fine-tuning (e.g., LoRA) and optimizer state compression exist, recent approaches like GaLore bridge these by using low-rank gradient projections and subspace moment accumulation. However, such methods may struggle with fixed subspaces or computationally costly offline resampling (e.g., requiring full-matrix SVDs). We propose Momentum Factorized SGD (MoFaSGD), which maintains a dynamically updated low-rank SVD representation of the first-order momentum, closely approximating its full-rank counterpart throughout training. This factorization enables a memory-efficient fine-tuning method that adaptively updates the optimization subspace at each iteration. Crucially, MoFaSGD leverages the computed low-rank momentum factors to perform efficient spectrally normalized updates, offering an alternative to subspace moment accumulation. We establish theoretical convergence guarantees for MoFaSGD, proving it achieves an optimal rate for non-convex stochastic optimization under standard assumptions. Empirically, we demonstrate MoFaSGD's effectiveness on large language model alignment benchmarks, achieving a competitive trade-off between memory reduction (comparable to LoRA) and performance compared to state-of-the-art low-rank optimization methods. Our implementation is available at github.com/pmahdavi/MoFaSGD.
toXiv_bot_toot
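
A minimal NumPy sketch of the momentum-factorization idea described in the abstract above. This is a hypothetical illustration, not the authors' MoFaSGD implementation (their code is at github.com/pmahdavi/MoFaSGD); for readability it materializes the full momentum matrix before re-factorizing, which a genuinely memory-efficient version would avoid.

import numpy as np

def low_rank_momentum_step(W, grad, U, S, Vt, lr=1e-3, beta=0.9, rank=4):
    # Reconstruct the current rank-r momentum estimate (full matrix, for clarity only).
    momentum = U @ np.diag(S) @ Vt
    # Exponential moving average with the fresh gradient.
    momentum = beta * momentum + (1.0 - beta) * grad
    # Re-factorize and truncate so only the rank-r factors need to be stored.
    U, S, Vt = np.linalg.svd(momentum, full_matrices=False)
    U, S, Vt = U[:, :rank], S[:rank], Vt[:rank, :]
    # Spectrally normalized update: keep the singular directions, drop their scale.
    W = W - lr * (U @ Vt)
    return W, U, S, Vt

# Usage: initialize the factors from the first gradient, then iterate.
m, n, r = 64, 32, 4
W = np.random.randn(m, n) * 0.01
g = np.random.randn(m, n)
U, S, Vt = np.linalg.svd(g, full_matrices=False)
U, S, Vt = U[:, :r], S[:r], Vt[:r, :]
W, U, S, Vt = low_rank_momentum_step(W, g, U, S, Vt, rank=r)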

@arXiv_csCV_bot@mastoxiv.page
2025-07-14 10:08:02

CLiFT: Compressive Light-Field Tokens for Compute-Efficient and Adaptive Neural Rendering
Zhengqing Wang, Yuefan Wu, Jiacheng Chen, Fuyang Zhang, Yasutaka Furukawa
arxiv.org/abs/2507.08776

@arXiv_csCV_bot@mastoxiv.page
2025-08-14 08:02:02

$\Delta$-AttnMask: Attention-Guided Masked Hidden States for Efficient Data Selection and Augmentation
Jucheng Hu, Suorong Yang, Dongzhan Zhou
arxiv.org/abs/2508.09199

@arXiv_csCV_bot@mastoxiv.page
2025-07-14 10:03:02

Generalizable 7T T1-map Synthesis from 1.5T and 3T T1 MRI with an Efficient Transformer Model
Zach Eidex, Mojtaba Safari, Tonghe Wang, Vanessa Wildman, David S. Yu, Hui Mao, Erik Middlebrooks, Aparna Kesewala, Xiaofeng Yang
arxiv.org/abs/2507.08655