@… I agree: the measures should in principle cut down on excesses without hurting reasonable requests. But, as you say, we’ll see how it works out.
My fear is that funding for interdisciplinary projects will be even more difficult to obtain because the committees may tend even more to first fund their “core” disciplines.
Ex-Cowboy DeMarcus Lawrence getting pricey gift from former teammate for Super Bowl? https://cowboyswire.usatoday.com/story/sports/nfl/cowboys/2026/01/26/cowboys-dez-bryant-demarcus-lawrence-super-bowl-ro…
"The [resonant computing] Manifesto promises to fix everything that’s wrong on the internet right now. But you look at the authors and the signers, you’ll see the same guys who caused the present problems. These guys made it rich on the Torment Nexus and they’re now claiming they can fix it."
(Original title: The Resonant Computing Manifesto: same AI slop, same AI guys)
Guardian: Face transplants promised hope. Patients were put through the unthinkable https://www.theguardian.com/science/2025/nov/27/face-transplant-patients-results-outcomes "negative data is often buried, driven by funding battles and inst…
I laughed for a second. Then I got very sad.
(link: https://bsky.app/profile/abstracttesseract.bsky.social/post/3mewrojqa422b )
Extending $\mu$P: Spectral Conditions for Feature Learning Across Optimizers
Akshita Gupta, Marieme Ngom, Sam Foreman, Venkatram Vishwanath
https://arxiv.org/abs/2602.20937 https://arxiv.org/pdf/2602.20937 https://arxiv.org/html/2602.20937
arXiv:2602.20937v1 Announce Type: new
Abstract: Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization $(\mu$P$)$ is a set of scaling rules which aims to make the optimal HPs independent of the model size, thereby allowing the HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger, target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor programming approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo and Muon. We implement our $\mu$P derivations on multiple benchmark models and demonstrate zero-shot learning rate transfer across increasing model width for the above optimizers. Further, we provide empirical insights into depth-scaling parameterization for these optimizers.
toXiv_bot_toot
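For readers skimming the abstract: a minimal sketch of the spectral condition it builds on (my paraphrase of the Yang–Simon–Bernstein spectral-condition view of $\mu$P, not a statement taken from this paper; $n_\ell$ denotes the width of layer $\ell$). Roughly, feature learning is preserved as widths grow when every layer's weights and weight updates keep spectral norms of order

$$\|W_\ell\|_2 = \Theta\!\left(\sqrt{n_\ell / n_{\ell-1}}\right), \qquad \|\Delta W_\ell\|_2 = \Theta\!\left(\sqrt{n_\ell / n_{\ell-1}}\right).$$

An optimizer-specific $\mu$P then amounts to choosing initialization scales and per-layer learning rates so that that optimizer's updates satisfy the second condition at any width; for Adam-style updates this recovers the familiar $1/\text{fan-in}$ scaling of hidden-layer learning rates, which is what makes zero-shot learning-rate transfer across widths possible.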