Tootfinder

Opt-in global Mastodon full text search. Join the index!

@mxp@mastodon.acm.org
2026-02-02 10:37:27

@… I agree: the measures should in principle cut down on excesses without hurting reasonable requests. But, as you say, we’ll see how it works out.
My fear is that funding for interdisciplinary projects will be even more difficult to obtain because the committees may tend even more to first fund their “core” disciplines.

@cowboys@darktundra.xyz
2026-01-26 17:36:45

Ex-Cowboy DeMarcus Lawrence getting pricey gift from former teammate for Super Bowl? cowboyswire.usatoday.com/story

@tante@tldr.nettime.org
2025-12-18 10:20:04

"The [resonant computing] Manifesto promises to fix everything that’s wrong on the internet right now. But [if] you look at the authors and the signers, you’ll see the same guys who caused the present problems. These guys made it rich on the Torment Nexus and they’re now claiming they can fix it."
(Original title: The Resonant Computing Manifesto: same AI slop, same AI guys)

@seeingwithsound@mas.to
2025-12-13 11:55:37

Guardian: Face transplants promised hope. Patients were put through the unthinkable theguardian.com/science/2025/n "negative data is often buried, driven by funding battles and inst…

@tante@tldr.nettime.org
2026-02-16 13:17:51

I laughed for a second. Then I got very sad.
(link: bsky.app/profile/abstracttesse )

Screenshot of a Bluesky Post:
It includes a screenshot of a Post by Simon Willison (a very well-known AI in coding fan and promoter) talking about how he and some friends came up with the term "deep blue" for "the sense of psychological ennui leading into existential dread that many software developers are feeling thanks to LLMs right now"

The poster (Abstract Tesseract) adds the remark:
"AI enthusiast podcasters recreating labor alienation from first principles while repeatedly pulling th…

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:39:11

Extending $\mu$P: Spectral Conditions for Feature Learning Across Optimizers
Akshita Gupta, Marieme Ngom, Sam Foreman, Venkatram Vishwanath
arxiv.org/abs/2602.20937 arxiv.org/pdf/2602.20937 arxiv.org/html/2602.20937
arXiv:2602.20937v1 Announce Type: new
Abstract: Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization ($\mu$P) is a set of scaling rules that aims to make the optimal HPs independent of the model size, thereby allowing the HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger, target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor programming approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo, and Muon. We implement our $\mu$P derivations on multiple benchmark models and demonstrate zero-shot learning rate transfer across increasing model width for the above optimizers. Further, we provide empirical insights into depth-scaling parameterization for these optimizers.
toXiv_bot_toot
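To make the abstract's idea of zero-shot learning-rate transfer concrete, here is a minimal sketch of one commonly cited $\mu$P prescription for Adam-style optimizers: the input layer keeps its base learning rate while hidden and output layers scale theirs by base_width / width, so HPs tuned at a small width carry over to a wider model. This is an illustrative assumption about the scaling rule, not the paper's implementation; the function name and layer labels are hypothetical.

```python
# Illustrative muP-style learning-rate scaling for an Adam-like optimizer.
# Assumed rule (common in muP write-ups, NOT taken from this paper):
#   - input layer: LR stays at base_lr,
#   - hidden and output layers: LR scales by base_width / width
#     (i.e. 1/fan_in relative to the tuned base model).

def mup_adam_lrs(base_lr, base_width, width, n_hidden_layers):
    """Return per-layer learning rates for a model of the given width,
    given HPs tuned on a (cheaper) model of base_width."""
    ratio = base_width / width  # < 1 when widening the model
    lrs = {"input": base_lr}
    for i in range(n_hidden_layers):
        lrs[f"hidden_{i}"] = base_lr * ratio
    lrs["output"] = base_lr * ratio
    return lrs
```

Under this rule, a base LR of 1e-3 tuned at width 128 becomes 2.5e-4 for the hidden layers of a width-512 model, while the input layer keeps 1e-3.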