Tootfinder

Opt-in global Mastodon full-text search. Join the index!

No exact results. Similar results found.
@datascience@genomic.social
2026-02-24 11:00:01

Primer to get you started with Optimization and Mathematical Programming in R #rstats
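
For readers unfamiliar with the term, "mathematical programming" here means constrained optimization (linear, quadratic, integer, and so on). The primer linked above is in R; purely as an illustration of the kind of problem it covers, here is a minimal linear program sketched in Python with SciPy. The objective and constraints are made up for this example.

from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so we negate the objective coefficients.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print("optimal (x, y):", res.x)     # -> [4. 0.]
print("optimal value :", -res.fun)  # -> 12.0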

@Techmeme@techhub.social
2026-01-23 19:46:04

Source: Databricks obtained $1.8B in fresh debt and now has over $7B in debt ahead of a potential IPO; it raised $4B in December at a $134B valuation (Jordan Novet/CNBC)
cnbc.com/2026/01/23/databricks

@NFL@darktundra.xyz
2025-12-23 20:45:39

NFL Week 17 fantasy football flex rankings: Top 150 playoff options for championship week nfl.com/news/nfl-week-17-fanta

@memeorandum@universeodon.com
2026-01-23 18:20:48

Rejecting Decades of Science, Vaccine Panel Chair Says Polio and Other Shots Should Be Optional (Apoorva Mandavilli/New York Times)
nytimes.com/2026/01/23/health/
memeorandum.com/260123/p67#a26

@cowboys@darktundra.xyz
2026-02-23 21:04:09

Open Market: Top options for OLB shift in Cowboys' defense dallascowboys.com/news/open-ma

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:39:11

Extending $\mu$P: Spectral Conditions for Feature Learning Across Optimizers
Akshita Gupta, Marieme Ngom, Sam Foreman, Venkatram Vishwanath
arxiv.org/abs/2602.20937 arxiv.org/pdf/2602.20937 arxiv.org/html/2602.20937
arXiv:2602.20937v1 Announce Type: new
Abstract: Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization ($\mu$P) is a set of scaling rules that aims to make the optimal HPs independent of the model size, thereby allowing HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor programming approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo, and Muon. We implement our $\mu$P derivations on multiple benchmark models and demonstrate zero-shot learning-rate transfer across increasing model width for the above optimizers. Further, we provide empirical insights into depth-scaling parameterization for these optimizers.
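As a rough illustration of the width scaling the abstract describes, here is a minimal Python sketch of the standard $\mu$P prescriptions for an Adam-style optimizer, as implied by the spectral condition $\|W\|_2 \asymp \sqrt{\text{fan\_out}/\text{fan\_in}}$. This is not the paper's code: the function name, the base hyperparameter values, and the three-layer framing are assumptions for illustration, and the rules shown are the previously known Adam ones; the paper's contribution is deriving analogous rules for AdamW, ADOPT, LAMB, Sophia, Shampoo, and Muon.

import math

# A minimal sketch (not the paper's implementation) of standard muP
# scaling for an Adam-style optimizer. Function name and base values
# are illustrative assumptions.

def mup_scaled_hparams(base_lr, base_std, base_width, width, layer):
    """Return (learning_rate, init_std) for one weight matrix at `width`,
    given values tuned at `base_width`.

    `layer` is "input" (fan_in fixed, fan_out grows), "hidden" (both
    grow with width), or "output" (fan_in grows, fan_out fixed)."""
    m = width / base_width  # width multiplier
    if layer == "input":
        return base_lr, base_std                     # unchanged
    if layer == "hidden":
        return base_lr / m, base_std / math.sqrt(m)  # lr ~ 1/fan_in, var ~ 1/fan_in
    if layer == "output":
        return base_lr / m, base_std / m             # lr ~ 1/fan_in, var ~ 1/fan_in^2
    raise ValueError(f"unknown layer kind: {layer!r}")

# Example: transfer hyperparameters tuned at width 256 to width 4096.
for kind in ("input", "hidden", "output"):
    lr, std = mup_scaled_hparams(1e-3, 0.02, 256, 4096, kind)
    print(f"{kind:>6}: lr={lr:.2e}  init_std={std:.4g}")

Under rules like these, a learning rate tuned at the small width stays near-optimal as width grows, which is the zero-shot transfer the abstract refers to.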

@Techmeme@techhub.social
2026-02-25 07:40:54

Workday reports Q4 revenue up 14.5% YoY to $2.53B, vs. $2.52B est., and forecasts FY 2027 subscription revenue below estimates; WDAY drops 9% after hours (Larry Dignan/Constellation Research)
constellationr.com/insights/ne

@NFL@darktundra.xyz
2026-02-24 20:49:46

Vikings exploring all QB options for '26, VP says espn.com/nfl/story/_/id/480248

@cowboys@darktundra.xyz
2026-01-25 11:56:12

Cowboys' 5 options with George Pickens include trading star WR; here's how cowboyswire.usatoday.com/story

@NFL@darktundra.xyz
2026-02-25 03:54:14

Lynch believes 49ers, Williams 'on the same page' espn.com/nfl/story/_/id/480266