Primer to get you started with Optimization and Mathematical Programming in R #rstats
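The toot itself carries no code, so as a minimal sketch of the kind of example such a primer typically opens with (an assumption about its content, not an excerpt from the linked primer), here is unconstrained optimization with base R's optim():

# Minimal sketch, assuming the primer starts from base R's optim(); not taken from the primer itself.
# Minimize the Rosenbrock function, a standard smooth test problem with minimum at c(1, 1).
rosenbrock <- function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

fit <- optim(par = c(-1.2, 1),   # starting point
             fn = rosenbrock,    # objective function to minimize
             method = "BFGS")    # quasi-Newton method; the default would be Nelder-Mead

fit$par    # approximately c(1, 1), the known minimizer
fit$value  # objective value at the solution, near 0

For the "mathematical programming" side (linear and integer programs with constraints), packages such as lpSolve or ROI are the usual next step beyond optim().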
Source: Databricks obtained $1.8B in fresh debt and now has over $7B in debt ahead of a potential IPO; it raised $4B in December at a $134B valuation (Jordan Novet/CNBC)
https://www.cnbc.com/2026/01/23/databricks-obtains-1point8-billio…
NFL Week 17 fantasy football flex rankings: Top 150 playoff options for championship week https://www.nfl.com/news/nfl-week-17-fantasy-football-flex-rankings-top-150-playoff-options-for-championship-week
Rejecting Decades of Science, Vaccine Panel Chair Says Polio and Other Shots Should Be Optional (Apoorva Mandavilli/New York Times)
https://www.nytimes.com/2026/01/23/health/milhoan-vaccines-optional-polio.html
http://www.memeorandum.com/260123/p67#a260123p67
Open Market: Top options for OLB shift in Cowboys' defense https://www.dallascowboys.com/news/open-market-top-options-for-olb-shift-in-cowboys-defense
Extending $\mu$P: Spectral Conditions for Feature Learning Across Optimizers
Akshita Gupta, Marieme Ngom, Sam Foreman, Venkatram Vishwanath
https://arxiv.org/abs/2602.20937 https://arxiv.org/pdf/2602.20937 https://arxiv.org/html/2602.20937
arXiv:2602.20937v1 Announce Type: new
Abstract: Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization ($\mu$P) is a set of scaling rules which aims to make the optimal HPs independent of the model size, thereby allowing the HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger, target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor programming approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo and Muon. We implement our $\mu$P derivations on multiple benchmark models and demonstrate zero-shot learning rate transfer across increasing model width for the above optimizers. Further, we provide empirical insights into depth-scaling parameterization for these optimizers.
toXiv_bot_toot
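The abstract leans on "spectral conditions" without stating them. As I understand the prior spectral-conditions work it builds on (treat the exact form below as an assumption, not a quote from this paper), the condition is that each layer's weights and weight updates keep spectral norms on the order of $\sqrt{\text{fan-out}/\text{fan-in}}$: $\|W_\ell\|_{*} = \Theta\!\big(\sqrt{n_\ell / n_{\ell-1}}\big)$ and $\|\Delta W_\ell\|_{*} = \Theta\!\big(\sqrt{n_\ell / n_{\ell-1}}\big)$, where $W_\ell \in \mathbb{R}^{n_\ell \times n_{\ell-1}}$ is the weight matrix of layer $\ell$, $\Delta W_\ell$ is a single optimizer update, and $\|\cdot\|_{*}$ is the spectral norm. Keeping both norms at that scale absorbs the width-dependence of the optimal learning rate into the parameterization, which is what makes the zero-shot learning-rate transfer across width reported in the abstract possible; per the abstract, the paper's contribution is deriving the per-optimizer scalings that satisfy such conditions for AdamW, ADOPT, LAMB, Sophia, Shampoo and Muon.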
Workday reports Q4 revenue up 14.5% YoY to $2.53B, vs. $2.52B est., and forecasts FY 2027 subscription revenue below estimates; WDAY drops 9% after hours (Larry Dignan/Constellation Research)
https://www.constellationr.com/insights/ne
Vikings exploring all QB options for '26, VP says https://www.espn.com/nfl/story/_/id/48024854/vikings-exploring-all-qb-options-26-vp-brzezinski-says
Cowboys 5 options with George Pickens include trading star WR; here's how https://cowboyswire.usatoday.com/story/sports/nfl/cowboys/2026/01/25/cowboys-options-george-pickens-free-agency/88346904007/
Lynch believes 49ers, Williams 'on the same page' https://www.espn.com/nfl/story/_/id/48026679/john-lynch-believes-49ers-trent-williams-all-same-page-optimistic-deal