This fan predicted Drake Maye's stardom when the Patriots QB was 9 years old https://www.nytimes.com/athletic/7000684/2026/01/26/drake-maye-fan-prediction-2012-super-bowl/
The Diffusion Duality, Chapter II: $\Psi$-Samplers and Efficient Curriculum
Justin Deschenaux, Caglar Gulcehre, Subham Sekhar Sahoo
https://arxiv.org/abs/2602.21185 https://arxiv.org/pdf/2602.21185 https://arxiv.org/html/2602.21185
arXiv:2602.21185v1 Announce Type: new
Abstract: Uniform-state discrete diffusion models excel at few-step generation and guidance due to their ability to self-correct, making them preferred over autoregressive or masked diffusion models in these settings. However, their sampling quality plateaus with ancestral samplers as the number of steps increases. We introduce a family of Predictor-Corrector (PC) samplers for discrete diffusion that generalize prior methods and apply to arbitrary noise processes. When paired with uniform-state diffusion, our samplers outperform ancestral sampling on both language and image modeling, achieving lower generative perplexity at matched unigram entropy on OpenWebText and better FID/IS scores on CIFAR-10. Crucially, unlike conventional samplers, our PC methods continue to improve with more sampling steps. Taken together, these findings call into question the assumption that masked diffusion is the inevitable future of diffusion-based language modeling. Beyond sampling, we develop a memory-efficient curriculum for the Gaussian relaxation training phase, reducing training time by 25% and memory by 33% compared to Duo while maintaining comparable perplexity on OpenWebText and LM1B and strong downstream performance. We release code, checkpoints, and a video tutorial at: https://s-sahoo.com/duo-ch2
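The abstract describes predictor-corrector sampling for uniform-state discrete diffusion: a predictor takes an ancestral denoising step, then a corrector re-noises some tokens toward the uniform prior and re-denoises them, which is what lets the model self-correct. A toy sketch of that loop, not the paper's actual sampler; `denoiser`, the re-noise rate, and all other names here are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sample_categorical(rng, probs):
    # Inverse-CDF sampling, one draw per token position.
    u = rng.random(probs.shape[:-1] + (1,))
    idx = (probs.cumsum(axis=-1) < u).sum(axis=-1)
    return np.minimum(idx, probs.shape[-1] - 1)

def pc_sample(denoiser, num_tokens, vocab_size, num_steps,
              corrector_steps=1, seed=0):
    """Toy predictor-corrector sampler for uniform-state discrete
    diffusion. `denoiser(x, t)` is assumed to return per-token logits
    of shape (num_tokens, vocab_size); everything here is illustrative."""
    rng = np.random.default_rng(seed)
    # Uniform-state diffusion: the prior at t=1 is uniform over the vocab.
    x = rng.integers(0, vocab_size, size=num_tokens)
    for step in range(num_steps, 0, -1):
        t = step / num_steps
        # Predictor: ancestral step from the model's denoising distribution.
        x = sample_categorical(rng, softmax(denoiser(x, t)))
        # Corrector: re-noise a t-dependent fraction of tokens back toward
        # the uniform prior, then re-denoise, letting the model self-correct.
        for _ in range(corrector_steps):
            renoise = rng.random(num_tokens) < t
            x = np.where(renoise, rng.integers(0, vocab_size, num_tokens), x)
            x = sample_categorical(rng, softmax(denoiser(x, t)))
    return x

# Smoke test with a trivial "denoiser" that always prefers token 3.
logits = np.full((16, 8), -5.0)
logits[:, 3] = 5.0
out = pc_sample(lambda x, t: logits, num_tokens=16, vocab_size=8, num_steps=4)
print(out.shape)
```

The corrector inner loop is the self-correction mechanism the abstract credits for uniform-state models' few-step quality; with a masked process there is no stationary uniform prior to re-noise toward, which is why these samplers are paired with uniform-state diffusion.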
toXiv_bot_toot
🇺🇦 #NowPlaying on KEXP's #90TEEN
Just Mustard:
🎵 Dreamer
#JustMustard
https://justmustard.bandcamp.com/track/dreamer
https://open.spotify.com/track/4Yo311eRD6OxEYeaBXkdgE
An interview with Semafor's Justin Smith on growing its events business around tentpoles and how DC-focused media benefits from the corporate affairs market (Peter Kafka/Business Insider)
https://www.businessinsider.com/semafor-ju…
"You get paid $45 million a year to keep people fearful so they don't pay attention to the CEOs taking their healthcare. You don't care about the American people. You want them to be afraid of immigrants because all you have is fear"π
Hannity probably going to fire booker after this Justin Jones hit - National Zero
https://nationalzero.com/2026/01/20/hannity-probably-going-to-fire-booker-after-this-justin-jones-hit/
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[1/6]:
- Towards Attributions of Input Variables in a Coalition
Xinhao Zheng, Huiqi Deng, Quanshi Zhang
https://arxiv.org/abs/2309.13411
- Knee or ROC
Veronica Wendt, Jacob Steiner, Byunggu Yu, Caleb Kelly, Justin Kim
https://arxiv.org/abs/2401.07390
- Rethinking Disentanglement under Dependent Factors of Variation
Antonio Almudévar, Alfonso Ortega
https://arxiv.org/abs/2408.07016 https://mastoxiv.page/@arXiv_csLG_bot/112959235461894530
- Minibatch Optimal Transport and Perplexity Bound Estimation in Discrete Flow Matching
Etrit Haxholli, Yeti Z. Gurbuz, Ogul Can, Eli Waxman
https://arxiv.org/abs/2411.00759 https://mastoxiv.page/@arXiv_csLG_bot/113423933393275133
- Predicting Subway Passenger Flows under Incident Situation with Causality
Xiannan Huang, Shuhan Qiu, Quan Yuan, Chao Yang
https://arxiv.org/abs/2412.06871 https://mastoxiv.page/@arXiv_csLG_bot/113632934357523592
- Characterizing LLM Inference Energy-Performance Tradeoffs across Workloads and GPU Scaling
Paul Joe Maliakel, Shashikant Ilager, Ivona Brandic
https://arxiv.org/abs/2501.08219 https://mastoxiv.page/@arXiv_csLG_bot/113831081884570770
- Universality of Benign Overfitting in Binary Linear Classification
Ichiro Hashimoto, Stanislav Volgushev, Piotr Zwiernik
https://arxiv.org/abs/2501.10538 https://mastoxiv.page/@arXiv_csLG_bot/113872351652969955
- Safe Reinforcement Learning for Real-World Engine Control
Julian Bedei, Lucas Koch, Kevin Badalian, Alexander Winkler, Patrick Schaber, Jakob Andert
https://arxiv.org/abs/2501.16613 https://mastoxiv.page/@arXiv_csLG_bot/113910356206562660
- A Statistical Learning Perspective on Semi-dual Adversarial Neural Optimal Transport Solvers
Roman Tarasov, Petr Mokrov, Milena Gazdieva, Evgeny Burnaev, Alexander Korotin
https://arxiv.org/abs/2502.01310
- Improving the Convergence of Private Shuffled Gradient Methods with Public Data
Shuli Jiang, Pranay Sharma, Zhiwei Steven Wu, Gauri Joshi
https://arxiv.org/abs/2502.03652 https://mastoxiv.page/@arXiv_csLG_bot/113961314098841096
- Using the Path of Least Resistance to Explain Deep Networks
Sina Salek, Joseph Enguehard
https://arxiv.org/abs/2502.12108 https://mastoxiv.page/@arXiv_csLG_bot/114023706252106865
- Distributional Vision-Language Alignment by Cauchy-Schwarz Divergence
Wenzhe Yin, Zehao Xiao, Pan Zhou, Shujian Yu, Jiayi Shen, Jan-Jakob Sonke, Efstratios Gavves
https://arxiv.org/abs/2502.17028 https://mastoxiv.page/@arXiv_csLG_bot/114063477202397951
- Armijo Line-search Can Make (Stochastic) Gradient Descent Provably Faster
Sharan Vaswani, Reza Babanezhad
https://arxiv.org/abs/2503.00229 https://mastoxiv.page/@arXiv_csLG_bot/114103018985567633
- Semantic Parallelism: Redefining Efficient MoE Inference via Model-Data Co-Scheduling
Yan Li, Zhenyu Zhang, Zhengang Wang, Pengfei Chen, Pengfei Zheng
https://arxiv.org/abs/2503.04398 https://mastoxiv.page/@arXiv_csLG_bot/114120014622063602
- A Survey on Federated Fine-tuning of Large Language Models
Wu, Tian, Li, Sun, Tam, Zhou, Liao, Xiong, Guo, Li, Xu
https://arxiv.org/abs/2503.12016 https://mastoxiv.page/@arXiv_csLG_bot/114182234054681647
- Towards Trustworthy GUI Agents: A Survey
Yucheng Shi, Wenhao Yu, Jingyuan Huang, Wenlin Yao, Wenhu Chen, Ninghao Liu
https://arxiv.org/abs/2503.23434 https://mastoxiv.page/@arXiv_csLG_bot/114263024618476521
- CONTINA: Confidence Interval for Traffic Demand Prediction with Coverage Guarantee
Chao Yang, Xiannan Huang, Shuhan Qiu, Yan Cheng
https://arxiv.org/abs/2504.13961 https://mastoxiv.page/@arXiv_csLG_bot/114380404041503229
- Regularity and Stability Properties of Selective SSMs with Discontinuous Gating
Nikola Zubić, Davide Scaramuzza
https://arxiv.org/abs/2505.11602 https://mastoxiv.page/@arXiv_csLG_bot/114538965060456498
- RECON: Robust symmetry discovery via Explicit Canonical Orientation Normalization
Alonso Urbano, David W. Romero, Max Zimmer, Sebastian Pokutta
https://arxiv.org/abs/2505.13289 https://mastoxiv.page/@arXiv_csLG_bot/114539124884913788
- RefLoRA: Refactored Low-Rank Adaptation for Efficient Fine-Tuning of Large Models
Yilang Zhang, Bingcong Li, Georgios B. Giannakis
https://arxiv.org/abs/2505.18877 https://mastoxiv.page/@arXiv_csLG_bot/114578778213033886
- SuperMAN: Interpretable and Expressive Networks over Temporally Sparse Heterogeneous Data
Bechler-Speicher, Zerio, Huri, Vestergaard, Gilad-Bachrach, Jess, Bhatt, Sazonovs
https://arxiv.org/abs/2505.19193 https://mastoxiv.page/@arXiv_csLG_bot/114578790124778172
The guy that defended him against E. Jean Carroll's accusations...IOW, a real loser.
Trump Taps Lawyer From Legal Team for Appeals Court Spot | Thomson/Reuters
https://www.newsmax.com/newsfront/donald-trump-justin-smith-appeals-court/2026/02/18/id/1246640/
Jets trading QB Justin Fields to Chiefs for 2027 sixth-round pick https://www.nfl.com/news/jets-trading-qb-justin-fields-to-chiefs-for-2027-sixth-round-pick
🇺🇦 #NowPlaying on KEXP's #90TEEN
Just Mustard:
🎵 Silver
#JustMustard
https://justmustard.bandcamp.com/track/silver
https://open.spotify.com/track/5JSeuIKIue1s7kc0xTRRt2
Ten crazy 2026 NFL offseason predictions: Justin Jefferson traded to Bills, two new rule changes and more
https://www.cbssports.com/nfl/news/ten-cra