Precedented, but not the same:
If ICE and CBP are basically the reincarnation of the KKK, it’s the KKK with a budget the size of Russia’s entire military.
Literally. In the literal sense of “literally.” The incoming ICE/CBP budget is ~$140 billion.[1] Russia’s military budget is ~$145 billion.[2]
(If I’m misreading these numbers, please correct me.)
[1] https://www.appropriations.senate.gov/imo/media/doc/fy26_homeland_security_conference_bill_summary.pdf
[2] https://www.reuters.com/world/europe/russia-hikes-national-defence-spending-by-23-2025-2024-09-30/
12/
JOURNAL> Journal of Chinese Buddhist Studies 38
https://ift.tt/GDj1J2P
Mining the Logs: Sources on Blue Humor
via RELCFP https://ift.tt/pscliGV
Paying Our Great Transportation Security Administration Officers and Employees (Donald J. Trump/The White House)
https://www.whitehouse.gov/presidential-actions/2026/03/memorandum-for-the-secretary-of-homeland-security-and-the-director-of-the-office-of-management-and-budget/
http://www.memeorandum.com/260327/p95#a260327p95
Why Pass@k Optimization Can Degrade Pass@1: Prompt Interference in LLM Post-training
Anas Barakat, Souradip Chakraborty, Khushbu Pahwa, Amrit Singh Bedi
https://arxiv.org/abs/2602.21189 https://arxiv.org/pdf/2602.21189 https://arxiv.org/html/2602.21189
arXiv:2602.21189v1 Announce Type: new
Abstract: Pass@$k$ is a widely used performance metric for verifiable large language model tasks, including mathematical reasoning, code generation, and short-answer reasoning. It defines success as any of $k$ independently sampled solutions passing a verifier. This multi-sample inference metric has motivated inference-aware fine-tuning methods that directly optimize pass@$k$. However, prior work reports a recurring trade-off: pass@$k$ improves while pass@$1$ degrades under such methods. This trade-off is practically important because pass@$1$ often remains a hard operational constraint due to latency and cost budgets, imperfect verifier coverage, and the need for a reliable single-shot fallback. We study the origin of this trade-off and provide a theoretical characterization of when pass@$k$ policy optimization can reduce pass@$1$ through gradient conflict induced by prompt interference. We show that pass@$k$ policy gradients can conflict with pass@$1$ gradients because pass@$k$ optimization implicitly reweights prompts toward low-success prompts; when these prompts are what we term negatively interfering, their upweighting can rotate the pass@$k$ update direction away from the pass@$1$ direction. We illustrate our theoretical findings with large language model experiments on verifiable mathematical reasoning tasks.
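The pass@$k$ metric the abstract describes — success if any of $k$ sampled solutions passes the verifier — is usually estimated from $n \ge k$ samples per prompt with the standard unbiased combinatorial estimator. A minimal sketch of that estimator (the widely used form, not necessarily the exact formulation in this paper):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k.

    n: total samples drawn for a prompt
    c: number of those samples that pass the verifier
    k: samples allowed at inference time

    pass@k = 1 - C(n-c, k) / C(n, k), the probability that a uniformly
    random size-k subset of the n samples contains at least one pass.
    """
    if n - c < k:
        # Fewer than k failing samples: every size-k subset must
        # contain a passing solution.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per prompt, 2 of which pass.
print(pass_at_k(10, 2, 1))  # pass@1 = 0.2
print(pass_at_k(10, 2, 5))  # pass@5 ≈ 0.778
```

Note how quickly the estimate grows with $k$ even at a 20% per-sample success rate, which is what makes optimizing pass@$k$ attractive — and, per the abstract, what implicitly upweights low-success prompts at the expense of pass@$1$.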
Trump’s economic agenda has created a new level of volatility for the federal budget,
as the Supreme Court ruling against many of his tariffs appeared to create a trillion-dollar hole on Friday morning that Trump quickly said he could fill.
Mr. Trump has reshaped the country’s fiscal situation since he took office last year.
He signed an expensive income tax cut that economists warned could put the already-indebted nation on an even more perilous path.
But he also ins…
Cold front arrives late tonight and the wind was certainly keen to hasten its arrival. Spitting rain at times, but happily any serious downpour never arrived.
Ran a quiet, pensive 7.5 km on the HS track. Heart ❤️ and lungs 🫁 felt great, but legs 🦵 were less willing. In all, the 4/1 run/walk intervals are helping. I look forward to being further along, but in the interim I can feel progress.
7.51 km in 59:30 for an average pace of 7:55/km.