2025 Week 8 NFL score predictions, bets, odds, picks today: Expert reveals exact scores for all 13 matchups
https://www.cbssports.com/nfl/news/week-8-nfl-score-predictions-…
★ Do you get excited or upset about AWS SCPs, or GCP Org Policies?
★ Do you have experience developing software to solve cloud security challenges?
★ Do you downplay your cloud security knowledge, even though you actually know a lot about the niche oddities of cloud IAM?
★ Do you like working in diverse security teams that care about your wellbeing?
★ Do you want to get paid to work on cloud security for one of the most sophisticated AWS environments in the world?
I'm hiring a…
from Reece Martin Transit
The Transit "Experts" That Derail Transit.
My Grand Theory of North America's Transit Expansion Failures.
"saying that if Montreal was building major transit again in the 21st century it wouldn’t build a metro, because no city would is laughable, or painful, depending on ones state of mind when you read that."
Series C, Episode 12 - Death-Watch
MAX: Did you expect to?
DEETA: I don't mean personally. I mean there's something strange about him, something wrong.
https://blake.torpidity.net/m/312/219 B7B3
Pennsylvania's Supreme Court rules that police can get Google search data without a warrant; an expert warns it may encourage warrantless searches nationwide (Suzanne Smalley/The Record)
https://therecord.media/google-searches-police-access…
Convergence Guarantees for Federated SARSA with Local Training and Heterogeneous Agents
Paul Mangold, Eloïse Berthier, Eric Moulines
https://arxiv.org/abs/2512.17688 https://arxiv.org/pdf/2512.17688 https://arxiv.org/html/2512.17688
arXiv:2512.17688v1 Announce Type: new
Abstract: We present a novel theoretical analysis of Federated SARSA (FedSARSA) with linear function approximation and local training. We establish convergence guarantees for FedSARSA in the presence of heterogeneity, both in local transitions and rewards, providing the first sample and communication complexity bounds in this setting. At the core of our analysis is a new, exact multi-step error expansion for single-agent SARSA, which is of independent interest. Our analysis precisely quantifies the impact of heterogeneity, demonstrating the convergence of FedSARSA with multiple local updates. Crucially, we show that FedSARSA achieves linear speed-up with respect to the number of agents, up to higher-order terms due to Markovian sampling. Numerical experiments support our theoretical findings.
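The federated structure the abstract describes — on-policy SARSA(0) with linear function approximation, several local updates per agent, then server-side averaging — is easy to see in code. Below is a minimal sketch under assumed interfaces (an environment with reset()/step() returning (state, reward, done), a feature map phi(s, a), and ε-greedy exploration); it illustrates the FedSARSA pattern, not the paper's exact algorithm or analysis.

```python
import numpy as np

def epsilon_greedy(w, phi, state, n_actions, eps, rng):
    """Pick an action from the policy induced by the current weights."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    q = [w @ phi(state, a) for a in range(n_actions)]
    return int(np.argmax(q))

def local_sarsa(env, w, phi, n_actions, steps, alpha, gamma, eps, rng):
    """Run `steps` on-policy SARSA(0) updates starting from weights w."""
    w = w.copy()
    s = env.reset()
    a = epsilon_greedy(w, phi, s, n_actions, eps, rng)
    for _ in range(steps):
        s_next, r, done = env.step(a)  # assumed env interface
        a_next = epsilon_greedy(w, phi, s_next, n_actions, eps, rng)
        # TD error for the (s, a, r, s', a') transition.
        target = r + (0.0 if done else gamma * (w @ phi(s_next, a_next)))
        td = target - w @ phi(s, a)
        w += alpha * td * phi(s, a)
        if done:
            s = env.reset()
            a = epsilon_greedy(w, phi, s, n_actions, eps, rng)
        else:
            s, a = s_next, a_next
    return w

def fed_sarsa(envs, phi, dim, n_actions, rounds, local_steps,
              alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Alternate local SARSA training with server-side weight averaging.
    Each env in `envs` may have heterogeneous transitions and rewards."""
    rng = np.random.default_rng(seed)
    w_global = np.zeros(dim)
    for _ in range(rounds):
        local_ws = [local_sarsa(env, w_global, phi, n_actions,
                                local_steps, alpha, gamma, eps, rng)
                    for env in envs]
        w_global = np.mean(local_ws, axis=0)  # one communication round
    return w_global
```

The averaging step is the communication round; `local_steps` controls how much the heterogeneous agents' weights can drift apart between rounds, which is exactly the tension the paper's heterogeneity bounds quantify.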
Matsuo Basho (1644–1694) lived his peculiar life on the conviction that art could create an awareness that allowed one to see into and communicate the essence of experience. Throughout his life he searched for the state of being one with the object of his poems, something he believed a poet needed to reach in order to write truthfully. This lifelong search led Basho to wandering. He thought that travelling would lead to a state of karumi (lightness), ess…
2025 Week 7 NFL score predictions, bets, odds, picks today: Expert reveals exact scores for all 15 matchups
https://www.cbssports.com/nfl/news/nfl-score-predictions-we…
Cornell researchers: xAI's Grokipedia cites neo-Nazi website Stormfront 42 times, Infowars 34 times, and white nationalist website VDare 107 times (David Ingram/NBC News)
https://www.nbcnews.com/tech/elon-musk/elon-musk-groki…
Easy Adaptation: An Efficient Task-Specific Knowledge Injection Method for Large Models in Resource-Constrained Environments
Dong Chen, Zhengqing Hu, Shixing Zhao, Yibo Guo
https://arxiv.org/abs/2512.17771 https://arxiv.org/pdf/2512.17771 https://arxiv.org/html/2512.17771
arXiv:2512.17771v1 Announce Type: new
Abstract: While the enormous parameter scale endows Large Models (LMs) with unparalleled performance, it also limits their adaptability to specific tasks. Parameter-Efficient Fine-Tuning (PEFT) has emerged as a critical approach for effectively adapting LMs to a diverse range of downstream tasks. However, existing PEFT methods face two primary challenges. (1) High resource cost: although PEFT methods significantly reduce resource demands compared to full fine-tuning, they still require substantial time and memory, making them impractical in resource-constrained environments. (2) Parameter dependency: PEFT methods rely on updating a subset of the LM's parameters to incorporate task-specific knowledge. Yet, amid increasing competition in the LM landscape, many companies have adopted closed-source policies for their leading models, offering access only via Application Programming Interfaces (APIs). Moreover, the expense is often prohibitive and difficult to sustain, as fine-tuning LMs is extremely slow. Although small models perform far worse than LMs in general, they can achieve superior results on particular distributions while requiring only minimal resources. Motivated by this insight, we propose Easy Adaptation (EA), which designs Specific Small Models (SSMs) to complement LMs on the data distributions they underfit. Extensive experiments show that EA matches the performance of PEFT on diverse tasks without accessing LM parameters, and requires only minimal resources.
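The core idea — pairing a closed-source LM reachable only through an API with a cheap Specific Small Model that covers the distributions the LM underfits — amounts to a simple router. The sketch below is illustrative only: the `in_distribution` test, the confidence threshold, and the `ssm`/`lm_api` callables are assumptions of this example, not the paper's concrete components.

```python
def easy_adaptation(x, ssm, lm_api, in_distribution, threshold=0.8):
    """Answer with the small model when it is confident on its niche,
    otherwise fall back to the large model behind the API.

    ssm(x)   -> (prediction, confidence) from a cheap local model
    lm_api(x) -> prediction from the closed-source LM (parameters untouched)
    """
    if in_distribution(x):
        prediction, confidence = ssm(x)  # cheap, local inference
        if confidence >= threshold:
            return prediction
    return lm_api(x)  # no LM parameters are ever accessed or updated
```

The point of the design is that all task-specific knowledge lives in the SSM, so adaptation costs stay small and nothing requires gradient access to the LM.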