Tootfinder

Opt-in global Mastodon full text search. Join the index!

@Techmeme@techhub.social
2025-11-18 15:09:21

Anthropic commits to buy $30B in Azure capacity in a new deal with Microsoft and Nvidia, which commit to invest up to $5B and $10B, respectively, in Anthropic (Microsoft)
blogs.microsoft.com/blog/2025/

@andres4ny@social.ridetrans.it
2025-11-25 07:07:37

Hm, $450 for a bike whose manufacturer is about to enter bankruptcy (in no small part due to recalls), with broken rear wheel & motor, and almost 7,000 miles on it? Good luck with that.

ebay.com/itm/267487858971

@simon_brooke@mastodon.scot
2026-01-12 09:10:26

"These deals represent the corporate capture of the UK state, including our cloud capacity, National Health Service, and now our military establishment...
Starmer’s inability to speak the truth is not diplomacy. It’s evidence." @…

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:50

Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
arxiv.org/abs/2512.17696 arxiv.org/pdf/2512.17696 arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
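The core idea in the abstract — decomposing attention logits into a non-stationary data-driven term plus a stationary spatial prior from a covariance kernel — can be sketched in a few lines. The paper's actual architecture is not given here, so this is a minimal single-head NumPy illustration under assumptions: an exponential kernel exp(-dist/rho) as the prior, added in log-space to the dot-product logits, with `rho` standing in for the learnable spatial decay parameter the abstract says is recovered end-to-end. The function name and all parameters are hypothetical.

```python
import numpy as np

def spatial_bias_attention(X, coords, Wq, Wk, Wv, rho):
    """Single-head self-attention with an additive geostatistical bias.

    Logits decompose into a data-driven scaled dot-product (the
    non-stationary residual) plus the log of a stationary exponential
    covariance kernel exp(-dist/rho) over pairwise sensor distances,
    softly favoring attention between spatially proximal sensors.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)          # non-stationary, data-driven part
    # pairwise Euclidean distances between sensor locations
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    logits = logits - dists / rho          # stationary prior in log-space
    # row-wise softmax over attention logits
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

In a trainable version, `rho` would be a learnable parameter optimized by backpropagation alongside the projection matrices, which is how the "Deep Variography" effect — recovering the true spatial decay of the process — could arise.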

@arXiv_qbioMN_bot@mastoxiv.page
2026-01-21 08:54:02

Fluctuation Theorems from a Continuous-Time Markov Model of Information-Thermodynamic Capacity in Biochemical Signal Cascades
Tatsuaki Tsuruyama
arxiv.org/abs/2601.11941