> Due to its scale and detailed annotations, Danbooru has been widely used as a source for datasets in computer vision and generative modeling research. The Danbooru20xx series of datasets
of COURSE they were
heise | Parameters in AI models: what they really mean in large language models
They are the mysterious numbers behind AI models, and the more parameters a model has, the better it is supposed to be. But what do they actually do?
Modeling European Beech masting events shows (1) conditions that trigger masting are becoming more frequent, (2) this is reducing viable seed production, and (3) "Severe disruptions to masting are projected to become the norm, with the greatest reductions (up to ~83%) at colder margins."
https://doi.org/10.1111/ele.70284
Game-theoretic modeling of pricing algorithms (this looks like genuinely worthwhile research): https://www.wired.com/story/game-theory-explains-how-algorithms-can-drive-up-prices/
Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
https://arxiv.org/abs/2512.17696 https://arxiv.org/pdf/2512.17696 https://arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
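The central mechanism is compact enough for a short sketch. Below is a minimal PyTorch illustration of the decomposition the abstract describes: attention logits split into the usual data-driven dot-product term plus a stationary spatial prior from a learnable covariance kernel. The exponential kernel, the per-head decay parameter log_rho, and the additive placement of the bias are assumptions made for illustration; the paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn

class SpatiallyBiasedAttention(nn.Module):
    """Self-attention with an additive geostatistical bias on the logits.

    Sketch only: the exponential kernel and its additive placement are
    assumptions; the paper's actual kernel may be parameterized differently.
    """
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Learnable spatial decay (range) per head; recovering this
        # end-to-end is what the abstract calls "Deep Variography".
        self.log_rho = nn.Parameter(torch.zeros(n_heads))

    def forward(self, x: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
        # x:    (batch, n_sensors, d_model) features at one time step
        # dist: (n_sensors, n_sensors) pairwise sensor distances
        B, N, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, N, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(B, N, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(B, N, self.n_heads, self.d_head).transpose(1, 2)
        # Non-stationary, data-driven term: scaled dot-product attention.
        logits = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        # Stationary prior: exponential covariance C(d) = exp(-d / rho),
        # entering the logits as -d / rho, a soft bias toward nearby sensors.
        rho = self.log_rho.exp().view(1, self.n_heads, 1, 1)
        logits = logits - dist.unsqueeze(0).unsqueeze(0) / rho
        attn = logits.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, -1)
        return self.out(out)

# Usage: 8 sensors at random 2-D locations, one batch of features.
xy = torch.rand(8, 2)
dist = torch.cdist(xy, xy)                 # pairwise Euclidean distances
layer = SpatiallyBiasedAttention(d_model=32, n_heads=4)
y = layer(torch.randn(1, 8, 32), dist)     # -> (1, 8, 32)
```

Because the decay parameter receives gradients through the softmax, fitting the forecasting loss can in principle pull rho toward the true range of the underlying process, which is the "Deep Variography" effect the abstract reports.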
"Fast 100 Jahre nach dem Erscheinen liest man dieses Buch nicht als nostalgisches Dokument, sondern als Frühwarnsystem mit literarischen Mitteln. Tucholsky beschreibt keine Ereignisse, sondern Zustände. Und diese Zustände – Machtmissbrauch, Bürokratie, politische Kälte, männliche Eitelkeit, kulturelle Fluchtpunkte – lassen sich ins Heute ohne großen Bedeutungsverlust übersetzen. "
Tucholsky – Mit 5 PS: 1/2
At the #Cröbern central landfill near #Leipzig, a new #Photovoltaik plant with 12,000 modules has gone into operation.
It supplies the
Smart glasses from 40 dollars: VR maker DPVR unveils new models
DPVR is entering the smart-glasses market with six models. The entry-level model comes at a bargain price of 40 US dollars.
https…
KI-Update Deep-Dive: Writing better with AI
AI can help with writing, but the result is often mediocre. Anne-Kathrin Gerstlauer explains how to get better texts.
https://www.heise.de/…