2026-03-12 11:30:03
Cross To The Other Side ❎
去向另一边 (To the Other Side) ❎
📷 Nikon FE
🎞️ Lucky C200
If you like my work, buy me a coffee from PayPal #filmphotography
"Science Ltd. – Research Enterprise in the Age of Machines"
https://www.routledge.com/Science-Ltd-Research-Enterprise-in-the-Age-of-Machines/Borghini-Severini/p/book/9781041131069
[forthcoming (wen…
Wow, this is good to see. https://www.youtube.com/watch?v=M2qEyQ_0C-8 I wonder when Aotearoa NZ will work out that it's in this hole at least as deep as France? Will this coalition keep licking the Trump administration's boots? Those of the BigTech oligarchs?
Perplexity launches Perplexity Computer, "a general-purpose digital worker" that can route work across 19 AI models, available initially for Max subscribers (Jason Hiner/The Deep View)
https://www.thedeepview.com/articles/perplexity-may-have-built-a…
The ocean conservation world has lost a giant. Kristina Gjerde, known as the "mother of the high seas," spent two decades building the coalitions that led to the historic 2023 UN High Seas Treaty—protecting biodiversity in international waters for the first time.
Her work became urgent as industrial activities threatened deep-sea ecosystems. She was 68.
An absolutely extraordinary look at how to improve rendering of ASCII art, first in static images, then in motion. As he gets into contrast enhancement for complex grayscale animations, the vector-lookup work starts to parallel work happening with language models. Just an excellent narration. https://alexharri.com/blog/ascii-…
> for a large share of actual product work, the person who can say "here is what done looks like, prove it without seeing my rubric" is more valuable than the person who can write the code.
> Implementation is what AI is getting good at. Knowing whether the result actually solves the real problem is not an engineering judgment call, it is a domain judgment call.
Reposting to call out those two quotes and agree that this matches my experience as a staff developer…
On the Generalization Behavior of Deep Residual Networks From a Dynamical System Perspective
Jinshu Huang, Mingfei Sun, Chunlin Wu
https://arxiv.org/abs/2602.20921 https://arxiv.org/pdf/2602.20921 https://arxiv.org/html/2602.20921
arXiv:2602.20921v1 Announce Type: new
Abstract: Deep neural networks (DNNs) have significantly advanced machine learning, with model depth playing a central role in their successes. The dynamical system modeling approach has recently emerged as a powerful framework, offering new mathematical insights into the structure and learning behavior of DNNs. In this work, we establish generalization error bounds for both discrete- and continuous-time residual networks (ResNets) by combining Rademacher complexity, flow maps of dynamical systems, and the convergence behavior of ResNets in the deep-layer limit. The resulting bounds are of order $O(1/\sqrt{S})$ with respect to the number of training samples $S$, and include a structure-dependent negative term, yielding depth-uniform and asymptotic generalization bounds under milder assumptions. These findings provide a unified understanding of generalization across both discrete- and continuous-time ResNets, helping to close the gap in both the order of sample complexity and assumptions between the discrete- and continuous-time settings.
toXiv_bot_toot
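The dynamical-system view this abstract builds on can be sketched numerically (a toy illustration, not the paper's construction: the tanh vector field, dimensions, and step counts below are arbitrary choices). A residual block x_{l+1} = x_l + (1/L) f(x_l) is one forward-Euler step of the ODE dx/dt = f(x), so as depth L grows the network output converges to the ODE's time-1 flow map, which is the deep-layer limit the bounds exploit.

```python
import numpy as np

def residual_block(x, W, step):
    """One residual block: x + step * tanh(W x), i.e. a forward-Euler step of dx/dt = tanh(W x)."""
    return x + step * np.tanh(W @ x)

def resnet_flow(x0, weight_list):
    """A depth-L ResNet with step size 1/L approximates the time-1 flow map of the ODE."""
    L = len(weight_list)
    x = np.array(x0, dtype=float)
    for W in weight_list:
        x = residual_block(x, W, 1.0 / L)
    return x

rng = np.random.default_rng(0)
d = 4
# Weight shared across layers = a time-constant vector field, so deeper
# networks approximate the same flow map with smaller Euler error.
W = rng.normal(size=(d, d)) / np.sqrt(d)
x0 = rng.normal(size=d)

shallow = resnet_flow(x0, [W] * 8)     # coarse discretization
deep = resnet_flow(x0, [W] * 512)      # fine discretization
# The gap shrinks at rate O(1/L), consistent with the deep-layer limit.
print(np.linalg.norm(shallow - deep))
```

The point of the toy: generalization statements that hold for the continuous-time flow transfer to discrete ResNets exactly because this discretization gap vanishes uniformly in depth.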
Just looking at the amount of work that went into this thing, trying to sell something the core audience isn't buying, shows how badly Mozilla chose its new CEO.
https://bsd.network/@dch/115968952449549217
🚨Paper Alert!🚨
Our work on Precambrian basement deep beneath the Williston Basin in #NorthDakota. We looked at two >2900 m drill cores, sampling the W edge of the Superior Craton and the E margin of the 1.9 - 1.8 Ga Trans-Hudson Orogen. ⚒️🧪 (T Nesheim, myself, J Vervoort)
I’m sick at home (sorry #Merz) and watching analysis videos of Marvel Doomsday trailers. How can anyone who’s not neck-deep in comic/movie/series lore enjoy these things anymore?! How do they make this work for normies?
- Customer vetting done in minutes, not weeks
- Synthesizes 50 data sources into one clear, actionable picture
- Delivers finished output: documents, decks, webpages, even full apps
📊 It's not just fast. Grep is state-of-the-art — top-ranked on the Deep Research Benchmark — so you get accuracy when it actually matters.
Whether you're a founder validating a market, an investor vetting a deal, or a strategist tracking competitors: serious work deserves serious …
From synthetic turbulence to true solutions: A deep diffusion model for discovering periodic orbits in the Navier-Stokes equations
Jeremy P Parker, Tobias M Schneider
https://arxiv.org/abs/2602.23181 https://arxiv.org/pdf/2602.23181 https://arxiv.org/html/2602.23181
arXiv:2602.23181v1 Announce Type: new
Abstract: Generative artificial intelligence has shown remarkable success in synthesizing data that mimic complex real-world systems, but its potential role in the discovery of mathematically meaningful structures in physical models remains underexplored. In this work, we demonstrate how a generative diffusion model can be used to uncover previously unknown solutions of a nonlinear partial differential equation: the two-dimensional Navier-Stokes equations in a turbulent regime. Trained on data from a direct numerical simulation of turbulence, the model learns to generate time series that resemble physically plausible trajectories. By carefully modifying the temporal structure of the model and enforcing the symmetries of the governing equations, we produce synthetic trajectories that are periodic in time, despite the fact that the training data did not contain periodic trajectories. These synthetic trajectories are then refined into true solutions using an iterative solver, yielding 111 new periodic orbits (POs) with very short periods. Our results reveal a previously unobserved richness in the PO structure of this system and suggest a broader role for generative AI: not as a replacement for simulation and existing solvers, but as a complementary tool for navigating the complex solution spaces of nonlinear dynamical systems.
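The refine-into-true-solutions step can be illustrated on a toy system (everything below is my own illustration, not the paper's setup: the van der Pol oscillator stands in for Navier-Stokes, and a hand-picked starting guess stands in for the diffusion model's candidate trajectory). A rough closed-orbit candidate is polished by Newton shooting: solve flow_T(x0) - x0 = 0 for the unknown initial velocity and period.

```python
import numpy as np

MU = 1.0  # van der Pol parameter; the system has one attracting periodic orbit

def vdp(state):
    x, v = state
    return np.array([v, MU * (1 - x**2) * v - x])

def flow(state, T, n=1000):
    """Time-T flow map via classic RK4."""
    h = T / n
    s = np.array(state, dtype=float)
    for _ in range(n):
        k1 = vdp(s)
        k2 = vdp(s + 0.5 * h * k1)
        k3 = vdp(s + 0.5 * h * k2)
        k4 = vdp(s + h * k3)
        s = s + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

def residual(p):
    """Shooting residual for p = (v0, T); fixing x(0) = 2.0 acts as a phase condition."""
    v0, T = p
    start = np.array([2.0, v0])
    return flow(start, T) - start

def newton_refine(p, iters=12, eps=1e-6):
    """Newton iteration with a finite-difference 2x2 Jacobian."""
    p = np.array(p, dtype=float)
    for _ in range(iters):
        r = residual(p)
        if np.linalg.norm(r) < 1e-10:
            break
        J = np.empty((2, 2))
        for j in range(2):
            dp = np.zeros(2); dp[j] = eps
            J[:, j] = (residual(p + dp) - r) / eps
        p = p - np.linalg.solve(J, r)
    return p

# Rough candidate (the generative model's role in the paper): v0 = 0, T = 6.6.
v0, T = newton_refine([0.0, 6.6])
print(T, np.linalg.norm(residual([v0, T])))
```

The same pattern, with the PDE's time stepper replacing RK4 and a Krylov-based Newton replacing the dense solve, is the standard way candidates get converged into exact periodic orbits.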
Deep unfolding of MCMC kernels: scalable, modular & explainable GANs for high-dimensional posterior sampling
Jonathan Spence, Tobías I. Liaudat, Konstantinos Zygalakis, Marcelo Pereyra
https://arxiv.org/abs/2602.20758 https://arxiv.org/pdf/2602.20758 https://arxiv.org/html/2602.20758
arXiv:2602.20758v1 Announce Type: new
Abstract: Markov chain Monte Carlo (MCMC) methods are fundamental to Bayesian computation, but can be computationally intensive, especially in high-dimensional settings. Push-forward generative models, such as generative adversarial networks (GANs), variational auto-encoders and normalising flows offer a computationally efficient alternative for posterior sampling. However, push-forward models are opaque as they lack the modularity of Bayes' theorem, leading to poor generalisation with respect to changes in the likelihood function. In this work, we introduce a novel approach to GAN architecture design by applying deep unfolding to Langevin MCMC algorithms. This paradigm maps fixed-step iterative algorithms onto modular neural networks, yielding architectures that are both flexible and amenable to interpretation. Crucially, our design allows key model parameters to be specified at inference time, offering robustness to changes in the likelihood parameters. We train these unfolded samplers end-to-end using a supervised regularized Wasserstein GAN framework for posterior sampling. Through extensive Bayesian imaging experiments, we demonstrate that our proposed approach achieves high sampling accuracy and excellent computational efficiency, while retaining the physics consistency, adaptability and interpretability of classical MCMC strategies.
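A minimal sketch of the unfolding idea, under my own toy assumptions (the class name, the per-layer step-size parameterisation, and the Gaussian example are illustrative, not the paper's architecture): each unrolled layer is one unadjusted Langevin step with a learnable step size, and the likelihood gradient is passed in at call time, which is what makes the sampler modular with respect to likelihood changes.

```python
import numpy as np

class UnfoldedLangevin:
    """K unrolled Langevin steps; the per-layer step sizes are the learnable parameters.

    The likelihood gradient is supplied at sampling time, mirroring the
    modularity point in the abstract: swap the likelihood without retraining.
    """
    def __init__(self, n_layers, step_init=0.05, seed=0):
        self.steps = np.full(n_layers, step_init)  # would be trained end-to-end
        self.rng = np.random.default_rng(seed)

    def sample(self, x, grad_log_likelihood, grad_log_prior):
        for gamma in self.steps:
            score = grad_log_likelihood(x) + grad_log_prior(x)
            noise = self.rng.normal(size=x.shape)
            x = x + gamma * score + np.sqrt(2 * gamma) * noise
        return x

# Toy posterior with a closed form: likelihood N(y | x, sigma2), prior N(0, 1)
# => posterior mean y/(sigma2+1) = 1.0, variance sigma2/(sigma2+1) = 1/3.
y, sigma2 = 1.5, 0.5
grad_ll = lambda x: (y - x) / sigma2
grad_prior = lambda x: -x

sampler = UnfoldedLangevin(n_layers=200)
samples = np.array([sampler.sample(np.zeros(1), grad_ll, grad_prior)[0]
                    for _ in range(500)])
print(samples.mean(), samples.var())  # near 1.0 and 1/3, up to ULA discretization bias
```

In the paper this unrolled chain is wrapped in a Wasserstein-GAN training loop; the sketch only shows the forward pass and why the likelihood stays a swappable plug-in.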
Urban Spots ✴️
城市噪点 (Urban Noise) ✴️
📷 Nikon F4E
🎞️ Ilford HP5 Plus 400, expired 1993
#filmphotography #Photography #blackandwhite
Want to break into climate work but don't know where to start? Terra.do's Learning for Action fellowship might be your answer.
This 12-week program goes deep on real-world climate solutions—beyond just clean energy. You'll learn the science, explore diverse solutions, and connect with a global community, all while working full-time (6-10 hrs/week).
Financial aid available.
Day Five in the Improv Narrative house, and we're in the format I like most really. A few instructive games in the first half and a couple of longer narrative stories in the second half.
The island game was supposed to teach something about not deliberately getting obstructive.
A scene where the players are told they are on one island and must all, at some point, end up on the other one, on the other side of the stage.
Set a scene, make some characters, but nobody said it was supposed to be difficult to get from one island to the other.
Yet barriers are deliberately thrown up, imaginary barriers really, since the whole thing is imaginary after all. Why should there be sharks, or a quest for a boat, or a sea deep and cold?
You can just wade across. You can just have a boat. You can just levitate yourself over with your hive mind psychic abilities.
Unsure about this.
There must be conflict and peril and challenges which are mastered in a story; you can't set up a hero's quest only to have the hero just happen to have the Holy Grail in the stationery cupboard. Already got one, you see. Use it for storing pens.
Still. Finding the crowbar doesn't have to be a quest. There can just be one in the boot. Don't let things get bogged down in difficulty.
Watched a story about a lazy fellow falling into a life of crime and villainy because of his tardiness, fulfilling his teacher's prophecy that he would indeed end up a criminal if he didn't buck his ideas up. Good repeated themes of characters making lists of his failures, and nice stage-focus work when everyone was on stage at once.
Played a preacher organizing a wedding in a story about friends running a hotel.
Fun to have Reverend Priest finding sin everywhere again. Easy wipe-off sin in this case. We may have come to an end too early. Perhaps not enough obstructions put in the way. 😆
#improv #london #hooplaImpro
Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
https://arxiv.org/abs/2512.17696 https://arxiv.org/pdf/2512.17696 https://arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
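The core mechanism can be sketched in a few lines (a simplification under my own assumptions: single head, an exponential kernel applied in log-space, no learned non-stationary residual). The geostatistical prior enters as a distance-dependent bias on the attention logits, so the unnormalised attention weights are multiplied by exp(-dist/rho) before the softmax; rho plays the role of a learnable variogram range parameter.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def spatial_attention(Q, K, V, coords, rho):
    """Self-attention with an exponential-covariance bias on the logits.

    logits = Q K^T / sqrt(d) - dist / rho, so the stationary prior
    exp(-dist/rho) multiplies each attention weight; small rho makes the
    layer attend mostly to spatially nearby sensors.
    """
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return softmax(logits - dist / rho) @ V

rng = np.random.default_rng(0)
n, d = 6, 8
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
coords = rng.uniform(0, 10, size=(n, 2))  # sensor locations in the plane

out_local = spatial_attention(Q, K, V, coords, rho=0.5)  # strong spatial prior
out_flat = spatial_attention(Q, K, V, coords, rho=1e6)   # bias ~ 0: plain attention
```

Because rho sits inside a differentiable expression, it can be recovered by backpropagation, which is presumably what the "Deep Variography" result in the abstract refers to.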
BLUETTI just dropped something big at CES 2026: a road trip charger that runs on your alternator and solar, plus power stations made from bio-circular plastics that slash CO2 emissions.
The Charger 2 solves slow-charging headaches with universal compatibility and bi-directional power. Meanwhile, their new Elite series brings sustainable materials to high-density energy storage.