Sources: Snowflake is in talks to buy app monitoring startup Observe for around $1B; Observe has raised more than $470M (The Information)
https://www.theinformation.com/articles/snowflake-talks-buy-app-monitoring-startup-observe…
Deep unfolding of MCMC kernels: scalable, modular & explainable GANs for high-dimensional posterior sampling
Jonathan Spence, Tobías I. Liaudat, Konstantinos Zygalakis, Marcelo Pereyra
https://arxiv.org/abs/2602.20758 https://arxiv.org/pdf/2602.20758 https://arxiv.org/html/2602.20758
arXiv:2602.20758v1 Announce Type: new
Abstract: Markov chain Monte Carlo (MCMC) methods are fundamental to Bayesian computation, but can be computationally intensive, especially in high-dimensional settings. Push-forward generative models, such as generative adversarial networks (GANs), variational auto-encoders and normalising flows, offer a computationally efficient alternative for posterior sampling. However, push-forward models are opaque as they lack the modularity of Bayes' theorem, leading to poor generalisation with respect to changes in the likelihood function. In this work, we introduce a novel approach to GAN architecture design by applying deep unfolding to Langevin MCMC algorithms. This paradigm maps fixed-step iterative algorithms onto modular neural networks, yielding architectures that are both flexible and amenable to interpretation. Crucially, our design allows key model parameters to be specified at inference time, offering robustness to changes in the likelihood parameters. We train these unfolded samplers end-to-end using a supervised regularized Wasserstein GAN framework for posterior sampling. Through extensive Bayesian imaging experiments, we demonstrate that our proposed approach achieves high sampling accuracy and excellent computational efficiency, while retaining the physics consistency, adaptability and interpretability of classical MCMC strategies.
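For intuition, here is a minimal sketch (mine, not the paper's; the abstract does not spell out the architecture) of one unfolded Langevin step as a PyTorch layer, assuming a Gaussian likelihood whose forward operator A and noise level sigma stay explicit so they can be swapped at inference time, with the prior score replaced by a small learned network:

import torch
import torch.nn as nn

class UnfoldedLangevinStep(nn.Module):
    """One unrolled Langevin iteration:
    x + gamma * (grad log-likelihood + learned prior score) + sqrt(2*gamma) * noise.
    Illustrative guess at the unfolding pattern, not the paper's exact design."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.prior_score = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
        self.log_gamma = nn.Parameter(torch.tensor(-4.0))  # learned step size

    def forward(self, x, y, A, sigma):
        gamma = self.log_gamma.exp()
        # Explicit data-fidelity gradient: d/dx [ -||y - A x||^2 / (2 sigma^2) ]
        grad_lik = (y - x @ A.T) @ A / sigma ** 2
        return (x + gamma * (grad_lik + self.prior_score(x))
                + torch.sqrt(2 * gamma) * torch.randn_like(x))

Stacking a fixed number of these layers and training them end-to-end is the unfolded-sampler pattern the abstract describes; keeping grad_lik explicit is what would make the likelihood parameters (A, sigma) adjustable after training.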
MOCLIP: A Foundation Model for Large-Scale Nanophotonic Inverse Design
S. Rodionov, A. Burguete-Lopez, M. Makarenko, Q. Wang, F. Getman, A. Fratalocchi
https://arxiv.org/abs/2511.18980 https://arxiv.org/pdf/2511.18980 https://arxiv.org/html/2511.18980
arXiv:2511.18980v1 Announce Type: new
Abstract: Foundation models (FM) are transforming artificial intelligence by enabling generalizable, data-efficient solutions across different domains for a broad range of applications. However, the lack of large and diverse datasets limits the development of FM in nanophotonics. This work presents MOCLIP (Metasurface Optics Contrastive Learning Pretrained), a nanophotonic foundation model that integrates metasurface geometry and spectra within a shared latent space. MOCLIP employs contrastive learning to align geometry and spectral representations using an experimentally acquired dataset with a sample density comparable to ImageNet-1K. The study demonstrates MOCLIP's inverse design capabilities for high-throughput zero-shot prediction at a rate of 0.2 million samples per second, enabling the design of a full 4-inch wafer populated with high-density metasurfaces in minutes. It also shows generative latent-space optimization reaching 97 percent accuracy. Finally, we introduce an optical information storage concept that uses MOCLIP to achieve a density of 0.1 Gbit per square millimeter at the resolution limit, exceeding commercial optical media by a factor of six. These results position MOCLIP as a scalable and versatile platform for next-generation photonic design and data-driven applications.
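The abstract says geometry and spectra are aligned contrastively in a shared latent space; the standard symmetric InfoNCE (CLIP-style) objective for that would look roughly like this in PyTorch (the encoders, batch construction, and temperature are my assumptions, not details from the paper):

import torch
import torch.nn.functional as F

def clip_style_loss(geom_emb, spec_emb, temperature=0.07):
    # Row i of each batch is a matched geometry/spectrum pair; every other
    # row serves as a negative. The loss is symmetric across modalities.
    g = F.normalize(geom_emb, dim=-1)
    s = F.normalize(spec_emb, dim=-1)
    logits = g @ s.T / temperature
    labels = torch.arange(g.size(0), device=g.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.T, labels))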
I am simple-minded, in that I am often simply blown away by the intelligence that went into the design of some common objects.
Today I'm working with T-Posts, a kind of steel fencing post. They seem simple, but they are well designed to hold fencing and to support things like firing range targets.
And some years back I designed and built a craftsman/arts-and-crafts fireplace. I used over 300 tiles of various sizes. It is really cool how those tiles are sized to fit togeth…
I explained something for a friend in a simple way, and I think it's worth paraphrasing again here.
You cannot create a system that constrains itself. Any constraint on a system must be external to the system, or that constraint can be ignored or removed. That's just how systems work. Every constitution for every country claims to do this impossible thing, a thing proven impossible almost 100 years ago now. Gödel's loophole has been known to exist since 1947.
Every constitution in the world, every "separation of powers" and set of "checks and balances," attempts to do something which is categorically impossible. Every government is always, at best, a few steps away from authoritarianism. From this, we would then expect that governments trend towards authoritarianism. Which, of course, is what we see historically.
Constraints on power are a formality, because no real controls can possibly exist. So then democratic processes become sort of collective classifiers that try to select only people who won't plunge the country into a dictatorship. Again, because this claim of restrictions on powers is a lie (willful or ignorant, a lie regardless), that classifier has to be correct 100% of the time (even assuming a best case scenario). That's statistically unlikely.
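To put a rough number on that last point (back-of-the-envelope only, with the 95% figure invented for illustration):

# Chance the electoral "classifier" never picks an authoritarian,
# assuming independent elections and a generous 95% accuracy each time.
p = 0.95
for n in (10, 25, 50):
    print(f"{n} elections: {p**n:.0%} chance of a perfect record")

That prints roughly 60%, 28%, and 8%. Even a very accurate selection process, run long enough, is close to guaranteed to fail at least once.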
So as long as you have a system of concentrated power, you will have the worst people attracted to it, and you will inevitably have that power fall into the hands of one of the worst possible people.
Fortunately, there is an alternative. The alternative is to not centralize power. In the security world we try to design systems that assume compromise and minimize impact, rather than just assuming that we will be right 100% of the time. If you build systems that maximally distribute power, then you minimize the impact of one horrible person.
Now, I didn't mention this because we're both already under enough stress, but...
Almost 90% of the nuclear weapons deployed around the world are in the hands of ghoulish dictators. Only two of the countries with nuclear weapons are not straight-up authoritarian, and even they're not far off. We're one crashout away from sterilizing the surface of the Earth with nuclear hellfire. Maybe countries shouldn't exist, and *definitely* multiple thousands of nuclear weapons shouldn't exist and shouldn't all be wired together to launch as soon as one of these assholes goes a bit too far sideways.
Oh! Look, we are getting Christmas visitors!
I asked if they weren't a bit late. But the Grey answered that a wizard is never late. Nor is he early. Wizards arrive just in time.
Well, let's see.
#Lego
Series B, Episode 04 - Horizon
ZEN: Planet visual is now available. [Horizon appears on main screen]
JENNA: Still holding course, Standard by Two.
BLAKE: Freighter's speed?
ZEN: Time Distort Six.
BLAKE: Freighter's planetfall?
https://blake.torpidity.net/m/204/69
Urban Demons V 👻
城市鬼魂 V 👻
📷 Nikon F4E
🎞️ Rollei RPX 400
If you like my work, buy me a coffee via PayPal. #filmphotography
WeirNet: A Large-Scale 3D CFD Benchmark for Geometric Surrogate Modeling of Piano Key Weirs
Lisa Lüddecke, Michael Hohmann, Sebastian Eilermann, Jan Tillmann-Mumm, Pezhman Pourabdollah, Mario Oertel, Oliver Niggemann
https://arxiv.org/abs/2602.20714 https://arxiv.org/pdf/2602.20714 https://arxiv.org/html/2602.20714
arXiv:2602.20714v1 Announce Type: new
Abstract: Reliable prediction of hydraulic performance is challenging for Piano Key Weir (PKW) design because discharge capacity depends on three-dimensional geometry and operating conditions. Surrogate models can accelerate hydraulic-structure design, but progress is limited by scarce large, well-documented datasets that jointly capture geometric variation, operating conditions, and functional performance. This study presents WeirNet, a large 3D CFD benchmark dataset for geometric surrogate modeling of PKWs. WeirNet contains 3,794 parametric, feasibility-constrained rectangular and trapezoidal PKW geometries, each simulated at 19 discharge conditions using a consistent free-surface OpenFOAM workflow, resulting in 71,387 completed simulations that form the benchmark, each with complete discharge coefficient labels. The dataset is released in multiple modalities (compact parametric descriptors, watertight surface meshes, and high-resolution point clouds) together with standardized tasks and in-distribution and out-of-distribution splits. Representative surrogate families are benchmarked for discharge coefficient prediction. Tree-based regressors on parametric descriptors achieve the best overall accuracy, while point- and mesh-based models remain competitive and offer parameterization-agnostic inference. All surrogates evaluate in milliseconds per sample, providing orders-of-magnitude speedups over CFD runtimes. Out-of-distribution results identify geometry shift as the dominant failure mode compared to unseen discharge values, and data-efficiency experiments show diminishing returns beyond roughly 60% of the training data. By publicly releasing the dataset together with simulation setups and evaluation pipelines, WeirNet establishes a reproducible framework for data-driven hydraulic modeling and enables faster exploration of PKW designs during the early stages of hydraulic planning.
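For flavor, the tree-based baseline pattern in scikit-learn (a sketch on stand-in random arrays; the real descriptor features and labels come from the WeirNet release, which the abstract doesn't detail):

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Stand-in data: X plays the role of the parametric PKW descriptors,
# y the CFD discharge coefficients; the shapes here are invented.
rng = np.random.default_rng(0)
X, y = rng.random((1000, 12)), rng.random(1000)

model = GradientBoostingRegressor().fit(X[:800], y[:800])
print(model.predict(X[800:805]))  # inference in milliseconds, vs. long CFD runs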
Hierarchic-EEG2Text: Assessing EEG-To-Text Decoding across Hierarchical Abstraction Levels
Anupam Sharma, Harish Katti, Prajwal Singh, Shanmuganathan Raman, Krishna Miyapuram
https://arxiv.org/abs/2602.20932 https://arxiv.org/pdf/2602.20932 https://arxiv.org/html/2602.20932
arXiv:2602.20932v1 Announce Type: new
Abstract: An electroencephalogram (EEG) records the spatially averaged electrical activity of neurons in the brain, measured from the human scalp. Prior studies have explored EEG-based classification of objects or concepts, often for passive viewing of briefly presented image or video stimuli, with limited classes. Because EEG exhibits a low signal-to-noise ratio, recognizing fine-grained representations across a large number of classes remains challenging; however, abstract-level object representations may exist. In this work, we investigate whether EEG captures object representations across multiple hierarchical levels, and propose episodic analysis, in which a Machine Learning (ML) model is evaluated across various, yet related, classification tasks (episodes). Unlike prior episodic EEG studies that rely on fixed or randomly sampled classes of equal cardinality, we adopt hierarchy-aware episode sampling using WordNet to generate episodes with variable classes of diverse hierarchy. We also present the largest episodic framework in the EEG domain for detecting observed text from EEG signals in the PEERS dataset, comprising 931,538 EEG samples under 1,610 object labels, acquired from 264 human participants (subjects) performing controlled cognitive tasks, enabling the study of neural dynamics underlying perception, decision-making, and performance monitoring.
We examine how the semantic abstraction level affects classification performance across multiple learning techniques and architectures, providing a comprehensive analysis. The models tend to improve performance when the classification categories are drawn from higher levels of the hierarchy, suggesting sensitivity to abstraction. Our work highlights abstraction depth as an underexplored dimension of EEG decoding and motivates future research in this direction.
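A guess at what hierarchy-aware episode sampling over WordNet might look like (illustrative only; the root synset, depth, and class count are parameters I made up, not the paper's procedure):

import random
from nltk.corpus import wordnet as wn  # one-time setup: nltk.download('wordnet')

def sample_episode(root="animal.n.01", depth=2, n_classes=5, seed=0):
    # Gather synsets 'depth' levels below the root; shallower roots give
    # coarser, more abstract episode classes, deeper ones finer-grained.
    frontier = [wn.synset(root)]
    for _ in range(depth):
        frontier = [h for s in frontier for h in s.hyponyms()] or frontier
    random.seed(seed)
    return random.sample(frontier, min(n_classes, len(frontier)))

print([s.name() for s in sample_episode()])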