Tootfinder

Opt-in global Mastodon full text search. Join the index!

@metacurity@infosec.exchange
2025-12-30 12:09:54

“The problem today is that around 80 percent of all the [space data] traffic is downlinked to a single location in Svalbard, which is an island shared between different countries, including Russia”
politico.eu/article/space-hack

@toxi@mastodon.thi.ng
2026-01-10 13:23:30

My art, but not my video[1] for #Genuary9 #Genuary2026: Crazy Automata
C-SCAPE (2022), a piece of multiple co-evolving cellular automata, running as a massive projection as part of the ALGORYTHMS group exhibition at Collective, Oslo (August 2023).

Video panorama of a dark gallery space with a massive super-widescreen projection of an abstract, colorful 1.5D cellular automata artwork, with different automata creating different structures which are partially displacing and/or morphing with each other whilst they all co-evolve in the same shared space/environment.
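The "1.5D cellular automata" in the piece are far richer than this, but for readers curious about the underlying mechanism, a minimal elementary 1D automaton illustrates the basic update scheme: each cell's next state is a fixed function of its 3-cell neighbourhood. Rule 110 is chosen here purely as an illustration; nothing below is taken from the artwork's actual code.

```python
# Minimal 1D cellular automaton sketch (elementary rule 110) -- a much
# simpler cousin of the co-evolving multi-automata system described above.
def step(cells, rule=110):
    """Advance one generation; cells is a list of 0/1 with wraparound."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the 3-cell neighbourhood as a number 0..7, then look up
        # the corresponding bit of the rule number.
        idx = cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

def run(width=32, steps=8):
    """Evolve from a single seed cell and return the full history."""
    cells = [0] * width
    cells[width // 2] = 1  # single seed cell in the middle
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history
```

Stacking successive generations as rows gives the familiar triangular CA textures; the exhibited work layers several interacting automata with continuous states instead of a single binary rule.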

@Dragofix@veganism.social
2026-01-08 00:28:38

Oil residues can travel over 5,000 miles on ocean debris, study finds #ocean

@nemobis@mamot.fr
2025-12-01 07:30:14

From our #cooperative, #MajavanTila: "Membership benefits for 2026".
majavantila.fi/en/posts/…

@toxi@mastodon.thi.ng
2026-01-10 14:10:34

My art, but not my video[1] for #Genuary9 #Genuary2026: Crazy Automata
C-SCAPE (2022), a piece of multiple co-evolving cellular automata, running on a vertical aspect ratio LCD screen.
Also see:

Short phone video of an abstract, colorful 1.5D cellular automata artwork, with different automata creating different structures which are partially displacing and/or morphing & interacting with each other whilst they all co-evolve in the same shared space/environment. After a few seconds the camera zooms into a region of the artwork to show more close-up details.

@arXiv_physicsoptics_bot@mastoxiv.page
2025-11-25 10:53:53

MOCLIP: A Foundation Model for Large-Scale Nanophotonic Inverse Design
S. Rodionov, A. Burguete-Lopez, M. Makarenko, Q. Wang, F. Getman, A. Fratalocchi
arxiv.org/abs/2511.18980 arxiv.org/pdf/2511.18980 arxiv.org/html/2511.18980
arXiv:2511.18980v1 Announce Type: new
Abstract: Foundation models (FMs) are transforming artificial intelligence by enabling generalizable, data-efficient solutions across different domains for a broad range of applications. However, the lack of large and diverse datasets limits the development of FMs in nanophotonics. This work presents MOCLIP (Metasurface Optics Contrastive Learning Pretrained), a nanophotonic foundation model that integrates metasurface geometry and spectra within a shared latent space. MOCLIP employs contrastive learning to align geometry and spectral representations using an experimentally acquired dataset with a sample density comparable to ImageNet-1K. The study demonstrates MOCLIP's inverse design capabilities for high-throughput zero-shot prediction at a rate of 0.2 million samples per second, enabling the design of a full 4-inch wafer populated with high-density metasurfaces in minutes. It also shows generative latent-space optimization reaching 97 percent accuracy. Finally, we introduce an optical information storage concept that uses MOCLIP to achieve a density of 0.1 Gbit per square millimeter at the resolution limit, exceeding commercial optical media by a factor of six. These results position MOCLIP as a scalable and versatile platform for next-generation photonic design and data-driven applications.
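The contrastive alignment the abstract describes follows the CLIP-style recipe the model's name points to: two encoders map paired samples into one latent space, and a symmetric cross-entropy pulls matching pairs together. Below is a hedged sketch of such a symmetric InfoNCE objective over paired geometry/spectrum embeddings; the function names and temperature value are illustrative assumptions, not MOCLIP's actual code.

```python
import numpy as np

def contrastive_loss(geo_emb, spec_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    geo_emb, spec_emb: (batch, dim) arrays where row i of each array
    comes from the same metasurface sample.
    """
    # L2-normalise so the dot product is cosine similarity
    g = geo_emb / np.linalg.norm(geo_emb, axis=1, keepdims=True)
    s = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)
    logits = g @ s.T / temperature      # (batch, batch) similarity matrix
    labels = np.arange(len(g))          # matching pairs sit on the diagonal

    def xent(l):
        # numerically stable softmax cross-entropy against the diagonal
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average over both directions: geometry->spectrum and spectrum->geometry
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimising this loss pushes each geometry embedding toward its own spectrum embedding and away from every other sample in the batch, which is what yields a shared latent space usable for zero-shot prediction.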