Tootfinder

Opt-in global Mastodon full text search. Join the index!

@kexpmusicbot@mastodonapp.uk
2025-12-29 20:42:03

πŸ‡ΊπŸ‡¦ #NowPlaying on KEXP's #MiddayShow
Geggy Tah:
🎡 Whoever You Are
#GeggyTah
charliedaize.bandcamp.com/trac
open.spotify.com/track/1XIAVdT

@arXiv_physicsoptics_bot@mastoxiv.page
2025-11-25 10:53:53

MOCLIP: A Foundation Model for Large-Scale Nanophotonic Inverse Design
S. Rodionov, A. Burguete-Lopez, M. Makarenko, Q. Wang, F. Getman, A. Fratalocchi
arxiv.org/abs/2511.18980 arxiv.org/pdf/2511.18980 arxiv.org/html/2511.18980
arXiv:2511.18980v1 Announce Type: new
Abstract: Foundation models (FMs) are transforming artificial intelligence by enabling generalizable, data-efficient solutions across different domains for a broad range of applications. However, the lack of large and diverse datasets limits the development of FMs in nanophotonics. This work presents MOCLIP (Metasurface Optics Contrastive Learning Pretrained), a nanophotonic foundation model that integrates metasurface geometry and spectra within a shared latent space. MOCLIP employs contrastive learning to align geometry and spectral representations using an experimentally acquired dataset with a sample density comparable to ImageNet-1K. The study demonstrates MOCLIP's inverse-design capabilities for high-throughput zero-shot prediction at a rate of 0.2 million samples per second, enabling the design of a full 4-inch wafer populated with high-density metasurfaces in minutes. It also shows generative latent-space optimization reaching 97 percent accuracy. Finally, we introduce an optical information storage concept that uses MOCLIP to achieve a density of 0.1 Gbit per square millimeter at the resolution limit, exceeding commercial optical media by a factor of six. These results position MOCLIP as a scalable and versatile platform for next-generation photonic design and data-driven applications.
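The abstract describes CLIP-style contrastive pretraining that aligns geometry and spectrum embeddings in a shared latent space. A minimal sketch of the symmetric InfoNCE objective behind that idea is below; the function and variable names (and the two-encoder setup they imply) are illustrative assumptions, not the authors' actual code or API.

```python
import numpy as np

def clip_style_loss(geom_emb, spec_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    geom_emb, spec_emb: (N, D) arrays, where row i of each array is
    assumed to come from the same metasurface sample (a matched
    geometry/spectrum pair). Names are hypothetical, for illustration.
    """
    # L2-normalize so the dot product is cosine similarity
    g = geom_emb / np.linalg.norm(geom_emb, axis=1, keepdims=True)
    s = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)
    logits = g @ s.T / temperature  # (N, N) pairwise similarities
    labels = np.arange(len(g))      # matched pairs sit on the diagonal

    def cross_entropy(lg):
        # Softmax cross-entropy where each row's target is its own index
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Symmetric: geometry->spectrum and spectrum->geometry retrieval
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss pulls each geometry embedding toward the spectrum of the same sample and away from all other spectra in the batch, which is what makes zero-shot retrieval and latent-space optimization possible downstream.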