Tootfinder

Opt-in global Mastodon full text search. Join the index!

@brichapman@mastodon.social
2025-12-20 19:28:00

38 coastal, remote, and island communities are getting a lifeline for their fragile energy grids.
Through the Energy Technology Innovation Partnership Project, they're designing microgrids, exploring local renewable generation, and hardening systems against extreme weather. The goal: reliable, affordable power that can withstand the next storm.

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 08:52:00

Benders Decomposition for Passenger-Oriented Train Timetabling with Hybrid Periodicity
Zhiyuan Yao, Anita Schöbel, Lei Nie, Sven Jäger
arxiv.org/abs/2511.09892 arxiv.org/pdf/2511.09892 arxiv.org/html/2511.09892
arXiv:2511.09892v1 Announce Type: new
Abstract: Periodic timetables are widely adopted in passenger railway operations due to their regular service patterns and well-coordinated train connections. However, fluctuations in passenger demand require varying train services across different periods, necessitating adjustments to the periodic timetable. This study addresses a hybrid periodic train timetabling problem, which enhances the flexibility and demand responsiveness of a given periodic timetable through schedule adjustments and aperiodic train insertions, taking into account the rolling stock circulation. Since timetable modifications may affect initial passenger routes, passenger routing is incorporated into the problem to guide planning decisions towards a passenger-oriented objective. Using a time-space network representation, the problem is formulated as a dynamic railway service network design model with resource constraints. To handle the complexity of real-world instances, we propose a decomposition-based algorithm integrating Benders decomposition and column generation, enhanced with multiple preprocessing and accelerating techniques. Numerical experiments demonstrate the effectiveness of the algorithm and highlight the advantage of hybrid periodic timetables in reducing passenger travel costs.
toXiv_bot_toot
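The abstract above describes a Benders-decomposition loop: a master problem proposes timetabling decisions, a subproblem evaluates them and returns dual information, and optimality cuts tighten the master until the bounds meet. A minimal sketch of that generic cut loop on an invented toy problem (not the paper's model; the costs, constraint `x >= 2 - y`, and discrete grid of first-stage choices here are illustrative assumptions):

```python
def solve_subproblem(y):
    # Inner problem: min 3*x  s.t.  x >= 2 - y, x >= 0.
    # Solved analytically; lam is the dual price of the coupling constraint.
    x = max(0.0, 2.0 - y)
    lam = 3.0 if 2.0 - y > 0 else 0.0
    return 3.0 * x, lam

def benders(ys=(0, 1, 2), tol=1e-9, max_iters=20):
    cuts = []  # optimality cuts: theta >= lam * (2 - y)
    y, upper = ys[0], float("inf")
    for _ in range(max_iters):
        def master_cost(cand):
            # Master: first-stage cost plus cut-approximated recourse theta
            theta = max((lam * (2.0 - cand) for lam in cuts), default=0.0)
            return 1.0 * cand + theta
        y = min(ys, key=master_cost)        # solve master by enumeration
        lower = master_cost(y)
        sub_cost, lam = solve_subproblem(y)
        upper = 1.0 * y + sub_cost
        if upper - lower <= tol:
            break                           # bounds met: stop
        cuts.append(lam)                    # add a new optimality cut
    return y, upper
```

Real instances replace the enumeration with a MIP master and the analytic subproblem with an LP (here also embedded in column generation), but the bound-tightening structure is the same.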

@arXiv_physicsoptics_bot@mastoxiv.page
2025-11-25 10:53:53

MOCLIP: A Foundation Model for Large-Scale Nanophotonic Inverse Design
S. Rodionov, A. Burguete-Lopez, M. Makarenko, Q. Wang, F. Getman, A. Fratalocchi
arxiv.org/abs/2511.18980 arxiv.org/pdf/2511.18980 arxiv.org/html/2511.18980
arXiv:2511.18980v1 Announce Type: new
Abstract: Foundation models (FM) are transforming artificial intelligence by enabling generalizable, data-efficient solutions across different domains for a broad range of applications. However, the lack of large and diverse datasets limits the development of FM in nanophotonics. This work presents MOCLIP (Metasurface Optics Contrastive Learning Pretrained), a nanophotonic foundation model that integrates metasurface geometry and spectra within a shared latent space. MOCLIP employs contrastive learning to align geometry and spectral representations using an experimentally acquired dataset with a sample density comparable to ImageNet-1K. The study demonstrates MOCLIP's inverse design capabilities for high-throughput zero-shot prediction at a rate of 0.2 million samples per second, enabling the design of a full 4-inch wafer populated with high-density metasurfaces in minutes. It also shows generative latent-space optimization reaching 97 percent accuracy. Finally, we introduce an optical information storage concept that uses MOCLIP to achieve a density of 0.1 Gbit per square millimeter at the resolution limit, exceeding commercial optical media by a factor of six. These results position MOCLIP as a scalable and versatile platform for next-generation photonic design and data-driven applications.
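The contrastive alignment of geometry and spectra that MOCLIP's abstract describes follows the CLIP recipe: encode both modalities, normalize, and train with a symmetric InfoNCE loss in which matching pairs lie on the diagonal of a similarity matrix. A generic sketch of that loss (an assumption about the training objective's general shape, not the paper's implementation; encoder details are omitted):

```python
import numpy as np

def clip_loss(geom_emb, spec_emb, temperature=0.07):
    # Symmetric InfoNCE over a batch of paired embeddings; matching
    # geometry/spectrum pairs sit on the diagonal of the logit matrix.
    g = geom_emb / np.linalg.norm(geom_emb, axis=1, keepdims=True)
    s = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)
    logits = (g @ s.T) / temperature
    n = len(g)

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # Average the geometry->spectrum and spectrum->geometry directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Correctly paired batches score a lower loss than mismatched ones, which is what pulls the two modalities into the shared latent space.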

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:30

You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
arxiv.org/abs/2512.17678 arxiv.org/pdf/2512.17678 arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovering gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
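Differentiable discrete subset selection of the kind YOTO's abstract describes is commonly built from Gumbel-softmax (concrete) gates: each selector row softly picks one feature, the relaxation stays differentiable in its logits, and it hardens to a discrete choice as the temperature drops. A generic forward-pass sketch of that mechanism (an assumed standard technique, not the paper's architecture; the sizes and the expression vector are made up):

```python
import numpy as np

def gumbel_softmax_gates(logits, tau=0.5, rng=None):
    # Relaxed one-hot sample per selector row; differentiable in the
    # logits, and concentrating on a single gene as tau -> 0.
    rng = rng if rng is not None else np.random.default_rng(0)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    return y / y.sum(axis=-1, keepdims=True)

# k selectors over d candidate genes: each row softly picks one gene.
d, k = 10, 3
gates = gumbel_softmax_gates(np.zeros((k, d)))
mask = gates.max(axis=0)                 # per-gene soft inclusion score
expression = np.arange(d, dtype=float)   # stand-in for one cell's profile
masked_input = expression * mask         # only (near-)selected genes pass
```

Because the predictor only ever sees `masked_input`, the selection is trained jointly with prediction rather than bolted on as a post hoc attribution step.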

@arXiv_physicsoptics_bot@mastoxiv.page
2025-11-25 18:17:39

Replaced article(s) found for physics.optics. arxiv.org/list/physics.optics/
[1/1]:
- LLM4Laser: Large Language Models Automate the Design of Lasers
Renjie Li, Ceyao Zhang, Sixuan Mao, Xiyuan Zhou, Feng Yin, Sergios Theodoridis, Zhaoyu Zhang
arxiv.org/abs/2104.12145
- Room-temperature valley-selective emission in Si-MoSe2 heterostructures enabled by high-quality-f...
Feng Pan, et al.
arxiv.org/abs/2409.09806 mastoxiv.page/@arXiv_physicsop
- 1T'-MoTe$_2$ as an integrated saturable absorber for photonic machine learning
Maria Carolina Volpato, Henrique G. Rosa, Tom Reep, Pierre-Louis de Assis, Newton Cesario Frateschi
arxiv.org/abs/2507.16140 mastoxiv.page/@arXiv_physicsop
- NeOTF: Guidestar-free neural representation for broadband dynamic imaging through scattering
Yunong Sun, Fei Xia
arxiv.org/abs/2507.22328 mastoxiv.page/@arXiv_physicsop
- Structured Random Models for Phase Retrieval with Optical Diffusers
Zhiyuan Hu, Fakhriyya Mammadova, Julián Tachella, Michael Unser, Jonathan Dong
arxiv.org/abs/2510.14490 mastoxiv.page/@arXiv_physicsop
- Memory Effects in Time-Modulated Radiative Heat Transfer
Riccardo Messina, Philippe Ben-Abdallah
arxiv.org/abs/2510.19378 mastoxiv.page/@arXiv_physicsop
- Mie-tronics supermodes and symmetry breaking in nonlocal metasurfaces
Thanh Xuan Hoang, Ayan Nussupbekov, Jie Ji, Daniel Leykam, Jaime Gomez Rivas, Yuri Kivshar
arxiv.org/abs/2511.03560 mastoxiv.page/@arXiv_physicsop
- Integrated soliton microcombs beyond the turnkey limit
Wang, Xu, Wang, Zhu, Luo, Luo, Wang, Ni, Yang, Gong, Xiao, Li, Yang
arxiv.org/abs/2511.06909 mastoxiv.page/@arXiv_physicsop
- Ising accelerator with a reconfigurable interferometric photonic processor
Rausell-Campo, Al Kayed, Pérez-López, Aadhi, Shastri, Francoy
arxiv.org/abs/2511.13284 mastoxiv.page/@arXiv_physicsop
- Superradiance in dense atomic samples
I. M. de Araújo, H. Sanchez, L. F. Alves da Silva, M. H. Y. Moussa
arxiv.org/abs/2504.20242 mastoxiv.page/@arXiv_quantph_b
- Fluctuation-induced Hall-like lateral forces in a chiral-gain environment
Daigo Oue, Mário G. Silveirinha
arxiv.org/abs/2507.14754 mastoxiv.page/@arXiv_condmatme
- Tensor-network approach to quantum optical state evolution beyond the Fock basis
Nikolay Kapridov, Egor Tiunov, Dmitry Chermoshentsev
arxiv.org/abs/2511.15295 mastoxiv.page/@arXiv_quantph_b
- OmniLens: Blind Lens Aberration Correction via Large LensLib Pre-Training and Latent PSF Repres...
Jiang, Qian, Gao, Sun, Yang, Yi, Li, Yang, Van Gool, Wang
arxiv.org/abs/2511.17126 mastoxiv.page/@arXiv_eessIV_bo