Tootfinder

Opt-in global Mastodon full-text search. Join the index!

No exact results. Similar results found.
@arXiv_csPL_bot@mastoxiv.page
2025-10-07 08:35:32

Retrofitting Control Flow Graphs in LLVM IR for Auto Vectorization
Shihan Fang, Wenxin Zheng
arxiv.org/abs/2510.04890 arxiv.org/pdf/2510.04…

@arXiv_csHC_bot@mastoxiv.page
2025-10-07 10:22:32

NERVIS: An Interactive System for Graph-Based Exploration and Editing of Named Entities
Uroš Šmajdek, Ciril Bohak
arxiv.org/abs/2510.04971

@arXiv_physicsoptics_bot@mastoxiv.page
2025-11-25 10:53:53

MOCLIP: A Foundation Model for Large-Scale Nanophotonic Inverse Design
S. Rodionov, A. Burguete-Lopez, M. Makarenko, Q. Wang, F. Getman, A. Fratalocchi
arxiv.org/abs/2511.18980 arxiv.org/pdf/2511.18980 arxiv.org/html/2511.18980
arXiv:2511.18980v1 Announce Type: new
Abstract: Foundation models (FMs) are transforming artificial intelligence by enabling generalizable, data-efficient solutions across domains for a broad range of applications. However, the lack of large and diverse datasets has limited the development of FMs in nanophotonics. This work presents MOCLIP (Metasurface Optics Contrastive Learning Pretrained), a nanophotonic foundation model that integrates metasurface geometry and spectra within a shared latent space. MOCLIP employs contrastive learning to align geometry and spectral representations using an experimentally acquired dataset with a sample density comparable to ImageNet-1K. The study demonstrates MOCLIP's inverse-design capabilities for high-throughput zero-shot prediction at a rate of 0.2 million samples per second, enabling the design of a full 4-inch wafer populated with high-density metasurfaces in minutes. It also shows generative latent-space optimization reaching 97 percent accuracy. Finally, we introduce an optical information storage concept that uses MOCLIP to achieve a density of 0.1 Gbit per square millimeter at the resolution limit, exceeding commercial optical media by a factor of six. These results position MOCLIP as a scalable and versatile platform for next-generation photonic design and data-driven applications.
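
A minimal sketch of the CLIP-style contrastive alignment the abstract describes, in PyTorch: one encoder for metasurface geometry, one for spectra, trained with a symmetric InfoNCE loss over a shared latent space. All layer sizes, dimensions, and names below are illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometrySpectrumAligner(nn.Module):
    # Hypothetical two-tower model; the paper's encoders likely differ.
    def __init__(self, geom_dim=256, spec_dim=401, latent_dim=128):
        super().__init__()
        self.geom_encoder = nn.Sequential(
            nn.Linear(geom_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim))
        self.spec_encoder = nn.Sequential(
            nn.Linear(spec_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim))
        # Learnable temperature, as in CLIP.
        self.log_temp = nn.Parameter(torch.tensor(0.0))

    def forward(self, geometry, spectrum):
        # Project both modalities into the shared latent space and normalize.
        g = F.normalize(self.geom_encoder(geometry), dim=-1)
        s = F.normalize(self.spec_encoder(spectrum), dim=-1)
        logits = g @ s.t() * self.log_temp.exp()
        # Matched geometry/spectrum pairs lie on the diagonal.
        targets = torch.arange(len(geometry), device=logits.device)
        return (F.cross_entropy(logits, targets)
                + F.cross_entropy(logits.t(), targets)) / 2

Once the two towers are aligned, the zero-shot prediction the abstract mentions could amount to encoding a target spectrum and retrieving the nearest geometry embeddings, which is cheap enough to batch at high throughput.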

@arXiv_csLG_bot@mastoxiv.page
2025-10-09 10:41:51

Unified Molecule Pre-training with Flexible 2D and 3D Modalities: Single and Paired Modality Integration
Tengwei Song, Min Wu, Yuan Fang
arxiv.org/abs/2510.07035

@arXiv_csCY_bot@mastoxiv.page
2025-10-15 08:18:32

From Delegates to Trustees: How Optimizing for Long-Term Interests Shapes Bias and Alignment in LLMs
Suyash Fulay, Jocelyn Zhu, Michiel Bakker
arxiv.org/abs/2510.12689

@brichapman@mastodon.social
2025-12-20 19:28:00

38 coastal, remote, and island communities are getting a lifeline for their fragile energy grids.
Through the Energy Technology Innovation Partnership Project, they're designing microgrids, exploring local renewable generation, and hardening systems against extreme weather. The goal: reliable, affordable power that can withstand the next storm.

@arXiv_csCV_bot@mastoxiv.page
2025-10-14 13:46:08

EvoCAD: Evolutionary CAD Code Generation with Vision Language Models
Tobias Preintner, Weixuan Yuan, Adrian König, Thomas Bäck, Elena Raponi, Niki van Stein
arxiv.org/abs/2510.11631

@arXiv_csPL_bot@mastoxiv.page
2025-10-14 09:42:18

HUGR: A Quantum-Classical Intermediate Representation
Mark Koch, Agustín Borgna, Seyon Sivarajah, Alan Lawrence, Alec Edgington, Douglas Wilson, Craig Roy, Luca Mondada, Lukas Heidemann, Ross Duncan
arxiv.org/abs/2510.11420

@arXiv_csLG_bot@mastoxiv.page
2025-10-15 10:49:31

SG-XDEAT: Sparsity-Guided Cross-Dimensional and Cross-Encoding Attention with Target-Aware Conditioning in Tabular Learning
Chih-Chuan Cheng, Yi-Ju Tseng
arxiv.org/abs/2510.12659

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:30

You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
arxiv.org/abs/2512.17678 arxiv.org/pdf/2512.17678 arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, interpretability, and cost-effective profiling. However, most existing feature-selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, leaving selection and prediction only weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop lets the model iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another and yielding gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene-subset selection improves predictive performance and yields compact, meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
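
A loose sketch of end-to-end differentiable subset selection in PyTorch, using a Gumbel-softmax relaxation (one common choice; the paper's exact selection mechanism may differ). The selector and the classifier train jointly, and at inference only the k hard-selected genes reach the prediction head, matching the sparsity property the abstract emphasizes. All sizes and names are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableGeneSelector(nn.Module):
    def __init__(self, n_genes=2000, k=64, n_classes=10, temperature=0.5):
        super().__init__()
        # One categorical distribution over all genes per selected slot.
        # (This simplified version does not prevent two slots from
        # picking the same gene.)
        self.selector_logits = nn.Parameter(torch.zeros(k, n_genes))
        self.temperature = temperature
        self.head = nn.Sequential(
            nn.Linear(k, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, x):  # x: (batch, n_genes)
        if self.training:
            # Relaxed one-hot selection; gradients flow back to the logits,
            # so the prediction loss directly shapes which genes are kept.
            w = F.gumbel_softmax(self.selector_logits,
                                 tau=self.temperature, dim=-1)
        else:
            # Hard selection at inference: exactly k gene columns survive.
            w = F.one_hot(self.selector_logits.argmax(-1),
                          num_classes=x.shape[-1]).float()
        # (batch, n_genes) @ (n_genes, k): only selected genes feed the head.
        return self.head(x @ w.t())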