Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_condmatmtrlsci_bot@mastoxiv.page
2025-09-05 07:45:50

Combining feature-based approaches with graph neural networks and symbolic regression for synergistic performance and interpretability
Rogério Almeida Gouvêa, Pierre-Paul De Breuck, Tatiane Pretto, Gian-Marco Rignanese, Marcos José Leite dos Santos
arxiv.org/abs/2509.03547

@arXiv_csSE_bot@mastoxiv.page
2025-08-29 09:17:41

ConfLogger: Enhance Systems' Configuration Diagnosability through Configuration Logging
Shiwen Shan, Yintong Huo, Yuxin Su, Zhining Wang, Dan Li, Zibin Zheng
arxiv.org/abs/2508.20977

@arXiv_statAP_bot@mastoxiv.page
2025-09-05 08:26:51

Latent space projections and atlases: A cautionary tale in deep neuroimaging using autoencoders
J. M. Gorriz, F. Segovia, C. Jimenez, J. E. Arco, F. J. Martinez, J. Ramirez, S. Abulikemu, J. Suckling
arxiv.org/abs/2509.03675

@arXiv_eessIV_bot@mastoxiv.page
2025-10-02 08:38:01

Latent Representation Learning from 3D Brain MRI for Interpretable Prediction in Multiple Sclerosis
Trinh Ngoc Huynh, Nguyen Duc Kien, Nguyen Hai Anh, Dinh Tran Hiep, Manuela Vaneckova, Tomas Uher, Jeroen Van Schependom, Stijn Denissen, Tran Quoc Long, Nguyen Linh Trung, Guy Nagels
arxiv.org/abs/2510.00051

@arXiv_statME_bot@mastoxiv.page
2025-10-02 09:19:00

A Data-Adaptive Factor Model Using Composite Quantile Approach
Seeun Park, Hee-Seok Oh
arxiv.org/abs/2510.00558 arxiv.org/pdf/2510.00558

@arXiv_csCE_bot@mastoxiv.page
2025-09-03 10:58:43

Autoencoder-based non-intrusive model order reduction in continuum mechanics
Jannick Kehls, Ellen Kuhl, Tim Brepols, Kevin Linka, Hagen Holthusen
arxiv.org/abs/2509.02237

@arXiv_csCR_bot@mastoxiv.page
2025-10-02 10:08:41

Fast, Secure, and High-Capacity Image Watermarking with Autoencoded Text Vectors
Gautier Evennou, Vivien Chappelier, Ewa Kijak
arxiv.org/abs/2510.00799

@arXiv_physicsoptics_bot@mastoxiv.page
2025-10-03 09:36:51

Towards Photonic Band Diagram Generation with Transformer-Latent Diffusion Models
Valentin Delchevalerie, Nicolas Roy, Arnaud Bougaham, Alexandre Mayer, Benoît Frénay, Michaël Lobet
arxiv.org/abs/2510.01749

@arXiv_csCV_bot@mastoxiv.page
2025-09-26 10:26:01

Does FLUX Already Know How to Perform Physically Plausible Image Composition?
Shilin Lu, Zhuming Lian, Zihan Zhou, Shaocong Zhang, Chen Zhao, Adams Wai-Kin Kong
arxiv.org/abs/2509.21278

@arXiv_csLG_bot@mastoxiv.page
2025-10-01 11:58:17

Learning to See Before Seeing: Demystifying LLM Visual Priors from Language Pre-training
Junlin Han, Shengbang Tong, David Fan, Yufan Ren, Koustuv Sinha, Philip Torr, Filippos Kokkinos
arxiv.org/abs/2509.26625