SecInfer: Preventing Prompt Injection via Inference-time Scaling
Yupei Liu, Yanting Wang, Yuqi Jia, Jinyuan Jia, Neil Zhenqiang Gong
https://arxiv.org/abs/2509.24967
Markov Decision Processing Networks
Sanidhay Bhambay, Thirupathaiah Vasantam, Neil Walton
https://arxiv.org/abs/2509.24541 https://arxiv.org/pdf/2509.24541
Multihead Finite-State Dimension
Xiang Huang, Xiaoyuan Li, Jack H. Lutz, Neil Lutz
https://arxiv.org/abs/2509.22912 https://arxiv.org/pdf/2509.22912
Spectral-temporal processing using integrated recursive electro-optic circuit
Xudong Li, Yaowen Hu, Tong Ge, Andrea Cordaro, Yunxiang Song, Xinrui Zhu, Shengyuan Lu, Keith Powell, Letícia Magalhães, Urban Senica, Neil Sinclair, Marko Lončar
https://arxiv.org/abs/2509.25102
Comprehensive X-ray Observations of the Exceptional Ultra-long X-ray and Gamma-ray Transient GRB 250702B with Swift, NuSTAR, and Chandra: Insights from the X-ray Afterglow Properties
Brendan O'Connor, Ramandeep Gill, James DeLaunay, Jeremy Hare, Dheeraj Pasham, Eric R. Coughlin, Ananya Bandopadhyay, Akash Anumarlapudi, Paz Beniamini, Jonathan Granot, Igor Andreoni, Jonathan Carney, Michael J. Moss, Ersin Göğüş, Jamie A. Kennea, Malte Busmann, Simone Dichiara, …
NaviGait: Navigating Dynamically Feasible Gait Libraries using Deep Reinforcement Learning
Neil C. Janwani, Varun Madabushi, Maegan Tucker
https://arxiv.org/abs/2510.11542
Exploiting ID-Text Complementarity via Ensembling for Sequential Recommendation
Liam Collins, Bhuvesh Kumar, Clark Mingxuan Ju, Tong Zhao, Donald Loveland, Leonardo Neves, Neil Shah
https://arxiv.org/abs/2512.17820 https://arxiv.org/pdf/2512.17820 https://arxiv.org/html/2512.17820
arXiv:2512.17820v1 Announce Type: new
Abstract: Modern Sequential Recommendation (SR) models commonly utilize modality features to represent items, motivated in large part by recent advancements in language and vision modeling. To do so, several works completely replace ID embeddings with modality embeddings, claiming that modality embeddings render ID embeddings unnecessary because they can match or even exceed ID embedding performance. On the other hand, many works jointly utilize ID and modality features, but posit that complex fusion strategies, such as multi-stage training and/or intricate alignment architectures, are necessary for this joint utilization. However, underlying both these lines of work is a lack of understanding of the complementarity of ID and modality features. In this work, we address this gap by studying the complementarity of ID- and text-based SR models. We show that these models do learn complementary signals, meaning that either should provide performance gain when used properly alongside the other. Motivated by this, we propose a new SR method that preserves ID-text complementarity through independent model training, then harnesses it through a simple ensembling strategy. Despite this method's simplicity, we show it outperforms several competitive SR baselines, implying that both ID and text features are necessary to achieve state-of-the-art SR performance but complex fusion architectures are not.
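The abstract describes training ID- and text-based recommenders independently and then combining them with a simple ensembling strategy. A minimal sketch of one such strategy — a convex combination of per-model z-normalized item scores — is below. The function names, the z-score normalization, and the blending weight `alpha` are illustrative assumptions, not the paper's exact method.

```python
from statistics import mean, pstdev

def zscore(xs):
    """Normalize one model's item scores so neither model dominates by scale."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s if s > 0 else x - m for x in xs]

def ensemble_scores(id_scores, text_scores, alpha=0.5):
    """Blend scores from independently trained ID- and text-based models.

    alpha weights the ID model; 1 - alpha weights the text model.
    (Hypothetical scheme; the paper's ensembling rule may differ.)
    """
    zi, zt = zscore(id_scores), zscore(text_scores)
    return [alpha * a + (1 - alpha) * b for a, b in zip(zi, zt)]

def top_k(scores, k=2):
    """Indices of the k highest-scoring candidate items."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# Toy example: 4 candidate items scored by each model. The ID model favors
# item 0, the text model favors item 2; the blend surfaces both.
blended = ensemble_scores([3.0, 0.0, 1.0, 0.0], [0.0, 1.0, 3.0, 0.0], alpha=0.5)
print(top_k(blended, k=2))  # → [2, 0]
```

Because the two models are trained independently, this kind of post-hoc blend preserves whatever complementary signal each learned, with no joint fusion architecture needed.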
Position: AI Will Transform Neuropsychology Through Mental Health Digital Twins for Dynamic Mental Health Care, Especially for ADHD
Neil Natarajan, Sruthi Viswanathan, Xavier Roberts-Gaal, Michelle Marie Martel
https://arxiv.org/abs/2510.07409
Numerical modeling of laser cooling in molecules: From simple diatomics to polyatomics and radioactive species
Felix Kogel, Tatsam Garg, Phillip Groß, Lukas Leczek, Marian Rockenhäuser, Neil Shah, Jakob Weiß, Andreas Schindewolf, Tim Langen
https://arxiv.org/abs/2510.16203
Replaced article(s) found for hep-ph. https://arxiv.org/list/hep-ph/new
[1/2]:
- Repurposing lattice QCD results for composite phenomenology
Thomas DeGrand, Ethan T. Neil