Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_quantph_bot@mastoxiv.page
2025-06-10 18:56:00

This arxiv.org/abs/2505.22083 has been replaced.
initial toot: mastoxiv.page/@arXiv_qu…

@arXiv_physicsoptics_bot@mastoxiv.page
2025-06-10 11:12:12

Dense Associative Memory in a Nonlinear Optical Hopfield Neural Network
Khalid Musa, Santosh Kumar, Michael Katidis, Yu-Ping Huang
arxiv.org/abs/2506.07849

@arXiv_csLG_bot@mastoxiv.page
2025-07-11 10:23:51

Skip a Layer or Loop it? Test-Time Depth Adaptation of Pretrained LLMs
Ziyue Li, Yang Li, Tianyi Zhou
arxiv.org/abs/2507.07996 arxiv.org/pdf/2507.07996 arxiv.org/html/2507.07996
arXiv:2507.07996v1 Announce Type: new
Abstract: Can a pretrained neural network adapt its architecture to different inputs without any finetuning? Do we need all layers for simple tasks, and are they adequate for challenging tasks? We found that the layers of a pretrained large language model (LLM) can be manipulated as separate modules to build a better and even shallower model customized for each test sample. In particular, each layer from the pretrained model can be skipped/pruned or repeated multiple times as recurrent neural networks (RNNs), and stacked with others in arbitrary orders, yielding a chain-of-layers (CoLa) per sample. This compositional space greatly expands the scope of existing works on looped/recurrent pretrained modules, layer pruning, or early-exit networks. We develop a Monte Carlo Tree Search (MCTS) protocol to explore and identify the optimal CoLa for each sample from math and commonsense reasoning benchmarks. Compared to a static model of a fixed depth, CoLa allows shortcut paths (fast thinking), recurrence of the same layer(s) (slow thinking), and combining both, offering more flexible, dynamic architectures for different inputs. We conduct an extensive analysis of the MCTS-optimized CoLa, which leads to two key findings: (1) For >75% of samples with correct predictions by the original LLM, we can find shorter CoLa, suggesting a large space for improving inference efficiency; (2) For >60% of samples with originally incorrect predictions, we can identify CoLa achieving correct predictions, suggesting a large space for performance enhancement. Our results highlight the shortcomings of using a fixed architecture of pretrained LLMs for inference on different samples and pave the way to unlock the generalization power of test-time depth adaptation.
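The chain-of-layers idea in the abstract is easy to picture in code. Below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: toy linear layers stand in for pretrained transformer blocks (real decoder layers also take attention masks and return tuples), and a hand-specified list of layer indices encodes which layers are skipped or repeated. The `apply_chain_of_layers` function and the example chains are illustrative names; the paper searches the space of chains with MCTS rather than picking them by hand.

```python
import torch

def apply_chain_of_layers(layers, hidden_states, chain):
    """Run hidden states through an arbitrary chain of pretrained layers.

    `chain` is a list of layer indices: omitting an index skips/prunes
    that layer; repeating an index loops the same weights like an RNN step.
    """
    for idx in chain:
        # Each pretrained layer is reused as a standalone module.
        hidden_states = layers[idx](hidden_states)
    return hidden_states

# Toy stand-ins for a 6-layer pretrained model.
layers = torch.nn.ModuleList(torch.nn.Linear(16, 16) for _ in range(6))
x = torch.randn(1, 16)

# Fast thinking: a shallower path that skips layers 2-4.
shallow = apply_chain_of_layers(layers, x, chain=[0, 1, 5])

# Slow thinking: original order, but layer 3 is applied twice (recurrence).
recurrent = apply_chain_of_layers(layers, x, chain=[0, 1, 2, 3, 3, 4, 5])
```

Under this framing, a fixed-depth forward pass is just the special case `chain=[0, 1, ..., N-1]`, which is why the compositional space strictly contains layer pruning, looping, and early exit.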

@arXiv_csNE_bot@mastoxiv.page
2025-06-10 16:47:59

This arxiv.org/abs/2506.05588 has been replaced.
initial toot: mastoxiv.page/@arXiv_csNE_…

@arXiv_qbioNC_bot@mastoxiv.page
2025-07-09 08:31:22

A Linear Generative Framework for Structure-Function Coupling in the Human Brain
Sam Frank Kelemen, Joaquín Goñi, Sérgio Pequito, Arian Ashourvan
arxiv.org/abs/2507.06136

@arXiv_qbioMN_bot@mastoxiv.page
2025-06-06 09:53:26

This arxiv.org/abs/2406.03456 has been replaced.
initial toot: mastoxiv.page/@arXiv_qbi…

@arXiv_csSD_bot@mastoxiv.page
2025-06-30 08:31:00

Fine-Tuning MIDI-to-Audio Alignment using a Neural Network on Piano Roll and CQT Representations
Sebastian Murgul, Moritz Reiser, Michael Heizmann, Christoph Seibert
arxiv.org/abs/2506.22237

@arXiv_csNE_bot@mastoxiv.page
2025-06-09 07:46:22

Preprocessing Methods for Memristive Reservoir Computing for Image Recognition
Rishona Daniels, Duna Wattad, Ronny Ronen, David Saad, Shahar Kvatinsky
arxiv.org/abs/2506.05588

@arXiv_csCE_bot@mastoxiv.page
2025-06-06 07:15:45

ChemReservoir -- An Open-Source Framework for Chemically-Inspired Reservoir Computing
Mehmet Aziz Yirik, Jakob Lykke Andersen, Rolf Fagerberg, Daniel Merkle
arxiv.org/abs/2506.04249

@arXiv_eessSY_bot@mastoxiv.page
2025-06-05 09:49:21

This arxiv.org/abs/2506.01226 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@arXiv_condmatdisnn_bot@mastoxiv.page
2025-07-28 08:32:41

Adaptive Neural Quantum States: A Recurrent Neural Network Perspective
Jake McNaughton, Mohamed Hibat-Allah
arxiv.org/abs/2507.18700 arxiv.…

@arXiv_qfinST_bot@mastoxiv.page
2025-08-06 09:01:30

Adaptive Market Intelligence: A Mixture of Experts Framework for Volatility-Sensitive Stock Forecasting
Diego Vallarino
arxiv.org/abs/2508.02686

@arXiv_eessAS_bot@mastoxiv.page
2025-07-04 08:57:41

Multi-Utterance Speech Separation and Association Trained on Short Segments
Yuzhu Wang, Archontis Politis, Konstantinos Drossos, Tuomas Virtanen
arxiv.org/abs/2507.02562

@arXiv_eessIV_bot@mastoxiv.page
2025-06-25 09:44:50

ReCoGNet: Recurrent Context-Guided Network for 3D MRI Prostate Segmentation
Ahmad Mustafa, Reza Rastegar, Ghassan AlRegib
arxiv.org/abs/2506.19687

@arXiv_csCV_bot@mastoxiv.page
2025-07-14 10:08:22

NeuralOS: Towards Simulating Operating Systems via Neural Generative Models
Luke Rivard, Sun Sun, Hongyu Guo, Wenhu Chen, Yuntian Deng
arxiv.org/abs/2507.08800

@arXiv_eessSY_bot@mastoxiv.page
2025-06-03 07:42:51

React to Surprises: Stable-by-Design Neural Feedback Control and the Youla-REN
Nicholas H. Barbara, Ruigang Wang, Alexandre Megretski, Ian R. Manchester
arxiv.org/abs/2506.01226

@arXiv_qbioNC_bot@mastoxiv.page
2025-05-29 07:36:53

Organizational Regularities in Recurrent Neural Networks
Claus Metzner, Achim Schilling, Andreas Maier, Patrick Krauss
arxiv.org/abs/2505.22047

@arXiv_csMM_bot@mastoxiv.page
2025-06-03 07:21:56

Iola Walker: A Mobile Footfall Detection System for Music Composition
Will James
arxiv.org/abs/2506.01211 arxiv.org/p…

@arXiv_csCE_bot@mastoxiv.page
2025-07-04 08:23:01

Time Resolution Independent Operator Learning
Diab W. Abueidda, Mbebo Nonna, Panos Pantidis, Mostafa E. Mobasher
arxiv.org/abs/2507.02524

@arXiv_eessSY_bot@mastoxiv.page
2025-06-26 08:52:40

Recurrent neural network-based robust control systems with closed-loop regional incremental ISS and application to MPC design
Daniele Ravasio, Marcello Farina, Alessio La Bella, Andrea Ballarino
arxiv.org/abs/2506.20334

@arXiv_csNE_bot@mastoxiv.page
2025-07-30 08:00:46

Hebbian Memory-Augmented Recurrent Networks: Engram Neurons in Deep Learning
Daniel Szelogowski
arxiv.org/abs/2507.21474 arxiv.org/pdf/2507…

@arXiv_condmatquantgas_bot@mastoxiv.page
2025-07-29 16:17:01

Replaced article(s) found for cond-mat.quant-gas. arxiv.org/list/cond-mat.quant-
[1/1]:
- Recurrent neural network wave functions for Rydberg atom arrays on kagome lattice
Mohamed Hibat-Allah, Ejaaz Merali, Giacomo Torlai, Roger G Melko, Juan Carrasquilla…

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-23 09:06:30

Brain-inspired interpretable reservoir computing with resonant recurrent neural networks
Mark A. Kramer
arxiv.org/abs/2506.17083

@arXiv_csIR_bot@mastoxiv.page
2025-06-17 09:56:33

Device-Cloud Collaborative Correction for On-Device Recommendation
Tianyu Zhan, Shengyu Zhang, Zheqi Lv, Jieming Zhu, Jiwei Li, Fan Wu, Fei Wu
arxiv.org/abs/2506.12687

@arXiv_eessSP_bot@mastoxiv.page
2025-06-16 08:29:10

Recursive KalmanNet: Deep Learning-Augmented Kalman Filtering for State Estimation with Consistent Uncertainty Quantification
Hassan Mortada, Cyril Falcon, Yanis Kahil, Mathéo Clavaud, Jean-Philippe Michel
arxiv.org/abs/2506.11639

@arXiv_csSD_bot@mastoxiv.page
2025-07-23 12:41:59

Replaced article(s) found for cs.SD. arxiv.org/list/cs.SD/new
[1/1]:
- ReMi: A Random Recurrent Neural Network Approach to Music Production
Hugo Chateau-Laurent, Tara Vanhatalo, Wei-Tung Pan, Xavier Hinaut

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-24 09:22:09

Sequence-to-Sequence Models with Attention Mechanistically Map to the Architecture of Human Memory Search
Nikolaus Salvatore, Qiong Zhang
arxiv.org/abs/2506.17424

@arXiv_csNE_bot@mastoxiv.page
2025-07-16 09:05:21

Biological Processing Units: Leveraging an Insect Connectome to Pioneer Biofidelic Neural Architectures
Siyu Yu, Zihan Qin, Tingshan Liu, Beiya Xu, R. Jacob Vogelstein, Jason Brown, Joshua T. Vogelstein
arxiv.org/abs/2507.10951