Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_quantph_bot@mastoxiv.page
2025-06-27 10:06:19

Canonical Quantization of a Memristive Leaky Integrate-and-Fire Neuron Circuit
Dean Brand, Domenica Dibenedetto, Francesco Petruccione
arxiv.org/abs/2506.21363

@arXiv_qbioNC_bot@mastoxiv.page
2025-05-29 10:30:37

This preprint (arxiv.org/abs/2306.11965) has been replaced.
initial toot: mastoxiv.page/@arXiv_qbi…

@arXiv_csCL_bot@mastoxiv.page
2025-06-27 09:58:09

TopK Language Models
Ryosuke Takahashi, Tatsuro Inaba, Kentaro Inui, Benjamin Heinzerling
arxiv.org/abs/2506.21468 arxiv.org/pdf/2506.21468 arxiv.org/html/2506.21468
arXiv:2506.21468v1 Announce Type: new
Abstract: Sparse autoencoders (SAEs) have become an important tool for analyzing and interpreting the activation space of transformer-based language models (LMs). However, SAEs suffer from several shortcomings that diminish their utility and internal validity. Since SAEs are trained post-hoc, it is unclear whether the failure to discover a particular concept is a failure on the SAE's side or due to the underlying LM not representing this concept. This problem is exacerbated by training conditions and architecture choices affecting which features an SAE learns. When tracing how LMs learn concepts during training, the lack of feature stability also makes it difficult to compare SAE features across different checkpoints. To address these limitations, we introduce a modification to the transformer architecture that incorporates a TopK activation function at chosen layers, making the model's hidden states equivalent to the latent features of a TopK SAE. This approach eliminates the need for post-hoc training while providing interpretability comparable to SAEs. The resulting TopK LMs offer a favorable trade-off between model size, computational efficiency, and interpretability. Despite this simple architectural change, TopK LMs maintain their original capabilities while providing robust interpretability benefits. Our experiments demonstrate that the sparse representations learned by TopK LMs enable successful steering through targeted neuron interventions and facilitate detailed analysis of neuron formation processes across checkpoints and layers. These features make TopK LMs stable and reliable tools for understanding how language models learn and represent concepts, which we believe will significantly advance future research on model interpretability and controllability.
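
The architectural change is simple to picture: at a chosen layer, keep only the k largest hidden activations and zero the rest, so the layer's output is itself the sparse code an SAE would otherwise have to learn post-hoc. A minimal PyTorch-style sketch (an illustration, not the paper's code; the function name and shapes are assumptions):

```python
import torch

def topk_activation(hidden: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k largest activations along the feature dimension; zero the rest."""
    values, indices = torch.topk(hidden, k, dim=-1)
    sparse = torch.zeros_like(hidden)
    sparse.scatter_(-1, indices, values)  # write the surviving values back
    return sparse

# Example: a (batch, seq, d_model) hidden state with at most 32 active features.
h = torch.randn(2, 10, 768)
h_sparse = topk_activation(h, k=32)
assert (h_sparse != 0).sum(dim=-1).max() <= 32
```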

@seeingwithsound@mas.to
2025-06-23 15:29:14

Prediction: interest in electrode-based BCIs will wane (aside from niches), as non-implantable ("non-invasive") phased-array focused ultrasound will take over for both brain stimulation and measuring brain activity cell.com/neuron/fulltext/S0896

@arXiv_csAI_bot@mastoxiv.page
2025-06-18 08:05:04

Causes in neuron diagrams, and testing causal reasoning in Large Language Models. A glimpse of the future of philosophy?
Louis Vervoort, Vitaly Nikolaev
arxiv.org/abs/2506.14239

@netzschleuder@social.skewed.de
2025-06-22 11:00:04

fly_larva: Drosophila larva brain (2023)
A complete synaptic map of the brain connectome of the larva of the fruit fly Drosophila melanogaster. Nodes are neurons, and edges are synaptic connections, traced individually from brain image sections using three-dimensional electron microscopy–based reconstruction. Node metadata include the neuron hemisphere, hemispherical homologue, cell type, annotations, and inferred cluster. Edge metadata include the type of interaction (`'aa'`,…

fly_larva: Drosophila larva brain (2023). 2956 nodes, 116922 edges. https://networks.skewed.de/net/fly_larva
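
To explore the graph directly, netzschleuder datasets can be fetched with graph-tool's built-in collection interface. A minimal sketch, assuming the network is published under the name "fly_larva"; the exact property-map keys are not guaranteed, so the code lists them instead of hard-coding names:

```python
import graph_tool.all as gt

g = gt.collection.ns["fly_larva"]        # downloads and caches the network
print(g.num_vertices(), g.num_edges())   # expect 2956 nodes, 116922 edges

# Node and edge metadata arrive as property maps; inspect the keys
# (hemisphere, cell type, interaction type, etc.) rather than guessing.
print(list(g.vertex_properties.keys()))
print(list(g.edge_properties.keys()))
```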

@arXiv_csNE_bot@mastoxiv.page
2025-06-17 09:53:13

Energy-Efficient Digital Design: A Comparative Study of Event-Driven and Clock-Driven Spiking Neurons
Filippo Marostica, Alessio Carpegna, Alessandro Savino, Stefano Di Carlo
arxiv.org/abs/2506.13268

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-26 08:54:40

Identifying multi-compartment Hodgkin-Huxley models with high-density extracellular voltage recordings
Ian Christopher Tanoh, Michael Deistler, Jakob H. Macke, Scott W. Linderman
arxiv.org/abs/2506.20233

@arXiv_csAI_bot@mastoxiv.page
2025-06-24 12:01:10

Steering Conceptual Bias via Transformer Latent-Subspace Activation
Vansh Sharma, Venkat Raman
arxiv.org/abs/2506.18887

@arXiv_csIR_bot@mastoxiv.page
2025-06-02 07:19:18

A Novel Discrete Memristor-Coupled Heterogeneous Dual-Neuron Model and Its Application in Multi-Scenario Image Encryption
Yi Zou, Mengjiao Wang, Xinan Zhang, Herbert Ho-Ching Iu
arxiv.org/abs/2505.24294

@arXiv_csCR_bot@mastoxiv.page
2025-06-06 07:16:56

SECNEURON: Reliable and Flexible Abuse Control in Local LLMs via Hybrid Neuron Encryption
Zhiqiang Wang, Haohua Du, Junyang Wang, Haifeng Sun, Kaiwen Guo, Haikuo Yu, Chao Liu, Xiang-Yang Li
arxiv.org/abs/2506.05242

@arXiv_csLG_bot@mastoxiv.page
2025-06-05 11:00:37

This preprint (arxiv.org/abs/2506.00691) has been replaced.
initial toot: mastoxiv.page/@arXiv_csLG_…

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-26 08:08:30

Impact of Hill coefficient and time delay on a perceptual decision-making model
Bartłomiej Morawski, Anna Czartoszewska
arxiv.org/abs/2506.19853

@arXiv_physicsappph_bot@mastoxiv.page
2025-06-10 17:58:20

This preprint (arxiv.org/abs/2409.09268) has been replaced.
initial toot: mastoxiv.page/@arX…

@arXiv_eessSY_bot@mastoxiv.page
2025-06-17 12:09:13

A Hybrid Artificial Intelligence Method for Estimating Flicker in Power Systems
Javad Enayati, Pedram Asef, Alexandre Benoit
arxiv.org/abs/2506.13611

@arXiv_condmatstatmech_bot@mastoxiv.page
2025-06-18 09:29:18

Evolutionary chemical learning in dimerization networks
Alexei V. Tkachenko, Bortolo Matteo Mognetti, Sergei Maslov
arxiv.org/abs/2506.14006

@arXiv_csNE_bot@mastoxiv.page
2025-06-09 07:46:52

Integer Binary-Range Alignment Neuron for Spiking Neural Networks
Binghao Ye, Wenjuan Li, Dong Wang, Man Yao, Bing Li, Weiming Hu, Dong Liang, Kun Shang
arxiv.org/abs/2506.05679

@arXiv_physicsbioph_bot@mastoxiv.page
2025-06-17 11:43:05

Decision-making in light-trapped slime molds involves active mechanical processes
Lisa Schick, Emily Eichenlaub, Fabian Drexel, Alexander Mayer, Siyu Chen, Marcus Roper, Karen Alim
arxiv.org/abs/2506.12803

@arXiv_mathDS_bot@mastoxiv.page
2025-06-16 08:18:39

Stable Synchronous Propagation in Feedforward Networks for Biped Locomotion
Ian Stewart, David Wood
arxiv.org/abs/2506.11780

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-25 08:32:30

Convergent and divergent connectivity patterns of the arcuate fasciculus in macaques and humans
Jiahao Huang, Ruifeng Li, Wenwen Yu, Anan Li, Xiangning Li, Mingchao Yan, Lei Xie, Qingrun Zeng, Xueyan Jia, Shuxin Wang, Ronghui Ju, Feng Chen, Qingming Luo, Hui Gong, Xiaoquan Yang, Yuanjing Feng, Zheng Wang
arxiv.org/abs/25…

@arXiv_csNE_bot@mastoxiv.page
2025-06-04 07:23:43

Minimal Neuron Circuits – Part I: Resonators
Amr Nabil, T. Nandha Kumar, Haider Abbas F. Almurib
arxiv.org/abs/2506.02341

@arXiv_csNE_bot@mastoxiv.page
2025-06-17 09:48:56

Neuromorphic Online Clustering and Its Application to Spike Sorting
James E. Smith
arxiv.org/abs/2506.12555 arxiv.org…

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-10 09:42:53

A Neuronal Model at the Edge of Criticality: An Ising-Inspired Approach to Brain Dynamics
Sajedeh Sarmastani, Maliheh Ghodrat, Yousef Jamali
arxiv.org/abs/2506.07027

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-04 07:49:38

Identifying interactions across brain areas while accounting for individual-neuron dynamics with a Transformer-based variational autoencoder
Qi Xin, Robert E. Kass
arxiv.org/abs/2506.02263

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-17 11:57:29

Characterizing Neural Manifolds' Properties and Curvatures using Normalizing Flows
Peter Bouss, Sandra Nester, Kirsten Fischer, Claudia Merger, Alexandre René, Moritz Helias
arxiv.org/abs/2506.12187

@arXiv_csNE_bot@mastoxiv.page
2025-06-10 08:02:42

Structured State Space Model Dynamics and Parametrization for Spiking Neural Networks
Maxime Fabre, Lyubov Dudchenko, Emre Neftci
arxiv.org/abs/2506.06374

@arXiv_csNE_bot@mastoxiv.page
2025-06-11 07:44:43

A Practical Guide to Tuning Spiking Neuronal Dynamics
William Gebhardt, Alexander G. Ororbia, Nathan McDonald, Clare Thiem, Jack Lombardi
arxiv.org/abs/2506.08138

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-05 07:36:56

Learning to cluster neuronal function
Nina S. Nellen, Polina Turishcheva, Michaela Vystrčilová, Shashwat Sridhar, Tim Gollisch, Andreas S. Tolias, Alexander S. Ecker
arxiv.org/abs/2506.03293