
2025-06-27 10:06:19
Canonical Quantization of a Memristive Leaky Integrate-and-Fire Neuron Circuit
Dean Brand, Domenica Dibenedetto, Francesco Petruccione
https://arxiv.org/abs/2506.21363
This https://arxiv.org/abs/2306.11965 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_qbi…
TopK Language Models
Ryosuke Takahashi, Tatsuro Inaba, Kentaro Inui, Benjamin Heinzerling
https://arxiv.org/abs/2506.21468 https://arxiv.org/pdf/2506.21468 https://arxiv.org/html/2506.21468
arXiv:2506.21468v1 Announce Type: new
Abstract: Sparse autoencoders (SAEs) have become an important tool for analyzing and interpreting the activation space of transformer-based language models (LMs). However, SAEs suffer from several shortcomings that diminish their utility and internal validity. Since SAEs are trained post-hoc, it is unclear if the failure to discover a particular concept is a failure on the SAE's side or due to the underlying LM not representing this concept. This problem is exacerbated by training conditions and architecture choices affecting which features an SAE learns. When tracing how LMs learn concepts during training, the lack of feature stability also makes it difficult to compare SAE features across different checkpoints. To address these limitations, we introduce a modification to the transformer architecture that incorporates a TopK activation function at chosen layers, making the model's hidden states equivalent to the latent features of a TopK SAE. This approach eliminates the need for post-hoc training while providing interpretability comparable to SAEs. The resulting TopK LMs offer a favorable trade-off between model size, computational efficiency, and interpretability. Despite this simple architectural change, TopK LMs maintain their original capabilities while providing robust interpretability benefits. Our experiments demonstrate that the sparse representations learned by TopK LMs enable successful steering through targeted neuron interventions and facilitate detailed analysis of neuron formation processes across checkpoints and layers. These features make TopK LMs stable and reliable tools for understanding how language models learn and represent concepts, which we believe will significantly advance future research on model interpretability and controllability.
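The core mechanism the abstract describes — a TopK activation that keeps only the k largest hidden-state entries per token and zeroes the rest, so the layer's output is itself a sparse feature vector — can be sketched as follows. This is a minimal NumPy illustration of the generic TopK operation, not the paper's implementation; the function name and shapes are assumptions.

```python
import numpy as np

def topk_activation(h: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest activations along the last axis; zero the rest.

    h: hidden states, shape (..., d_model); returns an array of the same
    shape with at most k nonzero entries per row.
    """
    sparse = np.zeros_like(h)
    # Indices of the k largest entries per row (unordered among themselves)
    idx = np.argpartition(h, -k, axis=-1)[..., -k:]
    # Copy only those entries into the otherwise-zero output
    np.put_along_axis(sparse, idx, np.take_along_axis(h, idx, axis=-1), axis=-1)
    return sparse

# Example: one token with a 4-dimensional hidden state, k = 2
h = np.array([[3.0, 1.0, 2.0, 0.5]])
out = topk_activation(h, 2)  # only the two largest values survive
```

Because the sparsity is enforced during training rather than fitted post-hoc, the surviving coordinates play the role of SAE latent features directly, which is what makes them stable across checkpoints.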
Prediction: interest in electrode-based BCIs will wane (aside from niches), as non-implantable ("non-invasive") phased array focused ultrasound will take over for both brain stimulation and measuring brain activity https://www.cell.com/neuron/fulltext/S0896-6273(20)…
Causes in neuron diagrams, and testing causal reasoning in Large Language Models. A glimpse of the future of philosophy?
Louis Vervoort, Vitaly Nikolaev
https://arxiv.org/abs/2506.14239
fly_larva: Drosophila larva brain (2023)
A complete synaptic map of the brain connectome of the larva of the fruit fly Drosophila melanogaster. Nodes are neurons, and edges are synaptic connections, traced individually from brain image sections using three-dimensional electron microscopy–based reconstruction. Node metadata include the neuron hemisphere, hemispherical homologue, cell type, annotations, and inferred cluster. Edge metadata include the type of interaction (`'aa'`,…
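A dataset shaped like this (neurons as attributed nodes, synapses as typed edges) maps naturally onto a plain adjacency structure. The sketch below uses illustrative field names and toy values, not the dataset's exact schema; `'aa'` (axo-axonic) is the one interaction type visible in the truncated description.

```python
from collections import defaultdict

# Neurons keyed by id; values hold illustrative node metadata
neurons = {
    "n1": {"hemisphere": "left",  "homologue": "n1r", "cell_type": "KC"},
    "n2": {"hemisphere": "right", "homologue": "n2l", "cell_type": "MBON"},
}

# Synapses as directed pre -> post edges with an interaction type
synapses = [
    {"pre": "n1", "post": "n2", "type": "aa", "count": 3},
]

# Example query: number of postsynaptic partners per neuron
out_degree = defaultdict(int)
for s in synapses:
    out_degree[s["pre"]] += 1
```

For real analyses of a connectome this size, a graph library with attribute support would replace the raw dicts, but the node/edge metadata split stays the same.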
Energy-Efficient Digital Design: A Comparative Study of Event-Driven and Clock-Driven Spiking Neurons
Filippo Marostica, Alessio Carpegna, Alessandro Savino, Stefano Di Carlo
https://arxiv.org/abs/2506.13268
Identifying multi-compartment Hodgkin-Huxley models with high-density extracellular voltage recordings
Ian Christopher Tanoh, Michael Deistler, Jakob H. Macke, Scott W. Linderman
https://arxiv.org/abs/2506.20233
Steering Conceptual Bias via Transformer Latent-Subspace Activation
Vansh Sharma, Venkat Raman
https://arxiv.org/abs/2506.18887 https://
A Novel Discrete Memristor-Coupled Heterogeneous Dual-Neuron Model and Its Application in Multi-Scenario Image Encryption
Yi Zou, Mengjiao Wang, Xinan Zhang, Herbert Ho-Ching Iu
https://arxiv.org/abs/2505.24294
SECNEURON: Reliable and Flexible Abuse Control in Local LLMs via Hybrid Neuron Encryption
Zhiqiang Wang, Haohua Du, Junyang Wang, Haifeng Sun, Kaiwen Guo, Haikuo Yu, Chao Liu, Xiang-Yang Li
https://arxiv.org/abs/2506.05242
This https://arxiv.org/abs/2506.00691 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csLG_…
Impact of Hill coefficient and time delay on a perceptual decision-making model
Bartłomiej Morawski, Anna Czartoszewska
https://arxiv.org/abs/2506.19853
This https://arxiv.org/abs/2409.09268 has been replaced.
initial toot: https://mastoxiv.page/@arX…
A Hybrid Artificial Intelligence Method for Estimating Flicker in Power Systems
Javad Enayati, Pedram Asef, Alexandre Benoit
https://arxiv.org/abs/2506.13611
Evolutionary chemical learning in dimerization networks
Alexei V. Tkachenko, Bortolo Matteo Mognetti, Sergei Maslov
https://arxiv.org/abs/2506.14006 https:…
Integer Binary-Range Alignment Neuron for Spiking Neural Networks
Binghao Ye, Wenjuan Li, Dong Wang, Man Yao, Bing Li, Weiming Hu, Dong Liang, Kun Shang
https://arxiv.org/abs/2506.05679
Decision-making in light-trapped slime molds involves active mechanical processes
Lisa Schick, Emily Eichenlaub, Fabian Drexel, Alexander Mayer, Siyu Chen, Marcus Roper, Karen Alim
https://arxiv.org/abs/2506.12803
Stable Synchronous Propagation in Feedforward Networks for Biped Locomotion
Ian Stewart, David Wood
https://arxiv.org/abs/2506.11780 https://
Convergent and divergent connectivity patterns of the arcuate fasciculus in macaques and humans
Jiahao Huang, Ruifeng Li, Wenwen Yu, Anan Li, Xiangning Li, Mingchao Yan, Lei Xie, Qingrun Zeng, Xueyan Jia, Shuxin Wang, Ronghui Ju, Feng Chen, Qingming Luo, Hui Gong, Xiaoquan Yang, Yuanjing Feng, Zheng Wang
https://arxiv.org/abs/25…
Minimal Neuron Circuits -- Part I: Resonators
Amr Nabil, T. Nandha Kumar, Haider Abbas F. Almurib
https://arxiv.org/abs/2506.02341 https://
Neuromorphic Online Clustering and Its Application to Spike Sorting
James E. Smith
https://arxiv.org/abs/2506.12555 https://arxiv.org…
A Neuronal Model at the Edge of Criticality: An Ising-Inspired Approach to Brain Dynamics
Sajedeh Sarmastani, Maliheh Ghodrat, Yousef Jamali
https://arxiv.org/abs/2506.07027
Identifying interactions across brain areas while accounting for individual-neuron dynamics with a Transformer-based variational autoencoder
Qi Xin, Robert E. Kass
https://arxiv.org/abs/2506.02263
Characterizing Neural Manifolds' Properties and Curvatures using Normalizing Flows
Peter Bouss, Sandra Nester, Kirsten Fischer, Claudia Merger, Alexandre René, Moritz Helias
https://arxiv.org/abs/2506.12187
Structured State Space Model Dynamics and Parametrization for Spiking Neural Networks
Maxime Fabre, Lyubov Dudchenko, Emre Neftci
https://arxiv.org/abs/2506.06374
A Practical Guide to Tuning Spiking Neuronal Dynamics
William Gebhardt, Alexander G. Ororbia, Nathan McDonald, Clare Thiem, Jack Lombardi
https://arxiv.org/abs/2506.08138
Learning to cluster neuronal function
Nina S. Nellen, Polina Turishcheva, Michaela Vystrčilová, Shashwat Sridhar, Tim Gollisch, Andreas S. Tolias, Alexander S. Ecker
https://arxiv.org/abs/2506.03293