Artificial phantasia: Evidence for propositional reasoning-based mental imagery in large language models
https://arxiv.org/abs/2509.23108
On the representation of visual imagery in humans; more information in the Bluesky thread
Exploring Artificial Intelligence and Culture: Methodology for a comparative study of AI's impact on norms, trust, and problem-solving across academic and business environments
Matthias Huemmer, Theophile Shyiramunda, Michelle J. Cummings-Koether
https://arxiv.org/abs/2510.11530
A Biologically Interpretable Cognitive Architecture for Online Structuring of Episodic Memories into Cognitive Maps
E. A. Dzhivelikian, A. I. Panov
https://arxiv.org/abs/2510.03286
Lightweight Prompt Engineering for Cognitive Alignment in Educational AI: A OneClickQuiz Case Study
Antoun Yaacoub, Zainab Assaghir, Jérôme Da-Rugna
https://arxiv.org/abs/2510.03374
Maybe AI Was Never a Tool
They can deliver conclusions that feel complete but skip the struggle that gives thought its humanity. This is what I call anti-intelligence: not stupidity, but a kind of counterfeit cognition. It is intelligence without friction, producing output, built in that shared cognitive dynamic, that looks like insight but has bypassed the work that makes insight truly yours.
This long work from @… and @…, among many other interesting ideas,
calls the new kind of "AI" by a new name to distinguish it from older, often more ethical kinds of stats models:
"Displacement AI"
I lov…
A Study of Rule Omission in Raven's Progressive Matrices
Binze Li
https://arxiv.org/abs/2510.03127 https://arxiv.org/pdf/2510.03127
A Modular Theory of Subjective Consciousness for Natural and Artificial Minds
Michaël Gillon
https://arxiv.org/abs/2510.01864 https://arxiv.org/pdf/…
Position: Human Factors Reshape Adversarial Analysis in Human-AI Decision-Making Systems
Shutong Fan, Lan Zhang, Xiaoyong Yuan
https://arxiv.org/abs/2509.21436 https://
Multi-state neurons
Robert Worden
https://arxiv.org/abs/2512.08815 https://arxiv.org/pdf/2512.08815 https://arxiv.org/html/2512.08815
Abstract: Neurons, as eukaryotic cells, have powerful internal computation capabilities. One neuron can have many distinct states, and brains can use this capability. Processes of neuron growth and maintenance use chemical signalling between cell bodies and synapses, ferrying chemical messengers over microtubules and actin fibres within cells. These processes are computations which, while slower than neural electrical signalling, could allow any neuron to change its state over intervals of seconds or minutes. Based on its state, a single neuron can selectively de-activate some of its synapses, sculpting a dynamic neural net from the static neural connections of the brain. Without this dynamic selection, the static neural networks in brains are too amorphous and dilute to do the computations of neural cognitive models. The use of multi-state neurons in animal brains is illustrated in hierarchical Bayesian object recognition. Multi-state neurons may support a design which is more efficient than two-state neurons, and scales better as object complexity increases. Brains could have evolved to use multi-state neurons. Multi-state neurons could be used in artificial neural networks, to use a kind of non-Hebbian learning which is faster and more focused and controllable than traditional neural net learning. This possibility has not yet been explored in computational models.
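The core mechanism the abstract describes, a neuron whose slow internal state selectively de-activates synapses and so sculpts a dynamic sub-network out of static connections, can be sketched in code. This is a minimal illustrative sketch, not the paper's model: the class name, the binary per-state masks, and the tanh activation are all my assumptions for illustration.

```python
import numpy as np

class MultiStateNeuron:
    """Illustrative sketch (hypothetical, not from the paper): a neuron
    with a discrete internal state that de-activates subsets of its
    synapses, so the same static weights yield different effective
    sub-networks depending on state."""

    def __init__(self, n_inputs, n_states, rng=None):
        rng = rng or np.random.default_rng(0)
        # Static synaptic connections of the "anatomical" network.
        self.weights = rng.normal(size=n_inputs)
        # One binary synapse mask per internal state (assumed encoding):
        # True = synapse active in that state, False = de-activated.
        self.masks = rng.random((n_states, n_inputs)) < 0.5
        self.state = 0

    def set_state(self, state):
        # Models the slow intracellular signalling the abstract mentions,
        # which changes state over seconds or minutes, not per spike.
        self.state = state

    def forward(self, x):
        # Only synapses enabled by the current state's mask contribute.
        active = self.weights * self.masks[self.state]
        return float(np.tanh(active @ x))
```

With the same input, switching `set_state` changes which synapses participate, so one static weight vector supports several distinct input-output functions, which is the "dynamic neural net from static connections" idea in miniature.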