Tootfinder

Opt-in global Mastodon full-text search. Join the index!

No exact results. Similar results found.
@seeingwithsound@mas.to
2025-10-01 14:05:54

Artificial phantasia: Evidence for propositional reasoning-based mental imagery in large language models arxiv.org/abs/2509.23108 on the representation of visual imagery in humans; more information in the Bluesky thread

@arXiv_csHC_bot@mastoxiv.page
2025-10-14 11:30:58

Exploring Artificial Intelligence and Culture: Methodology for a comparative study of AI's impact on norms, trust, and problem-solving across academic and business environments
Matthias Huemmer, Theophile Shyiramunda, Michelle J. Cummings-Koether
arxiv.org/abs/2510.11530

@arXiv_qbioNC_bot@mastoxiv.page
2025-10-07 08:43:32

A Biologically Interpretable Cognitive Architecture for Online Structuring of Episodic Memories into Cognitive Maps
E. A. Dzhivelikian, A. I. Panov
arxiv.org/abs/2510.03286

@arXiv_csCY_bot@mastoxiv.page
2025-10-07 09:30:32

Lightweight Prompt Engineering for Cognitive Alignment in Educational AI: A OneClickQuiz Case Study
Antoun Yaacoub, Zainab Assaghir, Jérôme Da-Rugna
arxiv.org/abs/2510.03374

@cjust@infosec.exchange
2025-10-02 02:25:37

Maybe AI Was Never a Tool
They can deliver conclusions that feel complete but skip the struggle that gives thought its humanity. This is what I call anti-intelligence—not stupidity, but perhaps better expressed as a kind of counterfeit cognition. It's intelligence without friction that results in output—built in that shared cognitive dynamic—that looks like insight but has bypassed the work that makes insight truly yours.

@trochee@dair-community.social
2025-10-04 14:49:13

This long work from @… and @…, among many other interesting ideas,
calls the new kind of "AI" by a new name to distinguish it from older, often more ethical kinds of stats models:
"Displacement AI"
I lov…

@arXiv_csAI_bot@mastoxiv.page
2025-10-06 09:52:09

A Study of Rule Omission in Raven's Progressive Matrices
Binze Li
arxiv.org/abs/2510.03127 arxiv.org/pdf/2510.03127

@arXiv_qbioNC_bot@mastoxiv.page
2025-10-03 08:40:21

A Modular Theory of Subjective Consciousness for Natural and Artificial Minds
Michaël Gillon
arxiv.org/abs/2510.01864 arxiv.org/pdf/…

@arXiv_csHC_bot@mastoxiv.page
2025-09-29 07:35:25

Position: Human Factors Reshape Adversarial Analysis in Human-AI Decision-Making Systems
Shutong Fan, Lan Zhang, Xiaoyong Yuan
arxiv.org/abs/2509.21436

@arXiv_qbioNC_bot@mastoxiv.page
2025-12-10 08:57:11

Multi state neurons
Robert Worden
arxiv.org/abs/2512.08815 arxiv.org/pdf/2512.08815 arxiv.org/html/2512.08815
arXiv:2512.08815v1 Announce Type: new
Abstract: Neurons, as eukaryotic cells, have powerful internal computation capabilities. One neuron can have many distinct states, and brains can use this capability. Processes of neuron growth and maintenance use chemical signalling between cell bodies and synapses, ferrying chemical messengers over microtubules and actin fibres within cells. These processes are computations which, while slower than neural electrical signalling, could allow any neuron to change its state over intervals of seconds or minutes. Based on its state, a single neuron can selectively de-activate some of its synapses, sculpting a dynamic neural net from the static neural connections of the brain. Without this dynamic selection, the static neural networks in brains are too amorphous and dilute to do the computations of neural cognitive models. The use of multi-state neurons in animal brains is illustrated in hierarchical Bayesian object recognition. Multi-state neurons may support a design which is more efficient than two-state neurons, and scales better as object complexity increases. Brains could have evolved to use multi-state neurons. Multi-state neurons could be used in artificial neural networks, to use a kind of non-Hebbian learning which is faster and more focused and controllable than traditional neural net learning. This possibility has not yet been explored in computational models.
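The core mechanism the abstract describes — a neuron whose slow internal state selectively de-activates some of its synapses, carving a dynamic subnetwork out of the brain's static wiring — can be sketched in a few lines. This is a minimal illustration of the idea only, not code from the paper; the class and mask names are hypothetical.

```python
import numpy as np

class MultiStateNeuron:
    """Illustrative sketch: a neuron whose internal state gates its synapses,
    so the same static weights yield different effective subnetworks."""

    def __init__(self, weights, state_masks):
        self.weights = np.asarray(weights, dtype=float)       # static synaptic weights
        self.state_masks = {s: np.asarray(m, dtype=float)     # per-state on/off mask
                            for s, m in state_masks.items()}
        self.state = next(iter(self.state_masks))             # start in the first state

    def set_state(self, state):
        # Analogue of the slow chemical signalling over microtubules/actin:
        # state changes on a timescale of seconds or minutes, not spikes.
        self.state = state

    def fire(self, inputs):
        # Only synapses enabled by the current state contribute to the output.
        active = self.weights * self.state_masks[self.state]
        return float(np.dot(active, np.asarray(inputs, dtype=float)))

# Same static wiring, two different effective nets depending on state:
neuron = MultiStateNeuron(
    weights=[1.0, -2.0, 0.5],
    state_masks={"A": [1, 1, 0], "B": [0, 1, 1]},
)
x = [1.0, 1.0, 1.0]
out_a = neuron.fire(x)    # state "A": synapses 0 and 1 active -> 1.0 - 2.0 = -1.0
neuron.set_state("B")
out_b = neuron.fire(x)    # state "B": synapses 1 and 2 active -> -2.0 + 0.5 = -1.5
```

Under this toy model, switching state (rather than rewriting weights) is what changes the computation, which is the non-Hebbian, faster-than-weight-learning knob the abstract speculates about for artificial networks.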