2026-01-18 19:50:35
The president of the United States is at war with his own country (Paul Waldman/MS NOW)
https://www.ms.now/opinion/trump-ice-minnesota-renee-good-noem-civil-war
http://www.memeorandum.com/260118/p57#a260118p57
Trump is using ICE and health care as weapons of a war against his own country #nevertrump
Cognition releases SWE-1.5, a new coding model in Windsurf, saying it partnered with Cerebras to serve SWE-1.5 at speeds up to 13x faster than Claude Sonnet 4.5 (Cognition)
https://cognition.ai/blog/swe-1-5
The #genAI *research* bubble will burst around the same time; it’s largely driven by the economic promises, and it’s becoming increasingly clear that, as for any technology, it’s not going to scale up indefinitely: some *actual* research will be necessary to advance the SOTA.
As @…
Manifolds and Modules: How Function Develops in a Neural Foundation Model
Johannes Bertram, Luciano Dyballa, T. Anderson Keller, Savik Kinger, Steven W. Zucker
https://arxiv.org/abs/2512.07869 https://arxiv.org/pdf/2512.07869 https://arxiv.org/html/2512.07869
arXiv:2512.07869v1 Announce Type: new
Abstract: Foundation models have shown remarkable success in fitting biological visual systems; however, their black-box nature inherently limits their utility for understanding brain function. Here, we peek inside a SOTA foundation model of neural activity (Wang et al., 2025) as a physiologist might, characterizing each 'neuron' based on its temporal response properties to parametric stimuli. We analyze how different stimuli are represented in neural activity space by building decoding manifolds, and we analyze how different neurons are represented in stimulus-response space by building neural encoding manifolds. We find that the different processing stages of the model (i.e., the feedforward encoder, recurrent, and readout modules) each exhibit qualitatively different representational structures in these manifolds. The recurrent module shows a jump in capabilities over the encoder module by 'pushing apart' the representations of different temporal stimulus patterns; while the readout module achieves biological fidelity by using numerous specialized feature maps rather than biologically plausible mechanisms. Overall, we present this work as a study of the inner workings of a prominent neural foundation model, gaining insights into the biological relevance of its internals through the novel analysis of its neurons' joint temporal response patterns.
toXiv_bot_toot
NeuroSketch: An Effective Framework for Neural Decoding via Systematic Architectural Optimization
Gaorui Zhang, Zhizhang Yuan, Jialan Yang, Junru Chen, Li Meng, Yang Yang
https://arxiv.org/abs/2512.09524 https://arxiv.org/pdf/2512.09524 https://arxiv.org/html/2512.09524
arXiv:2512.09524v1 Announce Type: new
Abstract: Neural decoding, a critical component of Brain-Computer Interface (BCI), has recently attracted increasing research interest. Previous research has focused on leveraging signal processing and deep learning methods to enhance neural decoding performance. However, model architectures themselves remain underexplored, despite their proven importance in other tasks such as energy forecasting and image classification. In this study, we propose NeuroSketch, an effective framework for neural decoding via systematic architecture optimization. Starting with the basic architecture study, we find that CNN-2D outperforms other architectures in neural decoding tasks and explore its effectiveness from temporal and spatial perspectives. Building on this, we optimize the architecture from macro- to micro-level, achieving improvements in performance at each step. The exploration process and model validations take over 5,000 experiments spanning three distinct modalities (visual, auditory, and speech), three types of brain signals (EEG, SEEG, and ECoG), and eight diverse decoding tasks. Experimental results indicate that NeuroSketch achieves state-of-the-art (SOTA) performance across all evaluated datasets, positioning it as a powerful tool for neural decoding. Our code and scripts are available at https://github.com/Galaxy-Dawn/NeuroSketch.
toXiv_bot_toot