The four types of imagination and how they create our worlds https://www.newscientist.com/article/2480349-the-four-types-of-imagination-and-how-they-create-our-worlds/
Deadlock-free Context-free Session Types
Andreia Mordido, Jorge A. Pérez
https://arxiv.org/abs/2506.20356 https://arxiv.org/pdf…
2016 RTBF-TIPIK video: They grew up in the OKC cult https://chardonsbleus.org/rtbf-tipik-ils-ont-grandi-dans-la-secte-okc/
Mormon women are finally allowed to wear sleeveless tops. Here's why some are grieving.
https://religionnews.com/2025/06/27/mormon-women-are-finally-allowed-to-wear-sleeveless-tops-heres-why-some-are-grieving/
Source: OpenAI recently began renting Google's TPUs to power ChatGPT, marking its first significant use of non-Nvidia chips; Meta also considered using TPUs (The Information)
https://www.theinformation.com/articles/google-convinces-open…
#PostgreSQL 18 just dropped: 10 powerful new features devs need to know
https://medium.com/devlink-tips/postgresql-1…
from my link log —
Representing type lattices compactly.
https://bernsteinbear.com/blog/lattice-bitset/
saved 2025-03-12 https://…
TopK Language Models
Ryosuke Takahashi, Tatsuro Inaba, Kentaro Inui, Benjamin Heinzerling
https://arxiv.org/abs/2506.21468 https://arxiv.org/pdf/2506.21468 https://arxiv.org/html/2506.21468
arXiv:2506.21468v1 Announce Type: new
Abstract: Sparse autoencoders (SAEs) have become an important tool for analyzing and interpreting the activation space of transformer-based language models (LMs). However, SAEs suffer from several shortcomings that diminish their utility and internal validity. Since SAEs are trained post-hoc, it is unclear whether the failure to discover a particular concept is a failure on the SAE's side or due to the underlying LM not representing this concept. This problem is exacerbated by training conditions and architecture choices affecting which features an SAE learns. When tracing how LMs learn concepts during training, the lack of feature stability also makes it difficult to compare SAE features across different checkpoints. To address these limitations, we introduce a modification to the transformer architecture that incorporates a TopK activation function at chosen layers, making the model's hidden states equivalent to the latent features of a TopK SAE. This approach eliminates the need for post-hoc training while providing interpretability comparable to SAEs. The resulting TopK LMs offer a favorable trade-off between model size, computational efficiency, and interpretability. Despite this simple architectural change, TopK LMs maintain their original capabilities while providing robust interpretability benefits. Our experiments demonstrate that the sparse representations learned by TopK LMs enable successful steering through targeted neuron interventions and facilitate detailed analysis of neuron formation processes across checkpoints and layers. These features make TopK LMs stable and reliable tools for understanding how language models learn and represent concepts, which we believe will significantly advance future research on model interpretability and controllability.
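A minimal sketch of the core idea as I read the abstract: a TopK activation that keeps only the k largest entries of each token's hidden state and zeroes the rest, so the hidden state itself plays the role of TopK SAE latents. The layer placement, the value of k, and all names here are my assumptions, not the paper's code.

# Sketch only (PyTorch); TopKActivation, k=64, and where it sits in the model
# are illustrative assumptions, not the authors' reference implementation.
import torch
import torch.nn as nn

class TopKActivation(nn.Module):
    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model); keep the k largest values per token,
        # zero everything else, yielding a sparse, directly inspectable state.
        values, indices = hidden.topk(self.k, dim=-1)
        sparse = torch.zeros_like(hidden)
        sparse.scatter_(-1, indices, values)
        return sparse

# Usage: apply after a chosen transformer block's output.
topk = TopKActivation(k=64)
h = torch.randn(2, 16, 768)          # dummy hidden states
h_sparse = topk(h)
assert (h_sparse != 0).sum(dim=-1).max() <= 64

The appeal, per the abstract, is that this removes the post-hoc SAE training step: the sparse state is built into the forward pass, so the same "features" exist at every checkpoint and can be steered by intervening on individual neurons.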
This is why people are sending donations to Luigi's defense lawyers...
https://www.theguardian.com/us-news/ng-interactive/2025/may/27/exactech-tpg-medical-devices-bankruptcy
I’m going to try the Dia AI browser:
https://www.diabrowser.com/
More on:
Begun, the AI Browser Wars Have
https://spyglass.org/ai-browser-wars/