Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@seeingwithsound@mas.to
2025-05-27 19:56:18

The four types of imagination and how they create our worlds newscientist.com/article/24803 (archived at

@arXiv_csPL_bot@mastoxiv.page
2025-06-26 08:13:20

Deadlock-free Context-free Session Types
Andreia Mordido, Jorge A. Pérez
arxiv.org/abs/2506.20356 arxiv.org/pdf…

@rmdes@mstdn.social
2025-06-26 20:31:32

2016 RTBF-TIPIK video: They grew up in the OKC cult chardonsbleus.org/rtbf-tipik-i

@servelan@newsie.social
2025-06-27 17:50:03

Mormon women are finally allowed to wear sleeveless tops. Here's why some are grieving.
religionnews.com/2025/06/27/mo

@Techmeme@techhub.social
2025-06-27 20:25:53

Source: OpenAI recently began renting Google's TPUs to power ChatGPT, marking its first significant use of non-Nvidia chips; Meta also considered using TPUs (The Information)
theinformation.com/articles/go

@frankel@mastodon.top
2025-06-26 16:20:01

#PostgreSQL 18 just dropped: 10 powerful new features devs need to know
medium.com/devlink-tips/postgr

@fanf@mendeddrum.org
2025-05-27 11:42:04

from my link log —
Representing type lattices compactly.
bernsteinbear.com/blog/lattice
saved 2025-03-12

@arXiv_csCL_bot@mastoxiv.page
2025-06-27 09:58:09

TopK Language Models
Ryosuke Takahashi, Tatsuro Inaba, Kentaro Inui, Benjamin Heinzerling
arxiv.org/abs/2506.21468 arxiv.org/pdf/2506.21468 arxiv.org/html/2506.21468
arXiv:2506.21468v1 Announce Type: new
Abstract: Sparse autoencoders (SAEs) have become an important tool for analyzing and interpreting the activation space of transformer-based language models (LMs). However, SAEs suffer from several shortcomings that diminish their utility and internal validity. Since SAEs are trained post-hoc, it is unclear if the failure to discover a particular concept is a failure on the SAE's side or due to the underlying LM not representing this concept. This problem is exacerbated by training conditions and architecture choices affecting which features an SAE learns. When tracing how LMs learn concepts during training, the lack of feature stability also makes it difficult to compare SAE features across different checkpoints. To address these limitations, we introduce a modification to the transformer architecture that incorporates a TopK activation function at chosen layers, making the model's hidden states equivalent to the latent features of a TopK SAE. This approach eliminates the need for post-hoc training while providing interpretability comparable to SAEs. The resulting TopK LMs offer a favorable trade-off between model size, computational efficiency, and interpretability. Despite this simple architectural change, TopK LMs maintain their original capabilities while providing robust interpretability benefits. Our experiments demonstrate that the sparse representations learned by TopK LMs enable successful steering through targeted neuron interventions and facilitate detailed analysis of neuron formation processes across checkpoints and layers. These features make TopK LMs stable and reliable tools for understanding how language models learn and represent concepts, which we believe will significantly advance future research on model interpretability and controllability.
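As a rough illustration of the architectural change the abstract describes, here is a minimal PyTorch-style sketch of a TopK activation applied to one layer's hidden states, so that the hidden state itself is a sparse, SAE-like feature vector. All names, shapes, and the choice of k are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class TopKActivation(nn.Module):
    """Keep only the k largest entries of each hidden-state vector; zero the rest."""
    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        values, indices = torch.topk(hidden, self.k, dim=-1)
        sparse = torch.zeros_like(hidden)
        # The sparse hidden state plays the role of a TopK SAE's latent features.
        return sparse.scatter(-1, indices, values)

# Illustrative usage: apply the activation to a transformer block's output.
hidden_dim, k = 768, 32
layer_output = torch.randn(2, 10, hidden_dim)  # (batch, seq_len, hidden_dim)
sparse_hidden = TopKActivation(k)(layer_output)
assert int((sparse_hidden != 0).sum(dim=-1).max()) <= k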

@ubuntourist@mastodon.social
2025-05-27 18:50:32

This is why people are sending donations to Luigi's defense lawyers...
theguardian.com/us-news/ng-int

@wfryer@mastodon.cloud
2025-06-27 12:30:56

I’m going to try the Dia AI browser:
diabrowser.com/
More on:
Begun, the AI Browser Wars Have
spyglass.org/ai-browser-wars/

A promotional graphic for the Dia app, featuring the text "Learn with your tabs" and a button labeled "Download Dia." It mentions early access for Arc members. The background has a light gradient.