Transcoder Adapters for Reasoning-Model Diffing
Nathan Hu, Jake Ward, Thomas Icard, Christopher Potts
https://arxiv.org/abs/2602.20904 https://arxiv.org/pdf/2602.20904 https://arxiv.org/html/2602.20904
arXiv:2602.20904v1 Announce Type: new
Abstract: While reasoning models are increasingly widespread, the effects of reasoning training on a model's internal mechanisms remain poorly understood. In this work, we introduce transcoder adapters, a technique for learning an interpretable approximation of the difference in MLP computation before and after fine-tuning. We apply transcoder adapters to characterize the differences between Qwen2.5-Math-7B and its reasoning-distilled variant, DeepSeek-R1-Distill-Qwen-7B. Learned adapters are faithful to the target model's internal computation and next-token predictions. When evaluated on reasoning benchmarks, adapters match the reasoning model's response lengths and typically recover 50-90% of the accuracy gains from reasoning fine-tuning. Adapter features are sparsely activating and interpretable. Examining these features, we find that only ~8% have activating examples directly related to reasoning behaviors. We study one such behavior in depth: the production of hesitation tokens (e.g., "wait"). Using attribution graphs, we trace hesitation to only ~2.4% of adapter features (5.6k total), each performing one of two functions. These features are necessary and sufficient for producing hesitation tokens; removing them reduces response length, often without affecting accuracy. Overall, our results provide insight into reasoning training and suggest transcoder adapters may be useful for studying fine-tuning more broadly.
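The abstract gives no implementation details, but the core idea of a transcoder adapter can be sketched: a small sparse encoder/decoder is trained so that its output, added to the base model's MLP output, reconstructs the fine-tuned model's MLP output, with an L1 penalty encouraging sparsely activating features. What follows is a minimal, hypothetical PyTorch sketch under those assumptions; the dimensions, ReLU activation, loss form, and the ablate_features helper are illustrative guesses, not the paper's specification.

import torch
import torch.nn as nn

class TranscoderAdapter(nn.Module):
    """Sparse transcoder modeling the *difference* in MLP computation
    between a base model and its fine-tuned variant at one layer.
    Hypothetical sketch; dimensions and activation are assumptions."""

    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model, bias=False)

    def forward(self, x):
        # ReLU keeps feature activations non-negative; together with the
        # L1 penalty below, this encourages sparse, interpretable features.
        feats = torch.relu(self.encoder(x))
        return self.decoder(feats), feats

def adapter_loss(adapter, x, base_mlp_out, tuned_mlp_out, l1_coef=1e-3):
    # Reconstruction target: base MLP output + adapter output should match
    # the fine-tuned model's MLP output at the same layer and position.
    diff_hat, feats = adapter(x)
    recon = ((base_mlp_out + diff_hat) - tuned_mlp_out).pow(2).mean()
    sparsity = feats.abs().sum(dim=-1).mean()  # L1 on feature activations
    return recon + l1_coef * sparsity

def ablate_features(feats, idx):
    # Zero a chosen subset of adapter features (e.g., putative hesitation
    # features) to test whether they are necessary for tokens like "wait".
    feats = feats.clone()
    feats[..., idx] = 0.0
    return feats

Training data here would be paired MLP inputs and outputs collected from both models on the same prompts; the ablation helper mirrors the abstract's necessity test, in which removing hesitation features shortens responses, often without affecting accuracy.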