Always fun/challenging to read new AI (pre)papers like this. "Base models know how to reason, thinking models learn when".
#AI #Google #reasoning
> DOGE, he said, began acting like “a bunch of people who didn’t know what they were doing, with ideas of how government should run — thinking it should work like a McDonald’s or a bank — screaming all the time.”
The "screaming all the time" part is really common for people in government tech, though usually kept internal.
Left #Bluesky or thinking about it? Into #genealogy or #familyhistory but don't know where to start? Follow @…
Detection of Earth's free oscillations utilizing TianQin
Yuxin Yang, Kun Liu, Xuefeng Zhang, Yi-Ming Hu
https://arxiv.org/abs/2510.10107 https://arxiv.…
Whaaaat? Reeeeally? Well, what a surprise. 🤪
According to current EU data, plug-in #Hybride (plug-in hybrids) emit roughly five times as much CO₂ in real-world driving as officially stated.
On average, emissions come to 139 g/km – similar to pure #Verbrenner (combustion cars). The reason is the frequent use of the…
Base Models Know How to Reason, Thinking Models Learn When
Constantin Venhoff, Iván Arcuschin, Philip Torr, Arthur Conmy, Neel Nanda
https://arxiv.org/abs/2510.07364 https…
Kimi K2 Thinking is impressive: it is at least as good as other top foundation models, fully open-weights, with 1 trillion parameters but only 32B active in a MoE setting. You can also download and reuse it for free.
https://github.com/MoonshotAI/Kimi-K2
Reading thoughts about a new Chinese open-weights AI model, Kimi K2 Thinking.
https://www.interconnects.ai/p/kimi-k2-thinking-what-it-means