"Leo betonte, dass er keine direkte Attacke auf Trump oder andere Personen beabsichtigt habe, als er eine «Wahnvorstellung der Allmächtigkeit» kritisierte, die den Krieg gegen den Iran und andere Konflikte weltweit befeuere."
Aber wem der Schuh passt, der zieht ihn sich an. 🤭
Quelle mit Bezahlschranke:
I understand that the Federal Government's Commissioner for Kulturkampf has a problem with left-wing bookstores and few qualms about running award procedures with independent juries into the wall, but I don't understand the attack on the DNB, which after all is fulfilling a statutory mandate. Is this about austerity?
So to follow up on this, I've caught it in action. Models, when quantized a bit, just do a bit more poorly with short contexts. Even going from f32 (as trained) to bf16 (as usually run) to q8, they tend to do okay for "normal" context windows. At q4 you start feeling like "this model is a little stupid and gets stuck sometimes" (it is! It's just that it's still mostly careening about in the space of the plausible most of the time. Not good guesswork, but still in the zone). With long contexts, the probability of parameters collapsing to zero is higher, so the more context, the more likely you are to see brokenness.
And then at q2 (2 bits per parameter) or q1, the model falls apart completely. Parameters collapse to zero easily. You start seeing "all work and no play makes Jack a dull boy" sorts of behavior, with intense, unchecked repetition, followed by a hard stop when it just quits working.
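To make that concrete, here's a toy sketch (plain per-tensor round-to-nearest, which is cruder than the per-block schemes real formats use, but it fails in the same direction): count how many small weights snap to exactly zero as the bit width drops.

```python
# Toy illustration only: symmetric round-to-nearest quantization of a weight tensor.
# Real quantization formats use per-block scales, so they degrade more gracefully,
# but the trend (fewer bits -> more weights collapsing to zero) is the same.
import numpy as np

def quantize_dequantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Quantize to signed integers with `bits` bits, then map back to floats."""
    qmax = 2 ** (bits - 1) - 1              # 127 for 8-bit, 7 for 4-bit, 1 for 2-bit
    scale = np.max(np.abs(weights)) / qmax  # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=10_000)      # small-magnitude weights, roughly like a trained layer

for bits in (8, 4, 2):
    w_hat = quantize_dequantize(w, bits)
    zeroed = np.mean(w_hat == 0.0)
    err = np.mean(np.abs(w - w_hat))
    print(f"{bits}-bit: {zeroed:.1%} of weights snap to exactly 0, mean abs error {err:.5f}")
```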
And quantization is a knob a model vendor can turn relatively easily. (They have to regenerate the quantized model from the base weights, but that's a data transformation on the order of running a terabyte through a straightforward, fast process, not anything like training.)
If you have 1000 customers and enough equipment to handle the requests of 700, going from bf16 to q8 is a no-brainer. Suddenly you can handle the load and have a little spare capacity. Customers get worse results but probably pay the same per token (or they're on a subscription that hides the cost anyway, so you're even freer to make trade-offs. There's a reason subscription products are described so vaguely.)
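Rough arithmetic with made-up but plausible numbers (a hypothetical 70B-parameter model on a node with 640 GB of accelerator memory, not any vendor's real figures) shows where that spare capacity comes from: halving the bytes per weight frees memory for KV cache and bigger batches, i.e. more concurrent customers on the same hardware.

```python
# Back-of-the-envelope sketch; every number here is an illustrative assumption.
PARAMS = 70e9                                   # assume a 70B-parameter model
BYTES_PER_PARAM = {"bf16": 2.0, "q8": 1.0, "q4": 0.5}
NODE_MEMORY_GB = 8 * 80                         # assume 8 x 80 GB accelerators per node

for fmt, bytes_per in BYTES_PER_PARAM.items():
    weights_gb = PARAMS * bytes_per / 1e9
    free_gb = NODE_MEMORY_GB - weights_gb       # left over for KV cache / concurrent requests
    print(f"{fmt}: ~{weights_gb:.0f} GB of weights, ~{free_gb:.0f} GB left for KV cache and batching")
```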
It's also possible to vary this across the day. Use the model during quieter periods? Maybe you get an instance running at bf16. Use it during a peak period? You get a q4 model.
Or intelligent routing is possible. No idea if anyone is doing this, but if they monitor what you send a bit, and you generally reach for an expensive model for simple requests? They could totally substitute a heavily quantized version of the model to answer the question.
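Purely hypothetical sketch of that routing idea; the model names, utilization thresholds, and the crude "looks like an easy prompt" heuristic are all invented for illustration, not anything any provider has documented.

```python
# Hypothetical load- and prompt-aware routing between quantization levels.
QUANT_BY_LOAD = [
    (0.60, "model-bf16"),   # under 60% fleet utilization: serve the full-precision build
    (0.85, "model-q8"),     # 60-85%: serve the 8-bit build
    (1.01, "model-q4"),     # above 85%: serve the 4-bit build to keep the queue moving
]

def pick_backend(fleet_utilization: float, prompt: str) -> str:
    """Pick a quantization level from current load, with a crude 'easy prompt' shortcut."""
    # If the request looks trivial, a heavily quantized model is probably "good enough".
    if len(prompt) < 200 and "\n" not in prompt:
        return "model-q4"
    for threshold, backend in QUANT_BY_LOAD:
        if fleet_utilization < threshold:
            return backend
    return "model-q4"

print(pick_backend(0.40, "Write a detailed design doc covering...\n" * 20))  # -> model-bf16 (quiet period)
print(pick_backend(0.92, "Write a detailed design doc covering...\n" * 20))  # -> model-q4 (peak load)
print(pick_backend(0.40, "What's 2+2?"))                                     # -> model-q4 (looks trivial)
```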
There are *so many tricks* that can be pulled here. Some of them are perfectly reasonable, some tread into outright misleading or fraudulent territory, and it's weirdly hard to draw the line between them.
China Sought Access to Anthropic’s Newest A.I. The Answer Was No.
https://www.nytimes.com/2026/05/12/us/politics/china-ai-anthropic-openai-mythos-chatgpt.html
The fine @… with her fine fabrics would never threaten or shoot her own people to push her policies through against applicable law. Instead, she equips foreign militias and has them fire on German ships in international waters.
These ships full of human lives are empty of oil and gas and barely sc…
Willem challenged us to ask ourselves what we would do if we were living under Nazi occupation. Before all of this, I doubt anyone thought they would be complicit. I doubt anyone said to themselves, "nothing. I would cower in fear and do nothing."
But for 4 years or so we all answered that question again and again with our lives. Now here we are, answering it again... Every day. But it's no longer "what would you do during the rise of Hitler?" It's now, "what would you do after the invasion of Poland," and "what would you do after you knew about the concentration camps?"
For some people, the answer is still, "nothing."
But a lot of people have been brave in the face of it all. A lot of people have died, and a lot more will die. He will die, perhaps after a ruling by some court or other but, honestly, probably not. That's just how these things work out. Lots of people die, some for no reason, some because they stood up against injustice. A whole lot of people do nothing, until it's safe to claim victory... Until it's no longer safe to be on the other side.
That's just how these things go. Fascism is self-defeating, but it causes incredible harm on its path of self-destruction. The more people who stand up, who risk themselves, the faster it collapses and the fewer people it can hurt. That's also just how these things go. It's incredibly dangerous for everyone until enough people take on some extra risk and make it safe for everyone again.
But that question still stands... Which one of those groups are you in? Are you proud of what you are doing, or will you look back with shame? Some of y'all have a lot to be proud of, but, if you're not, it's never too late to earn your way into that proud group.
Sources: at a meeting in Singapore last month, Anthropic officials refused a Chinese think tank's request that the company change its stance and allow Beijing to access Mythos (New York Times)
https://www.nytimes.com…
Trump nominates Cameron Hamilton to lead FEMA, a year after he was fired from the role (Gabriela Aoun Angueira/Associated Press)
https://apnews.com/article/fema-cameron-hamilton-trump-disasters-navy-seals-e1ef0f6c81f6ea992a2213714f6743b1
http://www.memeorandum.com/260511/p99#a260511p99
🙏 Hungary 🙏
Orban has conceded defeat?
Why?
Is he relieved that he won't have to clean up the mess he made?
Is his opposition still strong enough to make life difficult for Peter Magyar?
If the EU now manages to act with more unity and maybe even give itself better rules, that will take a huge weight off my mind. 😅
#Ungarn