And people keep being surprised by the regular genocides that *keep fucking happening*. And yes, call what ICE is doing what it is. It is a genocide. Even if it's not killing millions of people (yet), "genocide" doesn't mean "killing lots of people"; it means "trying to wipe out a specific ethnic, racial, or religious group." What the fuck is ICE trying to do? They're carrying out ethnic cleansing. It's genocide.
Genocide in Gaza, genocide in Syria, genocide in Turkey, China, Ethiopia, Rwanda, Serbia... why the fuck does it keep happening? I'll tell you why. It's states.
People like to think of countries and ethnic regions as the same thing, but they aren't. They never have been. There have never been clean divisions between ethnic groups. But the existence of the state depends on a shared identity. When the truth is more complicated, the state must find a way to fix that. The solution is genocide. You can't separate the two. There can be no state without genocide. The mechanism to carry out this kind of mass murder and the incentive to do so are really not easy to put together without the state. The state makes genocide viable, and the state demands genocide to protect its own existence.
Every election is a dice roll. Every state is on a clock, waiting for the luck to run out. And the worst people possible are just waiting for their chance to win and carry out those genocides in order to lock in their power.
Never again means nothing unless you are attacking the root of genocide: the state.
Overuse is pushing the world toward ‘water bankruptcy’ https://news.mongabay.com/short-article/2026/01/overuse-is-pushing-the-world-toward-water-bankruptcy/
My big gripe with "AI" is that a big reason why it's sold as the second coming of Jesus is that most tech people fundamentally do not understand how it actually works.
Their reasoning goes something like, "It works sort of OK for code generation, and programming is the hardest possible thing in the world to do, every other human endeavor is trivial compared to writing code, therefore it must excel at everything else!"
So it ends up being pushed due to a mixture of ignorance and hubris; and especially being stuffed into things it should never be used for (usually when users don't have a say which software they need to use for work).
The finbros are happily along for the ride because they just need something that can be hyped to pump and dump.
Q&A with David Liu, CEO of PlusAI, which is slated to go public next month, on the ongoing commercial trial of its autonomous truck driving software, and more (Rani Molla/Sherwood News)
https://sherwood.news/tech/plusai-ceo-david-liu-on-…
Ski Rental with Distributional Predictions of Unknown Quality
Qiming Cui, Michael Dinitz
https://arxiv.org/abs/2602.21104 https://arxiv.org/pdf/2602.21104 https://arxiv.org/html/2602.21104
arXiv:2602.21104v1 Announce Type: new
Abstract: We revisit the central online problem of ski rental in the "algorithms with predictions" framework from the point of view of distributional predictions. Ski rental was one of the first problems to be studied with predictions, where a natural prediction is simply the number of ski days. But it is both more natural and potentially more powerful to think of a prediction as a distribution p̂ over the ski days. If the true number of ski days is drawn from some true (but unknown) distribution p, then we show as our main result that there is an algorithm with expected cost at most OPT + O(min(max(η, 1) · √b, b log b)), where OPT is the expected cost of the optimal policy for the true distribution p, b is the cost of buying, and η is the Earth Mover's (Wasserstein-1) distance between p and p̂. Note that when η = o(√b) this gives additive loss less than b (the trivial bound), and when η is arbitrarily large (corresponding to an extremely inaccurate prediction) we still do not pay more than O(b log b) additive loss. An implication of these bounds is that our algorithm has consistency O(√b) (additive loss when the prediction error is 0) and robustness O(b log b) (additive loss when the prediction error is arbitrarily large). Moreover, we do not need to assume that we know (or have any bound on) the prediction error η, in contrast with previous work in robust optimization which assumes that we know this error.
We complement this upper bound with a variety of lower bounds showing that it is essentially tight: not only can the consistency/robustness tradeoff not be improved, but our particular loss function cannot be meaningfully improved.
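For context on the problem the abstract builds on: in classic ski rental you pay 1 per day to rent or b once to buy, without knowing how many days you'll ski. The standard predictionless baseline is the break-even rule (rent until the rental total would match the purchase price, then buy), which never pays more than twice the optimal cost. A minimal sketch of that baseline (not the paper's prediction-based algorithm; `rent = 1`/day and the day counts are toy assumptions):

```python
def break_even_cost(true_days: int, b: int) -> int:
    """Cost of the classic break-even rule: rent for the first
    b - 1 days, then buy on day b if skiing continues."""
    if true_days < b:
        return true_days      # rented every day, never bought
    return (b - 1) + b        # rented b - 1 days, then paid b to buy

def opt_cost(true_days: int, b: int) -> int:
    """Offline optimum: with hindsight, rent the whole time or buy up front."""
    return min(true_days, b)

# Short season: renting throughout was optimal, and we did exactly that.
print(break_even_cost(3, 10), opt_cost(3, 10))      # 3 3
# Long season: we pay 19 vs. the optimal 10 -- within the 2x guarantee.
print(break_even_cost(100, 10), opt_cost(100, 10))  # 19 10
```

The paper's contribution is to beat this worst-case 2x multiplicative gap with an *additive* loss that degrades gracefully in the Wasserstein-1 error of a predicted distribution.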
UPDATE: I am now seeing reports of Senators calling for the impeachment of Noem.
Good. Fine. Do that. Yay. Put a head on a pike.
My demand upthread doesn’t change. Having a different Trump appointee running ICE will not fix ICE. You can’t fix ICE. Scrap it. Abolish it. Shut it down. This deranged tyrant must not and cannot be allowed to have an unaccountable personal army.
Still, I’m delighted that Senators are scrambling to do something. Keep up the pressure. Keep them all scared.
Reconstruction of the cold-blooded execution of Alex Pretti by ICE in Minneapolis:
"..while Mr. Pretti is on his knees and restrained, the agent standing directly above him appears to fire one shot at Mr. Pretti at close range. He immediately fires three additional shots."
https://www.
Steven Waldman of Rebuild Local News estimates US local newsrooms will get ~$74M from state governments, but CA, MA, NJ, and NY efforts have fallen short (Dan Kennedy/Poynter)
https://www.poynter.org/business-work/2026/state-government-support-local…
I explained something for a friend in a simple way, and I think it's worth paraphrasing again here.
You cannot create a system that constrains itself. Any constraint on a system must be external to the system, or that constraint can be ignored or removed. That's just how systems work. Every constitution for every country claims to do this impossible thing, a thing proven impossible nearly 80 years ago. Gödel's loophole has been known to exist since 1947.
Every constitution in the world, every "separation of powers" and set of "checks and balances," attempts to do something which is categorically impossible. Every government is always, at best, a few steps away from authoritarianism. From this, we would then expect that governments trend towards authoritarianism. Which, of course, is what we see historically.
Constraints on power are a formality, because no real controls can possibly exist. So democratic processes become a kind of collective classifier that tries to select only people who won't plunge the country into a dictatorship. Again, because this claim of restrictions on powers is a lie (willful or ignorant, a lie regardless), that classifier has to be correct 100% of the time (even assuming a best-case scenario). That's statistically unlikely.
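To make the "100% of the time" point concrete: even a very accurate classifier fails eventually when it must never fail. A toy calculation (both numbers are hypothetical, just to illustrate the compounding):

```python
def survival_probability(accuracy: float, n_elections: int) -> float:
    """Chance the electorate-as-classifier never picks a would-be
    dictator across n independent elections, given a per-election
    probability `accuracy` of choosing safely."""
    return accuracy ** n_elections

# Even a 95%-accurate electorate, over 20 elections:
print(round(survival_probability(0.95, 20), 2))  # 0.36
```

In this toy model there is roughly a two-in-three chance of at least one catastrophic pick over 20 cycles, and the survival probability goes to zero as the number of elections grows.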
So as long as you have a system of concentrated power, you will have the worst people attracted to it, and you will inevitably have that power fall into the hands of one of the worst possible people.
Fortunately, there is an alternative. The alternative is to not centralize power. In the security world we try to design systems that assume compromise and minimize impact, rather than just assuming that we will be right 100% of the time. If you build systems that maximally distribute power, then you minimize the impact of one horrible person.
Now, I didn't mention this because we're both already under enough stress, but...
Almost 90% of the nuclear weapons deployed around the world are in the hands of ghoulish dictators. Only two of the countries with nuclear weapons are not straight-up authoritarian, and they're not far off. We're one crashout away from sterilizing the surface of the Earth with nuclear hellfire. Maybe countries shouldn't exist, and *definitely* multiple thousands of nuclear weapons shouldn't exist and shouldn't all be wired together to launch as soon as one of these assholes goes a bit too far sideways.