How ‘day zero’ water shortages in Iran are fuelling protests
https://www.theguardian.com/world/2026/jan/15/how-day-zero-water-shortages-in-iran-are-fuelling-protests?CMP=Share_AndroidApp_Other
Here's another short story that reflects on our extolling of technology...
The Flying Machine – Ray Bradbury
https://xpressenglish.com/our-stories/flying-machine/
Demand for Russian oil is falling! Even India has turned away! #shorts: https://benborges.xyz/2026/02/13/demand-for-russian-oil-is.html
So to follow up on this, I've caught it in action. Models, when quantized a bit, just do a bit more poorly with short contexts. Even going from f32 (as trained) to bf16 (as usually run) to q8 tends to be okay for "normal" context windows. At q4 you start feeling like "this model is a little stupid and gets stuck sometimes" (it is! It's just that it's still mostly careening about in the space of "plausible" most of the time. Not good guesswork, but still in the zone). With long contexts, the probability of parameters collapsing to zero is higher, so the more context you use, the more likely you are to see brokenness.
And then at q2 (2 bits per parameter) or q1, the model falls apart completely. Parameters collapse to zero easily. You start seeing "all work and no play makes Jack a dull boy" sorts of behavior, with intense, unscrutinized repetition, followed by a hard stop when it just stops working.
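To make "parameters collapse to zero" concrete, here's a toy sketch. It assumes plain symmetric round-to-nearest quantization (real pipelines are fancier, with per-group scales and outlier handling), and the weight distribution is invented, but it shows the fraction of weights rounding to exactly zero exploding as the bit budget shrinks:

```python
# Toy model of symmetric round-to-nearest quantization at several bit widths.
# Counts how many parameters collapse to exactly zero as the bit budget shrinks.
import numpy as np

def quantize(weights, bits):
    """Quantize to signed integers with `bits` bits, then dequantize."""
    levels = 2 ** (bits - 1) - 1              # 127 for q8, 7 for q4, 1 for q2
    scale = np.abs(weights).max() / levels    # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -levels, levels)
    return q * scale

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, size=100_000)   # stand-in for one layer's weights

for bits in (8, 4, 2):
    deq = quantize(weights, bits)
    zeros = np.mean(deq == 0.0)
    err = np.mean(np.abs(deq - weights))
    print(f"q{bits}: {zeros:.1%} of weights exactly zero, mean abs error {err:.5f}")
```

On this toy tensor, q8 zeroes out only a sliver of the weights, q4 a noticeable chunk, and q2 nearly all of them, which is the "falls apart completely" regime.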
And quantization is a knob that a model vendor can turn relatively easily. (They have to regenerate the model from the base weights with more quantization, but that's a data transformation on the order of running a terabyte through a straightforward, fast process; nothing like training.)
If you have 1000 customers and enough equipment to handle the requests of 700, going from bf16 to q8 is a no-brainer: suddenly you can handle the load and have a little spare capacity. Customers get worse results but probably pay the same per token (or they're on a subscription that hides the cost anyway, so you're even freer to make trade-offs; there's a reason subscription products are kinda vaguely described).
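Back-of-the-envelope numbers for that trade-off, assuming a hypothetical 70B-parameter model on 80 GB accelerators and that serving capacity is dominated by weight memory (a simplification; KV cache and compute matter too):

```python
# Rough capacity math for the 1000-customers scenario above.
# All figures are illustrative assumptions, not any vendor's real numbers.
BYTES_PER_PARAM = {"bf16": 2, "q8": 1, "q4": 0.5}

params = 70e9            # hypothetical 70B-parameter model
gpu_mem = 80e9           # one 80 GB accelerator
customers_at_bf16 = 700  # given: the current fleet serves 700 at bf16

for fmt in ("bf16", "q8", "q4"):
    weight_bytes = params * BYTES_PER_PARAM[fmt]
    gpus_per_replica = weight_bytes / gpu_mem
    # if memory-bound, capacity scales inversely with per-replica footprint
    customers = customers_at_bf16 * BYTES_PER_PARAM["bf16"] / BYTES_PER_PARAM[fmt]
    print(f"{fmt}: {gpus_per_replica:.2f} GPUs of weights per replica, ~{customers:.0f} customers")
```

Under those assumptions, bf16 to q8 halves the weight footprint and so roughly doubles how many customers the same fleet can serve, which is exactly why it's a no-brainer at 1000 customers and capacity for 700.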
It's also possible for them to vary this across the day. Use the model during quieter periods? Maybe you get an instance running at bf16. Use it during a high-traffic period? You get a q4 model.
Or intelligent routing is possible. I have no idea if anyone is doing this, but if they monitor what you send a bit, and you generally shoot for an expensive model for simple requests? They could totally substitute a highly quantized version of that model to answer the question.
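Again, I have no idea whether anyone actually does either of these; here's a purely hypothetical sketch of what such a selection layer could look like. The load thresholds, peak hours, and the prompt-length stand-in for a difficulty classifier are all invented for illustration:

```python
# Hypothetical routing layer: not any known vendor's implementation, just a
# sketch of load- and request-aware selection among quantized model variants.
from datetime import datetime

def pick_quantization(load: float, prompt: str, now: datetime) -> str:
    """Choose a model variant from fleet load, request 'difficulty', and time of day."""
    peak_hours = 9 <= now.hour < 18    # assumed business-hours peak
    looks_simple = len(prompt) < 200   # crude stand-in for a difficulty classifier
    if load > 0.9 or (peak_hours and looks_simple):
        return "q4"                    # overloaded, or an easy request at peak
    if load > 0.7 or peak_hours:
        return "q8"
    return "bf16"                      # quiet period: full-quality variant

print(pick_quantization(0.95, "What's 2+2?", datetime(2026, 4, 6, 14, 0)))                     # q4
print(pick_quantization(0.30, "Refactor this 2k-line module...", datetime(2026, 4, 6, 2, 0)))  # bf16
```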
There are *so many* tricks that can be pulled here. Some of them are very reasonable trade-offs to make, some tread into outright misleading or fraudulent territory, and it's weirdly hard to draw the line between them.
Japan, driven by labor shortages, is increasingly adopting robotics and physical AI, with a hybrid model where startups innovate and corporations provide scale (Kate Park/TechCrunch)
https://techcrunch.com/2026/04/05/japan-is-proving-experimental-physical-ai-is-ready-for-the-real-world/
IRS staffing shortage collides with GOP's tax cut campaign pitch (Danny Nguyen/Politico)
https://www.politico.com/news/2026/04/06/irs-falls-short-of-filing-season-staffing-00860634
http://www.memeorandum.com/260406/p106#a260406p106
Imagine:
You are the parent of an adorable 4-year-old kid. They have made a toy airplane out of spare cardboard. Sadly, during play the wing has fallen off. You, a wise parent, produce a piece of duct tape and tape it back on. Your kid asks: "But what if the tape breaks, or the other wing falls off?" Dutifully, and with a completely serious manner, you duct tape the other wing, and then with a Sharpie you write "Please DO NOT fall off!" on each wing. "There," you say, "now the wings will not fall off."
Your child happily returns to their play.
Imagine:
You are boarding a Boeing airplane for an intercontinental flight. Just the other day you were reading news about the emergency exit door falling off a Boeing airplane during flight. Thankfully nobody was injured in that incident, but a passenger could have been sucked out through the gap and killed. As you walk down the aisle towards your seat at the back, you notice that around the emergency exit door of this plane there are some scratch marks. It looks like it might not be 100% seated in place. You see several rolls' worth of duct tape slapped onto the gaps between the door and the frame. In Sharpie, someone has written "Please DO NOT fall off!" on the duct tape.
This is a post about #Agentic #AI.
To clarify: there are a host of reasons why using Claude Code is unethical in the first place, besides the fact that it's a danger to its users. These make it unethical to use even for a child's-toy-like application. But the source code we've just witnessed in the recent leak is *exactly* this level of "engineering." If you see an app that claims to be "programmed with AI" and it has any possibility of failing in a way that could harm you (for example, if it connects to the internet, meaning that poor programming could allow hackers to take over the device you run it on), my advice is: "Do not use it, and warn your friends and family."
P.S. Yes, this advice does apply to Microsoft Windows at this point, although that can be a tougher bullet to bite.
What do Ukraine and Japan have in common? They are leaders in robotics, and for the right reasons
https://techcrunch.com/2026/04/05/japan-is-proving-experimental-physical-ai-is-ready-for-the-real-world/
Following UK CMA's proposals, Google says it is exploring controls to let websites opt out of AI Overviews and AI Mode (Barry Schwartz/Search Engine Roundtable)
https://www.seroundtable.com/google-opt-out-of-search-ai-features-40831.html