Urban Solitude III 🈳
📷 Pentax MX
🎞️Fujifilm Neopan SS, expired 1995
buy me a ☕️?
#filmphotography
1. Order two bags of cat litter.
2. The seller doesn't send it on time. No matter, you're not in a hurry.
3. Recall that they still haven't sent it. Decide to cancel the order.
4. Turns out they sent it just before you could cancel.
5. The parcels arrive. Open the first box.
6. Discover that the bag is damaged. Take a photo just in case.
7. Carefully take it out and check how damaged it is. Well, it's only ripped at the top, no point in returning it over that.
8. Notice they've sent the wrong litter (you ordered non-scented) 🤦.
9. Call them asking whether there's a point in opening the other box. They say it's going to be the wrong litter too, just make a return request and attach photos.
10. You do that, requesting a refund. They ask you to file a damage report with the delivery company.
11. You do that. They counter: maybe you'd accept a 20 PLN discount on your next order instead of returning two bags of the wrong cat litter?
12. They finally accept the return. And they prepare the return… for one bag 🤦.
So I was supposed to work less today. Instead, I've wasted a lot of time dealing with the damaged parcel and the vendor. And a lot of tape, so the box wouldn't fall apart on the return journey.
If you've been paying attention, this is a *very* strong signal that OpenAI is hitting the limits of what more compute and data can buy, and they're (predictably) all out of other ideas. The quiet lie here is the "exponential model capabilities" Altman promised and is now failing to deliver, even in cherry-picked demo terms.
https://www.cnbc.com/2025/08/11/sam-altman-says-agi-is-a-pointless-term-experts-agree.html
The "agentic" turn was never going to pan out, because it exposes the unreliability of LLMs too directly, and it turns out that no amount of yelling at your text vending machine to "Be smarter! Think harder!" will actually get you anything more than vended text.
I'm *praying* that we get into this crash sooner rather than later, since the faster it comes, the less painful it will be.
My recent reading of actual research papers corroborates this; for example, asking LLMs to play games exposes their utter lack of anything that could be termed "reasoning":
https://arxiv.org/pdf/2508.08501v1
This seems a good petition to sign for YP https://www.ourparty.org.uk/#openletter