Calamus 20 I saw in Louisiana a live-oak growing
What a heartaching poem of loneliness and the need for the love of another! Just wonderful. I understand now why this poem is so popular, particularly as a gay poem. It is full of meaning and is quite clear about it.
I wondered how it could utter joyous leaves, standing alone there, without its friend, its lover near—for I knew I could not
There's a more cerebral interpretation of this work, particularly if you understand "leaves" to mean "pages in my poetry book Leaves of Grass": Whitman is talking about his own poetic inspiration from lovers.
Which is fair enough. But I'm more interested in Whitman's expressed need for "manly love", which is clearly on his mind constantly:
my own dear friends ... I believe lately I think of little else than of them
Also Whitman's own eroticization of nature and himself. Here speaking of the tree,
its look, rude, unbending, lusty, made me think of myself
Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
https://social.coop/@eloquence/114940607434005478
My criticisms include:
- Current LLM technology has many layers, but the biggest, most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this, despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims that its small level of AI expenses equates to low climate impact. However, given the deep subsidies currently in place by the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research, but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.
"On July 9th, two days after the woman was flung from the ICE SUV, [SF Mayor Daniel] Lurie posted a video to Instagram promoting a new [Labubu] store in Union Square...The timing of the announcement was panned, with nearly every comment under the post questioning Lurie over the ICE raids." #sfPol
In the hands of someone like Trump,
deals are ways to evade, postpone or subvert the efficient work of markets.
Trump does not like markets,
precisely because they are impersonal and objective.
Their results – for corporations, entrepreneurs, investors and shareholders – are subject to clear measures of success and failure.
Because deals are personal, adversarial and incomplete,
they are perfect grist for Trump’s relentless publicity machine,
and a…
"TikTok, Do Your Thing": User Reactions to Social Surveillance in the Public Sphere
Meira Gilbert, Miranda Wei, Lindah Kotut
https://arxiv.org/abs/2506.20884
I met a decadeslong Mission resident, a GenXer, earlier this year who was bemused by the idea there was such a thing as a “north Mission” and a “south Mission.”
You know, based on if you’re closer to the 22 or the 48, I said.
But Café de Olla feels south Mission to me even though geographically it’s not, being on 19th Street.
AI, AGI, and learning efficiency
An addendum to this: I'm someone who would accurately be called "anti-AI" in the modern age, yet I'm also an "AI researcher" in some ways (have only dabbled in neural nets).
I don't like:
- AI systems that are the product of labor abuses towards the data workers who curate their training corpora.
- AI systems that use inordinate amounts of water and energy during an intensifying climate catastrophe.
- AI systems that are fundamentally untrustworthy and which reinforce and amplify human biases, *especially* when those systems are deployed in ways that invite harm.
- AI systems which are designed to "save" my attention or brain bandwidth, but where using them cripples my understanding of the things I use them for, when in fact that understanding was what I was supposed to be spending my time to gain, and where the later lack of that understanding will be costly to me.
- AI systems that are designed by, and whose hype fattens the purses of, people who materially support genocide and the construction of concentration camps (a.k.a. fascists).
In other words, I do not like, and except in very extenuating circumstances will not use, ChatGPT, Claude, Copilot, Gemini, etc.
On the other hand, I do like:
- AI research as an endeavor to discover new technologies.
- Generative AI as a research topic using a spectrum of different methods.
- Speculating about non-human intelligences, including artificial ones, and including how to behave ethically towards them.
- Large language models as a specific technique, and autoencoders and other neural networks, assuming they're used responsibly in terms of both resource costs & presentation to end users.
I write this because I think some people (especially folks without CS backgrounds) may feel that opposing AI for all the harms it's causing runs the risk of opposing technological innovation more broadly, and/or may feel there's a risk that they will be "left behind" as everyone else embraces the hype and these technologies inevitably become ubiquitous and essential (I know I feel this way sometimes). Just know that it is entirely possible and logically consistent to oppose many forms of modern AI while also embracing, and even being optimistic about, AI research, and that while LLMs are currently all the rage, they're not the endpoint of what AI will look like in the future, and their downsides are not inherent in AI development.
Forget America First
—Under Trump,
It's Corporations First
Big corporations donated heavily
to Trump’s inaugural fund.
Just a few months later,
federal cases against them are being dropped
https://inthesetimes.com/article/trump
Transit riders are not taking Daniel Lurie's service cuts sitting down. 😤
Unless, that is, we are waiting at one of these 12 stops
https://sfist.com/2025/08/22/guerilla-bench-activists-back-a…
The latest extraordinary intervention by the White House in corporate America.
Trump said on Friday the U.S. would take a 10% stake in Intel
under a deal with the struggling chipmaker
-- and is planning more such moves.
An official announcement on the arrangement is expected later in the day, a source familiar with the matter said. Trump is set to meet with CEO Lip-Bu Tan later on Friday