Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think that, combined with an interaction pattern where I'd assume their stance on something and respond critically to it, ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X; it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw-man version of your argument, but when I responded by asking some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time they wanted to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here, and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant they thought this use case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: https://chelseatroy.com/2024/08/28/does-ai-benefit-the-world/ which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is also great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite the limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know its true impacts (which I'll concede it invariably is), then we'll have to accept every technology without objection, limiting ourselves to trying to improve its impacts without opposing it. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. Perhaps if our objection to a technology were absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. As things stand, I think that objecting to the development and use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I believe strongly enough that the overall outcomes of objection will be positive that objecting is a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.
Barren-plateau free variational quantum simulation of Z2 lattice gauge theories
Fariha Azad, Matteo Inajetovic, Stefan Kühn, Anna Pappa
https://arxiv.org/abs/2507.19203
The natural-born posthuman: applying extended mind to post- and transhumanist discourse https://link.springer.com/article/10.1007/s11229-025-05202-4 "Newer discussions have expanded upon this idea through sensory substitution devices, such as The vOICe system which use…
Sergey Shulubin had his bike stolen on the beach in Constanța. That's lousy!
#tcrno11 #tcrno11cap104
Efficient Semiparametric Inference for Distributed Data with Blockwise Missingness
Jingyue Huang, Huiyuan Wang, Yuqing Lei, Yong Chen
https://arxiv.org/abs/2508.16902
1/2 Thanks to @… for this interesting article. It speaks to me. :)
I’ve been weather blogging @… since 2005. It is interesting how it has changed, and how I have changed.
My website used to be just data from the (expensive) station I bought when I moved back to Port Alberni. It was a hobby and a side project to practice the web/coding skills I use at work. My focus was on creating useful data for people that was more local and relevant than what the official EC station outside of the city provided.
Then I put up a webcam and learned how to make timelapses. This got the attention of local media… because pictures. :)
Then I added a blog and started to write about the weather almost daily. This was before Facebook. There was a popular local online forum where I would post things. The media would also follow my website and they started to call me when there was extreme weather (usually very hot or very wet/stormy).
Then Facebook started to get big and I made a page that eventually had a few thousand followers. I would blog often. Lots of traffic from Facebook… this was 2010 and on. I blogged about climate and weather pretty equally.
Like anyone in Port Alberni, I was/am obsessed with the Martin Mars, and I got wrapped up in that issue along with others, which, combined with the weather following, probably gave me just enough exposure to get elected as a councillor in 2014.
I continued through those 4 years, blogging often in addition to councillor duties and work, heavily on Facebook. Then it all went sideways due to my own poor judgement (go ahead and google it, it's ok :)) and I was not reelected. But Facebook by 2018 had also changed. Cambridge Analytica, etc.
….Continued…
https://www.theglobeandmail.com/canada/article-weather-apps-data-wildfires-storms-preparation-obsession-social-media/
Mass killings
Was looking through Wikipedia's list of mass killings in America (#guns #GunViolence #Shooting
Sharp Onofri trace inequality on the upper half space and quasi-linear Liouville equation with Neumann boundary
Jingbo Dou, Yazhou Han, Shuang Yuan, Yang Zhou
https://arxiv.org/abs/2509.17031
Modelling and Analysis of Non-Contacting Mechanical Face Seals with Axial Disturbances and Misalignment
Ben S Ashby, Tristan Pryer, Nicola Y Bailey
https://arxiv.org/abs/2509.19993
A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if it's possible at all, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never, or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counterarguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea, actually! It might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity, though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI