Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think that, combined with an interaction pattern where I'd assume their stance on something and respond critically to it, ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but by the end of the conversation I noticed something interesting that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X; it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw-man version of your argument, but when I responded by asking some direct questions about what their position actually was, they gave some non-answers and then blocked me. It's entirely possible that's a coincidence and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect they just didn't want to hear what I was saying, while at the same time wanting to feel like someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here, and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: https://chelseatroy.com/2024/08/28/does-ai-benefit-the-world/ which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is) then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. 
Perhaps if our objection to a technology were absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I believe strongly enough that the overall outcomes of objection will be positive that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.
The Chinese company Gotion High-Tech has begun production of its 5 MWh #Batteriespeicher (battery storage units) in #Göttingen.
Yes, in Germany.🤪
After successful #TÜV certificati…
“‘It’s obvious that you don’t respect Copyright Law and Artist Rights any more than you respect Habeas Corpus and Due Process rights, not to mention the separation of Church and State per the US Constitution. For the record, we hereby order dhsgov [US Department of Homeland Security] to cease and desist the use of our recording and demand that you immediately pull down your video.’”
“They added: ‘Oh, and go f… yourselves.’”
The CARMENES search for exoplanets around M dwarfs. Revisiting the GJ 317, GJ 463, and GJ 3512 systems and two newly discovered planets orbiting GJ 9773 and GJ 508.2
J. C. Morales, I. Ribas, S. Reffert, M. Perger, S. Dreizler, G. Anglada-Escudé, V. J. S. Béjar, E. Herrero, J. Kemmer, M. Kuzuhara, M. Lafarga, J. H. Livingston, F. Murgas, B. B. Ogunwale, L. Tal-Or, T. Trifonov, S. Vanaverbeke, P. J. Amado, A. Quirrenbach, A. Reiners, J. A. Caballero, J. F. Agüí F…
Interesting thing about tomorrow's tarot show, rendering now, is that I upgraded from Blender 4.0 to Blender 4.4 and the timeline editor is quite a bit nicer to look at.
Was sad to find that the render time went up, though: from about 3 seconds per frame usually to more like 12!?
Trying it with an old version I see that the lights and textures look way better with 4.4 than 4.0 though. A substantial step up in the way the show looks without me even doing anything other than waiting four times longer per frame.
Seems to be heavily dependent upon lighting now. The slow frames are like 12 seconds but the fast frames with minimal lighting and close up on the video are more like 2.
Looks too beautiful now to go back though. Upgraded my cloud-remote render machines too. We will render on four machines tonight. FOUR! The power of it all.
g3.4xlarge is no faster than g3.large but g6.xlarge seems to be twice the speed.
But hard to be sure really coz of the massive variance in time depending on the lighting.
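One way to get a fairer comparison despite that lighting-driven variance is to render the *same* fixed set of frames on each machine and compare the timings frame by frame, so a slow frame on one box is matched against the same slow frame on the other. A minimal sketch; the per-frame times below are made-up placeholder numbers, not my actual measurements (real ones would come from timing something like `blender -b scene.blend -f N` per frame):

```python
from statistics import mean, stdev

# Hypothetical per-frame render times (seconds) for the SAME five frames
# rendered on two instance types. Placeholder numbers for illustration.
times = {
    "g3.4xlarge": [12.1, 2.3, 11.8, 2.5, 12.4],
    "g6.xlarge":  [6.0, 1.2, 5.9, 1.3, 6.2],
}

# Raw means are dominated by which frames happen to have heavy lighting.
for machine, t in times.items():
    print(f"{machine}: mean {mean(t):.1f}s, stdev {stdev(t):.1f}s")

# Pairing frame-by-frame cancels the lighting variance out of the ratio.
ratios = [a / b for a, b in zip(times["g3.4xlarge"], times["g6.xlarge"])]
print(f"per-frame speedup of g6.xlarge: {mean(ratios):.2f}x")
```

The point of the pairing is that even though individual frame times swing between ~2s and ~12s, the per-frame ratio stays stable, which is what you actually want when deciding between instance types.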
Anyway, great show coming tomorrow. Sometimes I wonder what the hell I'm trying to do with it but tomorrow's show is the answer. Hide the angry bitter political rant behind a strange CGI tarot show. When the rant comes together well I like it.
https://wordcloudtarot.com/@wordcloudtarot/statuses/01JYFF0GQV1680Z0VG0YTFZDTP
I’m pretty sure seeing my country bombed by a habitually violent country, causing me direct hardship, or maybe taking the lives of my friends and family, would not make me particularly “happy”.
#ukpolitics #iran
Ring said on social media that strange login patterns were caused by a programming error, not hackers.
https://www.snopes.com/news/2025/07/21/ring-hacked-may-28-2025/
A thought-provoking text on a far-from-easy subject. This in particular is worth reflecting on:
“All journalism worthy of the name has a subject – the journalist themselves – whether it is disclosed or not. Impartiality can therefore not be made a question of what a journalist subjectively thinks and feels and perhaps at some point expresses, but of the truthfulness, accuracy, reliability, sound judgment, and relevance of the programs and reports the journalist produces.”
On the edge reconstruction of the second immanantal polynomials of undirected graph and digraph
Tingzeng Wu
https://arxiv.org/abs/2507.14607
I chose a new home for my blogging and articles:
#DigitalMigration