The US is no longer an ally to Western countries.
“U.S. Director of National Intelligence Tulsi Gabbard is blocking America's closest intelligence allies from receiving updates on Russia-Ukraine peace talks in a shock move that upends decades of tight cooperation.”
https://www.
Just picked up this #Salter Improved Family Scale to gift a friend who's getting married and is into old-fashioned stuff and British stuff.
But it doesn't quite sit at 0. Does anyone have experience with how to tare a machine like this? There's no knob as far as I can see, and the plate on top doesn't seem like it can be removed without force.
More pictures below. …
Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think that, combined with an interaction pattern where I'd assume their stance on something and respond critically to it, ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but by the end of the conversation I noticed something interesting that had been bothering me.

They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X; it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw-man version of your argument, but when I responded by asking some direct questions about what their position was, they gave some non-answers and then blocked me.

It's entirely possible that's a coincidence and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect they just didn't want to hear what I was saying, while at the same time wanting to feel like someone who values public critique and open discussion of tricky issues. (If anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here, and it would be useful to hear that if so.)
In any case, the fact that at the end of the entire discussion I still don't actually know their position on whether the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit while noting some drawbacks. They said the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant they thought the use case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me.

The discussion also ended up linking this post: https://chelseatroy.com/2024/08/28/does-ai-benefit-the-world/ which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great.

But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite the limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is), then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive.
Perhaps if our objection to a technology were absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I'm confident enough that the overall outcomes of objection will be positive to think it's a good thing to do.
The deeper point here, I guess, is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" stance really bothers me.
Yield, noise and timing studies of ALICE ITS3 stitched sensor test structures: the MOST
Jory Sonneveld, René Barthel, Szymon Bugiel, Leonardo Cecconi, João De Melo, Martin Fransen, Alessandro Grelli, Isis Hobu… (all on behalf of the ALICE collaboration)
After some refactoring, learning about `hatch`, moving more files around, and generally abusing `test.pypi.org`: I've uploaded `diceparse` to PyPI. Still need to update the web documentation, but it now feels like a proper project.
I still need to add a CLI part so you can just roll dice after installing the package, but I'll handle that later. Also need to tweak the README.md a bit...
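For the CLI piece, one common route with `hatch`-built packages is a `[project.scripts]` entry point in `pyproject.toml`. A minimal sketch, assuming a hypothetical `diceparse.cli:main` function and a `roll` command name (neither is the package's actual layout):

```toml
# pyproject.toml fragment -- names below are assumptions, not diceparse's real API
[project.scripts]
roll = "diceparse.cli:main"  # installs a `roll` console command that calls main()
```

With that in place, installing the package would put a `roll` executable on the user's PATH; `main()` would then handle argument parsing itself.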
A look at SiriusXM's podcasting business; 2024's "off-platform earnings" were $606M, with podcast advertising making up most of that sum, up from $475M in 2022 (Jessica Testa/New York Times)
https://www.nytimes.com/2025/08/22/busines
Culling Misinformation from Gen AI: Toward Ethical Curation and Refinement
Prerana Khatiwada, Grace Donaher, Jasymyn Navarro, Lokesh Bhatta
https://arxiv.org/abs/2507.14242
How the heck does this only have 304k views?
Dude does a tight 5 & also proposes to his girl? DAYM.
▶️ German Proposes to Jewish Girlfriend on Stage | Mario Adrion | Standup C...
https://youtube.com/watch?v=DBr-r-ZnRR8&si=RhH6OhhumfTmLQuG
Just finished "Get a Life, Chloe Brown" by Talia Hibbert. It's... much less chaste than most of the other romances I've been reading, but also incredibly sweet and positive, so I enjoyed it a lot.
My one reservation is that it does the thing a lot of romance novels do, where they equate physical desire with romantic desire, and physical flirtations/advances with actual communication. Yes, people equate those things in the real world all the time, but it's often really harmful when they do.
This novel probably does better with consent than 99% of the field, and legitimately deserves props for that, so this isn't the harsh criticism I'd level if it seriously broke the "would this be okay if we didn't have access to interior monologues" test, but it skirts the edges of it a bit.
#AmReading