Tootfinder

Opt-in global Mastodon full-text search. Join the index!

No exact results. Similar results found.
@fgraver@hcommons.social
2025-05-24 20:17:37

A great moment in Norwegian film history – Ytring nrk.no/ytring/et-stort-oyeblik

@tiotasram@kolektiva.social
2025-07-25 10:57:58

Just saw this:
#AI can mean a lot of things these days, but lots of the popular meanings imply a bevy of harms that I definitely wouldn't feel are worth a cute fish game. In fact, these harms are so acute that even "just" playing into the AI hype becomes its own kind of harm (it's similar to blockchain in that way).
@… noticed that the authors claim the code base is 80% AI generated, which is a red flag because people with sound moral compasses wouldn't be using AI to "help" write code in the first place. The authors aren't by some miracle people who couldn't build this app without help, in case that influences your thinking about it: they have the skills to write the code themselves, although it likely would have taken longer (but also been better).
I was more interested in the fish-classification AI, and how much it might be dependent on datacenters. Thankfully, a quick glance at the code confirms they're using ONNX and running a self-trained neural network on your device. While the exponentially-increasing energy & water demands of datacenters to support billion-parameter models are a real concern, this is not that. Even a non-AI game can burn a lot of cycles on someone's phone, and I don't think there's anything to complain about energy-wise when we're just using cycles on the end user's device, as long as we're not making them keep it running for hours crunching numbers the way blockchain stuff does. Running stuff locally while the user is playing a game is a negligible environmental concern, unlike, say, calling out to ChatGPT, where you're directly feeding datacenter demand. Since they claimed to have trained the network themselves, and since it's actually totally reasonable to make your own dataset for this and get good-enough-for-a-silly-game results with just a few hundred examples, I don't have any ethical objections to the data sourcing or training processes either. Hooray! This is finally an example of "ethical use of neural networks" that I can hold up as a model of what people should be doing instead of the BS they are doing.
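(For context, and not taken from the game's actual repo: on-device ONNX inference looks roughly like the Python sketch below using onnxruntime; the model filename, input shape, and preprocessing are hypothetical.)

    import numpy as np
    import onnxruntime as ort

    # Load a locally bundled, self-trained classifier; no network calls involved.
    session = ort.InferenceSession("fish_classifier.onnx",
                                   providers=["CPUExecutionProvider"])

    def classify(image: np.ndarray) -> int:
        # image: preprocessed float32 array matching the model's expected input,
        # e.g. shape (1, 3, 224, 224) for a small CNN (hypothetical).
        input_name = session.get_inputs()[0].name
        logits = session.run(None, {input_name: image.astype(np.float32)})[0]
        # Return the index of the most likely class.
        return int(np.argmax(logits, axis=-1)[0])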
But wait... Remember what I said about feeding the AI hype being its own form of harm? Yeah, between using AI tools for coding and calling their classifier "AI" in a way that makes it seem like the same kind of thing as ChatGPT et al., they're leaning into the hype rather than helping restrain it. And that means they're causing harm. Big AI companies can point to them and say "look AI enables cute things you like" when AI didn't actually enable it. So I'm feeling meh about this cute game and won't be sharing it aside from this post. If you love the cute fish, you don't really have to feel bad for playing with it, but I'd feel bad for advertising it without a disclaimer.

@Tupp_ed@mastodon.ie
2025-05-25 13:20:03

There is an open letter on BlueSky moderation practices as they apply to Gazans which I was invited to sign.
I have been feeling pretty helpless, so I read the letter. It was very balanced and reasonable. The asks are not (as I immediately saw them being misrepresented) to allow anyone who says they are in Gaza to post anything.

@Techmeme@techhub.social
2025-06-24 00:40:57

Court filings from in-ear hardware startup iyO's trademark dispute lawsuit against OpenAI detail OpenAI and io's early work on in-ear hardware devices (Maxwell Zeff/TechCrunch)
techcrunch.com/2025/06/23/cour

@davidshq@hachyderm.io
2025-04-26 11:11:40

I find it slightly annoying that the #Discord version on Linux is 0.0.xx and has been for-ev-er. 😆 You aren't following any of the versioning schemes I know! 😂

@samir@functional.computer
2025-07-25 06:59:43

Found an app called “Octave Coffee”. It’s an opinionated timer for pourover coffee. Tells you how much to pour, then how much to wait, and so on. And it has a cute pixel art kitty.
The problem is… it negs you while doing it, and your reward at the end for following the instructions is the message, “You actually listened. Shocking.”
I did it twice. It doesn’t vary.
So… I’m deleting it, even though I found it useful, because I don’t want to be insulted first thing in the morn…

@catsalad@infosec.exchange
2025-06-26 11:44:53

Found a bunch of BSides with fediverse accounts I wasn't following. I should probably update my fediverse #BSides index... 😅
Bsides & InfoSec Cons by Region
📌⁠ infosec.exchange/@catsalad/111

@aardrian@toot.cafe
2025-06-25 00:50:23

We accidentally wore a matching outfit for the first session of a new campaign.
Paul is wearing the tee I made at the start of the last campaign and I’m wearing the tee I made at the end of the same campaign.

A guy in a black tee with the text, “4RTU15 & Alfred & Battle Unit & Ch’k Tikka Tikka & Pimo & Prince Rauthe the Divine.” Next to him is a guy wearing a maroon version of the tee but all names except Battle Unit and Pimo are crossed out, and Pimo has an asterisk following it.

@smurthys@hachyderm.io
2025-07-24 18:14:16

With profits falling, Tesla is diversifying. 😀
#joke #Tesla

@tiotasram@kolektiva.social
2025-07-22 00:03:45

Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to it ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw-man version of your argument, but when I in response asked some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time they wanted to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here and it would be useful to hear that if so).
In any case, it feels odd that at the end of the entire discussion I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant they thought this use case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: chelseatroy.com/2024/08/28/doe which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite the limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is), then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration of new technologies, this stance is too permissive. Perhaps if our objection to a technology were absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. As things stand, I think that objecting to the development and use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I'm confident enough that the overall outcomes of objecting will be positive that I think it's a good thing to do.
The deeper point here, I guess, is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" stance really bothers me.