Tootfinder

Opt-in global Mastodon full text search. Join the index!

@mxp@mastodon.acm.org
2025-08-09 15:37:12

I’m sick and tired of reading “If used responsibly, [unhinged promises of a new Gilded Age of unprecedented prosperity, yadda yadda]” in relation to #GenAI.
Yeah, “if used responsibly,” many things would be fine, but people and organizations tend to not act responsibly, unless there are very strong incentives.

@tante@tldr.nettime.org
2025-06-05 11:40:04

"The process of coding with an “agentic” LLM appears to be the process of carefully distilling all the worst parts of code review, and removing and discarding all of its benefits."
Very insightful post on #GenAI by Glyph
(Original title: I Think I’m Done Thinking About genAI For Now)

@timbray@cosocial.ca
2025-07-06 19:16:08

In which I argue that arguments about whether #genAI is useful or not are the wrong arguments. The important issues are what it’s for and what it costs.
Having very unkind feelings about the people pushing it.

@elduvelle@neuromatch.social
2025-07-06 13:55:21

Here is a poll about #GenAI since that's all we're talking about at the moment:
Do you believe that you can detect AI-generated text?
If so, what are your tips to detect it? I found this article, which has a few suggestions:

@felwert@fedihum.org
2025-06-05 09:53:58

I can’t help it, I somehow feel gaslighted by the whole #GenAI debate. People who are critical of GenAI are often told “but when done right, you’re just so much more productive, so obviously you just didn’t do it right.” So I’m trying. Not because I feel I need to, but because I just want to get a realistic impression of the capabilities, and be able to discuss these with my students.
Then fancy…

@pavelasamsonov@mastodon.social
2025-07-07 13:02:04

Managers were starting to understand that velocity on its own has no value. But then along came #AI and said "but what if we made that velocity 10x?" and they fell for it all over again - because they only have a surface level understanding of the work.
The logic of the feature factory has permitted #genAI

@hacksilon@infosec.exchange
2025-05-17 07:58:32

Interesting article on how #GenAI can be used effectively in a classroom setting, what methods the instructor uses to make sure students learn how and when (not) to use it, and how they ensure the students still learn what they need to learn.

@timbray@cosocial.ca
2025-06-02 16:00:51

So sad: #genAI

@pavelasamsonov@mastodon.social
2025-07-03 15:09:27

One rule for thee, another for me. #LLM #AI #GenAI

Clifton Sellers attended a Zoom meeting last month where robots outnumbered humans.
He counted six people on the call including himself, Sellers recounted in an interview. The 10 others attending were note-taking apps powered by artificial intelligence that had joined to record, transcribe and summarize the meeting.
Some of the AI helpers were assisting a person who was also present on the call — others represented humans who had declined to show up but sent a bot that listens but can’t talk in…

@tiotasram@kolektiva.social
2025-07-30 18:26:14

A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counter arguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI

@awinkler@openbiblio.social
2025-06-23 16:52:38

Golden words from @… ! A great talk against the AI hype #genAI
Chaos Computer Club - recent events feed (low quality): Entzaubert generative KI (#gpn23)
Episode we…

@tante@tldr.nettime.org
2025-07-27 18:23:05

If you use #genAI you can no longer claim to "care about quality". That is just a contradiction. Your actions are saying loudly that you do in fact not give a shit about anything of the sort.

@geant@mstdn.social
2025-06-17 10:05:22

May 2025 recap | The latest edition of the GÉANT School of Software Engineering, hosted by PCSS, brought together developers and experts from across Europe to explore generative #AI tools, risks, and secure coding practices.
Highlights:
🔹#GenAI Buzz-Free Programming with LLMs – training by Maciej Łabędzki…

GÉANT School of Software Engineering. Photo by © Magda Madej, on behalf of PSNC, 2025.

@ErikJonker@mastodon.social
2025-06-13 10:44:32

In a certain cultural segment of IT it is customary to call everything related to AI "garbage" or "bullshit", which is just as stupid as calling AI truly intelligent and reasoning. More people need to be in the middle ground instead of at those extremes.
#AI #GenAI

@simon_brooke@mastodon.scot
2025-07-11 11:33:19

"Interviewees should take note and always record their own version of a conversation in any hostile forum, although of course they could be accused of faking the real version!"
#DeepFake
#GenAI

@gedankenstuecke@scholar.social
2025-07-11 02:12:12

With the whole move, we've been looking for dining table sets online. While scrolling we noticed that something was off with some of the listings; behold "AI"-generated product image #3:
Is the table gigantic? Or are the chairs tiny? Do they worship the table in some strange cult? Why is the table floating mid-air, with 2 legs resting weirdly on some box on the wall and one leg being much longer?
Because it's mindless slop, that's why.
#ai #genai

@elduvelle@neuromatch.social
2025-07-12 11:18:35

It used to be that the use of #genAI for #PeerReview was forbidden (at least for anything other than helping with the language).
I've just checked the policy on this from #Nature and it's…

@timbray@cosocial.ca
2025-07-14 16:26:17

Incredibly strong post about #genAI and journalism:
404media.co/the-medias-pivot-t

@pavelasamsonov@mastodon.social
2025-05-18 20:28:14

Every tech oligarch wants to build an Everything App - because they now have the power to do so, but also because they're completely out of ideas.
This week's issue of my newsletter tackles the rise of the "data-driven" Nothing Manager responsible for building Everything, with the "power" of #GenAI.

@mariyadelano@hachyderm.io
2025-07-01 14:40:23

Update on my stance here: I’ve changed my mind after reading way more about #AI. Stopped using #LLM products, cancelled paid subscription to #ChatGPT and am currently exploring smaller, specialized, and open-source alternative language model solutions to keep business functioning where needed (tldr because of client requests for certain types of automation I can’t say goodbye to LMs completely).
Planning to write up how my thinking developed soon.
#technology #artificialintelligence #genAI

@crell@phpc.social
2025-06-20 15:40:53

Priorities...
#GenAI #LLM #AI #ClimateChange

A post from @laurenkayes.bsky.social 

It's so cool that cities are like “pweeease only turn your AC on if you're actively dying and don't go below 79" while the AI nobody asked for is slurping up the power grid to make 1 image of a girl with 5 tits.

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy) it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules, and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement.
Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem. But still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular.
The network of "citations" as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
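To make the small-scale version concrete, here is a minimal sketch (my own illustration, not from the original post; the email-validation example is invented) of the same check duplicated at two call sites and then factored into one shared helper:

```python
import re

# Before: the same validation copied into two places. If the pattern
# needs fixing, someone has to remember to fix it in both.
def register_user(email: str) -> None:
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        raise ValueError("invalid email")

def update_profile(email: str) -> None:
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        raise ValueError("invalid email")

# After (DRY): one shared helper that both call sites reference,
# so a fix here fixes every caller at once.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def require_valid_email(email: str) -> None:
    if not EMAIL_RE.match(email):
        raise ValueError("invalid email")

def register_user_dry(email: str) -> None:
    require_valid_email(email)

def update_profile_dry(email: str) -> None:
    require_valid_email(email)
```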
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need (see the sketch after this list), which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger-sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
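Here is a hypothetical Python sketch of that parameter-guessing failure mode (all names are invented for illustration), together with the kind of mechanical signature lookup the post suggests tooling could use:

```python
import inspect

# Stand-in for an existing helper somewhere in the codebase.
def fetch_page(url: str, *, timeout_s: float = 10.0, retries: int = 3) -> str:
    """Fetch a page; options are keyword-only, timeout is in seconds."""
    ...

# A plausible-looking generated call that guesses wrong -- positional
# options the signature doesn't accept and an invented parameter name,
# so it fails with a TypeError at runtime:
#   fetch_page("https://example.com", 3, timeout=10_000)

# The accurate signature is cheap to look up mechanically:
print(inspect.signature(fetch_page))
# -> (url: str, *, timeout_s: float = 10.0, retries: int = 3) -> str
```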
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI.
#AI #GenAI #LLMs #VibeCoding

@pavelasamsonov@mastodon.social
2025-05-29 21:13:07

Any second now... #LLM #AGI #GenAI #AI

r/agi
2 yr. ago
AGI 2 years away says CEO of leading AGI lab Anthropic

@tiotasram@kolektiva.social
2025-07-17 13:09:57

It bothers me that so many LLM/genAI applications seem to be all about "now that we have new tool X, what can we do with it" while completely ignoring the question "for problem Y, what is the best tool for the job?"
Perhaps unsurprisingly for developers for whom we have strong evidence of poor ethics (e.g., uncritically using big-brand LLMs), I suspect that many of the people behind these systems care more about the exhilaration of using new tech and the prestige it might bring them than about any of the problems they might claim to solve (if they even bother to identify such things at all). Turns out that's a great way to cause a lot of harm in the world, since you likely won't do a good job of measuring outcomes (if you even bother to do so) and you especially won't carefully look for systemic biases or ways your system might unintentionally hurt/exclude people. You also won't be concerned about whether your system ends up displacing efforts that would have led to better solutions.
#AI #GenerativeAI #GenAI #LLM