Tootfinder

Opt-in global Mastodon full text search. Join the index!

@theprivacydad@social.linux.pizza
2025-04-30 16:56:17

What is the difference between Microsoft Copilot and a Microsoft Copilot laptop?
This article is useful in light of yesterday's news about them bringing back Recall on Copilot hardware:
wired.com/story/what-is-copilo
There a…

@dcm@social.sunet.se
2025-06-29 13:48:52

Some good venting by Steve Klabnik about the sorry state of significant chunks of the AI debate today:
"What is breaking my brain a little bit is that all of the discussion online around AI is so incredibly polarized. This isn’t a “the middle is always right” sort of thing either, to be clear. It’s more that both the pro-AI and anti-AI sides are loudly proclaiming things that are pretty trivially verifiable as not true."

@fgraver@hcommons.social
2025-06-29 11:32:16

It’s true that my fellow students are embracing AI – but this is what the critics aren’t seeing theguardian.com/commentisfree/

@hllizi@hespere.de
2025-05-30 17:19:25

"They ran the bare job titles through GPT, without looking at the details of the specific jobs, and got the chatbot to guess what those titles would have meant. Then they decided the chatbot could do most of the jobs. They were, after all, using the chatbot to do their job."
Rule: your job can successfully be taken over by a chatbot if it comes with no accountability.

@inthehands@hachyderm.io
2025-05-30 21:58:22

Here's the addendum. Brown writes:
“There exists no coherent notion of what AI is or could be.”
There is in fact a perfectly coherent definition of AI — one that does not refute Brown’s point, but rather proves it.
2/

@esoriano@social.linux.pizza
2025-03-30 10:06:03

Ding!!
“Politicians promise to make policy that unleashes the power of A.I. to do … something, though many of them aren’t exactly sure what.”
nytimes.com/2025/03/29/opinion

@tiotasram@kolektiva.social
2025-06-29 17:34:34

Sam Altman: "What if an inhuman AI took control of the world by manipulating people's behavior but its core directive was to make paper clips and it burned down the planet to do that? This is so scary and it's a real thing you should be worried about!"
Governments: "Oh boy that's scary we'll make special rules to invest in your company so you can save us from this frightening possibility. Will we restrict or discourage AI use/development? Haha no that would be foolish!"
Me: "What if an inhuman handful of corporate charters took control of the world by manipulating human behavior but their core directive was to maximize venture capital and they burned down the planet to do that? ..."
Governments (sponsored by said corporations): "Send in the SWAT teams now, this idea is dangerous!"

@Tupp_ed@mastodon.ie
2025-05-30 07:36:51

The collapse in the volume of cars in London is pretty boggling, but even more so to me, a person who was in London recently and couldn’t get over how jammed the streets were with cars.
What must it have been like before the reduction?
masto.ai/@bovine3dom/114590440

@arXiv_csHC_bot@mastoxiv.page
2025-05-30 09:53:13

This arxiv.org/abs/2407.02381 has been replaced.
link: scholar.google.com/scholar?q=a

@prachisrivas@masto.ai
2025-05-28 14:16:39

Look what came in the post!
Transforming Development in Education: From Coloniality to Rethinking, Reframing and Reimagining Possibilities with my chapter:
'Why is Epistemic Humility Provocative? A reflexive story'.
Through storytelling, I address epistemic humility as the explicit acknowledgement of the limits of knowledge and raise questions of 'just survival' and 'positionalities at play'.
Video:

@timbray@cosocial.ca
2025-05-26 23:47:41

That… is quite a number. I wonder what the return on that investment will look like?
finance.yahoo.com/news/how-nvi

@scott@carfree.city
2025-06-29 22:54:11

the Plant Labeler strikes again
this guy is going all over SF labeling sidewalk plants with a QR code to his vibe-coded AI landscaping startup. he thinks you’ll think he created these gardens
he didn’t, and can’t even tell what are weeds: he’s labeled invasive thistle and pellitory

sidewalk tree basin with a calla lily labeled with its scientific name. I scratched out the QR code in the picture

@rberger@hachyderm.io
2025-04-27 19:48:27

"I think it is a huge mistake for people to assume that they can trust AI when they do not trust each other. The safest way to develop superintelligence is to first strengthen trust between humans, and then cooperate with each other to develop superintelligence in a safe manner. But what we are doing now is exactly the opposite. Instead, all efforts are being directed toward developing a superintelligence."
#AGI #AI
wired.com/story/questions-answ

@barijaona@mastodon.mg
2025-05-27 11:02:38

An AI machine has no idea what it means to be human.
Instruct it not to address you by name. Ask it to call itself AI, to speak in the third person, and to avoid emotional or cognitive terms.
theconversation.com/we-need-to

@wordsbywesink@mstdn.social
2025-05-27 14:34:27

Earlier this month, the Copyright Office issued the third part of its report on AI: this one covering how generative AI may infringe on copyright and whether that's fair use (short answer: maybe, maybe not). I have a new article up summarizing the report.

@davidbody@fosstodon.org
2025-05-29 15:00:06

I just called AI bullshit on Marriott's web support. It's infuriating when a company can't be bothered to actually read what you send them, and responds with long, detailed instructions that contain made-up facts (I don't even have an iPhone), and that are unresponsive to a fully reproducible issue I repeatedly described in detail.
I don't think even the laziest, sloppiest human would produce a response like they just sent me.
And this is like the 10th message…

@tante@tldr.nettime.org
2025-06-23 10:00:04

"You can tell what happened — Google promised iNaturalist free money if they would just do something, anything, that had some generative AI in it. iNaturalist forgot why people contribute at all, and took the cash."
(Original title: Google bribes iNaturalist to use generative AI — volunteers quit in outrage)

@darkrat@chaosfurs.social
2025-05-28 09:34:47

What annoys me the most about the current state of AI
I've dabbled with ML ever since the very first Stanford ML class. That was almost 15 years ago.
On a technical level, it is mind-meltingly insane. Just the fact that the stuff that was released in just half a decade works *at all* the way it does is just... amazing.
There is genuinely amazing tech in there. Especially in pattern recognition and processing.
But then it gets tarnished by *gestures around broadly*

@Techmeme@techhub.social
2025-06-24 07:55:49

A look at 2025's AI models and what's ahead: OpenAI's o3 is a breakthrough, AI agents will improve randomly and in leaps, but scaling parameters will slow down (Nathan Lambert/Interconnects)
interconnects.ai/p/summertime-

There's a lot of pressure for businesses to get ahead with AI. And I imagine at many companies there's a sense that if you don't keep up, you're leaving innovation on the table.
At the same time, there's a gap between the excitement around AI and understanding what it means for each role.
CarGurus started an internal initiative, "AI Forward", to meet business units and functions where they are.
The group works together to evaluate u…

@johnleonard@mastodon.social
2025-06-26 15:13:31

A US judge has said that Meta’s use of copyrighted books to train its AI models constitutes "fair use" under US copyright law. It follows a similar judgement about Anthropic earlier this week, and will come as a disappointment to authors and other creators looking for compensation for what they see as use of their work without permission.

As an experiment, I uploaded an AI paper to ChatGPT and asked o3 to explain a specific parameter. Its answer started with
“In Generalized Reweighted PPO (GRPO),”
which is not what GRPO stands for.
Meanwhile, AI enthusiasts have started citing o3 as if they were citing an actual source. Including absurdly involved questions like “90% confidence interval” of “how many chips would likely be diverted from a G42 data center” if such-and-such.
(Am I an AI enthusiast thoug…

@tiotasram@kolektiva.social
2025-06-21 02:34:13

Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. In virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of manservant caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through. More than you would have if you were actually doing and taking pride in the work. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them "haha how stupid to not check whether the books the AI reviewed for you actually existed!" but on a deeper level if we're honest we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* if the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich that continue to drag us all down this destructive path, and I think it's worth some thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason that it's not, is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.

@arXiv_csHC_bot@mastoxiv.page
2025-05-30 09:57:21

This arxiv.org/abs/2505.17418 has been replaced.
initial toot: mastoxiv.page/@arXiv_csHC_…

@inthehands@hachyderm.io
2025-05-28 16:28:33

Right now, there is a •preposterous• amount of money devoted to propping up an AI industry that is causing massive environmental damage, sits on unethical foundations, is enriching some of the worst people on earth, nurses a whole mountain of fascist fantasies (hachyderm.io/@inthehands/11455), and isn’t even profitable.
For all those reasons, I’m really, really careful about what kind of hype I share. And — this is important! — I’m careful about what kind of hype I allow to get inside my own head.
7/

@ethanwhite@hachyderm.io
2025-04-23 10:36:18

“What we ultimately want, and what we believe we need, is a commons that is strong, resilient, growing, useful (to machines and to humans)—all the good things, frankly. But as our open infrastructures mature they become increasingly taken for granted, and the feeling that “this is for all of us” is replaced with “everyone is entitled to this”. While this sounds the same, it really isn’t. Because with entitlement comes misuse, the social contract breaks, reciprocation evaporates, and ultimately the magic weakens.”
Very glad to see that @… is working to address the deep challenges that have arisen at the intersection of the open movement and corporate AI.
creativecommons.org/2025/04/02
h/t @…

@pbloem@sigmoid.social
2025-06-26 10:41:24

New pre-print! #ai
**Universal pre-training by iterated random computation.**
⌨️🐒 A monkey behind a typewriter will produce the collected works of Shakespeare eventually.
💻🐒 But what if we put a monkey behind a computer?
⌨️🐒 needs to be lucky enough to type all characters of all of Shakespeare correctly. 💻🐒 only needs to be lucky enough to type a program for Shakespeare.

A table showing one string of random characters next to an emoji of a monkey next to a keyboard (representing a typewriter). Below it, three strings, also of random characters, but with more structure. Some characters and n-grams repeat. Next to these three strings is an emoji of a monkey next to a laptop computer. The caption reads: (⌨️🐒) A string of randomly sampled characters. (💻🐒) The result of passing this string through three randomly initialized neural network models. The latter data is …

@ruth_mottram@fediscience.org
2025-06-24 16:59:34

I used the acronym "LLMs" after the word "AI" in a school parents' meeting today and only one other parent (my husband) knew what it was.
Is it just a bit of jargon? Completely unknown? Or am I out of touch?
jargon but also a word people use
not jargon but a bit obscure
No idea what you're talking about
I'm a techbro who wants to know what people think

@inthehands@hachyderm.io
2025-05-28 16:28:33

Right now, there is a •preposterous• amount of money devoted to propping up an AI industry that is causing massive environmental damage, sits on unethical foundations, is enriching some of the worst people on earth, nurses a whole mountain of fascist fantasies (hachyderm.io/@inthehands/11455), and isn’t even profitable.
For all those reasons, I’m really, really careful about what kind of hype I share. And — this is important! — I’m careful about what kind of hype I allow to get inside my own head.
7/

@kubikpixel@chaos.social
2025-06-14 15:40:14

The Meta AI app is a privacy disaster
It sounds like the start of a 21st-century horror film: Your browser history has been public all along, and you had no idea. That’s basically what it feels like right now on the new stand-alone Meta AI app, where swathes of people are publishing their ostensibly private conversations with the chatbot. […]
😬

@davidaugust@mastodon.online
2025-06-26 00:17:02

Asked 5 different local AIs (not internet based nor connecting to the internet):
What is the average airspeed of a fully laden swallow?
And they attributed it to three novels: Alan Sillitoe's "The Lonely Voice" (which does not exist), Douglas Adams's "A Hitchhiker's Guide to the Galaxy," George Orwell's "Animal Farm," and also the correct answer, the movie Monty Python and the Holy Grail.
So as long as we're good with AI being 75%…

@ErikJonker@mastodon.social
2025-06-12 21:22:25

Meta AI is a disaster.
#meta

@gedankenstuecke@scholar.social
2025-06-24 02:21:46

«DOGE has provided a template for complete political and cultural rollback, exploiting AI's brittle affordances to trash any pretence at social contract. What the so-called educational offers from AI companies are actually doing is a form of cyberattack, building in the pathways for the hacker tactic of 'privilege escalation' to be used by future threat actors, especially those from a hostile administration.»
"The role of the University is to resist" by @…
danmcquillan.org/cpct_seminar.

@stefan@gardenstate.social
2025-06-26 16:07:41

Once AI takes over, the billionaire elite will try to get the definition of what a human is redefined.

@draxil@social.linux.pizza
2025-04-24 15:56:28

What a headline:
"OpenAI wants to buy Chrome and make it an “AI-first” experience"
😬
arstechnica.com/ai/2025/04/cha

@aral@mastodon.ar.al
2025-06-15 08:35:45

AI is just a tool, like a pencil. A pencil that can only write and draw what some faceless corporation somewhere allows you to write and draw.
#AI #theMastersTools

@gfriend@mas.to
2025-06-23 11:16:48

And in a strange parallel with what we’re doing to the _living_ world…
Maybe AI Slop Is Killing the Internet, After All - Bloomberg Businessweek apple.news/AntpOpqOKQMeKlRgY0q

@arXiv_csCY_bot@mastoxiv.page
2025-06-18 08:18:08

Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor
Alexandra Olteanu, Su Lin Blodgett, Agathe Balayn, Angelina Wang, Fernando Diaz, Flavio du Pin Calmon, Margaret Mitchell, Michael Ekstrand, Reuben Binns, Solon Barocas
arxiv.org/abs/2506.14652

@camerontw@social.coop
2025-05-25 04:14:47

problematic but genuine request:
I am looking for any account of how it is that Israel can have used unlimited US weaponry targeted by so-called AI for 18 months, but also of course deployed with genocidal indiscriminate-ness, on a tiny strip of land and still have an enemy to fight? What does this say about the exaggerated capabilities of modern warfare (in ways different to Russia's similar 'incompetence' [sorry for the poor phraseology])?

@tante@tldr.nettime.org
2025-06-17 12:40:05

"AI bots that scrape the internet for training data are hammering the servers of libraries, archives, museums, and galleries, and are in some cases knocking their collections offline"
#AI is ruining our digital world
(Original title: AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums)

@elduvelle@neuromatch.social
2025-06-18 17:19:31

This is obviously bad from #whatsapp, but also, the way the journalist describes what the chatbot does, as if it had intentions, is pretty bad too.
"It was the beginning of a bizarre exchange of the kind more and more people are having with AI systems, in which chatbots try to negotiate their way out of trouble, deflect attention from their mistakes and contradict themselve…

@servelan@newsie.social
2025-06-20 22:32:33

I tried tatting once; even that minimal exposure lets me know that ain't tatting on the cover. Not sure what it is, but reviews say it's an AI book. This is what we have to put up with now. #craft #crafting #tatting
Amazon.com: Mastering Shuttle Tatting: From Simple Stitches to Complex Creations: 9798333221711: Noah, Olivia: Books
amazon.com/Mastering-Shuttle-T

@arXiv_csDL_bot@mastoxiv.page
2025-05-26 07:17:10

Towards Industrial Convergence : Understanding the evolution of scientific norms and practices in the field of AI
Antoine Houssard
arxiv.org/abs/2505.17945

@tinoeberl@mastodon.online
2025-06-14 15:24:47

‼️ New #risk of unintentional #DataLeaks with Meta AI:
Mainly relevant, of course, if you're a #cat asking about a method that

@ruth_mottram@fediscience.org
2025-05-24 08:00:46

"Journalism screwed itself over by betting on Meta and its profit-over-society peers. Now, nobody trusts “the media” and everyone is going bankrupt. The next bet is on generative AI, with its inability to distinguish truth from “hallucinations” – fabrications that on the page become lies. The Continent is an attempt to prove journalism can be done differently. Expect more of this in what our team has decided to call our “serious era”. We’re no longer a start-up. We’re going to empower more people with quality journalism. We’re going to help others launch newspapers. We’re going to stay sane. And we’re going to prove that African excellence can set global standards."
@… reaches 5 years and 200 issues. If you're not already receiving their copy via @… (or email, Telegram or WhatsApp if you must) then it's really worth signing up to remind yourself just how big and diverse the world is.
This week's highlights include an extraordinary story about how Zanu-PF in Zimbabwe directly interfered with Mozambique's election, plus a frankly beautiful photo piece on Addis Ababa.

@Techmeme@techhub.social
2025-06-13 00:45:45

The public feed of the Meta AI app is filled with private and sensitive information, suggesting users might not be aware they are sharing their chats publicly (Amanda Silberling/TechCrunch)
techcrunch.com/2025/06/12/the-

@alethenorio@fosstodon.org
2025-05-25 21:39:21

The whole debacle about AI and copyright is very interesting. It sucks that AI risks affecting the income of a lot of people, especially in the Arts domain. But what we currently call AI is just the start of what, long term, has the potential to change our lives not unlike the internet did.

@seeingwithsound@mas.to
2025-06-12 10:56:17

We see what we have learned to see kaptur.co/we-see-what-we-have- via @…

AI bots that scrape the internet for training data are hammering the servers of libraries, archives, museums, and galleries, and are in some cases knocking their collections offline, according to a new survey published today. While the impact of AI bots on open collections has been reported anecdotally, this survey is the first attempt at measuring the problem, which in the worst cases can make valuable, public resources unavailable to humans because the…

@arXiv_mathHO_bot@mastoxiv.page
2025-06-25 08:18:00

The Unreasonable Effectiveness of Mathematical Experiments: What Makes Mathematics Work
Asvin G
arxiv.org/abs/2506.19787

@theDuesentrieb@social.linux.pizza
2025-06-10 08:20:59

I mean, nothing yet beats the Brazilian Institute of Oriental Studies, but the trend is there and it fits
velvetshark.com/ai-company-log

@shriramk@mastodon.social
2025-06-21 02:02:21

OMG Google, you're desperate. What you just did is exactly *why* I just started to pay for Kagi.

Searching for "set kagi as default chrome" pops up a window asking me to try the "AI Mode" in Google.

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-24 09:46:20

The Relationship between Cognition and Computation: "Global-first" Cognition versus Local-first Computation
Lin Chen
arxiv.org/abs/2506.17970

@tiotasram@kolektiva.social
2025-05-26 12:51:54

Let's say you find a really cool forum online that has lots of good advice on it. It's even got a very active community that's happy to answer questions very quickly, and the community seems to have a wealth of knowledge about all sorts of subjects.
You end up visiting this community often, and trusting the advice you get to answer all sorts of everyday questions you might have, which before you might have found answers to using a web search (of course web search is now full of SEO spam and other crap so it's become nearly useless).
Then one day, you ask an innocuous question about medicine, and from this community you get the full homeopathy treatment as your answer. Like, somewhat believable on the face of it, includes lots of citations to reasonable-seeming articles, except that if you know even a tiny bit about chemistry and biology (which thankfully you do), you know that the homeopathy answers are completely bogus and horribly dangerous (since they offer non-treatments for real diseases). Your opinion of this entire forum suddenly changes. "Oh my God, if they've been homeopathy believers all this time, what other myths have they fed me as facts?"
You stop using the forum for anything, and go back to slogging through SEO crap to answer your everyday questions, because once you realize that this forum is a community that's fundamentally untrustworthy, you realize that the value of getting advice from it on any subject is negative: you knew enough to spot the dangerous homeopathy answer, but you know there might be other such myths that you don't know enough to avoid, and any community willing to go all-in on one myth has shown itself to be capable of going all in on any number of other myths.
...
This has been a parable about large language models.
#AI #LLM

@fennek@cyberplace.social
2025-05-22 04:42:20

What has ... DRM (?!) ever done for us?
infosec.exchange/@dangoodin/11

@al3x@hachyderm.io
2025-06-20 19:24:40

I am struggling with how to deal with very long articles on topics I'm interested in, but at a different level than the author.
1. I'm acknowledging that "very long" is a "very" subjective matter. I'd say that for me that's usually what goes beyond a 5-minute read time.
2. I'm also acknowledging that it's quite impossible to find the perfect match between the level of detail provided by an article and the level of detail I'm interested in.
1. If I save the article for later, I know I won't read it.
2. Many times listening to the article (using ElevenReader) provides a solution.
3. I am starting to use AI summarization more often. Not Safari's, which is useless.
I don't feel quite right about this last approach.

@mxp@mastodon.acm.org
2025-06-11 11:01:30

This is of historical interest on so many levels.
When the notion of AI first appeared, chess was considered the epitome of intelligence (before that it was calculating)—which, of course, it isn’t. Now AI is all about pattern recognition, which certainly covers more of what we associate with human intelligence, but go figure, it’s not *just* pattern recognition either. ⇢

@newsie@darktundra.xyz
2025-06-17 10:01:26

AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums 404media.co/ai-scraping-bots-a

@timbray@cosocial.ca
2025-06-09 16:01:40

1/2 “My input stream is full of it: Fear and loathing and cheerleading and prognosticating on what generative AI means and whether it’s Good or Bad and what we should be doing. All the channels: Blogs and peer-reviewed papers and social-media posts and business-news stories. So there’s lots of AI angst out there, but this is mine. I think the following is a bit unique because it focuses on cost, working backward from there.”

@camerontw@social.coop
2025-05-25 04:14:47

problematic but genuine request:
I am looking for any account of how it is that Israel can have used unlimited US weaponry, targeted by so-called AI, for 18 months, but also of course deployed with genocidal indiscriminateness, on a tiny strip of land, and still have an enemy to fight? What does this say about the exaggerated capabilities of modern warfare (in ways different from Russia's similar 'incompetence' [sorry for the poor phraseology])?

@grifferz@social.bitfolk.com
2025-06-03 00:27:47

There's no putting the genie back in the bottle, but education is [yet another part of our society that is] completely unprepared to deal with generative AI.
Teachers Are Not OK
404media.co/teachers-are-not-o

@fell@ma.fellr.net
2025-05-16 16:01:23

Is it just me, or do LLMs have a strong tendency to try and guess what you want to hear and then tell you exactly that, regardless of factual accuracy?
#AI #ML #LLM

@teledyn@mstdn.ca
2025-06-10 17:25:16

A common comment I hear from AI fanbois is that you need to refine your prompt and ask many times. And so they do.
Each prompt consumes 16oz of water just to answer, maybe double if you used voice, and a massive amount of energy and natural resources, so conservatively, what does each and every prompt REALLY cost? We do know OpenAI spent $12M to extract $9M, so maybe $10/prompt?
How is this "good for business"?
anatomyof.ai

@berlinbuzzwords@floss.social
2025-05-12 11:17:07

Kyle Liu is the Head of Engineering at Mercari, a second-hand e-commerce marketplace based in Japan. His team has long used Elasticsearch for retrieval and DNN Learning to Rank for ranking. At #bbuzz, he will discuss how they re-architected their search system in response to developments in deep learning and LLMs, and how they successfully convinced internal stakeholders to adopt new…

Session title: AI and LLM strategies and application at Mercari Search
Kaiyi Liu
Join us from June 15-17 in Berlin or online / berlinbuzzwords.de

@khalidabuhakmeh@mastodon.social
2025-06-11 13:02:35

I just saw a post saying someone let their agent AI run for 19 hours to solve a problem.
Is that the expectation?
I’m going to predict a future scandal: A global consulting firm has been caught charging human hours for agentic AI elapsed time. If I were an unscrupulous consulting firm, I could charge customers 24 hours a day at ludicrously high prices to deliver what likely would have been an ill-advised and doomed project anyway.

@aredridel@kolektiva.social
2025-05-02 17:49:15

Watch GitHub for what happens to companies that go all-in on AI.
Notice all the details that are wrong now. Notice the docs. Notice the confusion between different interfaces. Notice the UI jank in new places. This is what happens when engineering decisions are made by default instead of discussion and planning. This is what happens when writing (already easier than editing) becomes even easier without improving editing.

@brentsleeper@sfba.social
2025-06-02 20:14:27

First #AI came for the artists, and I did not speak out—
Because I did not make art.
Then AI came for the coders, and I did not speak out—
Because I did not code.
Then AI came for the shitposters, and I did not speak out—
Because I did not post shit.
Then AI came for the #coffee

Photograph, in desaturated black and brown tones, of what appears to be temporary drywall paneling enclosing an area of a mall or other interior space that is undergoing construction or renovation. The paneling is painted black and is stenciled with white stark, sans serif lettering that reads “PREMIUM ROBOTIC COFFEE COMING SOON.”

@crell@phpc.social
2025-06-09 20:23:02

Apparently, the way to get to a human on the #Simplifi support chat instead of the Artificial Stupidity bot is to be rude to the Artificial Stupidity bot. Then it will figure out you're frustrated and offer to refer you to a human.
This is, of course, not what the bot or the written instructions say you need to do, but it's what works.

@jonippolito@digipres.club
2025-04-03 11:52:02

What does the death of Julius Caesar and probability have to do with the future of knowledge? Join me at noon EDT today for a webinar on how AI, like a JPEG, compresses information, risking a future where history is endlessly rewritten maine.zoom.us/webinar/register

A painting of the death of Julius Caesar

@pbloem@sigmoid.social
2025-06-10 07:31:42

This is an interesting take on how AI can, in specific cases and when used carefully, level the playing field.
tylertringas.com/ai-legal/
I think this is true in many situations, including education. We just have to get past the shortcuts, the blind faith and the hype,…

@Techmeme@techhub.social
2025-06-11 19:41:03

Meta adds AI video editing tools to the Edits app and Meta AI, letting users edit up to 10 seconds of a video with 50 preset prompts, free for a "limited time" (Emma Roth/The Verge)
theverge.com/news/685581/meta-

@kazys@mastodon.social
2025-04-13 23:09:19

Will Sora, the AI image generator, crush the meme?
Now that anybody can make a convincing political cartoon, what's the future for memes? Or is there a future for memes? Second and third are by me. The first is not.

@Duckbill4994@social.linux.pizza
2025-04-11 20:07:58

Why you should not use Windows. Part no. 1:
arstechnica.com/security/2025/

@arXiv_csCY_bot@mastoxiv.page
2025-06-26 07:42:10

That's Not the Feedback I Need! -- Student Engagement with GenAI Feedback in the Tutor Kai
Sven Jacobs, Maurice Kempf, Natalie Kiesler
arxiv.org/abs/2506.20433

@arXiv_csAI_bot@mastoxiv.page
2025-06-03 07:18:33

Dyna-Think: Synergizing Reasoning, Acting, and World Model Simulation in AI Agents
Xiao Yu, Baolin Peng, Ruize Xu, Michel Galley, Hao Cheng, Suman Nath, Jianfeng Gao, Zhou Yu
arxiv.org/abs/2506.00320

@eglassman@hci.social
2025-05-15 19:46:00

My largest remaining NSF grant, which was awarded by a competitive process on the recommendation of national experts, was terminated yesterday. The money would have paid for PhD students to invent better AI systems for everyday people who need programs written for them but can't or won't write them themselves. I'm sad that the rate of progress we make will slow down significantly, because all our progress is made public for everyone to benefit from. That's what

@scott@carfree.city
2025-06-04 19:30:45

And by “prevent crime,” what they mean is AI will be able to automatically yell at homeless people to move along! Innovation!
mastodon.social/@eff/114626708

In the video, two individuals approach the side of a parking ramp with a blanket. Keywords appear on the screen describing what the individuals in the shot are wearing and holding.

Soon after their arrival, an automated voice delivers a message: “Attention, the individual in the brown sweatshirt, and the individual wearing the black beanie near the parking garage entrance, please leave the premises immediately.”

@Techmeme@techhub.social
2025-06-07 06:21:10

AI research nonprofit EleutherAI releases the Common Pile v0.1, an 8TB dataset of licensed and open-domain text for AI models that it says is one of the largest (Kyle Wiggers/TechCrunch)
techcrunch.com/2025/06/06/eleu

@inthehands@hachyderm.io
2025-06-18 17:01:33

But here’s the thing: anyone could make music •before• gen AI. Some more skilled or more artistically successful than others, sure! But that’s not the point. •Doing it• is the point. •Living it• is the point.
A product that promises to generate it for you so that you neither do it nor live it is antithetical to the point, is hostile to the idea of art itself.
(Note: that’s exactly what the artists in the OP are •not• doing! They are all grabbing the AI and actively •doing• and •living• while poking at the curious new object.)
5/

@prachisrivas@masto.ai
2025-06-04 13:13:26

This looks very cool.
'OpenAIRE in collaboration with Area Science Park organizes a hands-on workshop titled “Where LEGO Meets FAIR Data,” designed to introduce the principles of FAIR data through a creative, interactive simulation using LEGO metaphors.'

@tiotasram@kolektiva.social
2025-06-21 05:46:51

Why AI can't possibly make you more productive; long
Addendum: for those in tech specifically, despite what it might seem like, now is an excellent time to be organizing a union in your workplace. In the current social order, it's one of the only & best ways to have any say in the upcoming AI employment debacles, and the solidarity that the organizing process engenders is amazing.

@davidaugust@mastodon.online
2025-06-14 15:43:24

I asked AI to design a challenge coin* for the U.S. Marines for their campaign in Los Angeles against the American people.
Here’s what it suggests. Are you few the proud yet?
*medallion recognizing membership/achievement/boost morale, often collected or exchanged.
#USpol #USMC

The image shows two sides of a proposed U.S. Marine Corps challenge coin.

The front side of the coin features a soldier in full gear standing over a person lying on the ground holding a sign that reads "FREE SPEECH," about to stomp on the person’s face. The text around the edge of the coin reads "OPERATION DOMESTIC SILENCE" at the top and "HONOR, COURAGE, COMPLIANCE" at the bottom.

The back side of the coin displays a scroll of the U.S. Constitution with the text "We the Peopl…

@rberger@hachyderm.io
2025-04-13 18:34:14

"Inspired by the political philosopher Albert Hirschman, figures including Goff, Thiel and the investor and writer Balaji Srinivasan have been championing what they call “exit” – the principle that those with means have the right to walk away from the obligations of citizenship, especially taxes and burdensome regulation. Retooling and rebranding the old ambitions and privileges of empires, they dream of splintering governments and carving up the world into hyper-capitalist, democracy-free havens under the sole control of the supremely wealthy, protected by private mercenaries, serviced by AI robots and financed by cryptocurrencies."
...
Our opponents know full well that we are entering an age of emergency, but have responded by embracing lethal yet self-serving delusions. Having bought into various apartheid fantasies of bunkered safety, they are choosing to let the Earth burn. Our task is to build a wide and deep movement, as spiritual as it is political, strong enough to stop these unhinged traitors. A movement rooted in a steadfast commitment to one another, across our many differences and divides, and to this miraculous, singular planet."
#OligarchApocalypse #Oligarchy
theguardian.com/us-news/ng-int

@Techmeme@techhub.social
2025-06-10 18:11:15

Interview with Craig Federighi and Greg Joswiak on Apple's struggles to ship AI features, demoed in 2024, with the "V1 Siri architecture" and work on a V2 model (Lance Ulanoff/TechRadar)
techra…

@tante@tldr.nettime.org
2025-06-03 11:33:18

The whole "we need better AI criticism" is a bit like "we need better AI regulation".
The actual statement is: the existing criticism/regulation is in my way; we need something that lets me do what I want.

@elduvelle@neuromatch.social
2025-06-07 11:55:00

Coming back from holidays in Scotland (😍) I wonder what is the best way to organize photographs into albums, edit them, maybe comment on them, and share them online with chosen people?
(in a private and controlled way, no AI training of any kind)
#PhotoSharing #PhotoSoftware

@arXiv_csHC_bot@mastoxiv.page
2025-06-09 07:40:52

What Comes After Harm? Mapping Reparative Actions in AI through Justice Frameworks
Sijia Xiao, Haodi Zou, Alice Qian Zhang, Deepak Kumar, Hong Shen, Jason Hong, Motahhare Eslami
arxiv.org/abs/2506.05687

@prachisrivas@masto.ai
2025-06-04 13:26:54

The FOMO about the 10-year University of Cambridge REAL Centre Anniversary Celebration Conference is *real*.
If you can't be there in person, join online.
Incredible lineup.
12 June 2025
eventbrite.co…

@inthehands@hachyderm.io
2025-06-09 16:13:42

“What AI sells is vastly different from what it delivers, particularly what it delivers out of the box.”
The post gives some great context on the study of “the difference between work-as-imagined (WAI) and work-as-done (WAD),” and says:
“If what we have to do to be productive with LLMs is to add a lot of scaffolding and invest effort to gain important but poorly defined skills, we should be able to assume that what we’re sold and what we get are rather different things. That gap implies that better designed artifacts could have better affordances, and be more appropriate to the task at hand.”
5/

@tante@tldr.nettime.org
2025-06-02 13:18:25

After the whole "I asked ChatGPT" as talk opener I've recently seen a lot of "Look, my kids are using AI to build their own games and that's beautiful" stuff in presentations.
Makes me sad that instead of wanting kids to learn how to build something they get taught to accept what the kinda-passable code generator craps out. What they learn is not how to conceptualize or build something, what they learn is that shit comes from nowhere if you just match your e…

@inthehands@hachyderm.io
2025-06-18 16:47:59

My knee-jerk reaction to this article from @… was a cynical [citation needed] re “artists from all mediums emphatically support the use of AI.”
On further reflection…well, that’s still my response to the eyeball-grabbing lede, yes, but the actual article is what I wish we’d had instead of this inane hype avalanche:
Artists poking at the new thing, playing with it, finding its possibilities, critiquing it, problematizing it, asking us to ask what it is, helping us see it with fresh eyes from many angles. infosec.exchange/@adamshostack

@pbloem@sigmoid.social
2025-06-03 12:42:10

Everybody complaining about getting hammered with #AI traffic seems to think that these are crawlers scraping for training data.
How likely is it that this is a complete misconception and this is all inference time?
Most public companies give their crawlers and RAG agents distinct user agent strings. But what about security services trawling through their data?

@inthehands@hachyderm.io
2025-06-09 16:42:33

All this brings me back to some text I was writing yesterday for my students, on which I’d appreciate any thoughtful feedback:
❝You can let the computer do the typing for you, but never let it do the thinking for you.
This is doubly true in the current era of AI hype. If the AI optimists are correct (the credible ones, anyway), software development will consist of humans critically evaluating, shaping, and correcting the output of LLMs. If the AI skeptics are correct, then the future will bring mountains of AI slop to decode, disentangle, fix, and/or rewrite. Either way, it is •understanding• and •critically evaluating• code — not merely •generating• it — that will be the truly essential ability. Always has been; will be even more so. •That• is what you are learning here.❞
11/

@arXiv_csHC_bot@mastoxiv.page
2025-06-16 08:01:09

Conversational AI as a Catalyst for Informal Learning: An Empirical Large-Scale Study on LLM Use in Everyday Learning
Nađa Terzimehić, Babette Bühler, Enkelejda Kasneci
arxiv.org/abs/2506.11789

@inthehands@hachyderm.io
2025-06-12 03:18:18

Sadly, @… is right — and the article she quotes is (I think) wrong when it says this:
❝In some real sense Meta's future depends on hiring and retaining those people, and Meta has already committed $65 billion to AI infrastructure this year. What's a few hundred million more?❞
Meta’s future does not in fact depend on hiring these specific researchers. This isn’t for the researchers, and it’s not for the research either. It’s for the investors. It’s making a show of being willing to be even more preposterously all-in on AI than anyone else, because investors are gaga for that shit. mas.to/@kims/11466606551179856

@inthehands@hachyderm.io
2025-06-09 16:18:00

I keep posting about how the AI hype bubble makes it almost impossible to have a reasonable conversation about LLMs, and it’s only when the bubble bursts that we can start thinking realistically about what if anything LLMs are actually good for in writing code.
That seems to be what Fred is getting at here: the massive gap between the hype and the reality means that the affordances of these tools fit neither the task at hand nor the tool’s own capabilities.
6/