Tootfinder

Opt-in global Mastodon full text search. Join the index!

@Techmeme@techhub.social
2025-06-13 00:45:45

The public feed of the Meta AI app is filled with private and sensitive information, suggesting users might not be aware they are sharing their chats publicly (Amanda Silberling/TechCrunch)
techcrunch.com/2025/06/12/the-

@ErikJonker@mastodon.social
2025-06-12 21:22:25

Meta AI is a disaster.
#meta

@seeingwithsound@mas.to
2025-06-12 10:56:17

We see what we have learned to see kaptur.co/we-see-what-we-have- via @…

@tante@tldr.nettime.org
2025-07-11 17:53:21

I did a short keynote for a bunch of composers and music people a few days ago on "AI". And since it landed well but wasn't recorded I'll try to redo it (maybe expand it marginally) and upload it.
It's about AI but more about the permission to do what feels right and humane anyway.
And it sounds weird to give people permission for something natural, but the reaction in that room felt like it was meaningful. That standing against the tide of "AI is wit…

@Techmeme@techhub.social
2025-06-11 19:41:03

Meta adds AI video editing tools to the Edits app and Meta AI, letting users edit up to 10 seconds of a video with 50 preset prompts, free for a "limited time" (Emma Roth/The Verge)
theverge.com/news/685581/meta-

@ErikJonker@mastodon.social
2025-07-13 08:32:34

Good explanation of MCP and A2A.
#ai

@inthehands@hachyderm.io
2025-06-12 03:18:18

Sadly, @… is right — and the article she quotes is (I think) wrong when it says this:
❝In some real sense Meta's future depends on hiring and retaining those people, and Meta has already committed $65 billion to AI infrastructure this year. What's a few hundred million more?❞
Meta’s future does not in fact depend on hiring these specific researchers. This isn’t for the researchers, and it’s not for the research either. It’s for the investors. It’s making a show of being willing to be even more preposterously all-in on AI than anyone else, because investors are gaga for that shit. mas.to/@kims/11466606551179856

@berlinbuzzwords@floss.social
2025-05-12 11:17:07

Kyle Liu is the Head of Engineering at Mercari, a second-hand e-commerce marketplace based in Japan. His team has been using Elasticsearch for retrieval and DNN Learning to Rank for ranking for a long time. At #bbuzz, he will discuss how they re-architected their search system in response to developments in deep learning and LLMs, and how they successfully convinced internal stakeholders to adopt new…

Session title: AI and LLM strategies and application at Mercari Search
Kaiyi Liu
Join us from June 15-17 in Berlin or online / berlinbuzzwords.de

@timbray@cosocial.ca
2025-06-09 16:01:40

1/2 “My input stream is full of it: Fear and loathing and cheerleading and prognosticating on what generative AI means and whether it’s Good or Bad and what we should be doing. All the channels: Blogs and peer-reviewed papers and social-media posts and business-news stories. So there’s lots of AI angst out there, but this is mine. I think the following is a bit unique because it focuses on cost, working backward from there.”

@thomastraynor@social.linux.pizza
2025-07-11 15:14:38

Yah, that is going to go over so well. About half my career is now supporting old stuff where the 'experts' claimed it would replace programmers. Most of what I work on is, to be charitable, unmaintainable and inefficient code unless we use the developer package from the vendor. It has the potential to be a good tool that generates parts of the boring code, but it needs someone who knows what the hell they are doing to make it secure and efficient! It also needs great (or at least …

@aral@mastodon.ar.al
2025-07-10 11:49:46

“The users who choose Cursor are hardcore vibe addicts. They are tech incompetents who somehow BSed their way into a developer job. They cannot code without a vibe coding bot.”
I see no lie.
pivot-to-ai.com/2025/07/09/cur

@arXiv_csAI_bot@mastoxiv.page
2025-07-11 09:53:01

Working with AI: Measuring the Occupational Implications of Generative AI
Kiran Tomlinson, Sonia Jaffe, Will Wang, Scott Counts, Siddharth Suri
arxiv.org/abs/2507.07935

@pbloem@sigmoid.social
2025-06-10 07:31:42

This is an interesting take on how AI can, in specific cases, when used carefully, be used to level the playing field.
tylertringas.com/ai-legal/
I think this is true in many situations, including education. We just have to get past the shortcuts, the blind faith and the hype,…

@mxp@mastodon.acm.org
2025-06-11 11:01:30

This is of historical interest on so many levels.
When the notion of AI first appeared, chess was considered the epitome of intelligence (before that it was calculating)—which, of course, it isn’t. Now AI is all about pattern recognition, which certainly covers more of what we associate with human intelligence, but go figure, it’s not *just* pattern recognition either. ⇢

@prachisrivas@masto.ai
2025-07-10 11:33:49

Just out! My new article, 'Reimagining Our Futures: Education and the Promise of Possibility' in Comparative Education Studies is an exercise in hope to collectively, consciously, and critically re-examine education and education systems.
We can harness education's potential of promise by questioning assumptions - which/whose knowledges are legitimised and valued; what has been counted, discounted or marginalised as evidence.

@theDuesentrieb@social.linux.pizza
2025-06-10 08:20:59

I mean, nothing yet beats the Brazilian Institute of Oriental Studies but the trend is there and it fits
velvetshark.com/ai-company-log

@teledyn@mstdn.ca
2025-06-10 17:25:16

A common comment I hear from AI fanbois is you need to refine your prompt, and need to ask many times. And so they do.
Each prompt consumes 16oz of water just to answer, maybe double if you used voice, and a massive amount of energy and natural resources, so conservatively, what does each and every prompt REALLY cost? We do know OpenAI spent $12M to extract $9M, so maybe $10/prompt?
How is this "good for business"?
anatomyof.ai

@Techmeme@techhub.social
2025-07-10 16:16:00

Sources: some AI researchers rejected Meta's offers to stay at jobs that align with their values; far fewer defected from Anthropic and DeepMind than OpenAI (Hayden Field/The Verge)
theverge.com/ai-artificial-int

@khalidabuhakmeh@mastodon.social
2025-06-11 13:02:35

I just saw a post saying someone let their agent AI run for 19 hours to solve a problem.
Is that the expectation?
I’m going to predict a future scandal: A global consulting firm has been caught charging human hours for agentic AI elapsed time. If I were an unscrupulous consulting firm, I could charge customers 24 hours a day at ludicrously high prices to deliver what likely would have been an ill-advised and doomed project anyway.

@publicvoit@graz.social
2025-07-09 07:31:58

"Zero-Click Prompt Injection":
calypsoai.com/insights/prompt-
So instead of trying to trick an employee via phishing

@gfriend@mas.to
2025-07-08 17:58:56

Is AI your ghost writer or creative partner? The key is using AI as a collaborator, not a replacement for your authentic voice. I'm finding ways to leverage AI while preserving what makes my perspective unique. How are you navigating this balance in _your_ work? 

@thomasfuchs@hachyderm.io
2025-07-09 13:27:54

No, computers won’t replace humans to write code for themselves.
Please stop with this nonsense.
What we will see, though, is tremendous losses in productivity as deskilled programmers get less and less education and practice—and take longer and longer to make broken AI-generated code work. Meanwhile, AI models will regress from being trained on their own generated shit.
Eventually AI companies will finally run out of investors to scam—and when they disappear or get so expensive they become unaffordable, “prompt engineers” will be asked to not use AI anymore.
What’s gonna happen then?
We’re losing a whole generation of programmers to this while thought leaders in our field are talking about “inevitability” and are jerking off to sci-fi-nostalgia-fueled fantasies of AGI.

@hex@kolektiva.social
2025-06-12 07:31:28

The liberal obsession with optics serves the right and persuades no one. There is literally an active ethnic cleansing happening in the US right now, and the only thing that matters is making that as hard as possible to carry out.
Anarchists destroying intelligence assets saves lives. Every escooter thrown at a cop car is one less escort for a goon too afraid to kidnap random brown people without being flanked by a branch full of bad apples. Spray paint is not violence. Vandalism is not violence. Community self defense in all forms is legitimate.
Make no mistake, these raids are about changing demographics. Demographic trends have been shifting blue for a long time, and the right has, for a long time, been blaming "white replacement." Conspiracy theory aside, Democrats have also been relying on the growth of black and brown voters as a block. The nuances of whiteness as an identity are lost on the current administration and their supporters. They see that "white people will be a minority by 2050" and equate that with the "end of Western Civilization."
The only way to "save Western Civilization" is to change those demographics. Forced birth and forced removal are two sides of the same white nationalist objective. Of course they can't have due process, because they need to be able to kidnap anyone who they see as a threat to their demographic future.
They don't care about optics. The plan is to murder away any threat and flood everyone else with propaganda. There is no mythical middle. There's no one unconvinced. They know this, but they win when democrats buy that myth and save the police the work of policing the protests.
If your protest is 90% "peaceful," they'll take pictures of the 10% that isn't. If it's 99% peaceful, they'll shoot rubber bullets and teargas until someone throws a brick and take 100 pictures from a dozen angles. If it's 100% "peaceful" and no one can be provoked, they'll generate pictures with AI or photoshop like they did during the George Floyd uprising and the pictures from the CHOP/CHAZ. Do you have literally no memory?
#USPol #FiftyFiftyOne #50501movement #resistance #NoKingsDay #NoKingsDayOfAction

@inthehands@hachyderm.io
2025-06-09 16:13:42

“What AI sells is vastly different from what it delivers, particularly what it delivers out of the box.”
The post gives some great context on the study of “the difference between work-as-imagined (WAI) and work-as-done (WAD),” and says:
“If what we have to do to be productive with LLMs is to add a lot of scaffolding and invest effort to gain important but poorly defined skills, we should be able to assume that what we’re sold and what we get are rather different things. That gap implies that better designed artifacts could have better affordances, and be more appropriate to the task at hand.”
5/

@arXiv_csCY_bot@mastoxiv.page
2025-06-10 16:24:59

This arxiv.org/abs/2502.00561 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCY_…

@crell@phpc.social
2025-06-09 20:23:02

Apparently, the way to get to a human on the #Simplifi support chat instead of the Artificial Stupidity bot is to be rude to the Artificial Stupidity bot. Then it will figure out you're frustrated and offer to refer you to a human.
This is, of course, not what the bot or the written instructions say you need to do, but it's what works.

@samvarma@fosstodon.org
2025-07-10 21:16:08

Seems to me that the smartest AI play for Apple is a small, locally running agent, that knows me.
They will never win creating a universal intelligence model, but a local agent that works extremely well and can go get the information I need from one of the other AI companies would be epic.
Privacy could be maintained, and Apple can just use the best of what's out there, and still keep control of the platform.

@pavelasamsonov@mastodon.social
2025-07-09 13:14:08

Every post on LinkedIn is speculating on what it means for Meta (a company that makes bad AI) to poach an AI guy from Apple (a company that also makes bad AI).

@michabbb@social.vivaldi.net
2025-07-10 15:41:45

ClaudeCode: When CLI is Too Much #AI #opensource #developer 🤯
😅 It's pretty funny what pops up in the

@arXiv_csHC_bot@mastoxiv.page
2025-06-09 07:40:52

What Comes After Harm? Mapping Reparative Actions in AI through Justice Frameworks
Sijia Xiao, Haodi Zou, Alice Qian Zhang, Deepak Kumar, Hong Shen, Jason Hong, Motahhare Eslami
arxiv.org/abs/2506.05687

@Techmeme@techhub.social
2025-06-10 18:11:15

Interview with Craig Federighi and Greg Joswiak on Apple's struggles to ship AI features, demoed in 2024, with the "V1 Siri architecture" and work on a V2 model (Lance Ulanoff/TechRadar)
techra…

@padraig@mastodon.ie
2025-05-10 02:26:16

Imagine your brain being so non-existent that you _must_ refer to an #AI slop pot to do some basic task, and even though the information that the slop pot produces is either inaccurate or flat-out false... you still use it as a response regardless. #AIBollocks

twitter / x screenshot of an initial tweet, with most of the content cut off asking the question at the end "Can someone explain to me what's going on?!"

With the first user responding stating "I asked GROK your question. Here's GROKs response." with the actual response edited out,

@theprivacydad@social.linux.pizza
2025-04-30 16:56:17

What is the difference between Microsoft Copilot and a Microsoft Copilot laptop??
This article is useful in light of yesterday's news about them bringing back Recall on Copilot hardware:
wired.com/story/what-is-copilo
There a…

@elduvelle@neuromatch.social
2025-07-06 13:55:21

Here is a poll about #GenAI since that's all we're talking about at the moment:
Do you believe that you can detect AI-generated text?
If so, what are your tips to detect it? I found this article which has a few suggestions :

@Techmeme@techhub.social
2025-07-12 17:36:02

xAI apologizes for Grok's "horrific behavior" when it wrote antisemitic posts on July 8, and blames "an update to a code path upstream of the Grok bot" (Anthony Ha/TechCrunch)
techcrunch.com/2025/07/12/xai-

@scott@carfree.city
2025-06-04 19:30:45

And by “prevent crime,” what they mean is AI will be able to automatically yell at homeless people to move along! Innovation!
mastodon.social/@eff/114626708

In the video, two individuals approach the side of a parking ramp with a blanket. Keywords appear on the screen describing what the individuals in the shot are wearing and holding.

Soon after their arrival, an automated voice delivers a message: “Attention, the individual in the brown sweatshirt, and the individual wearing the black beanie near the parking garage entrance, please leave the premises immediately.”

@inthehands@hachyderm.io
2025-06-09 16:42:33

All this brings me back to some text I was writing yesterday for my students, on which I’d appreciate any thoughtful feedback:
❝You can let the computer do the typing for you, but never let it do the thinking for you.
This is doubly true in the current era of AI hype. If the AI optimists are correct (the credible ones, anyway), software development will consist of humans critically evaluating, shaping, and correcting the output of LLMs. If the AI skeptics are correct, then the future will bring mountains of AI slop to decode, disentangle, fix, and/or rewrite. Either way, it is •understanding• and •critically evaluating• code — not merely •generating• it — that will be the truly essential ability. Always has been; will be even more so. •That• is what you are learning here.❞
11/

@compfu@mograph.social
2025-07-09 18:34:39

I've been listening to a podcast by the German public broadcaster ARD about the end of the world. Every episode had a different topic and one was about AI. It was mostly sourced from an interview with a YouTuber but one idea is now stuck in my head: what if AI doesn't launch nukes but develops into an all-powerful actor whose aims are not aligned with those of human survival? Do we have a precedent?
Yes. There are such super-human and quasi-immortal beings here on earth today…

A young Keanu Reeves with scruffy black hair, white t-shirt and red jacket goes "whoa".

@samir@functional.computer
2025-06-08 19:24:16

@… @… I think you are spot on with regards to AI. I cannot see how LLMs will get us anywhere close to what you’re describing, and I am sad that all the funding for AI/ML is now being steered in this direction.

@aardrian@toot.cafe
2025-07-08 20:43:19

What the fuck is it with #overlay companies and their apologists commenting on my blog?
I got two today (on the same post):
adrianroselli.com/2025/01/ftc-

@arXiv_csSE_bot@mastoxiv.page
2025-06-10 16:59:49

This arxiv.org/abs/2502.06994 has been replaced.
initial toot: mastoxiv.page/@arXiv_csSE_…

@dcm@social.sunet.se
2025-06-29 13:48:52

Some good venting by Steve Klabnik about the sorry state of significant chunks of the AI debate today:
"What is breaking my brain a little bit is that all of the discussion online around AI is so incredibly polarized. This isn’t a “the middle is always right” sort of thing either, to be clear. It’s more that both the pro-AI and anti-AI sides are loudly proclaiming things that are pretty trivially verifiable as not true."

@Techmeme@techhub.social
2025-07-10 05:36:18

xAI introduces Grok 4, trained on its Colossus supercomputer, featuring multimodal tools, faster reasoning, Grok 4 Voice, Grok 4 Code, a new interface, and more (Amanda Caswell/Tom's Guide)
tomsguide.com/ai/grok-4-is-her

@trochee@dair-community.social
2025-06-08 23:52:10

I know I shouldn't still be using this curséd website
but what is happening here with your lesson planning, Duo
Employ humans to write QA validations to keep these questions from being this silly
And stop using LLMs. Please.

A Duolingo puzzle page

The question reads 
Choisis l'option qui veut dire « parfait »

[Pick the option that means "parfait"]

The words below are available for selection:
Un chien parfait. Je l'ai vu en ligne, papa.

[A perfect dog. I saw it online, dad?…]

Both instances of the word "parfait" are circled in blue

@ELLIOTTCABLE@functional.cafe
2025-06-08 17:19:42

I’m unreasonably fucking pissed.
An r/me_irlgbt moderator banned me, and is now accusing me of being an A.I. … … … because I use fucking emdashes and ellipses.
#typography #AI #nightmaretimeline

a screenshot of a Reddit-messaging thread:

v me_irlgbt @ • 1d
Hi, mod that banned you in the first place here. What
tool did you use to write your comment?
v elliottcable • 1d
Uh, my fingers, on my iPhone. Although I guess in
2025 we're past being able to prove that.
Darkest fucking timeline.
• me_irlgbt !
• 1d
Are you telling me that you're the sole human that
actually types em dashes and ellipsis characters?
••.
• elliottcable • 1m
I really shouldn't be wasting my time on this thread,
but go…

@grifferz@social.bitfolk.com
2025-06-03 00:27:47

There's no putting the genie back in the bottle, but education is [yet another part of our society that is] completely unprepared to deal with generative AI.
Teachers Are Not OK
404media.co/teachers-are-not-o

@patrickquin@furry.engineer
2025-07-09 04:29:44
Content warning: Generative AI as religion, guts and bodily fluids

Generative AI is the new haruspicy, made into attempts to divine meaning out of feces-filled entrails. Too many peeps think they’re connecting to the divine when what they are grabbing is full of rot, waste and disease.

@joergi@chaos.social
2025-07-04 09:20:00

I want to slap the AI (or the developers) in its (their) face when it's repeating the same error over and over again....
sometimes the outcome is impressive, but sometimes the AI is behaving worse than a little kid and not understanding what I want
(luckily for the AI (and the developers) I'm not slapping anyone.. but I would love to)
#AI

@thomasfuchs@hachyderm.io
2025-07-09 15:52:40

So the main "arguments" when I say "AI doesn't work" and "it will collapse" are:
1. "You don't know what you're talking about"
2. "It's inevitable and here to stay, might as well go with the program"
3. "But it's almost there! Just last week they released [name of model] that is so close!"
Literally no one ever replies with any concrete examples of how it reliably, ethically and non-wastefully works for them to increase their productivity and improve their and other people's lives in any meaningful way.
It's always ad hominems, hypotheticals or deeply flawed "it sort of works for this".

@aredridel@kolektiva.social
2025-05-02 17:49:15

Watch Github for what happens for companies that go all-in on AI.
Notice all the details that are wrong now. Notice the docs. Notice the confusion between different interfaces. Notice the UI jank in new places. This is what happens when engineering decisions are made by default instead of discussion and planning. This is what happens when writing (already easier than editing) becomes even easier without improving editing.

@brentsleeper@sfba.social
2025-06-02 20:14:27

First #AI came for the artists, and I did not speak out—
Because I did not make art.
Then AI came for the coders, and I did not speak out—
Because I did not code.
Then AI came for the shitposters, and I did not speak out—
Because I did not post shit.
Then AI came for the #coffee

Photograph, in desaturated black and brown tones, of what appears to be temporary drywall paneling enclosing an area of a mall or other interior space that is undergoing construction or renovation. The paneling is painted black and is stenciled with white stark, sans serif lettering that reads “PREMIUM ROBOTIC COFFEE COMING SOON.”

@rberger@hachyderm.io
2025-04-27 19:48:27

"I think it is a huge mistake for people to assume that they can trust AI when they do not trust each other. The safest way to develop superintelligence is to first strengthen trust between humans, and then cooperate with each other to develop superintelligence in a safe manner. But what we are doing now is exactly the opposite. Instead, all efforts are being directed toward developing a superintelligence."
#AGI #AI
wired.com/story/questions-answ

@fgraver@hcommons.social
2025-06-29 11:32:16

It’s true that my fellow students are embracing AI – but this is what the critics aren’t seeing theguardian.com/commentisfree/

There's a lot of pressure for businesses to get ahead with AI. And I imagine at many companies there's a sense that if you don't keep up, you're leaving innovation on the table. At the same time, there's a gap between the excitement around AI and understanding what it means for each role. CarGurus started an internal initiative, "AI Forward," to meet business units and functions where they are. The group works together to evaluate u…

@timbray@cosocial.ca
2025-07-06 19:16:08

In which I argue that arguments about whether #genAI is useful or not are the wrong arguments. The important issues are what it’s for and what it costs.
Having very unkind feelings about the people pushing it.

@fanf@mendeddrum.org
2025-07-01 17:42:03

from my link log —
Three cases against IF NOT EXISTS / IF EXISTS in PostgreSQL DDL.
postgres.ai/blog/20211103-thre
saved 2021-11-12

@kubikpixel@chaos.social
2025-06-14 15:40:14

The Meta AI app is a privacy disaster
It sounds like the start of a 21st-century horror film: Your browser history has been public all along, and you had no idea. That’s basically what it feels like right now on the new stand-alone Meta AI app, where swathes of people are publishing their ostensibly private conversations with the chatbot. […]
😬

@arXiv_csLG_bot@mastoxiv.page
2025-06-05 10:57:51

This arxiv.org/abs/2505.21677 has been replaced.
initial toot: mastoxiv.page/@arXiv_csLG_…

@hllizi@hespere.de
2025-05-30 17:19:25

"They ran the bare job titles through GPT, without looking at the details of the specific jobs, and got the chatbot to guess what those titles would have meant. Then they decided the chatbot could do most of the jobs. They were, after all, using the chatbot to do their job."
Rule: your job can successfully be taken over by a chatbot if it comes with no accountability.

@ethanwhite@hachyderm.io
2025-04-23 10:36:18

“What we ultimately want, and what we believe we need, is a commons that is strong, resilient, growing, useful (to machines and to humans)—all the good things, frankly. But as our open infrastructures mature they become increasingly taken for granted, and the feeling that “this is for all of us” is replaced with “everyone is entitled to this”. While this sounds the same, it really isn’t. Because with entitlement comes misuse, the social contract breaks, reciprocation evaporates, and ultimately the magic weakens.”
Very glad to see that @… is working to address the deep challenges that have arisen at the intersection of the open movement and corporate AI.
creativecommons.org/2025/04/02
h/t @…

@tante@tldr.nettime.org
2025-07-04 10:05:36

"Whatever" is a brilliant essay on "AI" by @…:
"But I think the core of what pisses me off is that selling this magic machine requires selling the idea that doing things is worthless. Because if doing something has some value, then it must be somehow better than pushing a button and receiving Whatever for essentially no cost."

@johnleonard@mastodon.social
2025-06-26 15:13:31

A US judge has said that Meta’s use of copyrighted books to train its AI models constitutes "fair use" under US copyright law. It follows a similar judgement about Anthropic earlier this week, and will come as a disappointment to authors and other creators looking for compensation in what they see as use of their work without permission.

@wordsbywesink@mstdn.social
2025-05-27 14:34:27

Earlier this month, the Copyright Office issued the third part of its report on AI: this one covering how generative AI may infringe on copyright and whether that's fair use (short answer: maybe, maybe not). I have a new article up summarizing the report.

@inthehands@hachyderm.io
2025-06-09 16:18:00

I keep posting about how the AI hype bubble makes it almost impossible to have a reasonable conversation about LLMs, and it’s only when the bubble bursts that we can start thinking realistically about what if anything LLMs are actually good for in writing code.
That seems to be what Fred is getting at here: the massive gap between the hype and the reality means that the affordances of these tools fit neither the task at hand nor the tool’s own capabilities.
6/

@prachisrivas@masto.ai
2025-06-04 13:13:26

This looks very cool.
'OpenAIRE in collaboration with Area Science Park organizes a hands-on workshop titled “Where LEGO Meets FAIR Data,” designed to introduce the principles of FAIR data through a creative, interactive simulation using LEGO metaphors.'

@arXiv_csCY_bot@mastoxiv.page
2025-07-10 09:17:31

Winning and losing with Artificial Intelligence: What public discourse about ChatGPT tells us about how societies make sense of technological change
Adrian Rauchfleisch, Joshua Philip Suarez, Nikka Marie Sales, Andreas Jungherr
arxiv.org/abs/2507.06876

@elduvelle@neuromatch.social
2025-06-07 11:55:00

Coming back from holidays in Scotland (😍) I wonder what is the best way to organize photographs in albums, edit them, maybe comment them and share them online with chosen people?
(in a private and controlled way, no AI training of any kind)
#PhotoSharing #PhotoSoftware

@ruth_mottram@fediscience.org
2025-06-24 16:59:34

I used the acronym "LLMs" after the word "AI" in a school parents' meeting today and only one other parent (my husband) knew what it was.
Is it just a bit of jargon? Completely unknown? Or am I out of touch?
jargon but also a word people use
not jargon but a bit obscure
No idea what you're talking about
I'm a techbro who wants to know what people think

@teledyn@mstdn.ca
2025-07-08 15:55:04

Proof that Nature is capable of clickbait headlines; for the impatient, the final word seems to be that LLMs are fine and useful *PROVIDED* you already know the answer.
Nature: "AI ‘scientists’ joined these research teams: here’s what happened"
nature.com/articles/d41586-025

@tomkalei@machteburch.social
2025-07-11 13:46:00

There are these quite bold claims that LRMs are very adept at doing MY JOB (as a mathematician) [1].
Teaching and admin aside, what is it that we do in math and why?
Asvin G argues in [2] that we describe patterns in computation and predict results of computation much like physics predicts events in the world.
What do you think?
[1] scientificamerican.com/article
[2] arxiv.org/abs/2506.19787

As an experiment, I uploaded an AI paper to ChatGPT and asked o3 to explain a specific parameter. Its answer started with
“In Generalized Reweighted PPO (GRPO),”
which is not what GRPO stands for.
Meanwhile, AI enthusiasts have started citing o3 as if they were citing an actual source. Including absurdly involved questions like “90% confidence interval” of “how many chips would likely be diverted from a G42 data center” if such-and-such.
(Am I an AI enthusiast thoug…

@wfryer@mastodon.cloud
2025-07-03 02:39:10

(1/2) Check out the EdTech Situation Room Episode 347 “DeepSeek Disruption”
edtechsr.com/2025/07/02/edtech
Also on SubStack:

Square podcast cover image featuring a modern, abstract collage-style design. The background is light with subtle glowing circuit patterns representing artificial intelligence. A globe icon suggests global technology competition, while a large screen symbolizes the rise of YouTube and digital media. A padlock icon conveys cybersecurity and smart home risks. Bold, clean icons are visually balanced across the composition. Overlaid at the top in large, well-designed font is the podcast title: "EdT…

@crell@phpc.social
2025-06-08 03:54:04

Social media isn't the problem.
"Engagement maximizing" algorithms deciding what you see is the problem.
This has been true for 20 years now, and we still haven't addressed the real problem.
We've been subject to the "paperclip maximizer" AI threat for 20 years, and most people still don't realize it. The few that do have been bribed to not realize it.
(Preaching to the choir on Mastodon, I imagine, but still...)

@Techmeme@techhub.social
2025-06-07 06:21:10

AI research nonprofit EleutherAI releases the Common Pile v0.1, an 8TB dataset of licensed and open-domain text for AI models that it says is one of the largest (Kyle Wiggers/TechCrunch)
techcrunch.com/2025/06/06/eleu

@arXiv_csAI_bot@mastoxiv.page
2025-07-04 08:29:11

What Neuroscience Can Teach AI About Learning in Continuously Changing Environments
Daniel Durstewitz, Bruno Averbeck, Georgia Koppe
arxiv.org/abs/2507.02103

@tante@tldr.nettime.org
2025-06-03 11:33:18

The whole "we need better AI criticism" is a bit like "we need better AI regulation".
The actual statement is: The existing criticism/regulation is in my way, we need something that lets me do what I want.

@pbloem@sigmoid.social
2025-06-03 12:42:10

Everybody complaining about getting hammered with #AI traffic seems to think that these are crawlers scraping for training data.
How likely is it that this is a complete misconception and this is all inference time?
Most public companies give their crawlers and RAG agents different user agent strings. But what about security services trawling through their data?
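One rough way to eyeball this from your own logs (a minimal sketch, not from the post itself: it assumes a combined-format access log and a hand-picked list of declared crawler user agent tokens such as GPTBot or CCBot):

    # Tally requests by user agent from a web server access log, splitting
    # traffic into "declared AI crawlers" vs. everything else. The substrings
    # below are examples of publicly documented crawler tokens; adjust to taste.
    import re
    from collections import Counter

    AI_CRAWLER_TOKENS = ("GPTBot", "CCBot", "ClaudeBot", "PerplexityBot")

    # Combined log format: the user agent is the last quoted field on each line.
    UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

    def tally(log_path: str) -> tuple[Counter, Counter]:
        crawlers, other = Counter(), Counter()
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                match = UA_PATTERN.search(line)
                if not match:
                    continue
                ua = match.group(1)
                bucket = crawlers if any(t in ua for t in AI_CRAWLER_TOKENS) else other
                bucket[ua] += 1
        return crawlers, other

    if __name__ == "__main__":
        crawlers, other = tally("access.log")
        print("Declared AI crawlers:", sum(crawlers.values()))
        print("Everything else:     ", sum(other.values()))
        for ua, n in other.most_common(10):
            print(f"{n:8d}  {ua}")

Inference-time fetches (an assistant retrieving a page on demand to answer a prompt) tend to show up with browser-like user agents rather than declared crawler tokens, which is exactly the ambiguity the post is asking about.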

@timbray@cosocial.ca
2025-07-01 18:35:38

“At the moment, we have no idea what the impact of genAI on software development is going to be. The impact of anything on coding is hard to measure systematically, so we rely on anecdata and the community’s eventual consensus. So, here’s my anecdata. Tl;dr: The AI was not useless.” tbray…

@prachisrivas@masto.ai
2025-06-04 13:26:54

The FOMO about the 10-year University of Cambridge REAL Centre Anniversary Celebration Conference is *real*.
If you can't be there in person, join online.
Incredible lineup.
12 June 2025
eventbrite.co…

@aral@mastodon.ar.al
2025-06-15 08:35:45

AI is just a tool, like a pencil. A pencil that can only write and draw what some faceless corporation somewhere allows you to write and draw.
#AI #theMastersTools

AI bots that scrape the internet for training data are hammering the servers of libraries, archives, museums, and galleries,
and are in some cases knocking their collections offline,
according to a new survey published today.
While the impact of AI bots on open collections has been reported anecdotally,
this survey is the first attempt at measuring the problem,
which in the worst cases can make valuable, public resources unavailable to humans
because the…

@tante@tldr.nettime.org
2025-06-23 10:00:04

"You can tell what happened — Google promised iNaturalist free money if they would just do something, anything, that had some generative AI in it. iNaturalist forgot why people contribute at all, and took the cash."
(Original title: Google bribes iNaturalist to use generative AI — volunteers quit in outrage)

@arXiv_csAI_bot@mastoxiv.page
2025-06-03 07:18:33

Dyna-Think: Synergizing Reasoning, Acting, and World Model Simulation in AI Agents
Xiao Yu, Baolin Peng, Ruize Xu, Michel Galley, Hao Cheng, Suman Nath, Jianfeng Gao, Zhou Yu
arxiv.org/abs/2506.00320

@timbray@cosocial.ca
2025-05-26 23:47:41

That… is quite a number. I wonder what the return on that investment will look like?
finance.yahoo.com/news/how-nvi

@tante@tldr.nettime.org
2025-06-02 13:18:25

After the whole "I asked ChatGPT" as talk opener I've recently seen a lot of "Look, my kids are using AI to build their own games and that's beautiful" stuff in presentations.
Makes me sad that instead of wanting kids to learn how to build something they get taught to accept what the kinda-passable code generator craps out. What they learn is not how to conceptualize or build something, what they learn is that shit comes from nowhere if you just match your e…

@Techmeme@techhub.social
2025-07-01 19:40:50

Source: Mark Zuckerberg has, on 10 occasions, offered to pay AI research talent up to $300M over four years, with $100M in compensation for the first year (Zoë Schiffer/Wired)
wired.com/story/mark-zuckerber

@inthehands@hachyderm.io
2025-05-28 16:28:33

Right now, there is a •preposterous• amount of money devoted to propping up an AI industry that is causing massive environmental damage, sits on unethical foundations, is enriching some of the worst people on earth, nurses a whole mountain of fascist fantasies (hachyderm.io/@inthehands/11455), and isn’t even profitable.
For all those reasons, I’m really, really careful about what kind of hype I share. And — this is important! — I’m careful about what kind of hype I allow to get inside my own head.
7/

@arXiv_csAI_bot@mastoxiv.page
2025-06-05 09:41:26

This arxiv.org/abs/2506.00202 has been replaced.
initial toot: mastoxiv.page/@arXiv_csAI_…

@prachisrivas@masto.ai
2025-05-28 14:16:39

Look what came in the post!
Transforming Development in Education: From Coloniality to Rethinking, Reframing and Reimagining Possibilities with my chapter:
'Why is Epistemic Humility Provocative? A reflexive story'.
Through storytelling, I address epistemic humility as the explicit acknowledgement of the limits of knowledge and raise questions of 'just survival' and 'positionalities at play'.
Video:

@tante@tldr.nettime.org
2025-06-17 12:40:05

"AI bots that scrape the internet for training data are hammering the servers of libraries, archives, museums, and galleries, and are in some cases knocking their collections offline"
#AI is ruining our digital world
(Original title: AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums)

@arXiv_csAI_bot@mastoxiv.page
2025-06-03 07:17:04

What do professional software developers need to know to succeed in an age of Artificial Intelligence?
Matthew Kam, Cody Miller, Miaoxin Wang, Abey Tidwell, Irene A. Lee, Joyce Malyn-Smith, Beatriz Perez, Vikram Tiwari, Joshua Kenitzer, Andrew Macvean, Erin Barrar
arxiv.org/abs/2506.00202

@Techmeme@techhub.social
2025-07-05 06:35:52

The term "superintelligence" is becoming increasingly popular among AI leaders, even as many in the industry question whether it is ill-defined and overhyped (Shirin Ghaffary/Bloomberg)

@inthehands@hachyderm.io
2025-05-30 21:58:22

Here's the addendum. Brown writes:
“There exists no coherent notion of what AI is or could be.”
There is in fact a perfectly coherent definition of AI — one that does not refute Brown’s point, but rather proves it.
2/

@Techmeme@techhub.social
2025-06-24 07:55:49

A look at 2025's AI models and what's ahead: OpenAI's o3 is a breakthrough, AI agents will improve randomly and in leaps, but scaling parameters will slow down (Nathan Lambert/Interconnects)
interconnects.ai/p/summertime-

@tante@tldr.nettime.org
2025-06-02 13:23:24

That's maybe the worst thing about AI generators: In a way the whole system is set up to gaslight you into believing that you wanted what it created (because otherwise the magic goes poof).

@inthehands@hachyderm.io
2025-06-18 17:01:33

But here’s the thing: anyone could make music •before• gen AI. Some more skilled or more artistically successful than others, sure! But that’s not the point. •Doing it• is the point. •Living it• is the point.
A product that promises to generate it for you so that you neither do it nor live it is antithetical to the point, is hostile to the idea of art itself.
(Note: that’s exactly what the artists in the OP are •not• doing! They are all grabbing the AI and actively •doing• and •living• while poking at the curious new object.)
5/