Tootfinder

Opt-in global Mastodon full text search. Join the index!

@Techmeme@techhub.social
2025-12-11 10:04:22

An Ai2 research scientist argues that AGI, as commonly conceived, will not emerge because it ignores, among other things, the physical realities of computation (Tim Dettmers)
timdettmers.com/2025/12/10/why

@Erikmitk@mastodon.gamedev.place
2026-01-12 07:34:49

I completely disagree with the premise of this piece but I agree with its overall conclusion… so this is awkward.
It's plain wrong for me to claim AGI is here and then only focus on LLMs being useful in a general sense.
Intelligence is only brought up as a segue to ask what technology ultimately should be used for.
Discuss the question at the end, for sure, but the first part is wholly unnecessary since the conclusion (here's the twist) is kinda general in its…

@Techmeme@techhub.social
2025-11-11 14:15:55

Microsoft AI CEO Mustafa Suleyman says his new superintelligence team will build "frontier-grade research capability" and Microsoft needs AI self-sufficiency (Ashley Stewart/Business Insider)
businessinsider.com/microsoft-

@ErikJonker@mastodon.social
2025-12-10 18:46:06

Interesting how Poetiq (company) can improve on the performance of the standard Gemini 3.0 Pro model by adding refinements and tricks. It leads to a 9% improvement on the ARC-AGI-2 Benchmark.
poetiq.ai/posts/arcagi_verifie

@almad@fosstodon.org
2025-11-06 03:58:09

I think it’s worth reiterating how good this article about #AI and AGI is
technologyreview.com/2025/10/3

@mapto@qoto.org
2025-11-04 05:18:13

Finally, MIT Technology Review has called AGI what it is: a conspiracy theory. technologyreview.com/2025/10/3
Yet, just recently MS CEO Satya Nadella so…

@Techmeme@techhub.social
2025-12-12 00:55:47

On OpenAI's 10th anniversary, Sam Altman reflects on "a decade of breakthroughs, learnings, and the path toward AGI that benefits all of humanity" (OpenAI)
openai.com/index/ten-years/

@hynek@mastodon.social
2025-12-10 16:36:22

can’t wait for ai bros to argue that csam laws are holding agi back 404media.co/a-developer-accide

@esoriano@social.linux.pizza
2025-11-01 11:59:50

It's a religion
“It’s a religion. We believe in technology. Technology is God. It’s really hard to push back against it. People don’t want to hear it.”
How AGI became the most consequential conspiracy theory of our time.
MIT Tech. Review:

@heiseonline@social.heise.de
2025-12-15 14:02:00

KI-Update kompakt: Google Labs "Disco", AI regulation, AGI, psychotherapy
The "KI-Update" delivers a summary of the most important AI developments three times a week.

@publicvoit@graz.social
2025-12-15 07:08:09

"In summary, #AGI, as commonly conceived, will not happen because it ignores the physical constraints of computation, the exponential costs of linear progress, and the fundamental limits we are already encountering. Superintelligence is a fantasy because it assumes that intelligence can recursively self-improve without bound, ignoring the physical and economic realities that constrain all systems. …

@hynek@mastodon.social
2025-12-07 04:05:47

Oura just told me to get ready for bedtime at 5am in the morning so I guess computers can get jet lag now too. #agi
(Yes, I’m physically back home; expect an exciting stamina release that was ready before I left but was too chicken to push before travel.)

As 2025 comes to a close, I thought it might be worth revisiting a fascinating social media post from the Silicon Valley pro-extinctionist #Daniel #Faggella.
He espouses the radical view that
👉we should build a
“worthy successor”
in the form of

@dnddeutsch@pnpde.social
2025-11-07 14:45:43

Im "balanced" Modus wird das Chaos der Charaktererstellung ein wenig gebändigt und versucht eine Figur mit hoher Toughness (Tank), eine mit hoher Stärke (Zweihand-Striker), eine mit hoher Geschicklichkeit (Agi-Striker) und zwei mit hoher Präsenz (Fernkämpfer, Zauberwirker) zu erzeugen. Das haut nicht immer hin, aber der nächste Kampftrupp ist auch immer nur ein F5 entfernt
Namen, Titel und Geschlechter werden übrigens genauso chaotisch gemixt wie der ganze Rest 🦄

@lpryszcz@genomic.social
2025-11-30 20:00:05

"Because sometimes the most sophisticated position is recognizing that you might be trying to solve the wrong kind of problem...
The real danger isn’t that machines will become intelligent—it’s that we’ll mistake impressive computation for understanding and surrender our judgment to those who control the servers.
The circus continues. The ground approaches. And some of us are paying attention to the actual distance.
"

@thomasfuchs@hachyderm.io
2025-12-03 14:20:29

I think the root of the “AI” evil is when AI researchers in the 1960s recognized that they outrageously underestimated the complexity of the human mind.
They became humiliated by their promises that AGI was just a few years away—and then went full goblin mode that’s lasting to this day.
Some of the OG researchers took it quite badly that they stalled and weren’t in the limelight anymore.
‣ Marvin Minsky (co-founder of MIT AI lab and arguably the most important early AI bro) went on to visit Epstein’s island multiple times.
‣ Karl Steinbuch, who came up with the German term for computer science ("Informatik")—who also was a literal Nazi (and likely war criminal) in World War II—later wrote articles in ultra-right magazines about things like “equal rights rob women of their children”.
‣ John McCarthy (inventor of Lisp, co-authored document that coined the term “Artificial Intelligence”) was a staunch Republican who years later claimed (in a serious article) that “thermostats have beliefs”.
[one moment, I am receiving more information]
‣ There’s a second Epstein Island AI pioneer? Who also was Chief Learning Officer at… Trump University? That would be Roger Schank (founded one of the first AI companies in the 1980s AI boom, it even had an IPO. Of course the 1980s AI bubble burst).
Obviously all of the above received all the awards in computer science and are very revered people.

@ErikJonker@mastodon.social
2025-12-03 12:51:52

Great documentary about Demis Hassabis working on AGI. Really about history and humans, working on a hard problem, less about the technology itself. But that makes it interesting to watch for everybody. Also it contains a clear warning about coming AGI.
youtu.be/d95J8yzvjbQ?si=BsfJDJ

@almad@fosstodon.org
2025-12-18 23:03:34

Such sad headlines 🥲👏
gizmodo.com/elon-musk-predicts

@hw@fediscience.org
2025-11-20 08:18:31

This website illustrates nicely how the US lost the competition–at least for now–in open(ish) LLM models: #AIResearch #AGI_hype
/via Wired

@Techmeme@techhub.social
2025-11-06 03:10:44

A profile of OpenAI President Greg Brockman and his role in the company's $1.4T infrastructure buildout that's required to reach AGI (Sharon Goldman/Fortune)
fortune.com/2025/11/05/openai-

@Mediagazer@mstdn.social
2025-12-02 14:40:37

Luma AI opens its first international office in London and appoints former WPP and Monks executive Jason Day to lead its global expansion outside the US (Georg Szalai/The Hollywood Reporter)
hollywoodreporter.com/business

@marcel@waldvogel.family
2025-12-02 06:55:09

Good morning. It is, surprisingly, Tuesday again, and thus #DNIPBriefing day!
1️⃣ We take a close look at the claims by Musk, Zuckerberg, and Altman that artificial superintelligence, or Artificial General Intelligence (AGI), might arrive as early as 2026.
An excellent article from The Verge illustrates this wonderfully.
All chatbots are based…

@PaulWermer@sfba.social
2025-12-02 16:50:50

From the article: "He added: “It’s something where it’s moving very quickly and people don’t necessarily have time to absorb it or figure out what to do.”"
That impersonal, natural "it" is moving - Not the people developing the models and selling them (or inflicting them), not the wealthy investors demanding market share, not the sci-fi addled techbros (Muskrats?) imagining that if only we get AGI all problems will be solved tomorrow, - all wanting to be first. Oh…

@hex@kolektiva.social
2025-11-27 22:32:54

I was just thinking about how the fact that #Musk named his AI "Grok" is evidence that he "reads sci-fi" in the same way he "plays video games." Like, he claims to do it but when it comes time to show the evidence it's clear he does not actually "grok" it.
Like... To grok something is to have a layer deeper than simply knowledge, but mathematically encoding statistical relationships between words is pretty obviously not even understanding, much less qualifying as "grokking" it. In the book, the ability to grok something is also the ability to annihilate that thing with a thought. Just pretending that an LLM actually *was* something that could become AGI (which it's not), this name would imply the AI would have the power to annihilate reality. That's bad. That's a bad name for an AI.
And why would a greedy fascist name something of his after something an anarchist communist space Jesus taught to the hippie cult he started? There are so many layers of facepalm to this. It's some kind of php-esque fractal of incompetence.
Like, there's no reason to talk about this but my brain does this to me sometimes and now it's your problem.

@pbloem@sigmoid.social
2025-10-28 15:39:15

Let's do a deep dive into this paper: "Why Language Models Hallucinate."
When this came out, many people's summary was "even OpenAI admits that hallucinations are a fundamental problem of transformers/autoregressive models/LLMs."
I've seen many people conclude that this means OpenAI is grifting, knows they're building on a wrong paradigm, knows they won't get to AGI etc.

The first page of the paper "Why Language Models Hallucinate" by Kalai et al. Three of the authors are from OpenAI. Parts of the paper are highlighted.
@Techmeme@techhub.social
2026-01-07 07:21:19

An interview with Google DeepMind CTO Koray Kavukcuoglu on his new role as Google's chief AI architect, Gemini 3, progress toward the goal of AGI, and more (Melissa Heikkilä/Financial Times)
ft.com/content/3b477836-8a87-4

@thomasfuchs@hachyderm.io
2025-10-25 04:47:07

istg people who have opinions on AGI should be required by law to at least grasp the basics of information theory

@Techmeme@techhub.social
2026-01-05 10:05:40

A profile of Max Tegmark, the physicist pushing to halt AGI development, who was subpoenaed by OpenAI over the Future of Life Institute's past ties to Elon Musk (Wall Street Journal)
wsj.com/tech/ai/who-is-max-teg

@ErikJonker@mastodon.social
2025-12-30 16:18:36

My message for the new year around GenAI: don't believe the hype about AGI, about it taking over large numbers of jobs, replacing humans, etcetera.
It also makes mistakes and hallucinates, and you should check every outcome you re-use. At the same time, also don't believe the hype about GenAI being worthless, awful, bad, or only a stochastic parrot. That simply isn't true, as anyone who regularly uses top models knows.

@thomasfuchs@hachyderm.io
2025-11-26 15:30:16

The whole thing is optimized for scams, deception and other criminal behavior:
- user interface that deceptively pretends it's a human you're talking to
- claims from companies highly exaggerate capabilities
companies and "experts" constantly hype "AGI" which they (funnily enough) do to both make investors greedier and spread fear and as a distraction because these algorithms can't actually do what they keep promising
- large-scale accounting and financial fraud (e.g. what Nvidia is doing with circular selling)
- biggest case of copyright infringement in history
Note: I think the underlying technology is really cool, and definitely has use cases and can be used for actually good things. But: some technology just has more downsides than upsides, and some should only be used by experts in controlled environments. Leaded gasoline, asbestos and chlorofluorocarbon are also all really cool technology.
In this case perhaps the technology itself doesn't do anything inherently bad; however, the people making it are lying about what it can do, the people selling it are motivated purely by greed, and the people using it (often forced to do so) are being deceived.

@mapcar@mastodon.sdf.org
2025-11-14 17:54:15

The Vergecast of October 31 («God will be declared by a panel of experts») has a somewhat funny and very good discussion of the bizarre joint press release between Microsoft and OpenAI and the insanity of the panel that is to verify whether AGI has been achieved or not.
A choice quote about the problems of finding people to put on such a panel:
“A bunch of the world's best drunks have verified that you made whiskey is not a thing that you can do.”

@Techmeme@techhub.social
2025-12-03 09:35:47

A look at startups like AGI and Plato, which build replicas of websites to let AI agents learn to navigate and complete specific tasks, like booking flights (Cade Metz/New York Times)
nytimes.com/2025/12/02…

@arXiv_csCR_bot@mastoxiv.page
2025-10-14 12:06:48

Secret-Protected Evolution for Differentially Private Synthetic Text Generation
Tianze Wang, Zhaoyu Chen, Jian Du, Yingtai Xiao, Linjun Zhang, Qiang Yan
arxiv.org/abs/2510.10990

@hex@kolektiva.social
2025-12-16 17:09:35

One of the things that made organizing a lot easier with the GDC was a thing called "GDC in a box." It was a zip file with all kinds of resources. There was a directory structure, templates for all kinds of things like meetings and paperwork you had to file (for legal reasons) and "read me" files.
We had all kinds of support. There were people you could talk to who had been there. There were people you could call to walk through legal paperwork (taxes). Centralized orgs are vulnerable and easy to infiltrate. They're easy for states to shut down. But there are benefits to org structures.
I think it's possible to have the type of support we had with the GDC, but without the politics of an org (even the IWW). I hope this most recent essay has some of the same properties. I hope that it makes building something new, something no one has really imagined before, easier.
This whole project is something a bit different. It's a collective vision and collective project, from the ground up. Some of it has felt like a brain dump, just getting things that have been swimming around in my head down somewhere. But I hope this feels more like an invitation.
Everything thus far written is all useless unless people do things with it. Only from that point does it become a thing that lives, a thing with its own consciousness that can't be controlled by any individual human.
Tech billionaire cultists want to bring a new era of humanity with AGI. That is definitely not possible with LLMs, and may not be possible at all. But there is a super intelligence that is possible, though it's been constrained by capitalism: collective human intelligence.
The grand vision of the tech dystopians is that of the ultimate slave that can then enslave all humans on their behalf. I think we can build a humanity that can liberate itself from their grasp, crush their vision, and build for itself a world in which people will never be enslaved again. Not only do I think it's possible, I think it's necessary. I think there are only two choices: collective liberation or death.
And that's what I plan to write about next time to wrap this whole project up. Today things often feel impossible. But people talked about the Middle Ages as though they were the end of the world, and then everything changed in unimaginable ways. Everything can, and will, change again.
"The profit motive often is in conflict with the aims of art. We live in capitalism. Its power seems inescapable. So did the divine right of kings."

@thomasfuchs@hachyderm.io
2025-11-15 13:53:47

User interface pretending to be human: fraud.
Statistical sentence generation described as “intelligence”: fraud.
Telling people, often children, to commit dangerous or criminal acts: all sorts of crimes.
Scientific papers generated, non-existing cases quoted at court, homework faked, artists ripped-off: fraud and copyright infringement.
Training based on unlicensed works: copyright infringement.
Data centers with tax incentives and special energy pricing: theft.
Promises to investors of AGI: fraud.
Government bailouts with taxpayer money: theft and fraud.

Media and politicians: This is innovation, don’t get left behind.

@Techmeme@techhub.social
2025-10-28 13:27:42

OpenAI-Microsoft deal: Microsoft gets a 27% stake, access to its AI models until 2032, including AGI-level models, and OpenAI will buy $250B of Azure services (Brody Ford/Bloomberg)
bloomberg.com/news/articles/20

@Techmeme@techhub.social
2025-11-25 17:39:05

Q&A with Ilya Sutskever about model jaggedness, why we are moving beyond the "age of scaling", SSI's plan to straight-shot superintelligence, AGI, and more (Dwarkesh Patel/Dwarkesh Podcast)
dwarkesh.com/p/ilya-sutskever-2

@Techmeme@techhub.social
2025-12-17 19:01:10

Amazon's AGI team lead Rohit Prasad is leaving at the end of 2025; AWS SVP Peter DeSantis will lead a group combining AI, silicon, and quantum computing teams (Todd Bishop/GeekWire)
geekwire.com/2025/amazon-ai-ch

@tomkalei@machteburch.social
2025-10-16 18:17:41

A new paper whose authors include Eric Schmidt, Gary Marcus and Yoshua Bengio not only defines AGI, but also has this nice figure. It shows us that GPT-5 has reached the maximum possible ability in math.
I think they are trying to say that they are done with #math. They solved it. End of story.
I don't know what emoji to put here ... I'll try 🤡 for a start.
And they don't put their paper on the arXiv with the riffraff. They have their own domain for their paper: agidefinition.ai/

@Techmeme@techhub.social
2025-11-20 18:50:51

OpenAI says GPT-5 has demonstrated the ability to accelerate scientific research workflows but can't run projects or solve scientific problems autonomously (Radhika Rajkumar/ZDNET)
zdnet.com/article/gpt-5-is-spe

@Techmeme@techhub.social
2025-10-17 18:23:34

Q&A with Andrej Karpathy on AGI still being a decade away, why reinforcement learning is terrible, superintelligence, his AI education startup Eureka, and more (Dwarkesh Patel/Dwarkesh Podcast)
dwarkesh.com/p/andrej-karpathy

@Techmeme@techhub.social
2025-11-15 20:16:07

The current AI strategies of China and the US are complementary, as unlike the US, China isn't "AGI-pilled" yet, focusing on embodied AI and open source models (Dean W. Ball/Hyperdimensional)
hyperdimensional.co/p/the-bitt

@Techmeme@techhub.social
2025-11-15 06:05:53

A profile of Yann LeCun, Meta's chief AI scientist, who says LLMs are a dead end for reaching AGI and backs world models instead, and is reportedly leaving Meta (Meghan Bobrowsky/Wall Street Journal)
wsj.com/tech/ai/yann-lecun-ai-

@Techmeme@techhub.social
2025-11-18 20:25:57

Q&A with Demis Hassabis on Gemini 3, spending most of his research time on world models, fitting the entire Google search index into Gemini, AI bubble, and more (Alex Heath/Sources)