Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@seeingwithsound@mas.to
2025-09-15 14:32:08

WIRED: OpenAI ramps up robotics work in race toward #AGI wired.com/story/openai-ramps-u hiring roboticists who work specifica…

@Techmeme@techhub.social
2025-09-15 14:45:54

Sources: OpenAI is recruiting people to work on humanoid robots and is training AI algorithms that are better able to make sense of the physical world (Will Knight/Wired)
wired.com/story/openai-ramps-u

@thomasfuchs@hachyderm.io
2025-08-15 20:17:07

Wait I thought it’s the future of humanity and close to AGI theverge.com/ai-artificial-int

@mxp@mastodon.acm.org
2025-08-11 17:41:57

AGI is “not a super useful term”? But IIRC, as defined by OpenAI, they’ll hit AGI when they generate $100 billion in profit. So, how’s that coming along? Not so great, huh?
cnbc.com/2025/08/11/sam-altman

@Techmeme@techhub.social
2025-07-11 13:10:48

Sources on key Microsoft-OpenAI clause: OpenAI decides when AGI and "sufficient AGI", capable of $100B in profits, are reached, and Microsoft can't build AGI (Steven Levy/Wired)
wired.com/story/microsoft-and-

@thomasrenkert@hcommons.social
2025-08-14 14:23:51

The geopolitical #aiarmsrace seems largely unimpressed by people proclaiming #LLMs have plateaued and #AGI is never coming.
Such assessments are only relevant for the market, but not so much for count…

Chinese artificial intelligence company DeepSeek delayed the release of its new model after failing to train it using Huawei’s chips, highlighting the limits of Beijing’s push to replace US technology.

DeepSeek was encouraged by authorities to adopt Huawei’s Ascend processor rather than use Nvidia’s systems after releasing its R1 model in January, according to three people familiar with the matter.

@whitequark@mastodon.social
2025-08-07 00:44:36

github.com/google/agi

@jlpiraux@wallonie-bruxelles.social
2025-08-14 06:30:13

"À la question de savoir si l'intensification des approches actuelles de l'IA pourrait conduire Š l'intelligence artificielle générale (AGI), ou Š une IA Š usage général qui égalerait ou surpasserait la cognition humaine, 76 % des personnes interrogées ont répondu qu'il était "improbable" ou "très improbable" que cela réussisse."
#IA

@arXiv_qbioNC_bot@mastoxiv.page
2025-07-16 08:57:11

Bridging Brains and Machines: A Unified Frontier in Neuroscience, Artificial Intelligence, and Neuromorphic Systems
Sohan Shankar, Yi Pan, Hanqi Jiang, Zhengliang Liu, Mohammad R. Darbandi, Agustin Lorenzo, Junhao Chen, Md Mehedi Hasan, Arif Hassan Zidan, Eliana Gelman, Joshua A. Konfrst, Jillian Y. Russell, Katelyn Fernandes, Tianze Yang, Yiwei Li, Huaqin Zhao, Afrar Jahin, Triparna Ganguly, Shair Dinesha, Yifan Zhou, Zihao Wu, Xinliang Li, Lokesh Adusumilli, Aziza Hussein, Sagar Nook…

@thomasfuchs@hachyderm.io
2025-07-16 00:35:19

If we just build a building the size of Manhattan and fill it with pet rocks, AGI will spontaneously erupt

@tiotasram@kolektiva.social
2025-07-30 18:26:14

A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counter arguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI

@arXiv_csRO_bot@mastoxiv.page
2025-08-15 09:30:23

Large Model Empowered Embodied AI: A Survey on Decision-Making and Embodied Learning
Wenlong Liang, Rui Zhou, Yang Ma, Bing Zhang, Songlin Li, Yijia Liao, Ping Kuang
arxiv.org/abs/2508.10399

@arXiv_csHC_bot@mastoxiv.page
2025-08-15 08:42:52

Artificial Emotion: A Survey of Theories and Debates on Realising Emotion in Artificial Intelligence
Yupei Li, Qiyang Sun, Michelle Schlicher, Yee Wen Lim, Björn W. Schuller
arxiv.org/abs/2508.10286

@arXiv_csAI_bot@mastoxiv.page
2025-08-11 07:32:49

A Framework for Inherently Safer AGI through Language-Mediated Active Inference
Bo Wen
arxiv.org/abs/2508.05766 arxiv.org/pdf/2508.05766

@trochee@dair-community.social
2025-09-11 15:19:12

Jobs at alignmentalignment.ai/jobs

Center for the Alignment of AI Alignment Centers

Jobs

About our team

CAAAC is an open, dynamic, inclusive environment, where all perspectives are welcomed as long as you believe AGI will annihilate all humans in the next six months. We offer competitive salaries and generous benefits, including no performance management because we have no way to assess whether the work you do is at all useful.

We are currently actively hiring for the following roles:

(Page ends, because they're not actuall…

@macandi@social.heise.de
2025-07-10 07:57:00

Poached Apple AI expert: Meta is reportedly paying him $200 million
Mark Zuckerberg is pushing hard toward AGI and is spending serious money on it. Ruoming Pang, formerly head of foundation models at Apple, apparently stands to profit massively from that.

@pbloem@sigmoid.social
2025-06-29 13:23:33

Solving ARC (well... up to 20%) with inference-time compression. Another indication that #MDL and #AlgorithmicComplexity are becoming relevant in the age of Deep Learning.

@inthehands@hachyderm.io
2025-07-08 18:08:41

I am 100% certain that the intractable and ancient questions of “What is ‘human?’” and “What is ‘intelligence?’” will finally be resolved once and for all by corporate lawyers in a contract dispute with billions of dollars at stake
arstechnica.com/ai/2025/07/agi

@ErikUden@mastodon.de
2025-07-09 19:41:14

For those not up to speed with the latest AI jargon, I hope to clarify some things:
Every time a company promises a revolutionary “AI” product, they just exploit cheap labor from India.

@arXiv_csCY_bot@mastoxiv.page
2025-07-30 10:07:41

Against racing to AGI: Cooperation, deterrence, and catastrophic risks
Leonard Dung, Max Hellrigel-Holderbaum
arxiv.org/abs/2507.21839 arxi…

@Techmeme@techhub.social
2025-09-14 10:10:36

Q&A with Bret Taylor, CEO of Sierra and chairman of OpenAI, on Sierra's AI customer support agents, AGI, Sam Altman's comments on the AI bubble, and more (Alex Heath/The Verge)
theverge.com/decoder-podcast-w

@seeingwithsound@mas.to
2025-08-03 07:49:48

[OT] What is #AGI? Nobody agrees, and it's tearing Microsoft and OpenAI apart. arstechnica.com/ai/2025/07/agi

@arXiv_csLO_bot@mastoxiv.page
2025-08-06 08:34:00

Intensional FOL over Belnap's Bilattice for Strong-AI Robotics
Zoran Majkic
arxiv.org/abs/2508.02774 arxiv.org/pdf/2508.02774

@eichkat3r@hessen.social
2025-08-09 10:50:31

agi = jeneral artifishul intellijens

@sean@scoat.es
2025-08-11 16:50:05

Dark prediction:
- Intellectual Property Class Action against AI companies gains traction
- AI companies use this as a plausible excuse to bail on AGI tech (that doesn’t exist)
- We see a pop like we saw “shipping heavy items by mail isn’t sustainable?!” pop in 2001
- AI companies fail, overwhelmingly, blaming only the IP Class Action
- The Money switches to a new/old grift; maybe back to crypto… lots of GPUs available
- In 5 years, everyone forgets and we have…

@arXiv_csAI_bot@mastoxiv.page
2025-08-14 07:30:32

The Othello AI Arena: Evaluating Intelligent Systems Through Limited-Time Adaptation to Unseen Boards
Sundong Kim
arxiv.org/abs/2508.09292

@hynek@mastodon.social
2025-08-01 06:45:10

cool cool cool #AGI

text screenshot that starts with `if app_cfg.is_prod:` where Cursor suggests a Chinese comment that according to Google Translate means “production environment”.

@arXiv_csCV_bot@mastoxiv.page
2025-09-11 10:01:43

AdsQA: Towards Advertisement Video Understanding
Xinwei Long, Kai Tian, Peng Xu, Guoli Jia, Jingxuan Li, Sa Yang, Yihua Shao, Kaiyan Zhang, Che Jiang, Hao Xu, Yang Liu, Jiaheng Ma, Bowen Zhou
arxiv.org/abs/2509.08621

@david@boles.xyz
2025-08-27 20:30:00

Here's my new Human Meme podcast episode about AI and AGI and the future of us!
#ai

@Techmeme@techhub.social
2025-08-10 18:50:32

Doomer predictions of a rapid, monopolistic AGI were wrong, as recent AI model releases resemble a Goldilocks scenario with competitive, specialized models (David Sacks/@davidsacks)
x.com/davidsacks/status/195424

@stevefoerster@social.fossdle.org
2025-07-07 17:12:18

I'm glad that it seems we're finally approaching the bet-hedging phase of the hype cycle.
#ai

@ErikJonker@mastodon.social
2025-08-07 07:23:45

Probably today OpenAI will announce GPT-5, which will not be a revolution, AGI, or anything like that. It will just be a next step for OpenAI, mainly integrating its mess of models/numbers/names (4, o3, etcetera); the question will be how much improvement there will be in the actual models used, probably not dramatic because the previous models are quite good already. Time will tell.
#GPT5

@arXiv_csNI_bot@mastoxiv.page
2025-07-01 09:40:03

AGI Enabled Solutions For IoX Layers Bottlenecks In Cyber-Physical-Social-Thinking Space
Amar Khelloufi, Huansheng Ning, Sahraoui Dhelim, Jianguo Ding
arxiv.org/abs/2506.22487

@mgorny@social.treehouse.systems
2025-08-07 07:29:47

Claiming that LLMs bring us closer to AGI is like claiming that bullshitting brings one closer to wisdom.
Sure, you need "some" knowledge on different topics to bullshit successfully. Still, what's the point if all that knowledge is buried under an avalanche of lies? You probably can't distinguish what you knew from what you made up anymore.
#AI #LLM

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims a small level of expenses on AI equates to low climate impact. However, given the current deep subsidies put in place by the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@thomasfuchs@hachyderm.io
2025-09-11 14:20:28

Why AGI has been so elusive

@Techmeme@techhub.social
2025-08-07 19:05:48

OpenAI highlights GPT-5 scores on math, coding, and health benchmarks: 94.6% on AIME 2025 without tools, 74.9% on SWE-bench Verified, 46.2% on HealthBench Hard (Carl Franzen/VentureBeat)
venturebeat.com/ai/openai-laun

@trochee@dair-community.social
2025-07-11 05:43:25

Thinking for no particular reason today* about the peculiar genius of Stanislaw Lem
A Jew, a doctor, a Pole, in the resistance against Hitler & Stalin & capitalism, an SF legend, feared by PK Dick, lauded by UK Le Guin
…& one of the earliest AGI skeptics
>… held that information technology drowns people in a glut of low-quality information, and considered truly intelligent robots as both undesirable and impossible to construct.

@cellfourteen@social.petertoushkov.eu
2025-07-06 06:37:01

John Carmack (the creator of Doom) Reveals AGI Future: Robots, Videogames and AI Agents
youtube.com/watch?v=4epAfU1FCu

@grumpybozo@toad.social
2025-07-05 16:54:02

We are so fucking doomed. The people with the real money are fulfilling their own paranoid fantasies: they’ve become servants of AGI: Artificial Generalized Imbecility. And the people with the skills and credentials to call them on it just don’t. eigenmagic.net/@NewtonMark/114

@unchartedworlds@scicomm.xyz
2025-07-12 20:17:53
Content warning: real-life effects of LLMs in tech workplaces

Fascinating collection of firsthand experiences, gathered by Brian Merchant.
From a comment:
"I can’t help but notice that stories aren’t “I lost my job because AI is able to do it better”, they are “I lost my job because upper management is hype-pilling and thinks AGI is around the corner”. Which is a bad thing, but if we suppose for a moment that AGI is not around the corner, and AI is a bubble? Those jobs will be back with vengeance once technical debt catches up. ... when your codebase is now an AI-written mess without documentation and tests and diffused knowledge in heads of those who have written it, it will collapse sooner or later."
#LLM #SoCalledAI #tech #jobs #coding #TechnicalDebt

@tante@tldr.nettime.org
2025-07-28 08:40:44

"AGI" is whatever makes "AI" companies' stocks go up.

@arXiv_csCY_bot@mastoxiv.page
2025-07-30 09:25:52

Safety Features for a Centralised AGI Project
Sarah Hastings-Woodhouse
arxiv.org/abs/2507.21082 arxiv.org/pdf/2507.21082

@teledyn@mstdn.ca
2025-06-21 06:23:21

TL;DR summary as to why you don't already know #PeterPutnam: couldn't possibly be 40s-60s USA, him being gay, partnered lifelong with a black man, or even heaven forbid that his game-theory tinker-toy of consciousness was flawed (although should I learn the #agi fanbois will deploy it, it wouldn't surprise me)
Finding Peter Putnam - Nautilus
nautil.us/finding-peter-putnam

@arXiv_quantph_bot@mastoxiv.page
2025-06-18 10:08:30

Hamiltonian Formalism for Comparing Quantum and Classical Intelligence
Elija Perrier
arxiv.org/abs/2506.14456 arxiv.o…

@arXiv_csNE_bot@mastoxiv.page
2025-06-23 08:50:10

Neural Cellular Automata for ARC-AGI
Kevin Xu, Risto Miikkulainen
arxiv.org/abs/2506.15746 arxiv.org/pdf/2506.15746…

@Techmeme@techhub.social
2025-08-12 16:55:51

Character.AI says it's generating revenue at a run rate of $30M and has 20M MAUs who spend, on average, 75 minutes a day chatting with a bot (Kylie Robison/Wired)
wired.com/story/character-ai-c

@Techmeme@techhub.social
2025-09-12 00:50:49

Source: the clause rescinding Microsoft's access to OpenAI's most powerful tech if OpenAI develops AGI remains part of their new deal, but it has been modified (New York Times)
nytimes.com/2025/09/11/technol

@jlpiraux@wallonie-bruxelles.social
2025-06-26 04:46:04

"La Banque nationale de Belgique, dirigée par son gouverneur Pierre Wunsch, n’effectuerait pas une supervision très serrée de Worldline Belgique selon des sources concordantes. Bien qu’elle ait eu connaissance de plusieurs violations de la conformité."
#DirtyPayments #BNB

@thomasfuchs@hachyderm.io
2025-07-09 13:27:54

No, computers won’t replace humans to write code for themselves.
Please stop with this nonsense.
What we will see though is tremendous losses in productivity as deskilled programmers get less and less education and practice—and take longer and longer to make broken AI-generated code work. Meanwhile, AI models will regress from eating their own generated shit as they're trained on it.
Eventually AI companies will finally run out of investors to scam—and when they disappear or get so expensive they become unaffordable, “prompt engineers” will be asked to not use AI anymore.
What’s gonna happen then?
We’re losing a whole generation of programmers to this while thought leaders in our field are talking about “inevitability” and are jerking off to sci-fi-nostalgia-fueled fantasies of AGI.

@arXiv_csAI_bot@mastoxiv.page
2025-09-03 14:00:03

AGI as Second Being: The Structural-Generative Ontology of Intelligence
Maijunxian Wang, Ran Ji
arxiv.org/abs/2509.02089 arxiv.org/pdf/2509…

@arXiv_csCV_bot@mastoxiv.page
2025-09-08 09:57:00

COGITAO: A Visual Reasoning Framework To Study Compositionality & Generalization
Yassine Taoudi-Benchekroun, Klim Troyan, Pascal Sager, Stefan Gerber, Lukas Tuggener, Benjamin Grewe
arxiv.org/abs/2509.05249

@donelias@mastodon.cr
2025-08-22 06:51:33

The #TemblorCR woke us up
It sounded strong in Heredia

FINAL MAP: This map shows the interpolation of the intensity calculated after the MAS-LIS run finished, at 12:48 AM on 2025/08/22.

[Seismic-intensity map on the JMA scale; station labels and map legend garbled by OCR. Intensity 4: many people are frightened, and some try to escape the danger. Most people who are asleep wake up. Suspended objects swing considerably and dishes rattle…]

@scott@carfree.city
2025-06-17 08:19:35

One of my takeaways from reading _War and Peace_ last year was that Napoleon, whose warmongering killed millions across Europe, was basically a buffoon, a clown. Many of the most destructive men in history were ridiculous and laughable, and it just added insult to injury for contemporaries watching them come to power. There's nothing new under the sun.

@pbloem@sigmoid.social
2025-08-18 11:58:02

The HRM paper has been mostly debunked by the ARC-AGI people.
arcprize.org/blog/hrm-analysis
The results are legit but most of them are not down to the architecture (swapping it out for a transformer doesn't change that much).
Also, the model is purely transductive. It onl…

@Techmeme@techhub.social
2025-08-10 19:25:36

GPT-5's release was underwhelming, offering incremental improvements and failing to meet expectations, showing that pure scaling simply isn't the path to AGI (Gary Marcus/Marcus on AI)
garymarcus.substack.com/p/gpt-

@arXiv_csCY_bot@mastoxiv.page
2025-08-19 10:27:00

Several Issues Regarding Data Governance in AGI
Masayuki Hatta
arxiv.org/abs/2508.12168 arxiv.org/pdf/2508.12168

@thomasfuchs@hachyderm.io
2025-07-31 15:27:27

Yes, it’s absurd.
LLMs don’t scale to reach “AGI”. That is mathematically proven[1], so it doesn’t matter how large your data center is.
But that’s not the main reason why this is absurd—as a society we shouldn’t spend these enormous resources and lasting environmental damage on this _even if it would work_.
[1] irisvanrooijcogsci.com/2023/09

@arXiv_csCL_bot@mastoxiv.page
2025-08-25 10:05:40

RoMedQA: The First Benchmark for Romanian Medical Question Answering
Ana-Cristina Rogoz, Radu Tudor Ionescu, Alexandra-Valentina Anghel, Ionut-Lucian Antone-Iordache, Simona Coniac, Andreea Iuliana Ionescu
arxiv.org/abs/2508.16390

@seeingwithsound@mas.to
2025-08-27 00:14:45

From #AI to #AGI? If humans had general intelligence, blind people would be learning to see with sound youtube.com/watch?v=nVugtxWmW4E

@teledyn@mstdn.ca
2025-08-18 21:56:51

"The program totally backfired. People thought ELIZA was intelligent, they were confiding in the machine, revealing personal issues they would not tell anyone else. Even my secretary asked me to leave the room so she could be alone with the computer. They called me a genius for creating it, but I kept telling them that the computer was not thinking at all."
Oh, really, now, isn't that interesting, and so, of course, they reanimated it, because only a liberal socialist would call that a failure. 🤣
#posiwid #automatedgatheringofintel #agi

@Techmeme@techhub.social
2025-08-10 10:05:33

Q&A with OpenAI COO Brad Lightcap on GPT-5, its dynamic reasoning, defining AGI, scaling vs. post-training, hallucinations, enterprise adoption, and more (Alex Kantrowitz/Big Technology)

@thomasfuchs@hachyderm.io
2025-07-07 18:00:22

I think the root of the “AI” evil is when AI researchers in the 1960s recognized that they outrageously underestimated the complexity of the human mind.
They became utterly humiliated by their self-congratulatory promises from the 1950s that AGI was just a few years away—and then went full goblin mode that’s lasting to this day, even if the original researchers (like Minsky*) died long ago.
*probably raped children on Epstein’s island

@arXiv_csAI_bot@mastoxiv.page
2025-07-01 11:20:53

MMReason: An Open-Ended Multi-Modal Multi-Step Reasoning Benchmark for MLLMs Toward AGI
Huanjin Yao, Jiaxing Huang, Yawen Qiu, Michael K. Chen, Wenzheng Liu, Wei Zhang, Wenjie Zeng, Xikun Zhang, Jingyi Zhang, Yuxin Song, Wenhao Wu, Dacheng Tao
arxiv.org/abs/2506.23563

@Techmeme@techhub.social
2025-08-23 10:40:46

Q&A with David Luan, head of Amazon's AGI research lab, on leaving Adept in a reverse acquihire deal, why he believes progress on AI models has slowed, and more (Alex Heath/The Verge)
theverge.com/decoder-podcast-w

@arXiv_csRO_bot@mastoxiv.page
2025-07-02 10:12:20

A Survey: Learning Embodied Intelligence from Physical Simulators and World Models
Xiaoxiao Long, Qingrui Zhao, Kaiwen Zhang, Zihao Zhang, Dingrui Wang, Yumeng Liu, Zhengjie Shu, Yi Lu, Shouzheng Wang, Xinzhe Wei, Wei Li, Wei Yin, Yao Yao, Jia Pan, Qiu Shen, Ruigang Yang, Xun Cao, Qionghai Dai
arxiv.org/abs/2507.00917

@arXiv_csCY_bot@mastoxiv.page
2025-08-28 08:51:31

Deep Hype in Artificial General Intelligence: Uncertainty, Sociotechnical Fictions and the Governance of AI Futures
Andreu Belsunces Gonçalves
arxiv.org/abs/2508.19749

@tiotasram@kolektiva.social
2025-08-19 13:29:37

If you've been paying attention, this is a *very* strong signal that OpenAI is hitting the limits of improved capability with more compute/data and they're (predictably) all out of other ideas. The quiet "exponential model capabilities" lie here is what Altman promised but is now failing to deliver, even in cherry-picked demo terms.
cnbc.com/2025/08/11/sam-altman
The "agentic" turn was never going to pan out, because it exposes the unreliability of LLMs too directly, and it turns out that no amount of yelling at your text vending machine to "Be smarter! Think harder!" will actually get you anything more than vended text.
I'm *praying* that we get into this crash sooner rather than later, since the faster it comes, the less painful it will be.
My recent reading in actual research papers corroborates this, for example, asking LLMs to play games exposes their utter lack of anything that can be termed "reasoning":
arxiv.org/pdf/2508.08501v1

@teledyn@mstdn.ca
2025-07-24 14:38:38

So the Top Score champion prize winner at the Perplexity LLM coding challenge scored a massive 7.5% correct answers. AGI is here now by gum! Hire that bot!
There's a million dollar prize for the first to score a droll B- on their benchmark exam.
#thesevenpointfivepercentsolution

@Techmeme@techhub.social
2025-06-18 14:01:29

Boston-based Maven, which builds autonomous AI agents for enterprises' customer support, raised a $50M Series B led by Dell, taking its total funding to $78M (Chris Metinko/Axios)
axios.com/pro/enterprise-softw

@arXiv_csLO_bot@mastoxiv.page
2025-08-19 07:58:00

From Interpolating Formulas to Separating Languages and Back Again
Agi Kurucz, Frank Wolter, Michael Zakharyaschev
arxiv.org/abs/2508.12805

@arXiv_csAI_bot@mastoxiv.page
2025-09-05 10:06:51

The human biological advantage over AI
William Stewart
arxiv.org/abs/2509.04130 arxiv.org/pdf/2509.04130

@Techmeme@techhub.social
2025-06-26 05:20:38

Sources: Microsoft is pushing to remove the AGI clause from its OpenAI contract, which lets OpenAI limit Microsoft's access to its IP once its systems reach AGI (Berber Jin/Wall Street Journal)
wsj.com/tech…

@tiotasram@kolektiva.social
2025-07-19 07:51:05

AI, AGI, and learning efficiency
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak & understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously *high* estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities are developed in many children, etc. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
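(A quick sanity check of that arithmetic, as a minimal Python sketch; the rate and duration figures are the post's own deliberately generous assumptions.)

# Upper-bound estimate of a child's language "training data",
# using the post's deliberately high figures.
words_per_minute = 100
minutes_per_hour = 60
hours_per_day = 12
days_per_year = 365
years = 4

total_words = words_per_minute * minutes_per_hour * hours_per_day * days_per_year * years
print(f"{total_words:,}")  # 105,120,000 -- versus billions of tokens for LLM training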
The point here is that large language models, trained as they are on multiple *billions* of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one (see e.g. #AI #LLM #AGI

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-24 09:46:20

The Relationship between Cognition and Computation: "Global-first" Cognition versus Local-first Computation
Lin Chen
arxiv.org/abs/2506.17970

@arXiv_csAI_bot@mastoxiv.page
2025-07-31 08:35:41

On the Definition of Intelligence
Kei-Sing Ng
arxiv.org/abs/2507.22423 arxiv.org/pdf/2507.22423

@Techmeme@techhub.social
2025-06-19 06:41:36

A look at the lack of consensus in the tech industry on what AGI is, whether LLMs are the best path to it, and what AGI might look like if or when it arrives (Melissa Heikkilä/Financial Times)

@arXiv_csNE_bot@mastoxiv.page
2025-08-20 11:32:21

Replaced article(s) found for cs.NE. arxiv.org/list/cs.NE/new
[1/1]:
- Neural Cellular Automata for ARC-AGI
Kevin Xu, Risto Miikkulainen

@tiotasram@kolektiva.social
2025-07-19 08:14:41

AI, AGI, and learning efficiency
An addendum to this: I'm someone who would accurately be called "anti-AI" in the modern age, yet I'm also an "AI researcher" in some ways (have only dabbled in neural nets).
I don't like:
- AI systems that are the product of labor abuses towards the data workers who curate their training corpora.
- AI systems that use inordinate amounts of water and energy during an intensifying climate catastrophe.
- AI systems that are fundamentally untrustworthy and which reinforce and amplify human biases, *especially* when those systems are exposed in a way that invites harms.
- AI systems which are designed to "save" my attention or brain bandwidth, but where using them cripples my understanding of the things I might use them for, when in fact that understanding was the thing I was supposed to be using my time to gain, and where the later lack of such understanding will be costly to me.
- AI systems that are designed by, and whose hype fattens the purse of, people who materially support genocide and the construction of concentration camps (a.k.a. fascists).
In other words, I do not like and except in very extenuating circumstances I will not use ChatGPT, Claude, Copilot, Gemini, etc.
On the other hand, I do like:
- AI research as an endeavor to discover new technologies.
- Generative AI as a research topic using a spectrum of different methods.
- Speculating about non-human intelligences, including artificial ones, and including how to behave ethically towards them.
- Large language models as a specific technique, and autoencoders and other neural networks, assuming they're used responsibly in terms of both resource costs & presentation to end users.
I write this because I think some people (especially folks without CS backgrounds) may feel that opposing AI for all the harms it's causing runs the risk of opposing technological innovation more broadly, and/or may feel there's a risk that they will be "left behind" as everyone else embraces the hype and these technologies inevitably become ubiquitous and essential (I know I feel this way sometimes). Just know that it is entirely possible and logically consistent to both oppose many forms of modern AI while also embracing and even being optimistic about AI research, and that while LLMs are currently all the rage, they're not the endpoint of what AI will look like in the future, and their downsides are not inherent in AI development.

@Techmeme@techhub.social
2025-08-31 04:55:52

Xi Jinping is pushing the country's tech industry to be oriented toward applications for AI, charting a pragmatic alternative to Silicon Valley's pursuit of AGI (Wall Street Journal)
wsj.com/tech/ai/china-has-a…

@Techmeme@techhub.social
2025-07-29 16:05:46

Sources: Microsoft is in advanced talks to land a deal that could give it ongoing access to critical OpenAI tech even if OpenAI reaches its goal of building AGI (Bloomberg)
bloomberg.com/news/articles/20

@arXiv_csCY_bot@mastoxiv.page
2025-08-26 10:11:47

Making AI Inevitable: Historical Perspective and the Problems of Predicting Long-Term Technological Change
Mark Fisher, John Severini
arxiv.org/abs/2508.16692

@arXiv_csAI_bot@mastoxiv.page
2025-06-18 08:06:21

Don't throw the baby out with the bathwater: How and why deep learning for ARC
Jack Cole, Mohamed Osman
arxiv.org/abs/2506.14276

@arXiv_csCY_bot@mastoxiv.page
2025-08-19 10:04:40

The Stories We Govern By: AI, Risk, and the Power of Imaginaries
Ninell Oldenburg, Gleb Papyshev
arxiv.org/abs/2508.11729 arxiv.org/pdf/250…

@Techmeme@techhub.social
2025-06-19 06:06:10

Q&A with Hugging Face Chief Ethics Scientist Margaret Mitchell on aligning AI development with human needs, the "illusion of consensus" around AGI, and more (Melissa Heikkilä/Financial Times)
ft.com/content/7089bff2-25fc-4