Tootfinder

Opt-in global Mastodon full text search. Join the index!

@teledyn@mstdn.ca
2025-06-21 06:23:21

TL;DR summary as to why you don't already know #PeterPutnam: couldn't possibly be 40s-60s USA, him being gay, partnered lifelong with a Black man, or even heaven forbid that his game-theory tinker-toy of consciousness was flawed (although should I learn the #agi fanbois will deploy it, it wouldn't surprise me)
Finding Peter Putnam - Nautilus
nautil.us/finding-peter-putnam

@tiotasram@kolektiva.social
2025-07-19 07:51:05

AI, AGI, and learning efficiency
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak & understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously *high* estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities are developed in many children, etc. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
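The post's back-of-the-envelope arithmetic can be checked with a short script; the figures below are the poster's own deliberately generous upper-bound assumptions, not measured values:

```python
# Upper-bound estimate of words a child hears before age 4,
# using the post's intentionally high assumptions.
words_per_minute = 100  # "ludicrously high", per the post
hours_per_day = 12
years = 4

total_words = words_per_minute * 60 * hours_per_day * 365 * years
print(f"{total_words:,}")  # 105,120,000
```

Even this inflated ceiling of ~105 million words sits one to two orders of magnitude below the multi-billion-token corpora used to train large language models.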
The point here is that large language models, trained as they are on multiple *billions* of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one (see e.g. …)
#AI #LLM #AGI

@wfryer@mastodon.cloud
2025-06-02 02:25:11

[VIDEO] The Government Knows AGI is Coming | The Ezra Klein Show
#AI

Image features a graphic promoting "The Ezra Klein Show" with a focus on artificial general intelligence and Trump. A concerned cartoon character is depicted wearing glasses and a suit, alongside the quote: "AGI is right on the horizon... and we’re not

@rberger@hachyderm.io
2025-04-27 19:48:27

"I think it is a huge mistake for people to assume that they can trust AI when they do not trust each other. The safest way to develop superintelligence is to first strengthen trust between humans, and then cooperate with each other to develop superintelligence in a safe manner. But what we are doing now is exactly the opposite. Instead, all efforts are being directed toward developing a superintelligence."
#AGI #AI
wired.com/story/questions-answ

@pavelasamsonov@mastodon.social
2025-05-29 21:13:07

Any second now... #LLM #AGI #GenAI #AI

r/agi
2 yr. ago
AGI 2 years away says CEO of leading AGI lab Anthropic