Tootfinder

Opt-in global Mastodon full text search. Join the index!

@Techmeme@techhub.social
2025-06-19 09:20:44

A look at seven UX issues in the Fediverse web experience, including complex onboarding, the lack of a dedicated DM UI wrapper, and fragmented user discovery (Tim Chambers)
timothychambers.net/2025/06/18

@simon_brooke@mastodon.scot
2025-07-20 09:59:30

Well, it's not quite 11:00 and we've already completed the section of rendering we planned for today. That feels good.
Not going to do any more today because it's extremely hard physical work, but tomorrow's shift should complete the first coat on the outside of the building.
#roundhouse

@CondeChocula@social.linux.pizza
2025-05-18 17:51:18

.: Resident Evil: Outbreak :.
Completed!!🏆
This game is very weird and unique at the same time. Its gameplay is different from the other games in the saga, and it's obviously meant to be played online, because the AI-controlled NPCs are awful and frustrating.
But still, it's a good game despite all that.
PS: In my honest opinion the best level is the last one. It's much longer than the others, intense, and filled with all sorts of monsters.

@tiotasram@kolektiva.social
2025-07-19 07:51:05

AI, AGI, and learning efficiency
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak & understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously *high* estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities are developed in many children, etc. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
The point here is that large language models, trained as they are on multiple *billions* of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one (see e.g. #AI #LLM #AGI
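For clarity, the post's back-of-envelope estimate can be written out as a short Python sketch; the constants below are the post's own deliberately generous assumptions, not measured values.

```python
# Back-of-envelope upper bound on a child's language "training data",
# using the post's own deliberately generous numbers.
WORDS_PER_MINUTE = 100   # ludicrously high sustained rate of heard speech
HOURS_PER_DAY = 12       # generous hours of continuous exposure per day
DAYS_PER_YEAR = 365
YEARS = 4                # well after basic speech develops in many children

upper_bound_words = WORDS_PER_MINUTE * 60 * HOURS_PER_DAY * DAYS_PER_YEAR * YEARS
print(f"Upper bound: {upper_bound_words:,} words")  # 105,120,000

# The post argues the realistic figure is 1-2 orders of magnitude lower.
print(f"More likely range: {upper_bound_words // 100:,} to {upper_bound_words // 10:,} words")
```

Even the inflated upper bound sits around 10^8 words, several orders of magnitude below the billions of tokens used to train large language models.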

@arXiv_mathRT_bot@mastoxiv.page
2025-06-18 09:15:04

Restriction problem for mod $p$ representations of $\text{GL}_2$ over a finite field
Eknath Ghate, Shubhanshi Gupta
arxiv.org/abs/2506.14207

@arXiv_quantph_bot@mastoxiv.page
2025-07-17 10:12:40

On approximate quantum error correction for symmetric noise
Gereon Koßmann, Julius A. Zeiss, Omar Fawzi, Mario Berta
arxiv.org/abs/2507.12326

@ruth_mottram@fediscience.org
2025-07-19 15:02:20

Long trip back to Copenhagen starts with an EV to Lyon, then a packed but slowly emptying train to Geneva, during which I have been completely captured by an #IainMBanks #Culture novel. Actually, surprisingly, my first. That's the gift of both train travel and a good book. #bookstodon #booksky #FlyingLess

@arXiv_csDC_bot@mastoxiv.page
2025-06-17 09:31:19

Distributed Computing From First Principles
Kenneth Odoh
arxiv.org/abs/2506.12959 arxiv.org/pdf/2506.12959

@arXiv_hepth_bot@mastoxiv.page
2025-06-17 11:06:09

Can Non-Relativistic Strings Propagate Without Geometric Baggage?
Partha Nandi, Sk. Moinuddin, Abdus Sattar
arxiv.org/abs/2506.12506

@kuba@toot.kuba-orlik.name
2025-07-16 07:14:50

> By the time we got to the first Q&A, Apple's lawyers had already wasted half of our time.
formularsumo.co.uk/blog/2025/a