Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@Techmeme@techhub.social
2025-06-08 16:40:53

At a clandestine math conclave in Berkeley in May, a chatbot powered by o4-mini answered some of the hardest solvable problems much faster than a mathematician (Lyndie Chiou/Scientific American)
scientificamerican.com/article

@cowboys@darktundra.xyz
2025-06-05 16:12:05

Cooper Beebe continues to be underestimated as Dallas Cowboys o-line anchor si.com/nfl/cowboys/news/cooper

@arXiv_csCY_bot@mastoxiv.page
2025-07-08 11:48:31

Real-Time AI-Driven Pipeline for Automated Medical Study Content Generation in Low-Resource Settings: A Kenyan Case Study
Emmanuel Korir, Eugene Wechuli
arxiv.org/abs/2507.05212

@theawely@mamot.fr
2025-07-08 08:28:46

I tried using top LLMs for research and it was disastrous. The o3 result looked appealing and did not make up study titles, but the info allegedly extracted from them was completely hallucinated. Gemini 2.5 Pro, while less bad, constantly made up study titles. Claude Opus 4 did not try to answer and just redirected me to PubMed.

@arXiv_csRO_bot@mastoxiv.page
2025-06-06 09:40:21

This arxiv.org/abs/2409.08704 has been replaced.
initial toot: mastoxiv.page/@arXiv_csRO_…

I gotta say I find a lot of AI discourse around higher ed very confusing:
"if ChatGPT can write your essays is college even worth it?"
Did people think math teachers were assigning problem sets because *they* couldn't figure out the answers?
bsky.app/prof…

@laxsill@social.spejset.org
2025-07-03 21:03:56

A strong and sweet halacha yomis (daily email about Jewish law) today oukosher.org/halacha-yomis/my-

@matematico314@social.linux.pizza
2025-05-31 16:38:47

#LB A loose translation for those who don't speak English:
"In this year's LGBT pride month, straight people should focus less on 'every form of love is beautiful' and more on 'gay and trans people are in danger.'"
What a sad thing to read, and to see how things are in the USA. And I fear it's only a matter of time before we end up the same way; our capacity for resistance against the madness ext…

@arXiv_condmatdisnn_bot@mastoxiv.page
2025-07-08 09:20:30

Dynamics and chaotic properties of the fully disordered Kuramoto model
Iván León, Diego Pazó
arxiv.org/abs/2507.05168

@tiotasram@kolektiva.social
2025-05-26 12:51:54

Let's say you find a really cool forum online that has lots of good advice on it. It's even got a very active community that's happy to answer questions very quickly, and the community seems to have a wealth of knowledge about all sorts of subjects.
You end up visiting this community often, and trusting the advice you get to answer all sorts of everyday questions you might have, which before you might have found answers to using a web search (of course web search is now full of SEO spam and other crap, so it's become nearly useless).
Then one day, you ask an innocuous question about medicine, and from this community you get the full homeopathy treatment as your answer. Like, somewhat believable on the face of it, with lots of citations to reasonable-seeming articles, except that if you know even a tiny bit about chemistry and biology (which thankfully you do), you know that the homeopathy answers are completely bogus and horribly dangerous (since they offer non-treatments for real diseases). Your opinion of this entire forum suddenly changes. "Oh my God, if they've been homeopathy believers all this time, what other myths have they fed me as facts?"
You stop using the forum for anything, and go back to slogging through SEO crap to answer your everyday questions, because once you realize that this forum is a community that's fundamentally untrustworthy, you realize that the value of getting advice from it on any subject is negative: you knew enough to spot the dangerous homeopathy answer, but there might be other such myths that you don't know enough to avoid, and any community willing to go all-in on one myth has shown itself to be capable of going all-in on any number of other myths.
...
This has been a parable about large language models.
#AI #LLM