Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_csSD_bot@mastoxiv.page
2025-05-30 07:22:20

Bridging the Gap Between Semantic and User Preference Spaces for Multi-modal Music Representation Learning
Xiaofeng Pan, Jing Chen, Haitong Zhang, Menglin Xing, Jiayi Wei, Xuefeng Mu, Zhongqian Xie
arxiv.org/abs/2505.23298

@arXiv_csCY_bot@mastoxiv.page
2025-05-30 07:16:51

Can Large Language Models Trigger a Paradigm Shift in Travel Behavior Modeling? Experiences with Modeling Travel Satisfaction
Pengfei Xu, Donggen Wang
arxiv.org/abs/2505.23262

@arXiv_csSE_bot@mastoxiv.page
2025-05-30 07:22:28

LLM-based Property-based Test Generation for Guardrailing Cyber-Physical Systems
Khashayar Etemadi, Marjan Sirjani, Mahshid Helali Moghadam, Per Strandberg, Paul Pettersson
arxiv.org/abs/2505.23549

@deprogrammaticaipsum@mas.to
2025-05-29 10:20:19

"William Zani, one of the core programmers of the first BASIC compiler, tells the story of the demo of the DTSS system at the San Francisco AFIPS 1964 conference (minute 26:17), sending a BASIC program to from San Francisco to Hanover, New Hampshire over a telephone line, live in front of an audience, who (I quote) “went bananas”."

@rperezrosario@mastodon.social
2025-05-25 19:21:01

Linguist Noam Chomsky is interviewed by Common Dreams writer C.J. Polychroniou on the subject of ChatGPT in this May 2023 piece. Chomsky's stance on LLMs is that as long as we can't understand, at an atomic level, what goes on inside the statistical black box that an LLM currently is, linguists can't use one to learn whether it acquires language the way a human does.
"Noam Chomsky Speaks on What ChatGPT Is Really Good For"

@arXiv_csSD_bot@mastoxiv.page
2025-05-30 07:22:16

Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
Hao Li, Ju Dai, Xin Zhao, Feng Zhou, Junjun Pan, Lei Li
arxiv.org/abs/2505.23290

@tiotasram@kolektiva.social
2025-05-26 12:51:54

Let's say you find a really cool forum online that has lots of good advice on it. It's even got a very active community that's happy to answer questions very quickly, and the community seems to have a wealth of knowledge about all sorts of subjects.
You end up visiting this community often and trusting its advice for all sorts of everyday questions, the kind you might previously have answered with a web search (of course, web search is now full of SEO spam and other crap, so it's become nearly useless).
Then one day, you ask an innocuous question about medicine, and from this community you get the full homeopathy treatment as your answer. It's somewhat believable on the face of it and includes lots of citations to reasonable-seeming articles, except that if you know even a tiny bit about chemistry and biology (which thankfully you do), you know that the homeopathy answers are completely bogus and horribly dangerous (since they offer non-treatments for real diseases). Your opinion of this entire forum suddenly changes. "Oh my God, if they've been homeopathy believers all this time, what other myths have they fed me as facts?"
You stop using the forum for anything and go back to slogging through SEO crap to answer your everyday questions, because once you realize this forum is a community that's fundamentally untrustworthy, the value of getting advice from it on any subject is negative: you knew enough to spot the dangerous homeopathy answer, but there might be other such myths you don't know enough to avoid, and any community willing to go all-in on one myth has shown itself capable of going all-in on any number of others.
...
This has been a parable about large language models.
#AI #LLM

@arXiv_csDB_bot@mastoxiv.page
2025-05-30 07:17:10

KVzip: Query-Agnostic KV Cache Compression with Context Reconstruction
Jang-Hyun Kim, Jinuk Kim, Sangwoo Kwon, Jae W. Lee, Sangdoo Yun, Hyun Oh Song
arxiv.org/abs/2505.23416

@arXiv_physicsgeoph_bot@mastoxiv.page
2025-05-28 07:34:40

SeisCoDE: 3D Seismic Interpretation Foundation Model with Contrastive Self-Distillation Learning
Goodluck Archibong, Ardiansyah Koeshidayatullah, Umair Waheed, Weichang Li, Dicky Harishidayat, Motaz Alfarraj
arxiv.org/abs/2505.20518

@ckent@urbanists.social
2025-05-09 02:03:25

youtube.com/watch?v=wfpjNdhpMz
You know when a foreigner teaches you more about your own country than you know? This guy is such a good subject-matter expert on languages that, almost by accident, he knows things about Australian languages and culture that I guarantee you won…