Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_csCL_bot@mastoxiv.page
2026-03-31 11:12:48

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[2/5]:
- POTSA: A Cross-Lingual Speech Alignment Framework for Speech-to-Text Translation
Li, Cui, Wang, Ge, Huang, Li, Peng, Lu, Tashi, Wang, Dang
arxiv.org/abs/2511.09232 mastoxiv.page/@arXiv_csCL_bot/
- Beyond Elicitation: Provision-based Prompt Optimization for Knowledge-Intensive Tasks
Yunzhe Xu, Zhuosheng Zhang, Zhe Liu
arxiv.org/abs/2511.10465 mastoxiv.page/@arXiv_csCL_bot/
- π-Attention: Periodic Sparse Transformers for Efficient Long-Context Modeling
Dong Liu, Yanxuan Yu
arxiv.org/abs/2511.10696 mastoxiv.page/@arXiv_csCL_bot/
- Based on Data Balancing and Model Improvement for Multi-Label Sentiment Classification Performanc...
Zijin Su, Huanzhu Lyu, Yuren Niu, Yiming Liu
arxiv.org/abs/2511.14073 mastoxiv.page/@arXiv_csCL_bot/
- HEAD-QA v2: Expanding a Healthcare Benchmark for Reasoning
Alexis Correa-Guillén, Carlos Gómez-Rodríguez, David Vilares
arxiv.org/abs/2511.15355 mastoxiv.page/@arXiv_csCL_bot/
- Towards Hyper-Efficient RAG Systems in VecDBs: Distributed Parallel Multi-Resolution Vector Search
Dong Liu, Yanxuan Yu
arxiv.org/abs/2511.16681 mastoxiv.page/@arXiv_csCL_bot/
- Estonian WinoGrande Dataset: Comparative Analysis of LLM Performance on Human and Machine Transla...
Marii Ojastu, Hele-Andra Kuulmets, Aleksei Dorkin, Marika Borovikova, Dage Särg, Kairit Sirts
arxiv.org/abs/2511.17290 mastoxiv.page/@arXiv_csCL_bot/
- A Systematic Study of In-the-Wild Model Merging for Large Language Models
Oğuz Kağan Hitit, Leander Girrbach, Zeynep Akata
arxiv.org/abs/2511.21437 mastoxiv.page/@arXiv_csCL_bot/
- CREST: Universal Safety Guardrails Through Cluster-Guided Cross-Lingual Transfer
Lavish Bansal, Naman Mishra
arxiv.org/abs/2512.02711 mastoxiv.page/@arXiv_csCL_bot/
- Multilingual Medical Reasoning for Question Answering with Large Language Models
Pietro Ferrazzi, Aitor Soroa, Rodrigo Agerri
arxiv.org/abs/2512.05658 mastoxiv.page/@arXiv_csCL_bot/
- OnCoCo 1.0: A Public Dataset for Fine-Grained Message Classification in Online Counseling Convers...
Albrecht, Lehmann, Poltermann, Rudolph, Steigerwald, Stieler
arxiv.org/abs/2512.09804 mastoxiv.page/@arXiv_csCL_bot/
- Does Tone Change the Answer? Evaluating Prompt Politeness Effects on Modern LLMs: GPT, Gemini, an...
Hanyu Cai, Binqi Shen, Lier Jin, Lan Hu, Xiaojing Fan
arxiv.org/abs/2512.12812 mastoxiv.page/@arXiv_csCL_bot/
- Beg to Differ: Understanding Reasoning-Answer Misalignment Across Languages
Ovalle, Ross, Ruder, Williams, Ullrich, Ibrahim, Sagun
arxiv.org/abs/2512.22712 mastoxiv.page/@arXiv_csCL_bot/
- Activation Steering for Masked Diffusion Language Models
Adi Shnaidman, Erin Feiglin, Osher Yaari, Efrat Mentel, Amit Levi, Raz Lapid
arxiv.org/abs/2512.24143 mastoxiv.page/@arXiv_csCL_bot/
- JMedEthicBench: A Multi-Turn Conversational Benchmark for Evaluating Medical Safety in Japanese L...
Liu, Li, Niu, Zhang, Xun, Hou, Wang, Iwasawa, Matsuo, Hatakeyama-Sato
arxiv.org/abs/2601.01627 mastoxiv.page/@arXiv_csCL_bot/
- FACTUM: Mechanistic Detection of Citation Hallucination in Long-Form RAG
Dassen, Kotula, Murray, Yates, Lawrie, Kayi, Mayfield, Duh
arxiv.org/abs/2601.05866 mastoxiv.page/@arXiv_csCL_bot/
- †DAGGER: Distractor-Aware Graph Generation for Executable Reasoning in Math Problems
Zabir Al Nazi, Shubhashis Roy Dipta, Sudipta Kar
arxiv.org/abs/2601.06853 mastoxiv.page/@arXiv_csCL_bot/
- Symphonym: Universal Phonetic Embeddings for Cross-Script Name Matching
Stephen Gadd
arxiv.org/abs/2601.06932 mastoxiv.page/@arXiv_csCL_bot/
- LLMs versus the Halting Problem: Revisiting Program Termination Prediction
Sultan, Armengol-Estape, Kesseli, Vanegue, Shahaf, Adi, O'Hearn
arxiv.org/abs/2601.18987 mastoxiv.page/@arXiv_csCL_bot/
- MuVaC: A Variational Causal Framework for Multimodal Sarcasm Understanding in Dialogues
Diandian Guo, Fangfang Yuan, Cong Cao, Xixun Lin, Chuan Zhou, Hao Peng, Yanan Cao, Yanbing Liu
arxiv.org/abs/2601.20451 mastoxiv.page/@arXiv_csCL_bot/
toXiv_bot_toot

@raiders@darktundra.xyz
2026-03-11 21:38:36

Jets agree to terms with ex-Raider Dylan Parham, likely new starter at guard: Source nytimes.com/athletic/7109493/2

@aredridel@kolektiva.social
2026-03-11 22:11:35

RE: follow.ethanmarcotte.com/@beep
This is very good. While I don't think the personal choices are available quite the way this makes it seem, I deeply agree with a lot of the core sentiments: thinking for ourselves, and working slowly to make what's new and good rather than what's cliché and expected, is very good, and we need to make that choice. A lot.

@matematico314@social.linux.pizza
2026-01-04 21:36:13

I ran a diff -r between two directories and it has now been running for over an hour. Only now did I stop to think that these are large directories (87 GB) and that one of them is a directory inside the pcloud virtual drive (i.e., I think it works like a remote directory). I suspect that even if I leave this computer on all night, the command won't finish running. 😬
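(Editorial aside: a full `diff -r` reads every byte of both trees, which is brutal over a remote mount. A minimal sketch of a cheaper first pass with GNU diff's `--brief` mode, which only reports *which* files differ; `dir_a`/`dir_b` are placeholder paths, not the poster's actual directories:)

```shell
# Build two tiny example trees to compare.
mkdir -p dir_a dir_b
echo "same" > dir_a/common.txt
echo "same" > dir_b/common.txt
echo "only here" > dir_a/extra.txt

# -r: recurse; -q (--brief): report names of differing files only,
# without producing full content diffs.
# diff exits 1 when differences are found, so guard with || true.
diff -rq dir_a dir_b || true
# prints: Only in dir_a: extra.txt
```

Over a remote filesystem this still has to read both copies of each common file, so a metadata-based tool (e.g. `rsync --dry-run`) can be cheaper still.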

@mgorny@social.treehouse.systems
2026-01-18 18:04:19

Cynicism, "AI"
Someone pointed me to the "Reflections on 2025" post by Samuel Albanie [1]. The author's writing style makes it quite a fun read, I admit.
The first part, "The Compute Theory of Everything", is an optimistic piece on "#AI". Long story short, poor "AI researchers" have been struggling for years because of the predominant misconception that "machines should have been powerful enough". Fortunately, now they can finally get their hands on the kind of power that used to be available only to supervillains, and all they have to do is forget about morals, agree that their research will be used to murder millions of people, and accept that a few million more will die as a side effect of the climate crisis. But I'm digressing.
The author is referring to an essay by Hans Moravec, "The Role of Raw Power in Intelligence" [2]. It's also quite an interesting read, starting with a chapter on how intelligence evolved independently at least four times. The key point inferred from that seems to be that all we need is more computing power, and we'll eventually "brute-force" all AI-related problems (or die trying, I guess).
As a disclaimer, I have to say I'm not a biologist — just a random guy who has read a fair number of pieces on evolution. And I feel the analogies drawn here are misleading at best.
Firstly, there seems to be an assumption that evolution inexorably leads to higher "intelligence", with a certain implicit assumption about what intelligence is. Per that assumption, any animal that gets "brainier" will eventually become intelligent. However, this seems to miss the point that neither evolution nor learning operates in a void.
Yes, many animals did attain a certain level of intelligence, but they attained it in a long chain of development, while solving specific problems, in specific bodies, in specific environments. I don't think that you can just stuff more brains into a random animal, and expect it to attain human intelligence; and the same goes for a computer — you can't expect that given more power, algorithms will eventually converge on human-like intelligence.
Secondly, and perhaps more importantly, what evolution succeeded at first was producing neural networks that are far more energy efficient than whatever computers are doing today. Even if "computing power" did indeed pave the way for intelligence, what came first was extremely efficient "hardware". Nowadays, humans seem to be skipping that part. Optimizing is hard, so why bother with it? We can afford bigger data centers, we can afford to waste more energy, we can afford to deprive people of drinking water, so let's just skip to the easy part!
And on top of that, we're trying to squash hundreds of millions of years of evolution into… a decade, perhaps? What could possibly go wrong?
[1] #NoAI #NoLLM #LLM