Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csHC_bot@mastoxiv.page
2025-07-14 07:33:51

Human vs. LLM-Based Thematic Analysis for Digital Mental Health Research: Proof-of-Concept Comparative Study
Karisa Parkington, Bazen G. Teferra, Marianne Rouleau-Tang, Argyrios Perivolaris, Alice Rueda, Adam Dubrowski, Bill Kapralos, Reza Samavi, Andrew Greenshaw, Yanbo Zhang, Bo Cao, Yuqi Wu, Sirisha Rambhatla, Sridhar Krishnan, Venkat Bhat

@arXiv_csSE_bot@mastoxiv.page
2025-08-07 08:55:54

A Human Centric Requirements Engineering Framework for Assessing Github Copilot Output
Soroush Heydari
arxiv.org/abs/2508.03922 arxiv.org/p…

@midtsveen@social.linux.pizza
2025-09-08 18:57:43

Good riddance to Anita Bryant, who died on December 16, 2024.
Now we just have to wait for the day someone finally pies JK Rowling, she needs a wake-up call before that transphobic, cruel troll kicks the bucket.
Someone’s gotta bring her down a peg for all the hateful nonsense she’s spewed against the trans community.
:heart_trans: Trans Rights Are Human Rights :heart_trans:
#LGBTQIA

Anita Bryant, the anti-gay rights activist and Florida orange juice spokesperson, having a strawberry-rhubarb pie thrown in her face by gay activist Thom Higgins during a 1977 public appearance in Des Moines, Iowa. Covered in pie, Bryant reacts emotionally, later praying on camera.

@arXiv_csHC_bot@mastoxiv.page
2025-07-09 08:57:32

Information Needs and Practices Supported by ChatGPT
Tim Gorichanaz
arxiv.org/abs/2507.05537 arxiv.org/pdf/2507.05537…

@mariyadelano@hachyderm.io
2025-08-07 15:54:12

I really really really hate how much people in my field and industry have normalized generative #AI use.
I see posts / hear comments literally EVERY DAY to the tune of “can people stop complaining about AI, nobody cares. You’re not morally better,” followed up by something about “you’re making work harder than it needs to be,” and often “nobody values human-made work more, they only care about the final output no matter how it was created.”
I usually ignore these conversations but sometimes it really gets to me. It’s so hard to feel sane surrounded by that consensus every day, everywhere I go with people in my profession.
I’ve rarely felt so judged by the majority point of view on anything in my work before.

@Techmeme@techhub.social
2025-06-19 06:06:10

Q&A with Hugging Face Chief Ethics Scientist Margaret Mitchell on aligning AI development with human needs, the "illusion of consensus" around AGI, and more (Melissa Heikkilä/Financial Times)
ft.com/content/7089bff2-25fc-4

@arXiv_csRO_bot@mastoxiv.page
2025-08-07 09:15:43

NavA³: Understanding Any Instruction, Navigating Anywhere, Finding Anything
Lingfeng Zhang, Xiaoshuai Hao, Yingbo Tang, Haoxiang Fu, Xinyu Zheng, Pengwei Wang, Zhongyuan Wang, Wenbo Ding, Shanghang Zhang
arxiv.org/abs/2508.04598

@markhburton@mstdn.social
2025-07-29 12:13:25

"The European Court of Human Rights is not a foreign court. It was set up by countries, including the UK, in the aftermath of the Second World War to protect people from tyranny."
6 reasons the UK needs to stay in the #ECHR: Jessica Simor KC

@tante@tldr.nettime.org
2025-07-30 08:08:14

I got myself a ticket to see @… in Berlin soon. Get them while they last.
mastodon.social/@publix/114935

@arXiv_qbioNC_bot@mastoxiv.page
2025-09-10 09:17:51

Computational Concept of the Psyche
Anton Kolonin, Vladimir Kryukov
arxiv.org/abs/2509.07009 arxiv.org/pdf/2509.07009

@inthehands@hachyderm.io
2025-07-30 15:48:42

By all reports, DOGE’s tools were hot crap, and I have serious doubts about whether any of the engineers involved were any good. But what Weissmann is saying about this •belief•? Agree completely.
That particular form of engineering arrogance that imagines Smart Boys with Fancy Tech can magically untangle complex human systems needs to die a quick death. It’s not good public policy. It’s not even good engineering.
via @…: toad.social/@KimPerales/114942

@pre@boing.world
2025-06-20 22:54:36
Content warning: Doctor Who - Future, why Billie?
:tardis:

There's a woman I know who, when she was pregnant, was very keen to hear the opinions of crystal diviners and homeopaths on what sex her new baby would be, but wouldn't let the ultrasound-scan technician who actually knows tell her, because Spoilers.
On that note, I'm happy to watch #doctorWho #badWolf #tv

@arXiv_astrophIM_bot@mastoxiv.page
2025-07-04 09:05:31

Image Marker
Ryan Walker, Andi Kisare, Lindsey Bleem
arxiv.org/abs/2507.02153 arxiv.org/pdf/2507.02153

@arXiv_econGN_bot@mastoxiv.page
2025-07-10 07:46:21

The Post Science Paradigm of Scientific Discovery in the Era of Artificial Intelligence: Modelling the Collapse of Ideation Costs, Epistemic Inversion, and the End of Knowledge Scarcity
Christian William Callaghan
arxiv.org/abs/2507.07019

@arXiv_csCL_bot@mastoxiv.page
2025-07-24 08:09:19

Text-to-SPARQL Goes Beyond English: Multilingual Question Answering Over Knowledge Graphs through Human-Inspired Reasoning
Aleksandr Perevalov, Andreas Both
arxiv.org/abs/2507.16971

@arXiv_csSE_bot@mastoxiv.page
2025-09-03 09:16:23

REConnect: Participatory RE that Matters
Daniela Damian, Bachan Ghimire, Ze Shi Li
arxiv.org/abs/2509.01006 arxiv.org/pdf/2509.01006

@arXiv_csAI_bot@mastoxiv.page
2025-08-29 07:31:40

The Anatomy of a Personal Health Agent
A. Ali Heydari, Ken Gu, Vidya Srinivas, Hong Yu, Zhihan Zhang, Yuwei Zhang, Akshay Paruchuri, Qian He, Hamid Palangi, Nova Hammerquist, Ahmed A. Metwally, Brent Winslow, Yubin Kim, Kumar Ayush, Yuzhe Yang, Girish Narayanswamy, Maxwell A. Xu, Jake Garrison, Amy Aremnto Lee, Jenny Vafeiadou, Ben Graef, Isaac R. Galatzer-Levy, Erik Schenck, Andrew Barakat, Javier Perez, Jacqueline Shreibati, John Hernandez, Anthony Z. Faranesh, Javier L. Prieto, Conn…

@arXiv_csNI_bot@mastoxiv.page
2025-08-04 08:04:41

Agent Network Protocol Technical White Paper
Gaowei Chang, Eidan Lin, Chengxuan Yuan, Rizhao Cai, Binbin Chen, Xuan Xie, Yin Zhang
arxiv.org/abs/2508.00007

@arXiv_csIR_bot@mastoxiv.page
2025-06-23 08:24:40

MoR: Better Handling Diverse Queries with a Mixture of Sparse, Dense, and Human Retrievers
Jushaan Singh Kalra, Xinran Zhao, To Eun Kim, Fengyu Cai, Fernando Diaz, Tongshuang Wu
arxiv.org/abs/2506.15862

@arXiv_csHC_bot@mastoxiv.page
2025-08-18 09:22:40

Toward Needs-Conscious Design: Co-Designing a Human-Centered Framework for AI-Mediated Communication
Robert Wolfe, Aayushi Dangol, JaeWon Kim, Alexis Hiniker
arxiv.org/abs/2508.11149

@arXiv_csSD_bot@mastoxiv.page
2025-09-01 07:52:53

RARR: Robust Real-World Activity Recognition with Vibration by Scavenging Near-Surface Audio Online
Dong Yoon Lee, Alyssa Weakley, Hui Wei, Blake Brown, Keyana Carrion, Shijia Pan
arxiv.org/abs/2508.21167

@arXiv_csMA_bot@mastoxiv.page
2025-07-30 07:47:31

Replicating the behaviour of electric vehicle drivers using an agent-based reinforcement learning model
Zixin Feng, Qunshan Zhao, Alison Heppenstall
arxiv.org/abs/2507.21341

@arXiv_csHC_bot@mastoxiv.page
2025-07-01 10:32:43

A User Experience 3.0 (UX 3.0) Paradigm Framework: Designing for Human-Centered AI Experiences
Wei Xu
arxiv.org/abs/2506.23116

@arXiv_csRO_bot@mastoxiv.page
2025-07-22 11:17:40

Estimation of Payload Inertial Parameters from Human Demonstrations by Hand Guiding
Johannes Hartwig, Philipp Lienhardt, Dominik Henrich
arxiv.org/abs/2507.15604

@arXiv_csCL_bot@mastoxiv.page
2025-06-23 12:12:40

Towards AI Search Paradigm
Yuchen Li, Hengyi Cai, Rui Kong, Xinran Chen, Jiamin Chen, Jun Yang, Haojie Zhang, Jiayi Li, Jiayi Wu, Yiqun Chen, Changle Qu, Keyi Kong, Wenwen Ye, Lixin Su, Xinyu Ma, Long Xia, Daiting Shi, Jiashu Zhao, Haoyi Xiong, Shuaiqiang Wang, Dawei Yin
arxiv.org/abs/2506.17188

@arXiv_csNI_bot@mastoxiv.page
2025-07-22 09:05:10

White paper: Towards Human-centric and Sustainable 6G Services -- the fortiss Research Perspective
Rute C. Sofia, Hao Shen, Yuanting Liu, Severin Kacianka, Holger Pfeifer
arxiv.org/abs/2507.14209

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. It makes the code more error-prone and harder to debug, because the copies can drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy).
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules, and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement.
Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem. But still, you could always copy their project and maintain your own version, and it wouldn't be much more work than if you had implemented everything yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular.
The network of "citations" that forms as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
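To make the function-level version of that concrete, here's a minimal Python sketch of the refactor DRY asks for (the discount example and all names in it are invented for illustration): two call sites that once carried pasted-in copies of the same logic now share one definition.

    # Before: the same discount logic pasted into two places, which can
    # drift apart the moment one copy is edited and the other isn't.
    #   price_a = total_a - total_a * 0.1 if is_member else total_a
    #   price_b = total_b - total_b * 0.1 if is_member else total_b

    def apply_member_discount(total, is_member, rate=0.1):
        """Return the total after the membership discount, if any."""
        return total * (1 - rate) if is_member else total

    # After: both call sites reference the single definition, so any fix
    # to the discount logic happens in exactly one place.
    price_a = apply_member_discount(100.0, is_member=True)    # 90.0
    price_b = apply_member_discount(250.0, is_member=False)   # 250.0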
Unfortunately, the current crop of hyped-up LLM coding systems from the big players is antithetical to DRY at all scales:
- At the library scale, they train on open-source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility, and therefore maintainability, correctness, and security, will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances, because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and a source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures when generating function calls (see the sketch after this list); even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger codebase written with LLM tools will have significant bloat from duplicated functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn; while models can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks they're asymptotically likely to violate it.
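The "lookup tools" idea isn't exotic: Python's standard library can already report a function's true signature, which is exactly the context a code generator could consult instead of guessing. A minimal sketch of that checking step (my illustration of the idea, not any existing LLM tool):

    import inspect
    import json

    # Ask Python itself for the real parameters of an existing function.
    print(inspect.signature(json.dumps))
    # -> (obj, *, skipkeys=False, ensure_ascii=True, check_circular=True,
    #     allow_nan=True, cls=None, indent=None, separators=None,
    #     default=None, sort_keys=False, **kw)

    def call_is_plausible(func, *args, **kwargs):
        """Return True if the arguments actually bind to func's signature."""
        try:
            inspect.signature(func).bind(*args, **kwargs)
            return True
        except TypeError:
            return False

    print(call_is_plausible(json.dumps, {"a": 1}, indent=2))  # True
    print(call_is_plausible(json.dumps, {"a": 1}, indnet=2))  # False: typo'd kwarg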
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them, you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open-source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI.
#AI #GenAI #LLMs #VibeCoding

@arXiv_csRO_bot@mastoxiv.page
2025-07-22 11:46:10

Interleaved LLM and Motion Planning for Generalized Multi-Object Collection in Large Scene Graphs
Ruochu Yang, Yu Zhou, Fumin Zhang, Mengxue Hou
arxiv.org/abs/2507.15782

@arXiv_csHC_bot@mastoxiv.page
2025-08-28 09:25:11

"She was useful, but a bit too optimistic": Augmenting Design with Interactive Virtual Personas
Paluck Deep, Monica Bharadhidasan, A. Baki Kocaballi
arxiv.org/abs/2508.19463

@arXiv_csHC_bot@mastoxiv.page
2025-06-17 09:55:05

The Rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being
Yutong Zhang, Dora Zhao, Jeffrey T. Hancock, Robert Kraut, Diyi Yang
arxiv.org/abs/2506.12605

@arXiv_csHC_bot@mastoxiv.page
2025-08-01 09:44:51

Breaking the mould of Social Mixed Reality -- State-of-the-Art and Glossary
Marta Bieńkiewicz, Julia Ayache, Panayiotis Charalambous, Cristina Becchio, Marco Corragio, Bertram Taetz, Francesco De Lellis, Antonio Grotta, Anna Server, Daniel Rammer, Richard Kulpa, Franck Multon, Azucena Garcia-Palacios, Jessica Sutherland, Kathleen Bryson, Stéphane Donikian, Didier Stricker, Benoît Bardy

@tiotasram@kolektiva.social
2025-07-19 07:51:05

AI, AGI, and learning efficiency
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak & understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously *high* estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities are developed in many children. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
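For anyone who wants to check that upper bound, here is the arithmetic from the paragraph above as a few lines of Python:

    # Deliberately generous upper bound on words a child hears by age 4.
    words_per_minute = 100
    minutes_per_hour = 60
    hours_per_day = 12
    days_per_year = 365
    years = 4

    total_words = (words_per_minute * minutes_per_hour * hours_per_day
                   * days_per_year * years)
    print(f"{total_words:,}")  # 105,120,000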
The point here is that large language models, trained as they are on multiple *billions* of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one (see e.g. …). #AI #LLM #AGI