Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csCY_bot@mastoxiv.page
2025-09-09 09:19:32

User Privacy and Large Language Models: An Analysis of Frontier Developers' Privacy Policies
Jennifer King, Kevin Klyman, Emily Capstick, Tiffany Saade, Victoria Hsieh
arxiv.org/abs/2509.05382

@tiotasram@kolektiva.social
2025-08-02 13:28:40

How to tell if a vibe coder is lying when they say they check their code.
People who will admit to using LLMs to write code will usually claim that they "carefully check" the output, since we all know that LLM code has a lot of errors in it. This is insufficient to address several problems that LLMs cause, including labor issues, digital commons stress/pollution, license violations, and environmental issues, but at least if they're checking their code carefully, we shouldn't assume that it's any worse quality-wise than human-authored code, right?
Well, from first principles alone we can expect it to be worse, since checking code the AI wrote is a much more boring task than writing code yourself, so anyone who has ever studied human-computer interaction even a little bit can predict that people will quickly slack off, starting to trust the AI way too much, because it's less work. In a different domain, the journalist who published an entire "summer reading list" full of nonexistent titles is a great example of this. I'm sure he also intended to carefully check the AI output, but then got lazy. Clearly he did not have a good grasp of the likely failure modes of the tool he was using.
But for vibe coders, there's one easy tell we can look for, at least in some cases: coding in Python without type hints. To be clear, this doesn't apply to novice coders, who might not be aware that type hints are an option. But any serious Python software engineer, whether or not they've used type hints before, knows that they're an option. And if you know they're an option, you also know they're an excellent tool for catching code defects, with a very low effort:reward ratio, especially if we assume an LLM generates them. In 95% of the cases where adding types requires any thought at all, that thought is a chance to improve your code design and make it more robust. Knowing about but not using type hints in Python is a great sign that you don't care very much about code quality. That's totally fine in many cases: I've got a few demos and jam games in Python with no type hints, and it's okay that they're buggy; I was never going to debug them to a polished level anyway. But if we're talking about a vibe coder who claims they're taking extra care to check for the (frequent) LLM-induced errors, that's not the situation.
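(A minimal sketch of the effort:reward point, not from the original post; the names find_user and greet are invented for illustration. A couple of annotations let a checker such as mypy catch a forgotten-None case, a classic LLM-style slip, before the code ever runs.)

```python
from typing import Optional

def find_user(users: dict[str, int], name: str) -> Optional[int]:
    # .get() returns None for a missing key; the return annotation
    # forces every caller to confront that possibility.
    return users.get(name)

def greet(user_id: int) -> str:
    return f"Hello, user #{user_id}"

# Running mypy on this file flags the call below: find_user may
# return None, but greet requires an int. Without the annotations
# the bug only shows up at runtime, as "Hello, user #None".
print(greet(find_user({"ada": 1}, "bob")))
```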
Note that this shouldn't be read as an endorsement of vibe coding for demos or other rough-is-acceptable code: the other ethical issues I skipped past at the start still make it unethical to use in all but a few cases (for example, I have my students use it for a single assignment so they can see for themselves how it's not all it's cracked up to be, and even then they have an option to observe a pre-recorded prompt session instead).

@rasterweb@mastodon.social
2025-07-29 21:18:02

Canceled the Google account for my small business... Less money going to AI bullshit and Google's evil practices.
And I let them know.
(I had to click through a lot of screens where they tried to get me to stay, offered discounts, or offered to archive data for a few dollars a month. No thanks.)
#noAI

Let us know why you are cancelling your subscription (Select all that apply).
[ ] Lacked some features I needed
[ ] Business or organization shut down
[ ] Don't use it enough
[x] Will use another productivity tool
[ ] Difficult to use or set up
[ ] Cost reduction
[ ] Creating a Workspace account to replace this one
[x] I do not support the use of AI.

Do not include personal information

Next

By continuing, you agree Google uses your answers, account & system info to improve services, per…

@arXiv_csAI_bot@mastoxiv.page
2025-09-03 13:47:13

Physics Supernova: AI Agent Matches Elite Gold Medalists at IPhO 2025
Jiahao Qiu, Jingzhe Shi, Xinzhe Juan, Zelin Zhao, Jiayi Geng, Shilong Liu, Hongru Wang, Sanfeng Wu, Mengdi Wang
arxiv.org/abs/2509.01659

@arXiv_csHC_bot@mastoxiv.page
2025-08-20 09:16:00

Learning to Use AI for Learning: How Can We Effectively Teach and Measure Prompting Literacy for K-12 Students?
Ruiwei Xiao, Xinying Hou, Ying-Jui Tseng, Hsuan Nieu, Guanze Liao, John Stamper, Kenneth R. Koedinger
arxiv.org/abs/2508.13962

@arXiv_csDB_bot@mastoxiv.page
2025-09-04 09:25:21

NeurStore: Efficient In-database Deep Learning Model Management System
Siqi Xiang, Sheng Wang, Xiaokui Xiao, Cong Yue, Zhanhao Zhao, Beng Chin Ooi
arxiv.org/abs/2509.03228

@arXiv_csCY_bot@mastoxiv.page
2025-09-03 10:40:13

Pilot Study on Generative AI and Critical Thinking in Higher Education Classrooms
W. F. Lamberti, S. R. Lawrence, D. White, S. Kim, S. Abdullah
arxiv.org/abs/2509.00167

@arXiv_csCL_bot@mastoxiv.page
2025-06-12 08:55:21

PersonaLens: A Benchmark for Personalization Evaluation in Conversational AI Assistants
Zheng Zhao, Clara Vania, Subhradeep Kayal, Naila Khan, Shay B. Cohen, Emine Yilmaz
arxiv.org/abs/2506.09902

@arXiv_csSI_bot@mastoxiv.page
2025-06-23 09:18:30

Unpacking Generative AI in Education: Computational Modeling of Teacher and Student Perspectives in Social Media Discourse
Paulina DeVito, Akhil Vallala, Sean Mcmahon, Yaroslav Hinda, Benjamin Thaw, Hanqi Zhuang, Hari Kalva
arxiv.org/abs/2506.16412

@arXiv_csLG_bot@mastoxiv.page
2025-07-17 10:13:10

Kevin: Multi-Turn RL for Generating CUDA Kernels
Carlo Baronio, Pietro Marsella, Ben Pan, Simon Guo, Silas Alberti
arxiv.org/abs/2507.11948

@arXiv_csGT_bot@mastoxiv.page
2025-08-14 08:40:12

Collective dynamics of strategic classification
Marta C. Couto, Flavia Barsotti, Fernando P. Santos
arxiv.org/abs/2508.09340 arxiv.org/pdf/…

@arXiv_csCY_bot@mastoxiv.page
2025-07-18 08:33:42

The Case for Contextual Copyleft: Licensing Open Source Training Data and Generative AI
Grant Shanklin, Emmie Hine, Claudio Novelli, Tyler Schroder, Luciano Floridi
arxiv.org/abs/2507.12713