Tootfinder

Opt-in global Mastodon full text search. Join the index!

@mia@hcommons.social
2025-09-19 14:22:23

Some nice examples in the 'use cases' section of AI for Humanists aiforhumanists.com/guides/usec - from OCR to annotation to identifying voices and styles

@arXiv_csSE_bot@mastoxiv.page
2025-09-22 08:44:51

Software Development Aspects of Integrating Linear Algebra Libraries
Marcel Koch, Tobias Ribizel, Pratik Nayak, Fritz Göbel, Gregor Olenik, Terry Cojean
arxiv.org/abs/2509.16081

@arXiv_csLG_bot@mastoxiv.page
2025-09-22 10:23:51

Targeted Fine-Tuning of DNN-Based Receivers via Influence Functions
Marko Tuononen, Heikki Penttinen, Ville Hautamäki
arxiv.org/abs/2509.15950

@arXiv_csCY_bot@mastoxiv.page
2025-08-21 07:56:20

Documenting Deployment with Fabric: A Repository of Real-World AI Governance
Mackenzie Jorgensen, Kendall Brogle, Katherine M. Collins, Lujain Ibrahim, Arina Shah, Petra Ivanovic, Noah Broestl, Gabriel Piles, Paul Dongha, Hatim Abdulhussein, Adrian Weller, Jillian Powers, Umang Bhatt
arxiv.org/abs/2508.14119

@aardrian@toot.cafe
2025-10-20 21:46:11

I, too, think some of the efforts of the CSSWG have gotten ahead of the use cases and I, too, think catching our breath would be good.
“Chris’ Corner: Stage 2”
blog.codepen.io/2025/10/20/chr

If we’re entering an era where CSS innovation slows down a little and we catch our breath with Stage 2 sorta features and figuring out what to do with these new features, I’m cool with that. Sorta like…
• We’ve got corner-shape, so what can we actually do with it?
• We’ve got @layer now, how do we actually get it into a project?
• We’ve got View Transitions now, maybe we actually need to scope them for variety of real-world situations.

@arXiv_csCR_bot@mastoxiv.page
2025-09-23 10:34:41

In Numeris Veritas: An Empirical Measurement of Wi-Fi Integration in Industry
Vyron Kampourakis, Christos Smiliotopoulos, Vasileios Gkioulos, Sokratis Katsikas
arxiv.org/abs/2509.16987

@arXiv_mathNT_bot@mastoxiv.page
2025-09-23 08:21:00

Twisted Malle's Conjecture
Tanav Choudhary
arxiv.org/abs/2509.16770 arxiv.org/pdf/2509.16770

@arXiv_csIR_bot@mastoxiv.page
2025-08-21 09:03:20

Retrieval-Augmented Generation in Industry: An Interview Study on Use Cases, Requirements, Challenges, and Evaluation
Lorenz Brehme, Benedikt Dornauer, Thomas Ströhle, Maximilian Ehrhart, Ruth Breu
arxiv.org/abs/2508.14066

@Techmeme@techhub.social
2025-09-15 08:45:45

OpenAI releases the first detailed public study on how people use ChatGPT: 73% of chats were non-work related, practical guidance was the top use case, and more (Gerrit De Vynck/Washington Post)
washingtonpost.com/technology/

@servelan@newsie.social
2025-10-20 02:07:44

Aaron Rupar @atrupar.com
Trump: "I'm allowed as you know as president, like 50% of the presidents have used the Insurrection Act. Everybody agrees you're allowed to use that and there is no more court cases, there is no more anything. We're trying to do it in a nicer manner, but we can always use the Insurrection Act." — Bluesky
bsky.app/profile/atrupar.com/p

@geant@mstdn.social
2025-08-20 12:33:09

🌟 New SIGs Spotlight: SIG-AI 🌟
A new space for collaboration on Artificial Intelligence within the NREN community is here.
SIG-AI brings the Research & Education community together to share expertise, best practices, and explore practical use cases of AI in NREN context—from cybersecurity and High-Performance Computing (HPC) to network automation and next-generation networks.
📖 For more insights, read the full interview with Leonie Schäfer (@…

Picture from the SIG-AI meeting in Prague on April 7, 2025. Credits to Leonie Schäfer, DFN.
Picture from the SIG-AI meeting in Prague in December 2024.

@arXiv_csSD_bot@mastoxiv.page
2025-07-23 09:08:52

SDBench: A Comprehensive Benchmark Suite for Speaker Diarization
Eduardo Pacheco, Atila Orhon, Berkin Durmus, Blaise Munyampirwa, Andrey Leonov
arxiv.org/abs/2507.16136

@seeingwithsound@mas.to
2025-08-19 07:49:42

(LinkedIn) Revision Implant is despite its name already quickly looking for markets beyond visual prostheses linkedin.com/posts/revision-im

@arXiv_csHC_bot@mastoxiv.page
2025-09-19 08:43:11

Why Johnny Can't Use Agents: Industry Aspirations vs. User Realities with AI Agent Software
Pradyumna Shome, Sashreek Krishnan, Sauvik Das
arxiv.org/abs/2509.14528

@arXiv_csCL_bot@mastoxiv.page
2025-08-21 09:41:20

ISCA: A Framework for Interview-Style Conversational Agents
Charles Welch, Allison Lahnala, Vasudha Varadarajan, Lucie Flek, Rada Mihalcea, J. Lomax Boyd, João Sedoc
arxiv.org/abs/2508.14344

@arXiv_csSE_bot@mastoxiv.page
2025-08-20 07:39:09

A Comparative Study of Delta Parquet, Iceberg, and Hudi for Automotive Data Engineering Use Cases
Dinesh Eswararaj, Ajay Babu Nellipudi, Vandana Kollati
arxiv.org/abs/2508.13396

@arXiv_csCR_bot@mastoxiv.page
2025-09-23 11:32:20

Community Covert Communication - Dynamic Mass Covert Communication Through Social Media
Eric Filiol
arxiv.org/abs/2509.17508 arxiv.org/pdf/…

@michabbb@social.vivaldi.net
2025-09-20 06:53:09

📊 Versatile use cases include summarizing articles, explaining complex concepts, testing knowledge, modifying recipes, comparing products, and making informed decisions
✍️ Get key takeaways from articles, pages, or discussion threads without leaving your current browsing session, maintaining focus and workflow efficiency
🔍 Ask questions about content you're reading and receive relevant answers and explanations using the current page's information for accurate context

@arXiv_csAI_bot@mastoxiv.page
2025-08-20 09:43:50

Discrete Optimization of Min-Max Violation and its Applications Across Computational Sciences
Cheikh Ahmed, Mahdi Mostajabdaveh, Samin Aref, Zirui Zhou
arxiv.org/abs/2508.13437

@arXiv_csCV_bot@mastoxiv.page
2025-08-20 10:16:10

Shape-from-Template with Generalised Camera
Agniva Sengupta, Stefan Zachow
arxiv.org/abs/2508.13791 arxiv.org/pdf/2508.13791

@memeorandum@universeodon.com
2025-09-08 00:45:35

Perils of the Pentagon's Plan to Use Military Lawyers to Adjudicate Immigration Cases (Ilya Somin/Reason)
reason.com/volokh/2025/09/07/p
memeorandum.com/250907/p75#a25

@aardrian@toot.cafe
2025-08-11 20:49:45

I think we should use CSS logical properties wherever we can. Chris Coyier has outlined some cases where we cannot:
frontendmasters.com/blog/shoul
I made a traditional to logical mapping in [checks wat…

@arXiv_quantph_bot@mastoxiv.page
2025-09-15 09:37:01

Quantum Computing Technology Roadmaps and Capability Assessment for Scientific Computing -- An analysis of use cases from the NERSC workload
Daan Camps, Ermal Rrapaj, Katherine Klymko, Hyeongjin Kim, Kevin Gott, Siva Darbha, Jan Balewski, Brian Austin, Nicholas J. Wright
arxiv.org/abs/2509.09882

@arXiv_csDL_bot@mastoxiv.page
2025-09-11 07:37:02

A Use Case Lens on Digital Cultural Heritage
Gustavo Candela, Milena Dobreva, Henk Alkemade, Olga Holownia, Mahendra Mahey, Sarah Ames, Karen Renaud, Ines Vodopivec, Benjamin Charles Germain Lee, Thomas Padilla, Steven Claeyssens, Isto Huvila, Beth Knazook
arxiv.org/abs/2509.08710

@cdp1337@social.veraciousnetwork.com
2025-08-18 06:58:34

Continuing on my Meshtastic kick, (probably because I keep buying radios to play with...) This time I have a configuration guide for common settings I've found which are useful. Still a work in progress but I think most of the common options are there.
If I've forgotten something, gotten something wrong, or you have a trick I should add, let me know!

@aral@mastodon.ar.al
2025-09-03 10:53:54

I haven’t added an example of how you implement migrations with Kitten’s¹ built-in JSDB database² yet but here’s one that I just used when renaming a field (property) in a table (JavaScript object) from “account” to “data” that illustrates the general granular approach you should take within persisted instances of JavaScript classes.
This is, of course, an advanced use case of the built-in JavaScript database that all Kitten apps have.
Kitten is simple for simple use cases. So ch…

Screenshot of code (detail) in Helix Editor on macOS, showing the source for app_modules/database/database.js. The following code is highlighted with a pink border:

initialise () {
    // Migration.
    if (this.account !== undefined) {
      this.data = this.account
      delete this.account
    }
  }

Full listing

export class VerifiedAccount extends Model {
  url = this.url || ''
  /**
    This is the object returned from the accounts/lookup
    method of the Mastodon API.

    …
Screenshot of code for app_modules/database/Model.js.

The following code is highlighted with a pink border:

  /**
    Optional hook: override this to perform initialisation
    at constructor time. (Do not override the constructor
    or the automatic property assignment will fail.)
  */
  initialise () {}

Full code listing:

/**
  Base model class.

  (To use, extend this with your own model classes.)

  When adding properties in subclasses, make sure you
  only set values after checking if…

@arXiv_csNI_bot@mastoxiv.page
2025-08-20 08:46:50

Architecture Considerations for ISAC in 6G
Sebastian Robitzsch, Laksh Bhatia, Konstantinos G. Filis, Neda Petreska, Michael Bahr, Pablo Picazo Martinez, Xi Li
arxiv.org/abs/2508.13736

@tbones@social.tchncs.de
2025-10-18 06:51:29

A very good take on how the public reacts to complexity disguised as "well-intentioned".
troet.cafe/@datawuppi/11539135

@mariyadelano@hachyderm.io
2025-09-12 20:16:16

This AI complaint is brought to you today by this ridiculous poster at Manhattan’s Guitar Center
“Make your dream tone a reality”, my ass.
Ever notice how 90% of AI marketing copy is literally just vague platitudes because they can’t actually think of any legitimate benefits or use cases?
“Be anything you imagine” = “we can’t imagine anything good enough to say here so you do the thinking for us”

@arXiv_eessAS_bot@mastoxiv.page
2025-08-11 09:11:30

Use Cases for Voice Anonymization
Sarina Meyer, Ngoc Thang Vu
arxiv.org/abs/2508.06356 arxiv.org/pdf/2508.06356

@tiotasram@kolektiva.social
2025-08-02 13:28:40

How to tell a vibe coder is lying when they say they check their code.
People who will admit to using LLMs to write code will usually claim that they "carefully check" the output, since we all know that LLM code has a lot of errors in it. This is insufficient to address several problems that LLMs cause, including labor issues, digital commons stress/pollution, license violations, and environmental issues, but at least if they are checking their code carefully, we shouldn't assume that it's any worse quality-wise than human-authored code, right?
Well, from principles alone we can expect it to be worse, since checking code the AI wrote is a much more boring task than writing code yourself, so anyone who has ever studied human-computer interaction even a little bit can predict people will quickly slack off, starting to trust the AI way too much, because it's less work. In a different domain, the journalist who published an entire "summer reading list" full of nonexistent titles is a great example of this. I'm sure he also intended to carefully check the AI output, but then got lazy. Clearly he did not have a good grasp of the likely failure modes of the tool he was using.
But for vibe coders, there's one easy tell we can look for, at least in some cases: coding in Python without type hints. To be clear, this doesn't apply to novice coders, who might not be aware that type hints are an option. But any serious Python software engineer, whether they used type hints before or not, would know that they're an option. And if you know they're an option, you also know they're an excellent tool for catching code defects, with a very low effort:reward ratio, especially if we assume an LLM generates them. Of the cases where adding types requires any thought at all, 95% of them offer chances to improve your code design and make it more robust. Knowing about but not using type hints in Python is a great sign that you don't care very much about code quality. That's totally fine in many cases: I've got a few demos or jam games in Python with no type hints, and it's okay that they're buggy. I was never going to debug them to a polished level anyways. But if we're talking about a vibe coder who claims that they're taking extra care to check for the (frequent) LLM-induced errors, that's not the situation.
Note that this shouldn't be read as an endorsement of vibe coding for demos or other rough-is-acceptable code: the other ethical issues I skipped past at the start still make it unethical to use in all but a few cases (for example, I have my students use it for a single assignment so they can see for themselves how it's not all it's cracked up to be, and even then they have an option to observe a pre-recorded prompt session instead).
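A minimal sketch of the low effort:reward trade-off described above (the function and names are my own illustration, not from the post): a single annotation documents that the lookup can fail, and a checker such as mypy will flag any caller that forgets the None case before it ever crashes at runtime.

```python
from typing import Optional

def find_user_id(users: dict[str, int], name: str) -> Optional[int]:
    """Return the id for `name`, or None when absent."""
    return users.get(name)

# A type checker flags `find_user_id(db, "mia") + 1` as a possible
# None + int error; without the hint, that defect only surfaces as a
# TypeError at runtime.
db = {"mia": 7}
assert find_user_id(db, "mia") == 7
assert find_user_id(db, "nobody") is None
```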

@frankel@mastodon.top
2025-10-12 18:31:04

In #OOP, objects collaborate. The initial idea of collaboration, first found in Smalltalk, was for object A to send a message to object B. Languages designed later use method calling. In both cases, the same question stands: how does an object reference other objects to reach the desired results?
In this post, I tackle the problem of passing
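One common answer to that question, sketched here in Python as my own illustration (not taken from the linked post): pass the collaborator in explicitly, so object A holds a reference to object B and "sends it a message" via a method call.

```python
class Engine:
    def start(self) -> str:
        return "vroom"

class Car:
    # The collaborator is handed in (constructor injection) rather than
    # constructed internally, so Car merely holds a reference to it.
    def __init__(self, engine: Engine) -> None:
        self.engine = engine

    def drive(self) -> str:
        # Smalltalk would call this sending the `start` message;
        # here it is an ordinary method call.
        return self.engine.start()

car = Car(Engine())
assert car.drive() == "vroom"
```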

@mgorny@social.treehouse.systems
2025-10-03 16:59:41

Brutal similes
Using #AI is an ethical choice.
I know that there are cases when an #LLM could make my job easier. Which doesn't mean I'll use one. Just like I won't be buying cheap junk gadgets that could help me with some random stuff a bunch of times before they'll end up on a trash pile.
Yes, sometimes I am curious what an LLM could come up with. But then, there are people who are curious how many donuts they can eat before throwing up. A waste of good donuts.
What world would you rather live in? One where you put a little more effort into your job? Or one where an LLM helps with your job, but you can't enjoy your free time anymore because the capitalists are using LLMs to turn every single aspect of your life into a nightmare, and eventually your employer just makes you do more and more until you're thrown out? But at least you will get a monthly trial of a statistical "friend" to "talk" about your trouble to.
Yeah, you can claim that training models does the most harm, and that's already happened, so not using them doesn't change much, and all the energy spent on it would be wasted. Or use the traditional "others" fallacy — others will use it anyway, others will fuel the vicious circle, so why renounce convenience. It's like when you learn that your dinner is human meat, and you decide to eat it anyway, because not eating it won't bring that human back to life, and if it's wasted, then their death will be for naught.
#AntiCapitalism

@mxp@mastodon.acm.org
2025-08-11 17:41:57

AGI is “not a super useful term”? But IIRC, as defined by OpenAI, they’ll hit AGI when they generate $100 billion in profit. So, how’s that coming along? Not so great, huh?
cnbc.com/2025/08/11/sam-altman

@arXiv_csCY_bot@mastoxiv.page
2025-09-10 09:01:51

Develop-Fair Use for Artificial Intelligence: A Sino-U.S. Copyright Law Comparison Based on the Ultraman, Bartz v. Anthropic, and Kadrey v. Meta Cases
Chanhou Lou
arxiv.org/abs/2509.07365

@lysander07@sigmoid.social
2025-08-26 10:30:21

AI use cases introduced by Rob Finn from EMBL-EBI, as pointed out in his #CORDI2025 keynote "Delivering life science data resources in a world of growing data and impacts from AI"
nfdi.de/cordi-2025/keynotes/

AI use cases presented in the CORDI2025 keynote focus mostly on programming copiloting, natural language interfaces, and some agentic AI.

@arXiv_mathAP_bot@mastoxiv.page
2025-09-18 07:48:41

Critical p-biharmonic problems and applications to Hamiltonian systems
Kanishka Perera, Bruno Ribeiro
arxiv.org/abs/2509.13596 arxiv.org/pd…

@arXiv_csIT_bot@mastoxiv.page
2025-09-16 07:57:56

A Broadcast Channel Framework for MIMO-OFDM Integrated Sensing and Communication
Homa Nikbakht, Husheng Li, Zhu Han, H. Vincent Poor
arxiv.org/abs/2509.10878

@ripienaar@devco.social
2025-08-08 11:13:17

Not much to say about GPT-5 yet for my use cases but OMG FINALLY NO FUCKING EMOJIS EVERYWHERE.

@arXiv_eessIV_bot@mastoxiv.page
2025-10-15 08:27:02

LiteVPNet: A Lightweight Network for Video Encoding Control in Quality-Critical Applications
Vibhoothi Vibhoothi, François Pitié, Anil Kokaram
arxiv.org/abs/2510.12379

@grumpybozo@toad.social
2025-09-09 19:06:03

#PSA: BEFORE selecting a domain name which you want to use for email, you definitely should consult the #SpamAssassin list of "suspicious" gTLDs. Those are gTLDs which have been so badly run that the overwhelming majority (99% in most cases) of messages using them for email addresses or even in …

@mia@hcommons.social
2025-08-13 16:11:14

I read 'The Public Interest Corpus Update – NYC Edition'. More work on the project's principles and goals, research and library service use cases, and thinking ahead to prospective year 1-3 and year 4-6 activities publicinterestcorpus.org/the-p

@ErikJonker@mastodon.social
2025-09-05 19:31:27

This was always how it was going to end, just a financial settlement and AI companies can continue.
nytimes.com/2025/09/05/technol

@Techmeme@techhub.social
2025-09-10 15:56:07

Stability AI launches Stable Audio 2.5, which the company claims is the first audio generation model designed for "enterprise-grade use cases" (Sean Michael Kerner/Venturebeat)
venturebeat.com/ai/stability-a

@arXiv_grqc_bot@mastoxiv.page
2025-08-15 09:24:22

Compact Binary Coalescence Sensitivity Estimates with Injection Campaigns during the LIGO-Virgo-KAGRA Collaborations' Fourth Observing Run
Reed Essick, Michael W. Coughlin, Michael Zevin, Deep Chatterjee, Teagan A. Clarke, Utkarsh Mali, Simona Miller, Nathan Steinle, Pratyusava Baral, Amanda C. Baylor, Gareth Cabourn Davies, Thomas Dent, Prathamesh Joshi, Praveen Kumar, Cody Messick, Tanmaya Mishra, Amazigh Ouzriat, Khun Sang Phukon, Lorenzo Piccari, Marion Pillas, Max Trevor, Thom…

@arXiv_statML_bot@mastoxiv.page
2025-08-06 08:51:00

Hedging with memory: shallow and deep learning with signatures
Eduardo Abi Jaber, Louis-Amand Gérard
arxiv.org/abs/2508.02759 arxiv.o…

@arXiv_csMM_bot@mastoxiv.page
2025-09-16 07:57:26

Nagare Media Ingest: A System for Multimedia Ingest Workflows
Matthias Neugebauer
arxiv.org/abs/2509.11972 arxiv.org/pdf/2509.11972

@arXiv_mathGR_bot@mastoxiv.page
2025-10-14 09:24:48

Asymptotically rigid mapping class groups III: Presentations and isomorphisms
Anthony Genevois, Anne Lonjou, Christian Urech
arxiv.org/abs/2510.11336

@arXiv_mathCO_bot@mastoxiv.page
2025-08-12 08:42:33

Tropical fans supporting a reduced 0-dimensional complete intersection
Linxuan Li
arxiv.org/abs/2508.06694 arxiv.org/pdf/2508.06694

@arXiv_csAR_bot@mastoxiv.page
2025-10-13 08:39:20

A High-Efficiency SoC for Next-Generation Mobile DNA Sequencing
Abel Beyene, Zhongpan Wu, Yunus Dawji, Karim Hammad, Ebrahim Ghafar-Zadeh, Sebastian Magierowski
arxiv.org/abs/2510.08940

@maxheadroom@hub.uckermark.social
2025-09-03 15:12:32

Seems like AI is a bit like the discovery of radioactive materials. At first that was used for all sorts of magical applications without understanding the implications. The future seemed to be bright and radiating with radioactive use cases in every household appliance.
Then slowly reality crept in and revealed the dangers. Now it's a highly regulated and contained domain.
Now think about AI and how it's advertised and what casualties we already witness.

@toxi@mastodon.thi.ng
2025-09-30 20:14:16

The RPI Zero-based OpenPrinter looks very promising (600dpi color inkjet):
crowdsupply.com/open-tools/ope
Also instant throwback to BERG's cute Little Printer (from 2012, much smaller and very different use cases):

@jae@mastodon.me.uk
2025-07-29 18:56:34

Literally one of the best use cases for the Switch 2 mouse. nintendolife.com/news/2025/07/

@arXiv_csDC_bot@mastoxiv.page
2025-08-12 09:41:43

The Fused Kernel Library: A C API to Develop Highly-Efficient GPU Libraries
Oscar Amoros (Universitat Politecnica de Catalunya), Albert Andaluz (Independent researcher), Johnny Nunez (NVIDIA), Antonio J. Pena (Barcelona Supercomputing Center)
arxiv.org/abs/2508.07071

@seeingwithsound@mas.to
2025-09-13 07:10:57

Neurochips: The state of brain-computer interfaces in 2025 andersenlab.com/blueprint/bci- "The coming two or three years will be pivotal: early trial results will either validate the hopes or temper the hype". A looming Ga…

@arXiv_astrophHE_bot@mastoxiv.page
2025-08-13 09:14:22

CORSIKA 8: A modern and universal framework for particle cascade simulations
Marvin Gottowik (for the CORSIKA 8 collaboration)
arxiv.org/abs/2508.08755

@arXiv_physicsoptics_bot@mastoxiv.page
2025-08-25 08:56:50

Novel cases of diffraction of light from a grating: Theory and experiment
Ninad R. Jetty, Akash Suman, Rajesh B. Khaparde
arxiv.org/abs/2508.15970

@arXiv_csCR_bot@mastoxiv.page
2025-09-18 09:14:51

Protocol-Aware Firmware Rehosting for Effective Fuzzing of Embedded Network Stacks
Moritz Bley, Tobias Scharnowski, Simon Wörner, Moritz Schloegel, Thorsten Holz
arxiv.org/abs/2509.13740

@arXiv_mathAG_bot@mastoxiv.page
2025-10-03 08:56:01

Projective models for Hilbert squares of $K3$ surfaces
Ángel David Ríos Ortiz, Andrés Rojas, Jieao Song
arxiv.org/abs/2510.02065

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (or even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down] (arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a degree of care that those students don't know how to apply, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask for help from an instructor or TA who helps them get rid of the stuff they don't understand and re-prompt or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that seeing these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
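To make the point concrete, here is a short sketch of my own (not from the post) using two of the constructs named above, the walrus operator and a loop else clause, plus a generator: all perfectly idiomatic Python, and all routinely unfamiliar to students partway through the curriculum.

```python
from typing import Iterator

def chunks(items: list[int], size: int) -> Iterator[list[int]]:
    # Generator function: `yield` produces the pieces lazily.
    for i in range(0, len(items), size):
        yield items[i:i + size]

def find_long_word(words: list[str], min_len: int) -> str:
    for w in words:
        # Walrus operator (:=): bind the length and test it in one expression.
        if (n := len(w)) >= min_len:
            found = f"{w} ({n} chars)"
            break
    else:
        # for/else: runs only when the loop finished without `break`.
        found = "none"
    return found

assert list(chunks([1, 2, 3, 4, 5], 2)) == [[1, 2], [3, 4], [5]]
assert find_long_word(["a", "hello"], 4) == "hello (5 chars)"
assert find_long_word(["a"], 4) == "none"
```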
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@arXiv_csCY_bot@mastoxiv.page
2025-09-18 09:06:41

The Provenance Problem: LLMs and the Breakdown of Citation Norms
Brian D. Earp, Haotian Yuan, Julian Koplin, Sebastian Porsdam Mann
arxiv.org/abs/2509.13365

@azonenberg@ioc.exchange
2025-07-26 04:38:42

Quick test board for STM32H750 UFBGA240 25 paired with the Trion T20 BGA256 FPGA devkit.
Goals:
* Validate my KiCAD symbol for STM32H750 in TFBGA240 25
* Bring up a basic BSP and stm32-cpp support for the STM32H750 (should be straightforward with most stuff very similar to the H735)
* Play with the H7 QUADSPI and see how it works for both XIP and copy-code-to-SRAM use cases
* Add QUADSPI flash driver for microkvs (which currently only supports internal flash)
*…

KiCAD 3D render of a board with a single big BGA MCU surrounded by connectors and not much else
Underside of the board showing decoupling caps
KiCAD layout view of the board showing dense routing on both outer signal layers

@samueljohn@mastodon.world
2025-08-02 15:38:16

This resonates 50% with me. But for the other 50%, I'm like: you and your manager have to become more the architects and less the line-of-code checkers. Also, thinking about tests and edge cases is even more important now. exquisite.social/@thomholwerda

@arXiv_quantph_bot@mastoxiv.page
2025-09-15 09:36:01

Comparative Studies of Quantum Annealing, Digital Annealing, and Classical Solvers for Reaction Network Pathway Analysis and mRNA Codon Selection
Milind Upadhyay, Mark Nicholas Jones
arxiv.org/abs/2509.09862

@arXiv_csCL_bot@mastoxiv.page
2025-10-09 10:28:51

OpenJAI-v1.0: An Open Thai Large Language Model
Pontakorn Trakuekul, Attapol T. Rutherford, Jullajak Karnjanaekarin, Narongkorn Panitsrisit, Sumana Sumanakul
arxiv.org/abs/2510.06847

@arXiv_eessSP_bot@mastoxiv.page
2025-09-24 09:52:04

Enabling Drone Detection with SWARM Repeater-Assisted MIMO ISAC
Palatip Jopanya, Diana P. M. Osorio
arxiv.org/abs/2509.19119 arxiv.org/pdf/…

@arXiv_csPF_bot@mastoxiv.page
2025-08-13 08:03:22

Maximizing GPU Efficiency via Optimal Adapter Caching: An Analytical Approach for Multi-Tenant LLM Serving
Ferran Agullo, Joan Oliveras, Chen Wang, Alberto Gutierrez-Torre, Olivier Tardieu, Alaa Youssef, Jordi Torres, Josep Ll. Berral
arxiv.org/abs/2508.08343

@arXiv_condmatstatmech_bot@mastoxiv.page
2025-08-11 09:15:30

An explicit formula of the Oslo stationary state
Valentin Lallemant, Vincent Rossetto
arxiv.org/abs/2508.06315 arxiv.org/pdf/2508.06315

@arXiv_mathNT_bot@mastoxiv.page
2025-08-15 08:56:02

Local-global compatibility and the exceptional zero conjecture for GL(3)
Daniel Barrera Salazar, Andrew Graham, Chris Williams
arxiv.org/abs/2508.10225

@arXiv_statME_bot@mastoxiv.page
2025-10-09 09:31:51

Confidence Regions for Multiple Outcomes, Effect Modifiers, and Other Multiple Comparisons
Paul N Zivich, Stephen R Cole, Noah Greifer, Lina M Montoya, Michael R Kosorok, Jessie K Edwards
arxiv.org/abs/2510.07076

@arXiv_mathGT_bot@mastoxiv.page
2025-09-09 09:14:32

On detection probabilities of link invariants
Abel Lacabanne, Daniel Tubbenhauer, Pedro Vaz, Victor L. Zhang
arxiv.org/abs/2509.05574 arxiv…

@arXiv_csHC_bot@mastoxiv.page
2025-08-04 09:32:50

Why Do Decision Makers (Not) Use AI? A Cross-Domain Analysis of Factors Impacting AI Adoption
Rebecca Yu, Valerie Chen, Ameet Talwalkar, Hoda Heidari
arxiv.org/abs/2508.00723

@frankel@mastodon.top
2025-09-04 07:39:17

#PostgreSQL for everything
postgresforeverything.com/

@arXiv_csDB_bot@mastoxiv.page
2025-07-29 09:44:21

Data Cleaning of Data Streams
Valerie Restat, Niklas Rodenhausen, Carina Antonin, Uta Störl
arxiv.org/abs/2507.20839 arxiv.org/pdf/2…

@arXiv_csNI_bot@mastoxiv.page
2025-10-07 09:50:32

Analysis of LTE/5G Network Performance Parameters in Smartphone Use Cases: A Study of Packet Loss, Delay, and Slice Types
Almamoon Alauthman, Abeer Al-Hyari
arxiv.org/abs/2510.04035

@mgorny@social.treehouse.systems
2025-08-14 20:04:24

New on #Quansight PBC blog: Python Wheels: from Tags to Variants
#Python distributions are uniform across different Python versions and platforms. For these distributions, it is sufficient to publish a single wheel that can be installed everywhere. However, some packages are more complex than that; they include compiled Python extensions or binaries. In order to robustly deploy this software on different platforms, you need to publish multiple binary packages, and installers need to select the one that best fits the platform in use.
For a long time, Python wheels made do with a relatively simple mechanism to describe the needed variance: Platform compatibility tags. These tags identified different Python implementations and versions, operating systems, and CPU architectures. Over time, they were extended to facilitate new use cases. To list a couple: PEP 513 added manylinux tags to standardize the core library dependencies on GNU/Linux systems, and PEP 656 added musllinux tags to facilitate Linux systems with musl libc.
However, not all new use cases can be handled effectively within the framework of tags. To list a few:
• The advent of GPU-backed computing made distinguishing different acceleration frameworks such as NVIDIA CUDA or AMD ROCm important.
• As the compatibility with older CPUs became less desirable, many distributions have set baselines for their binary packages to x86-64-v2 microarchitecture level, and Python packages need to be able to express the same requirement.
• Numerical libraries support different BLAS/LAPACK, MPI, OpenMP providers, and wish to enable the users to choose the build matching their desired provider.
While tags could technically be bent to facilitate all these use cases, they would grow quite baroque, and, critically, every change to tags needs to be implemented separately in all installers and package-related tooling, making adoption difficult.
Facing these limitations, software vendors have employed different solutions to work around the lack of an appropriate mechanism. Eventually, the #WheelNext initiative took up the challenge to design a more robust solution.
#packaging
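The tag mechanism the post describes can be sketched with the standard library alone. The helper below composes an interpreter-abi-platform triple for the running interpreter; this is a deliberate simplification (real installers use the `packaging.tags` module, which also enumerates compatible manylinux/musllinux and ABI variants rather than a single tag).

```python
# Minimal sketch of a wheel platform compatibility tag, built from the
# running interpreter. Simplified: assumes CPython's ABI tag equals its
# interpreter tag, which ignores variants such as free-threaded builds.
import sys
import sysconfig


def current_wheel_tag() -> str:
    """Return an interpreter-abi-platform tag like 'cp312-cp312-linux_x86_64'."""
    major, minor = sys.version_info[:2]
    impl = "cp" if sys.implementation.name == "cpython" else sys.implementation.name
    interpreter = f"{impl}{major}{minor}"
    abi = interpreter  # simplification, see comment above
    # sysconfig reports e.g. 'linux-x86_64'; wheel tags use underscores.
    platform = sysconfig.get_platform().replace("-", "_").replace(".", "_")
    return f"{interpreter}-{abi}-{platform}"


print(current_wheel_tag())
```

Even this toy version shows why the scheme strains under the new use cases: nothing in the triple can express a CUDA version, an x86-64-v2 baseline, or a BLAS provider without inventing ever-longer platform strings.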

@arXiv_csCV_bot@mastoxiv.page
2025-09-11 09:52:13

Maximally Useful and Minimally Redundant: The Key to Self Supervised Learning for Imbalanced Data
Yash Kumar Sharma, Vineet Nair, Wilson Naik
arxiv.org/abs/2509.08469

@arXiv_csSE_bot@mastoxiv.page
2025-10-15 09:58:22

DarTwin made precise by SysMLv2 -- An Experiment
Øystein Haugen, Stefan Klikovits, Martin Arthur Andersen, Jonathan Beaulieu, Francis Bordeleau, Joachim Denil, Joost Mertens
arxiv.org/abs/2510.12478

@seeingwithsound@mas.to
2025-08-26 08:21:19

Prediction: #BCI companies currently focusing on developing an implantable visual prosthesis will soon shift their focus toward more general #neuromodulation use cases to reach economy of scale and recoup investments. #NeuroTech

@arXiv_csIR_bot@mastoxiv.page
2025-08-11 09:29:49

Fine-Tuning Vision-Language Models for Markdown Conversion of Financial Tables in Malaysian Audited Financial Reports
Jin Khye Tan (Faculty of Computer Science,Information Technology, Universiti Malaya), En Jun Choong, Ethan Jeremiah Chitty, Yan Pheng Choo, John Hsin Yang Wong, Chern Eu Cheah
arxiv.org/abs/2508.05669

@Techmeme@techhub.social
2025-09-29 10:25:41

OpenAI launches its first extended ad campaign for ChatGPT featuring three 30-second cinematic ads with a focus on everyday AI use cases in the US and the UK (Tim Nudd/Ad Age)
adage.com/creativity/creative-

@mariyadelano@hachyderm.io
2025-10-16 21:46:46

I became a US citizen yesterday!
4 years in the making, a dream for much longer than that. I was fortunate to have one of the fastest routes to citizenship and to have mostly experienced a very smooth process with lovely immigration agents who processed my cases with respect and dignity.
I spent the last 9 months living in fear, watching immigrants get demonized and even permanent residency (green cards) treated as "a privilege" to be revoked at whim.
I've been quiet, afraid to say anything in public that could be misinterpreted or used against me.
I've repeatedly said goodbye to my city, I've cried walking my favorite streets, bracing for the worst as top officials bragged about getting rid of immigrants and "cleaning up" the country as if we were filth.
Now, I intend to find ways to use my citizenship for the greater good and to be civically engaged in ways not available to people here on visas and green cards, affected but forced to suffer in silence. I am looking forward to discovering what that will look like for me.
#USA #citizenship #immigration #immigrants
1/2 (A mini-thread)

@arXiv_astrophHE_bot@mastoxiv.page
2025-10-09 09:36:31

Probing evolution of Long GRB properties through their cosmic formation history aided by Machine Learning predicted redshifts
Dhruv S. Bal, Aditya Narendra, Maria Giovanna Dainotti, Nikita S. Khatiya, Aleksander L. Lenart, Dieter H. Hartmann
arxiv.org/abs/2510.07306

@arXiv_csMM_bot@mastoxiv.page
2025-09-25 07:42:22

Comparative Study of Subjective Video Quality Assessment Test Methods in Crowdsourcing for Varied Use Cases
Babak Naderi, Ross Cutler
arxiv.org/abs/2509.20118

@arXiv_csSE_bot@mastoxiv.page
2025-10-08 09:57:29

Automated Program Repair of Uncompilable Student Code
Griffin Pitts, Aum Pandya, Darsh Rank, Tirth Bhatt, Muntasir Hoq, Bita Akram
arxiv.org/abs/2510.06187

@arXiv_csCY_bot@mastoxiv.page
2025-08-14 07:41:12

From Hard Refusals to Safe-Completions: Toward Output-Centric Safety Training
Yuan Yuan, Tina Sriskandarajah, Anna-Luisa Brakman, Alec Helyar, Alex Beutel, Andrea Vallone, Saachi Jain
arxiv.org/abs/2508.09224

@arXiv_mathGR_bot@mastoxiv.page
2025-08-06 09:30:30

An Algorithm for Computing Hopf--Galois Structures and Skew Bracoids of Low Degree
Andrew Darlington
arxiv.org/abs/2508.03372 arxiv.org/pdf…

@arXiv_csCR_bot@mastoxiv.page
2025-08-11 09:45:29

On Digital Twins in Defence: Overview and Applications
Marco Giberna, Holger Voos, Paulo Tavares, João Nunes, Tobias Sorg, Andrea Masini, Jose Luis Sanchez-Lopez
arxiv.org/abs/2508.05717

@arXiv_csNI_bot@mastoxiv.page
2025-09-10 07:51:41

TEGRA: A Flexible & Scalable NextGen Mobile Core
Bilal Saleem, Omar Basit, Jiayi Meng, Iftekhar Alam, Ajay Thakur, Christian Maciocco, Muhammad Shahbaz, Y. Charlie Hu, Larry Peterson
arxiv.org/abs/2509.07410

@arXiv_csSE_bot@mastoxiv.page
2025-09-12 09:36:59

Altered Histories in Version Control System Repositories: Evidence from the Trenches
Solal Rapaport (IP Paris, LTCI, ACES, INFRES), Laurent Pautet (INFRES, LTCI, ACES, IP Paris), Samuel Tardieu (INFRES, ACES, IP Paris, LTCI), Stefano Zacchiroli (IP Paris, LTCI, ACES, INFRES)
arxiv.org/abs/2509.09294

@arXiv_csIR_bot@mastoxiv.page
2025-08-08 07:37:02

Augmented Question-guided Retrieval (AQgR) of Indian Case Law with LLM, RAG, and Structured Summaries
Vishnuprabha V, Daleesha M Viswanathan, Rajesh R, Aneesh V Pillai
arxiv.org/abs/2508.04710

@arXiv_csCL_bot@mastoxiv.page
2025-07-31 09:33:21

Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance
Jingwei Zuo, Maksim Velikanov, Ilyas Chahed, Younes Belkada, Dhia Eddine Rhayem, Guillaume Kunsch, Hakim Hacid, Hamza Yous, Brahim Farhat, Ibrahim Khadraoui, Mugariya Farooq, Giulia Campesan, Ruxandra Cojocaru, Yasser Djilali, Shi Hu, Iheb Chaabane, Puneesh Khanna, Mohamed El Amine Seddik, Ngoc Dung Huynh, Phuc Le Khac, Leen AlQadi, Billel Mokeddem, Mohamed Chami, Abdalgader Abubaker, Mikhail Lubin…

@arXiv_csSE_bot@mastoxiv.page
2025-09-12 08:07:59

Pattern-Based File and Data Access with Python Glob: A Comprehensive Guide for Computational Research
Sidney Shapiro
arxiv.org/abs/2509.08843

@arXiv_csNI_bot@mastoxiv.page
2025-09-09 09:03:32

Optimized Split Computing Framework for Edge and Core Devices
Andrea Tassi, Oluwatayo Yetunde Kolawole, Joan Pujol Roig, Daniel Warren
arxiv.org/abs/2509.06049

@arXiv_csCR_bot@mastoxiv.page
2025-08-05 11:49:51

A Comprehensive Analysis of Evolving Permission Usage in Android Apps: Trends, Threats, and Ecosystem Insights
Ali Alkinoon, Trung Cuong Dang, Ahod Alghuried, Abdulaziz Alghamdi, Soohyeon Choi, Manar Mohaisen, An Wang, Saeed Salem, David Mohaisen
arxiv.org/abs/2508.02008

@arXiv_csCY_bot@mastoxiv.page
2025-10-06 09:13:49

Sensors in viticulture: functions, benefits, and data-driven insights
Milan Milenkovic
arxiv.org/abs/2510.03000 arxiv.org/pdf/2510.03000

@arXiv_csSE_bot@mastoxiv.page
2025-09-10 08:57:01

What's Coming Next? Short-Term Simulation of Business Processes from Current State
Maksym Avramenko, David Chapela-Campa, Marlon Dumas, Fredrik Milani
arxiv.org/abs/2509.07747

@arXiv_csCY_bot@mastoxiv.page
2025-10-07 09:16:52

An Adaptive Responsible AI Governance Framework for Decentralized Organizations
Kiana Jafari Meimandi, Anka Reuel, Gabriela Aranguiz-Dias, Hatim Rahama, Ala-Eddine Ayadi, Xavier Boullier, Jérémy Verdo, Louis Montanie, Mykel Kochenderfer
arxiv.org/abs/2510.03368

@arXiv_csSE_bot@mastoxiv.page
2025-08-01 08:25:52

On LLM-Assisted Generation of Smart Contracts from Business Processes
Fabian Stiehle, Hans Weytjens, Ingo Weber
arxiv.org/abs/2507.23087 ar…

@arXiv_csCR_bot@mastoxiv.page
2025-10-01 08:14:47

Managing Differentiated Secure Connectivity using Intents
Loay Abdelrazek, Filippo Rebecchi
arxiv.org/abs/2509.25462 arxiv.org/pdf/2509.254…