
2025-09-19 14:22:23
Some nice examples in the 'use cases' section of AI for Humanists https://aiforhumanists.com/guides/usecases/ - from OCR to annotation to identifying voices and styles
Software Development Aspects of Integrating Linear Algebra Libraries
Marcel Koch, Tobias Ribizel, Pratik Nayak, Fritz Göbel, Gregor Olenik, Terry Cojean
https://arxiv.org/abs/2509.16081
Targeted Fine-Tuning of DNN-Based Receivers via Influence Functions
Marko Tuononen, Heikki Penttinen, Ville Hautamäki
https://arxiv.org/abs/2509.15950
Documenting Deployment with Fabric: A Repository of Real-World AI Governance
Mackenzie Jorgensen, Kendall Brogle, Katherine M. Collins, Lujain Ibrahim, Arina Shah, Petra Ivanovic, Noah Broestl, Gabriel Piles, Paul Dongha, Hatim Abdulhussein, Adrian Weller, Jillian Powers, Umang Bhatt
https://arxiv.org/abs/2508.14119
I, too, think some of the efforts of the CSSWG have gotten ahead of the use cases and I, too, think catching our breath would be good.
“Chris’ Corner: Stage 2”
https://blog.codepen.io/2025/10/20/chris-corner-stage-2/
In Numeris Veritas: An Empirical Measurement of Wi-Fi Integration in Industry
Vyron Kampourakis, Christos Smiliotopoulos, Vasileios Gkioulos, Sokratis Katsikas
https://arxiv.org/abs/2509.16987
Twisted Malle's Conjecture
Tanav Choudhary
https://arxiv.org/abs/2509.16770 https://arxiv.org/pdf/2509.16770
Retrieval-Augmented Generation in Industry: An Interview Study on Use Cases, Requirements, Challenges, and Evaluation
Lorenz Brehme, Benedikt Dornauer, Thomas Ströhle, Maximilian Ehrhart, Ruth Breu
https://arxiv.org/abs/2508.14066
OpenAI releases the first detailed public study on how people use ChatGPT: 73% of chats were non-work related, practical guidance was the top use case, and more (Gerrit De Vynck/Washington Post)
https://www.washingtonpost.com/technology/2025/09/15/openai-…
Aaron Rupar @atrupar.com
Trump: "I'm allowed as you know as president, like 50% of the presidents have used the Insurrection Act. Everybody agrees you're allowed to use that and there is no more court cases, there is no more anything. We're trying to do it in a nicer manner, but we can always use the Insurrection Act." — Bluesky
https://bsky.app/profile/atrupar.com/post/3m3ln7ru4v72f
🌟 New SIGs Spotlight: SIG-AI 🌟
A new space for collaboration on Artificial Intelligence within the NREN community is here.
SIG-AI brings the Research & Education community together to share expertise, best practices, and explore practical use cases of AI in NREN context—from cybersecurity and High-Performance Computing (HPC) to network automation and next-generation networks.
📖 For more insights, read the full interview with Leonie Schäfer (@…
SDBench: A Comprehensive Benchmark Suite for Speaker Diarization
Eduardo Pacheco, Atila Orhon, Berkin Durmus, Blaise Munyampirwa, Andrey Leonov
https://arxiv.org/abs/2507.16136
(LinkedIn) Revision Implant is, despite its name, already quickly looking for markets beyond visual prostheses https://www.linkedin.com/posts/revision-implant-nv_elmedix-medicalinnovation-oncology-activity-7363137424329252864-bp…
Why Johnny Can't Use Agents: Industry Aspirations vs. User Realities with AI Agent Software
Pradyumna Shome, Sashreek Krishnan, Sauvik Das
https://arxiv.org/abs/2509.14528
ISCA: A Framework for Interview-Style Conversational Agents
Charles Welch, Allison Lahnala, Vasudha Varadarajan, Lucie Flek, Rada Mihalcea, J. Lomax Boyd, João Sedoc
https://arxiv.org/abs/2508.14344
A Comparative Study of Delta Parquet, Iceberg, and Hudi for Automotive Data Engineering Use Cases
Dinesh Eswararaj, Ajay Babu Nellipudi, Vandana Kollati
https://arxiv.org/abs/2508.13396
Community Covert Communication - Dynamic Mass Covert Communication Through Social Media
Eric Filiol
https://arxiv.org/abs/2509.17508
📊 Versatile use cases include summarizing articles, explaining complex concepts, testing knowledge, modifying recipes, comparing products, and making informed decisions
✍️ Get key takeaways from articles, pages, or discussion threads without leaving your current browsing session, maintaining focus and workflow efficiency
🔍 Ask questions about content you're reading and receive relevant answers and explanations using the current page's information for accurate context
Discrete Optimization of Min-Max Violation and its Applications Across Computational Sciences
Cheikh Ahmed, Mahdi Mostajabdaveh, Samin Aref, Zirui Zhou
https://arxiv.org/abs/2508.13437
Shape-from-Template with Generalised Camera
Agniva Sengupta, Stefan Zachow
https://arxiv.org/abs/2508.13791 https://arxiv.org/pdf/2508.13791
Perils of the Pentagon's Plan to Use Military Lawyers to Adjudicate Immigration Cases (Ilya Somin/Reason)
https://reason.com/volokh/2025/09/07/perils-of-the-pentagons-plan-to-use-military-lawyers-to-adjudicate-immigration-cases/
http://www.memeorandum.com/250907/p75#a250907p75
I think we should use CSS logical properties wherever we can. Chris Coyier has outlined some cases where we cannot:
https://frontendmasters.com/blog/should-we-never-use-non-logical-properties/
I made a traditional to logical mapping in [checks wat…
Quantum Computing Technology Roadmaps and Capability Assessment for Scientific Computing -- An analysis of use cases from the NERSC workload
Daan Camps, Ermal Rrapaj, Katherine Klymko, Hyeongjin Kim, Kevin Gott, Siva Darbha, Jan Balewski, Brian Austin, Nicholas J. Wright
https://arxiv.org/abs/2509.09882
A Use Case Lens on Digital Cultural Heritage
Gustavo Candela, Milena Dobreva, Henk Alkemade, Olga Holownia, Mahendra Mahey, Sarah Ames, Karen Renaud, Ines Vodopivec, Benjamin Charles Germain Lee, Thomas Padilla, Steven Claeyssens, Isto Huvila, Beth Knazook
https://arxiv.org/abs/2509.08710
Continuing on my Meshtastic kick, (probably because I keep buying radios to play with...) This time I have a configuration guide for common settings I've found which are useful. Still a work in progress but I think most of the common options are there.
If I've forgotten something, gotten something wrong, or you have a trick I should add, let me know!
I haven’t added an example of how you implement migrations with Kitten’s¹ built-in JSDB database² yet, but here’s one that I just used when renaming a field (property) in a table (a JavaScript object) from “account” to “data”. It illustrates the general granular approach you should take within persisted instances of JavaScript classes.
This is, of course, an advanced use case of the built-in JavaScript database that all Kitten apps have.
Kitten is simple for simple use cases. So ch…
Architecture Considerations for ISAC in 6G
Sebastian Robitzsch, Laksh Bhatia, Konstantinos G. Filis, Neda Petreska, Michael Bahr, Pablo Picazo Martinez, Xi Li
https://arxiv.org/abs/2508.13736
A very good illustration of how the public reacts to complexity disguised as "well-intentioned."
https://troet.cafe/@datawuppi/115391352534642088
This AI complaint is brought to you today by this ridiculous poster at Manhattan’s Guitar Center
“Make your dream tone a reality”, my ass.
Ever notice how 90% of AI marketing copy is literally just vague platitudes because they can’t actually think of any legitimate benefits or use cases?
“Be anything you imagine” = “we can’t imagine anything good enough to say here so you do the thinking for us”
Use Cases for Voice Anonymization
Sarina Meyer, Ngoc Thang Vu
https://arxiv.org/abs/2508.06356 https://arxiv.org/pdf/2508.06356
How to tell a vibe coder is lying when they say they check their code.
People who will admit to using LLMs to write code will usually claim that they "carefully check" the output, since we all know that LLM code has a lot of errors in it. This is insufficient to address several problems that LLMs cause, including labor issues, digital-commons stress/pollution, license violations, and environmental issues. But at least, if they are checking their code carefully, we shouldn't assume that it's any worse quality-wise than human-authored code, right?
Well, from first principles alone we can expect it to be worse, since checking code the AI wrote is a much more boring task than writing code yourself, so anyone who has ever studied human-computer interaction even a little bit can predict people will quickly slack off, starting to trust the AI way too much, because it's less work. In a different domain, the journalist who published an entire "summer reading list" full of nonexistent titles is a great example of this. I'm sure he also intended to carefully check the AI output, but then got lazy. Clearly he did not have a good grasp of the likely failure modes of the tool he was using.
But for vibe coders, there's one easy tell we can look for, at least in some cases: coding in Python without type hints. To be clear, this doesn't apply to novice coders, who might not be aware that type hints are an option. But any serious Python software engineer, whether they have used type hints before or not, knows that they're an option. And if you know they're an option, you also know they're an excellent tool for catching code defects, with very little effort relative to the reward, especially if we assume an LLM generates them. Of the cases where adding types requires any thought at all, 95% offer chances to improve your code design and make it more robust. Knowing about but not using type hints in Python is a great sign that you don't care very much about code quality. That's totally fine in many cases: I've got a few demos and jam games in Python with no type hints, and it's okay that they're buggy; I was never going to debug them to a polished level anyway. But if we're talking about a vibe coder who claims that they're taking extra care to check for the (frequent) LLM-induced errors, that's not the situation.
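To make the claim concrete, here's a minimal sketch (my own example, not taken from the post) of the kind of defect a type checker would surface before runtime if the function carried hints:

```python
# Unannotated: nothing stops a bad caller until the code actually runs.
def average(values):
    return sum(values) / len(values)

# With `def average(values: list[float]) -> float:` a checker such as
# mypy would flag the call below statically; without hints, the mistake
# only surfaces as a runtime TypeError, possibly deep inside generated code.
try:
    average(3.0)  # a float is not iterable
    caught = False
except TypeError:
    caught = True
```

The annotated variant costs one line to write (or to have an LLM write) and turns this whole class of bug into a pre-commit failure instead of a runtime surprise.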
Note that this shouldn't be read as an endorsement of vibe coding for demos or other rough-is-acceptable code: the other ethical issues I skipped past at the start still make it unethical to use in all but a few cases (for example, I have my students use it for a single assignment so they can see for themselves how it's not all it's cracked up to be, and even then they have an option to observe a pre-recorded prompt session instead).
In #OOP, objects collaborate. The initial idea of collaboration, first found in Smalltalk, was for object A to send a message to object B. Languages designed later use method calling. In both cases, the same question stands: how does an object reference other objects to reach the desired results?
In this post, I tackle the problem of passing
Brutal similes
Using #AI is an ethical choice.
I know that there are cases when an #LLM could make my job easier. Which doesn't mean I'll use one. Just like I won't be buying cheap junk gadgets that could help me with some random stuff a bunch of times before they end up on a trash pile.
Yes, sometimes I am curious what an LLM could come up with. But then, there are people who are curious how many donuts they can eat before throwing up. A waste of good donuts.
What world would you rather live in? One where you put a little more effort into your job? Or one where an LLM helps with your job, but you can't enjoy your free time anymore because the capitalists are using LLMs to turn every single aspect of your life into a nightmare, and eventually your employer just makes you do more and more until you're thrown out? But at least you'll get a monthly trial of a statistical "friend" to "talk" to about your troubles.
Yeah, you can claim that training models does the most harm, and that's already happened, so not using them doesn't change much, and all the energy spent on it would be wasted. Or use the traditional "others" fallacy — others will use it anyway, others will fuel the vicious circle, so why renounce convenience. It's like when you learn that your dinner is human meat, and you decide to eat it anyway, because not eating it won't bring that human back to life, and if it's wasted, then their death will be for naught.
#AntiCapitalism
AGI is “not a super useful term”? But IIRC, as defined by OpenAI, they’ll hit AGI when they generate $100 billion in profit. So, how’s that coming along? Not so great, huh?
https://www.cnbc.com/2025/08/11/sam-altman-says-agi-is-a-pointless-term-expe…
Develop-Fair Use for Artificial Intelligence: A Sino-U.S. Copyright Law Comparison Based on the Ultraman, Bartz v. Anthropic, and Kadrey v. Meta Cases
Chanhou Lou
https://arxiv.org/abs/2509.07365
AI use cases introduced by Rob Finn from EMBL-EBI, as pointed out in his #CORDI2025 keynote "Delivering life science data resources in a world of growing data and impacts from AI"
https://www.nfdi.de/cordi-2025/keynotes/
Critical p-biharmonic problems and applications to Hamiltonian systems
Kanishka Perera, Bruno Ribeiro
https://arxiv.org/abs/2509.13596
A Broadcast Channel Framework for MIMO-OFDM Integrated Sensing and Communication
Homa Nikbakht, Husheng Li, Zhu Han, H. Vincent Poor
https://arxiv.org/abs/2509.10878
Not much to say about GPT-5 yet for my use cases but OMG FINALLY NO FUCKING EMOJIS EVERYWHERE.
LiteVPNet: A Lightweight Network for Video Encoding Control in Quality-Critical Applications
Vibhoothi Vibhoothi, François Pitié, Anil Kokaram
https://arxiv.org/abs/2510.12379
#PSA: BEFORE selecting a domain name which you want to use for email, you definitely should consult the #SpamAssassin list of "suspicious" gTLDs. Those are gTLDs which have been so badly run that the overwhelming majority (99% in most cases) of messages using them for email addresses or even in …
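As a hedged sketch of the pre-registration check the PSA recommends (the TLD set below is illustrative only, not the actual SpamAssassin list, which you should consult directly):

```python
# Hypothetical, abbreviated stand-in for SpamAssassin's "suspicious" gTLD
# list; the real list is maintained upstream and is much longer.
SUSPICIOUS_TLDS = {"top", "xyz", "loan", "click"}

def tld_is_suspicious(domain: str) -> bool:
    """Return True if the domain's last label is on the suspicious list."""
    return domain.rsplit(".", 1)[-1].lower() in SUSPICIOUS_TLDS
```

Running a candidate domain through a check like this before registering it can save you from years of deliverability problems.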
I read 'The Public Interest Corpus Update – NYC Edition'. More work on the project's principles and goals, research and library service use cases, and thinking ahead to prospective year 1-3 and year 4-6 activities https://publicinterestcorpus.org/the-p
This was always how it was going to end: just a financial settlement, and the AI companies can continue.
https://www.nytimes.com/2025/09/05/technology/anth…
Stability AI launches Stable Audio 2.5, which the company claims is the first audio generation model designed for "enterprise-grade use cases" (Sean Michael Kerner/Venturebeat)
https://venturebeat.com/ai/stability-ais-enterprise-au…
Compact Binary Coalescence Sensitivity Estimates with Injection Campaigns during the LIGO-Virgo-KAGRA Collaborations' Fourth Observing Run
Reed Essick, Michael W. Coughlin, Michael Zevin, Deep Chatterjee, Teagan A. Clarke, Utkarsh Mali, Simona Miller, Nathan Steinle, Pratyusava Baral, Amanda C. Baylor, Gareth Cabourn Davies, Thomas Dent, Prathamesh Joshi, Praveen Kumar, Cody Messick, Tanmaya Mishra, Amazigh Ouzriat, Khun Sang Phukon, Lorenzo Piccari, Marion Pillas, Max Trevor, Thom…
Hedging with memory: shallow and deep learning with signatures
Eduardo Abi Jaber, Louis-Amand Gérard
https://arxiv.org/abs/2508.02759
Nagare Media Ingest: A System for Multimedia Ingest Workflows
Matthias Neugebauer
https://arxiv.org/abs/2509.11972 https://arxiv.org/pdf/2509.11972
Asymptotically rigid mapping class groups III: Presentations and isomorphisms
Anthony Genevois, Anne Lonjou, Christian Urech
https://arxiv.org/abs/2510.11336
Tropical fans supporting a reduced 0-dimensional complete intersection
Linxuan Li
https://arxiv.org/abs/2508.06694 https://arxiv.org/pdf/2508.06694
A High-Efficiency SoC for Next-Generation Mobile DNA Sequencing
Abel Beyene, Zhongpan Wu, Yunus Dawji, Karim Hammad, Ebrahim Ghafar-Zadeh, Sebastian Magierowski
https://arxiv.org/abs/2510.08940
Seems like AI is a bit like the discovery of radioactive materials. At first that was used for all sorts of magical applications without understanding the implications. The future seemed to be bright and radiating with radioactive use cases in every household appliance.
Then slowly reality crept in and revealed the dangers. Now it's a highly regulated and contained domain.
Now think about AI and how it's advertised and what casualties we already witness.
The RPI Zero-based OpenPrinter looks very promising (600dpi color inkjet):
https://www.crowdsupply.com/open-tools/open-printer
Also instant throwback to BERG's cute Little Printer (from 2012, much smaller and very different use cases):
Literally one of the best use cases for the Switch 2 mouse. https://www.nintendolife.com/news/2025/07/nintendo-expands-switch-onlines-snes-library-with-a-mouse-game
The Fused Kernel Library: A C API to Develop Highly-Efficient GPU Libraries
Oscar Amoros (Universitat Politecnica de Catalunya), Albert Andaluz (Independent researcher), Johnny Nunez (NVIDIA), Antonio J. Pena (Barcelona Supercomputing Center)
https://arxiv.org/abs/2508.07071
Neurochips: The state of brain-computer interfaces in 2025 https://andersenlab.com/blueprint/bci-challenges-and-opportunities "The coming two or three years will be pivotal: early trial results will either validate the hopes or temper the hype". A looming Ga…
CORSIKA 8: A modern and universal framework for particle cascade simulations
Marvin Gottowik (for the CORSIKA 8 collaboration)
https://arxiv.org/abs/2508.08755
Novel cases of diffraction of light from a grating: Theory and experiment
Ninad R. Jetty, Akash Suman, Rajesh B. Khaparde
https://arxiv.org/abs/2508.15970
Protocol-Aware Firmware Rehosting for Effective Fuzzing of Embedded Network Stacks
Moritz Bley, Tobias Scharnowski, Simon Wörner, Moritz Schloegel, Thorsten Holz
https://arxiv.org/abs/2509.13740
Projective models for Hilbert squares of $K3$ surfaces
Ángel David Ríos Ortiz, Andrés Rojas, Jieao Song
https://arxiv.org/abs/2510.02065
Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (or even for good ones). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper, where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](https://arxiv.org/abs/2507.09089), is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up even in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore that one too; my focus here is on the reach of project complexity, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a level of care that those students don't know how to apply, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets), but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand, and quickly re-prompt or ask an instructor or TA for help getting rid of the stuff they don't understand, or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think that, regardless of AI, we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2
The Provenance Problem: LLMs and the Breakdown of Citation Norms
Brian D. Earp, Haotian Yuan, Julian Koplin, Sebastian Porsdam Mann
https://arxiv.org/abs/2509.13365
Quick test board for STM32H750 UFBGA240 25 paired with the Trion T20 BGA256 FPGA devkit.
Goals:
* Validate my KiCAD symbol for STM32H750 in TFBGA240 25
* Bring up a basic BSP and stm32-cpp support for the STM32H750 (should be straightforward with most stuff very similar to the H735)
* Play with the H7 QUADSPI and see how it works for both XIP and copy-code-to-SRAM use cases
* Add QUADSPI flash driver for microkvs (which currently only supports internal flash)
*…
This resonates 50% with me. But for the other 50%, I'm like: you and your manager have to become more the architects and less the line-of-code checkers. Also, thinking about tests and edge cases is even more important now. https://exquisite.social/@thomholwerda/114959217780568638…
Comparative Studies of Quantum Annealing, Digital Annealing, and Classical Solvers for Reaction Network Pathway Analysis and mRNA Codon Selection
Milind Upadhyay, Mark Nicholas Jones
https://arxiv.org/abs/2509.09862
OpenJAI-v1.0: An Open Thai Large Language Model
Pontakorn Trakuekul, Attapol T. Rutherford, Jullajak Karnjanaekarin, Narongkorn Panitsrisit, Sumana Sumanakul
https://arxiv.org/abs/2510.06847
Enabling Drone Detection with SWARM Repeater-Assisted MIMO ISAC
Palatip Jopanya, Diana P. M. Osorio
https://arxiv.org/abs/2509.19119
Maximizing GPU Efficiency via Optimal Adapter Caching: An Analytical Approach for Multi-Tenant LLM Serving
Ferran Agullo, Joan Oliveras, Chen Wang, Alberto Gutierrez-Torre, Olivier Tardieu, Alaa Youssef, Jordi Torres, Josep Ll. Berral
https://arxiv.org/abs/2508.08343
An explicit formula of the Oslo stationary state
Valentin Lallemant, Vincent Rossetto
https://arxiv.org/abs/2508.06315 https://arxiv.org/pdf/2508.06315
Local-global compatibility and the exceptional zero conjecture for GL(3)
Daniel Barrera Salazar, Andrew Graham, Chris Williams
https://arxiv.org/abs/2508.10225
Confidence Regions for Multiple Outcomes, Effect Modifiers, and Other Multiple Comparisons
Paul N Zivich, Stephen R Cole, Noah Greifer, Lina M Montoya, Michael R Kosorok, Jessie K Edwards
https://arxiv.org/abs/2510.07076
On detection probabilities of link invariants
Abel Lacabanne, Daniel Tubbenhauer, Pedro Vaz, Victor L. Zhang
https://arxiv.org/abs/2509.05574
Why Do Decision Makers (Not) Use AI? A Cross-Domain Analysis of Factors Impacting AI Adoption
Rebecca Yu, Valerie Chen, Ameet Talwalkar, Hoda Heidari
https://arxiv.org/abs/2508.00723
Data Cleaning of Data Streams
Valerie Restat, Niklas Rodenhausen, Carina Antonin, Uta Störl
https://arxiv.org/abs/2507.20839
Analysis of LTE/5G Network Performance Parameters in Smartphone Use Cases: A Study of Packet Loss, Delay, and Slice Types
Almamoon Alauthman, Abeer Al-Hyari
https://arxiv.org/abs/2510.04035
New on #Quansight PBC blog: Python Wheels: from Tags to Variants
Many #Python distributions are uniform across different Python versions and platforms. For these distributions, it is sufficient to publish a single wheel that can be installed everywhere. However, some packages are more complex than that; they include compiled Python extensions or binaries. In order to deploy this software robustly on different platforms, you need to publish multiple binary packages, and the installers need to select the one that best fits the platform in use.
For a long time, Python wheels made do with a relatively simple mechanism to describe the needed variance: Platform compatibility tags. These tags identified different Python implementations and versions, operating systems, and CPU architectures. Over time, they were extended to facilitate new use cases. To list a couple: PEP 513 added manylinux tags to standardize the core library dependencies on GNU/Linux systems, and PEP 656 added musllinux tags to facilitate Linux systems with musl libc.
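The tag mechanism can be sketched from the wheel filename itself, which encodes `{name}-{version}-{python tag}-{abi tag}-{platform tag}.whl` (a simplified, assumed parser; real filenames also allow an optional build tag and compressed tag sets, which this ignores):

```python
def wheel_tags(filename: str) -> dict[str, str]:
    """Extract the three compatibility tags from a simple wheel filename."""
    stem = filename.removesuffix(".whl")
    # name and version are dash-separated too; versions containing dashes
    # would break this naive split, hence "sketch".
    name, version, py_tag, abi_tag, plat_tag = stem.split("-")
    return {"python": py_tag, "abi": abi_tag, "platform": plat_tag}

tags = wheel_tags("numpy-2.1.0-cp312-cp312-manylinux_2_17_x86_64.whl")
```

An installer compares each tag triple against the set of tags the running interpreter supports and picks the best-ranked match, which is exactly the mechanism that PEP 513 and PEP 656 extended.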
However, not all new use cases can be handled effectively within the framework of tags. To list a few:
• The advent of GPU-backed computing made distinguishing different acceleration frameworks such as NVIDIA CUDA or AMD ROCm important.
• As the compatibility with older CPUs became less desirable, many distributions have set baselines for their binary packages to x86-64-v2 microarchitecture level, and Python packages need to be able to express the same requirement.
• Numerical libraries support different BLAS/LAPACK, MPI, OpenMP providers, and wish to enable the users to choose the build matching their desired provider.
While tags could technically be bent to facilitate all these use cases, they would grow quite baroque, and, critically, every change to tags needs to be implemented in all installers and package-related tooling separately, making the adoption difficult.
Facing these limitations, software vendors have employed different solutions to work around the lack of an appropriate mechanism. Eventually, the #WheelNext initiative took up the challenge to design a more robust solution.
"""
#packaging
Maximally Useful and Minimally Redundant: The Key to Self Supervised Learning for Imbalanced Data
Yash Kumar Sharma, Vineet Nair, Wilson Naik
https://arxiv.org/abs/2509.08469 ht…
DarTwin made precise by SysMLv2 -- An Experiment
{\O}ystein Haugen, Stefan Klikovits, Martin Arthur Andersen, Jonathan Beaulieu, Francis Bordeleau, Joachim Denil, Joost Mertens
https://arxiv.org/abs/2510.12478
Prediction: #BCI companies currently focusing on developing an implantable visual prosthesis will soon shift their focus toward more general #neuromodulation use cases to reach economy of scale and recoup investments. #NeuroTech
Fine-Tuning Vision-Language Models for Markdown Conversion of Financial Tables in Malaysian Audited Financial Reports
Jin Khye Tan (Faculty of Computer Science,Information Technology, Universiti Malaya), En Jun Choong, Ethan Jeremiah Chitty, Yan Pheng Choo, John Hsin Yang Wong, Chern Eu Cheah
https://arxiv.org/abs/2508.05669
OpenAI launches its first extended ad campaign for ChatGPT featuring three 30-second cinematic ads with a focus on everyday AI use cases in the US and the UK (Tim Nudd/Ad Age)
https://adage.com/creativity/creative-strategy-tactics/aa-chatgpt-firs…
I became a US citizen yesterday!
4 years in the making, a dream for much longer than that. I was fortunate to have one of the fastest routes to citizenship and to have mostly experienced a very smooth process with lovely immigration agents who processed my cases with respect and dignity.
I spent the last 9 months living in fear, watching immigrants get demonized and even permanent residency (green cards) treated as “a privilege” to be revoked at whim.
I've been quiet, afraid to say anything in public that could be misinterpreted or used against me.
I've repeatedly said goodbye to my city, I've cried walking my favorite streets, bracing for the worst as top officials bragged about getting rid of immigrants and "cleaning up" the country as if we were filth.
Now, I intend to find ways to use my citizenship for the greater good and to be civically engaged in ways not available to people here on visas and green cards, affected but forced to suffer in silence. I am looking forward to discovering what that will look like for me.
#USA #citizenship #immigration #immigrants
1/2 (A mini-thread)
Probing evolution of Long GRB properties through their cosmic formation history aided by Machine Learning predicted redshifts
Dhruv S. Bal, Aditya Narendra, Maria Giovanna Dainotti, Nikita S. Khatiya, Aleksander L. Lenart, Dieter H. Hartmann
https://arxiv.org/abs/2510.07306
Comparative Study of Subjective Video Quality Assessment Test Methods in Crowdsourcing for Varied Use Cases
Babak Naderi, Ross Cutler
https://arxiv.org/abs/2509.20118 https://…
Automated Program Repair of Uncompilable Student Code
Griffin Pitts, Aum Pandya, Darsh Rank, Tirth Bhatt, Muntasir Hoq, Bita Akram
https://arxiv.org/abs/2510.06187 https://
From Hard Refusals to Safe-Completions: Toward Output-Centric Safety Training
Yuan Yuan, Tina Sriskandarajah, Anna-Luisa Brakman, Alec Helyar, Alex Beutel, Andrea Vallone, Saachi Jain
https://arxiv.org/abs/2508.09224
An Algorithm for Computing Hopf--Galois Structures and Skew Bracoids of Low Degree
Andrew Darlington
https://arxiv.org/abs/2508.03372 https://arxiv.org/pdf…
On Digital Twins in Defence: Overview and Applications
Marco Giberna, Holger Voos, Paulo Tavares, Jo\~ao Nunes, Tobias Sorg, Andrea Masini, Jose Luis Sanchez-Lopez
https://arxiv.org/abs/2508.05717
TEGRA: A Flexible & Scalable NextGen Mobile Core
Bilal Saleem, Omar Basit, Jiayi Meng, Iftekhar Alam, Ajay Thakur, Christian Maciocco, Muhammad Shahbaz, Y. Charlie Hu, Larry Peterson
https://arxiv.org/abs/2509.07410
Altered Histories in Version Control System Repositories: Evidence from the Trenches
Solal Rapaport (IP Paris, LTCI, ACES, INFRES), Laurent Pautet (INFRES, LTCI, ACES, IP Paris), Samuel Tardieu (INFRES, ACES, IP Paris, LTCI), Stefano Zacchiroli (IP Paris, LTCI, ACES, INFRES)
https://arxiv.org/abs/2509.09294…
Augmented Question-guided Retrieval (AQgR) of Indian Case Law with LLM, RAG, and Structured Summaries
Vishnuprabha V, Daleesha M Viswanathan, Rajesh R, Aneesh V Pillai
https://arxiv.org/abs/2508.04710 …
Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance
Jingwei Zuo, Maksim Velikanov, Ilyas Chahed, Younes Belkada, Dhia Eddine Rhayem, Guillaume Kunsch, Hakim Hacid, Hamza Yous, Brahim Farhat, Ibrahim Khadraoui, Mugariya Farooq, Giulia Campesan, Ruxandra Cojocaru, Yasser Djilali, Shi Hu, Iheb Chaabane, Puneesh Khanna, Mohamed El Amine Seddik, Ngoc Dung Huynh, Phuc Le Khac, Leen AlQadi, Billel Mokeddem, Mohamed Chami, Abdalgader Abubaker, Mikhail Lubin…
Pattern-Based File and Data Access with Python Glob: A Comprehensive Guide for Computational Research
Sidney Shapiro
https://arxiv.org/abs/2509.08843 https://
Optimized Split Computing Framework for Edge and Core Devices
Andrea Tassi, Oluwatayo Yetunde Kolawole, Joan Pujol Roig, Daniel Warren
https://arxiv.org/abs/2509.06049 https://
A Comprehensive Analysis of Evolving Permission Usage in Android Apps: Trends, Threats, and Ecosystem Insights
Ali Alkinoon, Trung Cuong Dang, Ahod Alghuried, Abdulaziz Alghamdi, Soohyeon Choi, Manar Mohaisen, An Wang, Saeed Salem, David Mohaisen
https://arxiv.org/abs/2508.02008
Sensors in viticulture: functions, benefits, and data-driven insights
Milan Milenkovic
https://arxiv.org/abs/2510.03000 https://arxiv.org/pdf/2510.03000
What's Coming Next? Short-Term Simulation of Business Processes from Current State
Maksym Avramenko, David Chapela-Campa, Marlon Dumas, Fredrik Milani
https://arxiv.org/abs/2509.07747
An Adaptive Responsible AI Governance Framework for Decentralized Organizations
Kiana Jafari Meimandi, Anka Reuel, Gabriela Aranguiz-Dias, Hatim Rahama, Ala-Eddine Ayadi, Xavier Boullier, J\'er\'emy Verdo, Louis Montanie, Mykel Kochenderfer
https://arxiv.org/abs/2510.03368
On LLM-Assisted Generation of Smart Contracts from Business Processes
Fabian Stiehle, Hans Weytjens, Ingo Weber
https://arxiv.org/abs/2507.23087 https://ar…
Managing Differentiated Secure Connectivity using Intents
Loay Abdelrazek, Filippo Rebecchi
https://arxiv.org/abs/2509.25462 https://arxiv.org/pdf/2509.254…