Tootfinder

Opt-in global Mastodon full text search. Join the index!

@Techmeme@techhub.social
2025-10-28 12:25:55

Adobe unveils Project Moonlight, an AI agent on its Firefly platform designed to act as a creative director for social media campaigns via text prompts (The Verge)
theverge.com/news/807457/adobe

@Techmeme@techhub.social
2025-10-29 14:57:13

Amazon opens Project Rainier, an $11B AI data center on 1,200 acres in Indiana that trains and runs Anthropic's AI models using 500K Amazon Trainium 2 chips (MacKenzie Sigalos/CNBC)
cnbc.com/2025/10/29/amazon-ope

@jonippolito@digipres.club
2025-08-29 12:02:23

"They’re unknowingly becoming the bad guys”: AI-powered bounty hunters think they’re helping, but their fabricated bug reports are overwhelming solo maintainers like cURL’s Daniel Stenberg—who’s paid $92K for real flaws and now may scrap the program.

A conference presenter with this quote:

A lot of users are annoying. And that's not new. The new thing here is not only the ease with which you can produce this with AI, but also that they actually think they are helping out... They're just unknowingly becoming the bad guys.

—Daniel Stenberg, "AI slop attacks on the curl project"
@arXiv_csCY_bot@mastoxiv.page
2025-08-29 08:27:01

Composable Life: Speculation for Decentralized AI Life
Botao Amber Hu, Fangting
arxiv.org/abs/2508.20668 arxiv.org/pdf/2508.20668

@Mediagazer@mstdn.social
2025-09-24 18:01:31

Cloudflare expands its Project Galileo program, providing access to its Bot Management and AI Crawl Control tools for free to nonprofits and independent media (Mediaweek)
mediaweek.com.au/cloudflare-ex

@arXiv_csHC_bot@mastoxiv.page
2025-07-29 10:53:31

CoGrader: Transforming Instructors' Assessment of Project Reports through Collaborative LLM Integration
Zixin Chen, Jiachen Wang, Yumeng Li, Haobo Li, Chuhan Shi, Rong Zhang, Huamin Qu
arxiv.org/abs/2507.20655

@arXiv_csDC_bot@mastoxiv.page
2025-09-29 08:09:17

The AI_INFN Platform: Artificial Intelligence Development in the Cloud
Lucio Anderlini, Giulio Bianchini, Diego Ciangottini, Stefano Dal Pra, Diego Michelotto, Rosa Petrini, Daniele Spiga
arxiv.org/abs/2509.22117

@Techmeme@techhub.social
2025-10-29 01:40:56

GitHub updates VS Code with Plan Mode for building step-by-step project approaches, MCP Registry integration, definable project rules via AGENTS.md, and more (Sean Michael Kerner/VentureBeat)
venturebeat.com/ai/githubs-age

@primonatura@mstdn.social
2025-08-26 11:00:41

"As AI becomes part of everyday life, it brings a hidden climate cost"
#AI #ArtificialIntelligence #Climate

"the Fedora Council has approved the latest version of the AI-Assisted Contributions policy formally".
Whoever contributes the code is required to be fully transparent on what AI tool has been used for it.
gamingonlinux.co…

@frankel@mastodon.top
2025-08-28 08:17:02

#LinuxFoundation Announces the Formation of the #DeveloperRelations Foundation

@arXiv_physicsaoph_bot@mastoxiv.page
2025-09-29 09:10:37

Generative AI-Downscaling of Large Ensembles Project Unprecedented Future Droughts
Hamish Lewis, Neelesh Rampal, Peter B. Gibson, Luke J. Harrington, Chiara M. Holgate, Anna Ukkola, Nicola M. Maher
arxiv.org/abs/2509.21844

@thesaigoneer@social.linux.pizza
2025-09-29 13:05:14

That Codeberg-from-Github import is very convenient. Really no reason to hang around on that old AI hub anymore.
Along the way I decided to revive cosmix-saigon, my Cosmic spin on NixOS, based on the great Nixbook project by @… .
Once Nix updates the packages to the beta I'll be in touch 😃

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than in a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity and reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second year students writing interactive websites with JavaScript. Without a lot of care (which those students don't yet know how to exercise), the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets), but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they recognize that they don't understand it and re-prompt, or quickly ask for help from an instructor or TA who helps them get rid of the stuff they don't understand and re-prompt or manually add stuff they do understand. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
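To make that concrete, here's a minimal, invented sketch of the kind of Python an assistant might emit: every construct is standard, but generators, the walrus operator, while/else, async/await, and try/finally are exactly the features many 2nd- and 3rd-year students haven't met yet.

```python
import asyncio


def read_scores(path):
    """Generator plus walrus operator: yield one float per non-empty line."""
    with open(path) as f:
        for line in f:
            if (stripped := line.strip()):  # assign and test in one expression
                yield float(stripped)


def first_negative(scores):
    """while/else: the else branch runs only if the loop never hit break."""
    i = 0
    while i < len(scores):
        if scores[i] < 0:
            break
        i += 1
    else:
        return None  # loop finished normally: no negative score found
    return scores[i]


def sum_scores(path):
    """try/finally: make sure the file gets closed even if parsing fails."""
    f = open(path)
    try:
        return sum(float(s) for line in f if (s := line.strip()))
    finally:
        f.close()


async def fetch_all(urls, fetch):
    """async/await: run one fetch coroutine per URL concurrently."""
    return await asyncio.gather(*(fetch(url) for url in urls))
```

Nothing here is exotic to a working developer, which is exactly the problem: it reads as fine to the assistant while reading as a wall of unfamiliar syntax to the student.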
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@pavelasamsonov@mastodon.social
2025-08-25 13:54:47

There's a magic trick that can make any #AI project successful. When the LLM's success rate is too low, just say "it's OK, we will have a human in the loop" and ship it anyway.
There's just one problem with this magic trick: most #LLM implementations don't have the thoughtfu…

@Mediagazer@mstdn.social
2025-10-25 10:36:21

The American Journalism Project, known mostly for providing grants to US local news outlets, is offering a "field guide" to AI tools for local reporting (Andrew Deck/Nieman Lab)
niemanlab.org/2025/10/the-amer

@jonippolito@digipres.club
2025-09-24 13:43:42

On yesterday's Teaching, Learning, and Everything Else podcast, I talked about AI's impact on the environment and classrooms—and argued that it isn’t reinventing education so much as exposing bad habits we should’ve let go years ago link…

Quote: "Instead of thinking of AI transforming pedagogy, I think of AI pointing out problems that were already there. We know there's better ways to teach. We know that project-oriented, individualized learning is better than regurgitation. We know that no one outside of academia writes five paragraph essays anymore. Writing has become discursive and dialogic, and it's dispersed through all these different social media and work contexts. There's lots of reasons to validate writing as a form of …
@arXiv_csNE_bot@mastoxiv.page
2025-08-27 07:35:02

Leveraging Evolutionary Surrogate-Assisted Prescription in Multi-Objective Chlorination Control Systems
Rivaaj Monsia, Olivier Francon, Daniel Young, Risto Miikkulainen
arxiv.org/abs/2508.19173

@mia@hcommons.social
2025-09-19 14:22:23

Some nice examples in the 'use cases' section of AI for Humanists aiforhumanists.com/guides/usec - from OCR to annotation to identifying voices and styles

@detondev@social.linux.pizza
2025-09-24 22:00:54

Embarrassing. Google is absolutely in cahoots with the Israeli settler-colonial project. Refusing to provide this information only makes it more suspicious...

A Google search for "is Benjamin Netanyahu a blackwater kamikaze drone" doesn't have an AI overview
@arXiv_csCV_bot@mastoxiv.page
2025-08-20 10:18:10

RED.AI Id-Pattern: First Results of Stone Deterioration Patterns with Multi-Agent Systems
Daniele Corradetti, José Delgado Rodrigues
arxiv.org/abs/2508.13872

@poppastring@dotnet.social
2025-09-25 18:49:36

Microsoft will no longer allow Israel to use its cloud services to enable its mass surveillance of occupied Palestinians – "the first known case of a US technology company withdrawing services provided to the Israeli military since the beginning of its war on Gaza"

@jerome@jasette.facil.services
2025-09-25 15:31:16

Microsoft has terminated the Israeli military’s access to technology it used to operate a powerful surveillance system that collected millions of Palestinian civilian phone calls made each day in Gaza and the West Bank

@Techmeme@techhub.social
2025-09-23 13:46:05

Meta launches the American Technology Excellence Project, a super PAC to fight AI policy bills at the state level; it previously launched a California PAC (Ashley Gold/Axios)
axios.com/2025/09/23/meta-supe

@arXiv_csSE_bot@mastoxiv.page
2025-10-14 10:55:38

Generative AI for Software Project Management: Insights from a Review of Software Practitioner Literature
Lakshana Iruni Assalaarachchi, Zainab Masood, Rashina Hoda, John Grundy
arxiv.org/abs/2510.10887

@akosma@mastodon.online
2025-10-10 18:46:55

"Chatbots are turning on the flattery, patience, and support. Microsoft AI CEO Mustafa Suleyman said the “cool thing” about the company’s AI personal assistant is that it doesn’t “judge you for asking a stupid question.” It exhibits “kindness and empathy.” Here’s the rub: We need people to judge us. We need people to call us out for making stupid statements. Friction and conflict are key to developing resilience and learning how to function in society."

@metacurity@infosec.exchange
2025-09-05 11:18:34

So last month ESET discovered AI-powered ransomware it called PromptLock. Turns out it was a research project from some NYU School of Engineering students.
PromptLock Ransomware Is Just a Research Project, But It's Still Disturbing
pcmag.com/news/pr…

@jonippolito@digipres.club
2025-08-18 17:46:58

Need to talk to your students about AI ethics this fall? I've uploaded a course module for the AI IMPACT RISK framework, with interactive website, video, infographics, and quiz, that you can import into your own courseware
This imscc file can be imported directly into Canvas/Brightspace/Blackboard, or you can cherrypick resources directly from the IMPACT RISK website. Also new: SVG versions of all graphics

A schoolroom with children at desks and the title "AI IMPACT RISK" behind them out the window.
@arXiv_csHC_bot@mastoxiv.page
2025-09-16 10:24:47

AI Hasn't Fixed Teamwork, But It Shifted Collaborative Culture: A Longitudinal Study in a Project-Based Software Development Organization (2023-2025)
Qing Xiao, Xinlan Emily Hu, Mark E. Whiting, Arvind Karunakaran, Hong Shen, Hancheng Cao
arxiv.org/abs/2509.10956

In his new investigative series "Military AI Watch", award-winning science reporter Peter Byrne explores
“how Silicon Valley, corporate media, the Department of Defense, the banking industry, and scientific institutions all intersect in the effort to militarize AI.”
thereal…

@arXiv_csIR_bot@mastoxiv.page
2025-09-05 08:39:21

Safeguarding Patient Trust in the Age of AI: Tackling Health Misinformation with Explainable AI
Sueun Hong, Shuojie Fu, Ovidiu Serban, Brianna Bao, James Kinross, Francesca Toni, Guy Martin, Uddhav Vaghela
arxiv.org/abs/2509.04052

@arXiv_csLG_bot@mastoxiv.page
2025-10-15 10:44:31

Evaluation of Real-Time Preprocessing Methods in AI-Based ECG Signal Analysis
Jasmin Freudenberg, Kai Hahn, Christian Weber, Madjid Fathi
arxiv.org/abs/2510.12541

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest, most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims its small level of spending on AI equates to a low climate impact. However, given the deep subsidies currently in place by the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@newsie@darktundra.xyz
2025-10-09 15:41:45

Data Hoarder Uses AI to Create Searchable Database of Epstein Files 404media.co/data-hoarder-uses-

@UP8@mastodon.social
2025-09-18 15:19:35

🦾 Meet the teens behind RedSnapper: a smart Arduino-powered prosthetic arm
blog.arduino.cc/2025/08/21/mee

@thomasfuchs@hachyderm.io
2025-10-08 02:52:47

“Don't personalize it. The next step is deification.”
Quote about a rogue AI from Colossus: The Forbin Project (1970)

@arXiv_csAI_bot@mastoxiv.page
2025-09-08 07:30:19

The Ethical Compass of the Machine: Evaluating Large Language Models for Decision Support in Construction Project Management
Somtochukwu Azie, Yiping Meng
arxiv.org/abs/2509.04505

@Techmeme@techhub.social
2025-09-25 15:20:56

Sources: Microsoft terminated Israel's Unit 8200 access to Azure, after an investigation found the tech was used for mass surveillance in Gaza and West Bank (The Guardian)
theguardian.com/world/2025/sep

@arXiv_eessSY_bot@mastoxiv.page
2025-09-17 09:58:40

Towards Native AI in 6G Standardization: The Roadmap of Semantic Communication
Ping Zhang, Xiaodong Xu, Mengying Sun, Haixiao Gao, Nan Ma, Xiaoyun Wang, Ruichen Zhang, Jiacheng Wang, Dusit Niyato
arxiv.org/abs/2509.12758

@arXiv_csCR_bot@mastoxiv.page
2025-10-06 07:57:39

Agentic-AI Healthcare: Multilingual, Privacy-First Framework with MCP Agents
Mohammed A. Shehab
arxiv.org/abs/2510.02325 arxiv.org/pdf/2510…

@mgorny@social.treehouse.systems
2025-08-23 05:02:41

It's time to shame #PDM, the "#Python package and dependency manager", for embracing unethical coding. Also, clearly wasting large amounts of energy to make a 4-line change, and getting it wrong twice while at it.
(Yes, the "no significant changes" is another fix to the same #LLM coding mistake.)
#AI

@arXiv_csCY_bot@mastoxiv.page
2025-08-13 07:33:02

Resisting AI Solutionism through Workplace Collective Action
Kevin Zheng, Linda Huber, Aaron Stark, Nathan Kim, Francesca Lameiro, Wells Lucas Santo, Shreya Chowdhary, Eugene Kim, Justine Zhang
arxiv.org/abs/2508.08313

@Techmeme@techhub.social
2025-09-14 08:05:40

How thousands of small-scale farmers in Malawi are using a government-backed AI chatbot, designed by the nonprofit Opportunity International, for farming advice (Gregory Gondwe/Associated Press)
apnews.com/article/malawi-ai-f

@kcase@mastodon.social
2025-09-05 00:05:11

Do you have any projects that have stalled out? Could you use some prompts to help break them down into smaller pieces?
Now in TestFlight, OmniFocus 4.8 plug-ins can consult Apple's new on-device Foundation Models. These AI models are built into your device, so you’re not sending any data to any other systems, nor are you using any expensive outside resources—you’re just using more capabilities of the device that’s already at your fingertips.

9-second video showing someone type "Add solar projects to my home", activate the "Help Me Plan" plug-in, waiting a few seconds, and receiving suggestions for breaking down the project into smaller steps.

(If any of the suggested steps still aren't yet actionable enough, you can select them and use the plug-in again to break them down even further.)
@ErikJonker@mastodon.social
2025-08-05 06:04:03

GPT-NL, a nice initiative that is making progress. Note also its goal: "Finally, it is good to keep in mind that GPT-NL is being developed for specific tasks: summarizing, simplifying, and extracting information. The goal of GPT-NL is not to develop a generic knowledge model."
Read this blog:

@grumpybozo@toad.social
2025-10-08 20:49:53

LOL.
Until last week, the scraperbots were poisoning models with the arcane performance details and masscheck logs of the ASF #SpamAssassin project. 2 decades of data that requires a deep knowledge of SA to make any sense of. It's freely available as a matter of principle. Between thousands of days & hundreds of rules with about a dozen distinct corpora, we poisoned the models wi…

@Techmeme@techhub.social
2025-10-14 04:40:51

Nvidia says it is donating the Vera Rubin NVL144 server rack architecture to the Open Compute Project and outlines its vision for "gigawatt AI factories" (Mike Wheatley/SiliconANGLE)
siliconangle.com/2025/10/13/nv

@sean@scoat.es
2025-09-03 18:50:04

The constant “you must want AI so bad that we’re forcing it on you” hubris is really wearing me out.
GitHub’s current suggestions when clicking the “Assignees” dialogue—for me, on our main repo/project with over a thousand issues/PRs—pre-fill with `scoates` (me) and `Copilot`. Not the other members of our team. There are only 3 other members in this team and they’d all fit in the UI that pops up.
This is either utter incompetence, or the worst nudging. So tired. Give me agency; …

The “assignees” dialog I mentioned with just me and Copilot. Plenty of room for 3 more.
@arXiv_csHC_bot@mastoxiv.page
2025-09-03 12:54:43

Look: AI at Work! - Analysing Key Aspects of AI-support at the Work Place
Stefan Schiffer, Anna Milena Rothermel, Alexander Ferrein, Astrid Rosenthal-von der Pütten
arxiv.org/abs/2509.02274

@arXiv_csAI_bot@mastoxiv.page
2025-10-03 07:58:21

Towards a Framework for Supporting the Ethical and Regulatory Certification of AI Systems
Fabian Kovac, Sebastian Neumaier, Timea Pahi, Torsten Priebe, Rafael Rodrigues, Dimitrios Christodoulou, Maxime Cordy, Sylvain Kubler, Ali Kordia, Georgios Pitsiladis, John Soldatos, Petros Zervoudakis
arxiv.org/abs/2510.00084

@Techmeme@techhub.social
2025-10-01 10:51:03

Wikimedia Deutschland launches the Wikidata Embedding Project, a vector-based semantic search database with nearly 120M entries, to make data accessible to AI (Russell Brandom/TechCrunch)
techcrunch.com/2025/10/01/new-

@arXiv_csSE_bot@mastoxiv.page
2025-08-19 09:30:20

LinkAnchor: An Autonomous LLM-Based Agent for Issue-to-Commit Link Recovery
Arshia Akhavan, Alireza Hosseinpour, Abbas Heydarnoori, Mehdi Keshani
arxiv.org/abs/2508.12232

@mgorny@social.treehouse.systems
2025-08-23 10:26:37

Well, I am complaining about #AI slop introducing some random bugs in a minor userspace project, and in the meantime I learn that #Linux #kernel LTS developers are using AI to backport patches, and creating new vulnerabilities in the process.
Note: the whole thread is quite toxic, so I'd take it with a grain of salt, but still looks like the situation is quite serious.
"You too can crash today's 6.12.43 LTS kernel thanks to a stable maintainer's AI slop."
And apparently this isn't the first time either:
"When AI decided to select a random CPU mitigation patch for backport last month that turned a mitigation into a no-op, nothing was done, it sat unfixed with a report for a month (instead of just immediately reverting it), and they rejected a CVE request for it."
#security #LLM #NVIDIA #Gentoo

@jonippolito@digipres.club
2025-10-07 13:11:38

Did you know that the Learning With AI toolkit has a speakers bureau for AI and education? Filter for expertise like "agents administration" or "data bias," then click on presenters to watch recordings that demonstrate their presentation styles.
Have a speaker to recommend who's not on the list? DM me and I'll send you a form to submit a name, specialty, and recording of any events for consideration.

A screenshot of the speakers bureau from Learning With AI with a grid of names and a tag cloud for filtering by expertise
@Techmeme@techhub.social
2025-08-14 12:46:12

The US NSF and Nvidia partner to fund the Open Multimodal Infrastructure to Accelerate Science project, led by Ai2; NSF is contributing $75M and Nvidia $77M (Kyt Dotson/SiliconANGLE)
siliconangle.com/2025/08/14/ns

@metacurity@infosec.exchange
2025-08-06 14:16:55

You don't want to miss today's extra-packed Metacurity for the most critical infosec developments you should know today, including
--Microsoft enabled Israeli spy agency's mass surveillance of Palestinians' mobile calls,
--Cisco's registered web users disclosed in a likely Salesforce breach-related vishing attack,
--Google confirms customer theft in Salesforce breach-related incident,
--Broadcom chip flaw exposes millions of Dell laptops to attack, …

@arXiv_csCY_bot@mastoxiv.page
2025-10-15 07:58:01

Artificial Intelligence for Optimal Learning: A Comparative Approach towards AI-Enhanced Learning Environments
Ananth Hariharan
arxiv.org/abs/2510.11755

@Techmeme@techhub.social
2025-08-18 08:50:37

SoftBank buys Foxconn's Ohio EV plant; Foxconn will produce AI servers at the plant as part of Softbank's $500B Stargate project with OpenAI and Oracle (Debby Wu/Bloomberg)
bloomberg.com/news/articles/20

@Techmeme@techhub.social
2025-09-06 06:01:25

Public records show C3 AI's Project Sherlock, the company's flagship contract to speed up policing in San Mateo County, has struggled with usability issues and more (Thomas Brewster/Forbes)
forbes.com/sites/thomasbrewste

@arXiv_csHC_bot@mastoxiv.page
2025-10-14 08:48:58

ROBOPSY PL[AI]: Using Role-Play to Investigate how LLMs Present Collective Memory
Margarete Jahrmann, Thomas Brandstetter, Stefan Glasauer
arxiv.org/abs/2510.09874

@arXiv_astrophIM_bot@mastoxiv.page
2025-10-03 08:36:21

Enhancing the development of Cherenkov Telescope Array control software with Large Language Models
Dmitriy Kostunin, Elisa Jones, Vladimir Sotnikov, Valery Sotnikov, Sergo Golovachev, Alexandre Strube
arxiv.org/abs/2510.01299

@mgorny@social.treehouse.systems
2025-08-24 03:14:44

You know what's the difference between a human programmer and an "#AI coding assistant"?
Sure, human programmers make mistakes. And rookies often make "worse" mistakes than an #LLM can come up with. However, the difference is that humans can actually learn. Teaching them comes with a payoff; not always and not necessarily for your project, but there's a good chance that they'll become better programmers and contribute back to the community.
Sure, human programmers sometimes plagiarize. And of course they need to look at some code while they learn. However, they actually can think on their own and come up with something original. And they can learn that plagiarism is wrong.
And most importantly, usually they don't lie if they don't have to, and there are limits to their smugness. You can build a healthy community with them.
You can't build a community with unethical bullshit-spitting machines.
#programming #FreeSoftware #OpenSource

@arXiv_physicsmedph_bot@mastoxiv.page
2025-09-30 10:09:41

Real-Time Motion Correction in Magnetic Resonance Spectroscopy: AI solution inspired by fundamental science
Benedetta Argiento, Alberto Annovi, Silvia Capuani, Matteo Cacioppo, Andrea Ciardiello, Roberto Coccurello, Stefano Giagu, Federico Giove, Alessandro Lonardo, Francesca Lo Cicero, Alessandra Maiuro, Carlo Mancini Terracciano, Mario Merola, Marco Montuori, Emilia Nisticò, Pierpaolo Perticaroli, Biagio Rossi, Cristian Rossi, Elvira Rossi, Francesco Simula, Cecilia Voena

@Techmeme@techhub.social
2025-10-21 12:50:47

Documents: OpenAI has more than 100 ex-investment bankers paid $150 per hour to train its AI to build financial models as part of its secretive Project Mercury (Omar El Chmouri/Bloomberg)
bloomberg.com/news/articles/20

@arXiv_csCY_bot@mastoxiv.page
2025-07-30 09:25:52

Safety Features for a Centralised AGI Project
Sarah Hastings-Woodhouse
arxiv.org/abs/2507.21082 arxiv.org/pdf/2507.21082

@arXiv_csIR_bot@mastoxiv.page
2025-10-07 09:32:52

The LCLStream Ecosystem for Multi-Institutional Dataset Exploration
David Rogers, Valerio Mariani, Cong Wang, Ryan Coffee, Wilko Kroeger, Murali Shankar, Hans Thorsten Schwander, Tom Beck, Frédéric Poitevin, Jana Thayer
arxiv.org/abs/2510.04012

@arXiv_csSE_bot@mastoxiv.page
2025-07-31 08:34:51

Machine Learning Experiences: A story of learning AI for use in enterprise software testing that can be used by anyone
Michael Cohoon, Debbie Furman
arxiv.org/abs/2507.22064

@arXiv_csCY_bot@mastoxiv.page
2025-09-04 08:38:51

AI-Generated Images for representing Individuals: Navigating the Thin Line Between Care and Bias
Julia C. Ahrend, Björn Döge, Tom M. Duscher, Dario Rodighiero
arxiv.org/abs/2509.03071

@Techmeme@techhub.social
2025-10-16 06:21:00

Nvidia partners with startup Firmus on Project Southgate, a $2.9B initial undertaking to build renewable energy-powered AI data centers across Australia (Keira Wright/Bloomberg)
bloomberg.com/news/articles/20

@Techmeme@techhub.social
2025-08-06 04:51:05

Microsoft unveils Project Ire, a prototype AI system that can reverse engineer and identify malicious software autonomously, without human assistance (Todd Bishop/GeekWire)
geekwire.com/2025/microsofts-n

@Techmeme@techhub.social
2025-10-06 19:15:53

Bee Maps, a decentralized mapping project powered by Hivemapper on the Solana blockchain, raised $32M to expand its network by distributing AI-enabled dashcams (CoinDesk)
coindesk.com/tech/2025/10/06/b

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy) it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem, but still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions or needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too-baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers, if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI.
#AI #GenAI #LLMs #VibeCoding

@mgorny@social.treehouse.systems
2025-10-22 06:52:00

Remember the package that recently had some trailing junk in the .tar.gz that broke GNU tar, and replied to my bug report with a comprehensive #LLM analysis and a slightly sloppy release checking workflow?
They've made a new release, and this time the source distribution is a completely broken gzip stream.
Honestly, bumping #Python packages for #Gentoo all these years, I don't recall ever seeing a problem with gzip streams. And then, #autobahn starts using #ClaudeCode heavily, and two bad releases in a row. I can't help but consider the project compromised at this point.
#NoAI #AI

@Techmeme@techhub.social
2025-09-06 00:01:42

OpenAI is merging its Model Behavior team with its Post Training group to bring the work of the Model Behavior team closer to core model development (Maxwell Zeff/TechCrunch)
techcrunch.com/2025/09/05/open

@Techmeme@techhub.social
2025-10-09 09:15:52

A look at ASML's planned 100-hectare expansion project in Eindhoven, which is set to create 20,000 jobs and is focused on expanding production of its EUV machines (Bloomberg)
bloomberg.com/news/features/20

@Techmeme@techhub.social
2025-08-08 11:35:50

Sources: SoftBank purchased Foxconn's EV plant in Ohio for $375M, a move aimed at kickstarting its $500B Stargate data center project with OpenAI and Oracle (Bloomberg)
bloomberg.com/news/articles/20