Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_csOH_bot@mastoxiv.page
2025-08-21 07:59:20

MastitisApp: a software for preventive diagnosis of mastitis in dairy cows
Italo Henrique Souza Mafra, Glauber da Rocha Balthazar
arxiv.org/abs/2508.14124

@arXiv_csNI_bot@mastoxiv.page
2025-08-22 07:51:51

Toward Sustainable Subterranean mMTC: Space-Air-Ground-Underground Networks Powered by LoRaWAN and Wireless Energy Transfer
Kaiqiang Lin, Mohamed-Slim Alouini
arxiv.org/abs/2508.15058

@arXiv_csSE_bot@mastoxiv.page
2025-09-18 09:58:11

Evaluating Classical Software Process Models as Coordination Mechanisms for LLM-Based Software Generation
Duc Minh Ha, Phu Trac Kien, Tho Quan, Anh Nguyen-Duc
arxiv.org/abs/2509.13942

@metacurity@infosec.exchange
2025-07-16 10:40:10

Korea seems to be a new epicenter for cyberattacks, with a bunch of luxury brands, a giant telco, and now a major insurance company hit with ransomware attacks. If it were an English-speaking company, it would all seem very Scattered Spider-esque.
koreatimes.co.kr/e…

@arXiv_grqc_bot@mastoxiv.page
2025-09-19 09:45:51

Residual Test for the Third Gravitational-Wave Transient Catalog
Dicong Liang, Ning Dai, Yingjie Yang
arxiv.org/abs/2509.14924 arxiv.org/pd…

@thomasfuchs@hachyderm.io
2025-08-01 14:16:28

As for “but it's great for coding!”…
…world-wide there are about 3.6 billion jobs or so, of which ~25 million are in software development; this means maybe about 0.7% of all jobs world-wide can use "great for coding".
Writing actual code amounts to maybe, if you're lucky, 10% of the work a software developer does.
The rest is meetings, high-level specifications, email and chat, more meetings, learning new things, updating stuff, lots of testing and debugging, etc.
The gist is, the supposed gains from "AI" are largely irrelevant at the level of the economy (and indeed there are signs, and studies, showing it doesn't do anything for programmer productivity either).
tl;dr: This is the worst economic bubble in history, pushing a dream of a magical technology that unfortunately doesn't work, by appealing to investor greed.
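The post's percentages can be reproduced with a quick back-of-the-envelope calculation. This is a sketch using the post's own rough estimates (3.6 billion jobs, 25 million developers, ~10% of developer time spent writing code), not authoritative labor statistics:

```python
# Back-of-the-envelope check of the figures in the post above.
# All inputs are the post's own rough estimates, not measured data.
world_jobs = 3.6e9      # ~3.6 billion jobs world-wide
dev_jobs = 25e6         # ~25 million software development jobs
coding_share = 0.10     # ~10% of a developer's work is writing code

dev_fraction = dev_jobs / world_jobs
print(f"Jobs in software development: {dev_fraction:.2%}")           # ~0.69%
print(f"World-wide work that is actual coding: {dev_fraction * coding_share:.3%}")
```

Running this gives roughly 0.69% of jobs in software development, and about 0.07% of all work world-wide being actual code-writing, which is the basis of the post's "completely irrelevant" claim.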

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest, most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this, despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims a small level of expenses on AI equates to low climate impact. However, given the deep subsidies currently in place by the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars' worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism. This article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@arXiv_econEM_bot@mastoxiv.page
2025-08-19 07:39:59

A statistician's guide to weak-instrument-robust inference in instrumental variables regression with illustrations in Python
Malte Londschien
arxiv.org/abs/2508.12474

@arXiv_csSE_bot@mastoxiv.page
2025-09-18 08:46:51

Crash Report Enhancement with Large Language Models: An Empirical Study
S M Farah Al Fahim, Md Nakhla Rafi, Zeyang Ma, Dong Jae Kim, Tse-Hsun (Peter) Chen
arxiv.org/abs/2509.13535

@arXiv_csSE_bot@mastoxiv.page
2025-08-19 10:08:10

ChangePrism: Visualizing the Essence of Code Changes
Lei Chen, Michele Lanza, Shinpei Hayashi
arxiv.org/abs/2508.12649 arxiv.org/pdf/2508.1…