Tootfinder

Opt-in global Mastodon full text search. Join the index!

@guerda@ruhr.social
2025-07-29 06:54:39

Question for all #GitLab admins: is there a way to automatically grant a user access to all projects instance-wide in self-hosted GitLab?
An audit user isn't sufficient for a review, and I'd like to automate this as far as possible, including for new projects.

@tiotasram@kolektiva.social
2025-06-24 09:39:49

Subtooting since people in the original thread wanted it to be over, but selfishly tagging @… and @… whose opinions I value...
I think that saying "we are not a supply chain" is exactly what open-source maintainers should be doing right now in response to "open source supply chain security" threads.
I can't claim to be an expert and don't maintain any important FOSS stuff, but I do release almost all of my code under open licenses, and I do use many open source libraries, and I have felt the pain of needing to replace an unmaintained library.
There's a certain small-to-mid-scale class of program, including many open-source libraries, which can be built/maintained by a single person, and which to my mind best operates on a "snake growth" model: incremental changes/fixes, punctuated by periodic "skin-shedding" phases where major rewrites or version updates happen. These projects aren't immortal either: as the whole tech landscape around them changes, they become unnecessary and/or people lose interest, so they go unmaintained and eventually break. Each time one of their dependencies breaks (or has a skin-shedding moment) there's a higher probability that they break or shed too, as maintenance needs shoot up at these junctures. Unless you're a company trying to make money from a single long-lived app, it's actually okay that software churns like this, and if you're a company trying to make money, your priorities absolutely should not factor into any decisions people making FOSS software make: we're trying (and to a huge extent succeeding) to make a better world (and/or just have fun with our own hobbies and share that fun with others) that leaves behind the corrosive & planet-destroying plague which is capitalism, and you're trying to personally enrich yourself by embracing that plague. The fact that capitalism is *evil* is not an incidental thing in this discussion.
To make an imperfect analogy, imagine that the peasants of some domain have set up a really-free-market, where they provide each other with free stuff to help each other survive, sometimes doing some barter perhaps but mostly just everyone bringing their surplus. Now imagine the lord of the domain, who is the source of these peasants' immiseration, goes to this market secretly & takes some berries, which he uses as one ingredient in delicious tarts that he then sells for profit. But then the berry-bringer stops showing up to the free market, or starts bringing a different kind of fruit, or even ends up bringing rotten berries by accident. And the lord complains "I have a supply chain problem!" Like, fuck off dude! Your problem is that you *didn't* want to build a supply chain and instead thought you would build your profit-focused business on other people's free stuff. If you were paying the berry-picker, you'd have a supply chain problem, but you weren't, so you really have an "I want more free stuff" problem when you can't be arsed to give away your own stuff for free.
There can be all sorts of problems in the really-free-market, like maybe not enough people bring socks, so the peasants who can't afford socks are going barefoot, and having foot problems, and the peasants put their heads together and see if they can convince someone to start bringing socks, and maybe they can't and things are a bit sad, but the really-free-market was never supposed to solve everyone's problems 100% when they're all still being squeezed dry by their taxes: until they are able to get free of the lord & start building a lovely anarchist society, the really-free-market is a best-effort kind of deal that aims to make things better, and sometimes will fall short. When it becomes the main way goods in society are distributed, and when the people who contribute aren't constantly drained by the feudal yoke, at that point the availability of particular goods is a real problem that needs to be solved, but at that point, it's also much easier to solve. And at *no* point does someone coming into the market to take stuff only to turn around and sell it deserve anything from the market or those contributing to it. They are not a supply chain. They're trying to help each other out, but even then they're doing so freely and without obligation. They might discuss amongst themselves how to better coordinate their mutual aid, but they're not going to end up forcing anyone to bring anything or even expecting that a certain person contribute a certain amount, since the whole point is that the thing is voluntary & free, and they've all got changing life circumstances that affect their contributions. Celebrate whatever shows up at the market, express your desire for things that would be useful, but don't impose a burden on anyone else to bring a specific thing, because otherwise it's fair for them to impose such a burden on you, and now you two are doing your own barter thing that's outside the parameters of the really-free-market.

@cowboys@darktundra.xyz
2025-08-28 23:21:32

Ex-Cowboys WR Predicted To Be Traded To The Steelers heavy.com/sports/nfl/dallas-co

@Techmeme@techhub.social
2025-08-26 09:55:46

A profile of A24, valued at $3.5B in June 2024, as the studio focuses on big budget projects and invests in AI tools via A24 Labs, drawing mixed reactions (Alex Barasch/New Yorker)
newyorker.com/magazine/2025/09

@Mediagazer@mstdn.social
2025-08-26 09:55:43

A profile of A24, valued at $3.5B in June 2024, as the studio focuses on big budget projects and invests in AI tools via A24 Labs, drawing mixed reactions (Alex Barasch/New Yorker)
newyorker.com/magazine/2025/09

@gedankenstuecke@scholar.social
2025-07-27 02:26:52

«Komoot’s core tech of Leaflet map, Graphhopper routing engine, and #OpenStreetMap data are all free, open-source projects. This is in addition to all the open-source web servers, databases, and operating systems that tech companies build upon. They leech off the open-source commons»
I'm convinced that this is one of the big reasons why we need some form of @… sooner rather than later, to ensure our commons can be saved from corp capture.

@arXiv_csCY_bot@mastoxiv.page
2025-08-29 08:21:41

RelAItionship Building: Analyzing Recruitment Strategies for Participatory AI
Eugene Kim, Vaibhav Balloli, Berelian Karimian, Elizabeth Bondi-Kelly, Benjamin Fish
arxiv.org/abs/2508.20176

@kazys@mastodon.social
2025-06-15 18:49:08

Sunday reading. A long-form essay on the Rise and Fall of the Author. I won't hide the link today so it'll be shadow-banned across social media. Well, what else is new? Except that this should either be a book or heavily edited down. So it goes.
varnelis.net/works_and_project

@andres4ny@social.ridetrans.it
2025-08-22 22:09:51

LinkedIn is like, "Linux Engineer - <random small company> High experience match!"
And then when you click on the link, it's all "<random small company> is here to rise above the ordinary. The work we do here goes far beyond day-to-day projects - we protect the US defense industry with our automated, efficient mass killing machines. Huge growth opportunity and career growth in the Genocide Industry!"

@andycarolan@social.lol
2025-06-25 19:01:47

Ok, just a smol rant... Every time there's a new job platform, it's pretty much necessary to upload a whole portfolio again. And once all the images and text are uploaded, the tags need to be accurate so they appear in site searches correctly.
Not only that, but it then needs to be maintained. So, any changes or new projects to add... need to be done on every platform.
Am I wrong in thinking this is so incredibly tedious? lol
#Work #Jobs #Career

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore that one too; my focus here is on the reach of project complexity, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a lot of careful prompting, of a kind those students don't know how to deploy, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask for help from an instructor or TA who helps them get rid of the stuff they don't understand and re-prompt, or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
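As a purely illustrative sketch (not from any real LLM transcript), here's the kind of perfectly valid Python an assistant might drop into a project, combining two of the constructs named above (the walrus operator and the rarely-taught while/else) that many third-year students have never seen:

```python
def find_config(paths):
    """Return the first .toml path in the list, or a default name."""
    i = 0
    while i < len(paths):
        if (p := paths[i]).endswith(".toml"):  # walrus: bind and test at once
            break
        i += 1
    else:  # runs only if the loop finished *without* a break -- rarely known!
        p = "default.toml"
    return p

print(find_config(["notes.txt", "app.toml"]))  # app.toml
print(find_config(["notes.txt"]))              # default.toml
```

The code works, but a student who has never met `while ... else` has no mental model for why `p` is defined on the second call, which is exactly the debugging trap described above.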
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@NFL@darktundra.xyz
2025-08-22 15:49:32

NFC West betting preview: 49ers and Rams top odds, but don't count out Seahawks and Cards espn.com/espn/betting/story/_/

@publicvoit@graz.social
2025-07-11 20:38:43

Photos: The Scale of China’s Solar-Power Projects - The Atlantic
theatlantic.com/photography/ar
"As the Trump administration’s “Big, Beautiful Bill” eliminate…

@jeang3nie@social.linux.pizza
2025-06-23 13:54:12

Last week of this school term starts today for me. It feels good to be able to relax a little. All of the major assignments and projects are done and turned in. The best part is that I'm at the halfway point now, and almost completely done with GE requirements. Next term I get two computer science courses for the first time since starting on this journey.

@ErikJonker@mastodon.social
2025-07-12 11:23:22

China is dramatically investing in renewable energy. In pure self-interest, but it will also help to improve its position as a world superpower. In the USA, Trump is breaking down investments in renewable energy. Europe should not follow his self-destructive example.

Robert F. Kennedy Jr., the health secretary and a longtime vaccine critic,
announced in a statement Tuesday that
❌ $500 million worth of vaccine development projects -- all using mRNA technology -- will be halted.
The projects — 22 of them — are being led by some of the nation’s leading pharmaceutical companies like Pfizer and Moderna to prevent flu, COVID-19 and H5N1 infections

@aredridel@kolektiva.social
2025-08-20 13:06:56

Y'know what rubs me weird? Open Source guys who are all into competing projects. Like sure sometimes you gotta fork 'cause people's desires really are incompatible, but here we are with a system that lets us join forces for the greater good and here you are making Yet Another Thingy instead of collaborating.
I kinda fundamentally don't trust you to behave well in your project if this is the foundation.
There's reasons for forks and alternatives but come on

@bencurthoys@mastodon.social
2025-08-15 11:03:57

Syncthing V2 is out!
github.com/syncthing/syncthing
Of all the open source projects I use, this is by far the most useful AND has the most friendly community. Funny how those things go together. If you are at all interested in decouplin…

@NFL@darktundra.xyz
2025-06-18 16:40:53

2025 NFL All-Rookie Team: Projecting 11 instant-impact newcomers on offense nfl.com/news/2025-nfl-all-rook

@adulau@infosec.exchange
2025-07-18 14:08:34

Curious about all the open source and projects developed by @… ?
CIRCL Open Source tools powering SOC & CSIRT teams.
#opensource

CIRCL Open Source tools and SOC/CSIRT eco-system

@teledyn@mstdn.ca
2025-06-02 23:25:26

Probably stating the obvious, but for the past 30 years at least, contributing (in any capacity) to opensource projects has been an excellent way to network with a global pool of top-tier folks in all the related domains, and to test the limits of your craft; plus all the work you do can be openly shared with any prospective employer. So you aren't 'volunteering' so much as building your portfolio, grad-school style, but with no student fees 😁
#fedihire #lifelonglearning

@arXiv_condmatstatmech_bot@mastoxiv.page
2025-08-18 09:08:20

Ferromagnetic and Spin-Glass Finite-Temperature Order but no Antiferromagnetic Order in the d=1 Ising Model with Long-Range Power-Law Interactions
E. Can Artun, A. Nihat Berker
arxiv.org/abs/2508.11168

@Dragofix@veganism.social
2025-07-07 23:08:47

Ban all new fossil fuel projects for a safe future! #environment

@georgiamuseum@glammr.us
2025-07-07 17:38:05

Kelsey Siegert, a recent master's graduate from the #UniversityOfGeorgia #LamarDoddSchoolOfArt, recently joined our team as our first William J. Thompson Curatorial Fellow. Here's a little bit more about Siegert as well as the projects she'll be working on.

Kelsey Siegert stands to the right of a sculpture by William J. Thompson titled "Archangel." She wears a gray cardigan and stands with her right hand on her hip, near a lightpost. The scenery all around is lush and green.

@yaxu@post.lurk.org
2025-07-13 09:00:52

Is it better to let people know when you see them doing something when others have done something similar previously?
I've found myself doing this a lot around live coding, on one level it seems helpful to know about prior art, and fun to talk about weird old projects/events. On the other it could be stifling to obsess over identifying the 'first' person to try something, and might feel like old people are trying to pitch their tents all over your garden.

@rasterweb@mastodon.social
2025-06-03 21:48:03

I may need to get back to doing illustrations again since it's been about a year that I took a break...
Hand drawn stuff (even if digitally) might be the future of human art with all this AI bullshit going on.
➡️ rasterweb.net/raster/projects/

@cdp1337@social.veraciousnetwork.com
2025-07-12 05:01:50

All this talk of peer-to-peer texting via Bluetooth with BitChat pushed me over the ledge; finally pulled the trigger to pick up some LoRa devices and installed Meshtastic on them.
heltec.org/project/wifi-lora-3
Thus far am VERY impressed with the capabi…

@ripienaar@devco.social
2025-08-02 13:55:38

Claude projects are amazing.
I have a ton of docs for my house. All materials, EU certificates for everything, plans for every section of the wood structure, road manufacturer details, sewers, land purchase etc., all in Latvian.
I uploaded all into a Claude project and it just answers English questions about it all in English.

@mgorny@social.treehouse.systems
2025-07-05 12:23:39

Like other large #FreeSoftware projects, #Gentoo developers have varying degrees of activity. There are some people who dedicate a lot of their free time to Gentoo, maintain hundreds of packages, participate in multiple areas. Then, there are people with narrower interests, lower commit counts, but they are still putting an effort and making Gentoo a better distribution — and that matters. But then, there is the tail.
There are a few developers whose main talents seem to be 1) finding packages that require absolutely minimal maintenance effort, and 2) justifying their developer status with long essays. I mean, this is getting beyond absurd. It is not just "my packages are all up-to-date". It is not even "my packages require very low maintenance, that's why I'm not doing much". It is literally "I deliberately choose low-maintenance packages, so I don't have to do anything". But of course, all these people definitely need commit access to Gentoo, and show off their Gentoo developer badges, and it's *so damn unfair*.
And in the meantime, other developers are overburdened, and getting burned out. And they step down from more things. And who takes these things over? Of course, not the developers who just admitted to not having much to do…

@dwf@social.linux.pizza
2025-06-21 13:14:15

Looking at the Markdown note-taking/Zettelkasten/"second brain" software space and being really put off by the almost religious fervor around it all.
I have a variety of "things I infrequently want to note down in a place I can reliably retrieve them from later" and "ongoing personal projects that get picked up and put down as bandwidth allows, for which I'd like to serialize my work".
I don't want to save every stray thought. Most of my tho…

@grumpybozo@toad.social
2025-06-07 21:37:23

Wow, this is a first. A site I actually want to see (archive.ph) is using an IP address on the @… DROP list.
(no, I do NOT think that’s a Spamhaus problem. They are doing their job.)

@azonenberg@ioc.exchange
2025-07-05 21:52:20

Just a little more readable than the original implementation...
github.com/azonenberg/common-e
I could probably make this even better if i added some con…

@arXiv_hepex_bot@mastoxiv.page
2025-06-06 09:49:22

This arxiv.org/abs/2505.11292 has been replaced.
initial toot: mastoxiv.page/@arXiv_hepe…

@arXiv_astrophIM_bot@mastoxiv.page
2025-06-16 08:48:20

A Comprehensive Guide to the U.S. Civil Space Budget
Lindsay DeMarchi
arxiv.org/abs/2506.11138 arxiv.org/pdf/2506.111…

@stefan@gardenstate.social
2025-07-31 16:12:17

Every day I try to think of a small site or service I could build that would generate enough money to be my main source of income. I've failed at this for so long that I'm probably approaching this problem all wrong, and I assume I just don't have enough patience to dedicate to long-term projects that might never pay out. This is why so many of the projects I complete are short term.

@michabbb@social.vivaldi.net
2025-06-12 18:19:49

@… it's hard to find the good between all this stuff. every week we have 100 new projects.....

@simon_lucy@mastodon.social
2025-08-05 13:50:33

Iain Banks said in 2010:
"Appeals to reason, international law, U. N. resolutions and simple human decency mean – it is now obvious – nothing to Israel... I would urge all writers, artists and others in the creative arts, as well as those academics engaging in joint educational projects with Israeli institutions, to consider doing everything they can to convince Israel of its moral degradation and ethical isolation, preferably by simply having nothing more to do with this outlaw st…

@NFL@darktundra.xyz
2025-08-20 18:30:27

2025 NFL season: Projecting stat leaders in 11 major categories nfl.com/news/2025-nfl-season-p

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy) it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem, but still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
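To make the drift problem concrete, here's a tiny illustrative Python sketch (the names are invented): in the duplicated version a fix landed in one copy but not the other, while the DRY version has a single definition to fix:

```python
# Duplicated ("WET") version: the same normalization logic pasted twice.
# A later bug fix (stripping stray whitespace) reached only one copy,
# so the two have silently drifted apart.
def clean_username(raw):
    return raw.strip().lower()   # fixed: strips surrounding spaces

def clean_tagname(raw):
    return raw.lower()           # the fix was forgotten here


# DRY version: one definition, referenced from everywhere it's needed,
# so any future fix lands in exactly one place.
def normalize(raw):
    return raw.strip().lower()


print(clean_username("  Alice "))  # "alice"
print(clean_tagname("  Alice "))   # "  alice " -- the drifted copy misbehaves
print(normalize("  Alice "))       # "alice"
```

The same shape scales up: the "function" here stands in for a module or a whole library, and "forgetting the copy" stands in for a security patch that never reaches the duplicate.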
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. I would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger-sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI. #AI #GenAI #LLMs #VibeCoding

@tschfflr@fediscience.org
2025-06-20 14:13:22

🤩 We spent today recording video for a new professional short introduction into our ViCom project on #emojis 📹 It was fun trying to find the most photogenic public spots on campus (as emoji research consists mostly of us staring at computer screens all day long, and is not very... visually interesting)
Very excited to show you the results in a few weeks or months! #visualCommunication #linguistics #sciComm @… vicom.info/projects/semantics-

@cyrevolt@mastodon.social
2025-07-16 12:10:01

There is now a first RC for the #Espressif #Embedded #Rust #HAL:

@cowboys@darktundra.xyz
2025-06-03 16:41:41

Cowboys Projected to Land ‘Difference Maker’ Compared to $22 Million All-Pro RB heavy.com/sports/nfl/dallas-co]

@primonatura@mstdn.social
2025-07-14 13:00:33

"Nearly three-quarters of solar and wind projects are being built in China"
#China #Energy #Renewables #SolarPower

@NFL@darktundra.xyz
2025-06-05 19:21:17

All-Paid Team of Tomorrow: Lamar Jackson, Micah Parsons poised to reset market at respective positions nfl.com/news/all-paid-team-of-

@chris@mstdn.chrisalemany.ca
2025-06-02 15:42:43

I am willing to be pleasantly surprised, but... when all I see from these "major projects" is oil and gas and other 20th century backwards nonsense, I can't help but be pessimistic.
Ironically, a real "nation building" project that would be a huge benefit to Canadians would be more akin to something from the 19th century: Standing up a true public railway corporation and rebuilding a high speed electrified rail network from sea to sea to sea that could reduce transportation costs and increase flexibility and convenience for millions of Canadians and businesses including in remote communities.
But our current leadership is too beholden to existing capitalist interests and status quo business to be that bold. And they're positively allergic to anything done purely in the public interest. How dare we not include some profiteering private corporation… /sarc
#CanPoli #CdnPoli #NationBuilding #Transportation #rail
cbc.ca/news/politics/premiers-

@mgorny@social.treehouse.systems
2025-06-16 10:22:27

Some fun facts about #Python limited API / stable ABI.
1. #CPython supports "limited API". When you use it, you get extensions that are compatible with the specified CPython version and versions newer than that. To indicate this compatibility, such extensions use `.abi3.so` suffix (or equivalent) rather than the usual `.cpython-313-x86_64-linux-gnu.so` or alike.
2. The actual support is split between CPython itself and #PEP517 build systems. For example, if you use #setuptools and specify `py_limited_api=` argument to the extension, setuptools will pass appropriate C compiler flags and swap extension suffix. There's a similar support in #meson, and probably other build systems.
3. Except that CPython freethreading builds don't support stable ABI right now, so building with "limited API" triggers an explicit error from the headers. Setuptools has opted to be explicit about this: it emits an error if you try to use `py_limited_api` on a freethreading interpreter. Meson currently just gives the compile error. This implies that package authors need to actively special-case freethreading builds and enable "limited API" conditionally.
4. Some future version of CPython will support "limited API" in freethreading builds. I haven't been following the discussions closely, but I suspect that it will only be possible when you target that version or newer. So I guess people will need to build two stable ABI wheels for a time: one targeting older Python versions, and one targeting newer versions plus freethreading. On top of that, all these projects will need to update their "no 'limited API' on freethreading" conditions.
5. And then there's #PyPy. PyPy does not feature a stable ABI, but it allows you to build extensions using "limited API". So setuptools and meson just detect that there is no `.abi3.so` on PyPy, and use regular suffix for the extensions built with "limited API".
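The special-casing in point 3 means build scripts have to gate `py_limited_api` themselves. A minimal sketch of one way to do that, assuming the `Py_GIL_DISABLED` config var (set to 1 on freethreading builds since CPython 3.13, absent or 0 elsewhere):

```python
import sysconfig

def can_use_limited_api() -> bool:
    """True unless this is a freethreading (GIL-disabled) CPython build,
    which currently rejects the limited API at the header level."""
    # Py_GIL_DISABLED is 1 on freethreading builds, 0 or None otherwise.
    return not sysconfig.get_config_var("Py_GIL_DISABLED")
```

A `setup.py` could then pass `py_limited_api=can_use_limited_api()` to `setuptools.Extension`, matching the conditional enabling the post describes.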