
2025-08-10 16:57:44
🚂 Trans-Afghan Railway Project Gains Momentum After Eight Years
https://oilprice.com/Geopolitics/International/Trans-Afghan-Railway-Project-Gains-Momentum-After-Eight-Years.html
Hello #Fediverse, I have a project for which I need to map 100-200 addresses from a spreadsheet. I found several services where I can just upload the spreadsheet or paste the data that work with Google Maps but I keep thinking there must be an #OpenStreetMap solution too.
Does anyone k…
If you are a last.fm user, I would highly recommend having a look at this project:
https://katelyn.moe/bleh/
https://github.com/katel…
For more than a century, putrid fumes emanated from
the “sewer of the Ruhr”,
creating a pungent whiff that assaulted towns throughout Germany’s industrial heartland.
But today, the Emscher bears little resemblance to Europe’s dirtiest river.
Water that used to be fouled by factory waste and human excrement has been free from effluent since 2021.
The river system, the main part of which was once considered biologically dead, is witnessing the return of an abun…
Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](https://arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second year students writing interactive websites with JavaScript. Without a lot of care that those students don't know how to deploy, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course those students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
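To make concrete what "a fixed set of constructs" means: detecting out-of-subset constructs in finished student code is easy (the infeasible part is constraining what the AI emits, not auditing it afterwards). A minimal sketch of such a checker using Python's `ast` module; the particular allowlist is hypothetical, just an illustration of a per-course subset:

```python
import ast

# Hypothetical per-course policy: constructs a second-year course has
# not taught yet, with human-readable labels for feedback.
DISALLOWED = {
    ast.AsyncFunctionDef: "async function",
    ast.NamedExpr: "walrus operator",
    ast.Yield: "yield",
    ast.YieldFrom: "yield from",
}

def unknown_constructs(source: str) -> list[str]:
    """Return labels for every disallowed construct found in the source."""
    found = []
    for node in ast.walk(ast.parse(source)):
        label = DISALLOWED.get(type(node))
        if label:
            found.append(label)
    return found

print(unknown_constructs("if (n := 10) > 5:\n    print(n)"))  # ['walrus operator']
print(unknown_constructs("for i in range(3):\n    print(i)"))  # []
```

The asymmetry is the point: a dozen lines can flag violations after the fact, but nothing comparably simple can stop a current-generation assistant from producing them in the first place.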
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask for help from an instructor or TA who helps them get rid of the stuff they don't understand and re-prompt or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think that, regardless of AI, we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that seeing these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
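For readers who haven't met the constructs named above, here's a compact toy example packing them together (yield, while/else, try/finally, and the walrus operator) — exactly the kind of code an LLM may emit unprompted even when a course has covered none of it:

```python
# Toy illustrations of the Python constructs named above.

def countdown(n):
    # Generator: "yield" suspends the function between produced values.
    while n > 0:
        yield n
        n -= 1
    else:
        # "while/else": this block runs when the loop ends without "break".
        yield 0

def first_line(path):
    f = open(path)
    try:
        return f.readline()
    finally:
        # "try/finally": cleanup runs even though we already returned above.
        f.close()

values = list(countdown(3))        # [3, 2, 1, 0]
if (total := sum(values)) > 5:     # walrus operator: assign inside an expression
    print(total)                   # 6
```

None of this is exotic to a working developer, but each line assumes a mental model (suspended stack frames, loop-exit semantics, cleanup ordering) that intro courses deliberately sequence over multiple semesters.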
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2
Series A, Episode 09 - Project Avalon
BLAKE: You have a woman prisoner here called Avalon. Where is she being kept?
GUARD: All prisoners are being held in the main detention block. [Chevner scouts ahead to the nearest corridor junction.]
https://blake.torpidity.net/m/109/279 B7B2
A decade of missed opportunities: Texas couldn't find $1M for flood warning system near camps
https://apnews.com/article/texas-floods-camp-warning-system-not-funded-0845df62390b9623331ba4a030c5fc7d
As I will be leaving #OAPEN, the managing director wrote this blog. For me it feels a bit weird, like reading your obituary, but if you are looking for a job in the space of #OpenAccess #books and have t…
This https://arxiv.org/abs/1811.03437 has been replaced.
link: https://scholar.google.com/scholar?q=a
Okay, finally have iocaine (https://iocaine.madhouse-project.org/) running on my web server. Was a bit of an uphill battle but I'll try to write about a few things tonight to make it easier for others and document a few of the roadblocks I ran into
wayland is a governance failure https://github.com/flatpak/xdg-desktop-portal/issues/880
KramaBench: A Benchmark for AI Systems on Data-to-Insight Pipelines over Data Lakes
Eugenie Lai, Gerardo Vitagliano, Ziyu Zhang, Sivaprasad Sudhir, Om Chabra, Anna Zeng, Anton A. Zabreyko, Chenning Li, Ferdi Kossmann, Jialin Ding, Jun Chen, Markos Markakis, Matthew Russo, Weiyang Wang, Ziniu Wu, Michael J. Cafarella, Lei Cao, Samuel Madden, Tim Kraska
"We cannot preclude developers from “vibe coding” their way into a working application; but we can teach them how to properly integrate the very likely spaghetti mess produced by those bullshit machines, how to understand it, and how to make it work with today’s compilers, which, let us be honest: are the best we have ever had, and it would be a shame to ignore them completely."
Sources: Trump's World Liberty Financial is seeking to raise $1.5B to launch a publicly traded crypto treasury company that would hold its WLFI token and cash (Fortune)
https://fortune.com/crypto/2025/08/08/donald-trump-wo…
SKYSURF-10: A Novel Method for Measuring Integrated Galaxy Light
Delondrae D. Carter, Timothy Carleton, Daniel Henningsen, Rogier A. Windhorst, Seth H. Cohen, Scott Tompkins, Rosalia O'Brien, Anton M. Koekemoer, Juno Li, Zak Goisman, Simon P. Driver, Aaron Robotham, Rolf Jansen, Norman Grogin, Haina Huang, Tejovrash Acharya, Jessica Berkheimer, Haley Abate, Connor Gelb, Isabela Huckabee, John MacKenty
I am ignorant on what it is to be trans or a woman or a world-class athlete, but I THINK the problem here is a doomed project of representing a multidimensional continuously variable attribute with a single Boolean https://infosec.exchange/@JessTheUnstill/114639211101368863
@… For the course I had in mind it’s a programming project, and students have admitted using ChatGPT at least for some parts. It didn’t work for everything, though.
So, yes, the point would also be to show that some things actually can’t be accomplished this way.
New ZFS AnyRAID feature would probably get me to use ZFS at home instead of btrfs.
I have a lot of different-sized old SATA enterprise SSDs retired from @… and a cheap 8 bay eSATA enclosure. The performance is good enough. I can't really justify buying 8 new, matching drives.
AnyMirror would be good enough for my purposes.
To all professionals & retirees in the VFX industry, join the #vfxPeopleChallenge to post a photo of yourself at work. Just share a picture along with a brief description. The goal is to demonstrate that people are working on movies & shows that are supposed to have no CGI!
Thanks for your participation and boosts on this project!
Long; central Massachusetts colonial history
Today on a whim I visited a site in Massachusetts marked as "Huguenot Fort Ruins" on OpenStreetMap. I drove out with my 4-year-old through increasingly rural central Massachusetts forests & fields to end up on a narrow street near the top of a hill beside a small field. The neighboring houses had huge lawns, some with tractors.
Appropriately for this day and this moment in history, the history of the site turns out to be a microcosm of America. Across the field beyond a cross-shaped stone memorial stood an info board with a few diagrams and some text. The text of the main sign (including typos/misspellings) read:
"""
Town Is Formed
Early in the 1680's, interest began to generate to develop a town in the area west of Natick in the south central part of the Commonwealth that would be suitable for a settlement. A Mr. Hugh Campbell, a Scotch merchant of Boston petitioned the court for land for a colony. At about the same time, Joseph Dudley and William Stoughton also were desirous of obtaining land for a settlement. A claim was made for all lands west of the Blackstone River to the southern land of Massachusetts to a point northerly of the Springfield Road then running southwesterly until it joined the southern line of Massachusetts.
Associated with Dudley and Stoughton was Robert Thompson of London, England, Dr. Daniel Cox and John Blackwell, both of London and Thomas Freak of Hannington, Wiltshire, as proprietors. A stipulation in the acquisition of this land being that within four years thirty families and an orthodox minister settle in the area. An extension of this stipulation was granted at the end of the four years when no group large enough seemed to be willing to take up the opportunity.
In 1686, Robert Thompson met Gabriel Bernor and learned that he was seeking an area where his countrymen, who had fled their native France because of the Edict of Nantes, were desirous of a place to live. Their main concern was to settle in a place that would allow them freedom of worship. New Oxford, as it was the so-named, at that time included the larger part of Charlton, one-fourth of Auburn, one-fifth of Dudley and several square miles of the northeast portion of Southbridge as well as the easterly ares now known as Webster.
Joseph Dudley's assessment that the area was capable of a good settlement probably was based on the idea of the meadows already established along with the plains, ponds, brooks and rivers. Meadows were a necessity as they provided hay for animal feed and other uses by the settlers. The French River tributary books and streams provided a good source for fishing and hunting. There were open areas on the plains as customarily in November of each year, the Indians burnt over areas to keep them free of underwood and brush. It appeared then that this area was ready for settling.
The first seventy-five years of the settling of the Town of Oxford originally known as Manchaug, embraced three different cultures. The Indians were known to be here about 1656 when the Missionary, John Eliott and his partner Daniel Gookin visited in the praying towns. Thirty years later, in 1686, the Huguenots walked here from Boston under the guidance of their leader Isaac Bertrand DuTuffeau. The Huguenot's that arrived were not peasants, but were acknowledged to be the best Agriculturist, Wine Growers, Merchant's, and Manufacter's in France. There were 30 families consisting of 52 people. At the time of their first departure (10 years), due to Indian insurrection, there were 80 people in the group, and near their Meetinghouse/Church was a Cemetery that held 20 bodies. In 1699, 8 to 10 familie's made a second attempt to re-settle, failing after only four years, with the village being completely abandoned in 1704.
The English colonist made their way here in 1713 and established what has become a permanent settlement.
"""
All that was left of the fort was a crumbling stone wall that would have been the base of a higher wooden wall according to a picture of a model (I didn't think to get a shot of that myself). Only trees and brush remain where the multi-story main wooden building was.
This story has so many echoes in the present:
- The rich colonialists from Boston & London agree to settle the land, buying/taking land "rights" from the colonial British court that claimed jurisdiction without actually having control of the land. Whether the sponsors ever actually visited the land themselves I don't know. They surely profited somehow, whether from selling on the land rights later or collecting taxes/rent or whatever, but they needed poor laborers to actually do the work of developing the land (& driving out the original inhabitants, who had no say in the machinations of the Boston court).
- The land deal was on the condition that the capital-holders who stood to profit would find settlers to actually do the work of colonizing. The British crown wanted more territory to be controlled in practice, not just in theory, but they weren't going to be the ones to do the hard work.
- The capital-holders actually failed to find enough poor suckers to do their dirty work for 4 years, until the Huguenots, fleeing religious persecution in France, were desperate enough to accept their terms.
- Of course, the land was only so ripe for settlement because of careful tending over centuries by the natives who were eventually driven off, and whose land management practices are abandoned today. Given the mention of praying towns (& dates), this was after King Philip's War, which resulted in at least some forced resettlement of native tribes around the area, but the descendants of those "Indians" mentioned in this sign are still around. For example, this is the site of one local band of Nipmuck, whose namesake lake is about 5 miles south of the fort site: #LandBack.
Analyzing C/C++ Library Migrations at the Package-level: Prevalence, Domains, Targets and Rationales across Seven Package Management Tools
Haiqiao Gu, Yiliang Zhao, Kai Gao, Minghui Zhou
https://arxiv.org/abs/2507.03263
We are a stupid species.... Here is my county government trying to create a Rube Goldberg-class blockchain-based system to deal with a problem that could be solved by printed lists on paper
The crypto/blockchain mindset certainly contains a big element of "if all you have is a hammer then everything looks like a nail".
(By the way, our county government is dumb in other dimensions - for years the emergency response command center was in a basement next to, and *below…
Do we do #ICanHazPDF on Mastodon? https://saemobilus.sae.org/papers/demonstration-a-dme-dimethyl-ether-fuelled-city-bus-2000-01-2005
Finally finished the VSC8512 writeup! Ended up being just a biiiit longer than I had expected but there was a lot to talk about.
I still want to refactor my code a bit to be cleaner and more OO, what I have now is a bit quick-and-dirty, but it works.
https://serd.es/2025/07/04/Switch-proj
TIL: »Specific typographic rules have been developed for each language for centuries, but in recent decades, especially due to globalisation and the unification of software tools, they have been disregarded. The international project of a typographic proofreader for European languages will preserve these rules as an expression of European cultural diversity for future generations.«
Source:
As an OCaml and GameBoy enthusiast, this is a great writeup.
https://linoscope.github.io/writing-a-game-boy-emulator-in-ocaml/
Yesterday I attended a project strategy meeting for MaterialDigital at the Institute for Microtribology. No, I don’t have any idea why they have put this in front of it😜
#pmd #materialsscience #mse
Question for any folk that do reader/audience/play testing.
I find that the more fidelity (detail, production value) a project has, the less accurate the user's identification of an issue. I try to interpret accordingly.
Do you know of any studies regarding this or personal experiences (that confirm or refute this)? I have studies on how people don't know what they want, etc. But keen on the connection btw fidelity & inaccuracy
Thank you! :)
Other relate…
One place where you CAN donate to support people in Gaza:
https://chuffed.org/project/allchildrenareourchildren
Raiders Predicted to Cut Intriguing Rookie With 4.39 40-Yard Dash Speed https://heavy.com/sports/nfl/las-vegas-raiders/tommy-mellott-roster-cut-prediction/?adt_ei=[email]
Two folks here on the Fediverse, @… and @…, have set up a giving circle to help several families trying to survive in Gaza. This is direct mutual aid, with a direct impact.
I set up a recurring donation. If you have a little money yourself to chip in, it will have an impact.
https://chuffed.org/project/hope-giving-circle
EVOC2RUST: A Skeleton-guided Framework for Project-Level C-to-Rust Translation
Chaofan Wang, Tingrui Yu, Jie Wang, Dong Chen, Wenrui Zhang, Yuling Shi, Xiaodong Gu, Beijun Shen
https://arxiv.org/abs/2508.04295
Oh great... the employee department had a project to create a company cookbook. Since we're a very international company, with people from more than 60 countries, the first cookbook turned out really great. Now, where to get those partially "exotic" ingredients in Germany...
A Project Moohan benchmark gets spotted, and may have revealed the Android XR headset's key spec https://www.techradar.com/computing/virtual-reality-augmented-reality…
Weekend #Plankton Factoid 🦠🦐
Plankton have #parasites, just like any other organism, but in some cases, this leads to improved feeding conditions. #Daphnia zooplankton must often feed on freshwater cy…
#ScribesAndMakers for July 3: When (and if) you procrastinate, what do you do? If you don't, what do you do to avoid it?
I'll swap right out of programming to read a book, play a video game, or watch some anime. Often got things open in other windows so it's as simple as alt-tab.
I've noticed recently I tend to do this more often when I have a hard problem to solve that I'm not 100% sure about. I definitely have cycles of better & worse motivation and I've gotten to a place where I'm pretty relaxed about it instead of feeling guilty. I work how I work, and that includes cycles of rest, and that's enough (at least, for me it has been so far, and I'm in a comfortable career, married with 2 kids).
Some projects ultimately lose steam and get abandoned, and I've learned to accept that too. I learn a lot and grow from each project, so nothing is a true waste of time, and there remains plenty of future ahead of me to achieve cool things.
The procrastination does sometimes impact my wife & kids, and that's something I do sometimes feel bad about, but I think I keep that in check well enough, and for things my wife worries about, I usually don't procrastinate those too much (used to be worse about this).
Right now I'm procrastinating a big work project by working on a hobby project instead. The work project probably won't get done by the start of the semester as a result. But as I remind myself, my work doesn't actually pay me to work during the summer, and things will be okay without the work project being finished until later.
When I want to force myself into a more productive cycle, talking to people about project details sometimes helps, as does finding some new tech I can learn about by shoehorning it into a project. Have been thinking about talking to a rubber duck, but haven't motivated myself to try that yet, and I'm not really in doldrums right now.
fab interview with @… guitarist dm hotep about building the wildly cool/expansive ghost horizons project around marshall allen & collaborators. https://pos…
A 130,000-year-old archaeological site in southern California, USA
The Cerutti Mastodon site is, to our knowledge, the oldest in situ, well-documented archaeological site in North America and, as such, substantially revises the timing of arrival of Homo into the Americas.
https://www.…
Ethan is right: I'm gonna reread Project Hail Mary. Because now I have to wait until March 20, 2026.
▶️ Reacting to the Project Hail Mary Trailer - THIS CHANGES EVERYTHING
https://youtube.com/watch?v=Jq3m8Q1IakA&si=9IYZriYettfKafoC
I've an "old #astronomy software" side project and I'm looking for the early Mac software "StarMap" or "Star Map" by Bruce Webster.
So far my searches came up empty, but I'm not a specialist in old Mac software.
Does anyone have a copy of this app?
It's described in detail in a July 1985 BYTE article: #retrocomputing
Authoritarianism and the decentring of the constitution
https://ift.tt/Ar46wTX
by Mayur Suresh: The impetus for this project stems from a disquiet, felt by so many in different…
via Input 4 RELCFP
"Because I deserved support in that moment. I deserved having my boss taking me to the side to have a coffee and asking me how I was doing. I deserved having somebody more in my team to help me figure out a new technical solution for that project, instead of leaving me alone. I deserved having a product owner communicating issues to that customer. I deserved having someone giving a fucking shit about my mental health."
A few days ago I got very excited when seeing a link for a minimal emacs setup.
Why? My Emacs setup is not minimal. And I always think it is not well enough organized.
I stopped looking at this minimal setup project very early, having realized that I'd have to use and learn yet another set of settings that would then translate into Emacs options.
I perceive this extra indirection layer as an added complexity and distancing myself from understanding Emacs.
Some cool footage and discussion of full-scale EV fire testing by UL/FSRI. This should have been done a long time ago, given how many EVs are on the roads now, but better late than never I guess.
Video: https://www.youtube.com/watch?v=K6j3GtcAfE0
Project website:
A project we kicked off in March, have worked on pretty much non-stop and have done loads of interesting new work on, all started to come together today.
Learnt loads; fully containerised dev & prod environment, brand new #rails 8 app, Avo dashboards, headless CMS, AstroJS website, new analytics platform, “novel” Google Docs to CMS ETL process, 3-stage production/staging/trunk AWS acco…
So, the days when PHP was "my" programming language are long gone.... (Kotlin (before Java); only bash at the moment)
But, I guess I will have a look now and then to make small PRs to the Pixelfed project to fix some small bugs.
... this time it was a typescript/vuejs bug
Totally simple but as I found it...
#SpringBoot 4 Released: A Full Analysis of 11 Major Changes!
https://medium.com/@haiou-a/spring-boo
i should have a counter on the wall i flip every time i use a new build system
so far this project involves:
- bazel
- xmake
- cmake
- ninja
- opam
- make
- pip
but i'm sure i've missed something
y'all have no idea how much work on all levels went into this little beta release 😅
https://pypi.org/project/cffi/2.0.0b1/
dbpedia_recordlabel: DBpedia artist-label affiliations
Bipartite networks of the affiliations (contractual relations) between artists and the record labels under which they have performed, as extracted from Wikipedia by the DBpedia project.
This network has 186758 nodes and 233286 edges.
Tags: Economic, Employment, Unweighted
http…
JoyTTS: LLM-based Spoken Chatbot With Voice Cloning
Fangru Zhou, Jun Zhao, Guoxin Wang
https://arxiv.org/abs/2507.02380 https://arxiv…
Update - I think I managed to come to a compromise on timeline and firmly established that no AI will be used for this project.
They agreed, although they did say that “it’s as if you decided to dig up a garden using a shovel instead of a tractor. As long as it’s done, it’s done”
I don’t wanna pick a fight so I left the comment alone but now that I think of it I have two responses:
- who digs a garden with a tractor?
- last I checked, tractors didn't have a ~30% chance of lying about the digging they did, or up to 70% odds of just giving up on working if the ground was too hard to dig
Another donation made to The Good Law Project. Honestly, they have already made some great progress on defending my safety in relation to the Equality Act and EHRC; it's money well spent.
Can you join me?
https://goodlawproject.org/donate/
A Simulation of the Fermilab Main Injector Dual Power Amplifier Cavities
Susanna Stevenson (Fermi National Accelerator Laboratory)
https://arxiv.org/abs/2508.03312 https://
Claude projects are amazing.
I have a ton of docs for my house: all materials, EU certificates for everything, plans for every section of the wood structure, road manufacturing details, sewers, land purchase, etc., all in Latvian.
I uploaded all into a Claude project and it just answers English questions about it all in English.
Demand for LNG in Europe dropped by 18 per cent between 2022 and 2024, and Canadian exports would have a hard time competing in Asian markets, says advocacy group Investors for Paris Compliance. “Investing in infrastructure that will be very expensive and likely won’t be profitable will weaken our economy rather than strengthen it,” Renaud Gignac, an economist and senior adviser for the group, said in an interview.
The solution (or at least a step towards one) would have been the treaty Iran and USA signed, but which Trump withdrew from. https://mastodon.social/@jmcrookston/114727962307106386
This https://arxiv.org/abs/2505.07212 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csSI_…
The IRS open sourced much of its incredibly popular Direct File software as the future of the free tax filing program is at risk of being killed by Intuit’s lobbyists and Donald Trump’s megabill.
Meanwhile, several top developers who worked on the software have left the government and joined a project to explore the “future of tax filing” in the private sector
A Bi-Objective Mathematical Model for the Multi-Skilled Resource-Constrained Project Scheduling Problem Considering Reliability: An AUGMECON2VIKOR Hybrid Method
Mohammad Ghasemi, Asef Nazari, Dhananjay Thiruvady, Reza Tavakkoli-Moghaddam, Reza Shahabi-Shahmiri, Seyed-Ali Mirnezami
https://arxiv.org/abs/2507.21436
Runaway Growth of Planetesimals Revisited: Presenting Criteria Required for Realistic Modeling of Planetesimal Growth
Nader Haghighipour, Luciano A. Darriba
https://arxiv.org/abs/2507.21390
⚗️ Secrets of the dark genome could spark new drug discoveries
#drugs
CoGrader: Transforming Instructors' Assessment of Project Reports through Collaborative LLM Integration
Zixin Chen, Jiachen Wang, Yumeng Li, Haobo Li, Chuhan Shi, Rong Zhang, Huamin Qu
https://arxiv.org/abs/2507.20655
Religious Studies Project Opportunities Digest – October 9, 2024
https://ift.tt/aHDBoXS
Welcome to the Religious Studies Project Opportunities Digest! This week, you will find 28 new job…
via Input 4 RELCFP
from my link log —
Servo revival 2023-2024.
https://blogs.igalia.com/mrego/servo-revival-2023-2024/
saved 2025-01-07 htt…
Some Mathematical Problems Behind Lattice-Based Cryptography
Chuanming Zong
https://arxiv.org/abs/2506.23438 https://arxiv.org/pdf/25…
Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
https://social.coop/@eloquence/114940607434005478
My criticisms include:
- Current LLM technology has many layers, but the biggest, most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this, despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims a small level of expenses on AI equates to low climate impact. However, given the deep subsidies currently in place from the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.
This giving circle has apparently stalled out, and could use a boost if you have a few bucks to spare:
https://chuffed.org/project/hope-giving-circle
Git Context Controller: Manage the Context of LLM-based Agents like Git
Junde Wu
https://arxiv.org/abs/2508.00031 https://arxiv.org/pdf/2508.00031
The Trump administration has quietly fast-tracked a massive oil expansion project that environmentalists and Democratic lawmakers warned could have a destructive impact on local communities and the climate.
As reported recently by the Oil and Gas Journal,
👉 the plan “involves expanding the Wildcat Loadout Facility, a key transfer point for moving Uinta basin crude oil to rail lines that transport it to refineries along the Gulf Coast.”
🔥The goal of the plan is to transfer an …
I have vague memories of reading years ago about a weapon called a "photic driver" consisting of an LED or strobe or laser array flashing in patterns deliberately crafted to induce photosensitive seizures and disorientation (not just a conventional dazzler intended primarily to interfere with the target's vision).
I have no idea if it was a failed CIA project, a sci-fi plot element, something some military or police department actually fielded, etc.
Anybody ever hea…
dbpedia_recordlabel: DBpedia artist-label affiliations
Bipartite networks of the affiliations (contractual relations) between artists and the record labels under which they have performed, as extracted from Wikipedia by the DBpedia project.
This network has 186758 nodes and 233286 edges.
Tags: Economic, Employment, Unweighted
http…
How popular media gets love wrong
Okay, so what exactly are the details of the "engineered" model of love from my previous post? I'll try to summarize my thoughts and the experiences they're built on.
1. "Love" can be thought of like a mechanism that's built by two (or more) people. In this case, no single person can build the thing alone; to work, it needs contributions from multiple people (I suppose self-love might be an exception to that). In any case, the builders can intentionally choose how they build (and maintain) the mechanism, they can build it differently to suit their particular needs/wants, and they will need to maintain and repair it over time to keep it running. It may need winding, or fuel, or charging plus oil changes and bolt-tightening, etc.
2. Any two (or more) people can choose to start building love between them at any time. No need to "find your soulmate" or "wait for the right person." Now the caveat is that the mechanism is difficult to build and requires lots of cooperation, so there might indeed be "wrong people" to try to build love with. People in general might experience more failures than successes. The key component is slowly-escalating shared commitment to the project, which is negotiated between the partners so that neither one feels like they've been left to do all the work themselves. Since it's a big scary project though, it's very easy to decide it's too hard and give up, and so the builders need to encourage each other and pace themselves. The project can only succeed if there's mutual commitment, and that will certainly require compromise (sometimes even sacrifice, though not always). If the mechanism works well, the benefits (companionship; encouragement; praise; loving sex; hugs; etc.) will be well worth the compromises you make to build it, but this isn't always the case.
3. The mechanism is prone to falling apart if not maintained. In my view, the "fire" and "appeal" models of love don't adequately convey the need for this maintenance and lead to a lot of under-maintained relationships many of which fall apart. You'll need to do things together that make you happy, do things that make your partner happy (in some cases even if they annoy you, but never in a transactional or box-checking way), spend time with shared attention, spend time alone and/or apart, reassure each other through words (or deeds) of mutual beliefs (especially your continued commitment to the relationship), do things that comfort and/or excite each other physically (anywhere from hugs to hand-holding to sex) and probably other things I'm not thinking of. Not *every* relationship needs *all* of these maintenance techniques, but I think most will need most. Note especially that patriarchy teaches men that they don't need to bother with any of this, which harms primarily their romantic partners but secondarily them as their relationships fail due to their own (cultivated-by-patriarchy) incompetence. If a relationship evolves to a point where one person is doing all the maintenance (& improvement) work, it's been bent into a shape that no longer really qualifies as "love" in my book, and that's super unhealthy.
4. The key things to negotiate when trying to build a new love are first, how to work together in the first place, and how to be comfortable around each others' habits (or how to change those habits). Second, what level of commitment you have right now, and how/when you want to increase that commitment. Additionally, I think it's worth checking in about what you're each putting into and getting out of the relationship, to ensure that it continues to be positive for all participants. To build a successful relationship, you need to be able to incrementally increase the level of commitment to one that you're both comfortable staying at long-term, while ensuring that for both partners, the relationship is both a net benefit and has manageable costs (those two things are not the same). Obviously it's not easy to actually have conversations about these things (congratulations if you can just talk about this stuff) because there's a huge fear of hearing an answer that you don't want to hear. I think the range of discouraging answers which actually spell doom for a relationship is smaller than people think, and there's usually a reasonable "shoulder" you can fall into where things aren't on a good trajectory but could be brought back into one, but even so these conversations are scary. Still, I think only having honest conversations about these things when you're angry at each other is not a good plan. You can also try to communicate some of these things via non-conversational means, if that feels safer, and at least being aware that these are the objectives you're pursuing is probably helpful.
I'll post two more replies here about my own experiences that led me to this mental model and trying to distill this into advice, although it will take me a moment to get to those.
#relationships #love
Petrichor 7.3v
Black IPA with Gin Oak. Brewed by Temporal Artisan Ales in collaboration with Boombox Brewing
<a href="https://www.temporalales.com" rel="noreferrer nofollow">www.temporalales.com</a>
I only got one can of this: I think I will have to go back to …
I have a large amount of code logging using logrus, and it's basically abandonware now that slog is in Go standard.
Big problem.
I spent some time getting Claude Code to make me a full drop-in replacement for logrus. I moved my biggest project to slog in under 1 hour with this. Quality-wise, I think the generated outcome is pretty good; I would not have done it this well myself, as it's super tedious work.
I can gradually move to slog for new code with an aim to eradicate logrus…
Improving LLM-Based Fault Localization with External Memory and Project Context
Inseok Yeo, Duksan Ryu, Jongmoon Baik
https://arxiv.org/abs/2506.03585 http…
Just had a video call with @… from Gaza. Like every family there, they need your help to survive.
Our governments have failed us, we are the only ones who can help. Your donations go directly to keeping families like Aseel’s alive as they struggle to survive Israel’s genocide of the Palestinian people.
Please help Aseel and her family if you can.
"This report investigates the corporate machinery sustaining Israel’s settler-colonial project of displacement and replacement of the Palestinians in the occupied territory. While political leaders and governments shirk their obligations, far too many corporate entities have profited from Israel’s economy of illegal occupation, #apartheid and now,
This story seems to be slipping between the cracks.
Apparently the Air Force has decided to undertake the conversion of that bribe 747 to FFOTUS - an amount that will probably well exceed a $billion or far more - under cover of a classified project to do something else.
This is a clear violation of the US Anti-deficiency act. It is theft of taxpayer money. And it violates the requirement that money only be spent after appropriation by Congress.
i have not seen any journal…
The MAGA era
— and the three latest Trump appointees to the Court
— has resulted in a new, gruesome project:
❌giving Trump whatever he wants.
🔥This toxic combination of bigotry and fealty has created a Court that uses all its might to attack the less powerful
while coddling those who already have it all — particularly Donald Trump.
It’s a Court with a very clear vision of who matters and who needs protection.
The majority opinion in "Trump v. C…
Divya and Shantini have vetted that yes, these are real people in real crisis who need help.
The two of them aren’t professional fundraisers or anything. They’re just doing their best with the tools they have, trying desperately to help a few people.
Yes, I wish that we had proper organizations — NGOs, international aid, a functioning democracy! — to help •all• the people in need all at once. We don’t. Elbow grease is all we’ve got.
https://chuffed.org/project/hope-giving-circle
The future of debate: get an LLM to generate your tirade, copy it to the world dog, have it spotted by an LLM spotter and rejected on that basis.
https://lists.debian.org/debian-project/2025/07/msg00031.html
This https://arxiv.org/abs/2503.07010 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csSE_…
do i know any people who:
(a) have modern web development experience
(b) like #GlasgowInterfaceExplorer or just want to do something fairly simple and useful
(c) want to work with me on a new piece of the project?
Intention-Driven Generation of Project-Specific Test Cases
Binhang Qi, Yun Lin, Xinyi Weng, Yuhuan Huang, Chenyan Liu, Hailong Sun, Jin Song Dong
https://arxiv.org/abs/2507.20619
LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy), it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem, but still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
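The function-scale version of the principle above can be sketched like this (a hypothetical Go example; the names are illustrative, not from any real codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// Suppose two call sites once copy-pasted the same normalization logic.
// Moving it into one function means any fix or behavior change lands in
// exactly one place, and the copies can never drift apart.
func normalize(name string) string {
	return strings.ToLower(strings.TrimSpace(name))
}

func main() {
	// Both call sites now reference the single implementation.
	fmt.Println(normalize("  Alice ")) // "alice"
	fmt.Println(normalize("BOB"))      // "bob"
}
```

The library-scale version is the same move one level up: instead of re-implementing `normalize`-style utilities, you depend on someone else's published package and inherit their fixes.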
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we see what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] to see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI. #AI #GenAI #LLMs #VibeCoding
I’ve been using it a bit, and…Codeberg is good.
I mean, I’m sure there are just a bunch of brick walls if you’re relying on specific Github features that it just doesn’t have — actions, project management features, certain tool integrations, whatever.
But if you don’t hit those walls? It works. It works well. It continues to work such that my attention is mostly on other things — and that’s high praise in my book!