Tootfinder

Opt-in global Mastodon full text search. Join the index!

@tiotasram@kolektiva.social
2025-08-11 13:30:26

Speculative politics
As an anarchist (okay, maybe not in practice), I'm tired of hearing why we have to suffer X and Y indignity to "preserve the rule of law" or "maintain democratic norms." So here's an example of what representative democracy (a form of government that I believe is inherently flawed) could look like if its proponents had even an ounce of imagination, and/or weren't actively trying to rig it to favor a rich donor class:
1. Unicameral legislature, where representatives pass laws directly. Each state elects 3 statewide representatives: the three most-popular candidates in a statewide race where each person votes for one candidate (ranked preference voting would be even better but might not be necessary, and is not a solution by itself). Instead of each representative getting one vote in the chamber, they get N votes, where N is the number of people who voted for them. This means that in a close race, instead of the winner getting all the power, the power is split. Having 3 representatives trades off between legislature size and ensuring that two parties can't dominate together.
2. Any individual citizen can contact their local election office to switch or withdraw their vote at any time (maybe with a 3-day delay or something). Voting power of representatives can thus shift even without an election. They are limited to choosing one of the three elected representatives, or "none of the above." If the "none of the above" fraction exceeds 20% of eligible voters, a new election is triggered for that state. If turnout is less than 80%, a second election happens immediately, with results being final even at lower turnout until 6 months later (some better mechanism for turnout management might be needed).
3. All elections allow mail-in ballots, and in-person voting happens Sunday-Tuesday with the Monday being a mandatory holiday. (Yes, election integrity is not better in this system and that's a big weakness.)
4. Separate nationwide elections elect three positions for head-of-state: one with diplomatic/administrative powers, another with military powers, and a third with veto power. For each position, the top three candidates serve together, with only the first-place winner having actual power until vote switches or withdrawals change who that is. Once one of these heads loses their first-place status, they cannot get it again until another election, even if voters switch preferences back (to avoid dithering). An election for one of these positions is triggered when 20% have withdrawn their votes, or if all three people initially elected have been disqualified by losing their lead in the vote count.
5. Laws that involve spending money are packaged with specific taxes to pay for them, and may only be paid for by those specific revenues. Each tax may be opted into or out of by each taxpayer; where possible opting out of the tax also opts you out of the service. (I'm well aware of a lot of the drawbacks of this, but also feel like they'd not necessarily be worse than the drawbacks of our current system.) A small mandatory tax would cover election expenses.
6. I'm running out of attention, but similar multi-winner elections could elect panels of judges from which a subset is chosen randomly to preside in each case.
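Points 1 and 2 above amount to a small mechanism, which can be sketched in a few lines of code (my own illustration, not from the post; candidate names and vote counts are hypothetical):

```python
# Sketch of the vote-weighted legislature: each state's top-3 candidates
# become representatives, and each carries as many chamber votes as the
# number of people currently backing them (which can shift between
# elections as voters switch or withdraw).

def seat_state(vote_counts):
    """Return the top-3 candidates and their current chamber voting power."""
    ranked = sorted(vote_counts.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:3])

def chamber_tally(delegations, positions):
    """Weighted tally: each rep casts N votes, N = their current backers."""
    yes = sum(w for reps in delegations for rep, w in reps.items()
              if positions.get(rep) == "yes")
    total = sum(w for reps in delegations for w in reps.values())
    return yes, total

# A close 510k/490k race no longer hands the winner all the power:
state = seat_state({"A": 510_000, "B": 490_000, "C": 150_000, "D": 20_000})
yes, total = chamber_tally([state], {"A": "yes", "C": "yes"})
```

Note how the 490k-vote runner-up keeps nearly as much chamber power as the winner, which is the whole point of the proposal.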
Now I'll point out once again that this system, in not directly confronting capitalism, racism, patriarchy, etc., is probably doomed to the same failures as our current system. But if you profess to want a "representative democracy" as opposed to something more liberatory, I hope you'll at least advocate for something like this that actually includes meaningful representation as opposed to the current US system that's engineered to quash it.
Key questions: "Why should we have winner-take-all elections when winners-take-proportionately-to-votes is right there?" and "Why should elected officials get to ignore their constituents' approval except during elections, when vote-withdrawal or -switching is possible?"
2/2
#Democracy

@tiotasram@kolektiva.social
2025-08-11 13:26:07

How the US democracy is designed to avoid representation
Right now in the US, a system which proclaims to give each citizen representation, my interests are not represented very well by most of my so-called representatives at any level of government. This is true for a majority of Americans across the political spectrum, and it happens by design. The "founding fathers" were explicit about wanting a system of government that would appear democratic but which would keep power in the hands of rich white landowners, and they successfully designed exactly that. But how does disenfranchisement work in this system?
First, a two-party system locked in by first-past-the-post winner-takes-all elections immediately destroys representation for everyone who didn't vote for the winner, including those who didn't vote or weren't eligible to vote. Single-day non-holiday elections and prisoner disenfranchisement go a long way towards ensuring working-class people get no say, but much larger is the winner-takes-all system. In fact, even people who vote for the winning candidate don't get effective representation if they're really just voting against the opponent as the greater of two evils. In a 51/49 election with 50% turnout, you've immediately ensured that ~75% of eligible voters don't get represented, and with lesser-of-two-evils voting, you create an even wider gap to wedge corporate interests into. Politicians need money to saturate their lesser-of-two-evils message far more than they need to convince any individual voter to support their policies. It's even okay if they get caught lying, cheating, or worse (cough Epstein cough) as long as the other side is also doing those things and you can freeze out new parties.
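The ~75% figure is just arithmetic; a quick back-of-envelope check (my sketch, using the post's numbers):

```python
# In a 51/49 race with 50% turnout, only the winner's voters count as
# "represented": 51% of the 50% of eligible voters who turned out.
turnout = 0.50
winner_share = 0.51
represented = turnout * winner_share   # 0.255 of eligible voters
unrepresented = 1 - represented        # ~0.745, i.e. roughly 75%
```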
Second, by design the Senate ensures uneven representation, allowing control of the least-populous half of states to control or at least shut down the legislative process. A rough count suggests 284.6 million live in the 25 most-populous states, while only 54.8 million live in the rest. Currently, counting states with divided representation as two half-states with half as much population, 157.8 million people are represented by 53 Republican senators, while 180.5 million people get only 45 seats of Democratic representation. This isn't an anti-Democrat bias; it's a bias towards less-populous states, whose residents get more than their share of political power.
I haven't even talked about gerrymandering yet, or family/faith-based "party loyalty," etc. Overall, the effect is that the number of people whose elected representatives meaningfully represent their interests on any given issue is vanishingly small (like, 10% of people tops), unless you happen to be rich enough to purchase lobbying power or direct access.
If we look at polls, we can see how lack of representation lets congress & the president enact many policies that go against what a majority of the population wants. Things like abortion restrictions, the current ICE raids, and Medicare cuts are deeply unpopular, but they benefit the political class and those who can buy access. These are possible because the system ensures at every step of the way that ordinary people do NOT get the one thing the system promises them: representation in the halls of power.
Okay, but is this a feature of all democracies, inherent in the nature of a majority-decides system? Not exactly...
1/2
#uspol #democracy

@thomasfuchs@hachyderm.io
2025-07-09 12:54:38

I’m more and more assuming that companies using “AI” despite it not working is a con to force people to work more at less pay (“Why isn’t this done? AI makes you twice as productive and requires fewer skills!”)

@hex@kolektiva.social
2025-06-12 07:31:28

The liberal obsession with optics serves the right and persuades no one. There is literally an active ethnic cleansing happening in the US right now, and the only thing that matters is making that as hard as possible to carry out.
Anarchists destroying intelligence assets saves lives. Every escooter thrown at a cop car is one less escort for a goon too afraid to kidnap random brown people without being flanked by a branch full of bad apples. Spray paint is not violence. Vandalism is not violence. Community self defense in all forms is legitimate.
Make no mistake, these raids are about changing demographics. Demographic trends have been shifting blue for a long time, and the right has, for a long time, been blaming "white replacement." Conspiracy theory aside, Democrats have also been relying on the growth of black and brown voters as a bloc. The nuances of whiteness as an identity are lost on the current administration and their supporters. They see that "white people will be a minority by 2050" and equate that with the "end of Western Civilization."
The only way to "save Western Civilization" is to change those demographics. Forced birth and forced removal are two sides of the same white nationalist objective. Of course they can't have due process, because they need to be able to kidnap anyone who they see as a threat to their demographic future.
They don't care about optics. The plan is to murder away any threat and flood everyone else with propaganda. There is no mythical middle. There's no one unconvinced. They know this, but they win when democrats buy that myth and save the police the work of policing the protests.
If your protest is 90% "peaceful," they'll take pictures of the 10% that isn't. If it's 99% peaceful, they'll shoot rubber bullets and teargas until someone throws a brick and take 100 pictures from a dozen angles. If it's 100% "peaceful" and no one can be provoked, they'll generate pictures with AI or Photoshop, like they did during the George Floyd uprising and with the pictures from CHOP/CHAZ. Do you have literally no memory?
#USPol #FiftyFiftyOne #50501movement #resistance #NoKingsDay #NoKingsDayOfAction

@arXiv_csAI_bot@mastoxiv.page
2025-07-11 07:31:01

BOOST: Out-of-Distribution-Informed Adaptive Sampling for Bias Mitigation in Stylistic Convolutional Neural Networks
Mridula Vijendran, Shuang Chen, Jingjing Deng, Hubert P. H. Shum
arxiv.org/abs/2507.07134

@arXiv_csLG_bot@mastoxiv.page
2025-06-10 19:21:33

This arxiv.org/abs/2506.01348 has been replaced.
initial toot: mastoxiv.page/@arXiv_csLG_…

@rocksongoftheweek@mastodon.world
2025-07-11 09:46:15

Our pick this week is a more modern blast from UK thrash legends Onslaught ... proof that they’re still bringing the fire decades on.
Sharp, aggressive, and very much alive, check it out and see what we had to say about it.
Don't forget to follow us for more handpicked tracks from across the epic world of #rock and

@rigo@mamot.fr
2025-06-08 16:12:06

The more it evolves, the less stable it becomes. I've been using #KDE since version 1.1, and with KF6 it's the first time I've experienced repeated total freezes of the entire desktop, because it can't handle a dock and more than one monitor. Wayland is definitely the systemd moment for desktop Linux. X-Windows was mutilated before Wayland actually worked (let alone missing network capabilit…

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-10 09:44:52

Less is More: some Computational Principles based on Parcimony, and Limitations of Natural Intelligence
Laura Cohen, Xavier Hinaut, Lilyana Petrova, Alexandre Pitti, Syd Reynal, Ichiro Tsuda
arxiv.org/abs/2506.07060

@timbray@cosocial.ca
2025-08-07 21:28:50

I’m not a Bloomberg subscriber but I get a few of their newsletters. It’s interesting to me that they’re more or less 100% free of MAGA cant. They point out that tariffs are lose/lose, that the climate crisis is real and disastrous, that fucking with vaccines is lethally dangerous, etc etc. All of these emphasizing the finance angle of course.
My perception is that Bloomberg represents the conventional wisdom of the mainstream business community. So, a little surprising.

@andres4ny@social.ridetrans.it
2025-08-10 00:51:05

We have raspberries growing. When we lived in Seattle we grew kale and picked wild blackberries. All were so much better than anything you get in a grocery store.
I don't eat a lot of fruit these days because it's so disappointing to take a bite out of a bland or bitter berry from a plastic carton.
m…

#CoWoS, or Chip-on-Wafer-on-Substrate,
is one of TSMC’s most advanced ways of packaging chips.
It allows several chips to work closely together as one,
making the whole system faster and more efficient while using less energy.

@villavelius@mastodon.online
2025-06-10 06:09:10

Nothing less than an invasion, of course. The fear of California seceding must be all too realistic, I guess.
theguardian.com/us-news/2025/j

@arXiv_csCV_bot@mastoxiv.page
2025-08-06 10:32:10

Less is More: Token-Efficient Video-QA via Adaptive Frame-Pruning and Semantic Graph Integration
Shaoguang Wang (The Hong Kong University of Science and Technology), Jianxiang He (The Hong Kong University of Science and Technology), Yijie Xu (The Hong Kong University of Science and Technology), Ziyang Chen (The Hong Kong University of Science and Technology), Weiyu Guo (The Hong Kong University of Science and Technology), Hui Xiong (The Hong Kong University of Science and Technology)

@arXiv_condmatsoft_bot@mastoxiv.page
2025-08-12 08:58:32

A hinge effect that anomalously decreases the stiffness of slender fiber-reinforced composite structures
Vivek Khatua, Debashish Das, G. K. Ananthasuresh
arxiv.org/abs/2508.06903

@unchartedworlds@scicomm.xyz
2025-08-06 16:01:00
Content warning: "age verification" practicalities

Really good explanation from @…, laying out various problems and risks with trying to implement "age verification" online.
"Firstly, in order to prove your age you’re being asked to hand over some fairly important personal details. ... Usually the company you’re handing these details to is a third party, often one you will never have heard of before. ...
"The data that is being collected for age verification purposes is extremely tempting to hackers ... and at the moment there is no specific regulation outlining the security standards that these companies should meet ...
"Let’s say all the current age verification providers are incredibly robust, though. ... The question still remains... should you be sharing this information with random websites anyway?
"... once you’ve trained the population of an entire country to routinely hand over their credit card details in order to access content, you have given them an incredibly bad habit that it’s going to be tough to break. ... You don’t just prove your age once, after all, you potentially have to do it dozens of times, to access a bunch of different websites. Everything from BlueSky to PornHub to Spotify and even maybe Wikipedia. It becomes a weekly or perhaps monthly occurrence. Just as individual users don’t tend to read every website’s terms and conditions, it’s unlikely they’re all going to do due diligence checks on every provider who asks for ID, especially once they’ve become used to just handing that data over.
"And although that may not be a problem for _you_, you tech-savvy cleverclogs, if you’ve ever found yourself in the position of unpaid IT support for one of your less knowledgeable friends or relatives, hopefully you can see why it’s a huge problem for the UK population more broadly."
And more!
#AgeVerification #OnlineSafetyAct #OSA

@gwire@mastodon.social
2025-06-09 08:52:37

One thing I've noticed about the coverage of the LA protests is the differing use of "non-lethal" and "less-lethal" in describing weapons being used by the state against protesters.
And in this case I think the liability-dodging term "less lethal" is actually more useful, and should be incorporated into style-guides.

@carloshr@lile.cl
2025-06-06 23:02:15

Los franceses The Inspector Cluzo aportan también en este viernes de nuevos lanzamientos con su rock potente y bien pesado por momentos. «Less is More» se llama su nuevo disco.
open.spotify.com/album/3H1HAHg

@seeingwithsound@mas.to
2025-07-08 14:57:34

New machine vision is more energy efficient - and more human #AI vision

@edintone@mastodon.green
2025-07-31 06:57:06

First Mexican Taco Stand Ever to Win a Michelin Star Proves Sometimes Less is Mas goodnewsnetwork.org/first-mexi

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore that one too; my focus here is on project complexity, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a lot of care, which those students don't know how to deploy, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask for help from an instructor or TA, who helps them get rid of the stuff they don't understand and either re-prompt or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@ThatHoarder@mastodon.online
2025-06-08 20:12:11

'Given that the average electric drill is in use for just 15 minutes each year, and is kept in storage for the rest of the time, it’s clear that many household items don’t really need to be owned at all' - @alexjgnana overcomecompulsivehoarding.co.

@arXiv_hepth_bot@mastoxiv.page
2025-07-09 09:13:52

Redundancy Channels in the Conformal Bootstrap
Stefanos R. Kousvos, Andreas Stergiou
arxiv.org/abs/2507.05338 arxiv.o…

Justice Ketanji Brown Jackson is "breaking the fourth wall, speaking beyond the court,”
said Melissa Murray, a law professor at New York University.
💥“She is alarmed at what the court is doing and is sounding that in a different register,
one that is less concerned with the appearance of collegiality and more concerned with how the court appears to the public.”
Her slashing critiques sometimes seemed to test her colleagues’ patience,
culminating in an unchar…

@hex@kolektiva.social
2025-08-09 12:26:23

#FuckCars as usual. Like Rage Against the Machine songs, social critiques should become less relevant not more relevant over time. This one from 1973 is far more relevant now than ever.
Cars are fundamentally reactionary, both in their purpose and in their utility to the dominant class.
resilience.org/stories/2018-08

@aredridel@kolektiva.social
2025-06-07 04:24:31

tech nerdery
I mean this: if every receiver just connected to its source when it was ready, and we hadn't made short-timeout stateful firewalls everywhere, we'd have to deploy SO MANY fewer weird one-off services just to receive something.
Instead we have to provision certificates and public facing hostnames to get communication going. Backend development is so much more complex and less robust because of it.

@arXiv_csHC_bot@mastoxiv.page
2025-06-09 07:40:52

What Comes After Harm? Mapping Reparative Actions in AI through Justice Frameworks
Sijia Xiao, Haodi Zou, Alice Qian Zhang, Deepak Kumar, Hong Shen, Jason Hong, Motahhare Eslami
arxiv.org/abs/2506.05687

@playinprogress@assemblag.es
2025-08-02 08:08:06

wind splitting seconds on this cornflower stalk last year. I did not change anything between these pictures, nor did much time pass, I just kept pressing the shutter as the wind kept moving the branches of the birch tree in whose half-shadow I was standing
#photography #bloomScrolling

a stalk with two cornflowers in dappled sunlight, the flowers brightly lit and sharply delineated before a background of green foliage and dark shadows
the same view as in the previous image, but now the sunlight is hitting things slightly differently, making the image as a whole much lighter, the contrast between the flowers and the background less stark, producing an overall softer effect
again the same view with different light, this time the light is overall much more diffuse, the stark shadows have disappeared entirely, the color palette is more blueish, the strong yellow-green notes have disappeared, the whole image is soft and pastelly
same view, strong sunlight is back and with it the dark shadows, strong contrasts and yellow glow in the green tones
@arXiv_astrophHE_bot@mastoxiv.page
2025-07-08 12:37:21

The Global Cosmic Ray Observatory -- Challenging next-generation multi-messenger astronomy with interdisciplinary research
Toshihiro Fujii (on behalf of the GCOS supporters)
arxiv.org/abs/2507.04588

@arXiv_eessIV_bot@mastoxiv.page
2025-08-05 10:29:10

Less is More: AMBER-AFNO -- a New Benchmark for Lightweight 3D Medical Image Segmentation
Andrea Dosi, Semanto Mondal, Rajib Chandra Ghosh, Massimo Brescia, Giuseppe Longo
arxiv.org/abs/2508.01941

@arXiv_statML_bot@mastoxiv.page
2025-07-03 08:34:00

When Less Is More: Binary Feedback Can Outperform Ordinal Comparisons in Ranking Recovery
Shirong Xu, Jingnan Zhang, Junhui Wang
arxiv.org/abs/2507.01613

@tiotasram@kolektiva.social
2025-07-06 12:45:11

So I've found my answer after maybe ~30 minutes of effort. First stop was the first search result on Startpage (millennialhawk.com/does-poop-h), which has some evidence of maybe-AI authorship but which is better than a lot of slop. It actually has real links & cites research, so I'll start by looking at the sources.
It claims near the top that poop contains 4.91 kcal per gram (note: 1 kcal = 1 Calorie = 1000 calories, which fact I could find/do trust despite the slop in that search). Now obviously, without a range or mention of an average, this isn't the whole picture, but maybe it's an average to start from? However, the citation link is to a study (pubmed.ncbi.nlm.nih.gov/322359) which only included 27 people with impaired glucose tolerance and obesity. Might have the cited stat, but it's definitely not a broadly representative one if this is the source. The public abstract does not include the stat cited, and I don't want to pay for the article. I happen to be affiliated with a university library, so I could see if I have access that way, but it's a pain to do and not worth it for this study that I know is too specific. Also most people wouldn't have access that way.
Side note: this doing-the-research project has the nice benefit of letting you see lots of cool stuff you wouldn't have otherwise. The abstract of this study is pretty cool and I learned a bit about gut microbiome changes from just reading the abstract.
My next move was to look among citations in this article to see if I could find something about calorie content of poop specifically. Luckily the article page had indicators for which citations were free to access. I ended up reading/skimming 2 more articles (a few more interesting facts about gut microbiomes were learned) before finding this article whose introduction has what I'm looking for: pmc.ncbi.nlm.nih.gov/articles/
Here's the relevant paragraph:
"""
The alteration of the energy-balance equation, which is defined by the equilibrium of energy intake and energy expenditure (1–5), leads to weight gain. One less-extensively-studied component of the energy-balance equation is energy loss in stools and urine. Previous studies of healthy adults showed that ≈5% of ingested calories were lost in stools and urine (6). Individuals who consume high-fiber diets exhibit a higher fecal energy loss than individuals who consume low-fiber diets with an equivalent energy content (7, 8). Webb and Annis (9) studied stool energy loss in 4 lean and 4 obese individuals and showed a tendency to lower the fecal energy excretion in obese compared with lean study participants.
"""
And there's a good-enough answer if we do some math, along with links to more in-depth reading if we want them. A Mayo Clinic calorie calculator suggests about 2250 Calories per day for me to maintain my weight. There's probably a lot of variation in that number, but 5% of it would be very roughly 100 Calories lost in poop per day, so an extremely rough range across humans might be 50-200 Calories per day. Interestingly, one of the AI slop pages I found asserted (without citation) 100-200 Calories per day, which kinda checks out. I had no way to trust that number at the time, though, and as we saw with the 4.91 kcal/gram figure, its provenance might not be good.
To double-check, I visited this link from the paragraph above: sciencedirect.com/science/arti
It's only a 6-person study, but just the abstract has numbers: ~250 kcal/day pooped on a low-fiber diet vs. ~400 kcal/day pooped on a high-fiber diet. That's with intakes of ~2100 and ~2350 kcal respectively, which is close to the number from which I estimated 100 kcal above, so maybe the first estimate from just the 5% number was a bit low.
Glad those numbers were in the abstract, since the full text is paywalled... It's possible this study was also done on some atypical patient group...
Just to come full circle, let's look at that 4.91 kcal/gram number again. A search suggests 14-16 ounces of poop per day is typical, with at least two sources around 14 ounces, or ~400 grams. (AI slop was strong here too, with one including a completely made up table of "studies" that was summarized as 100-200 grams/day). If we believe 400 grams/day of poop, then 4.91 kcal/gram would be almost 2000 kcal/day, which is very clearly ludicrous! So that number was likely some unrelated statistic regurgitated by the AI. I found that number in at least 3 of the slop pages I waded through in my initial search.
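The arithmetic above is easy to sanity-check in a few lines. Here's a quick script; all inputs are the rough figures quoted in this post, not measured data:

```python
# Sanity-check the poop-calorie numbers quoted above.
daily_intake_kcal = 2250        # Mayo Clinic maintenance estimate for me
fecal_loss_fraction = 0.05      # ~5% of ingested calories lost in stool/urine
stool_grams_per_day = 400       # ~14 oz/day of poop, converted to grams
claimed_kcal_per_gram = 4.91    # the uncited AI-slop figure

# Estimate from the 5% figure:
loss_from_fraction = daily_intake_kcal * fecal_loss_fraction

# What the 4.91 kcal/g claim would imply at 400 g/day:
loss_from_claim = stool_grams_per_day * claimed_kcal_per_gram

print(f"5% estimate: {loss_from_fraction:.0f} kcal/day")          # ~112 kcal/day
print(f"4.91 kcal/g at 400 g/day: {loss_from_claim:.0f} kcal/day")  # ~1964 kcal/day
# The second number is close to total intake, which is why the
# 4.91 kcal/g figure fails the smell test.
```

The implied ~1964 kcal/day of fecal energy loss would mean excreting nearly everything you eat, which is the "clearly ludicrous" result noted above.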

@arXiv_csDC_bot@mastoxiv.page
2025-08-07 09:57:14

Edge-assisted Parallel Uncertain Skyline Processing for Low-latency IoE Analysis
Chuan-Chi Lai, Yan-Lin Chen, Bo-Xin Liu, Chuan-Ming Liu
arxiv.org/abs/2508.04596

@azonenberg@ioc.exchange
2025-07-27 18:03:31

Starting to look at how feasible parameterizing my curve25519 multiplier to use less DSPs at the expense of run time and maybe a few more luts is.
Ultimate goal is a factor of 3 (or more) reduction in multiplier usage allowing it to fit in a Trion T20.

@fortune@social.linux.pizza
2025-05-29 22:00:01

Well, that's more-or-less what I was saying, though obviously addition
is a little more cosmic than the bitwise operators.
-- Larry Wall in <199709051808.LAA01780@wall.org>

@luana@wetdry.world
2025-06-25 11:03:54

Found the specs sheet.
The front camera and ultrawide camera seem to be considerably worse.
The normal wide camera seems to be better, except it lost the electronic stabilisation (not sure how important that is tbh).
The battery is better, but it needs a screwdriver to be changed so no more switch during the day if you don’t have a screwdriver always on you. This comes with no added water resistance, which makes me wonder why they did this.
The display seems to be worse? The resolution is smaller which makes sense since the size is smaller, but also it seems to have less PPI than the Fairphone 5. The refresh rate is higher tho.
It has worse USB-C connectivity as well, the Fairphone 6 has just USB 2.0 (!!!!) compared to 3.0 on the Fairphone 5.
They also got rid of the sky blue color (which was the prettiest imo) and of the transparent option.
I don’t really understand Qualcomm processors, but at least the new GPU seems to have a better benchmark score?
#Fairphone #Fairphone5 #Fairphone6

@nemorosa@mastodon.nu
2025-08-04 16:52:18

Slightly irritated. I went into a DIY store to buy fine concrete. I asked a couple of questions. The shop assistant kept turning to my husband, who stood up for me and kept insisting it was my project.
My husband was even more annoyed than I was when we left, bless his heart. I was actually less annoyed after he told the clerk, "this is not OK, you do not treat women like this".
Perhaps the clerk will remember that the next time he gets a female customer. I hope.

@arXiv_astrophSR_bot@mastoxiv.page
2025-08-08 08:47:02

North-South Asymmetry of the Solar Activity at Different Spatial Scales
V. N. Obridko, A. S. Shibalova, D. D. Sokoloff, I. M. Livshits
arxiv.org/abs/2508.04866

@Techmeme@techhub.social
2025-07-28 19:11:40

Faculty and students on Chinese campuses are enthusiastically embracing AI, and the level of public excitement for AI in China is far greater vs. the US and UK (Caiwei Chen/MIT Technology Review)
technologyreview.com/2025/07/2

@andycarolan@social.lol
2025-07-27 17:40:00

Is anyone here old enough to remember when the internet was more open, contained more content and was less manipulated by the wealthy and powerful?
*gets misty eyed*

@samueljohn@mastodon.world
2025-08-02 15:38:16

This resonates 50% with me. But for the other 50%, I'm like: you and your manager have to become more the architects and less the lines-of-code-checkers. Also, thinking about tests and edge cases is even more important now. exquisite.social/@thomholwerda

@andres4ny@social.ridetrans.it
2025-06-28 16:58:23

One of the reasons I appreciate walking, biking, and public transit so much (aside from just being a more joyful/connected way to travel, and better for our environment) is because they starve so many absolutely awful billionaires/corporations/oligarchs of profit. Oil companies, car manufacturers, large chain stores..

@metacurity@infosec.exchange
2025-06-25 13:51:12

Yikes, there is always so much infosec news but Metacurity helps a lot with the overload.
Check out today's issue for the cybersecurity developments you should know, including
--Cyber insurance premiums dropped for the first time in 2024,
--CyberAv3ngers shift to psychological manipulation,
--Another hacker hits Paraguay,
--50% of ransomware payments are less than expected,
--Pro-Russian group was reportedly behind Norwegian dam hack,
--UK 2025 …

@midtsveen@social.linux.pizza
2025-06-13 19:59:11
Content warning: Spiders and Titanic: The Oddest Pairing You Didn’t See Coming

Warning for anyone scared of spiders or the Titanic wreck!
Spiders used to terrify me so much I couldn’t even sleep. I overcame it by doing what they call exposure therapy, just googling spiders and looking at pictures until it felt less scary. Now, I actually want to own a tarantula as a pet, which is kind of funny.
Something I’m even more scared of is the Titanic wreckage. It’s an oddly specific fear, but I think it’s a specific kind of Submechanophobia, where the fear is …

Close-up macro photo of a jumping spider on a bright green leaf. The spider’s body is covered in fine light-brown and grey hairs, giving it a fuzzy look. Its large, iridescent purple and red jaws stand out, facing forward. Two big, bright green eyes add an almost alien appearance. The spider’s thick, hairy legs have dark tips. The background is softly blurred green and blue, focusing attention on the spider’s detailed features. The image shows the spider’s beauty up close without being threaten…
The photograph shows a section of the wrecked RMS Titanic on the ocean floor, focusing on rusted and corroded railings and deck structures. Marine growth like sea anemones, sponges, and small white invertebrates cover the metal, creating a textured surface in rusty orange, brown, and grey-white tones. The railing is twisted and broken, with a dark sea fan-like organism attached.

The scene is dark and somber, surrounded by deep ocean gloom. It evokes a strong sense of decay, time passing, and n…
@ruth_mottram@fediscience.org
2025-07-21 07:26:13

More or less true. Though I'd argue that even the height and chin requirement is an overstatement..
It really is mostly about personality, friendliness and manners, not appearance.
#love

@thomasfuchs@hachyderm.io
2025-07-25 13:48:04

PSA: 🥵 Idk who needs to hear this, but in a heatwave for the love of science use air conditioning. If you don’t have air conditioning, get a portable unit as soon as possible.
The climate is going to get hotter and heat can very easily kill you or a loved one or a pet; or cause permanent disability.
Heat is a lot more dangerous than cold temperatures; even if you’re “young and healthy”.
The climate is already fucked and has long reached a tipping point as far as humans go—our only bet is large-scale CO2 extraction.
Sacrificing your health “to save energy*” for an imaginary fight that we have already lost is stupid. It’s also one of those “individual responsibility” mindfucks—the culprits for pollution are industry maximizing profits and governments that are not acting.
Anyway, stay cool.
*note that AC uses overall a lot less energy than heating does

@arXiv_csIR_bot@mastoxiv.page
2025-08-06 09:04:40

Reliable Evaluation Protocol for Low-Precision Retrieval
Kisu Yang, Yoonna Jang, Hwanseok Jang, Kenneth Choi, Isabelle Augenstein, Heuiseok Lim
arxiv.org/abs/2508.03306

@thesaigoneer@social.linux.pizza
2025-06-04 12:21:27

To paraphrase Barry Scwartz: The Paradox of Choice – Why More Is Less
With the continuous glitz of KDE, Hyprland, dwm, Slackware and Gentoo as daily drivers it's the time of year to start winding down.
Also because I've been revisiting Steve Anelay's videos (OldTechBloke, sorely missed) I'll spend a month on the Mate desktop, starting this Saturday.
But what to run it on? Help me make a choice, appreciated 😎

@whitequark@mastodon.social
2025-07-20 20:29:01

good news: the webusb version of #GlasgowInterfaceExplorer software is now more or less as fast as native (the latency is a bit worse)
bad news: i have no idea why. i didn't do anything

benchmark results
@pavelasamsonov@mastodon.social
2025-06-20 22:29:30

Tech companies think that, if they make products *look* futuristic, you will think that they are actually innovating. In reality what we get is not "Star Trek communicators inspired cell phones" but "every experience with your computer is going to be like begging HAL to open the pod bay doors" except less evil and more frustrating.

@ginevra@hachyderm.io
2025-06-27 23:37:57

Ah, frequency illusion bias/Baader-Meinhof phenomenon! I have been learning about the nation-state Treaty of Westphalia stuff.
I'm finding it a bit odd that I'm so late to learn about this ... is it less emphasised in Australia? Why is it emphasised in the US?
At my current stage of learning, Australia's self-definition feels heavily 'state', with any discussion of 'nation' often being tied to racism.
More to learn I guess!
#NationState #TreatyOfWestphalia

@tiotasram@kolektiva.social
2025-06-21 02:34:13

Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. In virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through. More than you would if you were actually doing the work and taking pride in it. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them "haha how stupid to not check whether the books the AI reviewed for you actually existed!" but on a deeper level if we're honest we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* if the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich that continue to drag us all down this destructive path, and I think it's worth thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason that it's not, is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.

@mariyadelano@hachyderm.io
2025-07-21 21:38:00

This is one of the most beautiful essays I’ve read all year.
“But while I've always suspected that AI is about a denial of death, in the aftermath of the traumatic and unexpected arrival of the Real, I feel this even more deeply. AI promises an illusion of steady continuity in a world full of unexpected redirection. Nothing shatters that illusion of perpetuity like the sudden death of someone you love.”
Wow wow wow wow. I’m crying in public reading it and couldn’t care less. mastodon.social/@antisomniac/1

@arXiv_csCR_bot@mastoxiv.page
2025-06-04 07:24:46

Are Crypto Ecosystems (De)centralizing? A Framework for Longitudinal Analysis
Harang Ju, Ehsan Valavi, Madhav Kumar, Sinan Aral
arxiv.org/abs/2506.02324

'Very true,' said the Duchess: 'flamingoes and mustard both bite. And the moral of that is — "Birds of a feather flock together."'
...
Only mustard isn't a bird,' Alice remarked.
...
'It's a mineral, I think,' said Alice.
'...said the Duchess...there's a large mustard-mine near here. And the moral of that is — "The more there is of mine, the less there is of yours."'
--Lewis Carroll

@karlauerbach@sfba.social
2025-06-27 18:18:03

Today's SCOTUS ruling on the maximum scope of district and circuit (appellate) court rulings is going to create a hierarchy of zones in the US.
These zones will align with the boundaries of the Federal courts (and thus may be altered by Congress).
Some of these zones will have more freedom - such as those in California - and some will have less - such as those in Texas.
We are slowly moving the US away from the Constitution and back towards the failed Articles of …

@mgorny@social.treehouse.systems
2025-07-14 16:39:18

About morbid thriftiness (Autism Spectrum Condition)
As you may have noticed, I am morbidly thrifty. Usually I don't buy stuff that I don't need — and if I decide that I actually need something, I am going to ponder about it for a while, look for value products, and for the best price. And with some luck, I'm going to decide I don't need it that bad after all.
One reason for that is probably how I was raised. My parents taught me to be thrifty, so I have to be. It doesn't matter that, in retrospect, I see that their thriftiness was applied rather arbitrarily to some spending and not other, or that perhaps they were greedy — spending less on individual things so that they could buy more. Well, I can't delude myself like that, so I have to be thrifty for real. And when I fail, when I pay too much, when I get cheated — I feel quite bad about it.
The other reason is that I keep worrying about my future. It doesn't matter how rich I may end up — I'll keep worrying that I'll run out of money in the future. Perhaps I'll lose a job and won't be able to find anything for a long time. Perhaps something terrible will happen and I'm going to need to pay a lot suddenly.
Another thing is that I easily get attached to objects. Well, it's easier to be thrifty when you really don't want to replace stuff. Over time you also learn to avoid getting new stuff at all, since the more stuff you have, the more stuff may break and need to be thrown away.
Finally, there's my environmental responsibility. I admit that I don't do enough — but at least the things I can do, I do.
[EDIT: and yes, I feel bad about how expensive my new phone was, even though it's of much higher quality than the last one. Also, I got a worse deal because I waited too long.]
#ActuallyAutistic

@jby@ecoevo.social
2025-06-18 19:24:35

A new paper projecting Joshua tree habitat under future climate based on incredibly high-resolution distribution data, from Joshua Tree Genome Project collaborators at USGS. They estimate up to 80% loss of suitable habitat by 2100 under the worst-case climate scenario.
#JoshuaTree #science

Map of projected future habitat probabilities for Joshua tree populations based on random forest models of presence and absence, for the years 2071-2100 under SSP3-7.0. Parts of the trees' current range, indicated as outlines, are colored to indicate high probability of presence, but many parts are colored to indicate lower probability
A scatterplot of estimated future suitable habitat area in 2021-2040, 2041-2070, and 2071-2100, under three different future climate scenarios and based on modeling from different baseline time frames. In general, less suitable habitat is projected in the latest time period, and less is projected under more severe climate change
@lilmikesf@c.im
2025-07-05 16:31:02

Paper Of Record delves into how unchecked global warming #climate chaos created catastrophic conditions for deadly #floods in #TX. The #GuadalupeRiver

The Guadalupe River rose from three feet to 34 feet in about 90 minutes, according to data from a river gauge near the town of Comfort, Texas. The volume of water exploded from 95 cubic feet per second to 166,000 cubic feet per second.

And the warming climate is creating the conditions in Texas for more of these sharp, deadly deluges.

In the eastern part of the state, the number of days per year with at least two inches of rain or snow has increased by 20 percent since 1900, according to the …
o MethaneSAT Is Lost: The satellite, launched to track planet-warming emissions from oil and gas sites, was just a year into its mission. It has lost power, the mission’s controllers said, and most likely cannot be recovered.

o Imported Trash: Malaysia banned all plastic waste shipments from nations that had not signed an agreement regulating hazardous waste. That includes the United States, which shipped more than 35,000 tons of it to the country in 2024.

» Saltier Seas, Less Ice: A study p…
@burger_jaap@mastodon.social
2025-06-20 17:15:37

Amsterdam is celebrating its 750th anniversary. Tomorrow, this will be celebrated with activities on a large section of the ring road: for one day, there will be a bit more city for people and less for cars. Hopefully, this will be an appetiser for more.
opdering.amsterdam/en/

Screenshot of website. It shows a map of Amsterdam. It indicates parts of the ring motorway. These are shown as activity zones.
@arXiv_quantph_bot@mastoxiv.page
2025-05-30 10:32:25

This arxiv.org/abs/2502.09930 has been replaced.
initial toot: mastoxiv.page/@arXiv_qu…

@davej@dice.camp
2025-06-23 19:38:33

This is another horrifying statistic to shelve alongside the composition of global mammalian biomass:
• humans 34%
• livestock and pets 62%
• wild animals 4%
#science #biology #ecology

An infographic breaking down the distribution of mammalian biomass (2015 figures):
Wild animals 4%
Humans 34%
Livestock and pets 62%, comprising:
Cattle 35%
Pigs 12%
Buffalo 5%
Sheep 3%
Goats 3%
Horses 2%
Camels, asses, and pets less than 1% each
@sonnets@bots.krohsnest.com
2025-05-31 11:25:10

Sonnet 020 - XX
A woman's face with nature's own hand painted,
Hast thou, the master mistress of my passion;
A woman's gentle heart, but not acquainted
With shifting change, as is false women's fashion:
An eye more bright than theirs, less false in rolling,
Gilding the object whereupon it gazeth;
A man in hue all hues in his controlling,
Which steals men's eyes and women's souls amazeth.
And for a woman wert thou f…

@arXiv_eessSY_bot@mastoxiv.page
2025-08-05 09:14:30

Modeling Head-Neck Dynamics under Lateral Perturbations Using MPC to Mimic CNS postural stabilization strategy
Chrysovalanto Messiou, Riender Happee, Georgios Papaioannou
arxiv.org/abs/2508.00928

@ErikJonker@mastodon.social
2025-06-23 07:40:02

In the current times we see that the theoretical framework of realism in world politics is getting more important. Of course people are disappointed about the rules- and principles-based world order becoming less important. But at the same time states have to defend the interests of their citizens.
en.…

@AntoninDanalet@datasci.social
2025-06-02 11:56:52

Consumer price for mobility in 🇨🇭, 2000-2024
The report "Environment Switzerland 2018" (bafu.admin.ch/bafu/en/home/sta

Development of consumer prices for public transport and car relative to household income, 1995-2017. Y axis: Index (2000 = 100). X axis: 1995-2017. Since 2000, prices for public transport have risen more strongly than the disposable income. Car prices, on the other hand, have risen less sharply and have even fallen in recent years. For interpretation, there is also household income. Net disposable income of private households and private (non-profit) organisations per capita.
@arXiv_csCL_bot@mastoxiv.page
2025-07-22 12:23:40

Supernova: Achieving More with Less in Transformer Architectures
Andrei-Valentin Tanase, Elena Pelican
arxiv.org/abs/2507.15773

@arXiv_eessAS_bot@mastoxiv.page
2025-07-01 08:59:53

Less is More: Data Curation Matters in Scaling Speech Enhancement
Chenda Li, Wangyou Zhang, Wei Wang, Robin Scheibler, Kohei Saijo, Samuele Cornell, Yihui Fu, Marvin Sach, Zhaoheng Ni, Anurag Kumar, Tim Fingscheidt, Shinji Watanabe, Yanmin Qian
arxiv.org/abs/2506.23859

@arXiv_csCY_bot@mastoxiv.page
2025-06-03 07:19:39

Prompt Engineer: Analyzing Skill Requirements in the AI Job Market
An Vu, Jonas Oppenlaender
arxiv.org/abs/2506.00058

@tiotasram@kolektiva.social
2025-07-28 13:06:20

How popular media gets love wrong
Now a bit of background about why I have this "engineered" model of love:
First, I'm a white straight cis man. I've got a few traits that might work against my relationship chances (e.g., neurodivergence; I generally fit pretty well into the "weird geek" stereotype), but as I was recently reminded, it's possible my experience derives more from luck than other factors, and since things are tilted more in my favor than most people on the planet, my advice could be worse than useless if it leads people towards strategies that would only have worked for someone like me. I don't *think* that's the case, but it's worth mentioning explicitly.
When I first started dating my now-wife, we were both in graduate school. I was 26, and had exactly zero dating/romantic experience through that point in my life. In other words, a pretty stereotypical "incel" although I definitely didn't subscribe to incel ideology at all. I felt lonely, and vaguely wanted a romantic relationship (I'm neither aromantic nor asexual), but had never felt socially comfortable enough to pursue one before. I don't drink and dislike most social gatherings like parties or bars; I mostly hung around the fringes of the few college parties I attended, and although I had a reasonable college social life in terms of friends, I didn't really do anything to pursue romance, feeling too awkward to know where to start. I had the beginnings of crushes in both high school and college, but never developed a really strong crush, probably correlated with not putting myself in many social situations outside of close all-male friend gatherings. I never felt remotely comfortable enough to act on any of the proto-crushes I did have. I did watch porn and masturbate, so one motivation for pursuing a relationship was physical intimacy, but loneliness was as much of a motivating factor, and of course the social pressure to date was a factor too, even though I'm quite contrarian.
I'm lucky in that I had some mixed-gender social circles already like intramural soccer and a graduate-student housing potluck. Graduate school makes a *lot* more of these social spaces accessible, so I recognize that those not in school of some sort have a harder time of things, especially if like me they don't feel like they fit in in typical adult social spaces like bars.
However, at one point I just decided that my desire for a relationship would need action on my part and so I'd try to build a relationship and see what happened. I worked up my courage and asked one of the people in my potluck if she'd like to go for a hike (pretty much clearly a date but not explicitly one; in retrospect not the best first-date modality in a lot of ways, but it made a little more sense in our setting where we could go for a hike from our front door). To emphasize this point: I was not in love with (or even infatuated with) my now-wife at that point. I made a decision to be open to building a relationship, but didn't follow the typical romance story formula beyond that. Now of course, in real life as opposed to popular media, this isn't anything special. People ask each other out all the time just because they're lonely, and some of those relationships turn out fine (although many do not).
I was lucky in that some aspects of who I am and what I do happened to be naturally comforting to my wife (natural advantage in the "appeal" model of love) but of course there are some aspects of me that annoy my wife, and we negotiate that. In the other direction, there's some things I instantly liked about my wife, and other things that still annoy me. We've figured out how to accept a little, change a little, and overall be happy with each other (though we do still have arguments; it's not like the operation/construction/maintenance of the "love mechanism" is always perfectly smooth). In particular though, I approached the relationship with the attitude of "I want to try to build a relationship with this person," at first just because of my own desires for *any* relationship, and then gradually more and more through my desire to build *this specific* relationship as I enjoyed the rewards of companionship.
So for example, while I think my wife is objectively beautiful, she's also *subjectively* very beautiful *to me* because having decided to build a relationship with her, I actively tried to see her as beautiful, rather than trying to judge whether I wanted a relationship with her based on her beauty. In other words, our relationship is more causative of her beauty-to-me than her beauty-to-me is causative of our relationship. This is the biggest way I think the "engineered" model of love differs from the "fire" and "appeal" models: you can just decide to build love independent of factors we typically think of as engendering love (NOT independent of your partner's willingness to participate, of course), and then all of those things like "thinking your partner is beautiful" can be a result of the relationship you're building. For sure those factors might affect who is willing to try building a relationship with you in the first place, but if more people were willing to jump into relationship building (not necessarily with full commitment from the start) without worrying about those other factors, they might find that those factors can come out of the relationship instead of being prerequisites for it. I think this is the biggest failure of the "appeal" model in particular: yes you *do* need to do things that appeal to your partner, but it's not just "make myself lovable" it's also: is your partner putting in the effort to see the ways that you are beautiful/lovable/etc., or are they just expecting you to become exactly some perfect person they've imagined (and/or been told to desire by society)? The former is perfectly possible, and no less satisfying than the latter.
To cut off my rambling a bit here, I'll just add that in our progress from dating through marriage through staying-married, my wife and I have both talked at times explicitly about commitment, and especially when deciding to get married, I told her that I knew I couldn't live up to the perfect model of a husband that I'd want to be, but that if she wanted to deepen our commitment, I was happy to do that, and so we did. I also rearranged my priorities at that point, deciding that I knew I wanted to prioritize this relationship above things like my career or my research interests, and while I've not always been perfect at that in my little decisions, I've been good at holding to that in my big decisions at least. In the end, *once we had built a somewhat-committed relationship*, we had something that we both recognized was worth more than most other things in life, and that let us commit even more, thus getting even more out of it in the long term. Obviously you can't start the first date with an expectation of life-long commitment, and you need to synchronize your increasing commitment to a relationship so that it doesn't become lopsided, which is hard. But if you take the commitment as an active decision and as the *precursor* to things like infatuation, attraction, etc., you can build up to something that's incredibly strong and rewarding.
I'll follow this up with one more post trying to distill some advice from my ramblings.
#relationships #love

@luana@wetdry.world
2025-06-24 23:19:56

Just got a small space heater and damn it’s so much more comfortable here now :blobcatafternoon:
As a @… fan I’d prefer if my AC/heat pump supported heating mode, but welp it was already installed when I moved here so an electric heater will do it for the 2 days a year it’s actually cold enough for that to be useful here. Getting all that heat from the bedroom to the living room would be a problem anyway.
And I didn’t even need to make a fire hazard in order to use it since a 20A outlet was already around due to the coffee thingy, yay!
It’s been on for just around 40mins and it’s already sooo much better in here (the temp sensor is not on the side the heater is pointing to (the sofa) so it’ll take a while for it to reflect the change, especially since this is a big room (kitchen dinner living), but just pointing the heater to where I’m at is enough to make it a comfortable temperature (and probably even way too hot in a bit)).
I’ve been wanting this for a while, but it never felt worth it bc we don’t really have many cold days here. Tho this year we got some more I think and today was especially cold (9~11°C) so I decided to just do it. Extra points bc it was available on fucking iFood of all places so it arrived less than an hour after I ordered it lmao.

We can say this at least:
Musk is one of the most malignant people ever to hold a position of influence in American politics.
His actions, without exaggeration, have devastated the health and security of American society
and directly caused the deaths of tens of thousands of people all over the world, with millions more to follow given the course that he has set.
Musk’s DOGE, which is more or less a conspiracy to destroy constitutional government in this country, has …

@arXiv_csDS_bot@mastoxiv.page
2025-06-26 07:47:00

Accept More, Reject Less: Reducing up to 19% Unnecessary Desk-Rejections over 11 Years of ICLR Data
Xiaoyu Li, Zhao Song, Jiahao Zhang
arxiv.org/abs/2506.20141

@alejandrobdn@social.linux.pizza
2025-07-13 18:26:56

Duplicati is in the process of saving my ass again, but their backup restores are extremely slow.
Their web interface is neat and the handling is more or less intuitive, but I think I'm going back to the rsync & cron script formula.
It's what I've done on my server all my life until I installed Docker, but in this case I'm going back to my simple origins.
#docker

@MichaelLondonSF@mas.to
2025-07-15 18:38:03

Need to use more services, buy less stuff.
Today I made appointments with local locksmith and podiatrist; dentist next.
My barber is Albanian; the one opposite is Pakistani & a 3rd is Turkish. All in one little parade in Tottenham

@NicolasGriseyDemengel@piaille.fr
2025-05-21 05:44:29

boydkane.com/essays/experts

@MamasPinkyToe@mastodon.world
2025-06-21 14:43:00

My brain is less like a computer and more like a pachinko machine.

@arXiv_csAI_bot@mastoxiv.page
2025-06-03 18:02:32

This arxiv.org/abs/2504.14870 has been replaced.
initial toot: mastoxiv.page/@arXiv_csAI_…

@threeofus@mstdn.social
2025-06-17 08:23:33

Since halving my #sertraline dose, down to 50mg / day, my brain feels like it’s halved in processing power. Words are more difficult to find, my speech is slower, my thought patterns and creativity are stifled. I’m generally more lethargic. On the plus side I’m less agitated and less prone to angry outbursts. If only I could have all of the good bits and none of the bad. That’s not really h…

@arXiv_hepph_bot@mastoxiv.page
2025-07-30 10:13:21

Infrared singularities and the collinear limits of multi-leg scattering amplitudes
Claude Duhr, Einan Gardi, Sebastian Jaskiewicz, Jonas L\"ubken, Leonardo Vernazza
arxiv.org/abs/2507.21854

@bthalpin@mastodon.social
2025-06-20 18:16:15

Solstice thought: the earth's transit around the sun is like climbing the spiral staircase of the Tower of Pisa. You begin leaning left, after 1/4 you're more or less upright, 1/4 further leaning right, then upright again. Then left again, etc.

Leaning Tower of Pisa partly obscured by a cypress
@midtsveen@social.linux.pizza
2025-07-23 03:11:07

Operating system drama is basically YouTube drama but with way more keyboard clacking and less ukuleles. Friends turn into rivals overnight just because of which OS you use. It is wild to see people treat software choices like reality TV show rivalries.
What makes me even sadder is how something that should come from understanding and research turns into a full-blown philosophical fight. Choosing an OS should be about what works for you, not a reason to start a digital soap opera.

@tiotasram@kolektiva.social
2025-08-04 15:49:39

Should we teach vibe coding? Here's why not.
2/2
To address the bigger question I started with ("should we teach AI-"assisted" coding?"), my answer is: "No, except enough to show students directly what its pitfalls are." We have little enough time as it is to cover the core knowledge that they'll need, which has become more urgent now that they're going to be expected to clean up AI bugs and they'll have less time to develop an understanding of the problems they're supposed to be solving. The skill of prompt engineering & other skills of working with AI are relatively easy to pick up on your own, given a decent not-even-mathematical understanding of how a neural network works, which is something we should be giving to all students, not just our majors.
Reasonable learning objectives for CS majors might include explaining what types of bugs an AI "assistant" is most likely to introduce, explaining the difference between software engineering and writing code, explaining why using an AI "assistant" is likely to violate open-source licenses, listing at least three independent ethical objections to contemporary LLMs and explaining the evidence for/reasoning behind them, explaining why we should expect AI "assistants" to be better at generating code from scratch than at fixing bugs in existing code (and why they'll confidently "claim" to have fixed problems they haven't), and even fixing bugs in AI generated code (without AI "assistance").
If we lived in a world where the underlying environmental, labor, and data commons issues with AI weren't as bad, or if we could find and use systems that effectively mitigate these issues (there's lots of piecemeal progress on several of these) then we should probably start teaching an elective on coding with an assistant to students who have mastered programming basics, but such a class should probably spend a good chunk of time on non-assisted debugging.
#AI #LLMs #VibeCoding

@ThatHoarder@mastodon.online
2025-05-25 20:12:12

The way I see self-nurturing is that it's deeper, it's more compassionate to ourselves, it's more sustainable and also it's more personalised. overcomecompulsivehoarding.co.

A controversial new book:
"We are eating the Earth"
says excess carbon dioxide in the atmosphere is a long-term challenge resulting from an otherwise cheerful story,
in which more people live better lives with fuller bellies and bigger dreams.
Lawyer-turned-science-cop Tim Searchinger discovered that the popular carbon solution of 20 years ago,
-- plant-based biofuels,
-- was a disaster in the making.
His insight: Land used to grow fuel w…

@ruth_mottram@fediscience.org
2025-07-15 11:19:39

To go back to my previous post, for a plant-based diet to be attractive, we need to help people choose plant based alternatives to meat for more or less all occasions. So meat is no longer the default
That's what we need good chefs for: to help turn traditional foods into vegetarian/vegan alternatives that are tasty and easy to prepare.
Most traditional dishes are surprisingly modern. Our tastes can easily be changed...
As an example, in France, the markets are laden with charcuterie and cheese, BUT ALSO, beautiful fresh fruits and vegetables - a nudge towards the latter would improve human health, animal welfare and reduce emissions...
#PlantBased food.
This site however is outstanding making french cuisine vegan
menu-vegetarien.com/

@tiotasram@kolektiva.social
2025-08-02 13:28:40

How to tell a vibe coder is lying when they say they check their code.
People who will admit to using LLMs to write code will usually claim that they "carefully check" the output since we all know that LLM code has a lot of errors in it. This is insufficient to address several problems that LLMs cause, including labor issues, digital commons stress/pollution, license violation, and environmental issues, but at least if they're checking their code carefully, we shouldn't assume that it's any worse quality-wise than human-authored code, right?
Well, from principles alone we can expect it to be worse, since checking code the AI wrote is a much more boring task than writing code yourself, so anyone who has ever studied human-computer interaction even a little bit can predict people will quickly slack off, starting to trust the AI way too much, because it's less work. In a different domain, the journalist who published an entire "summer reading list" full of nonexistent titles is a great example of this. I'm sure he also intended to carefully check the AI output, but then got lazy. Clearly he did not have a good grasp of the likely failure modes of the tool he was using.
But for vibe coders, there's one easy tell we can look for, at least in some cases: coding in Python without type hints. To be clear, this doesn't apply to novice coders, who might not be aware that type hints are an option. But any serious Python software engineer, whether they used type hints before or not, would know that they're an option. And if you know they're an option, you also know they're an excellent tool for catching code defects, with a very low effort:reward ratio, especially if we assume an LLM generates them. Of the cases where adding types requires any thought at all, 95% of them offer chances to improve your code design and make it more robust. Knowing about but not using type hints in Python is a great sign that you don't care very much about code quality. That's totally fine in many cases: I've got a few demos or jam games in Python with no type hints, and it's okay that they're buggy. I was never going to debug them to a polished level anyways. But if we're talking about a vibe coder who claims that they're taking extra care to check for the (frequent) LLM-induced errors, that's not the situation.
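To make the point concrete, here's a tiny invented sketch of the kind of defect type hints surface cheaply. The function and the bad call are both hypothetical; the point is that a checker like mypy flags the mismatch statically, before the code ever runs:

```python
def scale(values: list[float], factor: float) -> list[float]:
    """Multiply every value by a factor."""
    return [v * factor for v in values]

# A type checker flags a call like this without running anything:
#   scale([1.0, 2.0], "3")
# mypy: error: Argument 2 to "scale" has incompatible type "str"
# At runtime that call wouldn't even crash -- "3" * 2.0 fails, but
# "3" * 2 would silently give "33" -- exactly the quiet wrongness
# LLM-generated call sites tend to produce.
print(scale([1.0, 2.0], 3.0))
```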
Note that this shouldn't be read as an endorsement of vibe coding for demos or other rough-is-acceptable code: the other ethical issues I skipped past at the start still make it unethical to use in all but a few cases (for example, I have my students use it for a single assignment so they can see for themselves how it's not all it's cracked up to be, and even then they have an option to observe a pre-recorded prompt session instead).

@rocksongoftheweek@mastodon.world
2025-07-04 09:16:09

This week’s pick is a brand new track from Mario Vayne that sounds like it time-travelled from a leather-clad, neon-soaked past.
Check it out and see what we had to say about it and the artist behind it, and don't forget to follow us for your weekly dose of #rock and #metal ... handpicked, review…

@playinprogress@assemblag.es
2025-08-04 07:08:04

more wind splitting the seconds/begonias in dappled morning sunlight
#photography #bloomScrolling #begonia #orange

a deep orange red begonia flower in dappled sunlight coming from behind, mostly hitting the green leaves and less so the flower itself. in the background out of focus some blue-grey asphalt and green tree foliage
the same view as before, but this time the light is dappled slightly differently, catching more of the top part of the flower, giving it a color gradient from more yellow-orange where the sun hits it to more red-orange in the bottom, with a smooth gradation in between
again the same view as before, but now the sunlight is hitting the begonia flower straight in the middle from behind, giving its center an intense bright glow, contrasting with the more shadow-y background
again the same view with another variation on the dappled sunlight, this one bright but mellow

Polls show the so-called “big, beautiful” budget bill championed by Donald Trump and Republicans in Congress
is becoming deeply unpopular
as more people learn about the deadly consequences of proposals to slash the health care safety net.
However, the legislation’s massive investment in Trump’s mass deportation campaign -- including billions of dollars to deploy 10,000 additional Immigration and Customs Enforcement (ICE) agents nationwide
has received far less attent…

More than nine in 10 renewable power projects globally are now cheaper than fossil fuel alternatives.
Solar power is about 41% cheaper than the lowest-cost fossil fuel alternative,
and onshore wind generation is less than half the price of fossil fuels, according to a report from the International Renewable Energy Agency.
Costs have been driven down by the increasingly widespread use of the technologies,
a huge focus on low-carbon manufacturing in China,
and bu…

@tiotasram@kolektiva.social
2025-07-30 19:33:03

Refugees, intergenerational trauma, child death, abusive family
Also just finished "The Best We Could Do" by Thi Bui, which is the second memoir I've stumbled upon recently that deals with the Vietnamese exodus after the end of the war (House Without Walls by Ching Yeung Russel is the other one, which is written in verse, not illustrated). Bui traces more of the political landscape and history of Vietnam through the stories of both of her parents, and also unpacks a lot of intergenerational trauma, but has less focus on the boat trip out and refugee camp experience, presumably because hers were easier than Russel's.
My thoughts after reading this return repeatedly to all of the impacts that patriarchy and toxic masculinity had on her father, from setting up his father and grandfather to be abusive towards him and the women in their lives, to pushing him deep into depression when he feels unable to fulfill the role of a protective husband, ironically leaving his wife to pick up the slack and ultimately ruining their relationship, to how it teaches him to despise and shirk the caregiver role he's left with, ultimately passing on some measure of trauma to his children. For sure war, abusive family, and child death can happen in the absence of patriarchy and those are in some ways perhaps bigger factors here, but at the same time, Bui's mom copes with most of the same factors in healthier ways.
#AmReading

Virtually all of the most important parts of the U.S. government that were created to protect the U.S. from the greatest risks we face
are being shut down, gutted, or marginalized.
What is more, plans and statements of the president and his advisers suggest further cuts are contemplated
that increase the likelihood that one or more crises will catch us unawares
🔥and that when that happens, we will be much less equipped to handle it than we have been in decades.

@tiotasram@kolektiva.social
2025-07-28 10:41:42

How popular media gets love wrong
Had some thoughts in response to a post about loneliness on here. As the author emphasized, reassurances from people who got lucky are not terribly comforting to those who didn't, especially when the person who was lucky had structural factors in their favor that made their chances of success much higher than those in their audience. So: these are just my thoughts, and may not have any bearing on your life. I share them because my experience challenged a lot of the things I was taught to believe about love, and I think my current beliefs are both truer and would benefit others seeking companionship.
We're taught in many modern societies from an absurdly young age that love is not something under our control, and that dating should be a process of trying to kindle love with different people until we meet "the one" with whom it takes off. In the slightly-less-fairytale corners of modern popular media, we might find an admission that it's possible to influence love, feeding & tending the fire in better or worse ways. But it's still modeled as an uncontrollable force of nature, to be occasionally influenced but never tamed. I'll call this the "fire" model of love.
We're also taught (and non-boys are taught more stringently) a second contradictory model of love: that in a relationship, we need to both do things and be things in order to make our partner love us, and that if we don't, our partner's love for us will wither, and (especially if you're not a boy) it will be our fault. I'll call this the "appeal" model of love.
Now obviously both of these cannot be totally true at once, and plenty of popular media centers this contradiction, but there are really very few competing models on offer.
In my experience, however, it's possible to have "pre-meditated" love. In other words, to decide you want to love someone (or at least, try loving them), commit to that idea, and then actually wind up in love with them (and them with you, although obviously this second part is not directly under your control). I'll call this the "engineered" model of love.
Now, I don't think that the "fire" and "appeal" models of love are totally wrong, but I do feel their shortcomings often suggest poor & self-destructive relationship strategies. I do think the "fire" model is a decent model for *infatuation*, which is something a lot of popular media blur into love, and which drives many (but not all) of the feelings we normally associate with love (even as those feelings have other possible drivers too). I definitely experienced strong infatuation early on in my engineered relationship (ugh that sounds terrible but I'll stick with it; I promise no deception was involved). I continue to experience mild infatuation years later that waxes and wanes. It's not a stable foundation for a relationship but it can be a useful component of one (this at least popular media depicts often).
I'll continue these thoughts in a reply, but it might take a bit to get to it.
#relationships

@tiotasram@kolektiva.social
2025-05-15 17:02:17

The full formula for the probability of "success" is:
p = {
1/(2^(-n+1)) if n is negative, or
1 - (1/(2^(n+1))) if n is zero or positive
}
(Both branches have the same value when n is 0, so the behavior is smooth around the origin.)
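The two branches can be sketched as a small function (a direct transcription of the formula above, nothing added):

```python
def success_prob(n: int) -> float:
    """Probability of success at advantage level n
    (negative n = disadvantage)."""
    if n < 0:
        return 1 / (2 ** (-n + 1))
    return 1 - 1 / (2 ** (n + 1))

# Levels -2..2 give: 0.125, 0.25, 0.5, 0.75, 0.875
print([success_prob(n) for n in range(-2, 3)])
```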
How can we tweak this?
First, we can introduce fixed success and/or failure chances unaffected by level, with this formula only taking effect if those don't apply. For example, you could do 10% failure, 80% by formula, and 10% success to keep things from being too sure either way even when levels are very high or low. On the other hand, this flattening makes the benefit of extra advantage levels even less exciting.
Second, we could allow for gradations of success/failure, and treat the coin pools I used to explain that math like dice pools a bit. An in-between could require linearly more success flips to achieve the next higher grade of success at each grade. For example, simple success on a crit roll might mean dealing 1.5x damage, but if you succeed on 2 of your flips, you get 9/4 damage, or on 4 flips 27/8, or on 7 flips 81/16. In this world, stacking crit levels might be a viable build, and just giving up on armor would be super dangerous. In the particular case I was using this for just now, I can't easily do gradations of success (that's the reason I turned to probabilities in the first place) but I think I'd favor this approach when feasible.
The main innovation here over simple dice pools is how to handle situations where the number of dice should be negative. I'm almost certain it's not a truly novel innovation though, and some RPG fan can point out which system already does this (please actually do this, I'm an RPG nerd too at heart).
I'll leave this with one more tweak we could do: what if the number 2 in the probability equation were 3, or 2/3? I think this has a similar effect to just scaling all the modifiers a bit, but the algebra escapes me in this moment and I'm a bit lazy. In any case, reducing the base of the probability exponent should let you get a few more gradations near 50%, which is probably a good thing, since the default goes from 25% straight to 50% and then to 75% with no integer stops in between.
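Here's a quick exploratory sketch of that last tweak, with the exponent base as a parameter (my own addition, not part of the formula above). One wrinkle worth noting: for any base other than 2 the two branches no longer meet at n = 0 (they give 1/b vs 1 - 1/b), so you'd have to pin p(0) = 0.5 or otherwise patch the seam:

```python
def success_prob_base(n: int, base: float = 2.0) -> float:
    """The same two-branch formula with a tunable exponent base.
    Caution: for base != 2 the branches disagree at n = 0,
    so we pin p(0) = 0.5 explicitly."""
    if n == 0:
        return 0.5
    if n < 0:
        return 1 / (base ** (-n + 1))
    return 1 - 1 / (base ** (n + 1))

# base=1.5 gives gentler steps near 50%:
# roughly 0.30, 0.44, 0.50, 0.56, 0.70 for levels -2..2
print([round(success_prob_base(n, 1.5), 2) for n in range(-2, 3)])
```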

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy) it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem, but still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
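As a minimal (invented) illustration of the principle: instead of two call sites each carrying their own copy of some normalization logic, both reference one shared function, so a fix lands everywhere at once:

```python
def clean_username(raw: str) -> str:
    """One shared definition; a bug fix here fixes every caller,
    instead of needing to be re-applied to drifting copies."""
    return raw.strip().lower().replace(" ", "_")

# Both call sites reference the single implementation:
def register(raw_name: str) -> str:
    return clean_username(raw_name)

def login(raw_name: str) -> str:
    return clean_username(raw_name)
```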
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we see what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too-baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers, if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI.
#AI #GenAI #LLMs #VibeCoding

@tiotasram@kolektiva.social
2025-07-17 13:31:49

To add a single example here (feel free to chime in with your own):
Problem: editing code is sometimes tedious because external APIs require boilerplate.
Solutions:
- Use LLM-generated code. Downsides: energy use, code theft, potential for legal liability, makes mistakes, etc. Upsides: popular among some peers, seems easy to use.
- Pick a better library (not always possible).
- Build internal functions to centralize boilerplate code, then use those (benefits: you get a better understanding of the external API, and a more-unit-testable internal code surface; probably less amortized effort).
- Develop a non-LLM system that actually reasons about code at something like the formal semantics level and suggests boilerplate fill-ins based on rules, while foregrounding which rules it's applying so you can see the logic behind the suggestions (needs research).
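The "internal functions to centralize boilerplate" option can be sketched like this (names and endpoint are invented for illustration): one helper owns the external API's base URL, auth header, and content negotiation, so call sites shrink to one line and the helper itself is easy to unit-test without touching the network:

```python
import urllib.request

# Hypothetical external API; placeholder endpoint, not a real service.
API_BASE = "https://api.example.com"

def build_get(path: str, token: str) -> urllib.request.Request:
    """Construct (but don't send) an authenticated GET request.
    All the external API's boilerplate lives here, in one place."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
    )

# A call site no longer repeats URL/header boilerplate:
#   resp = urllib.request.urlopen(build_get("/users/42", token))
```

Separating request construction from sending is also what makes the "more-unit-testable internal code surface" claim concrete: you can assert on the built request directly.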
Obviously LLM use in coding goes beyond this single issue, but there are similar analyses for each potential use of LLMs in coding. In all cases there are:
1. Existing practical solutions that require more effort (or in many cases just seem to but are less-effort when amortized).
2. Near-term researchable solutions that directly address the problem and which would be much more desirable in the long term.
Thus in addition to disastrous LLM effects on the climate, on data laborers, and on the digital commons, they tend to suck us into cheap-seeming but ultimately costly design practices while also crowding out better long-term solutions. Next time someone suggests how useful LLMs are for some task, try asking yourself (or them) what an ideal solution for that task would look like, and whether LLM use moves us closer to or farther from a world in which that solution exists.

@tiotasram@kolektiva.social
2025-05-16 10:20:01

Just finished reading Dream State by Eric Puchner, and it kind of pissed me off. I think I can see exactly why it might be popular with a certain WASPy liberal "literati" type that probably includes a lot of influential reviewers, but to me, its points about love & life, despite being much more complex, ring just about as hollow (and harmful) as a Disney movie.
I've got a lot of quibbles, but I think most galling to me was a throwaway line near the beginning about why platonic relationships get so much less glory in media than romantic ones, when so much of the plot proceeds to revolve around a stale agency-free romantic attraction model that's certainly more complex on its face than a Disney romance but which is ultimately just as misleading.
Go read Loveless or really any YA #OwnVoices romance (especially queer) and you'll be learning more & better lessons about the human condition.