Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_mathPR_bot@mastoxiv.page
2025-10-08 10:05:19

A Universal Moments-Only Bound for Cumulants
Jiechen Zhang
arxiv.org/abs/2510.05739 arxiv.org/pdf/2510.05739

@tiotasram@kolektiva.social
2025-07-30 18:26:14

A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counter arguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI

@arXiv_mathNA_bot@mastoxiv.page
2025-10-08 08:56:59

Optimal $L^2$ Error Estimates for Non-symmetric Nitsche's Methods
Gang Chen, Chaoran Liu, Yangwen Zhang
arxiv.org/abs/2510.05597 arxiv.…

@arXiv_csCV_bot@mastoxiv.page
2025-10-07 12:38:12

Did you just see that? Arbitrary view synthesis for egocentric replay of operating room workflows from ambient sensors
Han Zhang, Lalithkumar Seenivasan, Jose L. Porras, Roger D. Soberanis-Mukul, Hao Ding, Hongchao Shu, Benjamin D. Killeen, Ankita Ghosh, Lonny Yarmus, Masaru Ishii, Angela Christine Argento, Mathias Unberath
arxiv.o…

@arXiv_astrophGA_bot@mastoxiv.page
2025-10-06 09:40:29

Probing the Low Radio Frequency Emission in PG Quasars with the uGMRT - II
Sanna Gulati (NCRA-TIFR), Silpa Sasikumar (Universidad de Concepción), Preeti Kharb (NCRA-TIFR), Luis C. Ho (Peking University), Salmoli Ghosh (NCRA-TIFR), Janhavi Baghel (NCRA-TIFR)
arxiv.org/abs/2510.02736

@arXiv_csDS_bot@mastoxiv.page
2025-10-06 09:03:09

Low Recourse Arborescence Forests Under Uniformly Random Arcs
J Niklas Dahlmeier, D Ellis Hershkowitz
arxiv.org/abs/2510.02950 arxiv.org/pd…

@arXiv_astrophSR_bot@mastoxiv.page
2025-09-04 09:48:31

Crowning the Queen: Membership, Age, Rotation, and Activity for the Open Cluster Coma Berenices
M. A. Ag\"ueros (Columbia University, Laboratoire d'astrophysique de Bordeaux), J. L. Curtis (Columbia University), A. N\'u\~nez (Columbia University), C. Burhenne (Rutgers), P. Rothstein (Columbia University), B. J. Shaham (Columbia University), K. Singh (Columbia University), P. Bergeron (Universit\'e de Montr\'eal), M. Kilic (University of Oklahoma), K. R. Covey (West…

@hex@kolektiva.social
2025-07-16 22:25:58

War is an unconscionable horror. The illusions of "international law" and "rules of war" have led us to believe that war can be clean, managed, and "civilized."
But wars are fought by humans and humans are messy. Humans are not well suited to following orderly rules. Humans respond to their environment. Humans in extraordinary situations can be extraordinarily vindictive and brutal. Sufficiently traumatized humans can act without a conscience, spreading trauma like an infection. If humans respond to their situation, then there can be no "civilized" war, because war is itself a situation outside of society. It is a place that promotes antisocial behavior and punishes pro-social behavior. War cannot be expected to follow "international law" because it is what fills the void created by the failure of "international law" (so long as we rely on nations).
To call for war is to inflict atrocities on civilians. It is to kill the parents and children who serve, and to destroy the combatants who survive. It is to infect both sides with a trauma that will spread if untreated, when soldiers come home or when they become mercenaries in other wars.
And yet... there are times when the brutality, the incompetence, the evil becomes so unbearable that no other option exists, when taking up arms is simply bringing symmetry to an existing asymmetric conflict. There are times when the worst possible thing is inescapable, though it can never be justified.
In this new era of war, in the scramble of conflict under the collapse of the (poorly named) "Pax Americana," I hope that we, the people, can understand that war is not a tool to fulfill an objective. It is not part of a larger strategy. It is not an extension of diplomacy.
War is a failure.
While it may be the only way to deal with the irrational - the genocidal, the slaver, the dictator - it is still a failure. It is a failure to build a world in which these people can't control armies and economies, can't turn populations into cults and bend nations to their will.
And we will continue to have such wars until we unite against those who would use us as pawns, who would control our lives and lead us to our deaths. We will have these wars until we unite, as one world, against those rulers. This is what I mean, and what a lot of other people mean, when we say, "No War, but Class War."

@tiotasram@kolektiva.social
2025-09-13 23:43:29

TL;DR: what if nationalism, not anarchy, is futile?
Since I had the pleasure of seeing the "what would anarchists do against a warlord?" argument again in my timeline, I'll present again my extremely simple proposed solution:
Convince the followers of the warlord that they're better off joining you in freedom, then kill or exile the warlord once they're alone or vastly outnumbered.
Remember that even in our own historical moment, where nothing close to large-scale free society has existed in living memory, the warlord's promise of "help me oppress others and you'll be richly rewarded" is a lie that many understand is historically a bad bet. Many, many people currently take that bet, for a variety of reasons, and they're enough to coerce an even larger number of others through fear. But although we imagine, just as medieval peasants might have imagined of monarchy, that such a structure is both the natural order of things and much too strong to possibly fail, in reality it takes an enormous amount of energy, coordination, and luck for these structures to persist! Nations crumble every day, and none has survived more than a couple *hundred* years, compared to pre-nation societies which persisted for *tens of thousands of years* if not more. In this bubbling froth of hierarchies, the notion that hierarchy is inevitable is certainly popular, but since there's clearly a bit of an ulterior motive to make (and teach) that claim, I'm not sure we should trust it.
So what I believe could form the preconditions for future anarchist societies to avoid the "warlord problem" is merely: a widespread common-sense belief that letting anyone else have authority over you is morally suspect. Given such a belief, a warlord will have a hard time building any following at all, and their opponents will have an easy time getting their supporters to defect. In fact, we're already partway there, relative to the situation a couple hundred years ago. At that time, someone could claim "you need to obey my orders and fight and die for me because the Queen was my mother" and that was actually a quite successful strategy. Nowadays, this strategy is only still working in a few isolated places, and the idea that one could *start a new monarchy* or even resurrect a defunct one seems absurd. So why can't that same transformation from "this is just how the world works" to "haha, how did anyone ever believe *that*?" also happen to nationalism in general? I don't see an obvious reason why not.
Now I think one popular counterargument to this is: if you think non-state societies can win out with these tactics, why didn't they work for American tribes in the face of the European colonizers? (Or insert your favorite example of colonialism here.) I think I can imagine a variety of reasons, from the fact that many of those societies didn't try this tactic (and/or were hierarchical themselves), to the impacts of disease weakening those societies pre-contact, to the fact that with much greater communication and education possibilities it might work better now, to the fact that most of those tribes are *still* around, and a future in which they persist longer than the colonist ideologies actually seems likely to me, despite the fact that so much cultural destruction has taken place. In fact, if the modern-day descendants of the colonized tribes sow the seeds of a future society free of colonialism, that's the ultimate demonstration of the futility of hierarchical domination (I just read "Theory of Water" by Leanne Betasamosake Simpson).
I guess the TL;DR on this is: what if nationalism is actually as futile as monarchy, and we're just unfortunately living in the brief period during which it is ascendant?

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. It makes the code more error-prone and harder to debug, because the copies can drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy).
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules, and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem. But still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented everything yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" as open-source software builds on other open-source software, and as people contribute patches to each other's projects, is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
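To make the small-scale version concrete, here's a minimal sketch in Python (hypothetical names, purely illustrative):

# Before: the same validation logic pasted into two functions.
def register_user(email):
    if "@" not in email or email.startswith("@"):
        raise ValueError(f"invalid email: {email}")
    ...

def update_contact(email):
    if "@" not in email or email.startswith("@"):
        raise ValueError(f"invalid email: {email}")
    ...

# After: one shared helper, so any fix or improvement lands in exactly one place.
def validate_email(email):
    """Raise ValueError if the address is obviously malformed."""
    if "@" not in email or email.startswith("@"):
        raise ValueError(f"invalid email: {email}")

def register_user(email):
    validate_email(email)
    ...

def update_contact(email):
    validate_email(email)
    ...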
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this (a sketch of what such a check might look like follows this list), but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we see what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls (see the second sketch below), but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger-sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
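On the first point, here's my own minimal sketch of what a replication check could look like: fingerprinting every n-line window of generated code against an indexed corpus. This is an assumption-laden illustration, not anything the LLM vendors actually ship; a real system would also need code normalization and license metadata.

import hashlib

def shingles(code, n=6):
    """Hash every n-line window (shingle) of a piece of source text."""
    lines = [ln.strip() for ln in code.splitlines() if ln.strip()]
    for i in range(len(lines) - n + 1):
        window = "\n".join(lines[i:i + n])
        yield hashlib.sha256(window.encode()).hexdigest()

def build_index(corpus, n=6):
    """Map each shingle hash to the open-source file it came from."""
    index = {}
    for path, code in corpus.items():  # corpus: {file path: source text}
        for h in shingles(code, n):
            index[h] = path
    return index

def flag_overlaps(generated, index, n=6):
    """Return corpus files that share a verbatim n-line window with LLM output."""
    return {index[h] for h in shingles(generated, n) if h in index}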
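And on the second point, a minimal sketch of the kind of signature lookup I mean, using Python's standard inspect module to validate a proposed call against a function's real signature instead of trusting the model's guess (again, my illustration, not a feature of any actual tool):

import inspect
import statistics

def check_call(func, /, *args, **kwargs):
    """Check whether args/kwargs actually fit func's real signature."""
    sig = inspect.signature(func)
    try:
        sig.bind(*args, **kwargs)  # raises TypeError if the call wouldn't fit
        return True, f"{func.__name__}{sig}"
    except TypeError as err:
        return False, f"{func.__name__}{sig}: {err}"

# A model might guess that statistics.mean takes weights= (confusing it
# with numpy.average); binding against the real signature catches that.
ok, detail = check_call(statistics.mean, [1, 2, 3], weights=[1, 1, 1])
print(ok, detail)  # False mean(data): got an unexpected keyword argument 'weights'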
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them, you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open-source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI.
#AI #GenAI #LLMs #VibeCoding