Tootfinder

Opt-in global Mastodon full text search.

China’s modern influence is primarily built on a formidable economy, dramatic advancements in renewable energy, and a willingness to engage globally with the greatest crisis facing humanity: climate breakdown.
In that sense, the tanks, cannon and missiles that filed past Tiananmen Square may well prove less important in reshaping the world order than the wind turbines, solar panels and electric cars that are churning out of Chinese factories.
China has already won the battle for t…

@arXiv_eessSP_bot@mastoxiv.page
2025-08-26 10:53:37

Synchrosqueezed X-Ray Wavelet-Chirplet Transform for Accurate Chirp Rate Estimation and Retrieval of Modes from Multicomponent Signals with Crossover Instantaneous Frequencies
Qingtang Jiang, Shuixin Li, Jiecheng Chen, Lin Li
arxiv.org/abs/2508.17942

@arXiv_csDL_bot@mastoxiv.page
2025-06-27 08:39:59

The State of Papers, Retractions, and Preprints: Evidence from the CrossRef Database (2004-2024)
Khalid M. Saqr
arxiv.org/abs/2506.21232

The creators of “South Park” aren’t holding back. Nor are Stephen Colbert and Jon Stewart. Just a few weeks after Paramount settled a lawsuit with Donald Trump, and less than a week after the company made the abrupt decision to cancel “The Late Show With Stephen Colbert,” some of the company’s marquee names have been using their Paramount platforms to attack their corporate bosses — as well as the president. In the season premiere of the animated Comedy Ce…

@arXiv_astrophSR_bot@mastoxiv.page
2025-07-23 08:41:02

On the Image Profiles of Transients in the Palomar Sky Survey
Beatriz Villarroel, Enrique Solano, Geoffrey W. Marcy
arxiv.org/abs/2507.15896

@thomastraynor@social.linux.pizza
2025-08-22 13:34:53

Sigh, looking for a replacement printer and the specs are less than helpful.
Wi-Fi Enabled - Yes
Well that is helpful (not). Is it just 2.4 GHz, 5 GHz, or dual-band? What level of 802.11? I want one that supports 802.11a or 802.11ac or 802.11ax.
en.wikipedia.org/wiki/IEEE_802

@davidaugust@mastodon.online
2025-07-09 16:11:37

So Assemblyperson Nick Schultz explains well how POTUS is acting as a despot and how ICE and the military are out of line. Local law enforcement can handle any civil unrest.
instagram.com/reel/DL453gcxFr5/

@thomasfuchs@hachyderm.io
2025-09-09 15:08:41

So if you get into my replies and are proud of not caring about your code, whether it is well-written and well thought out, and think it's fine that LLMs write it for you... I will block you.
Literally your opinion is worth less than shit.

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (or even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a lot of careful setup that those students don't know how to do, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they realize that they don't understand it and re-prompt, or quickly ask for help from an instructor or TA who helps them get rid of the stuff they don't understand and re-prompt or manually add stuff they do understand. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that seeing these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
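To make that concrete, here's a hypothetical snippet of my own (not from any actual LLM transcript or course) in the style an assistant often produces for a routine task. Nothing in it is wrong, but it leans on the walrus operator, while/else, and try/finally, exactly the constructs listed above that a second- or third-year student may never have seen:

```python
import json

def sum_scores(path):
    """Sum the 'score' field of newline-delimited JSON records in a file."""
    total = 0
    try:
        with open(path) as f:
            # Walrus operator: bind the stripped line and test it in one expression.
            while (line := f.readline().strip()):
                total += json.loads(line).get("score", 0)
            else:
                # while/else: this branch runs only when the loop ends without `break`.
                print("Reached end of input without interruption.")
    finally:
        # try/finally: runs whether or not open() or json.loads() raised an exception.
        print(f"Finished with {path!r}; running total: {total}")
    return total
```

A student who has only ever used basic for loops and plain file reading will have real trouble debugging this when something goes wrong, even though the code itself is technically fine.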
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@mgorny@social.treehouse.systems
2025-07-14 16:39:18

About morbid thriftiness (Autism Spectrum Condition)
As you may have noticed, I am morbidly thrifty. Usually I don't buy stuff that I don't need — and if I decide that I actually need something, I am going to ponder about it for a while, look for value products, and for the best price. And with some luck, I'm going to decide I don't need it that bad after all.
One reason for that is probably how I was raised. My parents taught me to be thrifty, so I have to be. It doesn't matter that, in retrospect, I see that their thriftiness was applied rather arbitrarily to some kinds of spending and not others, or that perhaps they were greedy — spending less on individual things so that they could buy more. Well, I can't delude myself like that, so I have to be thrifty for real. And when I fail, when I pay too much, when I get cheated — I feel quite bad about it.
The other reason is that I keep worrying about my future. It doesn't matter how rich I may end up — I'll keep worrying that I'll run out of money in the future. Perhaps I'll lose a job and won't be able to find anything for a long time. Perhaps something terrible will happen and I'm going to need to pay a lot suddenly.
Another thing is that I easily get attached to objects. Well, it's easier to be thrifty when you really don't want to replace stuff. Over time you also learn to avoid getting new stuff at all, since the more stuff you have, the more stuff may break and need to be thrown away.
Finally, there's my environmental responsibility. I admit that I don't do enough — but at least the things I can do, I do.
[EDIT: and yes, I feel bad about how expensive my new phone was, even though it's of much higher quality than the last one. Also, I got a worse deal because I waited too long.]
#ActuallyAutistic

@grumpybozo@toad.social
2025-08-16 00:31:31

Random tangent: I heard recently that a native turtle in Florida has been found eating them as well, not by any sort of toxin avoidance, but because they have a genetic immunity to bufotoxin.
mathstodon.xyz/@gregeganSF/115

@wwwgem@social.linux.pizza
2025-08-16 00:51:48

You may know about #taskwarrior, but do you know #vit?
www-gem.codeberg.page/cli_task

@arXiv_mathPR_bot@mastoxiv.page
2025-09-22 09:39:41

Sharpness of the phase transition for constrained-degree percolation
Ivailo Hartarsky, Roger W. C. Silva
arxiv.org/abs/2509.16162 arxiv.org…

@arXiv_csCL_bot@mastoxiv.page
2025-09-17 10:30:40

Investigating ReLoRA: Effects on the Learning Dynamics of Small Language Models
Yuval Weiss, David Demitri Africa, Paula Buttery, Richard Diehl Martinez
arxiv.org/abs/2509.12960

@arXiv_astrophEP_bot@mastoxiv.page
2025-07-18 09:14:12

A dynamical dichotomy in large binary asteroids
K. Minker, B. Carry, F. Vachier, M. Marsset, J. Ďurech, J. Hanuš, L. Liberato, W. J. Merline, J. L. Margot, C. Dumas, L. M. Close, A. Conrad, W. M. Grundy, R. Behrend, R. Roy, J. Berthier, I. Sokova, E. Sokov, D. Gorshanov, M. Ferrais, E. Jehin, A. Martin, K. B. Alton

@Mediagazer@mstdn.social
2025-08-07 16:40:56

SEC filing: Paramount Skydance CEO David Ellison and President Jeff Shell will get paid no less than $3.5M per year and have bonus targets of $1.5M per year (Alex Weprin/The Hollywood Reporter)
hollywoodreporter.com/business

@andres4ny@social.ridetrans.it
2025-09-01 19:33:48

Before we had kids, we had a crappy electric stove (and after that, a less crappy but still crappy gas stove) that wasted a bunch of heat. It heats the coil, then the pan, then whatever you cooked. I'd fry an egg, add cheese and fake deli meat, and then turn off the stove and put it all on a tortilla in the pan; the residual heat, rather than being wasted, would continue melting the cheese as well as making the tortilla crispy (with upturned edges). Less waste heat, and delicious.

@floheinstein@chaos.social
2025-07-04 11:37:16

It's less than 6 months to Xmas, so you might as well vote for the Lego Idea
"National Lampoon's Christmas Vacation - Griswold House"
beta.ideas.lego.com/product-id

The house of the Griswold family from the holiday movie National Lampoon's Christmas Vacation, including their camper, completely from Lego.

@paulbusch@mstdn.ca
2025-08-22 11:55:01

Good Morning #Canada
Yesterday, we upgraded our water filtration and, to ease the pain in my wallet, I thought I would share some facts on well water for those who care.
- approximately 11% of Canadians rely on non-municipal water sources.
- the vast majority of wells are drilled wells because they are safer, provide higher volume, and generally last longer.
- dug wells (like ours) are less common and are usually placed where there is a high water table. They are more susceptible to surface runoff.
- well water, although free, is not necessarily cheaper than municipal supply. There is a large upfront cost, which can vary greatly depending on soil conditions, but $25K for drilled and $10K for dug is not uncommon.
- a pump and filtration equipment can cost another $10K, depending on the water treatment needed. We needed an additional iron filter due to a high iron concentration. Sediment filters and UV treatment require annual maintenance, typically $400.
#CanadaIsAwesome #Water #GlassHalfFull

@tiotasram@kolektiva.social
2025-07-28 13:06:20

How popular media gets love wrong
Now a bit of background about why I have this "engineered" model of love:
First, I'm a white straight cis man. I've got a few traits that might work against my relationship chances (e.g., neurodivergence; I generally fit pretty well into the "weird geek" stereotype), but as I was recently reminded, it's possible my experience derives more from luck than other factors, and since things are tilted more in my favor than most people on the planet, my advice could be worse than useless if it leads people towards strategies that would only have worked for someone like me. I don't *think* that's the case, but it's worth mentioning explicitly.
When I first started dating my now-wife, we were both in graduate school. I was 26, and had exactly zero dating/romantic experience through that point in my life. In other words, a pretty stereotypical "incel," although I definitely didn't subscribe to incel ideology at all. I felt lonely, and vaguely wanted a romantic relationship (I'm neither aromantic nor asexual), but had never felt socially comfortable enough to pursue one before. I don't drink and dislike most social gatherings like parties or bars; I mostly hung around the fringes of the few college parties I attended, and although I had a reasonable college social life in terms of friends, I didn't really do anything to pursue romance, feeling too awkward to know where to start. I had the beginnings of crushes in both high school and college, but never developed a really strong crush, probably correlated with not putting myself in many social situations outside of close all-male friend gatherings. I never felt remotely comfortable enough to act on any of the proto-crushes I did have. I did watch porn and masturbate, so one motivation for pursuing a relationship was physical intimacy, but loneliness was as much of a motivating factor, and of course the social pressure to date was a factor too, even though I'm quite contrarian.
I'm lucky in that I had some mixed-gender social circles already like intramural soccer and a graduate-student housing potluck. Graduate school makes a *lot* more of these social spaces accessible, so I recognize that those not in school of some sort have a harder time of things, especially if like me they don't feel like they fit in in typical adult social spaces like bars.
However, at one point I just decided that my desire for a relationship would need action on my part and so I'd try to build a relationship and see what happened. I worked up my courage and asked one of the people in my potluck if she'd like to go for a hike (pretty much clearly a date but not explicitly one; in retrospect not the best first-date modality in a lot of ways, but it made a little more sense in our setting where we could go for a hike from our front door). To emphasize this point: I was not in love with (or even infatuated with) my now-wife at that point. I made a decision to be open to building a relationship, but didn't follow the typical romance story formula beyond that. Now of course, in real life as opposed to popular media, this isn't anything special. People ask each other out all the time just because they're lonely, and some of those relationships turn out fine (although many do not).
I was lucky in that some aspects of who I am and what I do happened to be naturally comforting to my wife (natural advantage in the "appeal" model of love) but of course there are some aspects of me that annoy my wife, and we negotiate that. In the other direction, there's some things I instantly liked about my wife, and other things that still annoy me. We've figured out how to accept a little, change a little, and overall be happy with each other (though we do still have arguments; it's not like the operation/construction/maintenance of the "love mechanism" is always perfectly smooth). In particular though, I approached the relationship with the attitude of "I want to try to build a relationship with this person," at first just because of my own desires for *any* relationship, and then gradually more and more through my desire to build *this specific* relationship as I enjoyed the rewards of companionship.
So for example, while I think my wife is objectively beautiful, she's also *subjectively* very beautiful *to me* because having decided to build a relationship with her, I actively tried to see her as beautiful, rather than trying to judge whether I wanted a relationship with her based on her beauty. In other words, our relationship is more causative of her beauty-to-me than her beauty-to-me is causative of our relationship. This is the biggest way I think the "engineered" model of love differs from the "fire" and "appeal" models: you can just decide to build love independent of factors we typically think of as engendering love (NOT independent of your partner's willingness to participate, of course), and then all of those things like "thinking your partner is beautiful" can be a result of the relationship you're building. For sure those factors might affect who is willing to try building a relationship with you in the first place, but if more people were willing to jump into relationship building (not necessarily with full commitment from the start) without worrying about those other factors, they might find that those factors can come out of the relationship instead of being prerequisites for it. I think this is the biggest failure of the "appeal" model in particular: yes you *do* need to do things that appeal to your partner, but it's not just "make myself lovable" it's also: is your partner putting in the effort to see the ways that you are beautiful/lovable/etc., or are they just expecting you to become exactly some perfect person they've imagined (and/or been told to desire by society)? The former is perfectly possible, and no less satisfying than the latter.
To cut off my rambling a bit here, I'll just add that in our progress from dating through marriage through staying-married, my wife and I have both talked at times explicitly about commitment, and especially when deciding to get married, I told her that I knew I couldn't live up to the perfect model of a husband that I'd want to be, but that if she wanted to deepen our commitment, I was happy to do that, and so we did. I also rearranged my priorities at that point, deciding that I knew I wanted to prioritize this relationship above things like my career or my research interests, and while I've not always been perfect at that in my little decisions, I've been good at holding to that in my big decisions at least. In the end, *once we had built a somewhat-committed relationship*, we had something that we both recognized was worth more than most other things in life, and that let us commit even more, thus getting even more out of it in the long term. Obviously you can't start the first date with an expectation of life-long commitment, and you need to synchronize your increasing commitment to a relationship so that it doesn't become lopsided, which is hard. But if you take the commitment as an active decision and as the *precursor* to things like infatuation, attraction, etc., you can build up to something that's incredibly strong and rewarding.
I'll follow this up with one more post trying to distill some advice from my ramblings.
#relationships #love

@arXiv_astrophSR_bot@mastoxiv.page
2025-08-19 09:34:10

The highest mass Kepler red giants -- II. Spectroscopic parameters, the amplitude-activity relation, and unexpected halo orbits
Courtney L. Crawford, Yaguang Li, Daniel Huber, Jie Yu, Timothy R. Bedding, Sarah L. Martell, Benjamin T. Montet, Dennis Stello, Howard Isaacson, Andrew W. Howard, Benjamin J. Fulton, Jingwen Zhang, Alex S. Polanski, Lauren M. Weiss

@cowboys@darktundra.xyz
2025-09-04 14:44:36

Mailbag: Catching Philly at the right time? dallascowboys.com/news/mailbag

@arXiv_csCV_bot@mastoxiv.page
2025-08-01 10:23:51

Phi-Ground Tech Report: Advancing Perception in GUI Grounding
Miaosen Zhang, Ziqiang Xu, Jialiang Zhu, Qi Dai, Kai Qiu, Yifan Yang, Chong Luo, Tianyi Chen, Justin Wagle, Tim Franklin, Baining Guo
arxiv.org/abs/2507.23779

@mgorny@social.treehouse.systems
2025-09-02 11:29:40

Well, turns out that my laptop is even less portable than I thought. On top of the dead battery, I've just discovered that the wifi card goes nuts when I move the laptop a little, which in the best case results in a kernel panic, and in the worst in a plain old hardware hang.

@tiotasram@kolektiva.social
2025-08-11 13:26:07

How the US democracy is designed to avoid representation
Right now in the US, under a system which proclaims to give each citizen representation, my interests are not represented very well by most of my so-called representatives at any level of government. This is true for a majority of Americans across the political spectrum, and it happens by design. The "founding fathers" were explicit about wanting a system of government that would appear democratic but which would keep power in the hands of rich white landowners, and they successfully designed exactly that. But how does disenfranchisement work in this system?
First, a two-party system locked in by first-past-the-post, winner-takes-all elections immediately destroys representation for everyone who didn't vote for the winner, including those who didn't vote or weren't eligible to vote. Single-day, non-holiday elections and prisoner disenfranchisement go a long way towards ensuring working-class people get no say, but much larger is the winner-takes-all system. In fact, even people who vote for the winning candidate don't get effective representation if they're really just voting against the opponent as the greater of two evils. In a 51/49 election with 50% turnout, you've immediately ensured that ~75% of eligible voters don't get represented, and with lesser-of-two-evils voting, you create an even wider gap to wedge corporate interests into. Politicians need money to saturate their lesser-of-two-evils message far more than they need to convince any individual voter to support their policies. It's even okay if they get caught lying, cheating, or worse (cough Epstein cough) as long as the other side is also doing those things and you can freeze out new parties.
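As a back-of-the-envelope check on that ~75% figure, here is a small calculation of my own, using the 51/49 split and 50% turnout from the paragraph above:

```python
# Back-of-the-envelope check of the "~75% unrepresented" claim.
turnout = 0.50          # fraction of eligible voters who actually vote
winner_share = 0.51     # winner's share of the votes cast

non_voters = 1 - turnout                      # 0.50 of eligible voters
losing_voters = turnout * (1 - winner_share)  # 0.245 of eligible voters
unrepresented = non_voters + losing_voters    # 0.745

print(f"Unrepresented share of eligible voters: {unrepresented:.1%}")  # ~74.5%
```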
Second, by design the Senate ensures uneven representation, allowing the least-populous half of the states to control or at least shut down the legislative process. A rough count suggests 284.6 million people live in the 25 most-populous states, while only 54.8 million live in the rest. Currently, counting states with divided representation as two half-states with half as much population, 157.8 million people are represented by 53 Republican senators, while 180.5 million people get only 45 seats of Democratic representation. This isn't an anti-Democrat bias; it's a bias towards less-populous states, whose residents get more than their share of political power.
I haven't even talked about gerrymandering yet, or family/faith-based "party loyalty," etc. Overall, the effect is that the number of people whose elected representatives meaningfully represent their interests on any given issue is vanishingly small (like, 10% of people tops), unless you happen to be rich enough to purchase lobbying power or direct access.
If we look at polls, we can see how lack of representation lets Congress & the president enact many policies that go against what a majority of the population wants. Things like abortion restrictions, the current ICE raids, and Medicare cuts are deeply unpopular, but they benefit the political class and those who can buy access. These are possible because the system ensures at every step of the way that ordinary people do NOT get the one thing the system promises them: representation in the halls of power.
Okay, but is this a feature of all democracies, inherent in the nature of a majority-decides system? Not exactly...
1/2
#uspol #democracy

@arXiv_physicsfludyn_bot@mastoxiv.page
2025-08-05 08:26:00

Breakup cascade in gas filament
Aliénor Rivière (PMMH), Zehua Liu (MAE), Jishen Zhang (PMMH), Laurent Duchemin (PMMH), Luc Deike (HMEI), Stéphane Perrard (PMMH)
arxiv.org/abs/2508.00872

@arXiv_condmatstatmech_bot@mastoxiv.page
2025-08-05 09:56:40

Momentum distribution and correlation of free particles in the Tsallis statistics using conventional expectation value and equilibrium temperature
Masamichi Ishihara
arxiv.org/abs/2508.01609

@arXiv_mathAP_bot@mastoxiv.page
2025-09-05 08:42:41

Energy decay and blow-up of viscoelastic wave equations with polynomial nonlinearity and damping
Qingqing Peng, Yikan Liu
arxiv.org/abs/2509.03799

@arXiv_physicsappph_bot@mastoxiv.page
2025-07-10 08:23:51

Theory of Dielectric Behavior in Composites
Lifeng Hao, Fan Li, Yongqi Li, Siyong Wang, Xiaodong He
arxiv.org/abs/2507.06240

@arXiv_condmatmtrlsci_bot@mastoxiv.page
2025-08-01 09:39:41

Electron Doping Stabilization of Highly-Polar Supertetragonal BaSnO3
Qing Zhang, Karin M. Rabe, Xiaohui Liu
arxiv.org/abs/2507.23649 arxiv.…

@tiotasram@kolektiva.social
2025-08-11 13:30:26

Speculative politics
As an anarchist (okay, maybe not in practice), I'm tired of hearing why we have to suffer X and Y indignity to "preserve the rule of law" or "maintain Democratic norms." So here's an example of what representative democracy (a form of government that I believe is inherently flawed) could look like if its proponents had even an ounce of imagination, and/or weren't actively trying to rig it to favor a rich donor class:
1. Unicameral legislature, where representatives pass laws directly. Each state elects 3 statewide representatives: the three most-popular candidates in a statewide race where each person votes for one candidate (ranked preference voting would be even better but might not be necessary, and is not a solution by itself). Instead of each representative getting one vote in the chamber, they get N votes, where N is the number of people who voted for them (see the sketch after this list). This means that in a close race, instead of the winner getting all the power, the power is split. Having 3 representatives trades off between legislature size and ensuring that two parties can't dominate together.
2. Any individual citizen can contact their local election office to switch or withdraw their vote at any time (maybe with a 3-day delay or something). Voting power of representatives can thus shift even without an election. They are limited to choosing one of the three elected representatives, or "none of the above." If the "none of the above" fraction exceeds 20% of eligible voters, a new election is triggered for that state. If turnout is less than 80%, a second election happens immediately, with results being final even at lower turnout until 6 months later (some better mechanism for turnout management might be needed).
3. All elections allow mail-in ballots, and in-person voting happens Sunday-Tuesday with the Monday being a mandatory holiday. (Yes, election integrity is not better in this system and that's a big weakness.)
4. Separate nationwide elections elect three positions for head-of-state: one with diplomatic/administrative powers, another with military powers, and a third with veto power. For each position, the top three candidates serve together, with only the first-place winner having actual power until vote switches or withdrawals change who that is. Once one of these heads loses their first-place status, they cannot get it again until another election, even if voters switch preferences back (to avoid dithering). An election for one of these positions is triggered when 20% have withdrawn their votes, or if all three people initially elected have been disqualified by losing their lead in the vote count.
5. Laws that involve spending money are packaged with specific taxes to pay for them, and may only be paid for by those specific revenues. Each tax may be opted into or out of by each taxpayer; where possible opting out of the tax also opts you out of the service. (I'm well aware of a lot of the drawbacks of this, but also feel like they'd not necessarily be worse than the drawbacks of our current system.) A small mandatory tax would cover election expenses.
6. I'm running out of attention, but similar multi-winner elections could elect panels of judges from which a subset is chosen randomly to preside in each case.
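To make points 1 and 2 concrete, here is a minimal toy sketch of my own; the names and numbers are invented, and only the N-weighted votes and the 20% withdrawal threshold come from the list above:

```python
from dataclasses import dataclass, field

@dataclass
class Representative:
    name: str
    supporters: int  # N: number of citizens currently backing this representative

@dataclass
class StateDelegation:
    eligible_voters: int
    reps: list[Representative] = field(default_factory=list)
    none_of_the_above: int = 0

    def chamber_votes(self) -> dict[str, int]:
        # Point 1: each representative casts N votes in the chamber, not one.
        return {rep.name: rep.supporters for rep in self.reps}

    def withdraw(self, rep_name: str, count: int) -> None:
        # Point 2: citizens can switch their backing to "none of the above" at any time.
        for rep in self.reps:
            if rep.name == rep_name:
                moved = min(count, rep.supporters)
                rep.supporters -= moved
                self.none_of_the_above += moved

    def needs_new_election(self) -> bool:
        # A new election triggers once "none of the above" exceeds 20% of eligible voters.
        return self.none_of_the_above > 0.20 * self.eligible_voters

# Toy example with made-up numbers.
state = StateDelegation(eligible_voters=1_000_000,
                        reps=[Representative("A", 400_000),
                              Representative("B", 350_000),
                              Representative("C", 150_000)])
print(state.chamber_votes())        # {'A': 400000, 'B': 350000, 'C': 150000}
state.withdraw("A", 250_000)
print(state.needs_new_election())   # True: 250,000 > 200,000
```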
Now I'll point out once again that this system, in not directly confronting capitalism, racism, patriarchy, etc., is probably doomed to the same failures as our current system. But if you profess to want a "representative democracy" as opposed to something more liberatory, I hope you'll at least advocate for something like this that actually includes meaningful representation, as opposed to the current US system that's engineered to quash it.
Key questions: "Why should we have winner-take-all elections when winners-take-proportionately-to-votes is right there?" and "Why should elected officials get to ignore their constituents' approval except during elections, when vote-withdrawal or -switching is possible?"
2/2
#Democracy

@tiotasram@kolektiva.social
2025-08-02 13:28:40

How to tell a vibe coder is lying when they say they check their code.
People who will admit to using LLMs to write code will usually claim that they "carefully check" the output, since we all know that LLM code has a lot of errors in it. This is insufficient to address several problems that LLMs cause, including labor issues, digital commons stress/pollution, license violation, and environmental issues, but at least if they are checking their code carefully, we shouldn't assume that it's any worse quality-wise than human-authored code, right?
Well, from first principles alone we can expect it to be worse, since checking code the AI wrote is a much more boring task than writing code yourself, so anyone who has ever studied human-computer interaction even a little bit can predict people will quickly slack off, starting to trust the AI way too much, because it's less work. In a different domain, the journalist who published an entire "summer reading list" full of nonexistent titles is a great example of this. I'm sure he also intended to carefully check the AI output, but then got lazy. Clearly he did not have a good grasp of the likely failure modes of the tool he was using.
But for vibe coders, there's one easy tell we can look for, at least in some cases: coding in Python without type hints. To be clear, this doesn't apply to novice coders, who might not be aware that type hints are an option. But any serious Python software engineer, whether they used type hints before or not, would know that they're an option. And if you know they're an option, you also know they're an excellent tool for catching code defects, with a very low effort:reward ratio, especially if we assume an LLM generates them. Of the cases where adding types requires any thought at all, 95% of them offer chances to improve your code design and make it more robust. Knowing about but not using type hints in Python is a great sign that you don't care very much about code quality. That's totally fine in many cases: I've got a few demos or jam games in Python with no type hints, and it's okay that they're buggy. I was never going to debug them to a polished level anyways. But if we're talking about a vibe coder who claims that they're taking extra care to check for the (frequent) LLM-induced errors, that's not the situation.
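As a concrete (hypothetical) illustration of that effort:reward ratio: a couple of annotations let a static checker like mypy flag a plausible LLM-style slip before the code ever runs, at essentially zero ongoing cost:

```python
def load_scores(raw: str) -> list[float]:
    # A plausible LLM-style slip: splits the string but never converts to float.
    # With the annotation in place, mypy reports:
    #   Incompatible return value type (got "list[str]", expected "list[float]")
    # so the defect is caught before anything runs.
    return raw.split(",")

def average_score(scores: list[float]) -> float:
    """Return the mean of a non-empty list of scores."""
    return sum(scores) / len(scores)
```

Without the hints, the same slip only surfaces later as a confusing runtime TypeError inside average_score, which is exactly the kind of LLM-introduced defect a "careful" reviewer is supposed to catch by hand.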
Note that this shouldn't be read as an endorsement of vibe coding for demos or other rough-is-acceptable code: the other ethical issues I skipped past at the start still make it unethical to use in all but a few cases (for example, I have my students use it for a single assignment so they can see for themselves how it's not all it's cracked up to be, and even then they have an option to observe a pre-recorded prompt session instead).

@ruth_mottram@fediscience.org
2025-08-28 21:31:38

I do not like that we in Denmark and the EU have to spend so much on weapons and defence rather than welfare and quality of life. But I like even less the idea of sending my children to fight wars we were told couldn't happen and losing control over our own rights and protections.
And if we want to get anything done on a range of subjects from #ClimateChange and #BiodiversityCrisis to #democracy, #Humanrights, #equality and bodily autonomy, as well as #techRegulation and ethical AI, then we need to carry the big sticks.
And we need to do all this without leaving Ukraine as a hostage to fortune.
However, we're here and we're not going away...

@tiotasram@kolektiva.social
2025-07-19 07:51:05

AI, AGI, and learning efficiency
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak & understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously *high* estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities are developed in many children, etc. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
The point here is that large language models, trained as they are on multiple *billions* of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one (see e.g. #AI #LLM #AGI