Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_quantph_bot@mastoxiv.page
2025-08-12 09:45:23

Unambiguous discrimination of the change point for quantum channels
Kenji Nakahira
arxiv.org/abs/2508.06785 arxiv.org/pdf/2508.06785

@arXiv_mathCT_bot@mastoxiv.page
2025-09-10 07:52:21

On fibred products of toposes
Léo Bartoli, Olivia Caramello
arxiv.org/abs/2509.07719 arxiv.org/pdf/2509.07719

@arXiv_hepth_bot@mastoxiv.page
2025-10-09 10:16:01

Observables of boundary RG flows from string field theory
Jaroslav Scheinpflug, Martin Schnabl, Jakub Vošmera
arxiv.org/abs/2510.07155

@arXiv_statML_bot@mastoxiv.page
2025-09-09 09:06:22

MOSAIC: Minimax-Optimal Sparsity-Adaptive Inference for Change Points in Dynamic Networks
Yingying Fan, Jingyuan Liu, Jinchi Lv, Ao Sun
arxiv.org/abs/2509.06303

@arXiv_csAI_bot@mastoxiv.page
2025-09-05 09:59:51

Oruga: An Avatar of Representational Systems Theory
Daniel Raggi, Gem Stapleton, Mateja Jamnik, Aaron Stockdill, Grecia Garcia Garcia, Peter C-H. Cheng
arxiv.org/abs/2509.04041

@arXiv_mathPR_bot@mastoxiv.page
2025-09-11 08:34:53

Quenched and annealed heat kernel estimates for Brox's diffusion
Xin Chen, Jian Wang
arxiv.org/abs/2509.08559 arxiv.org/pdf/2509.08559

@arXiv_qbioQM_bot@mastoxiv.page
2025-09-09 09:26:32

Data-driven discovery of dynamical models in biology
Bartosz Prokop, Lendert Gelens
arxiv.org/abs/2509.06735 arxiv.org/pdf/2509.06735

@arXiv_grqc_bot@mastoxiv.page
2025-10-06 09:18:09

A Conceptual Introduction To Signature Change Through a Natural Extension of Kaluza-Klein Theory
Vincent Moncrief, Nathalie E. Rieger
arxiv.org/abs/2510.02492

@mgorny@social.treehouse.systems
2025-07-22 10:21:15

Time for another "review". This one's hard. While the book was quite interesting, it required me to be quite open-minded. Still, I think it's worth mentioning:
Robert Wright — Nonzero: The Logic of Human Destiny
The book's central thesis is that both biological evolution and cultural evolution are real, that they are directional, and that this directionality can be explained jointly using game theory, as a tendency towards more non-zero-sum games.
It consists of three chapters. The first one is focused on the history of civilization. It features many examples from different parts of the world, which makes it quite interesting. The author argues that culture inevitably evolves as information-processing techniques improve, from writing to the Internet.
The second chapter is focused on biological evolution. Now, the argument is that it's not quite random, but actually directed towards greater complexity — eventually leading to the development of highly intelligent species, and a civilization.
The third chapter is quite speculative and metaphysical, and I'm just going to skip it.
The book is full of optimism. Capitalism creates freedom, because people are more productive when they're working for their own gain, so the free market eliminates slavery. Globalisation creates networks of interdependence that make wars uneconomic. Increased contact between different cultures makes people more tolerant. And eventually, humanity may be able to unite in the face of a common "external" enemy: climate change.
What can I say? The examples are quite interesting, and the whole theory seems self-consistent. Still, I repeatedly looked at the publication date (it's 1999) and wondered whether the author would write the same thing today (yes, I know I can search for his current opinions).
#books #bookstodon @…

@arXiv_mathAP_bot@mastoxiv.page
2025-10-01 10:08:57

On the propagation of mountain waves: linear theory
Adrian Constantin, Jörg Weber
arxiv.org/abs/2509.26125 arxiv.org/pdf/2509.26125

@arXiv_physicsoptics_bot@mastoxiv.page
2025-10-02 09:43:41

Analysis and Design of a Reconfigurable Metasurface based on Chalcogenide Phase-Change Material for Operation in the Near and Mid Infrared
Alexandros Pitilakis, Alexandros Katsios, Alexandros-Apostolos A. Boulogeorgos
arxiv.org/abs/2510.00950

@arXiv_hepph_bot@mastoxiv.page
2025-10-01 09:46:18

Magnetic Helicity, Magnetic Monopoles, and Higgs Winding
Hajime Fukuda, Yuta Hamada, Kohei Kamada, Kyohei Mukaida, Fumio Uchida
arxiv.org/abs/2509.25734

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy) it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed.
At a larger scale, someone might write an open-source "library" of such functions or modules, and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem. Still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented everything yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular.
The network of "citations", as open-source software builds on other open-source software and people contribute patches to each other's projects, is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
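The move from copied code to a shared definition can be sketched in a few lines of Python (the function name and call sites here are illustrative, not from any real project):

```python
# Before: the same normalization logic copied in two places,
# which can silently drift apart as each copy is edited.
post_slug = "Hello World".strip().lower().replace(" ", "-")
tag_slug = "Vibe Coding".strip().lower().replace(" ", "-")

# After (DRY): one definition, referenced everywhere it's needed.
def slugify(text):
    """Normalize a title into a URL-friendly slug."""
    return text.strip().lower().replace(" ", "-")

post_slug = slugify("Hello World")
tag_slug = slugify("Vibe Coding")
```

A bug fix or behavior change in `slugify` now lands in exactly one place, which is the whole point of the principle.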
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open-source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If the big AI companies get what they claim to want, a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility, and therefore maintainability, correctness, and security, will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module and function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances, because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and a source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls; even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. I would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn; while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks they're asymptotically likely to violate it.
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them, you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open-source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI.
#AI #GenAI #LLMs #VibeCoding

@arXiv_condmatquantgas_bot@mastoxiv.page
2025-10-02 08:49:10

Unified theory of attractive and repulsive polarons in one-dimensional Bose gas
Nikolay Yegovtsev, T. Alper Yoğurt, Matthew T. Eiles, Victor Gurarie
arxiv.org/abs/2510.01046

@arXiv_csRO_bot@mastoxiv.page
2025-09-29 09:40:07

Improved Vehicle Maneuver Prediction using Game Theoretic Priors
Nishant Doshi
arxiv.org/abs/2509.21873 arxiv.org/pdf/2509.21873

@arXiv_hepth_bot@mastoxiv.page
2025-09-04 09:53:31

Computing $c$- and $a$-functions from entanglement
Konstantinos Boutivas, Dimitrios Katsinis, Georgios Pastras, Nikolaos Tetradis
arxiv.org/abs/2509.03259

@mapto@qoto.org
2025-08-24 02:19:29

"In a broader sense, however, today's ruling is of a piece with this Court's recent tendencies. "[R]ight when the Judiciary should be hunkering down to do all it can to preserve the law's constraints," the Court opts instead to make vindicating the rule of law and preventing manifestly injurious Government action as difficult as possible… This is Calvinball jurisprudence with a twist. Calvinball has only one rule: There are no fixed rules. We seem to have two: that one, and this Admin…

Calvinball is a reference to a game from the Calvin and Hobbes comic strip whose rules are made up on the fly and constantly changed so that the player in charge wins.

@arXiv_csCL_bot@mastoxiv.page
2025-08-25 10:06:00

A Probabilistic Inference Scaling Theory for LLM Self-Correction
Zhe Yang, Yichang Zhang, Yudong Wang, Ziyao Xu, Junyang Lin, Zhifang Sui
arxiv.org/abs/2508.16456

@arXiv_condmatmeshall_bot@mastoxiv.page
2025-09-24 08:32:44

Interplay of Rashba and valley-Zeeman splittings in weak localization of spin-orbit coupled graphene
L. E. Golub
arxiv.org/abs/2509.18332 a…

@arXiv_physicsbioph_bot@mastoxiv.page
2025-09-29 08:47:08

The relationship between the structural transitions of DMPG membranes and the melting process, and their interaction with water
Thomas Heimburg, Holger Ebel, Peter Grabitz, Julia Preu, Yue Wang
arxiv.org/abs/2509.22457

@arXiv_statME_bot@mastoxiv.page
2025-08-27 09:03:53

Unified theory of testing relevant hypothesis in functional time series
Leheng Cai, Qirui Hu
arxiv.org/abs/2508.18624 arxiv.org/pdf/2508.18…

@arXiv_mathAT_bot@mastoxiv.page
2025-08-21 08:12:20

Enriched model categories and the Dold-Kan correspondence
Martin Frankland, Arnaud Ngopnang Ngompé
arxiv.org/abs/2508.14291 arxiv.org…

@arXiv_mathNT_bot@mastoxiv.page
2025-09-19 09:06:51

Normalized Indexing for Ramification Subgroups
Stephen DeBacker, David Schwein, Cheng-Chiang Tsai
arxiv.org/abs/2509.14881 arxiv.org/pdf/25…

@arXiv_mathFA_bot@mastoxiv.page
2025-09-15 08:13:01

Index theory for non-compact quantum graphs
Daniele Garrisi, Alessandro Portaluri, Li Wu
arxiv.org/abs/2509.09749 arxiv.org/pdf/2509.09749

@arXiv_condmatstrel_bot@mastoxiv.page
2025-09-15 08:30:01

Pseudogap-induced change in the nature of the Lifshitz transition in the two-dimensional Hubbard model
Maria C. O. Aguiar, Helena Bragança, Indranil Paul, Marcello Civelli
arxiv.org/abs/2509.09783

@arXiv_hepth_bot@mastoxiv.page
2025-09-03 12:28:03

Operator Algebras and Third Quantization
Yidong Chen, Marius Junge, Nima Lashkari
arxiv.org/abs/2509.02293 arxiv.org/pdf/2509.02293

@arXiv_hepph_bot@mastoxiv.page
2025-07-21 08:35:00

Theory-informed neural networks for particle physics
Barry M. Dillon, Michael Spannowsky
arxiv.org/abs/2507.13447 arx…

@arXiv_mathPR_bot@mastoxiv.page
2025-09-23 10:27:20

Itô formula for reduced rough paths
Nannan Li, Xing Gao
arxiv.org/abs/2509.17342 arxiv.org/pdf/2509.17342

@arXiv_nlinAO_bot@mastoxiv.page
2025-09-15 08:37:31

Generalizing thermodynamic efficiency of interactions: inferential, information-geometric and computational perspectives
Qianyang Chen, Nihat Ay, Mikhail Prokopenko
arxiv.org/abs/2509.10102

@arXiv_mathAP_bot@mastoxiv.page
2025-08-21 09:15:00

Steady states of FitzHugh-Nagumo-type systems with sign-changing coefficients
João Marcos do Ó, Evelina Shamarova, Victor V. Silva
arxiv.org/abs/2508.14854

@arXiv_hepth_bot@mastoxiv.page
2025-09-18 08:13:41

Detector-based measurement-induced state updates in AdS/CFT
Vijay Balasubramanian, Esko Keski-Vakkuri, Nicola Pranzini
arxiv.org/abs/2509.13457