2026-02-22 18:14:18
Why did Deutsche Bahn number the two connecting cars 628 and 928? Is it because they connect as 69?
On the Generalization Behavior of Deep Residual Networks From a Dynamical System Perspective
Jinshu Huang, Mingfei Sun, Chunlin Wu
https://arxiv.org/abs/2602.20921 https://arxiv.org/pdf/2602.20921 https://arxiv.org/html/2602.20921
arXiv:2602.20921v1 Announce Type: new
Abstract: Deep neural networks (DNNs) have significantly advanced machine learning, with model depth playing a central role in their successes. The dynamical system modeling approach has recently emerged as a powerful framework, offering new mathematical insights into the structure and learning behavior of DNNs. In this work, we establish generalization error bounds for both discrete- and continuous-time residual networks (ResNets) by combining Rademacher complexity, flow maps of dynamical systems, and the convergence behavior of ResNets in the deep-layer limit. The resulting bounds are of order $O(1/\sqrt{S})$ with respect to the number of training samples $S$, and include a structure-dependent negative term, yielding depth-uniform and asymptotic generalization bounds under milder assumptions. These findings provide a unified understanding of generalization across both discrete- and continuous-time ResNets, helping to close the gap in both the order of sample complexity and assumptions between the discrete- and continuous-time settings.
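Schematically, a bound of the shape described would look like this (the symbols below are my own shorthand, not the paper's notation: $C_{\mathrm{flow}}$ stands for the flow-map-dependent constant and $\Delta_{\mathrm{struct}}$ for the structure-dependent negative term):

```latex
% Schematic only -- my own shorthand for the abstract's claim:
% gap <= O(1/sqrt(S)) minus a nonnegative structure-dependent term.
\mathbb{E}\,\mathcal{L}(f) - \hat{\mathcal{L}}_S(f)
  \;\le\; \frac{C_{\mathrm{flow}}}{\sqrt{S}} \;-\; \Delta_{\mathrm{struct}},
  \qquad \Delta_{\mathrm{struct}} \ge 0
```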
toXiv_bot_toot
Urban Spots ✴️
城市噪点 (Urban Noise) ✴️
📷 Nikon F4E
🎞️ Ilford HP5 Plus 400, expired 1993
#filmphotography #Photography #blackandwhite
Yes, exactly!
https://discuss.systems/@ricci/115687153999092934
Did you know that #PEP425 ("Compatibility Tags for Built Distributions") said:
> Why isn’t there a . in the Python version number?
>
> CPython has lasted 20 years without a 3-digit major release. This should continue for some time. Other implementations may use _ as a delimiter, since both - and . delimit the surrounding filename.
This didn't age well.
#Python
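The reason it didn't age well is concrete: once CPython reached 3.10, the undotted interpreter tag gained a second plausible reading. A toy parser (the parsing logic is my own illustration of the ambiguity, not how any real packaging tool behaves):

```python
def parse_cpython_tag(tag: str) -> list[tuple[int, int]]:
    """Return every (major, minor) reading of an undotted CPython tag.

    Toy illustration of the PEP 425 ambiguity -- not how real tools parse.
    """
    assert tag.startswith("cp")
    digits = tag[2:]
    return [
        (int(digits[:i]), int(digits[i:]))
        for i in range(1, len(digits))
    ]

# Two-digit tags were unambiguous; three digits no longer are:
print(parse_cpython_tag("cp39"))   # [(3, 9)]
print(parse_cpython_tag("cp310"))  # [(3, 10), (31, 0)]
```

Real tools sidestep this by convention (the major version is assumed to be a single digit), which is exactly the assumption PEP 425 baked in.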
In the last, guttering hours of the year, it’s tradition to look back on the last 12 months’ #TTRPG activity and reflect. You can read last year’s summary at https://dice.camp/@davej/113747664699824381…
Some positive signs for AI coding tools. Claude Code is a $1B run-rate product six months after launch. The latest Claude Opus 4.5 model is several times cheaper and faster than last month's version and uses about a quarter of the tokens to get work done. My own benchmark saw over an hour of coding reduced to 17 minutes. The high rate of change continues. The boundary of what does and doesn't work is pushing back fast.
The US military has always had a massive global advantage against enemies by having bases all over the world. There are bases in every NATO country. This would appear to be a powerful threat to anyone willing to oppose American hegemony, and under normal conditions it would be.
But a lot of those kids serving on those bases joined not because they love America, but because they needed a ticket out of poverty. They joined for the education, for the money, maybe a bit for the adventure, but more than anything to escape the ghetto or podunk backwater that trapped them. In normal times, this is the best deal they could expect. Maybe they risk their lives; usually they sit around being bored for a few years, and they come out with respect and a paid-for college education.
But what they are being offered is normal in most of the countries they're stationed in. Free healthcare and cheap or free education are just what citizens in a lot of countries have come to expect. If the US attacked a NATO country, how many would snap up citizenship if they were given a chance to defect? Bonus points for taking some hardware with you, I'm sure.
But there are some who love their country. There are some patriotic Americans on those bases. Some of them joined specifically to protect the US from all enemies, foreign *and* domestic. Given a chance to fulfill that oath or violate international law, what happens?
There are a good number of former military folks, too, who are now unsafe in the countries they served in, and who would do just about anything for citizenship in any EU country and almost any NATO ally. Some of those folks know things they swore an oath never to share, but the country they swore that oath to has betrayed them. Today there's no value in leaking those secrets, but in a war between the US and NATO allies things would be different. Some of those former military folks still believe in their oath, and know exactly who the real enemy is. What happens when there's a real threat of war, when they can use their knowledge to fulfill that oath to protect the US against those domestic threats?
There are a bunch of civilian tech workers who have become targets of the regime. Some of them had clearance, or know about the skeletons in the closet. They know about critical infrastructure, classified systems, all sorts of things that would be extremely valuable to an opponent. But the opponents of the US have always been a frightening *other*, never familiar societies these folks look up to, have visited, have thought about moving to, are trying to escape to.
All I'm saying here is that invading Venezuela and kidnapping the president has a very different calculus than does attacking Greenland. I don't know if Trump or his people are able to understand that, but if he and his folks aren't then I hope European leaders are. But more than that, I hope it never comes down to finding out.
But perhaps we should all think about what we would do to make sure things ended quickly if American leadership ever made such an incredible mistake.
Voice authentication???
I would not trust the basic competence of any org doing that. Facial recognition is bad enough.
Part of this is that I'm whatever the voice equivalent of face-blindness is. I don't believe such a thing as a "voice print" can exist. I can tell the difference between Neil Young and Bob Dylan, but anything more similar and I'm lost.
Cynicism, "AI"
Someone pointed me to the "Reflections on 2025" post by Samuel Albanie [1]. The author's writing style makes it quite a fun read, I admit.
The first part, "The Compute Theory of Everything", is an optimistic piece on "#AI". Long story short, poor "AI researchers" have been struggling for years under the predominant misconception that machines were already powerful enough. Fortunately, now they can finally get their hands on the kind of power that used to be available only to supervillains, and all they have to do is forget about morals, agree that their research will be used to murder millions of people, and accept that a few million more will die as a side effect of the climate crisis. But I'm digressing.
The author is referring to an essay by Hans Moravec, "The Role of Raw Power in Intelligence" [2]. It's also quite an interesting read, starting with a chapter on how intelligence evolved independently at least four times. The key point inferred from that seems to be that all we need is more computing power, and we'll eventually "brute-force" all AI-related problems (or die trying, I guess).
As a disclaimer, I have to say I'm not a biologist. Rather just a random guy who read a fair number of pieces on evolution. And I feel like the analogies brought here are misleading at best.
Firstly, there seems to be an assumption that evolution inexorably leads to higher "intelligence", with a certain implicit assumption about what intelligence is. On that view, any animal that gets "brainier" will eventually become intelligent. However, this misses the point that neither evolution nor learning operates in a void.
Yes, many animals did attain a certain level of intelligence, but they attained it through a long chain of development, while solving specific problems, in specific bodies, in specific environments. I don't think you can just stuff more brains into a random animal and expect it to attain human intelligence; and the same goes for a computer: you can't expect that, given more power, algorithms will eventually converge on human-like intelligence.
Secondly, and perhaps more importantly, what evolution succeeded at first was producing neural networks far more energy-efficient than anything computers do today. Even if "computing power" did pave the way for intelligence, what came first was extremely efficient "hardware". Nowadays, humans seem to be skipping that part. Optimizing is hard, so why bother? We can afford bigger data centers, we can afford to waste more energy, we can afford to deprive people of drinking water, so let's just skip to the easy part!
And on top of that, we're trying to squash hundreds of millions of years of evolution into… a decade, perhaps? What could possibly go wrong?
[1] #NoAI #NoLLM #LLM
Bounded Local Generator Classes for Deterministic State Evolution
R. Jay Martin II
https://arxiv.org/abs/2602.11476 https://arxiv.org/pdf/2602.11476 https://arxiv.org/html/2602.11476
arXiv:2602.11476v1 Announce Type: new
Abstract: We formalize a constructive subclass of locality-preserving deterministic operators acting on graph-indexed state systems. We define the class of Bounded Local Generator Classes (BLGC), consisting of finite-range generators operating on bounded state spaces under deterministic composition. Within this class, incremental update cost is independent of total system dimension. We prove that, under the BLGC assumptions, per-step operator work satisfies W_t = O(1) as the number of nodes M \to \infty, establishing a structural decoupling between global state size and incremental computational effort. The framework admits a Hilbert-space embedding in \ell^2(V; \mathbb{R}^d) and yields bounded operator norms on admissible subspaces. The result applies specifically to the defined subclass and does not claim universality beyond the stated locality and boundedness constraints.
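If I read the abstract right, the W_t = O(1) claim is the familiar locality argument: a finite-range generator touches only a bounded neighborhood, so one update's cost is independent of the total node count M. A toy sketch (all names and the averaging rule here are mine, not the paper's):

```python
def local_step(state, neighbors, node, update):
    """Apply a finite-range (radius-1) generator at one node.

    Reads only the node and its bounded neighborhood, so the cost per
    step depends on the node's degree, never on len(state) -- the O(1)
    per-step work the abstract attributes to the BLGC class (assuming
    bounded degree and bounded state values).
    """
    nbr_vals = [state[n] for n in neighbors[node]]
    return update(state[node], nbr_vals)

# Toy example on a ring of M nodes: average a node with its neighbors.
# Doubling M leaves the cost of one local_step unchanged.
M = 1000
state = {i: float(i % 7) for i in range(M)}
nbrs = {i: [(i - 1) % M, (i + 1) % M] for i in range(M)}
avg = lambda x, ns: (x + sum(ns)) / (1 + len(ns))
new_val = local_step(state, nbrs, 42, avg)  # (0 + 6 + 1) / 3
```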
Psychologist: you mentioned that you think you may be on the autism spectrum if you haven't been diagnosed. What leads you to believe that?
Me: I've created featural scripts for multiple alternative number systems. I have multiple notebook pages full of math in base 12 and base 16 with custom notation I created.
Psychologist, nodding meaningfully: ahh....
Towards Efficient Data Structures for Approximate Search with Range Queries
Ladan Kian, Dariusz R. Kowalski
https://arxiv.org/abs/2602.06860 https://arxiv.org/pdf/2602.06860 https://arxiv.org/html/2602.06860
arXiv:2602.06860v1 Announce Type: new
Abstract: Range queries are simple and popular types of queries used in data retrieval. However, extracting exact and complete information using range queries is costly. As a remedy, some previous work proposed a faster principle, {\em approximate} search with range queries, also called single range cover (SRC) search. It can, however, produce some false positives. In this work we introduce a new SRC search structure, a $c$-DAG (Directed Acyclic Graph), which provably decreases the average number of false positives by a logarithmic factor while keeping asymptotically the same time and memory complexities as a classic tree structure. A $c$-DAG is a tunable augmentation of the 1D-Tree with denser overlapping branches ($c \geq 3$ children per node). We perform a competitive analysis of a $c$-DAG with respect to the 1D-Tree and derive an additive constant time overhead and a multiplicative logarithmic improvement of the false-positive ratio, on average. We also provide a generic framework to extend our results to empirical distributions of queries, and demonstrate its effectiveness on the Gowalla dataset. Finally, we quantify and discuss security and privacy aspects of SRC search on the $c$-DAG vs. the 1D-Tree, mainly mitigation of structural leakage, which makes the $c$-DAG a good candidate data structure for deployment in privacy-preserving systems (e.g., searchable encryption) and multimedia retrieval.
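For intuition on what "approximate search with range queries" trades away, here is a toy single-range-cover lookup over a dyadic 1D-Tree (my own minimal model of the baseline the paper improves on; the c-DAG construction itself is not reproduced here). The query [lo, hi] is answered by one canonical node, and everything in that node outside [lo, hi] is a false positive:

```python
def src_cover(lo: int, hi: int, universe: int):
    """Smallest dyadic 1D-Tree node covering [lo, hi]; universe a power of 2.

    Single range cover (SRC): the query is answered by one canonical
    range, so every element of the cover outside [lo, hi] is a false
    positive -- the quantity the paper's c-DAG provably shrinks.
    """
    size = 1
    while size < universe:
        start = (lo // size) * size
        if hi < start + size:      # the whole query fits in this node
            return start, start + size
        size *= 2
    return 0, universe

a, b = src_cover(5, 6, 16)               # -> (4, 8)
false_positives = (b - a) - (6 - 5 + 1)  # elements 4 and 7
```

Note the worst case: a query straddling the midpoint, like [7, 8] in a universe of 16, is covered only by the root, so the whole universe minus the query is false positives. Denser overlapping branches (the c-DAG idea) exist precisely to shrink such covers.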
HALO: A Fine-Grained Resource Sharing Quantum Operating System
John Zhuoyang Ye, Jiyuan Wang, Yifan Qiao, Jens Palsberg
https://arxiv.org/abs/2602.07191 https://arxiv.org/pdf/2602.07191 https://arxiv.org/html/2602.07191
arXiv:2602.07191v1 Announce Type: new
Abstract: As quantum computing enters the cloud era, thousands of users must share access to a small number of quantum processors. Users may wait minutes to days to start jobs that take only a few seconds to execute. Current quantum cloud platforms employ a fair-share scheduler, as there is no way to multiplex a quantum computer among multiple programs at the same time, leaving many qubits idle and significantly under-utilizing the hardware. This imbalance between high user demand and scarce quantum resources has become a key barrier to scalable and cost-effective quantum computing.
We present HALO, the first quantum operating system design that supports fine-grained resource-sharing. HALO introduces two complementary mechanisms. First, a hardware-aware qubit-sharing algorithm that places shared helper qubits on regions of the quantum computer that minimize routing overhead and avoid cross-talk noise between different users' processes. Second, a shot-adaptive scheduler that allocates execution windows according to each job's sampling requirements, improving throughput and reducing latency. Together, these mechanisms transform the way quantum hardware is scheduled and achieve more fine-grained parallelism.
We evaluate HALO on the IBM Torino quantum computer on helper-qubit-intensive benchmarks. Compared to state-of-the-art systems such as HyperQ, HALO improves overall hardware utilization by up to 2.44x, increases throughput by 4.44x, and keeps fidelity loss within 33%, demonstrating the practicality of resource sharing in quantum computing.
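As a rough picture of the shot-adaptive idea (the proportional policy below is my own stand-in; HALO's actual scheduler also weighs helper-qubit placement and cross-talk):

```python
def allocate_windows(remaining_shots, window_shots):
    """Split one scheduling window among jobs, proportionally to each
    job's remaining sampling requirement.

    A stand-in policy for illustration only -- not HALO's real
    scheduler, which also accounts for hardware-aware qubit sharing.
    """
    total = sum(remaining_shots.values())
    alloc = {
        job: need * window_shots // total
        for job, need in remaining_shots.items()
    }
    # Integer division leaves a few shots over; give them to the
    # neediest job so the window is fully used.
    leftover = window_shots - sum(alloc.values())
    alloc[max(remaining_shots, key=remaining_shots.get)] += leftover
    return alloc

print(allocate_windows({"A": 8000, "B": 1000, "C": 1000}, 1000))
# {'A': 800, 'B': 100, 'C': 100}
```

Jobs with few remaining shots finish within a couple of windows instead of queueing behind sampling-heavy jobs, which is the throughput/latency win the abstract describes.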
Let's get this straight: it is entirely normal for a #OpenSource project to accumulate bug reports over time. They're not a thing to be ashamed of.
On the contrary, if you see a nontrivial project with a very small number of bug reports, it usually means one of the following:
a. you've hit a malicious fake,
b. the project is very young and it doesn't have many users (so it's likely buggy),
c. the project is actively shoving issues under the carpet.
None of that is a good sign. You don't want to use that project (except in case b., if you're ready to be the beta tester).
#FreeSoftware #Gentoo #GitHub #Python