2026-02-20 17:11:49
Evaluating #AGENTS.md: Are Repository-Level Context Files Helpful for Coding Agents?
https://arxiv.org/abs/2602.11988
Open models will largely lose if they keep chasing closed frontier AI models; instead, open models should serve as complementary tools to closed agents (Nathan Lambert/Interconnects AI)
https://www.interconnects.ai/p/the-next-phase-of-open-models
Ashton Jeanty showed flashes for Raiders but complementary back is needed https://raiderswire.usatoday.com/story/sports/nfl/raiders/2026/02/17/raiders-team-needs-running-back-ashton-jeanty-nfl-free-agen…
Mailbag: Any changes to special teams? https://www.dallascowboys.com/news/mailbag-any-changes-to-special-teams
Once again I'm sharing this wonderful saag shrimp recipe with you who follow me, and it tastes fantastic even with a third of the ghee and without the heavy cream. #nommention
Study of a wideband high data rate implantable antenna for cortical visual prosthesis #BCI
🆔 DC4EU final report proposes pluralistic trust model to realise EUDI Wallet vision
A key message in the report: no single trust model fits Europe’s diversity. Instead, DC4EU proposes a pluralistic approach, weaving together three complementary trust infrastructures.
⏳ With less than a year left to achieve Europe’s 2026 digital identity mandate, the report calls for coordinated action to move from feasibility to real-world deployment at European scale.
Read more:
Full disclosure in computer security still exists and is complementary to other disclosure models. The evolution of vulnerability disclosure is not linear from full disclosure to responsible disclosure to coordinated disclosure. These models coexist and all need to be taken into account.
You can’t just say “the legal framework will solve it” or “just do coordinated disclosure.” Vendors, researchers, and users are not all rational actors playing the same game.
Vulnerability disclo…
The fastverse is a suite of complementary high-performance packages for statistical computing and data manipulation in R. #rstats
Fork, Explore, Commit: OS Primitives for Agentic Exploration
Cong Wang, Yusheng Zheng
https://arxiv.org/abs/2602.08199 https://arxiv.org/pdf/2602.08199 https://arxiv.org/html/2602.08199
arXiv:2602.08199v1 Announce Type: new
Abstract: AI agents increasingly perform agentic exploration: pursuing multiple solution paths in parallel and committing only the successful one. Because each exploration path may modify files and spawn processes, agents require isolated environments with atomic commit and rollback semantics for both filesystem state and process state. We introduce the branch context, a new OS abstraction that provides: (1) copy-on-write state isolation with independent filesystem views and process groups, (2) a structured lifecycle of fork, explore, and commit/abort, (3) first-commit-wins resolution that automatically invalidates sibling branches, and (4) nestable contexts for hierarchical exploration. We realize branch contexts in Linux through two complementary components. First, BranchFS is a FUSE-based filesystem that gives each branch context an isolated copy-on-write workspace, with O(1) creation, atomic commit to the parent, and automatic sibling invalidation, all without root privileges. BranchFS is open-sourced at https://github.com/multikernel/branchfs. Second, branch() is a proposed Linux syscall that spawns processes into branch contexts with reliable termination, kernel-enforced sibling isolation, and first-commit-wins coordination. Preliminary evaluation of BranchFS shows sub-350 µs branch creation independent of base filesystem size, and modification-proportional commit overhead (under 1 ms for small changes).
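The fork/explore/commit lifecycle and first-commit-wins rule can be emulated in a few lines. The sketch below is a toy in-memory model of the abstraction; the class names and `fork`/`write`/`commit` methods are illustrative assumptions, not the BranchFS or branch() API:

```python
import threading

class Workspace:
    """Toy emulation of branch contexts: fork -> explore -> commit,
    with first-commit-wins resolution among sibling branches."""
    def __init__(self, state=None):
        self.state = dict(state or {})
        self._lock = threading.Lock()
        self._winner = None            # name of the first branch to commit

    def fork(self, name):
        # each branch gets a private copy-on-write-style view of the parent
        return Branch(self, name, dict(self.state))

class Branch:
    def __init__(self, ws, name, view):
        self.ws, self.name, self.view = ws, name, view

    def write(self, path, data):
        # explore: mutations touch only the private view, never the parent
        self.view[path] = data

    def commit(self):
        with self.ws._lock:            # atomic commit to the parent
            if self.ws._winner is not None:
                return False           # a sibling already won; branch invalidated
            self.ws._winner = self.name
            self.ws.state = self.view
            return True

ws = Workspace({"main.py": "v0"})
a, b = ws.fork("a"), ws.fork("b")      # two parallel exploration paths
a.write("main.py", "fix-A")
b.write("main.py", "fix-B")
print(a.commit(), b.commit())          # first-commit-wins: True False
print(ws.state["main.py"])             # fix-A
```

The real system does this at the filesystem and process-group level, of course; the point here is only the commit semantics.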
toXiv_bot_toot
Proton Energy Dependence of Radiation Induced Low Gain Avalanche Detector Degradation
Veronika Kraus, Marcos Fernandez Garcia, Luca Menzio, Michael Moll
https://arxiv.org/abs/2602.01800 https://arxiv.org/pdf/2602.01800 https://arxiv.org/html/2602.01800
arXiv:2602.01800v1 Announce Type: new
Abstract: Low Gain Avalanche Detectors (LGADs) are key components for precise timing measurements in high-energy physics experiments, including the High Luminosity upgrades of the current LHC detectors. Their performance is, however, limited by radiation induced degradation of the gain layer, primarily driven by acceptor removal. This study presents a systematic comparison of how the degradation evolves with different incident proton energies, using LGADs from Hamamatsu Photonics (HPK) and The Institute of Microelectronics of Barcelona (IMB-CNM) irradiated with 18 MeV, 24 MeV, 400 MeV and 23 GeV protons and fluences up to 2.5x10^15 p/cm2. Electrical characterization is used to extract the acceptor removal coefficients for different proton energies, whereas IR TCT measurements offer complementary insight into the gain evolution in LGADs after irradiation. Across all devices, lower energy protons induce stronger gain layer degradation, confirming expectations. However, 400 MeV protons consistently appear less damaging than both lower and higher energy protons, an unexpected deviation from a monotonic energy trend. Conversion of proton fluences to 1 MeV neutron-equivalent fluences reduces but does not eliminate these differences, indicating that the standard Non-Ionizing Energy Loss (NIEL) scaling does not fully account for the underlying defect formation mechanisms at different energies and requires revision when considering irradiation fields that contain a broader spectrum of particle types and energies.
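For context, the acceptor removal driving the gain-layer degradation is commonly parameterized as an exponential loss of active gain-layer doping with fluence (this is the standard LGAD parameterization from the literature, not an equation quoted from the paper):

```latex
N_A(\Phi) = N_A(0)\, e^{-c\,\Phi}
```

where $\Phi$ is the irradiation fluence and $c$ is the acceptor removal coefficient the study extracts for each proton energy.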
From synthetic turbulence to true solutions: A deep diffusion model for discovering periodic orbits in the Navier-Stokes equations
Jeremy P Parker, Tobias M Schneider
https://arxiv.org/abs/2602.23181 https://arxiv.org/pdf/2602.23181 https://arxiv.org/html/2602.23181
arXiv:2602.23181v1 Announce Type: new
Abstract: Generative artificial intelligence has shown remarkable success in synthesizing data that mimic complex real-world systems, but its potential role in the discovery of mathematically meaningful structures in physical models remains underexplored. In this work, we demonstrate how a generative diffusion model can be used to uncover previously unknown solutions of a nonlinear partial differential equation: the two-dimensional Navier-Stokes equations in a turbulent regime. Trained on data from a direct numerical simulation of turbulence, the model learns to generate time series that resemble physically plausible trajectories. By carefully modifying the temporal structure of the model and enforcing the symmetries of the governing equations, we produce synthetic trajectories that are periodic in time, despite the fact that the training data did not contain periodic trajectories. These synthetic trajectories are then refined into true solutions using an iterative solver, yielding 111 new periodic orbits (POs) with very short periods. Our results reveal a previously unobserved richness in the PO structure of this system and suggest a broader role for generative AI: not as replacements for simulation and existing solvers, but as a complementary tool for navigating the complex solution spaces of nonlinear dynamical systems.
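The generate-then-refine step can be illustrated on a toy system: a rough "synthetic" guess stands in for a diffusion-model sample and is refined into an exact periodic point by Newton iteration, the same role the paper's iterative solver plays at scale for Navier-Stokes. The logistic map and all names below are illustrative assumptions, not the paper's setup:

```python
def f(x, r=3.2):
    """Logistic map; its period-2 orbit plays the role of a 'true solution'."""
    return r * x * (1.0 - x)

def refine_periodic_point(x0, period=2, tol=1e-12, max_iter=50):
    """Newton iteration on g(x) = f^period(x) - x: refine a rough guess
    into a machine-precision periodic point."""
    for _ in range(max_iter):
        # evaluate f^period and its derivative via the chain rule
        x, dx = x0, 1.0
        for _ in range(period):
            dx *= 3.2 * (1.0 - 2.0 * x)   # f'(x) for the logistic map, r = 3.2
            x = f(x)
        g, dg = x - x0, dx - 1.0
        if abs(g) < tol:
            return x0
        x0 -= g / dg                       # Newton step
    return x0

# a crude "synthetic" guess near, but not on, the true period-2 orbit
x_star = refine_periodic_point(0.8)
print(round(x_star, 6))                    # ≈ 0.799455
```

The paper's solver works on full spatio-temporal trajectories rather than a scalar, but the structure (approximate sample in, exact solution out) is the same.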
HALO: A Fine-Grained Resource Sharing Quantum Operating System
John Zhuoyang Ye, Jiyuan Wang, Yifan Qiao, Jens Palsberg
https://arxiv.org/abs/2602.07191 https://arxiv.org/pdf/2602.07191 https://arxiv.org/html/2602.07191
arXiv:2602.07191v1 Announce Type: new
Abstract: As quantum computing enters the cloud era, thousands of users must share access to a small number of quantum processors. Users can wait minutes to days for jobs that take only seconds to execute. Current quantum cloud platforms employ a fair-share scheduler because there is no way to multiplex a quantum computer among multiple programs at the same time, leaving many qubits idle and the hardware significantly under-utilized. This imbalance between high user demand and scarce quantum resources has become a key barrier to scalable and cost-effective quantum computing.
We present HALO, the first quantum operating system design that supports fine-grained resource-sharing. HALO introduces two complementary mechanisms. First, a hardware-aware qubit-sharing algorithm that places shared helper qubits on regions of the quantum computer that minimize routing overhead and avoid cross-talk noise between different users' processes. Second, a shot-adaptive scheduler that allocates execution windows according to each job's sampling requirements, improving throughput and reducing latency. Together, these mechanisms transform the way quantum hardware is scheduled and achieve more fine-grained parallelism.
We evaluate HALO on the IBM Torino quantum computer on helper-qubit-intensive benchmarks. Compared to state-of-the-art systems such as HyperQ, HALO improves overall hardware utilization by up to 2.44x, increases throughput by 4.44x, and keeps fidelity loss within 33%, demonstrating the practicality of resource sharing in quantum computing.
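The shot-adaptive idea can be sketched with a toy allocator that splits each execution window across jobs in proportion to their remaining sampling requirements. This is a sketch of the general idea only; `Job`, `shot_adaptive_schedule`, and the proportional rule are assumptions, not HALO's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    shots_needed: int   # the job's total sampling requirement
    shots_done: int = 0

def shot_adaptive_schedule(jobs, window_shots=1000):
    """Split each execution window across pending jobs in proportion
    to their remaining shot budgets, until every job completes."""
    schedule = []
    while any(j.shots_done < j.shots_needed for j in jobs):
        pending = [j for j in jobs if j.shots_done < j.shots_needed]
        remaining = sum(j.shots_needed - j.shots_done for j in pending)
        for j in pending:
            share = j.shots_needed - j.shots_done
            # grant at least 1 shot so every window makes progress
            grant = min(share, max(1, window_shots * share // remaining))
            j.shots_done += grant
            schedule.append((j.name, grant))
    return schedule

jobs = [Job("vqe", 4000), Job("qaoa", 1000)]
plan = shot_adaptive_schedule(jobs)
print(all(j.shots_done >= j.shots_needed for j in jobs))  # True
```

A real scheduler would also weigh queue fairness and hardware placement; the toy only captures the "allocate windows by sampling need" half of the mechanism.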
Learning from Trials and Errors: Reflective Test-Time Planning for Embodied LLMs
Yining Hong, Huang Huang, Manling Li, Li Fei-Fei, Jiajun Wu, Yejin Choi
https://arxiv.org/abs/2602.21198 https://arxiv.org/pdf/2602.21198 https://arxiv.org/html/2602.21198
arXiv:2602.21198v1 Announce Type: new
Abstract: Embodied LLMs endow robots with high-level task reasoning, but they cannot reflect on what went wrong or why, turning deployment into a sequence of independent trials where mistakes repeat rather than accumulate into experience. Drawing upon human reflective practitioners, we introduce Reflective Test-Time Planning, which integrates two modes of reflection: reflection-in-action, where the agent uses test-time scaling to generate and score multiple candidate actions using internal reflections before execution; and reflection-on-action, which uses test-time training to update both its internal reflection model and its action policy based on external reflections after execution. We also include retrospective reflection, allowing the agent to re-evaluate earlier decisions and perform model updates with hindsight for proper long-horizon credit assignment. Experiments on our newly designed Long-Horizon Household benchmark and MuJoCo Cupboard Fitting benchmark show significant gains over baseline models, with ablative studies validating the complementary roles of reflection-in-action and reflection-on-action. Qualitative analyses, including real-robot trials, highlight behavioral correction through reflection.
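The two reflection modes can be sketched as a tiny select-then-update loop: score candidate actions with an internal critic before acting, then nudge the critic toward the observed outcome afterward. Everything below (the action names, the scalar critic, the update rule) is an illustrative assumption; the paper's reflection model is learned, not a lookup table:

```python
def reflect_in_action(candidates, score, k=3):
    """Reflection-in-action as test-time scaling: consider k candidate
    actions, score each with the internal critic, pick the best."""
    return max(candidates[:k], key=score)

def reflect_on_action(values, action, reward, lr=0.5):
    """Reflection-on-action: after execution, move the critic's value
    for the taken action toward the observed outcome."""
    values[action] += lr * (reward - values[action])
    return values

# internal critic scores for candidate household actions (made-up numbers)
values = {"open_drawer": 0.2, "pull_handle": 0.6, "push_door": 0.1}
action = reflect_in_action(list(values), values.get)
values = reflect_on_action(values, action, reward=0.0)  # the attempt failed
print(action, round(values[action], 2))                 # pull_handle 0.3
```

On the next trial the failed action is valued lower, so mistakes accumulate into experience instead of repeating, which is the behavior the abstract describes.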