Standards and Safety: an Overview
Luca Dassa
https://arxiv.org/abs/2602.17173 https://arxiv.org/pdf/2602.17173 https://arxiv.org/html/2602.17173
arXiv:2602.17173v1 Announce Type: new
Abstract: This note is intended to provide an overview of the implications of regulations and standards for the safety of mechanical equipment, with a focus on accelerator components. Each research facility has different internal rules and standards which are applicable to specific cases; however, the main legal frame of reference in Europe is everywhere based on the applicable European Directives. After a brief introduction to 'safety' for mechanical systems, the process of 'risk analysis' will be introduced. The majority of this note will then deal with regulations and standards for pressure and cryogenic equipment. The European Pressure Equipment Directive (PED) will be briefly described, together with the concept of 'harmonized standards' and their implications on the entire lifecycle of pressure equipment, with some hints at the peculiarities of accelerator components. In the second part of this note, regulations and standards for machinery, load-lifting accessories and buildings will be briefly mentioned to complete the picture of the most common cases in an accelerator facility.
toXiv_bot_toot
Crosslisted article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[1/3]:
- SMaRT: Online Reusable Resource Assignment and an Application to Mediation in the Kenyan Judiciary
Farabi, Pinto, Lu, Ramos-Maqueda, Das, Deeb, Sautmann
https://arxiv.org/abs/2602.18431 https://mastoxiv.page/@arXiv_csCY_bot/116119352329590193
- Benchmarking Distilled Language Models: Performance and Efficiency in Resource-Constrained Settings
Sachin Gopal Wani, Eric Page, Ajay Dholakia, David Ellison
https://arxiv.org/abs/2602.20164 https://mastoxiv.page/@arXiv_csCL_bot/116130101399805837
- VISION-ICE: Video-based Interpretation and Spatial Identification of Arrhythmia Origins via Neura...
Dorsa EPMoghaddam, Feng Gao, Drew Bernard, Kavya Sinha, Mehdi Razavi, Behnaam Aazhang
https://arxiv.org/abs/2602.20165 https://mastoxiv.page/@arXiv_csCV_bot/116130222034322594
- Benchmarking Early Deterioration Prediction Across Hospital-Rich and MCI-Like Emergency Triage Un...
KMA Solaiman, Joshua Sebastian, Karma Tobden
https://arxiv.org/abs/2602.20168 https://mastoxiv.page/@arXiv_csCY_bot/116130239074411770
- Cross-Chirality Generalization by Axial Vectors for Hetero-Chiral Protein-Peptide Interaction Design
Yang, Tian, Jia, Zhang, Zheng, Wang, Su, He, Liu, Lan
https://arxiv.org/abs/2602.20176 https://mastoxiv.page/@arXiv_qbioBM_bot/116130281674122586
- Enhancing Heat Sink Efficiency in MOSFETs using Physics Informed Neural Networks: A Systematic St...
Aniruddha Bora, Isabel K. Alvarez, Julie Chalfant, Chryssostomos Chryssostomidis
https://arxiv.org/abs/2602.20177 https://mastoxiv.page/@arXiv_csNE_bot/116130397676559696
- Data-Driven Deep MIMO Detection: Network Architectures and Generalization Analysis
Yongwei Yi, Xinping Yi, Wenjin Wang, Xiao Li, Shi Jin
https://arxiv.org/abs/2602.20178 https://mastoxiv.page/@arXiv_eessSP_bot/116130257424413457
- OrgFlow: Generative Modeling of Organic Crystal Structures from Molecular Graphs
Mohammadmahdi Vahediahmar, Matthew A. McDonald, Feng Liu
https://arxiv.org/abs/2602.20195 https://mastoxiv.page/@arXiv_condmatmtrlsci_bot/116130271189617558
- KEMP-PIP: A Feature-Fusion Based Approach for Pro-inflammatory Peptide Prediction
Soumik Deb Niloy, Md. Fahmid-Ul-Alam Juboraj, Swakkhar Shatabda
https://arxiv.org/abs/2602.20198 https://mastoxiv.page/@arXiv_qbioQM_bot/116130341315320687
- Regressor-guided Diffusion Model for De Novo Peptide Sequencing with Explicit Mass Control
Shaorong Chen, Jingbo Zhou, Jun Xia
https://arxiv.org/abs/2602.20209 https://mastoxiv.page/@arXiv_qbioQM_bot/116130374083646541
- The Sim-to-Real Gap in MRS Quantification: A Systematic Deep Learning Validation for GABA
Zien Ma, S. M. Shermer, Oktay Karakuş, Frank C. Langbein
https://arxiv.org/abs/2602.20289 https://mastoxiv.page/@arXiv_eessSP_bot/116130267228834775
- Gap-Dependent Bounds for Nearly Minimax Optimal Reinforcement Learning with Linear Function Appro...
Haochen Zhang, Zhong Zheng, Lingzhou Xue
https://arxiv.org/abs/2602.20297 https://mastoxiv.page/@arXiv_statML_bot/116130255458256497
- Multilevel Determinants of Overweight and Obesity Among U.S. Children Aged 10-17: Comparative Eva...
Joyanta Jyoti Mondal
https://arxiv.org/abs/2602.20303 https://mastoxiv.page/@arXiv_csAI_bot/116130097466859145
- An artificial intelligence framework for end-to-end rare disease phenotyping from clinical notes ...
Shyr, Hu, Tinker, Cassini, Byram, Hamid, Fabbri, Wright, Peterson, Bastarache, Xu
https://arxiv.org/abs/2602.20324 https://mastoxiv.page/@arXiv_csAI_bot/116130100089848459
- Circuit Tracing in Vision-Language Models: Understanding the Internal Mechanisms of Multimodal Th...
Jingcheng Yang, Tianhu Xiong, Shengyi Qian, Klara Nahrstedt, Mingyuan Wu
https://arxiv.org/abs/2602.20330 https://mastoxiv.page/@arXiv_csCV_bot/116130463214879334
- No One Size Fits All: QueryBandits for Hallucination Mitigation
Nicole Cho, William Watson, Alec Koppel, Sumitra Ganesh, Manuela Veloso
https://arxiv.org/abs/2602.20332 https://mastoxiv.page/@arXiv_csCL_bot/116130370809116915
- Learning During Detection: Continual Learning for Neural OFDM Receivers via DMRS
Mohanad Obeed, Ming Jian
https://arxiv.org/abs/2602.20361 https://mastoxiv.page/@arXiv_csIT_bot/116130289537785136
- Detecting and Mitigating Group Bias in Heterogeneous Treatment Effects
Joel Persson, Jurriën Bakker, Dennis Bohle, Stefan Feuerriegel, Florian von Wangenheim
https://arxiv.org/abs/2602.20383 https://mastoxiv.page/@arXiv_statME_bot/116130509065601748
- Selecting Optimal Variable Order in Autoregressive Ising Models
Shiba Biswal, Marc Vuffray, Andrey Y. Lokhov
https://arxiv.org/abs/2602.20394 https://mastoxiv.page/@arXiv_statML_bot/116130299369541741
Perfect Network Resilience in Polynomial Time
Matthias Bentert, Stefan Schmid
https://arxiv.org/abs/2602.03827 https://arxiv.org/pdf/2602.03827 https://arxiv.org/html/2602.03827
arXiv:2602.03827v1 Announce Type: new
Abstract: Modern communication networks support local fast rerouting mechanisms to quickly react to link failures: nodes store a set of conditional rerouting rules which define how to forward an incoming packet in case of incident link failures. The rerouting decisions at any node $v$ must rely solely on local information available at $v$: the link from which a packet arrived at $v$, the target of the packet, and the incident link failures at $v$. Ideally, such rerouting mechanisms provide perfect resilience: any packet is routed from its source to its target as long as the two are connected in the underlying graph after the link failures. Already in their seminal paper at ACM PODC '12, Feigenbaum, Godfrey, Panda, Schapira, Shenker, and Singla showed that perfect resilience cannot always be achieved. While the design of local rerouting algorithms has received much attention since then, we still lack a detailed understanding of when perfect resilience is achievable.
This paper closes this gap and presents a complete characterization of when perfect resilience can be achieved. This characterization also allows us to design an $O(n)$-time algorithm to decide whether a given instance is perfectly resilient and an $O(nm)$-time algorithm to compute perfectly resilient rerouting rules whenever it is. Our algorithm is also attractive due to the simple structure of the rerouting rules it uses, known as skipping in the literature: alternative links are chosen according to an ordered priority list (per in-port), where failed links are simply skipped. Intriguingly, our result also implies that in the context of perfect resilience, skipping rerouting rules are as powerful as more general rerouting rules. This partially answers a long-standing open question by Chiesa, Nikolaevskiy, Mitrovic, Gurtov, Madry, Schapira, and Shenker [IEEE/ACM Transactions on Networking, 2017] in the affirmative.
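The "skipping" rule structure described above lends itself to a very small sketch. The names and data layout below are our own illustration, not the paper's notation: each node keeps, per in-port, an ordered priority list of outgoing links, and an arriving packet takes the first link on that list that has not failed.

```python
# A minimal sketch (not the paper's algorithm) of skipping rerouting rules:
# per in-port, an ordered priority list of candidate links; failed links
# are simply skipped.

def next_hop(priority_lists, in_port, failed_links):
    """Return the first link in the in-port's priority list that has not failed.

    priority_lists: dict mapping in_port -> ordered list of candidate links
    failed_links:   set of locally failed incident links
    """
    for link in priority_lists[in_port]:
        if link not in failed_links:
            return link
    return None  # all candidates failed: the packet cannot be forwarded

# Example: one in-port with three candidate outgoing links a, b, c
rules = {"from_s": ["a", "b", "c"]}
assert next_hop(rules, "from_s", set()) == "a"
assert next_hop(rules, "from_s", {"a"}) == "b"
assert next_hop(rules, "from_s", {"a", "b", "c"}) is None
```

Note that the decision uses only local information (in-port and incident failures), matching the model described in the abstract.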
Large eddy simulation of turbulent swirl-stabilized flames using the front propagation formulation: impact of the resolved flame thickness
Ruochen Guo, Yunde Su, Yuewen Jiang
https://arxiv.org/abs/2602.21940 https://arxiv.org/pdf/2602.21940 https://arxiv.org/html/2602.21940
arXiv:2602.21940v1 Announce Type: new
Abstract: This work extends the front propagation formulation (FPF) combustion model to large eddy simulation (LES) of swirl-stabilized turbulent premixed flames and investigates the effects of resolved flame thickness on the predicted flame dynamics. The FPF method is designed to mitigate the spurious propagation of under-resolved flames while preserving the reaction characteristics of filtered flame fronts. In this study, the model is extended to account for non-adiabatic effects and is coupled with an improved sub-filter flame speed estimation that resolves the inconsistency arising from heat-release effects on local sub-filter turbulence. The performance of the extended FPF method is validated by LES of the TECFLAM swirl-stabilized burner, where the results agree well with experimental measurements. The simulations reveal that the stretching of vortical structures in the outer shear layer leads to the formation of trapped flame pockets, which are identified as the physical mechanism responsible for the secondary temperature peaks observed in the experiment. The prediction of this phenomenon is shown to be strongly dependent on the resolved flame thickness when the filter size is used for modeling sub-filter flame wrinkling. Without proper modeling of the chemical steepening effects, the thickness of the resolved flame brush is over-predicted, causing the flame consumption rate to be under-estimated. Consequently, the flame brush detaches from the outer shear layer, resulting in a failure to capture the flame pockets and the associated secondary temperature peaks.
CAGE: An Internal Source Scanning Cryostat for HPGe Characterization
G. Othman, C. Wiseman, T. H. Burritt, J. A. Detwiler, M. P. Held, R. Henning, T. Mathew, D. Peterson, W. Pettus, G. Song, T. D. Van Wechel
https://arxiv.org/abs/2602.06289 https://arxiv.org/pdf/2602.06289 https://arxiv.org/html/2602.06289
arXiv:2602.06289v1 Announce Type: new
Abstract: The success of current and future-generation neutrinoless double beta decay experiments relies on the ability to eliminate or reduce extraneous backgrounds. In addition to constructing experiments using radiopure materials and handling in underground laboratories, it is necessary to understand and reduce known backgrounds in data analysis. The Large Enriched Germanium Experiment for Neutrinoless double beta Decay (LEGEND) is searching for this decay using 76Ge-enriched high-purity germanium detectors submerged in an active liquid argon veto. A significant background in LEGEND is surface events from shallowly-impinging radiation on detector surfaces. In this paper we introduce the Collimated Alphas, Gammas, and Electrons (CAGE) scanning system, an internal-source scanning vacuum cryostat, designed to perform studies of surface events on sensitive surfaces of HPGe detectors in a surface lab. CAGE features a collimated radionuclide source inside a movable infrared shield that is able to perform precision scans of detector surfaces by utilizing three independent motor stages for source positioning. This allows detailed studies of pulse shapes as a function of source position and incident angle, where defining features can be extracted and exploited for removing surface backgrounds in data analysis in LEGEND. In this paper, we describe CAGE and demonstrate its performance with a commissioning run with 241Am. The commissioning run was completed with the source at normal incidence, and we estimate a beam spot precision of 3.1 mm, which includes positioning uncertainties and the beam-spot size. Using the 59.5 keV gamma population from 241Am, we show that low-energy photon events near the passivated surface feature risetimes that increase with radial distance from the detector center. We suggest a specific metric that can be used to discriminate low-energy gamma backgrounds in LEGEND with similar characteristics.
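The radius-dependent risetimes mentioned above can be made concrete with a toy estimate. The function and thresholds below are our own illustration of a standard 10%-90% risetime on a digitized pulse, not the specific discrimination metric proposed in the paper:

```python
import numpy as np

# A toy 10%-90% risetime estimate for a monotonically rising pulse.
# The function name and the 10%/90% thresholds are our own choices.

def risetime_10_90(waveform, dt):
    """Estimate the 10%-90% risetime (seconds) of a rising pulse sampled at dt."""
    w = np.asarray(waveform, dtype=float)
    w = (w - w[0]) / (w[-1] - w[0])   # normalize pulse amplitude to [0, 1]
    i10 = np.argmax(w >= 0.10)        # first sample crossing 10%
    i90 = np.argmax(w >= 0.90)        # first sample crossing 90%
    return (i90 - i10) * dt

# A synthetic linear ramp over 100 intervals at 10 ns/sample:
ramp = np.linspace(0.0, 1.0, 101)
rt = risetime_10_90(ramp, dt=10e-9)   # ~800 ns for this ramp
```

On real data one would apply such an estimator to each event's waveform and cut on the resulting risetime distribution as a function of reconstructed radius.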
How local rules generate emergent structure in cellular automata
Manuel Pita
https://arxiv.org/abs/2604.00273 https://arxiv.org/pdf/2604.00273 https://arxiv.org/html/2604.00273
arXiv:2604.00273v1 Announce Type: new
Abstract: Cellular automata generate spatially extended, temporally persistent emergent structures from local update rules. No general method derives the mechanisms of that generation from the rule itself; existing tools reconstruct structure from observed dynamics. This paper shows that the look-up table contains a readable causal architecture and introduces a forward model to extract it. The key observation in elementary cellular automata (ECA) is that adjacent cells share input positions, so the prime implicants of neighbouring transitions overlap. That overlap can couple the transitions causally or leave them independent. We formalize each pairwise interaction as a tile. A finite-state, tiling transducer, $\mathcal{T}$, composes tiles across the CA lattice, tracking how coupling and independence propagate from one cell pair to the next. Structural properties of $\mathcal{T}$ are used to classify ECA rules that can sustain regions of causal independence across space and time. We find that, in the 88 ECA equivalence classes, the number of local configurations at which coupling is structurally impossible -- computable from the look-up table -- predicts the prevalence of dynamically decoupled regions with Spearman $\rho = 0.89$ ($p < 10^{-31}$). The look-up table encodes not just what a rule computes but where it distributes causal coupling across the lattice; the framework reads that distribution forward, from local logical redundancy to emergent mesoscopic organization.
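The look-up table that the analysis starts from can be written down directly. Below is a minimal ECA step with the Wolfram rule number unpacked into its 8-entry table; the rule-110 example is our own illustration and does not reproduce the paper's tiling-transducer machinery:

```python
# A minimal elementary cellular automaton (ECA) update from its look-up table.
# table[k] is the next centre-cell state for the 3-cell neighbourhood whose
# bits (left, centre, right) encode the integer k.

def eca_step(cells, rule):
    """One synchronous update of a cyclic ECA row under the given Wolfram rule."""
    n = len(cells)
    table = [(rule >> k) & 1 for k in range(8)]
    return [
        table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

# Rule 110 on a row with a single live cell:
row = [0, 0, 0, 1, 0, 0, 0]
nxt = eca_step(row, 110)   # growth toward the left neighbour of the live cell
```

The 8-entry `table` is exactly the object whose prime-implicant overlaps between adjacent cells the paper formalizes as tiles.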
Adaptive transitions in FitzHugh-Nagumo networks with Hebb-Oja coupling rules
Astero Provata, George C. Boulougouris, Johanne Hizanidis
https://arxiv.org/abs/2602.18198 https://arxiv.org/pdf/2602.18198 https://arxiv.org/html/2602.18198
arXiv:2602.18198v1 Announce Type: new
Abstract: Adaptive coupling in networks of interacting neurons has gained recent attention due to the many applications both in biological and in artificial neural networks, where adaptive coupling or synaptic plasticity is considered as a key factor in learning processes. In the present study, we apply adaptive connectivity rules in networks of interacting FitzHugh-Nagumo oscillators. Adaptive coupling, here, is realized via Hebbian learning adjusted by the Oja rule to prevent the network link weights from growing without bounds. Numerical investigations demonstrate that during the adaptation process the FitzHugh-Nagumo network undergoes adaptive transitions realizing traveling waves, synchronized states and chimera states, transitioning through various multiplicities. These transitions become more evident when the time scales governing the coupling dynamics are much slower than the ones governing the nodal dynamics (nodal potentials). Namely, when the coupling time scales are slow, the network has the time to realize and demonstrate different synchronization regimes before reaching the final steady state. The transitions can be observed not only in the spacetime plots but also in the abrupt changes of the average coupling weights as the network evolves in time. Regarding the asymptotic coupling distributions, we show that the limiting average coupling strength follows an inverse power law with respect to the Oja parameter (also called "forgetting" parameter) which balances the learning growth. We also report abrupt transitions in the asymptotic coupling strengths when the parameter related to adaptive coupling crosses from fast to slow time scales. These findings are in line with previous studies on spiking neural networks.
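As a toy illustration of the ingredients above, the sketch below couples two FitzHugh-Nagumo oscillators through a single Hebbian weight damped by an Oja-style decay term. All parameter values and the scalar form of the learning rule are our own choices, not the paper's network setup:

```python
import numpy as np

# Two diffusively coupled FitzHugh-Nagumo oscillators with one adaptive
# weight w. The Hebbian growth term u0*u1 is balanced by an Oja-style
# decay term mu*w*u0**2, which keeps w from growing without bound.

def simulate(steps=20000, dt=0.01, eps=0.05, a=0.7, b=0.8, eta=0.01, mu=1.0):
    u = np.array([0.1, -0.1])   # fast membrane-like variables
    v = np.zeros(2)             # slow recovery variables
    w = 0.1                     # adaptive coupling weight
    for _ in range(steps):      # forward Euler integration
        du = u - u**3 / 3 - v + w * (u[::-1] - u)
        dv = eps * (u + a - b * v)
        dw = eta * (u[0] * u[1] - mu * w * u[0] ** 2)
        u, v, w = u + dt * du, v + dt * dv, w + dt * dw
    return u, v, w

u_final, v_final, w_final = simulate()
```

With the coupling time scale (eta) much slower than the nodal dynamics, as in the study, the weight drifts gradually while the oscillators settle, and the Oja decay keeps it bounded.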
Deep learning of committor and explainable artificial intelligence analysis for identifying reaction coordinates
Toshifumi Mori, Kei-ichi Okazaki, Kang Kim, Nobuyuki Matubayasi
https://arxiv.org/abs/2603.25237 https://arxiv.org/pdf/2603.25237 https://arxiv.org/html/2603.25237
arXiv:2603.25237v1 Announce Type: new
Abstract: In complex molecular systems, the reaction coordinate (RC) that characterizes transition pathways is essential to understand underlying molecular mechanisms. This review surveys a framework for identifying the RC by applying deep learning to the committor, which provides the most reliable measure of the progress along a transition path. The inputs to the neural network are collective variables (CVs) expressed as functions of atomic coordinates of the system, and the corresponding RC is predicted as the output by training the network on the committor as the learning target. Because deep learning models typically operate in a black-box manner, it is difficult to determine which input variables govern the predictions. The incorporation of eXplainable Artificial Intelligence (XAI) techniques enables quantitative assessment of the contributions of individual input variables to the predictions. This approach allows the identification of CVs that play dominant roles and demonstrates that the committor distribution on the surface spanned by the important CVs is separated by well-defined boundaries. The framework provides an explainable deep learning strategy for assigning a molecular mechanism from the RC and is applicable to a wide range of complex molecular systems.
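The regression setup described above can be caricatured in a few lines. The sketch below substitutes plain logistic regression for the deep network and uses synthetic data in which the committor depends on only one of two CVs; the learned weights then act as a crude attribution, in the spirit of (but much simpler than) the XAI analysis surveyed in the review:

```python
import numpy as np

# Toy committor regression: two synthetic CVs, with the "true" committor a
# sigmoid of CV 0 only. A logistic model is fit by gradient descent on the
# cross-entropy loss; its weights then indicate which CV matters.

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                    # two CVs per configuration
q_true = 1.0 / (1.0 + np.exp(-3.0 * X[:, 0]))    # committor ignores CV 1

w = np.zeros(2)
b = 0.0
for _ in range(2000):
    q_pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = q_pred - q_true                       # cross-entropy gradient factor
    w -= 0.1 * (X.T @ grad) / len(X)
    b -= 0.1 * grad.mean()
# After training, w[0] is large while w[1] stays near zero: CV 0 is the
# dominant input, the toy analogue of an XAI importance score.
```

A deep network would replace the linear logit `X @ w + b`, and gradient- or perturbation-based XAI methods would replace reading off the weights, but the pipeline (CVs in, committor out, then attribute) is the same.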
A One-Step Cascade Symmetric Model: Rank-$1$ Packets, Binary Shielding, and the Even Exact-Cardinality Profile
Frank Gilson
https://arxiv.org/abs/2603.25950 https://arxiv.org/pdf/2603.25950 https://arxiv.org/html/2603.25950
arXiv:2603.25950v1 Announce Type: new
Abstract: We introduce a one-step cascade symmetric system whose local symmetry geometry is organized by finite $\rho$-closed windows and one-step stars rather than by rowwise-independent toggles. The resulting symmetric model isolates a new $\mathrm{ZF}+\mathrm{DC}+\neg\mathrm{BPI}$ geometry in which rank-$1$ hereditarily symmetric reals admit a packet normalization theorem over countable $\rho$-closed supports.
The technical center of the paper is the finite star-span lemma and the associated rank-$1$ packet calculus. From this we obtain a normalization theorem and a two-layer coding consequence for rank-$1$ reals (in the metatheory, via a well-orderable base of packets). We then apply the same binary fresh-support shielding pattern to prove $\neg C_2$, hence $\neg AC_{\mathrm{fin}}$, and therefore the failure of every even $C_n$ (where $C_n$ denotes the principle that every family of nonempty $n$-element sets admits a choice function). On the odd side, the present bounded packet calculus remains dyadic: support-fixed local actions factor through finite $2$-groups, bounded support-equivariant quotients of finite local orbits have power-of-two size, and trace-separated bounded rigid ternary families admit canonical selectors within a fixed finite trace window. Accordingly, the odd exact-cardinality profile remains open beyond the current local binary machinery.
Statistical Query Lower Bounds for Smoothed Agnostic Learning
Ilias Diakonikolas, Daniel M. Kane
https://arxiv.org/abs/2602.21191 https://arxiv.org/pdf/2602.21191 https://arxiv.org/html/2602.21191
arXiv:2602.21191v1 Announce Type: new
Abstract: We study the complexity of smoothed agnostic learning, recently introduced by~\cite{CKKMS24}, in which the learner competes with the best classifier in a target class under slight Gaussian perturbations of the inputs. Specifically, we focus on the prototypical task of agnostically learning halfspaces under subgaussian distributions in the smoothed model. The best known upper bound for this problem relies on $L_1$-polynomial regression and has complexity $d^{\tilde{O}(1/\sigma^2) \log(1/\epsilon)}$, where $\sigma$ is the smoothing parameter and $\epsilon$ is the excess error. Our main result is a Statistical Query (SQ) lower bound providing formal evidence that this upper bound is close to best possible. In more detail, we show that (even for Gaussian marginals) any SQ algorithm for smoothed agnostic learning of halfspaces requires complexity $d^{\Omega(1/\sigma^{2} \log(1/\epsilon))}$. This is the first non-trivial lower bound on the complexity of this task and nearly matches the known upper bound. Roughly speaking, we show that applying $L_1$-polynomial regression to a smoothed version of the function is essentially best possible. Our techniques involve finding a moment-matching hard distribution by way of linear programming duality. This dual program corresponds exactly to finding a low-degree approximating polynomial to the smoothed version of the target function (which turns out to be the same condition required for the $L_1$-polynomial regression to work). Our explicit SQ lower bound then comes from proving lower bounds on this approximation degree for the class of halfspaces.
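The $L_1$-polynomial regression underlying the upper bound can be illustrated on a one-dimensional caricature. The sketch below is our own construction, not the paper's algorithm: it fits a low-degree polynomial in the $L_1$ sense, via iteratively reweighted least squares, to halfspace labels on the line with a few adversarially flipped points:

```python
import numpy as np

# L1 polynomial regression via iteratively reweighted least squares (IRLS):
# minimizing sum |r_i| is approximated by repeated weighted L2 fits with
# weights 1/|r_i|, floored at eps for stability.

def l1_polyfit(x, y, degree, iters=50, eps=1e-6):
    """Approximately minimize sum |V @ c - y| over coefficients c."""
    V = np.vander(x, degree + 1)
    c = np.linalg.lstsq(V, y, rcond=None)[0]            # L2 warm start
    for _ in range(iters):
        r = np.abs(V @ c - y)
        s = 1.0 / np.sqrt(np.maximum(r, eps))           # sqrt of IRLS weights
        c = np.linalg.lstsq(V * s[:, None], y * s, rcond=None)[0]
    return c

x = np.linspace(-1.0, 1.0, 201)
y = np.sign(x + 1e-9)            # labels of a 1-d "halfspace" (threshold at 0)
y[[50, 150]] *= -1               # agnostic noise: two flipped labels
c = l1_polyfit(x, y, degree=7)
pred = np.sign(np.vander(x, 8) @ c)
```

The robustness of the $L_1$ objective to the flipped labels is the reason this style of regression suits the agnostic setting; the paper's point is that, for smoothed halfspaces, essentially no SQ algorithm can beat the degree this approach requires.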