2025-10-14 10:17:08
On function-on-function linear quantile regression
Muge Mutis, Ufuk Beyaztas, Filiz Karaman, Han Lin Shang
https://arxiv.org/abs/2510.10792
EB-MBD: Emerging-Barrier Model-Based Diffusion for Safe Trajectory Optimization in Highly Constrained Environments
Raghav Mishra, Ian R. Manchester
https://arxiv.org/abs/2510.07700
Near-Optimal Second-Order Guarantees for Model-Based Adversarial Imitation Learning
Shangzhe Li, Dongruo Zhou, Weitong Zhang
https://arxiv.org/abs/2510.09487
Dynamic Function Configuration and its Management in Serverless Computing: A Taxonomy and Future Directions
Siddharth Agarwal, Maria A. Rodriguez, Rajkumar Buyya
https://arxiv.org/abs/2510.02404
BASILISK III. Stress-testing the Conditional Luminosity Function model
Kaustav Mitra, Frank C. van den Bosch
https://arxiv.org/abs/2510.08421
Dynamics of feedback Ising model
Yi-Ping Ma, Ivan Sudakow, P. L. Krapivsky, Sergey A. Vakulenko
https://arxiv.org/abs/2510.07301
Kinetic modelling of the CO2 capture and utilisation on NiRu-Ca/Al dual function material via parameter estimation
Meshkat Dolat, Andrew David Wright, Soudabeh Bahrami Gharamaleki, Loukia-Pantzechroula Merkouri, Melis S. Duyar, Michael Short
https://arxiv.org/abs/2510.12439
Chinese AI startup Z.ai releases the GLM-4.6V series of vision models, with support for native function calling, available with 106B and 9B parameters (Carl Franzen/VentureBeat)
https://venturebeat.com/ai/z-ai-debuts-open-source-glm-4-6v-a-n…
Multimodal Function Vectors for Spatial Relations
Shuhao Fu, Esther Goldberg, Ying Nian Wu, Hongjing Lu
https://arxiv.org/abs/2510.02528
A Pseudo-Hermitian Hybrid Model at Finite Temperature: The Role of the Exceptional Points
Ignacio Fushimi (IFLP-UNLP, Argentina), Marta Reboiro (IFLP-UNLP, Argentina)
https://arxiv.org/abs/2510.08773
Manifolds and Modules: How Function Develops in a Neural Foundation Model
Johannes Bertram, Luciano Dyballa, T. Anderson Keller, Savik Kinger, Steven W. Zucker
https://arxiv.org/abs/2512.07869 https://arxiv.org/pdf/2512.07869 https://arxiv.org/html/2512.07869
arXiv:2512.07869v1 Announce Type: new
Abstract: Foundation models have shown remarkable success in fitting biological visual systems; however, their black-box nature inherently limits their utility for understanding brain function. Here, we peek inside a SOTA foundation model of neural activity (Wang et al., 2025) as a physiologist might, characterizing each 'neuron' based on its temporal response properties to parametric stimuli. We analyze how different stimuli are represented in neural activity space by building decoding manifolds, and we analyze how different neurons are represented in stimulus-response space by building neural encoding manifolds. We find that the different processing stages of the model (i.e., the feedforward encoder, recurrent, and readout modules) each exhibit qualitatively different representational structures in these manifolds. The recurrent module shows a jump in capabilities over the encoder module by 'pushing apart' the representations of different temporal stimulus patterns; while the readout module achieves biological fidelity by using numerous specialized feature maps rather than biologically plausible mechanisms. Overall, we present this work as a study of the inner workings of a prominent neural foundation model, gaining insights into the biological relevance of its internals through the novel analysis of its neurons' joint temporal response patterns.
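The decoding/encoding manifold construction above can be illustrated with a toy computation. Everything below — the simulated response tensor, the PCA embedding, the sizes — is a hypothetical stand-in, not the authors' actual model or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the model's activity: n_neurons x n_stimuli x n_time.
# The paper probes a trained foundation model; here we just generate responses
# driven by a few latent stimulus factors plus noise.
n_neurons, n_stimuli, n_time = 50, 12, 20
latent = rng.normal(size=(n_stimuli, 3))
loadings = rng.normal(size=(n_neurons, 3))
responses = np.einsum("sk,nk->ns", latent, loadings)[:, :, None]
responses = responses + 0.1 * rng.normal(size=(n_neurons, n_stimuli, n_time))

def pca_embed(X, dim=2):
    """Project the rows of X onto their top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dim].T

# Decoding manifold: each stimulus is a point in neural activity space
# (neurons x time flattened into one response vector per stimulus).
decoding = pca_embed(responses.transpose(1, 0, 2).reshape(n_stimuli, -1))

# Encoding manifold: each neuron is a point in stimulus-response space.
encoding = pca_embed(responses.reshape(n_neurons, -1))

print(decoding.shape, encoding.shape)  # (12, 2) (50, 2)
```

The two manifolds come from the same tensor transposed: rows are stimuli for decoding, neurons for encoding.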
toXiv_bot_toot
MV-Performer: Taming Video Diffusion Model for Faithful and Synchronized Multi-view Performer Synthesis
Yihao Zhi, Chenghong Li, Hongjie Liao, Xihe Yang, Zhengwentai Sun, Jiahao Chang, Xiaodong Cun, Wensen Feng, Xiaoguang Han
https://arxiv.org/abs/2510.07190
Forecasting Inflation Based on Hybrid Integration of the Riemann Zeta Function and the FPAS Model (FPAS $\zeta$): Cyclical Flexibility, Socio-Economic Challenges and Shocks, and Comparative Analysis of Models
Davit Gondauri
https://arxiv.org/abs/2510.02966
On Logit Weibull Manifold
Prosper Rosaire Mama Assandje, Joseph Dongho, Thomas Bouetou Bouetou
https://arxiv.org/abs/2510.03299
The 3-state Potts model on planar triangulations: explicit algebraic solution
Mireille Bousquet-Mélou, Hadrien Notarantonio
https://arxiv.org/abs/2510.08414
A quantum N-dimer model
Daniel C. Douglas, Richard Kenyon, Nicholas Ovenhouse, Samuel Panitch, Sri Tata
https://arxiv.org/abs/2510.07543
Bayesian Neural Networks for Functional ANOVA model
Seokhun Park, Choeun Kim, Jihu Lee, Yunseop Shin, Insung Kong, Yongdai Kim
https://arxiv.org/abs/2510.00545
Enhanced Angle-Range Cluster Parameter Estimation in Full-Duplex ISAC Systems
Muhammad Talha, Besma Smida, David González G
https://arxiv.org/abs/2510.12711
Retardance of lab grown diamond substrates as a function of thickness: momentum-drift random walk model
Thanh Tran, Phuong Vo, Thomas Sheppard, Timothy Grotjohn, Paul Quayle
https://arxiv.org/abs/2510.05932
Non-exotic wormholes in $f(R,L_m)$ gravity
Sara Rastgoo, Foad Parsaei
https://arxiv.org/abs/2510.11487 https://arxiv.org/pdf/2510.11487
Development and Validation of a Novel Fresnel Integral Based Method to Model MSF Errors in Optical Imaging
Luuk Zonneveld, Paul Urbach, Aurèle Adam
https://arxiv.org/abs/2510.03172
The hat polykite as an Iterated Function System
Corey de Wit
https://arxiv.org/abs/2510.00409 https://arxiv.org/pdf/2510.00409
Locally Linear Convergence for Nonsmooth Convex Optimization via Coupled Smoothing and Momentum
Reza Rahimi Baghbadorani, Sergio Grammatico, Peyman Mohajerin Esfahani
https://arxiv.org/abs/2511.10239 https://arxiv.org/pdf/2511.10239 https://arxiv.org/html/2511.10239
arXiv:2511.10239v1 Announce Type: new
Abstract: We propose an adaptive accelerated smoothing technique for nonsmooth convex optimization problems in which the smoothing update rule is coupled with the momentum parameter. We also extend the setting to the case where the objective function is the sum of two nonsmooth functions. Regarding the convergence rate, we provide a global sublinear guarantee of O(1/k), which is provably optimal for the studied class of functions, along with a local linear rate when the nonsmooth term satisfies a so-called local strong convexity condition. We validate the performance of our algorithm on several problem classes, including regression with the l1-norm (the Lasso problem), sparse semidefinite programming (the MaxCut problem), nuclear norm minimization with application to model-free fault diagnosis, and l1-regularized model predictive control, to showcase the benefits of the coupling. An interesting observation is that although our global result guarantees O(1/k) convergence, we consistently observe a practical transient rate of O(1/k^2), followed by asymptotic linear convergence, as anticipated by the theory. This two-phase behavior can also be explained in view of the proposed smoothing rule.
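The coupling of a smoothing schedule with momentum can be sketched on a Lasso-type problem. The Huber smoothing, the mu = 1/k schedule, and the problem data below are illustrative assumptions, not the paper's exact update rule:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))
b = rng.normal(size=40)
lam = 0.1  # l1 penalty weight

def smoothed_l1_grad(x, mu):
    """Gradient of the Huber smoothing of lam * ||x||_1 (parameter mu)."""
    return lam * np.clip(x / mu, -1.0, 1.0)

L0 = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the LS gradient
x = y = np.zeros(20)
t = 1.0
for k in range(1, 501):
    mu = 1.0 / k                        # smoothing coupled to the iteration count
    L = L0 + lam / mu                   # Lipschitz constant of the smoothed objective
    g = A.T @ (A @ y - b) + smoothed_l1_grad(y, mu)
    x_new = y - g / L
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum step
    x, t = x_new, t_new

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
```

As mu shrinks, the smoothed objective approaches the true nonsmooth one while its Lipschitz constant grows, which is exactly the tension the coupled schedule manages.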
Generalized Jeffreys's approximate objective Bayes factor: Model-selection consistency, finite-sample accuracy, and statistical evidence in 71,126 clinical trial findings
Puneet Velidi, Zhengxiao Wei, Shreena Nisha Kalaria, Yimeng Liu, C\'eline M. Laumont, Brad H. Nelson, Farouk S. Nathoo
https://arxiv.org/abs/2510.10358
Robust Sensor Placement for Poisson Arrivals with False Alarm Aware Spatiotemporal Sensing
Mingyu Kim, Pronoy Sarker, Seungmo Kim, Daniel J. Stilwell, Jorge Jimenez
https://arxiv.org/abs/2510.05343
BaNEL: Exploration Posteriors for Generative Modeling Using Only Negative Rewards
Sangyun Lee, Brandon Amos, Giulia Fanti
https://arxiv.org/abs/2510.09596
Role of universal function of the nuclear proximity potential: A systematic study on the alpha-decay of heavy/super-heavy nuclei and $\alpha$-induced reactions
S. Mohammadi, R. Gharaei, S. A. Alavi
https://arxiv.org/abs/2510.02764
A physically-informed sea spray generation model for splashing waves
Kaitao Tang, Thomas A. A. Adcock, Wouter Mostert
https://arxiv.org/abs/2510.02486
Strong-coupling functional renormalization group: Nagaoka ferromagnetism and non-Fermi liquid physics in the Hubbard model at $ U = \infty $
Jonas Arnold, Peter Kopietz, Andreas Rückriegel
https://arxiv.org/abs/2510.01909
Bound-Preserving WENO Schemes for Temple-class systems
Wei Chen, Shumo Cui, Kailiang Wu, Tao Xiong, Baoyue Yu
https://arxiv.org/abs/2510.04123
PentestMCP: A Toolkit for Agentic Penetration Testing
Zachary Ezetta, Wu-chang Feng
https://arxiv.org/abs/2510.03610 https://arxiv.org/pdf/2510.03610
The Theory of Strategic Evolution: Games with Endogenous Players and Strategic Replicators
Kevin Vallier
https://arxiv.org/abs/2512.07901 https://arxiv.org/pdf/2512.07901 https://arxiv.org/html/2512.07901
arXiv:2512.07901v1 Announce Type: new
Abstract: This paper develops the Theory of Strategic Evolution, a general model for systems in which the population of players, the strategies, and the institutional rules evolve together. The theory extends replicator dynamics to settings with endogenous players, multi-level selection, innovation, constitutional change, and meta-governance. The central mathematical object is a Poiesis stack: a hierarchy of strategic layers linked by cross-level gain matrices. Under small-gain conditions, the system admits a global Lyapunov function and satisfies selection, tracking, and stochastic stability results at every finite depth. We prove that the class is closed under block extension, innovation events, heterogeneous utilities, continuous strategy spaces, and constitutional evolution. The closure theorem shows that no new dynamics arise at higher levels and that unrestricted self-modification cannot preserve Lyapunov structure. The theory unifies results from evolutionary game theory, institutional design, innovation dynamics, and constitutional political economy, providing a general mathematical model of long-run strategic adaptation.
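The replicator dynamics the theory extends can be sketched for a single population. The payoff matrix and step size below are arbitrary illustrative choices, not part of the Poiesis-stack construction:

```python
import numpy as np

# Payoff matrix for a symmetric 3-strategy game (illustrative numbers with a
# cyclic, rock-paper-scissors-like structure; not from the paper).
A = np.array([[0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [1.0, 0.5, 0.0]])

x = np.array([0.6, 0.3, 0.1])   # initial strategy frequencies
dt = 0.01
for _ in range(20000):
    fitness = A @ x
    avg = x @ fitness
    x = x + dt * x * (fitness - avg)    # replicator equation, forward Euler
    x = np.clip(x, 0.0, None)
    x = x / x.sum()                     # stay on the probability simplex

# For this payoff matrix the interior equilibrium (1/3, 1/3, 1/3) is stable,
# so the frequencies spiral in toward it.
```

Strategies whose fitness exceeds the population average grow in frequency; the paper's hierarchy couples many such layers through gain matrices.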
The Diameter of (Threshold) Geometric Inhomogeneous Random Graphs
Zylan Benjert, Kostas Lakis, Johannes Lengler, Raghu Raman Ravi
https://arxiv.org/abs/2510.12543
Robustness of Covariance Estimators with Application in Activity Detection
Hendrik Bernd Zarucha, Peter Jung, Giuseppe Caire
https://arxiv.org/abs/2510.07044
IntMeanFlow: Few-step Speech Generation with Integral Velocity Distillation
Wei Wang, Rong Cao, Yi Guo, Zhengyang Chen, Kuan Chen, Yuanyuan Huo
https://arxiv.org/abs/2510.07979
Winding quotients for virtual period maps of rank 1
Kyoji Saito
https://arxiv.org/abs/2510.04558 https://arxiv.org/pdf/2510.04558
Universality and kernel-adaptive training for classically trained, quantum-deployed generative models
Andrii Kurkin, Kevin Shen, Susanne Pielawa, Hao Wang, Vedran Dunjko
https://arxiv.org/abs/2510.08476
Correlation function metrology for warm dense matter: Recent developments and practical guidelines
Maximilian Peter Böhme, Willow Martin, Hannah Bellenbaum, Margaret Berrens, Jan Vorberger, Sebastian Schwalbe, Zhandos Moldabekov, Thomas Gawne, Sebastien Hamel, Brianna Aguilar-Solis, Abhiraj Sharma, Frank Graziani, Tilo Döppner, Siegfried Glenzer, Tobias Dornheim, David Bishel
A comprehensive comparison of neural operators for 3D industry-scale engineering designs
Weiheng Zhong, Qibang Liu, Diab Abueidda, Seid Koric, Hadi Meidani
https://arxiv.org/abs/2510.05995
On the threshold behaviour of heavy top production
Torbjörn Sjöstrand
https://arxiv.org/abs/2510.04590 https://arxiv.org/pdf/2510.04590
Deconvolution of Arbitrary Distribution Functions and Densities
Henrik Kaiser
https://arxiv.org/abs/2510.04742 https://arxiv.org/pdf/2510.04742
GRACE: A Language Model Framework for Explainable Inverse Reinforcement Learning
Silvia Sapora, Devon Hjelm, Alexander Toshev, Omar Attia, Bogdan Mazoure
https://arxiv.org/abs/2510.02180
Line shapes of the Na/K resonance line profiles perturbed by H2 at extreme density
N. F. Allard, J. F. Kielkopf
https://arxiv.org/abs/2510.05763
In-flight performance of the IXPE telescopes
Riccardo Ferrazzoli, Enrico Costa, Sergio Fabiani, Philip Kaaret, Stephen L. O'Dell, Brian D. Ramsey, Paolo Soffitta, Luca Baldini, Ronaldo Bellazzini, Alessandro Di Marco, Fabio La Monaca, Luca Latronico, Alberto Manfreda, Fabio Muleri, John Rankin, Carmelo Sgrò, Stefano Silvestri, Martin C. Weisskopf
CEPC Technical Design Report -- Reference Detector
The CEPC Study Group
https://arxiv.org/abs/2510.05260 https://arxiv.org/pdf/2510.05260
Modeling Emission-Line Surface Brightness in a Multiphase Galactic Wind: An O VI Case Study
Zirui Chen, Zixuan Peng, Kate H. R. Rubin, Timothy M. Heckman, Matthew J. Hayes, Yakov Faerman, Crystal L. Martin, S. Peng Oh, Drummond B. Fielding
https://arxiv.org/abs/2510.02443
Reducing Discomfort in Driving Simulators: Motion Cueing for Motion Sickness Mitigation
Varun Kotian, Vishrut Jain, Andrea Michelle Rios Lazcano, Daan Marinus Pool, Riender Happee, Barys Shyrokau
https://arxiv.org/abs/2510.01986
Model-Guided Microstimulation Steers Primate Visual Behavior
Johannes Mehrer, Ben Lonnqvist, Anna Mitola, Abdulkadir Gokce, Paolo Papale, Martin Schrimpf
https://arxiv.org/abs/2510.03684
New Classes of Non-monotone Variational Inequality Problems Solvable via Proximal Gradient on Smooth Gap Functions
Lei Zhao, Daoli Zhu, Shuzhong Zhang
https://arxiv.org/abs/2510.12105
Cosmology in the Hoyle Narlikar gravity
J. K. Singh, Sonal Aggarwal, Shaily, Hamid Shabani
https://arxiv.org/abs/2510.11762
Young functions on varifolds. Part I. Functional analytic foundations
Hsin-Chuang Chou
https://arxiv.org/abs/2510.05639 https://arxiv.org/pdf/2510.05639
Train Stochastic Non Linear Coupled ODEs to Classify and Generate
Stefano Gagliani, Feliciano Giuseppe Pacifico, Lorenzo Chicchi, Duccio Fanelli, Diego Febbe, Lorenzo Buffoni, Raffaele Marino
https://arxiv.org/abs/2510.12286
Modeling information acquisition via f-divergence and duality
Alex Bloedel, Tommaso Denti, Luciano Pomatto
https://arxiv.org/abs/2510.03482
A Deep Multi-Task Learning Approach to Impulsive Noise Parameter Estimation
Abdullahi Mohammad, Bdah Eya, Bassant Selim
https://arxiv.org/abs/2510.12179
Unlocking the initial neutron density distribution from the two-pion HBT correlation function in heavy-ion collisions
Pengcheng Li, Manzi Nan, Haojie Zhang, Junhuai Xu, Xilong Xiang, Yijie Wang, Yongjia Wang, Gaochan Yong, Tadaaki Isobe, Zhigang Xiao, Qingfeng Li
https://arxiv.org/abs/2510.12226
Minimal-Dissipation Learning for Energy-Based Models
Jeff Hnybida, Simon Verret
https://arxiv.org/abs/2510.03137 https://arxiv.org/pdf/2510.03137
Explicit formulae and topological descriptions of action-minimizing sets for 2-locally potentials of the XY model
Yuika Kajihara, Shoya Motonaga, Mao Shinoda
https://arxiv.org/abs/2510.02678
Angular BAO Measurements with the DESI DR1 BGS Sample
Paula S. Ferreira, Ulisses Ribeiro, Pedro da Silveira Ferreira, Clécio R. Bom, Armando Bernui
https://arxiv.org/abs/2510.02144
A Data-Adaptive Factor Model Using Composite Quantile Approach
Seeun Park, Hee-Seok Oh
https://arxiv.org/abs/2510.00558 https://arxiv.org/pdf/2510.00558
Typhoon Path Prediction Using Functional Data Analysis and Clustering-Based Regression
Jimin Kim
https://arxiv.org/abs/2510.02316
Green's Function-Based Thin Plate Splines via Karhunen-Lo\`eve Expansion for Bayesian Spatial Modeling
Joaquin Cavieres, Sebastian Krumscheid
https://arxiv.org/abs/2510.04256
MRI-derived quantification of hepatic vessel-to-volume ratios in chronic liver disease using a deep learning approach
Alexander Herold, Daniel Sobotka, Lucian Beer, Nina Bastati, Sarah Poetter-Lang, Michael Weber, Thomas Reiberger, Mattias Mandorfer, Georg Semmler, Benedikt Simbrunner, Barbara D. Wichtmann, Sami A. Ba-Ssalamah, Michael Trauner, Ahmed Ba-Ssalamah, Georg Langs
A microscopic approach to nonlinear theory of spin-charge separation
Oleksandr Tsyplyatyev, Yiqing Jin, María Moreno, Wooi Kiat Tan, Christopher J. B. Ford
https://arxiv.org/abs/2510.09515
Hamilton-Jacobi Reachability for Viability Analysis of Constrained Waste-to-Energy Systems under Adversarial Uncertainty
Achraf Bouhmady, Othman Cherkaoui Dekkaki
https://arxiv.org/abs/2510.11396
Data-Driven Adaptive PID Control Based on Physics-Informed Neural Networks
Junsei Ito, Yasuaki Wasa
https://arxiv.org/abs/2510.04591
Regularized Random Fourier Features and Finite Element Reconstruction for Operator Learning in Sobolev Space
Xinyue Yu, Hayden Schaeffer
https://arxiv.org/abs/2512.17884 https://arxiv.org/pdf/2512.17884 https://arxiv.org/html/2512.17884
arXiv:2512.17884v1 Announce Type: new
Abstract: Operator learning is a data-driven approximation of mappings between infinite-dimensional function spaces, such as the solution operators of partial differential equations. Kernel-based operator learning can offer accurate, theoretically justified approximations that require less training than standard methods. However, they can become computationally prohibitive for large training sets and can be sensitive to noise. We propose a regularized random Fourier feature (RRFF) approach, coupled with a finite element reconstruction map (RRFF-FEM), for learning operators from noisy data. The method uses random features drawn from multivariate Student's $t$ distributions, together with frequency-weighted Tikhonov regularization that suppresses high-frequency noise. We establish high-probability bounds on the extreme singular values of the associated random feature matrix and show that when the number of features $N$ scales like $m \log m$ with the number of training samples $m$, the system is well-conditioned, which yields estimation and generalization guarantees. Detailed numerical experiments on benchmark PDE problems, including advection, Burgers', Darcy flow, Helmholtz, Navier-Stokes, and structural mechanics, demonstrate that RRFF and RRFF-FEM are robust to noise and achieve improved performance with reduced training time compared to the unregularized random feature model, while maintaining competitive accuracy relative to kernel and neural operator tests.
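The RRFF construction can be sketched in one dimension. The target function, the Student-t degrees of freedom, and the 1 + omega^2 penalty weights below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy samples of a 1D target (a plain regression stand-in for the operator
# learning problem).
m = 200
x = np.sort(rng.uniform(-1.0, 1.0, m))
y = np.sin(3.0 * x) + 0.1 * rng.normal(size=m)

# Random Fourier features with Student-t frequencies (heavier tails than a
# Gaussian spectral measure).
N = 300
omega = rng.standard_t(df=3, size=N)
phase = rng.uniform(0.0, 2.0 * np.pi, N)

def features(t):
    return np.cos(np.outer(t, omega) + phase)

# Frequency-weighted Tikhonov regularization: higher-frequency features are
# penalized more, suppressing amplified noise (the weights are an assumption).
weights = 1.0 + omega ** 2
Phi = features(x)
c = np.linalg.solve(Phi.T @ Phi + 1e-2 * np.diag(weights), Phi.T @ y)

t_test = np.linspace(-1.0, 1.0, 100)
mse = np.mean((features(t_test) @ c - np.sin(3.0 * t_test)) ** 2)
```

The frequency-dependent penalty is what distinguishes this from plain ridge regression on random features: noise that loads onto high-frequency cosines is damped preferentially.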
The odometer in subcritical activated random walk
Tobias Johnson, Jacob Richey
https://arxiv.org/abs/2510.05514 https://arxiv.org/pdf/2510.05514
Universal behaviors of the multi-time correlation functions of random processes with renewal: the step noise case (the random velocity of a L\'evy walk)
Marco Bianucci, Mauro Bologna, Daniele Lagomarsino-Oneto, Riccardo Mannella
https://arxiv.org/abs/2510.11747
Bulk plasmons in elemental metals
Dario A. Leon, Claudia Cardoso, Kristian Berland
https://arxiv.org/abs/2510.07261 https://arxiv.org/pdf/2510.07261
Maximum softly penalised likelihood in factor analysis
Philipp Sterzinger, Ioannis Kosmidis, Irini Moustaki
https://arxiv.org/abs/2510.06465
The role of the overlap function in describing angular distributions of single-nucleon transfer reactions
M. R. Xie, J. G. Li, N. Keeley, N. Michel, W. Zuo
https://arxiv.org/abs/2510.12103
Polyharmonic Cascade
Yuriy N. Bakhvalov
https://arxiv.org/abs/2512.17671 https://arxiv.org/pdf/2512.17671 https://arxiv.org/html/2512.17671
arXiv:2512.17671v1 Announce Type: new
Abstract: This paper presents a deep machine learning architecture, the "polyharmonic cascade" -- a sequence of packages of polyharmonic splines, where each layer is rigorously derived from the theory of random functions and the principles of indifference. This makes it possible to approximate nonlinear functions of arbitrary complexity while preserving global smoothness and a probabilistic interpretation. For the polyharmonic cascade, a training method alternative to gradient descent is proposed: instead of directly optimizing the coefficients, one solves a single global linear system on each batch with respect to the function values at fixed "constellations" of nodes. This yields synchronized updates of all layers, preserves the probabilistic interpretation of individual layers and theoretical consistency with the original model, and scales well: all computations reduce to 2D matrix operations efficiently executed on a GPU. Fast learning without overfitting on MNIST is demonstrated.
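A single polyharmonic layer trained by solving one linear system, rather than by gradient descent, can be sketched as a 1D cubic spline fit. The node placement and target function are illustrative, and this is one layer, not the full cascade:

```python
import numpy as np

# One polyharmonic layer as a 1D cubic (|r|^3) spline: instead of gradient
# descent, fit by solving a single linear system for the coefficients.
nodes = np.linspace(-1.0, 1.0, 15)
target = np.tanh(3.0 * nodes)          # illustrative function values at the nodes

def phi(r):
    return np.abs(r) ** 3              # polyharmonic radial basis

# Interpolation system, augmented with an affine tail as is standard for
# conditionally positive definite radial bases.
K = phi(nodes[:, None] - nodes[None, :])
P = np.column_stack([np.ones_like(nodes), nodes])
A = np.block([[K, P], [P.T, np.zeros((2, 2))]])
rhs = np.concatenate([target, np.zeros(2)])
coef = np.linalg.solve(A, rhs)
w, a = coef[:15], coef[15:]

def spline(t):
    return phi(t[:, None] - nodes[None, :]) @ w + a[0] + a[1] * t

t = np.linspace(-1.0, 1.0, 200)
err = np.max(np.abs(spline(t) - np.tanh(3.0 * t)))
```

The fit interpolates the node values exactly while staying globally smooth between them, which is the property the cascade stacks layer by layer.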
Observational constraints on f(Q,T) gravity from the mass-radius relation and stability of compact stars
S. K. Maurya, Abdul Aziz, Ksh. Newton Singh, G. Mustafa, Y. Sekhmani, Saibal Ray
https://arxiv.org/abs/2510.06003
Beyond Grid-Locked Voxels: Neural Response Functions for Continuous Brain Encoding
Haomiao Chen, Keith W Jamison, Mert R. Sabuncu, Amy Kuceyeski
https://arxiv.org/abs/2510.07342
Riccati-ZORO: An efficient algorithm for heuristic online optimization of internal feedback laws in robust and stochastic model predictive control
Florian Messerer, Yunfan Gao, Jonathan Frey, Moritz Diehl
https://arxiv.org/abs/2511.10473 https://arxiv.org/pdf/2511.10473 https://arxiv.org/html/2511.10473
arXiv:2511.10473v1 Announce Type: new
Abstract: We present Riccati-ZORO, an algorithm for tube-based optimal control problems (OCPs). Tube OCPs predict a tube of trajectories in order to capture predictive uncertainty. The tube induces a constraint tightening via additional backoff terms. This backoff can significantly affect the performance, and thus implicitly defines a cost of uncertainty. Optimizing the feedback law used to predict the tube can significantly reduce the backoffs, but its online computation is challenging.
Riccati-ZORO jointly optimizes the nominal trajectory and uncertainty tube based on a heuristic uncertainty cost design. The algorithm alternates between two subproblems: (i) a nominal OCP with fixed backoffs, (ii) an unconstrained tube OCP, which optimizes the feedback gains for a fixed nominal trajectory. For the tube optimization, we propose a cost function informed by the proximity of the nominal trajectory to constraints, prioritizing reduction of the corresponding backoffs. These ideas are developed in detail for ellipsoidal tubes under linear state feedback. In this case, the decomposition into the two subproblems yields a substantial reduction of the computational complexity with respect to the state dimension from $\mathcal{O}(n_x^6)$ to $\mathcal{O}(n_x^3)$, i.e., the complexity of a nominal OCP.
We investigate the algorithm in numerical experiments, and provide two open-source implementations: a prototyping version in CasADi and a high-performance implementation integrated into the acados OCP solver.
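The backoff mechanism can be sketched for an ellipsoidal tube under linear state feedback. The dynamics, disturbance covariance, and gains below are illustrative, and this omits the alternating optimization that Riccati-ZORO actually performs:

```python
import numpy as np

# Ellipsoidal tube under linear state feedback: propagate the shape matrix P
# and tighten a halfspace constraint on x1 by a backoff proportional to the
# tube radius in the constraint direction. All numbers are illustrative.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double integrator
B = np.array([[0.005], [0.1]])
W = 1e-4 * np.eye(2)                     # per-step disturbance covariance
c = np.array([1.0, 0.0])                 # constraint direction
beta = 2.0                               # confidence scaling

def backoffs(K, steps=20):
    P = np.zeros((2, 2))
    Acl = A + B @ K
    out = []
    for _ in range(steps):
        P = Acl @ P @ Acl.T + W          # tube propagation under feedback K
        out.append(beta * np.sqrt(c @ P @ c))
    return np.array(out)

# A stabilizing gain shrinks the tube, and hence the backoffs, relative to
# predicting with no feedback at all.
open_loop = backoffs(np.zeros((1, 2)))
closed_loop = backoffs(np.array([[-2.0, -2.0]]))
```

Smaller backoffs mean looser tightened constraints, which is why optimizing K pays off; Riccati-ZORO's contribution is doing that optimization cheaply online.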
Flux confinement-deconfinement transition of dimer-loop models on three-dimensional bipartite lattices
Souvik Kundu, Kedar Damle
https://arxiv.org/abs/2510.11607
DeepEN: Personalized Enteral Nutrition for Critically Ill Patients using Deep Reinforcement Learning
Daniel Jason Tan, Jiayang Chen, Dilruk Perera, Kay Choong See, Mengling Feng
https://arxiv.org/abs/2510.08350
Scalar-Tensor Symmetric Teleparallel Gravity: Reconstruct the Cosmological History with a Steep Potential
Ghulam Murtaza, Avik De, Andronikos Paliathanasis
https://arxiv.org/abs/2510.03042
ZeroShotOpt: Towards Zero-Shot Pretrained Models for Efficient Black-Box Optimization
Jamison Meindl, Yunsheng Tian, Tony Cui, Veronika Thost, Zhang-Wei Hong, Johannes Dürholt, Jie Chen, Wojciech Matusik, Mina Konaković Luković
https://arxiv.org/abs/2510.03051
Interplay of order and disorder in two-dimensional critical systems with mixed boundary conditions
E. Eisenriegler
https://arxiv.org/abs/2510.04238
The Non-Attainment Phenomenon in Robust SOCPs
Vinh Nguyen
https://arxiv.org/abs/2510.00318 https://arxiv.org/pdf/2510.00318
The sharpness of the quark-hadron transition and the properties of hybrid stars
M. B. Albino, R. Fariello, G. Lugones, F. S. Navarra
https://arxiv.org/abs/2510.02053