2025-10-08 10:46:49
Quantifying the Accuracy-Interpretability Trade-Off in Concept-Based Sidechannel Models
David Debot, Giuseppe Marra
https://arxiv.org/abs/2510.05670
DeGuV: Depth-Guided Visual Reinforcement Learning for Generalization and Interpretability in Manipulation
Tien Pham, Xinyun Chi, Khang Nguyen, Manfred Huber, Angelo Cangelosi
https://arxiv.org/abs/2509.04970
Sparse-Group Factor Analysis for High-Dimensional Time Series
Xin Wang, Xialu Liu
https://arxiv.org/abs/2510.05370 https://arxiv.org/pdf/2510.05370
Visual Representations inside the Language Model
Benlin Liu, Amita Kamath, Madeleine Grunde-McLaughlin, Winson Han, Ranjay Krishna
https://arxiv.org/abs/2510.04819
Mechanistic Interpretability of Code Correctness in LLMs via Sparse Autoencoders
Kriz Tahimic, Charibeth Cheng
https://arxiv.org/abs/2510.02917
Replaced article(s) found for cs.AI. https://arxiv.org/list/cs.AI/new
[4/6]:
- Cross-Document Cross-Lingual NLI via RST-Enhanced Graph Fusion and Interpretability Prediction
Mengying Yuan, Wenhao Wang, Zixuan Wang, Yujie Huang, Kangli Wei, Fei Li, Chong Teng, Donghong Ji
CWEFS: Brain volume conduction effects inspired channel-wise EEG feature selection for multi-dimensional emotion recognition
Xueyuan Xu, Wenjia Dong, Fulin Wei, Li Zhuo
https://arxiv.org/abs/2508.05228
Optimal Regularization Under Uncertainty: Distributional Robustness and Convexity Constraints
Oscar Leong, Eliza O'Reilly, Yong Sheng Soh
https://arxiv.org/abs/2510.03464
Soft Frequency Disentanglement for Neural Audio Codecs (Désentrelacement Fréquentiel Doux pour les Codecs Audio Neuronaux)
Benoît Giniès, Xiaoyu Bie, Olivier Fercoq, Gaël Richard
https://arxiv.org/abs/2510.03741
KGRAG-SC: Knowledge Graph RAG-Assisted Semantic Communication
Dayu Fan, Rui Meng, Song Gao, Xiaodong Xu
https://arxiv.org/abs/2509.04801
Interpreting anomaly detection of SDSS spectra
Edgar Ortiz Manrique, Médéric Boquien
https://arxiv.org/abs/2510.05235
Enhancing Interpretability and Effectiveness in Recommendation with Numerical Features via Learning to Contrast the Counterfactual samples
Xiaoxiao Xu, Hao Wu, Wenhui Yu, Lantao Hu, Peng Jiang, Kun Gai
https://arxiv.org/abs/2509.03187
Atlas-free Brain Network Transformer
Shuai Huang, Xuan Kan, James J. Lah, Deqiang Qiu
https://arxiv.org/abs/2510.03306 https://arxiv.org/pdf/2510.03306
Identifying Exoplanets with Deep Learning: A CNN and RNN Classifier for Kepler DR25 and Candidate Vetting
Bibin Thomas, Vittal Bhat M, Salman Arafath Mohammed, Abdul Wase Mohammed, Adis Abebaw Dessalegn, Mohit Mittal
https://arxiv.org/abs/2509.04793
TopInG: Topologically Interpretable Graph Learning via Persistent Rationale Filtration
Cheng Xin, Fan Xu, Xin Ding, Jie Gao, Jiaxin Ding
https://arxiv.org/abs/2510.05102
High-Resolution Global Land Surface Temperature Retrieval via a Coupled Mechanism-Machine Learning Framework
Tian Xie, Huanfeng Shen, Menghui Jiang, Juan-Carlos Jiménez-Muñoz, José A. Sobrino, Huifang Li, Chao Zeng
https://arxiv.org/abs/2509.04991
Refusal Falls off a Cliff: How Safety Alignment Fails in Reasoning?
Qingyu Yin, Chak Tou Leong, Linyi Yang, Wenxuan Huang, Wenjie Li, Xiting Wang, Jaehong Yoon, YunXing, XingYu, Jinjin Gu
https://arxiv.org/abs/2510.06036
Amplitude-based Input Attribution in Quantum Learning via Integrated Gradients
Nicholas S. DiBrita, Jason Han, Younghyun Cho, Hengrui Luo, Tirthak Patel
https://arxiv.org/abs/2510.02497
Combining feature-based approaches with graph neural networks and symbolic regression for synergistic performance and interpretability
Rogério Almeida Gouvêa, Pierre-Paul De Breuck, Tatiane Pretto, Gian-Marco Rignanese, Marcos José Leite dos Santos
https://arxiv.org/abs/2509.03547
EmbodiedCoder: Parameterized Embodied Mobile Manipulation via Modern Coding Model
Zefu Lin, Rongxu Cui, Chen Hanning, Xiangyu Wang, Junjia Xu, Xiaojuan Jin, Chen Wenbo, Hui Zhou, Lue Fan, Wenling Li, Zhaoxiang Zhang
https://arxiv.org/abs/2510.06207
Teaching Machines to Speak Using Articulatory Control
Akshay Anand, Chenxu Guo, Cheol Jun Cho, Jiachen Lian, Gopala Anumanchipalli
https://arxiv.org/abs/2510.05619
Exact and Heuristic Algorithms for Constrained Biclustering
Antonio M. Sudoso
https://arxiv.org/abs/2508.05493 https://arxiv.org/pdf/2508.05493
Soft Disentanglement in Frequency Bands for Neural Audio Codecs
Benoît Giniès, Xiaoyu Bie, Olivier Fercoq, Gaël Richard
https://arxiv.org/abs/2510.03735
Learning from Failures: Understanding LLM Alignment through Failure-Aware Inverse RL
Nyal Patel, Matthieu Bou, Arjun Jagota, Satyapriya Krishna, Sonali Parbhoo
https://arxiv.org/abs/2510.06092
Beyond Regularization: Inherently Sparse Principal Component Analysis
Jan O. Bauer
https://arxiv.org/abs/2510.03729 https://arxiv.org/pdf/2510.03729
An Approach to Grounding AI Model Evaluations in Human-derived Criteria
Sasha Mitts
https://arxiv.org/abs/2509.04676 https://arxiv.org/pdf/2509.04676
A Comprehensive Survey on Trustworthiness in Reasoning with Large Language Models
Yanbo Wang, Yongcan Yu, Jian Liang, Ran He
https://arxiv.org/abs/2509.03871
SAE-RNA: A Sparse Autoencoder Model for Interpreting RNA Language Model Representations
Taehan Kim, Sangdae Nam
https://arxiv.org/abs/2510.02734
Deep Reinforcement Learning for Ranking Utility Tuning in the Ad Recommender System at Pinterest
Xiao Yang, Mehdi Ben Ayed, Longyu Zhao, Fan Zhou, Yuchen Shen, Abe Engle, Jinfeng Zhuang, Ling Leng, Jiajing Xu, Charles Rosenberg, Prathibha Deshikachar
https://arxiv.org/abs/2509.05292
Sparse Deep Additive Model with Interactions: Enhancing Interpretability and Predictability
Yi-Ting Hung, Li-Hsiang Lin, Vince D. Calhoun
https://arxiv.org/abs/2509.23068
Statistical Crime Linkage: Evaluating approaches within the Covenant for Using AI in Policing
Nathan A. Judd, Amy V. Tansell, Benjamin Costello, Liam Leonard, Jessica Woodhams, Rowland G. Seymour
https://arxiv.org/abs/2510.03730
QDeepGR4J: Quantile-based ensemble of deep learning and GR4J hybrid rainfall-runoff models for extreme flow prediction with uncertainty quantification
Arpit Kapoor, Rohitash Chandra
https://arxiv.org/abs/2510.05453
Uncertainty-Aware Concept Bottleneck Models with Enhanced Interpretability
Haifei Zhang, Patrick Barry, Eduardo Brandao
https://arxiv.org/abs/2510.00773
Beyond Interpretability: Exploring the Comprehensibility of Adaptive Video Streaming through Large Language Models
Lianchen Jia, Chaoyang Li, Ziqi Yuan, Jiahui Chen, Tianchi Huang, Jiangchuan Liu, Lifeng Sun
https://arxiv.org/abs/2508.16448
On the Optimization of Methods for Establishing Well-Connected Communities
Mohammad Dindoost, Oliver Alvarado Rodriguez, Bartosz Bryg, Minhyuk Park, George Chacko, Tandy Warnow, David A. Bader
https://arxiv.org/abs/2509.02590
An Analysis of the New EU AI Act and A Proposed Standardization Framework for Machine Learning Fairness
Mike Teodorescu, Yongxu Sun, Haren N. Bhatia, Christos Makridis
https://arxiv.org/abs/2510.01281
GFSR-Net: Guided Focus via Segment-Wise Relevance Network for Interpretable Deep Learning in Medical Imaging
Jhonatan Contreras, Thomas Bocklitz
https://arxiv.org/abs/2510.01919
Protocode: Prototype-Driven Interpretability for Code Generation in LLMs
Krishna Vamshi Bodla, Haizhao Yang
https://arxiv.org/abs/2509.25247
Reduce-Rank Matrix Integer-Valued Autoregressive Model
Kaiyan Cui, Tianyun Guo, Suping Wang
https://arxiv.org/abs/2509.03338
Comparative Field Deployment of Reinforcement Learning and Model Predictive Control for Residential HVAC
Ozan Baris Mulayim, Elias N. Pergantis, Levi D. Reyes Premer, Bingqing Chen, Guannan Qu, Kevin J. Kircher, Mario Bergés
https://arxiv.org/abs/2510.01475
SpliDT: Partitioned Decision Trees for Scalable Stateful Inference at Line Rate
Murayyiam Parvez, Annus Zulfiqar, Roman Beltiukov, Shir Landau Feibish, Walter Willinger, Arpit Gupta, Muhammad Shahbaz
https://arxiv.org/abs/2509.00397
Smoothing-Based Conformal Prediction for Balancing Efficiency and Interpretability
Mingyi Zheng, Hongyu Jiang, Yizhou Lu, Jiaye Teng
https://arxiv.org/abs/2509.22529
Backdoor Attribution: Elucidating and Controlling Backdoor in Language Models
Miao Yu, Zhenhong Zhou, Moayad Aloqaily, Kun Wang, Biwei Huang, Stephen Wang, Yueming Jin, Qingsong Wen
https://arxiv.org/abs/2509.21761
Commutative algebra neural network reveals genetic origins of diseases
JunJie Wee, Faisal Suwayyid, Mushal Zia, Hongsong Feng, Yuta Hozumi, Guo-Wei Wei
https://arxiv.org/abs/2509.26566
Efficient Sketching and Nearest Neighbor Search Algorithms for Sparse Vector Sets
Sebastian Bruch, Franco Maria Nardini, Cosimo Rulli, Rossano Venturini
https://arxiv.org/abs/2509.24815
eDIF: A European Deep Inference Fabric for Remote Interpretability of LLM
Irma Heithoff, Marc Guggenberger, Sandra Kalogiannis, Susanne Mayer, Fabian Maag, Sigurd Schacht, Carsten Lanquillon
https://arxiv.org/abs/2508.10553
The Loss Kernel: A Geometric Probe for Deep Learning Interpretability
Maxwell Adam, Zach Furman, Jesse Hoogland
https://arxiv.org/abs/2509.26537
A Foundation Model for Chest X-ray Interpretation with Grounded Reasoning via Online Reinforcement Learning
Qika Lin, Yifan Zhu, Bin Pu, Ling Huang, Haoran Luo, Jingying Ma, Zhen Peng, Tianzhe Zhao, Fangzhi Xu, Jian Zhang, Kai He, Zhonghong Ou, Swapnil Mishra, Mengling Feng
https://arxiv.org/abs/2509.03906
DRetNet: A Novel Deep Learning Framework for Diabetic Retinopathy Diagnosis
Idowu Paul Okuwobi, Jingyuan Liu, Jifeng Wan, Jiaojiao Jiang
https://arxiv.org/abs/2509.01072
Beyond Transcription: Mechanistic Interpretability in ASR
Neta Glazer, Yael Segal-Feldman, Hilit Segev, Aviv Shamsian, Asaf Buchnick, Gill Hetz, Ethan Fetaya, Joseph Keshet, Aviv Navon
https://arxiv.org/abs/2508.15882
Bayesian Additive Regression Trees for functional ANOVA model
Seokhun Park, Insung Kong, Yongdai Kim
https://arxiv.org/abs/2509.03317
AutoDrive-R²: Incentivizing Reasoning and Self-Reflection Capacity for VLA Model in Autonomous Driving
Zhenlong Yuan, Jing Tang, Jinguo Luo, Rui Chen, Chengxuan Qian, Lei Sun, Xiangxiang Chu, Yujun Cai, Dapeng Zhang, Shuo Li
https://arxiv.org/abs/2509.01944
Assessing the Noise Robustness of Class Activation Maps: A Framework for Reliable Model Interpretability
Syamantak Sarkar, Revoti P. Bora, Bhupender Kaushal, Sudhish N George, Kiran Raja
https://arxiv.org/abs/2508.18154
Interpreting Language Models Through Concept Descriptions: A Survey
Nils Feldhus, Laura Kopf
https://arxiv.org/abs/2510.01048
Open Opportunities in AI Safety, Alignment, and Ethics (AI SAE)
Dylan Waldner
https://arxiv.org/abs/2509.24065 https://arxiv.org/pdf/2509.24065
Machine Intelligence on the Edge: Interpretable Cardiac Pattern Localisation Using Reinforcement Learning
Haozhe Tian, Qiyu Rao, Nina Moutonnet, Pietro Ferraro, Danilo Mandic
https://arxiv.org/abs/2508.21652
Analyzing Latent Concepts in Code Language Models
Arushi Sharma, Vedant Pungliya, Christopher J. Quinn, Ali Jannesari
https://arxiv.org/abs/2510.00476
Interpretable Clustering with Adaptive Heterogeneous Causal Structure Learning in Mixed Observational Data
Wenrui Li, Qinghao Zhang, Xiaowo Wang
https://arxiv.org/abs/2509.04415
Architecturally Constrained Solutions to Ill-Conditioned Problems in QUBIC
Leonora Kardum
https://arxiv.org/abs/2510.00090 https://arxiv.org/pdf/2510.00090
Interpretable Scalar-on-Image Linear Regression Models via the Generalized Dantzig Selector
Sijia Liao, Xiaoxiao Sun, Ning Hao, Hao Helen Zhang
https://arxiv.org/abs/2508.20278
A Neuro-Fuzzy System for Interpretable Long-Term Stock Market Forecasting
Miha Ožbot, Igor Škrjanc, Vitomir Štruc
https://arxiv.org/abs/2510.00960
V-SEAM: Visual Semantic Editing and Attention Modulating for Causal Interpretability of Vision-Language Models
Qidong Wang, Junjie Hu, Ming Jiang
https://arxiv.org/abs/2509.14837
AIM: Amending Inherent Interpretability via Self-Supervised Masking
Eyad Alshami, Shashank Agnihotri, Bernt Schiele, Margret Keuper
https://arxiv.org/abs/2508.11502
DPsurv: Dual-Prototype Evidential Fusion for Uncertainty-Aware and Interpretable Whole-Slide Image Survival Prediction
Yucheng Xing, Ling Huang, Jingying Ma, Ruping Hong, Jiangdong Qiu, Pei Liu, Kai He, Huazhu Fu, Mengling Feng
https://arxiv.org/abs/2510.00053
Bayesian Neural Networks for Functional ANOVA model
Seokhun Park, Choeun Kim, Jihu Lee, Yunseop Shin, Insung Kong, Yongdai Kim
https://arxiv.org/abs/2510.00545
Typed Chain-of-Thought: A Curry-Howard Framework for Verifying LLM Reasoning
Elija Perrier
https://arxiv.org/abs/2510.01069
EvolveSignal: A Large Language Model Powered Coding Agent for Discovering Traffic Signal Control Algorithms
Leizhen Wang, Peibo Duan, Hao Wang, Yue Wang, Jian Xu, Nan Zheng, Zhenliang Ma
https://arxiv.org/abs/2509.03335
Constrained Co-evolutionary Metamorphic Differential Testing for Autonomous Systems with an Interpretability Approach
Hossein Yousefizadeh, Shenghui Gu, Lionel C. Briand, Ali Nasr
https://arxiv.org/abs/2509.16478
Learning Agile Gate Traversal via Analytical Optimal Policy Gradient
Tianchen Sun, Bingheng Wang, Longbin Tang, Yichao Gao, Lin Zhao
https://arxiv.org/abs/2508.21592
Initialization Schemes for Kolmogorov-Arnold Networks: An Empirical Study
Spyros Rigas, Dhruv Verma, Georgios Alexandridis, Yixuan Wang
https://arxiv.org/abs/2509.03417
Unfolding Framework with Complex-Valued Deformable Attention for High-Quality Computer-Generated Hologram Generation
Haomiao Zhang, Zhangyuan Li, Yanling Piao, Zhi Li, Xiaodong Wang, Miao Cao, Xiongfei Su, Qiang Song, Xin Yuan
https://arxiv.org/abs/2508.21657
MUSE-Explainer: Counterfactual Explanations for Symbolic Music Graph Classification Models
Baptiste Hilaire, Emmanouil Karystinaios, Gerhard Widmer
https://arxiv.org/abs/2509.26521
Latent Thinking Optimization: Your Latent Reasoning Language Model Secretly Encodes Reward Signals in its Latent Thoughts
Hanwen Du, Yuxin Dong, Xia Ning
https://arxiv.org/abs/2509.26314
LINKER: Learning Interactions Between Functional Groups and Residues With Chemical Knowledge-Enhanced Reasoning and Explainability
Phuc Pham, Viet Thanh Duy Nguyen, Truong-Son Hy
https://arxiv.org/abs/2509.03425
Interpret, prune and distill Donut: towards lightweight VLMs for VQA on document
Adnan Ben Mansour, Ayoub Karine, David Naccache
https://arxiv.org/abs/2509.26235
A more interpretable regression model for count data with excess of zeros
Gustavo H. A. Pereira, Jeremias Leão, Manoel Santos-Neto, Jianwen Cai
https://arxiv.org/abs/2509.24916
Hyperdimensional Probe: Decoding LLM Representations via Vector Symbolic Architectures
Marco Bronzini, Carlo Nicolini, Bruno Lepri, Jacopo Staiano, Andrea Passerini
https://arxiv.org/abs/2509.25045
Medical priority fusion: achieving dual optimization of sensitivity and interpretability in NIPT anomaly detection
Xiuqi Ge, Zhibo Yao, Yaosong Du
https://arxiv.org/abs/2509.17924
Improving Large Language Models Function Calling and Interpretability via Guided-Structured Templates
Hy Dang, Tianyi Liu, Zhuofeng Wu, Jingfeng Yang, Haoming Jiang, Tao Yang, Pei Chen, Zhengyang Wang, Helen Wang, Huasheng Li, Bing Yin, Meng Jiang
https://arxiv.org/abs/2509.18076
UML-CoT: Structured Reasoning and Planning with Unified Modeling Language for Robotic Room Cleaning
Hongyu Chen, Guangrun Wang
https://arxiv.org/abs/2509.22628
Behind the Scenes: Mechanistic Interpretability of LoRA-adapted Whisper for Speech Emotion Recognition
Yujian Ma, Jinqiu Sang, Ruizhe Li
https://arxiv.org/abs/2509.08454
Model Interpretability and Rationale Extraction by Input Mask Optimization
Marc Brinner, Sina Zarriess
https://arxiv.org/abs/2508.11388
Explaining multimodal LLMs via intra-modal token interactions
Jiawei Liang, Ruoyu Chen, Xianghao Jiao, Siyuan Liang, Shiming Liu, Qunli Zhang, Zheng Hu, Xiaochun Cao
https://arxiv.org/abs/2509.22415
Interpretability as Alignment: Making Internal Understanding a Design Principle
Aadit Sengupta, Pratinav Seth, Vinay Kumar Sankarapu
https://arxiv.org/abs/2509.08592
Privacy Preserved Federated Learning with Attention-Based Aggregation for Biometric Recognition
Kassahun Azezew, Minyechil Alehegn, Tsega Asresa, Bitew Mekuria, Tizazu Bayh, Ayenew Kassie, Amsalu Tesema, Animut Embiyale
https://arxiv.org/abs/2510.01113
Where MLLMs Attend and What They Rely On: Explaining Autoregressive Token Generation
Ruoyu Chen, Xiaoqing Guo, Kangwei Liu, Siyuan Liang, Shiming Liu, Qunli Zhang, Hua Zhang, Xiaochun Cao
https://arxiv.org/abs/2509.22496
REMA: A Unified Reasoning Manifold Framework for Interpreting Large Language Model
Bo Li, Guanzhi Deng, Ronghao Chen, Junrong Yue, Shuo Zhang, Qinghua Zhao, Linqi Song, Lijie Wen
https://arxiv.org/abs/2509.22518
Analysing Moral Bias in Finetuned LLMs through Mechanistic Interpretability
Bianca Raimondi, Daniela Dalbagno, Maurizio Gabbrielli
https://arxiv.org/abs/2510.12229
CLAIRE: A Dual Encoder Network with RIFT Loss and Phi-3 Small Language Model Based Interpretability for Cross-Modality Synthetic Aperture Radar and Optical Land Cover Segmentation
Debopom Sutradhar, Arefin Ittesafun Abian, Mohaimenul Azam Khan Raiaan, Reem E. Mohamed, Sheikh Izzal Azid, Sami Azam
https://arxiv.org/abs/2509.11952
Interpretable by AI Mother Tongue: Native Symbolic Reasoning in Neural Models
Hung Ming Liu
https://arxiv.org/abs/2508.18988
Towards Understanding the Shape of Representations in Protein Language Models
Kosio Beshkov, Anders Malthe-Sørenssen
https://arxiv.org/abs/2509.24895
Tracking World States with Language Models: State-Based Evaluation Using Chess
Romain Harang, Jason Naradowsky, Yaswitha Gujju, Yusuke Miyao
https://arxiv.org/abs/2508.19851
RLBFF: Binary Flexible Feedback to bridge between Human Feedback & Verifiable Rewards
Zhilin Wang, Jiaqi Zeng, Olivier Delalleau, Ellie Evans, Daniel Egert, Hoo-Chang Shin, Felipe Soares, Yi Dong, Oleksii Kuchaiev
https://arxiv.org/abs/2509.21319
(Sometimes) Less is More: Mitigating the Complexity of Rule-based Representation for Interpretable Classification
Luca Bergamin, Roberto Confalonieri, Fabio Aiolli
https://arxiv.org/abs/2509.22384
MOSS-ChatV: Reinforcement Learning with Process Reasoning Reward for Video Temporal Reasoning
Sicheng Tao, Jungang Li, Yibo Yan, Junyan Zhang, Yubo Gao, Hanqian Li, ShuHang Xun, Yuxuan Fan, Hong Chen, Jianxiang He, Xuming Hu
https://arxiv.org/abs/2509.21113
Enhancing Credit Risk Prediction: A Meta-Learning Framework Integrating Baseline Models, LASSO, and ECOC for Superior Accuracy
Haibo Wang, Lutfu S. Sua, Jun Huang, Figen Balo, Burak Dolar
https://arxiv.org/abs/2509.22381
Interpretable Decision-Making for End-to-End Autonomous Driving
Mona Mirzaie, Bodo Rosenhahn
https://arxiv.org/abs/2508.18898
Think as a Doctor: An Interpretable AI Approach for ICU Mortality Prediction
Qingwen Li, Xiaohang Zhao, Xiao Han, Hailiang Huang, Lanjuan Liu
https://arxiv.org/abs/2510.11745