
2025-08-18 09:54:10
AIM: Amending Inherent Interpretability via Self-Supervised Masking
Eyad Alshami, Shashank Agnihotri, Bernt Schiele, Margret Keuper
https://arxiv.org/abs/2508.11502
V-SEAM: Visual Semantic Editing and Attention Modulating for Causal Interpretability of Vision-Language Models
Qidong Wang, Junjie Hu, Ming Jiang
https://arxiv.org/abs/2509.14837
Model Interpretability and Rationale Extraction by Input Mask Optimization
Marc Brinner, Sina Zarriess
https://arxiv.org/abs/2508.11388
Towards Faithful Class-level Self-explainability in Graph Neural Networks by Subgraph Dependencies
Fanzhen Liu, Xiaoxiao Ma, Jian Yang, Alsharif Abuadbba, Kristen Moore, Surya Nepal, Cecile Paris, Quan Z. Sheng, Jia Wu
https://arxiv.org/abs/2508.11513
RadarQA: Multi-modal Quality Analysis of Weather Radar Forecasts
Xuming He, Zhiyuan You, Junchao Gong, Couhua Liu, Xiaoyu Yue, Peiqin Zhuang, Wenlong Zhang, Lei Bai
https://arxiv.org/abs/2508.12291
Neural Earthquake Forecasting with Minimal Information: Limits, Interpretability, and the Role of Markov Structure
Jonas Koehler, Nishtha Srivastava, Kai Zhou, Claudia Quinteros, Johannes Faber, F. Alejandro Nava
https://arxiv.org/abs/2509.14661
Learning Mechanistic Subtypes of Neurodegeneration with a Physics-Informed Variational Autoencoder Mixture Model
Sanduni Pinnawala, Annabelle Hartanto, Ivor J. A. Simpson, Peter A. Wijeratne
https://arxiv.org/abs/2509.15124
RRRA: Resampling and Reranking through a Retriever Adapter
Bongsu Kim
https://arxiv.org/abs/2508.11670 https://arxiv.org/pdf/2508.11670
Deploying UDM Series in Real-Life Stuttered Speech Applications: A Clinical Evaluation Framework
Eric Zhang (SSHealth Team, AI for Healthcare Laboratory), Li Wei (SSHealth Team, AI for Healthcare Laboratory), Sarah Chen (SSHealth Team, AI for Healthcare Laboratory), Michael Wang (SSHealth Team, AI for Healthcare Laboratory)
https://arxiv.o…
KAN-HAR: A Human activity recognition based on Kolmogorov-Arnold Network
Mohammad Alikhani
https://arxiv.org/abs/2508.11186
Rest2Visual: Predicting Visually Evoked fMRI from Resting-State Scans
Chuyang Zhou, Ziao Ji, Daochang Liu, Dongang Wang, Chenyu Wang, Chang Xu
https://arxiv.org/abs/2509.13612
DF-LLaVA: Unlocking MLLM's potential for Synthetic Image Detection via Prompt-Guided Knowledge Injection
Zhuokang Shen, Kaisen Zhang, Bohan Jia, Yuan Fang, Zhou Yu, Shaohui Lin
https://arxiv.org/abs/2509.14957
Floating-Body Hydrodynamic Neural Networks
Tianshuo Zhang, Wenzhe Zhai, Rui Yann, Jia Gao, He Cao, Xianglei Xing
https://arxiv.org/abs/2509.13783
From Sea to System: Exploring User-Centered Explainable AI for Maritime Decision Support
Doreen Jirak, Pieter Maes, Armeen Saroukanoff, Dirk van Rooy
https://arxiv.org/abs/2509.15084
Fairness-Aware and Interpretable Policy Learning
Nora Bearth, Michael Lechner, Jana Mareckova, Fabian Muny
https://arxiv.org/abs/2509.12119
D4PM: A Dual-branch Driven Denoising Diffusion Probabilistic Model with Joint Posterior Diffusion Sampling for EEG Artifacts Removal
Feixue Shao, Xueyu Liu, Yongfei Wu, Jianbo Lu, Guiying Yan, Weihua Yang
https://arxiv.org/abs/2509.14302
eDIF: A European Deep Inference Fabric for Remote Interpretability of LLM
Irma Heithoff, Marc Guggenberger, Sandra Kalogiannis, Susanne Mayer, Fabian Maag, Sigurd Schacht, Carsten Lanquillon
https://arxiv.org/abs/2508.10553
Checkmate: interpretable and explainable RSVQA is the endgame
Lucrezia Tosato, Christel Tartini Chappuis, Syrielle Montariol, Flora Weissgerber, Sylvain Lobry, Devis Tuia
https://arxiv.org/abs/2508.13086
RationAnomaly: Log Anomaly Detection with Rationality via Chain-of-Thought and Reinforcement Learning
Song Xu, Yilun Liu, Minggui He, Mingchen Dai, Ziang Chen, Chunguang Zhao, Jingzhou Du, Shimin Tao, Weibin Meng, Shenglin Zhang, Yongqian Sun, Boxing Chen, Daimeng Wei
https://arxiv.org/abs/2509.14693
Multi-Sensory Cognitive Computing for Learning Population-level Brain Connectivity
Mayssa Soussia, Mohamed Ali Mahjoub, Islem Rekik
https://arxiv.org/abs/2508.11436
ImagiDrive: A Unified Imagination-and-Planning Framework for Autonomous Driving
Jingyu Li, Bozhou Zhang, Xin Jin, Jiankang Deng, Xiatian Zhu, Li Zhang
https://arxiv.org/abs/2508.11428
From Distributional to Quantile Neural Basis Models: the case of Electricity Price Forecasting
Alessandro Brusaferri, Danial Ramin, Andrea Ballarino
https://arxiv.org/abs/2509.14113
Trading-R1: Financial Trading with LLM Reasoning via Reinforcement Learning
Yijia Xiao, Edward Sun, Tong Chen, Fang Wu, Di Luo, Wei Wang
https://arxiv.org/abs/2509.11420
Deep Reinforcement Learning with Local Interpretability for Transparent Microgrid Resilience Energy Management
Mohammad Hossein Nejati Amiri, Fawaz Annaz, Mario De Oliveira, Florimond Gueniat
https://arxiv.org/abs/2508.08132
Residual MPC: Blending Reinforcement Learning with GPU-Parallelized Model Predictive Control
Se Hwan Jeon, Ho Jae Lee, Seungwoo Hong, Sangbae Kim
https://arxiv.org/abs/2510.12717
Crosslisted article(s) found for cs.HC. https://arxiv.org/list/cs.HC/new
[1/1]:
- User Perception of Attention Visualizations: Effects on Interpretability Across Evidence-Based Me...
Carvallo, Parra, Brusilovsky, Valdivieso, Rada, Donoso, Araujo
A Novel Study on Intelligent Methods and Explainable AI for Dynamic Malware Analysis
Richa Dasila, Vatsala Upadhyay, Samo Bobek, Abhishek Vaish
https://arxiv.org/abs/2508.10652
Reinforcing Video Reasoning Segmentation to Think Before It Segments
Sitong Gong, Lu Zhang, Yunzhi Zhuge, Xu Jia, Pingping Zhang, Huchuan Lu
https://arxiv.org/abs/2508.11538
Graph Neural Diffusion via Generalized Opinion Dynamics
Asela Hevapathige, Asiri Wijesinghe, Ahad N. Zehmakan
https://arxiv.org/abs/2508.11249
Structured Kernel Regression VAE: A Computationally Efficient Surrogate for GP-VAEs in ICA
Yuan-Hao Wei, Fu-Hao Deng, Lin-Yong Cui, Yan-Jie Sun
https://arxiv.org/abs/2508.09721
Genome-Factory: An Integrated Library for Tuning, Deploying, and Interpreting Genomic Models
Weimin Wu, Xuefeng Song, Yibo Wen, Qinjie Lin, Zhihan Zhou, Jerry Yao-Chieh Hu, Zhong Wang, Han Liu
https://arxiv.org/abs/2509.12266
Mechanistic Interpretability of Code Correctness in LLMs via Sparse Autoencoders
Kriz Tahimic, Charibeth Cheng
https://arxiv.org/abs/2510.02917
Crosslisted article(s) found for q-fin.RM. https://arxiv.org/list/q-fin.RM/new
[1/1]:
- Enhancing ML Models Interpretability for Credit Scoring
Sagi Schwartz, Qinling Wang, Fang Fang
…
Behind the Scenes: Mechanistic Interpretability of LoRA-adapted Whisper for Speech Emotion Recognition
Yujian Ma, Jinqiu Sang, Ruizhe Li
https://arxiv.org/abs/2509.08454
Analysing Moral Bias in Finetuned LLMs through Mechanistic Interpretability
Bianca Raimondi, Daniela Dalbagno, Maurizio Gabbrielli
https://arxiv.org/abs/2510.12229
Targeted Sequential Pattern Mining with High Average Utility
Kai Cao, Yucong Duan, Wensheng Gan
https://arxiv.org/abs/2510.10115
Dynamic Local Average Treatment Effects in Time Series
Alessandro Casini, Adam McCloskey, Luca Rolla, Raimondo Pala
https://arxiv.org/abs/2509.12985
CLAIRE: A Dual Encoder Network with RIFT Loss and Phi-3 Small Language Model Based Interpretability for Cross-Modality Synthetic Aperture Radar and Optical Land Cover Segmentation
Debopom Sutradhar, Arefin Ittesafun Abian, Mohaimenul Azam Khan Raiaan, Reem E. Mohamed, Sheikh Izzal Azid, Sami Azam
https://arxiv.org/abs/2509.11952
Sparse-Group Factor Analysis for High-Dimensional Time Series
Xin Wang, Xialu Liu
https://arxiv.org/abs/2510.05370 https://arxiv.org/pdf/2510.05370
A Triad of Networks and a Triad of Fusions for the Other Climate Crisis
Emilio Porcu, Tobia Filosi, Horst Simon
https://arxiv.org/abs/2510.09728
Do Natural Language Descriptions of Model Activations Convey Privileged Information?
Millicent Li, Alberto Mario Ceballos Arroyo, Giordano Rogers, Naomi Saphra, Byron C. Wallace
https://arxiv.org/abs/2509.13316
Approximate combinatorial optimization with Rydberg atoms: the barrier of interpretability
Christian de Correc, Thomas Ayral, Corentin Bertrand
https://arxiv.org/abs/2507.22761
Algorithmic Tradeoffs, Applied NLP, and the State-of-the-Art Fallacy
AJ Alvero, Ruohong Dong, Klint Kanopka, David Lang
https://arxiv.org/abs/2509.08199
Interpretability as Alignment: Making Internal Understanding a Design Principle
Aadit Sengupta, Pratinav Seth, Vinay Kumar Sankarapu
https://arxiv.org/abs/2509.08592
Model-agnostic post-hoc explainability for recommender systems
Irina Arévalo, Jose L Salmeron
https://arxiv.org/abs/2509.10245
DeGuV: Depth-Guided Visual Reinforcement Learning for Generalization and Interpretability in Manipulation
Tien Pham, Xinyun Chi, Khang Nguyen, Manfred Huber, Angelo Cangelosi
https://arxiv.org/abs/2509.04970
FireGNN: Neuro-Symbolic Graph Neural Networks with Trainable Fuzzy Rules for Interpretable Medical Image Classification
Prajit Sengupta, Islem Rekik
https://arxiv.org/abs/2509.10510
Rethinking Human Preference Evaluation of LLM Rationales
Ziang Li, Manasi Ganti, Zixian Ma, Helena Vasconcelos, Qijia He, Ranjay Krishna
https://arxiv.org/abs/2509.11026
Explainable Ensemble Learning for Graph-Based Malware Detection
Hossein Shokouhinejad, Roozbeh Razavi-Far, Griffin Higgins, Ali A Ghorbani
https://arxiv.org/abs/2508.09801
Reduction of motion artifacts from photoplethysmography signals using learned convolutional sparse coding
Giulio Basso, Xi Long, Reinder Haakma, Rik Vullings
https://arxiv.org/abs/2508.10805
Hierarchical Variable Importance with Statistical Control for Medical Data-Based Prediction
Joseph Paillard, Antoine Collas, Denis A. Engemann, Bertrand Thirion
https://arxiv.org/abs/2508.08724
Co-Authoring the Self: A Human-AI Interface for Interest Reflection in Recommenders
Ruixuan Sun, Junyuan Wang, Sanjali Roy, Joseph A. Konstan
https://arxiv.org/abs/2510.08930
Think as a Doctor: An Interpretable AI Approach for ICU Mortality Prediction
Qingwen Li, Xiaohang Zhao, Xiao Han, Hailiang Huang, Lanjuan Liu
https://arxiv.org/abs/2510.11745
Why Bonds Fail Differently? Explainable Multimodal Learning for Multi-Class Default Prediction
Yi Lu, Aifan Ling, Chaoqun Wang, Yaxin Xu
https://arxiv.org/abs/2509.10802
How to Evaluate Medical AI
Ilia Kopanichuk, Petr Anokhin, Vladimir Shaposhnikov, Vladimir Makharev, Ekaterina Tsapieva, Iaroslav Bespalov, Dmitry V. Dylov, Ivan Oseledets
https://arxiv.org/abs/2509.11941
Crosslisted article(s) found for cs.IR. https://arxiv.org/list/cs.IR/new
[1/1]:
- User Perception of Attention Visualizations: Effects on Interpretability Across Evidence-Based Me...
Carvallo, Parra, Brusilovsky, Valdivieso, Rada, Donoso, Araujo
Protocode: Prototype-Driven Interpretability for Code Generation in LLMs
Krishna Vamshi Bodla, Haizhao Yang
https://arxiv.org/abs/2509.25247
Repulsive Mixture Model with Projection Determinantal Point Process
Ziyi Song, Federico Camerlenghi, Weining Shen, Michele Guindani, Mario Beraha
https://arxiv.org/abs/2510.08838
SCDTour: Embedding Axis Ordering and Merging for Interpretable Semantic Change Detection
Taichi Aida, Danushka Bollegala
https://arxiv.org/abs/2509.11818
Causality and Interpretability for Electrical Distribution System faults
Karthik Peddi, Sai Ram Aditya Parisineni, Hemanth Macharla, Mayukha Pal
https://arxiv.org/abs/2508.02524
Audio-Maestro: Enhancing Large Audio-Language Models with Tool-Augmented Reasoning
Kuan-Yi Lee, Tsung-En Lin, Hung-Yi Lee
https://arxiv.org/abs/2510.11454
Foundational theory for optimal decision tree problems. II. Optimal hypersurface decision tree algorithm
Xi He
https://arxiv.org/abs/2509.12057
Empowering LLM Agents with Geospatial Awareness: Toward Grounded Reasoning for Wildfire Response
Yiheng Chen, Lingyao Li, Zihui Ma, Qikai Hu, Yilun Zhu, Min Deng, Runlong Yu
https://arxiv.org/abs/2510.12061
Enhancing Interpretability and Effectiveness in Recommendation with Numerical Features via Learning to Contrast the Counterfactual samples
Xiaoxiao Xu, Hao Wu, Wenhui Yu, Lantao Hu, Peng Jiang, Kun Gai
https://arxiv.org/abs/2509.03187
Risk Map As Middleware: Towards Interpretable Cooperative End-to-end Autonomous Driving for Risk-Aware Planning
Mingyue Lei, Zewei Zhou, Hongchen Li, Jiaqi Ma, Jia Hu
https://arxiv.org/abs/2508.07686
Hybrid Explanation-Guided Learning for Transformer-Based Chest X-Ray Diagnosis
Shelley Zixin Shu, Haozhe Luo, Alexander Poellinger, Mauricio Reyes
https://arxiv.org/abs/2510.12704
Extending the Entropic Potential of Events for Uncertainty Quantification and Decision-Making in Artificial Intelligence
Mark Zilberman
https://arxiv.org/abs/2508.10241
Quantifying the Accuracy-Interpretability Trade-Off in Concept-Based Sidechannel Models
David Debot, Giuseppe Marra
https://arxiv.org/abs/2510.05670
Denoised IPW-Lasso for Heterogeneous Treatment Effect Estimation in Randomized Experiments
Mingqian Guan, Komei Fujita, Naoya Sueishi, Shota Yasui
https://arxiv.org/abs/2510.10527
Sparse Deep Additive Model with Interactions: Enhancing Interpretability and Predictability
Yi-Ting Hung, Li-Hsiang Lin, Vince D. Calhoun
https://arxiv.org/abs/2509.23068
Towards Reliable and Interpretable Document Question Answering via VLMs
Alessio Chen, Simone Giovannini, Andrea Gemelli, Fabio Coppini, Simone Marinai
https://arxiv.org/abs/2509.10129
LaV-CoT: Language-Aware Visual CoT with Multi-Aspect Reward Optimization for Real-World Multilingual VQA
Jing Huang, Zhiya Tan, Shutao Gong, Fanwei Zeng, Jianshu Li
https://arxiv.org/abs/2509.10026
HiCoTraj: Zero-Shot Demographic Reasoning via Hierarchical Chain-of-Thought Prompting from Trajectory
Junyi Xie, Yuankun Jiao, Jina Kim, Yao-Yi Chiang, Lingyi Zhao, Khurram Shafique
https://arxiv.org/abs/2510.12067
Beyond Transcription: Mechanistic Interpretability in ASR
Neta Glazer, Yael Segal-Feldman, Hilit Segev, Aviv Shamsian, Asaf Buchnick, Gill Hetz, Ethan Fetaya, Joseph Keshet, Aviv Navon
https://arxiv.org/abs/2508.15882
Layer-Wise Perturbations via Sparse Autoencoders for Adversarial Text Generation
Huizhen Shu, Xuying Li, Qirui Wang, Yuji Kosuga, Mengqiu Tian, Zhuo Li
https://arxiv.org/abs/2508.10404
Situationally-aware Path Planning Exploiting 3D Scene Graphs
Saad Ejaz, Marco Giberna, Muhammad Shaheer, Jose Andres Millan-Romera, Ali Tourani, Paul Kremer, Holger Voos, Jose Luis Sanchez-Lopez
https://arxiv.org/abs/2508.06283
Exploring Expert Specialization through Unsupervised Training in Sparse Mixture of Experts
Strahinja Nikolic, Ilker Oguz, Demetri Psaltis
https://arxiv.org/abs/2509.10025
From <Answer> to <Think>: Multidimensional Supervision of Reasoning Process for LLM Optimization
Beining Wang, Weihang Su, Hongtao Tian, Tao Yang, Yujia Zhou, Ting Yao, Qingyao Ai, Yiqun Liu
https://arxiv.org/abs/2510.11457
LIA-X: Interpretable Latent Portrait Animator
Yaohui Wang, Di Yang, Xinyuan Chen, Francois Bremond, Yu Qiao, Antitza Dantcheva
https://arxiv.org/abs/2508.09959
The Loss Kernel: A Geometric Probe for Deep Learning Interpretability
Maxwell Adam, Zach Furman, Jesse Hoogland
https://arxiv.org/abs/2509.26537
Towards Perfection: Building Inter-component Mutual Correction for Retinex-based Low-light Image Enhancement
Luyang Cao, Han Xu, Jian Zhang, Lei Qi, Jiayi Ma, Yinghuan Shi, Yang Gao
https://arxiv.org/abs/2508.09009
RADAR: Mechanistic Pathways for Detecting Data Contamination in LLM Evaluation
Ashish Kattamuri, Harshwardhan Fartale, Arpita Vats, Rahul Raja, Ishita Prasad
https://arxiv.org/abs/2510.08931
Prime Implicant Explanations for Reaction Feasibility Prediction
Klaus Weinbauer, Tieu-Long Phan, Peter F. Stadler, Thomas Gärtner, Sagar Malhotra
https://arxiv.org/abs/2510.09226
GraphMERT: Efficient and Scalable Distillation of Reliable Knowledge Graphs from Unstructured Data
Margarita Belova, Jiaxin Xiao, Shikhar Tuli, Niraj K. Jha
https://arxiv.org/abs/2510.09580
Do All Autoregressive Transformers Remember Facts the Same Way? A Cross-Architecture Analysis of Recall Mechanisms
Minyeong Choe, Haehyun Cho, Changho Seo, Hyunil Kim
https://arxiv.org/abs/2509.08778
Uncertainty-Aware Concept Bottleneck Models with Enhanced Interpretability
Haifei Zhang, Patrick Barry, Eduardo Brandao
https://arxiv.org/abs/2510.00773
Functional Groups are All you Need for Chemically Interpretable Molecular Property Prediction
Roshan Balaji, Joe Bobby, Nirav Pravinbhai Bhatt
https://arxiv.org/abs/2509.09619
Lightweight Deep Unfolding Networks with Enhanced Robustness for Infrared Small Target Detection
Jingjing Liu, Yinchao Han, Xianchao Xiu, Jianhua Zhang, Wanquan Liu
https://arxiv.org/abs/2509.08205
ME$^3$-BEV: Mamba-Enhanced Deep Reinforcement Learning for End-to-End Autonomous Driving with BEV-Perception
Siyi Lu, Run Liu, Dongsheng Yang, Lei He
https://arxiv.org/abs/2508.06074
BlackboxNLP-2025 MIB Shared Task: Exploring Ensemble Strategies for Circuit Localization Methods
Philipp Mondorf, Mingyang Wang, Sebastian Gerstner, Ahmad Dawar Hakimi, Yihong Liu, Leonor Veloso, Shijia Zhou, Hinrich Schütze, Barbara Plank
https://arxiv.org/abs/2510.06811
MetaLLMix : An XAI Aided LLM-Meta-learning Based Approach for Hyper-parameters Optimization
Mohammed Tiouti, Mohamed Bal-Ghaoui
https://arxiv.org/abs/2509.09387
A Framework for Inherently Safer AGI through Language-Mediated Active Inference
Bo Wen
https://arxiv.org/abs/2508.05766 https://arxiv.org/pdf/2508.05766
Revisiting Data Attribution for Influence Functions
Hongbo Zhu, Angelo Cangelosi
https://arxiv.org/abs/2508.07297 https://arxiv.org/pdf/2508.07297
DeepGraphLog for Layered Neurosymbolic AI
Adem Kikaj, Giuseppe Marra, Floris Geerts, Robin Manhaeve, Luc De Raedt
https://arxiv.org/abs/2509.07665
Towards Interpretable Deep Neural Networks for Tabular Data
Khawla Elhadri, Jörg Schlötterer, Christin Seifert
https://arxiv.org/abs/2509.08617
Medical priority fusion: achieving dual optimization of sensitivity and interpretability in NIPT anomaly detection
Xiuqi Ge, Zhibo Yao, Yaosong Du
https://arxiv.org/abs/2509.17924
Towards explainable decision support using hybrid neural models for logistic terminal automation
Riccardo D'Elia, Alberto Termine, Francesco Flammini
https://arxiv.org/abs/2509.07577
Scientific Machine Learning with Kolmogorov-Arnold Networks
Salah A. Faroughi, Farinaz Mostajeran, Amin Hamed Mashhadzadeh, Shirko Faroughi
https://arxiv.org/abs/2507.22959
IBN: An Interpretable Bidirectional-Modeling Network for Multivariate Time Series Forecasting with Variable Missing
Shusen Ma, Tianhao Zhang, Qijiu Xia, Yun-Bo Zhao
https://arxiv.org/abs/2509.07725
Introspection in Learned Semantic Scene Graph Localisation
Manshika Charvi Bissessur, Efimia Panagiotaki, Daniele De Martini
https://arxiv.org/abs/2510.07053