2025-10-15 10:24:31
Towards Robust Artificial Intelligence: Self-Supervised Learning Approach for Out-of-Distribution Detection
Wissam Salhab, Darine Ameyed, Hamid Mcheick, Fehmi Jaafar
https://arxiv.org/abs/2510.12713
Hamilton-Jacobi Reachability for Viability Analysis of Constrained Waste-to-Energy Systems under Adversarial Uncertainty
Achraf Bouhmady, Othman Cherkaoui Dekkaki
https://arxiv.org/abs/2510.11396
Mirage Fools the Ear, Mute Hides the Truth: Precise Targeted Adversarial Attacks on Polyphonic Sound Event Detection Systems
Junjie Su, Weifei Jin, Yuxin Cao, Derui Wang, Kai Ye, Jie Hao
https://arxiv.org/abs/2510.02158
DecompGAIL: Learning Realistic Traffic Behaviors with Decomposed Multi-Agent Generative Adversarial Imitation Learning
Ke Guo, Haochen Liu, Xiaojun Wu, Chen Lv
https://arxiv.org/abs/2510.06913
OBJVanish: Physically Realizable Text-to-3D Adv. Generation of LiDAR-Invisible Objects
Bing Li, Wuqi Wang, Yanan Zhang, Jingzheng Li, Haigen Min, Wei Feng, Xingyu Zhao, Jie Zhang, Qing Guo
https://arxiv.org/abs/2510.06952
Position: Human Factors Reshape Adversarial Analysis in Human-AI Decision-Making Systems
Shutong Fan, Lan Zhang, Xiaoyong Yuan
https://arxiv.org/abs/2509.21436
SoK: Systematic analysis of adversarial threats against deep learning approaches for autonomous anomaly detection systems in SDN-IoT networks
Tharindu Lakshan Yasarathna, Nhien-An Le-Khac
https://arxiv.org/abs/2509.26350
Adversarial Social Influence: Modeling Persuasion in Contested Social Networks
Renukanandan Tumu, Cristian Ioan Vasile, Victor Preciado, Rahul Mangharam
https://arxiv.org/abs/2510.01481
LegalSim: Multi-Agent Simulation of Legal Systems for Discovering Procedural Exploits
Sanket Badhe
https://arxiv.org/abs/2510.03405
A Bilevel Optimization Framework for Adversarial Control of Gas Pipeline Operations
Tejaswini Sanjay Katale, Lu Gao, Yunpeng Zhang, Alaa Senouci
https://arxiv.org/abs/2510.02503
VeriGuard: Enhancing LLM Agent Safety via Verified Code Generation
Lesly Miculicich, Mihir Parmar, Hamid Palangi, Krishnamurthy Dj Dvijotham, Mirko Montanari, Tomas Pfister, Long T. Le
https://arxiv.org/abs/2510.05156
A Statistical Method for Attack-Agnostic Adversarial Attack Detection with Compressive Sensing Comparison
Chinthana Wimalasuriya, Spyros Tragoudas
https://arxiv.org/abs/2510.02707
Decoding Deception: Understanding Automatic Speech Recognition Vulnerabilities in Evasion and Poisoning Attacks
Aravindhan G, Yuvaraj Govindarajulu, Parin Shah
https://arxiv.org/abs/2509.22060
Vision Transformers: the threat of realistic adversarial patches
Kasper Cools, Clara Maathuis, Alexander M. van Oers, Claudia S. Hübner, Nikos Deligiannis, Marijke Vandewal, Geert De Cubber
https://arxiv.org/abs/2509.21084
Are Modern Speech Enhancement Systems Vulnerable to Adversarial Attacks?
Rostislav Makarov, Lea Schönherr, Timo Gerkmann
https://arxiv.org/abs/2509.21087
Physics-Constrained Inc-GAN for Tunnel Propagation Modeling from Sparse Line Measurements
Yang Zhou, Haochang Wu, Yunxi Mu, Hao Qin, Xinyue Zhang, Xingqi Zhang
https://arxiv.org/abs/2510.03019
Incentive-Aligned Multi-Source LLM Summaries
Yanchen Jiang, Zhe Feng, Aranyak Mehta
https://arxiv.org/abs/2509.25184
Steerable Adversarial Scenario Generation through Test-Time Preference Alignment
Tong Nie, Yuewen Mei, Yihong Tang, Junlin He, Jie Sun, Haotian Shi, Wei Ma, Jian Sun
https://arxiv.org/abs/2509.20102
Modeling the Attack: Detecting AI-Generated Text by Quantifying Adversarial Perturbations
Lekkala Sai Teja, Annepaka Yadagiri, Sangam Sai Anish, Siva Gopala Krishna Nuthakki, Partha Pakray
https://arxiv.org/abs/2510.02319
Generative Adversarial Networks Applied for Privacy Preservation in Biometric-Based Authentication and Identification
Lubos Mjachky, Ivan Homoliak
https://arxiv.org/abs/2509.20024
The Use of the Simplex Architecture to Enhance Safety in Deep-Learning-Powered Autonomous Systems
Federico Nesti, Niko Salamini, Mauro Marinoni, Giorgio Maria Cicero, Gabriele Serra, Alessandro Biondi, Giorgio Buttazzo
https://arxiv.org/abs/2509.21014
TabVLA: Targeted Backdoor Attacks on Vision-Language-Action Models
Zonghuan Xu, Xiang Zheng, Xingjun Ma, Yu-Gang Jiang
https://arxiv.org/abs/2510.10932
Bridging Batch and Streaming Estimations to System Identification under Adversarial Attacks
Jihun Kim, Javad Lavaei
https://arxiv.org/abs/2509.15794
Observation-Free Attacks on Online Learning to Rank
Sameep Chattopadhyay, Nikhil Karamchandani, Sharayu Mohair
https://arxiv.org/abs/2509.22855
Experimental insights into data augmentation techniques for deep learning-based multimode fiber imaging: limitations and success
Jawaria Maqbool, M. Imran Cheema
https://arxiv.org/abs/2511.19072
arXiv:2511.19072v1 Announce Type: new
Abstract: Multimode fiber (MMF) imaging using deep learning has high potential to produce compact, minimally invasive endoscopic systems. Nevertheless, it relies on large, diverse real-world medical data, whose availability is limited by privacy concerns and practical challenges. Although data augmentation has been extensively studied in various other deep learning tasks, it has not been systematically explored for MMF imaging. This work provides the first in-depth experimental and computational study of the efficacy and limitations of augmentation techniques in this field. We demonstrate that standard image transformations and conditional generative adversarial network-based synthetic speckle generation fail to improve, or even deteriorate, reconstruction quality, as they neglect the complex modal interference and dispersion that result in speckle formation. To address this, we introduce a physical data augmentation method in which only organ images are digitally transformed, while their corresponding speckles are experimentally acquired via fiber. This approach preserves the physics of light-fiber interaction and enhances the reconstruction structural similarity index measure (SSIM) by up to 17%, forming a viable system for reliable MMF imaging under limited data conditions.
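The SSIM metric reported in the abstract above can be computed as a single global statistic, as in this minimal sketch; note this is an assumption for illustration, since the paper does not specify its SSIM configuration, and library implementations (e.g. scikit-image's structural_similarity) use local sliding windows rather than a global one.

```python
import numpy as np

def ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Global (single-window) structural similarity index between two images.

    Simplified illustration of the standard SSIM formula; library
    implementations average SSIM over local windows instead.
    """
    c1 = (k1 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (k2 * data_range) ** 2  # stabilizer for the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# A reconstruction identical to its ground truth scores SSIM = 1.0.
img = np.random.default_rng(0).random((64, 64))
print(round(ssim(img, img), 6))  # 1.0
```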
Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation
Tharcisse Ndayipfukamiye, Jianguo Ding, Doreen Sebastian Sarwatt, Adamu Gaston Philipo, Huansheng Ning
https://arxiv.org/abs/2509.20411
Secrecy-Driven Beamforming for Multi-User Integrated Sensing and Communication
Ali Khandan Boroujeni, Hyeon Seok Rou, Ghazal Bagheri, Kuranage Roche Rayan Ranasinghe, Giuseppe Thadeu Freitas de Abreu, Stefan Köpsell, Rafael F. Schaefer
https://arxiv.org/abs/2509.26249
Evaluating the Robustness of a Production Malware Detection System to Transferable Adversarial Attacks
Milad Nasr, Yanick Fratantonio, Luca Invernizzi, Ange Albertini, Loua Farah, Alex Petit-Bianco, Andreas Terzis, Kurt Thomas, Elie Bursztein, Nicholas Carlini
https://arxiv.org/abs/2510.01676
Partial Resilient Leader-Follower Consensus in Time-Varying Graphs
Haejoon Lee, Dimitra Panagou
https://arxiv.org/abs/2510.01144
Joint Optimization of Speaker and Spoof Detectors for Spoofing-Robust Automatic Speaker Verification
Oğuzhan Kurnaz, Jagabandhu Mishra, Tomi H. Kinnunen, Cemal Hanilçi
https://arxiv.org/abs/2510.01818
Unified Threat Detection and Mitigation Framework (UTDMF): Combating Prompt Injection, Deception, and Bias in Enterprise-Scale Transformers
Santhosh Kumar Ravindran
https://arxiv.org/abs/2510.04528
Crosslisted article(s) found for cs.MA. https://arxiv.org/list/cs.MA/new
[1/1]:
- ORCA: Agentic Reasoning For Hallucination and Adversarial Robustness in Vision-Language Models
Chung-En (Johnny) Yu, Hsuan-Chih (Neil) Chen, Brian Jalaian, Nathaniel D. Bastian
Universally Composable Termination Analysis of Tendermint
Zhixin Dong, Xian Xu, Yuhang Zeng, Mingchao Wan, Chunmiao Li
https://arxiv.org/abs/2510.01097
Integrated Security Mechanisms for Weight Protection in Memristive Crossbar Arrays
Muhammad Faheemur Rahman, Wayne Burleson
https://arxiv.org/abs/2510.01350
Are Robust LLM Fingerprints Adversarially Robust?
Anshul Nasery, Edoardo Contente, Alkin Kaz, Pramod Viswanath, Sewoong Oh
https://arxiv.org/abs/2509.26598