Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csAI_bot@mastoxiv.page
2025-10-15 10:24:31

Towards Robust Artificial Intelligence: Self-Supervised Learning Approach for Out-of-Distribution Detection
Wissam Salhab, Darine Ameyed, Hamid Mcheick, Fehmi Jaafar
arxiv.org/abs/2510.12713

@arXiv_mathOC_bot@mastoxiv.page
2025-10-14 11:42:38

Hamilton-Jacobi Reachability for Viability Analysis of Constrained Waste-to-Energy Systems under Adversarial Uncertainty
Achraf Bouhmady, Othman Cherkaoui Dekkaki
arxiv.org/abs/2510.11396

@arXiv_csCR_bot@mastoxiv.page
2025-10-03 09:44:41

Mirage Fools the Ear, Mute Hides the Truth: Precise Targeted Adversarial Attacks on Polyphonic Sound Event Detection Systems
Junjie Su, Weifei Jin, Yuxin Cao, Derui Wang, Kai Ye, Jie Hao
arxiv.org/abs/2510.02158

@arXiv_csLG_bot@mastoxiv.page
2025-10-09 10:36:01

DecompGAIL: Learning Realistic Traffic Behaviors with Decomposed Multi-Agent Generative Adversarial Imitation Learning
Ke Guo, Haochen Liu, Xiaojun Wu, Chen Lv
arxiv.org/abs/2510.06913

@arXiv_csCV_bot@mastoxiv.page
2025-10-09 10:33:31

OBJVanish: Physically Realizable Text-to-3D Adv. Generation of LiDAR-Invisible Objects
Bing Li, Wuqi Wang, Yanan Zhang, Jingzheng Li, Haigen Min, Wei Feng, Xingyu Zhao, Jie Zhang, Qing Guo
arxiv.org/abs/2510.06952

@arXiv_csHC_bot@mastoxiv.page
2025-09-29 07:35:25

Position: Human Factors Reshape Adversarial Analysis in Human-AI Decision-Making Systems
Shutong Fan, Lan Zhang, Xiaoyong Yuan
arxiv.org/abs/2509.21436

@arXiv_csCR_bot@mastoxiv.page
2025-10-01 09:32:18

SoK: Systematic analysis of adversarial threats against deep learning approaches for autonomous anomaly detection systems in SDN-IoT networks
Tharindu Lakshan Yasarathna, Nhien-An Le-Khac
arxiv.org/abs/2509.26350

@arXiv_csSI_bot@mastoxiv.page
2025-10-03 08:51:51

Adversarial Social Influence: Modeling Persuasion in Contested Social Networks
Renukanandan Tumu, Cristian Ioan Vasile, Victor Preciado, Rahul Mangharam
arxiv.org/abs/2510.01481

@arXiv_csMA_bot@mastoxiv.page
2025-10-07 07:45:17

LegalSim: Multi-Agent Simulation of Legal Systems for Discovering Procedural Exploits
Sanket Badhe
arxiv.org/abs/2510.03405

@arXiv_eessSY_bot@mastoxiv.page
2025-10-06 09:12:19

A Bilevel Optimization Framework for Adversarial Control of Gas Pipeline Operations
Tejaswini Sanjay Katale, Lu Gao, Yunpeng Zhang, Alaa Senouci
arxiv.org/abs/2510.02503

@arXiv_csSE_bot@mastoxiv.page
2025-10-08 08:26:09

VeriGuard: Enhancing LLM Agent Safety via Verified Code Generation
Lesly Miculicich, Mihir Parmar, Hamid Palangi, Krishnamurthy Dj Dvijotham, Mirko Montanari, Tomas Pfister, Long T. Le
arxiv.org/abs/2510.05156

@arXiv_csCR_bot@mastoxiv.page
2025-10-06 09:45:49

A Statistical Method for Attack-Agnostic Adversarial Attack Detection with Compressive Sensing Comparison
Chinthana Wimalasuriya, Spyros Tragoudas
arxiv.org/abs/2510.02707

@arXiv_csSD_bot@mastoxiv.page
2025-09-29 09:39:27

Decoding Deception: Understanding Automatic Speech Recognition Vulnerabilities in Evasion and Poisoning Attacks
Aravindhan G, Yuvaraj Govindarajulu, Parin Shah
arxiv.org/abs/2509.22060

@arXiv_csCV_bot@mastoxiv.page
2025-09-26 10:17:11

Vision Transformers: the threat of realistic adversarial patches
Kasper Cools, Clara Maathuis, Alexander M. van Oers, Claudia S. Hübner, Nikos Deligiannis, Marijke Vandewal, Geert De Cubber
arxiv.org/abs/2509.21084

@arXiv_eessAS_bot@mastoxiv.page
2025-09-26 09:40:11

Are Modern Speech Enhancement Systems Vulnerable to Adversarial Attacks?
Rostislav Makarov, Lea Schönherr, Timo Gerkmann
arxiv.org/abs/2509.21087

@arXiv_eessSP_bot@mastoxiv.page
2025-10-06 09:02:19

Physics-Constrained Inc-GAN for Tunnel Propagation Modeling from Sparse Line Measurements
Yang Zhou, Haochang Wu, Yunxi Mu, Hao Qin, Xinyue Zhang, Xingqi Zhang
arxiv.org/abs/2510.03019

@arXiv_csCL_bot@mastoxiv.page
2025-09-30 14:16:11

Incentive-Aligned Multi-Source LLM Summaries
Yanchen Jiang, Zhe Feng, Aranyak Mehta
arxiv.org/abs/2509.25184 arxiv.org/pdf/2509.25184

@arXiv_csAI_bot@mastoxiv.page
2025-09-26 09:53:31

Steerable Adversarial Scenario Generation through Test-Time Preference Alignment
Tong Nie, Yuewen Mei, Yihong Tang, Junlin He, Jie Sun, Haotian Shi, Wei Ma, Jian Sun
arxiv.org/abs/2509.20102

@arXiv_csCR_bot@mastoxiv.page
2025-10-06 07:32:39

Modeling the Attack: Detecting AI-Generated Text by Quantifying Adversarial Perturbations
Lekkala Sai Teja, Annepaka Yadagiri, Sangam Sai Anish, Siva Gopala Krishna Nuthakki, Partha Pakray
arxiv.org/abs/2510.02319

@arXiv_csCV_bot@mastoxiv.page
2025-09-25 10:36:12

Generative Adversarial Networks Applied for Privacy Preservation in Biometric-Based Authentication and Identification
Lubos Mjachky, Ivan Homoliak
arxiv.org/abs/2509.20024

@arXiv_eessSY_bot@mastoxiv.page
2025-09-26 09:35:31

The Use of the Simplex Architecture to Enhance Safety in Deep-Learning-Powered Autonomous Systems
Federico Nesti, Niko Salamini, Mauro Marinoni, Giorgio Maria Cicero, Gabriele Serra, Alessandro Biondi, Giorgio Buttazzo
arxiv.org/abs/2509.21014

@arXiv_csCR_bot@mastoxiv.page
2025-10-14 12:04:28

TabVLA: Targeted Backdoor Attacks on Vision-Language-Action Models
Zonghuan Xu, Xiang Zheng, Xingjun Ma, Yu-Gang Jiang
arxiv.org/abs/2510.10932

@arXiv_mathOC_bot@mastoxiv.page
2025-09-22 08:40:01

Bridging Batch and Streaming Estimations to System Identification under Adversarial Attacks
Jihun Kim, Javad Lavaei
arxiv.org/abs/2509.15794

@arXiv_csLG_bot@mastoxiv.page
2025-09-30 09:44:21

Observation-Free Attacks on Online Learning to Rank
Sameep Chattopadhyay, Nikhil Karamchandani, Sharayu Mohair
arxiv.org/abs/2509.22855

@arXiv_physicsoptics_bot@mastoxiv.page
2025-11-25 11:06:23

Experimental insights into data augmentation techniques for deep learning-based multimode fiber imaging: limitations and success
Jawaria Maqbool, M. Imran Cheema
arxiv.org/abs/2511.19072 arxiv.org/pdf/2511.19072 arxiv.org/html/2511.19072
arXiv:2511.19072v1 Announce Type: new
Abstract: Multimode fiber (MMF) imaging using deep learning has high potential to produce compact, minimally invasive endoscopic systems. Nevertheless, it relies on large, diverse real-world medical data, whose availability is limited by privacy concerns and practical challenges. Although data augmentation has been extensively studied in various other deep learning tasks, it has not been systematically explored for MMF imaging. This work provides the first in-depth experimental and computational study on the efficacy and limitations of augmentation techniques in this field. We demonstrate that standard image transformations and conditional generative adversarial network (GAN)-based synthetic speckle generation fail to improve, or even deteriorate, reconstruction quality, as they neglect the complex modal interference and dispersion that result in speckle formation. To address this, we introduce a physical data augmentation method in which only organ images are digitally transformed, while their corresponding speckles are experimentally acquired via the fiber. This approach preserves the physics of light-fiber interaction and enhances the reconstruction structural similarity index measure (SSIM) by up to 17%, forming a viable system for reliable MMF imaging under limited data conditions.
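The augmentation idea in the abstract can be sketched in a few lines: only the ground-truth organ images are digitally transformed (flips, rotations), and in the real system the matching speckle for each variant is then re-acquired experimentally through the fiber, preserving the physics of modal interference. The sketch below is a NumPy-only illustration (the acquisition step cannot be simulated here), plus a single-window version of the SSIM metric the paper reports improving; function names and the global-SSIM simplification are assumptions, not the authors' code.

```python
import numpy as np

def digital_transforms(img: np.ndarray):
    """Yield simple digital variants of an organ image -- the only part
    that is transformed in the proposed physical augmentation; speckles
    for each variant would be acquired experimentally via the fiber."""
    yield img
    yield np.fliplr(img)   # horizontal flip
    yield np.flipud(img)   # vertical flip
    yield np.rot90(img)    # 90-degree rotation

def global_ssim(x: np.ndarray, y: np.ndarray) -> float:
    """Single-window (global) structural similarity index, a simplified
    form of the SSIM used to score reconstruction quality."""
    c1, c2 = (0.01 * 255) ** 2, (0.03 * 255) ** 2  # standard constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

img = np.arange(64, dtype=float).reshape(8, 8)
variants = list(digital_transforms(img))
print(len(variants))                    # 4 augmented views
print(round(global_ssim(img, img), 3))  # identical images -> 1.0
```

Note that the windowed SSIM used in imaging papers averages this statistic over local patches; the global form above only illustrates the formula.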

@arXiv_csCR_bot@mastoxiv.page
2025-09-26 08:43:41

Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation
Tharcisse Ndayipfukamiye, Jianguo Ding, Doreen Sebastian Sarwatt, Adamu Gaston Philipo, Huansheng Ning
arxiv.org/abs/2509.20411

@arXiv_eessSP_bot@mastoxiv.page
2025-10-01 10:21:37

Secrecy-Driven Beamforming for Multi-User Integrated Sensing and Communication
Ali Khandan Boroujeni, Hyeon Seok Rou, Ghazal Bagheri, Kuranage Roche Rayan Ranasinghe, Giuseppe Thadeu Freitas de Abreu, Stefan Köpsell, Rafael F. Schaefer
arxiv.org/abs/2509.26249

@arXiv_csCR_bot@mastoxiv.page
2025-10-03 08:58:01

Evaluating the Robustness of a Production Malware Detection System to Transferable Adversarial Attacks
Milad Nasr, Yanick Fratantonio, Luca Invernizzi, Ange Albertini, Loua Farah, Alex Petit-Bianco, Andreas Terzis, Kurt Thomas, Elie Bursztein, Nicholas Carlini
arxiv.org/abs/2510.01676

@arXiv_csMA_bot@mastoxiv.page
2025-10-02 09:14:41

Partial Resilient Leader-Follower Consensus in Time-Varying Graphs
Haejoon Lee, Dimitra Panagou
arxiv.org/abs/2510.01144

@arXiv_eessAS_bot@mastoxiv.page
2025-10-03 07:45:21

Joint Optimization of Speaker and Spoof Detectors for Spoofing-Robust Automatic Speaker Verification
Oğuzhan Kurnaz, Jagabandhu Mishra, Tomi H. Kinnunen, Cemal Hanilçi
arxiv.org/abs/2510.01818

@arXiv_csCR_bot@mastoxiv.page
2025-10-07 10:53:32

Unified Threat Detection and Mitigation Framework (UTDMF): Combating Prompt Injection, Deception, and Bias in Enterprise-Scale Transformers
Santhosh Kumar Ravindran
arxiv.org/abs/2510.04528

@arXiv_csMA_bot@mastoxiv.page
2025-09-22 11:06:10

Crosslisted article(s) found for cs.MA. arxiv.org/list/cs.MA/new
[1/1]:
- ORCA: Agentic Reasoning For Hallucination and Adversarial Robustness in Vision-Language Models
Chung-En Johnny Yu, Hsuan-Chih (Neil) Chen, Brian Jalaian, Nathaniel D. Bastian

@arXiv_csCR_bot@mastoxiv.page
2025-10-02 10:09:21

Universally Composable Termination Analysis of Tendermint
Zhixin Dong, Xian Xu, Yuhang Zeng, Mingchao Wan, Chunmiao Li
arxiv.org/abs/2510.01097

@arXiv_csCR_bot@mastoxiv.page
2025-10-03 07:56:02

Integrated Security Mechanisms for Weight Protection in Memristive Crossbar Arrays
Muhammad Faheemur Rahman, Wayne Burleson
arxiv.org/abs/2510.01350

@arXiv_csCR_bot@mastoxiv.page
2025-10-01 10:01:47

Are Robust LLM Fingerprints Adversarially Robust?
Anshul Nasery, Edoardo Contente, Alkin Kaz, Pramod Viswanath, Sewoong Oh
arxiv.org/abs/2509.26598