Replaced article(s) found for cs.CL. https://arxiv.org/list/cs.CL/new
[4/5]:
- Retrieving Climate Change Disinformation by Narrative
Upravitelev, Solopova, Jakob, Sahitaj, Möller, Schmitt
https://arxiv.org/abs/2603.22015 https://mastoxiv.page/@arXiv_csCL_bot/116283633674519408
- PaperVoyager: Building Interactive Web with Visual Language Models
Dasen Dai, Biao Wu, Meng Fang, Wenhao Wang
https://arxiv.org/abs/2603.22999 https://mastoxiv.page/@arXiv_csCL_bot/116289015432093128
- Continual Robot Skill and Task Learning via Dialogue
Weiwei Gu, Suresh Kondepudi, Anmol Gupta, Lixiao Huang, Nakul Gopalan
https://arxiv.org/abs/2409.03166 https://mastoxiv.page/@arXiv_csRO_bot/113089412115632702
- Shifting Perspectives: Steering Vectors for Robust Bias Mitigation in LLMs
Zara Siddique, Irtaza Khalid, Liam D. Turner, Luis Espinosa-Anke
https://arxiv.org/abs/2503.05371 https://mastoxiv.page/@arXiv_csLG_bot/114136994263573386
- SkillFlow: Scalable and Efficient Agent Skill Retrieval System
Fangzhou Li, Pagkratios Tagkopoulos, Ilias Tagkopoulos
https://arxiv.org/abs/2504.06188 https://mastoxiv.page/@arXiv_csAI_bot/114306773220502860
- Large Language Models for Computer-Aided Design: A Survey
Licheng Zhang, Bach Le, Naveed Akhtar, Siew-Kei Lam, Tuan Ngo
https://arxiv.org/abs/2505.08137 https://mastoxiv.page/@arXiv_csLG_bot/114504972217393639
- Structured Agent Distillation for Large Language Model
Liu, Kong, Dong, Yang, Li, Tang, Yuan, Niu, Zhang, Zhao, Lin, Huang, Wang
https://arxiv.org/abs/2505.13820 https://mastoxiv.page/@arXiv_csLG_bot/114544636506163783
- VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction
Fan, Zhang, Li, Zhang, Chen, Hu, Wang, Qu, Zhou, Wang, Yan, Xu, Theiss, Chen, Li, Tu, Wang, Ranjan
https://arxiv.org/abs/2505.20279 https://mastoxiv.page/@arXiv_csCV_bot/114578817567171199
- Learning to Diagnose Privately: DP-Powered LLMs for Radiology Report Classification
Bhattacharjee, Tian, Rubin, Lo, Merchant, Hanson, Gounley, Tandon
https://arxiv.org/abs/2506.04450 https://mastoxiv.page/@arXiv_csCR_bot/114635189706505648
- L-MARS: Legal Multi-Agent Workflow with Orchestrated Reasoning and Agentic Search
Ziqi Wang, Boqin Yuan
https://arxiv.org/abs/2509.00761 https://mastoxiv.page/@arXiv_csAI_bot/115140304787881576
- Your Models Have Thought Enough: Training Large Reasoning Models to Stop Overthinking
Han, Huang, Liao, Jiang, Lu, Zhao, Wang, Zhou, Jiang, Liang, Zhou, Sun, Yu, Xiao
https://arxiv.org/abs/2509.23392 https://mastoxiv.page/@arXiv_csAI_bot/115293169353788311
- Person-Centric Annotations of LAION-400M: Auditing Bias and Its Transfer to Models
Leander Girrbach, Stephan Alaniz, Genevieve Smith, Trevor Darrell, Zeynep Akata
https://arxiv.org/abs/2510.03721 https://mastoxiv.page/@arXiv_csCV_bot/115332690912652473
- Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
Zhang, Hu, Upasani, Ma, Hong, Kamanuru, Rainton, Wu, Ji, Li, Thakker, Zou, Olukotun
https://arxiv.org/abs/2510.04618 https://mastoxiv.page/@arXiv_csLG_bot/115332999596603375
- Mitigating Premature Exploitation in Particle-based Monte Carlo for Inference-Time Scaling
Giannone, Xu, Nayak, Awhad, Sudalairaj, Xu, Srivastava
https://arxiv.org/abs/2510.05825 https://mastoxiv.page/@arXiv_csLG_bot/115338159696513898
- Complete asymptotic type-token relationship for growing complex systems with inverse power-law co...
Pablo Rosillo-Rodes, Laurent Hébert-Dufresne, Peter Sheridan Dodds
https://arxiv.org/abs/2511.02069 https://mastoxiv.page/@arXiv_physicssocph_bot/115496283627867809
- ViPRA: Video Prediction for Robot Actions
Sandeep Routray, Hengkai Pan, Unnat Jain, Shikhar Bahl, Deepak Pathak
https://arxiv.org/abs/2511.07732 https://mastoxiv.page/@arXiv_csRO_bot/115535941444003568
- AISAC: An Integrated multi-agent System for Transparent, Retrieval-Grounded Scientific Assistance
Chandrachur Bhattacharya, Sibendu Som
https://arxiv.org/abs/2511.14043
- VideoARM: Agentic Reasoning over Hierarchical Memory for Long-Form Video Understanding
Yufei Yin, Qianke Meng, Minghao Chen, Jiajun Ding, Zhenwei Shao, Zhou Yu
https://arxiv.org/abs/2512.12360 https://mastoxiv.page/@arXiv_csCV_bot/115729238732682644
- RadImageNet-VQA: A Large-Scale CT and MRI Dataset for Radiologic Visual Question Answering
Léo Butsanets, Charles Corbière, Julien Khlaut, Pierre Manceron, Corentin Dancette
https://arxiv.org/abs/2512.17396 https://mastoxiv.page/@arXiv_csCV_bot/115762705911757243
- Measuring all the noises of LLM Evals
Sida Wang
https://arxiv.org/abs/2512.21326 https://mastoxiv.page/@arXiv_csLG_bot/115779597137785637
toXiv_bot_toot
Replaced article(s) found for cs.DS. https://arxiv.org/list/cs.DS/new
[1/1]:
- Fully Dynamic Adversarially Robust Correlation Clustering in Polylogarithmic Update Time
Vladimir Braverman, Prathamesh Dharangutte, Shreyas Pai, Vihan Shah, Chen Wang
https://arxiv.org/abs/2411.09979 https://mastoxiv.page/@arXiv_csDS_bot/113502653187863544
- A Simple and Combinatorial Approach to Proving Chernoff Bounds and Their Generalizations
William Kuszmaul
https://arxiv.org/abs/2501.03488 https://mastoxiv.page/@arXiv_csDS_bot/113791396712128907
- The Structural Complexity of Matrix-Vector Multiplication
Emile Anand, Jan van den Brand, Rose McCarty
https://arxiv.org/abs/2502.21240 https://mastoxiv.page/@arXiv_csDS_bot/114097340825270885
- Clustering under Constraints: Efficient Parameterized Approximation Schemes
Sujoy Bhore, Ameet Gadekar, Tanmay Inamdar
https://arxiv.org/abs/2504.06980 https://mastoxiv.page/@arXiv_csDS_bot/114312444050875805
- Minimizing Envy and Maximizing Happiness in Graphical House Allocation
Anubhav Dhar, Ashlesha Hota, Palash Dey, Sudeshna Kolay
https://arxiv.org/abs/2505.00296 https://mastoxiv.page/@arXiv_csDS_bot/114437013364446063
- Fast and Simple Densest Subgraph with Predictions
Thai Bui, Luan Nguyen, Hoa T. Vu
https://arxiv.org/abs/2505.12600 https://mastoxiv.page/@arXiv_csDS_bot/114538936921930134
- Compressing Suffix Trees by Path Decompositions
Becker, Cenzato, Gagie, Kim, Koerkamp, Manzini, Prezza
https://arxiv.org/abs/2506.14734 https://mastoxiv.page/@arXiv_csDS_bot/114703384646892523
- Improved sampling algorithms and functional inequalities for non-log-concave distributions
Yuchen He, Zhehan Lei, Jianan Shao, Chihao Zhang
https://arxiv.org/abs/2507.11236 https://mastoxiv.page/@arXiv_csDS_bot/114862112197588124
- Deterministic Lower Bounds for $k$-Edge Connectivity in the Distributed Sketching Model
Peter Robinson, Ming Ming Tan
https://arxiv.org/abs/2507.11257 https://mastoxiv.page/@arXiv_csDS_bot/114862223634372292
- Optimally detecting uniformly-distributed $\ell_2$ heavy hitters in data streams
Santhoshini Velusamy, Huacheng Yu
https://arxiv.org/abs/2509.07286 https://mastoxiv.page/@arXiv_csDS_bot/115178875220889588
- Uncrossed Multiflows and Applications to Disjoint Paths
Chandra Chekuri, Guyslain Naves, Joseph Poremba, F. Bruce Shepherd
https://arxiv.org/abs/2511.00254 https://mastoxiv.page/@arXiv_csDS_bot/115490402963680492
- Dynamic Matroids: Base Packing and Covering
Tijn de Vos, Mara Grilnberger
https://arxiv.org/abs/2511.15460 https://mastoxiv.page/@arXiv_csDS_bot/115580946319285096
- Branch-width of connectivity functions is fixed-parameter tractable
Tuukka Korhonen, Sang-il Oum
https://arxiv.org/abs/2601.04756 https://mastoxiv.page/@arXiv_csDS_bot/115864074799755995
- CoinPress: Practical Private Mean and Covariance Estimation
Sourav Biswas, Yihe Dong, Gautam Kamath, Jonathan Ullman
https://arxiv.org/abs/2006.06618
- The Ideal Membership Problem and Abelian Groups
Andrei A. Bulatov, Akbar Rafiey
https://arxiv.org/abs/2201.05218
- Bridging Classical and Quantum: Group-Theoretic Approach to Quantum Circuit Simulation
Daksh Shami
https://arxiv.org/abs/2407.19575 https://mastoxiv.page/@arXiv_quantph_bot/112874282709517475
- Young domination on Hamming rectangles
Janko Gravner, Matjaž Krnc, Martin Milanič, Jean-Florent Raymond
https://arxiv.org/abs/2501.03788 https://mastoxiv.page/@arXiv_mathCO_bot/113791421814248215
- On the Space Complexity of Online Convolution
Joel Daniel Andersson, Amir Yehudayoff
https://arxiv.org/abs/2505.00181 https://mastoxiv.page/@arXiv_csCC_bot/114437005955255553
- Universal Solvability for Robot Motion Planning on Graphs
Anubhav Dhar, Pranav Nyati, Tanishq Prasad, Ashlesha Hota, Sudeshna Kolay
https://arxiv.org/abs/2506.18755 https://mastoxiv.page/@arXiv_csCC_bot/114737342714568702
- Colorful Minors
Evangelos Protopapas, Dimitrios M. Thilikos, Sebastian Wiederrecht
https://arxiv.org/abs/2507.10467
- Learning fermionic linear optics with Heisenberg scaling and physical operations
Aria Christensen, Andrew Zhao
https://arxiv.org/abs/2602.05058
Democratizing AI: A Comparative Study in Deep Learning Efficiency and Future Trends in Computational Processing
Lisan Al Amin, Md Ismail Hossain, Rupak Kumar Das, Mahbubul Islam, Saddam Mukta, Abdulaziz Tabbakh
https://arxiv.org/abs/2603.20920 https://arxiv.org/pdf/2603.20920 https://arxiv.org/html/2603.20920
arXiv:2603.20920v1 Announce Type: new
Abstract: The exponential growth in data has intensified the demand for computational power to train large-scale deep learning models. However, the rapid growth in model size and complexity raises concerns about equal and fair access to computational resources, particularly under increasing energy and infrastructure constraints. GPUs have emerged as essential for accelerating such workloads. This study benchmarks four deep learning models (Conv6, VGG16, ResNet18, CycleGAN) using TensorFlow and PyTorch on Intel Xeon CPUs and NVIDIA Tesla T4 GPUs. Our experiments demonstrate that, on average, GPU training achieves speedups ranging from 11x to 246x depending on model complexity, with lightweight models (Conv6) showing the highest acceleration (246x), mid-sized models (VGG16, ResNet18) achieving 51-116x speedups, and complex generative models (CycleGAN) reaching 11x improvements compared to CPU training. Additionally, in our PyTorch vs. TensorFlow comparison, we observed that TensorFlow's kernel-fusion optimizations reduce inference latency by approximately 15%. We also analyze GPU memory usage trends and project requirements through 2025 using polynomial regression. Our findings highlight that while GPUs are essential for sustaining AI's growth, democratized and shared access to GPU resources is critical for enabling research innovation across institutions with limited computational budgets.
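The projection step the abstract mentions can be sketched as follows — a minimal example of polynomial-regression extrapolation, using hypothetical GPU-memory figures rather than the paper's actual measurements:

```python
import numpy as np

# Hypothetical GPU memory requirements (GB) by year -- illustrative placeholder
# data only, not the paper's measurements.
years = np.array([2018, 2019, 2020, 2021, 2022, 2023])
mem_gb = np.array([12.0, 16.0, 24.0, 32.0, 48.0, 80.0])

# Fit a degree-2 polynomial (years centered for numerical stability) and
# extrapolate to 2025, mirroring the abstract's projection method.
coeffs = np.polyfit(years - 2018, mem_gb, deg=2)
projected_2025 = np.polyval(coeffs, 2025 - 2018)

print(f"Projected 2025 requirement: {projected_2025:.1f} GB")
```

With growth data like the above, the quadratic fit extrapolates well beyond the last observed value, which is why degree choice matters for such projections.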
Space Complexity Dichotomies for Subgraph Finding Problems in the Streaming Model
Yu-Sheng Shih, Meng-Tsung Tsai, Yen-Chu Tsai, Ying-Sian Wu
https://arxiv.org/abs/2602.08002 https://arxiv.org/pdf/2602.08002 https://arxiv.org/html/2602.08002
arXiv:2602.08002v1 Announce Type: new
Abstract: We study the space complexity of four variants of the standard subgraph finding problem in the streaming model. Specifically, given an $n$-vertex input graph and a fixed-size pattern graph, we consider two settings: undirected simple graphs, denoted by $G$ and $H$, and oriented graphs, denoted by $\vec{G}$ and $\vec{H}$. Depending on the setting, the task is to decide whether $G$ contains $H$ as a subgraph or as an induced subgraph, or whether $\vec{G}$ contains $\vec{H}$ as a subgraph or as an induced subgraph. Let Sub$(H)$, IndSub$(H)$, Sub$(\vec{H})$, and IndSub$(\vec{H})$ denote these four variants, respectively.
An oriented graph is well-oriented if it admits a bipartition in which every arc is oriented from one part to the other, and a vertex is non-well-oriented if both its in-degree and out-degree are non-zero. For each variant, we obtain a complete dichotomy theorem, briefly summarized as follows.
(1) Sub$(H)$ can be solved by an $\tilde{O}(1)$-pass $n^{2-\Omega(1)}$-space algorithm if and only if $H$ is bipartite.
(2) IndSub$(H)$ can be solved by an $\tilde{O}(1)$-pass $n^{2-\Omega(1)}$-space algorithm if and only if $H \in \{P_3, P_4, co\mbox{-}P_3\}$.
(3) Sub$(\vec{H})$ can be solved by a single-pass $n^{2-\Omega(1)}$-space algorithm if and only if every connected component of $\vec H$ is either a well-oriented bipartite graph or a tree containing at most one non-well-oriented vertex.
(4) IndSub$(\vec{H})$ can be solved by an $\tilde{O}(1)$-pass $n^{2-\Omega(1)}$-space algorithm if and only if the underlying undirected simple graph $H$ is a $co\mbox{-}P_3$.
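The pattern-side condition in dichotomy (1) — bipartiteness of $H$ — is easy to test directly. A minimal sketch (standard BFS 2-coloring; the streaming algorithm itself is not shown):

```python
from collections import deque

def is_bipartite(n, edges):
    """2-color the pattern graph H by BFS. Per dichotomy (1), Sub(H) admits an
    O~(1)-pass n^(2-Omega(1))-space streaming algorithm iff H is bipartite."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for s in range(n):
        if color[s] is not None:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return False  # odd cycle found: not bipartite
    return True

# C4 (4-cycle) is bipartite; K3 (triangle) is not.
print(is_bipartite(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(is_bipartite(3, [(0, 1), (1, 2), (2, 0)]))          # False
```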
Fork, Explore, Commit: OS Primitives for Agentic Exploration
Cong Wang, Yusheng Zheng
https://arxiv.org/abs/2602.08199 https://arxiv.org/pdf/2602.08199 https://arxiv.org/html/2602.08199
arXiv:2602.08199v1 Announce Type: new
Abstract: AI agents increasingly perform agentic exploration: pursuing multiple solution paths in parallel and committing only the successful one. Because each exploration path may modify files and spawn processes, agents require isolated environments with atomic commit and rollback semantics for both filesystem state and process state. We introduce the branch context, a new OS abstraction that provides: (1) copy-on-write state isolation with independent filesystem views and process groups, (2) a structured lifecycle of fork, explore, and commit/abort, (3) first-commit-wins resolution that automatically invalidates sibling branches, and (4) nestable contexts for hierarchical exploration. We realize branch contexts in Linux through two complementary components. First, BranchFS is a FUSE-based filesystem that gives each branch context an isolated copy-on-write workspace, with O(1) creation, atomic commit to the parent, and automatic sibling invalidation, all without root privileges. BranchFS is open-sourced at https://github.com/multikernel/branchfs. Second, branch() is a proposed Linux syscall that spawns processes into branch contexts with reliable termination, kernel-enforced sibling isolation, and first-commit-wins coordination. Preliminary evaluation of BranchFS shows sub-350 µs branch creation independent of base filesystem size, and modification-proportional commit overhead (under 1 ms for small changes).
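The fork/explore/commit lifecycle and first-commit-wins resolution can be illustrated with a toy userspace emulation. Everything here (the `BranchContext` class, its `commit` method, directory copies standing in for copy-on-write) is hypothetical scaffolding for illustration; the real system uses a FUSE copy-on-write filesystem (BranchFS) and a proposed branch() syscall:

```python
import shutil
import tempfile
from pathlib import Path

class BranchContext:
    """Toy emulation of a branch context: an isolated workspace forked from a
    parent directory, with first-commit-wins semantics on commit."""

    def __init__(self, parent: Path):
        self.parent = parent
        # Fork: snapshot the parent state into a private workspace
        # (a directory copy stands in for real copy-on-write isolation).
        self.workspace = Path(tempfile.mkdtemp())
        shutil.copytree(parent, self.workspace, dirs_exist_ok=True)
        self.invalidated = False

    def commit(self, siblings):
        """First-commit-wins: publish this branch's state to the parent and
        invalidate all sibling branches; fail if already invalidated."""
        if self.invalidated:
            return False
        shutil.copytree(self.workspace, self.parent, dirs_exist_ok=True)
        for s in siblings:
            s.invalidated = True
        return True

# Explore two candidate solutions in parallel branches.
base = Path(tempfile.mkdtemp())
(base / "plan.txt").write_text("draft")
a, b = BranchContext(base), BranchContext(base)
(a.workspace / "plan.txt").write_text("solution A")
(b.workspace / "plan.txt").write_text("solution B")

print(a.commit(siblings=[b]))           # True: A commits first and wins
print(b.commit(siblings=[a]))           # False: B was invalidated
print((base / "plan.txt").read_text())  # solution A
```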
Inter-detector differential fuzz testing for tamper detection in gamma spectrometers
Pei Yao Li, Jayson R. Vavrek, Sean Peisert
https://arxiv.org/abs/2602.00336 https://arxiv.org/pdf/2602.00336 https://arxiv.org/html/2602.00336
arXiv:2602.00336v1 Announce Type: new
Abstract: We extend physical differential fuzz testing as an anti-tamper method for radiation detectors [Vavrek et al., Science and Global Security 2025] to comparisons across multiple detector units. The method was previously introduced as a tamper detection method for authenticating a single radiation detector in nuclear safeguards and treaty verification scenarios, and works by randomly sampling detector configuration parameters to produce a sequence of spectra that form a baseline signature of an untampered system. At a later date, after potential tampering, the same random sequence of parameters is used to generate another series of spectra that can be compared against the baseline. Anomalies in the series of comparisons indicate changes in detector behavior, which may be due to tampering. One limitation of this original method is that once the detector has `gone downrange' and may have been tampered with, the original baseline is fixed, and a new trusted baseline can never be established if tests at new parameters are required. In this work, we extend our anti-tamper fuzz testing concept to multiple detector units, such that the downrange detector can be compared against a trusted or `golden copy' detector, even despite normal inter-detector manufacturing variations. We show using three NaI detectors that this inter-detector differential fuzz testing can detect a representative attack, even when the tested and golden copy detectors are from different manufacturers and have different performance characteristics. Here, detecting tampering requires visualizing the comparison metric vs. the parameter values and not just the sample number; moreover, this baseline is non-linear and may require anomaly detection methods more complex than a simple threshold. Overall, this extension to multiple detectors improves prospects for operationalizing the technique in real-world treaty verification and safeguards contexts.
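The core protocol — a seeded random parameter sequence that is replayable after the detector "goes downrange", with spectra compared against the baseline — can be sketched as below. The `spectrum` model, parameter ranges, and tamper effect are all invented for illustration; a real test drives actual detector hardware:

```python
import math
import random

def spectrum(gain, offset, tampered=False):
    """Toy stand-in for an acquired gamma spectrum (128 channels) at one
    detector configuration; tampering is modeled as a small peak shift."""
    peak = 60.0 * gain + offset + (8.0 if tampered else 0.0)
    return [math.exp(-((ch - peak) ** 2) / 50.0) for ch in range(128)]

def fuzz_sequence(seed, n):
    """Replayable random configuration sequence: the same seed regenerates
    the same parameters before and after potential tampering."""
    rng = random.Random(seed)
    return [(rng.uniform(0.8, 1.2), rng.uniform(-5.0, 5.0)) for _ in range(n)]

def max_deviation(seed, tampered):
    """Largest channel-wise deviation from the baseline over the sequence."""
    params = fuzz_sequence(seed, n=20)
    return max(
        max(abs(x - y) for x, y in zip(spectrum(g, o), spectrum(g, o, tampered)))
        for g, o in params
    )

print(max_deviation(seed=42, tampered=False))  # 0.0: untampered matches baseline
print(max_deviation(seed=42, tampered=True))   # large: tampering shifts peaks
```

As the abstract notes, across multiple detectors a fixed threshold on such a metric may not suffice; the deviation must be examined as a function of the parameter values themselves.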
Towards Efficient Data Structures for Approximate Search with Range Queries
Ladan Kian, Dariusz R. Kowalski
https://arxiv.org/abs/2602.06860 https://arxiv.org/pdf/2602.06860 https://arxiv.org/html/2602.06860
arXiv:2602.06860v1 Announce Type: new
Abstract: Range queries are simple and popular types of queries used in data retrieval. However, extracting exact and complete information using range queries is costly. As a remedy, some previous work proposed a faster principle, {\em approximate} search with range queries, also called single range cover (SRC) search. It can, however, produce some false positives. In this work we introduce a new SRC search structure, a $c$-DAG (Directed Acyclic Graph), which provably decreases the average number of false positives by a logarithmic factor while keeping asymptotically the same time and memory complexities as a classic tree structure. A $c$-DAG is a tunable augmentation of the 1D-Tree with denser overlapping branches ($c \geq 3$ children per node). We perform a competitive analysis of a $c$-DAG with respect to the 1D-Tree and derive an additive constant time overhead and a multiplicative logarithmic improvement of the false-positive ratio, on average. We also provide a generic framework to extend our results to empirical distributions of queries, and demonstrate its effectiveness for the Gowalla dataset. Finally, we quantify and discuss security and privacy aspects of SRC search on $c$-DAG vs 1D-Tree, mainly mitigation of structural leakage, which makes $c$-DAG a good data structure candidate for deployment in privacy-preserving systems (e.g., searchable encryption) and multimedia retrieval.
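The baseline SRC idea on a classic 1D-Tree can be sketched as follows: answer a range query with a single covering tree node (here, a dyadic interval), accepting whatever false positives that node brings in. This is an illustrative sketch of the baseline only — the paper's $c$-DAG, with its $c \geq 3$ overlapping children, is what shrinks the average false-positive count by a logarithmic factor:

```python
def src_cover(lo, hi, height):
    """Smallest dyadic interval [a, b) over [0, 2^height) that fully contains
    the query [lo, hi) -- the single range cover returned by a 1D-Tree."""
    a, b = 0, 1 << height
    while True:
        mid = (a + b) // 2
        if hi <= mid:
            b = mid          # query fits entirely in the left child
        elif lo >= mid:
            a = mid          # query fits entirely in the right child
        else:
            return a, b      # query straddles the midpoint: stop here

points = list(range(64))
lo, hi = 30, 34                    # true answer: {30, 31, 32, 33}
a, b = src_cover(lo, hi, height=6)
covered = [p for p in points if a <= p < b]
false_pos = [p for p in covered if not (lo <= p < hi)]
print((a, b), len(false_pos))      # (0, 64) 60: the cover is the whole root
```

A query straddling a high-level split forces a huge cover — here the root itself, yielding 60 false positives for a 4-element answer — which is exactly the pathology the denser overlapping branches of a $c$-DAG are designed to mitigate.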