Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_csCL_bot@mastoxiv.page
2026-03-31 11:13:03

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[4/5]:
- Retrieving Climate Change Disinformation by Narrative
Upravitelev, Solopova, Jakob, Sahitaj, Möller, Schmitt
arxiv.org/abs/2603.22015 mastoxiv.page/@arXiv_csCL_bot/
- PaperVoyager: Building Interactive Web with Visual Language Models
Dasen Dai, Biao Wu, Meng Fang, Wenhao Wang
arxiv.org/abs/2603.22999 mastoxiv.page/@arXiv_csCL_bot/
- Continual Robot Skill and Task Learning via Dialogue
Weiwei Gu, Suresh Kondepudi, Anmol Gupta, Lixiao Huang, Nakul Gopalan
arxiv.org/abs/2409.03166 mastoxiv.page/@arXiv_csRO_bot/
- Shifting Perspectives: Steering Vectors for Robust Bias Mitigation in LLMs
Zara Siddique, Irtaza Khalid, Liam D. Turner, Luis Espinosa-Anke
arxiv.org/abs/2503.05371 mastoxiv.page/@arXiv_csLG_bot/
- SkillFlow: Scalable and Efficient Agent Skill Retrieval System
Fangzhou Li, Pagkratios Tagkopoulos, Ilias Tagkopoulos
arxiv.org/abs/2504.06188 mastoxiv.page/@arXiv_csAI_bot/
- Large Language Models for Computer-Aided Design: A Survey
Licheng Zhang, Bach Le, Naveed Akhtar, Siew-Kei Lam, Tuan Ngo
arxiv.org/abs/2505.08137 mastoxiv.page/@arXiv_csLG_bot/
- Structured Agent Distillation for Large Language Model
Liu, Kong, Dong, Yang, Li, Tang, Yuan, Niu, Zhang, Zhao, Lin, Huang, Wang
arxiv.org/abs/2505.13820 mastoxiv.page/@arXiv_csLG_bot/
- VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction
Fan, Zhang, Li, Zhang, Chen, Hu, Wang, Qu, Zhou, Wang, Yan, Xu, Theiss, Chen, Li, Tu, Wang, Ranjan
arxiv.org/abs/2505.20279 mastoxiv.page/@arXiv_csCV_bot/
- Learning to Diagnose Privately: DP-Powered LLMs for Radiology Report Classification
Bhattacharjee, Tian, Rubin, Lo, Merchant, Hanson, Gounley, Tandon
arxiv.org/abs/2506.04450 mastoxiv.page/@arXiv_csCR_bot/
- L-MARS: Legal Multi-Agent Workflow with Orchestrated Reasoning and Agentic Search
Ziqi Wang, Boqin Yuan
arxiv.org/abs/2509.00761 mastoxiv.page/@arXiv_csAI_bot/
- Your Models Have Thought Enough: Training Large Reasoning Models to Stop Overthinking
Han, Huang, Liao, Jiang, Lu, Zhao, Wang, Zhou, Jiang, Liang, Zhou, Sun, Yu, Xiao
arxiv.org/abs/2509.23392 mastoxiv.page/@arXiv_csAI_bot/
- Person-Centric Annotations of LAION-400M: Auditing Bias and Its Transfer to Models
Leander Girrbach, Stephan Alaniz, Genevieve Smith, Trevor Darrell, Zeynep Akata
arxiv.org/abs/2510.03721 mastoxiv.page/@arXiv_csCV_bot/
- Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
Zhang, Hu, Upasani, Ma, Hong, Kamanuru, Rainton, Wu, Ji, Li, Thakker, Zou, Olukotun
arxiv.org/abs/2510.04618 mastoxiv.page/@arXiv_csLG_bot/
- Mitigating Premature Exploitation in Particle-based Monte Carlo for Inference-Time Scaling
Giannone, Xu, Nayak, Awhad, Sudalairaj, Xu, Srivastava
arxiv.org/abs/2510.05825 mastoxiv.page/@arXiv_csLG_bot/
- Complete asymptotic type-token relationship for growing complex systems with inverse power-law co...
Pablo Rosillo-Rodes, Laurent Hébert-Dufresne, Peter Sheridan Dodds
arxiv.org/abs/2511.02069 mastoxiv.page/@arXiv_physicsso
- ViPRA: Video Prediction for Robot Actions
Sandeep Routray, Hengkai Pan, Unnat Jain, Shikhar Bahl, Deepak Pathak
arxiv.org/abs/2511.07732 mastoxiv.page/@arXiv_csRO_bot/
- AISAC: An Integrated multi-agent System for Transparent, Retrieval-Grounded Scientific Assistance
Chandrachur Bhattacharya, Sibendu Som
arxiv.org/abs/2511.14043
- VideoARM: Agentic Reasoning over Hierarchical Memory for Long-Form Video Understanding
Yufei Yin, Qianke Meng, Minghao Chen, Jiajun Ding, Zhenwei Shao, Zhou Yu
arxiv.org/abs/2512.12360 mastoxiv.page/@arXiv_csCV_bot/
- RadImageNet-VQA: A Large-Scale CT and MRI Dataset for Radiologic Visual Question Answering
Léo Butsanets, Charles Corbière, Julien Khlaut, Pierre Manceron, Corentin Dancette
arxiv.org/abs/2512.17396 mastoxiv.page/@arXiv_csCV_bot/
- Measuring all the noises of LLM Evals
Sida Wang
arxiv.org/abs/2512.21326 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot

@arXiv_csDS_bot@mastoxiv.page
2026-02-10 21:08:46

Replaced article(s) found for cs.DS. arxiv.org/list/cs.DS/new
[1/1]:
- Fully Dynamic Adversarially Robust Correlation Clustering in Polylogarithmic Update Time
Vladimir Braverman, Prathamesh Dharangutte, Shreyas Pai, Vihan Shah, Chen Wang
arxiv.org/abs/2411.09979 mastoxiv.page/@arXiv_csDS_bot/
- A Simple and Combinatorial Approach to Proving Chernoff Bounds and Their Generalizations
William Kuszmaul
arxiv.org/abs/2501.03488 mastoxiv.page/@arXiv_csDS_bot/
- The Structural Complexity of Matrix-Vector Multiplication
Emile Anand, Jan van den Brand, Rose McCarty
arxiv.org/abs/2502.21240 mastoxiv.page/@arXiv_csDS_bot/
- Clustering under Constraints: Efficient Parameterized Approximation Schemes
Sujoy Bhore, Ameet Gadekar, Tanmay Inamdar
arxiv.org/abs/2504.06980 mastoxiv.page/@arXiv_csDS_bot/
- Minimizing Envy and Maximizing Happiness in Graphical House Allocation
Anubhav Dhar, Ashlesha Hota, Palash Dey, Sudeshna Kolay
arxiv.org/abs/2505.00296 mastoxiv.page/@arXiv_csDS_bot/
- Fast and Simple Densest Subgraph with Predictions
Thai Bui, Luan Nguyen, Hoa T. Vu
arxiv.org/abs/2505.12600 mastoxiv.page/@arXiv_csDS_bot/
- Compressing Suffix Trees by Path Decompositions
Becker, Cenzato, Gagie, Kim, Koerkamp, Manzini, Prezza
arxiv.org/abs/2506.14734 mastoxiv.page/@arXiv_csDS_bot/
- Improved sampling algorithms and functional inequalities for non-log-concave distributions
Yuchen He, Zhehan Lei, Jianan Shao, Chihao Zhang
arxiv.org/abs/2507.11236 mastoxiv.page/@arXiv_csDS_bot/
- Deterministic Lower Bounds for $k$-Edge Connectivity in the Distributed Sketching Model
Peter Robinson, Ming Ming Tan
arxiv.org/abs/2507.11257 mastoxiv.page/@arXiv_csDS_bot/
- Optimally detecting uniformly-distributed $\ell_2$ heavy hitters in data streams
Santhoshini Velusamy, Huacheng Yu
arxiv.org/abs/2509.07286 mastoxiv.page/@arXiv_csDS_bot/
- Uncrossed Multiflows and Applications to Disjoint Paths
Chandra Chekuri, Guyslain Naves, Joseph Poremba, F. Bruce Shepherd
arxiv.org/abs/2511.00254 mastoxiv.page/@arXiv_csDS_bot/
- Dynamic Matroids: Base Packing and Covering
Tijn de Vos, Mara Grilnberger
arxiv.org/abs/2511.15460 mastoxiv.page/@arXiv_csDS_bot/
- Branch-width of connectivity functions is fixed-parameter tractable
Tuukka Korhonen, Sang-il Oum
arxiv.org/abs/2601.04756 mastoxiv.page/@arXiv_csDS_bot/
- CoinPress: Practical Private Mean and Covariance Estimation
Sourav Biswas, Yihe Dong, Gautam Kamath, Jonathan Ullman
arxiv.org/abs/2006.06618
- The Ideal Membership Problem and Abelian Groups
Andrei A. Bulatov, Akbar Rafiey
arxiv.org/abs/2201.05218
- Bridging Classical and Quantum: Group-Theoretic Approach to Quantum Circuit Simulation
Daksh Shami
arxiv.org/abs/2407.19575 mastoxiv.page/@arXiv_quantph_b
- Young domination on Hamming rectangles
Janko Gravner, Matjaž Krnc, Martin Milanič, Jean-Florent Raymond
arxiv.org/abs/2501.03788 mastoxiv.page/@arXiv_mathCO_bo
- On the Space Complexity of Online Convolution
Joel Daniel Andersson, Amir Yehudayoff
arxiv.org/abs/2505.00181 mastoxiv.page/@arXiv_csCC_bot/
- Universal Solvability for Robot Motion Planning on Graphs
Anubhav Dhar, Pranav Nyati, Tanishq Prasad, Ashlesha Hota, Sudeshna Kolay
arxiv.org/abs/2506.18755 mastoxiv.page/@arXiv_csCC_bot/
- Colorful Minors
Evangelos Protopapas, Dimitrios M. Thilikos, Sebastian Wiederrecht
arxiv.org/abs/2507.10467
- Learning fermionic linear optics with Heisenberg scaling and physical operations
Aria Christensen, Andrew Zhao
arxiv.org/abs/2602.05058

@arXiv_csPF_bot@mastoxiv.page
2026-03-24 07:38:32

Democratizing AI: A Comparative Study in Deep Learning Efficiency and Future Trends in Computational Processing
Lisan Al Amin, Md Ismail Hossain, Rupak Kumar Das, Mahbubul Islam, Saddam Mukta, Abdulaziz Tabbakh
arxiv.org/abs/2603.20920 arxiv.org/pdf/2603.20920 arxiv.org/html/2603.20920
arXiv:2603.20920v1 Announce Type: new
Abstract: The exponential growth in data has intensified the demand for computational power to train large-scale deep learning models. However, the rapid growth in model size and complexity raises concerns about equal and fair access to computational resources, particularly under increasing energy and infrastructure constraints. GPUs have emerged as essential for accelerating such workloads. This study benchmarks four deep learning models (Conv6, VGG16, ResNet18, CycleGAN) using TensorFlow and PyTorch on Intel Xeon CPUs and NVIDIA Tesla T4 GPUs. Our experiments demonstrate that, on average, GPU training achieves speedups ranging from 11x to 246x depending on model complexity, with lightweight models (Conv6) showing the highest acceleration (246x), mid-sized models (VGG16, ResNet18) achieving 51-116x speedups, and complex generative models (CycleGAN) reaching 11x improvements compared to CPU training. Additionally, in our PyTorch vs. TensorFlow comparison, we observed that TensorFlow's kernel-fusion optimizations reduce inference latency by approximately 15%. We also analyze GPU memory usage trends and project requirements through 2025 using polynomial regression. Our findings highlight that while GPUs are essential for sustaining AI's growth, democratized and shared access to GPU resources is critical for enabling research innovation across institutions with limited computational budgets.
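The projection step mentioned in the abstract (polynomial regression over memory-usage trends) can be sketched in plain Python; this is a minimal illustration of the technique, and the yearly figures below are hypothetical placeholders, not the paper's measurements.

```python
def polyfit2(xs, ys):
    """Least-squares quadratic fit y ~ a + b*x + c*x^2 via the normal equations."""
    s = [sum(x**k for x in xs) for k in range(5)]        # power sums S0..S4
    A = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    rhs = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back-substitution
        coeffs[r] = (rhs[r] - sum(A[r][c] * coeffs[c]
                                  for c in range(r + 1, 3))) / A[r][r]
    return coeffs  # [a, b, c]

# Hypothetical GPU-memory data points (GB), one per model generation.
years = [0, 1, 2, 3, 4]          # offsets from a base year, for conditioning
mem_gb = [12.0, 16.0, 24.0, 40.0, 80.0]
a, b, c = polyfit2(years, mem_gb)
projected_year5 = a + b * 5 + c * 5**2
```

Fitting on year offsets rather than raw calendar years keeps the normal-equation matrix well conditioned; the quadratic extrapolation then gives a rough forward projection.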

@arXiv_csDS_bot@mastoxiv.page
2026-02-10 09:45:25

Space Complexity Dichotomies for Subgraph Finding Problems in the Streaming Model
Yu-Sheng Shih, Meng-Tsung Tsai, Yen-Chu Tsai, Ying-Sian Wu
arxiv.org/abs/2602.08002 arxiv.org/pdf/2602.08002 arxiv.org/html/2602.08002
arXiv:2602.08002v1 Announce Type: new
Abstract: We study the space complexity of four variants of the standard subgraph finding problem in the streaming model. Specifically, given an $n$-vertex input graph and a fixed-size pattern graph, we consider two settings: undirected simple graphs, denoted by $G$ and $H$, and oriented graphs, denoted by $\vec{G}$ and $\vec{H}$. Depending on the setting, the task is to decide whether $G$ contains $H$ as a subgraph or as an induced subgraph, or whether $\vec{G}$ contains $\vec{H}$ as a subgraph or as an induced subgraph. Let Sub$(H)$, IndSub$(H)$, Sub$(\vec{H})$, and IndSub$(\vec{H})$ denote these four variants, respectively.
An oriented graph is well-oriented if it admits a bipartition in which every arc is oriented from one part to the other, and a vertex is non-well-oriented if both its in-degree and out-degree are non-zero. For each variant, we obtain a complete dichotomy theorem, briefly summarized as follows.
(1) Sub$(H)$ can be solved by an $\tilde{O}(1)$-pass $n^{2-\Omega(1)}$-space algorithm if and only if $H$ is bipartite.
(2) IndSub$(H)$ can be solved by an $\tilde{O}(1)$-pass $n^{2-\Omega(1)}$-space algorithm if and only if $H \in \{P_3, P_4, co\mbox{-}P_3\}$.
(3) Sub$(\vec{H})$ can be solved by a single-pass $n^{2-\Omega(1)}$-space algorithm if and only if every connected component of $\vec H$ is either a well-oriented bipartite graph or a tree containing at most one non-well-oriented vertex.
(4) IndSub$(\vec{H})$ can be solved by an $\tilde{O}(1)$-pass $n^{2-\Omega(1)}$-space algorithm if and only if the underlying undirected simple graph $H$ is a $co\mbox{-}P_3$.
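Condition (1) of the dichotomy turns on whether $H$ is bipartite, which is checkable with the standard BFS two-coloring; a minimal sketch of that standard algorithm (not code from the paper):

```python
from collections import deque

def is_bipartite(n, edges):
    """BFS two-coloring: True iff the undirected graph on vertices 0..n-1
    with the given edge list admits a proper 2-coloring."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n
    for start in range(n):          # handle every connected component
        if color[start] != -1:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] == -1:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return False    # odd cycle found
    return True

# P_4 (a path on 4 vertices) is bipartite; a triangle is not.
assert is_bipartite(4, [(0, 1), (1, 2), (2, 3)])
assert not is_bipartite(3, [(0, 1), (1, 2), (2, 0)])
```

So, per result (1), patterns like $P_3$ and $P_4$ admit sublinear-in-$n^2$ space streaming algorithms for Sub$(H)$, while any pattern containing an odd cycle does not.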

@arXiv_csOS_bot@mastoxiv.page
2026-02-10 07:47:16

Fork, Explore, Commit: OS Primitives for Agentic Exploration
Cong Wang, Yusheng Zheng
arxiv.org/abs/2602.08199 arxiv.org/pdf/2602.08199 arxiv.org/html/2602.08199
arXiv:2602.08199v1 Announce Type: new
Abstract: AI agents increasingly perform agentic exploration: pursuing multiple solution paths in parallel and committing only the successful one. Because each exploration path may modify files and spawn processes, agents require isolated environments with atomic commit and rollback semantics for both filesystem state and process state. We introduce the branch context, a new OS abstraction that provides: (1) copy-on-write state isolation with independent filesystem views and process groups, (2) a structured lifecycle of fork, explore, and commit/abort, (3) first-commit-wins resolution that automatically invalidates sibling branches, and (4) nestable contexts for hierarchical exploration. We realize branch contexts in Linux through two complementary components. First, BranchFS is a FUSE-based filesystem that gives each branch context an isolated copy-on-write workspace, with O(1) creation, atomic commit to the parent, and automatic sibling invalidation, all without root privileges. BranchFS is open-sourced at github.com/multikernel/branchfs. Second, branch() is a proposed Linux syscall that spawns processes into branch contexts with reliable termination, kernel-enforced sibling isolation, and first-commit-wins coordination. Preliminary evaluation of BranchFS shows sub-350 µs branch creation independent of base filesystem size, and modification-proportional commit overhead (under 1 ms for small changes).
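The fork/explore/commit lifecycle with first-commit-wins resolution can be modeled in miniature. This toy Python class illustrates the semantics only; it is not BranchFS or the proposed branch() syscall, and every name in it is hypothetical.

```python
class BranchContext:
    """Toy in-memory model of the fork/explore/commit lifecycle:
    each branch gets a copy-on-write view of the parent's state,
    and the first branch to commit invalidates its siblings."""

    def __init__(self, parent=None, state=None):
        self.parent = parent
        self.state = dict(state or {})   # cheap snapshot copy stands in for CoW
        self.children = []
        self.invalidated = False
        self.committed = False

    def fork(self):
        child = BranchContext(parent=self, state=self.state)
        self.children.append(child)
        return child

    def commit(self):
        if self.invalidated:
            raise RuntimeError("branch was invalidated by a sibling's commit")
        # First-commit-wins: publish this branch's state to the parent,
        # then invalidate all sibling branches.
        self.parent.state = dict(self.state)
        for sib in self.parent.children:
            if sib is not self:
                sib.invalidated = True
        self.committed = True

root = BranchContext(state={"result.txt": ""})
a, b = root.fork(), root.fork()      # explore two paths in parallel
a.state["result.txt"] = "path A"
b.state["result.txt"] = "path B"
b.commit()                           # first commit wins; sibling a is invalidated
```

A later `a.commit()` would raise, mirroring the abstract's automatic sibling invalidation; the real system additionally isolates process groups, which a dict snapshot cannot capture.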

@arXiv_physicsinsdet_bot@mastoxiv.page
2026-02-03 08:51:39

Inter-detector differential fuzz testing for tamper detection in gamma spectrometers
Pei Yao Li, Jayson R. Vavrek, Sean Peisert
arxiv.org/abs/2602.00336 arxiv.org/pdf/2602.00336 arxiv.org/html/2602.00336
arXiv:2602.00336v1 Announce Type: new
Abstract: We extend physical differential fuzz testing as an anti-tamper method for radiation detectors [Vavrek et al., Science and Global Security 2025] to comparisons across multiple detector units. The method was previously introduced as a tamper detection method for authenticating a single radiation detector in nuclear safeguards and treaty verification scenarios, and works by randomly sampling detector configuration parameters to produce a sequence of spectra that form a baseline signature of an untampered system. At a later date, after potential tampering, the same random sequence of parameters is used to generate another series of spectra that can be compared against the baseline. Anomalies in the series of comparisons indicate changes in detector behavior, which may be due to tampering. One limitation of this original method is that once the detector has `gone downrange' and may have been tampered with, the original baseline is fixed, and a new trusted baseline can never be established if tests at new parameters are required. In this work, we extend our anti-tamper fuzz testing concept to multiple detector units, such that the downrange detector can be compared against a trusted or `golden copy' detector, despite normal inter-detector manufacturing variations. We show using three NaI detectors that this inter-detector differential fuzz testing can detect a representative attack, even when the tested and golden copy detectors are from different manufacturers and have different performance characteristics. Here, detecting tampering requires visualizing the comparison metric vs. the parameter values and not just the sample number; moreover, this baseline is non-linear and may require anomaly detection methods more complex than a simple threshold. Overall, this extension to multiple detectors improves prospects for operationalizing the technique in real-world treaty verification and safeguards contexts.
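The replay-and-compare core of differential fuzz testing can be sketched as follows. The stand-in spectrum model, the chi-square-style metric, and the fixed threshold are all illustrative assumptions, not the paper's detector physics or analysis (which, as the abstract notes, needs more than a simple threshold in the inter-detector case).

```python
import random

def measure_spectrum(param, tampered=False):
    """Stand-in for acquiring a gamma spectrum at one configuration
    parameter (e.g. a gain setting). Purely illustrative, not physics."""
    base = [100 + 10 * param * (i % 7) for i in range(16)]
    if tampered:
        # A toy "attack": inflate the high channels by 20%.
        base = [c * 1.2 if i > 8 else c for i, c in enumerate(base)]
    return base

def chi2(s1, s2):
    """Simple chi-square-style comparison metric between two spectra."""
    return sum((a - b) ** 2 / max(a, 1) for a, b in zip(s1, s2))

def fuzz_signature(seed, n_tests, tampered=False):
    """Replay the same random parameter sequence and collect spectra."""
    rng = random.Random(seed)   # shared seed -> identical parameter sequence
    return [measure_spectrum(rng.uniform(0.5, 2.0), tampered)
            for _ in range(n_tests)]

baseline = fuzz_signature(seed=42, n_tests=20)            # before going downrange
later = fuzz_signature(seed=42, n_tests=20, tampered=True)  # after potential tampering
scores = [chi2(a, b) for a, b in zip(baseline, later)]
anomalous = sum(s > 5.0 for s in scores)                  # threshold is illustrative
```

An untampered replay with the same seed reproduces the baseline exactly, so the metric stays at zero; the toy attack pushes every comparison over the threshold.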

@arXiv_csDS_bot@mastoxiv.page
2026-02-09 07:46:50

Towards Efficient Data Structures for Approximate Search with Range Queries
Ladan Kian, Dariusz R. Kowalski
arxiv.org/abs/2602.06860 arxiv.org/pdf/2602.06860 arxiv.org/html/2602.06860
arXiv:2602.06860v1 Announce Type: new
Abstract: Range queries are simple and popular types of queries used in data retrieval. However, extracting exact and complete information using range queries is costly. As a remedy, some previous work proposed a faster principle, {\em approximate} search with range queries, also called single range cover (SRC) search. It can, however, produce some false positives. In this work we introduce a new SRC search structure, a $c$-DAG (Directed Acyclic Graph), which provably decreases the average number of false positives by a logarithmic factor while keeping asymptotically the same time and memory complexities as a classic tree structure. A $c$-DAG is a tunable augmentation of the 1D-Tree with denser overlapping branches ($c \geq 3$ children per node). We perform a competitive analysis of a $c$-DAG with respect to the 1D-Tree and derive an additive constant time overhead and a multiplicative logarithmic improvement of the false positives ratio, on average. We also provide a generic framework to extend our results to empirical distributions of queries, and demonstrate its effectiveness for the Gowalla dataset. Finally, we quantify and discuss security and privacy aspects of SRC search on a $c$-DAG vs. a 1D-Tree, mainly mitigation of structural leakage, which makes the $c$-DAG a good data structure candidate for deployment in privacy-preserving systems (e.g., searchable encryption) and multimedia retrieval.
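The baseline the paper improves on, SRC search over a classic binary 1D-Tree, can be sketched as follows: cover the query range with a single tree node, at the cost of returning points outside the range. The routine and names here are illustrative assumptions, not the paper's $c$-DAG construction.

```python
def src_cover(lo, hi, universe_bits=16):
    """Single Range Cover on a binary 1D-Tree over [0, 2**universe_bits):
    descend to the smallest dyadic node [start, end) containing [lo, hi].
    The looser the cover, the more false positives the search admits."""
    start, end = 0, 1 << universe_bits
    while end - start > 1:
        mid = (start + end) // 2
        if hi < mid:
            end = mid          # both endpoints in the left child
        elif lo >= mid:
            start = mid        # both endpoints in the right child
        else:
            break              # endpoints split across children: stop here
    return start, end

def false_positive_count(lo, hi, cover):
    """Points the cover returns that the true query [lo, hi] did not ask for."""
    s, e = cover
    return (lo - s) + (e - 1 - hi)

cover = src_cover(100, 130)            # query straddles the dyadic boundary 128
fp = false_positive_count(100, 130, cover)
```

A query straddling a high dyadic boundary forces a very coarse cover, which is where a binary tree performs worst; the $c$-DAG's overlapping branches ($c \geq 3$ children per node) make a tighter single cover available on average, giving the logarithmic improvement the abstract claims.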