
2025-08-01 09:57:41
Distributed AI Agents for Cognitive Underwater Robot Autonomy
Markus Buchholz, Ignacio Carlucho, Michele Grimaldi, Yvan R. Petillot
https://arxiv.org/abs/2507.23735
DCN^2: Interplay of Implicit Collision Weights and Explicit Cross Layers for Large-Scale Recommendation
Blaž Škrlj, Yonatan Karni, Grega Gašperšič, Blaž Mramor, Yulia Stolin, Martin Jakomin, Jasna Urbančič, Yuval Dishi, Natalia Silberstein, Ophir Friedler, Assaf Klein
https://arxiv.org/abs/2506.21…
A small and interesting architecture for early fault-tolerant quantum computers
Jacob S. Nelson, Andrew J. Landahl, Andrew D. Baczewski
https://arxiv.org/abs/2507.20387
Edge Agentic AI Framework for Autonomous Network Optimisation in O-RAN
Abdelaziz Salama, Zeinab Nezami, Mohammed M. H. Qazzaz, Maryam Hafeez, Syed Ali Raza Zaidi
https://arxiv.org/abs/2507.21696
Domain Knowledge-Enhanced LLMs for Fraud and Concept Drift Detection
Ali Şenol, Garima Agrawal, Huan Liu
https://arxiv.org/abs/2506.21443 https://arxiv.org/pdf/2506.21443 https://arxiv.org/html/2506.21443
arXiv:2506.21443v1 Announce Type: new
Abstract: Detecting deceptive conversations on dynamic platforms is increasingly difficult due to evolving language patterns and Concept Drift (CD), i.e., semantic or topical shifts that alter the context or intent of interactions over time. These shifts can obscure malicious intent or mimic normal dialogue, making accurate classification challenging. While Large Language Models (LLMs) show strong performance in natural language tasks, they often struggle with contextual ambiguity and hallucinations in risk-sensitive scenarios. To address these challenges, we present a Domain Knowledge (DK)-Enhanced LLM framework that integrates pretrained LLMs with structured, task-specific insights to perform fraud and concept drift detection. The proposed architecture consists of three main components: (1) a DK-LLM module to detect fake or deceptive conversations; (2) a drift detection unit (OCDD) to determine whether a semantic shift has occurred; and (3) a second DK-LLM module to classify the drift as either benign or fraudulent. We first validate the value of domain knowledge using a fake review dataset and then apply our full framework to SEConvo, a multi-turn dialogue dataset that includes various types of fraud and spam attacks. Results show that our system detects fake conversations with high accuracy and effectively classifies the nature of drift. Guided by structured prompts, the LLaMA-based implementation achieves 98% classification accuracy. Comparative studies against zero-shot baselines demonstrate that incorporating domain knowledge and drift awareness significantly improves performance, interpretability, and robustness in high-stakes NLP applications.
toXiv_bot_toot
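The three-stage pipeline described in the abstract can be sketched in a few lines. This is an illustrative stand-in only, not the authors' implementation: the function names, the keyword-based "DK-LLM" checks, and the vocabulary-overlap "OCDD" heuristic are all hypothetical placeholders for the real LLM and drift-detection components.

```python
# Hypothetical sketch of the abstract's three stages: (1) a DK-LLM deception
# check, (2) an OCDD drift check, (3) a second DK-LLM pass that classifies
# detected drift as benign or fraudulent. All heuristics below are stand-ins.

def dk_llm_is_deceptive(message: str, domain_rules: list) -> bool:
    """Stand-in for a domain-knowledge-prompted LLM deception check:
    flags the message if any domain-specific red-flag phrase appears."""
    text = message.lower()
    return any(rule in text for rule in domain_rules)

def ocdd_drift_detected(history: list, message: str) -> bool:
    """Stand-in for the OCDD unit: signals a semantic shift when the new
    message shares no vocabulary with the conversation so far."""
    seen = {w for turn in history for w in turn.lower().split()}
    return bool(seen) and not (seen & set(message.lower().split()))

def classify_turn(history: list, message: str, domain_rules: list) -> str:
    # Stage 1: DK-LLM deception check on the incoming turn.
    if dk_llm_is_deceptive(message, domain_rules):
        return "fraudulent"
    # Stage 2: drift detection; Stage 3: classify the drift itself.
    if ocdd_drift_detected(history, message):
        drift_rules = ["urgent", "verify your account"]  # hypothetical red flags
        if dk_llm_is_deceptive(message, drift_rules):
            return "fraudulent drift"
        return "benign drift"
    return "benign"
```

In the paper the two DK-LLM stages are prompted language models and OCDD is a dedicated drift detector; the sketch only mirrors the control flow between them.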
Mix-of-Language-Experts Architecture for Multilingual Programming
Yifan Zong, Yuntian Deng, Pengyu Nie
https://arxiv.org/abs/2506.18923
This https://arxiv.org/abs/2503.09492 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csIR_…
PyG 2.0: Scalable Learning on Real World Graphs
Matthias Fey, Jinu Sunil, Akihiro Nitta, Rishi Puri, Manan Shah, Blaž Stojanovič, Ramona Bendias, Alexandria Barghi, Vid Kocijan, Zecheng Zhang, Xinwei He, Jan Eric Lenssen, Jure Leskovec
https://arxiv.org/abs/2507.16991
Replaced article(s) found for cs.NI. https://arxiv.org/list/cs.NI/new
[1/1]:
- Towards Constraint-aware Learning for Resource Allocation in NFV Networks
Tianfu Wang, Long Yang, Chao Wang, Chuan Qin, Liwei Deng, Wei Wu, Junyang Wang, Li Shen, Hui Xiong
A Compact 16-bit S-box over Tower Field $\mathbb{F}_{(((2^2)^2)^2)^2}$ with High Security
Bahram Rashidi, Behrooz Khadem
https://arxiv.org/abs/2507.01423
Replaced article(s) found for cs.AR. https://arxiv.org/list/cs.AR/new
[1/1]:
- LLM-Aided Testbench Generation and Bug Detection for Finite-State Machines
Jitendra Bhandari, Johann Knechtel, Ramesh Narayanaswamy, Siddharth Garg, Ramesh Karri
How cool is this? #music #art
https://www.instagram.com/reel/DL3CAxCPgxy/?igsh=MTJ…
Upcoming GÉANT infoshare - Part 2 🚀
🔐 Quantum KMS Architectures and Services
This session will explore the key aspects of Key Management Systems (KMS) in enabling secure, scalable Quantum Key Distribution. Gain insights on architecture, standardisation, and integration—from both research and industry voices.
🔗 Sign up here: https://
If you are a vendor or developer for #macOS who has not yet moved to Universal 2, or who ships a product only for the x86_64 architecture, there is extremely straightforward guidance for you here to update your product ASAP
#Apple #WWDC25
https://developer.apple.com/documentation/apple-silicon/about-the-rosetta-translation-environment
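As a quick illustration of the kind of migration the linked guidance covers, the standard Xcode tool `lipo` can inspect and build universal binaries. These commands are macOS-specific, and the file paths are placeholders:

```shell
# Inspect which architectures a Mach-O binary contains
# (a Universal 2 binary reports both x86_64 and arm64):
lipo -archs /path/to/MyTool

# Merge separate x86_64 and arm64 builds into one universal binary
# (paths are hypothetical examples):
lipo -create build/x86_64/MyTool build/arm64/MyTool -output build/universal/MyTool
```

With Xcode's default "Standard Architectures" build setting, a single archive already produces a universal binary; `lipo` is mainly needed for custom build systems.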
Replaced article(s) found for quant-ph. https://arxiv.org/list/quant-ph/new
[2/2]:
- Information-acquiring von Neumann architecture of a computer: Functionality and subjectivity
Eiji Konishi
Causal Graph Fuzzy LLMs: A First Introduction and Applications in Time Series Forecasting
Omid Orang, Patricia O. Lucas, Gabriel I. F. Paiva, Petronio C. L. Silva, Felipe Augusto Rocha da Silva, Adriano Alonso Veloso, Frederico Gadelha Guimaraes
https://arxiv.org/abs/2507.17016
Feasibility Study of CNNs and MLPs for Radiation Heat Transfer in 2-D Furnaces with Spectrally Participative Gases
Axel TahmasebiMoradi, Vincent Ren, Benjamin Le-Creurer, Chetra Mang
https://arxiv.org/abs/2506.08033
TinierHAR: Towards Ultra-Lightweight Deep Learning Models for Efficient Human Activity Recognition on Edge Devices
Sizhen Bian, Mengxi Liu, Vitor Fortes Rey, Daniel Geissler, Paul Lukowicz
https://arxiv.org/abs/2507.07949
GCC: A 3DGS Inference Architecture with Gaussian-Wise and Cross-Stage Conditional Processing
Minnan Pei, Gang Li, Junwen Si, Zeyu Zhu, Zitao Mo, Peisong Wang, Zhuoran Song, Xiaoyao Liang, Jian Cheng
https://arxiv.org/abs/2507.15300
Towards Scalable AASIST: Refining Graph Attention for Speech Deepfake Detection
Ivan Viakhirev, Daniil Sirota, Aleksandr Smirnov, Kirill Borodin
https://arxiv.org/abs/2507.11777
Replaced article(s) found for cs.NI. https://arxiv.org/list/cs.NI/new
[1/1]:
- Semantic-Aware Resource Allocation Based on Deep Reinforcement Learning for 5G-V2X HetNets
Zhiyu Shao, Qiong Wu, Pingyi Fan, Nan Cheng, Qiang Fan, Jiangzhou Wang
Fast and Interactive Byzantine Fault-tolerant Web Services via Session-Based Consensus Decoupling
Ahmad Zaki Akmal, Azkario Rizky Pratama, Guntur Dharma Putra
https://arxiv.org/abs/2507.08281
This https://arxiv.org/abs/2505.24799 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_ees…
Investigating the Potential of Large Language Model-Based Router Multi-Agent Architectures for Foundation Design Automation: A Task Classification and Expert Selection Study
Sompote Youwai, David Phim, Vianne Gayl Murcia, Rianne Clair Onas
https://arxiv.org/abs/2506.13811
Design and optimization of neural networks for multifidelity cosmological emulation
Yanhui Yang, Simeon Bird, Ming-Feng Ho, Mahdi Qezlou
https://arxiv.org/abs/2507.07184
BenchRL-QAS: Benchmarking reinforcement learning algorithms for quantum architecture search
Azhar Ikhtiarudin, Aditi Das, Param Thakkar, Akash Kundu
https://arxiv.org/abs/2507.12189
Replaced article(s) found for cs.RO. https://arxiv.org/list/cs.RO/new
[1/2]:
- Enhancing Trust in Autonomous Agents: An Architecture for Accountability and Explainability throu...
Fernández-Becerra, González-Santamarta, Guerrero-Higueras, Rodríguez-Lera, Olivera…
STREAMINGGS: Voxel-Based Streaming 3D Gaussian Splatting with Memory Optimization and Architectural Support
Chenqi Zhang, Yu Feng, Jieru Zhao, Guangda Liu, Wenchao Ding, Chentao Wu, Minyi Guo
https://arxiv.org/abs/2506.09070
This https://arxiv.org/abs/2503.03913 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_qbi…
A Simple and Novel Passive Double-Sensitivity Optical Gyroscope Based on Non-Reciprocal Polarization Techniques
Onder Akcaalan, Melike Gumus Akcaalan
https://arxiv.org/abs/2506.03498
Skip a Layer or Loop it? Test-Time Depth Adaptation of Pretrained LLMs
Ziyue Li, Yang Li, Tianyi Zhou
https://arxiv.org/abs/2507.07996 https://arxiv.org/pdf/2507.07996 https://arxiv.org/html/2507.07996
arXiv:2507.07996v1 Announce Type: new
Abstract: Can a pretrained neural network adapt its architecture to different inputs without any finetuning? Do we need all layers for simple tasks, and are they adequate for challenging tasks? We found that the layers of a pretrained large language model (LLM) can be manipulated as separate modules to build a better and even shallower model customized for each test sample. In particular, each layer from the pretrained model can be skipped/pruned or repeated multiple times as recurrent neural networks (RNN), and stacked with others in arbitrary orders, yielding a chain-of-layers (CoLa) per sample. This compositional space greatly expands the scope of existing works on looped/recurrent pretrained modules, layer pruning, or early-exit networks. We develop a Monte Carlo Tree Search (MCTS) protocol to explore and identify the optimal CoLa for each sample from math and commonsense reasoning benchmarks. Compared to a static model of a fixed depth, CoLa allows shortcut paths (fast thinking), recurrence of the same layer(s) (slow thinking), and combining both, offering more flexible, dynamic architectures for different inputs. We conduct an extensive analysis of the MCTS-optimized CoLa, which leads to two key findings: (1) For >75% of samples with correct predictions by the original LLM, we can find shorter CoLa, suggesting a large space for improving inference efficiency; (2) For >60% of samples with originally incorrect predictions, we can identify CoLa achieving correct predictions, suggesting a large space of performance enhancement. Our results highlight the shortcomings of using a fixed architecture of pre-trained LLMs for inference on different samples and pave the way to unlock the generalization power of test-time depth adaptation.
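The core chain-of-layers (CoLa) idea from the abstract, treating pretrained layers as modules that can be skipped or repeated per sample, can be sketched minimally. This is illustrative only, not the paper's code: the toy arithmetic "layers" stand in for transformer blocks, and the MCTS search over chains is omitted.

```python
# Illustrative sketch (not the paper's code): run one sample through an
# arbitrary chain of layer indices, so skipping a layer or looping it is
# just a matter of which indices appear in the chain, and how often.

def apply_cola(x: float, layers: list, chain: list) -> float:
    """Apply the layers selected by `chain`, in order, to input x."""
    for idx in chain:
        x = layers[idx](x)
    return x

# Toy "layers" standing in for pretrained transformer blocks.
layers = [lambda v: v + 1.0, lambda v: v * 2.0, lambda v: v - 3.0]

full = apply_cola(0.0, layers, [0, 1, 2])       # static depth: ((0+1)*2)-3 = -1.0
short = apply_cola(0.0, layers, [0, 1])         # skip layer 2 (fast thinking): 2.0
looped = apply_cola(0.0, layers, [0, 1, 1, 2])  # repeat layer 1 (slow thinking): 1.0
```

In the paper, MCTS searches this space of chains per test sample; the sketch only shows why skips and repeats fall out of the same representation.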
Exploring Advanced LLM Multi-Agent Systems Based on Blackboard Architecture
Bochen Han, Songmao Zhang
https://arxiv.org/abs/2507.01701
Special-Unitary Parameterization for Trainable Variational Quantum Circuits
Kuan-Cheng Chen, Huan-Hsin Tseng, Samuel Yen-Chi Chen, Chen-Yu Liu, Kin K. Leung
https://arxiv.org/abs/2507.05535
Intelligent System of Emergent Knowledge: A Coordination Fabric for Billions of Minds
Moshi Wei, Sparks Li
https://arxiv.org/abs/2506.09335
DeepCHART: Mapping the 3D dark matter density field from Ly$\alpha$ forest surveys using deep learning
Soumak Maitra (TIFR), Matteo Viel, Girish Kulkarni
https://arxiv.org/abs/2507.00135
HotelMatch-LLM: Joint Multi-Task Training of Small and Large Language Models for Efficient Multimodal Hotel Retrieval
Arian Askari, Emmanouil Stergiadis, Ilya Gusev, Moran Beladev
https://arxiv.org/abs/2506.07296