Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csRO_bot@mastoxiv.page
2025-08-01 09:57:41

Distributed AI Agents for Cognitive Underwater Robot Autonomy
Markus Buchholz, Ignacio Carlucho, Michele Grimaldi, Yvan R. Petillot
arxiv.org/abs/2507.23735

@arXiv_csIR_bot@mastoxiv.page
2025-06-30 09:41:30

DCN^2: Interplay of Implicit Collision Weights and Explicit Cross Layers for Large-Scale Recommendation
Blaž Škrlj, Yonatan Karni, Grega Gašperšič, Blaž Mramor, Yulia Stolin, Martin Jakomin, Jasna Urbančič, Yuval Dishi, Natalia Silberstein, Ophir Friedler, Assaf Klein
arxiv.org/abs/2506.21…

@arXiv_quantph_bot@mastoxiv.page
2025-07-29 11:21:42

A small and interesting architecture for early fault-tolerant quantum computers
Jacob S. Nelson, Andrew J. Landahl, Andrew D. Baczewski
arxiv.org/abs/2507.20387

@arXiv_eessSP_bot@mastoxiv.page
2025-07-30 09:42:21

Edge Agentic AI Framework for Autonomous Network Optimisation in O-RAN
Abdelaziz Salama, Zeinab Nezami, Mohammed M. H. Qazzaz, Maryam Hafeez, Syed Ali Raza Zaidi
arxiv.org/abs/2507.21696

@arXiv_csCL_bot@mastoxiv.page
2025-06-27 09:56:19

Domain Knowledge-Enhanced LLMs for Fraud and Concept Drift Detection
Ali Şenol, Garima Agrawal, Huan Liu
arxiv.org/abs/2506.21443 arxiv.org/pdf/2506.21443 arxiv.org/html/2506.21443
arXiv:2506.21443v1 Announce Type: new
Abstract: Detecting deceptive conversations on dynamic platforms is increasingly difficult due to evolving language patterns and Concept Drift (CD), i.e., semantic or topical shifts that alter the context or intent of interactions over time. These shifts can obscure malicious intent or mimic normal dialogue, making accurate classification challenging. While Large Language Models (LLMs) show strong performance in natural language tasks, they often struggle with contextual ambiguity and hallucinations in risk-sensitive scenarios. To address these challenges, we present a Domain Knowledge (DK)-Enhanced LLM framework that integrates pretrained LLMs with structured, task-specific insights to perform fraud and concept drift detection. The proposed architecture consists of three main components: (1) a DK-LLM module to detect fake or deceptive conversations; (2) a drift detection unit (OCDD) to determine whether a semantic shift has occurred; and (3) a second DK-LLM module to classify the drift as either benign or fraudulent. We first validate the value of domain knowledge using a fake review dataset and then apply our full framework to SEConvo, a multi-turn dialogue dataset that includes various types of fraud and spam attacks. Results show that our system detects fake conversations with high accuracy and effectively classifies the nature of drift. Guided by structured prompts, the LLaMA-based implementation achieves 98% classification accuracy. Comparative studies against zero-shot baselines demonstrate that incorporating domain knowledge and drift awareness significantly improves performance, interpretability, and robustness in high-stakes NLP applications.
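The abstract's three-stage routing (DK-LLM deception check, OCDD drift check, DK-LLM drift classification) can be sketched as plain control flow. This is a minimal illustration, not the paper's implementation; every function name here is hypothetical, with the three stages passed in as callables.

```python
# Hedged sketch of the three-stage DK-LLM pipeline from the abstract.
# All names are hypothetical; each stage is a caller-supplied callable.

def detect_fraud_and_drift(conversation, dk_llm_detect, ocdd_shift, dk_llm_classify_drift):
    """Route a conversation through the three stages described in the abstract.

    dk_llm_detect:         stage 1, flags fake/deceptive conversations (DK-LLM).
    ocdd_shift:            stage 2, detects whether a semantic shift occurred (OCDD).
    dk_llm_classify_drift: stage 3, labels a detected shift 'benign' or 'fraudulent'.
    """
    if dk_llm_detect(conversation):
        return "fraudulent"  # deceptive from the start, no drift analysis needed
    if ocdd_shift(conversation):
        # A shift occurred: let the second DK-LLM decide its nature.
        return dk_llm_classify_drift(conversation)
    return "benign"  # no deception detected and no semantic shift
```

The point of the sketch is the ordering: drift classification only runs when a shift is actually detected, which is what lets the second module specialize in benign-vs-fraudulent drift.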

@arXiv_csPL_bot@mastoxiv.page
2025-06-25 07:36:29

Mix-of-Language-Experts Architecture for Multilingual Programming
Yifan Zong, Yuntian Deng, Pengyu Nie
arxiv.org/abs/2506.18923

@arXiv_csIR_bot@mastoxiv.page
2025-05-30 09:54:04

This arxiv.org/abs/2503.09492 has been replaced.
initial toot: mastoxiv.page/@arXiv_csIR_…

@arXiv_csLG_bot@mastoxiv.page
2025-07-24 09:07:59

PyG 2.0: Scalable Learning on Real World Graphs
Matthias Fey, Jinu Sunil, Akihiro Nitta, Rishi Puri, Manan Shah, Blaž Stojanovič, Ramona Bendias, Alexandria Barghi, Vid Kocijan, Zecheng Zhang, Xinwei He, Jan Eric Lenssen, Jure Leskovec
arxiv.org/abs/2507.16991

@arXiv_csNI_bot@mastoxiv.page
2025-07-28 11:58:33

Replaced article(s) found for cs.NI. arxiv.org/list/cs.NI/new
[1/1]:
- Towards Constraint-aware Learning for Resource Allocation in NFV Networks
Tianfu Wang, Long Yang, Chao Wang, Chuan Qin, Liwei Deng, Wei Wu, Junyang Wang, Li Shen, Hui Xiong

@arXiv_csCR_bot@mastoxiv.page
2025-07-03 08:34:20

A Compact 16-bit S-box over Tower Field $\mathbb{F}_{(((2^2)^2)^2)^2}$ with High Security
Bahram Rashidi, Behrooz Khadem
arxiv.org/abs/2507.01423

@arXiv_csAR_bot@mastoxiv.page
2025-06-24 16:43:30

Replaced article(s) found for cs.AR. arxiv.org/list/cs.AR/new
[1/1]:
- LLM-Aided Testbench Generation and Bug Detection for Finite-State Machines
Jitendra Bhandari, Johann Knechtel, Ramesh Narayanaswamy, Siddharth Garg, Ramesh Karri

@bobmueller@mastodon.world
2025-07-14 15:45:00

How cool is this? #music #art
instagram.com/reel/DL3CAxCPgxy

@geant@mstdn.social
2025-05-19 12:17:06

Upcoming GÉANT infoshare - Part 2 🚀
🔐 Quantum KMS Architectures and Services
This session will explore the key aspects of Key Management Systems (KMS) in enabling secure, scalable Quantum Key Distribution. Gain insights on architecture, standardisation, and integration—from both research and industry voices.
🔗 Sign up here:

@mikeymikey@hachyderm.io
2025-06-11 16:28:10

If you are a vendor or developer for #macOS who has not yet moved to Universal 2 or still ships a product only for the x86_64 architecture, there is extremely straightforward guidance here to update your product ASAP
#Apple #WWDC25
developer.apple.com/documentat

@arXiv_quantph_bot@mastoxiv.page
2025-07-23 13:31:18

Replaced article(s) found for quant-ph. arxiv.org/list/quant-ph/new
[2/2]:
- Information-acquiring von Neumann architecture of a computer: Functionality and subjectivity
Eiji Konishi

@arXiv_csLG_bot@mastoxiv.page
2025-07-24 09:30:59

Causal Graph Fuzzy LLMs: A First Introduction and Applications in Time Series Forecasting
Omid Orang, Patricia O. Lucas, Gabriel I. F. Paiva, Petronio C. L. Silva, Felipe Augusto Rocha da Silva, Adriano Alonso Veloso, Frederico Gadelha Guimaraes
arxiv.org/abs/2507.17016

@arXiv_eessSY_bot@mastoxiv.page
2025-06-11 08:10:15

Feasibility Study of CNNs and MLPs for Radiation Heat Transfer in 2-D Furnaces with Spectrally Participative Gases
Axel TahmasebiMoradi, Vincent Ren, Benjamin Le-Creurer, Chetra Mang
arxiv.org/abs/2506.08033

@arXiv_csCV_bot@mastoxiv.page
2025-07-11 10:18:51

TinierHAR: Towards Ultra-Lightweight Deep Learning Models for Efficient Human Activity Recognition on Edge Devices
Sizhen Bian, Mengxi Liu, Vitor Fortes Rey, Daniel Geissler, Paul Lukowicz
arxiv.org/abs/2507.07949

@arXiv_csAR_bot@mastoxiv.page
2025-07-22 07:49:30

GCC: A 3DGS Inference Architecture with Gaussian-Wise and Cross-Stage Conditional Processing
Minnan Pei, Gang Li, Junwen Si, Zeyu Zhu, Zitao Mo, Peisong Wang, Zhuoran Song, Xiaoyao Liang, Jian Cheng
arxiv.org/abs/2507.15300

@arXiv_csSD_bot@mastoxiv.page
2025-07-17 07:39:09

Towards Scalable AASIST: Refining Graph Attention for Speech Deepfake Detection
Ivan Viakhirev, Daniil Sirota, Aleksandr Smirnov, Kirill Borodin
arxiv.org/abs/2507.11777

@arXiv_csNI_bot@mastoxiv.page
2025-06-23 16:49:29

Replaced article(s) found for cs.NI. arxiv.org/list/cs.NI/new
[1/1]:
- Semantic-Aware Resource Allocation Based on Deep Reinforcement Learning for 5G-V2X HetNets
Zhiyu Shao, Qiong Wu, Pingyi Fan, Nan Cheng, Qiang Fan, Jiangzhou Wang

@arXiv_csDC_bot@mastoxiv.page
2025-07-14 08:38:22

Fast and Interactive Byzantine Fault-tolerant Web Services via Session-Based Consensus Decoupling
Ahmad Zaki Akmal, Azkario Rizky Pratama, Guntur Dharma Putra
arxiv.org/abs/2507.08281

@arXiv_eessIV_bot@mastoxiv.page
2025-06-03 17:20:31

This arxiv.org/abs/2505.24799 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@arXiv_csMA_bot@mastoxiv.page
2025-06-18 08:30:23

Investigating the Potential of Large Language Model-Based Router Multi-Agent Architectures for Foundation Design Automation: A Task Classification and Expert Selection Study
Sompote Youwai, David Phim, Vianne Gayl Murcia, Rianne Clair Onas
arxiv.org/abs/2506.13811

@arXiv_astrophCO_bot@mastoxiv.page
2025-07-11 08:28:41

Design and optimization of neural networks for multifidelity cosmological emulation
Yanhui Yang, Simeon Bird, Ming-Feng Ho, Mahdi Qezlou
arxiv.org/abs/2507.07184

@arXiv_quantph_bot@mastoxiv.page
2025-07-17 10:02:20

BenchRL-QAS: Benchmarking reinforcement learning algorithms for quantum architecture search
Azhar Ikhtiarudin, Aditi Das, Param Thakkar, Akash Kundu
arxiv.org/abs/2507.12189

@arXiv_csRO_bot@mastoxiv.page
2025-07-17 13:17:24

Replaced article(s) found for cs.RO. arxiv.org/list/cs.RO/new
[1/2]:
- Enhancing Trust in Autonomous Agents: An Architecture for Accountability and Explainability throu...
Fernández-Becerra, González-Santamarta, Guerrero-Higueras, Rodríguez-Lera, Olivera…

@arXiv_csGR_bot@mastoxiv.page
2025-06-12 07:34:31

STREAMINGGS: Voxel-Based Streaming 3D Gaussian Splatting with Memory Optimization and Architectural Support
Chenqi Zhang, Yu Feng, Jieru Zhao, Guangda Liu, Wenchao Ding, Chentao Wu, Minyi Guo
arxiv.org/abs/2506.09070

@arXiv_qbioSC_bot@mastoxiv.page
2025-05-08 09:22:09

This arxiv.org/abs/2503.03913 has been replaced.
initial toot: mastoxiv.page/@arXiv_qbi…

@arXiv_physicsappph_bot@mastoxiv.page
2025-06-05 07:32:45

A Simple and Novel Passive Double-Sensitivity Optical Gyroscope Based on Non-Reciprocal Polarization Techniques
Onder Akcaalan, Melike Gumus Akcaalan
arxiv.org/abs/2506.03498

@arXiv_csLG_bot@mastoxiv.page
2025-07-11 10:23:51

Skip a Layer or Loop it? Test-Time Depth Adaptation of Pretrained LLMs
Ziyue Li, Yang Li, Tianyi Zhou
arxiv.org/abs/2507.07996 arxiv.org/pdf/2507.07996 arxiv.org/html/2507.07996
arXiv:2507.07996v1 Announce Type: new
Abstract: Can a pretrained neural network adapt its architecture to different inputs without any finetuning? Do we need all layers for simple tasks, and are they adequate for challenging tasks? We found that the layers of a pretrained large language model (LLM) can be manipulated as separate modules to build a better and even shallower model customized for each test sample. In particular, each layer from the pretrained model can be skipped/pruned or repeated multiple times as recurrent neural networks (RNN), and stacked with others in arbitrary orders, yielding a chain-of-layers (CoLa) per sample. This compositional space greatly expands the scope of existing works on looped/recurrent pretrained modules, layer pruning, or early-exit networks. We develop a Monte Carlo Tree Search (MCTS) protocol to explore and identify the optimal CoLa for each sample from math and commonsense reasoning benchmarks. Compared to a static model of a fixed depth, CoLa allows shortcut paths (fast thinking), recurrence of the same layer(s) (slow thinking), and combining both, offering more flexible, dynamic architectures for different inputs. We conduct an extensive analysis of the MCTS-optimized CoLa, which leads to two key findings: (1) For >75% of samples with correct predictions by the original LLM, we can find shorter CoLa, suggesting a large space for improving inference efficiency; (2) For >60% of samples with originally incorrect predictions, we can identify CoLa achieving correct predictions, suggesting a large space of performance enhancement. Our results highlight the shortcomings of using a fixed architecture of pre-trained LLMs for inference on different samples and pave the way to unlock the generalization power of test-time depth adaptation.
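The chain-of-layers (CoLa) idea above reduces to evaluating layers as an arbitrary index sequence, so a layer can be skipped (absent from the chain) or looped like an RNN (repeated in the chain). A minimal sketch with toy stand-in layers, not the paper's MCTS search; the function name and the toy layers are hypothetical.

```python
# Hedged sketch of a chain-of-layers (CoLa) evaluation, assuming layers are
# composable functions. The paper's MCTS would search over `chain` per sample.

def run_cola(x, layers, chain):
    """Apply layers[chain[0]], layers[chain[1]], ... in order.

    Repeats in `chain` model recurrence; indices absent from `chain` model
    layer skipping; the static model corresponds to chain = [0, 1, ..., n-1].
    """
    for i in chain:
        x = layers[i](x)
    return x

# Toy stand-ins for transformer layers:
layers = [lambda v: v + 1, lambda v: v * 2]
full = run_cola(1, layers, [0, 1])      # static model: every layer once
short = run_cola(1, layers, [0])        # skip layer 1 (fast thinking)
looped = run_cola(1, layers, [0, 0, 1]) # repeat layer 0 (slow thinking)
```

Under this framing, the MCTS protocol in the abstract is a per-sample search over the `chain` sequence, scored by whether the resulting shallower or deeper model answers the sample correctly.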

@arXiv_csMA_bot@mastoxiv.page
2025-07-03 07:51:10

Exploring Advanced LLM Multi-Agent Systems Based on Blackboard Architecture
Bochen Han, Songmao Zhang
arxiv.org/abs/2507.01701

@arXiv_quantph_bot@mastoxiv.page
2025-07-09 09:35:22

Special-Unitary Parameterization for Trainable Variational Quantum Circuits
Kuan-Cheng Chen, Huan-Hsin Tseng, Samuel Yen-Chi Chen, Chen-Yu Liu, Kin K. Leung
arxiv.org/abs/2507.05535

@arXiv_csIR_bot@mastoxiv.page
2025-06-05 09:40:54

This arxiv.org/abs/2503.09492 has been replaced.
initial toot: mastoxiv.page/@arXiv_csIR_…

@arXiv_csMA_bot@mastoxiv.page
2025-06-12 07:47:41

Intelligent System of Emergent Knowledge: A Coordination Fabric for Billions of Minds
Moshi Wei, Sparks Li
arxiv.org/abs/2506.09335

@arXiv_astrophCO_bot@mastoxiv.page
2025-07-02 09:30:40

DeepCHART: Mapping the 3D dark matter density field from Ly$\alpha$ forest surveys using deep learning
Soumak Maitra (TIFR), Matteo Viel, Girish Kulkarni
arxiv.org/abs/2507.00135

@arXiv_csIR_bot@mastoxiv.page
2025-06-10 08:24:42

HotelMatch-LLM: Joint Multi-Task Training of Small and Large Language Models for Efficient Multimodal Hotel Retrieval
Arian Askari, Emmanouil Stergiadis, Ilya Gusev, Moran Beladev
arxiv.org/abs/2506.07296