Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csCV_bot@mastoxiv.page
2025-07-18 10:22:02

$\pi^3$: Scalable Permutation-Equivariant Visual Geometry Learning
Yifan Wang, Jianjun Zhou, Haoyi Zhu, Wenzheng Chang, Yang Zhou, Zizun Li, Junyi Chen, Jiangmiao Pang, Chunhua Shen, Tong He
arxiv.org/abs/2507.13347

@arXiv_csCL_bot@mastoxiv.page
2025-09-19 10:33:21

Patent Language Model Pretraining with ModernBERT
Amirhossein Yousefiramandi, Ciaran Cooney
arxiv.org/abs/2509.14926 arxiv.org/pdf/2509.149…

@arXiv_csCR_bot@mastoxiv.page
2025-09-19 09:46:41

Threat Modeling for Enhancing Security of IoT Audio Classification Devices under a Secure Protocols Framework
Sergio Benlloch-Lopez, Miquel Viel-Vazquez, Javier Naranjo-Alcazar, Jordi Grau-Haro, Pedro Zuccarello
arxiv.org/abs/2509.14657

@arXiv_eessSP_bot@mastoxiv.page
2025-08-19 10:57:30

Scaling Wideband Massive MIMO Radar via Beamspace Dimension Reduction
Oveys Delafrooz Noroozi, Jiyoon Han, Wei Tang, Zhengya Zhang, Upamanyu Madhow
arxiv.org/abs/2508.11790

@arXiv_condmatstrel_bot@mastoxiv.page
2025-09-15 08:58:21

Entanglement architecture of beyond-Landau quantum criticality
Menghan Song, Ting-Tung Wang, Liuke Lyu, William Witczak-Krempa, Zi Yang Meng
arxiv.org/abs/2509.09983

@arXiv_csCV_bot@mastoxiv.page
2025-09-16 12:39:17

CLAIRE: A Dual Encoder Network with RIFT Loss and Phi-3 Small Language Model Based Interpretability for Cross-Modality Synthetic Aperture Radar and Optical Land Cover Segmentation
Debopom Sutradhar, Arefin Ittesafun Abian, Mohaimenul Azam Khan Raiaan, Reem E. Mohamed, Sheikh Izzal Azid, Sami Azam
arxiv.org/abs/2509.11952

@arXiv_physicsoptics_bot@mastoxiv.page
2025-09-16 10:49:37

Programmable Optical Filters Based on Feed-Forward Photonic Meshes
Carson G. Valdez, Anne R. Kroo, Marek Vlk, Charles Roques-Carmes, Shanhui Fan, David A. B. Miller, Olav Solgaard
arxiv.org/abs/2509.12059

@arXiv_csAI_bot@mastoxiv.page
2025-09-08 09:15:30

Internet 3.0: Architecture for a Web-of-Agents with it's Algorithm for Ranking Agents
Rajesh Tembarai Krishnamachari, Srividya Rajesh
arxiv.org/abs/2509.04979

@Techmeme@techhub.social
2025-06-23 20:45:44

Salesforce launches Agentforce 3 with an observability tool called Command Center and MCP support, and says 8,000 customers have signed up to deploy Agentforce (Larry Dignan/Constellation Research)
constellationr.com/blog-news/i

@frankel@mastodon.top
2025-06-22 08:15:07

CI/CD #Pipeline #Architecture: Complete Guide to Building Robust CI and CD Pipelines

@blakes7bot@mas.torpidity.net
2025-08-09 12:21:15

Series B, Episode 03 - Weapon
AVON: Auron may be different, Cally, but on Earth it is considered ill-mannered to kill your friends while committing suicide.
GAN: Why that base, Cally?
blake.torpidity.net/m/203/31 B7B3

Claude 3.7 describes the image as: "The image shows a person in what appears to be a scene from a television production, likely from the late 1970s or early 1980s based on the visual style and quality. They are wearing a black turtleneck sweater and have short dark hair. The background is minimalist and gray, with what looks like stairs or stepped architecture visible behind them.

The lighting and cinematography have that distinctive quality common in British sci-fi television productions of t…
@arXiv_csLG_bot@mastoxiv.page
2025-09-10 11:47:48

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[3/4]:
- MedicalPatchNet: A Patch-Based Self-Explainable AI Architecture for Chest X-ray Classification
Patrick Wienholt, Christiane Kuhl, Jakob Nikolas Kather, Sven Nebelung, Daniel Truhn

@arXiv_csCY_bot@mastoxiv.page
2025-08-06 08:39:30

The Architecture of Trust: A Framework for AI-Augmented Real Estate Valuation in the Era of Structured Data
Petteri Teikari, Mike Jarrell, Maryam Azh, Harri Pesola
arxiv.org/abs/2508.02765

@arXiv_quantph_bot@mastoxiv.page
2025-07-29 11:21:42

A small and interesting architecture for early fault-tolerant quantum computers
Jacob S. Nelson, Andrew J. Landahl, Andrew D. Baczewski
arxiv.org/abs/2507.20387

@arXiv_csNI_bot@mastoxiv.page
2025-08-08 12:34:14

Replaced article(s) found for cs.NI. arxiv.org/list/cs.NI/new
[1/1]:
- Performance Comparison of HTTP/3 and HTTP/2 with Proxy Integration
Fan Liu, Behrooz Farkiani, John Dehart, Jyoti Parwatikar, Patrick Crowley

@arXiv_csRO_bot@mastoxiv.page
2025-08-29 10:04:31

Scaling Fabric-Based Piezoresistive Sensor Arrays for Whole-Body Tactile Sensing
Curtis C. Johnson, Daniel Webb, David Hill, Marc D. Killpack
arxiv.org/abs/2508.20959

@arXiv_csAR_bot@mastoxiv.page
2025-08-04 07:31:40

E2ATST: A Temporal-Spatial Optimized Energy-Efficient Architecture for Training Spiking Transformer
Yunhao Ma, Yanyu Lin, Mingjing Li, Puli Quan, Chenlin Zhou, Wenyue Zhang, Zhiwei Zhong, Wanyi Jia, Xueke Zhu, Qingyan Meng, Huihui Zhou, Fengwei An
arxiv.org/abs/2508.00475

@arXiv_csIR_bot@mastoxiv.page
2025-08-07 07:37:53

Suggest, Complement, Inspire: Story of Two Tower Recommendations at Allegro.com
Aleksandra Osowska-Kurczab, Klaudia Nazarko, Mateusz Marzec, Lidia Wojciechowska, Eliška Kremeňová
arxiv.org/abs/2508.03702

@arXiv_csMA_bot@mastoxiv.page
2025-07-03 07:51:10

Exploring Advanced LLM Multi-Agent Systems Based on Blackboard Architecture
Bochen Han, Songmao Zhang
arxiv.org/abs/2507.01701

@arXiv_csDC_bot@mastoxiv.page
2025-08-26 10:00:46

Scalable Engine and the Performance of Different LLM Models in a SLURM based HPC architecture
Anderson de Lima Luiz, Shubham Vijay Kurlekar, Munir Georges
arxiv.org/abs/2508.17814

@arXiv_csCL_bot@mastoxiv.page
2025-09-05 10:11:01

Expanding Foundational Language Capabilities in Open-Source LLMs through a Korean Case Study
Junghwan Lim, Gangwon Jo, Sungmin Lee, Jiyoung Park, Dongseok Kim, Jihwan Kim, Junhyeok Lee, Wai Ting Cheung, Dahye Choi, Kibong Choi, Jaeyeon Huh, Beomgyu Kim, Jangwoong Kim, Taehyun Kim, Haesol Lee, Jeesoo Lee, Dongpin Oh, Changseok Song, Daewon Suh

@arXiv_csCR_bot@mastoxiv.page
2025-07-10 09:16:21

Bridging AI and Software Security: A Comparative Vulnerability Assessment of LLM Agent Deployment Paradigms
Tarek Gasmi, Ramzi Guesmi, Ines Belhadj, Jihene Bennaceur
arxiv.org/abs/2507.06323

@arXiv_csCV_bot@mastoxiv.page
2025-09-09 12:27:02

MRI-Based Brain Tumor Detection through an Explainable EfficientNetV2 and MLP-Mixer-Attention Architecture
Mustafa Yurdakul, Şakir Taşdemir
arxiv.org/abs/2509.06713

@arXiv_hepph_bot@mastoxiv.page
2025-06-26 09:29:10

DeepQuark: deep-neural-network approach to multiquark bound states
Wei-Lin Wu, Lu Meng, Shi-Lin Zhu
arxiv.org/abs/2506.20555

@arXiv_csCV_bot@mastoxiv.page
2025-09-09 12:26:52

Cortex-Synth: Differentiable Topology-Aware 3D Skeleton Synthesis with Hierarchical Graph Attention
Mohamed Zayaan S
arxiv.org/abs/2509.06705

@mgorny@social.treehouse.systems
2025-07-05 15:24:22

A while ago, I followed the example given by #Fedora and unbundled ensurepip wheels from #Python in #Gentoo (just checked — "a while ago" was 3 years ago). This had the important advantage that it enabled us to update these wheels along with the actual pip and setuptools packages, meaning new virtual environments would get fresh versions rather than whatever CPython happened to bundle at the time of release.
I had considered using our system packages to prepare these wheels, but since we were already unbundling dependencies back then, that couldn't work. So I just went with fetching upstream wheels from PyPI. Why not build them from source instead? Well, besides feeling unnecessary (it's not like the PyPI wheels are actually binary packages), we probably didn't have the right kind of eclass support for that at the time.
Inspired by @…, today I've tried preparing new revisions of ensurepip packages that actually do build everything from source. So what changed, and why should building from source matter now? Firstly, as part of the wheel reuse patches, we do have a reasonably clean architecture to grab the wheels created as part of the PEP 517 build. Secondly, since we're unbundling dependencies from pip and setuptools, we're effectively testing different packages than those installed as ensurepip wheels — and so it would be meaningful to test both variants. Thirdly, building from source is going to make patching easier, and at the very least enable user patching.
While at it, I've refreshed the test suite runs in all three regular packages (pip, setuptools and wheel — we need an "ensurepip" wheel for the last because of test suites). And of course, I hit some test failures in testing the versions with bundled dependencies, and I've discovered a random bug in #PyPy.
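For context on the "whatever CPython happened to bundle" point above: the pip wheel that ensurepip would install into a fresh virtual environment can be inspected directly from the standard library. A minimal check (stdlib only; output varies by CPython release):

```python
import ensurepip

# ensurepip ships a pinned pip wheel with each CPython release;
# version() reports which pip a newly created venv would receive
# by default, independent of any newer pip installed system-wide.
print(ensurepip.version())
```

This is exactly the version that unbundling decouples from the interpreter's release cycle.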
github.com/gentoo/gentoo/pull/ (yes, we haven't moved yet)
github.com/pypy/pypy/issues/53

@arXiv_quantph_bot@mastoxiv.page
2025-06-26 10:07:30

Continuous operation of a coherent 3,000-qubit system
Neng-Chun Chiu, Elias C. Trapp, Jinen Guo, Mohamed H. Abobeih, Luke M. Stewart, Simon Hollerith, Pavel Stroganov, Marcin Kalinowski, Alexandra A. Geim, Simon J. Evered, Sophie H. Li, Lisa M. Peters, Dolev Bluvstein, Tout T. Wang, Markus Greiner, Vladan Vuletić, Mikhail D. Lukin

@arXiv_csAI_bot@mastoxiv.page
2025-06-24 12:01:30

jina-embeddings-v4: Universal Embeddings for Multimodal Multilingual Retrieval
Michael Günther, Saba Sturua, Mohammad Kalim Akram, Isabelle Mohr, Andrei Ungureanu, Sedigheh Eslami, Scott Martens, Bo Wang, Nan Wang, Han Xiao
arxiv.org/abs/2506.18902

@arXiv_csCL_bot@mastoxiv.page
2025-06-27 09:56:19

Domain Knowledge-Enhanced LLMs for Fraud and Concept Drift Detection
Ali Şenol, Garima Agrawal, Huan Liu
arxiv.org/abs/2506.21443 arxiv.org/pdf/2506.21443 arxiv.org/html/2506.21443
arXiv:2506.21443v1 Announce Type: new
Abstract: Detecting deceptive conversations on dynamic platforms is increasingly difficult due to evolving language patterns and Concept Drift (CD), i.e., semantic or topical shifts that alter the context or intent of interactions over time. These shifts can obscure malicious intent or mimic normal dialogue, making accurate classification challenging. While Large Language Models (LLMs) show strong performance in natural language tasks, they often struggle with contextual ambiguity and hallucinations in risk-sensitive scenarios. To address these challenges, we present a Domain Knowledge (DK)-Enhanced LLM framework that integrates pretrained LLMs with structured, task-specific insights to perform fraud and concept drift detection. The proposed architecture consists of three main components: (1) a DK-LLM module to detect fake or deceptive conversations; (2) a drift detection unit (OCDD) to determine whether a semantic shift has occurred; and (3) a second DK-LLM module to classify the drift as either benign or fraudulent. We first validate the value of domain knowledge using a fake review dataset and then apply our full framework to SEConvo, a multiturn dialogue dataset that includes various types of fraud and spam attacks. Results show that our system detects fake conversations with high accuracy and effectively classifies the nature of drift. Guided by structured prompts, the LLaMA-based implementation achieves 98% classification accuracy. Comparative studies against zero-shot baselines demonstrate that incorporating domain knowledge and drift awareness significantly improves performance, interpretability, and robustness in high-stakes NLP applications.
toXiv_bot_toot
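The three-stage architecture in the abstract (deception detection, drift detection, drift classification) can be sketched as a simple pipeline. This is an illustrative stand-in only: the function names and the stub heuristics below are hypothetical placeholders, not the authors' actual DK-LLM or OCDD implementations.

```python
# Hypothetical sketch of the three-stage fraud/drift pipeline from the
# abstract. Each stage is stubbed with a toy heuristic standing in for
# the real DK-LLM / OCDD components.

def dk_llm_detect(conversation: str) -> bool:
    """Stage 1: first DK-LLM module flags fake/deceptive conversations."""
    return "free prize" in conversation.lower()

def ocdd_drift(history: list[str], current: str) -> bool:
    """Stage 2: OCDD-style unit decides whether a semantic shift occurred
    (stub: no word overlap with the previous turn counts as drift)."""
    if not history:
        return False
    return not set(current.lower().split()) & set(history[-1].lower().split())

def dk_llm_classify_drift(current: str) -> str:
    """Stage 3: second DK-LLM module labels a detected drift."""
    return "fraudulent" if "password" in current.lower() else "benign"

def pipeline(history: list[str], current: str) -> dict:
    drift = ocdd_drift(history, current)
    return {
        "deceptive": dk_llm_detect(current),
        "drift": drift,
        # Stage 3 only runs when stage 2 reports a shift.
        "drift_label": dk_llm_classify_drift(current) if drift else None,
    }

print(pipeline(["how was your day"], "send password now"))
```

The gating of stage 3 on stage 2's output mirrors the paper's structure: drift is first detected, then separately classified as benign or fraudulent.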

@arXiv_csCV_bot@mastoxiv.page
2025-08-06 10:43:00

RadProPoser: A Framework for Human Pose Estimation with Uncertainty Quantification from Raw Radar Data
Jonas Leo Mueller, Lukas Engel, Eva Dorschky, Daniel Krauss, Ingrid Ullmann, Martin Vossiek, Bjoern M. Eskofier
arxiv.org/abs/2508.03578

@arXiv_csCL_bot@mastoxiv.page
2025-07-22 12:23:40

Supernova: Achieving More with Less in Transformer Architectures
Andrei-Valentin Tanase, Elena Pelican
arxiv.org/abs/2507.15773