Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@Techmeme@techhub.social
2025-12-18 12:05:57

UK AI Security Institute report: AI models are rapidly improving at potentially dangerous biological and chemical tasks, and show fast jumps in self-replication (Shakeel Hashim/Transformer)
transformernews.ai/p/aisi-ai-s

@arXiv_csCL_bot@mastoxiv.page
2025-10-15 10:27:41

Credal Transformer: A Principled Approach for Quantifying and Mitigating Hallucinations in Large Language Models
Shihao Ji, Zihui Song, Jiajie Huang
arxiv.org/abs/2510.12137

@arXiv_csCV_bot@mastoxiv.page
2025-10-15 10:49:01

Hybrid Explanation-Guided Learning for Transformer-Based Chest X-Ray Diagnosis
Shelley Zixin Shu, Haozhe Luo, Alexander Poellinger, Mauricio Reyes
arxiv.org/abs/2510.12704

@arXiv_quantph_bot@mastoxiv.page
2025-10-15 10:19:41

Hybrid Vision Transformer and Quantum Convolutional Neural Network for Image Classification
Mingzhu Wang, Yun Shang
arxiv.org/abs/2510.12291

@arXiv_csSD_bot@mastoxiv.page
2025-10-15 08:56:42

Audio Palette: A Diffusion Transformer with Multi-Signal Conditioning for Controllable Foley Synthesis
Junnuo Wang
arxiv.org/abs/2510.12175

@arXiv_csSE_bot@mastoxiv.page
2025-10-14 10:48:28

Software Defect Prediction using Autoencoder Transformer Model
Seshu Barma, Mohanakrishnan Hariharan, Satish Arvapalli
arxiv.org/abs/2510.10840

@arXiv_csCE_bot@mastoxiv.page
2025-10-14 07:55:44

GrifFinNet: A Graph-Relation Integrated Transformer for Financial Predictions
Chenlanhui Dai, Wenyan Wang, Yusi Fan, Yueying Wang, Lan Huang, Kewei Li, Fengfeng Zhou
arxiv.org/abs/2510.10387

@Techmeme@techhub.social
2025-12-15 14:06:48

Nvidia launches Nemotron 3, a family of AI models using a hybrid mixture-of-experts architecture and the Mamba-Transformer design, in 30B, 100B, and ~500B sizes (Emilia David/VentureBeat)
venturebeat.com/ai/nvidia-debu

@arXiv_statML_bot@mastoxiv.page
2025-10-13 09:03:40

Efficient Autoregressive Inference for Transformer Probabilistic Models
Conor Hassan, Nasrulloh Loka, Cen-You Li, Daolang Huang, Paul E. Chang, Yang Yang, Francesco Silvestrin, Samuel Kaski, Luigi Acerbi
arxiv.org/abs/2510.09477

@arXiv_csLG_bot@mastoxiv.page
2025-10-09 10:37:31

From Condensation to Rank Collapse: A Two-Stage Analysis of Transformer Training Dynamics
Zheng-An Chen, Tao Luo
arxiv.org/abs/2510.06954

@burger_jaap@mastodon.social
2025-11-14 14:25:10

"Für jede Trafostation eine LEG" [‘A local energy community for every transformer station’]
It is fascinating how local energy communities are encouraged in Switzerland, which is not governed by the EU law that creates this role elsewhere, and how the (local) utilities are actively involved as well.

@arXiv_csAI_bot@mastoxiv.page
2025-10-15 12:19:51

Crosslisted article(s) found for cs.AI. arxiv.org/list/cs.AI/new
[6/6]:
- Hybrid Explanation-Guided Learning for Transformer-Based Chest X-Ray Diagnosis
Shelley Zixin Shu, Haozhe Luo, Alexander Poellinger, Mauricio Reyes

@arXiv_eessAS_bot@mastoxiv.page
2025-10-14 09:01:48

ILD-VIT: A Unified Vision Transformer Architecture for Detection of Interstitial Lung Disease from Respiratory Sounds
Soubhagya Ranjan Hota, Arka Roy, Udit Satija
arxiv.org/abs/2510.11458

@arXiv_csRO_bot@mastoxiv.page
2025-10-08 08:11:29

VER: Vision Expert Transformer for Robot Learning via Foundation Distillation and Dynamic Routing
Yixiao Wang, Mingxiao Huo, Zhixuan Liang, Yushi Du, Lingfeng Sun, Haotian Lin, Jinghuan Shang, Chensheng Peng, Mohit Bansal, Mingyu Ding, Masayoshi Tomizuka
arxiv.org/abs/2510.05213

@arXiv_csCL_bot@mastoxiv.page
2025-10-14 13:12:58

An Encoder-Integrated PhoBERT with Graph Attention for Vietnamese Token-Level Classification
Ba-Quang Nguyen
arxiv.org/abs/2510.11537

@arXiv_hepph_bot@mastoxiv.page
2025-10-09 09:47:21

Latent Representation Learning in Heavy-Ion Collisions with MaskPoint Transformer
Jing-Zong Zhang, Shuang Guo, Li-Lin Zhu, Lingxiao Wang, Guo-Liang Ma
arxiv.org/abs/2510.06691

@arXiv_eessIV_bot@mastoxiv.page
2025-10-08 08:34:29

A Scalable AI Driven, IoT Integrated Cognitive Digital Twin for Multi-Modal Neuro-Oncological Prognostics and Tumor Kinetics Prediction using Enhanced Vision Transformer and XAI
Saptarshi Banerjee, Himadri Nath Saha, Utsho Banerjee, Rajarshi Karmakar, Jon Turdiev
arxiv.org/abs/2510.05123

@arXiv_csSD_bot@mastoxiv.page
2025-10-13 08:08:00

LadderSym: A Multimodal Interleaved Transformer for Music Practice Error Detection
Benjamin Shiue-Hal Chou, Purvish Jajal, Nick John Eliopoulos, James C. Davis, George K. Thiruvathukal, Kristen Yeon-Ji Yun, Yung-Hsiang Lu
arxiv.org/abs/2510.08580

@arXiv_hepex_bot@mastoxiv.page
2025-10-10 09:21:29

Locality-Sensitive Hashing-Based Efficient Point Transformer for Charged Particle Reconstruction
Shitij Govil, Jack P. Rodgers, Yuan-Tang Chou, Siqi Miao, Amit Saha, Advaith Anand, Kilian Lieret, Gage DeZoort, Mia Liu, Javier Duarte, Pan Li, Shih-Chieh Hsu
arxiv.org/abs/2510.07594

@arXiv_csCE_bot@mastoxiv.page
2025-10-14 09:46:38

LRQ-Solver: A Transformer-Based Neural Operator for Fast and Accurate Solving of Large-scale 3D PDEs
Peijian Zeng, Guan Wang, Haohao Gu, Xiaoguang Hu, Tiezhu Gao, Zhuowei Wang, Aimin Yang, Xiaoyu Song
arxiv.org/abs/2510.11636

@arXiv_condmatmtrlsci_bot@mastoxiv.page
2025-10-15 10:08:21

Self-attention enabled quantum path analysis of high-harmonic generation in solids
Cong Zhao, Xiaozhou Zou
arxiv.org/abs/2510.12443

@arXiv_csLG_bot@mastoxiv.page
2025-10-09 10:46:51

HTMformer: Hybrid Time and Multivariate Transformer for Time Series Forecasting
Tan Wang, Yun Wei Dong, Tao Zhang, Qi Wang
arxiv.org/abs/2510.07084

@arXiv_csIT_bot@mastoxiv.page
2025-09-22 08:37:11

Interplay Between Belief Propagation and Transformer: Differential-Attention Message Passing Transformer
Chin Wa Lau, Xiang Shi, Ziyan Zheng, Haiwen Cao, Nian Guo
arxiv.org/abs/2509.15637

@Techmeme@techhub.social
2025-12-11 21:21:08

Sources: the NY governor proposes a rewrite of the RAISE Act, the AI bill that recently passed NY legislature, with text copied verbatim from California's SB 53 (Shakeel Hashim/Transformer)
transformernews.ai/p/new-york-

@arXiv_csCV_bot@mastoxiv.page
2025-10-15 10:53:41

What If: Understanding Motion Through Sparse Interactions
Stefan Andreas Baumann, Nick Stracke, Timy Phan, Björn Ommer
arxiv.org/abs/2510.12777

@arXiv_csET_bot@mastoxiv.page
2025-10-03 07:35:50

ENLighten: Lighten the Transformer, Enable Efficient Optical Acceleration
Hanqing Zhu, Zhican Zhou, Shupeng Ning, Xuhao Wu, Ray Chen, Yating Wan, David Pan
arxiv.org/abs/2510.01673

@arXiv_csSD_bot@mastoxiv.page
2025-10-13 08:44:00

DiTSinger: Scaling Singing Voice Synthesis with Diffusion Transformer and Implicit Alignment
Zongcai Du, Guilin Deng, Xiaofeng Guo, Xin Gao, Linke Li, Kaichang Cheng, Fubo Han, Siyu Yang, Peng Liu, Pan Zhong, Qiang Fu
arxiv.org/abs/2510.09016

@arXiv_eessSP_bot@mastoxiv.page
2025-10-13 11:26:22

Crosslisted article(s) found for eess.SP. arxiv.org/list/eess.SP/new
[1/1]:
- Soft Graph Transformer for MIMO Detection
Jiadong Hong, Lei Liu, Xinyu Bian, Wenjie Wang, Zhaoyang Zhang

@arXiv_qbioNC_bot@mastoxiv.page
2025-10-07 09:06:32

Atlas-free Brain Network Transformer
Shuai Huang, Xuan Kan, James J. Lah, Deqiang Qiu
arxiv.org/abs/2510.03306 arxiv.org/pdf/2510.03306

@arXiv_csCR_bot@mastoxiv.page
2025-09-26 09:52:31

Dual-Path Phishing Detection: Integrating Transformer-Based NLP with Structural URL Analysis
Ibrahim Altan, Abdulla Bachir, Yousuf Parbhulkar, Abdul Muksith Rizvi, Moshiur Farazi
arxiv.org/abs/2509.20972

@m0les@aus.social
2025-11-03 02:21:12

Pretty sure the power supply in this switch is fried. Possibly a short in the coil/transformer next to the hot power regulator.
$8 to replace the entire unit, delivered.

An infrared video showing a power transistor oscillating between about 40 and 50 degrees Celsius, with a couple of other 0 ohm resistors flickering between hot and cooler as the transistor flaps in and out of operation.
A visible light photograph of the switch with its power jack and power supply circuitry in the lower left corner of a roughly triangular green circuit board.
A close-up photo of the power supply circuitry featuring many surface-mount components. At the top left is a grey coil or transformer. Below it is a black 5-pin power regulator chip. Just at the left edge is the mounting for the input barrel jack, and next to this are two tiny 0 ohm resistors.

@arXiv_condmatstrel_bot@mastoxiv.page
2025-10-14 10:20:48

Comparing Symmetrized Determinant Neural Quantum States for the Hubbard Model
Louis Sharma, Ahmedeo Shokry, Rajah Nutakki, Olivier Simard, Michel Ferrero, Filippo Vicentini
arxiv.org/abs/2510.11710

@arXiv_csCL_bot@mastoxiv.page
2025-10-09 10:18:01

A Comparative Analysis of Contextual Representation Flow in State-Space and Transformer Architectures
Nhat M. Hoang, Do Xuan Long, Cong-Duy Nguyen, Min-Yen Kan, Luu Anh Tuan
arxiv.org/abs/2510.06640

@arXiv_csNE_bot@mastoxiv.page
2025-09-29 07:45:47

From Embeddings to Equations: Genetic-Programming Surrogates for Interpretable Transformer Classification
Mohammad Sadegh Khorshidi, Navid Yazdanjue, Hassan Gharoun, Mohammad Reza Nikoo, Fang Chen, Amir H. Gandomi
arxiv.org/abs/2509.21341

@arXiv_eessIV_bot@mastoxiv.page
2025-10-02 08:59:11

Variable Rate Image Compression via N-Gram Context based Swin-transformer
Priyanka Mudgal, Feng Liu
arxiv.org/abs/2510.00058

@arXiv_csAI_bot@mastoxiv.page
2025-10-01 11:39:27

Transformer Classification of Breast Lesions: The BreastDCEDL_AMBL Benchmark Dataset and 0.92 AUC Baseline
Naomi Fridman (Ariel University), Anat Goldstein (Ariel University)
arxiv.org/abs/2509.26440

@Techmeme@techhub.social
2025-12-08 07:01:24

How Pathway, a startup developing an alternative to the transformer, aims to use its Dragon Hatchling architecture to create a new class of adaptive AI systems (Steven Rosenbush/Wall Street Journal)
wsj.com/articles/an-ai…

@arXiv_csCV_bot@mastoxiv.page
2025-10-03 10:09:01

PyramidStyler: Transformer-Based Neural Style Transfer with Pyramidal Positional Encoding and Reinforcement Learning
Raahul Krishna Durairaju (California State University, Fullerton), K. Saruladha (Puducherry Technological University)
arxiv.org/abs/2510.01715

@tante@tldr.nettime.org
2025-11-27 09:20:05

This is so much "AI" reporting: claims about potentials and/or threats. I'd just like to have grown-up conversations about tech again :(
"The actual current user base for evil chatbots is the cyber security vendors, who scaremonger how only their good AI can possibly stop this automated hacker evil!"
(Original title: AI for evil — hacked by WormGPT!)

@arXiv_csLG_bot@mastoxiv.page
2025-10-06 10:25:29

Signature-Informed Transformer for Asset Allocation
Yoontae Hwang, Stefan Zohren
arxiv.org/abs/2510.03129 arxiv.org/pdf/2510.03129

@arXiv_csDB_bot@mastoxiv.page
2025-10-01 08:54:37

PAT: Pattern-Perceptive Transformer for Error Detection in Relational Databases
Jian Fu, Xixian Han, Xiaolong Wan, Wenjian Wang
arxiv.org/abs/2509.25907

@arXiv_astrophCO_bot@mastoxiv.page
2025-10-02 09:56:01

CosmoUiT: A Vision Transformer-UNet Hybrid for Fast and Accurate Emulation of 21-cm Maps from the Epoch of Reionization
Prasad Rajesh Posture, Yashrajsinh Mahida, Suman Majumdar, Leon Noble
arxiv.org/abs/2510.01121

@arXiv_qbioQM_bot@mastoxiv.page
2025-09-26 08:19:21

cAItomorph: Transformer-Based Hematological Malignancy Prediction from Peripheral Blood Smears in a Real-World Cohort
Muhammed Furkan Dasdelen, Ivan Kukuljan, Peter Lienemann, Ario Sadafi, Matthias Hehr, Karsten Spiekermann, Christian Pohlkamp, Carsten Marr
arxiv.org/abs/2509.20402

@burger_jaap@mastodon.social
2025-12-10 10:24:28

#OTD 2013: A time when Tesla still meant progress. One of the first Tesla Superchargers in Europe, in Zevenaar 🇳🇱. Until then, #EV fast chargers were often single devices, but this was one of the first hubs.

Five white charging dispensers, a dark Tesla Model S next to one of them. On the left side of the image, there is also a transformer box and other technical cabinets.
Type 2 charging connector.
White charging dispenser, with TESLA in red letters.

@arXiv_csCL_bot@mastoxiv.page
2025-10-15 10:47:51

Dr.LLM: Dynamic Layer Routing in LLMs
Ahmed Heakl, Martin Gubri, Salman Khan, Sangdoo Yun, Seong Joon Oh
arxiv.org/abs/2510.12773

@arXiv_csSE_bot@mastoxiv.page
2025-09-30 09:47:31

Influence-Guided Concolic Testing of Transformer Robustness
Chih-Duo Hong, Yu Wang, Yao-Chen Chang, Fang Yu
arxiv.org/abs/2509.23806

@arXiv_csIT_bot@mastoxiv.page
2025-10-13 11:37:11

Crosslisted article(s) found for cs.IT. arxiv.org/list/cs.IT/new
[1/1]:
- Soft Graph Transformer for MIMO Detection
Jiadong Hong, Lei Liu, Xinyu Bian, Wenjie Wang, Zhaoyang Zhang

@arXiv_csCV_bot@mastoxiv.page
2025-10-10 11:02:19

Hyperspectral data augmentation with transformer-based diffusion models
Mattia Ferrari, Lorenzo Bruzzone
arxiv.org/abs/2510.08363

@arXiv_csLG_bot@mastoxiv.page
2025-10-06 10:23:39

Lightweight Transformer for EEG Classification via Balanced Signed Graph Algorithm Unrolling
Junyi Yao, Parham Eftekhar, Gene Cheung, Xujin Chris Liu, Yao Wang, Wei Hu
arxiv.org/abs/2510.03027

@arXiv_eessSP_bot@mastoxiv.page
2025-09-30 10:52:51

BladderFormer: A Streaming Transformer for Real-Time Urological State Monitoring
Chengwei Zhou, Steve Majerus, Gourav Datta
arxiv.org/abs/2509.24178

@arXiv_csRO_bot@mastoxiv.page
2025-09-23 11:31:20

MAST: Multi-Agent Spatial Transformer for Learning to Collaborate
Damian Owerko, Frederic Vatnsdal, Saurav Agarwal, Vijay Kumar, Alejandro Ribeiro
arxiv.org/abs/2509.17195

@arXiv_hepex_bot@mastoxiv.page
2025-10-01 09:56:18

TrackCore-F: Deploying Transformer-Based Subatomic Particle Tracking on FPGAs
Arjan Blankestijn, Uraz Odyurt, Amirreza Yousefzadeh
arxiv.org/abs/2509.26335

@arXiv_csCR_bot@mastoxiv.page
2025-10-10 09:42:39

New Machine Learning Approaches for Intrusion Detection in ADS-B
Mikaëla Ngamboé, Jean-Simon Marrocco, Jean-Yves Ouattara, José M. Fernandez, Gabriela Nicolescu
arxiv.org/abs/2510.08333

@arXiv_csCL_bot@mastoxiv.page
2025-10-14 13:16:18

Deconstructing Attention: Investigating Design Principles for Effective Language Modeling
Huiyin Xue, Nafise Sadat Moosavi, Nikolaos Aletras
arxiv.org/abs/2510.11602

@arXiv_csCV_bot@mastoxiv.page
2025-10-14 22:04:25

Replaced article(s) found for cs.CV. arxiv.org/list/cs.CV/new
[5/8]:
- Context Guided Transformer Entropy Modeling for Video Compression
Junlong Tong, Wei Zhang, Yaohui Jin, Xiaoyu Shen

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 22:18:51

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[9/14]:
- Hyper-STTN: Hypergraph Augmented Spatial-Temporal Transformer Network for Trajectory Prediction
Weizheng Wang, Baijian Yang, Sungeun Hong, Wenhai Sun, Byung-Cheol Min

@arXiv_eessIV_bot@mastoxiv.page
2025-10-13 11:31:24

Crosslisted article(s) found for eess.IV. arxiv.org/list/eess.IV/new
[1/1]:
- 3D Reconstruction from Transient Measurements with Time-Resolved Transformer
Yue Li, Shida Sun, Yu Hong, Feihu Xu, Zhiwei Xiong

@arXiv_csAI_bot@mastoxiv.page
2025-10-10 16:36:53

Replaced article(s) found for cs.AI. arxiv.org/list/cs.AI/new
[2/9]:
- Spatial-Functional awareness Transformer-based graph archetype contrastive learning for Decoding ...
Yueming Sun, Long Yang

@arXiv_csCE_bot@mastoxiv.page
2025-10-07 07:32:44

Lightweight and Data-Efficient Multivariate Time Series Forecasting using Residual-Stacked Gaussian (RS-GLinear) Architecture
Abukar Ali
arxiv.org/abs/2510.03788

@arXiv_csLG_bot@mastoxiv.page
2025-09-29 11:34:57

IIET: Efficient Numerical Transformer via Implicit Iterative Euler Method
Xinyu Liu, Bei Li, Jiahao Liu, Junhao Ruan, Kechen Jiao, Hongyin Tang, Jingang Wang, Xiao Tong, Jingbo Zhu
arxiv.org/abs/2509.22463

@arXiv_csCV_bot@mastoxiv.page
2025-10-13 10:32:30

Utilizing dynamic sparsity on pretrained DETR
Reza Sedghi, Anand Subramoney, David Kappel
arxiv.org/abs/2510.09380

@arXiv_csSE_bot@mastoxiv.page
2025-10-03 09:10:51

Towards fairer public transit: Real-time tensor-based multimodal fare evasion and fraud detection
Peter Wauyo, Dalia Bwiza, Alain Murara, Edwin Mugume, Eric Umuhoza
arxiv.org/abs/2510.02165

@arXiv_csSD_bot@mastoxiv.page
2025-09-24 09:17:34

Scattering Transformer: A Training-Free Transformer Architecture for Heart Murmur Detection
Rami Zewail
arxiv.org/abs/2509.18424

@arXiv_hepex_bot@mastoxiv.page
2025-10-01 10:00:07

TrackFormers Part 2: Enhanced Transformer-Based Models for High-Energy Physics Track Reconstruction
Sascha Caron, Nadezhda Dobreva, Maarten Kimpel, Uraz Odyurt, Slav Pshenov, Roberto Ruiz de Austri Bazan, Eugene Shalugin, Zef Wolffs, Yue Zhao
arxiv.org/abs/2509.26411

@arXiv_csCL_bot@mastoxiv.page
2025-10-13 10:38:30

Accent-Invariant Automatic Speech Recognition via Saliency-Driven Spectrogram Masking
Mohammad Hossein Sameti, Sepehr Harfi Moridani, Ali Zarean, Hossein Sameti
arxiv.org/abs/2510.09528

@arXiv_eessSP_bot@mastoxiv.page
2025-10-01 09:18:47

Transformer-Based Rate Prediction for Multi-Band Cellular Handsets
Ruibin Chen, Haozhe Lei, Hao Guo, Marco Mezzavilla, Hitesh Poddar, Tomoki Yoshimura, Sundeep Rangan
arxiv.org/abs/2509.25722

@Techmeme@techhub.social
2025-12-05 04:45:50

Google debuts Titans, an architecture combining RNN speed with transformer performance for real-time learning, able to scale effectively to a 2M context window (Google Research)
research.google/blog/titans-mi

@arXiv_csCV_bot@mastoxiv.page
2025-10-07 12:37:52

DiT-VTON: Diffusion Transformer Framework for Unified Multi-Category Virtual Try-On and Virtual Try-All with Integrated Image Editing
Qi Li, Shuwen Qiu, Julien Han, Xingzi Xu, Mehmet Saygin Seyfioglu, Kee Kiat Koo, Karim Bouyarmane
arxiv.org/abs/2510.04797

@arXiv_csCE_bot@mastoxiv.page
2025-09-30 07:36:11

A Hybrid DNN Transformer AE Framework for Corporate Tax Risk Supervision and Risk Level Assessment
Zhenzhen Song, Nanxi Wang, Hongji Li
arxiv.org/abs/2509.23862

@arXiv_csCL_bot@mastoxiv.page
2025-10-07 12:17:42

AWARE, Beyond Sentence Boundaries: A Contextual Transformer Framework for Identifying Cultural Capital in STEM Narratives
Khalid Mehtab Khan, Anagha Kulkarni
arxiv.org/abs/2510.04983

@arXiv_csLG_bot@mastoxiv.page
2025-09-25 10:38:22

Pi-Transformer: A Physics-informed Attention Mechanism for Time Series Anomaly Detection
Sepehr Maleki, Negar Pourmoazemi
arxiv.org/abs/2509.19985

@arXiv_eessIV_bot@mastoxiv.page
2025-10-01 10:17:48

GastroViT: A Vision Transformer Based Ensemble Learning Approach for Gastrointestinal Disease Classification with Grad CAM & SHAP Visualization
Sumaiya Tabassum, Md. Faysal Ahamed, Hafsa Binte Kibria, Md. Nahiduzzaman, Julfikar Haider, Muhammad E. H. Chowdhury, Mohammad Tariqul Islam
arxiv.org/abs/2509.26502

@arXiv_csLG_bot@mastoxiv.page
2025-10-13 10:42:20

The Potential of Second-Order Optimization for LLMs: A Study with Full Gauss-Newton
Natalie Abreu, Nikhil Vyas, Sham Kakade, Depen Morwani
arxiv.org/abs/2510.09378

@arXiv_csAI_bot@mastoxiv.page
2025-10-07 20:39:55

Replaced article(s) found for cs.AI. arxiv.org/list/cs.AI/new
[6/13]:
- LIAM: Multimodal Transformer for Language Instructions, Images, Actions and Semantic Maps
Yihao Wang, Raphael Memmesheimer, Sven Behnke

@arXiv_csCL_bot@mastoxiv.page
2025-10-13 10:36:30

Domain-Adapted Pre-trained Language Models for Implicit Information Extraction in Crash Narratives
Xixi Wang, Jordanka Kovaceva, Miguel Costa, Shuai Wang, Francisco Camara Pereira, Robert Thomson
arxiv.org/abs/2510.09434

@arXiv_csCV_bot@mastoxiv.page
2025-09-29 11:16:57

LucidFlux: Caption-Free Universal Image Restoration via a Large-Scale Diffusion Transformer
Song Fei, Tian Ye, Lujia Wang, Lei Zhu
arxiv.org/abs/2509.22414

@arXiv_csSD_bot@mastoxiv.page
2025-10-03 08:19:51

HRTFformer: A Spatially-Aware Transformer for Personalized HRTF Upsampling in Immersive Audio Rendering
Xuyi Hu, Jian Li, Shaojie Zhang, Stefan Goetz, Lorenzo Picinali, Ozgur B. Akan, Aidan O. T. Hogg
arxiv.org/abs/2510.01891

@Techmeme@techhub.social
2025-11-04 00:26:38

Augmented Intelligence, which is building neuro-symbolic AI models, raised $20M in a bridge SAFE round at a $750M valuation, bringing its total funding to ~$60M (Carl Franzen/VentureBeat)
venturebeat.com/ai/the-beginni

@arXiv_csLG_bot@mastoxiv.page
2025-09-23 12:45:00

Conv-like Scale-Fusion Time Series Transformer: A Multi-Scale Representation for Variable-Length Long Time Series
Kai Zhang, Siming Sun, Zhengyu Fan, Qinmin Yang, Xuejun Jiang
arxiv.org/abs/2509.17845

@Techmeme@techhub.social
2025-10-03 02:05:50

IBM releases Granite 4.0, an open source "enterprise-ready" LLM family with a hybrid architecture, claiming it uses significantly less RAM than traditional LLMs (Carl Franzen/VentureBeat)
venturebeat.com/ai/western-qwe

@arXiv_csCL_bot@mastoxiv.page
2025-10-03 10:55:01

Enhanced Arabic-language cyberbullying detection: deep embedding and transformer (BERT) approaches
Ebtesam Jaber Aljohani, Wael M. S. Yafoo
arxiv.org/abs/2510.02232

@arXiv_csCV_bot@mastoxiv.page
2025-10-01 11:52:17

HART: Human Aligned Reconstruction Transformer
Xiyi Chen, Shaofei Wang, Marko Mihajlovic, Taewon Kang, Sergey Prokudin, Ming Lin
arxiv.org/abs/2509.26621

@arXiv_csCL_bot@mastoxiv.page
2025-09-25 10:36:42

SINAI at eRisk@CLEF 2025: Transformer-Based and Conversational Strategies for Depression Detection
Alba Maria Marmol-Romero, Manuel Garcia-Vega, Miguel Angel Garcia-Cumbreras, Arturo Montejo-Raez
arxiv.org/abs/2509.19861

@arXiv_csLG_bot@mastoxiv.page
2025-10-10 16:44:16

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[6/8]:
- Rethinking Decoders for Transformer-based Semantic Segmentation: A Compression Perspective
Qishuai Wen, Chun-Guang Li

@arXiv_csCV_bot@mastoxiv.page
2025-10-10 11:15:19

To Sink or Not to Sink: Visual Information Pathways in Large Vision-Language Models
Jiayun Luo, Wan-Cyuan Fan, Lyuyang Wang, Xiangteng He, Tanzila Rahman, Purang Abolmaesumi, Leonid Sigal
arxiv.org/abs/2510.08510

@arXiv_csCL_bot@mastoxiv.page
2025-09-23 12:54:41

Transformer-Encoder Trees for Efficient Multilingual Machine Translation and Speech Translation
Yiwen Guan, Jacob Whitehill
arxiv.org/abs/2509.17930

@arXiv_csCV_bot@mastoxiv.page
2025-09-23 13:11:21

GraDeT-HTR: A Resource-Efficient Bengali Handwritten Text Recognition System utilizing Grapheme-based Tokenizer and Decoder-only Transformer
Md. Mahmudul Hasan, Ahmed Nesar Tahsin Choudhury, Mahmudul Hasan, Md. Mosaddek Khan
arxiv.org/abs/2509.18081

@arXiv_csLG_bot@mastoxiv.page
2025-10-09 10:37:21

Grouped Differential Attention
Junghwan Lim, Sungmin Lee, Dongseok Kim, Wai Ting Cheung, Beomgyu Kim, Taehwan Kim, Haesol Lee, Junhyeok Lee, Dongpin Oh, Eunhwan Park
arxiv.org/abs/2510.06949

@arXiv_csCV_bot@mastoxiv.page
2025-10-10 11:14:29

AI-Driven Radiology Report Generation for Traumatic Brain Injuries
Riadh Bouslimi, Houda Trabelsi, Wahiba Ben Abdssalem Karaa, Hana Hedhli
arxiv.org/abs/2510.08498

@arXiv_csLG_bot@mastoxiv.page
2025-10-09 10:51:11

ELMUR: External Layer Memory with Update/Rewrite for Long-Horizon RL
Egor Cherepanov, Alexey K. Kovalev, Aleksandr I. Panov
arxiv.org/abs/2510.07151

@arXiv_csCV_bot@mastoxiv.page
2025-10-09 10:19:01

Heptapod: Language Modeling on Visual Signals
Yongxin Zhu, Jiawei Chen, Yuanzhe Chen, Zhuo Chen, Dongya Jia, Jian Cong, Xiaobin Zhuang, Yuping Wang, Yuxuan Wang
arxiv.org/abs/2510.06673

@arXiv_csCV_bot@mastoxiv.page
2025-10-09 10:38:11

Bayesian Modelling of Multi-Year Crop Type Classification Using Deep Neural Networks and Hidden Markov Models
Gianmarco Perantoni, Giulio Weikmann, Lorenzo Bruzzone
arxiv.org/abs/2510.07008

@arXiv_csLG_bot@mastoxiv.page
2025-09-23 12:47:30

Optimizing Inference in Transformer-Based Models: A Multi-Method Benchmark
Siu Hang Ho, Prasad Ganesan, Nguyen Duong, Daniel Schlabig
arxiv.org/abs/2509.17894

@arXiv_csCV_bot@mastoxiv.page
2025-09-26 10:26:21

Quantized Visual Geometry Grounded Transformer
Weilun Feng, Haotong Qin, Mingqiang Wu, Chuanguang Yang, Yuqi Li, Xiangqi Li, Zhulin An, Libo Huang, Yulun Zhang, Michele Magno, Yongjun Xu
arxiv.org/abs/2509.21302

@arXiv_csCV_bot@mastoxiv.page
2025-10-09 10:31:31

Lung Infection Severity Prediction Using Transformers with Conditional TransMix Augmentation and Cross-Attention
Bouthaina Slika, Fadi Dornaika, Fares Bougourzi, Karim Hammoudi
arxiv.org/abs/2510.06887

@arXiv_csLG_bot@mastoxiv.page
2025-10-07 16:08:50

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/8]:
- ReplaceMe: Network Simplification via Depth Pruning and Transformer Block Linearization
Shopkhoev, Ali, Zhussip, Malykh, Lefkimmiatis, Komodakis, Zagoruyko

@arXiv_csCV_bot@mastoxiv.page
2025-09-25 09:36:02

Enhancing Transformer-Based Vision Models: Addressing Feature Map Anomalies Through Novel Optimization Strategies
Sumit Mamtani
arxiv.org/abs/2509.19687

@arXiv_csCV_bot@mastoxiv.page
2025-09-25 10:36:02

PS3: A Multimodal Transformer Integrating Pathology Reports with Histology Images and Biological Pathways for Cancer Survival Prediction
Manahil Raza, Ayesha Azam, Talha Qaiser, Nasir Rajpoot
arxiv.org/abs/2509.20022

@arXiv_csCV_bot@mastoxiv.page
2025-09-25 10:17:32

EfficienT-HDR: An Efficient Transformer-Based Framework via Multi-Exposure Fusion for HDR Reconstruction
Yu-Shen Huang, Tzu-Han Chen, Cheng-Yen Hsiao, Shaou-Gang Miaou
arxiv.org/abs/2509.19779

@arXiv_csCV_bot@mastoxiv.page
2025-10-07 12:38:32

Visual Representations inside the Language Model
Benlin Liu, Amita Kamath, Madeleine Grunde-McLaughlin, Winson Han, Ranjay Krishna
arxiv.org/abs/2510.04819