Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csLG_bot@mastoxiv.page
2025-10-09 10:37:21

Grouped Differential Attention
Junghwan Lim, Sungmin Lee, Dongseok Kim, Wai Ting Cheung, Beomgyu Kim, Taehwan Kim, Haesol Lee, Junhyeok Lee, Dongpin Oh, Eunhwan Park
arxiv.org/abs/2510.06949

@arXiv_csCL_bot@mastoxiv.page
2025-09-10 09:54:21

Mitigating Attention Localization in Small Scale: Self-Attention Refinement via One-step Belief Propagation
Nakyung Lee, Yeongoon Kim, Minhae Oh, Suhwan Kim, Jin Woo Koo, Hyewon Jo, Jungwoo Lee
arxiv.org/abs/2509.07324

@arXiv_csCV_bot@mastoxiv.page
2025-10-09 10:38:51

DADO: A Depth-Attention framework for Object Discovery
Federico Gonzalez, Estefania Talavera, Petia Radeva
arxiv.org/abs/2510.07089 arxiv.o…

@arXiv_csCL_bot@mastoxiv.page
2025-09-10 09:11:11

Causal Attention with Lookahead Keys
Zhuoqing Song, Peng Sun, Huizhuo Yuan, Quanquan Gu
arxiv.org/abs/2509.07301 arxiv.org/pdf/2509.07301…

@arXiv_csCV_bot@mastoxiv.page
2025-09-09 12:26:52

Cortex-Synth: Differentiable Topology-Aware 3D Skeleton Synthesis with Hierarchical Graph Attention
Mohamed Zayaan S
arxiv.org/abs/2509.06705

@arXiv_csLG_bot@mastoxiv.page
2025-10-10 11:15:29

In-Context Clustering with Large Language Models
Ying Wang, Mengye Ren, Andrew Gordon Wilson
arxiv.org/abs/2510.08466 arxiv.org/pdf/2510.08…

@arXiv_csSD_bot@mastoxiv.page
2025-10-10 08:33:28

Personality-Enhanced Multimodal Depression Detection in the Elderly
Honghong Wang, Jing Deng, Rong Zheng
arxiv.org/abs/2510.08004 arxiv.org…

@arXiv_csMA_bot@mastoxiv.page
2025-09-09 08:15:31

Orchestrator: Active Inference for Multi-Agent Systems in Long-Horizon Tasks
Lukas Beckenbauer, Johannes-Lucas Loewe, Ge Zheng, Alexandra Brintrup
arxiv.org/abs/2509.05651

@arXiv_condmatsuprcon_bot@mastoxiv.page
2025-09-10 08:47:51

Examining density wave correlations in high pressure $\rm{La_3Ni_2O_7}$ through variational Monte Carlo
Yanxin Chen, Haoxiang Chen, Tonghuan Jiang, Ji Chen
arxiv.org/abs/2509.07219

@arXiv_csLG_bot@mastoxiv.page
2025-10-10 11:13:29

Synthetic Series-Symbol Data Generation for Time Series Foundation Models
Wenxuan Wang, Kai Wu, Yujian Betterest Li, Dan Wang, Xiaoyu Zhang
arxiv.org/abs/2510.08445

@arXiv_csCV_bot@mastoxiv.page
2025-08-06 10:44:00

AttZoom: Attention Zoom for Better Visual Features
Daniel DeAlcala, Aythami Morales, Julian Fierrez, Ruben Tolosana
arxiv.org/abs/2508.03625

@arXiv_csAI_bot@mastoxiv.page
2025-10-01 11:45:57

HilbertA: Hilbert Attention for Image Generation with Diffusion Models
Shaoyi Zheng, Wenbo Lu, Yuxuan Xia, Haomin Liu, Shengjie Wang
arxiv.org/abs/2509.26538

@arXiv_astrophEP_bot@mastoxiv.page
2025-09-08 08:49:40

Identifying Exoplanets with Deep Learning: A CNN and RNN Classifier for Kepler DR25 and Candidate Vetting
Bibin Thomas, Vittal Bhat M, Salman Arafath Mohammed, Abdul Wase Mohammed, Adis Abebaw Dessalegn, Mohit Mittal
arxiv.org/abs/2509.04793

@arXiv_csMM_bot@mastoxiv.page
2025-09-08 07:46:20

An Emotion Recognition Framework via Cross-modal Alignment of EEG and Eye Movement Data
Jianlu Wang, Yanan Wang, Tong Liu
arxiv.org/abs/2509.04938

@tiotasram@kolektiva.social
2025-07-28 13:04:34

How popular media gets love wrong
Okay, so what exactly are the details of the "engineered" model of love from my previous post? I'll try to summarize my thoughts and the experiences they're built on.
1. "Love" can be be thought of like a mechanism that's built by two (or more) people. In this case, no single person can build the thing alone, to work it needs contributions from multiple people (I suppose self-love might be an exception to that). In any case, the builders can intentionally choose how they build (and maintain) the mechanism, they can build it differently to suit their particular needs/wants, and they will need to maintain and repair it over time to keep it running. It may need winding, or fuel, or charging plus oil changes and bolt-tightening, etc.
2. Any two (or more) people can choose to start building love between them at any time. No need to "find your soulmate" or "wait for the right person." Now the caveat is that the mechanism is difficult to build and requires lots of cooperation, so there might indeed be "wrong people" to try to build love with. People in general might experience more failures than successes. The key component is slowly-escalating shared commitment to the project, which is negotiated between the partners so that neither one feels like they've been left to do all the work themselves. Since it's a big scary project though, it's very easy to decide it's too hard and give up, and so the builders need to encourage each other and pace themselves. The project can only succeed if there's mutual commitment, and that will certainly require compromise (sometimes even sacrifice, though not always). If the mechanism works well, the benefits (companionship; encouragement; praise; loving sex; hugs; etc.) will be well worth the compromises you make to build it, but this isn't always the case.
3. The mechanism is prone to falling apart if not maintained. In my view, the "fire" and "appeal" models of love don't adequately convey the need for this maintenance and lead to a lot of under-maintained relationships, many of which fall apart. You'll need to do things together that make you happy, do things that make your partner happy (in some cases even if they annoy you, but never in a transactional or box-checking way), spend time with shared attention, spend time alone and/or apart, reassure each other through words (or deeds) of mutual beliefs (especially your continued commitment to the relationship), do things that comfort and/or excite each other physically (anywhere from hugs to hand-holding to sex), and probably other things I'm not thinking of. Not *every* relationship needs *all* of these maintenance techniques, but I think most will need most. Note especially that patriarchy teaches men that they don't need to bother with any of this, which harms primarily their romantic partners but secondarily them as their relationships fail due to their own (cultivated-by-patriarchy) incompetence. If a relationship evolves to a point where one person is doing all the maintenance (& improvement) work, it's been bent into a shape that no longer really qualifies as "love" in my book, and that's super unhealthy.
4. The key things to negotiate when trying to build a new love are, first, how to work together in the first place, and how to be comfortable around each other's habits (or how to change those habits). Second, what level of commitment you have right now, and how/when you want to increase that commitment. Additionally, I think it's worth checking in about what you're each putting into and getting out of the relationship, to ensure that it continues to be positive for all participants. To build a successful relationship, you need to be able to incrementally increase the level of commitment to one that you're both comfortable staying at long-term, while ensuring that for both partners, the relationship is both a net benefit and has manageable costs (those two things are not the same). Obviously it's not easy to actually have conversations about these things (congratulations if you can just talk about this stuff) because there's a huge fear of hearing an answer that you don't want to hear. I think the range of discouraging answers which actually spell doom for a relationship is smaller than people think, and there's usually a reasonable "shoulder" you can fall into where things aren't on a good trajectory but could be brought back into one, but even so these conversations are scary. Still, I think only having honest conversations about these things when you're angry at each other is not a good plan. You can also try to communicate some of these things via non-conversational means, if that feels safer, and at least being aware that these are the objectives you're pursuing is probably helpful.
I'll post two more replies here: one about my own experiences that led me to this mental model, and one trying to distill this into advice, although it will take me a moment to get to those.
#relationships #love

@arXiv_csCV_bot@mastoxiv.page
2025-09-09 12:30:52

BIR-Adapter: A Low-Complexity Diffusion Model Adapter for Blind Image Restoration
Cem Eteke, Alexander Griessel, Wolfgang Kellerer, Eckehard Steinbach
arxiv.org/abs/2509.06904

@arXiv_eessIV_bot@mastoxiv.page
2025-08-05 10:29:10

Less is More: AMBER-AFNO -- a New Benchmark for Lightweight 3D Medical Image Segmentation
Andrea Dosi, Semanto Mondal, Rajib Chandra Ghosh, Massimo Brescia, Giuseppe Longo
arxiv.org/abs/2508.01941

@arXiv_csIR_bot@mastoxiv.page
2025-09-30 10:46:51

Multi-Item-Query Attention for Stable Sequential Recommendation
Mingshi Xu, Haoren Zhu, Wilfred Siu Hung Ng
arxiv.org/abs/2509.24424 arxiv.…

@arXiv_csLG_bot@mastoxiv.page
2025-09-05 10:20:51

Attention as an Adaptive Filter
Peter Racioppo
arxiv.org/abs/2509.04154 arxiv.org/pdf/2509.04154

@arXiv_eessSP_bot@mastoxiv.page
2025-08-06 10:05:10

Investigating the Cognitive Response of Brake Lights in Initiating Braking Action Using EEG
Ramaswamy Palaniappan, Surej Mouli, Howard Bowman, Ian McLoughlin
arxiv.org/abs/2508.03274

@arXiv_csAI_bot@mastoxiv.page
2025-10-01 11:28:27

LMILAtt: A Deep Learning Model for Depression Detection from Social Media Users Enhanced by Multi-Instance Learning Based on Attention Mechanism
Yukun Yang
arxiv.org/abs/2509.26145

@peterhoneyman@a2mi.social
2025-08-18 20:00:51

i am determined to read the attention/transformer paper
i even printed it out

Attention Is All You Need
Ashish Vaswani
Noam Shazeer
Niki Parmar
Jakob Uszkoreit
Llion Jones
Aidan N. Gomez
Łukasz Kaiser
Illia Polosukhin

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with …

@arXiv_quantph_bot@mastoxiv.page
2025-07-28 09:42:41

PGKET: A Photonic Gaussian Kernel Enhanced Transformer
Ren-Xin Zhao
arxiv.org/abs/2507.19041 arxiv.org/pdf/2507.19041

@arXiv_csLG_bot@mastoxiv.page
2025-10-07 13:04:42

On Structured State-Space Duality
Jerry Yao-Chieh Hu, Xiwen Zhang, Weimin Wu, Han Liu
arxiv.org/abs/2510.04944 arxiv.org/pdf/2510.04944

@arXiv_csSD_bot@mastoxiv.page
2025-10-01 09:43:38

The silence of the weights: an investigation of structural pruning strategies for attention-based audio signal architectures
Andrea Diecidue, Carlo Alberto Barbano, Piero Fraternali, Mathieu Fontaine, Enzo Tartaglione
arxiv.org/abs/2509.26207

@arXiv_physicsfludyn_bot@mastoxiv.page
2025-08-05 10:27:11

A robust intermittency equation formulation for transition modeling in Spalart-Allmaras RANS simulations of airfoil flows across a wide range of Reynolds numbers
Valerio D'Alessandro, Matteo Falone, Luca Giammichele, Renato Ricci
arxiv.org/abs/2508.02547

@arXiv_csCL_bot@mastoxiv.page
2025-10-03 10:40:01

Learning to Look at the Other Side: A Semantic Probing Study of Word Embeddings in LLMs with Enabled Bidirectional Attention
Zhaoxin Feng, Jianfei Ma, Emmanuele Chersoni, Xiaojing Zhao, Xiaoyi Bao
arxiv.org/abs/2510.01652

@arXiv_csNI_bot@mastoxiv.page
2025-08-05 08:47:30

Convolutions are Competitive with Transformers for Encrypted Traffic Classification with Pre-training
Chungang Lin, Weiyao Zhang, Tianyu Zuo, Chao Zha, Yilong Jiang, Ruiqi Meng, Haitong Luo, Xuying Meng, Yujun Zhang
arxiv.org/abs/2508.02001

@arXiv_eessIV_bot@mastoxiv.page
2025-07-28 08:35:41

Enhancing Diabetic Retinopathy Classification Accuracy through Dual Attention Mechanism in Deep Learning
Abdul Hannan, Zahid Mahmood, Rizwan Qureshi, Hazrat Ali
arxiv.org/abs/2507.19199

@arXiv_mathRT_bot@mastoxiv.page
2025-07-22 08:27:00

Partial Symmetry Enforced Attention Decomposition (PSEAD): A Group-Theoretic Framework for Equivariant Transformers in Biological Systems
Daniel Ayomide Olanrewaju
arxiv.org/abs/2507.14908

@arXiv_csIT_bot@mastoxiv.page
2025-09-22 08:37:11

Interplay Between Belief Propagation and Transformer: Differential-Attention Message Passing Transformer
Chin Wa Lau, Xiang Shi, Ziyan Zheng, Haiwen Cao, Nian Guo
arxiv.org/abs/2509.15637

@arXiv_csCL_bot@mastoxiv.page
2025-08-01 10:20:11

DiffLoRA: Differential Low-Rank Adapters for Large Language Models
Alexandre Misrahi, Nadezhda Chirkova, Maxime Louis, Vassilina Nikoulina
arxiv.org/abs/2507.23588

@arXiv_csCE_bot@mastoxiv.page
2025-09-22 07:31:21

SPH-Net: A Co-Attention Hybrid Model for Accurate Stock Price Prediction
Yiyang Wu, Hanyu Ma, Muxin Ge, Xiaoli Ma, Yadi Liu, Ye Aung Moe, Zeyu Han, Weizheng Xie
arxiv.org/abs/2509.15414

@arXiv_csCV_bot@mastoxiv.page
2025-09-03 15:03:13

Enhancing Fitness Movement Recognition with Attention Mechanism and Pre-Trained Feature Extractors
Shanjid Hasan Nishat, Srabonti Deb, Mohiuddin Ahmed
arxiv.org/abs/2509.02511

@arXiv_csOH_bot@mastoxiv.page
2025-08-12 07:50:53

Historical Prediction Attention Mechanism based Trajectory Forecasting for Proactive Work Zone Safety in a Digital Twin Environment
Minhaj Uddin Ahmad, Mizanur Rahman, Alican Sevim, David Bodoh, Sakib Khan, Li Zhao, Nathan Huynh, Eren Erman Ozguven
arxiv.org/abs/2508.06544

@arXiv_csCR_bot@mastoxiv.page
2025-07-16 08:17:11

LaSM: Layer-wise Scaling Mechanism for Defending Pop-up Attack on GUI Agents
Zihe Yan, Zhuosheng Zhang
arxiv.org/abs/2507.10610

@arXiv_csCV_bot@mastoxiv.page
2025-08-25 09:56:40

Attention Mechanism in Randomized Time Warping
Yutaro Hiraoka, Kazuya Okamura, Kota Suto, Kazuhiro Fukui
arxiv.org/abs/2508.16366 arxiv.org…

@arXiv_csLG_bot@mastoxiv.page
2025-09-30 14:44:01

High-Dimensional Analysis of Single-Layer Attention for Sparse-Token Classification
Nicholas Barnfield, Hugo Cui, Yue M. Lu
arxiv.org/abs/2509.25153

@arXiv_physicsappph_bot@mastoxiv.page
2025-08-01 08:28:51

Clock Pulling Enables Maximum-Efficiency Wireless Power Transfer
Xianglin Hao, Xiaosheng Wang, Ke Yin, Sheng Ren, Chaoqiang Jiang, Jianlong Zou, Tianyu Dong, Chi Kong Tse
arxiv.org/abs/2507.22907

@arXiv_physicschemph_bot@mastoxiv.page
2025-09-22 08:36:31

DeepMech: A Machine Learning Framework for Chemical Reaction Mechanism Prediction
Manajit Das, Ajnabiul Hoque, Mayank Baranwal, Raghavan B. Sunoj
arxiv.org/abs/2509.15872

@arXiv_csSD_bot@mastoxiv.page
2025-07-29 08:51:11

Improving Deep Learning-based Respiratory Sound Analysis with Frequency Selection and Attention Mechanism
Nouhaila Fraihi, Ouassim Karrakchou, Mounir Ghogho
arxiv.org/abs/2507.20052

@arXiv_eessSP_bot@mastoxiv.page
2025-07-22 12:02:10

BEAM-Net: A Deep Learning Framework with Bone Enhancement Attention Mechanism for High Resolution High Frame Rate Ultrasound Beamforming
Midhila Madhusoodanan, Mahesh Raveendranatha Panicker, Pisharody Harikrishnan Gopalakrishnan, Abhilash Rakkunedeth Hareendranathan
arxiv.org/abs/2507.15306

@arXiv_csNI_bot@mastoxiv.page
2025-09-22 08:52:11

Smart Interrupted Routing Based on Multi-head Attention Mask Mechanism-Driven MARL in Software-defined UASNs
Zhenyu Wang, Chuan Lin, Guangjie Han, Shengchao Zhu, Ruoyuan Wu, Tongwei Zhang
arxiv.org/abs/2509.15856

@arXiv_csLG_bot@mastoxiv.page
2025-10-02 11:09:21

Privacy Preserved Federated Learning with Attention-Based Aggregation for Biometric Recognition
Kassahun Azezew, Minyechil Alehegn, Tsega Asresa, Bitew Mekuria, Tizazu Bayh, Ayenew Kassie, Amsalu Tesema, Animut Embiyale
arxiv.org/abs/2510.01113

@arXiv_physicsplasmph_bot@mastoxiv.page
2025-07-25 08:49:31

Nonlocal current-driven heat flow in ideal plasmas
Nicholas Mitchell, David Chapman, Grigory Kagan
arxiv.org/abs/2507.18430 arxiv.org/pdf/2…

@arXiv_csLG_bot@mastoxiv.page
2025-09-25 10:38:22

Pi-Transformer: A Physics-informed Attention Mechanism for Time Series Anomaly Detection
Sepehr Maleki, Negar Pourmoazemi
arxiv.org/abs/2509.19985

@arXiv_csCL_bot@mastoxiv.page
2025-09-23 12:57:41

Cross-Attention is Half Explanation in Speech-to-Text Models
Sara Papi, Dennis Fucci, Marco Gaido, Matteo Negri, Luisa Bentivogli
arxiv.org/abs/2509.18010

@arXiv_astrophCO_bot@mastoxiv.page
2025-07-21 09:07:50

Investigating $f(R)$-Inflation: background evolution and constraints
Elisa Fazzari, Chiara De Leo, Giovanni Montani, Matteo Martinelli, Alessandro Melchiorri, Guadalupe Cañas-Herrera
arxiv.org/abs/2507.13890

@arXiv_csSD_bot@mastoxiv.page
2025-09-25 08:47:12

Eliminating stability hallucinations in llm-based tts models via attention guidance
ShiMing Wang, ZhiHao Du, Yang Xiang, TianYu Zhao, Han Zhao, Qian Chen, XianGang Li, HanJie Guo, ZhenHua Ling
arxiv.org/abs/2509.19852

@arXiv_csCV_bot@mastoxiv.page
2025-09-05 10:14:41

TEn-CATS: Text-Enriched Audio-Visual Video Parsing with Multi-Scale Category-Aware Temporal Graph
Yaru Chen, Faegheh Sardari, Peiliang Zhang, Ruohao Guo, Yang Xiang, Zhenbo Li, Wenwu Wang
arxiv.org/abs/2509.04086

@arXiv_csLG_bot@mastoxiv.page
2025-10-06 10:25:29

Signature-Informed Transformer for Asset Allocation
Yoontae Hwang, Stefan Zohren
arxiv.org/abs/2510.03129 arxiv.org/pdf/2510.03129

@arXiv_csCL_bot@mastoxiv.page
2025-07-25 10:07:42

Not All Features Deserve Attention: Graph-Guided Dependency Learning for Tabular Data Generation with Language Models
Zheyu Zhang, Shuo Yang, Bardh Prenkaj, Gjergji Kasneci
arxiv.org/abs/2507.18504

@arXiv_csGR_bot@mastoxiv.page
2025-07-21 07:34:10

StructInbet: Integrating Explicit Structural Guidance into Inbetween Frame Generation
Zhenglin Pan, Haoran Xie
arxiv.org/abs/2507.13377

@arXiv_csSD_bot@mastoxiv.page
2025-07-29 07:51:51

Efficient Vocal-Conditioned Music Generation via Soft Alignment Attention and Latent Diffusion
Hei Shing Cheung, Boya Zhang
arxiv.org/abs/2507.19991

@arXiv_eessSP_bot@mastoxiv.page
2025-08-01 07:59:21

BS-1-to-N: Diffusion-Based Environment-Aware Cross-BS Channel Knowledge Map Generation for Cell-Free Networks
Zhuoyin Dai, Di Wu, Yong Zeng, Xiaoli Xu, Xinyi Wang, Zesong Fei
arxiv.org/abs/2507.23236

@arXiv_csLG_bot@mastoxiv.page
2025-08-29 10:08:31

Rethinking Transformer Connectivity: TLinFormer, A Path to Exact, Full Context-Aware Linear Attention
Zhongpan Tang
arxiv.org/abs/2508.20407

@arXiv_quantph_bot@mastoxiv.page
2025-07-21 09:51:50

Machine Learning-aided Optimal Control of a noisy qubit
Riccardo Cantone, Shreyasi Mukherjee, Luigi Giannelli, Elisabetta Paladino, Giuseppe Falci
arxiv.org/abs/2507.14085

@arXiv_csLG_bot@mastoxiv.page
2025-07-24 10:08:39

DistrAttention: An Efficient and Flexible Self-Attention Mechanism on Modern GPUs
Haolin Jin, Mengbai Xiao, Yuan Yuan, Xiao Zhang, Dongxiao Yu, Guanghui Zhang, Haoliang Wang
arxiv.org/abs/2507.17245

@arXiv_csCR_bot@mastoxiv.page
2025-08-14 07:48:32

Shadow in the Cache: Unveiling and Mitigating Privacy Risks of KV-cache in LLM Inference
Zhifan Luo, Shuo Shao, Su Zhang, Lijing Zhou, Yuke Hu, Chenxu Zhao, Zhihao Liu, Zhan Qin
arxiv.org/abs/2508.09442

@tiotasram@kolektiva.social
2025-08-11 13:30:26

Speculative politics
As an anarchist (okay, maybe not in practice), I'm tired of hearing why we have to suffer X and Y indignity to "preserve the rule of law" or "maintain Democratic norms." So here's an example of what representative democracy (a form of government that I believe is inherently flawed) could look like if its proponents had even an ounce of imagination, and/or weren't actively trying to rig it to favor a rich donor class:
1. Unicameral legislature, where representatives pass laws directly. Each state elects 3 statewide representatives: the three most popular candidates in a statewide race where each person votes for one candidate (ranked preference voting would be even better but might not be necessary, and is not a solution by itself). Instead of each representative getting one vote in the chamber, they get N votes, where N is the number of people who voted for them. This means that in a close race, instead of the winner getting all the power, the power is split. Having 3 representatives trades off between legislature size and ensuring that two parties can't dominate together.
2. Any individual citizen can contact their local election office to switch or withdraw their vote at any time (maybe with a 3-day delay or something). Voting power of representatives can thus shift even without an election. They are limited to choosing one of the three elected representatives, or "none of the above." If the "none of the above" fraction exceeds 20% of eligible voters, a new election is triggered for that state. If turnout is less than 80%, a second election happens immediately, with results being final even at lower turnout until 6 months later (some better mechanism for turnout management might be needed).
3. All elections allow mail-in ballots, and in-person voting happens Sunday-Tuesday with the Monday being a mandatory holiday. (Yes, election integrity is not better in this system and that's a big weakness.)
4. Separate nationwide elections elect three positions for head-of-state: one with diplomatic/administrative powers, another with military powers, and a third with veto power. For each position, the top three candidates serve together, with only the first-place winner having actual power until vote switches or withdrawals change who that is. Once one of these heads loses their first-place status, they cannot get it again until another election, even if voters switch preferences back (to avoid dithering). An election for one of these positions is triggered when 20% have withdrawn their votes, or if all three people initially elected have been disqualified by losing their lead in the vote count.
5. Laws that involve spending money are packaged with specific taxes to pay for them, and may only be paid for by those specific revenues. Each tax may be opted into or out of by each taxpayer; where possible opting out of the tax also opts you out of the service. (I'm well aware of a lot of the drawbacks of this, but also feel like they'd not necessarily be worse than the drawbacks of our current system.) A small mandatory tax would cover election expenses.
6. I'm running out of attention, but similar multi-winner elections could elect panels of judges from which a subset is chosen randomly to preside in each case.
Now I'll point out once again that this system, in not directly confronting capitalism, racism, patriarchy, etc., is probably doomed to the same failures as our current system. But if you profess to want a "representative democracy" as opposed to something more liberatory, I hope you'll at least advocate for something like this that actually includes meaningful representation as opposed to the current US system that's engineered to quash it.
Key questions: "Why should we have winner-take-all elections when winners-take-proportionately-to-votes is right there?" and "Why should elected officials get to ignore their constituents' approval except during elections, when vote-withdrawal or -switching is possible?"
2/2
#Democracy

@arXiv_eessAS_bot@mastoxiv.page
2025-09-18 09:21:31

Mixture of Low-Rank Adapter Experts in Generalizable Audio Deepfake Detection
Janne Laakkonen, Ivan Kukanov, Ville Hautamäki
arxiv.org/abs/2509.13878

@arXiv_csSD_bot@mastoxiv.page
2025-07-25 08:27:12

Resnet-conformer network with shared weights and attention mechanism for sound event localization, detection, and distance estimation
Quoc Thinh Vo, David Han
arxiv.org/abs/2507.17941

@arXiv_csLG_bot@mastoxiv.page
2025-09-29 11:34:37

Physics-informed GNN for medium-high voltage AC power flow with edge-aware attention and line search correction operator
Changhun Kim, Timon Conrad, Redwanul Karim, Julian Oelhaf, David Riebesel, Tomás Arias-Vergara, Andreas Maier, Johann Jäger, Siming Bayer
arxiv.org/abs/2509.22458

@arXiv_csCV_bot@mastoxiv.page
2025-10-02 10:54:11

Feature Identification for Hierarchical Contrastive Learning
Julius Ott, Nastassia Vysotskaya, Huawei Sun, Lorenzo Servadei, Robert Wille
arxiv.org/abs/2510.00837

@arXiv_physicsfludyn_bot@mastoxiv.page
2025-09-19 08:54:31

On the algebraic stretching dynamics of variable-density mixing in shock-bubble interaction
Xu Han, Bin Yu, Hong Liu
arxiv.org/abs/2509.14607

@arXiv_csLG_bot@mastoxiv.page
2025-10-01 11:57:07

TASP: Topology-aware Sequence Parallelism
Yida Wang (Capital Normal University, Infinigence-AI), Ke Hong (Tsinghua University, Infinigence-AI), Xiuhong Li (Infinigence-AI), Yuanchao Xu (Capital Normal University), Wenxun Wang (Tsinghua University), Guohao Dai (Infinigence-AI, Shanghai Jiao Tong University), Yu Wang (Tsinghua University)

@arXiv_csCV_bot@mastoxiv.page
2025-07-28 10:14:41

Modality Agnostic Efficient Long Range Encoder
Toufiq Parag, Ahmed Elgammal
arxiv.org/abs/2507.19409 arxiv.org/pdf/2507.19409

@arXiv_csLG_bot@mastoxiv.page
2025-09-15 09:56:11

Multipole Semantic Attention: A Fast Approximation of Softmax Attention for Pretraining
Rupert Mitchell, Kristian Kersting
arxiv.org/abs/2509.10406

@arXiv_csGR_bot@mastoxiv.page
2025-07-18 08:44:42

HairFormer: Transformer-Based Dynamic Neural Hair Simulation
Joy Xiaoji Zhang, Jingsen Zhu, Hanyu Chen, Steve Marschner
arxiv.org/abs/2507.12600

@arXiv_csLG_bot@mastoxiv.page
2025-08-15 10:19:32

Natively Trainable Sparse Attention for Hierarchical Point Cloud Datasets
Nicolas Lapautre, Maria Marchenko, Carlos Miguel Patiño, Xin Zhou
arxiv.org/abs/2508.10758

@arXiv_csCV_bot@mastoxiv.page
2025-08-21 10:13:40

EventSSEG: Event-driven Self-Supervised Segmentation with Probabilistic Attention
Lakshmi Annamalai, Chetan Singh Thakur
arxiv.org/abs/2508.14856

@arXiv_csSD_bot@mastoxiv.page
2025-07-31 08:20:11

Adaptive Duration Model for Text Speech Alignment
Junjie Cao
arxiv.org/abs/2507.22612 arxiv.org/pdf/2507.22612

@arXiv_csCL_bot@mastoxiv.page
2025-09-19 13:23:51

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[1/3]:
- Fast Multipole Attention: A Scalable Multilevel Attention Mechanism for Text and Images
Yanming Kang, Giang Tran, Hans De Sterck

@arXiv_csCV_bot@mastoxiv.page
2025-09-30 15:01:16

VideoAnchor: Reinforcing Subspace-Structured Visual Cues for Coherent Visual-Spatial Reasoning
Zhaozhi Wang, Tong Zhang, Mingyue Guo, Yaowei Wang, Qixiang Ye
arxiv.org/abs/2509.25151

@arXiv_csLG_bot@mastoxiv.page
2025-10-02 11:07:11

Random Feature Spiking Neural Networks
Maximilian Gollwitzer, Felix Dietrich
arxiv.org/abs/2510.01012 arxiv.org/pdf/2510.01012

@arXiv_csCV_bot@mastoxiv.page
2025-07-29 10:02:11

HydraMamba: Multi-Head State Space Model for Global Point Cloud Learning
Kanglin Qu, Pan Gao, Qun Dai, Yuanhao Sun
arxiv.org/abs/2507.19778

@arXiv_csLG_bot@mastoxiv.page
2025-08-20 10:12:40

PENGUIN: Enhancing Transformer with Periodic-Nested Group Attention for Long-term Time Series Forecasting
Tian Sun, Yuqi Chen, Weiwei Sun
arxiv.org/abs/2508.13773

@arXiv_csLG_bot@mastoxiv.page
2025-09-26 10:28:01

TyphoonMLA: A Mixed Naive-Absorb MLA Kernel For Shared Prefix
Ahmet Caner Yüzügüler, Ahmet Çelik, Jiawei Zhuang, Lukas Cavigelli
arxiv.org/abs/2509.21081

@arXiv_eessAS_bot@mastoxiv.page
2025-09-11 09:22:43

Accelerating Diffusion Transformer-Based Text-to-Speech with Transformer Layer Caching
Siratish Sakpiboonchit
arxiv.org/abs/2509.08696 arxi…

@arXiv_csSD_bot@mastoxiv.page
2025-08-11 09:13:00

DAFMSVC: One-Shot Singing Voice Conversion with Dual Attention Mechanism and Flow Matching
Wei Chen, Binzhu Sha, Dan Luo, Jing Yang, Zhuo Wang, Fan Fan, Zhiyong Wu
arxiv.org/abs/2508.05978

@arXiv_csLG_bot@mastoxiv.page
2025-09-12 09:19:09

Fast attention mechanisms: a tale of parallelism
Jingwen Liu, Hantao Yu, Clayton Sanford, Alexandr Andoni, Daniel Hsu
arxiv.org/abs/2509.09001

@arXiv_csCL_bot@mastoxiv.page
2025-08-21 09:57:50

Improving in-context learning with a better scoring function
Omar Naim, Swarnadeep Bhar, Jérôme Bolte, Nicholas Asher
arxiv.org/abs/2508.14685

@arXiv_csSD_bot@mastoxiv.page
2025-09-12 08:30:29

Efficient Transformer-Based Piano Transcription With Sparse Attention Mechanisms
Weixing Wei, Kazuyoshi Yoshii
arxiv.org/abs/2509.09318 arx…

@arXiv_csSD_bot@mastoxiv.page
2025-08-28 07:48:40

Infant Cry Detection In Noisy Environment Using Blueprint Separable Convolutions and Time-Frequency Recurrent Neural Network
Haolin Yu, Yanxiong Li
arxiv.org/abs/2508.19308

@arXiv_csCV_bot@mastoxiv.page
2025-07-25 10:20:52

DRWKV: Focusing on Object Edges for Low-Light Image Enhancement
Xuecheng Bai, Yuxiang Wang, Boyu Hu, Qinyuan Jie, Chuanzhi Xu, Hongru Xiao, Kechen Li, Vera Chung
arxiv.org/abs/2507.18594

@arXiv_eessAS_bot@mastoxiv.page
2025-08-13 08:05:32

Joint decoding method for controllable contextual speech recognition based on Speech LLM
Yangui Fang, Jing Peng, Yu Xi, Xu Li, Haoyu Li, Chengwei Zhang, Guohui Zhong, Kai Yu
arxiv.org/abs/2508.08585

@arXiv_csLG_bot@mastoxiv.page
2025-08-21 10:08:30

Great GATsBi: Hybrid, Multimodal, Trajectory Forecasting for Bicycles using Anticipation Mechanism
Kevin Riehl, Shaimaa K. El-Baklish, Anastasios Kouvelas, Michail A. Makridis
arxiv.org/abs/2508.14523

@arXiv_csCV_bot@mastoxiv.page
2025-08-20 10:17:30

Self-Aware Adaptive Alignment: Enabling Accurate Perception for Intelligent Transportation Systems
Tong Xiang, Hongxia Zhao, Fenghua Zhu, Yuanyuan Chen, Yisheng Lv
arxiv.org/abs/2508.13823

@arXiv_csSD_bot@mastoxiv.page
2025-07-22 10:36:50

Multichannel Keyword Spotting for Noisy Conditions
Dzmitry Saladukha, Ivan Koriabkin, Kanstantsin Artsiom, Aliaksei Rak, Nikita Ryzhikov
arxiv.org/abs/2507.15558

@arXiv_csSD_bot@mastoxiv.page
2025-07-22 10:12:40

A2TTS: TTS for Low Resource Indian Languages
Ayush Singh Bhadoriya, Abhishek Nikunj Shinde, Isha Pandey, Ganesh Ramakrishnan
arxiv.org/abs/2507.15272

@arXiv_csCV_bot@mastoxiv.page
2025-09-17 10:58:10

Vi-SAFE: A Spatial-Temporal Framework for Efficient Violence Detection in Public Surveillance
Ligang Chang, Shengkai Xu, Liangchang Shen, Binhan Xu, Junqiao Wang, Tianyu Shi, Yanhui Du
arxiv.org/abs/2509.13210

@arXiv_csSD_bot@mastoxiv.page
2025-08-21 08:58:20

EffiFusion-GAN: Efficient Fusion Generative Adversarial Network for Speech Enhancement
Bin Wen, Tien-Ping Tan
arxiv.org/abs/2508.14525 arxi…

@arXiv_csLG_bot@mastoxiv.page
2025-08-21 10:08:10

Artificial Intelligence-Based Multiscale Temporal Modeling for Anomaly Detection in Cloud Services
Lian Lian, Yilin Li, Song Han, Renzi Meng, Sibo Wang, Ming Wang
arxiv.org/abs/2508.14503

@arXiv_csSD_bot@mastoxiv.page
2025-09-17 09:27:30

Timbre-Adaptive Transcription: A Lightweight Architecture with Associative Memory for Dynamic Instrument Separation
Ruigang Li, Yongxu Zhu
arxiv.org/abs/2509.12712