Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csCV_bot@mastoxiv.page
2025-08-20 10:16:50

Timestep-Compressed Attack on Spiking Neural Networks through Timestep-Level Backpropagation
Donghwa Kang, Doohyun Kim, Sang-Ki Ko, Jinkyu Lee, Hyeongboo Baek, Brent ByungHoon Kang
arxiv.org/abs/2508.13812

@arXiv_csNE_bot@mastoxiv.page
2025-08-21 08:00:00

Quantization Meets Spikes: Lossless Conversion in the First Timestep via Polarity Multi-Spike Mapping
Hangming Zhang, Zheng Li, Qiang Yu
arxiv.org/abs/2508.14520

@arXiv_mathNA_bot@mastoxiv.page
2025-06-23 10:42:30

IMEX-RB: a self-adaptive IMEX time integration scheme exploiting the RB method
Micol Bassanini, Simone Deparis, Francesco Sala, Riccardo Tenderini
arxiv.org/abs/2506.16470

@arXiv_csSD_bot@mastoxiv.page
2025-06-19 08:36:03

Diff-TONE: Timestep Optimization for iNstrument Editing in Text-to-Music Diffusion Models
Teysir Baoueb, Xiaoyu Bie, Xi Wang, Gaël Richard
arxiv.org/abs/2506.15530

@arXiv_csGR_bot@mastoxiv.page
2025-08-20 07:41:59

Sparse, Geometry- and Material-Aware Bases for Multilevel Elastodynamic Simulation
Ty Trusty, David I. W. Levin, Danny M. Kaufman
arxiv.org/abs/2508.13386

@arXiv_csNE_bot@mastoxiv.page
2025-08-18 07:54:30

SDSNN: A Single-Timestep Spiking Neural Network with Self-Dropping Neuron and Bayesian Optimization
Changqing Xu, Buxuan Song, Yi Liu, Xinfang Liao, Wenbin Zheng, Yintang Yang
arxiv.org/abs/2508.10913

@arXiv_astrophEP_bot@mastoxiv.page
2025-07-08 11:34:00

On the convergence of N-body simulations of the Solar System
Hanno Rein, Garett Brown, Mei Kanda
arxiv.org/abs/2507.04987

@arXiv_csCV_bot@mastoxiv.page
2025-08-12 12:47:43

OMGSR: You Only Need One Mid-timestep Guidance for Real-World Image Super-Resolution
Zhiqiang Wu, Zhaomang Sun, Tong Zhou, Bingtao Fu, Ji Cong, Yitong Dong, Huaqi Zhang, Xuan Tang, Mingsong Chen, Xian Wei
arxiv.org/abs/2508.08227

@arXiv_csAR_bot@mastoxiv.page
2025-06-11 07:18:53

STI-SNN: A 0.14 GOPS/W/PE Single-Timestep Inference FPGA-based SNN Accelerator with Algorithm and Hardware Co-Design
Kainan Wang, Chengyi Yang, Chengting Yu, Yee Sin Ang, Bo Wang, Aili Wang
arxiv.org/abs/2506.08842

@arXiv_csDS_bot@mastoxiv.page
2025-08-15 08:45:12

On Fixed-Parameter Tractability of Weighted 0-1 Timed Matching Problem on Temporal Graphs
Rinku Kumar, Bodhisatwa Mazumdar, Subhrangsu Mandal
arxiv.org/abs/2508.10562

@arXiv_csMM_bot@mastoxiv.page
2025-07-11 07:58:11

IML-Spikeformer: Input-aware Multi-Level Spiking Transformer for Speech Processing
Zeyang Song, Shimin Zhang, Yuhong Chou, Jibin Wu, Haizhou Li
arxiv.org/abs/2507.07396

@arXiv_csLG_bot@mastoxiv.page
2025-07-11 10:23:31

Reinforcement Learning with Action Chunking
Qiyang Li, Zhiyuan Zhou, Sergey Levine
arxiv.org/abs/2507.07969 arxiv.org/pdf/2507.07969 arxiv.org/html/2507.07969
arXiv:2507.07969v1 Announce Type: new
Abstract: We present Q-chunking, a simple yet effective recipe for improving reinforcement learning (RL) algorithms for long-horizon, sparse-reward tasks. Our recipe is designed for the offline-to-online RL setting, where the goal is to leverage an offline prior dataset to maximize the sample-efficiency of online learning. Effective exploration and sample-efficient learning remain central challenges in this setting, as it is not obvious how the offline data should be utilized to acquire a good exploratory policy. Our key insight is that action chunking, a technique popularized in imitation learning where sequences of future actions are predicted rather than a single action at each timestep, can be applied to temporal difference (TD)-based RL methods to mitigate the exploration challenge. Q-chunking adopts action chunking by directly running RL in a 'chunked' action space, enabling the agent to (1) leverage temporally consistent behaviors from offline data for more effective online exploration and (2) use unbiased $n$-step backups for more stable and efficient TD learning. Our experimental results demonstrate that Q-chunking exhibits strong offline performance and online sample efficiency, outperforming prior best offline-to-online methods on a range of long-horizon, sparse-reward manipulation tasks.
toXiv_bot_toot
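
The abstract above describes running TD-based RL directly in a "chunked" action space: the policy emits h future actions at once, the critic scores the whole chunk, and the h rewards inside the chunk give an unbiased h-step backup that bootstraps only at the chunk boundary. The following is a minimal sketch of that idea under assumed names and toy networks (ChunkCritic, ChunkPolicy, td_loss, CHUNK, etc. are illustrative, not the paper's actual implementation or offline-to-online pipeline).

```python
# Minimal sketch of a "chunked action space" TD update, as described in the
# Q-chunking abstract (arxiv.org/abs/2507.07969). All names, shapes, and the
# toy networks below are assumptions for illustration only.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, CHUNK = 8, 2, 4   # h = CHUNK actions per decision
GAMMA = 0.99

class ChunkCritic(nn.Module):
    """Q(s_t, a_{t:t+h}): scores a state together with a whole action chunk."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM * CHUNK, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, state, action_chunk):
        # action_chunk: (batch, CHUNK, ACTION_DIM), flattened so that
        # TD learning runs directly in the chunked action space.
        flat = action_chunk.reshape(action_chunk.shape[0], -1)
        return self.net(torch.cat([state, flat], dim=-1)).squeeze(-1)

class ChunkPolicy(nn.Module):
    """pi(a_{t:t+h} | s_t): emits h future actions at once (action chunking)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM * CHUNK), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state).reshape(-1, CHUNK, ACTION_DIM)

def td_loss(critic, target_critic, policy, batch):
    """Unbiased h-step backup: rewards inside the chunk are summed exactly,
    and bootstrapping happens only at the chunk boundary s_{t+h}."""
    s, chunk, rewards, s_next, done = batch          # rewards: (batch, CHUNK)
    discounts = GAMMA ** torch.arange(CHUNK, dtype=torch.float32)
    n_step_return = (rewards * discounts).sum(dim=-1)
    with torch.no_grad():
        next_chunk = policy(s_next)
        target = n_step_return + (GAMMA ** CHUNK) * (1 - done) * \
                 target_critic(s_next, next_chunk)
    return ((critic(s, chunk) - target) ** 2).mean()

# Toy usage with random data, just to check that the shapes line up.
critic, target_critic, policy = ChunkCritic(), ChunkCritic(), ChunkPolicy()
batch = (torch.randn(32, STATE_DIM),
         torch.randn(32, CHUNK, ACTION_DIM),
         torch.randn(32, CHUNK),
         torch.randn(32, STATE_DIM),
         torch.zeros(32))
td_loss(critic, target_critic, policy, batch).backward()
```

Treating the chunk as a single macro action is what makes the h-step return unbiased here: no off-policy correction is needed inside the chunk, because the critic never bootstraps on intermediate states.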

@arXiv_csMA_bot@mastoxiv.page
2025-08-08 07:35:52

BTPG-max: Achieving Local Maximal Bidirectional Pairs for Bidirectional Temporal Plan Graphs
Yifan Su, Rishi Veerapaneni, Jiaoyang Li
arxiv.org/abs/2508.04849

@arXiv_csCV_bot@mastoxiv.page
2025-07-08 14:30:11

TLB-VFI: Temporal-Aware Latent Brownian Bridge Diffusion for Video Frame Interpolation
Zonglin Lyu, Chen Chen
arxiv.org/abs/2507.04984

@arXiv_physicscompph_bot@mastoxiv.page
2025-05-29 10:29:04

This arxiv.org/abs/2505.02270 has been replaced.
initial toot: mastoxiv.page/@ar…

@arXiv_mathNA_bot@mastoxiv.page
2025-07-24 08:27:49

Explicit Monotone Stable Super-Time-Stepping Methods for Finite Time Singularities
Zheng Tan, Tariq D. Aslam, Andrea L. Bertozzi
arxiv.org/abs/2507.17062