OMGSR: You Only Need One Mid-timestep Guidance for Real-World Image Super-Resolution
Zhiqiang Wu, Zhaomang Sun, Tong Zhou, Bingtao Fu, Ji Cong, Yitong Dong, Huaqi Zhang, Xuan Tang, Mingsong Chen, Xian Wei
https://arxiv.org/abs/2508.08227
Another timestep of the #Pleiades #occultation by the #Moon, at 21:26 UTC - Merope had just emerged from behind the dark side. Interestingly, no stars whatsoever could be seen in 11 x 70 binoculars, though the Moon - nice terminator - was a beauty. The Moon's elevation was 15°; 1/4 second at f/3.2 and ISO 200.
Streaming Sequence-to-Sequence Learning with Delayed Streams Modeling
Neil Zeghidour, Eugene Kharitonov, Manu Orsini, Václav Volhejn, Gabriel de Marmiesse, Edouard Grave, Patrick Pérez, Laurent Mazaré, Alexandre Défossez
https://arxiv.org/abs/2509.08753
And the #Pleiades #occultation by the #Moon is already history in Bochum, Germany: a final timestep at 22:34 UTC, with the Moon having cleared all major stars of the cluster. Nice show that was - though only for the camera (which was a Panasonic DMC-FZ300 on a tripod, mostly at maximum zoom; this image 1/5 second at f/3.2 and ISO 200, with the Pleiades 25° up).
Reinforcement Learning with Action Chunking
Qiyang Li, Zhiyuan Zhou, Sergey Levine
https://arxiv.org/abs/2507.07969 https://arxiv.org/pdf/2507.07969 https://arxiv.org/html/2507.07969
arXiv:2507.07969v1 Announce Type: new
Abstract: We present Q-chunking, a simple yet effective recipe for improving reinforcement learning (RL) algorithms for long-horizon, sparse-reward tasks. Our recipe is designed for the offline-to-online RL setting, where the goal is to leverage an offline prior dataset to maximize the sample-efficiency of online learning. Effective exploration and sample-efficient learning remain central challenges in this setting, as it is not obvious how the offline data should be utilized to acquire a good exploratory policy. Our key insight is that action chunking, a technique popularized in imitation learning where sequences of future actions are predicted rather than a single action at each timestep, can be applied to temporal difference (TD)-based RL methods to mitigate the exploration challenge. Q-chunking adopts action chunking by directly running RL in a 'chunked' action space, enabling the agent to (1) leverage temporally consistent behaviors from offline data for more effective online exploration and (2) use unbiased $n$-step backups for more stable and efficient TD learning. Our experimental results demonstrate that Q-chunking exhibits strong offline performance and online sample efficiency, outperforming prior best offline-to-online methods on a range of long-horizon, sparse-reward manipulation tasks.
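A minimal sketch of the core idea described in the abstract, not the authors' implementation: a critic is trained over action chunks of length h, so the TD backup spans h environment steps without off-policy bias. All names and values below (H, GAMMA, chunked_td_target, the placeholder critic) are illustrative assumptions.

# Illustrative h-step TD target for a chunked critic Q(state, action_chunk).
import numpy as np

H = 4          # chunk length (hypothetical value)
GAMMA = 0.99   # discount factor

def chunked_td_target(rewards, next_state, next_chunk, q_fn, done):
    """h-step TD target for one executed action chunk.

    rewards   : the h per-step rewards collected while executing the chunk
    next_state: state observed after the whole chunk has been executed
    next_chunk: action chunk proposed by the current policy at next_state
    q_fn      : critic Q(state, action_chunk) -> scalar
    done      : whether the episode terminated during or after the chunk
    """
    # Discounted sum of the rewards earned while the chunk was executed.
    ret = sum(GAMMA**k * r for k, r in enumerate(rewards))
    # Bootstrap with the chunked critic h steps ahead; because the whole chunk
    # was executed as a single "action", this h-step backup is unbiased.
    if not done:
        ret += GAMMA**len(rewards) * q_fn(next_state, next_chunk)
    return ret

# Toy usage with a made-up critic and a sparse reward at the end of the chunk.
dummy_q = lambda s, a: float(np.dot(s, np.ones_like(s)))  # placeholder critic
rewards = [0.0, 0.0, 0.0, 1.0]
target = chunked_td_target(rewards, np.zeros(3), np.zeros((H, 2)), dummy_q, done=False)
print(f"h-step TD target: {target:.3f}")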
Timestep-Compressed Attack on Spiking Neural Networks through Timestep-Level Backpropagation
Donghwa Kang, Doohyun Kim, Sang-Ki Ko, Jinkyu Lee, Hyeongboo Baek, Brent ByungHoon Kang
https://arxiv.org/abs/2508.13812
SDSNN: A Single-Timestep Spiking Neural Network with Self-Dropping Neuron and Bayesian Optimization
Changqing Xu, Buxuan Song, Yi Liu, Xinfang Liao, Wenbin Zheng, Yintang Yang
https://arxiv.org/abs/2508.10913
Time-Aware One Step Diffusion Network for Real-World Image Super-Resolution
Tainyi Zhang, Zheng-Peng Duan, Peng-Tao Jiang, Bo Li, Ming-Ming Cheng, Chun-Le Guo, Chongyi Li
https://arxiv.org/abs/2508.16557