🇺🇦 #NowPlaying on KEXP's #SonicReducer
Split System:
🎵 On The Edge
#SplitSystem
https://splitsystem.bandcamp.com/album/on-the-edge-on-the-loose
https://open.spotify.com/track/5UipxubiGX8H0y2Idb744r
Uncertainty Matters in Dynamic Gaussian Splatting for Monocular 4D Reconstruction
Fengzhi Guo, Chih-Chuan Hsu, Sihao Ding, Cheng Zhang
https://arxiv.org/abs/2510.12768
PubSub-VFL: Towards Efficient Two-Party Split Learning in Heterogeneous Environments via Publisher/Subscriber Architecture
Yi Liu, Yang Liu, Leqian Zheng, Jue Hong, Junjie Shi, Qingyou Yang, Ye Wu, Cong Wang
https://arxiv.org/abs/2510.12494
Monday jam: Medeski Scofield Martin & Wood | Little Walter Rides Again | #jazz
🇺🇦 Now playing on radioeins...
Sepalot:
🎵 Never Give Up
#NowPlaying #Sepalot
https://sepalot.bandcamp.com/track/never-give-up
https://open.spotify.com/track/6WQKv8HGRnIdzdgPvd5CWF
'Tennet frees up power in Zeeland, dozens of companies out of trouble'
https://nos.nl/artikel/2590062-tennet-speelt-stroom-vrij-in-zeeland-tientallen-bedrijven-uit-de-problemen
Good that it did succeed here …
S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
https://arxiv.org/abs/2511.10133 https://arxiv.org/pdf/2511.10133 https://arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, with motivations from a broad range of applications, including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or entail substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to 2--3$\times$ speedup compared to state-of-the-art baselines, while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
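The abstract's key ingredients (local proximal updates for a randomly sampled subset of agents, plus a regularization term that pulls local variables toward consensus) can be sketched in a few lines. The code below is a hypothetical, simplified consensus-ADMM-style stand-in, not the paper's actual S-D-RSM; the function name, the least-squares losses, and all parameter choices are my own illustrative assumptions.

```python
import numpy as np

def s_d_rsm_sketch(As, bs, rho=1.0, subset_size=2, iters=1000, seed=0):
    """Illustrative stochastic splitting iteration (NOT the paper's method):
    each round, a random subset of agents performs a regularized proximal
    update of its local variable; a shared consensus variable averages the
    agents' states; a dual-style regularization term shrinks consensus error.
    Local losses are f_i(x) = 0.5 * ||A_i x - b_i||^2, chosen so the
    proximal mapping has a closed form."""
    rng = np.random.default_rng(seed)
    n, d = len(As), As[0].shape[1]
    x = [np.zeros(d) for _ in range(n)]   # local primal variables
    u = [np.zeros(d) for _ in range(n)]   # local regularization (dual) terms
    z = np.zeros(d)                       # consensus variable
    # Closed-form proximal operator of each local least-squares loss.
    inv = [np.linalg.inv(A.T @ A + rho * np.eye(d)) for A in As]
    Atb = [A.T @ b for A, b in zip(As, bs)]
    for _ in range(iters):
        S = rng.choice(n, size=subset_size, replace=False)  # sampled agents
        for i in S:  # parallel proximal updates for the sampled subset only
            x[i] = inv[i] @ (Atb[i] + rho * (z - u[i]))
        z = np.mean([x[i] + u[i] for i in range(n)], axis=0)  # consensus step
        for i in S:  # regularization update mitigates consensus discrepancy
            u[i] += x[i] - z
    return z, x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    As = [rng.standard_normal((5, 3)) for _ in range(4)]
    bs = [rng.standard_normal(5) for _ in range(4)]
    z, x = s_d_rsm_sketch(As, bs)
    consensus_err = max(np.linalg.norm(xi - z) for xi in x)
    # Reference: minimizer of the stacked (centralized) least-squares problem.
    z_star = np.linalg.lstsq(np.vstack(As), np.concatenate(bs), rcond=None)[0]
    print(consensus_err, np.linalg.norm(z - z_star))
```

On this toy quadratic problem the consensus variable approaches the centralized least-squares solution even though only two of four agents update per round, which is the behavior the abstract claims for S-D-RSM at $\mathcal{O}(1/\epsilon)$ complexity in the general convex case.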
Google: AI agent Sima 2 trains in video games for the real world
Google has unveiled a new version of its game-playing AI agent Sima that handles more complex actions. Sima trains in games for the real world.
🇺🇦 Now playing on radioeins...
Sepalot:
🎵 My Own Way
#NowPlaying #Sepalot
https://sepalot.bandcamp.com/track/my-own-way
https://open.spotify.com/track/1fZWCdpLnT98Lf5DeGkMSB