Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@grumpybozo@toad.social
2026-02-16 19:10:34

RE: flipboard.com/@cbcnews/ottawa-
The direct non-translation is suboptimal for communicating their meaning…

@peterhoneyman@a2mi.social
2026-01-12 22:01:28

i’m reviewing a paper on reducing energy costs in large model training and it keeps slinging words like optimize and optimization around and calling other approaches suboptimal and i feel like i would be kind of an old crank if i were to ask if optimality is on the table here (it is not)
EDIT: hold on, maybe it is

@jdrm@social.linux.pizza
2026-03-06 07:03:49

I don't know if you saw this: what people are doing so they can keep programming while their company thinks they're using a stochastic-parrot agent danq.me/2026/03/03/ai-agent-lo

@azonenberg@ioc.exchange
2025-12-29 18:42:10

There's still room to tune - shallow memory throughput is definitely suboptimal due to latency issues I need to chase - but with deep memory, ngscopeclient ThunderScope is getting some pretty impressive performance.
2 channels @ 50M point memory depth (100M points per trigger) streaming at 7.5 WFM/s over LAN from across the building through a router. 40Gbase-SR4 from client to core switch and from core switch to router, then back to core switch, then 10Gbase-SR to the machine host…

Screenshot of ngscopeclient displaying 50M points of data from two channels as a time domain waveform, FFT, and waterfall updating at 7.5 Hz
Filter graph showing the subtract, FFT, and waterfall filters each completing in single digit milliseconds on the GPU

@cark@social.tchncs.de
2026-02-21 22:56:20

I think online petitions are a tool of #Demokratie that has so far been used only suboptimally. There's a lot more potential there!
The Fediverse (with its disproportionately politically engaged users) could help. Unfortunately, the Fediverse support of the common petition platforms is not satisfactory (see images).
But that's a solvable problem. For example, one could…

Screenshot of openpetition.de, specifically the part of the page that encourages further sharing of a petition. Under "Petition teilen" ("Share petition") the logos of Facebook, X, WhatsApp, and Telegram are listed, followed by a short link to copy and a field for sharing the petition by email.
Screenshot of weact.campact.de, in the area where a petition can be shared.

Next to "Teilen" ("Share") are the logos of WhatsApp, Mail, Facebook, and Bluesky.

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:40:51

T1: One-to-One Channel-Head Binding for Multivariate Time-Series Imputation
Dongik Park, Hyunwoo Ryu, Suahn Bae, Keondo Park, Hyung-Sin Kim
arxiv.org/abs/2602.21043 arxiv.org/pdf/2602.21043 arxiv.org/html/2602.21043
arXiv:2602.21043v1 Announce Type: new
Abstract: Imputing missing values in multivariate time series remains challenging, especially under diverse missing patterns and heavy missingness. Existing methods suffer from suboptimal performance as corrupted temporal features hinder effective cross-variable information transfer, amplifying reconstruction errors. Robust imputation requires both extracting temporal patterns from sparse observations within each variable and selectively transferring information across variables--yet current approaches excel at one while compromising the other. We introduce T1 (Time series imputation with 1-to-1 channel-head binding), a CNN-Transformer hybrid architecture that achieves robust imputation through Channel-Head Binding--a mechanism creating one-to-one correspondence between CNN channels and attention heads. This design enables selective information transfer: when missingness corrupts certain temporal patterns, their corresponding attention pathways adaptively down-weight based on remaining observable patterns while preserving reliable cross-variable connections through unaffected channels. Experiments on 11 benchmark datasets demonstrate that T1 achieves state-of-the-art performance, reducing MSE by 46% on average compared to the second-best baseline, with particularly strong gains under extreme sparsity (70% missing ratio). The model generalizes to unseen missing patterns without retraining and uses a consistent hyperparameter configuration across all datasets. The code is available at github.com/Oppenheimerdinger/T1.
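The core idea in the abstract — binding each CNN channel one-to-one to its own attention head, so that a corrupted channel's pathway stays isolated from the others — can be sketched in a few lines. This is only an illustrative sketch, not the paper's T1 implementation: the function name `bound_attention`, the shapes, and the per-channel self-attention over time steps are all my assumptions, chosen to demonstrate the isolation property the abstract describes.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bound_attention(feats):
    """Illustrative 'channel-head binding': one attention head per CNN channel.

    feats: array of shape (C, T, d) -- C conv channels, T time steps,
    d feature dims per channel. Head c reads only channel c's features,
    so missingness that corrupts one channel cannot leak into the
    attention pathways of the other channels.
    """
    C, T, d = feats.shape
    out = np.empty_like(feats)
    for c in range(C):
        q = k = v = feats[c]                  # (T, d): this head's sole input
        scores = q @ k.T / np.sqrt(d)         # (T, T) scaled dot-product
        out[c] = softmax(scores, axis=-1) @ v
    return out
```

In a sketch like this, zeroing out one channel (as heavy missingness might) changes only that channel's output; the other heads' outputs are bitwise identical, which is the isolation behavior the abstract attributes to channel-head binding.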
toXiv_bot_toot