2026-02-16 19:10:34
RE: https://flipboard.com/@cbcnews/ottawa-dpigmqdrz/-/a-ApOA11EGSkSHy-B2OkyHQQ:a:107108217-/0
The direct non-translation is suboptimal for communicating their meaning…
i’m reviewing a paper on reducing energy costs in large model training and it keeps slinging words like optimize and optimization around and calling other approaches suboptimal and i feel like i would be kind of an old crank if i were to ask if optimality is on the table here (it is not)
EDIT: hold on, maybe it is
Not sure if you all saw this. The things people are doing so they can keep programming themselves while the company thinks they're using a stochastic-parrot agent https://danq.me/2026/03/03/ai-agent-logging/
There's still room to tune - shallow memory throughput is definitely suboptimal due to latency issues I need to chase - but with deep memory ngscopeclient ThunderScope is getting some pretty impressive performance.
2 channels @ 50M point memory depth (100M points per trigger) streaming at 7.5 WFM/s over LAN from across the building through a router. 40Gbase-SR4 from client to core switch and from core switch to router, then back to core switch, then 10Gbase-SR to the machine host…
I really think online petitions are a tool of #Demokratie that has so far been used only suboptimally. There's a lot more potential there!
The Fediverse (with its disproportionately politically engaged users) could help. Unfortunately, the Fediverse support of the common petition platforms is unsatisfactory (see images).
But that's a solvable problem. For example, one could…
T1: One-to-One Channel-Head Binding for Multivariate Time-Series Imputation
Dongik Park, Hyunwoo Ryu, Suahn Bae, Keondo Park, Hyung-Sin Kim
https://arxiv.org/abs/2602.21043 https://arxiv.org/pdf/2602.21043 https://arxiv.org/html/2602.21043
arXiv:2602.21043v1 Announce Type: new
Abstract: Imputing missing values in multivariate time series remains challenging, especially under diverse missing patterns and heavy missingness. Existing methods suffer from suboptimal performance as corrupted temporal features hinder effective cross-variable information transfer, amplifying reconstruction errors. Robust imputation requires both extracting temporal patterns from sparse observations within each variable and selectively transferring information across variables--yet current approaches excel at one while compromising the other. We introduce T1 (Time series imputation with 1-to-1 channel-head binding), a CNN-Transformer hybrid architecture that achieves robust imputation through Channel-Head Binding--a mechanism creating one-to-one correspondence between CNN channels and attention heads. This design enables selective information transfer: when missingness corrupts certain temporal patterns, their corresponding attention pathways adaptively down-weight based on remaining observable patterns while preserving reliable cross-variable connections through unaffected channels. Experiments on 11 benchmark datasets demonstrate that T1 achieves state-of-the-art performance, reducing MSE by 46% on average compared to the second-best baseline, with particularly strong gains under extreme sparsity (70% missing ratio). The model generalizes to unseen missing patterns without retraining and uses a consistent hyperparameter configuration across all datasets. The code is available at https://github.com/Oppenheimerdinger/T1.
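The core idea in the abstract — each attention head bound one-to-one to a single CNN channel, so that a channel corrupted by missingness only degrades its own attention pathway — can be sketched roughly as follows. This is a loose illustration under my own assumptions (per-head attention across variables using only the bound channel's features), not the paper's actual implementation; see the linked repo for that.

```python
import numpy as np

def bound_head_attention(feats):
    """One-to-one channel-head binding, sketched.

    feats: (n_channels, n_vars, d) -- hypothetical layout where CNN
    channel c holds temporal-pattern features for every variable.
    Head c attends across variables using ONLY channel c's features,
    so a pattern corrupted by missingness degrades just its own head
    while the other heads' cross-variable connections stay intact.
    """
    C, V, D = feats.shape
    out = np.empty_like(feats)
    for c in range(C):                          # head c <-> channel c
        q = k = v = feats[c]                    # (n_vars, d)
        scores = q @ k.T / np.sqrt(D)           # variable-to-variable scores
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)           # softmax over variables
        out[c] = w @ v                          # cross-variable transfer
    return out
```

In a standard multi-head layer, a shared input projection mixes all channels before the heads split, so every head sees every corrupted pattern; the binding removes that mixing, which is presumably what lets the model down-weight pathways selectively.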