Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@cowboys@darktundra.xyz
2026-01-25 14:35:32

2026 NFL Draft Prospect Cole Wisniewski Fits Christian Parker's Scheme insidethestar.com/2026-nfl-dra

@mariyadelano@hachyderm.io
2026-01-25 19:03:53

RE: techhub.social/@shantini/11595
Being a marketer shaped my progressive politics more than I expected precisely because of this.
Once you see how much effort is being spent on marketing certain worldviews to you and how much of that can be studied, analyzed, and replicated - you can’t unsee it.
And you see the power that’s available for all of us to tap into to push back. The same kinds of marketing and communication tactics used against us can be used to amplify science, art, pro-social values, and progressive policy.
The right has been waging a coordinated campaign of swaying public opinion since at least the birth of the Federalist Society and backlash to Roe.
Their legal influence required creating an information and media apparatus that influenced first elite professional networks, then the public at large.
(For a recent example, just look at how much LLMs and AI have been relying on constant marketing and media attention for anyone to believe that these tools are “inevitable” or even “useful”. Their marketing and PR departments work very hard and are very well funded. For a reason.)

@relcfp@mastodon.social
2026-01-26 00:59:13

NEW PROGRAM> Call for Applications to attend the Black Buddhism Faculty Training Summer '26 #acrel networks.h-net.org/group/annou

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:45:51

Test-Time Training with KV Binding Is Secretly Linear Attention
Junchen Liu, Sven Elflein, Or Litany, Zan Gojcic, Ruilong Li
arxiv.org/abs/2602.21204 arxiv.org/pdf/2602.21204 arxiv.org/html/2602.21204
arXiv:2602.21204v1 Announce Type: new
Abstract: Test-time training (TTT) with KV binding as sequence modeling layer is commonly interpreted as a form of online meta-learning that memorizes a key-value mapping at test time. However, our analysis reveals multiple phenomena that contradict this memorization-based interpretation. Motivated by these findings, we revisit the formulation of TTT and show that a broad class of TTT architectures can be expressed as a form of learned linear attention operator. Beyond explaining previously puzzling model behaviors, this perspective yields multiple practical benefits: it enables principled architectural simplifications, admits fully parallel formulations that preserve performance while improving efficiency, and provides a systematic reduction of diverse TTT variants to a standard linear attention form. Overall, our results reframe TTT not as test-time memorization, but as learned linear attention with enhanced representational capacity.
toXiv_bot_toot
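
The abstract above claims that TTT with key-value binding reduces to a form of learned linear attention. A minimal numerical sketch of the simplest version of that equivalence, assuming a zero-initialized fast-weight matrix and a single gradient step on a squared-error memorization loss; names and shapes are illustrative, not the paper's architecture:

# Illustrative sketch only, not the paper's formulation.
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 4                      # sequence length, head dimension
K = rng.normal(size=(T, d))      # keys
V = rng.normal(size=(T, d))      # values
q = rng.normal(size=(d,))        # a single query
eta = 0.1                        # fast-weight learning rate

# TTT-style view: memorize the key->value mapping with one gradient step
# on L(W) = 0.5 * sum_t ||W k_t - v_t||^2, starting from W = 0.
W0 = np.zeros((d, d))
grad = sum(np.outer(W0 @ K[t] - V[t], K[t]) for t in range(T))
W1 = W0 - eta * grad             # equals eta * sum_t outer(v_t, k_t)
ttt_out = W1 @ q

# Linear-attention view: o = eta * sum_t (k_t . q) v_t (unnormalized).
lin_attn_out = eta * sum((K[t] @ q) * V[t] for t in range(T))

print(np.allclose(ttt_out, lin_attn_out))   # True

Token-by-token online updates add correction terms on top of this batched case (the delta rule), so the exact equivalence shown here is only the simplest instance of the reduction the abstract describes.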

@bilbo_le_hobbit@mamot.fr
2026-01-24 13:58:47

Taming the #eau (water) and learning to swim.
My new element for the past year. The kid from more than forty years ago who panicked a bit too long in the deep end would be astonished by the progress made, and would wonder why he waited so long to walk through the doors of a swimming pool again...

Lane line at the Pontivy swimming pool after the pools have closed. The pool lights are reflected on the surface.

@michabbb@social.vivaldi.net
2025-12-25 22:05:58

#Mistral Small 3.2: 99.3% defect reduction (validity 98.8% → 99.99%)
• Ministral 3B: 100% defect reduction → perfect 100% validity
🛠️ Common issues fixed automatically: trailing commas after last element, unescaped control characters in strings, missing closing brackets, various syntax errors that break JSON parsers, prefixed text like "Here's the data you requested:"
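
For context on the defect classes listed in the post, a rough standalone sketch (a hypothetical helper, not Mistral's or any model's actual repair pipeline) of how such defects are commonly patched before parsing:

import json
import re

def naive_json_repair(text: str) -> str:
    """Illustrative cleanup for the defect classes above; not production code."""
    # 1. Drop prefixed prose ("Here's the data you requested:") before the payload.
    starts = [i for i in (text.find("{"), text.find("[")) if i != -1]
    if starts:
        text = text[min(starts):]
    # 2. Remove trailing commas after the last element of an object or array.
    #    (Naive: would also touch a ",}" that sits inside a string literal.)
    text = re.sub(r",\s*([}\]])", r"\1", text)
    # 3. Append closing brackets for any that were opened but never closed.
    opens = []
    for ch in text:
        if ch in "{[":
            opens.append(ch)
        elif ch in "}]" and opens:
            opens.pop()
    text += "".join("}" if ch == "{" else "]" for ch in reversed(opens))
    return text

broken = 'Here is the data: {"items": [1, 2, 3,], "ok": true'
print(json.loads(naive_json_repair(broken)))   # {'items': [1, 2, 3], 'ok': True}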

@Techmeme@techhub.social
2026-02-23 20:35:47

IBM shares fall 12% after Anthropic outlined in a blog post how Claude Code can automate the exploration and analysis phases of COBOL modernization (Pia Singh/CNBC)
cnbc.com/2026/02/23/ibm-is-the

@relcfp@mastodon.social
2026-01-22 07:11:54

NEW PROGRAM> Call for Applications to attend the Black Buddhism Faculty Training Summer '26 networks.h-net.org/group/annou

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:40:51

T1: One-to-One Channel-Head Binding for Multivariate Time-Series Imputation
Dongik Park, Hyunwoo Ryu, Suahn Bae, Keondo Park, Hyung-Sin Kim
arxiv.org/abs/2602.21043 arxiv.org/pdf/2602.21043 arxiv.org/html/2602.21043
arXiv:2602.21043v1 Announce Type: new
Abstract: Imputing missing values in multivariate time series remains challenging, especially under diverse missing patterns and heavy missingness. Existing methods suffer from suboptimal performance as corrupted temporal features hinder effective cross-variable information transfer, amplifying reconstruction errors. Robust imputation requires both extracting temporal patterns from sparse observations within each variable and selectively transferring information across variables--yet current approaches excel at one while compromising the other. We introduce T1 (Time series imputation with 1-to-1 channel-head binding), a CNN-Transformer hybrid architecture that achieves robust imputation through Channel-Head Binding--a mechanism creating one-to-one correspondence between CNN channels and attention heads. This design enables selective information transfer: when missingness corrupts certain temporal patterns, their corresponding attention pathways adaptively down-weight based on remaining observable patterns while preserving reliable cross-variable connections through unaffected channels. Experiments on 11 benchmark datasets demonstrate that T1 achieves state-of-the-art performance, reducing MSE by 46% on average compared to the second-best baseline, with particularly strong gains under extreme sparsity (70% missing ratio). The model generalizes to unseen missing patterns without retraining and uses a consistent hyperparameter configuration across all datasets. The code is available at github.com/Oppenheimerdinger/T1.
toXiv_bot_toot
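
A rough numpy reading of the channel-head binding idea from the abstract, assuming per-channel features for each variable and one attention head per CNN channel; the shapes, names, and cross-variable attention layout are assumptions for illustration, not the authors' implementation:

# One plausible reading of "one-to-one channel-head binding"; hypothetical shapes.
import numpy as np

rng = np.random.default_rng(0)
N, C, d = 6, 4, 8                     # variables, CNN channels == attention heads, feature dim
feats = rng.normal(size=(C, N, d))    # hypothetical per-channel features, one row per variable

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# One-to-one binding: head c attends across variables using only channel c's
# features, so a channel corrupted by missingness degrades its own pathway
# without polluting the heads bound to unaffected channels.
outputs = np.empty_like(feats)
for c in range(C):
    q = k = v = feats[c]                      # (N, d) cross-variable self-attention
    attn = softmax(q @ k.T / np.sqrt(d))      # (N, N) variable-to-variable weights
    outputs[c] = attn @ v

print(outputs.shape)   # (4, 6, 8): one output stream per channel-head pair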

@relcfp@mastodon.social
2026-01-22 06:07:06

NEW PROGRAM> Call for Applications to attend the Black Buddhism Faculty Training Summer '26 networks.h-net.org/group/annou