Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@villavelius@mastodon.online
2026-02-03 11:35:12

X described the probe as "politically-motivated"
I'm pretty sure it's nowhere near as politically motivated as the content recommended by X's algorithm.
bbc.com/news/articles/ce3ex925

@arXiv_physicsatomph_bot@mastoxiv.page
2025-12-03 07:51:59

Effect of the avoided crossing on the rovibrational energy levels, resonances, and predissociation lifetimes within the ground and first excited electronic states of lithium fluoride
V. G. Ushakov, A. Yu. Ermilov, E. S. Medvedev
arxiv.org/abs/2512.02085

@arXiv_csGR_bot@mastoxiv.page
2026-02-02 08:35:40

Screen, Match, and Cache: A Training-Free Causality-Consistent Reference Frame Framework for Human Animation
Jianan Wang, Nailei Hei, Li He, Huanzhen Wang, Aoxing Li, Haofen Wang, Yan Wang, Wenqiang Zhang
arxiv.org/abs/2601.22160 arxiv.org/pdf/2601.22160 arxiv.org/html/2601.22160
arXiv:2601.22160v1 Announce Type: new
Abstract: Human animation aims to generate temporally coherent and visually consistent videos over long sequences, yet modeling long-range dependencies while preserving frame quality remains challenging. Inspired by the human ability to leverage past observations for interpreting ongoing actions, we propose FrameCache, a training-free three-stage framework consisting of Screen, Cache, and Match. In the Screen stage, a multi-dimensional, quality-aware mechanism with adaptive thresholds dynamically selects informative frames; the Cache stage maintains a reference pool using a dynamic replacement-hit strategy, preserving both diversity and relevance; and the Match stage extracts behavioral features to perform motion-consistent reference matching for coherent animation guidance. Extensive experiments on standard benchmarks demonstrate that FrameCache consistently improves temporal coherence and visual stability while integrating seamlessly with diverse baselines. Despite these encouraging results, further analysis reveals that its effectiveness depends on baseline temporal reasoning and real-synthetic consistency, motivating future work on compatibility conditions and adaptive cache mechanisms. Code will be made publicly available.
toXiv_bot_toot
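
To make the three-stage idea in the abstract above concrete, here is a minimal Python sketch of a Screen/Cache/Match loop. Everything in it is an assumption: the ReferencePool class, the novelty-based screening test, the least-hit eviction rule, and the cosine-similarity matching are illustrative stand-ins for the paper's quality-aware screening, replacement-hit caching, and motion-consistent matching (the authors' code has not yet been released).

import numpy as np

class ReferencePool:
    """Illustrative fixed-size cache of reference-frame features.
    Thresholds, scores, and the replacement rule are invented for
    this sketch; they are not taken from the paper."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.frames = []   # cached frame feature vectors
        self.hits = []     # per-entry hit counts, used as the eviction signal

    def screen(self, frame_feat, recent_mean, tau=0.5):
        # Screen: keep a frame only if it adds information beyond the
        # recent average; tau stands in for the adaptive threshold.
        return np.linalg.norm(frame_feat - recent_mean) > tau

    def cache(self, frame_feat):
        # Cache: insert while there is room, otherwise evict the
        # least-hit entry (one plausible "replacement-hit" strategy).
        if len(self.frames) < self.capacity:
            self.frames.append(frame_feat)
            self.hits.append(0)
        else:
            victim = int(np.argmin(self.hits))
            self.frames[victim] = frame_feat
            self.hits[victim] = 0

    def match(self, motion_feat):
        # Match: return the cached reference most similar to the
        # current motion feature (cosine similarity here).
        if not self.frames:
            return None
        sims = [motion_feat @ f / (np.linalg.norm(motion_feat) * np.linalg.norm(f) + 1e-8)
                for f in self.frames]
        best = int(np.argmax(sims))
        self.hits[best] += 1
        return self.frames[best]

# Toy loop over random "frame features".
pool = ReferencePool()
recent = np.zeros(16)
for _ in range(32):
    feat = np.random.randn(16)
    if pool.screen(feat, recent):
        pool.cache(feat)
    recent = 0.9 * recent + 0.1 * feat
ref = pool.match(np.random.randn(16))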

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:10

Exploiting ID-Text Complementarity via Ensembling for Sequential Recommendation
Liam Collins, Bhuvesh Kumar, Clark Mingxuan Ju, Tong Zhao, Donald Loveland, Leonardo Neves, Neil Shah
arxiv.org/abs/2512.17820 arxiv.org/pdf/2512.17820 arxiv.org/html/2512.17820
arXiv:2512.17820v1 Announce Type: new
Abstract: Modern Sequential Recommendation (SR) models commonly utilize modality features to represent items, motivated in large part by recent advancements in language and vision modeling. To do so, several works completely replace ID embeddings with modality embeddings, claiming that modality embeddings render ID embeddings unnecessary because they can match or even exceed ID embedding performance. On the other hand, many works jointly utilize ID and modality features, but posit that complex fusion strategies, such as multi-stage training and/or intricate alignment architectures, are necessary for this joint utilization. However, underlying both these lines of work is a lack of understanding of the complementarity of ID and modality features. In this work, we address this gap by studying the complementarity of ID- and text-based SR models. We show that these models do learn complementary signals, meaning that either should provide performance gain when used properly alongside the other. Motivated by this, we propose a new SR method that preserves ID-text complementarity through independent model training, then harnesses it through a simple ensembling strategy. Despite this method's simplicity, we show it outperforms several competitive SR baselines, implying that both ID and text features are necessary to achieve state-of-the-art SR performance but complex fusion architectures are not.
toXiv_bot_toot
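
The abstract's central move, training ID and text models independently and then combining their item scores with a simple ensemble, can be illustrated in a few lines. The z-score normalization and the blending weight alpha below are assumptions made for this sketch; the abstract does not spell out the exact ensembling rule.

import numpy as np

def ensemble_scores(id_scores, text_scores, alpha=0.5):
    # Blend item scores from two independently trained SR models.
    # Z-normalize each model's scores so neither dominates purely by
    # scale (an illustrative choice, not necessarily the paper's).
    def z(s):
        s = np.asarray(s, dtype=float)
        return (s - s.mean()) / (s.std() + 1e-8)
    return alpha * z(id_scores) + (1.0 - alpha) * z(text_scores)

# Toy usage: rank five candidate items for one user sequence.
id_scores = [2.1, 0.3, 1.7, -0.4, 0.9]    # from an ID-embedding model
text_scores = [0.2, 1.9, 1.1, 0.5, -0.3]  # from a text-encoder model
ranked = np.argsort(-ensemble_scores(id_scores, text_scores))
print(ranked)  # item indices, best first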

@mariyadelano@hachyderm.io
2026-01-08 22:28:48

Resource 1 for how to talk about current political events more effectively:
These cheat sheets on messaging and framing from ASO Communications are fantastic. Right now I highly recommend the one called “Here to Stay: How to Talk About MAGA’s Authoritarian Agenda on Immigration”.
#USpol #politics #communications #change #socialJustice #USA