Tootfinder

Opt-in global Mastodon full-text search.

@NFL@darktundra.xyz
2025-12-23 04:36:30

49ers vs. Colts takeaways: Brock Purdy's 5 TDs put Philip Rivers, Indy on the brink nytimes.com/athletic/6912776/2

@cowboys@darktundra.xyz
2025-12-23 16:51:25

Prescott, Pickens among 5 Cowboys named to 2026 Pro Bowl Games cowboyswire.usatoday.com/story

@memeorandum@universeodon.com
2025-12-17 17:25:51

DOJ must release Epstein files by Friday or risk repercussions, law's co-author says (NBC News)
nbcnews.com/politics/justice-d
memeorandum.com/251217/p75#a25

@NFL@darktundra.xyz
2026-01-20 16:02:21

NFL playoffs: Patriots went 3-0 with AFC Championship referee in 2025; Rams are 7-0 with NFC title game ref

cbssports.com/nfl/news/nfl-pla

@servelan@newsie.social
2025-11-17 05:24:06

So, who did he throw under the bus?
In reversal, Trump says House Republicans should vote to release Epstein files – NBC Los Angeles
nbclosangeles.com/news/nationa

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:10

Exploiting ID-Text Complementarity via Ensembling for Sequential Recommendation
Liam Collins, Bhuvesh Kumar, Clark Mingxuan Ju, Tong Zhao, Donald Loveland, Leonardo Neves, Neil Shah
arxiv.org/abs/2512.17820 arxiv.org/pdf/2512.17820 arxiv.org/html/2512.17820
arXiv:2512.17820v1 Announce Type: new
Abstract: Modern Sequential Recommendation (SR) models commonly utilize modality features to represent items, motivated in large part by recent advancements in language and vision modeling. To do so, several works completely replace ID embeddings with modality embeddings, claiming that modality embeddings render ID embeddings unnecessary because they can match or even exceed ID embedding performance. On the other hand, many works jointly utilize ID and modality features, but posit that complex fusion strategies, such as multi-stage training and/or intricate alignment architectures, are necessary for this joint utilization. However, underlying both these lines of work is a lack of understanding of the complementarity of ID and modality features. In this work, we address this gap by studying the complementarity of ID- and text-based SR models. We show that these models do learn complementary signals, meaning that either should provide performance gain when used properly alongside the other. Motivated by this, we propose a new SR method that preserves ID-text complementarity through independent model training, then harnesses it through a simple ensembling strategy. Despite this method's simplicity, we show it outperforms several competitive SR baselines, implying that both ID and text features are necessary to achieve state-of-the-art SR performance but complex fusion architectures are not.
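The abstract above only says the method trains ID- and text-based models independently and then combines them with a "simple ensembling strategy"; it does not specify the rule. A minimal sketch of one plausible instance, assuming each model emits per-item relevance scores and using min-max normalization with a mixing weight `alpha` (both are illustrative assumptions, not details from the paper):

```python
import numpy as np

def ensemble_scores(id_scores, text_scores, alpha=0.5):
    """Combine candidate-item scores from an ID-based and a
    text-based sequential recommender for one user.

    Hypothetical rule: min-max normalize each model's scores so
    their scales are comparable, then take a weighted average.
    """
    def norm(s):
        lo, hi = s.min(), s.max()
        # Degenerate case: all scores equal -> contribute nothing.
        return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)
    return alpha * norm(id_scores) + (1.0 - alpha) * norm(text_scores)

# Toy example: scores over 4 candidate items for one user.
id_scores = np.array([2.0, 0.5, 1.0, 3.0])    # from the ID-embedding model
text_scores = np.array([0.1, 0.9, 0.4, 0.2])  # from the text-embedding model
combined = ensemble_scores(id_scores, text_scores)
top_item = int(np.argmax(combined))           # recommended item index
```

Because the two models are trained independently, any such post-hoc combination preserves whatever complementary signal each learned, which is the paper's stated motivation for avoiding complex fusion architectures.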

@Techmeme@techhub.social
2025-12-09 04:20:57

Scientists at NeurIPS, which drew a record 26,000 attendees this year, say key questions about how AI models work and how to measure them remain unresolved (Jared Perlo/NBC News)
nbcnews.com/tech/tech-news/ai-

@Mediagazer@mstdn.social
2025-12-04 18:50:55

MS NOW, formerly MSNBC, plans to launch membership subscriptions in the summer of 2026, offering events, curated insights, and moderated community spaces (Sara Fischer/Axios)
axios.com/2025/12/04/ms-now-ms

@seeingwithsound@mas.to
2025-12-31 10:05:06

Towards precise synthetic neural codes: high-dimensional stimulation with flexible electrodes #BCI

@NFL@darktundra.xyz
2025-11-06 16:25:52

Super Bowl predictions, midseason update: Seven NFL favorites to win it all in February nfl.com/news/super-bowl-lx-pre