Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@NFL@darktundra.xyz
2025-06-26 20:34:42

SB LIX performer arrested for halftime protest espn.com/nfl/story/_/id/455917

@villavelius@mastodon.online
2025-05-26 08:39:43

social.cwts.nl/@LudoWaltman/11
Always publish an open preprint first, and then, if needed, submit to a journal. If the preprint is open anyway, it probably doesn't matter whether your chosen journal is open access (of any flavour) or subscription-…

@Techmeme@techhub.social
2025-06-25 14:10:40

Paraform, which operates a hiring marketplace with AI-powered candidate relationship management and other tools, raised a $20M Series A led by Felicis (FinSMEs)
finsmes.com/2025/06/paraform-r

@memeorandum@universeodon.com
2025-06-28 02:01:22

Billionaire and radio host John Catsimatidis prefers Eric Adams over fellow Republican in mayor's race (Emily Ngo/Politico)
politico.com/news/2025/06/27/b
memeorandum.com/250627/p163#a2

@arXiv_csFL_bot@mastoxiv.page
2025-05-27 07:19:37

A note on Automatic Baire property
Ludwig Staiger
arxiv.org/abs/2505.18626 arxiv.org/pdf/2505.18626

@arXiv_mathCO_bot@mastoxiv.page
2025-06-26 09:34:20

On likelihood of a Condorcet winner for uniformly random and independent voter preferences
Boris Pittel
arxiv.org/abs/2506.20613

@pbloem@sigmoid.social
2025-06-26 10:56:22

After pre-training, we finetune on real-world data. We observe that the models pre-trained with noise converge much more quickly than a baseline trained from scratch.
Moreover, on the other datasets, the UP models retain their zero-shot performance during finetuning. This suggests that there may be a generalization benefit to using a UP model.
All this is at the expense of much longer training, but that cost can be amortized over many tasks.

[Image: results of the finetuning experiment on six datasets (linux, code, dyck, wp, german and ndfa), comparing the baseline and UP-trained models finetuned on two datasets.]

The results show that the UP models converge quicker, and that they retain most of their zero-shot performance on the other datasets.
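
To make the setup described above concrete, here is a minimal PyTorch sketch of that kind of experiment. It is not the authors' code: a tiny next-token model is first trained on random token sequences (a stand-in for whatever noise source the UP models actually used), then finetuned on more structured data, while an identical baseline trains on that data from scratch. All names (TinyLM, noise_batch, real_batch), the model size, and the step counts are illustrative assumptions, and a toy run like this will not necessarily reproduce the reported effect.

# Minimal sketch (assumptions, not the authors' code): pre-train a tiny LM on
# random "noise" sequences, finetune it on structured data, and compare its
# loss curve against an identical baseline trained from scratch.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, SEQ, DIM = 256, 64, 128  # illustrative sizes

class TinyLM(nn.Module):
    """Tiny next-token model: embedding -> GRU -> vocabulary logits."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

def lm_loss(model, batch):
    # Next-token prediction: predict token t+1 from tokens <= t.
    logits = model(batch[:, :-1])
    return F.cross_entropy(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))

def noise_batch(n=32):
    # Pre-training data: uniformly random token sequences (placeholder noise).
    return torch.randint(0, VOCAB, (n, SEQ))

def real_batch(n=32):
    # Stand-in for real-world data: sequences built from repeated short motifs.
    motif = torch.randint(0, VOCAB, (n, 8))
    return motif.repeat(1, SEQ // 8)

def train(model, batch_fn, steps, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    losses = []
    for _ in range(steps):
        loss = lm_loss(model, batch_fn())
        opt.zero_grad(); loss.backward(); opt.step()
        losses.append(loss.item())
    return losses

torch.manual_seed(0)

# "UP" model: pre-trained on noise, then finetuned on the structured data.
up_model = TinyLM()
train(up_model, noise_batch, steps=200)            # pre-training phase (noise only)
up_curve = train(up_model, real_batch, steps=100)  # finetuning phase

# Baseline: identical architecture, trained on the structured data from scratch.
baseline = TinyLM()
base_curve = train(baseline, real_batch, steps=100)

print("loss after 20 finetuning steps:",
      round(up_curve[19], 3), "(UP) vs", round(base_curve[19], 3), "(scratch)")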

@arXiv_csDC_bot@mastoxiv.page
2025-06-27 08:47:09

Scalable GPU Performance Variability Analysis framework
Ankur Lahiry, Ayush Pokharel, Seth Ockerman, Amal Gueroudji, Line Pouchard, Tanzima Z. Islam
arxiv.org/abs/2506.20674

@NFL@darktundra.xyz
2025-06-25 22:30:02

Taylor Swift has surprise performance at Tight End University in Tennessee nfl.com/news/taylor-swift-has-

@NFL@darktundra.xyz
2025-06-25 21:04:28

Taylor Swift joins Kane Brown in surprise performance at Tight End University espn.com/nfl/story/_/id/455835