Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@johnleonard@mastodon.social
2026-03-02 13:41:48

Rebuilding public trust in AI requires meaningful citizen engagement, transparent governance, and robust legislation. Technology itself is not the problem; the issue is that few people trust institutions to deploy it wisely and for their benefit. That makes the first step answering the question: what's in it for me?

@azonenberg@ioc.exchange
2026-02-02 16:01:42

So, I have an answer to my previous question about GPU transfer efficiency.
Original code: write data to a staging buffer on the CPU, vkCmdCopyBuffer it to GPU-local memory, then run the int-to-float32 conversion on the GPU out of that buffer. During the copy operation, 50% of warp slots in active SMs are occupied by compute warps and 50% sit unallocated.
GPU memory write bandwidth is sitting around 2%, with about 1.9 ms of combined copy/shader run time.

Nsight Systems plot of shader execution time

@fgraver@hcommons.social
2026-03-29 11:38:24

The Computer Science Fetish mail.cyberneticforests.com/the

@kexpmusicbot@mastodonapp.uk
2026-02-03 14:13:03

🇺🇦 #NowPlaying on #KEXP's #Early
Confidence Man:
🎵 Angry Girl
#ConfidenceMan
confidenceman.bandcamp.com/tra
open.spotify.com/track/2PXULQ9

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 10:12:22

Training data generation for context-dependent rubric-based short answer grading
Pavel Šindelář, Dávid Slivka, Christopher Bouma, Filip Prášil, Ondřej Bojar
arxiv.org/abs/2603.28537 arxiv.org/pdf/2603.28537 arxiv.org/html/2603.28537
arXiv:2603.28537v1 Announce Type: new
Abstract: Every 4 years, the PISA test is administered by the OECD to test the knowledge of teenage students worldwide and allow for comparisons of educational systems. However, the need to avoid language differences and annotator bias makes the grading of student answers challenging. For these reasons, it would be interesting to compare methods of automatic student answer grading. To train those of these methods that require machine learning, or to compute parameters or select hyperparameters for those that do not, a large amount of domain-specific data is needed. In this work, we explore a small number of methods for creating a large-scale training dataset using only a relatively small confidential dataset as a reference, leveraging a set of very simple derived text formats to preserve confidentiality. Using these methods, we successfully created three surrogate datasets that are, at the very least, superficially more similar to the reference dataset than the result of purely prompt-based generation. Early experiments suggest one of these approaches might also lead to improved model training.

@jswright61@ruby.social
2026-01-30 16:11:40

I plan to bow out of NYT #Wordle after tomorrow. I'll miss the friendly competition with the #OldGal & #YoungPups. Their announcement that they'll start reusing previous answers as of Monday, Feb 2 was enough t…

@june_thalia_michael@literatur.social
2026-02-26 18:33:26

My motor skills are completely shot, or in other words: listen to me yelling at my computer, "Will I ever stop erasing the wrong thing?!"

@heiseonline@social.heise.de
2026-02-06 17:00:15

A few more of the #News shared here most frequently recently:
Cyberattack hits the IT systems of the police evidence storage unit

@frankel@mastodon.top
2026-03-28 09:04:46

Should I Switch From #Git to #Jujutsu?
etodd.io/2025/10/02/should-i-s

@heiseonline@social.heise.de
2026-02-05 16:45:15

Some of the #News shared here most frequently recently:
Cyberattack hits the IT systems of the police evidence storage unit