Rebuilding public trust in AI requires meaningful citizen engagement, transparent governance, and robust legislation. Technology itself is not the problem; the issue is that few people trust institutions to deploy it wisely and for their benefit. That makes the first step answering the question: what's in it for me?
So, I have an answer to my previous question about GPU transfer efficiency.
Original code: write data to a staging buffer on the CPU, vkCmdCopyBuffer to GPU-local memory, then run the int→float32 conversion on the GPU out of that buffer. During the copy, compute warps show 50% SM occupancy, with the other 50% of warp slots in active SMs unallocated.
GPU memory write bandwidth sits at around 2%, with a copy/shader run time of about 1.9 ms.
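A quick sanity check on those counters is to back out the effective bandwidth from the transfer size and time. The buffer size (16 MiB) below is a hypothetical placeholder, not a figure from the log; the point is only the arithmetic relating time, size, and utilization:

```python
# Back-of-envelope bandwidth check for a staging-buffer copy.
# buffer_bytes is an assumed placeholder, NOT taken from the log above.
buffer_bytes = 16 * 1024 * 1024   # assumed transfer size: 16 MiB
elapsed_s = 1.9e-3                # measured copy + shader time from the log

# Effective write bandwidth achieved during the transfer.
effective_gbps = buffer_bytes / elapsed_s / 1e9
print(f"effective write bandwidth: {effective_gbps:.1f} GB/s")

# If the profiler reports this as ~2% utilization, the implied peak is:
implied_peak_gbps = effective_gbps / 0.02
print(f"implied peak bandwidth: {implied_peak_gbps:.0f} GB/s")
```

With these assumed numbers the implied peak lands in a plausible range for a desktop GPU; plugging in the real buffer size would show whether the 2% figure is consistent with the 1.9 ms timing or points at a measurement artifact.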
The Computer Science Fetish https://mail.cyberneticforests.com/the-computer-science-fetish/
Training data generation for context-dependent rubric-based short answer grading
Pavel Šindelář, Dávid Slivka, Christopher Bouma, Filip Prášil, Ondřej Bojar
https://arxiv.org/abs/2603.28537 https://arxiv.org/pdf/2603.28537 https://arxiv.org/html/2603.28537
arXiv:2603.28537v1 Announce Type: new
Abstract: Every 4 years, the PISA test is administered by the OECD to test the knowledge of teenage students worldwide and allow for comparisons of educational systems. However, having to avoid language differences and annotator bias makes the grading of student answers challenging. For these reasons, it would be interesting to compare methods of automatic student answer grading. To train some of these methods, which require machine learning, or to compute parameters or select hyperparameters for those that do not, a large amount of domain-specific data is needed. In this work, we explore a small number of methods for creating a large-scale training dataset using only a relatively small confidential dataset as a reference, leveraging a set of very simple derived text formats to preserve confidentiality. Using these methods, we successfully created three surrogate datasets that are, at the very least, superficially more similar to the reference dataset than purely the result of prompt-based generation. Early experiments suggest one of these approaches might also lead to improved model training.
Anker announces Thus, a compute-in-memory chip it says will bring on-device AI to its products and accessories, starting with its upcoming Soundcore earbuds (John Higgins/The Verge)
https://www.theverge.com/tech/916463/anker-thus-chip-announcement
My motor skills are completely shot, or in other words: listen to me yelling at my computer, "Will I ever stop erasing the wrong thing?!"
A look at Politico's Dasha Burns, soon to be its Global Anchor in addition to other journalistic roles, as the outlet molds her into a creator-like figure (Corbin Bolies/The Wrap)
https://www.thewrap.com/media-platforms/journalism/dasha-burns-politico-i…
2026's Costco-value late-round NFL draft picks, plus examining the 49ers' confidence https://www.nytimes.com/athletic/7240037/2026/04/29/2026-nfl-draft-late-round-values-49ers-reach-consensus/