A few more of the #News items shared here especially often recently:
Cyberattack affects the IT of the police evidence storage facility
Call the GI Rights Hotline at 1-877-447-4487.
Call for yourself or someone you care about
Free and confidential
One hotline for a nationwide network of counseling centers
https://girightshotline.org/
My department is looking to hire a professor of practice in CS, with a focus on AI. Job posting below. If you have questions, I'll do my best to answer them or find someone who can! We are in Providence, with easy commute access from Boston.
https://
"A pair of US lawmakers are calling for an investigation into how easily spies can steal information based on devices’ electromagnetic and acoustic leaks—a spying trick the NSA once codenamed TEMPEST"
https://www.wired.com/story/how-vulnerable
When cybercrime shows that truly no one is spared. 🫠 A ransomware attack on Werkstatt Bremen has also affected the IT systems of the police evidence storage facility.
Read the article: https://heise.de/-11165825?wt_mc=sm.re
The Computer Science Fetish https://mail.cyberneticforests.com/the-computer-science-fetish/
So, I have an answer to my previous question about GPU transfer efficiency.
Original code: write data to a staging buffer on the CPU, vkCmdCopyBuffer to GPU-local memory, then run an int-to-float32 conversion shader out of that buffer. During the copy, profiling shows 50% SM occupancy by compute warps and 50% unallocated warp slots in active SMs.
GPU memory write bandwidth sits at around 2% of peak, with a combined copy/shader run time of about 1.9 ms.
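Utilization figures like these can be sanity-checked with a quick back-of-the-envelope calculation: bytes moved divided by elapsed time gives achieved bandwidth, which you then compare against the device's theoretical peak. A minimal sketch, where the buffer size (256 MiB) and the assumed peak bandwidth (448 GB/s) are illustrative placeholders, not figures from the post:

```python
def achieved_bandwidth_gbps(bytes_moved: int, seconds: float) -> float:
    """Effective throughput of a transfer, in GB/s."""
    return bytes_moved / seconds / 1e9

def utilization(achieved_gbps: float, peak_gbps: float) -> float:
    """Fraction of theoretical peak bandwidth actually used."""
    return achieved_gbps / peak_gbps

# Hypothetical numbers: a 256 MiB staging copy finishing in 1.9 ms,
# measured against an assumed 448 GB/s peak memory bandwidth.
bw = achieved_bandwidth_gbps(256 * 2**20, 1.9e-3)
util = utilization(bw, 448.0)
```

If a single copy of a known size accounts for the full 1.9 ms, a number like this makes it easy to tell whether the 2% figure reflects the transfer itself or idle time around it.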
Rebuilding public trust in AI requires meaningful citizen engagement, transparent governance, and robust legislation. Technology itself is not the problem; the issue is that few people trust institutions to deploy it wisely and for their benefit. That makes the first step answering the following question: what's in it for me?
Training data generation for context-dependent rubric-based short answer grading
Pavel Šindelář, Dávid Slivka, Christopher Bouma, Filip Prášil, Ondřej Bojar
https://arxiv.org/abs/2603.28537 https://arxiv.org/pdf/2603.28537 https://arxiv.org/html/2603.28537
arXiv:2603.28537v1 Announce Type: new
Abstract: Every four years, the OECD administers the PISA test to assess the knowledge of teenage students worldwide and to allow comparisons between educational systems. However, the need to avoid language differences and annotator bias makes grading student answers challenging. For these reasons, it would be interesting to compare methods of automatic student answer grading. To train those methods that require machine learning, or to compute parameters or select hyperparameters for those that do not, a large amount of domain-specific data is needed. In this work, we explore a small number of methods for creating a large-scale training dataset using only a relatively small confidential dataset as a reference, leveraging a set of very simple derived text formats to preserve confidentiality. Using these methods, we successfully created three surrogate datasets that are, at the very least, superficially more similar to the reference dataset than the result of purely prompt-based generation. Early experiments suggest one of these approaches may also lead to improved model training.