Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@heiseonline@social.heise.de
2026-02-06 17:00:15

A few more of the #News shared most frequently here recently:
Cyberattack affects the IT of the police evidence storage facility

Call the GI Rights Hotline at 1-877-447-4487.
Call for yourself or someone you care about

Free and confidential

One hotline for a nationwide network of counseling centers
girightshotline.org/

@shriramk@mastodon.social
2026-04-06 14:50:57

My department is looking to hire a professor of practice in CS, with a focus on AI. Job posting below. If you have questions I'll do my best to answer them, else find someone who can! We are in Providence, easy commute access from Boston.

@metacurity@infosec.exchange
2026-03-04 19:06:21

"A pair of US lawmakers are calling for an investigation into how easily spies can steal information based on devices’ electromagnetic and acoustic leaks—a spying trick the NSA once codenamed TEMPEST"
wired.com/story/how-vulnerable

@heiseonline@social.heise.de
2026-02-05 16:45:15

Some of the #News shared most frequently here recently:
Cyberattack affects the IT of the police evidence storage facility

@heiseonline@social.heise.de
2026-02-06 10:27:09

When cybercrime shows that truly no one is spared. 🫠 A ransomware attack on Werkstatt Bremen has also affected the IT systems of the police evidence storage facility.
Read the article: heise.de/-11165825?wt_mc=sm.re

The image shows a hand on a keyboard. Text in the image reads: "Crime tools secured, computers not
Cyberattack hits the police evidence storage facility in Bremen". Below that: "After a ransomware attack on Werkstatt Bremen, the IT of the police evidence storage facility is also affected. The public prosecutor's office has opened an investigation."

@fgraver@hcommons.social
2026-03-29 11:38:24

The Computer Science Fetish mail.cyberneticforests.com/the

@azonenberg@ioc.exchange
2026-02-02 16:01:42

So, I have an answer to my previous question about GPU transfer efficiency.
Original code: write data to staging buffer on CPU, vkCopyBuffer to GPU local memory, run int-float32 conversion on GPU out of that buffer. The copy operation shows 50% SM occupancy by compute warps, 50% unallocated warp slots in active SMs.
GPU memory write bandwidth is sitting around 2%, about 1.9 ms copy/shader run time.
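A rough CUDA analogue of the pipeline this post describes may help make it concrete: data written to a host staging buffer, copied into device-local memory, then widened from integers to float32 by a small compute kernel. The post uses Vulkan (staging buffer, `vkCopyBuffer`, compute shader); everything below, including buffer sizes and names, is an illustrative sketch, not the author's code.

```cuda
#include <cuda_runtime.h>
#include <cstdint>
#include <cstdio>

// Conversion pass: analogous to the post's int -> float32 compute shader.
__global__ void int16_to_float32(const int16_t *in, float *out, size_t n) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = (float)in[i];  // per-sample widening conversion
}

int main() {
    const size_t n = 1 << 20;

    // Pinned host buffer plays the role of the Vulkan staging buffer.
    int16_t *staging;
    cudaMallocHost(&staging, n * sizeof(int16_t));
    for (size_t i = 0; i < n; i++)
        staging[i] = (int16_t)(i & 0x7fff);

    int16_t *d_in;
    float   *d_out;
    cudaMalloc(&d_in,  n * sizeof(int16_t));
    cudaMalloc(&d_out, n * sizeof(float));

    // Host -> device-local copy (the vkCopyBuffer step), then the kernel
    // consumes the freshly copied buffer.
    cudaMemcpy(d_in, staging, n * sizeof(int16_t), cudaMemcpyHostToDevice);
    int16_to_float32<<<(unsigned)((n + 255) / 256), 256>>>(d_in, d_out, n);
    cudaDeviceSynchronize();

    // Spot-check one converted sample.
    float sample;
    cudaMemcpy(&sample, d_out + 12345, sizeof(float), cudaMemcpyDeviceToHost);
    printf("%.1f\n", sample);

    cudaFree(d_in);
    cudaFree(d_out);
    cudaFreeHost(staging);
    return 0;
}
```

The occupancy issue the post measures would show up in the profiler for the copy/kernel phases; this sketch only reproduces the data flow, not the profiling.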

Nsight Systems plot of shader execution time

@johnleonard@mastodon.social
2026-03-02 13:41:48

Rebuilding public trust in AI requires meaningful citizen engagement, transparent governance, and robust legislation. Technology itself is not the problem. The issue is that few people trust institutions to deploy it wisely and for their benefit. That makes the first step answering the question: what's in it for me?

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 10:12:22

Training data generation for context-dependent rubric-based short answer grading
Pavel Šindelář, Dávid Slivka, Christopher Bouma, Filip Prášil, Ondřej Bojar
arxiv.org/abs/2603.28537 arxiv.org/pdf/2603.28537 arxiv.org/html/2603.28537
arXiv:2603.28537v1 Announce Type: new
Abstract: Every 4 years, the PISA test is administered by the OECD to test the knowledge of teenage students worldwide and allow for comparisons of educational systems. However, having to avoid language differences and annotator bias makes the grading of student answers challenging. For these reasons, it would be interesting to compare methods of automatic student answer grading. To train some of these methods, which require machine learning, or to compute parameters or select hyperparameters for those that do not, a large amount of domain-specific data is needed. In this work, we explore a small number of methods for creating a large-scale training dataset using only a relatively small confidential dataset as a reference, leveraging a set of very simple derived text formats to preserve confidentiality. Using these methods, we successfully created three surrogate datasets that are, at the very least, superficially more similar to the reference dataset than purely the result of prompt-based generation. Early experiments suggest one of these approaches might also lead to improved model training.