Maybe I should mention how well it does.
It has caught some bugs before they were published, some fairly serious. But mostly it has hallucinated plenty of problems that aren't really there.
With all the code we write now being reviewed by robots, the code gets written slightly differently, to stop the reviewer going on about issues that aren't really there.
People imagine that code is written for the computer to run, but really it's always been written for programmers to understand. Now it's also written with an eye on the fact that it will be reviewed by an AI that has no context or understanding and is tasked with nit-picking.
Arguably this ends up producing better code. Mostly more unnecessary re-validation of everything, but it also takes longer rather than being quicker.
Is that more efficient? 🤷 Sort of maybe?
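For illustration, the kind of change I mean (a hypothetical sketch, not from our actual codebase): a function that re-checks invariants its callers already guarantee, purely so the review bot stops flagging it.

```python
# Hypothetical example: defensive re-validation added mainly to satisfy an
# AI reviewer, even though callers already guarantee these invariants.

def apply_discount(price_cents: int, percent: int) -> int:
    """Return price_cents reduced by percent, rounded down."""
    # Callers already validate these, but the review bot complains otherwise,
    # so the checks are repeated here.
    if price_cents < 0:
        raise ValueError("price_cents must be non-negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100


if __name__ == "__main__":
    # 2000 cents with a 15% discount -> 1700 cents
    print(apply_discount(2000, 15))
```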
3/3
Introducing CQ: A C-like API for Quantum Accelerated HPC
Oliver Thomson Brown, Mateusz Meller, James Richings
https://arxiv.org/abs/2508.10854
#TGIQF: The quiz on video game film adaptations
Video game film adaptations are becoming ever more popular. A quiz on the adaptations of Mario Bros, Resident Evil, DOOM, Warcraft and co.
Putting the finishing touches on my presentation at @… next weekend. "The #Fediverse: Embracing the Hacker Ethos for a Decentralized Social Media Experience". We may even have a special guest join us, @…
I know some complain about George R.R. Martin and the speed at which he writes his book series: he started writing "A Song of Ice and Fire" in 1991 and has so far published five big books (1996, 1998, 2000, 2005, 2011), with two more planned.
Beginners.
Ever heard of Donald Knuth? He started writing "The Art of Computer Programming" in 1962, planned as 12 chapters. So far, six and a half chapters have been published, in five big books (1968, 1969, 1973, 2011, 2023), with thre…
Apple keynote: these are the products still hanging in the balance
Apple's iPhone presentation for this year takes place on Tuesday. Besides phones, at least new smartwatches are expected. But what about other products?
h…