
2025-09-17 19:33:34
Anthony Fantano on politics in music: “Advocating that art should avoid any topic generally speaking is just lame” https://musictech.com/news/music/anthony-fantano-music-politics/?utm_source…
Oh, did you think the israel model would be confined to the Middle East? Why should it? Didn’t all our Western governments just declare it kosher?
First, they came for the Palestinians… https://kolektiva.social/@DoomsdaysCW/115034832497483253
Shoddy excuse 1: “Maybe it was technically a work device, provided by and owned by the employer, and she shouldn’t have used it for personal things”
Then both the employer and the tech vendors involved should have made it not only possible but easy by default for her to quarantine her personal use.
The onus for that kind of opsec should not fall to individuals. That’s just ducking accountability, the kind of cognitive outsourcing large players use to dodge the costs of running their own org.
This is basically “we don’t need to install railings on our corporate balconies; employees just need to never fall off.”
2/
It's 2025. If your library still doesn't have types in it (parameter, return, and property), I assume it's abandoned and I should not use it.
There are no exceptions to this statement. Not typing your PHP code in 2025 is irresponsible. No, docblocks are not good enough.
#PHP
We should acknowledge that if there is an "Epstein client list",
there’s a good chance Donald Trump is on it.
“I’ve known Jeff for fifteen years. Terrific guy,” Trump said in 2002.
“He’s a lot of fun to be with. It is even said that he likes beautiful women as much as I do, and many of them are on the younger side.”
Is that why Bondi won’t release the list?
Quite the enticing possibility.
If there ever was a case made for conspiracy theorists,…
@… Comments should describe intent or purpose. If a specific algorithm is used, it doesn't hurt if you name it or refer to it.
If code is meticulously crafted with a specific goal, such as performance, it might also be useful to mention. But good naming is definitely the most important part of good code.
For some tech we should try turning it off and leaving it off.
Testing it rn
@… https://social.growyourown.services/@FediTips/115220480846307303
I got a 404media subscription, and you should too. But honestly, there's a lot of stuff that I won't read these days (for my own mental well-being). This is one of those things. The headline is enough, I don't need to know more.
https://hachyderm.io/@evacide/11486976…
Hey, just a reminder that Thursday 18th there’s the “IPTC Photo Metadata Conference 2025”, a free Zoom event open to all. I’m a speaker (focusing on #C2PA). It looks like it should be interesting for anyone interested in the tangle of issues about media provenance, #genAI, publishing workflow, social media, and…
I remember Terence Stamp's "The Limey" fondly - I should rewatch it, as I have now forgotten the plot.
@… @… Looks interesting, I should check it out!
I really would love to abandon Kubernetes and run into the hills, but everyone wants it… I find it very frustrating.
OK, this is more what I was expecting from my Sweet Peas; I think there's a couple of different varieties in this really high-grade Lidl pack of sweet peas. Next year I should get some from a better #gardening supplier.
TL;DR: what if instead of denying the harms of fascism, we denied its suppressive threats of punishment
Many of us have really sharpened our denial skills since the advent of the ongoing pandemic (perhaps you even hesitated at the word "ongoing" there and thought "maybe I won't read this one, it seems like it'll be tiresome"). I don't say this as a preface to a fiery condemnation or a plea to "sanity" or a bunch of evidence of how bad things are, because I too have honed my denial skills in these recent years, and I feel like talking about that development.
Denial comes in many forms, including strategic information avoidance ("I don't have time to look that up right now", "I keep forgetting to look into that", "well this author made a tiny mistake, so I'll click away and read something else", "I'm so tired of hearing about this, let me scroll farther", etc.), strategic dismissal ("look, there's a bit of uncertainty here, I should ignore this", "this doesn't line up perfectly with my anecdotal experience, it must be completely wrong", etc.), and strategic forgetting ("I don't remember what that one study said exactly; it was painful to think about", "I forgot exactly what my friend was saying when we got into that argument", etc.). It's in fact a kind of skill that you can get better at, along with the complementary skill of compartmentalization. It can of course be incredibly harmful, and a huge genre of fables exists precisely to highlight its harms, but it also has some short-term psychological benefits, chiefly in the form of muting anxiety. This is not an endorsement of denial (the harms can be catastrophic), but I want to acknowledge that there *are* short-term benefits. Via compartmentalization, it's even possible to be honest with ourselves about some of our own denials without giving them up immediately.
But as I said earlier, I'm not here to talk you out of your denials. Instead, given that we are so good at denial now, I'm here to ask you to be strategic about it. In particular, we live in a world awash with propaganda/advertising that serves both political and commercial ends. Why not use some of our denial skills to counteract that?
For example, I know quite a few people in complete denial of our current political situation, but those who aren't (including myself) often express consternation about just how many people in the country are supporting literal fascism. Of course, logically that appearance of widespread support is going to be partly a lie, given how much our public media is beholden to the fascists or outright on their side. Finding better facts on the true level of support is hard, but in the meantime, why not be in denial about the "fact" that Trump has widespread popular support?
To give another example: advertisers constantly barrage us with messages about our bodies and weight, trying to keep us insecure (and thus in the mood to spend money to "fix" the problem). For sure cutting through that bullshit by reading about body positivity etc. is a better solution, but in the meantime, why not be in denial about there being anything wrong with your body?
This kind of intentional denial certainly has its own risks (our bodies do actually need regular maintenance, for example, so complete denial on that front is risky) but there's definitely a whole lot of misinformation out there that it would be better to ignore. To the extent such denial expands to a more general denial of underlying problems, this idea of intentional denial is probably just bad. But I sure wish that in a world where people (including myself) routinely deny significant widespread dangers like COVID-19's long-term risks or the ongoing harms of escalating fascism, they'd at least also deny some of the propaganda keeping them unhappy and passive. Instead of being in denial about US-run concentration camps, why not be in denial that the state will be able to punish you for resisting them?
iPhone 17 Pro review: has a solid battery life, brighter screen outdoors, and doesn't get hot but it's heavier than usual and Siri needs to catch up with rivals (The Verge)
https://www.theverge.com/tech/779265/iphone-17-pro-max-review
The 21 Worst Things About the Worst Movie of the Year, ‘War of the Worlds’ https://flip.it/2eHWai
I don’t know — this kinda makes me want to see it…
Seems to me Coke should put pages of a certain file and list on their bottles, just print them on there. It would really get the information to the people.
Could be so refreshing.
source: https://trumpstruth.org/statuses/32028
Read all about Chatcontrol and why we should resist it. This sentence is extremely scary.... "Specifically, this AI will have to be built by WhatsApp/Meta so they can monitor us on behalf of the EU" 😱
https://berthub.eu/articles/posts/chatcontrol-in-brief/
It's Monday but I don't want it to be. What should I do?
#AskFedi
Should you consider printing? Check out this video from Simon Baxter.
Disclaimer: I also print at home but only in A4. But holding a photo in your hands is REALLY different than seeing it on any screen.
https://youtu.be/_xOGHjR16Fc?si=SzlMk-MfcWRHRfxE
Every time I see someone saying something like "we should bring this back" or "how come no website/app does this anymore" I just go like "bet" and implement it into my own website because fuck the modern internet "standards" set by shitty companies I'm gonna use my free will to do whatever I want (and so should you).
One of the most worrying things I find about LLMs at the moment is how normalised it's becoming to just chat to it like a friend or confidant and even so far as to ask it for therapy. I don't mean amongst tech bros and nerds, I mean amongst normal people.
It just seems so reckless. Aside from the problem of hallucinations, these things are built to have an agenda. No sane person should ever ever ever ask Zuckerberg, Musk or Altman for a mental health vibe check
This one is around the corner from my office. I never suspected what it might look like on the inside.
https://www.instagram.com/zillowgonewild/p/DMMMikYumtR/
MS Teams is terrible. It should be euthanized.
"Acosta would later say that he offered Epstein the plea deal because he was told Epstein “belonged to intelligence,” that the matter was “above his paygrade” and that he should “leave it alone.”"
Kash Patel unloads all blame for botched Epstein probe onto ex-Trump official - Raw Story
https://www.rawstory.com/jeffrey-epstein-2673998704/
It's good to see The Economist taking the impact of climate change on the #AMOC seriously. It isn't a particularly good article - no info on likely timescales, no sources - but it is something that should worry us all. Here's Carbon Brief's recent piece:
I wanna change artist name to a sexier more professional one. Also this whole online persona I created doesn't fit me anymore. Should I take down my website? It's a big expense and it doesn't satisfy me anymore. Was fun but I think its time has come.
Curbs? Due to genocide?
Better than nothing, I guess. But the thing you should really do would be to shut it completely down. @…
https://assortedflotsam.co…
in a discussion of the wikipedia pope announcement overload incident https://lobste.rs/s/wajnta/wikipedia_outage_report_for_may_s_pope, i see mention of a “pope predictor”
surely it should be called
hope on a pope
Shoddy excuse 2: “Maybe it was her personal device but she shouldn’t have agreed to let the employer install remote management software”
Could she really say no to such a demand? Not everyone has the power to refuse a coercive demand from an employer. There should be a strong legal firewall around employer demands to surveil people’s lives. And there should be mechanisms for an employee who is met with such demands to put the brakes on their employer.
I realize the word “should” is doing a lot of work there, but let’s at least be clear about how wrong that situation would be.
3/
I baked a pumpkin cheesecake, and it should be ready and set in a week.
Update: IT CRACKED. F#$%!
I feel like computers are getting more like cars, or... how I view cars.
I do not need/want to spend a ton of money on an everything-computerized car with every sensor and camera and screen there is, because I just don't think I need it or that it's worth it for how much I drive.
I get that some people want that, and should probably have it, but I've got like 35 years of driving experience and I think I do okay.
@… @… The last reporter to leave should leave a sign with "Croatoan" on it for future civilizations to ponder.
Don't miss today's packed Metacurity for a ton of critical infosec developments you should know, including
--UK to spend $1.1 billion relocating Afghan helpers following data breach
--DOGE worker published the private key for four dozen-plus LLMs,
--US gov't IT contractor to pay $14.75m fine for overstated cyber services,
--Italian cops arrest Romanian behind 'Diskstation' ransomware gang,
--OMB readies post-quantum standard,
--MSFT'…
Series A, Episode 10 - Breakdown
CALLY: Two of them would flatten any one of us for about a hundred hours.
AVON: If he comes round, he'll flatten all of us for a good deal longer than that. He ought to be put under restraint.
https://blake.torpidity.net/m/110/6 B7B2
Spent a few hours cleaning out the fridge on my M1 Mac Mini today. It’s now in an ok state and can resume its local server role. But man, there’s zero joy in using macOS anymore. Just paper cut after paper cut. Or should I say joy in using OS X? I guess that’s when it actually was nice to use. Even with the limitations, iOS feels less claustrophobic – perhaps because it never had the freedom to begin with, so you don’t run into all the restrictions in the same way. I got the Mac because it w…
Dignan's analysis: Apple's AI lag is as serious as Microsoft's "mobile fail." Perplexity as a cure-all? Hardly. But Tim Cook urgently needs a story for the capital markets, even if it costs 20 billion dollars. #TechDebatte #Apple
What a rubbish idea:
Large corporations already hide too much information from their shareholders and the public.
This would make that hiding easier and make company performance more opaque.
It doubles the time that corporate ill deeds and management failures can be hidden.
It is a dumb idea - but coming from one of the great scammers in our corporate world, we should be glad that he did not suggest yearly, or longer, reporting.
"Trump renews push to end co…
I hope I don't become the sort of person that thinks the country should return to how it was when they were 17, and never change from that point.
But I might.
"Each summer brings an annual farce, in which an increasingly hot country pretends it is anything but."
🎁.
Britain is already a hot country. It should act like it
https://www.economist.com/britain/2025/07/03/britain-is-already-a-hot-country-it-should-act-like-it?giftId=5f3749d8-a0fc-46fe-ac93-69f1a875e5bb&utm_campaign=gifted_article
@… Energy should always be produced as close as possible to where it is consumed, and the equipment owned by the consumers. It is essential to life and should not be used to underpin political or economic power. I cannot understand the farmers of Caithness, whose only thought was to sell underproductive land to the power companies, instead of making it availa…
@… Should turn it into a proper autoscaler at some point, but it works :)
I am so glad that other people are doing the hard work of proving that calling LLMs "AI" is bullshit and that having them write code is just plain dumb. https://infosec.exchange/@teriradichel/115033555848878022
I accidentally made another basket. Should I list it on Makerworld or Etsy? #openscad #3dprinting
Related to understanding firearms: "rifles" and handguns tend to be rifled. Rifling is grooving that runs in a helical pattern down the barrel. When purchasing a firearm, it's important to check the rifling.
First check that the firearm is unloaded. Empty or remove the magazine, cycle the weapon. Next, check again that it's unloaded by looking both down the barrel and into the magazine. Now, shine a light down the barrel and look down it. In the absence of a light, you may be able to reflect light off your thumbnail.
Rifling should look as though it's drawn on with a sharp pencil, and the barrel should look otherwise completely smooth and clean. If the rifling looks like bumpy mountains, then the owner probably used corrosive ammo and didn't clean it enough. It will probably still shoot, but not at all accurately.
Both the rifling and the pin can be used in forensic analysis to match a bullet to a gun. I don't honestly know how accurate this is because a lot of forensic "science" is just made up stuff that relies on the CSI effect and doesn't actually work as advertised.
However, not all firearms are rifled. Shotguns are "smoothbore" firearms, meaning they lack rifling. It is not possible to perform this kind of forensic analysis on a smoothbore firearm. It *is* possible to check for powder on the hands of someone who has used a firearm within the last few days, but it's not possible to distinguish between firing inside and outside a range.
I've been gathering all kinds of tidbits like this, partially just out of curiosity and partially because I've been wanting to write a story about a revolutionary group fighting a modern authoritarian society. I'm always happy to learn other bits, if anyone has anything else I could throw in my narrative (whenever I finally get back to writing it).
Testing the influence of anisotropic CR transport and the Galactic magnetic field structure on the all-sky gamma-ray emission
Julien D\"orner, Jonas Hellrung, Julia Becker Tjus, Horst Fichtner
https://arxiv.org/abs/2507.12074
Every company that has ever been involved with fascism, slavery, or exploitation of labor should have open statements like this on their website
But I'm impressed that BMW is this blunt about it — and has actual pictures of Dachau
https://www.bmwgroup.com/en/company/histo…
The year is 2027.
Google: "Should we remove HTML from the web platform?
You know it's kind of old and we have barely any resources to keep it running, and there's chatbots anyway."
People: "What?!"
Google: "Well, clearly you don't have anything to say." [deletes code]
While coding, I've been watching Regulatory Standards Bill submissions. I note a scarce few (like 1-4 per session out of 17-18) pro-bill submissions. Most are emphatically against. I also note that, so far, all pro submitters are either white male individuals (many quite old), or Atlas Network affiliated think tanks like the New Zealand Initiative. This bill should fail spectacularly. If it doesn't, this gov't needs to go immediately.
Me to my mom: “I’m out for a walk and a tiny rabbit just ran across my feet!”
Her: “You should catch it and cook it!”
In case you were wondering where I get it from…
In his op ed in the WaPo,
Jay Bhattacharya has shown that he should not be taken seriously. He’s DEFINITELY not a real scientist.
If people don’t like science should we ignore the potential it has?
That’s not how it works.
Science doesn’t care about your feelings.
https://
"""
But there is no certainty that madness was content to sit locked up in its immutable identity, waiting for psychiatry to perfect its art, before it emerged blinking from the shadows into the blinding light of truth. Nor is it clear that confinement was above all, or even implicitly, a series of measures put in place to deal with madness. It is not even certain that in this repetition of the ancient gesture of segregation at the threshold of the classical age, the modern world was aiming to wipe out all those who, either as a species apart or a spontaneous mutation, appeared as 'asocial'. The fact that the internees of the eighteenth century bear a resemblance to our modern vision of the asocial is undeniable, but it is above all a question of results, as the character of the marginal was produced by the gesture of segregation itself. For the day came when this man, banished in the same exile all over Europe in the mid-seventeenth century, suddenly became an outsider, expelled by a society to whose norms he could not be seen to conform; and for our own intellectual comfort, he then became a candidate for prisons, asylums and punishment. In reality, this character is merely the result of superimposed grids of exclusion.
The gesture that proscribed was as abrupt as the one that had isolated the lepers, and in both cases, the meaning of the gesture should not be mistaken for its effect. Lepers were not excluded to prevent contagion, any more than in 1657, 1 per cent of the population of Paris was confined merely to deliver the city from the 'asocial'. The gesture had a different dimension: it did not isolate strangers who had previously remained invisible, who until then had been ignored by force of habit. It altered the familiar cityscape by giving them new faces, strange, bizarre silhouettes that nobody recognised. Strangers were found in places where their presence had never previously been suspected: the process punctured the fabric of society, and undid the familiar. Through this gesture, something inside man was placed outside of himself, and pushed over the edge of our horizon. It is the gesture of confinement, in short, which created alienation.
"""
(Michel Foucault, History of Madness)
Took longer than it prob should have, but I now have a 4TB (3.6, really, but who's counting) "Fikwot" (aka Kikwot, according to my OS) NVMe SSD installed in my PC. Also the PC got some bonus cleaning with the 20gal air compressor.
This Meshify case from Fractal doing a great job. It's been a year since the build, but there was just some caked on surface dust. Little bit of brushing and blasts of air and the PC looks almost new again.
Reason it took longer than expe…
Cameron Ward, Shedeur Sanders and Jaxson Dart all impressed in NFL preseason debuts, but does it even matter?
https://www.cbssports.com/nfl/news/cameron…
Are you in #Prague? You should come to my talk, entitled:
Disobey: #FOSS tools to fight back!
https://pretalx.linuxdays.cz/linux…
Oh, and it doesn't matter for lots of reasons, but Doak 💯 was fouled in the box and it should have been a penalty.
Darren England was already in midseason form for cowardice. 7/6
#LFC
“How do I get to your app?”
“Oh, just enter our domain name in your URL bar.”
“I can’t find the URL bar… ?”
“In your browser app/window, it should be at the top or bottom.”
“I don’t think my browser has one.”
“It must. Which browser is it?”
“ChatGPT 6”
#TheWebWeReallyLost
Whatever the situation is, I’ll stand fairly firm on this: it should not have been possible for any employer to put her in the situation as described, and the onus for preventing such situations must not fall to the individual.
/end
Why Data Anonymization Has Not Taken Off
Matthew J. Schneider, James Bailie, Dawn Iacobucci
https://arxiv.org/abs/2509.10165 https://arxiv.org/pdf/2509.101…
Good proposal: a no-fly zone above Ukraine. NATO/EU can do this; it is non-escalatory and protects the weapons and materiel delivered from Europe from destruction.
https://edition.cnn.com/2025/09/15/europe/poland-nato-no-fly-zone-ukraine-russian-drones-i…
So hello all, is anyone on (#)GeForceNOW and not feeling any guilt about it?
Now why should you? I should cause I've seen the greatness of FLOSS.
Now you say what's FLOSS?
And to that I say, if you don't know what FLOSS is you're very lucky!
I know about FLOSS and that's why I feel bad when I'm using GeForceNOW but of course not too bad cause I'm enjoying using it...
(#)Hypocrisy I know, I know.
The Orbits of Isolated Dwarfs in the Local Group from New 3D Kinematics: Constraints on First Infall, Backsplash, and Quenching Mechanisms
Paul Bennet, Ekta Patel, Sangmo Tony Sohn, Andres del Pino, Roeland van der Marel, Mark Fardal, Kristine Spekkens, Laura Congrever Hunter, Gurtina Besla, Laura Watkins, Daniel Weisz
https://arxiv.org/ab…
hah, can't fire me if i don't have a job, suckers! https://mastodon.social/@PatrickoftheG/115199991716282130
'They’ll chip away at it gradually with bills that disenfranchise the “wrong” sort of people and mechanisms that make voting more difficult. But we should not mistake their ultimate objective. [W]hen they hint that they are interested in getting rid of women’s suffrage, we should take them very seriously indeed.'
Women’s suffrage is apparently up for debate again in America | Arwa Mahdawi | The Guardian
https://www.theguardian.com/commentisfree/2025/sep/13/womens-suffrage-week-in-patriarchy
This is yet another article that chronicles an LLM's bad effects on the psyche of a vulnerable person.
I hate the LLM's "Yes, great thought! Should we explore this more?" flattery. Every quote in that article feels alien in its tone - or obviously cult-like - to me.
But pride comes before the fall. So I'll rather not use the sycophantic brain-rot machine lest I risk getting drawn into delusions myself...
Will anyone review this paper? Screening, sorting, and the feedback cycles that imperil peer review
Carl T. Bergstrom, Kevin Gross
https://arxiv.org/abs/2507.10734
You’ve been given free access to this article from The Economist as a gift. You can open the link five times within seven days. After that it will expire.
Britain is already a hot country. It should act like it
https://www.economist.com/britain/2025/07/03/britain-is-already-a-hot-country-it-should-act-like-it?giftId=5f3749d8-a0fc-46fe-ac93-69f1a875e5bb&utm_campaign=gifted_article
The other thing I want to remedy on this laptop is that IBM in their great wisdom decided that I needed a SIM slot (with no WWAN option) but no microSD card reader. I reckon it should be possible to physically shove an SD slot behind the SIM hole, but the plumbing will be another story.
OpenAI restores GPT-4o as default for all paid ChatGPT users, vows "plenty of notice" if 4o is deprecated, raises GPT-5 Thinking rate limits to 3K messages/week (Carl Franzen/VentureBeat)
https://venturebeat.com/a…
Long post, game design
Crungle is a game designed to be a simple test of general reasoning skills that's difficult to play by rote memory, since there are many possible rule sets, but it should be easy to play if one can understand and extrapolate from rules. The game is not necessarily fair, with the first player often having an advantage or a forced win. The game is entirely deterministic, although a variant determines the rule set randomly.
This is version 0.1, and has not yet been tested at all.
Crungle is a competitive game for two players, each of whom controls a single piece on a 3x3 grid. The cells of the grid are numbered from 1 to 9, starting at the top left and proceeding across each row and then down to the next row, so the top three cells are 1, 2, and 3 from left to right, then the next three are 4, 5, and 6 and the final row is cells 7, 8, and 9.
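The numbering scheme above (cells 1-9, left to right, then top to bottom) has a simple arithmetic form; as a minimal sketch, `cell_to_rc` and `rc_to_cell` are hypothetical helper names, not part of the game's rules:

```python
def cell_to_rc(cell: int) -> tuple[int, int]:
    """Convert a Crungle cell number (1-9) to 0-based (row, col)."""
    return (cell - 1) // 3, (cell - 1) % 3

def rc_to_cell(row: int, col: int) -> int:
    """Convert 0-based (row, col) back to a cell number (1-9)."""
    return row * 3 + col + 1

print(cell_to_rc(1))      # (0, 0) -- top left
print(cell_to_rc(6))      # (1, 2) -- last cell of the second row
print(rc_to_cell(2, 0))   # 7 -- first cell of the bottom row
```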
The two players decide who shall play as purple and who shall play as orange. Purple goes first, starting the rules phase by picking one goal rule from the table of goal rules. Next, orange picks a goal rule. These two goal rules determine the two winning conditions. Then the players, starting with orange, alternate picking movement rules until four movement rules have been selected. During this process, at most one indirect movement rule may be selected. Finally, purple picks a starting location for orange (1-9), with 5 (the center) not allowed. Then orange picks the starting location for purple, which may not be adjacent to orange's starting position.
Alternatively, the goal rules, movement rules, and starting positions may be determined randomly, or a pre-determined ruleset may be selected.
If the ruleset makes it impossible to win, the players should agree to a draw. Either player may instead "bet" their opponent: if the opponent accepts the bet, the opponent must demonstrate a series of moves by both players that would result in a win for either player. If they can do this, they win; if they submit an invalid demonstration or cannot produce one, the player who "bet" wins.
Now that starting positions, movement rules, and goals have been decided, the play phase proceeds with each player taking a turn, starting with purple, until one player wins by satisfying one of the two goals, or until the players agree to a draw. Note that it's possible for both players to occupy the same space.
During each player's turn, that player identifies one of the four movement rules to use and names the square they move to using that rule, then they move their piece into that square and their turn ends. Neither player may use the same movement rule twice in a row (but it's okay to use the same rule your opponent just did unless another rule disallows that). If the movement rule a player picks moves their opponent's piece, they need to state where their opponent's piece ends up. Pieces that would move off the board instead stay in place; it's okay to select a rule that causes your piece to stay in place because of this rule. However, if a rule says "pick a square" or "move to a square" with some additional criteria, but there are no squares that meet those criteria, then that rule may not be used, and a player who picks that rule must pick a different one instead.
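The off-board clamping above (a piece that would leave the grid stays in place) can be sketched for one of the directional rules; `move_up` and `apply_up_rule` are hypothetical names for illustration, assuming the 1-9 cell numbering:

```python
def move_up(cell: int) -> int:
    """Move one cell up, or stay in place if already on the top row."""
    return cell - 3 if cell - 3 >= 1 else cell

def apply_up_rule(mine: int, theirs: int) -> tuple[int, int]:
    """Sketch of 'Move up one cell. Also move your opponent up one cell.'"""
    return move_up(mine), move_up(theirs)

# I move from 8 up to 5; my opponent on 2 (top row) stays in place.
print(apply_up_rule(8, 2))  # (5, 2)
```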
Any player who incorrectly states a destination for either their piece or their opponent's piece, picks an invalid square, or chooses an invalid rule has made a violation, as long as their opponent objects before selecting their next move. A player who makes at least three violations immediately forfeits and their opponent wins by default. However, if a player violates a rule but their opponent does not object before picking their next move, the stated destination(s) of the invalid move still stand, and the violation does not count. If a player objects to a valid move, their objection is ignored, and if they do this at least three times, they forfeit and their opponent wins by default.
Goal rules (each player picks one; either player can win using either chosen rule):
End your turn in the same space as your opponent three turns in a row.
End at least one turn in each of the 9 cells.
End five consecutive turns in the three cells in any single row, ending at least one turn on each of the three.
End five consecutive turns in the three cells in any single column, ending at least one turn on each of the three.
Within the span of 8 consecutive turns, end at least one turn in each of cells 1, 3, 7, and 9 (the four corners of the grid).
Within the span of 8 consecutive turns, end at least one turn in each of cells 2, 4, 6, and 8 (the central cells on each side).
Within the span of 8 consecutive turns, end at least one turn in the cell directly above your opponent, and end at least one turn in the cell directly below your opponent (in either order).
Within the span of 8 consecutive turns, end at least one turn in the cell directly to the left of your opponent, and end at least one turn in the cell directly to the right of your opponent (in either order).
End 12 turns in a row without ending any of them in cell 5.
End 8 turns in a row in 8 different cells.
Movement rules (each player picks two; either player may move using any of the four):
Move to any cell on the board that's diagonally adjacent to your current position.
Move to any cell on the board that's orthogonally adjacent to your current position.
Move up one cell. Also move your opponent up one cell.
Move down one cell. Also move your opponent down one cell.
Move left one cell. Also move your opponent left one cell.
Move right one cell. Also move your opponent right one cell.
Move up one cell. Move your opponent down one cell.
Move down one cell. Move your opponent up one cell.
Move left one cell. Move your opponent right one cell.
Move right one cell. Move your opponent left one cell.
Move any pieces that aren't in square 5 clockwise around the edge of the board 1 step (for example, from 1 to 2 or 3 to 6 or 9 to 8).
Move any pieces that aren't in square 5 counter-clockwise around the edge of the board 1 step (for example, from 1 to 4 or 6 to 3 or 7 to 8).
Move to any square reachable from your current position by a knight's move in chess (in other words, a square that's in an adjacent column and two rows up or down, or that's in an adjacent row and two columns left or right).
Stay in the same place.
Swap places with your opponent's piece.
Move back to the position that you started at on your previous turn.
If you are on an odd-numbered square, move to any other odd-numbered square. Otherwise, move to any even-numbered square.
Move to any square in the same column as your current position.
Move to any square in the same row as your current position.
Move to any square in the same column as your opponent's position.
Move to any square in the same row as your opponent's position.
Pick a square that's neither in the same row as your piece nor in the same row as your opponent's piece. Move to that square.
Pick a square that's neither in the same column as your piece nor in the same column as your opponent's piece. Move to that square.
Move to one of the squares orthogonally adjacent to your opponent's piece.
Move to one of the squares diagonally adjacent to your opponent's piece.
Move to the square opposite your current position across the middle square, or stay in place if you're in the middle square.
Pick any square that's closer to your opponent's piece than the square you're in now, measured using straight-line distance between square centers (this includes the square your opponent is in). Move to that square.
Pick any square that's further from your opponent's piece than the square you're in now, measured using straight-line distance between square centers. Move to that square.
If you are on a corner square (1, 3, 7, or 9) move to any other corner square. Otherwise, move to square 5.
If you are on an edge square (2, 4, 6, or 8) move to any other edge square. Otherwise, move to square 5.
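For concreteness, here's a minimal sketch (in Python; the post doesn't specify a language, and these helper names are my own) of the cell numbering the rules assume (1–9, left to right, top to bottom, 5 in the middle) and a few of the trickier movement rules:

```python
# Cells are numbered 1-9, left to right, top to bottom:
#   1 2 3
#   4 5 6
#   7 8 9

def to_rc(cell):
    """Cell number -> (row, col), both in 0..2."""
    return (cell - 1) // 3, (cell - 1) % 3

def to_cell(r, c):
    """(row, col) -> cell number."""
    return r * 3 + c + 1

# Clockwise order around the edge of the board.
EDGE_CW = [1, 2, 3, 6, 9, 8, 7, 4]

def rotate_edge(cell, clockwise=True):
    """One step around the edge; pieces in square 5 stay put."""
    if cell == 5:
        return 5
    i = EDGE_CW.index(cell)
    return EDGE_CW[(i + (1 if clockwise else -1)) % 8]

def knight_moves(cell):
    """Squares reachable from `cell` by a chess knight's move."""
    r, c = to_rc(cell)
    deltas = [(1, 2), (2, 1), (-1, 2), (-2, 1),
              (1, -2), (2, -1), (-1, -2), (-2, -1)]
    return sorted(to_cell(r + dr, c + dc)
                  for dr, dc in deltas
                  if 0 <= r + dr <= 2 and 0 <= c + dc <= 2)

def opposite(cell):
    """Square opposite across the middle square; 5 maps to itself."""
    return 10 - cell
```

Note that with this numbering, the knight-move rule gives a piece in the middle square no legal moves at all, which is worth keeping in mind when picking rules.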
Indirect movement rules (may be chosen instead of a direct movement rule; at most one per game):
Move using one of the other three movement rules selected in your game, and in addition, your opponent may not use that rule on their next turn (nor may they select it via an indirect rule like this one).
Select two of the other three movement rules, declare them, and then move as if you had used one and then the other, applying any additional effects of both rules in order.
Move using one of the other three movement rules selected in your game, but if the move would cause your piece to move off the board, instead of staying in place move to square 5 (in the middle).
Pick one of the other three movement rules selected in your game and apply it, but move your opponent's piece instead of your own piece. If that movement rule says to move "your opponent's piece," instead apply that movement to your own piece. References to "your position" and "your opponent's position" are swapped when applying the chosen rule, as are references to "your turn" and "your opponent's turn" and so on.
#Game #GameDesign
Not sure if I should make merch for this party, but it does seem really popular.
#FaceEatingLeopards #FaceEatingLeopardsParty #politics
Every republican in an interview should be asked about the statement, read the statement, and then asked to denounce it. Any who don’t denounce it should be told it’s unconstitutional and not debatable and given a 2nd opportunity to denounce it. Those who don’t should have their interviews ended.
-- Andrew Rothstein
https://
Idk how trails in the sky could be any longer. This game had sooo much text. Though perhaps full voice acting will make it less tedious because it was good but took so long.
https://www.…
I think it's a generally bad thing that I've referenced a specific Nazi death camp in reference to #USPol so often that I was thinking "oh, I should probably just bookmark this." I think that's a bad sign and I don't like it.
Meanwhile Norway had to recover from an attack that opened up a dam that should have remained closed.
It is almost redundant to note that these attacks are coming from Russia. https://infosec.exchange/@mayahustle/115031316869416935
When the nazis kill people, it's always non-violently. The Holocaust was a massive session of Kumbaya.
When non-nazis call them nazis, it's always violence and should be met with massive sessions of Kumbaya.
That's why Antifa is a terrorist organization, even though it's not an organization and never killed anybody.
That's also why anyone who dares to decry what the genocidal israelis are doing is a pro-hamas shill who needs a massive session of Kumbaya…
It should surprise no one that AI watermarking is not going to work.
#ai
I can't believe i removed my (#)AntitrustJoke
Went something like:
Google: Why is nobody putting pressure on us to allow alternative stores and haha like we would actually cave hehehe, no way...
EU: We agree, nobody should put pressure on you on any antitrust issues.
Google: What!? You defending us as well? Thanks! Then everybody will get shitty apps forever! Great idea! We love it!
EU: Glad to help. Anyway you got F-Droid hahahaha
Google: HAHAHAHA! We…
Obesity & diet
I wouldn't normally share a positive story about the new diet drugs, because I've seen someone get obsessed with them who was at a perfectly acceptable weight *by majority standards* (surprise: every weight is in fact perfectly acceptable by *objective* standards, because every "weight-associated" health risk is its own danger that should be assessed *in individuals*). I think two almost-contradictory things:
1. In a society shuddering under the burden of metastasized fatmisia, there's a very real danger in promoting the new diet drugs because lots of people who really don't need them will be psychologically bullied into using them and suffer from the cost and/or side effects.
2. For many individuals under the assault of our society's fatmisia, "just ignore it" is not a sufficient response, and also for specific people for whom decreasing their weight can address *specific* health risks/conditions that they *want* to address that way, these drugs can be a useful tool.
I know @… to be a trustworthy & considerate person, so I think it's responsible to share this:
#Fat #Diet #Obesity
Public service announcement:
I am blocking the babka.social instance as the person who runs it (Serge) is a Zionist and a genocide denier, who conflates being Jewish with being Zionist (“Zionism is a belief that more than 80% of Jews have. Talking about Zionists is talking about Jews.” – https://babka.social/@serge/1151605…
Who could have seen it coming that a whole country marches into supporting a genocide while claiming that genocide should happen “niemals wieder” (“never again”)
https://www.theguardian.com/commentisfree/2025/aug/13/german-media-outlets-israel-murder-journalists-gaza
It's so funny, my wife travels for work and is like, "It's amazing here! We [the whole family] should visit!"
Ah, those rose-colored traveling-alone glasses. I remember those! ...As opposed to the the-kids-have-been-yelling-and-screaming-at-each-other-for-over-an-hour-because-someone-was-humming-too-loudly-colored glasses. A 12-hour flight with these monsters? 🫠
One of the things it stressed is that we really don't know the motivation at this point. I think we're closer to "extremely online and weird" but we can't really say "he's a Groyper." I think this is pretty likely, but we should be honest that we don't actually know.
This is where I think it's important to realize we're operating in two spaces: reality and the meme space. The right no longer has a separation between the two. The meme they want to spread to attack the left is that it was somehow associated with leftists (especially trans folks), so this is the meme they believe.
It's useful, on a memetic level, to disrupt this. The Groyper meme is a pretty solid response. But, at least as far as I'm aware, we're still not sure that's true. But we don't have to collapse meme space and reality. We have absurdism.
We don't know if he's a Groyper, or even a far right troll, for sure anyway. But we do know that #JeffryEpstein killed #CharlieKirk. #TheTruthIsOutThere #ReleaseTheFiles! #FreeLuigi
Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](https://arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity/reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second year students writing interactive websites with JavaScript. Without a lot of careful setup (of a kind those students don't know how to do), the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask for help from an instructor or TA who helps them get rid of the stuff they don't understand and re-prompt or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
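For readers who haven't met the constructs mentioned above, here's a small illustration (my own examples, not from any student project) of two of them, while/else and the walrus operator, which an LLM will happily emit but which many intermediate students have never seen:

```python
import itertools

# while/else: the else branch runs only if the loop ended
# without hitting `break` -- a construct many students misread.
def smallest_factor(n):
    k = 2
    while k * k <= n:
        if n % k == 0:
            break  # found a factor; skip the else
        k += 1
    else:
        return n  # loop finished normally: n is prime
    return k

# Walrus operator (:=): bind and test a value in one expression.
def chunks(data, size):
    out = []
    it = iter(data)
    # Assigns the next slice to `chunk` and loops while it's non-empty.
    while chunk := list(itertools.islice(it, size)):
        out.append(chunk)
    return out
```

Both functions could be written with constructs a first-year student knows; the point is that AI-generated code tends to reach for the terser forms regardless of what the student has been taught.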
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
2/2
Imagine you're president, and someone in prison awaiting trial who had dirt on you dies suspiciously. Then ~6 years later people clamor for info you'd kill to suppress to be released anyway.
Might make you post: people should "not waste Time and Energy on Jeffrey Epstein, somebody that nobody cares about."
Rough Tuesday for Dopey McGropey.
His post from 3 days ago:
Here’s a mind-blowing experiment that you can try at home:
Gather some children’s blocks and place them on a table.
Take one block and slowly push it over the table’s edge, inch by inch, until it’s on the brink of falling.
If you possess patience and a steady hand, you should be able to balance it so that exactly half of it hangs off the edge.
Nudge it any farther, and gravity wins.
Now take two blocks and start over.
Stacking one on top of the other, how…
So if Trump is going to take over police forces of major cities (don't tell me "he can't, that's illegal!" because I have some bad news for you), why should municipalities fund those police forces? Like, you're a city working on your budget, why on earth would you allocate funding for something that you KNOW will be turned against you?
Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. In virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through, more than you would have if you were actually doing the work and taking pride in it. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them "haha how stupid to not check whether the books the AI reviewed for you actually existed!" but on a deeper level if we're honest we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* if the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich that continue to drag us all down this destructive path, and I think it's worth some thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason that it's not, is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.
her: *holds up a bag*
her: "Do you like this flatbread stuff?"
me: *shakes head*
me: "I don't think it should be called 'bread' if it is flat. It's basically a soggy cracker."
her: "It's flat bread!" 🙄
TL;DR: what if nationalism, not anarchy, is futile?
Since I had the pleasure of seeing the "what would anarchists do against a warlord?" argument again in my timeline, I'll present again my extremely simple proposed solution:
Convince the followers of the warlord that they're better off joining you in freedom, then kill or exile the warlord once they're alone or vastly outnumbered.
Remember that even in our own historical moment where nothing close to large-scale free society has existed in living memory, the warlord's promise of "help me oppress others and you'll be richly rewarded" is a lie that many understand is historically a bad bet. Many, many people currently take that bet, for a variety of reasons, and they're enough to coerce through fear an even larger number of others. But although we imagine, just as the medieval peasants might have imagined of monarchy, that such a structure is both the natural order of things and much too strong to possibly fail, in reality it takes an enormous amount of energy, coordination, and luck for these structures to persist! Nations crumble every day, and none has survived more than a couple *hundred* years, compared to pre-nation societies which persisted for *tens of thousands of years* if not more. In this bubbling froth of hierarchies, the notion that hierarchy is inevitable is certainly popular, but since there's clearly a bit of an ulterior motive to make (and teach) that claim, I'm not sure we should trust it.
So what I believe could form the preconditions for future anarchist societies to avoid the "warlord problem" is merely: a widespread common sense belief that letting anyone else have authority over you is morally suspect. Given such a belief, a warlord will have a hard time building any following at all, and their opponents will have an easy time getting their supporters to defect. In fact, we're already partway there, relative to the situation a couple hundred years ago. At that time, someone could claim "you need to obey my orders and fight and die for me because the Queen was my mother" and that was actually a quite successful strategy. Nowadays, this strategy is only still working in a few isolated places, and the idea that one could *start a new monarchy* or even resurrect a defunct one seems absurd. So why can't that same transformation from "this is just how the world works" to "haha, how did anyone ever believe *that*?" also happen to nationalism in general? I don't see an obvious reason why not.
Now I think one popular counterargument to this is: if you think non-state societies can win out with these tactics, why didn't they work for American tribes in the face of the European colonizers? (Or insert your favorite example of colonialism here.) I think I can imagine a variety of reasons, from the fact that many of those societies didn't try this tactic (and/or were hierarchical themselves), to the impacts of disease weakening those societies pre-contact, to the fact that with much-greater communication and education possibilities it might work better now, to the fact that most of those tribes are *still* around, and a future in which they persist longer than the colonist ideologies actually seems likely to me, despite the fact that so much cultural destruction has taken place. In fact, if the modern day descendants of the colonized tribes sow the seeds of a future society free of colonialism, that's the ultimate demonstration of the futility of hierarchical domination (I just read "Theory of Water" by Leanne Betasamosake Simpson).
I guess the TL;DR on this is: what if nationalism is actually as futile as monarchy, and we're just unfortunately living in the brief period during which it is ascendant?