TL;DR: what if nationalism, not anarchy, is futile?
Since I had the pleasure of seeing the "what would anarchists do against a warlord?" argument again in my timeline, I'll present again my extremely simple proposed solution:
Convince the followers of the warlord that they're better off joining you in freedom, then kill or exile the warlord once they're alone or vastly outnumbered.
Remember that even in our own historical moment, where nothing close to large-scale free society has existed in living memory, the warlord's promise of "help me oppress others and you'll be richly rewarded" is a lie that many understand is historically a bad bet. Many, many people currently take that bet, for a variety of reasons, and they're enough to coerce, through fear, an even larger number of others. But although we imagine, just as medieval peasants might have imagined of monarchy, that such a structure is both the natural order of things and much too strong to possibly fail, in reality it takes an enormous amount of energy, coordination, and luck for these structures to persist! Nations crumble every day, and none has survived more than a couple *hundred* years, compared to pre-nation societies which persisted for *tens of thousands of years* if not more. In this bubbling froth of hierarchies, the notion that hierarchy is inevitable is certainly popular, but since there's clearly a bit of an ulterior motive to make (and teach) that claim, I'm not sure we should trust it.
So what I believe could form the preconditions for future anarchist societies to avoid the "warlord problem" is merely: a widespread common-sense belief that letting anyone else have authority over you is morally suspect. Given such a belief, a warlord will have a hard time building any following at all, and their opponents will have an easy time getting their supporters to defect. In fact, we're already partway there, relative to the situation a couple hundred years ago. At that time, someone could claim "you need to obey my orders and fight and die for me because the Queen was my mother," and that was actually a quite successful strategy. Nowadays, this strategy is only still working in a few isolated places, and the idea that one could *start a new monarchy* or even resurrect a defunct one seems absurd. So why can't that same transformation from "this is just how the world works" to "haha, how did anyone ever believe *that*?" also happen to nationalism in general? I don't see an obvious reason why not.
Now I think one popular counterargument to this is: if you think non-state societies can win out with these tactics, why didn't they work for American tribes in the face of the European colonizers? (Or insert your favorite example of colonialism here.) I think I can imagine a variety of reasons, from the fact that many of those societies didn't try this tactic (and/or were hierarchical themselves), to the impacts of disease weakening those societies pre-contact, to the fact that with much-greater communication and education possibilities it might work better now, to the fact that most of those tribes are *still* around, and a future in which they persist longer than the colonist ideologies actually seems likely to me, despite the fact that so much cultural destruction has taken place. In fact, if the modern day descendants of the colonized tribes sow the seeds of a future society free of colonialism, that's the ultimate demonstration of the futility of hierarchical domination (I just read "Theory of Water" by Leanne Betasamosake Simpson).
I guess the TL;DR on this is: what if nationalism is actually as futile as monarchy, and we're just unfortunately living in the brief period during which it is ascendant?
Uspol, genocide
In case you're wondering whether "political violence" is escalating in the U.S.A. right now, of *course* it is, as we move into an era of concentration camps and domestic military deployments. But domestic genocides and purges, as well as political violence targeted at individual prominent figures, have been a *constant* throughout American history, from gun duels fought between political rivals, to massacres of Native Americans in order to steal their land, to pogroms against Catholics, to literal wars on local Black success and political participation, all dating back before the American Revolution to the beginning of colonization. Thanks to Wikipedia, here's a *small sampling*, where I attempted to whittle things down to about one event per decade before recent times.
Sources:
https://en.m.wikipedia.org/wiki/List_of_massacres_in_the_United_States
https://en.m.wikipedia.org/wiki/List_of_Indian_massacres_in_North_America
https://en.m.wikipedia.org/wiki/List_of_United_States_presidential_assassination_attempts_and_plots
https://en.m.wikipedia.org/wiki/List_of_incidents_of_political_violence_in_Washington,_D.C.
https://en.m.wikipedia.org/wiki/Mass_racial_violence_in_the_United_States
Killings, woundings, and plots against political figures:
Aaron Burr killing Alexander Hamilton in 1804
Sam Houston beats Rep. William Stanbery in 1832
Attempted Assassination of Andrew Jackson in 1835
Fight between Representatives Churchwell & Cullom in 1854
Caning of Sen. Charles Sumner in 1856
Brawl on the House floor in 1858
Assassination of President Abraham Lincoln in 1865
Assassination of President James A. Garfield in 1881
Assassination of President William McKinley in 1901
Attempted Assassination of William Howard Taft and Porfirio Díaz in 1909
Wounding of former President Theodore Roosevelt in 1912
Bombing of the U.S. Senate reception room in 1915
Attempted Assassination of President-Elect Herbert Hoover in 1928 (in Argentina)
Attempted Assassination of President-Elect Franklin D. Roosevelt in 1933
Attempted Assassination of President Harry S. Truman in 1947
Attempted Assassination of President Harry S. Truman in 1950
The United States Capitol Shooting in 1954
Planned Assassination of President-Elect John F. Kennedy in 1960
Assassination of President John F. Kennedy in 1963
Assassination of Dr. Martin Luther King Jr. in 1968
Weather Underground bombings in 1970, 1971, and 1975
Planned Assassination of President Richard Nixon in 1972 (Alabama Governor George Wallace was targeted & injured instead)
Planned Assassination of President Richard Nixon in 1974
Planned Assassination of President Gerald Ford in 1974
Attempted Assassinations (x2) of President Gerald Ford in 1975
Wounding of President Ronald Reagan in 1981
Attempted Kidnapping of Federal Reserve Board members in 1981
Planned Assassination of Former President George H. W. Bush in 1993 (in Kuwait)
Attempted Assassinations (x3) of President Bill Clinton in 1994
Attempted Assassination of President Bill Clinton in 1996
Anthrax attacks on US senators in 2001
Attempted Assassination of President George W. Bush in 2005 (in Tbilisi, Georgia)
Planned Assassination of President-Elect Barack Obama in 2008
Planned Assassination of President Barack Obama in 2009 (in Turkey)
Attempted Assassination of President Barack Obama in 2011
Shooting of Rep. Gabby Giffords in 2011
Planned Assassinations (x2) of President Barack Obama in 2012
Attempted Assassinations (x2) of President Barack Obama in 2013
Planned Assassination of President Barack Obama in 2015
Attempted Assassinations (x2) of President Donald Trump in 2017
Attempted Assassination of President Donald Trump in 2018
Pipe bombs mailed to Democratic leaders in 2018, including former President Barack Obama
Planned Assassination of President Barack Obama in 2019
Attempted Assassination of President Donald Trump in 2020
Kidnapping plot against Michigan Governor Gretchen Whitmer in 2020
Planned Assassination of Former President George W. Bush in 2022
Planned Assassination of Former President Barack Obama in 2023
Attempted Assassination of President Joe Biden in 2023
Planned Assassinations (x2) of Presidential Candidate Donald Trump in 2024
Wounding of Presidential Candidate Donald Trump in 2024
Massacres and other mass killings, mostly with genocidal motivations:
The Acoma Massacre in 1599
The Paspahegh Massacre in 1610
The Wessagusset affair in 1623
The Mystic Massacre in 1637
The Pound Ridge Massacre in 1644
The Susquehannock chiefs massacre in 1675
The Apalachee Massacre in 1704
The Massacre at Fort Narhantes in 1712
The Norridgewock Massacre in 1724
The Massacre at Walden in 1745
The Massacre at Walden in 1756
The Killings by the Paxton Boys in 1763
The Yellow Creek Massacre in 1774
The Gnadenhütten Massacre in 1782
The Canyon del Muerto Massacre in 1805
The Battle of Tallushatchee in 1813
The Philadelphia Nativist Riots in 1844
The Bloody Island Massacre in 1850
The Mountain Meadows Massacre in 1857
The Sand Creek Massacre in 1864
The Opelousas Massacre in 1868
The Chinese Massacre in 1871
The Election Riot of 1874
The Haymarket Affair in 1886
The Buffalo Gap Massacre in 1890
The Wilmington Massacre in 1898
The Atlanta Race Massacre in 1906
The Ludlow Massacre in 1914
The Elaine massacre in 1919
The Tulsa Race Massacre in 1921
The Battle of Blair Mountain in 1921
The Bonus Army Conflict in 1932
The Memorial Day Massacre in 1937
The 16th Street Baptist Church bombing in 1963
The Kent State shootings in 1970
The Greensboro massacre in 1979
The MOVE Bombing in 1985
The 4 O'Clock murders in 1988
The Oklahoma City bombing in 1995
The September 11 Attacks in 2001
The Fort Hood Shooting in 2009
The Holocaust Memorial Shooting in 2009
The Isla Vista killings in 2014
The Charleston Church shooting in 2015
The San Bernardino attack in 2015
The Orlando Nightclub Shooting in 2016
The Pittsburgh Synagogue Shooting in 2018
The El Paso Walmart shooting in 2019
The January 6th Capitol Attack in 2021
The Buffalo Shooting in 2022
Some right-wing media stars, whose conspiracy theories helped put Trump in power, are rejecting his call to stop wasting "time and energy on Jeffrey Epstein" (Brian Stelter/CNN)
http://www.cnn.com/2025/07/14/media/trump-maga-media-epstein-files-c…
Here's a view of Little York Lake looking south from Dwyer Memorial Park -- so far as I can tell "Little York" is one of the four Yorks of New York
#photo #photography #landscape…
lkml_thread: Linux kernel mailing list
A bipartite network of contributions by users to threads on the Linux kernel mailing list. A left node is a person, and a right node is a thread, and each timestamped edge (i,j,t) denotes that user i contributed to thread j at time t. The date of the snapshot is not given.
This network has 379554 nodes and 1565683 edges.
Tags: Social, Communication, Unweighted, Timestamps
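As a sketch of how one might work with data shaped like this, the snippet below assumes a whitespace-separated `user thread timestamp` edge-list format (a common convention for such snapshots, but not confirmed by the dataset description) and builds per-user contribution histories:

```python
from collections import defaultdict

def load_bipartite(lines):
    """Parse 'user thread timestamp' rows of a timestamped bipartite
    edge list into per-user thread histories plus the set of threads."""
    contributions = defaultdict(list)  # left node (user) -> [(thread, t), ...]
    threads = set()                    # right nodes
    for line in lines:
        user, thread, t = line.split()
        contributions[user].append((thread, int(t)))
        threads.add(thread)
    return contributions, threads

# Tiny made-up sample: two users contributing to two threads
sample = ["u1 t1 100", "u2 t1 105", "u1 t2 110"]
users, threads = load_bipartite(sample)
print(len(users), len(threads))  # → 2 2
```

At the real dataset's scale (379,554 nodes, 1,565,683 edges) you'd stream the file rather than hold raw lines in memory, but the per-edge structure is the same.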
Bourgeoisie: having two bathrooms in the house.
The poor in post-communist blocks of flats: having a separate bathroom and toilet.
True aristocracy: having a dedicated cat litter box room.
(Read: an impractical room that exists as a result of neverending modifications of an old house. It also features a wardrobe but I won't be having a wardrobe room like some bourgeois.)
AI Feedback Enhances Community-Based Content Moderation through Engagement with Counterarguments
Saeedeh Mohammadi, Taha Yasseri
https://arxiv.org/abs/2507.08110
Touch Speaks, Sound Feels: A Multimodal Approach to Affective and Social Touch from Robots to Humans
Qiaoqiao Ren, Tony Belpaeme
https://arxiv.org/abs/2508.07839 https://…
Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (or even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value isn't negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper, where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](https://arxiv.org/abs/2507.09089), is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore that one too; my focus here is on project complexity, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a degree of care those students don't yet know how to exercise, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets), but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand, and quickly re-prompt or ask an instructor or TA for help getting rid of the stuff they don't understand and replacing it with stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think that, regardless of AI, we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know all of these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working within a limited subset of a full programming language that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
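To make that concrete, here's the kind of compact snippet an assistant might plausibly emit for a simple batching task (a hypothetical example of mine, not output from any particular model). It's correct, but it stacks a generator, the iterator protocol, and the walrus operator into a few lines that a typical second-year student may never have seen:

```python
import itertools

def read_batches(items, size=3):
    """Yield successive chunks of `items`, each of length at most `size`."""
    it = iter(items)  # explicit iterator protocol
    # walrus operator (:=) plus a generator (yield): two constructs
    # many intro sequences never cover
    while batch := list(itertools.islice(it, size)):
        yield batch

print(list(read_batches(range(7))))  # → [[0, 1, 2], [3, 4, 5], [6]]
```

A student who has only seen for-loops over lists has no footing to debug this if it misbehaves inside a larger project.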
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
How popular media gets love wrong
Okay, so what exactly are the details of the "engineered" model of love from my previous post? I'll try to summarize my thoughts and the experiences they're built on.
1. "Love" can be thought of like a mechanism that's built by two (or more) people. No single person can build the thing alone; to work, it needs contributions from multiple people (I suppose self-love might be an exception to that). In any case, the builders can intentionally choose how they build (and maintain) the mechanism, they can build it differently to suit their particular needs/wants, and they will need to maintain and repair it over time to keep it running. It may need winding, or fuel, or charging, plus oil changes and bolt-tightening, etc.
2. Any two (or more) people can choose to start building love between them at any time. No need to "find your soulmate" or "wait for the right person." Now the caveat is that the mechanism is difficult to build and requires lots of cooperation, so there might indeed be "wrong people" to try to build love with. People in general might experience more failures than successes. The key component is slowly-escalating shared commitment to the project, which is negotiated between the partners so that neither one feels like they've been left to do all the work themselves. Since it's a big scary project though, it's very easy to decide it's too hard and give up, and so the builders need to encourage each other and pace themselves. The project can only succeed if there's mutual commitment, and that will certainly require compromise (sometimes even sacrifice, though not always). If the mechanism works well, the benefits (companionship; encouragement; praise; loving sex; hugs; etc.) will be well worth the compromises you make to build it, but this isn't always the case.
3. The mechanism is prone to falling apart if not maintained. In my view, the "fire" and "appeal" models of love don't adequately convey the need for this maintenance and lead to a lot of under-maintained relationships many of which fall apart. You'll need to do things together that make you happy, do things that make your partner happy (in some cases even if they annoy you, but never in a transactional or box-checking way), spend time with shared attention, spend time alone and/or apart, reassure each other through words (or deeds) of mutual beliefs (especially your continued commitment to the relationship), do things that comfort and/or excite each other physically (anywhere from hugs to hand-holding to sex) and probably other things I'm not thinking of. Not *every* relationship needs *all* of these maintenance techniques, but I think most will need most. Note especially that patriarchy teaches men that they don't need to bother with any of this, which harms primarily their romantic partners but secondarily them as their relationships fail due to their own (cultivated-by-patriarchy) incompetence. If a relationship evolves to a point where one person is doing all the maintenance (& improvement) work, it's been bent into a shape that no longer really qualifies as "love" in my book, and that's super unhealthy.
4. The key things to negotiate when trying to build a new love are, first, how to work together in the first place, and how to be comfortable around each others' habits (or how to change those habits). Second, what level of commitment you have right now, and how/when you want to increase that commitment. Additionally, I think it's worth checking in about what you're each putting into and getting out of the relationship, to ensure that it continues to be positive for all participants. To build a successful relationship, you need to be able to incrementally increase the level of commitment to one that you're both comfortable staying at long-term, while ensuring that for both partners, the relationship is both a net benefit and has manageable costs (those two things are not the same). Obviously it's not easy to actually have conversations about these things (congratulations if you can just talk about this stuff) because there's a huge fear of hearing an answer that you don't want to hear. I think the range of discouraging answers which actually spell doom for a relationship is smaller than people think, and there's usually a reasonable "shoulder" you can fall into where things aren't on a good trajectory but could be brought back onto one, but even so these conversations are scary. Still, I think only having honest conversations about these things when you're angry at each other is not a good plan. You can also try to communicate some of these things via non-conversational means, if that feels safer, and at least being aware that these are the objectives you're pursuing is probably helpful.
I'll post two more replies here about my own experiences that led me to this mental model and trying to distill this into advice, although it will take me a moment to get to those.
#relationships #love