TL;DR: what if instead of denying the harms of fascism, we denied its suppressive threats of punishment
Many of us have really sharpened our denial skills since the advent of the ongoing pandemic (perhaps you even hesitated at the word "ongoing" there and thought "maybe I won't read this one, it seems like it'll be tiresome"). I don't say this as a preface to a fiery condemnation or a plea for "sanity" or a bunch of evidence of how bad things are, because I too have honed my denial skills in these recent years, and I feel like talking about that development.
Denial comes in many forms, including strategic information avoidance ("I don't have time to look that up right now", "I keep forgetting to look into that", "well this author made a tiny mistake, so I'll click away and read something else", "I'm so tired of hearing about this, let me scroll farther", etc.), strategic dismissal ("look, there's a bit of uncertainty here, I should ignore this", "this doesn't line up perfectly with my anecdotal experience, it must be completely wrong", etc.), and strategic forgetting ("I don't remember what that one study said exactly; it was painful to think about", "I forgot exactly what my friend was saying when we got into that argument", etc.). It's in fact a kind of skill that you can get better at, along with the complementary skill of compartmentalization. It can of course be incredibly harmful, and a huge genre of fables exists precisely to highlight its harms, but it also has some short-term psychological benefits, chiefly in the form of muting anxiety. This is not an endorsement of denial (the harms can be catastrophic), but I want to acknowledge that there *are* short-term benefits. Via compartmentalization, it's even possible to be honest with ourselves about some of our own denials without giving them up immediately.
But as I said earlier, I'm not here to talk you out of your denials. Instead, given that we are so good at denial now, I'm here to ask you to be strategic about it. In particular, we live in a world awash with propaganda/advertising that serves both political and commercial ends. Why not use some of our denial skills to counteract that?
For example, I know quite a few people in complete denial of our current political situation, but those who aren't (including myself) often express consternation about just how many people in the country are supporting literal fascism. Of course, logically, that appearance of widespread support is going to be partly a lie, given how much of our public media is beholden to the fascists or outright on their side. Finding better facts on the true level of support is hard, but in the meantime, why not be in denial about the "fact" that Trump has widespread popular support?
To give another example: advertisers constantly barrage us with messages about our bodies and weight, trying to keep us insecure (and thus in the mood to spend money to "fix" the problem). For sure cutting through that bullshit by reading about body positivity etc. is a better solution, but in the meantime, why not be in denial about there being anything wrong with your body?
This kind of intentional denial certainly has its own risks (our bodies do actually need regular maintenance, for example, so complete denial on that front is risky) but there's definitely a whole lot of misinformation out there that it would be better to ignore. To the extent such denial expands to a more general denial of underlying problems, this idea of intentional denial is probably just bad. But I sure wish that in a world where people (including myself) routinely deny significant widespread dangers like COVID-19's long-term risks or the ongoing harms of escalating fascism, they'd at least also deny some of the propaganda keeping them unhappy and passive. Instead of being in denial about US-run concentration camps, why not be in denial that the state will be able to punish you for resisting them?
For all the marketing advice LinkedIn MBAs come up with, a surprising number of people forget one of the most basic steps: The Middle Schooler Test.
The process is simple:
1) Place a sample of your brochure, website, etc. in front of a recently calibrated 12-year-old boy.
2) If they start giggling uncontrollably, identify and correct the source of their amusement. Return to step 1.
3) If they aren't particularly amused, move on with the campaign.
Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](https://arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore that one too; my focus here is on the project complexity students can reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without careful prompting of a kind those students don't yet know how to deploy, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand, and either re-prompt, or quickly ask for help from an instructor or TA who helps them get rid of the stuff they don't understand and re-prompt or manually add stuff they do understand. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think that regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
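To make that concrete, here's a small sketch of my own invention (not from any actual LLM transcript) showing how naturally those constructs pile up in perfectly valid, idiomatic Python: generator yields, try/finally inside a generator, while/else, and the walrus operator, all in under thirty lines:

```python
def read_batches(lines, size=3):
    """Yield the non-empty stripped lines, grouped into lists of `size`."""
    batch = []
    try:
        for line in lines:
            if (stripped := line.strip()):  # walrus operator: bind and test in one step
                batch.append(stripped)
            if len(batch) == size:
                yield batch
                batch = []
        if batch:
            yield batch
            batch = []
    finally:
        # runs even if the consumer abandons the generator partway through
        if batch:
            print(f"discarding {len(batch)} buffered lines")

def find_header(lines):
    """Linear scan using while/else: the else clause runs only if the loop never breaks."""
    i = 0
    while i < len(lines):
        if lines[i].startswith("#"):
            break
        i += 1
    else:
        return None  # no header found
    return lines[i]

data = ["a", "", "b", "c", "# header", "d"]
print(list(read_batches(data)))  # [['a', 'b', 'c'], ['# header', 'd']]
print(find_header(data))         # # header
```

Every line here is standard Python, yet a student who has only seen for-loops, lists, and if-statements has four unfamiliar constructs to untangle before they can even start debugging, which is exactly the situation an AI assistant tends to create.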
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
Were I not an atheist I’d get some comfort from knowing that this son of a bitch would spend eternity burning in hell. But because I am, I have to hope he gets his comeuppance in life instead. Him and everyone else who is complicit in this genocide. https://mastodon.social/@EndIsraeliApa
How popular media gets love wrong
Now a bit of background about why I have this "engineered" model of love:
First, I'm a white straight cis man. I've got a few traits that might work against my relationship chances (e.g., neurodivergence; I generally fit pretty well into the "weird geek" stereotype), but as I was recently reminded, it's possible my experience derives more from luck than other factors, and since things are tilted more in my favor than most people on the planet, my advice could be worse than useless if it leads people towards strategies that would only have worked for someone like me. I don't *think* that's the case, but it's worth mentioning explicitly.
When I first started dating my now-wife, we were both in graduate school. I was 26, and had exactly zero dating/romantic experience through that point in my life. In other words, a pretty stereotypical "incel," although I definitely didn't subscribe to incel ideology at all. I felt lonely, and vaguely wanted a romantic relationship (I'm neither aromantic nor asexual), but had never felt socially comfortable enough to pursue one before. I don't drink and dislike most social gatherings like parties or bars; I mostly hung around the fringes of the few college parties I attended, and although I had a reasonable college social life in terms of friends, I didn't really do anything to pursue romance, feeling too awkward to know where to start. I had the beginnings of crushes in both high school and college, but never developed a really strong crush, probably correlated with not putting myself in many social situations outside of close all-male friend gatherings. I never felt remotely comfortable enough to act on any of the proto-crushes I did have. I did watch porn and masturbate, so one motivation for pursuing a relationship was physical intimacy, but loneliness was as much of a motivating factor, and of course the social pressure to date was a factor too, even though I'm quite contrarian.
I'm lucky in that I had some mixed-gender social circles already like intramural soccer and a graduate-student housing potluck. Graduate school makes a *lot* more of these social spaces accessible, so I recognize that those not in school of some sort have a harder time of things, especially if like me they don't feel like they fit in in typical adult social spaces like bars.
However, at one point I just decided that my desire for a relationship would need action on my part and so I'd try to build a relationship and see what happened. I worked up my courage and asked one of the people in my potluck if she'd like to go for a hike (pretty much clearly a date but not explicitly one; in retrospect not the best first-date modality in a lot of ways, but it made a little more sense in our setting where we could go for a hike from our front door). To emphasize this point: I was not in love with (or even infatuated with) my now-wife at that point. I made a decision to be open to building a relationship, but didn't follow the typical romance story formula beyond that. Now of course, in real life as opposed to popular media, this isn't anything special. People ask each other out all the time just because they're lonely, and some of those relationships turn out fine (although many do not).
I was lucky in that some aspects of who I am and what I do happened to be naturally comforting to my wife (natural advantage in the "appeal" model of love) but of course there are some aspects of me that annoy my wife, and we negotiate that. In the other direction, there's some things I instantly liked about my wife, and other things that still annoy me. We've figured out how to accept a little, change a little, and overall be happy with each other (though we do still have arguments; it's not like the operation/construction/maintenance of the "love mechanism" is always perfectly smooth). In particular though, I approached the relationship with the attitude of "I want to try to build a relationship with this person," at first just because of my own desires for *any* relationship, and then gradually more and more through my desire to build *this specific* relationship as I enjoyed the rewards of companionship.
So for example, while I think my wife is objectively beautiful, she's also *subjectively* very beautiful *to me* because having decided to build a relationship with her, I actively tried to see her as beautiful, rather than trying to judge whether I wanted a relationship with her based on her beauty. In other words, our relationship is more causative of her beauty-to-me than her beauty-to-me is causative of our relationship. This is the biggest way I think the "engineered" model of love differs from the "fire" and "appeal" models: you can just decide to build love independent of factors we typically think of as engendering love (NOT independent of your partner's willingness to participate, of course), and then all of those things like "thinking your partner is beautiful" can be a result of the relationship you're building. For sure those factors might affect who is willing to try building a relationship with you in the first place, but if more people were willing to jump into relationship building (not necessarily with full commitment from the start) without worrying about those other factors, they might find that those factors can come out of the relationship instead of being prerequisites for it. I think this is the biggest failure of the "appeal" model in particular: yes you *do* need to do things that appeal to your partner, but it's not just "make myself lovable" it's also: is your partner putting in the effort to see the ways that you are beautiful/lovable/etc., or are they just expecting you to become exactly some perfect person they've imagined (and/or been told to desire by society)? The former is perfectly possible, and no less satisfying than the latter.
To cut off my rambling a bit here, I'll just add that in our progress from dating through marriage through staying-married, my wife and I have both talked at times explicitly about commitment, and especially when deciding to get married, I told her that I knew I couldn't live up to the perfect model of a husband that I'd want to be, but that if she wanted to deepen our commitment, I was happy to do that, and so we did. I also rearranged my priorities at that point, deciding that I knew I wanted to prioritize this relationship above things like my career or my research interests, and while I've not always been perfect at that in my little decisions, I've been good at holding to that in my big decisions at least. In the end, *once we had built a somewhat-committed relationship*, we had something that we both recognized was worth more than most other things in life, and that let us commit even more, thus getting even more out of it in the long term. Obviously you can't start the first date with an expectation of life-long commitment, and you need to synchronize your increasing commitment to a relationship so that it doesn't become lopsided, which is hard. But if you take the commitment as an active decision and as the *precursor* to things like infatuation, attraction, etc., you can build up to something that's incredibly strong and rewarding.
I'll follow this up with one more post trying to distill some advice from my ramblings.
#relationships #love
Obesity & diet
I wouldn't normally share a positive story about the new diet drugs, because I've seen someone get obsessed with them who was at a perfectly acceptable weight *by majority standards* (surprise: every weight is in fact perfectly acceptable by *objective* standards, because every "weight-associated" health risk is its own danger that should be assessed *in individuals*). I think two almost-contradictory things:
1. In a society shuddering under the burden of metastasized fatmisia, there's a very real danger in promoting the new diet drugs because lots of people who really don't need them will be psychologically bullied into using them and suffer from the cost and/or side effects.
2. For many individuals under the assault of our society's fatmisia, "just ignore it" is not a sufficient response, and also for specific people for whom decreasing their weight can address *specific* health risks/conditions that they *want* to address that way, these drugs can be a useful tool.
I know @… to be a trustworthy & considerate person, so I think it's responsible to share this:
#Fat #Diet #Obesity
How popular media gets love wrong
Okay, so what exactly are the details of the "engineered" model of love from my previous post? I'll try to summarize my thoughts and the experiences they're built on.
1. "Love" can be thought of like a mechanism that's built by two (or more) people. In this case, no single person can build the thing alone; to work, it needs contributions from multiple people (I suppose self-love might be an exception to that). In any case, the builders can intentionally choose how they build (and maintain) the mechanism, they can build it differently to suit their particular needs/wants, and they will need to maintain and repair it over time to keep it running. It may need winding, or fuel, or charging, plus oil changes and bolt-tightening, etc.
2. Any two (or more) people can choose to start building love between them at any time. No need to "find your soulmate" or "wait for the right person." Now the caveat is that the mechanism is difficult to build and requires lots of cooperation, so there might indeed be "wrong people" to try to build love with. People in general might experience more failures than successes. The key component is slowly-escalating shared commitment to the project, which is negotiated between the partners so that neither one feels like they've been left to do all the work themselves. Since it's a big scary project though, it's very easy to decide it's too hard and give up, and so the builders need to encourage each other and pace themselves. The project can only succeed if there's mutual commitment, and that will certainly require compromise (sometimes even sacrifice, though not always). If the mechanism works well, the benefits (companionship; encouragement; praise; loving sex; hugs; etc.) will be well worth the compromises you make to build it, but this isn't always the case.
3. The mechanism is prone to falling apart if not maintained. In my view, the "fire" and "appeal" models of love don't adequately convey the need for this maintenance and lead to a lot of under-maintained relationships many of which fall apart. You'll need to do things together that make you happy, do things that make your partner happy (in some cases even if they annoy you, but never in a transactional or box-checking way), spend time with shared attention, spend time alone and/or apart, reassure each other through words (or deeds) of mutual beliefs (especially your continued commitment to the relationship), do things that comfort and/or excite each other physically (anywhere from hugs to hand-holding to sex) and probably other things I'm not thinking of. Not *every* relationship needs *all* of these maintenance techniques, but I think most will need most. Note especially that patriarchy teaches men that they don't need to bother with any of this, which harms primarily their romantic partners but secondarily them as their relationships fail due to their own (cultivated-by-patriarchy) incompetence. If a relationship evolves to a point where one person is doing all the maintenance (& improvement) work, it's been bent into a shape that no longer really qualifies as "love" in my book, and that's super unhealthy.
4. The key things to negotiate when trying to build a new love are, first, how to work together in the first place, and how to be comfortable around each others' habits (or how to change those habits); second, what level of commitment you have right now, and how/when you want to increase that commitment. Additionally, I think it's worth checking in about what you're each putting into and getting out of the relationship, to ensure that it continues to be positive for all participants. To build a successful relationship, you need to be able to incrementally increase the level of commitment to one that you're both comfortable staying at long-term, while ensuring that for both partners, the relationship is both a net benefit and has manageable costs (those two things are not the same). Obviously it's not easy to actually have conversations about these things (congratulations if you can just talk about this stuff) because there's a huge fear of hearing an answer that you don't want to hear. I think the range of discouraging answers which actually spell doom for a relationship is smaller than people think, and there's usually a reasonable "shoulder" you can fall into where things aren't on a good trajectory but could be brought back onto one, but even so these conversations are scary. Still, I think only having honest conversations about these things when you're angry at each other is not a good plan. You can also try to communicate some of these things via non-conversational means, if that feels safer, and at least being aware that these are the objectives you're pursuing is probably helpful.
I'll post two more replies here about my own experiences that led me to this mental model and trying to distill this into advice, although it will take me a moment to get to those.
#relationships #love
Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
https://social.coop/@eloquence/114940607434005478
My criticisms include:
- Current LLM technology has many layers, but the biggest, most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this, despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims a small level of expenses on AI equates to low climate impact. However, given the current deep subsidies in place by the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.
How popular media gets love wrong
Okay, my attempt at (hopefully widely-applicable) advice about relationships based on my mental "engineering" model and how it differs from the popular "fire" and "appeal" models:
1. If you're looking for a partner, don't focus too much on external qualities, but instead ask: "Do they respect me?" "Are they interested in active consent in all aspects of our relationship?" "Are they willing to commit a little now, and open to respectfully negotiating deeper commitment?" "Are they trustworthy, and willing to trust me?" Finding your partner attractive can come *from* trusting/appreciating/respecting them, rather than vice versa.
2. If you're looking for a partner, don't wait for infatuation to start before you try building a relationship. Don't wait to "fall in love;" if you "fall" into love you could just as easily "fall" out, but if you build up love, it won't be so easy to destroy. If you're feeling lonely and want a relationship, pick someone who seems interesting and receptive in your social circles and ask if they'd like to do something with you (doesn't have to be a date at first). *Pursue active consent* at each stage (if they're not interested, ask someone else; this will be easier if you're not already infatuated). If they're judging you by the standards in point 1, this is doubly important.
3. When building a relationship, try to synchronize your levels of commitment & trust even as you're trying to deepen them, or at least try to be honest and accepting when they need to be out-of-step. Say things and do things that show your partner the things (like trust, commitment, affection, etc.) that are important in your relationship, and ask them to do the same (or ideally you don't have to ask if they're conscious of this too). Do these things not as a chore or a transaction when your partner does them, but because they're the work of building the relationship that you value for its own sake (and because you value your partner for themselves too).
4. When facing big external challenges to your commitment to a relationship, like a move, ensure that your partner has an appropriate level of commitment too, but then don't undervalue the relationship relative to other things in life. Everyone is different, but *to me*, my committed relationship has been far more rewarding than e.g., a more "successful" career would have been. Of course it's worth noting here that non-men are taught by our society to undervalue their careers & other aspects of their lives and sacrifice everything for their partners, which is toxic. I'm not saying "don't value other things," but especially for men, *do* value romantic relationships and be prepared to make decisions that prioritize them over other things, assuming a partner who is comfortable with that commitment and willing to reciprocate.
Okay, this thread is complete for now, until I think of something else that I've missed. I hope this advice is helpful in some way (or at least not harmful). Feel free to chime in if you've got different ideas...
#relationships #love
TL;DR: spending money to find the cause of autism is a eugenics project, and those resources could have been spent improving accommodations for Autistic people instead.
To preface this, I'm not Autistic but I'm neurodivergent with some overlap.
We need to be absolutely clear right now: the main purpose of *all* research into the causes of autism is eugenics: a cause is sought because non-autistic people want to *eliminate* autistic people via some kind of "cure." It should be obvious, but a "cured autistic person" who did not get a say in the decision to administer that "cure" has been subjected to non-consensual medical intervention at an extremely unethical level. Many autistic people have been exceptionally clear that they don't want to be "cured," including some people with "severe autism" such as people who are nonverbal.
When we think things like "but autism makes life so hard for some people," we're saying that the difficulties in their life are a result of their neurotype, rather than blaming the society that punishes & devalues the behaviors that result from that neurotype at every turn. To the extent that an individual autistic person wants to modify their neurotype and/or otherwise use aids to modify themselves to reduce difficulties in their life, they should be free to pursue that. But we should always ask the question: "what if we changed their social or physical environment instead, so that they didn't have to change themselves?" The point is that difficulties are always the product of person × environment, and many of the difficulties we attribute to autism should instead be attributed to anti-autistic social & physical spaces, and resources spent trying to "find the cause of autism" would be *much* better spent trying to develop & promote better accommodations for autism. Or at least, that's the case if you care about the quality of life of autistic people and/or recognize their enormous contributions to society (e.g., Wikipedia could not exist in anything near its current form without autistic input). If instead you think of Autistic people as gross burdens that you'd rather be rid of, then it makes sense to investigate the causes of autism so that you can eventually find a "cure."
All of that to say: the best response to lies about the causes of autism is to ask "What is the end goal of identifying the cause?" instead of saying "That's not true, here's better info about the causes."
#autism #trump
P.S. yes, I do think about the plight of parents of autistic kids, particularly parents of kids who have huge struggles fitting into the expectations of our society. They've been put in a position where society constantly bullies and devalues their kid, and makes it mostly impossible for their kid to exist without constant parental support, which is a lot of work and which is unfair when your peers get the school system to do a massive amount of childcare. But in that situation, your kid is in an even worse position than you as the direct victim of all of that, and you have a choice: are you going to be their ally against the unfair world, or are you going to blame them and try to get them to conform enough that you can let the school system take care of them, despite the immense pain that that will provoke? Please don't come crying for sympathy if you choose the latter option (and yes, helping them be able to independently navigate society is a good thing for them, but there's a difference between helping them as their ally, at their pace, and trying to force them to conform to reduce the burden society has placed on you).