Tootfinder

Opt-in global Mastodon full text search. Join the index!

@hex@kolektiva.social
2025-07-09 12:48:36

Related to understanding firearms, "rifles" and hand guns tend to be rifled. Rifling is grooving that runs in a helical pattern down the barrel. When purchasing a firearm, it's important to check the rifling.
First check that the firearm is unloaded. Empty or remove the magazine, cycle the weapon. Next, check again that it's unloaded by looking both down the barrel and into the magazine. Now, shine a light down the barrel and look down it. In the absence of a light, you may be able to reflect light off your thumbnail.
Rifling should look as though it's drawn on with a sharp pencil, and the barrel should look otherwise completely smooth and clean. If the rifling looks like bumpy mountains, then the owner probably used corrosive ammo and didn't clean it enough. It will probably still shoot, but not at all accurately.
Both the rifling and the firing pin can be used in forensic analysis to match a bullet to a gun. I don't honestly know how accurate this is because a lot of forensic "science" is just made up stuff that relies on the CSI effect and doesn't actually work as advertised.
However, not all firearms are rifled. Shotguns are "smoothbore" firearms, meaning they lack rifling. It is not possible to perform forensic analysis of a smoothbore firearm. It *is* possible to check for powder on the hands of someone who has used a firearm within the last few days, but it's not possible to distinguish between firing inside and outside a range.
I've been gathering all kinds of tidbits like this, partially just out of curiosity and partially because I've been wanting to write a story about a revolutionary group fighting a modern authoritarian society. I'm always happy to learn other bits, if anyone has anything else I could throw in my narrative (whenever I finally get back to writing it).

@grumpybozo@toad.social
2025-06-09 22:30:39

My least favorite work interactions are with customers’ 3rd party “rockstar” consultants. I never know how to respond to certain sorts of queries. Ones where I could answer the question as asked but it really implies a deeper misunderstanding. I feel like I want to write him a treatise on VPNs: what they are and how they work. And Cc his boss (who shares his surname…) to communicate the depth of the problem.
I’m letting my boss handle it. He’s the networking guy anyway…

Screengrab from an email with text:

Is there an alternative to AnyConnect? It wants to turn on the Socket Filter?

What is the VPN gateway?
@UP8@mastodon.social
2025-08-10 13:05:46

𝐉𝐮𝐬𝐭 𝐫𝐞𝐚𝐝: 𝑁𝑜𝑟𝑚𝑎𝑙 𝐴𝑐𝑐𝑖𝑑𝑒𝑛𝑡𝑠 by Charles Perrow. It made a big impression on me when I first came across it but when I put it in my backpack for a trip to Binghamton I recalled that the versions of many incidents in it aren’t the best you can find in the literature.
Not sure how well it aged: flying has gotten safer and people don’t write about maritime accidents like they did in the 70’s
#books

A black paperback book on a wood table with an image of the cloud of the Challenger explosion, at the top the title Normal Accidents, a red bar with white letters across the middle says Living with High-Risk Technologies, and the author’s name Charles Perrow
@inthehands@hachyderm.io
2025-06-06 22:08:10

Hey, folks who understand alt text and limited vision accessibility:
I have need to write alt text for the diagram below. The alt text needs to be comprehensible to somebody who is encountering this kind of diagram for the ••very first time••. I could describe the images using the relevant jargon, but that would only serve people who already know the thing this activity is teaching them!
Any suggestions for how I could write good alt text for something like this? Is it possible? (The horizontal black bars are minus signs, i.e. subtraction. This is clear from context in the text, but probably not clear in the image.)
PLEASE NOTE: I am looking for people with ••relevant accessibility expertise••, not just random best shots from people who (like me) don’t really know much about this kind of problem.

@hashtaggames@oldfriends.live
2025-06-09 00:58:00

Time For 9 o'clock #HashTagGames hosted by @…
I didn't see that coming. Let's play!
#ShowCharactersSurpriseRevelation

Poster Meme announcing New Game

Featured image, large blue hashTag and 
Text:
 9 o'clock Hashtag

How to play
#HashTagGames

 Write something awesome, Use the Hashtag, Toot/Post and Repeat!

Please Boost

Hashtag Games on Mastodon and the entire Fediverse.

 hosted by @paul@OldFriends.Live
#ShowCharactersSurpriseRevelation

Every Night, 9PM EST, (6PM PT / 1AM GMT / 2AM CET / 12PM AEDT / 2PM  NZST)
Proudly hosting daily games since November 16, 2022
@arXiv_hepth_bot@mastoxiv.page
2025-06-11 09:33:55

The $\mathcal{W}$-algebra bootstrap of 6d $\mathcal{N}=(2,0)$ theories
Mitchell Woolley
arxiv.org/abs/2506.08094 arxi…

@nemorosa@mastodon.nu
2025-06-06 12:20:19

#WritersCoffeeClub Jun 6
What are the conventions of the genre in which you write? How strictly do you follow them?
Um... I don't know, I write high/epic/dark fantasy (according to my beta readers) but I follow my story where it leads me. Are there conventions? Probably. Do I care? Not consciously.

@andres4ny@social.ridetrans.it
2025-08-08 16:45:12

People have lots of reasons for loving human languages. Some find that they sound romantic, others love the ways that they can be patterned into poetry or song, and still others find them fascinating in the ways that they morph and change over time.
Me? The thing I love most about the English language is how easy it is to accidentally write "pubic" when you meant "public", and vice-versa.

@theodric@social.linux.pizza
2025-07-07 18:18:39

I can't tell you how annoying this was to write. I sacrificed a lot of features I had already implemented (colour-coding of VM state, context-sensitive menus, etc.) because I just could not get the UI to repaint after returning from the virtual serial console with them in place. Even something as simple as adding shortcut prompts to the actions menu broke things. Extremely annoying, but console access is much more important, so I went with the MVP: good is the enemy of adequate.

@tiotasram@kolektiva.social
2025-06-21 02:34:13

Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. In virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through. More than you would have if you were actually doing and taking pride in the work. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them "haha how stupid to not check whether the books the AI reviewed for you actually existed!" but on a deeper level if we're honest we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* if the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich that continue to drag us all down this destructive path, and I think it's worth some thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason that it's not, is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.

@rasterweb@mastodon.social
2025-07-30 14:10:49

I went for a bike ride this morning and thought about code a bit, and I think I figured out how to write the code I need for a project, and how to do it really simply.
I also thought about how to either chain or split off the output from the reed switch attached to the bike wheel to serve as input for multiple devices.

@mariyadelano@hachyderm.io
2025-07-22 18:24:49

The weird paradox of really disliking AI is that I still find myself thinking about it all the time.
I read about it, I watch videos about it, I write about it, I bring it up in conversation. And just make myself angrier in the process. And make all my algorithms show me more content about it 😓
I feel like I’m Cady in the movie Mean Girls addicted to talking about how she hated Regina George:
“I was a woman possessed. I spent about 80 percent of my time talking about Regina. And the other 20 percent of the time, I was praying for someone else to bring her up so I could talk about her more. [..] I could hear people getting bored with me. But I couldn't stop.”

@al3x@hachyderm.io
2025-06-03 10:12:26

The only documentation I can find about using the `:custom` keyword with use-package is "The :custom keyword allows customization of package custom variables."
I have no idea how to read that.
1. Can I do (recent-mode t)?
2. If I am to set a config option like dired-dwim-target to t do I write that: (dired-dwim-target t) or (setq dired-dwim-target t)?
#Emacs #UsePackage

@samir@functional.computer
2025-07-04 08:41:00

@… Do you think this is an issue? I have no idea how people test React components at all nowadays.
Personally I prefer to write browser tests with a fake backend implementation (running in the client; no HTTP calls), but this might be controversial.

@kctipton@mas.to
2025-07-06 06:40:12
Content warning: Spoilers and criticism

#murderbot I can't get over how this show is veering from the book. Did the Weitz brothers say "hey, wouldn't it be funny if SecBot did _this_?" and then write it into the show? The book was funny in the commentary, not in the actions of Secbot.
And, Gurathin, I'm sorry, but he has not fit in at all. He violated SecBot's privacy and Mensah's as well, and he prev…

@aardrian@toot.cafe
2025-05-28 13:52:42

Last week I asked Google to stop releasing broken things. Apologists said maybe I needed to contribute more, write code or demos.
Last year I outlined some of what I *have* done and how, for the most part, Google / Web•dev doesn’t give a shit:
adrianroselli.com/2024/07…

@sauer_lauwarm@mastodon.social
2025-06-02 17:42:24

For some reason the Sonos radio station "The Lighthouse", on which Brian Eno plays a whole lot of unreleased tracks, is currently free (you do need a Sonos device for it), and I'm now listening my way through it a bit.
By the way, DJ Food did that for a few months and kept a written log along the way (as of 2023):

@inthehands@hachyderm.io
2025-06-05 02:11:34

I’m well out of my depth here: my historical knowledge to speak to the issues is thin; my cultural knowledge is almost nonexistent. Reading that Standing Together site, seeing how they’ve crafted what they write, I see just how much nuance and awareness I •don’t• have.
I’m grateful to the people who’ve helped me learn, and who’ve pointed me to these resources — in this case @… and @…. Sometimes the Internet really is good for something.
/end

@adrianco@mastodon.social
2025-08-01 04:00:08

On Monday I started a new GitHub repo, to implement a distributed knowledge graph that knows about stuff in houses and how they are related. I used the Claude AI tools and Claude-flow agent framework to write all the code. Got a simplified thing running and cleaned up on Tuesday, built the full functionality on Wednesday and after a bit more testing and documentation wrapped up work before lunch on Thursday. It’s a lot of well tested and documented functionality.

@pre@boing.world
2025-07-30 19:14:53

Watched all of "Ghosts US", the American version of the UK spooky sit-com.
There's like seventy episodes that I watched in less than a week. This is how I like telly to be. Relentless.
The show's weaker than the UK one in some ways but that sheer continuity, the endless-seeming perpetuity of it, drove it into my brain.
US Ghosts are always trying to cop off with each other. Sexy ghosts. Pairing up. Don't remember much of that in the UK one. Perhaps when you make it 5x longer you're left with little else in the way of story to write.
Anyway, it's over now. I miss it in the way you might miss an infuriating neighbour if they moved out.
#watching #tv

@tiotasram@kolektiva.social
2025-08-02 13:28:40

How to tell a vibe coder is lying when they say they check their code.
People who will admit to using LLMs to write code will usually claim that they "carefully check" the output since we all know that LLM code has a lot of errors in it. This is insufficient to address several problems that LLMs cause, including labor issues, digital commons stress/pollution, license violation, and environmental issues, but at least if they are checking their code carefully, we shouldn't assume that it's any worse quality-wise than human-authored code, right?
Well, from principles alone we can expect it to be worse, since checking code the AI wrote is a much more boring task than writing code yourself, so anyone who has ever studied human-computer interaction even a little bit can predict people will quickly slack off, starting to trust the AI way too much, because it's less work. In a different domain, the journalist who published an entire "summer reading list" full of nonexistent titles is a great example of this. I'm sure he also intended to carefully check the AI output, but then got lazy. Clearly he did not have a good grasp of the likely failure modes of the tool he was using.
But for vibe coders, there's one easy tell we can look for, at least in some cases: coding in Python without type hints. To be clear, this doesn't apply to novice coders, who might not be aware that type hints are an option. But any serious Python software engineer, whether they used type hints before or not, would know that they're an option. And if you know they're an option, you also know they're an excellent tool for catching code defects, with a very low effort:reward ratio, especially if we assume an LLM generates them. Of the cases where adding types requires any thought at all, 95% of them offer chances to improve your code design and make it more robust. Knowing about but not using type hints in Python is a great sign that you don't care very much about code quality. That's totally fine in many cases: I've got a few demos or jam games in Python with no type hints, and it's okay that they're buggy. I was never going to debug them to a polished level anyways. But if we're talking about a vibe coder who claims that they're taking extra care to check for the (frequent) LLM-induced errors, that's not the situation.
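As a concrete illustration of how cheap that catch is, here's a minimal sketch (hypothetical function and argument names, not from any real codebase):

```python
# Without hints, passing the wrong kind of argument fails only at runtime, if at all.
def total_cost(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

# With hints, a checker such as mypy flags total_cost_hinted(19.99, 0.07)
# before the code ever runs, and the signature documents the intended design.
def total_cost_hinted(prices: list[float], tax_rate: float) -> float:
    return sum(prices) * (1 + tax_rate)
```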
Note that this shouldn't be read as an endorsement of vibe coding for demos or other rough-is-acceptable code: the other ethical issues I skipped past at the start still make it unethical to use in all but a few cases (for example, I have my students use it for a single assignment so they can see for themselves how it's not all it's cracked up to be, and even then they have an option to observe a pre-recorded prompt session instead).

@tante@tldr.nettime.org
2025-06-23 11:36:22

It's really fucked up how when I write posts on my work laptop (MacBook/macOS) vs. my own machine (Thinkpad/Linux) I end up with so many extra typos.
I don't think it's the MacBook keyboard (though it is a bit shit); it really is macOS's autocorrect.

@davidshq@hachyderm.io
2025-07-03 15:27:08

I've been using ClickUp for a few months now...and like every other productivity tool I've used over the last number of years it has its pros and cons.
It's got a lot of features but sometimes pretty basic features don't "just work" - and that makes me unhappy. 🙁
For example, right now I have some recurring tasks that will not allow an update to their due date manually. If I set a new date it resets back to the original date. Even more concerning is that this isn't clear from the UI.
The UI acts as if the change was successful but a page refresh reveals the change didn't save.
But the real reason I wanted to post wasn't about ClickUp particularly but about chatbots in general. They verified that my issue was an actual bug and created a ticket for it but the way I've been instructed to view the ticket status is by opening the chatbot, telling it I want information on my ticket (pasting in the ticket ID) and after doing all that I get this
5/6: Umm, no. I want to see an actual ticket please. I don't want to have to talk to a chatbot to see it. Chatbots really are great for a lot of things (during the free trial of ClickUp I found the chatbot quite helpful in learning how to do things without searching through docs) but this sort of "there is a direct record", please no. Or let me paste it in and do the lookup immediately - and provide a permalink so I don't need to chat every time!
I'm sticking with ClickUp at the moment, but one of these days when I magically get a large amount of free time, I'm going to write my own solution...I've only been saying that for a few years now. ;-)
#clickup #productivity #chatbots #projectmanagement #tasks

@jkohlmann@mastodon.social
2025-06-20 14:34:14

> “Instead of articulating our own thoughts, we articulate whatever AI helps us to articulate…we become more persuaded.” Without these signals, Naaman warns, we’ll only trust face-to-face communication — not even video calls.
Now I expect generative AI to be used to justify return-to-office policies 🤢

You sound like ChatGPT
AI isn’t just impacting how we write — it’s changing how we speak and interact with others. And there’s only more to come.

@ruth_mottram@fediscience.org
2025-06-25 11:09:39

If I didn't have 2 papers to (re)submit and two reports due at the ECMWF and ESA, I would write a quick blog post on how interesting it is that our modelled melt over the #Greenland #IceSheet on the @… and the satellite derived melt areas almost but don't quite match, and why that might be...
fediscience.org/@polarportal/1

@grifferz@social.bitfolk.com
2025-06-23 09:18:29

God DAMMIT Nagios, how many times have I told you to stop picking up women on LinkedIn?

A screenshot of an email reading:

From: Andrea Wendy Eanarea23@everyts.store>
To: monitoring-bounces@bitfolk.com
Subject: Captured by your nice Photo

Hello there,

I found your gorgeous photo on LinkedIn and just had to write even though it’s my very first time doing anything 1like this! 1I’d like to learn more about you. Are you currently single or married?

I’m a firm believer that you can never have too many friends, so I hope to hear from you soon.

Regards

Andrea Wendy
@robpike@hachyderm.io
2025-07-21 10:44:40

An excellent explanation of what bothers me about LLMs, especially in schools (but also more broadly). It's changing who we are - we the community, not just individuals - and in ways we cannot control or manage. I guess some people want those changes. I do not.
It's ironic that writing has never been more central to our lives, with texting and messaging and blogging and social media, yet we are moving towards a sterile world where no one will know how to write.
discuss.systems/@rebeccawb/114

@jeang3nie@social.linux.pizza
2025-07-24 14:34:55

In my Agile CS course we were given a couple of fluff pieces about the development process at Amazon and asked to write about how their methods aligned with Agile methodology, and how that is what keeps them on top.
I couldn't do it. My paper was basically about enshittification, and how they are a perfect example of what not to do.
Here's hoping my grade doesn't tank. If it does, so be it.

@al3x@hachyderm.io
2025-05-31 09:10:07

I cannot figure out how to use GitHub Copilot in @… @… on demand.
Is there a way to make it work like code assist? Basically trigger it with a shortcut.
I don’t want it turned on all the time spewing text right in front of what I write.

@stargazer@woof.tech
2025-06-06 17:24:14

[2025-06-06]
#WritersCoffeeShop
>What are the conventions of the genre in which you write? How strictly do you follow them?
Okay, let's jump onto this bandwagon.
None. As I've formulated years ago, "I like science fiction because it's the ideal clay to mold". I believe that the underlying idea is what matters, and fiction/fantasy/sci-fi is the…

@tiotasram@kolektiva.social
2025-07-25 10:57:58

Just saw this:
#AI can mean a lot of things these days, but lots of the popular meanings imply a bevy of harms that I definitely wouldn't feel are worth a cute fish game. In fact, these harms are so acute that even "just" playing into the AI hype becomes its own kind of harm (it's similar to blockchain in that way).
@… noticed that the authors claim the code base is 80% AI generated, which is a red flag because people with sound moral compasses wouldn't be using AI to "help" write code in the first place. The authors aren't by some miracle people who couldn't build this app without help, in case that influences your thinking about it: they have the skills to write the code themselves, although it likely would have taken longer (but also been better).
I was more interested in the fish-classification AI, and how much it might be dependent on datacenters. Thankfully, a quick glance at the code confirms they're using ONNX and running a self-trained neural network on your device. While the exponentially-increasing energy & water demands of datacenters to support billion-parameter models are a real concern, this is not that. Even a non-AI game can burn a lot of cycles on someone's phone, and I don't think there's anything to complain about energy-wise if we're just using cycles on the end user's device as long as we're not having them keep it on for hours crunching numbers like blockchain stuff does. Running whatever stuff locally while the user is playing a game is a negligible environmental concern, unlike, say, calling out to ChatGPT where you're directly feeding datacenter demand. Since they claimed to have trained the network themselves, and since it's actually totally reasonable to make your own dataset for this and get good-enough-for-a-silly-game results with just a few hundred examples, I don't have any ethical objections to the data sourcing or training processes either. Hooray! This is finally an example of "ethical use of neural networks" that I can hold up as an example of what people should be doing instead of the BS they are doing.
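For anyone curious what "runs on your device" looks like in code, here is a minimal sketch of local ONNX inference (the model filename and input shape are made up; the onnxruntime calls are the library's standard ones):

```python
import numpy as np
import onnxruntime as ort

# Load a model file shipped with the app; no network or datacenter involved.
session = ort.InferenceSession("fish_classifier.onnx")
input_name = session.get_inputs()[0].name

# Stand-in for a preprocessed photo: batch of 1, RGB, 224x224.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)

# All the compute happens on the local CPU/GPU.
scores = session.run(None, {input_name: image})[0]
print("predicted class index:", int(scores.argmax()))
```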
But wait... Remember what I said about feeding the AI hype being its own form of harm? Yeah, between using AI tools for coding and calling their classifier "AI" in a way that makes it seem like the same kind of thing as ChatGPT et al., they're leaning into the hype rather than helping restrain it. And that means they're causing harm. Big AI companies can point to them and say "look AI enables cute things you like" when AI didn't actually enable it. So I'm feeling meh about this cute game and won't be sharing it aside from this post. If you love the cute fish, you don't really have to feel bad for playing with it, but I'd feel bad for advertising it without a disclaimer.

@MamasPinkyToe@mastodon.world
2025-06-24 01:14:23

- How do you do? I'm Trump's speechwriter.
- You write drivel for morons.
- Yeah.

@Dwemthy@social.linux.pizza
2025-05-14 19:14:51

I get the desire for live coding interviews, you can't just take people's word that they know how to do it.
But what's the point in throwing Advent of Code style problems at me and interrupting a naive or incorrect approach before I even start implementing it? Let me write unoptimized code for you and then make it better! I'm not going to write the perfect implementation first try for every problem, but I can show you my process and prove I can write _some_ code.

@grahamperrin@bsd.cafe
2025-05-22 23:44:27

@…
"… I had already ran a script that 'over organized' my project files and made it too complicated for me to access simple files, so I asked ChatGPT
"Can you write me a script that will retrace our steps to how it was organized on my desktop before we ran the last script? Rather than have over organized session folders like they …

@inthehands@hachyderm.io
2025-08-01 23:28:53

In a bit of hopeful news, I am pleased to report that The Guardian knows how to write a headline

@arXiv_csHC_bot@mastoxiv.page
2025-06-18 08:21:17

"I Cannot Write This Because It Violates Our Content Policy": Understanding Content Moderation Policies and User Experiences in Generative AI Products
Lan Gao, Oscar Chen, Rachel Lee, Nick Feamster, Chenhao Tan, Marshini Chetty
arxiv.org/abs/2506.14018

@rasterweb@mastodon.social
2025-07-27 18:12:50

I have one question for the programmers who tirelessly write software for blimps… how do they keep it up?

@tiotasram@kolektiva.social
2025-07-19 08:14:41

AI, AGI, and learning efficiency
An addendum to this: I'm someone who would accurately be called "anti-AI" in the modern age, yet I'm also an "AI researcher" in some ways (have only dabbled in neural nets).
I don't like:
- AI systems that are the product of labor abuses towards the data workers who curate their training corpora.
- AI systems that use inordinate amounts of water and energy during an intensifying climate catastrophe.
- AI systems that are fundamentally untrustworthy and which reinforce and amplify human biases, *especially* when those systems are exposed in a way that invites harms.
- AI systems which are designed to "save" my attention or brain bandwidth but where doing so cripples my understanding of the things I might use them for, when in fact that understanding was the thing I was supposed to be using my time to gain, and where the later lack of such understanding will be costly to me.
- AI systems that are designed by and whose hype fattens the purse of people who materially support genocide and the construction of concentration camps (a.k.a. fascists).
In other words, I do not like and except in very extenuating circumstances I will not use ChatGPT, Claude, Copilot, Gemini, etc.
On the other hand, I do like:
- AI research as an endeavor to discover new technologies.
- Generative AI as a research topic using a spectrum of different methods.
- Speculating about non-human intelligences, including artificial ones, and including how to behave ethically towards them.
- Large language models as a specific technique, and autoencoders and other neural networks, assuming they're used responsibly in terms of both resource costs & presentation to end users.
I write this because I think some people (especially folks without CS backgrounds) may feel that opposing AI for all the harms it's causing runs the risk of opposing technological innovation more broadly, and/or may feel there's a risk that they will be "left behind" as everyone else embraces the hype and these technologies inevitably become ubiquitous and essential (I know I feel this way sometimes). Just know that it is entirely possible and logically consistent to both oppose many forms of modern AI while also embracing and even being optimistic about AI research, and that while LLMs are currently all the rage, they're not the endpoint of what AI will look like in the future, and their downsides are not inherent in AI development.

@stargazer@woof.tech
2025-08-04 08:06:01

#WritersCoffeeClub August 1: Why are you writing your current work?
2: What's plentiful in your writing?
3: How heroic are your protagonists?
4: How violent is your work?
---
1. Because no one else can.
2. Words.
3. Pretty much. Then again, is it truly heroism for a suicidal person?
4. Depends.
I never write gore for the sake of gore. Unless i…

A pony named Charlie wearing a brown blouse, black pants and a black beret. Drawn by LilBoulder
@bmariusz@techhub.social
2025-07-18 17:39:35

Day 18
Today I debugged an issue with accessing backend endpoints from a Next.js frontend talking to a NestJS API.
The browser was blocking requests due to a CORS error — the Authorization header was not allowed in the preflight response. Even though frontend domains were correctly set, I forgot to include Authorization in allowedHeaders.
After updating enableCors() to:
`allowedHeaders: 'Authorization, Content-Type, Accept'`
…the issue disappeared, and t…

@rasterweb@mastodon.social
2025-07-12 00:29:22

I should probably write some Python to do a map thing with OpenStreetMap now that I've learned how incredibly easy it is.

@stargazer@woof.tech
2025-07-28 16:11:43

#WritersCoffeeClub
July 26: Talk about the most difficulty you had writing something sensory.
July 27: How does your social class influence what you write?
July 28: How do you write sensory experiences that fall beyond the usual five? Give an example.
---
26: I occasionally have difficulty writing all of them, but can't single out one particular case.

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy) it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem, but still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
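As a tiny sketch of the small-scale version (a hypothetical example, not from any particular project):

```python
# Repeated: the same normalization logic pasted into two places.
def report_scores(scores):
    total = sum(scores)
    return [s / total for s in scores]

def report_weights(weights):
    total = sum(weights)
    return [w / total for w in weights]

# DRY: one shared function referenced from every call site, so a fix
# (say, guarding against an all-zero input) only has to happen once.
def normalize(values: list[float]) -> list[float]:
    total = sum(values)
    return [v / total for v in values] if total else [0.0] * len(values)
```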
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we see what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too-baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers, if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] to see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI (#AI #GenAI #LLMs #VibeCoding

@mariyadelano@hachyderm.io
2025-07-01 14:40:23

Update on my stance here: I’ve changed my mind after reading way more about #AI. Stopped using #LLM products, cancelled paid subscription to #ChatGPT and am currently exploring smaller, specialized, and open-source alternative language model solutions to keep business functioning where needed (tldr because of client requests for certain types of automation I can’t say goodbye to LMs completely).
Planning to write up how my thinking developed soon.
#technology #artificialintelligence #genAI

@stargazer@woof.tech
2025-07-21 05:37:52

#WritersCoffeeClub July 21: What emotions do you avoid writing? Why?
---
Can't say I specifically avoid any emotions. I don't write some because I don't know how to write them.
Like love.

@stargazer@woof.tech
2025-06-16 14:07:02

#WritersCoffeeClub
June 13
Do you restrict what you read or watch while working on a WIP? Why or why not?
June 14
Do you take notes for your WIP? How closely do you follow them?
June 15
Have you ever challenged yourself to write without editing? What were the results?
---
No, but I may limit whether I read or watch anything at all at the time. Flow…

@stargazer@woof.tech
2025-07-14 14:31:34

#WritersCoffeeClub July 13: How many ‘layers’ of interpretation do you seek to achieve in a piece of writing?
July 14: What is your favorite emotion to write?
---
I got lost along the way, but let's try it again.
13. At least two. When it's three or four, I'm happy.
14. Justified rage.

@tiotasram@kolektiva.social
2025-07-19 07:51:05

AI, AGI, and learning efficiency
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak & understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously *high* estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities are developed in many children, etc. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
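(The same upper-bound estimate as a one-line check, for anyone who wants to verify the arithmetic:)

```python
# words/min * min/hour * hours/day * days/year * years
print(100 * 60 * 12 * 365 * 4)  # 105120000
```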
The point here is that large language models, trained as they are on multiple *billions* of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one (see e.g. #AI #LLM #AGI