Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@kexpmusicbot@mastodonapp.uk
2025-09-08 05:26:34

🇺🇦 #NowPlaying on KEXP's #Expansions
Little Fritter:
🎵 Do It Like Dis
#LittleFritter
wearerhythmsection.bandcamp.co
open.spotify.com/track/3aOleM4

@cosmos4u@scicomm.xyz
2025-08-29 19:52:49

Lonely Little Red Dots - Challenges to the Active Galactic Nucleus Nature of #LittleRedDots through Their Clustering and Spectral Energy Distributions: iopscience.iop.org/article/10. -> All Alone With No AGN to Call Home? New Results for Little Red Dots: aasnova.org/2025/08/29/all-alo

@tiotasram@kolektiva.social
2025-08-02 13:28:40

How to tell a vibe coder is lying when they say they check their code.
People who will admit to using LLMs to write code will usually claim that they "carefully check" the output, since we all know that LLM code has a lot of errors in it. This is insufficient to address several problems that LLMs cause, including labor issues, digital commons stress/pollution, license violations, and environmental issues, but at least if they're checking their code carefully, we shouldn't assume that it's any worse quality-wise than human-authored code, right?
Well, from principles alone we can expect it to be worse, since checking code the AI wrote is a much more boring task than writing code yourself, so anyone who has ever studied human-computer interaction even a little bit can predict people will quickly slack off, starting to trust the AI way too much, because it's less work. In a different domain, the journalist who published an entire "summer reading list" full of nonexistent titles is a great example of this. I'm sure he also intended to carefully check the AI output, but then got lazy. Clearly he did not have a good grasp of the likely failure modes of the tool he was using.
But for vibe coders, there's one easy tell we can look for, at least in some cases: coding in Python without type hints. To be clear, this doesn't apply to novice coders, who might not be aware that type hints are an option. But any serious Python software engineer, whether they used type hints before or not, would know that they're an option. And if you know they're an option, you also know they're an excellent tool for catching code defects, with a very low effort:reward ratio, especially if we assume an LLM generates them. Of the cases where adding types requires any thought at all, 95% of them offer chances to improve your code design and make it more robust. Knowing about but not using type hints in Python is a great sign that you don't care very much about code quality. That's totally fine in many cases: I've got a few demos or jam games in Python with no type hints, and it's okay that they're buggy. I was never going to debug them to a polished level anyways. But if we're talking about a vibe coder who claims that they're taking extra care to check for the (frequent) LLM-induced errors, that's not the situation.
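A minimal sketch of the kind of defect the post is talking about (the function names and values here are hypothetical, just for illustration): without a return-type hint, a function that can return None slips past review, but with hints, a checker like mypy flags the unsafe call before the code ever runs.

```python
from typing import Optional


def find_user(users: dict[str, str], uid: str) -> Optional[str]:
    # dict.get returns None for a missing key; the Optional[str] hint
    # makes that possibility visible to a type checker.
    return users.get(uid)


def greeting(users: dict[str, str], uid: str) -> str:
    name = find_user(users, uid)
    # Without the hints above, calling name.upper() directly would only
    # fail at runtime with AttributeError on a missing user. With them,
    # mypy reports: Item "None" of "Optional[str]" has no attribute "upper",
    # forcing the None check below.
    if name is None:
        return "hello, stranger"
    return f"hello, {name.upper()}"


print(greeting({"a1": "Ada"}, "a1"))  # hello, ADA
print(greeting({"a1": "Ada"}, "zz"))  # hello, stranger
```

The point being: adding the hints costs one annotation per function, while the checker catches the missing-user path mechanically, which is exactly the low-effort, high-reward trade the post describes.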
Note that this shouldn't be read as an endorsement of vibe coding for demos or other rough-is-acceptable code: the other ethical issues I skipped past at the start still make it unethical to use in all but a few cases (for example, I have my students use it for a single assignment so they can see for themselves how it's not all it's cracked up to be, and even then they have an option to observe a pre-recorded prompt session instead).

@shoppingtonz@mastodon.social
2025-07-02 09:56:18

I had to uninstall F-Droid today because I think they are so alone and you are not supporting them (I did follow them today; I didn't even know they were active on the fediverse, but they are!)
F-Droid is active on the fediverse and willing to help!
Why not help them in return with a very simple thing just a tiny little follow.
Just a tiny little one...
follow button pressed: BOOM.
Public Service mission accomplished!
It's all about community!

@seeingwithsound@mas.to
2025-06-29 14:26:34

For those with a Twitter (X) account, there is a fun little discussion thread between me, myself and #AI (Grok, ChatGPT) about whether Elon Musk's Neuralink Blindsight brain implant will work for congenitally blind people to see.

@n8foo@macaw.social
2025-08-23 23:23:47

Oleander aphids are neat, their bright colors make them look like little alien monsters #bugstodon #bugs

@v_i_o_l_a@openbiblio.social
2025-08-22 13:40:35

in this week's #verschlagwortung (subject indexing): experimental forms of literature ("the little database", hbz-ulbms.primo.exlibrisgroup.

photo of several books in a shelf compartment; for better legibility of the titles on the spines, the photo was rotated 90 degrees to the left

@arXiv_astrophGA_bot@mastoxiv.page
2025-06-30 09:17:10

AGN with massive black holes have closer galactic neighbors: k-Nearest-Neighbor statistics of an unbiased sample of AGN at z~0.03
A. Mhatre, M. C. Powell, S. Yuan, S. W. Allen, T. Caglar, M. Koss, I. del Moral-Castro, K. Oh, A. Peca, C. Ricci, F. Ricci, A. Rojas, M. Signorini
arxiv.org/abs/2506.21705

@davidaugust@mastodon.online
2025-07-16 00:10:29

I'm migrating my articles and blogging, and you can kind of follow my process.
I just posted this, which includes this line: "Building your goals in someone else’s sandbox alone is not ideal, a little like building a castle on sand."
davaug.medium.com/online-longe

@tiotasram@kolektiva.social
2025-08-05 10:34:05

It's time to lower your inhibitions towards just asking a human the answer to your question.
In the early nineties, effectively before the internet, that's how you learned a lot of stuff. Your other option was to look it up in a book. I was a kid then, so I asked my parents a lot of questions.
Then by ~2000 or a little later, it started to feel almost rude to do this, because Google was now a thing, along with Wikipedia. "Let me Google that for you" became a joke website used to satirize the poor fool who would waste someone's time answering a random question. There were some upsides to this, as well as downsides. I'm not here to judge them.
At this point, Google doesn't work any more for answering random questions, let alone more serious ones. That era is over. If you don't believe it, try it yourself. Between Google intentionally making their results worse to show you more ads, the SEO cruft that already existed pre-LLMs, and the massive tsunami of SEO slop enabled by LLMs, trustworthy information is hard to find, and hard to distinguish from the slop. (I posted an example earlier: #AI #LLMs #DigitalCommons #AskAQuestion