Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@servelan@newsie.social
2025-09-29 06:51:50

Judge Rejects Boilerplate “Congressional Records” Label Used to Shield Agency Emails from Disclosure - American Oversight
americanoversight.org/judge-re

@inthehands@hachyderm.io
2025-08-30 20:06:58

Well that sure was a fast trip from “never heard of Crunchyroll” to “wow do I hate Crunchyroll” blorbo.social/@blakasmoko/1151

@arXiv_csPL_bot@mastoxiv.page
2025-08-01 07:46:51

Kernel-FFI: Transparent Foreign Function Interfaces for Interactive Notebooks
Hebi Li, Forrest Sheng Bao, Qi Xiao, Jin Tian
arxiv.org/abs/2507.23205

@tinoeberl@mastodon.online
2025-08-25 19:21:26

In #Münsterland, committed citizens in #Billerbeck have built a 5.6-kilometre cycle path (#Radweg) themselves.
Over 4,000 working hours and 2.3 million euros later, there now stands a sic…

@tiotasram@kolektiva.social
2025-07-17 13:31:49

To add a single example here (feel free to chime in with your own):
Problem: editing code is sometimes tedious because external APIs require boilerplate.
Solutions:
- Use LLM-generated code. Downsides: energy use, code theft, potential legal liability, frequent mistakes, etc. Upsides: popular among some peers, seems easy to use.
- Pick a better library (not always possible).
- Build internal functions that centralize the boilerplate, then call those (benefits: you gain a better understanding of the external API and a more unit-testable internal code surface, probably for less amortized effort; see the sketch after this list).
- Develop a non-LLM system that actually reasons about code at something like the formal semantics level and suggests boilerplate fill-ins based on rules, while foregrounding which rules it's applying so you can see the logic behind the suggestions (needs research).
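A minimal sketch of the "internal functions" option above, assuming a generic REST-style external API: the `requests` library, the endpoint, and the helper name `fetch_json` are illustrative stand-ins, not anything from the original post.

```python
# Minimal sketch (assumptions: a REST-style external API reached via the
# third-party `requests` library; endpoint and names are placeholders).
import requests

API_BASE = "https://api.example.com"  # hypothetical external service


def fetch_json(path: str, token: str, timeout: float = 10.0) -> dict:
    """Internal wrapper: auth header, timeout, and error handling live here once."""
    response = requests.get(
        f"{API_BASE}/{path.lstrip('/')}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=timeout,
    )
    response.raise_for_status()  # one consistent error policy for all callers
    return response.json()


# Call sites stay boilerplate-free and can be unit-tested by stubbing fetch_json:
# profile = fetch_json("users/me", token="...")
```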
Obviously LLM use in coding goes beyond this single issue, but a similar analysis applies to each potential use of LLMs in coding. In all cases there are:
1. Existing practical solutions that require more effort (or in many cases only seem to, and are actually less effort once amortized).
2. Near-term researchable solutions that directly address the problem and which would be much more desirable in the long term.
Thus, in addition to their disastrous effects on the climate, on data laborers, and on the digital commons, LLMs tend to suck us into cheap-seeming but ultimately costly design practices while crowding out better long-term solutions. Next time someone suggests how useful LLMs are for some task, try asking yourself (or them) what an ideal solution for that task would look like, and whether LLM use moves us closer to or farther from a world in which that solution exists.

@Stomata@social.linux.pizza
2025-08-13 08:40:05

If you want to try out #Piefed but don't like its UI, try out #Blorp. It's kinda similar to #Voyager but works with Piefed. Voyager will also support Piefed soon.

@thomasfuchs@hachyderm.io
2025-08-13 01:32:50

It’s always “AI is great for generating boilerplate code” and never “why do we even need boilerplate code, maybe programming is broken”

@mgorny@social.treehouse.systems
2025-09-26 10:08:02

1. Have a simple job to do. Figure a #Makefile will do the job.
2. Think a bit about portability. Makefile becomes slightly more complex.
3. You're finally done. It turns out that some stupid implicit rule in GNU Make fires and adds a `rm` at the end that removes part of the output.
4. Use #Meson.
Just an average #Gentoo day.
[UPDATE: Now I regret using Meson. If you do anything that's not 100% boilerplate, it just keeps throwing obstacles in your way.]

@thomasfuchs@hachyderm.io
2025-08-20 14:53:32

If a statistical token chain generation machine can do a job satisfactorily, that's not a sign that the machine is amazing and intelligent.
It's a sign that that job is bullshit and doesn't achieve anything at all.
For example, generating boilerplate code in programming is a sign of overly verbose frameworks that should be simplified.