Tootfinder

Opt-in global Mastodon full text search. Join the index!

@jamesthebard@social.linux.pizza
2025-12-18 22:37:19

Okay, after a bit of work in #vlang:
- I think I still prefer golang, though I really do prefer the error system of `v`.
- I enjoy writing code in `nim` more than in `v`. And while I do enjoy `v` more than `rust`, the documentation and supporting extensions for almost every other language are better, which makes starting out difficult.
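For readers unfamiliar with the comparison being made: here is a minimal sketch (my own illustration, not from the post) of the explicit error-return pattern idiomatic in Go, with V's more compact `or`-block style shown as a comment. The `parsePort` function and its values are hypothetical examples.

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort returns an error that the caller must handle explicitly —
// the idiomatic Go pattern the post weighs against v's error system.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid port %q: %w", s, err)
	}
	return n, nil
}

func main() {
	// Go: every call site checks err by hand.
	port, err := parsePort("8o80") // contains a typo, so parsing fails
	if err != nil {
		port = 80 // fall back to a default
	}
	fmt.Println(port)

	// The rough v equivalent folds the check into an `or` block:
	//   port := parse_port('8o80') or { 80 }
}
```

The difference is mostly ergonomic: both designs force the error to be handled at the call site, but V collapses the check-and-fallback into a single expression.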

@tiotasram@kolektiva.social
2025-12-12 13:41:30

Been starting a habit of writing down story/game ideas as I have them even though most of them will never seriously get started, let alone finished. It's been fun since writing things down gives me a chance to think them through a bit more than just pondering them in my head. Anyways, here's a #GameDesign idea:
"Grand" is a "reverse metroidvania" in which, as a grandparent, you slowly lose movement options as the story progresses, requiring more and more convoluted routes through the map to reach the same areas. You do still explore "new" areas in memory mode (and unlock movement options like a bike in your memories) before traversing those areas again in the diegetic present.
The story follows your quest to protect a grandchild from the machinations of a Kafkaesque state, first trying to track them down within the system and then trying to get them released. Each "boss" is "fought" through an abstracted conversation system in which memories, keepsakes, and various kinds of emotional/logical appeals wear down your opponent's nihilism and/or fear until they're willing to help you. Normal "enemies" are just people on the street who might bump into you and drain some of your stamina as you pass by if you don't issue a properly timed "excuse me" or the like.

@csessh@social.linux.pizza
2025-11-26 08:43:17

"Writing your idea down is not starting the damn game. Writing a design document is not starting the damn game. Assembling a team is not starting the damn game. Even doing graphics or music is not starting the damn game. It’s easy to confuse “preparing to start the damn game” with “starting the damn game”. Just remember: a damn game can be played, and if you have not created something that can be played, it’s not a damn game!"

@rainerzufall_le@mastodon.social
2026-02-07 18:57:34

"Bitcoin is crashing hard, reaching historic lows of well below the $70,000 mark. At the time of writing, the token is hovering just above $63,000, levels we haven’t seen since October 2024."
"According to Coindesk, the average cost to mine one Bitcoin is currently around $87,000 — far higher than its current going rate, making it an extremely unprofitable proposition."
🥳🥳🥳

@Mediagazer@mstdn.social
2026-01-29 15:56:00

The Atlantic hires David Brooks as a staff writer and host of a new weekly video podcast, starting in February, after 22 years as an NYT opinion columnist (The Atlantic)
theatlantic.com/press-releases

@mgorny@social.treehouse.systems
2026-01-18 18:04:19

Cynicism, "AI"
Someone pointed me to the "Reflections on 2025" post by Samuel Albanie [1]. The author's writing style makes it quite a fun read, I admit.
The first part, "The Compute Theory of Everything", is an optimistic piece on "#AI". Long story short, poor "AI researchers" have been struggling for years because of the predominant misconception that "machines should have been powerful enough". Fortunately, now they can finally get their hands on the kind of power that used to be available only to supervillains, and all they have to do is forget about morals, agree that their research will be used to murder millions of people, and accept that a few million more will die as a side effect of the climate crisis. But I'm digressing.
The author refers to an essay by Hans Moravec, "The Role of Raw Power in Intelligence" [2]. It's also quite an interesting read, starting with a chapter on how intelligence evolved independently at least four times. The key point inferred from that seems to be that all we need is more computing power, and we'll eventually "brute-force" all AI-related problems (or die trying, I guess).
As a disclaimer, I have to say I'm not a biologist. Rather just a random guy who read a fair number of pieces on evolution. And I feel like the analogies brought here are misleading at best.
Firstly, there seems to be an assumption that evolution inexorably leads to higher "intelligence", with a certain implicit assumption about what intelligence is. On that view, any animal that gets "brainier" will eventually become intelligent. However, this misses the point that neither evolution nor learning operates in a void.
Yes, many animals did attain a certain level of intelligence, but they attained it in a long chain of development, while solving specific problems, in specific bodies, in specific environments. I don't think that you can just stuff more brains into a random animal, and expect it to attain human intelligence; and the same goes for a computer — you can't expect that given more power, algorithms will eventually converge on human-like intelligence.
Secondly, and perhaps more importantly, what evolution succeeded at first was producing neural networks that are far more energy-efficient than whatever computers are doing today. Even if "computing power" did indeed pave the way for intelligence, what came first was extremely efficient "hardware". Nowadays, humans seem to be skipping that part. Optimizing is hard, so why bother with it? We can afford bigger data centers, we can afford to waste more energy, we can afford to deprive people of drinking water, so let's just skip to the easy part!
And on top of that, we're trying to squash hundreds of millions of years of evolution into… a decade, perhaps? What could possibly go wrong?
[1] #NoAI #NoLLM #LLM