7. “Sorry, it will never happen” − Well yeah, if you don’t even try then it won’t.
People post these long, splainy treatises in my mentions about how nothing will ever get better. It seems like they think it makes them sound smart.
Sorry, honey. Cynicism does not make you worldly or wise. Cynicism makes you gullible.
I am specifically irked by the number of otherwise sensible people who see 1 and 3 and refuse to see 2, using their cynicism to rationalize their laziness.
You, yes you, can recycle AND not fall for the greenwashing. You can do it. I know you can.
The Kremlin's cynicism: Putin is "ready for peace" without stopping the war | Explainer: https://benborges.xyz/2025/12/06/the-kremlins-cynicism-putin-is.html
Thank you all for 3,000 followers on here! Here’s a photo of Danny to celebrate the occasion ☺️
3 years into being part of Mastodon, I continue to be impressed with how wonderful the people here are and how much this social network actually FEELS social. People replying to one another, having conversations, learning things, sharing moments of joy, making friends.
Mastodon brought my business clients, helped me gain confidence in my own voice, freed me from dependence on big tech and algorithms, rekindled my interests, introduced me to incredible people and projects, and served as a source of hope in humanity in the times when cynicism and nihilism felt all but inevitable.
I love our little corner of the internet, and am so glad that it’s still here despite everyone who professed it was doomed to fade into irrelevance.
Thank you to everyone reading these words for being here on the Fedi. The world is a little better thanks to your choice to support an independent web.
Jensen Huang warns "China is going to win the AI race", after the US kept a ban on high-end AI chip sales to China, and says the West is held back by "cynicism" (Financial Times)
https://www.ft.com/content/53295276-ba8d-4ec2-b0de-081e73b3ba43
How do you know when you’ve crossed the cynicism event horizon? And how do you tell it apart from being old & tired?
@… explains really well here why Mastodon feels like it has a much brighter future than legacy social media:
https://cosocial.ca/@timbray/115560976942889…
Please complain about Schumer and his cohort. Please give the mushy centrists black eyes. Please light a fire under all the politicians.
But…please also look for work you can do that doesn’t center around elections. Start local. Help people. Throw sand in the gears. Whatever it is, do work that counts.
Create a context in which there •has• to be an anti-fascist opposition party, because that’s the work so many people are already doing. •That• is what the cynicism is trying to stop you from doing.
/end
Cynicism, "AI"
Someone pointed me to the "Reflections on 2025" post by Samuel Albanie [1]. The author's writing style makes it quite a fun read, I admit.
The first part, "The Compute Theory of Everything" is an optimistic piece on "#AI". Long story short, poor "AI researchers" have been struggling for years because of predominant misconception that "machines should have been powerful enough". Fortunately, now they can finally get their hands on the kind of power that used to be only available to supervillains, and all they have to do is forget about morals, agree that their research will be used to murder millions of people, and a few more millions will die as a side effect of the climate crisis. But I'm digressing.
The author is referring to an essay by Hans Moravec, "The Role of Raw Power in Intelligence" [2]. It's also quite an interesting read, starting with a chapter on how intelligence evolved independently at least four times. The key point inferred from that seems to be that all we need is more computing power, and we'll eventually "brute-force" all AI-related problems (or die trying, I guess).
As a disclaimer, I have to say I'm not a biologist, just a random guy who has read a fair number of pieces on evolution. And I feel like the analogies drawn here are misleading at best.
Firstly, there seems to be an assumption that evolution inexorably leads to higher "intelligence", with a certain implicit assumption about what intelligence is. Per that assumption, any animal that gets "brainier" will eventually become intelligent. However, this seems to miss the point that neither evolution nor learning operates in a void.
Yes, many animals did attain a certain level of intelligence, but they attained it through a long chain of development, while solving specific problems, in specific bodies, in specific environments. I don't think you can just stuff more brains into a random animal and expect it to attain human intelligence; and the same goes for a computer: you can't expect that, given more power, algorithms will eventually converge on human-like intelligence.
Secondly, and perhaps more importantly, what evolution succeeded at first was producing neural networks that are far more energy efficient than anything computers do today. Even if "computing power" did pave the way for intelligence, what came first was extremely efficient "hardware". Nowadays, humans seem to be skipping that part. Optimizing is hard, so why bother with it? We can afford bigger data centers, we can afford to waste more energy, we can afford to deprive people of drinking water, so let's just skip to the easy part!
And on top of that, we're trying to squash hundreds of millions of years of evolution into… a decade, perhaps? What could possibly go wrong?
[1] Samuel Albanie, "Reflections on 2025"
[2] Hans Moravec, "The Role of Raw Power in Intelligence"

#NoAI #NoLLM #LLM