Tootfinder

Opt-in global Mastodon full text search. Join the index!

@fanf@mendeddrum.org
2025-08-20 11:42:03

from my link log —
Creating a read-only PostgreSQL user.
blog.crunchydata.com/blog/crea
saved 2021-05-12
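
The usual recipe for a read-only Postgres user (which the linked Crunchy Data post covers) can be sketched as a short GRANT sequence. This is a minimal sketch of the standard pattern, not necessarily the article's exact steps; the role name `readonly`, database `app`, and schema `public` are placeholders:

```python
# Hypothetical SQL for a read-only Postgres role, following the common pattern:
# create the role, allow it to connect, then grant SELECT on existing and
# future tables in the target schema.
statements = [
    "CREATE ROLE readonly WITH LOGIN PASSWORD 'secret';",
    "GRANT CONNECT ON DATABASE app TO readonly;",
    "GRANT USAGE ON SCHEMA public TO readonly;",
    "GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;",
    # Without this, tables created later would not be readable by the role.
    "ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly;",
]

for sql in statements:
    print(sql)
```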

@UP8@mastodon.social
2025-09-18 18:46:26

When Knowing Someone at Meta Is the Only Way to Break Out of “Content Jail”
#media

@scottmiller42@mstdn.social
2025-08-18 16:55:25

I overheard a woman ranting that it is bad that they are not teaching cursive in school. Her reasoning?
"If they can't write cursive, then they can't read cursive. If they can't read cursive, then they can't read The Constitution?"
This is 100% nonsense reasoning. I learned to write in cursive ~40 years ago. I've read the Constitution at least 4 times, but never by reading an image of the original document - only as printed text in a book or on a website.

@jtk@infosec.exchange
2025-07-18 22:58:05

Maybe I'm doing it wrong or missing the obvious, but am I the only one who thinks the way to read a fediverse timeline is in proper chronological time order (oldest to newest) like I can easily do in my email app?
Some apps have a context feature or implement threading, but by and large reading or browsing a timeline on the fediverse from newest to oldest seems to be the norm. I find this, at best, unsatisfying.

@karlauerbach@sfba.social
2025-09-18 15:58:26

I would suspect that many birth-citizenship US citizens could not pass this citizenship test.
(I note that the USCIS website for citizenship test materials has many items blocked by Privacy Badger and Ublock Origin - in other words, the website has trackers, some of which are private companies.)
Just read the hogwash in this announcement:
"American citizenship is the most sacred citizenship in the world and should only be reserved for aliens who will fully embrace our…

@fgraver@hcommons.social
2025-08-20 11:01:03

I was reminded of this quote today (attribution in the comment):
«…you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.»
It was in an article about AI, fittingly enough.

@ginevra@hachyderm.io
2025-06-20 00:35:29

Language learning has been part of me since high school. I'm solid in 2 non-English languages, crappy but survivable in 2 others. I've played with & started learning others many times.
I'm real busy rn, but language learning could be a fun thing to do for myself & make me feel like I'm still me.
But I'm stumped about my language picks. I learnt the obvious European languages in school; later tried key Asian languages. What do I want to do now?
African languages? I won't be getting a chance to use them much in Aus, & I'm unlikely to get to a stage where I can read literature.
I tried Slovenian/Slovene on a whim & really love it, but I'll never go there. Is the practical but unfun answer grind out more kanji/hanzi? Or is whimsically learning a language spoken by only 2.5 million people reasonable? I will continue struggling through with Ukrainian, 'cause I think it's important.
#LanguageLearning

@nemorosa@mastodon.nu
2025-07-17 13:42:30

"The paper is only interested in the use of the idea, not the idea itself. Hardly anyone can understand the importance of an idea, it is so remarkable. Except that, possibly, some children catch on. And when a child catches on to an idea like that, we have a scientist." ("The Value of Science", Richard P. Feynman)
Short, but it took me all day to read it because I had to stop and think all the time, and it will take a lot longer to actually grasp it.

@ErikJonker@mastodon.social
2025-08-15 07:24:59

After having looked at eIDAS and the Dutch implementation of digital identity, it is very interesting to read this paper, a proposal that tries to improve on various aspects of eIDAS.
"An Architecture for Distributed Digital Identities in the Physical World"
arxiv.org/abs/2508.10185

@arXiv_eessAS_bot@mastoxiv.page
2025-09-18 09:41:11

Read to Hear: A Zero-Shot Pronunciation Assessment Using Textual Descriptions and LLMs
Yu-Wen Chen, Melody Ma, Julia Hirschberg
arxiv.org/abs/2509.14187

@arXiv_csCL_bot@mastoxiv.page
2025-07-17 07:46:20

MapIQ: Benchmarking Multimodal Large Language Models for Map Question Answering
Varun Srivastava, Fan Lei, Srija Mukhopadhyay, Vivek Gupta, Ross Maciejewski
arxiv.org/abs/2507.11625

@seeingwithsound@mas.to
2025-08-14 20:13:35

Cool, Neuralink might soon be able to read passwords from your brain - what could possibly go wrong? nytimes.com/2025/08/14/science NYT: For some patients, the 'inner voice' may soon be audi…

@thomasfuchs@hachyderm.io
2025-07-10 21:48:38

If you read the "Bluesky requires IDs now" post:
1. This only applies to UK users.[1]
2. It's because the UK is building a surveillance state that requires websites and apps to do this, not because Bluesky is evil.[2]
3. It's either that or shut down in the UK.
Whether you like it or not, this also affects Mastodon—and even personal blogs with comments enabled.[3]
[1] theverge.com/news/704468/blues
[2] en.wikipedia.org/wiki/Online_S
[3] bentasker.co.uk/posts/blog/law

@metacurity@infosec.exchange
2025-08-09 13:15:45

Each week, Metacurity offers our free and paid subscribers a rundown of the top infosec-related long reads.
Don't miss this week's selection, which includes
--How the Huione Group launders billions in scams and crypto heists,
--Fears of China spying on the UK financial system,
--How stolen iPhones travel around,
--Cyber played only an incremental role in the Israel-Iran conflict,
--CISA 2015 must be reauthorized,
--Watermarks to weed out deepfa…

@compfu@mograph.social
2025-09-14 14:47:53

I've archived 2 forums on Autodesk's site related to the RV player. There's a big banner on top that these have been temporarily restored in read-only mode. Apparently ADSK shut them down and in doing so disappeared a large knowledge base for their products.
They'll disappear again sooner or later and there is a frustratingly low amount of developer info about #RV and

@cai@mastodon.social
2025-07-15 06:31:35
Content warning: ukpol, localpol

Just read that our Labour councillor David Barker, aka one of the only councillors I’ve ever had who was responsive and I felt acted in the best interests of his constituents, has been internally deselected by the Party.
#BirminghamUK

@aardrian@toot.cafe
2025-08-07 15:20:58

Just released from my RSS-only embargo:
“1.2.5: Adversarial Conformance”
#WCAG

@arXiv_csDS_bot@mastoxiv.page
2025-09-17 08:45:09

Sublinear-Time Algorithms for Diagonally Dominant Systems and Applications to the Friedkin-Johnsen Model
Weiming Feng, Zelin Li, Pan Peng
arxiv.org/abs/2509.13112

@axbom@axbom.me
2025-09-12 07:34:47
@… Yes I do gather this is the loophole they are using to make their claim. At the same time, the way the apps can claim end-to-end encryption is by ensuring only the device can read the message before it is sent. So it certainly is a bit of semantical parkour. 😅

@emd@cosocial.ca
2025-07-07 00:18:32

Imagine if #Laravel, the company, actually shared news on their own website instead of X.
It would be like all the people everywhere could read it.
oldfriends.live/@paul/11480878

@Treppenwitz@sfba.social
2025-08-07 14:42:31

A good read. #fuckice

@rachel@norfolk.social
2025-08-11 17:42:30

My Carpe Iter is easier to read with a glossy screen protector on, rather than the frosted one.
Yes, there are reflections but only the same as there are from the KTM dash directly below it.
#Motorcycles

@teledyn@mstdn.ca
2025-07-11 22:24:59

"Not only are we experiencing an extinction of species, but also an extinction of biodiverse experiences." — Kevin Rozario
No training needed: How humans instinctively read nature’s signals | ScienceDaily
sciencedaily.com/releases/2025

@khalidabuhakmeh@mastodon.social
2025-07-09 19:27:30

When you read a comment in the #dotnet repo that can only end one way, and you're powerless to stop it from happening.

season 6 GIF

@andres4ny@social.ridetrans.it
2025-08-10 23:42:57

Not gonna lie, my blood ran cold when I read this part: bugs.debian.org/cgi-bin/bugrep

Sylvestre, your final parenthetical claim here appears to not be true for the only other major web browser in Debian and therefore an easy alternative for Debian to switch to now: chromium.

@tiotasram@kolektiva.social
2025-09-13 23:43:29

TL;DR: what if nationalism, not anarchy, is futile?
Since I had the pleasure of seeing the "what would anarchists do against a warlord?" argument again in my timeline, I'll present again my extremely simple proposed solution:
Convince the followers of the warlord that they're better off joining you in freedom, then kill or exile the warlord once they're alone or vastly outnumbered.
Remember that even in our own historical moment where nothing close to large-scale free society has existed in living memory, the warlord's promise of "help me oppress others and you'll be richly rewarded" is a lie that many understand is historically a bad bet. Many, many people currently take that bet, for a variety of reasons, and they're enough to coerce through fear an even larger number of others. But although we imagine, just as the medieval peasants might have imagined of monarchy, that such a structure is both the natural order of things and much too strong to possibly fail, in reality it takes an enormous amount of energy, coordination, and luck for these structures to persist! Nations crumble every day, and none has survived more than a couple *hundred* years, compared to pre-nation societies which persisted for *tens of thousands of years* if not more. In this bubbling froth of hierarchies, the notion that hierarchy is inevitable is certainly popular, but since there's clearly a bit of an ulterior motive to make (and teach) that claim, I'm not sure we should trust it.
So what I believe could form the preconditions for future anarchist societies to avoid the "warlord problem" is merely: a widespread common sense belief that letting anyone else have authority over you is morally suspect. Given such a belief, a warlord will have a hard time building any following at all, and their opponents will have an easy time getting their supporters to defect. In fact, we're already partway there, relative to the situation a couple hundred years ago. At that time, someone could claim "you need to obey my orders and fight and die for me because the Queen was my mother" and that was actually a quite successful strategy. Nowadays, this strategy is only still working in a few isolated places, and the idea that one could *start a new monarchy* or even resurrect a defunct one seems absurd. So why can't that same transformation from "this is just how the world works" to "haha, how did anyone ever believe *that*?" also happen to nationalism in general? I don't see an obvious reason why not.
Now I think one popular counterargument to this is: if you think non-state societies can win out with these tactics, why didn't they work for American tribes in the face of the European colonizers? (Or insert your favorite example of colonialism here.) I think I can imagine a variety of reasons, from the fact that many of those societies didn't try this tactic (and/or were hierarchical themselves), to the impacts of disease weakening those societies pre-contact, to the fact that with much-greater communication and education possibilities it might work better now, to the fact that most of those tribes are *still* around, and a future in which they persist longer than the colonist ideologies actually seems likely to me, despite the fact that so much cultural destruction has taken place. In fact, if the modern day descendants of the colonized tribes sow the seeds of a future society free of colonialism, that's the ultimate demonstration of the futility of hierarchical domination (I just read "Theory of Water" by Leanne Betasamosake Simpson).
I guess the TL;DR on this is: what if nationalism is actually as futile as monarchy, and we're just unfortunately living in the brief period during which it is ascendant?

@azonenberg@ioc.exchange
2025-07-04 00:13:22

Ethernet nerds: Does this section from 802.3-2022 actually require that you be able to read the currently negotiated speed and duplex state back from this register, or only write to force a specific speed?
Every PHY I've ever seen except the VSC8512 lets you read back the actual operating conditions, but reading the spec it seems that there's not actually a mandate that this capability be there.
The register is defined as readable but it's not well defined whether it …

45.2.1.1.3 Speed selection (1.0.13, 1.0.6, 1.0.5:2)

For devices operating at 10 Mb/s, 100 Mb/s, or 1000 Mb/s the speed of the PMA/PMD may be selected using bits 13 and 6. The speed abilities of the PMA/PMD are advertised in the PMA/PMD speed ability register. These two bits use the same definition as the speed selection bits defined in Clause 22.

For devices not operating at 10 Mb/s, 100 Mb/s, or 1000 Mb/s, the speed of the PMA/PMD may be selected using bits 5 through 2. When bits 5 through 2…

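
For reference, the two-bit encoding the quoted spec points at (bits 13 and 6, defined as in Clause 22) can be sketched as a small decoder. This is a minimal sketch of the standard Clause 22 mapping as I understand it (bit 6 = speed-selection MSB, bit 13 = LSB); it does not answer the read-back question above:

```python
def decode_speed(reg: int) -> str:
    """Decode Clause 22-style speed-selection bits from a register value.

    Bit 6 is the speed-selection MSB, bit 13 the LSB:
    00 -> 10 Mb/s, 01 -> 100 Mb/s, 10 -> 1000 Mb/s, 11 -> reserved.
    """
    msb = (reg >> 6) & 1
    lsb = (reg >> 13) & 1
    return {(0, 0): "10 Mb/s", (0, 1): "100 Mb/s",
            (1, 0): "1000 Mb/s", (1, 1): "reserved"}[(msb, lsb)]
```

Whether a PHY is required to reflect the *negotiated* speed back through these bits, rather than merely accept writes, is exactly the ambiguity the post raises; this sketch only shows the encoding.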
@june_thalia_michael@literatur.social
2025-08-12 04:50:17

#EroticMusings Week 10 (August 3-9) Culture: Is there a refreshing trope, pattern, or topic that you'd like to see more of in erotica?
More own voice stories. (A reminder that I only read books and comics, with the occasional artwork on my feed on the fedi).
There are lots of stories around about how people imagine certain groups to have sexy times. I often see complaints f…

@mariyadelano@hachyderm.io
2025-07-07 19:52:55

I'm somehow tired and depressed with the world already even though it's only 4 pm on a Monday.
Gonna do what always works best in these situations - turn off electronics and go read a novel outside.

@mgorny@social.treehouse.systems
2025-08-04 03:48:13

Average person: "I watch movies / TV series, because I don't have time to read books."
Me: "I read books, because the only free time I have is while traveling."

@tml@urbanists.social
2025-06-30 20:44:21

What is your favourite physical book that you want to be seen reading in public? Wrong answers only.
theguardian.com/lifeandstyle/2

@grahamperrin@bsd.cafe
2025-07-09 18:21:40

Pictured: Kubuntu shutting down gracefully – without forcing off the computer – following an insane zpool-scrub(8) command.
For the insanity:
github.com/openzfs/zfs/issues/
― Gracefully reject an attempt to scrub a read-only pool

@buercher@tooting.ch
2025-09-17 06:02:00

If Adobe does want us to use Adobe Reader to read PDF documents, then they should work more on the startup time of their application and the lousy screen rendering than on proposing AI to proofread read-only PDF files.

@fortune@social.linux.pizza
2025-07-04 22:00:01

Dear Emily:
I recently read an article that said, "reply by mail, I'll summarize."
What should I do?
-- Doubtful
Dear Doubtful:
Post your response to the whole net. That request applies only to
dumb people who don't have something interesting to say. Your postings are
much more worthwhile than other people's, so it would be a waste to reply by
mail.
-- Emily Postnews Answers Your Questions on Netiquette

@Techmeme@techhub.social
2025-07-30 18:11:02

Dropbox says it will discontinue Dropbox Passwords, launched in 2020, on October 28 to focus on its core product, and recommends 1Password as a replacement (Richard Speed/The Register)
theregister.com/2025/07/30/dro

@bici@mastodon.social
2025-08-07 03:00:24

"AI means nobody need attend a uni lecture or course in order to write a winning term paper. Why bother doing research in any job when you can just scrape the Internet? Read a book? That’s for losers. Just order up an instant summary and analysis. The applications of LLMs are stunning. Endless. And we are only in the formative stages."
-- Garth Turner
We are doomed

@pavelasamsonov@mastodon.social
2025-09-02 13:25:32

#AI is inevitable, which means it cannot fail - it can only be failed. But it's not *your* fault if wreckers are deliberately sabotaging your innovative digital transformation. It's those pesky millennials, who hate productivity.
As always, the instinct of management is to control and punish.
#llm

Screenshot of a LinkedIn post by Lian Turc: "31% of employees are 'sabotaging' your gen AI strategy. Your employees are sabotaging your AI strategy (41% admit to doing that, but there's more). I've just read the 2025 AI adoption report by WRITER. This point immediately caught my eye: '41% of Millennial and Gen Z employees admit they're sabotaging their company's AI strategy, for example by refusing to use AI tools or outputs.'"
Link preview: "Are your employees sabotaging your AI strategy? As AI transforms the workplace, one-third of resistant employees cite fears of devaluation rather than technological concerns, revealing the deeply human challenge at the heart of digital transformation."
Video: "Why Your IT Team Is Sabotaging Your AI Strategy" by Jake Dunlap (Apr 2, 2025): "AI-Powered Seller EP9 - IT is blocking AI adoption in sales, and it's costing your company more than you realize."

@akosma@mastodon.online
2025-08-03 05:43:30

Just read this on one of those vertical video things (old man here, bear with me) and it's actually an interesting observation:
Lottery winners are the only millionaires who get taxed by governments.

@markhburton@mstdn.social
2025-08-04 08:15:29

"Wind, solar, and batteries are cheap. They are the fastest and most cost effective way to bring on new power generation [batteries: generation??]. However they are not cheaper than the great mass of incumbent capacity."
Paywalled so I've only read the first paras.
coldeye.earth/p/next-steps

@paulbusch@mstdn.ca
2025-07-04 22:35:14

So on to the new house. We decided to collect 3 quotes, so I fired up the Google thingy and looked for HVAC companies in my area. I only looked at companies with 500 reviews with a customer rating of 4.8 or better. The company we ultimately chose had 2,000 reviews with an average of 4.9. Read through the reviews! You learn a lot about their responsiveness and how they deal with problems. If multiple named service people are praised versus 1 individual, then it's likely a cultural thing.

@arXiv_csAR_bot@mastoxiv.page
2025-09-11 08:37:53

BitROM: Weight Reload-Free CiROM Architecture Towards Billion-Parameter 1.58-bit LLM Inference
Wenlun Zhang, Xinyu Li, Shimpei Ando, Kentaro Yoshioka
arxiv.org/abs/2509.08542

@toxi@mastodon.thi.ng
2025-06-29 15:09:26

Thanks to a book recommendation by @… [1], I went on a tangent learning more about the absolutely fascinating history and workings of core memory (and core rope memory, its read-only version). Some of this also very interesting for #PermaComputing and …

@arXiv_qbioGN_bot@mastoxiv.page
2025-07-08 08:41:50

Finding easy regions for short-read variant calling from pangenome data
Heng Li
arxiv.org/abs/2507.03718 arxiv.org/pd…

When an Alaska Native group asked state law enforcement officials in June for a list of murders investigated by state police — one of the most fundamental pieces of data needed to understand the issue — the state said no.
Charlene Aqpik Apok launched "Data for Indigenous Justice" in 2020 after trying to collect the names of missing and murdered Indigenous people to read at a rally, only to discover no government agency had been keeping track.

@tiotasram@kolektiva.social
2025-07-04 20:14:31

Long; central Massachusetts colonial history
Today on a whim I visited a site in Massachusetts marked as "Huguenot Fort Ruins" on OpenStreetMaps. I drove out with my 4-year-old through increasingly rural central Massachusetts forests & fields to end up on a narrow street near the top of a hill beside a small field. The neighboring houses had huge lawns, some with tractors.
Appropriately for this day and this moment in history, the history of the site turns out to be a microcosm of America. Across the field beyond a cross-shaped stone memorial stood an info board with a few diagrams and some text. The text of the main sign (including typos/misspellings) read:
"""
Town Is Formed
Early in the 1680's, interest began to generate to develop a town in the area west of Natick in the south central part of the Commonwealth that would be suitable for a settlement. A Mr. Hugh Campbell, a Scotch merchant of Boston petitioned the court for land for a colony. At about the same time, Joseph Dudley and William Stoughton also were desirous of obtaining land for a settlement. A claim was made for all lands west of the Blackstone River to the southern land of Massachusetts to a point northerly of the Springfield Road then running southwesterly until it joined the southern line of Massachusetts.
Associated with Dudley and Stoughton was Robert Thompson of London, England, Dr. Daniel Cox and John Blackwell, both of London and Thomas Freak of Hannington, Wiltshire, as proprietors. A stipulation in the acquisition of this land being that within four years thirty families and an orthodox minister settle in the area. An extension of this stipulation was granted at the end of the four years when no group large enough seemed to be willing to take up the opportunity.
In 1686, Robert Thompson met Gabriel Bernor and learned that he was seeking an area where his countrymen, who had fled their native France because of the Edict of Nantes, were desirous of a place to live. Their main concern was to settle in a place that would allow them freedom of worship. New Oxford, as it was the so-named, at that time included the larger part of Charlton, one-fourth of Auburn, one-fifth of Dudley and several square miles of the northeast portion of Southbridge as well as the easterly ares now known as Webster.
Joseph Dudley's assessment that the area was capable of a good settlement probably was based on the idea of the meadows already established along with the plains, ponds, brooks and rivers. Meadows were a necessity as they provided hay for animal feed and other uses by the settlers. The French River tributary books and streams provided a good source for fishing and hunting. There were open areas on the plains as customarily in November of each year, the Indians burnt over areas to keep them free of underwood and brush. It appeared then that this area was ready for settling.
The first seventy-five years of the settling of the Town of Oxford originally known as Manchaug, embraced three different cultures. The Indians were known to be here about 1656 when the Missionary, John Eliott and his partner Daniel Gookin visited in the praying towns. Thirty years later, in 1686, the Huguenots walked here from Boston under the guidance of their leader Isaac Bertrand DuTuffeau. The Huguenot's that arrived were not peasants, but were acknowledged to be the best Agriculturist, Wine Growers, Merchant's, and Manufacter's in France. There were 30 families consisting of 52 people. At the time of their first departure (10 years), due to Indian insurrection, there were 80 people in the group, and near their Meetinghouse/Church was a Cemetery that held 20 bodies. In 1699, 8 to 10 familie's made a second attempt to re-settle, failing after only four years, with the village being completely abandoned in 1704.
The English colonist made their way here in 1713 and established what has become a permanent settlement.
"""
All that was left of the fort was a crumbling stone wall that would have been the base of a higher wooden wall according to a picture of a model (I didn't think to get a shot of that myself). Only trees and brush remain where the multi-story main wooden building was.
This story has so many echoes in the present:
- The rich colonialists from Boston & London agree to settle the land, buying/taking land "rights" from the colonial British court that claimed jurisdiction without actually having control of the land. Whether the sponsors ever actually visited the land themselves I don't know. They surely profited somehow, whether from selling on the land rights later or collecting taxes/rent or whatever, but they needed poor laborers to actually do the work of developing the land (& driving out the original inhabitants, who had no say in the machinations of the Boston court).
- The land deal was on condition that the capital-holders who stood to profit would find settlers to actually do the work of colonizing. The British crown wanted more territory to be controlled in practice not just in theory, but they weren't going to be the ones to do the hard work.
- The capital-holders actually failed to find enough poor suckers to do their dirty work for 4 years, until the Huguenots, fleeing religious persecution in France, were desperate enough to accept their terms.
- Of course, the land was only so ripe for settlement because of careful tending over centuries by the natives who were eventually driven off, and whose land management practices are abandoned today. Given the mention of praying towns (& dates), this was after King Philip's War, which resulted in at least some forced resettlement of native tribes around the area, but the descendants of those "Indians" mentioned in this sign are still around. For example, this is the site of one local band of Nipmuck, whose namesake lake is about 5 miles south of the fort site: #LandBack.

@zudn@theres.life
2025-07-01 14:02:32

Some people think the Bible is hard to understand. (They probably haven't read and studied it.) What's hard to understand is politics and human society in general. God knows the mind of man; a person looking at humanity in general can only come away confused and frustrated.
#thoughts

@tgpo@social.linux.pizza
2025-08-02 22:10:35

#CleanCode is a lot like #TheBible.
Everyone claims to have read it.
Only a few actually have.
Many misuse its teachings.
#programminghumor

@simon_brooke@mastodon.scot
2025-06-25 06:32:51

I know that one of these days I am going to cave in and agree to pay the Grauniad £3 a month to read their site, but the amount of money I have to pay for media is not large and it mainly goes to small indy podcasts by individuals or small groups. I do have a sub to @…, which is corporate media, but that's the only exception.
There are so many rea…

@hikingdude@mastodon.social
2025-07-26 04:10:37

I'm currently reading the second book in the DAEMON series (German: Darknet, English: Freedom), a techno-thriller by Daniel Suarez.
Pretty cool I think.
But what's even cooler: I just checked his homepage ( daniel-suarez.com/daemon10thsy ) and saw a Ma…

@newsie@darktundra.xyz
2025-06-24 13:06:37

This Queer Online Zine Can Only Be Read Via an Ancient Internet Protocol 404media.co/queer-online-zine-

@ruari@velocipederider.com
2025-07-20 17:43:14

I was outside today sitting in the shade reading this. I'd read it years ago but it was very enjoyable reading it again. I really recommend it! Bonus if you like watches, science or a story of one man's pursuit of perfection.
[Low quality picture as I left my smartphone behind and only had a Nokia 225 4G feature phone with me. Something else I recommend!]

A picture of the book Longitude by Dava Sobel. Opened and resting on a beach towel.

@StephenRees@mas.to
2025-07-25 16:34:31

Evidence for Democracy - new report
Accretion and Erosion – A Comparative Analysis of Scientific Integrity in Canada and the US.
Read the full report at the following link

@samueljohn@mastodon.world
2025-08-29 05:09:50

"My read of economic and financial history is that market pricing almost never takes into account the possibility of huge, disruptive events, even when the strong possibility of such events should be obvious. The usual pattern, instead, is one of market complacency until the last possible moment."
info…

@arXiv_qbioPE_bot@mastoxiv.page
2025-08-12 08:22:52

Treemble: A Graphical Tool to Generate Newick Strings from Phylogenetic Tree Images
John B. Allard, Sudhir Kumar
arxiv.org/abs/2508.07081 a…

@arXiv_csFL_bot@mastoxiv.page
2025-09-04 08:39:01

Store Languages of Turing Machines and Counter Machines
Noah Friesen, Oscar H. Ibarra, Jozef Jir\'asek, Ian McQuillan
arxiv.org/abs/2509.02828

@gwire@mastodon.social
2025-06-24 10:15:47

I only read this to see if it was David Mitchell (author) or David Mitchell (comedian).
gov.uk/government/news/david-m

@arXiv_astrophGA_bot@mastoxiv.page
2025-08-06 08:33:00

Memoirs of mass accretion: probing the edges of intracluster light in simulated galaxy clusters
Tara Dacunha, Phil Mansfield, Risa Wechsler
arxiv.org/abs/2508.02837

@shoppingtonz@mastodon.social
2025-08-06 07:38:03

You know the saying
'Those who dislike you for the right reasons are the people you should worry about'
I don't quite agree:
THOSE ARE PROBABLY THE ONLY PEOPLE WHO CAN TELL YOU THE TRUTH THAT YOU YOURSELF ARE PROBABLY NOT AWARE OF
You need their help!
ie. if I ... I dunno post something bad, even if I delete it and someone tells me they read it and explain why, maybe I'll get mad, maybe I'll mute them but their truth will stick with me if it&…

@mgorny@social.treehouse.systems
2025-07-21 03:09:13

How often do you stop to consider how much harm has come from the absurd capitalist notion of "everyone must work for living" [does not apply to the rich], and its sister notion "everyone must work full hours"?
How many harmful technologies couldn't be phased out because it meant a lot of people losing their only source of income? How many destructive industries have been proliferating simply because closing them down would mean a lot of people without jobs? How much further are we going to push for the absurd notion of infinite growth?
And of course it only applies to the quasi-privileged groups. Nobody cares when lots of "low-tech" people are laid off and told to find a new job, because techbros need their new "high-tech" (read: more destructive to the planet) ideas to sell.
#AntiCapitalism #ClimateCrisis

@bibbleco@infosec.exchange
2025-08-20 14:35:51

I thought this #Guardian / #WaPo piece by some Peter Brennan might be worth a glance, though expecting only 60% stuff everyone knows (or should know), and 40% tedious irrelevant human interest guff. Which just proves that you can never set your expectations low enough.
Well, I made it as …

@tiotasram@kolektiva.social
2025-08-30 01:40:19

Just finished "Concrete Rose" by Angie Thomas (I haven't yet read "The Hate U Give" but that's now high on my list of things to find). It's excellent, and in particular, an excellent treatise on positive masculinity in fiction form. It's not a super easy book to read emotionally, but is excellently written and deeply immersive. I don't have the perspective to know how it might land among teens like those it portrays, but I have a feeling it's true enough to life, and it held a lot of great wisdom for me.
CW for the book include murder, hard drugs, and parental abandonment.
I caught myself in a racist/classist habit of thought while reading that others might appreciate hearing about: early on I was mentally comparing it to "All my Rage" by Sabaa Tahir and wondering if/when we'd see the human cost of the drug dealing to the junkies, thinking that it would weaken the book not to include that angle. Why is that racist/classist? Because I'm always expecting books with hard drug dealers in them to show the ugly side of their business since it's been drilled into me that they're evil for the harm they cause, yet I never expect the same of characters who are bankers, financial analysts, health insurance claims adjudicators, police officers, etc. (Okay, maybe I do now look for that in police narratives). The point is, our society includes many people who as part of their jobs directly immiserate others, so why am I only concerned about that misery being brought up when it's drug dealers?
#AmReading

@emd@cosocial.ca
2025-07-29 04:38:55

Same
circumstances.run/@ukapala/114

@bici@mastodon.social
2025-08-26 03:03:39

Well worth a read:
The icons in /Applications/Utilities/ in macOS 26 Tahoe represent a folder full of dead canaries.
via @…

@buercher@tooting.ch
2025-09-14 07:18:59

A read-only version of Tiny Wiki with the content of the Sofawiki website. You can see the speed, and what is rendered and what is not (CSS, tables, templates, images).
belle-nuit.com/tiny-wiki-sofaw

@june_thalia_michael@literatur.social
2025-07-21 22:15:58

#EroticMusings 8: Do you consume much erotica? Is it similar to your work? Share your favourite works or creators!
As mentioned on last week's prompt, I respond to erotica mainly on an aesthetic level (I think it's pretty) and only ever look at comics and drawings, or read fiction.
Most of my favs are in German, but I have some favourite artists who create in English/in…

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims that its small AI expenses equate to low climate impact. However, given the deep subsidies the big companies currently have in place to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@arXiv_csFL_bot@mastoxiv.page
2025-07-22 09:23:20

A Myhill-Nerode Type Characterization of 2detLIN Languages
Benedek Nagy (Eastern Mediterranean University / Eszterházy Károly Catholic University)
arxiv.org/abs/2507.15316

@tiotasram@kolektiva.social
2025-06-21 02:34:13

Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. By virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through. More than you would have if you were actually doing, and taking pride in, the work. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them "haha how stupid to not check whether the books the AI reviewed for you actually existed!" but on a deeper level if we're honest we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* if the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich that continue to drag us all down this destructive path, and I think it's worth some thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason that it's not, is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy) it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem, but still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
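The refactoring described above can be sketched in a few lines of Python (the function names and the normalization rule are invented for illustration, not from the post):

```python
# Before DRY: the same "clean up a username" logic would be pasted
# into both register() and login(). After DRY: one shared helper,
# so a change to the rule happens in exactly one place.

def normalize_username(raw):
    """Single shared definition of the normalization rule."""
    return raw.strip().lower()

def register(raw_name, db):
    # References the helper instead of repeating the logic.
    db[normalize_username(raw_name)] = {"posts": 0}

def login(raw_name, db):
    # Same helper again: both code paths can never drift apart.
    return db.get(normalize_username(raw_name))

users = {}
register("  Alice ", users)
found = login("ALICE", users)  # matches, because both used one rule
```

If the rule later needs to strip punctuation too, only `normalize_username` changes, and every caller gets the fix at once.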
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too-baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
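A minimal Python sketch of the kind of drift this bullet worries about (both functions and their behaviors are hypothetical, made up to illustrate the failure mode, not taken from any real codebase):

```python
# Original helper, written early in the project: handles both the
# currency symbol and thousands separators.
def parse_price(text):
    return float(text.replace("$", "").replace(",", "").strip())

# Near-duplicate produced later by an independent completion that
# didn't reuse parse_price(). It silently drops the comma handling,
# so "$1,200.50" now fails in one code path but not the other.
def price_from_string(s):
    return float(s.strip().lstrip("$"))

total = parse_price("$1,200.50")      # works: 1200.5
small = price_from_string("$12.50")   # works: 12.5
# price_from_string("$1,200.50") would raise ValueError: the copies
# have already drifted apart, exactly the DRY hazard described above.
```

Nothing about either function is individually wrong; the bug lives in the duplication itself, which is why it is so hard to spot in review.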
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers, if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI.
#AI #GenAI #LLMs #VibeCoding