Tootfinder

Opt-in global Mastodon full text search. Join the index!

@tiotasram@kolektiva.social
2025-07-22 00:03:45

Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to it ended up with me getting blocked.
I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X; it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw-man version of your argument, but when I responded by asking some direct questions about what their position was, they gave some non-answers and then blocked me.
It's entirely possible it's a coincidence and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time wanting to feel like someone who values public critique and open discussion of tricky issues. (If anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here, and it would be useful to hear that if so.)
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me.
The discussion also ended up linking this post: chelseatroy.com/2024/08/28/doe which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great.
But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite the limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is), then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. Perhaps if our objection to a technology was absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous.
As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I'm confident enough that the overall outcomes of objecting will be positive that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.

@tiotasram@kolektiva.social
2025-07-06 12:45:11

So I've found my answer after maybe ~30 minutes of effort. First stop was the first search result on Startpage (millennialhawk.com/does-poop-h), which has some evidence of maybe-AI authorship but which is better than a lot of slop. It actually has real links & cites research, so I'll start by looking at the sources.
It claims near the top that poop contains 4.91 kcal per gram (note: 1 kcal = 1 Calorie = 1000 calories, a fact I could verify and do trust despite the slop in that search). Now obviously, without a range or mention of an average, this isn't the whole picture, but maybe it's an average to start from? However, the citation link is to a study (pubmed.ncbi.nlm.nih.gov/322359) which only included 27 people with impaired glucose tolerance and obesity. It might contain the cited stat, but it's definitely not a broadly representative one if this is the source. The public abstract does not include the cited stat, and I don't want to pay for the article. I happen to be affiliated with a university library, so I could check whether I have access that way, but it's a pain to do and not worth it for this study, which I already know is too specific. Also, most people wouldn't have access that way.
Side note: this doing-the-research project has the nice benefit of letting you see lots of cool stuff you wouldn't have otherwise. The abstract of this study is pretty cool, and I learned a bit about gut microbiome changes just from reading it.
My next move was to look among citations in this article to see if I could find something about calorie content of poop specifically. Luckily the article page had indicators for which citations were free to access. I ended up reading/skimming 2 more articles (a few more interesting facts about gut microbiomes were learned) before finding this article whose introduction has what I'm looking for: pmc.ncbi.nlm.nih.gov/articles/
Here's the relevant paragraph:
"""
The alteration of the energy-balance equation, which is defined by the equilibrium of energy intake and energy expenditure (1–5), leads to weight gain. One less-extensively-studied component of the energy-balance equation is energy loss in stools and urine. Previous studies of healthy adults showed that ≈5% of ingested calories were lost in stools and urine (6). Individuals who consume high-fiber diets exhibit a higher fecal energy loss than individuals who consume low-fiber diets with an equivalent energy content (7, 8). Webb and Annis (9) studied stool energy loss in 4 lean and 4 obese individuals and showed a tendency to lower the fecal energy excretion in obese compared with lean study participants.
"""
And there's a good-enough answer if we do some math, along with links to more in-depth reading if we want them. A Mayo Clinic calorie calculator suggests about 2250 Calories per day for me to maintain my weight. There's probably a lot of variation in that number, but 5% of it would be very roughly 100 Calories lost in poop per day, so an extremely rough range for different humans might be 50-200 Calories per day. Interestingly, one of the AI slop pages I found asserted (without citation) 100-200 Calories per day, which kinda checks out. I had no way to trust that number though, and as we saw with the 4.91 kcal/gram figure, its provenance might not be good.
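For anyone who wants to redo the arithmetic, here's the estimate as a tiny sketch (the 2250 kcal/day maintenance figure came from the Mayo Clinic calculator for me specifically and will vary a lot by person):

```python
# Rough fecal/urinary energy loss, using the ~5% figure from the quoted paragraph.
maintenance_kcal_per_day = 2250  # Mayo Clinic calculator output for me; varies by person
loss_fraction = 0.05             # ~5% of ingested calories lost in stools and urine

loss_kcal_per_day = maintenance_kcal_per_day * loss_fraction
print(loss_kcal_per_day)  # 112.5, i.e. "very roughly 100" Calories/day
```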
To double-check, I visited this link from the paragraph above: sciencedirect.com/science/arti
It's only a 6-person study, but just the abstract has numbers: ~250 kcal/day pooped on a low-fiber diet vs. ~400 kcal/day pooped on a high-fiber diet. That's with intakes of ~2100 and ~2350 kcal respectively, which is close to the number from which I estimated 100 kcal above, so maybe the first estimate from just the 5% number was a bit low.
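To quantify that "a bit low" intuition: the 6-person study's abstract numbers imply loss fractions well above 5% of intake:

```python
# Fecal energy loss as a fraction of intake, from the abstract's figures
low_fiber_fraction = 250 / 2100   # ~250 kcal/day lost on ~2100 kcal/day intake
high_fiber_fraction = 400 / 2350  # ~400 kcal/day lost on ~2350 kcal/day intake

print(round(low_fiber_fraction, 2), round(high_fiber_fraction, 2))  # 0.12 0.17
```

So roughly 12-17% of intake in that study, versus the 5% figure for healthy adults from the other paper.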
Glad those numbers were in the abstract, since the full text is paywalled... It's possible this study was also done on some atypical patient group...
Just to come full circle, let's look at that 4.91 kcal/gram number again. A search suggests 14-16 ounces of poop per day is typical, with at least two sources around 14 ounces, or ~400 grams. (AI slop was strong here too, with one including a completely made up table of "studies" that was summarized as 100-200 grams/day). If we believe 400 grams/day of poop, then 4.91 kcal/gram would be almost 2000 kcal/day, which is very clearly ludicrous! So that number was likely some unrelated statistic regurgitated by the AI. I found that number in at least 3 of the slop pages I waded through in my initial search.
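The sanity check in numbers, using the ~400 grams/day figure from the search results and the suspect 4.91 kcal/gram:

```python
grams_per_day = 400    # ~14 oz/day of poop, per multiple search results
kcal_per_gram = 4.91   # the uncited figure repeated across the slop pages

implied_loss = grams_per_day * kcal_per_gram
print(implied_loss)  # 1964.0 kcal/day -- nearly a full day's intake, clearly absurd
```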