
2025-07-14 20:32:15
Oh, the arguments we had! There was a marketing person — quite a good one, and a good person too — who just couldn’t see it, who fought the dev team tooth and nail on this. She finally relented when @… wrote a brief positive plug for our app, which made her realize that the landscape had changed and that her J2ME-world design instincts just didn’t work in this new iPhone market.
(I’m not sure that company ever really made the shift. They struggled with and eventually dropped their general consumer app, and concentrated quite successfully on some pro markets where features ruled all.)
4/
Cowboys’ Micah Parsons drama mirrors familiar (and positive) storyline https://www.si.com/nfl/cowboys/news/dallas-cowboys-micah-parsons-drama-mirrors-familiar-positive-storyline
COVID-BLUeS -- A Prospective Study on the Value of AI in Lung Ultrasound Analysis
Nina Wiedemann, Dianne de Korte-de Boer, Matthias Richter, Sjors van de Weijer, Charlotte Buhre, Franz A. M. Eggert, Sophie Aarnoudse, Lotte Grevendonk, Steffen Röber, Carlijn M. E. Remie, Wolfgang Buhre, Ronald Henry, Jannis Born
https://arxiv.org/abs…
I love this App!
These people are too classy to even hint at this, but if you like them and what they do, consider leaving a 5-star review in both the Mac & iOS app stores, and if you have positive things to say, write a quick two-paragraph review - the buzz helps.
https://iconfactory.world/@Iconfactory
The three-dimensional structure of population density in world cities
Ga\"etan Laziou, R\'emi Lemoy
https://arxiv.org/abs/2509.06140 https://arxiv…
Forgive the elevator pitch, but the types of projects that really excite me have the potential for real-world positive outcomes for people, community, and environment.
Particularly those tackling the challenges of climate change and sustainability.
If you are looking for someone for your sustainability project, I have experience working on projects for WRAP and the Swedish EPA #ClimateChange #Plastics #Sustainability #Fedihired #lookingforwork
The natural-born posthuman: applying extended mind to post- and transhumanist discourse https://link.springer.com/article/10.1007/s11229-025-05202-4 "Newer discussions have expanded upon this idea through sensory substitution devices, such as The vOICe system which use…
“It is likely that Ukraine will suffer the greatest military and political damage in this situation,
apart from Iran itself, of course.
A new war in the Middle East will not only distract the world’s attention from the [conflict in Ukraine]
but will also, apparently, contribute to the final reorientation of the US towards providing military assistance to Israel.”
But while these may offer short-term gains, the long-term picture is far more precarious for Russia, anal…
Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to it ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me.
They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw man version of your argument, but when I in response asked some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time they wanted to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me.
The discussion also ended up linking this post: https://chelseatroy.com/2024/08/28/does-ai-benefit-the-world/ which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite the limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is), then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. Perhaps if our objection to a technology were absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I expect strongly enough that the overall outcomes of objection will be positive that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.
47's government being a menace to the whole world again. Tuberculosis in prisons means tuberculosis everywhere.
"Detainees have tested positive for tuberculosis at the Anchorage Correctional Complex in Alaska and Adelanto ICE Processing Center in California, according to news reports. ...
"Tuberculosis thrives in carceral settings of all kinds. It spreads through the air when an infected person coughs, sneezes, or spits, and it only takes a few droplets to sicken someone. ...
"Almost everyone who contracts it needs treatment to survive, and those who do may live with lungs so damaged they struggle to breathe. ...
"Anyone in ICE custody with symptoms suggestive of tuberculosis is supposed to be placed into an airborne infection isolation room with negative pressure ventilation ...
"But ICE detention facilities don’t necessarily have such rooms ... The typical solitary cell does not use negative pressure, detention and medical researchers said. ...
"Experts expect the situation to get much worse in the months ahead. That’s because Trump’s drive to deport one million people hasn’t yet coincided with the height of flu season, or the GOP’s recent cuts to the health care system, or its exclusion of undocumented immigrants from several social programs."
- Whitney Curry Wimbish, The American Prospect
#tuberculosis #USA #USPol
Modulating task outcome value to mitigate real-world procrastination via noninvasive brain stimulation
Zhiyi Chen, Zhilin Ren, Wei Li, ZhenZhen Huo, ZhuangZheng Wang, Ye Liu, Bowen Hu, Wanting Chen, Ting Xu, Artemiy Leonov, Chenyan Zhang, Bernhard Hommel, Tingyong Feng
https://arxiv.org/abs/2506.21000
Explainable Vulnerability Detection in C/C++ Using Edge-Aware Graph Attention Networks
Radowanul Haque, Aftab Ali, Sally McClean, Naveed Khan
https://arxiv.org/abs/2507.16540
New Hardness Results for Low-Rank Matrix Completion
Dror Chawin, Ishay Haviv
https://arxiv.org/abs/2506.18440 https://arxiv.org/pdf/2…
Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
https://social.coop/@eloquence/114940607434005478
My criticisms include:
- Current LLM technology has many layers, but the biggest, most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims that a small level of expenses on AI equates to low climate impact. However, given the deep subsidies currently in place by the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars' worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.
Cowboys Headlines: NFL world reacts to Micah Parsons news; is it 'worst trade ever'? https://cowboyswire.usatoday.com/story/sports/nfl/cowboys/2025/08/29/news-headlines-august-28-2025-micah-par…
Just finished "Get a Life, Chloe Brown" by Talia Hibbert. It's... much less chaste than most of the other romances I've been reading, but also incredibly sweet and positive, so I enjoyed it a lot.
My one reservation is that it does the thing a lot of romance novels do where they equate physical desire with romantic desire, and physical flirtations/advances with actual communication, and yes, people equate those things in the real world all the time, but it's often really harmful when they do that.
This novel does better with consent than 99% of the field probably, and legitimately deserves props for that, so this isn't the harsh criticism I'd level if it seriously broke the "would this be okay if we didn't have access to interior monologues" test, but it skirts the edges of that a bit.
#AmReading