Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@simon_brooke@mastodon.scot
2025-07-11 11:04:31

"[Chain of reasoning] reports are untrustworthy on principle: they are plausible explanations for plausible responses, and since the inferences involved are more complex, they burn more compute and carbon per query as well as introducing more mistakes"
This is a particularly offensive point about #LLMs: we actually do have a class of systems, inference engines, which do reason and can…

@arXiv_csCV_bot@mastoxiv.page
2025-09-12 10:13:19

Improving Human Motion Plausibility with Body Momentum
Ha Linh Nguyen, Tze Ho Elden Tse, Angela Yao
arxiv.org/abs/2509.09496 arxiv.org/pdf/…

@mgorny@social.treehouse.systems
2025-09-05 10:41:07

"""
In the sixteenth century, lunacy was a constant theme that was never questioned. It was still frequent in the seventeenth century, but started to disappear, and by 1707, the year in which Le François asked the question ‘Estne aliquod lunae in corpora humana imperium?’ (Does the moon have any influence over the human body?), after lengthy discussions, the university decided that their reply was in the negative. In the course of the eighteenth century the moon was rarely cited among the causes of madness, even as a possible factor or an aggravation. But right at the end of the century the idea reappears, perhaps under the influence of English medicine, which had never entirely forgotten the moon, and Daquin, followed by Leuret and Guislain, all admitted the influence of the moon on the phases of maniacal excitement, or at the least on the agitation of their patients. But what is important here is not so much the return of the theme as the possibility and conditions necessary for its reappearance. It reappears entirely transformed, filled with a new significance that it did not formerly possess. In its traditional form, it designated an immediate influence, a direct coincidence in time and intersection in space, whose mode of action was entirely situated in the power of the stars. But in Daquin by contrast, the influence of the moon acts through a whole series of mediations, in a kind of hierarchy, surrounding man. The moon acts on the atmosphere with such intensity that it can set in motion a mass as heavy as the ocean. The nervous system, of all the parts that make up the human organism, is the part most sensitive to atmospheric variations, as the slightest variation in temperature, humidity or dryness can have serious effects upon it. The moon therefore, given the important power that its trajectory exerts on the atmosphere, is likely to act most on people whose nervous fibres are particularly delicate:
“Madness is an exclusively nervous condition, and the brain of a madman must therefore be infinitely more susceptible to the influence of the atmosphere, which itself undergoes considerable changes of intensity as a result of the different positions of the moon relative to the earth.” [Daquin, Philosophie de la folie, Paris, 1792]
"""
(Michel Foucault, History of Madness)

@muz4now@mastodon.world
2025-08-10 23:08:35

Just a taste of "Ritual Escape" - #Listen and #Watch -
or play the full playlist 👉🏼

The video features a person playing a djembe drum outdoors, wearing a light green shirt and a gold ring on their left hand. The drum, with a black frame and a light-colored skin, is held close to the camera, highlighting the intricate patterns on its side. The person's hands are actively engaged in playing the drum, with fingers and palms striking the drumhead in various rhythmic patterns. The background is a natural setting with trees and sunlight filtering through the leaves, creating a seren…
@arXiv_astrophGA_bot@mastoxiv.page
2025-06-13 08:55:10

Precipitation plausible: magnetized thermal instability in the intracluster medium
Benjamin D. Wibking, G. Mark Voit, Brian W. O'Shea
arxiv.org/abs/2506.10277

@adulau@infosec.exchange
2025-08-10 09:26:38

Anyone having issues with Tor for the past few days?
It seems one of the Snowflake bridges is down (but that should not impact obfs4):
gitlab.torproject.org/tpo/anti

"This graph shows the fraction of timeouts and failures when downloading static files of different sizes over Tor, either from a server on the public internet or from an onion server. A timeout occurs when a download does not complete within the scheduled time, in which case it is aborted in order not to overlap with the next scheduled download. A failure occurs when the download completes, but the response is smaller than expected." There is an increase from one source for some failures and ti…
@arXiv_csRO_bot@mastoxiv.page
2025-08-12 11:43:23

LAURON VI: A Six-Legged Robot for Dynamic Walking
Christian Eichmann (FZI Research Center for Information Technology), Sabine Bellmann (FZI Research Center for Information Technology), Nicolas Hügel (FZI Research Center for Information Technology), Louis-Elias Enslin (FZI Research Center for Information Technology), Carsten Plasberg (FZI Research Center for Information Technology), Georg Heppner (FZI Research Center for Information Technology), Arne Roennau (FZI Research Center …

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest, most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this, despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims that its small level of spending on AI equates to a low climate impact. However, given the deep subsidies the big companies currently have in place to attract users, that isn't a great assumption. The fact that their FAQ dodges the question of which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars at minimum, and then you need millions of dollars' worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research, but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@arXiv_csCV_bot@mastoxiv.page
2025-09-10 10:44:01

ScoreHOI: Physically Plausible Reconstruction of Human-Object Interaction via Score-Guided Diffusion
Ao Li, Jinpeng Liu, Yixuan Zhu, Yansong Tang
arxiv.org/abs/2509.07920

@arXiv_csCV_bot@mastoxiv.page
2025-09-12 10:15:39

Geometric Neural Distance Fields for Learning Human Motion Priors
Zhengdi Yu, Simone Foti, Linguang Zhang, Amy Zhao, Cem Keskin, Stefanos Zafeiriou, Tolga Birdal
arxiv.org/abs/2509.09667