Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_physicsfludyn_bot@mastoxiv.page
2026-02-26 09:01:51

Large eddy simulation of turbulent swirl-stabilized flames using the front propagation formulation: impact of the resolved flame thickness
Ruochen Guo, Yunde Su, Yuewen Jiang
arxiv.org/abs/2602.21940 arxiv.org/pdf/2602.21940 arxiv.org/html/2602.21940
arXiv:2602.21940v1 Announce Type: new
Abstract: This work extends the front propagation formulation (FPF) combustion model to large eddy simulation (LES) of swirl-stabilized turbulent premixed flames and investigates the effects of resolved flame thickness on the predicted flame dynamics. The FPF method is designed to mitigate the spurious propagation of under-resolved flames while preserving the reaction characteristics of filtered flame fronts. In this study, the model is extended to account for non-adiabatic effects and is coupled with an improved sub-filter flame speed estimation that resolves the inconsistency arising from heat-release effects on local sub-filter turbulence. The performance of the extended FPF method is validated by LES of the TECFLAM swirl-stabilized burner, where the results agree well with experimental measurements. The simulations reveal that the stretching of vortical structures in the outer shear layer leads to the formation of trapped flame pockets, which are identified as the physical mechanism responsible for the secondary temperature peaks observed in the experiment. The prediction of this phenomenon is shown to be strongly dependent on the resolved flame thickness, when the filter size is used for modeling sub-filter flame wrinklings. Without proper modeling of the chemical steepening effects, the thickness of the resolved flame brush is over-predicted, causing the flame consumption rate to be under-estimated. Consequently, the flame brush detaches from the outer shear layer, resulting in a failure to capture the flame pockets and the associated secondary temperature peaks.
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:44:11

Sequential Counterfactual Inference for Temporal Clinical Data: Addressing the Time Traveler Dilemma
Jingya Cheng, Alaleh Azhir, Jiazi Tian, Hossein Estiri
arxiv.org/abs/2602.21168 arxiv.org/pdf/2602.21168 arxiv.org/html/2602.21168
arXiv:2602.21168v1 Announce Type: new
Abstract: Counterfactual inference enables clinicians to ask "what if" questions about patient outcomes, but standard methods assume feature independence and simultaneous modifiability -- assumptions violated by longitudinal clinical data. We introduce the Sequential Counterfactual Framework, which respects temporal dependencies in electronic health records by distinguishing immutable features (chronic diagnoses) from controllable features (lab values) and modeling how interventions propagate through time. Applied to 2,723 COVID-19 patients (383 Long COVID heart failure cases, 2,340 matched controls), we demonstrate that 38-67% of patients with chronic conditions would require biologically impossible counterfactuals under naive methods. We identify a cardiorenal cascade (CKD -> AKI -> HF) with relative risks of 2.27 and 1.19 at each step, illustrating temporal propagation that sequential -- but not naive -- counterfactuals can capture. Our framework transforms counterfactual explanation from "what if this feature were different?" to "what if we had intervened earlier, and how would that propagate forward?" -- yielding clinically actionable insights grounded in biological plausibility.
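The abstract's core mechanics — refusing counterfactuals on immutable features (chronic diagnoses), propagating an intervention forward through later time steps, and the relative risks quantifying the CKD -> AKI -> HF cascade — can be sketched in a few lines. This is purely an illustrative toy under assumed interfaces, not the authors' framework: the function names, the `propagate` model, and the creatinine example are all hypothetical.

```python
# Illustrative sketch: a sequential counterfactual that leaves immutable
# features untouched and carries an intervention forward in time.

def sequential_counterfactual(timeline, t, feature, value, immutable, propagate):
    """Intervene on one feature at step t and propagate the change forward.

    timeline : list of dicts (feature -> value), one per time step
    immutable: set of feature names that may never be modified
    propagate: function (prev_step, step) -> step, a stand-in for the
               temporal model that carries earlier changes forward
    """
    if feature in immutable:
        raise ValueError(f"{feature} is immutable; this counterfactual "
                         "is not biologically possible")
    cf = [dict(step) for step in timeline]  # leave the original untouched
    cf[t][feature] = value
    for i in range(t + 1, len(cf)):
        cf[i] = propagate(cf[i - 1], cf[i])
    return cf


def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio between an exposed and an unexposed group."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)


# Toy usage: lower creatinine at step 0; a hypothetical propagate rule
# keeps later values from exceeding the intervened one.
timeline = [{"ckd": 1, "creatinine": 2.0}, {"ckd": 1, "creatinine": 2.1}]
carry = lambda prev, cur: {**cur, "creatinine": min(cur["creatinine"],
                                                    prev["creatinine"])}
cf = sequential_counterfactual(timeline, 0, "creatinine", 1.0, {"ckd"}, carry)
```

The `immutable` check is what rules out the "biologically impossible" naive counterfactuals the abstract mentions, while `propagate` is where a real temporal model of the health record would plug in.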

@Techmeme@techhub.social
2025-12-03 06:05:49

Stripe agrees to acquire Metronome, which offers APIs to help SaaS companies charge customers based on usage and has raised $128M in total funding (Scott Woody/Metronome)
metronome.com/blog/important-c

@deprogrammaticaipsum@mas.to
2025-11-30 17:16:12

"As explained in chapter 11 of Meyer’s book, assertions are meant to check the correctness of a piece of software; that is, its ability to perform the tasks defined in their specification.
Because, you do have a specification, right? Right?"
deprogrammaticaipsum.com/asser

@mgorny@social.treehouse.systems
2026-01-18 18:04:19

Cynicism, "AI"
Someone pointed me to the "Reflections on 2025" post by Samuel Albanie [1]. The author's writing style makes it quite a fun read, I admit.
The first part, "The Compute Theory of Everything", is an optimistic piece on "#AI". Long story short, poor "AI researchers" have been struggling for years because of the predominant misconception that "machines should have been powerful enough". Fortunately, now they can finally get their hands on the kind of power that used to be available only to supervillains, and all they have to do is forget about morals, accept that their research will be used to murder millions of people, and that a few million more will die as a side effect of the climate crisis. But I'm digressing.
The author is referring to an essay by Hans Moravec, "The Role of Raw Power in Intelligence" [2]. It's also quite an interesting read, starting with a chapter on how intelligence evolved independently at least four times. The key point inferred from that seems to be that all we need is more computing power, and we'll eventually "brute-force" all AI-related problems (or die trying, I guess).
As a disclaimer, I have to say I'm not a biologist. Rather, just a random guy who has read a fair number of pieces on evolution. And I feel like the analogies drawn here are misleading at best.
Firstly, there seems to be an assumption that evolution inexorably leads to higher "intelligence", with a certain implicit assumption about what intelligence is. Per that assumption, any animal that gets "brainier" will eventually become intelligent. However, this seems to miss the point that neither evolution nor learning operates in a void.
Yes, many animals did attain a certain level of intelligence, but they attained it in a long chain of development, while solving specific problems, in specific bodies, in specific environments. I don't think that you can just stuff more brains into a random animal, and expect it to attain human intelligence; and the same goes for a computer — you can't expect that given more power, algorithms will eventually converge on human-like intelligence.
Secondly, and perhaps more importantly, what evolution succeeded at first was producing neural networks that are far more energy-efficient than whatever computers are doing today. Even if "computing power" did indeed pave the way for intelligence, what came first was extremely efficient "hardware". Nowadays, humans seem to be skipping that part. Optimizing is hard, so why bother with it? We can afford bigger data centers, we can afford to waste more energy, we can afford to deprive people of drinking water, so let's just skip to the easy part!
And on top of that, we're trying to squash hundreds of millions of years of evolution into… a decade, perhaps? What could possibly go wrong?
[1] #NoAI #NoLLM #LLM