Finally, with a hundred thousand dead, two-thirds of the buildings bombed, all the hospitals and infrastructure destroyed, more than a million people on the verge of starvation, and Israel's leadership even more openly bragging about their intent to ethnically cleanse the area, it's gone on "too long".
Well done Lammy and Starmer. You finally noticed, eh?
So trade deal talks are off, and weapons export licenses are... Well. We'll see. "Always Under Review" they say.
And the main reason for the slight and tiny change in emphasis? Mostly Trump. They are following Trump. They feel more comfortable criticizing Israel's genocide now that Trump seems more at ease with it.
#ukpol #gaza #israel
The full formula for the probability of "success" is:
p = {
1/(2^(-n+1)) if n is negative, or
1 - (1/(2^(n+1))) if n is zero or positive
}
(Both branches have the same value when n is 0, so the behavior is smooth around the origin.)
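For concreteness, here's a minimal Python sketch of the formula as reconstructed above (the function name is mine):

```python
def success_prob(n: int) -> float:
    """Chance of success at advantage level n (negative n = disadvantage)."""
    if n < 0:
        return 1 / 2 ** (-n + 1)
    return 1 - 1 / 2 ** (n + 1)

# n = -2, -1, 0, 1, 2  ->  0.125, 0.25, 0.5, 0.75, 0.875
```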
How can we tweak this?
First, we can introduce fixed success and/or failure chances unaffected by level, with this formula only taking effect if those don't apply. For example, you could do 10% failure, 80% by formula, and 10% success to keep things from being too sure either way even when levels are very high or low. On the other hand, this flattening makes the benefit of extra advantage levels even less exciting.
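One way to read that 10/80/10 split as a single number, building on the sketch above (my reading, not a fixed rule):

```python
def flattened_prob(n: int, auto_fail: float = 0.10, auto_succeed: float = 0.10) -> float:
    """Fixed 10% auto-failure and 10% auto-success; the remaining
    80% of outcomes are decided by the level-based formula."""
    return auto_succeed + (1 - auto_fail - auto_succeed) * success_prob(n)

# flattened_prob(10)  ~= 0.90: even huge advantage caps out near 90%
# flattened_prob(-10) ~= 0.10: huge disadvantage bottoms out near 10%
```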
Second, we could allow for gradations of success/failure, and treat the coin pools I used to explain that math a bit like dice pools. One in-between approach could require linearly more successful flips to reach each next higher grade of success. For example, simple success on a crit roll might mean dealing 1.5x damage, but if you succeed on 2 of your flips, you get 9/4 damage, or on 4 flips 27/8, or on 7 flips 81/16. In this world, stacking crit levels might be a viable build, and just giving up on armor would be super dangerous. In the particular case I was using this for just now, I can't easily do gradations of success (that's the reason I turned to probabilities in the first place), but I think I'd favor this approach when feasible.
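A rough sketch of that grading, assuming my reading of "linearly more flips" is right (grade g costs 1 + g(g-1)/2 total successes, which reproduces the 1, 2, 4, 7 sequence above):

```python
def crit_grade(successes: int) -> int:
    """Highest grade reached: grade g needs 1 + g*(g-1)/2 total successes
    (1, 2, 4, 7, ...: each grade costs one more extra success)."""
    g = 0
    while successes >= 1 + (g + 1) * g // 2:
        g += 1
    return g

def damage_multiplier(successes: int) -> float:
    """Each grade multiplies damage by another 1.5x: 3/2, 9/4, 27/8, 81/16."""
    return 1.5 ** crit_grade(successes)

# damage_multiplier(7) == 1.5 ** 4 == 5.0625  (i.e. 81/16)
```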
The main innovation here over simple dice pools is how to handle situations where the number of dice should be negative. I'm almost certain it's not a truly novel innovation though, and some RPG fan can point out which system already does this (please actually do this, I'm an RPG nerd too at heart).
I'll leave this with one more tweak we could do: what if the number 2 in the probability equation were 3, or 2/3? I think this has a similar effect to just scaling all the modifiers a bit, but the algebra escapes me at the moment and I'm a bit lazy. In any case, reducing the base of the probability exponent should let you get a few more gradations near 50%, which is probably a good thing, since the default goes from 25% straight to 50% and then to 75% with no integer stops in between.
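Chasing that algebra a bit anyway (my own working, not gospel): swapping the 2 for another base only keeps the two branches agreeing at n = 0 if the base is exactly 2, so the sketch below rewrites the formula as p = b^n/2 for negative n and p = 1 - b^(-n)/2 otherwise, which keeps 50% at level 0 for any base b > 1. In that form the base really is just a modifier scale, since b^n = 2^(n·log2(b)); a base between 1 and 2 (like 3/2) gives finer steps near 50%, while a base below 1 (like 2/3) pushes p out of [0, 1].

```python
def success_prob_base(n: int, b: float = 2.0) -> float:
    """Generalized formula keeping p(0) = 1/2 for any base b > 1."""
    if n < 0:
        return 0.5 * b ** n
    return 1 - 0.5 * b ** -n

# since 0.5 * b**n == 0.5 * 2**(n * log2(b)), base b at level n matches
# base 2 at level n * log2(b): changing the base scales the modifiers
# b = 3/2: n = -2..2 -> 0.222, 0.333, 0.500, 0.667, 0.778 (finer steps)
# b = 3:   n = -2..2 -> 0.056, 0.167, 0.500, 0.833, 0.944 (coarser steps)
```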
A model-agnostic likelihood for the reinterpretation of the $\boldsymbol{B^{+}\to K^{+} \nu \bar{\nu}}$ measurement at Belle II
Belle II Collaboration: Abumusabh, Adachi, Aggarwal, Ahmed, Ahn, Akopov, Alghamdi, Alhakami, Aloisio, Althubiti, Amos, Ky, Asner, Atmacan, Ayad, Babu, Bae, Baghel, Bambade, Banerjee, Barrett, Bartl, Baudot, Baur, Beaubien, Becherer, Becker, Bennett, Bernlochner, Bertacchi, Bertemes, Bertholet, Bessner, Bettarini, Bhardwaj, Bhuyan, Bianchi, Biswas, Bodrov, Bondar, Bonvi…
One of the goals I've set for further development of #Python eclasses in #Gentoo is to avoid needless complexity. Unfortunately, the subject matter sometimes requires it. That said, many of the functions added lately merely codify what ebuilds had been doing manually for years.
We started disabling plugin autoloading years ago. First we did that only for individual packages that caused issues, then for those where tests ended up being really slow, and finally pretty much anywhere `python_test()` was declared. Doing it all manually was particularly cumbersome; all I needed for `EPYTEST_PLUGINS` was a good idea of how to generalize it.
Similarly, `EPYTEST_XDIST` was added after we had long been manually adding `epytest -p xdist -n "$(makeopts_jobs)" --dist=worksteal`; while at it, I added `EPYTEST_JOBS` to override the job count.
Perhaps `EPYTEST_TIMEOUT` wasn't that common. However, it was meant to help CI systems that could otherwise get stuck on a hanging test.
Similarly, "standard library" version (like `3.9`) matching to `python_gen_cond_dep` was added after a long period of explicitly stating `python3_9 pypy3`. As an extra benefit, this also resolved the problem that at the time `pypy3` could mean different Python versions.
Honey bees remove 80% of pollen, leaving native bees with nothing #environment
Understanding the Error Sensitivity of Privacy-Aware Computing
Matías Mazzanti (University of Buenos Aires), Esteban Mocskos (University of Buenos Aires), Augusto Vega (IBM T. J. Watson Research Center), Pradip Bose (IBM T. J. Watson Research Center)
https://arxiv.org/abs/2506.07957
Robotic Multimodal Data Acquisition for In-Field Deep Learning Estimation of Cover Crop Biomass
Joe Johnson, Phanender Chalasani, Arnav Shah, Ram L. Ray, Muthukumar Bagavathiannan
https://arxiv.org/abs/2506.22364
This https://arxiv.org/abs/2506.02120 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csNE_…
Random-key genetic algorithms
Mariana A. Londe, Luciana S. Pessoa, Carlos E. Andrade, José F. Gonçalves, Mauricio G. C. Resende
https://arxiv.org/abs/2506.02120