Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@Dragofix@veganism.social
2026-02-18 22:58:51

China carbon emissions 'flat or falling' in 2025: analysis #China

@Dragofix@veganism.social
2026-02-17 21:57:47

H5N1 bird flu kills more than 50 skuas in first Antarctic wildlife die-off #Antarctica

A trial date has been set for Trump's $10 billion lawsuit against the BBC.
On Thursday, Judge Roy K. Altman of the Southern District of Florida set a provisional start date of ⭐️February 15, 2027, for a two-week trial.
The lawsuit was filed following the release of an episode of Panorama, the BBC's investigative documentary series, titled "Trump: A Second Chance?" In it, the BBC cut together two parts of Trump's January 2021 speech to …

@detondev@social.linux.pizza
2026-02-12 21:00:36

choose your character

Prophet Royal Robertson outside his home (from the documentary Make by Scott Ogden and Malcolm Hearn[1])
In this extract from the film Celestial Knowledge, Credo Mutwa speaks about aliens, or grey beings, known as Mantindane in Africa.
Stories of alien sightings and landings of mysterious aircraft have emerged from Iino as far back as the 1970s. Tsugio Kinoshita, a researcher of unidentified flying objects, said he first saw such a UFO in 1972, at the age of 25.

Kinoshita was hiking a mountain in Fukushima prefecture with four friends when suddenly a saucer-like shape appeared in front of them. “This thing stuck out in front of me. Starting and stopping in the blue sky. Then all of a sudden, it was gone,” he told VICE World …
Tamil Nadu: Man Builds Temple For Alien God
@aredridel@kolektiva.social
2026-04-14 14:22:42

So to follow up on this, I've caught it in action. Models, when quantized a bit, just do a bit more poorly with short contexts. Even going from f32 (as trained) to bf16 (as usually run) to q8, they tend to do okay for "normal" context windows. At q4 you start feeling like "this model is a little stupid and gets stuck sometimes" (it is! It's just that it's still mostly careening about in the space of "plausible" most of the time. Not good guesswork, but still in the zone). With long contexts, the probability of parameters collapsing to zero is higher, so the more context, the more likely you are to see brokenness.
And then at q2 (2 bits per parameter) or q1, the model falls apart completely. Parameters collapse to zero easily. You start seeing "all work and no play makes Jack a dull boy" sorts of behavior, with intense, unscrutinized repetition, followed by a hard stop when it just stops working.
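The collapse-to-zero effect can be sketched with a toy symmetric uniform quantizer. This is a simplification I'm adding for illustration, not the poster's code; real q4/q2 formats use per-block scales and offsets, but the rounding behavior is the same in spirit:

```python
import random

def quantize(weights, bits):
    # Symmetric uniform quantization: snap each weight to the nearest of
    # (2^(bits-1) - 1) levels per sign, then map back to a float.
    levels = 2 ** (bits - 1) - 1            # 127 for q8, 7 for q4, 1 for q2
    scale = max(abs(w) for w in weights) / levels
    return [round(w / scale) * scale for w in weights]

rng = random.Random(0)
weights = [rng.gauss(0, 0.02) for _ in range(1000)]   # small, roughly normal weights
for bits in (8, 4, 2):
    wq = quantize(weights, bits)
    zeroed = sum(1 for w in wq if w == 0.0) / len(wq)
    print(f"q{bits}: {zeroed:.0%} of weights rounded to zero")
```

At q2 there is only one nonzero level per sign, so any weight smaller than half the largest one rounds to exactly zero — which is most of a normal-ish weight distribution, matching the "parameters collapse to zero" behavior described above.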
And quantization is a knob that a model vendor can turn relatively easily. (They have to regenerate the model from the base with more quantization, but it's a data transformation on the order of running a terabyte through a straightforward and fast process — nothing like training.)
If you have 1000 customers and enough equipment to handle the requests of 700, going from bf16 to q8 is a no-brainer. Suddenly you can handle the load and have a little spare capacity. They get worse results, probably pay the same per token (or they're on a subscription that hides the cost anyway so you are even freer to make trade-offs. There's a reason that subscription products are kinda poorly described.)
It's also possible for them to vary this across the day. Use the model during quieter periods? Maybe you get an instance running at bf16. Use it during a high-demand period? You get a q4 model.
Or intelligent routing is possible. No idea if anyone is doing this, but if they monitor what you send a bit, and you generally send simple requests to an expensive model, they could totally substitute a highly quantized version of that model to answer the question.
There are •so many tricks• that can be pulled here. Some of them are perfectly reasonable, some tread into outright misleading or fraudulent territory, and it's weirdly hard to draw the line between them.

@seav@en.osm.town
2026-03-11 00:26:48

One fascinating film trivia I learned recently is that Sofia Coppola and Spike Jonze’s marriage from 1999 to 2003 partially inspired Coppola’s 2003 film Lost In Translation and Jonze’s 2013 film Her.
Coincidentally, both films star Scarlett Johansson as the female lead, both were nominated for the Best Picture Oscar, and both won the Best Original Screenplay Oscar — for Coppola and Jonze, respectively.

Diptych image featuring the movie poster of Lost In Translation on the left, showing Bill Murray in a bathrobe sitting on a hotel bed with the nighttime skyline of Tokyo in the background through a window; and the movie poster of Her on the right, depicting a profile photo of a mustachioed Joaquin Phoenix wearing a red collared shirt against a vivid red background.
@cosmos4u@scicomm.xyz
2026-02-14 01:56:17

While preparing #ArtemisII for flight, NASA engineers are reviewing data after a confidence test Feb. 12, in which operators partially filled the SLS (Space Launch System) core stage liquid hydrogen tank to assess newly replaced seals in an area used to fill the rocket with propellant: nasa.gov/blogs/missions/2026/0 - during the test, "teams encountered an issue with ground support equipment that reduced the flow of liquid hydrogen into the rocket. [...] Engineers will examine findings before setting a timeline for the next test, a second wet dress rehearsal this month. March remains the earliest potential launch window for Artemis II."

@Dragofix@veganism.social
2026-03-15 23:42:00

Mass pilot whale stranding in Indonesia raises questions about ocean health news.mongabay.com/short-articl

@Dragofix@veganism.social
2026-02-17 23:45:26

Quick course correction needed to avoid 'hothouse Earth' scenario, scientists say #climate

@Dragofix@veganism.social
2026-03-15 22:45:31

Microplastics may be quietly damaging your brain and fueling Alzheimer’s and Parkinson’s #health