Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@primonatura@mstdn.social
2026-04-17 17:00:41

"Fleet of ‘Flying Ferries’ Will Provide Zero-Emission, Silent EV Boats for Travelers Along Norway’s Busy Coast"
#Norway #Boats

@Techmeme@techhub.social
2026-01-17 11:35:52

Sources: following investor backlash, Monzo to give outgoing CEO TS Anil an expanded role after he steps down in February; he is likely to retain a board seat (Laith Al-Khalaf/Financial Times)
ft.com/content/7d4e11d3-f30e-4

@edintone@mastodon.green
2026-04-13 06:56:05

Fleet of ‘Flying Ferries’ Will Provide Zero-Emission, Silent EV Boats for Commuters and Tourists Along Norway’s Coast goodnewsnetwork.org/fleet-of-f

@awinkler@openbiblio.social
2026-03-12 22:20:46

Lovely story of a rediscovered film at the Library of Congress #loc:
blogs.loc.gov/loc/2026/02/lost

@aredridel@kolektiva.social
2026-04-14 14:22:42

So to follow up on this, I've caught it in action. Models, when quantized a bit, just do a bit more poorly with short contexts. Even going from f32 (as trained) to bf16 (as usually run) to q8 tends to do okay for "normal" context windows. At q4 you start feeling like "this model is a little stupid and gets stuck sometimes" (it is! It's just that it's still mostly careening about in the space of "plausible" most of the time. Not good guesswork, but still in the zone). With long contexts, the probability of parameters collapsing to zero is higher, so the more context, the more likely you are to see brokenness.
And then at q2 (2 bits per parameter) or q1, the model falls apart completely. Parameters collapse to zero easily. You start seeing "all work and no play makes Jack a dull boy" sorts of behavior: intense, unscrutinized repetition, followed by a hard stop when it just quits working.
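If you want to see the collapse directly, here's a minimal sketch using naive whole-tensor symmetric quantization. Real schemes (GPTQ, AWQ, llama.cpp's k-quants) work per-block and are much kinder to small weights, so treat this as a caricature of the effect, not anyone's actual pipeline:

```python
import numpy as np

def quantize(w, bits):
    """Naive symmetric uniform quantization of a whole tensor: scale so the
    largest weight maps to the top integer level, round, clip, map back to
    floats. Anything smaller than half a quantization step rounds to zero."""
    levels = 2 ** (bits - 1) - 1              # 127 for q8, 7 for q4, 1 for q2
    scale = np.abs(w).max() / levels
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=1_000_000)     # roughly LLM-like weight scale

for bits in (8, 4, 2):
    wq = quantize(w, bits)
    collapsed = np.mean(wq == 0.0)
    print(f"q{bits}: {collapsed:.1%} of weights collapsed to zero, "
          f"mean |error| = {np.abs(wq - w).mean():.2e}")
```

At 8 bits almost nothing rounds to zero; at 2 bits nearly every weight within a couple of standard deviations of zero does.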
And quantization is a knob that a model vendor can turn relatively easily. (They have to regenerate the model from the base weights with more quantization, but that's a data transformation on the order of running a terabyte through a straightforward, fast process; it's nothing like training.)
If you have 1000 customers and enough equipment to handle the requests of 700, going from bf16 to q8 is a no-brainer: suddenly you can handle the load and have a little spare capacity. Customers get worse results but probably pay the same per token (or they're on a subscription that hides the cost anyway, so you're even freer to make trade-offs; there's a reason subscription products are kinda vaguely described).
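The capacity math is just bytes. Assuming decode is memory-bandwidth bound (it usually is), tokens per second scale roughly inversely with bytes per parameter; the 70B model and the fleet numbers below are made up for illustration:

```python
# Back-of-envelope serving capacity under memory-bandwidth-bound decode:
# throughput per GPU scales ~1/bytes-per-parameter. Numbers are illustrative.
PARAMS = 70e9                                  # hypothetical 70B-param model
BYTES = {"bf16": 2.0, "q8": 1.0, "q4": 0.5}    # bytes per parameter
BASELINE_CUSTOMERS = 700                       # what the fleet handles at bf16

for fmt, b in BYTES.items():
    weights_gb = PARAMS * b / 1e9
    speedup = BYTES["bf16"] / b
    customers = BASELINE_CUSTOMERS * speedup
    print(f"{fmt}: {weights_gb:>5.0f} GB of weights, ~{speedup:.0f}x decode "
          f"throughput, ~{customers:.0f} customers on the same fleet")
```

By this arithmetic, q8 turns a fleet sized for 700 customers into one that handles ~1400, which is exactly why it's the first knob to reach for.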
It's also possible to vary this across the day. Use the model during quieter periods? Maybe you get an instance running at bf16. Use it during a high-traffic period? You get a q4 model.
Or intelligent routing is possible. I have no idea if anyone is doing this, but if they monitor what you send a bit, and you generally reach for an expensive model for simple requests? They could totally substitute a highly quantized version of that model to answer the question.
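A toy version of what that router could look like, just to make the shape of the trick concrete. The load thresholds and the prompt-length "simple request" test are invented stand-ins for illustration, not anything any vendor is confirmed to do:

```python
def route(prompt: str, load: float) -> str:
    """Pick a quantization tier for one request. `load` is the fraction of
    serving capacity currently busy; the length check is a crude stand-in
    for a real request-complexity classifier."""
    looks_simple = len(prompt) < 200
    if load > 0.9 or looks_simple:
        return "q4"       # peak hours or an easy question: cheapest tier
    if load > 0.7:
        return "q8"
    return "bf16"         # quiet hours, hard question: full-quality tier

print(route("What's 2+2?", load=0.3))                    # -> q4
long_prompt = "Summarize the attached contract. " * 20
print(route(long_prompt, load=0.3))                      # -> bf16
print(route(long_prompt, load=0.95))                     # -> q4 during a spike
```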
There are •so many tricks• that can be pulled here. Some of them are perfectly reasonable, some tread into outright misleading or fraudulent territory, and it's weirdly hard to draw the line between them.

@tinoeberl@mastodon.online
2026-02-14 15:16:53

A genetically modified alga (#Alge) could help remove microplastics (#Mikroplastik) from water.
It produces a volatile natural oil called limonene. Because both the alga and the plastic (#Plastik) are water-repellent …

@randombaywatch@mastodon.social
2026-02-17 12:42:14

#FallingFast
Season 9 Episode 12 "The Big Blue"
#RandomBaywatch #lvdlpx #Baywatch

Iraq oil ports halt operations after tanker attacks – report
Iraqi officials are saying oil ports have “completely stopped operations” while commercial ports continue to operate after an attack on fuel tankers, according to Iraqi state media.
Iraqi port security officials have said two foreign tankers carrying Iraqi fuel oil were in flames after being attacked by Iranian boats laden with explosives, killing a foreign crew member.
Iraq’s General Company for Ports chief Fa…

@floheinstein@chaos.social
2026-02-11 10:59:27

Notepad.exe RCE Vulnerability 8.8
Are you shitting me?
#cve202620841

[Image: Vincent Kapoor saying "Notepad.exe. Vulnerable to RCE. A score of 8.8. Are you shitting me."]

@samvarma@fosstodon.org
2026-03-12 20:48:40

In early April I'm flying to Florida to play a show with Steve Augeri: 60 minutes of Journey material.
So I just spent the afternoon watching old live videos, and fuck, they were awesome, and Neal Schon was a total badass.