Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@netzschleuder@social.skewed.de
2026-01-16 17:00:04

inploid: Inploid: an online social Q&A platform
Inploid is a social question & answer website in Turkish. Users can follow others and see their questions and answers on the main page. Each user is associated with a reputability score, which is influenced by others' feedback on the user's questions and answers. Each user can also specify an interest in topics. The data were crawled in June 2017 and consist of 39,749 nodes and 57,276 directed links between them. In addition, for …

inploid: Inploid: an online social Q&A platform. 39749 nodes, 57276 edges. https://networks.skewed.de/net/inploid
@detondev@social.linux.pizza
2026-03-16 20:30:46

Finally 🙄

In earlier years I thought and wrote things about women of which I am now ashamed, and I want to offer an apology to all of the women on my legal team and to the female sex in general.

There were several reasons for my earlier resentment of women (and I present these only as causes, not as excuses): youthful machismo, my own lack of success with women, a fund of frustrated anger, and the fact that in youth I never had the opportunity to know well any women who were worthy of much respect. (Tho…
@CubitOom@social.linux.pizza
2026-03-16 13:41:39

Fascist paramilitary invaders hit and run a protester (Los Angeles, CA - 03/13/26)
If this angers you, organize to fight fascism.
Video Source:
reddit.com/comments/1rux74w
Article:

@heiseonline@social.heise.de
2026-04-09 14:45:00

TP-Link attack: Microsoft in the crosshairs, Germany in luck
The attack on TP-Link routers and access points aimed at taking over Microsoft Office cloud sessions. According to the BSI, Germany was barely affected.

@fanf@mendeddrum.org
2026-02-16 15:42:03

from my link log —
Towards fearless macros.
lambdaland.org/posts/2023-10-1
saved 2026-02-15

@Carwil@mastodon.online
2026-04-16 11:59:03

"Across a variety of tasks, including mathematical reasoning and reading comprehension, we find that although AI assistance improves performance in the short-term, people perform significantly worse without AI and are more likely to give up."
arxiv.org/abs/2604.04721

@presseportal_pol_NDS@frawas.de
2026-03-16 13:30:26

POL-CLP: Press releases from the northern district of Cloppenburg Cloppenburg/Vechta (ots) - Barßel - Rubbish burned in garden On Sunday, 15 March 2026 at around 12:11, a resident burned assorted rubbish in the garden of her home on Lange Straße. The fire spread to adjacent rubbish bins. ... presseport…

@usul@piaille.fr
2026-04-14 11:04:56

Asahi is not only a Linux distribution
#asahilinux #beer

Fridge with beers in it; one is Asahi, the other one is Angkor
@aredridel@kolektiva.social
2026-04-14 14:22:42

So to follow up on this, I've caught it in action. Models, when quantized a bit, just do a bit more poorly with short contexts. Even going from f32 (as trained) to bf16 (as usually run) to q8 tends to do okay for "normal" context windows. And at q4 you start feeling like "this model is a little stupid and gets stuck sometimes" (it is! It's just that it's still mostly careening about in the space of "plausible" most of the time. Not good guesswork, but still in the zone). With long contexts, the probability of parameters collapsing to zero is higher, so the more context, the more likely you are to see brokenness.
And then at Q2 (2 bits per parameter) or Q1, the model falls apart completely. Parameters collapse to zero easily. You start seeing "all work and no play makes Jack a dull boy" sorts of behavior, with intense and unscrutinized repetition, followed by a hard stop when it just stops working.
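A minimal sketch of that collapse effect, assuming plain symmetric per-tensor rounding (not any particular vendor's quantization scheme): as the bit width drops, a growing share of small-magnitude weights rounds to exactly zero.

import numpy as np

def quantize(weights, bits):
    # Symmetric quantization: scale so the largest magnitude hits the top code.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(weights).max() / qmax
    return np.round(weights / scale) * scale   # round to integer codes, dequantize

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=100_000)          # toy small-magnitude weights

for bits in (8, 4, 2):
    dq = quantize(w, bits)
    print(f"q{bits}: {np.mean(dq == 0.0):.0%} of weights collapse to exactly zero")

With this toy normal distribution the collapse goes from a few percent at q8 to roughly a quarter at q4 to nearly everything at q2, which is the "falls apart completely" regime described above.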
And quantization is a knob that a model vendor can turn relatively easily (they have to regenerate the model from the base weights at the new quantization, but that's a data transformation on the order of running a terabyte through a straightforward, fast process, not anything like training).
If you have 1,000 customers and enough equipment to handle the requests of 700, going from bf16 to q8 is a no-brainer. Suddenly you can handle the load and have a little spare capacity. Customers get worse results and probably pay the same per token (or they're on a subscription that hides the cost anyway, so you're even freer to make trade-offs; there's a reason subscription products are kinda poorly described).
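A back-of-the-envelope version of that capacity argument, with made-up numbers (the 70B parameter count and the 8×80 GB node are assumptions, not anything from the post): halving the bytes per weight roughly doubles how many replicas the same hardware can host.

params = 70e9                                  # hypothetical 70B-parameter model
node_bytes = 8 * 80e9                          # assumed 8-accelerator node, 80 GB each
for name, bytes_per_param in [("bf16", 2.0), ("q8", 1.0), ("q4", 0.5)]:
    weight_bytes = params * bytes_per_param
    fits = int(node_bytes // weight_bytes)     # ignores KV cache and activations
    print(f"{name}: ~{weight_bytes / 1e9:.0f} GB of weights, ~{fits} replicas per node")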
It's also possible for them to vary this across the day: use the model during quieter periods? Maybe you get an instance running bf16. Use it during a high-traffic period? You get a Q4 model.
Or intelligent routing is possible. No idea if anyone is doing this, but if they monitor what you send a bit, and you generally shoot for an expensive model for simple requests? They could totally substitute a highly quantized version of the model to answer the question.
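A hypothetical sketch of that routing idea; none of this is a real provider API, and the model names, thresholds, and "simplicity" check are invented, but it shows how little machinery the substitution needs:

def looks_simple(prompt: str) -> bool:
    # Crude stand-in for whatever classifier a provider might run on traffic.
    return len(prompt) < 200 and "```" not in prompt

def pick_variant(prompt: str, fleet_load: float) -> str:
    if fleet_load > 0.9:            # fleet saturated: everyone gets the q4 build
        return "big-model-q4"
    if looks_simple(prompt):        # short, code-free prompts get the cheaper build
        return "big-model-q8"
    return "big-model-bf16"         # only the remaining traffic sees full precision

print(pick_variant("What's the capital of France?", fleet_load=0.5))  # big-model-q8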
There are •so many tricks• that can be pulled here. Some of them are very reasonable to make, some of them tread into the outright misleading or fraudulent, and it's weirdly hard to draw the line between them.

@Techmeme@techhub.social
2026-04-09 08:55:32

Tubi becomes the first streamer to launch a native app within ChatGPT, allowing viewers to find movies or shows to watch by using conversational phrases (Lauren Forristal/TechCrunch)
techcrunch.com/2026/04/08/tubi