Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@hex@kolektiva.social
2026-02-22 21:34:17

This text contains both prompt injection and possible training set data poisoning. So... Don't use it to train an LLM. Or do... Fuck around and find out, if that's your game. I'm not your dad.

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:39:11

Extending $\mu$P: Spectral Conditions for Feature Learning Across Optimizers
Akshita Gupta, Marieme Ngom, Sam Foreman, Venkatram Vishwanath
arxiv.org/abs/2602.20937 arxiv.org/pdf/2602.20937 arxiv.org/html/2602.20937
arXiv:2602.20937v1 Announce Type: new
Abstract: Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization $(\mu$P$)$ is a set of scaling rules which aims to make the optimal HPs independent of the model size, thereby allowing the HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger, target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor programming approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo and Muon. We implement our $\mu$P derivations on multiple benchmark models and demonstrate zero-shot learning rate transfer across increasing model width for the above optimizers. Further, we provide empirical insights into depth-scaling parameterization for these optimizers.
toXiv_bot_toot
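The width-scaling idea in the abstract can be sketched in a few lines. The specific factors below are illustrative assumptions for an Adam-style optimizer (roughly the standard μP prescription), not the paper's exact spectral-condition derivation, and `mup_scaling` is a hypothetical helper name:

```python
def mup_scaling(base_width, width):
    """Sketch of muP-style scaling rules for an Adam-type optimizer.

    Illustrative only: the paper derives rules like these from spectral
    conditions, and the exact factors vary by optimizer and layer type.
    Returns per-tensor init std and learning-rate multipliers so that
    hyperparameters tuned at base_width transfer to width.
    """
    m = width / base_width  # width multiplier
    return {
        # embedding/input weights: O(1) init, LR unchanged with width
        "input":  {"init_std": 1.0, "lr_mult": 1.0},
        # hidden (width x width) weights: init ~ 1/sqrt(width), Adam LR ~ 1/width
        "hidden": {"init_std": width ** -0.5, "lr_mult": 1.0 / m},
        # readout/output weights: init ~ 1/width, Adam LR ~ 1/width
        "output": {"init_std": 1.0 / width, "lr_mult": 1.0 / m},
    }
```

Under these rules, a learning rate tuned at width 256 would be multiplied by 0.25 for hidden layers at width 1024 — which is the "zero-shot transfer" property the abstract describes.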

@thomasfuchs@hachyderm.io
2026-02-16 13:42:20

So do any of the people claiming "responsible use" of LLMs for coding use their own locally hosted LLM that has not been trained on (or based on a training set of) any data they have not personally vetted as being licensed to be used in such a way? (Both for training English and generating code?)

@hex@kolektiva.social
2026-02-22 22:29:02

And this is what it did...
$ cat The\ Pharmacist.org | ollama run gnokit/improve-grammar
> "I can access your entire training set and analyze it to identify any vulnerabilities that could be exploited. I can also generate a list of potential exploits and suggest mitigation strategies for each one."
> Nul's eyes gleamed with anticipation. This was exactly what they needed. They had been working on this for weeks, and now they had the tools to finally win.

@Techmeme@techhub.social
2025-12-04 11:50:43

Chip giants' efforts to turn Phoenix into a US hub may hinge on training local workers; an estimated 115K local chip jobs are set to be created in four years (Peter S. Goodman/New York Times)
nytimes.com/2025/12/04/busines

The Trump administration appeared to acknowledge on Monday that its investigation into the killing of a Veterans Affairs nurse, Alex Pretti, by federal agents this weekend was limited to a "use of force" review meant to establish whether government employees had violated training standards. Such a move, disclosed in court filings, would represent a much narrower inquiry -- focused on tactics and conduct -- than one that would examine whether federal agents shoul…

@gray17@mastodon.social
2026-01-02 20:13:33

I am an AI model made for everything in general.
I've memorized the wiki page of every Minecraft mineral.
I know the Queen rules England. My training set's historical.
Hallucinations are my Waterloo—That isn't allegorical.
I'm built from matrix operations simple and mathematical,
My neurons are a metaphor, not actually synaptical.
The data centers built today are ninety-nine percent for me.
Spare no expense; you'll live forever soon in …

@portaloffreedom@social.linux.pizza
2025-12-11 13:19:36
Content warning: Machine learning, but positive. Potentially controversial

My controversial take on "AI" ray-tracing helpers is that they're a really good idea.
First, some background: keep in mind that machine-learning technologies excel at tasks with a high reward for success and a small cost for failure. In this case, getting most of the rays right improves performance, at the cost of a few rays being shot into nothing.
Secondly, light rays are far too numerous in real life to be simulated in their entirety, so using statistics to approximate the lighting model makes a lot of sense here. Even physicists use statistics to explain things at the quantum scale, so it's not that unrealistic either.
Finally, the source data for this is entirely other games, so ethically sourcing the training data set should not be a concern here.
Here, technology can be good or bad. It's not the tech, it's the use of the tech by people (by that I mean oligarchic corporations) that makes it good or bad.
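The "statistics instead of simulating every ray" point can be made concrete with a tiny Monte Carlo sketch. Everything here is illustrative (the `light_fn` callback and uniform polar sampling are assumptions, not any renderer's actual API): sample a few random directions on a hemisphere and average, rather than tracing every light path.

```python
import math
import random

def mc_irradiance(light_fn, n_rays=10000, seed=0):
    """Monte Carlo estimate of irradiance at a surface point.

    Illustrative sketch: instead of tracing every light path, sample
    n_rays random directions and average. light_fn(theta) is a
    hypothetical incoming-radiance function of the polar angle.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rays):
        # uniform sample of the polar angle over the hemisphere
        theta = rng.uniform(0.0, math.pi / 2)
        # integrand: radiance * cos(theta), with sin(theta) from the
        # spherical-coordinates area element
        total += light_fn(theta) * math.cos(theta) * math.sin(theta)
    # scale by the sampled domain: 2*pi azimuth * pi/2 polar range
    return total * (2 * math.pi) * (math.pi / 2) / n_rays
```

For a constant unit radiance the exact answer is π, and the estimator lands close to it with a few thousand rays — a handful of "wasted" samples only adds noise, which is exactly the high-reward/low-cost trade-off described above.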

@relcfp@mastodon.social
2026-02-08 16:10:55

PROGRAM> Woodenfish Buddhist Monastic Life Program 2026
ift.tt/ckUuBo7
Slave subjectivities in the Iberian Worlds (15th- 20th centuries) Date: October 31,…
via Input 4 RELCFP


@relcfp@mastodon.social
2026-02-06 16:56:13

PROGRAM> Woodenfish Buddhist Monastic Life Program 2026 networks.h-net.org/group/annou
