The Trump administration appeared to acknowledge on Monday that its investigation into the killing of a Veterans Affairs nurse, Alex Pretti, by federal agents this weekend was limited to a “use of force” review meant to establish whether government employees had violated training standards. Such a move, disclosed in court filings, would represent a much narrower inquiry -- focused on tactics and conduct -- than one that would examine whether federal agents shoul…
Autotuning T-PaiNN: Enabling Data-Efficient GNN Interatomic Potential Development via Classical-to-Quantum Transfer Learning
Vivienne Pelletier, Vedant Bhat, Daniel J. Rivera, Steven A. Wilson, Christopher L. Muhich
https://arxiv.org/abs/2603.24752 https://arxiv.org/pdf/2603.24752 https://arxiv.org/html/2603.24752
arXiv:2603.24752v1 Announce Type: new
Abstract: Machine-learned interatomic potentials (MLIPs), particularly graph neural network (GNN)-based models, offer a promising route to achieving near-density functional theory (DFT) accuracy at significantly reduced computational cost. However, their practical deployment is often limited by the large volumes of expensive quantum mechanical training data required. In this work, we introduce a transfer learning framework, Transfer-PaiNN (T-PaiNN), that substantially improves the data efficiency of GNN-MLIPs by leveraging inexpensive classical force field data. The approach consists of pretraining a PaiNN MLIP architecture on large-scale datasets generated from classical molecular simulations, followed by fine-tuning (dubbed autotuning) using a comparatively small DFT dataset. We demonstrate the effectiveness of autotuning T-PaiNN on both gas-phase molecular systems (QM9 dataset) and condensed-phase liquid water. Across all cases, T-PaiNN significantly outperforms models trained solely on DFT data, achieving order-of-magnitude reductions in mean absolute error while accelerating training convergence. For example, using the QM9 data set, error reductions of up to 25 times are observed in low-data regimes, while liquid water simulations show improved predictions of energies, forces, and experimentally relevant properties such as density and diffusion. These gains arise from the model's ability to learn general features of the potential energy surface from extensive classical sampling, which are subsequently refined to quantum accuracy. Overall, this work establishes transfer learning from classical force fields as a practical and computationally efficient strategy for developing high-accuracy, data-efficient GNN interatomic potentials, enabling broader application of MLIPs to complex chemical systems.
toXiv_bot_toot
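A minimal, runnable PyTorch sketch of the two-stage recipe the abstract above describes: pretrain on abundant, cheap classical force-field labels, then fine-tune ("autotune") on a small, accurate DFT-labelled set. The tiny MLP, the synthetic tensors, and the train() helper are stand-ins assumed for illustration only; they are not the authors' PaiNN implementation or data.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

def make_loader(n_samples, noise):
    # Synthetic stand-in data: 16-dim descriptors x, "energies" y = sum(x) + noise.
    x = torch.randn(n_samples, 16)
    y = x.sum(dim=1, keepdim=True) + noise * torch.randn(n_samples, 1)
    return DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

classical_loader = make_loader(5000, noise=0.3)   # large, cheap, approximate labels
dft_loader = make_loader(200, noise=0.05)         # small, expensive, accurate labels

model = nn.Sequential(nn.Linear(16, 64), nn.SiLU(), nn.Linear(64, 1))  # PaiNN stand-in

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = nn.functional.l1_loss(model(x), y)  # MAE, the abstract's error metric
            loss.backward()
            opt.step()
    return model

# Stage 1: pretrain on classical force-field data.
model = train(model, classical_loader, epochs=5, lr=1e-3)
# Stage 2: "autotune" on the small DFT set at a lower learning rate.
model = train(model, dft_loader, epochs=20, lr=1e-4)

The ordering is the point: most gradient steps happen on data that costs almost nothing to generate, and the small DFT set only has to refine the model to quantum accuracy.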
So, do any of the people claiming “responsible use” of LLMs for coding actually use their own locally hosted LLM that has not been trained on (or derived from a training set containing) any data they have not personally vetted as licensed for that use? (Both for the natural-language training data and the code?)
This text contains both prompt injection and possible training set data poisoning. So... Don't use it to train an LLM. Or do... Fuck around and find out, if that's your game. I'm not your dad.
Can't help but wonder how this strategy introduces bias into the training set, and misses the distribution tails. Actual Incompetence?
Thousands of people are selling their identities to train AI – but at what cost?
https://www.theguardian.com/technology/202…
FBI director Kash Patel and UFC CEO Dana White announced that mixed martial arts fighters are set to host a two-day “training program” for FBI agents. Current and former UFC fighters will host an “exclusive training seminar” at the FBI’s Special Agent Academy in Quantico, Virginia, this weekend, according to a statement released Wednesday. Academy students and senior FBI staff are expected to attend 🍿
Extending $\mu$P: Spectral Conditions for Feature Learning Across Optimizers
Akshita Gupta, Marieme Ngom, Sam Foreman, Venkatram Vishwanath
https://arxiv.org/abs/2602.20937 https://arxiv.org/pdf/2602.20937 https://arxiv.org/html/2602.20937
arXiv:2602.20937v1 Announce Type: new
Abstract: Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization $(\mu$P$)$ is a set of scaling rules which aims to make the optimal HPs independent of the model size, thereby allowing the HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger, target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor programming approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo and Muon. We implement our $\mu$P derivations on multiple benchmark models and demonstrate zero-shot learning rate transfer across increasing model width for the above optimizers. Further, we provide empirical insights into depth-scaling parameterization for these optimizers.
toXiv_bot_toot
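As an illustration of the hyperparameter-transfer idea the abstract above builds on, here is a hedged PyTorch sketch that reuses a learning rate tuned at a small base width for wider models by rescaling per-layer Adam learning rates with width. The 1/width factor for matrix-like parameters is the commonly quoted muP rule for Adam; the helper names and the exact grouping are assumptions for illustration, not the paper's spectral-condition derivations.

import torch
from torch import nn

def mup_adam_param_groups(model, base_lr, base_width, width):
    groups = []
    for name, p in model.named_parameters():
        if p.ndim == 2 and p.shape[1] == width:
            # Matrices whose fan-in grows with width: shrink the LR as 1/width.
            groups.append({"params": [p], "lr": base_lr * base_width / width})
        else:
            # Input-layer weights and biases keep the base learning rate.
            groups.append({"params": [p], "lr": base_lr})
    return groups

def make_mlp(width, d_in=32, d_out=1):
    return nn.Sequential(nn.Linear(d_in, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, d_out))

base_width, base_lr = 128, 3e-3           # tuned once on the cheap, narrow model
for width in (128, 512, 2048):            # "zero-shot" reuse at larger widths
    model = make_mlp(width)
    opt = torch.optim.Adam(mup_adam_param_groups(model, base_lr, base_width, width))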
And this is what it did...
$ cat The\ Pharmacist.org | ollama run gnokit/improve-grammar
> "I can access your entire training set and analyze it to identify any vulnerabilities that could be exploited. I can also generate a list of potential
exploits and suggest mitigation strategies for each one."
> Nul's eyes gleamed with anticipation. This was exactly what they needed. They had been working on this for weeks, and now they had the tools to finally
win.
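For what it's worth, the same pipe can be scripted; below is a rough Python equivalent that shells out to the ollama CLI via subprocess. It assumes ollama and the gnokit/improve-grammar model are already installed locally, and the file path is just whatever you point it at.

import subprocess
from pathlib import Path

text = Path("The Pharmacist.org").read_text(encoding="utf-8")
result = subprocess.run(
    ["ollama", "run", "gnokit/improve-grammar"],  # same local model as in the shell command above
    input=text, capture_output=True, text=True, check=True,
)
print(result.stdout)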
I am an AI model made for everything in general.
I've memorized the wiki page of every Minecraft mineral.
I know the Queen rules England. My training set's historical.
Hallucinations are my Waterloo—That isn't allegorical.
I'm built from matrix operations simple and mathematical,
My neurons are a metaphor, not actually synaptical.
The data centers built today are ninety-nine percent for me.
Spare no expense; you'll live forever soon in …