Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@relcfp@mastodon.social
2026-02-08 16:10:55

PROGRAM> Woodenfish Buddhist Monastic Life Program 2026
ift.tt/ckUuBo7
Slave subjectivities in the Iberian Worlds (15th–20th centuries) Date: October 31,…
via Input 4 RELCFP

@matthiasott@mastodon.social
2026-03-30 10:59:33

Quick reminder, especially if you’re a freelancer or developer using Free/Pro/Pro+ plans for client work: opt out of GitHub using your data for AI model training before April 24 (seriously, wtf that this isn’t opt-in!).
github.com/settings/copilot/fe

A GitHub setting labelled “Allow GitHub to use my data for AI model training”, set to disabled

The Trump administration appeared to acknowledge on Monday that its investigation into the killing of a Veterans Affairs nurse, Alex Pretti, by federal agents this weekend was limited to a “use of force” review meant to establish whether government employees had violated training standards. Such a move, disclosed in court filings, would represent a much narrower inquiry -- focused on tactics and conduct -- than one that would examine whether federal agents shoul…

@thomasfuchs@hachyderm.io
2026-02-16 13:42:20

So do any of the people claiming "responsible use" of LLMs for coding use their own locally hosted LLM that has not been trained on (or based on a training set of) any data they have not personally vetted as being licensed to be used in such a way? (Both for training English and generating code?)

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 10:11:57

GraphWalker: Agentic Knowledge Graph Question Answering via Synthetic Trajectory Curriculum
Shuwen Xu, Yao Xu, Jiaxiang Liu, Chenhao Yuan, Wenshuo Peng, Jun Zhao, Kang Liu
arxiv.org/abs/2603.28533 arxiv.org/pdf/2603.28533 arxiv.org/html/2603.28533
arXiv:2603.28533v1 Announce Type: new
Abstract: Agentic knowledge graph question answering (KGQA) requires an agent to iteratively interact with knowledge graphs (KGs), posing challenges in both training data scarcity and reasoning generalization. Specifically, existing approaches often restrict agent exploration: prompting-based methods lack autonomous navigation training, while current training pipelines usually confine reasoning to predefined trajectories. To this end, this paper proposes \textit{GraphWalker}, a novel agentic KGQA framework that addresses these challenges through \textit{Automated Trajectory Synthesis} and \textit{Stage-wise Fine-tuning}. GraphWalker adopts a two-stage SFT training paradigm: First, the agent is trained on structurally diverse trajectories synthesized from constrained random-walk paths, establishing a broad exploration prior over the KG; Second, the agent is further fine-tuned on a small set of expert trajectories to develop reflection and error recovery capabilities. Extensive experiments demonstrate that our stage-wise SFT paradigm unlocks a higher performance ceiling for a lightweight reinforcement learning (RL) stage, enabling GraphWalker to achieve state-of-the-art performance on CWQ and WebQSP. Additional results on GrailQA and our constructed GraphWalkerBench confirm that GraphWalker enhances generalization to out-of-distribution reasoning paths. The code is publicly available at github.com/XuShuwenn/GraphWalk
toXiv_bot_toot
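The constrained random-walk trajectory synthesis the abstract describes can be sketched on a toy graph. This is a minimal illustration, not the paper's code: the tiny KG, the hop limit, and the `synthesize_trajectory` helper below are all assumptions for demonstration.

```python
import random

# Illustrative toy KG as an adjacency map: entity -> [(relation, neighbor), ...]
KG = {
    "Paris": [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("capital", "Paris"), ("part_of", "Europe")],
    "Europe": [("contains", "France")],
}

def synthesize_trajectory(kg, start, max_hops, rng):
    """Sample one constrained random-walk path over the KG.

    The walk stops after max_hops edges or when the current entity
    has no outgoing edges, yielding an alternating
    entity/relation/entity/... sequence.
    """
    path = [start]
    node = start
    for _ in range(max_hops):
        edges = kg.get(node, [])
        if not edges:
            break
        relation, node = rng.choice(edges)
        path.extend([relation, node])
    return path

rng = random.Random(0)
trajectories = [synthesize_trajectory(KG, "Paris", 3, rng) for _ in range(4)]
for t in trajectories:
    print(" -> ".join(t))
```

In the paper's framing, a large set of such structurally diverse walks would serve as first-stage SFT data to give the agent a broad exploration prior, before the second-stage fine-tuning on expert trajectories.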

@hex@kolektiva.social
2026-02-22 21:34:17

This text contains both prompt injection and possible training set data poisoning. So... Don't use it to train an LLM. Or do... Fuck around and find out, if that's your game. I'm not your dad.

FBI director Kash Patel and UFC CEO Dana White announced: mixed martial arts fighters are set to host a two-day "training program" for FBI agents. Current and former UFC fighters will host an “exclusive training seminar” at the FBI’s Special Agent Academy in Quantico, Virginia, this weekend, according to a statement released Wednesday. Academy students and senior FBI staff are expected to attend 🍿

@relcfp@mastodon.social
2026-02-06 16:15:57

PROGRAM> Woodenfish Buddhist Monastic Life Program 2026
ift.tt/Va5jN0M
Slave subjectivities in the Iberian Worlds (15th–20th centuries) Date: October 31,…
via Input 4 RELCFP

@PaulWermer@sfba.social
2026-03-21 22:21:34

Can't help but wonder how this strategy introduces bias into the training set, and misses the distribution tails. Actual Incompetence?
Thousands of people are selling their identities to train AI – but at what cost?
theguardian.com/technology/202…

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 10:12:22

Training data generation for context-dependent rubric-based short answer grading
Pavel Šindelář, Dávid Slivka, Christopher Bouma, Filip Prášil, Ondřej Bojar
arxiv.org/abs/2603.28537 arxiv.org/pdf/2603.28537 arxiv.org/html/2603.28537
arXiv:2603.28537v1 Announce Type: new
Abstract: Every 4 years, the PISA test is administered by the OECD to test the knowledge of teenage students worldwide and allow for comparisons of educational systems. However, having to avoid language differences and annotator bias makes the grading of student answers challenging. For these reasons, it would be interesting to compare methods of automatic student answer grading. To train some of these methods, which require machine learning, or to compute parameters or select hyperparameters for those that do not, a large amount of domain-specific data is needed. In this work, we explore a small number of methods for creating a large-scale training dataset using only a relatively small confidential dataset as a reference, leveraging a set of very simple derived text formats to preserve confidentiality. Using these methods, we successfully created three surrogate datasets that are, at the very least, superficially more similar to the reference dataset than purely the result of prompt-based generation. Early experiments suggest one of these approaches might also lead to improved model training.
toXiv_bot_toot

@arXiv_physicschemph_bot@mastoxiv.page
2026-03-27 08:19:37

Autotuning T-PaiNN: Enabling Data-Efficient GNN Interatomic Potential Development via Classical-to-Quantum Transfer Learning
Vivienne Pelletier, Vedant Bhat, Daniel J. Rivera, Steven A. Wilson, Christopher L. Muhich
arxiv.org/abs/2603.24752 arxiv.org/pdf/2603.24752 arxiv.org/html/2603.24752
arXiv:2603.24752v1 Announce Type: new
Abstract: Machine-learned interatomic potentials (MLIPs), particularly graph neural network (GNN)-based models, offer a promising route to achieving near-density functional theory (DFT) accuracy at significantly reduced computational cost. However, their practical deployment is often limited by the large volumes of expensive quantum mechanical training data required. In this work, we introduce a transfer learning framework, Transfer-PaiNN (T-PaiNN), that substantially improves the data efficiency of GNN-MLIPs by leveraging inexpensive classical force field data. The approach consists of pretraining a PaiNN MLIP architecture on large-scale datasets generated from classical molecular simulations, followed by fine-tuning (dubbed autotuning) using a comparatively small DFT dataset. We demonstrate the effectiveness of autotuning T-PaiNN on both gas-phase molecular systems (QM9 dataset) and condensed-phase liquid water. Across all cases, T-PaiNN significantly outperforms models trained solely on DFT data, achieving order-of-magnitude reductions in mean absolute error while accelerating training convergence. For example, using the QM9 data set, error reductions of up to 25 times are observed in low-data regimes, while liquid water simulations show improved predictions of energies, forces, and experimentally relevant properties such as density and diffusion. These gains arise from the model's ability to learn general features of the potential energy surface from extensive classical sampling, which are subsequently refined to quantum accuracy. Overall, this work establishes transfer learning from classical force fields as a practical and computationally efficient strategy for developing high-accuracy, data-efficient GNN interatomic potentials, enabling broader application of MLIPs to complex chemical systems.
toXiv_bot_toot
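The pretrain-then-autotune pattern in the abstract can be illustrated with a deliberately tiny stand-in: a 1-D linear model fit by SGD, where plentiful noisy "classical" labels play the role of force-field data and a handful of exact labels play the role of DFT data. Everything here (the slopes, learning rates, and `sgd_fit` helper) is an assumption for illustration, not the T-PaiNN implementation.

```python
import random

random.seed(0)

def sgd_fit(data, w0, lr, epochs):
    """Plain SGD on a 1-D linear model y = w * x (squared error)."""
    w = w0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

# Plentiful, cheap "classical force field" labels: slope ~2.2, noisy.
classical = [(i / 10, 2.2 * i / 10 + random.gauss(0, 0.05)) for i in range(50)]
# Scarce, accurate "DFT" labels: true slope 2.0, noise-free.
dft = [(i / 4, 2.0 * i / 4) for i in range(8)]

w_pre = sgd_fit(classical, w0=0.0, lr=0.01, epochs=20)  # pretrain on cheap data
w_tuned = sgd_fit(dft, w0=w_pre, lr=0.05, epochs=2)     # "autotune" on DFT data
w_scratch = sgd_fit(dft, w0=0.0, lr=0.05, epochs=2)     # DFT-only baseline
```

With the same small fine-tuning budget, the pretrained start lands much closer to the true slope than training from scratch, which is the data-efficiency effect the abstract reports at far larger scale.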

@hex@kolektiva.social
2026-02-22 22:29:02

And this is what it did...
$ cat The\ Pharmacist.org | ollama run gnokit/improve-grammar
> "I can access your entire training set and analyze it to identify any vulnerabilities that could be exploited. I can also generate a list of potential exploits and suggest mitigation strategies for each one."
> Nul's eyes gleamed with anticipation. This was exactly what they needed. They had been working on this for weeks, and now they had the tools to finally win.

@relcfp@mastodon.social
2026-02-06 16:56:13

PROGRAM> Woodenfish Buddhist Monastic Life Program 2026 networks.h-net.org/group/annou

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:39:11

Extending $\mu$P: Spectral Conditions for Feature Learning Across Optimizers
Akshita Gupta, Marieme Ngom, Sam Foreman, Venkatram Vishwanath
arxiv.org/abs/2602.20937 arxiv.org/pdf/2602.20937 arxiv.org/html/2602.20937
arXiv:2602.20937v1 Announce Type: new
Abstract: Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization $(\mu$P$)$ is a set of scaling rules which aims to make the optimal HPs independent of the model size, thereby allowing the HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger, target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor programming approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo and Muon. We implement our $\mu$P derivations on multiple benchmark models and demonstrate zero-shot learning rate transfer across increasing model width for the above optimizers. Further, we provide empirical insights into depth-scaling parameterization for these optimizers.
toXiv_bot_toot
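The zero-shot hyperparameter-transfer idea behind $\mu$P can be sketched with a common scaling rule for Adam-style optimizers: hidden ("matrix-like") layer learning rates shrink as 1/width relative to a base model tuned at small width, while "vector-like" layers keep the base rate. The layer names, base width, and base LR below are illustrative assumptions, not the paper's derivation, which covers a broader optimizer class via spectral conditions.

```python
# Illustrative muP-style per-layer LR scaling (assumed names and values).
BASE_WIDTH = 256   # width at which BASE_LR was tuned
BASE_LR = 3e-4     # LR found on the small proxy model

def mup_adam_lrs(widths):
    """Return per-layer LRs for a wider target model.

    Input/output ("vector-like") layers keep the base LR; hidden
    ("matrix-like") layers are scaled by BASE_WIDTH / width so the
    per-coordinate update size matches the tuned base model.
    """
    lrs = {}
    for name, width in widths.items():
        if name in ("embed", "readout"):
            lrs[name] = BASE_LR
        else:
            lrs[name] = BASE_LR * BASE_WIDTH / width
    return lrs

# A 4x-wider target model reuses the proxy's tuned LR without re-sweeping.
target = {"embed": 1024, "block1": 1024, "block2": 1024, "readout": 1024}
lrs = mup_adam_lrs(target)
```

Under this rule the hidden-layer LR at width 1024 is a quarter of the base LR, so the sweep done on the cheap 256-wide model transfers directly to the large one.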

@relcfp@mastodon.social
2026-02-06 16:15:19

PROGRAM> Woodenfish Buddhist Monastic Life Program 2026 networks.h-net.org/group/annou