Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@markrsmith@smithtodon.org
2026-02-09 00:01:33

My wife is rooting for entropy - for things to go wrong
#superbowl2026 #entropy #engineer

@Techmeme@techhub.social
2026-01-26 06:30:41

Entropy, a decentralized crypto custodian that raised a $25M seed led by a16z in June 2022, is shutting down after "several pivots, and two rounds of layoffs" (Zack Abrams/The Block)
theblock.co/post/386942/entro…

@toxi@mastodon.thi.ng
2026-01-28 21:10:36

A perfect meditation & expression of both the sorrow and hope of these dark days...
Love Over Entropy — Ojalá (2025)
#Music4Coding

@arXiv_physicsbioph_bot@mastoxiv.page
2026-02-03 08:42:58

Harnessing the Peripheral Surface Information Entropy from Globular Protein-Peptide Complexes
Tyler Grear, Donald J. Jacobs
arxiv.org/abs/2602.00498

@arXiv_nlinPS_bot@mastoxiv.page
2026-02-26 11:06:15

Crosslisted article(s) found for nlin.PS. arxiv.org/list/nlin.PS/new
[1/1]:
- Spectral entropy of the discrete Hasimoto effective potential exposes sub-residue geometric trans...
Yiquan Wang
arxiv.org/abs/2602.21787 mastoxiv.page/@arXiv_qbioBM_bo
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:44:31

The Diffusion Duality, Chapter II: $\Psi$-Samplers and Efficient Curriculum
Justin Deschenaux, Caglar Gulcehre, Subham Sekhar Sahoo
arxiv.org/abs/2602.21185 arxiv.org/pdf/2602.21185 arxiv.org/html/2602.21185
arXiv:2602.21185v1 Announce Type: new
Abstract: Uniform-state discrete diffusion models excel at few-step generation and guidance due to their ability to self-correct, making them preferred over autoregressive or Masked diffusion models in these settings. However, their sampling quality plateaus with ancestral samplers as the number of steps increases. We introduce a family of Predictor-Corrector (PC) samplers for discrete diffusion that generalize prior methods and apply to arbitrary noise processes. When paired with uniform-state diffusion, our samplers outperform ancestral sampling on both language and image modeling, achieving lower generative perplexity at matched unigram entropy on OpenWebText and better FID/IS scores on CIFAR10. Crucially, unlike conventional samplers, our PC methods continue to improve with more sampling steps. Taken together, these findings call into question the assumption that Masked diffusion is the inevitable future of diffusion-based language modeling. Beyond sampling, we develop a memory-efficient curriculum for the Gaussian relaxation training phase, reducing training time by 25% and memory by 33% compared to Duo while maintaining comparable perplexity on OpenWebText and LM1B and strong downstream performance. We release code, checkpoints, and a video-tutorial on: s-sahoo.com/duo-ch2
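The abstract above describes a generic Predictor-Corrector scheme: at each noise level, a predictor step advances the sample, then one or more corrector steps refine it at that level. A minimal schematic of that loop, where `predictor` and `corrector` are hypothetical placeholders (the paper's actual samplers are in its code release, not reproduced here):

```python
def pc_sample(x, timesteps, predictor, corrector, n_corrector_steps=1):
    """Generic predictor-corrector sampling loop (schematic sketch only).

    `predictor(x, t_cur, t_next)` moves the sample from noise level t_cur
    to t_next (e.g. one ancestral step); `corrector(x, t)` refines the
    sample at a fixed noise level. Both callables are assumptions for
    illustration, not the paper's samplers.
    """
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        x = predictor(x, t_cur, t_next)   # predictor: advance one noise level
        for _ in range(n_corrector_steps):
            x = corrector(x, t_next)      # corrector: refine at that level
    return x
```

The abstract's key empirical claim is that, unlike ancestral sampling alone, adding corrector steps keeps improving quality as the step count grows.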

@arXiv_physicschemph_bot@mastoxiv.page
2026-03-26 09:57:22

Replaced article(s) found for physics.chem-ph. arxiv.org/list/physics.chem-ph
[1/1]:
- Proposal on the Calculation of the Ionisation-Cluster Size Distribution (I). The Model and Its Si...
Bernd Heide
arxiv.org/abs/2404.03961 mastoxiv.page/@arXiv_physicsco
- Bridging chemistry and Gaussian boson sampling: A photonic hierarchy of approximations for molecu...
Jan-Lucas Eickmann, et al.
arxiv.org/abs/2507.19442 mastoxiv.page/@arXiv_quantph_b
- Benchmarking Universal Machine Learning Interatomic Potentials for Supported Nanoparticles: Decou...
Jiayan Xu, Abhirup Patra, Amar Deep Pathak, Sharan Shetty, Detlef Hohl, Roberto Car
arxiv.org/abs/2512.05221 mastoxiv.page/@arXiv_condmatmt
- Knowledge Distillation of a Protein Language Model Yields a Foundational Implicit Solvent Model
Justin Airas, Bin Zhang
arxiv.org/abs/2601.05388 mastoxiv.page/@arXiv_physicsbi
- Universal Foundations of Thermodynamics: Entropy and Energy Beyond Equilibrium and Without Extens...
Gian Paolo Beretta
arxiv.org/abs/2602.09986 mastoxiv.page/@arXiv_quantph_b

@arXiv_qbioPE_bot@mastoxiv.page
2026-03-27 08:09:37

Modeling the mutational dynamics of very short tandem repeats
Amos Onn (Chair of Experimental Medicine and Therapy Research, University of Regensburg, Bioinformatics Group, Faculty of Mathematics and Computer Science, and Interdisciplinary Center for Bioinformatics, University of Leipzig), Tzipy Marx (Department of Computer Science and Applied Mathematics, Weizmann Institute of Science), Liming Tao (Cellular Tissue Genomics, Genentech), Tamir Biezuner (Department of Computer Science and Applied Mathematics, Weizmann Institute of Science), Ehud Shapiro (Department of Computer Science and Applied Mathematics, Weizmann Institute of Science), Christoph A. Klein (Chair of Experimental Medicine and Therapy Research, University of Regensburg, Fraunhofer Institute for Toxicology and Experimental Medicine Regensburg), Peter F. Stadler (Bioinformatics Group, Faculty of Mathematics and Computer Science, and Interdisciplinary Center for Bioinformatics, University of Leipzig, Max Planck Institute for Mathematics in the Sciences, Institute for Theoretical Chemistry, University of Vienna, Facultad de Ciencias, Universidad Nacional de Colombia, Center for non-coding RNA in Technology and Health, University of Copenhagen, Santa Fe Institute)
arxiv.org/abs/2603.25628 arxiv.org/pdf/2603.25628 arxiv.org/html/2603.25628
arXiv:2603.25628v1 Announce Type: new
Abstract: Short tandem repeats (STRs) are low-entropy regions in the genome, consisting of a short (1-6 bp) unit that is consecutively repeated multiple times. They are known for high mutational instability due to so-called stutter mutations, in which the number of units in the run increases or decreases. In particular, STRs with a repeat unit length of 1-2 bp are prone to mutate even within several cell divisions. The extremely rapid accumulation of variation makes them interesting phylogenetic markers for retrospective single-cell lineage reconstruction. Here we model their mutational dynamics at the level of individual repeat unit type and then aggregate length variations over many STR loci with the aim of obtaining a very fast "molecular clock". We calibrate our model on several datasets with known lineage structure prepared from cultured cells. We find that the mutational dynamics of STRs are reasonably consistent for a given cell line, but vary among different ones. This suggests that the dynamics are not entirely explained by mutations in caretaker genes; rather, various other factors play a role -- possibly tissue origin and differentiation state. Further data and research are necessary to assess their relative effects.

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:43:51

SELAUR: Self Evolving LLM Agent via Uncertainty-aware Rewards
Dengjia Zhang, Xiaoou Liu, Lu Cheng, Yaqing Wang, Kenton Murray, Hua Wei
arxiv.org/abs/2602.21158 arxiv.org/pdf/2602.21158 arxiv.org/html/2602.21158
arXiv:2602.21158v1 Announce Type: new
Abstract: Large language models (LLMs) are increasingly deployed as multi-step decision-making agents, where effective reward design is essential for guiding learning. Although recent work explores various forms of reward shaping and step-level credit assignment, a key signal remains largely overlooked: the intrinsic uncertainty of LLMs. Uncertainty reflects model confidence, reveals where exploration is needed, and offers valuable learning cues even in failed trajectories. We introduce SELAUR: Self Evolving LLM Agent via Uncertainty-aware Rewards, a reinforcement learning framework that incorporates uncertainty directly into the reward design. SELAUR integrates entropy-, least-confidence-, and margin-based metrics into a combined token-level uncertainty estimate, providing dense confidence-aligned supervision, and employs a failure-aware reward reshaping mechanism that injects these uncertainty signals into step- and trajectory-level rewards to improve exploration efficiency and learning stability. Experiments on two benchmarks, ALFWorld and WebShop, show that our method consistently improves success rates over strong baselines. Ablation studies further demonstrate how uncertainty signals enhance exploration and robustness.
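The three uncertainty metrics the abstract names (entropy, least confidence, margin) are standard quantities computed from a token's predictive distribution. A minimal sketch of combining them, assuming equal weights and a max-entropy normalization (both assumptions are mine; SELAUR's actual combination is defined in the paper):

```python
import math

def token_uncertainty(probs, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine three standard uncertainty metrics for one token's
    predictive distribution `probs` (assumed to sum to 1, len >= 2).
    Illustrative only; not SELAUR's exact formulation.
    """
    # Shannon entropy, normalized to [0, 1] by its maximum log|V|.
    h = -sum(p * math.log(p) for p in probs if p > 0)
    h_norm = h / math.log(len(probs))
    sorted_p = sorted(probs, reverse=True)
    least_conf = 1.0 - sorted_p[0]               # 1 - max probability
    margin = 1.0 - (sorted_p[0] - sorted_p[1])   # small top-2 gap => uncertain
    w_h, w_lc, w_m = weights
    return w_h * h_norm + w_lc * least_conf + w_m * margin

# A peaked distribution should score as less uncertain than a flat one.
peaked = [0.9, 0.05, 0.03, 0.02]
flat = [0.25, 0.25, 0.25, 0.25]
```

In the framework described above, such per-token scores would then be reshaped into step- and trajectory-level reward signals.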