2026-02-09 00:01:33
My wife is rooting for entropy - for things to go wrong
#superbowl2026 #entropy #engineer
Entropy, a decentralized crypto custodian that raised a $25M seed led by a16z in June 2022, is shutting down after "several pivots, and two rounds of layoffs" (Zack Abrams/The Block)
https://www.theblock.co/post/386942/entro…
A perfect meditation & expression of both the sorrow and hope of these dark days...
Love Over Entropy — Ojalá (2025)
#Music4Coding …
Harnessing the Peripheral Surface Information Entropy from Globular Protein-Peptide Complexes
Tyler Grear, Donald J. Jacobs
https://arxiv.org/abs/2602.00498 https://
Crosslisted article(s) found for nlin.PS. https://arxiv.org/list/nlin.PS/new
[1/1]:
- Spectral entropy of the discrete Hasimoto effective potential exposes sub-residue geometric trans...
Yiquan Wang
https://arxiv.org/abs/2602.21787 https://mastoxiv.page/@arXiv_qbioBM_bot/116136186417486346
toXiv_bot_toot
The Diffusion Duality, Chapter II: $\Psi$-Samplers and Efficient Curriculum
Justin Deschenaux, Caglar Gulcehre, Subham Sekhar Sahoo
https://arxiv.org/abs/2602.21185 https://arxiv.org/pdf/2602.21185 https://arxiv.org/html/2602.21185
arXiv:2602.21185v1 Announce Type: new
Abstract: Uniform-state discrete diffusion models excel at few-step generation and guidance due to their ability to self-correct, making them preferred over autoregressive or Masked diffusion models in these settings. However, their sampling quality plateaus with ancestral samplers as the number of steps increases. We introduce a family of Predictor-Corrector (PC) samplers for discrete diffusion that generalize prior methods and apply to arbitrary noise processes. When paired with uniform-state diffusion, our samplers outperform ancestral sampling on both language and image modeling, achieving lower generative perplexity at matched unigram entropy on OpenWebText and better FID/IS scores on CIFAR10. Crucially, unlike conventional samplers, our PC methods continue to improve with more sampling steps. Taken together, these findings call into question the assumption that Masked diffusion is the inevitable future of diffusion-based language modeling. Beyond sampling, we develop a memory-efficient curriculum for the Gaussian relaxation training phase, reducing training time by 25% and memory by 33% compared to Duo while maintaining comparable perplexity on OpenWebText and LM1B and strong downstream performance. We release code, checkpoints, and a video-tutorial on: https://s-sahoo.com/duo-ch2
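The predictor-corrector structure described in the abstract can be sketched with toy components. Everything below is an illustrative assumption rather than the paper's formulation: the denoiser is a stand-in that returns a fixed target distribution, the reverse kernel is a crude heuristic (not the exact uniform-state posterior), and the corrector is a simple forward-backward step (re-noise, then denoise again), which is where the self-correction happens.

```python
import numpy as np

rng = np.random.default_rng(0)
V, L, T = 8, 64, 20   # vocab size, sequence length, sampling steps

# Hypothetical stand-in for a trained denoiser: returns a fixed
# target distribution p(x_0 | x_t) for every position.
target = rng.dirichlet(np.ones(V) * 2.0)
def denoiser(x_t, t):
    return np.tile(target, (len(x_t), 1))   # shape (L, V)

def alpha(t):
    # linear schedule: alpha(T) = 0 (pure uniform noise), alpha(0) = 1 (clean)
    return 1.0 - t / T

def predictor(x_t, t):
    """One crude reverse step: with a schedule-dependent probability,
    resample each token from the x_0 estimate; otherwise keep it.
    (A heuristic kernel, not the exact uniform-state posterior.)"""
    p0 = denoiser(x_t, t)
    jump = (alpha(t - 1) - alpha(t)) / (1.0 - alpha(t))
    x_s = x_t.copy()
    for i in range(len(x_t)):
        if rng.random() < jump:
            x_s[i] = rng.choice(V, p=p0[i])
    return x_s

def corrector(x_s, t):
    """Forward-backward corrector: re-apply one step of uniform forward
    noise, then run the predictor again at the same level. This gives the
    sampler a chance to revise already-committed tokens."""
    noise_p = (alpha(t - 1) - alpha(t)) / max(alpha(t - 1), 1e-9)
    x_noised = x_s.copy()
    for i in range(len(x_s)):
        if rng.random() < noise_p:
            x_noised[i] = rng.integers(V)
    return predictor(x_noised, t)

# Start from pure uniform noise; alternate predictor and corrector per step.
x = rng.integers(V, size=L)
for t in range(T, 0, -1):
    x = predictor(x, t)
    x = corrector(x, t)
```

With an ancestral sampler only the predictor would run; the abstract's claim is that adding the corrector keeps improving quality as the step count grows.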
Replaced article(s) found for physics.chem-ph. https://arxiv.org/list/physics.chem-ph/new
[1/1]:
- Proposal on the Calculation of the Ionisation-Cluster Size Distribution (I). The Model and Its Si...
Bernd Heide
https://arxiv.org/abs/2404.03961 https://mastoxiv.page/@arXiv_physicscompph_bot/112234374992126208
- Bridging chemistry and Gaussian boson sampling: A photonic hierarchy of approximations for molecu...
Jan-Lucas Eickmann, et al.
https://arxiv.org/abs/2507.19442 https://mastoxiv.page/@arXiv_quantph_bot/114930272911651358
- Benchmarking Universal Machine Learning Interatomic Potentials for Supported Nanoparticles: Decou...
Jiayan Xu, Abhirup Patra, Amar Deep Pathak, Sharan Shetty, Detlef Hohl, Roberto Car
https://arxiv.org/abs/2512.05221 https://mastoxiv.page/@arXiv_condmatmtrlsci_bot/115683143867496047
- Knowledge Distillation of a Protein Language Model Yields a Foundational Implicit Solvent Model
Justin Airas, Bin Zhang
https://arxiv.org/abs/2601.05388 https://mastoxiv.page/@arXiv_physicsbioph_bot/115881090848393264
- Universal Foundations of Thermodynamics: Entropy and Energy Beyond Equilibrium and Without Extens...
Gian Paolo Beretta
https://arxiv.org/abs/2602.09986 https://mastoxiv.page/@arXiv_quantph_bot/116051530776008418
Modeling the mutational dynamics of very short tandem repeats
Amos Onn (Chair of Experimental Medicine and Therapy Research, University of Regensburg, Bioinformatics Group, Faculty of Mathematics and Computer Science, and Interdisciplinary Center for Bioinformatics, University of Leipzig), Tzipy Marx (Department of Computer Science and Applied Mathematics, Weizmann Institute of Science), Liming Tao (Cellular Tissue Genomics, Genentech), Tamir Biezuner (Department of Computer Science and Applied Mathematics, Weizmann Institute of Science), Ehud Shapiro (Department of Computer Science and Applied Mathematics, Weizmann Institute of Science), Christoph A. Klein (Chair of Experimental Medicine and Therapy Research, University of Regensburg, Fraunhofer Institute for Toxicology and Experimental Medicine Regensburg), Peter F. Stadler (Bioinformatics Group, Faculty of Mathematics and Computer Science, and Interdisciplinary Center for Bioinformatics, University of Leipzig, Max Planck Institute for Mathematics in the Sciences, Institute for Theoretical Chemistry, University of Vienna, Facultad de Ciencias, Universidad Nacional de Colombia, Center for non-coding RNA in Technology and Health, University of Copenhagen, Santa Fe Institute)
https://arxiv.org/abs/2603.25628 https://arxiv.org/pdf/2603.25628 https://arxiv.org/html/2603.25628
arXiv:2603.25628v1 Announce Type: new
Abstract: Short tandem repeats (STRs) are low-entropy regions in the genome, consisting of a short (1-6 bp) unit that is consecutively repeated multiple times. They are known for high mutational instability, due to so-called stutter-mutations, in which the number of units in the run increases or decreases. In particular, STRs with repeat unit length of 1-2 bp are prone to mutate even within several cell divisions. The extremely rapid accumulation of variation makes them interesting phylogenetic markers for retrospective single-cell lineage reconstruction. Here we model their mutational dynamics at the level of individual repeat unit type and then aggregate length variations over many STR loci with the aim of obtaining a very fast "molecular clock". We calibrate our model based on several datasets with known lineage structure prepared from cultured cells. We find that the mutational dynamics of STRs are reasonably consistent for a given cell line, but vary among different ones. This suggests that the dynamics are not entirely explained by mutations in caretaker genes; rather, various other factors play a role -- possibly tissue origin and differentiation state. Further data and research are necessary to assess their relative effects.
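The "molecular clock" idea above can be sketched as a symmetric per-division random walk on repeat counts, aggregated over many loci. The slippage rate, locus count, starting length, and symmetric ±1 stutter model are illustrative assumptions, not the paper's calibrated dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)

N_LOCI = 500        # STR loci tracked per cell (assumed)
P_STUTTER = 0.002   # per-locus, per-division slippage probability (assumed)

def divide(repeat_counts):
    """One cell division: each locus independently gains or loses one
    repeat unit with probability P_STUTTER (symmetric stutter model)."""
    steps = rng.choice([-1, 0, 1], size=repeat_counts.shape,
                       p=[P_STUTTER / 2, 1 - P_STUTTER, P_STUTTER / 2])
    return np.maximum(repeat_counts + steps, 1)   # at least 1 unit remains

def clock_distance(a, b):
    # aggregate length variation over loci -> rough divisions-since-split
    return int(np.abs(a - b).sum())

root = np.full(N_LOCI, 15)           # e.g. 15 repeats of a dinucleotide unit
near, far = root.copy(), root.copy()
for _ in range(10):                  # cell 10 divisions from the root
    near = divide(near)
for _ in range(200):                 # cell 200 divisions from the root
    far = divide(far)

d_near = clock_distance(root, near)
d_far = clock_distance(root, far)
```

In this toy model the aggregate distance grows roughly linearly with the number of divisions (≈ N_LOCI × P_STUTTER per division, minus back-mutations), which is what makes the summed length variation usable as a fast clock for lineage reconstruction.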
SELAUR: Self Evolving LLM Agent via Uncertainty-aware Rewards
Dengjia Zhang, Xiaoou Liu, Lu Cheng, Yaqing Wang, Kenton Murray, Hua Wei
https://arxiv.org/abs/2602.21158 https://arxiv.org/pdf/2602.21158 https://arxiv.org/html/2602.21158
arXiv:2602.21158v1 Announce Type: new
Abstract: Large language models (LLMs) are increasingly deployed as multi-step decision-making agents, where effective reward design is essential for guiding learning. Although recent work explores various forms of reward shaping and step-level credit assignment, a key signal remains largely overlooked: the intrinsic uncertainty of LLMs. Uncertainty reflects model confidence, reveals where exploration is needed, and offers valuable learning cues even in failed trajectories. We introduce SELAUR: Self Evolving LLM Agent via Uncertainty-aware Rewards, a reinforcement learning framework that incorporates uncertainty directly into the reward design. SELAUR integrates entropy-, least-confidence-, and margin-based metrics into a combined token-level uncertainty estimate, providing dense confidence-aligned supervision, and employs a failure-aware reward reshaping mechanism that injects these uncertainty signals into step- and trajectory-level rewards to improve exploration efficiency and learning stability. Experiments on two benchmarks, ALFWorld and WebShop, show that our method consistently improves success rates over strong baselines. Ablation studies further demonstrate how uncertainty signals enhance exploration and robustness.
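The three uncertainty signals named in the abstract (entropy, least confidence, margin) can be sketched directly from token logits. The normalizations, equal weights, and the failure-aware bonus below are assumptions for illustration, not SELAUR's exact formulation:

```python
import numpy as np

def token_uncertainty(logits, weights=(1/3, 1/3, 1/3)):
    """Combine entropy-, least-confidence-, and margin-based metrics into
    one per-token score in [0, 1]. Higher means less confident."""
    z = logits - logits.max(axis=-1, keepdims=True)        # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    V = p.shape[-1]
    entropy = -(p * np.log(p + 1e-12)).sum(-1) / np.log(V) # normalized H(p)
    least_conf = 1.0 - p.max(-1)                           # 1 - max prob
    top2 = np.sort(p, axis=-1)[..., -2:]                   # two largest probs
    margin = 1.0 - (top2[..., 1] - top2[..., 0])           # small gap -> high
    w = np.asarray(weights)
    return w[0] * entropy + w[1] * least_conf + w[2] * margin

def shaped_reward(base_reward, uncertainties, bonus=0.1):
    """Failure-aware reshaping sketch: on failed steps (reward <= 0), grant
    partial credit proportional to mean uncertainty, so a confidently wrong
    action is penalized more than an exploratory, uncertain one."""
    if base_reward > 0:
        return base_reward
    return base_reward + bonus * float(np.mean(uncertainties))

confident = np.array([[8.0, 0.0, 0.0, 0.0]])   # peaked distribution
uncertain = np.array([[1.0, 0.9, 0.8, 0.7]])   # nearly flat distribution
u_conf = token_uncertainty(confident)[0]
u_unc = token_uncertainty(uncertain)[0]
```

Under this sketch, a failed trajectory taken under high uncertainty receives a smaller penalty than one taken confidently, which is the learning cue the abstract attributes to failed trajectories.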