2026-02-08 21:42:03
from my link log —
pred_recdec: Predicated LL / recursive descent parser / grammar interpreter in Rust.
https://github.com/wareya/pred_recdec
saved 2026-02-08
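As a reminder of what "predicated" buys you here: each grammar alternative is guarded by a predicate on the lookahead token, so the parser commits to a branch without backtracking. A toy Python sketch of the idea (pred_recdec itself is Rust, and every name below is invented for illustration, not its API):

```python
# Toy predicated recursive descent parser: each alternative in stmt() is
# guarded by a predicate on the next token, so no backtracking is needed.

def tokenize(src):
    return src.split()

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, tok):
        assert self.peek() == tok, f"expected {tok!r}, got {self.peek()!r}"
        self.pos += 1

    def take(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    # stmt := "print" expr | "let" NAME "=" expr
    def stmt(self):
        if self.peek() == "print":      # predicate: lookahead token decides
            self.eat("print")
            return ("print", self.expr())
        elif self.peek() == "let":
            self.eat("let")
            name = self.take()
            self.eat("=")
            return ("let", name, self.expr())
        raise SyntaxError(f"no predicate matched {self.peek()!r}")

    # expr := NUMBER ("+" NUMBER)*   (evaluated on the fly for brevity)
    def expr(self):
        total = int(self.take())
        while self.peek() == "+":
            self.eat("+")
            total += int(self.take())
        return total

print(Parser(tokenize("let x = 1 + 2")).stmt())  # → ('let', 'x', 3)
```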
I might want to look at this later… 👀
❤️ https://github.com/voicetreelab/voicetree
recursive DNS servers should also act as authoritative DNS servers for certain special "locally served" zones
https://www.rfc-editor.org/rfc/rfc6303
Ricursive, founded by ex-Google researchers to automate advanced chip design, raised $335M from Sequoia, Radical, Lightspeed, and others at a $4B valuation (Cade Metz/New York Times)
https://www.nytimes.com/2026/01/26/technology/recursive-ai-ricursive.html
recursive make!
everyone agrees it is bad!
lots of software still uses recursive make!
https://lobste.rs/c/azxg6o
why!
🌳 Recursive entries via parent_id — enabling nested hierarchies like threaded comments, email topics with messages & private notes
📝 Immutability: create new versions instead of updating — Entry is a pointer to current state with full edit history
🛠️ Battle-tested at #37signals in #Basecamp
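The parent_id pattern above is easy to sketch: flat rows, each pointing at its parent, folded into a nested tree in one pass plus a recursion (illustrative Python only; the rows and field names are made up, not 37signals' actual schema):

```python
# Build a nested hierarchy (threaded comments) from flat rows with a
# parent_id column. Roots have parent_id = None.

from collections import defaultdict

rows = [
    {"id": 1, "parent_id": None, "body": "root"},
    {"id": 2, "parent_id": 1,    "body": "reply"},
    {"id": 3, "parent_id": 1,    "body": "another reply"},
    {"id": 4, "parent_id": 2,    "body": "nested reply"},
]

def build_tree(rows):
    # Index children by parent_id, then recursively attach them.
    children = defaultdict(list)
    for r in rows:
        children[r["parent_id"]].append(r)

    def attach(parent_id):
        return [{**r, "children": attach(r["id"])} for r in children[parent_id]]

    return attach(None)

tree = build_tree(rows)
```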
Sources: Richard Socher's Recursive is in talks to raise hundreds of millions at a $4B pre-money valuation to build self-improving superintelligent AI (Bloomberg)
https://www.bloomberg.com/news/articles/2026-01-23/ai-startup…
And what of it if I made a full-blown relational-logic-based database to make a recursive inventory à la "Baldur's Gate 3"!
That is also what they did, so I am allowed to make a stupidly gigantic heap of hashtables to implement something that could very well be done with a couple of OOP-style classes.
RE: https://mastodon.social/@urlyman/116232505890032336
If you're reading only one thing today, let it be this thread below, connecting the Strait of Hormuz with the recursive risks & dependencies across fertilizers, food safety, energy, minerals, EV…
3Blue1Brown is one of the best-known math channels on YouTube. He explains high-level maths in an almost meditative way using simple but eye-opening motion graphics. I just stumbled upon this video about a paper that figured out how M.C. Escher made a certain distorted, recursive drawing.
The artist probably didn't know about logarithms of complex numbers and neither do I. But it's super interesting even if you just watch for the animations:
#AI model collapse: Experimental evidence of progressive ChatGPT models self-convergence https://arxiv.org/abs/2603.12683v2
🪟 Full Window Function support: ROW_NUMBER, RANK, DENSE_RANK, LAG, LEAD, NTILE, FIRST_VALUE, LAST_VALUE & more for analytical workloads
🔄 Common Table Expressions including recursive CTEs for complex hierarchical and graph-style data queries
📈 Advanced aggregations: ROLLUP, CUBE & GROUPING SETS for multi-dimensional reporting and subtotal calculations
💾 Write-Ahead Logging (WAL) with periodic snapshots for crash-safe persistence – runs in both in-memory and file-based …
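A recursive CTE is exactly the SQL counterpart of the parent_id tree walk: seed with the roots, then repeatedly join children onto the working set. A minimal sketch using sqlite3 as a stand-in engine (the product above isn't named here, so this only illustrates the standard `WITH RECURSIVE` feature, not that engine's dialect):

```python
# Walk a parent_id hierarchy with a recursive CTE, tracking depth.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entry (id INTEGER, parent_id INTEGER, name TEXT)")
con.executemany("INSERT INTO entry VALUES (?, ?, ?)", [
    (1, None, "root"),
    (2, 1,    "child"),
    (3, 2,    "grandchild"),
])

rows = con.execute("""
    WITH RECURSIVE tree(id, name, depth) AS (
        -- anchor: the roots
        SELECT id, name, 0 FROM entry WHERE parent_id IS NULL
        UNION ALL
        -- recursive step: join children onto what we have so far
        SELECT e.id, e.name, t.depth + 1
        FROM entry e JOIN tree t ON e.parent_id = t.id
    )
    SELECT name, depth FROM tree ORDER BY depth
""").fetchall()

print(rows)  # → [('root', 0), ('child', 1), ('grandchild', 2)]
```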
A Novel Explicit Filter for the Approximate Deconvolution in Large-Eddy Simulation on General Unstructured Grids: A posteriori tests on highly stretched grids
Mohammad Bagher Molaei, Ehsan Amani, Morteza Ghorbani
https://arxiv.org/abs/2602.21166 https://arxiv.org/pdf/2602.21166 https://arxiv.org/html/2602.21166
arXiv:2602.21166v1 Announce Type: new
Abstract: Explicit filters play a pivotal role in the scale separation and numerical stability of advanced Large Eddy Simulation (LES) closures, such as dynamic eddy-viscosity or Approximate Deconvolution (AD) methods. In the present study, it is demonstrated that the performance of commonly used explicit filters applicable to general unstructured grids highly depends on the grid configuration, specifically the cell aspect ratio, which can result in poor filter spectral properties, ultimately leading to large errors and even solution divergence. This study introduces a novel, efficient explicit filter for general unstructured grids, addressing this shortcoming through a combination of a face-averaging technique and recursive filtering. The filter parameters are then determined through a constrained multi-objective optimization, ensuring desirable spectral properties, including high-wavenumber attenuation, filter-width precision, filter stability and positivity, and minimized dispersion and commutation errors. The AD-LES of turbulent channel flow benchmarks using the new filter demonstrate a noticeable improvement in turbulent flow predictions on highly stretched boundary-layer-type grids, particularly in reducing the log-layer mean velocity profile mismatch, compared to simulations using conventional filters. The analyses show that this enhancement is mainly attributed to the sufficient level of attenuation near the Nyquist wavenumber achieved by the new filter in all spatial directions across various grid configurations, among others. The new filter was also successfully tested on unstructured prism grids for the 3D Taylor-Green vortex benchmark.
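The "recursive filtering" ingredient can be illustrated in one dimension: applying a simple low-pass stencil repeatedly compounds its high-wavenumber attenuation. A toy Python sketch (a periodic 1D average with weights 1/6, 2/3, 1/6, chosen here for illustration; this is not the paper's unstructured-grid operator):

```python
# Recursive (repeated) application of a 3-point low-pass filter.
# Transfer function at wavenumber k: G(k) = 2/3 + (1/3) * cos(k*dx),
# so at the Nyquist mode (cos = -1) each pass multiplies by G = 1/3,
# and k passes attenuate it by (1/3)**k.

def smooth(u):
    # Periodic 3-point average with weights (1/6, 2/3, 1/6);
    # u[i - 1] wraps via Python's negative indexing.
    n = len(u)
    return [(u[i - 1] + 4 * u[i] + u[(i + 1) % n]) / 6 for i in range(n)]

def recursive_filter(u, passes):
    for _ in range(passes):
        u = smooth(u)
    return u

# Nyquist mode: alternating +1, -1, +1, -1, ...
nyquist = [(-1) ** i for i in range(8)]
print(recursive_filter(nyquist, 3)[0])  # ~ (1/3)**3 = 1/27
```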
toXiv_bot_toot
Polynomials in $c$-free random variables with applications to free denoising
Adrian Celestino, Franz Lehner, Kamil Szpojankowski
https://arxiv.org/abs/2603.21372 https://arxiv.org/pdf/2603.21372 https://arxiv.org/html/2603.21372
arXiv:2603.21372v1 Announce Type: new
Abstract: We study distributions of polynomials in conditionally free (c-free) random variables, a notion of independence for two-state noncommutative probability spaces introduced by Bozejko, Leinert and Speicher. To this end we establish recursive relations between the joint Boolean cumulants of c-free random variables, analogous to previously found recursions for Boolean cumulants of free random variables. The algebraic reformulation of these recursions on the free associative algebra provides an effective formal machinery for the computation of the moment generating functions and thus the distributions of arbitrary self-adjoint polynomials in c-free random variables. As an application of a recent observation, our approach can be used to determine conditional expectations of the form $E[a|P(a,b)]$, where $P(a,b)$ is a self-adjoint polynomial in free (in the sense of Voiculescu) random variables $a,b$. We illustrate this with an example where $P(a,b)=i[a,b]$. Finally we define orthogonal projections that formally play the role of conditional expectations in the framework of c-freeness and share some properties with the conditional expectations of free variables. In particular they can be used to re-derive by purely algebraic methods the formula of Popa and Wang for the $\Sigma$-transform for the c-free multiplicative convolution.
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[2/6]:
- Performance Asymmetry in Model-Based Reinforcement Learning
Jing Yu Lim, Rushi Shah, Zarif Ikram, Samson Yu, Haozhe Ma, Tze-Yun Leong, Dianbo Liu
https://arxiv.org/abs/2505.19698 https://mastoxiv.page/@arXiv_csLG_bot/114578810521008766
- Towards Robust Real-World Multivariate Time Series Forecasting: A Unified Framework for Dependenc...
Jinkwan Jang, Hyungjin Park, Jinmyeong Choi, Taesup Kim
https://arxiv.org/abs/2506.08660 https://mastoxiv.page/@arXiv_csLG_bot/114664238967892509
- Wasserstein Barycenter Soft Actor-Critic
Zahra Shahrooei, Ali Baheri
https://arxiv.org/abs/2506.10167 https://mastoxiv.page/@arXiv_csLG_bot/114675175949432731
- Foundation Models for Causal Inference via Prior-Data Fitted Networks
Yuchen Ma, Dennis Frauen, Emil Javurek, Stefan Feuerriegel
https://arxiv.org/abs/2506.10914 https://mastoxiv.page/@arXiv_csLG_bot/114675529854402158
- FREQuency ATTribution: benchmarking frequency-based occlusion for time series data
Dominique Mercier, Andreas Dengel, Sheraz Ahmed
https://arxiv.org/abs/2506.18481 https://mastoxiv.page/@arXiv_csLG_bot/114738421450807709
- Complexity-aware fine-tuning
Andrey Goncharov, Daniil Vyazhev, Petr Sychev, Edvard Khalafyan, Alexey Zaytsev
https://arxiv.org/abs/2506.21220 https://mastoxiv.page/@arXiv_csLG_bot/114754764750730849
- Transfer Learning in Infinite Width Feature Learning Networks
Clarissa Lauditi, Blake Bordelon, Cengiz Pehlevan
https://arxiv.org/abs/2507.04448 https://mastoxiv.page/@arXiv_csLG_bot/114818005803079705
- A hierarchy tree data structure for behavior-based user segment representation
Liu, Kang, Iyer, Malik, Li, Wang, Lu, Zhao, Wang, Liu, Liu, Liang, Yu
https://arxiv.org/abs/2508.01115 https://mastoxiv.page/@arXiv_csLG_bot/114975999992144374
- One-Step Flow Q-Learning: Addressing the Diffusion Policy Bottleneck in Offline Reinforcement Lea...
Thanh Nguyen, Chang D. Yoo
https://arxiv.org/abs/2508.13904 https://mastoxiv.page/@arXiv_csLG_bot/115060568241390847
- Uncertainty Propagation Networks for Neural Ordinary Differential Equations
Hadi Jahanshahi, Zheng H. Zhu
https://arxiv.org/abs/2508.16815 https://mastoxiv.page/@arXiv_csLG_bot/115094785677272005
- Learning Unified Representations from Heterogeneous Data for Robust Heart Rate Modeling
Zhengdong Huang, Zicheng Xie, Wentao Tian, Jingyu Liu, Lunhong Dong, Peng Yang
https://arxiv.org/abs/2508.21785 https://mastoxiv.page/@arXiv_csLG_bot/115128450608548173
- Monte Carlo Tree Diffusion with Multiple Experts for Protein Design
Liu, Cao, Jiang, Luo, Duan, Wang, Sosnick, Xu, Stevens
https://arxiv.org/abs/2509.15796 https://mastoxiv.page/@arXiv_csLG_bot/115247429156900905
- From Samples to Scenarios: A New Paradigm for Probabilistic Forecasting
Xilin Dai, Zhijian Xu, Wanxu Cai, Qiang Xu
https://arxiv.org/abs/2509.19975 https://mastoxiv.page/@arXiv_csLG_bot/115264498084813952
- Why High-rank Neural Networks Generalize?: An Algebraic Framework with RKHSs
Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
https://arxiv.org/abs/2509.21895 https://mastoxiv.page/@arXiv_csLG_bot/115287261047939306
- From Parameters to Behaviors: Unsupervised Compression of the Policy Space
Davide Tenedini, Riccardo Zamboni, Mirco Mutti, Marcello Restelli
https://arxiv.org/abs/2509.22566 https://mastoxiv.page/@arXiv_csLG_bot/115287379672141023
- RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility
Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang
https://arxiv.org/abs/2509.23115 https://mastoxiv.page/@arXiv_csLG_bot/115293273559547106
- Polychromic Objectives for Reinforcement Learning
Jubayer Ibn Hamid, Ifdita Hasan Orney, Ellen Xu, Chelsea Finn, Dorsa Sadigh
https://arxiv.org/abs/2509.25424 https://mastoxiv.page/@arXiv_csLG_bot/115298579764580635
- Recursive Self-Aggregation Unlocks Deep Thinking in Large Language Models
Siddarth Venkatraman, et al.
https://arxiv.org/abs/2509.26626 https://mastoxiv.page/@arXiv_csLG_bot/115298789487177431
- Cautious Weight Decay
Chen, Li, Liang, Su, Xie, Pierse, Liang, Lao, Liu
https://arxiv.org/abs/2510.12402 https://mastoxiv.page/@arXiv_csLG_bot/115377759317818093
- TeamFormer: Shallow Parallel Transformers with Progressive Approximation
Wei Wang, Xiao-Yong Wei, Qing Li
https://arxiv.org/abs/2510.15425 https://mastoxiv.page/@arXiv_csLG_bot/115405933861293858
- Latent-Augmented Discrete Diffusion Models
Dario Shariatian, Alain Durmus, Umut Simsekli, Stefano Peluchetti
https://arxiv.org/abs/2510.18114 https://mastoxiv.page/@arXiv_csLG_bot/115417332500265972
- Predicting Metabolic Dysfunction-Associated Steatotic Liver Disease using Machine Learning Method...
Mary E. An, Paul Griffin, Jonathan G. Stine, Ramakrishna Balakrishnan, Soundar Kumara
https://arxiv.org/abs/2510.22293 https://mastoxiv.page/@arXiv_csLG_bot/115451746201804373
Crosslisted article(s) found for q-fin.PM. https://arxiv.org/list/q-fin.PM/new
[1/1]:
- Merton's Problem with Recursive Perturbed Utility
Min Dai, Yuchao Dong, Yanwei Jia, Xun Yu Zhou