How far have you come? This page will show you how far you’ve traveled since you were born.
The Cosmic Odometer calculates how far you've traveled through space just by being alive. Even if you've never left Earth, you've orbited the sun, rotated with the planet, and embarked on both solar and galactic travel — all while sitting on your couch. https://…
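The arithmetic behind such a tool is straightforward: multiply your age in seconds by the speed of each motion. A minimal sketch, using rough published average speeds (not values taken from the linked tool, whose exact method is unknown):

```python
# Back-of-the-envelope "cosmic odometer": distance traveled over a lifetime
# via Earth's rotation, its orbit around the Sun, and the Sun's orbit
# around the galactic center. Speeds are rough averages in km/s.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

EARTH_ROTATION_EQUATOR = 0.465   # surface speed at the equator, km/s
EARTH_ORBIT_SUN = 29.78          # Earth's mean orbital speed, km/s
SUN_AROUND_GALAXY = 230.0        # Sun's speed around the galactic center, km/s

def cosmic_distance_km(age_years: float) -> dict:
    """Return rough distances (km) covered by each motion over age_years."""
    seconds = age_years * SECONDS_PER_YEAR
    return {
        "rotation": EARTH_ROTATION_EQUATOR * seconds,
        "orbit": EARTH_ORBIT_SUN * seconds,
        "galactic": SUN_AROUND_GALAXY * seconds,
    }

if __name__ == "__main__":
    for motion, km in cosmic_distance_km(30).items():
        print(f"{motion}: {km:.3e} km")
```

For a 30-year-old, the orbital term alone comes to roughly 2.8 × 10¹⁰ km, and the galactic term dwarfs even that.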
Western Tatami Mat Mania Keeping Alive Japan’s Traditional Woven Grass Flooring Industry https://www.goodnewsnetwork.org/western-tatami-mat-mania-keeping-alive-japans-traditional-woven-grass-flooring-industry/
🇺🇦 Now playing on radioeins...
Magdalena Bay:
🎵 This Is The World (I Made It For You)
#NowPlaying #MagdalenaBay
https://magdalenabay.bandcamp.com/album/this-is-the-world-i-made-it-for-you-nice-day
https://open.spotify.com/track/6xgNV9489zKLRXnvpiZQXJ
That is an excellent summary of why solar and wind power are *the* choice for the future, even if you leave climate change out of the equation. They are, plain and simple, the better solution for the 21st century!
We need to get rid of people in power who refuse to see that because they are being paid to keep the fossil fuel industry alive.
Fossil fuels are a relic of the past, and we need to get rid of them.
Replaced article(s) found for math.DG. https://arxiv.org/list/math.DG/new
[1/1]:
- On the modified $J$-equation
Ryosuke Takahashi
https://arxiv.org/abs/2207.04953
- Surfaces with flat normal connection in 4-dimensional space forms
Naoya Ando, Ryusei Hatanaka
https://arxiv.org/abs/2501.15780
- Regularized $\zeta_{\Delta}(1)$ for Polyhedra
Alexey Yu. Kokotov, Dmitrii V. Korikov
https://arxiv.org/abs/2502.03351 https://mastoxiv.page/@arXiv_mathDG_bot/113955669526276293
- General Chen-Ricci inequalities for Riemannian submersions and Riemannian maps
Ravindra Singh, Kiran Meena, Kapish Chand Meena
https://arxiv.org/abs/2509.15281 https://mastoxiv.page/@arXiv_mathDG_bot/115246828823405190
- Some configuration results for area-minimizing cones
Yongsheng Zhang
https://arxiv.org/abs/2510.17240 https://mastoxiv.page/@arXiv_mathDG_bot/115411416287934120
- Real Bers embedding on the line: Fisher-Rao linearization, Schwarzian curvature, and scattering c...
Hy Lam
https://arxiv.org/abs/2602.07373 https://mastoxiv.page/@arXiv_mathDG_bot/116045447030638429
- Explicit Hamiltonian representations of meromorphic connections and duality from different perspe...
Mohamad Alameddine, Olivier Marchal
https://arxiv.org/abs/2406.19187 https://mastoxiv.page/@arXiv_mathph_bot/112692974532066693
- An alternative solvability criterion for the Dirichlet problem for the minimal surface equation a...
Ari J. Aiolfi, Giovanni da Silva Nunes, Jaime Ripoll, Lisandra Sauer, Rodrigo Soares
https://arxiv.org/abs/2508.09806 https://mastoxiv.page/@arXiv_mathAP_bot/115026282591071982
- Gromov's Compactness Theorem for the Intrinsic Timed-Hausdorff Distance
Mauricio Che, Raquel Perales, Christina Sormani
https://arxiv.org/abs/2510.13069 https://mastoxiv.page/@arXiv_mathMG_bot/115382835673596437
- Nearly optimal spectral gaps for random Belyi surfaces
Yang Shen, Yunhui Wu
https://arxiv.org/abs/2511.02517 https://mastoxiv.page/@arXiv_mathSP_bot/115496082425305449
toXiv_bot_toot
Please stop with the “do LLMs have fee-fees?” bullshit
This presupposes LLMs are alive, which in turn would mean that for every prompt an LLM baby is born and, after answering, is snuffed out, dying horribly
Like the whale in the Hitchhiker’s Guide
Ant Group reports 30M MAUs for its AI health chatbot Ant Afu, which integrates appointments, test analysis, and insurance payments within Alipay's ecosystem (Viola Zhou/Rest of World)
https://restofworld.org/2026/ai-health-care-is-taking-off-…
Time is Not Compute: Scaling Laws for Wall-Clock Constrained Training on Consumer GPUs
Yi Liu
https://arxiv.org/abs/2603.28823 https://arxiv.org/pdf/2603.28823 https://arxiv.org/html/2603.28823
arXiv:2603.28823v1 Announce Type: new
Abstract: Scaling laws relate model quality to compute budget (FLOPs), but practitioners face wall-clock time constraints, not compute budgets. We study optimal model sizing under fixed time budgets from 5 minutes to 24 hours on consumer GPUs (RTX 4090). Across 70 runs spanning 50M--1031M parameters, we find: (1)~at each time budget a U-shaped curve emerges where too-small models overfit and too-large models undertrain; (2)~optimal model size follows $N^* \propto t^{0.60}$, growing \emph{faster} than Chinchilla's $N^* \propto C^{0.50}$, with $\alpha = 0.60 \pm 0.07$ robustly exceeding compute-optimal across all sensitivity analyses; (3)~a \emph{dual U-shape mechanism}: short-budget U-curves arise from compute bottlenecks, while long-budget U-curves emerge from data bottlenecks (overfitting), with an intermediate regime where the U-curve temporarily disappears. These findings have immediate implications for researchers training on consumer hardware, where wall-clock time -- not FLOPs -- is the binding constraint. We release all code, logs, and 70 experimental configurations.
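The exponent in finding (2) is the slope of a straight line in log-log space: if $N^* \propto t^{\alpha}$, then $\log N^* = \alpha \log t + c$. A minimal sketch of recovering $\alpha$ by least squares; the $(t, N^*)$ pairs below are synthetic placeholders, not the paper's measurements:

```python
# Hedged sketch: estimating the scaling exponent alpha in N* ∝ t^alpha
# by ordinary least squares on log-transformed data.
import math

def fit_power_law(ts, ns):
    """Fit log N = alpha * log t + c; return (alpha, c)."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(n) for n in ns]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return alpha, my - alpha * mx

# Synthetic data generated from an exact N* = 10 * t^0.6 law,
# so the fit should recover alpha = 0.6.
ts = [5, 30, 60, 360, 1440]        # time budgets in minutes
ns = [10 * t ** 0.6 for t in ts]   # placeholder optimal sizes
alpha, _ = fit_power_law(ts, ns)
print(round(alpha, 3))  # → 0.6
```

On real runs the residual scatter would give the uncertainty band (the paper reports $\alpha = 0.60 \pm 0.07$); the synthetic data here recovers the exponent exactly by construction.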
toXiv_bot_toot