The new Building Energy Act (Gebäudeenergiegesetz) really is turning into a "heating hammer."
1. It is a slap in the face of future generations, because in effect it amounts to a complete backward roll on the climate targets.
2. It will cost a lot of money for those who now, unwisely, install a gas heating system again.
3. It is fatal for the economy, because it creates uncertainty.
I had expected a total disaster from Reiche - and that is exactly what it has become.
Mytra, which is building autonomous robots for warehouses that can move loads up to 3,000 pounds, raised a $120M Series C led by Avenir Growth (Allie Garfinkle/Fortune)
https://fortune.com/2026/01/15/mytra-raises-120-million-series-c-scale-supply…
"Discombobulator" - Trump reveals details about a mysterious super-weapon #News #Nachrichten
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[5/6]:
- Watermarking Degrades Alignment in Language Models: Analysis and Mitigation
Apurv Verma, NhatHai Phan, Shubhendu Trivedi
https://arxiv.org/abs/2506.04462 https://mastoxiv.page/@arXiv_csCL_bot/114635190037336859
- Sensory-Motor Control with Large Language Models via Iterative Policy Refinement
Jônata Tyska Carvalho, Stefano Nolfi
https://arxiv.org/abs/2506.04867 https://mastoxiv.page/@arXiv_csAI_bot/114635187854195641
- ICE-ID: A Novel Historical Census Dataset for Longitudinal Identity Resolution
de Carvalho, Popov, Kaatee, Correia, Thórisson, Li, Björnsson, Sigurðarson, Dibangoye
https://arxiv.org/abs/2506.13792 https://mastoxiv.page/@arXiv_csAI_bot/114703312162525342
- Feedback-driven recurrent quantum neural network universality
Lukas Gonon, Rodrigo Martínez-Peña, Juan-Pablo Ortega
https://arxiv.org/abs/2506.16332 https://mastoxiv.page/@arXiv_quantph_bot/114732532383196043
- Programming by Backprop: An Instruction is Worth 100 Examples When Finetuning LLMs
Cook, Sapora, Ahmadian, Khan, Rocktäschel, Foerster, Ruis
https://arxiv.org/abs/2506.18777 https://mastoxiv.page/@arXiv_csAI_bot/114738213040759661
- Stochastic Quantum Spiking Neural Networks with Quantum Memory and Local Learning
Jiechen Chen, Bipin Rajendran, Osvaldo Simeone
https://arxiv.org/abs/2506.21324 https://mastoxiv.page/@arXiv_csNE_bot/114754367612728319
- Enjoying Non-linearity in Multinomial Logistic Bandits: A Minimax-Optimal Algorithm
Pierre Boudart (SIERRA), Pierre Gaillard (Thoth), Alessandro Rudi (PSL, DI-ENS, Inria)
https://arxiv.org/abs/2507.05306 https://mastoxiv.page/@arXiv_statML_bot/114822374525501660
- Characterizing State Space Model and Hybrid Language Model Performance with Long Context
Saptarshi Mitra, Rachid Karami, Haocheng Xu, Sitao Huang, Hyoukjun Kwon
https://arxiv.org/abs/2507.12442 https://mastoxiv.page/@arXiv_csAR_bot/114867589638074984
- Is Exchangeability better than I.I.D to handle Data Distribution Shifts while Pooling Data for Da...
Ayush Roy, Samin Enam, Jun Xia, Won Hwa Kim, Vishnu Suresh Lokhande
https://arxiv.org/abs/2507.19575 https://mastoxiv.page/@arXiv_csCV_bot/114935399825741861
- TASER: Table Agents for Schema-guided Extraction and Recommendation
Nicole Cho, Kirsty Fielding, William Watson, Sumitra Ganesh, Manuela Veloso
https://arxiv.org/abs/2508.13404 https://mastoxiv.page/@arXiv_csAI_bot/115060386723032051
- Morphology-Aware Peptide Discovery via Masked Conditional Generative Modeling
Nuno Costa, Julija Zavadlav
https://arxiv.org/abs/2509.02060 https://mastoxiv.page/@arXiv_qbioBM_bot/115139546511384706
- PCPO: Proportionate Credit Policy Optimization for Aligning Image Generation Models
Jeongjae Lee, Jong Chul Ye
https://arxiv.org/abs/2509.25774 https://mastoxiv.page/@arXiv_csCV_bot/115298580419859537
- Multi-hop Deep Joint Source-Channel Coding with Deep Hash Distillation for Semantically Aligned I...
Didrik Bergstr\"om, Deniz G\"und\"uz, Onur G\"unl\"u
https://arxiv.org/abs/2510.06868 https://mastoxiv.page/@arXiv_csIT_bot/115343320768797486
- MoMaGen: Generating Demonstrations under Soft and Hard Constraints for Multi-Step Bimanual Mobile...
Chengshu Li, et al.
https://arxiv.org/abs/2510.18316 https://mastoxiv.page/@arXiv_csRO_bot/115416889485910123
- A Spectral Framework for Graph Neural Operators: Convergence Guarantees and Tradeoffs
Roxanne Holden, Luana Ruiz
https://arxiv.org/abs/2510.20954 https://mastoxiv.page/@arXiv_statML_bot/115445273121677005
- Breaking Agent Backbones: Evaluating the Security of Backbone LLMs in AI Agents
Bazinska, Mathys, Casucci, Rojas-Carulla, Davies, Souly, Pfister
https://arxiv.org/abs/2510.22620 https://mastoxiv.page/@arXiv_csCR_bot/115451397563132982
- Uncertainty Calibration of Multi-Label Bird Sound Classifiers
Raphael Schwinger, Ben McEwen, Vincent S. Kather, René Heinrich, Lukas Rauch, Sven Tomforde
https://arxiv.org/abs/2511.08261 https://mastoxiv.page/@arXiv_csSD_bot/115535982708483824
- Two-dimensional RMSD projections for reaction path visualization and validation
Rohit Goswami (Institute IMX and Lab-COSMO, École polytechnique fédérale de Lausanne)
https://arxiv.org/abs/2512.07329 https://mastoxiv.page/@arXiv_physicschemph_bot/115688910885717951
- Distribution-informed Online Conformal Prediction
Dongjian Hu, Junxi Wu, Shu-Tao Xia, Changliang Zou
https://arxiv.org/abs/2512.07770 https://mastoxiv.page/@arXiv_statML_bot/115689281155541568
- Coupling Experts and Routers in Mixture-of-Experts via an Auxiliary Loss
Ang Lv, Jin Ma, Yiyuan Ma, Siyuan Qiao
https://arxiv.org/abs/2512.23447 https://mastoxiv.page/@arXiv_csCL_bot/115808311310246601
toXiv_bot_toot
“How did it feel for the man who built a home, only to watch it turn to the rubble? How does a farmer stand before the land he tended year after year, now lying barren—no scent of soil, no whisper of harvest? How does a father tell his son the school he loved is gone, that the garden where he played is now only a rumor in the rubble? How does a mother walk through the ghost of a playground, finding a small shoe, a torn notebook, a toy she once mended? How do neighbors look at one another, wo…
Our little cat just came running into the bathroom, demanding, as he does most mornings, no matter how cold, that the window be opened so he can sit in the sill and look outside.
And then he saw the storm outside and boom he’s gone. LOL
Thanks, Greenland. Stay strong and don’t let up. We’re with you.
Love, Minneapolis
https://metro.co.uk/2026/01/18/tenth-greenlands-population-join-protest-telling-trump-we-not-sale-26366246/
I’m sure Juliette is alive, she’s wearing a firefighter suit. What about the IT guy, though: will he be protected by Juliette’s suit too if she covers him?
Also what the fuck did Kyle learn? He can’t tell anyone, otherwise the safeguard will start, but he also says it doesn’t matter and that it’s over. So will it just start no matter what, as far as he knows? If so, why did he not just tell Sims?
And why did he give up on being IT’s shadow, just like Meadows? Is the current IT shadow supposed to stay locked in the Vault forever like “Solo”? Wtf is going on here?
Does Order Matter: Connecting the Law of Robustness to Robust Generalization
Himadri Mandal, Vishnu Varadarajan, Jaee Ponde, Aritra Das, Mihir More, Debayan Gupta
https://arxiv.org/abs/2602.20971 https://arxiv.org/pdf/2602.20971 https://arxiv.org/html/2602.20971
arXiv:2602.20971v1 Announce Type: new
Abstract: Bubeck and Sellke (2021) pose as an open problem the connection between the law of robustness and robust generalization. The law of robustness states that overparameterization is necessary for models to interpolate robustly; in particular, robust interpolation requires the learned function to be Lipschitz. Robust generalization asks whether small robust training loss implies small robust test loss. We resolve this problem by explicitly connecting the two for arbitrary data distributions. Specifically, we introduce a nontrivial notion of robust generalization error and convert it into a lower bound on the expected Rademacher complexity of the induced robust loss class. Our bounds recover the $\Omega(n^{1/d})$ regime of Wu et al.\ (2023) and show that, up to constants, robust generalization does not change the order of the Lipschitz constant required for smooth interpolation. We conduct experiments to probe the predicted scaling with dataset size and model capacity, testing whether empirical behavior aligns more closely with the predictions of Bubeck and Sellke (2021) or Wu et al.\ (2023). For MNIST, we find that the lower-bound Lipschitz constant scales on the order predicted by Wu et al.\ (2023). Informally, to obtain low robust generalization error, the Lipschitz constant must lie in a range that we bound, and the allowable perturbation radius is linked to the Lipschitz scale.
toXiv_bot_toot
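The Lipschitz lower bound in the abstract above has a very concrete form: any function that interpolates the training labels must have Lipschitz constant at least |y_i - y_j| / ||x_i - x_j|| for every pair of points, so the maximum of that ratio over pairs is a data-dependent lower bound. A minimal sketch (not the paper's code; the toy data and dimensions are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: n points in d dimensions with +/-1 labels.
X = rng.normal(size=(200, 16))
y = rng.choice([-1.0, 1.0], size=200)

# Any f with f(x_i) = y_i for all i satisfies
#   Lip(f) >= |y_i - y_j| / ||x_i - x_j||   for every pair (i, j),
# so the max over pairs lower-bounds the Lipschitz constant of
# *every* interpolating function on this dataset.
diffs = np.abs(y[:, None] - y[None, :])              # |y_i - y_j|
dists = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
mask = dists > 0                                      # skip i == j
lip_lower = (diffs[mask] / dists[mask]).max()

print(f"Lipschitz lower bound over this dataset: {lip_lower:.3f}")
```

Probing how this bound grows as n increases at fixed d is one way to compare the scaling regimes the abstract attributes to Bubeck and Sellke (2021) versus Wu et al. (2023).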
Sam Altman says currently "the idea of putting data centers in space is ridiculous" and that it is "not something that's going to matter at scale this decade" (Lauren Edmonds/Business Insider)
https://www.businessinsider.com/sam-altman