Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@tinoeberl@mastodon.online
2026-03-15 14:15:07

Cumulative expansion of wind energy capacity (#Windenergieleistung) in Germany (#Deutschland), as of 10.03.2026.
Sums of commissionings minus decommissionings per year.
The dataset may contain implausible records.
👉 Extra reading: Why

Cumulative expansion of wind energy capacity in Germany, as of 10.03.2026. Area chart from 1990 onward; x-axis: years, y-axis: cumulative gross capacity in MW. The curve starts at zero, reaches just under 1,000 MW around 1998 and about 20,000 MW in 2009, climbs to roughly 32,000 MW by 2014, and then accelerates sharply to about 55,000 MW in 2018. The current value is 78,796.72 megawatts.
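
The series described above is just a running sum of yearly net additions. A minimal sketch in Python, assuming a hypothetical register with per-year commissioned and decommissioned capacity (the column names and figures below are illustrative, not the poster's actual dataset):

import pandas as pd

# Illustrative register: capacity commissioned and decommissioned per year, in MW.
# (Made-up figures for demonstration only.)
df = pd.DataFrame({
    "year": [2014, 2015, 2016, 2017, 2018],
    "commissioned_mw": [4750.0, 3536.0, 4625.0, 5334.0, 2402.0],
    "decommissioned_mw": [364.0, 195.0, 366.0, 467.0, 249.0],
})

# Net additions per year, then the cumulative gross capacity shown on the y-axis.
df["net_mw"] = df["commissioned_mw"] - df["decommissioned_mw"]
df["cumulative_mw"] = df["net_mw"].cumsum()
print(df[["year", "cumulative_mw"]])
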
@ErikJonker@mastodon.social
2026-02-08 19:06:40

“4% of GitHub public commits are being authored by Claude Code right now. At the current trajectory, we believe that Claude Code will be 20% of all daily commits by the end of 2026. While you blinked, AI consumed all of software development.”
Must-read article, even if you might disagree with the analysis.
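
A quick arithmetic check on the quoted trajectory: growing a 4% share (as of early February) to 20% by the end of 2026 is a 5x increase in roughly 11 months, i.e. about 16% compound growth per month. A one-off sketch to verify (the 11-month window is my reading of the claim, not the article's):

# Implied monthly growth factor for a 5x increase in share over ~11 months.
share_now, share_target, months = 0.04, 0.20, 11
monthly_factor = (share_target / share_now) ** (1 / months)
print(f"{monthly_factor:.3f}x per month (~{monthly_factor - 1:.0%} monthly growth)")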

@sncf_ligne_r@lepoulsdumonde.com
2026-04-09 08:38:48

Cancelled train:
- KOHO, departing Melun 10:48, arriving Montereau 11:25
Risk of crowding on board the following train.
Next train to run:
- KOHO, departing Melun 11:48, arriving Montereau 12:25
For more information on this disruption, check the line's X feed.
Reason: difficulty getting the driver to the train.
🤖 09/04 10:38

@cellfourteen@social.petertoushkov.eu
2026-04-06 18:43:24

I love this premise:
"It has been 62 years since the last Federation contact with the planet."
It sounds real, gloomy, and plausible in the grander scale of the Star Trek universe compared to the usual warping about the Alpha Quadrant.
It's from TNG's 1x14 (Angel One) Memory Alpha:

@sncf_ligne_r@lepoulsdumonde.com
2026-04-09 08:44:05

Cancelled train:
- ZOHA, departing Montereau 12:29, arriving Melun 13:05
Risk of crowding on board the following train.
Next train to run:
- ZOHA, departing Montereau 13:29, arriving Melun 14:05
For more information on this disruption, check the line's X feed.
Reason: difficulty getting the driver to the train.
🤖 09/04 10:44

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 11:13:03

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[4/5]:
- Retrieving Climate Change Disinformation by Narrative
Upravitelev, Solopova, Jakob, Sahitaj, Möller, Schmitt
arxiv.org/abs/2603.22015 mastoxiv.page/@arXiv_csCL_bot/
- PaperVoyager: Building Interactive Web with Visual Language Models
Dasen Dai, Biao Wu, Meng Fang, Wenhao Wang
arxiv.org/abs/2603.22999 mastoxiv.page/@arXiv_csCL_bot/
- Continual Robot Skill and Task Learning via Dialogue
Weiwei Gu, Suresh Kondepudi, Anmol Gupta, Lixiao Huang, Nakul Gopalan
arxiv.org/abs/2409.03166 mastoxiv.page/@arXiv_csRO_bot/
- Shifting Perspectives: Steering Vectors for Robust Bias Mitigation in LLMs
Zara Siddique, Irtaza Khalid, Liam D. Turner, Luis Espinosa-Anke
arxiv.org/abs/2503.05371 mastoxiv.page/@arXiv_csLG_bot/
- SkillFlow: Scalable and Efficient Agent Skill Retrieval System
Fangzhou Li, Pagkratios Tagkopoulos, Ilias Tagkopoulos
arxiv.org/abs/2504.06188 mastoxiv.page/@arXiv_csAI_bot/
- Large Language Models for Computer-Aided Design: A Survey
Licheng Zhang, Bach Le, Naveed Akhtar, Siew-Kei Lam, Tuan Ngo
arxiv.org/abs/2505.08137 mastoxiv.page/@arXiv_csLG_bot/
- Structured Agent Distillation for Large Language Model
Liu, Kong, Dong, Yang, Li, Tang, Yuan, Niu, Zhang, Zhao, Lin, Huang, Wang
arxiv.org/abs/2505.13820 mastoxiv.page/@arXiv_csLG_bot/
- VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction
Fan, Zhang, Li, Zhang, Chen, Hu, Wang, Qu, Zhou, Wang, Yan, Xu, Theiss, Chen, Li, Tu, Wang, Ranjan
arxiv.org/abs/2505.20279 mastoxiv.page/@arXiv_csCV_bot/
- Learning to Diagnose Privately: DP-Powered LLMs for Radiology Report Classification
Bhattacharjee, Tian, Rubin, Lo, Merchant, Hanson, Gounley, Tandon
arxiv.org/abs/2506.04450 mastoxiv.page/@arXiv_csCR_bot/
- L-MARS: Legal Multi-Agent Workflow with Orchestrated Reasoning and Agentic Search
Ziqi Wang, Boqin Yuan
arxiv.org/abs/2509.00761 mastoxiv.page/@arXiv_csAI_bot/
- Your Models Have Thought Enough: Training Large Reasoning Models to Stop Overthinking
Han, Huang, Liao, Jiang, Lu, Zhao, Wang, Zhou, Jiang, Liang, Zhou, Sun, Yu, Xiao
arxiv.org/abs/2509.23392 mastoxiv.page/@arXiv_csAI_bot/
- Person-Centric Annotations of LAION-400M: Auditing Bias and Its Transfer to Models
Leander Girrbach, Stephan Alaniz, Genevieve Smith, Trevor Darrell, Zeynep Akata
arxiv.org/abs/2510.03721 mastoxiv.page/@arXiv_csCV_bot/
- Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
Zhang, Hu, Upasani, Ma, Hong, Kamanuru, Rainton, Wu, Ji, Li, Thakker, Zou, Olukotun
arxiv.org/abs/2510.04618 mastoxiv.page/@arXiv_csLG_bot/
- Mitigating Premature Exploitation in Particle-based Monte Carlo for Inference-Time Scaling
Giannone, Xu, Nayak, Awhad, Sudalairaj, Xu, Srivastava
arxiv.org/abs/2510.05825 mastoxiv.page/@arXiv_csLG_bot/
- Complete asymptotic type-token relationship for growing complex systems with inverse power-law co...
Pablo Rosillo-Rodes, Laurent Hébert-Dufresne, Peter Sheridan Dodds
arxiv.org/abs/2511.02069 mastoxiv.page/@arXiv_physicsso
- ViPRA: Video Prediction for Robot Actions
Sandeep Routray, Hengkai Pan, Unnat Jain, Shikhar Bahl, Deepak Pathak
arxiv.org/abs/2511.07732 mastoxiv.page/@arXiv_csRO_bot/
- AISAC: An Integrated multi-agent System for Transparent, Retrieval-Grounded Scientific Assistance
Chandrachur Bhattacharya, Sibendu Som
arxiv.org/abs/2511.14043
- VideoARM: Agentic Reasoning over Hierarchical Memory for Long-Form Video Understanding
Yufei Yin, Qianke Meng, Minghao Chen, Jiajun Ding, Zhenwei Shao, Zhou Yu
arxiv.org/abs/2512.12360 mastoxiv.page/@arXiv_csCV_bot/
- RadImageNet-VQA: A Large-Scale CT and MRI Dataset for Radiologic Visual Question Answering
Léo Butsanets, Charles Corbière, Julien Khlaut, Pierre Manceron, Corentin Dancette
arxiv.org/abs/2512.17396 mastoxiv.page/@arXiv_csCV_bot/
- Measuring all the noises of LLM Evals
Sida Wang
arxiv.org/abs/2512.21326 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot

@arXiv_csOS_bot@mastoxiv.page
2026-02-04 07:41:57

ProphetKV: User-Query-Driven Selective Recomputation for Efficient KV Cache Reuse in Retrieval-Augmented Generation
Shihao Wang, Jiahao Chen, Yanqi Pan, Hao Huang, Yichen Hao, Xiangyu Zou, Wen Xia, Wentao Zhang, Haitao Wang, Junhong Li, Chongyang Qiu, Pengfei Wang
arxiv.org/abs/2602.02579 arxiv.org/pdf/2602.02579 arxiv.org/html/2602.02579
arXiv:2602.02579v1 Announce Type: new
Abstract: The prefill stage of long-context Retrieval-Augmented Generation (RAG) is severely bottlenecked by computational overhead. To mitigate this, recent methods assemble pre-calculated KV caches of the RAG documents retrieved for a user query and reprocess selected tokens to recover cross-attention across these pre-calculated caches. However, we identify a fundamental "crowding-out effect" in current token selection criteria: globally salient but user-query-irrelevant tokens saturate the limited recomputation budget, displacing the tokens truly essential for answering the user query and degrading inference accuracy.
We propose ProphetKV, a user-query-driven KV cache reuse method for RAG scenarios. ProphetKV dynamically prioritizes tokens based on their semantic relevance to the user query and employs a dual-stage recomputation pipeline that fuses layer-wise attention metrics into a high-utility token set. By ensuring the recomputation budget is dedicated to bridging the informational gap between the retrieved context and the user query, ProphetKV achieves high-fidelity attention recovery with minimal overhead. Our extensive evaluation shows that ProphetKV retains 96%-101% of full-prefill accuracy with only a 20% recomputation ratio, while achieving accuracy improvements of 8.8%-24.9% on RULER and 18.6%-50.9% on LongBench over state-of-the-art approaches (e.g., CacheBlend, EPIC, and KVShare).
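
The selection criterion at the heart of the abstract — spending a fixed recomputation budget on the tokens most relevant to the user query rather than on globally salient ones — can be sketched compactly. A minimal illustration under my own assumptions (cosine similarity over token embeddings as the relevance score, a flat 20% budget matching the ratio quoted above); this is not the authors' dual-stage pipeline, and every name here is hypothetical:

import numpy as np

def select_tokens_for_recompute(token_embs, query_emb, budget_ratio=0.2):
    # Cosine similarity of each cached context token to the user query.
    token_norms = np.linalg.norm(token_embs, axis=1) + 1e-8
    query_norm = np.linalg.norm(query_emb) + 1e-8
    sims = (token_embs @ query_emb) / (token_norms * query_norm)
    # Spend the limited recomputation budget on the most query-relevant tokens,
    # so globally salient but irrelevant tokens cannot crowd them out.
    budget = max(1, int(budget_ratio * len(token_embs)))
    return np.argsort(sims)[::-1][:budget]

# Toy usage: 10 cached "token embeddings" and one "query embedding".
rng = np.random.default_rng(0)
token_embs = rng.normal(size=(10, 8))
query_emb = rng.normal(size=8)
print(select_tokens_for_recompute(token_embs, query_emb))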
toXiv_bot_toot

@tinoeberl@mastodon.online
2026-02-21 09:10:02

Cumulative expansion of wind energy capacity (#Windenergieleistung) in Germany (#Deutschland), as of 17.02.2026.
Sums of commissionings minus decommissionings per year.
The dataset may contain implausible records.
👉 Extra reading: Why

Cumulative expansion of wind energy capacity in Germany, as of 17.02.2026. Area chart from 1990 to 2025; x-axis: years, y-axis: cumulative gross capacity in MW. The curve starts at zero, reaches just under 1,000 MW around 1998 and about 20,000 MW in 2009, climbs to roughly 32,000 MW by 2014, and then accelerates sharply to about 55,000 MW in 2016. The current value is 78,287.27 megawatts.
@sncf_ligne_r@lepoulsdumonde.com
2026-02-04 17:26:49

Expect journey times to be extended by about 15 minutes for the following train: - KUMO, departing Paris Gare de Lyon 17:44, arriving Montereau 18:44
For more information on this disruption, check the line's X feed.
Reason: safety alert raised by the driver.
🤖 04/02 18:26

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 11:12:53

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[3/5]:
- Can Small Language Models Handle Context-Summarized Multi-Turn Customer-Service QA? A Synthetic D...
Lakshan Cooray, Deshan Sumanathilaka, Pattigadapa Venkatesh Raju
arxiv.org/abs/2602.00665 mastoxiv.page/@arXiv_csCL_bot/
- SEAD: Self-Evolving Agent for Multi-Turn Service Dialogue
Dai, Gao, Zhang, Wang, Luo, Wang, Wang, Wu, Wang
arxiv.org/abs/2602.03548
- OmniRAG-Agent: Agentic Omnimodal Reasoning for Low-Resource Long Audio-Video Question Answering
Yifan Zhu, Xinyu Mu, Tao Feng, Zhonghong Ou, Yuning Gong, Haoran Luo
arxiv.org/abs/2602.03707
- GreekMMLU: A Native-Sourced Multitask Benchmark for Evaluating Language Models in Greek
Zhang, Konomi, Xypolopoulos, Divriotis, Skianis, Nikolentzos, Stamou, Shang, Vazirgiannis
arxiv.org/abs/2602.05150
- Using LLMs for Knowledge Component-level Correctness Labeling in Open-ended Coding Problems
Zhangqi Duan, Arnav Kankaria, Dhruv Kartik, Andrew Lan
arxiv.org/abs/2602.17542 mastoxiv.page/@arXiv_csCL_bot/
- MetaState: Persistent Working Memory Enhances Reasoning in Discrete Diffusion Language Models
Kejing Xia, Mingzhe Li, Lixuan Wei, Zhenbang Du, Xiangchi Yuan, Dachuan Shi, Qirui Jin, Wenke Lee
arxiv.org/abs/2603.01331 mastoxiv.page/@arXiv_csCL_bot/
- A Browser-based Open Source Assistant for Multimodal Content Verification
Milner, Foster, Karmakharm, Razuvayevskaya, Roberts, Porcellini, Teyssou, Bontcheva
arxiv.org/abs/2603.02842 mastoxiv.page/@arXiv_csCL_bot/
- Nwāchā Munā: A Devanagari Speech Corpus and Proximal Transfer Benchmark for Nepal Bhasha ASR
Sharma, Shrestha, Poudel, Tiwari, Shrestha, Ghimire, Bal
arxiv.org/abs/2603.07554 mastoxiv.page/@arXiv_csCL_bot/
- Model Merging in the Era of Large Language Models: Methods, Applications, and Future Directions
Mingyang Song, Mao Zheng
arxiv.org/abs/2603.09938 mastoxiv.page/@arXiv_csCL_bot/
- AgentDrift: Unsafe Recommendation Drift Under Tool Corruption Hidden by Ranking Metrics in LLM Ag...
Zekun Wu, Adriano Koshiyama, Sahan Bulathwela, Maria Perez-Ortiz
arxiv.org/abs/2603.12564 mastoxiv.page/@arXiv_csCL_bot/
- GhanaNLP Parallel Corpora: Comprehensive Multilingual Resources for Low-Resource Ghanaian Languages
Gyamfi, Azunre, Moore, Budu, Asare, Owusu, Asiamah
arxiv.org/abs/2603.13793 mastoxiv.page/@arXiv_csCL_bot/
- sebis at ArchEHR-QA 2026: How Much Can You Do Locally? Evaluating Grounded EHR QA on a Single Not...
Ibrahim Ebrar Yurt, Fabian Karl, Tejaswi Choppa, Florian Matthes
arxiv.org/abs/2603.13962 mastoxiv.page/@arXiv_csCL_bot/
- ExPosST: Explicit Positioning with Adaptive Masking for LLM-Based Simultaneous Machine Translation
Yuzhe Shang, Pengzhi Gao, Yazheng Yang, Jiayao Ma, Wei Liu, Jian Luan, Jinsong Su
arxiv.org/abs/2603.14903 mastoxiv.page/@arXiv_csCL_bot/
- BanglaSocialBench: A Benchmark for Evaluating Sociopragmatic and Cultural Alignment of LLMs in Ba...
Tanvir Ahmed Sijan, S. M Golam Rifat, Pankaj Chowdhury Partha, Md. Tanjeed Islam, Md. Musfique Anwar
arxiv.org/abs/2603.15949 mastoxiv.page/@arXiv_csCL_bot/
- EngGPT2: Sovereign, Efficient and Open Intelligence
G. Ciarfaglia, et al.
arxiv.org/abs/2603.16430 mastoxiv.page/@arXiv_csCL_bot/
- HypeLoRA: Hyper-Network-Generated LoRA Adapters for Calibrated Language Model Fine-Tuning
Bartosz Trojan, Filip Gębala
arxiv.org/abs/2603.19278 mastoxiv.page/@arXiv_csCL_bot/
- Automatic Analysis of Collaboration Through Human Conversational Data Resources: A Review
Yi Yu, Maria Boritchev, Chloé Clavel
arxiv.org/abs/2603.19292 mastoxiv.page/@arXiv_csCL_bot/
- Alignment Whack-a-Mole: Finetuning Activates Verbatim Recall of Copyrighted Books in Large Langu...
Xinyue Liu, Niloofar Mireshghallah, Jane C. Ginsburg, Tuhin Chakrabarty
arxiv.org/abs/2603.20957 mastoxiv.page/@arXiv_csCL_bot/
- KG-Hopper: Empowering Compact Open LLMs with Knowledge Graph Reasoning via Reinforcement Learning
Shuai Wang, Yinan Yu
arxiv.org/abs/2603.21440 mastoxiv.page/@arXiv_csCL_bot/
toXiv_bot_toot