Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@socallinuxexpo@social.linux.pizza
2026-02-09 20:40:02

Alex Garnett will speak on 'AT: The Billion-Edge Open Social Graph' as part of our Developer track at SCaLE 23x. Full details: socallinuxexpo.org/scale/23x

@inthehands@hachyderm.io
2026-03-18 17:03:46

The other one I truly love is GitUp (gitup.co). Its visualization handles certain specific tasks better than anything else — tasks where I’m more concerned about the shape of the commit graph than the contents of individual commits.
Because of the way it does live updates of repo state and offers a whole-commit-graph-level undo, I’ll sometimes keep it open in the background while doing some fiddly thing in another tool (Fork, CLI, whatever) just so I can see what the ^*@# is happening.
Alas, its lack of support for commit signing means I use it less and less.

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 11:12:53

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[3/5]:
- Can Small Language Models Handle Context-Summarized Multi-Turn Customer-Service QA? A Synthetic D...
Lakshan Cooray, Deshan Sumanathilaka, Pattigadapa Venkatesh Raju
arxiv.org/abs/2602.00665 mastoxiv.page/@arXiv_csCL_bot/
- SEAD: Self-Evolving Agent for Multi-Turn Service Dialogue
Dai, Gao, Zhang, Wang, Luo, Wang, Wang, Wu, Wang
arxiv.org/abs/2602.03548
- OmniRAG-Agent: Agentic Omnimodal Reasoning for Low-Resource Long Audio-Video Question Answering
Yifan Zhu, Xinyu Mu, Tao Feng, Zhonghong Ou, Yuning Gong, Haoran Luo
arxiv.org/abs/2602.03707
- GreekMMLU: A Native-Sourced Multitask Benchmark for Evaluating Language Models in Greek
Zhang, Konomi, Xypolopoulos, Divriotis, Skianis, Nikolentzos, Stamou, Shang, Vazirgiannis
arxiv.org/abs/2602.05150
- Using LLMs for Knowledge Component-level Correctness Labeling in Open-ended Coding Problems
Zhangqi Duan, Arnav Kankaria, Dhruv Kartik, Andrew Lan
arxiv.org/abs/2602.17542 mastoxiv.page/@arXiv_csCL_bot/
- MetaState: Persistent Working Memory Enhances Reasoning in Discrete Diffusion Language Models
Kejing Xia, Mingzhe Li, Lixuan Wei, Zhenbang Du, Xiangchi Yuan, Dachuan Shi, Qirui Jin, Wenke Lee
arxiv.org/abs/2603.01331 mastoxiv.page/@arXiv_csCL_bot/
- A Browser-based Open Source Assistant for Multimodal Content Verification
Milner, Foster, Karmakharm, Razuvayevskaya, Roberts, Porcellini, Teyssou, Bontcheva
arxiv.org/abs/2603.02842 mastoxiv.page/@arXiv_csCL_bot/
- Nwāchā Munā: A Devanagari Speech Corpus and Proximal Transfer Benchmark for Nepal Bhasha ASR
Sharma, Shrestha, Poudel, Tiwari, Shrestha, Ghimire, Bal
arxiv.org/abs/2603.07554 mastoxiv.page/@arXiv_csCL_bot/
- Model Merging in the Era of Large Language Models: Methods, Applications, and Future Directions
Mingyang Song, Mao Zheng
arxiv.org/abs/2603.09938 mastoxiv.page/@arXiv_csCL_bot/
- AgentDrift: Unsafe Recommendation Drift Under Tool Corruption Hidden by Ranking Metrics in LLM Ag...
Zekun Wu, Adriano Koshiyama, Sahan Bulathwela, Maria Perez-Ortiz
arxiv.org/abs/2603.12564 mastoxiv.page/@arXiv_csCL_bot/
- GhanaNLP Parallel Corpora: Comprehensive Multilingual Resources for Low-Resource Ghanaian Languages
Gyamfi, Azunre, Moore, Budu, Asare, Owusu, Asiamah
arxiv.org/abs/2603.13793 mastoxiv.page/@arXiv_csCL_bot/
- sebis at ArchEHR-QA 2026: How Much Can You Do Locally? Evaluating Grounded EHR QA on a Single Not...
Ibrahim Ebrar Yurt, Fabian Karl, Tejaswi Choppa, Florian Matthes
arxiv.org/abs/2603.13962 mastoxiv.page/@arXiv_csCL_bot/
- ExPosST: Explicit Positioning with Adaptive Masking for LLM-Based Simultaneous Machine Translation
Yuzhe Shang, Pengzhi Gao, Yazheng Yang, Jiayao Ma, Wei Liu, Jian Luan, Jinsong Su
arxiv.org/abs/2603.14903 mastoxiv.page/@arXiv_csCL_bot/
- BanglaSocialBench: A Benchmark for Evaluating Sociopragmatic and Cultural Alignment of LLMs in Ba...
Tanvir Ahmed Sijan, S. M Golam Rifat, Pankaj Chowdhury Partha, Md. Tanjeed Islam, Md. Musfique Anwar
arxiv.org/abs/2603.15949 mastoxiv.page/@arXiv_csCL_bot/
- EngGPT2: Sovereign, Efficient and Open Intelligence
G. Ciarfaglia, et al.
arxiv.org/abs/2603.16430 mastoxiv.page/@arXiv_csCL_bot/
- HypeLoRA: Hyper-Network-Generated LoRA Adapters for Calibrated Language Model Fine-Tuning
Bartosz Trojan, Filip Gębala
arxiv.org/abs/2603.19278 mastoxiv.page/@arXiv_csCL_bot/
- Automatic Analysis of Collaboration Through Human Conversational Data Resources: A Review
Yi Yu, Maria Boritchev, Chloé Clavel
arxiv.org/abs/2603.19292 mastoxiv.page/@arXiv_csCL_bot/
- Alignment Whack-a-Mole: Finetuning Activates Verbatim Recall of Copyrighted Books in Large Langu...
Xinyue Liu, Niloofar Mireshghallah, Jane C. Ginsburg, Tuhin Chakrabarty
arxiv.org/abs/2603.20957 mastoxiv.page/@arXiv_csCL_bot/
- KG-Hopper: Empowering Compact Open LLMs with Knowledge Graph Reasoning via Reinforcement Learning
Shuai Wang, Yinan Yu
arxiv.org/abs/2603.21440 mastoxiv.page/@arXiv_csCL_bot/
toXiv_bot_toot

@arXiv_csDS_bot@mastoxiv.page
2026-02-04 07:41:25

Perfect Network Resilience in Polynomial Time
Matthias Bentert, Stefan Schmid
arxiv.org/abs/2602.03827 arxiv.org/pdf/2602.03827 arxiv.org/html/2602.03827
arXiv:2602.03827v1 Announce Type: new
Abstract: Modern communication networks support local fast rerouting mechanisms to quickly react to link failures: nodes store a set of conditional rerouting rules which define how to forward an incoming packet in case of incident link failures. The rerouting decisions at any node $v$ must rely solely on local information available at $v$: the link from which a packet arrived at $v$, the target of the packet, and the incident link failures at $v$. Ideally, such rerouting mechanisms provide perfect resilience: any packet is routed from its source to its target as long as the two are connected in the underlying graph after the link failures. Already in their seminal paper at ACM PODC '12, Feigenbaum, Godfrey, Panda, Schapira, Shenker, and Singla showed that perfect resilience cannot always be achieved. While the design of local rerouting algorithms has received much attention since then, we still lack a detailed understanding of when perfect resilience is achievable.
This paper closes this gap and presents a complete characterization of when perfect resilience can be achieved. This characterization also allows us to design an $O(n)$-time algorithm to decide whether a given instance is perfectly resilient and an $O(nm)$-time algorithm to compute perfectly resilient rerouting rules whenever it is. Our algorithm is also attractive for the simple structure of the rerouting rules it uses, known as skipping in the literature: alternative links are chosen according to an ordered priority list (per in-port), where failed links are simply skipped. Intriguingly, our result also implies that in the context of perfect resilience, skipping rerouting rules are as powerful as more general rerouting rules. This partially answers a long-standing open question by Chiesa, Nikolaevskiy, Mitrovic, Gurtov, Madry, Schapira, and Shenker [IEEE/ACM Transactions on Networking, 2017] in the affirmative.
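The "skipping" rule structure described above is simple enough to sketch directly. A minimal illustration, assuming nothing beyond what the abstract states: each node keeps an ordered priority list of outgoing links per in-port, and on forwarding it uses the first link on the list that has not failed. All names here (`forward`, `rules`, the link labels) are illustrative, not from the paper.

```python
# Sketch of "skipping" rerouting rules: per in-port, an ordered priority
# list of outgoing links; failed links are simply skipped and the first
# surviving link is chosen. Purely illustrative names and topology.

def forward(priority_lists, in_port, failed_links):
    """Return the first non-failed link on the priority list for in_port,
    or None if every link on the list has failed."""
    for link in priority_lists[in_port]:
        if link not in failed_links:
            return link
    return None

# Example: a node with three incident links 'a', 'b', 'c'.
rules = {
    'a': ['b', 'c', 'a'],  # packets arriving on 'a' prefer 'b', then 'c'
    'b': ['c', 'a', 'b'],
    'c': ['a', 'b', 'c'],
}
forward(rules, 'a', failed_links={'b'})  # → 'c' (the failed link 'b' is skipped)
```

The paper's result is that, for perfect resilience, rules of this restricted shape lose nothing against fully general local rerouting rules.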

@arXiv_mathDG_bot@mastoxiv.page
2026-02-27 08:01:00

Calibrations for the Sasaki volume on odd spheres and the no-gap problem
Jonas Matuzas
arxiv.org/abs/2602.22961 arxiv.org/pdf/2602.22961 arxiv.org/html/2602.22961
arXiv:2602.22961v1 Announce Type: new
Abstract: For each odd sphere $S^{n}$ with $n=2m+1\ge 5$, we consider the Sasaki volume functional $\mathrm{Vol}^S(V)=\int_{S^{n}}\sqrt{\det(I+(\nabla V)^{\top}(\nabla V))}\,d\mathrm{vol}$ on smooth unit tangent vector fields $V$. Using the Brito--Chacon--Naveira calibration $\omega=a\wedge\Theta$ on the unit tangent bundle $E=UTS^{n}$, we establish the universal calibrated lower bound $\mathrm{Vol}^S(V)\ge c(m;1)\,\mathrm{vol}(S^{n})$, where $c(m;1)=4^{m}/\binom{2m}{m}$. In the relaxed (integral-current) setting, we show that the section-constrained stable mass in $E$ equals the calibration value and is attained by an $\omega$-calibrated mass-minimizing integral $n$-cycle in the section class.
We also analyze the equality case on smooth graphs. If a smooth graph is $\omega$-calibrated on an open set, then it satisfies the rigidity system $\nabla_V V=0$ and $\nabla_X V=\lambda X$ for all $X\perp V$, hence is locally a radial distance-gradient field. In particular, for $m\ge 2$ there is no smooth unit field on $S^n$ whose graph is $\omega$-calibrated everywhere.
Finally, we construct an explicit smooth recovery sequence (presented in detail for $S^5$ and then extended to all odd dimensions) and prove a uniform nonvanishing estimate for the polar-shell normalization in the patching construction. As a consequence, $\inf_{V}\,\mathrm{Vol}^S(V)=c(m;1)\,\mathrm{vol}(S^{n})$, so there is no Lavrentiev gap.
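As a quick arithmetic check of the constant from the abstract, evaluating $c(m;1)=4^{m}/\binom{2m}{m}$ in the lowest case covered, $S^{5}$ (i.e. $m=2$):

```latex
c(2;1) = \frac{4^{2}}{\binom{4}{2}} = \frac{16}{6} = \frac{8}{3},
\qquad\text{so}\qquad
\mathrm{Vol}^S(V) \;\ge\; \tfrac{8}{3}\,\mathrm{vol}\!\left(S^{5}\right)
\quad\text{for every smooth unit field } V \text{ on } S^{5}.
```

Note $c(m;1)>1$ for $m\ge 1$, so the bound is strictly above the volume of the sphere itself, consistent with the no-gap statement $\inf_{V}\mathrm{Vol}^S(V)=c(m;1)\,\mathrm{vol}(S^{n})$.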

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 12:33:36

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/3]:
- Diffusion Modulation via Environment Mechanism Modeling for Planning
Hanping Zhang, Yuhong Guo
arxiv.org/abs/2602.20422 mastoxiv.page/@arXiv_csAI_bot/
- Heterogeneity-Aware Client Selection Methodology For Efficient Federated Learning
Nihal Balivada, Shrey Gupta, Shashank Shreedhar Bhatt, Suyash Gupta
arxiv.org/abs/2602.20450 mastoxiv.page/@arXiv_csDC_bot/
- Prior-Agnostic Incentive-Compatible Exploration
Ramya Ramalingam, Osbert Bastani, Aaron Roth
arxiv.org/abs/2602.20465 mastoxiv.page/@arXiv_csGT_bot/
- PhyGHT: Physics-Guided HyperGraph Transformer for Signal Purification at the HL-LHC
Mohammed Rakib, Luke Vaughan, Shivang Patel, Flera Rizatdinova, Alexander Khanov, Atriya Sen
arxiv.org/abs/2602.20475 mastoxiv.page/@arXiv_hepex_bot
- ActionEngine: From Reactive to Programmatic GUI Agents via State Machine Memory
Zhong, Faisal, França, Leesatapornwongsa, Szekeres, Rong, Nath
arxiv.org/abs/2602.20502 mastoxiv.page/@arXiv_csAI_bot/
- Inner Speech as Behavior Guides: Steerable Imitation of Diverse Behaviors for Human-AI coordination
Rakshit Trivedi, Kartik Sharma, David C Parkes
arxiv.org/abs/2602.20517 mastoxiv.page/@arXiv_csAI_bot/
- Stop-Think-AutoRegress: Language Modeling with Latent Diffusion Planning
Lovelace, Belardi, Zalouk, Polavaram, Kundurthy, Weinberger
arxiv.org/abs/2602.20528 mastoxiv.page/@arXiv_csCL_bot/
- Standard Transformers Achieve the Minimax Rate in Nonparametric Regression with $C^{s,\lambda}$ T...
Yanming Lai, Defeng Sun
arxiv.org/abs/2602.20555 mastoxiv.page/@arXiv_statML_bo
- Personal Information Parroting in Language Models
Nishant Subramani, Kshitish Ghate, Mona Diab
arxiv.org/abs/2602.20580 mastoxiv.page/@arXiv_csCL_bot/
- Characterizing Online and Private Learnability under Distributional Constraints via Generalized S...
Moïse Blanchard, Abhishek Shetty, Alexander Rakhlin
arxiv.org/abs/2602.20585 mastoxiv.page/@arXiv_statML_bo
- Amortized Bayesian inference for actigraph time sheet data from mobile devices
Daniel Zhou, Sudipto Banerjee
arxiv.org/abs/2602.20611 mastoxiv.page/@arXiv_statML_bo
- Knowing the Unknown: Interpretable Open-World Object Detection via Concept Decomposition Model
Xueqiang Lv, Shizhou Zhang, Yinghui Xing, Di Xu, Peng Wang, Yanning Zhang
arxiv.org/abs/2602.20616 mastoxiv.page/@arXiv_csCV_bot/
- On the Convergence of Stochastic Gradient Descent with Perturbed Forward-Backward Passes
Boao Kong, Hengrui Zhang, Kun Yuan
arxiv.org/abs/2602.20646 mastoxiv.page/@arXiv_mathOC_bo
- DANCE: Doubly Adaptive Neighborhood Conformal Estimation
Feng, Reich, Beaglehole, Luo, Park, Yoo, Huang, Mao, Boz, Kim
arxiv.org/abs/2602.20652 mastoxiv.page/@arXiv_statML_bo
- Vision-Language Models for Ergonomic Assessment of Manual Lifting Tasks: Estimating Horizontal an...
Mohammad Sadra Rajabi, Aanuoluwapo Ojelade, Sunwook Kim, Maury A. Nussbaum
arxiv.org/abs/2602.20658 mastoxiv.page/@arXiv_csCV_bot/
- F10.7 Index Prediction: A Multiscale Decomposition Strategy with Wavelet Transform for Performanc...
Xuran Ma, et al.
arxiv.org/abs/2602.20712 mastoxiv.page/@arXiv_astrophIM
- Communication-Inspired Tokenization for Structured Image Representations
Davtyan, Sahin, Haghighi, Stapf, Acuaviva, Alahi, Favaro
arxiv.org/abs/2602.20731 mastoxiv.page/@arXiv_csCV_bot/
- SibylSense: Adaptive Rubric Learning via Memory Tuning and Adversarial Probing
Yifei Xu, et al.
arxiv.org/abs/2602.20751 mastoxiv.page/@arXiv_csCL_bot/
- Assessing the Impact of Speaker Identity in Speech Spoofing Detection
Anh-Tuan Dao, Driss Matrouf, Nicholas Evans
arxiv.org/abs/2602.20805 mastoxiv.page/@arXiv_csSD_bot/
- Don't Ignore the Tail: Decoupling top-K Probabilities for Efficient Language Model Distillation
Sayantan Dasgupta, Trevor Cohn, Timothy Baldwin
arxiv.org/abs/2602.20816 mastoxiv.page/@arXiv_csCL_bot/
- DRESS: A Continuous Framework for Structural Graph Refinement
Eduar Castrillo Velilla
arxiv.org/abs/2602.20833 mastoxiv.page/@arXiv_csDS_bot/

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 10:10:07

Marco DeepResearch: Unlocking Efficient Deep Research Agents via Verification-Centric Design
Bin Zhu, Qianghuai Jia, Tian Lan, Junyang Ren, Feng Gu, Feihu Jiang, Longyue Wang, Zhao Xu, Weihua Luo
arxiv.org/abs/2603.28376 arxiv.org/pdf/2603.28376 arxiv.org/html/2603.28376
arXiv:2603.28376v1 Announce Type: new
Abstract: Deep research agents autonomously conduct open-ended investigations, integrating complex information retrieval with multi-step reasoning across diverse sources to solve real-world problems. To sustain this capability on long-horizon tasks, reliable verification is critical during both training and inference. A major bottleneck in existing paradigms stems from the lack of explicit verification mechanisms in QA data synthesis, trajectory construction, and test-time scaling. Errors introduced at each stage propagate downstream and degrade the overall agent performance. To address this, we present Marco DeepResearch, a deep research agent optimized with a verification-centric framework design at three levels: \textbf{(1)~QA Data Synthesis:} We introduce verification mechanisms to graph-based and agent-based QA synthesis to control question difficulty while ensuring answers are unique and correct; \textbf{(2)~Trajectory Construction:} We design a verification-driven trajectory synthesis method that injects explicit verification patterns into training trajectories; and \textbf{(3)~Test-time scaling:} We use Marco DeepResearch itself as a verifier at inference time and effectively improve performance on challenging questions. Extensive experimental results demonstrate that our proposed Marco DeepResearch agent significantly outperforms 8B-scale deep research agents on most challenging benchmarks, such as BrowseComp and BrowseComp-ZH. Crucially, under a maximum budget of 600 tool calls, Marco DeepResearch even surpasses or approaches several 30B-scale agents, like Tongyi DeepResearch-30B.

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 11:12:28

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[1/5]:
- Beyond In-Distribution Success: Scaling Curves of CoT Granularity for Language Model Generalization
Ru Wang, Wei Huang, Selena Song, Haoyu Zhang, Qian Niu, Yusuke Iwasawa, Yutaka Matsuo, Jiaxian Guo
arxiv.org/abs/2502.18273 mastoxiv.page/@arXiv_csCL_bot/
- Benchmarking NLP-supported Language Sample Analysis for Swiss Children's Speech
Anja Ryser, Yingqiang Gao, Sarah Ebling
arxiv.org/abs/2504.00780 mastoxiv.page/@arXiv_csCL_bot/
- Cultural Biases of Large Language Models and Humans in Historical Interpretation
Fabio Celli, Georgios Spathulas
arxiv.org/abs/2504.02572 mastoxiv.page/@arXiv_csCL_bot/
- BRIDGE: Benchmarking Large Language Models for Understanding Real-world Clinical Practice Text
Jiageng Wu, et al.
arxiv.org/abs/2504.19467 mastoxiv.page/@arXiv_csCL_bot/
- Understanding the Anchoring Effect of LLM with Synthetic Data: Existence, Mechanism, and Potentia...
Yiming Huang, Biquan Bie, Zuqiu Na, Weilin Ruan, Songxin Lei, Yutao Yue, Xinlei He
arxiv.org/abs/2505.15392 mastoxiv.page/@arXiv_csCL_bot/
- Just as Humans Need Vaccines, So Do Models: Model Immunization to Combat Falsehoods
Raza, Qureshi, Farooq, Lotif, Chadha, Pandya, Emmanouilidis
arxiv.org/abs/2505.17870 mastoxiv.page/@arXiv_csCL_bot/
- LingoLoop Attack: Trapping MLLMs via Linguistic Context and State Entrapment into Endless Loops
Fu, Jiang, Hong, Li, Guo, Yang, Chen, Zhang
arxiv.org/abs/2506.14493 mastoxiv.page/@arXiv_csCL_bot/
- GHTM: A Graph-based Hybrid Topic Modeling Approach with a Benchmark Dataset for the Low-Resource ...
Farhana Haque, Md. Abdur Rahman, Sumon Ahmed
arxiv.org/abs/2508.00605 mastoxiv.page/@arXiv_csCL_bot/
- Link Prediction for Event Logs in the Process Industry
Anastasia Zhukova, Thomas Walton, Christian E. Lobmüller, Bela Gipp
arxiv.org/abs/2508.09096 mastoxiv.page/@arXiv_csCL_bot/
- AirQA: A Comprehensive QA Dataset for AI Research with Instance-Level Evaluation
Huang, Cao, Zhang, Kang, Wang, Wang, Luo, Zheng, Qian, Chen, Yu
arxiv.org/abs/2509.16952 mastoxiv.page/@arXiv_csCL_bot/
- Multi-View Attention Multiple-Instance Learning Enhanced by LLM Reasoning for Cognitive Distortio...
Jun Seo Kim, Hyemi Kim, Woo Joo Oh, Hongjin Cho, Hochul Lee, Hye Hyeon Kim
arxiv.org/abs/2509.17292 mastoxiv.page/@arXiv_csCL_bot/
- Dual-Space Smoothness for Robust and Balanced LLM Unlearning
Han Yan, Zheyuan Liu, Meng Jiang
arxiv.org/abs/2509.23362 mastoxiv.page/@arXiv_csCL_bot/
- The Rise of AfricaNLP: Contributions, Contributors, Community Impact, and Bibliometric Analysis
Tadesse Destaw Belay, et al.
arxiv.org/abs/2509.25477 mastoxiv.page/@arXiv_csCL_bot/
- Open ASR Leaderboard: Towards Reproducible and Transparent Multilingual and Long-Form Speech Reco...
Srivastav, Zheng, Bezzam, Le Bihan, Koluguri, Żelasko, Majumdar, Moumen, Gandhi
arxiv.org/abs/2510.06961 mastoxiv.page/@arXiv_csCL_bot/
- Neuron-Level Analysis of Cultural Understanding in Large Language Models
Taisei Yamamoto, Ryoma Kumon, Danushka Bollegala, Hitomi Yanaka
arxiv.org/abs/2510.08284 mastoxiv.page/@arXiv_csCL_bot/
- CLMN: Concept based Language Models via Neural Symbolic Reasoning
Yibo Yang
arxiv.org/abs/2510.10063 mastoxiv.page/@arXiv_csCL_bot/
- Schema for In-Context Learning
Chen, Chen, Wang, Leong, Fung, Bernales, Aspuru-Guzik
arxiv.org/abs/2510.13905 mastoxiv.page/@arXiv_csCL_bot/
- Evaluating Latent Knowledge of Public Tabular Datasets in Large Language Models
Matteo Silvestri, Fabiano Veglianti, Flavio Giorgi, Fabrizio Silvestri, Gabriele Tolomei
arxiv.org/abs/2510.20351 mastoxiv.page/@arXiv_csCL_bot/
- LuxIT: A Luxembourgish Instruction Tuning Dataset from Monolingual Seed Data
Julian Valline, Cedric Lothritz, Siwen Guo, Jordi Cabot
arxiv.org/abs/2510.24434 mastoxiv.page/@arXiv_csCL_bot/
- Surfacing Subtle Stereotypes: A Multilingual, Debate-Oriented Evaluation of Modern LLMs
Muhammed Saeed, Muhammad Abdul-mageed, Shady Shehata
arxiv.org/abs/2511.01187 mastoxiv.page/@arXiv_csCL_bot/