Tootfinder

Opt-in global Mastodon full-text search. Join the index!

No exact results. Similar results found.
@NFL@darktundra.xyz
2026-03-09 18:29:20

Source: Cardinals land Allgeier on $12.25M deal espn.com/nfl/story/_/id/481547

@bebe@social.linux.pizza
2026-03-12 23:08:04

Who defines #Desinformation, and who decides what counts as #Wahrheit?
Do you still remember the arguments against the #Uploadfilter from Articles 13/17? 🤔

@digitalnaiv@mastodon.social
2026-02-01 08:49:00

Blogging is "old" and websites are "embarrassing"? It's not that simple. @… counters that blogs, with their stable URLs, independent thinking, and genuine argumentation, are superior to the algorithmically truncated noise of social media. Your own web remains a place of freedom: not a click economy, but depth of thought. 🔗

@arXiv_csGR_bot@mastoxiv.page
2026-02-03 07:43:07

Fast Sparse Matrix Permutation for Mesh-Based Direct Solvers
Behrooz Zarebavami, Ahmed H. Mahmoud, Ana Dodik, Changcheng Yuan, Serban D. Porumbescu, John D. Owens, Maryam Mehri Dehnavi, Justin Solomon
arxiv.org/abs/2602.00898 arxiv.org/pdf/2602.00898 arxiv.org/html/2602.00898
arXiv:2602.00898v1 Announce Type: new
Abstract: We present a fast sparse matrix permutation algorithm tailored to linear systems arising from triangle meshes. Our approach produces nested-dissection-style permutations while significantly reducing permutation runtime overhead. Rather than enforcing strict balance and separator optimality, the algorithm deliberately relaxes these design decisions to favor fast partitioning and efficient elimination-tree construction. Our method decomposes permutation into patch-level local orderings and a compact quotient-graph ordering of separators, preserving the essential structure required by sparse Cholesky factorization while avoiding its most expensive components. We integrate our algorithm into vendor-maintained sparse Cholesky solvers on both CPUs and GPUs. Across a range of graphics applications, including single and repeated factorizations, our method reduces permutation time and improves sparse Cholesky solve performance by up to 6.27x.
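The core idea in the abstract, reordering a mesh-derived sparse matrix before Cholesky factorization to cut fill-in and solve cost, can be sketched with an off-the-shelf stand-in. The paper's patch-based nested-dissection ordering is not public, so this minimal sketch assumes SciPy's reverse Cuthill-McKee permutation as the illustrative reordering, and a grid Laplacian-like matrix as a hypothetical mesh system:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

rng = np.random.default_rng(0)
n = 20
N = n * n
# 5-point Laplacian-like SPD matrix on an n x n grid (a simple mesh stand-in).
A = sp.diags([-1, -1, 4, -1, -1], [-n, -1, 0, 1, n], shape=(N, N), format="csr")

# Scramble the node numbering to mimic an unstructured input ordering.
p = rng.permutation(N)
A_bad = A[p][:, p]

# Reverse Cuthill-McKee: a classic bandwidth/fill-reducing reordering,
# standing in here for the paper's faster nested-dissection-style method.
perm = reverse_cuthill_mckee(A_bad, symmetric_mode=True)
A_rcm = A_bad[perm][:, perm]

def bandwidth(m):
    """Maximum distance of any nonzero from the diagonal."""
    coo = m.tocoo()
    return int(np.max(np.abs(coo.row - coo.col)))

print(bandwidth(A_bad), bandwidth(A_rcm))
```

A permutation computed once this way is reused across repeated factorizations of matrices with the same sparsity pattern, which is where the abstract's "repeated factorizations" savings come from.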
toXiv_bot_toot

@BBC3MusicBot@mastodonapp.uk
2026-01-19 19:23:32

🇺🇦 #NowPlaying on BBCRadio3's #ClassicalMixtape
Robert Schumann, Dénes Várjon & Steven Isserlis:
🎵 Adagio and Allegro in A-Flat Major, Op. 70; Adagio
#RobertSchumann #DénesVárjon #StevenIsserlis

@arXiv_csDS_bot@mastoxiv.page
2026-02-03 08:07:36

Fast $k$-means Seeding Under The Manifold Hypothesis
Poojan Shah, Shashwat Agrawal, Ragesh Jaiswal
arxiv.org/abs/2602.01104 arxiv.org/pdf/2602.01104 arxiv.org/html/2602.01104
arXiv:2602.01104v1 Announce Type: new
Abstract: We study beyond-worst-case analysis for the $k$-means problem, where the goal is to model typical instances of $k$-means arising in practice. Existing theoretical approaches provide guarantees under certain assumptions on the optimal solutions to $k$-means, making them difficult to validate in practice. We propose the manifold hypothesis, where data obtained in ambient dimension $D$ concentrates around a low-dimensional manifold of intrinsic dimension $d$, as a reasonable assumption for modeling real-world clustering instances. We identify key geometric properties of datasets that have theoretically predictable scaling laws depending on the quantization exponent $\varepsilon = 2/d$, using techniques from optimal quantization theory. We show how to exploit these regularities to design a fast seeding method called $\operatorname{Qkmeans}$ which provides $O(\rho^{-2} \log k)$-approximate solutions to the $k$-means problem in time $O(nD) + \widetilde{O}(\varepsilon^{1+\rho}\rho^{-1}k^{1+\gamma})$, where the exponent $\gamma = \varepsilon \rho$ for an input parameter $\rho < 1$. This yields new runtime-quality tradeoffs. We perform a large-scale empirical study across various domains to validate our theoretical predictions and algorithm performance, bridging theory and practice for beyond-worst-case data clustering.
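For context, the standard baseline that fast seeding methods such as the paper's $\operatorname{Qkmeans}$ are compared against is $D^2$ (k-means++) seeding, which draws each new center with probability proportional to its squared distance from the nearest center chosen so far. A minimal sketch follows; the noisy-circle dataset embedded in $D = 10$ dimensions is an invented illustration of the manifold hypothesis, not the paper's benchmark:

```python
import numpy as np

def dsquared_seeding(X, k, rng):
    """Standard k-means++ (D^2) seeding: each new center is sampled with
    probability proportional to squared distance to the nearest center."""
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    d2 = np.sum((X - centers[0]) ** 2, axis=1)  # dist^2 to nearest center
    for _ in range(k - 1):
        idx = rng.choice(n, p=d2 / d2.sum())
        centers.append(X[idx])
        d2 = np.minimum(d2, np.sum((X - X[idx]) ** 2, axis=1))
    return np.array(centers)

rng = np.random.default_rng(1)
# Points near a 1D curve (a circle) embedded in ambient dimension D = 10:
# intrinsic dimension d = 1 despite the high-dimensional representation.
t = rng.uniform(0, 1, size=(500, 1))
X = np.hstack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
               np.zeros((500, 8))]) + 0.01 * rng.normal(size=(500, 10))
C = dsquared_seeding(X, k=5, rng=rng)
print(C.shape)
```

Plain $D^2$ seeding costs $O(nDk)$; the abstract's contribution is exploiting the low intrinsic dimension $d$ to seed faster while keeping a provable approximation guarantee.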

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:08:18

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[5/6]:
- Watermarking Degrades Alignment in Language Models: Analysis and Mitigation
Apurv Verma, NhatHai Phan, Shubhendu Trivedi
arxiv.org/abs/2506.04462 mastoxiv.page/@arXiv_csCL_bot/
- Sensory-Motor Control with Large Language Models via Iterative Policy Refinement
Jônata Tyska Carvalho, Stefano Nolfi
arxiv.org/abs/2506.04867 mastoxiv.page/@arXiv_csAI_bot/
- ICE-ID: A Novel Historical Census Dataset for Longitudinal Identity Resolution
de Carvalho, Popov, Kaatee, Correia, Thórisson, Li, Björnsson, Sigurðarson, Dibangoye
arxiv.org/abs/2506.13792 mastoxiv.page/@arXiv_csAI_bot/
- Feedback-driven recurrent quantum neural network universality
Lukas Gonon, Rodrigo Martínez-Peña, Juan-Pablo Ortega
arxiv.org/abs/2506.16332 mastoxiv.page/@arXiv_quantph_b
- Programming by Backprop: An Instruction is Worth 100 Examples When Finetuning LLMs
Cook, Sapora, Ahmadian, Khan, Rocktaschel, Foerster, Ruis
arxiv.org/abs/2506.18777 mastoxiv.page/@arXiv_csAI_bot/
- Stochastic Quantum Spiking Neural Networks with Quantum Memory and Local Learning
Jiechen Chen, Bipin Rajendran, Osvaldo Simeone
arxiv.org/abs/2506.21324 mastoxiv.page/@arXiv_csNE_bot/
- Enjoying Non-linearity in Multinomial Logistic Bandits: A Minimax-Optimal Algorithm
Pierre Boudart (SIERRA), Pierre Gaillard (Thoth), Alessandro Rudi (PSL, DI-ENS, Inria)
arxiv.org/abs/2507.05306 mastoxiv.page/@arXiv_statML_bo
- Characterizing State Space Model and Hybrid Language Model Performance with Long Context
Saptarshi Mitra, Rachid Karami, Haocheng Xu, Sitao Huang, Hyoukjun Kwon
arxiv.org/abs/2507.12442 mastoxiv.page/@arXiv_csAR_bot/
- Is Exchangeability better than I.I.D to handle Data Distribution Shifts while Pooling Data for Da...
Ayush Roy, Samin Enam, Jun Xia, Won Hwa Kim, Vishnu Suresh Lokhande
arxiv.org/abs/2507.19575 mastoxiv.page/@arXiv_csCV_bot/
- TASER: Table Agents for Schema-guided Extraction and Recommendation
Nicole Cho, Kirsty Fielding, William Watson, Sumitra Ganesh, Manuela Veloso
arxiv.org/abs/2508.13404 mastoxiv.page/@arXiv_csAI_bot/
- Morphology-Aware Peptide Discovery via Masked Conditional Generative Modeling
Nuno Costa, Julija Zavadlav
arxiv.org/abs/2509.02060 mastoxiv.page/@arXiv_qbioBM_bo
- PCPO: Proportionate Credit Policy Optimization for Aligning Image Generation Models
Jeongjae Lee, Jong Chul Ye
arxiv.org/abs/2509.25774 mastoxiv.page/@arXiv_csCV_bot/
- Multi-hop Deep Joint Source-Channel Coding with Deep Hash Distillation for Semantically Aligned I...
Didrik Bergström, Deniz Gündüz, Onur Günlü
arxiv.org/abs/2510.06868 mastoxiv.page/@arXiv_csIT_bot/
- MoMaGen: Generating Demonstrations under Soft and Hard Constraints for Multi-Step Bimanual Mobile...
Chengshu Li, et al.
arxiv.org/abs/2510.18316 mastoxiv.page/@arXiv_csRO_bot/
- A Spectral Framework for Graph Neural Operators: Convergence Guarantees and Tradeoffs
Roxanne Holden, Luana Ruiz
arxiv.org/abs/2510.20954 mastoxiv.page/@arXiv_statML_bo
- Breaking Agent Backbones: Evaluating the Security of Backbone LLMs in AI Agents
Bazinska, Mathys, Casucci, Rojas-Carulla, Davies, Souly, Pfister
arxiv.org/abs/2510.22620 mastoxiv.page/@arXiv_csCR_bot/
- Uncertainty Calibration of Multi-Label Bird Sound Classifiers
Raphael Schwinger, Ben McEwen, Vincent S. Kather, René Heinrich, Lukas Rauch, Sven Tomforde
arxiv.org/abs/2511.08261 mastoxiv.page/@arXiv_csSD_bot/
- Two-dimensional RMSD projections for reaction path visualization and validation
Rohit Goswami (Institute IMX and Lab-COSMO, École polytechnique fédérale de Lausanne)
arxiv.org/abs/2512.07329 mastoxiv.page/@arXiv_physicsch
- Distribution-informed Online Conformal Prediction
Dongjian Hu, Junxi Wu, Shu-Tao Xia, Changliang Zou
arxiv.org/abs/2512.07770 mastoxiv.page/@arXiv_statML_bo
- Coupling Experts and Routers in Mixture-of-Experts via an Auxiliary Loss
Ang Lv, Jin Ma, Yiyuan Ma, Siyuan Qiao
arxiv.org/abs/2512.23447 mastoxiv.page/@arXiv_csCL_bot/