Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@Techmeme@techhub.social
2026-01-23 02:05:47

TikTok's new majority US-owned JV includes investors Oracle, Silver Lake, and Abu Dhabi's MGX, each holding 15%, and Dell Family Office; ByteDance retains 19.9% (Financial Times)
ft.com/content/b905cb50-3093-4

@Mediagazer@mstdn.social
2026-01-23 02:06:04

TikTok's new majority US-owned JV includes investors Oracle, Silver Lake, and Abu Dhabi's MGX, each holding 15%, and Dell Family Office; ByteDance retains 19.9% (Financial Times)
ft.com/content/b905cb50-3093-4

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:35:41

Rethink Efficiency Side of Neural Combinatorial Solver: An Offline and Self-Play Paradigm
Zhenxing Xu, Zeyuan Ma, Weidong Bao, Hui Yan, Yan Zheng, Ji Wang
arxiv.org/abs/2602.20730 arxiv.org/pdf/2602.20730 arxiv.org/html/2602.20730
arXiv:2602.20730v1 Announce Type: new
Abstract: We propose ECO, a versatile learning paradigm that enables efficient offline self-play for Neural Combinatorial Optimization (NCO). ECO addresses key limitations in the field through: 1) Paradigm Shift: Moving beyond inefficient online paradigms, we introduce a two-phase offline paradigm consisting of supervised warm-up and iterative Direct Preference Optimization (DPO); 2) Architecture Shift: We deliberately design a Mamba-based architecture to further enhance efficiency in the offline paradigm; and 3) Progressive Bootstrapping: To stabilize training, we employ a heuristic-based bootstrapping mechanism that ensures continuous policy improvement during training. Comparison results on TSP and CVRP show that ECO performs competitively with state-of-the-art baselines, with a significant efficiency advantage in memory utilization and training throughput. We provide a further in-depth analysis of ECO's efficiency, throughput, and memory usage. Ablation studies show the rationale behind our designs.
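The iterative DPO step the abstract mentions can be sketched as a standard preference loss over pairs of candidate solutions: the policy is pushed to prefer the better tour more strongly than a frozen reference policy does. This is a minimal illustrative sketch of the generic DPO objective, not code from the paper; all names are assumptions.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair.

    logp_w / logp_l: policy log-probabilities of the preferred (winning)
    and dispreferred (losing) solution; ref_logp_* are the same
    quantities under the frozen reference policy. beta scales how hard
    the policy is pushed away from the reference.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # negative log-sigmoid of the margin: loss shrinks as the policy
    # favors the winning solution more strongly than the reference does
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With identical policy and reference log-probabilities the margin is zero and the loss is log 2; increasing the policy's relative preference for the winning solution drives the loss toward zero.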

@ubuntourist@mastodon.social
2026-02-23 12:25:51

15,000 people singing to call on ICE to leave their jobs!
🎵🎵🎵
Oh-h-h, It's okay to change your mind,
Show us your courage,
Leave this behind,
It's okay to change your mind,
And you can join us,
Join us here any time
🎵🎵🎵
instagram.com/reel/DVEuV1hkvjG/

@kubikpixel@chaos.social
2026-02-19 11:10:15

«dCode - Online Ciphers, Solvers, Decoders, Calculators:
dCode is the universal site for decipher coded messages, cheating on letter games, solving puzzles, geocaches and treasure hunts, etc.»
This data-collecting website is often suggested to me when I am looking for something related to data security and encryption. Are there any privacy-friendly online alternatives in this regard?
🔑

@june_thalia_michael@literatur.social
2026-02-22 20:13:49

#EroticMusings 39: Does your erotica explore themes of the "exotic" and/or of "otherness"?
I describe different bodies, but I do my best to have my characters marvel at all of them. Fenia and her pink skin is as wondrous to behold as Fabiola's dark one with the silvery stretch marks, as wondrous as Alexis and her freckled knees.
My characters share a deep ado…

@Techmeme@techhub.social
2025-12-18 22:55:50

Memo: the TikTok US deal is set to close on Jan. 22; terms include retraining the recommendation algorithm on US user data and Oracle overseeing data protection (Alex Weprin/The Hollywood Reporter)
hollywoodreporter.com/business

@tinoeberl@mastodon.online
2026-02-13 08:09:02

Less #Verpackungsmüll (packaging waste):
#Vanillezucker (vanilla sugar) is easy and inexpensive to make yourself. 🌟
"Used" #Vanilleschoten (vanilla pods) get a second life in the process. What …

@Mediagazer@mstdn.social
2025-12-18 22:55:37

Memo: the TikTok US deal is set to close on Jan. 22; terms include retraining the recommendation algorithm on US user data and Oracle overseeing data protection (Alex Weprin/The Hollywood Reporter)
hollywoodreporter.com/business

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:07:37

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/6]:
- Towards Attributions of Input Variables in a Coalition
Xinhao Zheng, Huiqi Deng, Quanshi Zhang
arxiv.org/abs/2309.13411
- Knee or ROC
Veronica Wendt, Jacob Steiner, Byunggu Yu, Caleb Kelly, Justin Kim
arxiv.org/abs/2401.07390
- Rethinking Disentanglement under Dependent Factors of Variation
Antonio Almudévar, Alfonso Ortega
arxiv.org/abs/2408.07016 mastoxiv.page/@arXiv_csLG_bot/
- Minibatch Optimal Transport and Perplexity Bound Estimation in Discrete Flow Matching
Etrit Haxholli, Yeti Z. Gurbuz, Ogul Can, Eli Waxman
arxiv.org/abs/2411.00759 mastoxiv.page/@arXiv_csLG_bot/
- Predicting Subway Passenger Flows under Incident Situation with Causality
Xiannan Huang, Shuhan Qiu, Quan Yuan, Chao Yang
arxiv.org/abs/2412.06871 mastoxiv.page/@arXiv_csLG_bot/
- Characterizing LLM Inference Energy-Performance Tradeoffs across Workloads and GPU Scaling
Paul Joe Maliakel, Shashikant Ilager, Ivona Brandic
arxiv.org/abs/2501.08219 mastoxiv.page/@arXiv_csLG_bot/
- Universality of Benign Overfitting in Binary Linear Classification
Ichiro Hashimoto, Stanislav Volgushev, Piotr Zwiernik
arxiv.org/abs/2501.10538 mastoxiv.page/@arXiv_csLG_bot/
- Safe Reinforcement Learning for Real-World Engine Control
Julian Bedei, Lucas Koch, Kevin Badalian, Alexander Winkler, Patrick Schaber, Jakob Andert
arxiv.org/abs/2501.16613 mastoxiv.page/@arXiv_csLG_bot/
- A Statistical Learning Perspective on Semi-dual Adversarial Neural Optimal Transport Solvers
Roman Tarasov, Petr Mokrov, Milena Gazdieva, Evgeny Burnaev, Alexander Korotin
arxiv.org/abs/2502.01310
- Improving the Convergence of Private Shuffled Gradient Methods with Public Data
Shuli Jiang, Pranay Sharma, Zhiwei Steven Wu, Gauri Joshi
arxiv.org/abs/2502.03652 mastoxiv.page/@arXiv_csLG_bot/
- Using the Path of Least Resistance to Explain Deep Networks
Sina Salek, Joseph Enguehard
arxiv.org/abs/2502.12108 mastoxiv.page/@arXiv_csLG_bot/
- Distributional Vision-Language Alignment by Cauchy-Schwarz Divergence
Wenzhe Yin, Zehao Xiao, Pan Zhou, Shujian Yu, Jiayi Shen, Jan-Jakob Sonke, Efstratios Gavves
arxiv.org/abs/2502.17028 mastoxiv.page/@arXiv_csLG_bot/
- Armijo Line-search Can Make (Stochastic) Gradient Descent Provably Faster
Sharan Vaswani, Reza Babanezhad
arxiv.org/abs/2503.00229 mastoxiv.page/@arXiv_csLG_bot/
- Semantic Parallelism: Redefining Efficient MoE Inference via Model-Data Co-Scheduling
Yan Li, Zhenyu Zhang, Zhengang Wang, Pengfei Chen, Pengfei Zheng
arxiv.org/abs/2503.04398 mastoxiv.page/@arXiv_csLG_bot/
- A Survey on Federated Fine-tuning of Large Language Models
Wu, Tian, Li, Sun, Tam, Zhou, Liao, Xiong, Guo, Li, Xu
arxiv.org/abs/2503.12016 mastoxiv.page/@arXiv_csLG_bot/
- Towards Trustworthy GUI Agents: A Survey
Yucheng Shi, Wenhao Yu, Jingyuan Huang, Wenlin Yao, Wenhu Chen, Ninghao Liu
arxiv.org/abs/2503.23434 mastoxiv.page/@arXiv_csLG_bot/
- CONTINA: Confidence Interval for Traffic Demand Prediction with Coverage Guarantee
Chao Yang, Xiannan Huang, Shuhan Qiu, Yan Cheng
arxiv.org/abs/2504.13961 mastoxiv.page/@arXiv_csLG_bot/
- Regularity and Stability Properties of Selective SSMs with Discontinuous Gating
Nikola Zubić, Davide Scaramuzza
arxiv.org/abs/2505.11602 mastoxiv.page/@arXiv_csLG_bot/
- RECON: Robust symmetry discovery via Explicit Canonical Orientation Normalization
Alonso Urbano, David W. Romero, Max Zimmer, Sebastian Pokutta
arxiv.org/abs/2505.13289 mastoxiv.page/@arXiv_csLG_bot/
- RefLoRA: Refactored Low-Rank Adaptation for Efficient Fine-Tuning of Large Models
Yilang Zhang, Bingcong Li, Georgios B. Giannakis
arxiv.org/abs/2505.18877 mastoxiv.page/@arXiv_csLG_bot/
- SuperMAN: Interpretable and Expressive Networks over Temporally Sparse Heterogeneous Data
Bechler-Speicher, Zerio, Huri, Vestergaard, Gilad-Bachrach, Jess, Bhatt, Sazonovs
arxiv.org/abs/2505.19193 mastoxiv.page/@arXiv_csLG_bot/