Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@stephane_klein@social.coop
2025-11-21 13:25:32

#OpenRouterAI now offers embedding models. Currently 22 models.
notes.sklein.xyz/2025-11-21_13

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:10

Exploiting ID-Text Complementarity via Ensembling for Sequential Recommendation
Liam Collins, Bhuvesh Kumar, Clark Mingxuan Ju, Tong Zhao, Donald Loveland, Leonardo Neves, Neil Shah
arxiv.org/abs/2512.17820 arxiv.org/pdf/2512.17820 arxiv.org/html/2512.17820
arXiv:2512.17820v1 Announce Type: new
Abstract: Modern Sequential Recommendation (SR) models commonly utilize modality features to represent items, motivated in large part by recent advancements in language and vision modeling. To do so, several works completely replace ID embeddings with modality embeddings, claiming that modality embeddings render ID embeddings unnecessary because they can match or even exceed ID embedding performance. On the other hand, many works jointly utilize ID and modality features, but posit that complex fusion strategies, such as multi-stage training and/or intricate alignment architectures, are necessary for this joint utilization. However, underlying both these lines of work is a lack of understanding of the complementarity of ID and modality features. In this work, we address this gap by studying the complementarity of ID- and text-based SR models. We show that these models do learn complementary signals, meaning that either should provide performance gain when used properly alongside the other. Motivated by this, we propose a new SR method that preserves ID-text complementarity through independent model training, then harnesses it through a simple ensembling strategy. Despite this method's simplicity, we show it outperforms several competitive SR baselines, implying that both ID and text features are necessary to achieve state-of-the-art SR performance but complex fusion architectures are not.
toXiv_bot_toot

@usul@piaille.fr
2025-12-17 07:30:05

Your Donations at Work: Funding Josh Matthews' Contributions to Servo - Servo aims to empower developers with a lightweight, high-performance alternative for embedding web technologies in applications.
servo.org/blog/2025/09/17/your

@Techmeme@techhub.social
2026-01-20 18:36:04

OpenAI and ServiceNow sign a three-year deal to integrate OpenAI's models into ServiceNow's business software, including embedding OpenAI's AI agents (Belle Lin/Wall Street Journal)
wsj.com…

@michabbb@social.vivaldi.net
2026-01-24 00:28:07

🎯 Zero accuracy loss - preserves what matters: errors, anomalies, high-scoring items & query-relevant content using BM25/embedding similarity
✅ Full provider support: #OpenAI, #Anthropic, #Google

@zachleat@zachleat.com
2025-12-10 14:32:20

I wish dependencies would stop embedding their own argument parser in a package that didn’t need a CLI to begin with.
Eleventy has *three* different dependencies with pretty hefty (and outdated) CLI argument parser libraries 😭
Is there an `overrides`-style feature that works for libraries? (afaik this feature is only for app-level code)
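For context, npm's `overrides` field (the feature the post refers to) is declared in the top-level package.json and, as the post notes, only takes effect for the application that installs the tree, not inside a published library. A minimal sketch, with hypothetical package names standing in for the offending dependencies:

```json
{
  "name": "my-app",
  "overrides": {
    "minimist": "^1.2.8",
    "some-heavy-dep": {
      "yargs": "^17.7.2"
    }
  }
}
```

The top-level key pins a package everywhere in the tree; the nested form pins it only under a specific dependency.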

@cellfourteen@social.petertoushkov.eu
2025-12-14 10:31:42

Citizen DJ / Homepage
citizen-dj.labs.loc.gov/

@khalidabuhakmeh@mastodon.social
2026-01-06 15:56:10

2026 “resolution” for documentation writers embedding screenshots.
Please update your images to hi-dpi (2x). My old-man eyes can't take this blurriness.

@Techmeme@techhub.social
2026-01-13 16:01:41

CrowdStrike acquires Israel-based browser security startup Seraphic, a source says for around $400M; Seraphic has raised around $37M in total (Meir Orbach/CTech)
calcalistech.com/ctechnews/art

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:45

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[3/5]:
- Look-Ahead Reasoning on Learning Platforms
Haiqing Zhu, Tijana Zrnic, Celestine Mendler-Dünner
arxiv.org/abs/2511.14745 mastoxiv.page/@arXiv_csLG_bot/
- Deep Gaussian Process Proximal Policy Optimization
Matthijs van der Lende, Juan Cardenas-Cartagena
arxiv.org/abs/2511.18214 mastoxiv.page/@arXiv_csLG_bot/
- Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory
Akira Tamamori
arxiv.org/abs/2511.23083 mastoxiv.page/@arXiv_csLG_bot/
- xGR: Efficient Generative Recommendation Serving at Scale
Sun, Liu, Zhang, Wu, Yang, Liang, Li, Ma, Liang, Ren, Zhang, Liu, Zhang, Qian, Yang
arxiv.org/abs/2512.11529 mastoxiv.page/@arXiv_csLG_bot/
- Credit Risk Estimation with Non-Financial Features: Evidence from a Synthetic Istanbul Dataset
Atalay Denknalbant, Emre Sezdi, Zeki Furkan Kutlu, Polat Goktas
arxiv.org/abs/2512.12783 mastoxiv.page/@arXiv_csLG_bot/
- The Semantic Illusion: Certified Limits of Embedding-Based Hallucination Detection in RAG Systems
Debu Sinha
arxiv.org/abs/2512.15068 mastoxiv.page/@arXiv_csLG_bot/
- Towards Reproducibility in Predictive Process Mining: SPICE -- A Deep Learning Library
Stritzel, Hühnerbein, Rauch, Zarate, Fleischmann, Buck, Lischka, Frey
arxiv.org/abs/2512.16715 mastoxiv.page/@arXiv_csLG_bot/
- Differentially private Bayesian tests
Abhisek Chakraborty, Saptati Datta
arxiv.org/abs/2401.15502 mastoxiv.page/@arXiv_statML_bo
- SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning
Paul Mangold, Sergey Samsonov, Safwan Labbi, Ilya Levin, Reda Alami, Alexey Naumov, Eric Moulines
arxiv.org/abs/2402.04114
- Adjusting Model Size in Continual Gaussian Processes: How Big is Big Enough?
Guiomar Pescador-Barrios, Sarah Filippi, Mark van der Wilk
arxiv.org/abs/2408.07588 mastoxiv.page/@arXiv_statML_bo
- Non-Perturbative Trivializing Flows for Lattice Gauge Theories
Mathis Gerdes, Pim de Haan, Roberto Bondesan, Miranda C. N. Cheng
arxiv.org/abs/2410.13161 mastoxiv.page/@arXiv_heplat_bo
- Dynamic PET Image Prediction Using a Network Combining Reversible and Irreversible Modules
Sun, Zhang, Xia, Sun, Chen, Yang, Liu, Zhu, Liu
arxiv.org/abs/2410.22674 mastoxiv.page/@arXiv_eessIV_bo
- Targeted Learning for Variable Importance
Xiaohan Wang, Yunzhe Zhou, Giles Hooker
arxiv.org/abs/2411.02221 mastoxiv.page/@arXiv_statML_bo
- Refined Analysis of Federated Averaging and Federated Richardson-Romberg
Paul Mangold, Alain Durmus, Aymeric Dieuleveut, Sergey Samsonov, Eric Moulines
arxiv.org/abs/2412.01389 mastoxiv.page/@arXiv_statML_bo
- Embedding-Driven Data Distillation for 360-Degree IQA With Residual-Aware Refinement
Abderrezzaq Sendjasni, Seif-Eddine Benkabou, Mohamed-Chaker Larabi
arxiv.org/abs/2412.12667 mastoxiv.page/@arXiv_csCV_bot/
- 3D Cell Oversegmentation Correction via Geo-Wasserstein Divergence
Peter Chen, Bryan Chang, Olivia A Creasey, Julie Beth Sneddon, Zev J Gartner, Yining Liu
arxiv.org/abs/2502.01890 mastoxiv.page/@arXiv_csCV_bot/
- DHP: Discrete Hierarchical Planning for Hierarchical Reinforcement Learning Agents
Shashank Sharma, Janina Hoffmann, Vinay Namboodiri
arxiv.org/abs/2502.01956 mastoxiv.page/@arXiv_csRO_bot/
- Foundation for unbiased cross-validation of spatio-temporal models for species distribution modeling
Diana Koldasbayeva, Alexey Zaytsev
arxiv.org/abs/2502.03480
- GraphCompNet: A Position-Aware Model for Predicting and Compensating Shape Deviations in 3D Printing
Juheon Lee, Rachel (Lei) Chen, Juan Carlos Catana, Hui Wang, Jun Zeng
arxiv.org/abs/2502.09652 mastoxiv.page/@arXiv_csCV_bot/
- LookAhead Tuning: Safer Language Models via Partial Answer Previews
Liu, Wang, Luo, Yuan, Sun, Liang, Zhang, Zhou, Hooi, Deng
arxiv.org/abs/2503.19041 mastoxiv.page/@arXiv_csCL_bot/
- Constraint-based causal discovery with tiered background knowledge and latent variables in single...
Christine W. Bang, Vanessa Didelez
arxiv.org/abs/2503.21526 mastoxiv.page/@arXiv_statML_bo