Tootfinder

Opt-in global Mastodon full-text search. Join the index!

No exact results. Similar results found.
@Techmeme@techhub.social
2026-01-21 12:21:05

Ukrainian-founded language learning marketplace Preply raised a $150M Series D led by WestCap at a $1.2B valuation; the startup has a 150-person office in Kyiv (Anna Heim/TechCrunch)
techcrunch.com/2026/01/21/lang

@heiseonline@social.heise.de
2025-12-23 17:20:00

VR headsets of the '90s and 2000s: without them, there would be no Meta Quest today
Virtual reality as we know it today has come a long way. We look back on an era of failure and learning.

Over the last decade, America's roads have become more dangerous, with serious crashes increasing by nearly 20 percent since 2013. Approximately 94 percent of crashes are the result of driver behavior like speeding, impairment, or distraction, behavior that can be detected and corrected by a new generation of machine-learning-enabled dash cams. Seamless integration between machine learning, IoT management, and the cloud allows these cameras to improve safety in r…

@inthehands@hachyderm.io
2025-12-22 17:36:05

“Make the seats adjustable” is a thought I bring to teaching, for example: Does the context I’m creating for learning accommodate people with all different kinds of minds? What variations am I not accommodating? Can I make some things more individually adjustable to better embrace those variations? Can multiple instructors / learning environments / schools offer the flexibility that I can’t offer myself?
Total adjustability is impossible; infinite flexibility is impossible. But as an ongoing effort, as a •direction•, this work is both feasible and useful.
9/

@kubikpixel@chaos.social
2025-11-22 07:35:25

AI newbie or tech pro? Find out in the AI quiz
Would you like to learn more about AI and algorithms in a playful way? Then you're in the right place! In our AI quiz you'll learn more about the topics AlgorithmWatch works on. Whether AI newbie or tech pro: good luck guessing, or knowing.
by @…
🍀

@NFL@darktundra.xyz
2025-11-23 16:46:33

Buccaneers vs. Rams NFL player props, SGP: Self-learning AI backs Baker Mayfield Over 242.5 yards on 'SNF'

cbssports.com/nfl/news/buccane

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:45

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[3/5]:
- Look-Ahead Reasoning on Learning Platforms
Haiqing Zhu, Tijana Zrnic, Celestine Mendler-Dünner
arxiv.org/abs/2511.14745 mastoxiv.page/@arXiv_csLG_bot/
- Deep Gaussian Process Proximal Policy Optimization
Matthijs van der Lende, Juan Cardenas-Cartagena
arxiv.org/abs/2511.18214 mastoxiv.page/@arXiv_csLG_bot/
- Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory
Akira Tamamori
arxiv.org/abs/2511.23083 mastoxiv.page/@arXiv_csLG_bot/
- xGR: Efficient Generative Recommendation Serving at Scale
Sun, Liu, Zhang, Wu, Yang, Liang, Li, Ma, Liang, Ren, Zhang, Liu, Zhang, Qian, Yang
arxiv.org/abs/2512.11529 mastoxiv.page/@arXiv_csLG_bot/
- Credit Risk Estimation with Non-Financial Features: Evidence from a Synthetic Istanbul Dataset
Atalay Denknalbant, Emre Sezdi, Zeki Furkan Kutlu, Polat Goktas
arxiv.org/abs/2512.12783 mastoxiv.page/@arXiv_csLG_bot/
- The Semantic Illusion: Certified Limits of Embedding-Based Hallucination Detection in RAG Systems
Debu Sinha
arxiv.org/abs/2512.15068 mastoxiv.page/@arXiv_csLG_bot/
- Towards Reproducibility in Predictive Process Mining: SPICE -- A Deep Learning Library
Stritzel, Hühnerbein, Rauch, Zarate, Fleischmann, Buck, Lischka, Frey
arxiv.org/abs/2512.16715 mastoxiv.page/@arXiv_csLG_bot/
- Differentially private Bayesian tests
Abhisek Chakraborty, Saptati Datta
arxiv.org/abs/2401.15502 mastoxiv.page/@arXiv_statML_bo
- SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning
Paul Mangold, Sergey Samsonov, Safwan Labbi, Ilya Levin, Reda Alami, Alexey Naumov, Eric Moulines
arxiv.org/abs/2402.04114
- Adjusting Model Size in Continual Gaussian Processes: How Big is Big Enough?
Guiomar Pescador-Barrios, Sarah Filippi, Mark van der Wilk
arxiv.org/abs/2408.07588 mastoxiv.page/@arXiv_statML_bo
- Non-Perturbative Trivializing Flows for Lattice Gauge Theories
Mathis Gerdes, Pim de Haan, Roberto Bondesan, Miranda C. N. Cheng
arxiv.org/abs/2410.13161 mastoxiv.page/@arXiv_heplat_bo
- Dynamic PET Image Prediction Using a Network Combining Reversible and Irreversible Modules
Sun, Zhang, Xia, Sun, Chen, Yang, Liu, Zhu, Liu
arxiv.org/abs/2410.22674 mastoxiv.page/@arXiv_eessIV_bo
- Targeted Learning for Variable Importance
Xiaohan Wang, Yunzhe Zhou, Giles Hooker
arxiv.org/abs/2411.02221 mastoxiv.page/@arXiv_statML_bo
- Refined Analysis of Federated Averaging and Federated Richardson-Romberg
Paul Mangold, Alain Durmus, Aymeric Dieuleveut, Sergey Samsonov, Eric Moulines
arxiv.org/abs/2412.01389 mastoxiv.page/@arXiv_statML_bo
- Embedding-Driven Data Distillation for 360-Degree IQA With Residual-Aware Refinement
Abderrezzaq Sendjasni, Seif-Eddine Benkabou, Mohamed-Chaker Larabi
arxiv.org/abs/2412.12667 mastoxiv.page/@arXiv_csCV_bot/
- 3D Cell Oversegmentation Correction via Geo-Wasserstein Divergence
Peter Chen, Bryan Chang, Olivia A Creasey, Julie Beth Sneddon, Zev J Gartner, Yining Liu
arxiv.org/abs/2502.01890 mastoxiv.page/@arXiv_csCV_bot/
- DHP: Discrete Hierarchical Planning for Hierarchical Reinforcement Learning Agents
Shashank Sharma, Janina Hoffmann, Vinay Namboodiri
arxiv.org/abs/2502.01956 mastoxiv.page/@arXiv_csRO_bot/
- Foundation for unbiased cross-validation of spatio-temporal models for species distribution modeling
Diana Koldasbayeva, Alexey Zaytsev
arxiv.org/abs/2502.03480
- GraphCompNet: A Position-Aware Model for Predicting and Compensating Shape Deviations in 3D Printing
Juheon Lee, Rachel (Lei) Chen, Juan Carlos Catana, Hui Wang, Jun Zeng
arxiv.org/abs/2502.09652 mastoxiv.page/@arXiv_csCV_bot/
- LookAhead Tuning: Safer Language Models via Partial Answer Previews
Liu, Wang, Luo, Yuan, Sun, Liang, Zhang, Zhou, Hooi, Deng
arxiv.org/abs/2503.19041 mastoxiv.page/@arXiv_csCL_bot/
- Constraint-based causal discovery with tiered background knowledge and latent variables in single...
Christine W. Bang, Vanessa Didelez
arxiv.org/abs/2503.21526 mastoxiv.page/@arXiv_statML_bo
toXiv_bot_toot

@kubikpixel@chaos.social
2025-12-23 14:05:09

»Cybersecurity: what should young people know?
Young internet users often fall victim to fraud, bullying, and criminal attacks. Researchers have now compiled a catalog of skills that children and teenagers should learn for digital safety.«
All true and well intended, but when will commercial online service providers actually be held accountable here?
🔊

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:35

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/5]:
- The Diffusion Duality
Sahoo, Deschenaux, Gokaslan, Wang, Chiu, Kuleshov
arxiv.org/abs/2506.10892 mastoxiv.page/@arXiv_csLG_bot/
- Multimodal Representation Learning and Fusion
Jin, Ge, Xie, Luo, Song, Bi, Liang, Guan, Yeong, Song, Hao
arxiv.org/abs/2506.20494 mastoxiv.page/@arXiv_csLG_bot/
- The kernel of graph indices for vector search
Mariano Tepper, Ted Willke
arxiv.org/abs/2506.20584 mastoxiv.page/@arXiv_csLG_bot/
- OptScale: Probabilistic Optimality for Inference-time Scaling
Youkang Wang, Jian Wang, Rubing Chen, Xiao-Yong Wei
arxiv.org/abs/2506.22376 mastoxiv.page/@arXiv_csLG_bot/
- Boosting Revisited: Benchmarking and Advancing LP-Based Ensemble Methods
Fabian Akkerman, Julien Ferry, Christian Artigues, Emmanuel Hebrard, Thibaut Vidal
arxiv.org/abs/2507.18242 mastoxiv.page/@arXiv_csLG_bot/
- MolMark: Safeguarding Molecular Structures through Learnable Atom-Level Watermarking
Runwen Hu, Peilin Chen, Keyan Ding, Shiqi Wang
arxiv.org/abs/2508.17702 mastoxiv.page/@arXiv_csLG_bot/
- Dual-Distilled Heterogeneous Federated Learning with Adaptive Margins for Trainable Global Protot...
Fatema Siddika, Md Anwar Hossen, Wensheng Zhang, Anuj Sharma, Juan Pablo Muñoz, Ali Jannesari
arxiv.org/abs/2508.19009 mastoxiv.page/@arXiv_csLG_bot/
- STDiff: A State Transition Diffusion Framework for Time Series Imputation in Industrial Systems
Gary Simethy, Daniel Ortiz-Arroyo, Petar Durdevic
arxiv.org/abs/2508.19011 mastoxiv.page/@arXiv_csLG_bot/
- EEGDM: Learning EEG Representation with Latent Diffusion Model
Shaocong Wang, Tong Liu, Yihan Li, Ming Li, Kairui Wen, Pei Yang, Wenqi Ji, Minjing Yu, Yong-Jin Liu
arxiv.org/abs/2508.20705 mastoxiv.page/@arXiv_csLG_bot/
- Data-Free Continual Learning of Server Models in Model-Heterogeneous Cloud-Device Collaboration
Xiao Zhang, Zengzhe Chen, Yuan Yuan, Yifei Zou, Fuzhen Zhuang, Wenyu Jiao, Yuke Wang, Dongxiao Yu
arxiv.org/abs/2509.25977 mastoxiv.page/@arXiv_csLG_bot/
- Fine-Tuning Masked Diffusion for Provable Self-Correction
Jaeyeon Kim, Seunggeun Kim, Taekyun Lee, David Z. Pan, Hyeji Kim, Sham Kakade, Sitan Chen
arxiv.org/abs/2510.01384 mastoxiv.page/@arXiv_csLG_bot/
- A Generic Machine Learning Framework for Radio Frequency Fingerprinting
Alex Hiles, Bashar I. Ahmad
arxiv.org/abs/2510.09775 mastoxiv.page/@arXiv_csLG_bot/
- A Second-Order SpikingSSM for Wearables
Kartikay Agrawal, Abhijeet Vikram, Vedant Sharma, Vaishnavi Nagabhushana, Ayon Borthakur
arxiv.org/abs/2510.14386 mastoxiv.page/@arXiv_csLG_bot/
- Utility-Diversity Aware Online Batch Selection for LLM Supervised Fine-tuning
Heming Zou, Yixiu Mao, Yun Qu, Qi Wang, Xiangyang Ji
arxiv.org/abs/2510.16882 mastoxiv.page/@arXiv_csLG_bot/
- Seeing Structural Failure Before it Happens: An Image-Based Physics-Informed Neural Network (PINN...
Omer Jauhar Khan, Sudais Khan, Hafeez Anwar, Shahzeb Khan, Shams Ul Arifeen
arxiv.org/abs/2510.23117 mastoxiv.page/@arXiv_csLG_bot/
- Training Deep Physics-Informed Kolmogorov-Arnold Networks
Spyros Rigas, Fotios Anagnostopoulos, Michalis Papachristou, Georgios Alexandridis
arxiv.org/abs/2510.23501 mastoxiv.page/@arXiv_csLG_bot/
- Semi-Supervised Preference Optimization with Limited Feedback
Seonggyun Lee, Sungjun Lim, Seojin Park, Soeun Cheon, Kyungwoo Song
arxiv.org/abs/2511.00040 mastoxiv.page/@arXiv_csLG_bot/
- Towards Causal Market Simulators
Dennis Thumm, Luis Ontaneda Mijares
arxiv.org/abs/2511.04469 mastoxiv.page/@arXiv_csLG_bot/
- Incremental Generation is Necessary and Sufficient for Universality in Flow-Based Modelling
Hossein Rouhvarzi, Anastasis Kratsios
arxiv.org/abs/2511.09902 mastoxiv.page/@arXiv_csLG_bot/
- Optimizing Mixture of Block Attention
Guangxuan Xiao, Junxian Guo, Kasra Mazaheri, Song Han
arxiv.org/abs/2511.11571 mastoxiv.page/@arXiv_csLG_bot/
- Assessing Automated Fact-Checking for Medical LLM Responses with Knowledge Graphs
Shasha Zhou, Mingyu Huang, Jack Cole, Charles Britton, Ming Yin, Jan Wolber, Ke Li
arxiv.org/abs/2511.12817 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:33:50

Calibratable Disambiguation Loss for Multi-Instance Partial-Label Learning
Wei Tang, Yin-Fang Yang, Weijia Zhang, Min-Ling Zhang
arxiv.org/abs/2512.17788 arxiv.org/pdf/2512.17788 arxiv.org/html/2512.17788
arXiv:2512.17788v1 Announce Type: new
Abstract: Multi-instance partial-label learning (MIPL) is a weakly supervised framework that extends the principles of multi-instance learning (MIL) and partial-label learning (PLL) to address the challenges of inexact supervision in both instance and label spaces. However, existing MIPL approaches often suffer from poor calibration, undermining classifier reliability. In this work, we propose a plug-and-play calibratable disambiguation loss (CDL) that simultaneously improves classification accuracy and calibration performance. The loss has two instantiations: the first one calibrates predictions based on probabilities from the candidate label set, while the second one integrates probabilities from both candidate and non-candidate label sets. The proposed CDL can be seamlessly incorporated into existing MIPL and PLL frameworks. We provide a theoretical analysis that establishes the lower bound and regularization properties of CDL, demonstrating its superiority over conventional disambiguation losses. Experimental results on benchmark and real-world datasets confirm that our CDL significantly enhances both classification and calibration performance.
toXiv_bot_toot
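The CDL abstract above describes a disambiguation loss with two instantiations: one calibrates predictions using only the probability mass on the candidate label set, the other also uses the non-candidate set. The paper's exact formulation isn't given here, so this is a minimal NumPy sketch of the general candidate-set disambiguation idea under assumed forms (the function names and the specific per-label non-candidate penalty are illustrative, not the paper's):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cdl_candidate_only(logits, candidate_mask):
    """Instantiation 1 (assumed form): negative log of the total
    probability mass placed on the candidate label set, pushing the
    model to concentrate probability on the candidates."""
    p = softmax(logits)
    p_cand = (p * candidate_mask).sum(axis=-1)
    return -np.log(p_cand + 1e-12).mean()

def cdl_with_noncandidates(logits, candidate_mask):
    """Instantiation 2 (assumed form): additionally penalizes each
    non-candidate label individually, using probabilities from both
    the candidate and non-candidate sets."""
    p = softmax(logits)
    p_cand = (p * candidate_mask).sum(axis=-1)
    # Per-label penalty -log(1 - p_j) for every non-candidate j.
    non_pen = -(1.0 - candidate_mask) * np.log(1.0 - p + 1e-12)
    return (-np.log(p_cand + 1e-12) + non_pen.sum(axis=-1)).mean()
```

Both functions take a `(batch, num_labels)` logit array and a 0/1 mask marking candidate labels; either could be dropped into a training loop as a plug-and-play loss term, which matches the abstract's claim that CDL composes with existing MIPL and PLL frameworks.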