Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@emd@cosocial.ca
2025-12-25 07:08:53

The best version
yac.grumpy-learning.com/@grmpy

@Techmeme@techhub.social
2026-01-21 12:21:05

Ukrainian-founded language learning marketplace Preply raised a $150M Series D led by WestCap at a $1.2B valuation; the startup has a 150-person office in Kyiv (Anna Heim/TechCrunch)
techcrunch.com/2026/01/21/lang

@NFL@darktundra.xyz
2025-11-23 16:46:33

Buccaneers vs. Rams NFL player props, SGP: Self-learning AI backs Baker Mayfield Over 242.5 yards on 'SNF'

cbssports.com/nfl/news/buccane

@ErikUden@mastodon.de
2025-12-24 08:40:51

Adin Ross is 25 years old, has 7 million subscribers on Twitch, 4.5 million subscribers on YouTube, and livestreams daily to hundreds of thousands of people.
This is him finding out what fascism is.
Chat, are we cooked?

A video of Adin Ross learning what fascism is

Over the last decade, America’s roads have become more dangerous, with serious crashes increasing by nearly 20 percent since 2013. Approximately 94 percent of crashes are the result of driver behavior like speeding, impairment or distraction — behavior that can be detected and corrected by a new generation of machine learning-enabled dash-cams. Seamless integration between machine learning, IoT management and the cloud allows these cameras to improve safety in r…

@aredridel@kolektiva.social
2025-12-23 15:12:20

One thing I'm learning from so many people declaring that LLMs are like a pair programmer you have to guide a lot is how few people I ever want to pair program with.

@inthehands@hachyderm.io
2025-12-22 17:36:05

“Make the seats adjustable” is a thought I bring to teaching, for example: Does the context I’m creating for learning accommodate people with all different kinds of minds? What variations am I not accommodating? Can I make some things more individually adjustable to better embrace those variations? Can multiple instructors / learning environments / schools offer the flexibility that I can’t offer myself?
Total adjustability is impossible; infinite flexibility is impossible. But as an ongoing effort, as a •direction•, this work is both feasible and useful.
9/

@frankel@mastodon.top
2025-11-23 17:31:41

I have been learning #Rust for a couple of years, and using it for pet projects and demos alike. Working for a JVM-heavy company, I thought it would be my fate forever. Last week, I had a nice surprise: I convinced my management that using Rust for a particular project was the right choice. It’s not a huge project, but I want to describe my experience using Rust in a "real" project.

@hikingdude@mastodon.social
2025-12-23 18:37:05

I'm just reviewing some old(er) video I recorded quite a while ago ... and I admit that I'm eager to record in higher quality!
But hey, it's all a learning curve. I'm not feeling bad about the bad quality. It's actually pretty nice to see the improvement over time!
video.franzgraf.de/w/r3XH…

@midtsveen@social.linux.pizza
2025-11-23 21:47:03

This week has been incredibly challenging, but sometimes with people who refuse to accept disagreement, there's only so much you can do.
If you're interested in learning more about anarcho-syndicalism, I’ve curated some excellent articles on my Linktree, mostly in Norwegian, but also some in English.
I’m planning to update it soon with more reliable sources beyond just blog posts.
Anyway, I'm not mad, just really exhausted.

@mcdanlj@social.makerforums.info
2025-12-24 19:26:38

I post a lot here and on Maker Forums Discourse, and have kind of not gotten around to blogging for a while. But today I had time to reflect on my first full year of #HamRadio — this new hobby has surprised me in a lot of (good) ways.

@cheryanne@aus.social
2026-01-23 20:59:56

Raw With J
Each week, Jacintha Field sits down with thought leaders, parents, experts, and real people to dive into life's messier moments: parenting, separation, burnout, grief, rebuilding, and learning to feel again...
Great Australian Pods Podcast Directory: greataustralianpods.com/raw-wi

Raw With J
Screenshot of the podcast listing on the Great Australian Pods website

@gadgetboy@gadgetboy.social
2026-01-23 17:22:39

"Back in my day, we walked to school in the snow. Uphill. Both ways. And we spent hours at the terminal learning commands and syntax." 😂
nlsh.dev/

@CerstinMahlow@mastodon.acm.org
2025-10-25 09:15:04

You know that you’re teaching in #Switzerland when you catch the majority of students running the live broadcast of today’s #ski competition in a separate window while doing learning tasks
#AcademicChatter

@Dragofix@veganism.social
2025-11-24 00:36:40

Mapping the unseen: How Europe is fighting back against invisible soil pollution #Europe

@NFL@darktundra.xyz
2025-12-23 13:56:30

Self-learning AI releases NFL picks, score predictions every Week 17 game

cbssports.com/nfl/news/nfl-wee

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:45

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[3/5]:
- Look-Ahead Reasoning on Learning Platforms
Haiqing Zhu, Tijana Zrnic, Celestine Mendler-Dünner
arxiv.org/abs/2511.14745 mastoxiv.page/@arXiv_csLG_bot/
- Deep Gaussian Process Proximal Policy Optimization
Matthijs van der Lende, Juan Cardenas-Cartagena
arxiv.org/abs/2511.18214 mastoxiv.page/@arXiv_csLG_bot/
- Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory
Akira Tamamori
arxiv.org/abs/2511.23083 mastoxiv.page/@arXiv_csLG_bot/
- xGR: Efficient Generative Recommendation Serving at Scale
Sun, Liu, Zhang, Wu, Yang, Liang, Li, Ma, Liang, Ren, Zhang, Liu, Zhang, Qian, Yang
arxiv.org/abs/2512.11529 mastoxiv.page/@arXiv_csLG_bot/
- Credit Risk Estimation with Non-Financial Features: Evidence from a Synthetic Istanbul Dataset
Atalay Denknalbant, Emre Sezdi, Zeki Furkan Kutlu, Polat Goktas
arxiv.org/abs/2512.12783 mastoxiv.page/@arXiv_csLG_bot/
- The Semantic Illusion: Certified Limits of Embedding-Based Hallucination Detection in RAG Systems
Debu Sinha
arxiv.org/abs/2512.15068 mastoxiv.page/@arXiv_csLG_bot/
- Towards Reproducibility in Predictive Process Mining: SPICE -- A Deep Learning Library
Stritzel, Hühnerbein, Rauch, Zarate, Fleischmann, Buck, Lischka, Frey
arxiv.org/abs/2512.16715 mastoxiv.page/@arXiv_csLG_bot/
- Differentially private Bayesian tests
Abhisek Chakraborty, Saptati Datta
arxiv.org/abs/2401.15502 mastoxiv.page/@arXiv_statML_bo
- SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning
Paul Mangold, Sergey Samsonov, Safwan Labbi, Ilya Levin, Reda Alami, Alexey Naumov, Eric Moulines
arxiv.org/abs/2402.04114
- Adjusting Model Size in Continual Gaussian Processes: How Big is Big Enough?
Guiomar Pescador-Barrios, Sarah Filippi, Mark van der Wilk
arxiv.org/abs/2408.07588 mastoxiv.page/@arXiv_statML_bo
- Non-Perturbative Trivializing Flows for Lattice Gauge Theories
Mathis Gerdes, Pim de Haan, Roberto Bondesan, Miranda C. N. Cheng
arxiv.org/abs/2410.13161 mastoxiv.page/@arXiv_heplat_bo
- Dynamic PET Image Prediction Using a Network Combining Reversible and Irreversible Modules
Sun, Zhang, Xia, Sun, Chen, Yang, Liu, Zhu, Liu
arxiv.org/abs/2410.22674 mastoxiv.page/@arXiv_eessIV_bo
- Targeted Learning for Variable Importance
Xiaohan Wang, Yunzhe Zhou, Giles Hooker
arxiv.org/abs/2411.02221 mastoxiv.page/@arXiv_statML_bo
- Refined Analysis of Federated Averaging and Federated Richardson-Romberg
Paul Mangold, Alain Durmus, Aymeric Dieuleveut, Sergey Samsonov, Eric Moulines
arxiv.org/abs/2412.01389 mastoxiv.page/@arXiv_statML_bo
- Embedding-Driven Data Distillation for 360-Degree IQA With Residual-Aware Refinement
Abderrezzaq Sendjasni, Seif-Eddine Benkabou, Mohamed-Chaker Larabi
arxiv.org/abs/2412.12667 mastoxiv.page/@arXiv_csCV_bot/
- 3D Cell Oversegmentation Correction via Geo-Wasserstein Divergence
Peter Chen, Bryan Chang, Olivia A Creasey, Julie Beth Sneddon, Zev J Gartner, Yining Liu
arxiv.org/abs/2502.01890 mastoxiv.page/@arXiv_csCV_bot/
- DHP: Discrete Hierarchical Planning for Hierarchical Reinforcement Learning Agents
Shashank Sharma, Janina Hoffmann, Vinay Namboodiri
arxiv.org/abs/2502.01956 mastoxiv.page/@arXiv_csRO_bot/
- Foundation for unbiased cross-validation of spatio-temporal models for species distribution modeling
Diana Koldasbayeva, Alexey Zaytsev
arxiv.org/abs/2502.03480
- GraphCompNet: A Position-Aware Model for Predicting and Compensating Shape Deviations in 3D Printing
Juheon Lee, Lei (Rachel) Chen, Juan Carlos Catana, Hui Wang, Jun Zeng
arxiv.org/abs/2502.09652 mastoxiv.page/@arXiv_csCV_bot/
- LookAhead Tuning: Safer Language Models via Partial Answer Previews
Liu, Wang, Luo, Yuan, Sun, Liang, Zhang, Zhou, Hooi, Deng
arxiv.org/abs/2503.19041 mastoxiv.page/@arXiv_csCL_bot/
- Constraint-based causal discovery with tiered background knowledge and latent variables in single...
Christine W. Bang, Vanessa Didelez
arxiv.org/abs/2503.21526 mastoxiv.page/@arXiv_statML_bo
toXiv_bot_toot

@thomasfuchs@hachyderm.io
2026-01-22 18:30:15

I’m wondering what all this software is that people now make that it wasn’t worth learning programming for

@Laur12@social.linux.pizza
2026-01-23 16:33:05

It's hard and i broke 4 pads, but also kinda fun)
Also just started learning how to desolder with the copper wick! (The rows on the top)

The gpu board with the core taken off. On top-left you can see 4 broken pads.
The core of the gpu, fixed on a ceramic plate using aluminum foil. The top rows are cleaned off.

@curiouscat@fosstodon.org
2026-01-23 20:39:04

Ways to stand with those in Minnesota that are under attack by their federal government
standwithminnesota.com/

@markhburton@mstdn.social
2025-11-15 08:07:50

Poor cancer survival rates for those with learning disabilities
northwestbylines.co.uk/news/he

Short-range kamikaze drones are one of the fastest moving facets of the defense sector today —
The Marine Corps "Organic Precision Fires-Light" (OPF-L) program is designed to provide dismounted Marine infantry rifle squads with a man-packable, easy-to-operate precision strike drone to engage adversaries beyond line of sight.
A recent announcement of a $23.9-million contract to provide the U.S. Marine Corps with more than 600 "Bolt-M" drones is the next phas…

@raiders@darktundra.xyz
2025-12-19 01:39:25

Raiders’ Darien Porter intent on learning from mistakes reviewjournal.com/sports/raide

@johnhobbs@mstdn.ca
2025-12-24 20:20:15

The journey of learning is transformative, fueling curiosity and sparking growth. Knowledge lets us transcend limits and transform challenges into thrilling adventures. 🚀
Where has your curiosity led you lately? Share your stories!
#LifelongLearning #CuriosityJourney

@nemobis@mamot.fr
2025-12-22 17:35:30

The other day I had a funny conversation. A Finnish person was making excuses for my laziness at learning Finnish. Then she asked «is Italian hard to learn?». I never know how to answer the question so I said «I don't know: at least pronunciation is not too bad for Finns, they may sound funny but they're understandable; they mostly have trouble because they have no concept of separate p and b, and so on». She said «you mean strong p and soft p»? Not how I would have phrased it, but y…

@YaleDivinitySchool@mstdn.social
2026-01-22 15:16:13

Dean of St. Philip's Cathedral in Atlanta, The Very Rev. Sam Candler ’82 M.Div. would rather bless than curse. "People learning how to pray together, serve together, get angry together, and love together—that’s what’s going to save the world,” he says.
Read Ray Waddle's new profile of Sam Candler and his uplifting work in Atlanta.

A man in a clerical collar.

@rasterweb@mastodon.social
2025-12-21 16:15:20

I still need to spend more time learning TrueNAS and how to get container applications running properly. It's much more complex than OpenMediaVault.
Some applications do just work, but many seem to need a lot more care and configuration to get working as desired.
#trueNaS #selfHosting

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:35

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/5]:
- The Diffusion Duality
Sahoo, Deschenaux, Gokaslan, Wang, Chiu, Kuleshov
arxiv.org/abs/2506.10892 mastoxiv.page/@arXiv_csLG_bot/
- Multimodal Representation Learning and Fusion
Jin, Ge, Xie, Luo, Song, Bi, Liang, Guan, Yeong, Song, Hao
arxiv.org/abs/2506.20494 mastoxiv.page/@arXiv_csLG_bot/
- The kernel of graph indices for vector search
Mariano Tepper, Ted Willke
arxiv.org/abs/2506.20584 mastoxiv.page/@arXiv_csLG_bot/
- OptScale: Probabilistic Optimality for Inference-time Scaling
Youkang Wang, Jian Wang, Rubing Chen, Xiao-Yong Wei
arxiv.org/abs/2506.22376 mastoxiv.page/@arXiv_csLG_bot/
- Boosting Revisited: Benchmarking and Advancing LP-Based Ensemble Methods
Fabian Akkerman, Julien Ferry, Christian Artigues, Emmanuel Hebrard, Thibaut Vidal
arxiv.org/abs/2507.18242 mastoxiv.page/@arXiv_csLG_bot/
- MolMark: Safeguarding Molecular Structures through Learnable Atom-Level Watermarking
Runwen Hu, Peilin Chen, Keyan Ding, Shiqi Wang
arxiv.org/abs/2508.17702 mastoxiv.page/@arXiv_csLG_bot/
- Dual-Distilled Heterogeneous Federated Learning with Adaptive Margins for Trainable Global Protot...
Fatema Siddika, Md Anwar Hossen, Wensheng Zhang, Anuj Sharma, Juan Pablo Muñoz, Ali Jannesari
arxiv.org/abs/2508.19009 mastoxiv.page/@arXiv_csLG_bot/
- STDiff: A State Transition Diffusion Framework for Time Series Imputation in Industrial Systems
Gary Simethy, Daniel Ortiz-Arroyo, Petar Durdevic
arxiv.org/abs/2508.19011 mastoxiv.page/@arXiv_csLG_bot/
- EEGDM: Learning EEG Representation with Latent Diffusion Model
Shaocong Wang, Tong Liu, Yihan Li, Ming Li, Kairui Wen, Pei Yang, Wenqi Ji, Minjing Yu, Yong-Jin Liu
arxiv.org/abs/2508.20705 mastoxiv.page/@arXiv_csLG_bot/
- Data-Free Continual Learning of Server Models in Model-Heterogeneous Cloud-Device Collaboration
Xiao Zhang, Zengzhe Chen, Yuan Yuan, Yifei Zou, Fuzhen Zhuang, Wenyu Jiao, Yuke Wang, Dongxiao Yu
arxiv.org/abs/2509.25977 mastoxiv.page/@arXiv_csLG_bot/
- Fine-Tuning Masked Diffusion for Provable Self-Correction
Jaeyeon Kim, Seunggeun Kim, Taekyun Lee, David Z. Pan, Hyeji Kim, Sham Kakade, Sitan Chen
arxiv.org/abs/2510.01384 mastoxiv.page/@arXiv_csLG_bot/
- A Generic Machine Learning Framework for Radio Frequency Fingerprinting
Alex Hiles, Bashar I. Ahmad
arxiv.org/abs/2510.09775 mastoxiv.page/@arXiv_csLG_bot/
- A Second-Order SpikingSSM for Wearables
Kartikay Agrawal, Abhijeet Vikram, Vedant Sharma, Vaishnavi Nagabhushana, Ayon Borthakur
arxiv.org/abs/2510.14386 mastoxiv.page/@arXiv_csLG_bot/
- Utility-Diversity Aware Online Batch Selection for LLM Supervised Fine-tuning
Heming Zou, Yixiu Mao, Yun Qu, Qi Wang, Xiangyang Ji
arxiv.org/abs/2510.16882 mastoxiv.page/@arXiv_csLG_bot/
- Seeing Structural Failure Before it Happens: An Image-Based Physics-Informed Neural Network (PINN...
Omer Jauhar Khan, Sudais Khan, Hafeez Anwar, Shahzeb Khan, Shams Ul Arifeen
arxiv.org/abs/2510.23117 mastoxiv.page/@arXiv_csLG_bot/
- Training Deep Physics-Informed Kolmogorov-Arnold Networks
Spyros Rigas, Fotios Anagnostopoulos, Michalis Papachristou, Georgios Alexandridis
arxiv.org/abs/2510.23501 mastoxiv.page/@arXiv_csLG_bot/
- Semi-Supervised Preference Optimization with Limited Feedback
Seonggyun Lee, Sungjun Lim, Seojin Park, Soeun Cheon, Kyungwoo Song
arxiv.org/abs/2511.00040 mastoxiv.page/@arXiv_csLG_bot/
- Towards Causal Market Simulators
Dennis Thumm, Luis Ontaneda Mijares
arxiv.org/abs/2511.04469 mastoxiv.page/@arXiv_csLG_bot/
- Incremental Generation is Necessary and Sufficient for Universality in Flow-Based Modelling
Hossein Rouhvarzi, Anastasis Kratsios
arxiv.org/abs/2511.09902 mastoxiv.page/@arXiv_csLG_bot/
- Optimizing Mixture of Block Attention
Guangxuan Xiao, Junxian Guo, Kasra Mazaheri, Song Han
arxiv.org/abs/2511.11571 mastoxiv.page/@arXiv_csLG_bot/
- Assessing Automated Fact-Checking for Medical LLM Responses with Knowledge Graphs
Shasha Zhou, Mingyu Huang, Jack Cole, Charles Britton, Ming Yin, Jan Wolber, Ke Li
arxiv.org/abs/2511.12817 mastoxiv.page/@arXiv_csLG_bot/

@pre@boing.world
2025-11-21 15:01:03

Speaking of kids, here is a kid who is building a nostr gaming system.
Not by vibe coding, by learning how to use unity.
Gamestr.io is the marketplace he's building. Submitting a protocol improvement proposal for gaming types.
His gaming market is first going to feature a Tetris clone that broadcasts scores as that new type.
Talented kid named Sam.
#nostr #gamestr #nostrshire

@lpryszcz@genomic.social
2026-01-19 14:26:55

Seriously, knowing the dire predicament we are in, we should be busy learning and re-learning how to produce food, clothing and housing on a planet transitioning into a radically different climate. We desperately need a plan for how to scale back on technology use, in tandem with the natural decline in resource and energy availability.

@datascience@genomic.social
2026-01-02 11:00:00

R learning for applied statistics by Chenxin Li: #rstats

@detondev@social.linux.pizza
2025-12-21 20:13:18

The Seeing Center (2005.5)
⭐️⭐️⭐️⭐️ (183,746 reviews)
"This Best Picture-winning Tour de Force follows Moses Jackson, a good-hearted African-American PhD from Brooklyn, as he moves to a rural southern school to introduce some rowdy, expressive, redneck kids to the joys of learning, overcoming racial and economic divides in the process."

@ripienaar@devco.social
2025-11-22 05:44:58

Needed to build something quite weird that involves NFC tags and mainly Notion DB as storage and UI
Didn’t feel like learning Notion cos I am not a fan.
Worked with ChatGPT to make a spec and prompt. This was useful; while doing that, I totally changed my mind to something better.
Gave it to codex and it one-shot built it perfectly in 20 minutes without interaction, worked first time once I fixed a mistake I made in Notion.
Lots of issues with AI but it’s certainly moving…

@ErikJonker@mastodon.social
2025-12-06 12:08:54

I am not as pessimistic as the writer of this article, still an absolute must-read,
"AI is Destroying the University and Learning Itself"
currentaffairs.org/news/ai-is-

@cellfourteen@social.petertoushkov.eu
2026-01-21 15:05:17

I like AI. I like robots. I love machine learning automation. I don't like it when their use cases are replacing people, spying on or profiling people, prosecuting people, submitting people into subscription, creating "art", "videos", "pictures" and scumbag "memes", warfare, propaganda, disinformation, deepfakes of any kind, ads, trolling, pumping up stocks, just plain wrong search results that force you to waste twice as much time to confirm they …

@ruth_mottram@fediscience.org
2025-11-21 05:56:34

Britain’s tax system combines the worst of the US and Scandinavia - on.ft.com/48abwRo via @FT
Sharing also for the sake of warning my Scandinavian compatriots some of whom seem to be learning the wrong things from the UK.

@kctipton@mas.to
2025-11-11 18:49:19

How Do We Honor Veterans? By Learning from Their Experiences. - TheHumanist.com thehumanist.com/commentary/how

@metacurity@infosec.exchange
2025-11-15 13:39:31

Every week, Metacurity delivers our free and paid subscribers a run-down of the top infosec-related long reads we didn't have time for during the daily crush of cyber news.
This week's selection covers
--Massive surveillance in Mexico City leaves crime high,
--Workplace surveillance can harm workers,
--Machine learning privacy attacks are less effective in reality than they are in theory,
--LLMs produce more secure code when trained on flaw-free code,

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:33:50

Calibratable Disambiguation Loss for Multi-Instance Partial-Label Learning
Wei Tang, Yin-Fang Yang, Weijia Zhang, Min-Ling Zhang
arxiv.org/abs/2512.17788 arxiv.org/pdf/2512.17788 arxiv.org/html/2512.17788
arXiv:2512.17788v1 Announce Type: new
Abstract: Multi-instance partial-label learning (MIPL) is a weakly supervised framework that extends the principles of multi-instance learning (MIL) and partial-label learning (PLL) to address the challenges of inexact supervision in both instance and label spaces. However, existing MIPL approaches often suffer from poor calibration, undermining classifier reliability. In this work, we propose a plug-and-play calibratable disambiguation loss (CDL) that simultaneously improves classification accuracy and calibration performance. The loss has two instantiations: the first one calibrates predictions based on probabilities from the candidate label set, while the second one integrates probabilities from both candidate and non-candidate label sets. The proposed CDL can be seamlessly incorporated into existing MIPL and PLL frameworks. We provide a theoretical analysis that establishes the lower bound and regularization properties of CDL, demonstrating its superiority over conventional disambiguation losses. Experimental results on benchmark and real-world datasets confirm that our CDL significantly enhances both classification and calibration performance.
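As a rough illustration of the candidate-set idea in this abstract, here is the classic partial-label disambiguation objective in NumPy: minimize the negative log of the probability mass the classifier assigns to the candidate label set. This is a simplification, not the paper's CDL (which adds a calibration term and a non-candidate-set variant); the function name and shapes are my own.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def disambiguation_loss(logits, candidate_mask):
    # logits: (batch, num_classes); candidate_mask: 0/1 array of same shape.
    # Average -log of the probability mass on each sample's candidate set.
    p = softmax(logits)
    cand_mass = (p * candidate_mask).sum(axis=-1)
    return float(-np.log(cand_mass).mean())
```

With uniform logits over four classes and a two-label candidate set, the candidate mass is 0.5, so the loss is log 2.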

@brichapman@mastodon.social
2025-11-12 16:58:01

DTO-BioFlow is using AI to analyze decades of marine biodiversity records, revealing insights that support ocean monitoring, conservation, and restoration. 💙
dto-bioflow.eu/news/using-deep

@kubikpixel@chaos.social
2025-12-20 07:05:11

Let's Learn Rust Using Rustlings
This video is a walkthrough and review of learning Rust using Rustlings, aimed at developers who struggle with Rust’s ownership model and borrow checker.
📺 youtube.com/watch?v=jJopQTH7vmg
🦀

@mia@hcommons.social
2025-12-05 14:51:22

Ethical Considerations Around Machine Learning-Engaged Online Participatory Research - poster from Zooniverse community at #FF2025 zenodo.org/records/17779992

@NFL@darktundra.xyz
2025-11-24 13:46:50

49ers vs. Panthers SGP: 'Monday Night Football' same-game parlay picks, bets, props from SportsLine AI

cbssports.com/nfl/news/49ers-p

@cketti@int21.dev
2025-11-20 17:45:51

In the remaining two thirds of the book a second interpreter – a bytecode virtual machine – is built using C. I'm very much looking forward to that part of the book. However, I can't bring myself to write C, not even for something inconsequential like this. So I guess I'll finally have to get serious about properly learning Rust.

@lornajane@indieweb.social
2025-11-19 19:22:31

Learning about PHP 8.5 with @… at @… this evening. Cold night but a good turnout and there’s a second talk on Clickhouse to come!

@StephenRees@mas.to
2025-12-09 01:32:16

Here is a link to Current Affairs to an article about the impact of AI on universities in the US.
It is a very long post, but well worth your time and attention
currentaffairs.org/news/ai-is-

@edintone@mastodon.green
2025-11-21 07:57:14

Timbuktu’s Medieval Manuscripts Return Home After a Decade Away Safe from Insurgents goodnewsnetwork.org/timbuktus-

@mariyadelano@hachyderm.io
2025-11-19 14:39:15

Thank you all for 3,000 followers on here! Here’s a photo of Danny to celebrate the occasion ☺️
3 years into being part of Mastodon, I continue to be impressed with how wonderful the people here are and how much this social network actually FEELS social. People replying to one another, having conversations, learning things, sharing moments of joy, making friends.
Mastodon brought my business clients, helped me gain confidence in my own voice, freed me from dependence on big tech and algorithms, rekindled my interests, introduced me to incredible people and projects, and served as a source of hope in humanity in the times when cynicism and nihilism felt all but inevitable.
I love our little corner of the internet, and am so glad that it’s still here despite everyone who professed it was doomed to fade into irrelevance.
Thank you to everyone reading these words for being here on the Fedi. The world is a little better thanks to your choice to support an independent web.

@arXiv_qbioNC_bot@mastoxiv.page
2025-12-11 08:16:21

Meta-learning three-factor plasticity rules for structured credit assignment with sparse feedback
Dimitra Maoutsa
arxiv.org/abs/2512.09366 arxiv.org/pdf/2512.09366 arxiv.org/html/2512.09366
arXiv:2512.09366v1 Announce Type: new
Abstract: Biological neural networks learn complex behaviors from sparse, delayed feedback using local synaptic plasticity, yet the mechanisms enabling structured credit assignment remain elusive. In contrast, artificial recurrent networks solving similar tasks typically rely on biologically implausible global learning rules or hand-crafted local updates. The space of local plasticity rules capable of supporting learning from delayed reinforcement remains largely unexplored. Here, we present a meta-learning framework that discovers local learning rules for structured credit assignment in recurrent networks trained with sparse feedback. Our approach interleaves local neo-Hebbian-like updates during task execution with an outer loop that optimizes plasticity parameters via \textbf{tangent-propagation through learning}. The resulting three-factor learning rules enable long-timescale credit assignment using only local information and delayed rewards, offering new insights into biologically grounded mechanisms for learning in recurrent circuits.
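A generic three-factor rule of the kind this abstract searches over can be sketched in a few lines: a local Hebbian eligibility trace, gated by a delayed scalar reward (the third factor). This is an illustrative textbook form, not the meta-learned rule from the paper; the function name and hyperparameters are assumptions.

```python
import numpy as np

def three_factor_step(w, elig, pre, post, reward, decay=0.9, lr=0.01):
    # Accumulate a decaying Hebbian eligibility trace from local pre/post activity,
    # then apply it to the weights only in proportion to the scalar reward signal.
    elig = decay * elig + np.outer(post, pre)
    w = w + lr * reward * elig
    return w, elig
```

With zero reward the weights stay fixed while the trace keeps accumulating, which is how credit can be assigned to activity that happened before the delayed feedback arrived.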

@arXiv_csDC_bot@mastoxiv.page
2026-01-21 13:23:05

A Kubernetes custom scheduler based on reinforcement learning for compute-intensive pods
Hanlin Zhou, Huah Yong Chan, Shun Yao Zhang, Meie Lin, Jingfei Ni
arxiv.org/abs/2601.13579

@sauer_lauwarm@mastodon.social
2025-12-20 21:14:21

instagram.com/p/DSbtXuCCe9Y/?u

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:50

Regularized Random Fourier Features and Finite Element Reconstruction for Operator Learning in Sobolev Space
Xinyue Yu, Hayden Schaeffer
arxiv.org/abs/2512.17884 arxiv.org/pdf/2512.17884 arxiv.org/html/2512.17884
arXiv:2512.17884v1 Announce Type: new
Abstract: Operator learning is a data-driven approximation of mappings between infinite-dimensional function spaces, such as the solution operators of partial differential equations. Kernel-based operator learning can offer accurate, theoretically justified approximations that require less training than standard methods. However, they can become computationally prohibitive for large training sets and can be sensitive to noise. We propose a regularized random Fourier feature (RRFF) approach, coupled with a finite element reconstruction map (RRFF-FEM), for learning operators from noisy data. The method uses random features drawn from multivariate Student's $t$ distributions, together with frequency-weighted Tikhonov regularization that suppresses high-frequency noise. We establish high-probability bounds on the extreme singular values of the associated random feature matrix and show that when the number of features $N$ scales like $m \log m$ with the number of training samples $m$, the system is well-conditioned, which yields estimation and generalization guarantees. Detailed numerical experiments on benchmark PDE problems, including advection, Burgers', Darcy flow, Helmholtz, Navier-Stokes, and structural mechanics, demonstrate that RRFF and RRFF-FEM are robust to noise and achieve improved performance with reduced training time compared to the unregularized random feature model, while maintaining competitive accuracy relative to kernel and neural operator tests.
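The recipe in this abstract — random Fourier frequencies drawn from a Student's t distribution plus frequency-weighted Tikhonov regularization — can be sketched in NumPy as follows. The paper's exact weighting, feature scaling, and finite element reconstruction differ, so treat this as a minimal illustration with invented names and parameters.

```python
import numpy as np

def rrff_fit(X, y, n_features=200, df=5.0, reg=1e-4, seed=0):
    # Draw random frequencies from a multivariate Student's t (heavy tails),
    # build complex Fourier features, and solve a Tikhonov-regularized least
    # squares problem whose penalty grows with frequency magnitude.
    rng = np.random.default_rng(seed)
    W = rng.standard_t(df, size=(n_features, X.shape[1]))
    Phi = np.exp(1j * X @ W.T)
    lam = reg * (1.0 + np.linalg.norm(W, axis=1) ** 2)  # damp high frequencies
    A = Phi.conj().T @ Phi + np.diag(lam)
    c = np.linalg.solve(A, Phi.conj().T @ y)
    return W, c

def rrff_predict(X, W, c):
    # Real part of the learned random-feature expansion.
    return np.real(np.exp(1j * X @ W.T) @ c)
```

The frequency-weighted penalty is what suppresses high-frequency noise: large |w| gets a proportionally larger ridge term, so noisy fine-scale components are shrunk harder than smooth ones.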

@georgiamuseum@glammr.us
2025-12-18 14:49:19

Ellen Patton has been a joy to add to our learning and engagement team and to our staff as a whole. A native Athenian, she grew up visiting the museum, which helped her forge strong connections with both the artwork and her family. “I think that having these great memories in the museum makes me want to push for everyone who visits to have a connective experience,” Patton said.
Read more about her journey on our blog:

Museum programs assistant Ellen Patton stands for a photo in the museum's Jane and Harry Willson Sculpture Garden, with an array of pink and yellow flowers in front of her and a low concrete wall behind her.

@Techmeme@techhub.social
2026-01-21 04:11:15

Barret Zoph says Thinking Machines Lab fired him only after learning he was leaving, and at no time did the company cite his performance or unethical conduct (Wall Street Journal)
wsj.com/tech/ai/the-messy-h…

@NFL@darktundra.xyz
2026-01-24 19:26:37

NFL player props, 2026 AFC, NFC Championship picks, odds, AI predictions: Puka Nacua Over 92.5 receiving yards

cbssports.com/nfl/news/nfl-pla

@kexpmusicbot@mastodonapp.uk
2026-01-11 15:42:21

🇺🇦 #NowPlaying on KEXP's #PacificNotions
Passarani:
🎵 Learning To Let Go
#Passarani
marcopassarani.bandcamp.com/tr

@UP8@mastodon.social
2025-11-04 04:46:11

🚴 Benchmarking On-Device Machine Learning on Apple Silicon with MLX
#apple #hardware

@cheryanne@aus.social
2026-01-11 01:36:23

Cakeberra!
All pictures in the book are from the charity’s 2017 Cake Off competition, in which the theme was ‘Canberra’. They were professionally captured, but unutilised until now.
Learning your ABCs? It's a piece of cake with this charity's Canberra-themed book | Region Canberra

@arXiv_csGT_bot@mastoxiv.page
2025-12-10 08:00:50

Multi-agent learning under uncertainty: Recurrence vs. concentration
Kyriakos Lotidis, Panayotis Mertikopoulos, Nicholas Bambos, Jose Blanchet
arxiv.org/abs/2512.08132 arxiv.org/pdf/2512.08132 arxiv.org/html/2512.08132
arXiv:2512.08132v1 Announce Type: new
Abstract: In this paper, we examine the convergence landscape of multi-agent learning under uncertainty. Specifically, we analyze two stochastic models of regularized learning in continuous games -- one in continuous and one in discrete time with the aim of characterizing the long-run behavior of the induced sequence of play. In stark contrast to deterministic, full-information models of learning (or models with a vanishing learning rate), we show that the resulting dynamics do not converge in general. In lieu of this, we ask instead which actions are played more often in the long run, and by how much. We show that, in strongly monotone games, the dynamics of regularized learning may wander away from equilibrium infinitely often, but they always return to its vicinity in finite time (which we estimate), and their long-run distribution is sharply concentrated around a neighborhood thereof. We quantify the degree of this concentration, and we show that these favorable properties may all break down if the underlying game is not strongly monotone -- underscoring in this way the limits of regularized learning in the presence of persistent randomness and uncertainty.
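In its simplest instantiation, the regularized learning dynamics studied here are the entropic (exponential-weights) update on the simplex; with noisy payoffs the iterates jitter around equilibrium rather than settling, which is the recurrence-versus-concentration behavior the abstract describes. A minimal sketch (the naming is mine):

```python
import numpy as np

def exp_weights_step(x, payoff, lr=0.1):
    # One step of entropically regularized learning: multiplicative update by
    # the (possibly noisy) payoff vector, then renormalize onto the simplex.
    z = x * np.exp(lr * payoff)
    return z / z.sum()
```

A zero payoff vector leaves the mixed strategy fixed; a payoff advantage for one action shifts probability mass toward it, and persistent payoff noise keeps the strategy moving around its long-run concentration region.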

@v_i_o_l_a@openbiblio.social
2026-01-16 15:11:35

"Digital Learning: Exploring Perceived Usefulness and Perceived Ease of Use of Open Educational Resources"
#OpenIrony

@netzschleuder@social.skewed.de
2026-01-16 02:00:06

livemocha: Livemocha friendship network (2010)
A network of friendships among users on Livemocha, a large online language learning community. Nodes represent users and edges represent a mutual declaration of friendship.
This network has 104103 nodes and 2193083 edges.
Tags: Social, Online, Unweighted
ne…

livemocha: Livemocha friendship network (2010). 104103 nodes, 2193083 edges. https://networks.skewed.de/net/livemocha

@ethanwhite@hachyderm.io
2025-12-17 01:44:23

Align. Your. Assessments. With. Your. Learning objectives.

@ErikJonker@mastodon.social
2025-12-11 12:56:58

This is an important announcement. Google uses its ecosystem to gain an advantage: "Announcing Model Context Protocol (MCP) support for Google services"
cloud.google.com/blog/products

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 11:50:43

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[3/3]:
- Fraud detection in credit card transactions using Quantum-Assisted Restricted Boltzmann Machines
João Marcos Cavalcanti de Albuquerque Neto, Gustavo Castro do Amaral, Guilherme Penello Temporão
arxiv.org/abs/2512.17660 mastoxiv.page/@arXiv_quantph_b
- Vidarc: Embodied Video Diffusion Model for Closed-loop Control
Feng, Xiang, Mao, Tan, Zhang, Huang, Zheng, Liu, Su, Zhu
arxiv.org/abs/2512.17661 mastoxiv.page/@arXiv_csRO_bot/
- Imputation Uncertainty in Interpretable Machine Learning Methods
Pegah Golchian, Marvin N. Wright
arxiv.org/abs/2512.17689 mastoxiv.page/@arXiv_statML_bo
- Revisiting the Broken Symmetry Phase of Solid Hydrogen: A Neural Network Variational Monte Carlo ...
Shengdu Chai, Chen Lin, Xinyang Dong, Yuqiang Li, Wanli Ouyang, Lei Wang, X. C. Xie
arxiv.org/abs/2512.17703 mastoxiv.page/@arXiv_condmatst
- Breast Cancer Neoadjuvant Chemotherapy Treatment Response Prediction Using Aligned Longitudinal M...
Rahul Ravi, Ruizhe Li, Tarek Abdelfatah, Stephen Chan, Xin Chen
arxiv.org/abs/2512.17759 mastoxiv.page/@arXiv_eessIV_bo
- MedNeXt-v2: Scaling 3D ConvNeXts for Large-Scale Supervised Representation Learning in Medical Im...
Roy, Kirchhoff, Ulrich, Rokuss, Wald, Isensee, Maier-Hein
arxiv.org/abs/2512.17774 mastoxiv.page/@arXiv_eessIV_bo
- Domain-Aware Quantum Circuit for QML
Gurinder Singh, Thaddeus Pellegrini, Kenneth M. Merz, Jr
arxiv.org/abs/2512.17800 mastoxiv.page/@arXiv_quantph_b
- Visually Prompted Benchmarks Are Surprisingly Fragile
Feng, Lian, Dunlap, Shu, Wang, Wang, Darrell, Suhr, Kanazawa
arxiv.org/abs/2512.17875 mastoxiv.page/@arXiv_csCV_bot/
- Learning vertical coordinates via automatic differentiation of a dynamical core
Tim Whittaker, Seth Taylor, Elsa Cardoso-Bihlo, Alejandro Di Luca, Alex Bihlo
arxiv.org/abs/2512.17877 mastoxiv.page/@arXiv_physicsao
- RadarGen: Automotive Radar Point Cloud Generation from Cameras
Tomer Borreda, Fangqiang Ding, Sanja Fidler, Shengyu Huang, Or Litany
arxiv.org/abs/2512.17897 mastoxiv.page/@arXiv_csCV_bot/
- Distributionally Robust Imitation Learning: Layered Control Architecture for Certifiable Autonomy
Gahlawat, Aboudonia, Banik, Hovakimyan, Matni, Ames, Zardini, Speranzon
arxiv.org/abs/2512.17899 mastoxiv.page/@arXiv_eessSY_bo
- Re-Depth Anything: Test-Time Depth Refinement via Self-Supervised Re-lighting
Ananta R. Bhattarai, Helge Rhodin
arxiv.org/abs/2512.17908 mastoxiv.page/@arXiv_csCV_bot/
toXiv_bot_toot

@NFL@darktundra.xyz
2025-11-21 22:46:50

Self-learning AI releases NFL picks, score predictions every Week 12 game

cbssports.com/nfl/news/nfl-wee

@inthehands@hachyderm.io
2025-11-18 15:57:18

I urge men — and everyone of a privileged identity — to read the Reddit post in the OP.
We’re the ones who need to hear about these experiences. We’re the ones who need to start learning to recognize it sooner, recognize it at a distance. We’re the ones who need to start sharing notes, sharing warnings, and having our colleagues’ backs.
5/

@arXiv_csDC_bot@mastoxiv.page
2026-01-21 13:28:36

Device Association and Resource Allocation for Hierarchical Split Federated Learning in Space-Air-Ground Integrated Network
Haitao Zhao, Xiaoyu Tang, Bo Xu, Jinlong Sun, Linghao Zhang
arxiv.org/abs/2601.13817

@Techmeme@techhub.social
2025-12-19 18:55:53

Neural Concept, whose 3D product design software uses deep learning to help cut development times, raised a $100M Series C, bringing its total funding to $130M (Chris Metinko/Axios)
axios.com/pro/enterprise-softw

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 11:50:31

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/3]:
- Sharp Structure-Agnostic Lower Bounds for General Functional Estimation
Jikai Jin, Vasilis Syrgkanis
arxiv.org/abs/2512.17341 mastoxiv.page/@arXiv_statML_bo
- Timely Information Updating for Mobile Devices Without and With ML Advice
Yu-Pin Hsu, Yi-Hsuan Tseng
arxiv.org/abs/2512.17381 mastoxiv.page/@arXiv_csNI_bot/
- SWE-Bench: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open...
Wang, Ramalho, Celestino, Pham, Liu, Sinha, Portillo, Osunwa, Maduekwe
arxiv.org/abs/2512.17419 mastoxiv.page/@arXiv_csSE_bot/
- Perfect reconstruction of sparse signals using nonconvexity control and one-step RSB message passing
Xiaosi Gu, Ayaka Sakata, Tomoyuki Obuchi
arxiv.org/abs/2512.17426 mastoxiv.page/@arXiv_statML_bo
- MULTIAQUA: A multimodal maritime dataset and robust training strategies for multimodal semantic s...
Jon Muhovič, Janez Perš
arxiv.org/abs/2512.17450 mastoxiv.page/@arXiv_csCV_bot/
- When Data Quality Issues Collide: A Large-Scale Empirical Study of Co-Occurring Data Quality Issu...
Emmanuel Charleson Dapaah, Jens Grabowski
arxiv.org/abs/2512.17460 mastoxiv.page/@arXiv_csSE_bot/
- Behavioural Effects of Agentic Messaging: A Case Study on a Financial Service Application
Olivier Jeunen, Schaun Wheeler
arxiv.org/abs/2512.17462 mastoxiv.page/@arXiv_csIR_bot/
- Linear Attention for Joint Power Optimization and User-Centric Clustering in Cell-Free Networks
Irched Chafaa, Giacomo Bacci, Luca Sanguinetti
arxiv.org/abs/2512.17466 mastoxiv.page/@arXiv_eessSY_bo
- Translating the Rashomon Effect to Sequential Decision-Making Tasks
Dennis Gross, Jørn Eirik Betten, Helge Spieker
arxiv.org/abs/2512.17470 mastoxiv.page/@arXiv_csAI_bot/
- Alternating Direction Method of Multipliers for Nonlinear Matrix Decompositions
Atharva Awari, Nicolas Gillis, Arnaud Vandaele
arxiv.org/abs/2512.17473 mastoxiv.page/@arXiv_eessSP_bo
- TwinSegNet: A Digital Twin-Enabled Federated Learning Framework for Brain Tumor Analysis
Almustapha A. Wakili, Adamu Hussaini, Abubakar A. Musa, Woosub Jung, Wei Yu
arxiv.org/abs/2512.17488 mastoxiv.page/@arXiv_csCV_bot/
- Resource-efficient medical image classification for edge devices
Mahsa Lavaei, Zahra Abadi, Salar Beigzad, Alireza Maleki
arxiv.org/abs/2512.17515 mastoxiv.page/@arXiv_eessIV_bo
- PathBench-MIL: A Comprehensive AutoML and Benchmarking Framework for Multiple Instance Learning i...
Brussee, Valkema, Weijer, Doeleman, Schrader, Kers
arxiv.org/abs/2512.17517 mastoxiv.page/@arXiv_csCV_bot/
- HydroGym: A Reinforcement Learning Platform for Fluid Dynamics
Christian Lagemann, et al.
arxiv.org/abs/2512.17534 mastoxiv.page/@arXiv_physicsfl
- When De-noising Hurts: A Systematic Study of Speech Enhancement Effects on Modern Medical ASR Sys...
Chondhekar, Murukuri, Vasani, Goyal, Badami, Rana, SN, Pandia, Katiyar, Jagadeesh, Gulati
arxiv.org/abs/2512.17562 mastoxiv.page/@arXiv_csSD_bot/
- Enabling Disaggregated Multi-Stage MLLM Inference via GPU-Internal Scheduling and Resource Sharing
Lingxiao Zhao, Haoran Zhou, Yuezhi Che, Dazhao Cheng
arxiv.org/abs/2512.17574 mastoxiv.page/@arXiv_csDC_bot/
- SkinGenBench: Generative Model and Preprocessing Effects for Synthetic Dermoscopic Augmentation i...
N. A. Adarsh Pritam, Jeba Shiney O, Sanyam Jain
arxiv.org/abs/2512.17585 mastoxiv.page/@arXiv_eessIV_bo
- MAD-OOD: A Deep Learning Cluster-Driven Framework for an Out-of-Distribution Malware Detection an...
Tosin Ige, Christopher Kiekintveld, Aritran Piplai, Asif Rahman, Olukunle Kolade, Sasidhar Kunapuli
arxiv.org/abs/2512.17594 mastoxiv.page/@arXiv_csCR_bot/
- Confidence-Credibility Aware Weighted Ensembles of Small LLMs Outperform Large LLMs in Emotion De...
Menna Elgabry, Ali Hamdi
arxiv.org/abs/2512.17630 mastoxiv.page/@arXiv_csCL_bot/
- Generative Multi-Objective Bayesian Optimization with Scalable Batch Evaluations for Sample-Effic...
Madhav R. Muthyala, Farshud Sorourifar, Tianhong Tan, You Peng, Joel A. Paulson
arxiv.org/abs/2512.17659 mastoxiv.page/@arXiv_statML_bo
toXiv_bot_toot

@cheryanne@aus.social
2026-01-10 20:59:07

RMS Hospitality Learning Labs
A practical webinar series from RMS that explores how technology can make hospitality simpler, smarter, and more human...
Great Australian Pods Podcast Directory: greataustralianpods.com/rms-ho

RMS Hospitality Learning Labs
Screenshot of the podcast listing on the Great Australian Pods website

We have learnt to look at the world through ratings, reviews, and whether a place is “Instagrammable.”
Nuance is pressed flat into a system of stars.
Even mountains and valleys are scored on Google Maps,
while countless unassuming places slip silently through the net.
I often wonder, ruefully, how much we are missing when only the ranked and the rated rise to the surface.
It begins to feel as though such systems are not merely cataloguing the world,
bu…

@YaleDivinitySchool@mstdn.social
2025-12-18 20:39:47

"Taking small steps is crucial. You might start by just learning about your immediate neighborhood, how things actually work, finding ways to intervene in things that are not working well. Understand the water supply and how to keep it clean. ... Share. Share what you have."
—YDS professor Willie James Jennings in this interview in the new issue of Reflections, focused on building hope for a living planet

People walking on paths during autumn
@UP8@mastodon.social
2025-11-18 15:49:24

⏰ Electric Vehicle Range Prediction Models: A Review of Machine Learning, Mathematical, and Simulation Approaches
#ev

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 11:50:19

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/3]:
- Optimizing Text Search: A Novel Pattern Matching Algorithm Based on Ukkonen's Approach
Xinyu Guan, Shaohua Zhang
arxiv.org/abs/2512.16927 mastoxiv.page/@arXiv_csDS_bot/
- SpIDER: Spatially Informed Dense Embedding Retrieval for Software Issue Localization
Shravan Chaudhari, Rahul Thomas Jacob, Mononito Goswami, Jiajun Cao, Shihab Rashid, Christian Bock
arxiv.org/abs/2512.16956 mastoxiv.page/@arXiv_csSE_bot/
- MemoryGraft: Persistent Compromise of LLM Agents via Poisoned Experience Retrieval
Saksham Sahai Srivastava, Haoyu He
arxiv.org/abs/2512.16962 mastoxiv.page/@arXiv_csCR_bot/
- Colormap-Enhanced Vision Transformers for MRI-Based Multiclass (4-Class) Alzheimer's Disease Clas...
Faisal Ahmed
arxiv.org/abs/2512.16964 mastoxiv.page/@arXiv_eessIV_bo
- Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows
Wanghan Xu, et al.
arxiv.org/abs/2512.16969 mastoxiv.page/@arXiv_csAI_bot/
- PAACE: A Plan-Aware Automated Agent Context Engineering Framework
Kamer Ali Yuksel
arxiv.org/abs/2512.16970 mastoxiv.page/@arXiv_csAI_bot/
- A Women's Health Benchmark for Large Language Models
Elisabeth Gruber, et al.
arxiv.org/abs/2512.17028 mastoxiv.page/@arXiv_csCL_bot/
- Perturb Your Data: Paraphrase-Guided Training Data Watermarking
Pranav Shetty, Mirazul Haque, Petr Babkin, Zhiqiang Ma, Xiaomo Liu, Manuela Veloso
arxiv.org/abs/2512.17075 mastoxiv.page/@arXiv_csCL_bot/
- Disentangled representations via score-based variational autoencoders
Benjamin S. H. Lyo, Eero P. Simoncelli, Cristina Savin
arxiv.org/abs/2512.17127 mastoxiv.page/@arXiv_statML_bo
- Biosecurity-Aware AI: Agentic Risk Auditing of Soft Prompt Attacks on ESM-Based Variant Predictors
Huixin Zhan
arxiv.org/abs/2512.17146 mastoxiv.page/@arXiv_csCR_bot/
- Application of machine learning to predict food processing level using Open Food Facts
Arora, Chauhan, Rana, Aditya, Bhagat, Kumar, Kumar, Semar, Singh, Bagler
arxiv.org/abs/2512.17169 mastoxiv.page/@arXiv_qbioBM_bo
- Systemic Risk Radar: A Multi-Layer Graph Framework for Early Market Crash Warning
Sandeep Neela
arxiv.org/abs/2512.17185 mastoxiv.page/@arXiv_qfinRM_bo
- Do Foundational Audio Encoders Understand Music Structure?
Keisuke Toyama, Zhi Zhong, Akira Takahashi, Shusuke Takahashi, Yuki Mitsufuji
arxiv.org/abs/2512.17209 mastoxiv.page/@arXiv_csSD_bot/
- CheXPO-v2: Preference Optimization for Chest X-ray VLMs with Knowledge Graph Consistency
Xiao Liang, Yuxuan An, Di Wang, Jiawei Hu, Zhicheng Jiao, Bin Jing, Quan Wang
arxiv.org/abs/2512.17213 mastoxiv.page/@arXiv_csCV_bot/
- Machine Learning Assisted Parameter Tuning on Wavelet Transform Amorphous Radial Distribution Fun...
Deriyan Senjaya, Stephen Ekaputra Limantoro
arxiv.org/abs/2512.17245 mastoxiv.page/@arXiv_condmatmt
- AlignDP: Hybrid Differential Privacy with Rarity-Aware Protection for LLMs
Madhava Gaikwad
arxiv.org/abs/2512.17251 mastoxiv.page/@arXiv_csCR_bot/
- Practical Framework for Privacy-Preserving and Byzantine-robust Federated Learning
Baolei Zhang, Minghong Fang, Zhuqing Liu, Biao Yi, Peizhao Zhou, Yuan Wang, Tong Li, Zheli Liu
arxiv.org/abs/2512.17254 mastoxiv.page/@arXiv_csCR_bot/
- Verifiability-First Agents: Provable Observability and Lightweight Audit Agents for Controlling A...
Abhivansh Gupta
arxiv.org/abs/2512.17259 mastoxiv.page/@arXiv_csMA_bot/
- Warmer for Less: A Cost-Efficient Strategy for Cold-Start Recommendations at Pinterest
Saeed Ebrahimi, Weijie Jiang, Jaewon Yang, Olafur Gudmundsson, Yucheng Tu, Huizhong Duan
arxiv.org/abs/2512.17277 mastoxiv.page/@arXiv_csIR_bot/
- LibriVAD: A Scalable Open Dataset with Deep Learning Benchmarks for Voice Activity Detection
Ioannis Stylianou, Achintya kr. Sarkar, Nauman Dawalatabad, James Glass, Zheng-Hua Tan
arxiv.org/abs/2512.17281 mastoxiv.page/@arXiv_csSD_bot/
- Penalized Fair Regression for Multiple Groups in Chronic Kidney Disease
Carter H. Nakamoto, Lucia Lushi Chen, Agata Foryciarz, Sherri Rose
arxiv.org/abs/2512.17340 mastoxiv.page/@arXiv_statME_bo
toXiv_bot_toot

@brichapman@mastodon.social
2025-12-16 20:21:00

Want to break into climate work but don't know where to start? Terra.do's Learning for Action fellowship might be your answer.
This 12-week program goes deep on real-world climate solutions—beyond just clean energy. You'll learn the science, explore diverse solutions, and connect with a global community, all while working full-time (6-10 hrs/week).
Financial aid available.

@mcdanlj@social.makerforums.info
2025-12-27 13:11:48

I've spent the past year learning Morse code, and it's ended up being a great deal of fun. I've had a lot of questions. I've tried to share what I've learned with others as I've gone along.
To celebrate the end of the year, I wrote a combination of information, story, and advice. How to get started, mobile apps and websites, books, getting on the air, learning to send, POTA, SST, keys and keyers. I'm not fluent or expert yet, but that means that I still remember what's hard!

@detondev@social.linux.pizza
2025-11-16 13:19:45

I'm still "e-learning" how to generate aura and hype moments at a market-competitive rate so bear with me

@NFL@darktundra.xyz
2025-12-22 14:26:15

Colts vs. 49ers SGP: 'Monday Night Football' same-game parlay picks, bets, props from SportsLine AI

cbssports.com/nfl/news/colts-4

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:55

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[4/5]:
- Sample, Don't Search: Rethinking Test-Time Alignment for Language Models
Gonçalo Faria, Noah A. Smith
arxiv.org/abs/2504.03790 mastoxiv.page/@arXiv_csCL_bot/
- A Survey on Archetypal Analysis
Aleix Alcacer, Irene Epifanio, Sebastian Mair, Morten Mørup
arxiv.org/abs/2504.12392 mastoxiv.page/@arXiv_statME_bo
- The Stochastic Occupation Kernel (SOCK) Method for Learning Stochastic Differential Equations
Michael L. Wells, Kamel Lahouel, Bruno Jedynak
arxiv.org/abs/2505.11622 mastoxiv.page/@arXiv_statML_bo
- BOLT: Block-Orthonormal Lanczos for Trace estimation of matrix functions
Kingsley Yeon, Promit Ghosal, Mihai Anitescu
arxiv.org/abs/2505.12289 mastoxiv.page/@arXiv_mathNA_bo
- Clustering and Pruning in Causal Data Fusion
Otto Tabell, Santtu Tikka, Juha Karvanen
arxiv.org/abs/2505.15215 mastoxiv.page/@arXiv_statML_bo
- On the performance of multi-fidelity and reduced-dimensional neural emulators for inference of ph...
Chloe H. Choi, Andrea Zanoni, Daniele E. Schiavazzi, Alison L. Marsden
arxiv.org/abs/2506.11683 mastoxiv.page/@arXiv_statML_bo
- Beyond Force Metrics: Pre-Training MLFFs for Stable MD Simulations
Maheshwari, Tang, Ock, Kolluru, Farimani, Kitchin
arxiv.org/abs/2506.14850 mastoxiv.page/@arXiv_physicsch
- Quantifying Uncertainty in the Presence of Distribution Shifts
Yuli Slavutsky, David M. Blei
arxiv.org/abs/2506.18283 mastoxiv.page/@arXiv_statML_bo
- ZKPROV: A Zero-Knowledge Approach to Dataset Provenance for Large Language Models
Mina Namazi, Alexander Nemecek, Erman Ayday
arxiv.org/abs/2506.20915 mastoxiv.page/@arXiv_csCR_bot/
- SpecCLIP: Aligning and Translating Spectroscopic Measurements for Stars
Zhao, Huang, Xue, Kong, Liu, Tang, Beers, Ting, Luo
arxiv.org/abs/2507.01939 mastoxiv.page/@arXiv_astrophIM
- Towards Facilitated Fairness Assessment of AI-based Skin Lesion Classifiers Through GenAI-based I...
Ko Watanabe, Stanislav Frolov, Aya Hassan, David Dembinsky, Adriano Lucieri, Andreas Dengel
arxiv.org/abs/2507.17860 mastoxiv.page/@arXiv_csCV_bot/
- PASS: Probabilistic Agentic Supernet Sampling for Interpretable and Adaptive Chest X-Ray Reasoning
Yushi Feng, Junye Du, Yingying Hong, Qifan Wang, Lequan Yu
arxiv.org/abs/2508.10501 mastoxiv.page/@arXiv_csAI_bot/
- Unified Acoustic Representations for Screening Neurological and Respiratory Pathologies from Voice
Ran Piao, Yuan Lu, Hareld Kemps, Tong Xia, Aaqib Saeed
arxiv.org/abs/2508.20717 mastoxiv.page/@arXiv_csSD_bot/
- Machine Learning-Driven Predictive Resource Management in Complex Science Workflows
Tasnuva Chowdhury, et al.
arxiv.org/abs/2509.11512 mastoxiv.page/@arXiv_csDC_bot/
- MatchFixAgent: Language-Agnostic Autonomous Repository-Level Code Translation Validation and Repair
Ali Reza Ibrahimzada, Brandon Paulsen, Reyhaneh Jabbarvand, Joey Dodds, Daniel Kroening
arxiv.org/abs/2509.16187 mastoxiv.page/@arXiv_csSE_bot/
- Automated Machine Learning Pipeline: Large Language Models-Assisted Automated Dataset Generation ...
Adam Lahouari, Jutta Rogal, Mark E. Tuckerman
arxiv.org/abs/2509.21647 mastoxiv.page/@arXiv_condmatmt
- Quantifying the Impact of Structured Output Format on Large Language Models through Causal Inference
Han Yuan, Yue Zhao, Li Zhang, Wuqiong Luo, Zheng Ma
arxiv.org/abs/2509.21791 mastoxiv.page/@arXiv_csCL_bot/
- The Generation Phases of Flow Matching: a Denoising Perspective
Anne Gagneux, Ségolène Martin, Rémi Gribonval, Mathurin Massias
arxiv.org/abs/2510.24830 mastoxiv.page/@arXiv_csCV_bot/
- Data-driven uncertainty-aware seakeeping prediction of the Delft 372 catamaran using ensemble Han...
Giorgio Palma, Andrea Serani, Matteo Diez
arxiv.org/abs/2511.04461 mastoxiv.page/@arXiv_eessSY_bo
- Generalized infinite dimensional Alpha-Procrustes based geometries
Salvish Goomanee, Andi Han, Pratik Jawanpuria, Bamdev Mishra
arxiv.org/abs/2511.09801 mastoxiv.page/@arXiv_statML_bo
toXiv_bot_toot

@inthehands@hachyderm.io
2025-12-02 17:18:14

I’m sympathetic to this from @…, but my patience for that learning process is short because the wrongheaded “plastic recycling is 100% hoax!!” canard quickly gets turned into “ALL recycling is 100% hoax!!” by right-wingers. It’s a learning process, yes, but a learning process that bad actors have been actively exploiting for at least a decade.
mastodon.social/@schwa/1156510

Google is still getting all the information collected by Nest Learning Thermostats,
including data measured by their sensors, such as temperature, humidity, ambient light, and motion.
“I was under the impression that the Google connection would be severed along with the remote functionality,
however that connection is not severed, and instead is a one-way street,” Kociemba says.

@brichapman@mastodon.social
2025-11-17 04:31:01

Participants in the Learning for Action fellowship are gaining vital skills as they engage with climate solutions and expert insights, empowering them to tackle climate change effectively. terra.do/blog/answering-your-l

@Techmeme@techhub.social
2025-12-17 13:30:35

Coursera plans to acquire rival online education platform Udemy in an all-stock deal valued at $2.5B, combining two of the largest US learning platforms (Akash Sriram/Reuters)
reuters.com/business/coursera-

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:24

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/5]:
- Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization a...
Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert Nowak, Yixuan Li
arxiv.org/abs/2306.09158
- Sparse, Efficient and Explainable Data Attribution with DualXDA
Galip \"Umit Yolcu, Moritz Weckbecker, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
arxiv.org/abs/2402.12118 mastoxiv.page/@arXiv_csLG_bot/
- HGQ: High Granularity Quantization for Real-time Neural Networks on FPGAs
Sun, Que, Årrestad, Loncar, Ngadiuba, Luk, Spiropulu
arxiv.org/abs/2405.00645 mastoxiv.page/@arXiv_csLG_bot/
- On the Identification of Temporally Causal Representation with Instantaneous Dependence
Li, Shen, Zheng, Cai, Song, Gong, Chen, Zhang
arxiv.org/abs/2405.15325 mastoxiv.page/@arXiv_csLG_bot/
- Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications
Yang Li, Daniel Agyei Asante, Changsheng Zhao, Ernie Chang, Yangyang Shi, Vikas Chandra
arxiv.org/abs/2405.15877 mastoxiv.page/@arXiv_csLG_bot/
- Privacy Bias in Language Models: A Contextual Integrity-based Auditing Metric
Yan Shvartzshnaider, Vasisht Duddu
arxiv.org/abs/2409.03735 mastoxiv.page/@arXiv_csLG_bot/
- Low-Rank Filtering and Smoothing for Sequential Deep Learning
Joanna Sliwa, Frank Schneider, Nathanael Bosch, Agustinus Kristiadi, Philipp Hennig
arxiv.org/abs/2410.06800 mastoxiv.page/@arXiv_csLG_bot/
- Hierarchical Multimodal LLMs with Semantic Space Alignment for Enhanced Time Series Classification
Xiaoyu Tao, Tingyue Pan, Mingyue Cheng, Yucong Luo, Qi Liu, Enhong Chen
arxiv.org/abs/2410.18686 mastoxiv.page/@arXiv_csLG_bot/
- Fairness via Independence: A (Conditional) Distance Covariance Framework
Ruifan Huang, Haixia Liu
arxiv.org/abs/2412.00720 mastoxiv.page/@arXiv_csLG_bot/
- Data for Mathematical Copilots: Better Ways of Presenting Proofs for Machine Learning
Simon Frieder, et al.
arxiv.org/abs/2412.15184 mastoxiv.page/@arXiv_csLG_bot/
- Pairwise Elimination with Instance-Dependent Guarantees for Bandits with Cost Subsidy
Ishank Juneja, Carlee Joe-Wong, Osman Yağan
arxiv.org/abs/2501.10290 mastoxiv.page/@arXiv_csLG_bot/
- Towards Human-Guided, Data-Centric LLM Co-Pilots
Evgeny Saveliev, Jiashuo Liu, Nabeel Seedat, Anders Boyd, Mihaela van der Schaar
arxiv.org/abs/2501.10321 mastoxiv.page/@arXiv_csLG_bot/
- Regularized Langevin Dynamics for Combinatorial Optimization
Shengyu Feng, Yiming Yang
arxiv.org/abs/2502.00277
- Generating Samples to Probe Trained Models
Eren Mehmet Kıral, Nurşen Aydın, Ş. İlker Birbil
arxiv.org/abs/2502.06658 mastoxiv.page/@arXiv_csLG_bot/
- On Agnostic PAC Learning in the Small Error Regime
Julian Asilis, Mikael Møller Høgsgaard, Grigoris Velegkas
arxiv.org/abs/2502.09496 mastoxiv.page/@arXiv_csLG_bot/
- Preconditioned Inexact Stochastic ADMM for Deep Model
Shenglong Zhou, Ouya Wang, Ziyan Luo, Yongxu Zhu, Geoffrey Ye Li
arxiv.org/abs/2502.10784 mastoxiv.page/@arXiv_csLG_bot/
- On the Effect of Sampling Diversity in Scaling LLM Inference
Wang, Liu, Chen, Light, Liu, Chen, Zhang, Cheng
arxiv.org/abs/2502.11027 mastoxiv.page/@arXiv_csLG_bot/
- How to use score-based diffusion in earth system science: A satellite nowcasting example
Randy J. Chase, Katherine Haynes, Lander Ver Hoef, Imme Ebert-Uphoff
arxiv.org/abs/2505.10432 mastoxiv.page/@arXiv_csLG_bot/
- PEAR: Equal Area Weather Forecasting on the Sphere
Hampus Linander, Christoffer Petersson, Daniel Persson, Jan E. Gerken
arxiv.org/abs/2505.17720 mastoxiv.page/@arXiv_csLG_bot/
- Train Sparse Autoencoders Efficiently by Utilizing Features Correlation
Vadim Kurochkin, Yaroslav Aksenov, Daniil Laptev, Daniil Gavrilov, Nikita Balagansky
arxiv.org/abs/2505.22255 mastoxiv.page/@arXiv_csLG_bot/
- A Certified Unlearning Approach without Access to Source Data
Umit Yigit Basaran, Sk Miraj Ahmed, Amit Roy-Chowdhury, Basak Guler
arxiv.org/abs/2506.06486 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot

@NFL@darktundra.xyz
2026-01-19 18:01:54

Self-learning AI generates NFL picks, exact score predictions for 2026 NFC, AFC Championship Games

cbssports.com/nfl/news/nfl-afc

@NFL@darktundra.xyz
2025-11-16 16:16:47

Lions vs. Eagles NFL player props, SGP: Self-learning AI backs Jahmyr Gibbs Over 13.5 carries on 'SNF'

cbssports.com/nfl/news/lions-e

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:10

Polyharmonic Cascade
Yuriy N. Bakhvalov
arxiv.org/abs/2512.17671 arxiv.org/pdf/2512.17671 arxiv.org/html/2512.17671
arXiv:2512.17671v1 Announce Type: new
Abstract: This paper presents a deep machine learning architecture, the "polyharmonic cascade" -- a sequence of packages of polyharmonic splines, where each layer is rigorously derived from the theory of random functions and the principles of indifference. This makes it possible to approximate nonlinear functions of arbitrary complexity while preserving global smoothness and a probabilistic interpretation. For the polyharmonic cascade, a training method alternative to gradient descent is proposed: instead of directly optimizing the coefficients, one solves a single global linear system on each batch with respect to the function values at fixed "constellations" of nodes. This yields synchronized updates of all layers, preserves the probabilistic interpretation of individual layers and theoretical consistency with the original model, and scales well: all computations reduce to 2D matrix operations efficiently executed on a GPU. Fast learning without overfitting on MNIST is demonstrated.
toXiv_bot_toot
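The abstract's training idea can be sketched in miniature: a hypothetical example (my construction, not the authors' code) that fits one polyharmonic-spline layer on a batch by solving a single regularized linear system for weights at fixed centers ("constellations"), rather than by gradient descent.

```python
import numpy as np

# Hedged sketch: the basis degree, center grid, and ridge value are my
# assumptions, chosen only to illustrate the "one global linear solve" idea.

def phs_basis(x, centers, k=3):
    """Polyharmonic basis phi(r) = r^k at r = |x - c| for each center c."""
    r = np.abs(x[:, None] - centers[None, :])
    return r ** k

def fit_layer(x, y, centers, ridge=1e-8):
    """Solve the regularized normal equations (Phi^T Phi + ridge*I) w = Phi^T y."""
    Phi = phs_basis(x, centers)
    A = Phi.T @ Phi + ridge * np.eye(len(centers))
    return np.linalg.solve(A, Phi.T @ y)

# Toy batch: one linear solve recovers a smooth nonlinear target.
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x)
centers = np.linspace(-1.0, 1.0, 15)
w = fit_layer(x, y, centers)
err = np.max(np.abs(phs_basis(x, centers) @ w - y))  # max error on the batch
```

As in the abstract, everything reduces to dense matrix operations, which is why this style of update maps well onto a GPU.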

Whistles have become a popular raid alert tool in cities across the country
– New Yorkers wear them around their necks to warn neighbors,
the people of New Orleans blast them outside ICE facilities
and Charlotte residents used them to ward off Customs and Border Protection officials.
While strongly associated with Chicago, the tactic is actually one that city organizers learned in part from groups in Los Angeles.
Its spread is illustrative of the many ways cit…

@NFL@darktundra.xyz
2025-11-18 15:12:01

Self-learning AI releases NFL picks, score predictions every Week 12 game

cbssports.com/nfl/news/nfl-wee

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:33:00

Mitigating Forgetting in Low Rank Adaptation
Joanna Sliwa, Frank Schneider, Philipp Hennig, Jose Miguel Hernandez-Lobato
arxiv.org/abs/2512.17720 arxiv.org/pdf/2512.17720 arxiv.org/html/2512.17720
arXiv:2512.17720v1 Announce Type: new
Abstract: Parameter-efficient fine-tuning methods, such as Low-Rank Adaptation (LoRA), enable fast specialization of large pre-trained models to different downstream applications. However, this process often leads to catastrophic forgetting of the model's prior domain knowledge. We address this issue with LaLoRA, a weight-space regularization technique that applies a Laplace approximation to Low-Rank Adaptation. Our approach estimates the model's confidence in each parameter and constrains updates in high-curvature directions, preserving prior knowledge while enabling efficient target-domain learning. By applying the Laplace approximation only to the LoRA weights, the method remains lightweight. We evaluate LaLoRA by fine-tuning a Llama model for mathematical reasoning and demonstrate an improved learning-forgetting trade-off, which can be directly controlled via the method's regularization strength. We further explore different loss landscape curvature approximations for estimating parameter confidence, analyze the effect of the data used for the Laplace approximation, and study robustness across hyperparameters.
toXiv_bot_toot
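The mechanism described above, constraining updates more strongly in directions where the model is confident, can be illustrated with a toy quadratic (an assumed example, not the paper's implementation): a per-parameter curvature estimate scales a penalty that anchors weights to their prior values.

```python
import numpy as np

# Hedged sketch: `curvature` plays the role of the Laplace-approximation
# confidence; all names and numbers here are illustrative assumptions.

def fine_tune(w_prior, grad_new_task, curvature, lam=1.0, lr=0.1, steps=200):
    """Minimize new-task loss + (lam/2) * sum(curvature * (w - w_prior)^2)."""
    w = w_prior.copy()
    for _ in range(steps):
        g = grad_new_task(w) + lam * curvature * (w - w_prior)
        w -= lr * g
    return w

# Toy: the new task pulls both weights toward 1.0; the prior sits at 0.0.
w0 = np.zeros(2)
target = np.ones(2)
grad = lambda w: w - target          # gradient of 0.5 * ||w - target||^2
curv = np.array([10.0, 0.1])         # first weight: high confidence
w = fine_tune(w0, grad, curv)
# The low-curvature weight adapts to the new task; the high-curvature one
# stays near its prior, which is the learning-forgetting trade-off the
# regularization strength `lam` controls.
```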

Google’s vibe-coding tool, Opal,
is making its way to Gemini.
The company on Wednesday said it is integrating the tool,
which lets you build AI-powered mini apps,
inside the Gemini web app,
allowing users to create their own custom apps,
which Google calls Gems.
Introduced in 2024,
Gems are customized versions of Gemini designed for specific tasks or scenarios.
For instance, some of Google’s pre-made Gems include
a learning coach,…

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:50

Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
arxiv.org/abs/2512.17696 arxiv.org/pdf/2512.17696 arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of ``Deep Variography'', where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
toXiv_bot_toot
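The core idea, adding a distance-decaying kernel to the attention logits as a soft topological prior, can be sketched as follows. This is an assumed minimal form (single head, exponential kernel, illustrative parameter names), not the paper's architecture; in the paper the kernel parameters are learned end-to-end.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def spatial_attention(Q, K, V, coords, length_scale=1.0, weight=1.0):
    """Self-attention whose logits are biased by a stationary spatial kernel."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)
    # Exponential (geostatistical) covariance of pairwise distances favors
    # spatially proximal interactions; length_scale/weight would be learned.
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    logits = logits + weight * np.exp(-dist / length_scale)
    return softmax(logits) @ V

rng = np.random.default_rng(0)
n, d = 5, 4
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
coords = rng.standard_normal((n, 2))   # sensor locations in 2D
out = spatial_attention(Q, K, V, coords)
```

The data-driven QK term remains free to model non-stationary residual structure on top of the stationary prior, matching the decomposition described in the abstract.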

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:55:06

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[5/5]:
- CLAReSNet: When Convolution Meets Latent Attention for Hyperspectral Image Classification
Asmit Bandyopadhyay, Anindita Das Bhattacharjee, Rakesh Das
arxiv.org/abs/2511.12346 mastoxiv.page/@arXiv_csCV_bot/
- Safeguarded Stochastic Polyak Step Sizes for Non-smooth Optimization: Robust Performance Without ...
Dimitris Oikonomou, Nicolas Loizou
arxiv.org/abs/2512.02342 mastoxiv.page/@arXiv_mathOC_bo
- Predictive Modeling of I/O Performance for Machine Learning Training Pipelines: A Data-Driven App...
Karthik Prabhakar, Durgamadhab Mishra
arxiv.org/abs/2512.06699 mastoxiv.page/@arXiv_csPF_bot/
- Minimum Bayes Risk Decoding for Error Span Detection in Reference-Free Automatic Machine Translat...
Lyu, Song, Kamigaito, Ding, Tanaka, Utiyama, Funakoshi, Okumura
arxiv.org/abs/2512.07540 mastoxiv.page/@arXiv_csCL_bot/
- In-Context Learning for Seismic Data Processing
Fabian Fuchs, Mario Ruben Fernandez, Norman Ettrich, Janis Keuper
arxiv.org/abs/2512.11575 mastoxiv.page/@arXiv_csCV_bot/
- Journey Before Destination: On the importance of Visual Faithfulness in Slow Thinking
Rheeya Uppaal, Phu Mon Htut, Min Bai, Nikolaos Pappas, Zheng Qi, Sandesh Swamy
arxiv.org/abs/2512.12218 mastoxiv.page/@arXiv_csCV_bot/
- Non-Resolution Reasoning (NRR): A Computational Framework for Contextual Identity and Ambiguity P...
Kei Saito
arxiv.org/abs/2512.13478 mastoxiv.page/@arXiv_csCL_bot/
- Stylized Synthetic Augmentation further improves Corruption Robustness
Georg Siedel, Rojan Regmi, Abhirami Anand, Weijia Shao, Silvia Vock, Andrey Morozov
arxiv.org/abs/2512.15675 mastoxiv.page/@arXiv_csCV_bot/
- mimic-video: Video-Action Models for Generalizable Robot Control Beyond VLAs
Jonas Pai, Liam Achenbach, Victoriano Montesinos, Benedek Forrai, Oier Mees, Elvis Nava
arxiv.org/abs/2512.15692 mastoxiv.page/@arXiv_csRO_bot/
toXiv_bot_toot

A damning new study could put AI companies on the defensive.
In it, Stanford and Yale researchers found compelling evidence that AI models are actually copying all that data,
not “learning” from it.
Specifically, four prominent LLMs
— OpenAI’s GPT-4.1, Google’s Gemini 2.5 Pro, xAI’s Grok 3, and Anthropic’s Claude 3.7 Sonnet
— happily reproduced lengthy excerpts from popular
— and protected
— works, with a stunning degree of accuracy.
They fou…

@NFL@darktundra.xyz
2025-12-16 14:31:31

Self-learning AI releases NFL picks, score predictions every Week 16 game

cbssports.com/nfl/news/nfl-wee

@NFL@darktundra.xyz
2026-01-13 16:11:48

Self-learning AI generates NFL picks, score predictions for every 2026 divisional round matchup

cbssports.com/nfl/news/nfl-div

@NFL@darktundra.xyz
2025-11-09 16:46:19

Steelers vs. Chargers NFL player props: Self-learning AI backs Justin Herbert Over 252.5 passing on SNF

cbssports.com/nfl/news/steeler

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:30

You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
arxiv.org/abs/2512.17678 arxiv.org/pdf/2512.17678 arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovering gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
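The mechanism the abstract describes — a differentiable gate per gene so that selection and prediction are trained jointly, with only gated genes contributing at inference — can be sketched with a relaxed Bernoulli gate. This is a hypothetical illustration of that general technique, not YOTO's actual architecture: the Gumbel-sigmoid relaxation, the temperature value, and the hard threshold at inference are all assumptions.

```python
import torch
import torch.nn as nn

class DifferentiableGeneSelector(nn.Module):
    """Illustrative sketch of end-to-end feature (gene) subset selection.

    A per-gene logit is relaxed into a soft Bernoulli gate during training
    (Gumbel-sigmoid), so the prediction loss directly shapes which genes are
    selected; at inference the gate is hardened, so only selected genes
    reach the classifier and no separate downstream model is needed.
    """

    def __init__(self, n_genes: int, n_classes: int, temperature: float = 0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_genes))  # selection scores
        self.temperature = temperature
        self.classifier = nn.Linear(n_genes, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Gumbel-sigmoid: differentiable surrogate for Bernoulli sampling
            u = torch.rand_like(self.logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log1p(-u)
            gate = torch.sigmoid((self.logits + noise) / self.temperature)
        else:
            gate = (self.logits > 0).float()  # hard subset at inference
        return self.classifier(x * gate)

    def selected_genes(self) -> torch.Tensor:
        # indices of genes whose gate survives the hard threshold
        return (self.logits > 0).nonzero(as_tuple=True)[0]
```

In practice an L0- or L1-style penalty on the gates would be added to the task loss to enforce the compactness the abstract emphasizes.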
toXiv_bot_toot