Tootfinder

Opt-in global Mastodon full text search. Join the index!

@seeingwithsound@mas.to
2026-03-02 20:58:45

Neurons receive precisely tailored teaching signals as we learn (well, as mice learn) mcgovern.mit.edu/2026/02/25/ne "the brain can deliver neuron-specific feedback during learning";

@sauer_lauwarm@mastodon.social
2025-12-20 21:14:21

instagram.com/p/DSbtXuCCe9Y/?u

@NFL@darktundra.xyz
2026-02-27 15:56:22

Fernando Mendoza on Raiders, working with Tom Brady: 'I’m all about learning' nytimes.com/athletic/7075383/2

@raiders@darktundra.xyz
2026-02-27 15:53:25

Fernando Mendoza on Raiders, working with Tom Brady: 'I’m all about learning' nytimes.com/athletic/7075383/2

@frankel@mastodon.top
2026-01-16 09:03:34

From #Either to Raise
#Kotlin

@theodric@social.linux.pizza
2026-02-19 23:05:11

Spent my evening learning enough about the rpmbuild system to beat another Arch AUR package into submission on openSUSE (Firefox with global menu support).
I probably need to learn how to use quilt to properly rebase the patch on the current Firefox release rather than just forcing patching to fuzz level 3. One thing at a time.

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:38:31

From Isolation to Integration: Building an Adaptive Expert Forest for Pre-Trained Model-based Class-Incremental Learning
Ruiqi Liu, Boyu Diao, Hangda Liu, Zhulin An, Fei Wang, Yongjun Xu
arxiv.org/abs/2602.20911 arxiv.org/pdf/2602.20911 arxiv.org/html/2602.20911
arXiv:2602.20911v1 Announce Type: new
Abstract: Class-Incremental Learning (CIL) requires models to learn new classes without forgetting old ones. A common method is to freeze a pre-trained model and train a new, lightweight adapter for each task. While this prevents forgetting, it treats the learned knowledge as a simple, unstructured collection and fails to use the relationships between tasks. To address this, we propose the Semantic-guided Adaptive Expert Forest (SAEF), a new method that organizes adapters into a structured hierarchy for better knowledge sharing. SAEF first groups tasks into conceptual clusters based on their semantic relationships. Then, within each cluster, it builds a balanced expert tree by merging the adapters of similar tasks into new ones. At inference time, SAEF finds and activates a set of relevant experts from the forest for any given input. The final prediction is made by combining the outputs of these activated experts, weighted by how confident each expert is. Experiments on several benchmark datasets show that SAEF achieves state-of-the-art performance.
toXiv_bot_toot
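
To make the inference step concrete, here is a minimal sketch of routing an input to a few experts and combining their confidence-weighted outputs. Everything in it is an illustrative assumption (prototype-similarity routing, top-k activation, max-softmax confidence), not the paper's exact definitions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, n_classes, dim, k = 6, 10, 32, 3

prototypes = rng.normal(size=(n_experts, dim))          # one routing key per expert
expert_heads = rng.normal(size=(n_experts, dim, n_classes))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    # Route: activate the k experts whose prototypes best match the input.
    scores = prototypes @ x
    active = np.argsort(scores)[-k:]
    # Combine: weight each active expert's class distribution by its confidence.
    combined = np.zeros(n_classes)
    for i in active:
        probs = softmax(x @ expert_heads[i])
        combined += probs.max() * probs                 # confidence-weighted vote
    return combined.argmax()

print(predict(rng.normal(size=dim)))
```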

@nemobis@mamot.fr
2025-12-22 17:35:30

The other day I had a funny conversation. A Finnish person was making excuses for my laziness at learning Finnish. Then she asked «is Italian hard to learn?». I never know how to answer the question so I said «I don't know: at least pronunciation is not too bad for Finns, they may sound funny but they're understandable; they mostly have trouble because they have no concept of separate p and b, and so on». She said «you mean strong p and soft p»? Not how I would have phrased it, but y…

@hex@kolektiva.social
2026-02-20 10:37:46

In my head I'm just replacing "counter insurgency" with "horse cavalry."
"We're going to keep learning how to leverage horse cavalry against machine guns and tanks until we get it right."
No. No you will not. You will keep trying until you learn the hard way that it can't be done.

@dennisfaucher@infosec.exchange
2026-02-13 15:04:44

Run your own local chat on your laptop. Plus add web search like Perplexity. I do this on my Mac to save a few AI DC BTUs and to learn stuff from real web pages rather than LLM hallucinations.
• Run your own chat: carlosvaz.com/posts/running-ll
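
For the "run your own chat" part, here is a minimal sketch assuming an Ollama server on its default local port; the linked post may use a different stack, and the model name is only an example.

```python
import json
import urllib.request

def ask_local(prompt: str, model: str = "llama3.2") -> str:
    # POST a single non-streaming generation request to a local Ollama server.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local("In one sentence, what is retrieval-augmented generation?"))
```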

@frankel@mastodon.top
2026-02-15 09:20:44

Learn fundamentals, not frameworks
newsletter.techworld-with-mila

@lightweight@mastodon.nzoss.nz
2026-01-12 03:26:40

Hello all - it's that time again - tomorrow evening, Tue 13 Jan, at 20:00 NZDT, we'll be having our Jan Libre/FOSS meeting at meeting.iridescent.nz - anyone welcome to join us. It'll be held on our BigBlueButton instance (thanks to Prodigi.nz for sponsored hosting infrastructure!). We'll …

@candidexmedia@mastodon.design
2026-02-12 19:19:02

@0xdjdev@mastodon.art Here are my recs:
General design principles: baselinehq.com/course.html
For learning software tools: If you can access Lynda.com / LinkedIn Learning through your public library, I highly recommend it.
Additional Learning Resources:
- Extra Bold:

@brichapman@mastodon.social
2025-12-14 20:20:01

Want to break into climate work but don't know where to start? Terra.do's Learning for Action fellowship might be your answer.
This 12-week program covers clean energy, climate policy, and other key solutions—all designed to fit around your full-time job (6-10 hours/week). You'll learn from industry pros, build your network, and join graduates who've successfully landed climate careers.
Financial aid available for those ready to make a difference.

@Techmeme@techhub.social
2025-12-13 23:05:53

To build more powerful AI systems, some AI leaders are focusing on pursuing an approach called continual learning, which mimics how people learn over time (Shirin Ghaffary/Bloomberg)

@arXiv_csGT_bot@mastoxiv.page
2025-12-09 07:58:07

Learning Paths to Multi-Sector Equilibrium: Belief Dynamics Under Uncertain Returns to Scale
Stefano Nasini, Rabia Nessah, Bertrand Wigniolle
arxiv.org/abs/2512.07013 arxiv.org/pdf/2512.07013 arxiv.org/html/2512.07013
arXiv:2512.07013v1 Announce Type: new
Abstract: This paper explores the dynamics of learning in a multi-sector general equilibrium model where firms operate under incomplete information about their production returns to scale. Firms iteratively update their beliefs using maximum a posteriori estimation, derived from observed production outcomes, to refine their knowledge of their returns to scale. The implications of these learning dynamics for market equilibrium, and the conditions under which firms can effectively learn their true returns to scale, are the key objects of this study. Our results shed light on how idiosyncratic shocks influence the learning process and demonstrate that input decisions encode all pertinent information for belief updates. Additionally, we show that long-memory (path-dependent) learning, which keeps track of all past estimates, ends up performing worse than a short-memory (path-independent) approach.
toXiv_bot_toot
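
As a toy contrast between the two update rules the abstract compares, here is a sketch of a firm estimating an unknown scalar (a stand-in for its returns to scale) from noisy outcomes. The Gaussian setup is an illustrative assumption; in this stationary toy both rules converge, whereas the paper's comparison concerns the interactive equilibrium setting.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.7                                   # true parameter
obs = theta + rng.normal(0, 0.2, size=200)    # noisy observed outcomes

# Long memory (path-dependent): Gaussian MAP over the entire history.
prior_mean, prior_var, noise_var = 0.5, 1.0, 0.04
long_mem = [(prior_mean / prior_var + obs[: t + 1].sum() / noise_var)
            / (1 / prior_var + (t + 1) / noise_var) for t in range(len(obs))]

# Short memory (path-independent): carry forward only the latest estimate.
gain, est, short_mem = 0.05, prior_mean, []
for y in obs:
    est = est + gain * (y - est)              # exponentially forgets older data
    short_mem.append(est)

print(f"long-memory final estimate:  {long_mem[-1]:.3f}")
print(f"short-memory final estimate: {short_mem[-1]:.3f}")
```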

@berlinbuzzwords@floss.social
2026-01-07 12:51:06

Become a partner and learn about the latest trends and buzz in the world of Data, Search and Machine Learning, while simultaneously supporting Open Source communities through your sponsorship!

If your company or organization would like to support #bbuzz, please email us at partner@berlinbuzzwords.de.

To learn more, visit: 2026.berlinbuzzwords.de/become

@bobmueller@mastodon.world
2025-12-10 15:30:06

Punctuation matters, yo.
instagram.com/reel/DR032CTDDv1

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:40:31

Matching Multiple Experts: On the Exploitability of Multi-Agent Imitation Learning
Antoine Bergerault, Volkan Cevher, Negar Mehr
arxiv.org/abs/2602.21020 arxiv.org/pdf/2602.21020 arxiv.org/html/2602.21020
arXiv:2602.21020v1 Announce Type: new
Abstract: Multi-agent imitation learning (MA-IL) aims to learn optimal policies from expert demonstrations in multi-agent interactive domains. Despite existing guarantees on the performance of the resulting learned policies, characterizations of how far the learned policies are from a Nash equilibrium are missing for offline MA-IL. In this paper, we demonstrate impossibility and hardness results for learning low-exploitable policies in general $n$-player Markov Games. We do so by providing examples where even exact measure matching fails, and by demonstrating a new hardness result on characterizing the Nash gap given a fixed measure matching error. We then show how these challenges can be overcome using strategic dominance assumptions on the expert equilibrium. Specifically, for the case of dominant strategy expert equilibria, assuming Behavioral Cloning error $\epsilon_{\text{BC}}$, this yields a Nash imitation gap of $\mathcal{O}\left(n\epsilon_{\text{BC}}/(1-\gamma)^2\right)$ for a discount factor $\gamma$. We generalize this result with a new notion of best-response continuity, and argue that this is implicitly encouraged by standard regularization techniques.
toXiv_bot_toot
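
The exploitability at the center of the abstract is easy to compute in a small example. This sketch measures the Nash gap of a joint mixed strategy in a two-player matrix game with made-up payoffs: how much each player could gain by best-responding to the other's (say, imitated) policy. A Nash equilibrium has gap zero.

```python
import numpy as np

A = np.array([[3.0, 0.0], [5.0, 1.0]])   # payoffs for player 1 (row chooser)
B = np.array([[3.0, 5.0], [0.0, 1.0]])   # payoffs for player 2 (column chooser)

def nash_gap(p, q):
    """p, q: mixed strategies of the row and column players."""
    v1, v2 = p @ A @ q, p @ B @ q        # current expected payoffs
    br1 = (A @ q).max()                  # row player's best-response value
    br2 = (p @ B).max()                  # column player's best-response value
    return (br1 - v1) + (br2 - v2)

p = q = np.array([0.5, 0.5])
print(nash_gap(p, q))                    # > 0: uniform play is exploitable here
```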

@brichapman@mastodon.social
2025-12-16 20:21:00

Want to break into climate work but don't know where to start? Terra.do's Learning for Action fellowship might be your answer.
This 12-week program goes deep on real-world climate solutions—beyond just clean energy. You'll learn the science, explore diverse solutions, and connect with a global community, all while working full-time (6-10 hrs/week).
Financial aid available.

@lightweight@mastodon.nzoss.nz
2025-12-08 02:22:52

It's that time again! Tomorrow evening (Tue 9/12), 20:00 NZDT is our monthly FOSS/Libre Tech catch up. Find us at meeting.iridescent.nz - all welcome! We'll be discussing relevant current events, sharing case studies, looking at new technologies, and how to make a living doing this stuff! Our mee…

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:41:21

Localized Dynamics-Aware Domain Adaptation for Off-Dynamics Offline Reinforcement Learning
Zhangjie Xia, Yu Yang, Pan Xu
arxiv.org/abs/2602.21072 arxiv.org/pdf/2602.21072 arxiv.org/html/2602.21072
arXiv:2602.21072v1 Announce Type: new
Abstract: Off-dynamics offline reinforcement learning (RL) aims to learn a policy for a target domain using limited target data and abundant source data collected under different transition dynamics. Existing methods typically address dynamics mismatch either globally over the state space or via pointwise data filtering; these approaches can miss localized cross-domain similarities or incur high computational cost. We propose Localized Dynamics-Aware Domain Adaptation (LoDADA), which exploits localized dynamics mismatch to better reuse source data. LoDADA clusters transitions from source and target datasets and estimates cluster-level dynamics discrepancy via domain discrimination. Source transitions from clusters with small discrepancy are retained, while those from clusters with large discrepancy are filtered out. This yields a fine-grained and scalable data selection strategy that avoids overly coarse global assumptions and expensive per-sample filtering. We provide theoretical insights and extensive experiments across environments with diverse global and local dynamics shifts. Results show that LoDADA consistently outperforms state-of-the-art off-dynamics offline RL methods by better leveraging localized distribution mismatch.
toXiv_bot_toot
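
The selection rule described in the abstract can be sketched with off-the-shelf tools: cluster pooled transitions, train a source-vs-target domain classifier per cluster, and keep source data only from clusters the classifier cannot tell apart. The feature construction, cluster count, and 0.6 accuracy threshold below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(500, 4))                 # stand-in (s, a, s') features
tgt = np.vstack([rng.normal(0.0, 1.0, size=(200, 4)),     # region matching the source
                 rng.normal(3.0, 1.0, size=(100, 4))])    # region with shifted dynamics

X = np.vstack([src, tgt])
domain = np.r_[np.zeros(len(src)), np.ones(len(tgt))]
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

keep = []
for c in range(5):
    m = labels == c
    if len(np.unique(domain[m])) < 2:
        continue                                          # cluster holds one domain only
    acc = LogisticRegression().fit(X[m], domain[m]).score(X[m], domain[m])
    if acc < 0.6:                                         # near chance: dynamics look alike
        keep.append(c)

kept_src = src[np.isin(labels[: len(src)], keep)]
print(f"kept {len(kept_src)} of {len(src)} source transitions")
```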