Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@Cognessence@social.linux.pizza
2026-02-13 09:46:57

‘Only Embrace’ was actually called ‘Only Envelope’ for the longest time, partly because, along with the emotional layer, I was interested in breaking free of any use of percussion - instead implying rhythm through envelopes programmed into the patches (along with musical use of shifting compression flaring in response to these, and then saturation that would “bloom” out in various M/S configurations).

@deprogrammaticaipsum@mas.to
2026-01-04 16:15:34

"This single task of managing memory has proven to be one of the most difficult to grasp and understand, and, most importantly, to get right.
Because not getting it right meant crashes, security issues, resource shortages, unhappy customers, and lots of white hair. To make things worse, pretty much every programming language these days comes with its own ideas of how to keep track of things on the heap."
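The point about each language having its own ideas of heap tracking can be illustrated with CPython, whose scheme is reference counting backed by a cyclic garbage collector. A minimal sketch (variable names are illustrative, not from the quoted text):

```python
import gc
import sys

# CPython keeps a reference count on every heap object and frees it
# when the count hits zero; a cycle collector handles the leftovers.
payload = []
before = sys.getrefcount(payload)   # includes the temporary ref made by the call itself

alias = payload                     # a second reference to the same list
after = sys.getrefcount(payload)    # one higher than before

# Reference cycles defeat pure refcounting; the cycle collector reclaims them.
a = []
a.append(a)                         # the list refers to itself
del a                               # refcount never reaches zero on its own
unreachable = gc.collect()          # number of unreachable objects the collector found
```

Other languages answer the same question differently (Rust with ownership, Java with a tracing collector, C with manual `free`), which is exactly the variety the quote complains about.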

@kubikpixel@chaos.social
2026-01-07 06:05:10

Disable JavaScript — How to disable JavaScript in your browser
Nowadays almost all web pages contain JavaScript, a scripting language that runs arbitrary code on the visitor's computer through the web browser. It is supposed to make web pages functional for specific purposes, but it has proven its potential to cause significant harm to users time and time again. […]

@mgorny@social.treehouse.systems
2025-12-21 04:55:48

I became a programmer because I found it much easier to program computers than to talk to people. Why would anyone in their right mind claim that I'd be better off talking in human language to machines that pretend to be the kind of smug humans who have no clue about coding, but will fulfill all the assignments I give them by googling and copy-pasting whatever they can find?!
#NoAI #AI #LLM

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:55

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[4/5]:
- Sample, Don't Search: Rethinking Test-Time Alignment for Language Models
Gonçalo Faria, Noah A. Smith
arxiv.org/abs/2504.03790 mastoxiv.page/@arXiv_csCL_bot/
- A Survey on Archetypal Analysis
Aleix Alcacer, Irene Epifanio, Sebastian Mair, Morten Mørup
arxiv.org/abs/2504.12392 mastoxiv.page/@arXiv_statME_bo
- The Stochastic Occupation Kernel (SOCK) Method for Learning Stochastic Differential Equations
Michael L. Wells, Kamel Lahouel, Bruno Jedynak
arxiv.org/abs/2505.11622 mastoxiv.page/@arXiv_statML_bo
- BOLT: Block-Orthonormal Lanczos for Trace estimation of matrix functions
Kingsley Yeon, Promit Ghosal, Mihai Anitescu
arxiv.org/abs/2505.12289 mastoxiv.page/@arXiv_mathNA_bo
- Clustering and Pruning in Causal Data Fusion
Otto Tabell, Santtu Tikka, Juha Karvanen
arxiv.org/abs/2505.15215 mastoxiv.page/@arXiv_statML_bo
- On the performance of multi-fidelity and reduced-dimensional neural emulators for inference of ph...
Chloe H. Choi, Andrea Zanoni, Daniele E. Schiavazzi, Alison L. Marsden
arxiv.org/abs/2506.11683 mastoxiv.page/@arXiv_statML_bo
- Beyond Force Metrics: Pre-Training MLFFs for Stable MD Simulations
Maheshwari, Tang, Ock, Kolluru, Farimani, Kitchin
arxiv.org/abs/2506.14850 mastoxiv.page/@arXiv_physicsch
- Quantifying Uncertainty in the Presence of Distribution Shifts
Yuli Slavutsky, David M. Blei
arxiv.org/abs/2506.18283 mastoxiv.page/@arXiv_statML_bo
- ZKPROV: A Zero-Knowledge Approach to Dataset Provenance for Large Language Models
Mina Namazi, Alexander Nemecek, Erman Ayday
arxiv.org/abs/2506.20915 mastoxiv.page/@arXiv_csCR_bot/
- SpecCLIP: Aligning and Translating Spectroscopic Measurements for Stars
Zhao, Huang, Xue, Kong, Liu, Tang, Beers, Ting, Luo
arxiv.org/abs/2507.01939 mastoxiv.page/@arXiv_astrophIM
- Towards Facilitated Fairness Assessment of AI-based Skin Lesion Classifiers Through GenAI-based I...
Ko Watanabe, Stanislav Frolov, Aya Hassan, David Dembinsky, Adriano Lucieri, Andreas Dengel
arxiv.org/abs/2507.17860 mastoxiv.page/@arXiv_csCV_bot/
- PASS: Probabilistic Agentic Supernet Sampling for Interpretable and Adaptive Chest X-Ray Reasoning
Yushi Feng, Junye Du, Yingying Hong, Qifan Wang, Lequan Yu
arxiv.org/abs/2508.10501 mastoxiv.page/@arXiv_csAI_bot/
- Unified Acoustic Representations for Screening Neurological and Respiratory Pathologies from Voice
Ran Piao, Yuan Lu, Hareld Kemps, Tong Xia, Aaqib Saeed
arxiv.org/abs/2508.20717 mastoxiv.page/@arXiv_csSD_bot/
- Machine Learning-Driven Predictive Resource Management in Complex Science Workflows
Tasnuva Chowdhury, et al.
arxiv.org/abs/2509.11512 mastoxiv.page/@arXiv_csDC_bot/
- MatchFixAgent: Language-Agnostic Autonomous Repository-Level Code Translation Validation and Repair
Ali Reza Ibrahimzada, Brandon Paulsen, Reyhaneh Jabbarvand, Joey Dodds, Daniel Kroening
arxiv.org/abs/2509.16187 mastoxiv.page/@arXiv_csSE_bot/
- Automated Machine Learning Pipeline: Large Language Models-Assisted Automated Dataset Generation ...
Adam Lahouari, Jutta Rogal, Mark E. Tuckerman
arxiv.org/abs/2509.21647 mastoxiv.page/@arXiv_condmatmt
- Quantifying the Impact of Structured Output Format on Large Language Models through Causal Inference
Han Yuan, Yue Zhao, Li Zhang, Wuqiong Luo, Zheng Ma
arxiv.org/abs/2509.21791 mastoxiv.page/@arXiv_csCL_bot/
- The Generation Phases of Flow Matching: a Denoising Perspective
Anne Gagneux, Ségolène Martin, Rémi Gribonval, Mathurin Massias
arxiv.org/abs/2510.24830 mastoxiv.page/@arXiv_csCV_bot/
- Data-driven uncertainty-aware seakeeping prediction of the Delft 372 catamaran using ensemble Han...
Giorgio Palma, Andrea Serani, Matteo Diez
arxiv.org/abs/2511.04461 mastoxiv.page/@arXiv_eessSY_bo
- Generalized infinite dimensional Alpha-Procrustes based geometries
Salvish Goomanee, Andi Han, Pratik Jawanpuria, Bamdev Mishra
arxiv.org/abs/2511.09801 mastoxiv.page/@arXiv_statML_bo
toXiv_bot_toot

@BBC3MusicBot@mastodonapp.uk
2025-12-21 23:30:11

🔊 #NowPlaying on #BBCRadio3:
#Unclassified
- Peace on Earth
Elizabeth Alker presents a mix of calm wintery sounds, ambient dreamscapes and tranquil tracks to soundtrack the longest night of the year.
Relisten now 👇
bbc.co.uk/programmes/m002ngbm

@sauer_lauwarm@mastodon.social
2026-01-22 06:08:28

instagram.com/p/DTy4y3VDpS0/?u

@cic_podcast@hostsharing.coop
2026-02-03 09:27:51

Our second episode is the first one on a more specific topic. This time we talk about knowing your Language. To communicate effectively through your code, you have to know your programming language, its features, and its idioms.
As always, enjoy, subscribe, and give us constructive feedback.

@relcfp@mastodon.social
2026-01-19 06:10:31

UC Berkeley Postdoctoral Fellow in Buddhist Studies
ift.tt/DhQaOFx
CFP: [Deadline extension] Variations 28/2021: Gender through technology (25.09.2021) Call…
via Input 4 RELCFP

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:10

Exploiting ID-Text Complementarity via Ensembling for Sequential Recommendation
Liam Collins, Bhuvesh Kumar, Clark Mingxuan Ju, Tong Zhao, Donald Loveland, Leonardo Neves, Neil Shah
arxiv.org/abs/2512.17820 arxiv.org/pdf/2512.17820 arxiv.org/html/2512.17820
arXiv:2512.17820v1 Announce Type: new
Abstract: Modern Sequential Recommendation (SR) models commonly utilize modality features to represent items, motivated in large part by recent advancements in language and vision modeling. To do so, several works completely replace ID embeddings with modality embeddings, claiming that modality embeddings render ID embeddings unnecessary because they can match or even exceed ID embedding performance. On the other hand, many works jointly utilize ID and modality features, but posit that complex fusion strategies, such as multi-stage training and/or intricate alignment architectures, are necessary for this joint utilization. However, underlying both these lines of work is a lack of understanding of the complementarity of ID and modality features. In this work, we address this gap by studying the complementarity of ID- and text-based SR models. We show that these models do learn complementary signals, meaning that either should provide performance gain when used properly alongside the other. Motivated by this, we propose a new SR method that preserves ID-text complementarity through independent model training, then harnesses it through a simple ensembling strategy. Despite this method's simplicity, we show it outperforms several competitive SR baselines, implying that both ID and text features are necessary to achieve state-of-the-art SR performance but complex fusion architectures are not.
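The abstract describes training ID-based and text-based recommenders independently and then fusing them with a simple ensemble. A minimal sketch of what score-level ensembling can look like, assuming each model emits per-item relevance scores (the function name, the `alpha` weight, and the item names are illustrative, not from the paper):

```python
def ensemble_scores(id_scores, text_scores, alpha=0.5):
    """Linearly combine per-item scores from two independently trained rankers.

    alpha weights the ID model; (1 - alpha) weights the text model.
    Items seen by only one model default to a score of 0.0 from the other.
    """
    items = set(id_scores) | set(text_scores)
    return {
        item: alpha * id_scores.get(item, 0.0)
              + (1 - alpha) * text_scores.get(item, 0.0)
        for item in items
    }

# Toy example: the ID model prefers item_a, the text model prefers item_b;
# the ensemble lets the complementary signals vote.
id_scores = {"item_a": 0.9, "item_b": 0.2, "item_c": 0.5}
text_scores = {"item_a": 0.4, "item_b": 0.8, "item_c": 0.5}

fused = ensemble_scores(id_scores, text_scores)
top = max(fused, key=fused.get)
```

The appeal of this shape, per the abstract, is that no joint training or fusion architecture is needed: each model is trained on its own, and complementarity is harvested only at scoring time.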