Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@Techmeme@techhub.social
2026-03-08 05:55:53

Samsung's consumer device chief TM Roh says it was "open to strategic co-operation" with more AI groups, having recently added Perplexity to its mobile OS (Michael Acton/Financial Times)
ft.com/content/3752d058-d3ee-4

@peterhoneyman@a2mi.social
2026-04-11 02:32:24

i'm playing records that don't spark joy tonight, thanking them for their service before i trade them in at the record store down the street
#vinyl #jazz #harryjames

This photo shows a classic jazz reissue album featuring the legendary trumpeter Harry James.

The Album:
"More Harry James in Hi-Fi"

Released on Pausa Records (Jazz Origin Series)

Catalog number PR 9037

Album Design:
Bold black text "HARRY JAMES." across the top

Meta-design showing multiple copies of Harry James album covers arranged in a grid

Central photograph of Harry James in a sharp suit holding his trumpet

Black and white photography

Text reads "more harry james in hi-fi" in styliz…

This photo shows a classic soul and R&B album opened in a gatefold display, showcasing contrasting but equally striking cover art.

Left side (Album back side):
"Leon Thomas - Blues and the Soulful Truth"

Portrait of Leon Thomas wearing a wide-brimmed straw hat

Colorful patterned shirt

Contemplative pose with hands clasped

Natural outdoor setting with green bokeh background

Warm, earthy tones

Right side (Album front side):
Beautiful psychedelic/spiritual artwork

Profile of a face tilted…

This photo shows a historic jazz album with dramatic black and white photography and handwritten notes.

The Album:
Thelonious Monk album

Subtitle: "les liaisons dangereuses - 1960" (Dangerous Liaisons)

The Monk logo (stylized "M" with musical note) visible in white

Album Design:
Striking black and white photography

Moody, atmospheric close-up of Monk

Dark tones with dramatic lighting creating deep shadows

Monk appears to be smoking, captured in profile/three-quarter view

Artistic, cinem…

@seeingwithsound@mas.to
2026-04-07 09:59:10

Human #echolocation works step by step sciencenews.org/article/human- "A study reveals how individual tongue clicks and their echoes …

@arXiv_physicsinsdet_bot@mastoxiv.page
2026-02-02 09:14:39

High-bandwidth frequency domain multiplexed readout of transition-edge sensors for neutrinoless double beta decay searches
M. Adamič (McGill,LBNL), M. Beretta (UCB,INFN), J. Camilleri (LBNL,Virginia Tech), C. Capelli (LBNL,Zurich U.), M. A. Dobbs (McGill), T. Elleflot (LBNL), B. K. Fujikawa (LBNL), Yu. G. Kolomensky (LBNL,UCB), D. Mayer (MIT), J. Montgomery (McGill), V. Novosad (ANL), A. M. Sindhwad (UCB), V. Singh (UCB), G. Smecher (t0.technology), A. Suzuki (LBNL), B. Welliver (UCB)
arxiv.org/abs/2601.23106 arxiv.org/pdf/2601.23106 arxiv.org/html/2601.23106
arXiv:2601.23106v1 Announce Type: new
Abstract: The next generation of cryogenic neutrinoless double-beta decay experiments requires increasingly fast readout in order to improve background discrimination. These experiments, operated as cryogenic calorimeters at $\sim$10 mK, are usually read out by high-impedance neutron transmutation doped (NTD) thermistors, which provide good energy resolution but are limited by $\sim$1 ms response times. Superconducting detectors, such as transition-edge sensors (TESs) with a time resolution of $\sim$100 $\mu$s, offer superior timing performance over NTD semiconductor bolometers. To make this technology viable for a thousand or more channels, multiplexed readout is necessary to minimize the thermal load and radioactive contamination introduced by the readout. Frequency-domain multiplexed readout (fMux) for TESs, previously developed at Berkeley Lab and McGill University, is currently in use for mm-wave telescopes with detector sampling rates on the order of 100 Hz. We demonstrate a new readout system, based on the McGill/Berkeley digital fMux readout, that satisfies the higher bandwidth and noise requirements of the next generation of TES-instrumented cryogenic calorimeters. The new readout samples detectors at 156 kHz, three orders of magnitude faster than its cosmology-oriented predecessor. Each multiplexing readout module comprises ten superconducting resonators in the MHz range and a superconducting quantum interference device (SQUID), interfaced to high-bandwidth field-programmable gate array (FPGA)-based electronics for digital signal processing and low-latency feedback.
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:38:51

Hierarchic-EEG2Text: Assessing EEG-To-Text Decoding across Hierarchical Abstraction Levels
Anupam Sharma, Harish Katti, Prajwal Singh, Shanmuganathan Raman, Krishna Miyapuram
arxiv.org/abs/2602.20932 arxiv.org/pdf/2602.20932 arxiv.org/html/2602.20932
arXiv:2602.20932v1 Announce Type: new
Abstract: An electroencephalogram (EEG) records the spatially averaged electrical activity of neurons in the brain, measured from the human scalp. Prior studies have explored EEG-based classification of objects or concepts, often for passive viewing of briefly presented image or video stimuli, with limited classes. Because EEG exhibits a low signal-to-noise ratio, recognizing fine-grained representations across a large number of classes remains challenging; however, abstract-level object representations may exist. In this work, we investigate whether EEG captures object representations across multiple hierarchical levels, and propose episodic analysis, in which a Machine Learning (ML) model is evaluated across various, yet related, classification tasks (episodes). Unlike prior episodic EEG studies that rely on fixed or randomly sampled classes of equal cardinality, we adopt hierarchy-aware episode sampling using WordNet to generate episodes with variable classes of diverse hierarchy. We also present the largest episodic framework in the EEG domain for detecting observed text from EEG signals in the PEERS dataset, comprising 931,538 EEG samples under 1,610 object labels, acquired from 264 human participants (subjects) performing controlled cognitive tasks, enabling the study of neural dynamics underlying perception, decision-making, and performance monitoring.
We examine how the semantic abstraction level affects classification performance across multiple learning techniques and architectures, providing a comprehensive analysis. The models tend to improve performance when the classification categories are drawn from higher levels of the hierarchy, suggesting sensitivity to abstraction. Our work highlights abstraction depth as an underexplored dimension of EEG decoding and motivates future research in this direction.
toXiv_bot_toot

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 11:12:28

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[1/5]:
- Beyond In-Distribution Success: Scaling Curves of CoT Granularity for Language Model Generalization
Ru Wang, Wei Huang, Selena Song, Haoyu Zhang, Qian Niu, Yusuke Iwasawa, Yutaka Matsuo, Jiaxian Guo
arxiv.org/abs/2502.18273 mastoxiv.page/@arXiv_csCL_bot/
- Benchmarking NLP-supported Language Sample Analysis for Swiss Children's Speech
Anja Ryser, Yingqiang Gao, Sarah Ebling
arxiv.org/abs/2504.00780 mastoxiv.page/@arXiv_csCL_bot/
- Cultural Biases of Large Language Models and Humans in Historical Interpretation
Fabio Celli, Georgios Spathulas
arxiv.org/abs/2504.02572 mastoxiv.page/@arXiv_csCL_bot/
- BRIDGE: Benchmarking Large Language Models for Understanding Real-world Clinical Practice Text
Jiageng Wu, et al.
arxiv.org/abs/2504.19467 mastoxiv.page/@arXiv_csCL_bot/
- Understanding the Anchoring Effect of LLM with Synthetic Data: Existence, Mechanism, and Potentia...
Yiming Huang, Biquan Bie, Zuqiu Na, Weilin Ruan, Songxin Lei, Yutao Yue, Xinlei He
arxiv.org/abs/2505.15392 mastoxiv.page/@arXiv_csCL_bot/
- Just as Humans Need Vaccines, So Do Models: Model Immunization to Combat Falsehoods
Raza, Qureshi, Farooq, Lotif, Chadha, Pandya, Emmanouilidis
arxiv.org/abs/2505.17870 mastoxiv.page/@arXiv_csCL_bot/
- LingoLoop Attack: Trapping MLLMs via Linguistic Context and State Entrapment into Endless Loops
Fu, Jiang, Hong, Li, Guo, Yang, Chen, Zhang
arxiv.org/abs/2506.14493 mastoxiv.page/@arXiv_csCL_bot/
- GHTM: A Graph-based Hybrid Topic Modeling Approach with a Benchmark Dataset for the Low-Resource ...
Farhana Haque, Md. Abdur Rahman, Sumon Ahmed
arxiv.org/abs/2508.00605 mastoxiv.page/@arXiv_csCL_bot/
- Link Prediction for Event Logs in the Process Industry
Anastasia Zhukova, Thomas Walton, Christian E. Lobmüller, Bela Gipp
arxiv.org/abs/2508.09096 mastoxiv.page/@arXiv_csCL_bot/
- AirQA: A Comprehensive QA Dataset for AI Research with Instance-Level Evaluation
Huang, Cao, Zhang, Kang, Wang, Wang, Luo, Zheng, Qian, Chen, Yu
arxiv.org/abs/2509.16952 mastoxiv.page/@arXiv_csCL_bot/
- Multi-View Attention Multiple-Instance Learning Enhanced by LLM Reasoning for Cognitive Distortio...
Jun Seo Kim, Hyemi Kim, Woo Joo Oh, Hongjin Cho, Hochul Lee, Hye Hyeon Kim
arxiv.org/abs/2509.17292 mastoxiv.page/@arXiv_csCL_bot/
- Dual-Space Smoothness for Robust and Balanced LLM Unlearning
Han Yan, Zheyuan Liu, Meng Jiang
arxiv.org/abs/2509.23362 mastoxiv.page/@arXiv_csCL_bot/
- The Rise of AfricaNLP: Contributions, Contributors, Community Impact, and Bibliometric Analysis
Tadesse Destaw Belay, et al.
arxiv.org/abs/2509.25477 mastoxiv.page/@arXiv_csCL_bot/
- Open ASR Leaderboard: Towards Reproducible and Transparent Multilingual and Long-Form Speech Reco...
Srivastav, Zheng, Bezzam, Le Bihan, Koluguri, Żelasko, Majumdar, Moumen, Gandhi
arxiv.org/abs/2510.06961 mastoxiv.page/@arXiv_csCL_bot/
- Neuron-Level Analysis of Cultural Understanding in Large Language Models
Taisei Yamamoto, Ryoma Kumon, Danushka Bollegala, Hitomi Yanaka
arxiv.org/abs/2510.08284 mastoxiv.page/@arXiv_csCL_bot/
- CLMN: Concept based Language Models via Neural Symbolic Reasoning
Yibo Yang
arxiv.org/abs/2510.10063 mastoxiv.page/@arXiv_csCL_bot/
- Schema for In-Context Learning
Chen, Chen, Wang, Leong, Fung, Bernales, Aspuru-Guzik
arxiv.org/abs/2510.13905 mastoxiv.page/@arXiv_csCL_bot/
- Evaluating Latent Knowledge of Public Tabular Datasets in Large Language Models
Matteo Silvestri, Fabiano Veglianti, Flavio Giorgi, Fabrizio Silvestri, Gabriele Tolomei
arxiv.org/abs/2510.20351 mastoxiv.page/@arXiv_csCL_bot/
- LuxIT: A Luxembourgish Instruction Tuning Dataset from Monolingual Seed Data
Julian Valline, Cedric Lothritz, Siwen Guo, Jordi Cabot
arxiv.org/abs/2510.24434 mastoxiv.page/@arXiv_csCL_bot/
- Surfacing Subtle Stereotypes: A Multilingual, Debate-Oriented Evaluation of Modern LLMs
Muhammed Saeed, Muhammad Abdul-mageed, Shady Shehata
arxiv.org/abs/2511.01187 mastoxiv.page/@arXiv_csCL_bot/
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:07:47

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/6]:
- Performance Asymmetry in Model-Based Reinforcement Learning
Jing Yu Lim, Rushi Shah, Zarif Ikram, Samson Yu, Haozhe Ma, Tze-Yun Leong, Dianbo Liu
arxiv.org/abs/2505.19698 mastoxiv.page/@arXiv_csLG_bot/
- Towards Robust Real-World Multivariate Time Series Forecasting: A Unified Framework for Dependenc...
Jinkwan Jang, Hyungjin Park, Jinmyeong Choi, Taesup Kim
arxiv.org/abs/2506.08660 mastoxiv.page/@arXiv_csLG_bot/
- Wasserstein Barycenter Soft Actor-Critic
Zahra Shahrooei, Ali Baheri
arxiv.org/abs/2506.10167 mastoxiv.page/@arXiv_csLG_bot/
- Foundation Models for Causal Inference via Prior-Data Fitted Networks
Yuchen Ma, Dennis Frauen, Emil Javurek, Stefan Feuerriegel
arxiv.org/abs/2506.10914 mastoxiv.page/@arXiv_csLG_bot/
- FREQuency ATTribution: benchmarking frequency-based occlusion for time series data
Dominique Mercier, Andreas Dengel, Sheraz Ahmed
arxiv.org/abs/2506.18481 mastoxiv.page/@arXiv_csLG_bot/
- Complexity-aware fine-tuning
Andrey Goncharov, Daniil Vyazhev, Petr Sychev, Edvard Khalafyan, Alexey Zaytsev
arxiv.org/abs/2506.21220 mastoxiv.page/@arXiv_csLG_bot/
- Transfer Learning in Infinite Width Feature Learning Networks
Clarissa Lauditi, Blake Bordelon, Cengiz Pehlevan
arxiv.org/abs/2507.04448 mastoxiv.page/@arXiv_csLG_bot/
- A hierarchy tree data structure for behavior-based user segment representation
Liu, Kang, Iyer, Malik, Li, Wang, Lu, Zhao, Wang, Liu, Liu, Liang, Yu
arxiv.org/abs/2508.01115 mastoxiv.page/@arXiv_csLG_bot/
- One-Step Flow Q-Learning: Addressing the Diffusion Policy Bottleneck in Offline Reinforcement Lea...
Thanh Nguyen, Chang D. Yoo
arxiv.org/abs/2508.13904 mastoxiv.page/@arXiv_csLG_bot/
- Uncertainty Propagation Networks for Neural Ordinary Differential Equations
Hadi Jahanshahi, Zheng H. Zhu
arxiv.org/abs/2508.16815 mastoxiv.page/@arXiv_csLG_bot/
- Learning Unified Representations from Heterogeneous Data for Robust Heart Rate Modeling
Zhengdong Huang, Zicheng Xie, Wentao Tian, Jingyu Liu, Lunhong Dong, Peng Yang
arxiv.org/abs/2508.21785 mastoxiv.page/@arXiv_csLG_bot/
- Monte Carlo Tree Diffusion with Multiple Experts for Protein Design
Liu, Cao, Jiang, Luo, Duan, Wang, Sosnick, Xu, Stevens
arxiv.org/abs/2509.15796 mastoxiv.page/@arXiv_csLG_bot/
- From Samples to Scenarios: A New Paradigm for Probabilistic Forecasting
Xilin Dai, Zhijian Xu, Wanxu Cai, Qiang Xu
arxiv.org/abs/2509.19975 mastoxiv.page/@arXiv_csLG_bot/
- Why High-rank Neural Networks Generalize?: An Algebraic Framework with RKHSs
Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
arxiv.org/abs/2509.21895 mastoxiv.page/@arXiv_csLG_bot/
- From Parameters to Behaviors: Unsupervised Compression of the Policy Space
Davide Tenedini, Riccardo Zamboni, Mirco Mutti, Marcello Restelli
arxiv.org/abs/2509.22566 mastoxiv.page/@arXiv_csLG_bot/
- RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility
Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang
arxiv.org/abs/2509.23115 mastoxiv.page/@arXiv_csLG_bot/
- Polychromic Objectives for Reinforcement Learning
Jubayer Ibn Hamid, Ifdita Hasan Orney, Ellen Xu, Chelsea Finn, Dorsa Sadigh
arxiv.org/abs/2509.25424 mastoxiv.page/@arXiv_csLG_bot/
- Recursive Self-Aggregation Unlocks Deep Thinking in Large Language Models
Siddarth Venkatraman, et al.
arxiv.org/abs/2509.26626 mastoxiv.page/@arXiv_csLG_bot/
- Cautious Weight Decay
Chen, Li, Liang, Su, Xie, Pierse, Liang, Lao, Liu
arxiv.org/abs/2510.12402 mastoxiv.page/@arXiv_csLG_bot/
- TeamFormer: Shallow Parallel Transformers with Progressive Approximation
Wei Wang, Xiao-Yong Wei, Qing Li
arxiv.org/abs/2510.15425 mastoxiv.page/@arXiv_csLG_bot/
- Latent-Augmented Discrete Diffusion Models
Dario Shariatian, Alain Durmus, Umut Simsekli, Stefano Peluchetti
arxiv.org/abs/2510.18114 mastoxiv.page/@arXiv_csLG_bot/
- Predicting Metabolic Dysfunction-Associated Steatotic Liver Disease using Machine Learning Method...
Mary E. An, Paul Griffin, Jonathan G. Stine, Ramakrishna Balakrishnan, Soundar Kumara
arxiv.org/abs/2510.22293 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot

@johnhobbs@mstdn.ca
2026-03-18 19:48:53

Exploring books on scientific discovery never fails to spark my curiosity. The birth of transformative ideas and their resonance in fields like business is astounding. One key discovery can spur innovation across multiple domains — a true testament to our interconnected knowledge. Forever inspired! #ScienceAndBusiness #CuriosityUnleashed 📚