Samsung's consumer device chief TM Roh says the company is "open to strategic co-operation" with more AI groups, having recently added Perplexity to its mobile OS (Michael Acton/Financial Times)
https://www.ft.com/content/3752d058-d3ee-41a4-b702-d49ae7f61b5c
i'm playing records that don't spark joy tonight, thanking them for their service before i trade them in at the record store down the street
#vinyl #jazz #harryjames
Human #echolocation works step by step https://www.sciencenews.org/article/human-echolocation-blind-brain "A study reveals how individual tongue clicks and their echoes …
High-bandwidth frequency domain multiplexed readout of transition-edge sensors for neutrinoless double beta decay searches
M. Adamič (McGill,LBNL), M. Beretta (UCB,INFN), J. Camilleri (LBNL,Virginia Tech), C. Capelli (LBNL,Zurich U.), M. A. Dobbs (McGill), T. Elleflot (LBNL), B. K. Fujikawa (LBNL), Yu. G. Kolomensky (LBNL,UCB), D. Mayer (MIT), J. Montgomery (McGill), V. Novosad (ANL), A. M. Sindhwad (UCB), V. Singh (UCB), G. Smecher (t0.technology), A. Suzuki (LBNL), B. Welliver (UCB)
https://arxiv.org/abs/2601.23106 https://arxiv.org/pdf/2601.23106 https://arxiv.org/html/2601.23106
arXiv:2601.23106v1 Announce Type: new
Abstract: The next generation of cryogenic neutrinoless double-beta decay experiments requires increasingly fast readout in order to improve background discrimination. These experiments, operated as cryogenic calorimeters at $\sim$10 mK, are usually read out by high-impedance neutron transmutation doped (NTD) thermistors, which provide good energy resolution but are limited by $\sim$1 ms response times. Superconducting detectors, such as transition-edge sensors (TESs) with a time resolution of $\sim$100 $\mu$s, offer superior timing performance over NTD semiconductor bolometers. To make this technology viable for application to a thousand or more channels, multiplexed readout is necessary in order to minimize the thermal load and radioactive contamination induced by the readout. Frequency-domain multiplexed readout (fMux) for TESs, previously developed at Berkeley Lab and McGill University, is currently in use for mm-wave telescopes with detector sampling rates on the order of 100 Hz. We demonstrate a new readout system, based on the McGill/Berkeley digital fMux readout, that satisfies the higher bandwidth and noise requirements of the next generation of TES-instrumented cryogenic calorimeters. The new readout samples detectors at 156 kHz, three orders of magnitude faster than its cosmology-oriented predecessor. Each multiplexing readout module comprises ten superconducting resonators in the MHz range and a superconducting quantum interference device (SQUID), interfaced to high-bandwidth field-programmable gate array (FPGA)-based electronics for digital signal processing and low-latency feedback.
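The abstract describes the fMux idea: each TES amplitude-modulates its own MHz-range carrier through a superconducting resonator, a single SQUID carries the summed comb, and digital electronics demodulate each carrier back to a per-channel signal at the 156 kHz output rate. Below is a minimal, illustrative Python sketch of that demodulation step; the ADC rate, carrier frequencies, toy pulse, and the `demodulate` helper are assumptions for illustration, not the paper's firmware or parameters (beyond the 156 kHz output rate quoted in the abstract).

```python
# Hedged toy model of frequency-domain multiplexed (fMux) readout:
# each "detector" amplitude-modulates its own carrier, all carriers are
# summed onto one output line, and each channel is recovered by mixing
# with the known carrier and low-pass filtering / decimating.
# All numerical values here are illustrative assumptions.
import numpy as np

fs = 20e6                       # toy ADC sample rate (assumption)
t = np.arange(0, 2e-3, 1 / fs)  # 2 ms of data
carriers = np.array([1.2e6, 1.7e6, 2.3e6])  # per-channel MHz-range carriers

# Slow "detector" signals: channel 1 sees an exponential pulse, others are flat.
pulse = np.where(t > 0.5e-3, np.exp(-(t - 0.5e-3) / 0.3e-3), 0.0)
signals = np.vstack([np.zeros_like(t), 0.2 * pulse, np.zeros_like(t)])

# Summed output line: each carrier amplitude-modulated by its channel.
line_out = sum((1.0 + sig) * np.sin(2 * np.pi * f * t)
               for f, sig in zip(carriers, signals))

def demodulate(comb, f_carrier, decimate_to=156e3):
    """Mix the comb down with the known carrier, then average over blocks
    (a crude low-pass + decimation) to the per-channel output rate."""
    mixed = comb * 2 * np.sin(2 * np.pi * f_carrier * t)
    block = int(fs / decimate_to)            # samples per output point
    n = (len(mixed) // block) * block
    return mixed[:n].reshape(-1, block).mean(axis=1)

ch1 = demodulate(line_out, carriers[1])      # recovers ~1 + 0.2*pulse at ~156 kHz
print(ch1[:5], ch1.max())
```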
toXiv_bot_toot
Hierarchic-EEG2Text: Assessing EEG-To-Text Decoding across Hierarchical Abstraction Levels
Anupam Sharma, Harish Katti, Prajwal Singh, Shanmuganathan Raman, Krishna Miyapuram
https://arxiv.org/abs/2602.20932 https://arxiv.org/pdf/2602.20932 https://arxiv.org/html/2602.20932
arXiv:2602.20932v1 Announce Type: new
Abstract: An electroencephalogram (EEG) records the spatially averaged electrical activity of neurons in the brain, measured from the human scalp. Prior studies have explored EEG-based classification of objects or concepts, often during passive viewing of briefly presented image or video stimuli, and typically with a limited number of classes. Because EEG exhibits a low signal-to-noise ratio, recognizing fine-grained representations across a large number of classes remains challenging; abstract-level object representations, however, may still be recoverable. In this work, we investigate whether EEG captures object representations across multiple hierarchical levels, and propose episodic analysis, in which a machine learning (ML) model is evaluated across various, yet related, classification tasks (episodes). Unlike prior episodic EEG studies that rely on fixed or randomly sampled classes of equal cardinality, we adopt hierarchy-aware episode sampling using WordNet to generate episodes with a variable number of classes drawn from diverse levels of the hierarchy. We also present the largest episodic framework in the EEG domain for detecting observed text from EEG signals, built on the PEERS dataset, which comprises 931,538 EEG samples under 1,610 object labels, acquired from 264 human participants (subjects) performing controlled cognitive tasks, enabling the study of neural dynamics underlying perception, decision-making, and performance monitoring.
We examine how the semantic abstraction level affects classification performance across multiple learning techniques and architectures, providing a comprehensive analysis. Models tend to perform better when the classification categories are drawn from higher levels of the hierarchy, suggesting sensitivity to abstraction level. Our work highlights abstraction depth as an underexplored dimension of EEG decoding and motivates future research in this direction.
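For readers curious what hierarchy-aware episode sampling might look like in practice, here is a hedged Python sketch. It is not the authors' implementation: the root synsets, episode sizes, and the assumption that dataset object labels map directly onto WordNet lemmas are all illustrative. It only shows how WordNet's hypernym/hyponym structure can yield episodes at different abstraction levels.

```python
# Hedged sketch of hierarchy-aware episode sampling (illustrative, not the
# paper's code): each episode's classes are direct hyponym subtrees of a
# chosen WordNet root, so deeper roots give finer-grained (less abstract)
# classes. Requires nltk with the WordNet corpus downloaded
# (nltk.download('wordnet')).
import random
from nltk.corpus import wordnet as wn

def subtree_lemmas(synset, max_lemmas=50):
    """Collect lemma names under a synset (its hyponym closure)."""
    lemmas = {l.name() for l in synset.lemmas()}
    for hypo in synset.closure(lambda s: s.hyponyms()):
        lemmas.update(l.name() for l in hypo.lemmas())
        if len(lemmas) >= max_lemmas:
            break
    return sorted(lemmas)

def sample_episode(root_name, n_classes=4, rng=random):
    """Build one episode: each class is one direct hyponym subtree of the root."""
    root = wn.synset(root_name)
    branches = root.hyponyms()
    chosen = rng.sample(branches, min(n_classes, len(branches)))
    return {b.name(): subtree_lemmas(b) for b in chosen}

# A coarse episode (high abstraction) vs. a finer one (lower abstraction).
coarse = sample_episode('animal.n.01')
fine = sample_episode('dog.n.01')
print(list(coarse), list(fine))
```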
toXiv_bot_toot
Replaced article(s) found for cs.CL. https://arxiv.org/list/cs.CL/new
[1/5]:
- Beyond In-Distribution Success: Scaling Curves of CoT Granularity for Language Model Generalization
Ru Wang, Wei Huang, Selena Song, Haoyu Zhang, Qian Niu, Yusuke Iwasawa, Yutaka Matsuo, Jiaxian Guo
https://arxiv.org/abs/2502.18273 https://mastoxiv.page/@arXiv_csCL_bot/114069031700102129
- Benchmarking NLP-supported Language Sample Analysis for Swiss Children's Speech
Anja Ryser, Yingqiang Gao, Sarah Ebling
https://arxiv.org/abs/2504.00780 https://mastoxiv.page/@arXiv_csCL_bot/114267149909002069
- Cultural Biases of Large Language Models and Humans in Historical Interpretation
Fabio Celli, Georgios Spathulas
https://arxiv.org/abs/2504.02572 https://mastoxiv.page/@arXiv_csCL_bot/114278467094094490
- BRIDGE: Benchmarking Large Language Models for Understanding Real-world Clinical Practice Text
Jiageng Wu, et al.
https://arxiv.org/abs/2504.19467 https://mastoxiv.page/@arXiv_csCL_bot/114420036189999973
- Understanding the Anchoring Effect of LLM with Synthetic Data: Existence, Mechanism, and Potentia...
Yiming Huang, Biquan Bie, Zuqiu Na, Weilin Ruan, Songxin Lei, Yutao Yue, Xinlei He
https://arxiv.org/abs/2505.15392 https://mastoxiv.page/@arXiv_csCL_bot/114550277171100272
- Just as Humans Need Vaccines, So Do Models: Model Immunization to Combat Falsehoods
Raza, Qureshi, Farooq, Lotif, Chadha, Pandya, Emmanouilidis
https://arxiv.org/abs/2505.17870 https://mastoxiv.page/@arXiv_csCL_bot/114572956853819813
- LingoLoop Attack: Trapping MLLMs via Linguistic Context and State Entrapment into Endless Loops
Fu, Jiang, Hong, Li, Guo, Yang, Chen, Zhang
https://arxiv.org/abs/2506.14493 https://mastoxiv.page/@arXiv_csCL_bot/114703502552989170
- GHTM: A Graph-based Hybrid Topic Modeling Approach with a Benchmark Dataset for the Low-Resource ...
Farhana Haque, Md. Abdur Rahman, Sumon Ahmed
https://arxiv.org/abs/2508.00605 https://mastoxiv.page/@arXiv_csCL_bot/114969875643478303
- Link Prediction for Event Logs in the Process Industry
Anastasia Zhukova, Thomas Walton, Christian E. Lobmüller, Bela Gipp
https://arxiv.org/abs/2508.09096 https://mastoxiv.page/@arXiv_csCL_bot/115020938764936882
- AirQA: A Comprehensive QA Dataset for AI Research with Instance-Level Evaluation
Huang, Cao, Zhang, Kang, Wang, Wang, Luo, Zheng, Qian, Chen, Yu
https://arxiv.org/abs/2509.16952 https://mastoxiv.page/@arXiv_csCL_bot/115253526588472475
- Multi-View Attention Multiple-Instance Learning Enhanced by LLM Reasoning for Cognitive Distortio...
Jun Seo Kim, Hyemi Kim, Woo Joo Oh, Hongjin Cho, Hochul Lee, Hye Hyeon Kim
https://arxiv.org/abs/2509.17292 https://mastoxiv.page/@arXiv_csCL_bot/115253586227941157
- Dual-Space Smoothness for Robust and Balanced LLM Unlearning
Han Yan, Zheyuan Liu, Meng Jiang
https://arxiv.org/abs/2509.23362 https://mastoxiv.page/@arXiv_csCL_bot/115293308293558024
- The Rise of AfricaNLP: Contributions, Contributors, Community Impact, and Bibliometric Analysis
Tadesse Destaw Belay, et al.
https://arxiv.org/abs/2509.25477 https://mastoxiv.page/@arXiv_csCL_bot/115298213432594791
- Open ASR Leaderboard: Towards Reproducible and Transparent Multilingual and Long-Form Speech Reco...
Srivastav, Zheng, Bezzam, Le Bihan, Koluguri, Żelasko, Majumdar, Moumen, Gandhi
https://arxiv.org/abs/2510.06961 https://mastoxiv.page/@arXiv_csCL_bot/115343748052193267
- Neuron-Level Analysis of Cultural Understanding in Large Language Models
Taisei Yamamoto, Ryoma Kumon, Danushka Bollegala, Hitomi Yanaka
https://arxiv.org/abs/2510.08284 https://mastoxiv.page/@arXiv_csCL_bot/115349533441895984
- CLMN: Concept based Language Models via Neural Symbolic Reasoning
Yibo Yang
https://arxiv.org/abs/2510.10063 https://mastoxiv.page/@arXiv_csCL_bot/115372392366793754
- Schema for In-Context Learning
Chen, Chen, Wang, Leong, Fung, Bernales, Aspuru-Guzik
https://arxiv.org/abs/2510.13905 https://mastoxiv.page/@arXiv_csCL_bot/115389057899856601
- Evaluating Latent Knowledge of Public Tabular Datasets in Large Language Models
Matteo Silvestri, Fabiano Veglianti, Flavio Giorgi, Fabrizio Silvestri, Gabriele Tolomei
https://arxiv.org/abs/2510.20351 https://mastoxiv.page/@arXiv_csCL_bot/115428615784704418
- LuxIT: A Luxembourgish Instruction Tuning Dataset from Monolingual Seed Data
Julian Valline, Cedric Lothritz, Siwen Guo, Jordi Cabot
https://arxiv.org/abs/2510.24434 https://mastoxiv.page/@arXiv_csCL_bot/115457025096322944
- Surfacing Subtle Stereotypes: A Multilingual, Debate-Oriented Evaluation of Modern LLMs
Muhammed Saeed, Muhammad Abdul-mageed, Shady Shehata
https://arxiv.org/abs/2511.01187 https://mastoxiv.page/@arXiv_csCL_bot/115491321130591723
toXiv_bot_toot
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[2/6]:
- Performance Asymmetry in Model-Based Reinforcement Learning
Jing Yu Lim, Rushi Shah, Zarif Ikram, Samson Yu, Haozhe Ma, Tze-Yun Leong, Dianbo Liu
https://arxiv.org/abs/2505.19698 https://mastoxiv.page/@arXiv_csLG_bot/114578810521008766
- Towards Robust Real-World Multivariate Time Series Forecasting: A Unified Framework for Dependenc...
Jinkwan Jang, Hyungjin Park, Jinmyeong Choi, Taesup Kim
https://arxiv.org/abs/2506.08660 https://mastoxiv.page/@arXiv_csLG_bot/114664238967892509
- Wasserstein Barycenter Soft Actor-Critic
Zahra Shahrooei, Ali Baheri
https://arxiv.org/abs/2506.10167 https://mastoxiv.page/@arXiv_csLG_bot/114675175949432731
- Foundation Models for Causal Inference via Prior-Data Fitted Networks
Yuchen Ma, Dennis Frauen, Emil Javurek, Stefan Feuerriegel
https://arxiv.org/abs/2506.10914 https://mastoxiv.page/@arXiv_csLG_bot/114675529854402158
- FREQuency ATTribution: benchmarking frequency-based occlusion for time series data
Dominique Mercier, Andreas Dengel, Sheraz Ahmed
https://arxiv.org/abs/2506.18481 https://mastoxiv.page/@arXiv_csLG_bot/114738421450807709
- Complexity-aware fine-tuning
Andrey Goncharov, Daniil Vyazhev, Petr Sychev, Edvard Khalafyan, Alexey Zaytsev
https://arxiv.org/abs/2506.21220 https://mastoxiv.page/@arXiv_csLG_bot/114754764750730849
- Transfer Learning in Infinite Width Feature Learning Networks
Clarissa Lauditi, Blake Bordelon, Cengiz Pehlevan
https://arxiv.org/abs/2507.04448 https://mastoxiv.page/@arXiv_csLG_bot/114818005803079705
- A hierarchy tree data structure for behavior-based user segment representation
Liu, Kang, Iyer, Malik, Li, Wang, Lu, Zhao, Wang, Liu, Liu, Liang, Yu
https://arxiv.org/abs/2508.01115 https://mastoxiv.page/@arXiv_csLG_bot/114975999992144374
- One-Step Flow Q-Learning: Addressing the Diffusion Policy Bottleneck in Offline Reinforcement Lea...
Thanh Nguyen, Chang D. Yoo
https://arxiv.org/abs/2508.13904 https://mastoxiv.page/@arXiv_csLG_bot/115060568241390847
- Uncertainty Propagation Networks for Neural Ordinary Differential Equations
Hadi Jahanshahi, Zheng H. Zhu
https://arxiv.org/abs/2508.16815 https://mastoxiv.page/@arXiv_csLG_bot/115094785677272005
- Learning Unified Representations from Heterogeneous Data for Robust Heart Rate Modeling
Zhengdong Huang, Zicheng Xie, Wentao Tian, Jingyu Liu, Lunhong Dong, Peng Yang
https://arxiv.org/abs/2508.21785 https://mastoxiv.page/@arXiv_csLG_bot/115128450608548173
- Monte Carlo Tree Diffusion with Multiple Experts for Protein Design
Liu, Cao, Jiang, Luo, Duan, Wang, Sosnick, Xu, Stevens
https://arxiv.org/abs/2509.15796 https://mastoxiv.page/@arXiv_csLG_bot/115247429156900905
- From Samples to Scenarios: A New Paradigm for Probabilistic Forecasting
Xilin Dai, Zhijian Xu, Wanxu Cai, Qiang Xu
https://arxiv.org/abs/2509.19975 https://mastoxiv.page/@arXiv_csLG_bot/115264498084813952
- Why High-rank Neural Networks Generalize?: An Algebraic Framework with RKHSs
Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
https://arxiv.org/abs/2509.21895 https://mastoxiv.page/@arXiv_csLG_bot/115287261047939306
- From Parameters to Behaviors: Unsupervised Compression of the Policy Space
Davide Tenedini, Riccardo Zamboni, Mirco Mutti, Marcello Restelli
https://arxiv.org/abs/2509.22566 https://mastoxiv.page/@arXiv_csLG_bot/115287379672141023
- RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility
Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang
https://arxiv.org/abs/2509.23115 https://mastoxiv.page/@arXiv_csLG_bot/115293273559547106
- Polychromic Objectives for Reinforcement Learning
Jubayer Ibn Hamid, Ifdita Hasan Orney, Ellen Xu, Chelsea Finn, Dorsa Sadigh
https://arxiv.org/abs/2509.25424 https://mastoxiv.page/@arXiv_csLG_bot/115298579764580635
- Recursive Self-Aggregation Unlocks Deep Thinking in Large Language Models
Siddarth Venkatraman, et al.
https://arxiv.org/abs/2509.26626 https://mastoxiv.page/@arXiv_csLG_bot/115298789487177431
- Cautious Weight Decay
Chen, Li, Liang, Su, Xie, Pierse, Liang, Lao, Liu
https://arxiv.org/abs/2510.12402 https://mastoxiv.page/@arXiv_csLG_bot/115377759317818093
- TeamFormer: Shallow Parallel Transformers with Progressive Approximation
Wei Wang, Xiao-Yong Wei, Qing Li
https://arxiv.org/abs/2510.15425 https://mastoxiv.page/@arXiv_csLG_bot/115405933861293858
- Latent-Augmented Discrete Diffusion Models
Dario Shariatian, Alain Durmus, Umut Simsekli, Stefano Peluchetti
https://arxiv.org/abs/2510.18114 https://mastoxiv.page/@arXiv_csLG_bot/115417332500265972
- Predicting Metabolic Dysfunction-Associated Steatotic Liver Disease using Machine Learning Method...
Mary E. An, Paul Griffin, Jonathan G. Stine, Ramakrishna Balakrishnan, Soundar Kumara
https://arxiv.org/abs/2510.22293 https://mastoxiv.page/@arXiv_csLG_bot/115451746201804373
toXiv_bot_toot
Exploring books on scientific discovery never fails to spark my curiosity. The birth of transformative ideas and their resonance in fields like business is astounding. One key discovery can spur innovation across multiple domains — a true testament to our interconnected knowledge. Forever inspired! #ScienceAndBusiness #CuriosityUnleashed 📚