Health care, new cars and new homes feel unaffordable to most Americans,
a Washington Post-ABC News-Ipsos poll shows.
Most Americans say that they can afford basic necessities like their current housing costs, groceries, utilities and gasoline.
But large numbers across income levels also say larger expenses and the cost of things associated with an enjoyable life
— including taking a weeklong vacation
— are out of reach.
Overall, 53 percent of adults say…
Samsung unveils the $1,299 Galaxy S26 Ultra with a Privacy Display feature that limits screen legibility, an all-new agentic AI, an improved night mode, and more (Prakhar Khanna/ZDNET)
https://www.zdnet.com/article/samsung-galaxy-s26-ultra-hands-on-unpacked-2026/…
If you think modern games are garbage, then this must be the game that confirms your views! #DanskerTrut
Hierarchic-EEG2Text: Assessing EEG-To-Text Decoding across Hierarchical Abstraction Levels
Anupam Sharma, Harish Katti, Prajwal Singh, Shanmuganathan Raman, Krishna Miyapuram
https://arxiv.org/abs/2602.20932 https://arxiv.org/pdf/2602.20932 https://arxiv.org/html/2602.20932
arXiv:2602.20932v1 Announce Type: new
Abstract: An electroencephalogram (EEG) records the spatially averaged electrical activity of neurons in the brain, measured from the human scalp. Prior studies have explored EEG-based classification of objects or concepts, often for passive viewing of briefly presented image or video stimuli, with limited classes. Because EEG exhibits a low signal-to-noise ratio, recognizing fine-grained representations across a large number of classes remains challenging; however, abstract-level object representations may exist. In this work, we investigate whether EEG captures object representations across multiple hierarchical levels, and propose episodic analysis, in which a machine learning (ML) model is evaluated across various, yet related, classification tasks (episodes). Unlike prior episodic EEG studies that rely on fixed or randomly sampled classes of equal cardinality, we adopt hierarchy-aware episode sampling using WordNet to generate episodes with variable classes of diverse hierarchy. We also present the largest episodic framework in the EEG domain for detecting observed text from EEG signals in the PEERS dataset, comprising 931,538 EEG samples under 1,610 object labels, acquired from 264 human participants (subjects) performing controlled cognitive tasks, enabling the study of neural dynamics underlying perception, decision-making, and performance monitoring.
We examine how the semantic abstraction level affects classification performance across multiple learning techniques and architectures, providing a comprehensive analysis. Model performance tends to improve when the classification categories are drawn from higher levels of the hierarchy, suggesting sensitivity to abstraction. Our work highlights abstraction depth as an underexplored dimension of EEG decoding and motivates future research in this direction.
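The hierarchy-aware episode sampling described in the abstract can be illustrated with a small sketch. This is an assumption-laden toy, not the paper's code: the miniature taxonomy, the function names, and the pooling of leaf labels into coarse classes are all illustrative stand-ins for the WordNet-driven sampler the authors describe.

```python
# Toy WordNet-style taxonomy: each node maps to its children.
# (Illustrative only; the paper samples episodes from WordNet itself.)
TAXONOMY = {
    "entity": ["animal", "artifact"],
    "animal": ["dog", "cat", "bird"],
    "artifact": ["vehicle", "tool"],
    "vehicle": ["car", "bicycle"],
    "tool": ["hammer", "saw"],
}

def leaves(node):
    """Return the leaf labels under a taxonomy node."""
    children = TAXONOMY.get(node)
    if not children:
        return [node]
    out = []
    for child in children:
        out.extend(leaves(child))
    return out

def episode_classes(level_nodes):
    """One episode: the classes are the nodes at the chosen hierarchy
    level; each class pools the EEG samples of all its leaf labels."""
    return {node: leaves(node) for node in level_nodes}

# A coarse episode (high abstraction, 2 broad classes) ...
coarse = episode_classes(TAXONOMY["entity"])
# ... versus a fine episode (leaf level under "animal", 3 classes).
fine = episode_classes(TAXONOMY["animal"])
print(sorted(coarse))  # ['animal', 'artifact']
print(sorted(fine))    # ['bird', 'cat', 'dog']
```

Varying the level at which `level_nodes` is drawn is what gives episodes of variable class count and granularity, rather than the fixed, equal-cardinality classes of prior episodic studies.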
toXiv_bot_toot
🆔 Why join the GÉANT project’s Wallet subtask mailing list?
Digital identity wallets are becoming a reality across Europe, and the decisions made now will shape how students, researchers and institutions collaborate in the future.
Join the conversation, contribute R&E use cases, explore interoperability, and help ensure that the values of openness, trust, and collaboration remain at the heart of digital identity.
🔗 Subscribe here:
One hot topic for every team at the 2026 NFL combine: Quarterback contracts, Kelce's future https://www.nytimes.com/athletic/7063486/2026/02/23/nfl-combine-team-future-question-quarterback-draft/
The thing that Renee Good now knows, that Tortuguita knows, that Heather Heyer knows, that I know only because I glimpsed it for a second, is that when you die fighting oppression you live forever in that memory of resistance. When we carve their names into a monument, along with all the other names of the murdered and disappeared, a monument that will stand, perhaps, across from the statue of Willem in the park where the Northwest Detention Center once stood, they will always be reminders of what it looks like to sacrifice everything in order to be on the right side of history.
The names of those who resist live as ghosts, summoned by name to haunt future oppressors, summoned by name to awaken our own conscience to the call. Martyrs, whispered like the White Rose or yelled as a threat like John Brown, cannot die so long as any of us with a bit of spine carries even an ounce of humanity.
It is possible to die knowing you did the right thing, and I have felt it. There is an acceptance that is impossible to imagine without being there, without feeling it for yourself. You have nothing to fear in resisting, even if it ends you. But if you fail to resist, you will never forget the shame of doing nothing.
Extending $\mu$P: Spectral Conditions for Feature Learning Across Optimizers
Akshita Gupta, Marieme Ngom, Sam Foreman, Venkatram Vishwanath
https://arxiv.org/abs/2602.20937 https://arxiv.org/pdf/2602.20937 https://arxiv.org/html/2602.20937
arXiv:2602.20937v1 Announce Type: new
Abstract: Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization ($\mu$P) is a set of scaling rules which aims to make the optimal HPs independent of the model size, thereby allowing the HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger, target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor programming approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo and Muon. We implement our $\mu$P derivations on multiple benchmark models and demonstrate zero-shot learning rate transfer across increasing model width for the above optimizers. Further, we provide empirical insights into depth-scaling parameterization for these optimizers.
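The hyperparameter-transfer idea can be made concrete with a minimal sketch. This is not the paper's framework: it only shows the widely known $\mu$P learning-rate rule for Adam-style optimizers (hidden- and output-layer LR scaling as 1/fan_in with width), which is the baseline the paper's spectral-condition derivations generalize to other optimizers. The function name and the example widths are illustrative.

```python
# Minimal sketch of the muP learning-rate rule for Adam-style updates
# (an assumption: the paper derives rules for many more optimizers).
# The LR tuned at a small base width is rescaled for a wider target
# model, so no re-tuning is needed as width grows.

def mup_adam_lr(base_lr, base_fan_in, fan_in, layer_type):
    """Per-layer learning rate under muP for an Adam-style optimizer.

    layer_type: 'input'  -> fan_in is fixed by the data dim, LR unscaled;
                'hidden' or 'output' -> LR scales as 1/fan_in.
    """
    if layer_type == "input":
        return base_lr
    return base_lr * base_fan_in / fan_in

# HPs tuned at base width 128, transferred to a width-1024 target model.
base_width, target_width = 128, 1024
layers = [("input", 32), ("hidden", target_width), ("output", target_width)]
lrs = {f"{kind}(fan_in={n})": mup_adam_lr(1e-3, base_width, n, kind)
       for kind, n in layers}
print(lrs)  # hidden/output LRs shrink by 128/1024 = 0.125x
```

The spectral-conditions view reaches the same rule by requiring each weight matrix and each update to keep a spectral norm of order sqrt(fan_out/fan_in), which pins down how the LR must shrink as width grows.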
Hundreds of thousands of people have taken part in rallies around the world
to show their solidarity with anti-government demonstrators in Iran
whose continued protests have been met with brutal and deadly repression.
On Saturday, Reza Pahlavi, the exiled son of Iran’s last shah,
addressed a crowd of 200,000 people in Munich,
telling them he was ready to lead the country to a “secular democratic future”.
Pahlavi urged Iranians at home and abroad to continu…
Crosslisted article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[1/3]:
- SMaRT: Online Reusable Resource Assignment and an Application to Mediation in the Kenyan Judiciary
Farabi, Pinto, Lu, Ramos-Maqueda, Das, Deeb, Sautmann
https://arxiv.org/abs/2602.18431 https://mastoxiv.page/@arXiv_csCY_bot/116119352329590193
- Benchmarking Distilled Language Models: Performance and Efficiency in Resource-Constrained Settings
Sachin Gopal Wani, Eric Page, Ajay Dholakia, David Ellison
https://arxiv.org/abs/2602.20164 https://mastoxiv.page/@arXiv_csCL_bot/116130101399805837
- VISION-ICE: Video-based Interpretation and Spatial Identification of Arrhythmia Origins via Neura...
Dorsa EPMoghaddam, Feng Gao, Drew Bernard, Kavya Sinha, Mehdi Razavi, Behnaam Aazhang
https://arxiv.org/abs/2602.20165 https://mastoxiv.page/@arXiv_csCV_bot/116130222034322594
- Benchmarking Early Deterioration Prediction Across Hospital-Rich and MCI-Like Emergency Triage Un...
KMA Solaiman, Joshua Sebastian, Karma Tobden
https://arxiv.org/abs/2602.20168 https://mastoxiv.page/@arXiv_csCY_bot/116130239074411770
- Cross-Chirality Generalization by Axial Vectors for Hetero-Chiral Protein-Peptide Interaction Design
Yang, Tian, Jia, Zhang, Zheng, Wang, Su, He, Liu, Lan
https://arxiv.org/abs/2602.20176 https://mastoxiv.page/@arXiv_qbioBM_bot/116130281674122586
- Enhancing Heat Sink Efficiency in MOSFETs using Physics Informed Neural Networks: A Systematic St...
Aniruddha Bora, Isabel K. Alvarez, Julie Chalfant, Chryssostomos Chryssostomidis
https://arxiv.org/abs/2602.20177 https://mastoxiv.page/@arXiv_csNE_bot/116130397676559696
- Data-Driven Deep MIMO Detection: Network Architectures and Generalization Analysis
Yongwei Yi, Xinping Yi, Wenjin Wang, Xiao Li, Shi Jin
https://arxiv.org/abs/2602.20178 https://mastoxiv.page/@arXiv_eessSP_bot/116130257424413457
- OrgFlow: Generative Modeling of Organic Crystal Structures from Molecular Graphs
Mohammadmahdi Vahediahmar, Matthew A. McDonald, Feng Liu
https://arxiv.org/abs/2602.20195 https://mastoxiv.page/@arXiv_condmatmtrlsci_bot/116130271189617558
- KEMP-PIP: A Feature-Fusion Based Approach for Pro-inflammatory Peptide Prediction
Soumik Deb Niloy, Md. Fahmid-Ul-Alam Juboraj, Swakkhar Shatabda
https://arxiv.org/abs/2602.20198 https://mastoxiv.page/@arXiv_qbioQM_bot/116130341315320687
- Regressor-guided Diffusion Model for De Novo Peptide Sequencing with Explicit Mass Control
Shaorong Chen, Jingbo Zhou, Jun Xia
https://arxiv.org/abs/2602.20209 https://mastoxiv.page/@arXiv_qbioQM_bot/116130374083646541
- The Sim-to-Real Gap in MRS Quantification: A Systematic Deep Learning Validation for GABA
Zien Ma, S. M. Shermer, Oktay Karakuş, Frank C. Langbein
https://arxiv.org/abs/2602.20289 https://mastoxiv.page/@arXiv_eessSP_bot/116130267228834775
- Gap-Dependent Bounds for Nearly Minimax Optimal Reinforcement Learning with Linear Function Appro...
Haochen Zhang, Zhong Zheng, Lingzhou Xue
https://arxiv.org/abs/2602.20297 https://mastoxiv.page/@arXiv_statML_bot/116130255458256497
- Multilevel Determinants of Overweight and Obesity Among U.S. Children Aged 10-17: Comparative Eva...
Joyanta Jyoti Mondal
https://arxiv.org/abs/2602.20303 https://mastoxiv.page/@arXiv_csAI_bot/116130097466859145
- An artificial intelligence framework for end-to-end rare disease phenotyping from clinical notes ...
Shyr, Hu, Tinker, Cassini, Byram, Hamid, Fabbri, Wright, Peterson, Bastarache, Xu
https://arxiv.org/abs/2602.20324 https://mastoxiv.page/@arXiv_csAI_bot/116130100089848459
- Circuit Tracing in Vision-Language Models: Understanding the Internal Mechanisms of Multimodal Th...
Jingcheng Yang, Tianhu Xiong, Shengyi Qian, Klara Nahrstedt, Mingyuan Wu
https://arxiv.org/abs/2602.20330 https://mastoxiv.page/@arXiv_csCV_bot/116130463214879334
- No One Size Fits All: QueryBandits for Hallucination Mitigation
Nicole Cho, William Watson, Alec Koppel, Sumitra Ganesh, Manuela Veloso
https://arxiv.org/abs/2602.20332 https://mastoxiv.page/@arXiv_csCL_bot/116130370809116915
- Learning During Detection: Continual Learning for Neural OFDM Receivers via DMRS
Mohanad Obeed, Ming Jian
https://arxiv.org/abs/2602.20361 https://mastoxiv.page/@arXiv_csIT_bot/116130289537785136
- Detecting and Mitigating Group Bias in Heterogeneous Treatment Effects
Joel Persson, Jurriën Bakker, Dennis Bohle, Stefan Feuerriegel, Florian von Wangenheim
https://arxiv.org/abs/2602.20383 https://mastoxiv.page/@arXiv_statME_bot/116130509065601748
- Selecting Optimal Variable Order in Autoregressive Ising Models
Shiba Biswal, Marc Vuffray, Andrey Y. Lokhov
https://arxiv.org/abs/2602.20394 https://mastoxiv.page/@arXiv_statML_bot/116130299369541741