Tootfinder

Opt-in global Mastodon full text search. Join the index!

@tiotasram@kolektiva.social
2025-07-19 08:14:41

AI, AGI, and learning efficiency
An addendum to this: I'm someone who would accurately be called "anti-AI" in the modern age, yet I'm also an "AI researcher" in some ways (have only dabbled in neural nets).
I don't like:
- AI systems that are the product of labor abuses towards the data workers who curate their training corpora.
- AI systems that use inordinate amounts of water and energy during an intensifying climate catastrophe.
- AI systems that are fundamentally untrustworthy and which reinforce and amplify human biases, *especially* when those systems are exposed in a way that invites harms.
- AI systems which are designed to "save" my attention or brain bandwidth, but where using them cripples my understanding of the things I use them for, when in fact that understanding was the thing I was supposed to be spending my time to gain, and where the later lack of that understanding will be costly to me.
- AI systems that are designed by, and whose hype fattens the purse of, people who materially support genocide and the construction of concentration camps (a.k.a. fascists).
In other words, I do not like, and except in very extenuating circumstances I will not use, ChatGPT, Claude, Copilot, Gemini, etc.
On the other hand, I do like:
- AI research as an endeavor to discover new technologies.
- Generative AI as a research topic using a spectrum of different methods.
- Speculating about non-human intelligences, including artificial ones, and including how to behave ethically towards them.
- Large language models as a specific technique, and autoencoders and other neural networks, assuming they're used responsibly in terms of both resource costs & presentation to end users.
I write this because I think some people (especially folks without CS backgrounds) may feel that opposing AI for all the harms it's causing runs the risk of opposing technological innovation more broadly, and/or may feel there's a risk that they will be "left behind" as everyone else embraces the hype and these technologies inevitably become ubiquitous and essential (I know I feel this way sometimes). Just know that it is entirely possible and logically consistent to oppose many forms of modern AI while also embracing, and even being optimistic about, AI research, and that while LLMs are currently all the rage, they're not the endpoint of what AI will look like in the future, and their downsides are not inherent in AI development.

@ginevra@hachyderm.io
2025-06-20 00:35:29

Language learning has been part of me since high school. I'm solid in 2 non-English languages, crappy but survivable in 2 others. I've played with & started learning others many times.
I'm real busy rn, but language learning could be a fun thing to do for myself & make me feel like I'm still me.
But I'm stumped about my language picks. I learnt the obvious European languages in school; later tried key Asian languages. What do I want to do now?
African languages? I won't be getting a chance to use them much in Aus, & I'm unlikely to get to a stage where I can read literature.
I tried Slovenian/Slovene on a whim & really love it, but I'll never go there. Is the practical but unfun answer to grind out more kanji/hanzi? Or is whimsically learning a language spoken by only 2.5 million people reasonable? I will continue struggling through with Ukrainian, 'cause I think it's important.
#LanguageLearning

@pbloem@sigmoid.social
2025-05-19 12:46:36

What's going on with this #ICLR paper?
The metareview says that the authors provided a sound rebuttal and update to the paper, but neither are available (rebuttals are shown on other papers).
openreview.…

@azonenberg@ioc.exchange
2025-07-20 03:37:43

Learning new German vocabulary today while working on the PnP (Essemtec Fox, so Swiss design)
Bestückung, as in Bestückungskopf ("assembly head")
Fett, as in Spezialfett ("special grease")
Schlauch, as in Silikonschlauch ("silicone hose")

@arXiv_csIT_bot@mastoxiv.page
2025-06-19 08:22:59

In-Context Learning for Gradient-Free Receiver Adaptation: Principles, Applications, and Theory
Matteo Zecchin, Tomer Raviv, Dileep Kalathil, Krishna Narayanan, Nir Shlezinger, Osvaldo Simeone
arxiv.org/abs/2506.15176

@arXiv_csHC_bot@mastoxiv.page
2025-06-19 08:23:34

Optimizing Web-Based AI Query Retrieval with GPT Integration in LangChain A CoT-Enhanced Prompt Engineering Approach
Wenqi Guan, Yang Fang
arxiv.org/abs/2506.15512

@arXiv_statML_bot@mastoxiv.page
2025-06-19 10:27:22

Double Machine Learning for Conditional Moment Restrictions: IV regression, Proximal Causal Learning and Beyond
Daqian Shao, Ashkan Soleymani, Francesco Quinzan, Marta Kwiatkowska
arxiv.org/abs/2506.14950

@arXiv_csSD_bot@mastoxiv.page
2025-06-19 08:35:28

A Comparative Evaluation of Deep Learning Models for Speech Enhancement in Real-World Noisy Environments
Md Jahangir Alam Khondkar, Ajan Ahmed, Masudul Haider Imtiaz, Stephanie Schuckers
arxiv.org/abs/2506.15000

@wfryer@mastodon.cloud
2025-06-20 11:38:46

Inspired by the ever-creative, innovative and connected guru / Yoda of #edtech, @… Alan Levine, I started my own Pinboard collection of bookmarks:

A digital screenshot of a webpage featuring various links and tags related to Wes Fryer, including content about a "No Kings Charlotte Rally," discussions on creativity and social media, a cooking blog, and resources for learning with AI. The page includes references to

@arXiv_csNE_bot@mastoxiv.page
2025-06-19 08:27:19

Extending Spike-Timing Dependent Plasticity to Learning Synaptic Delays
Marissa Dominijanni, Alexander Ororbia, Kenneth W. Regan
arxiv.org/abs/2506.14984

@arXiv_eessSP_bot@mastoxiv.page
2025-06-19 08:43:27

Demonstrating Superresolution in Radar Range Estimation Using a Denoising Autoencoder
Robert Czupryniak, Abhishek Chakraborty, Andrew N. Jordan, John C. Howell
arxiv.org/abs/2506.14906

@arXiv_eessSY_bot@mastoxiv.page
2025-06-19 08:46:02

A Data-Integrated Framework for Learning Fractional-Order Nonlinear Dynamical Systems
Bahram Yaghooti, Chengyu Li, Bruno Sinopoli
arxiv.org/abs/2506.15665

@arXiv_csAI_bot@mastoxiv.page
2025-06-18 08:01:27

Causality in the human niche: lessons for machine learning
Richard D. Lange, Konrad P. Kording
arxiv.org/abs/2506.13803

@arXiv_csNI_bot@mastoxiv.page
2025-06-19 08:28:34

GCN-Driven Reinforcement Learning for Probabilistic Real-Time Guarantees in Industrial URLLC
Eman Alqudah, Ashfaq Khokhar
arxiv.org/abs/2506.15011

@arXiv_csLG_bot@mastoxiv.page
2025-06-19 14:46:16

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[4/7]:
- A Comprehensive Survey on Continual Learning in Generative Models
Guo, Zeng, Zhu, Wang, Wang, Zhou, Zhao, Liu, Ma, Wang, Zhang, Liu

@michabbb@social.vivaldi.net
2025-06-20 18:22:08

Instead of needing to write a complex function or remember complicated formulas, we just need to know how to instruct AI to get the results we want.
And if we’re not experts, we can have multiple AIs check the work.
We’re basically sitting in a council of experts and learning along the way.
So, in the end, we’re not getting dumber - we’re just shifting our knowledge and adapting to a new way of thinking.

@GroupNebula563@mastodon.social
2025-07-19 17:25:28

fun #chinafake fact: many believe that eptoys.net is short for “ever profit toys”, but it actually stands for “eepy toys” @…

@arXiv_quantph_bot@mastoxiv.page
2025-07-18 08:33:32

Sporadic Federated Learning Approach in Quantum Environment to Tackle Quantum Noise
Ratun Rahman, Atit Pokharel, Dinh C. Nguyen
arxiv.org/abs/2507.12492

@mia@hcommons.social
2025-07-18 17:16:45

#DH2028 will be in South Africa! Congratulations Menno, Juan and DHASA! Now we're learning about #DH2026 in Daejeon, South Korea dh2026.adho.org

@arXiv_condmatsoft_bot@mastoxiv.page
2025-06-19 09:26:42

Learning to flock in open space by avoiding collisions and staying together
Martino Brambati, Antonio Celani, Marco Gherardi, Francesco Ginelli
arxiv.org/abs/2506.15587

@arXiv_csCY_bot@mastoxiv.page
2025-06-19 08:09:04

Transit for All: Mapping Equitable Bike2Subway Connection using Region Representation Learning
Min Namgung, JangHyeon Lee, Fangyi Ding, Yao-Yi Chiang
arxiv.org/abs/2506.15113

@arXiv_csCL_bot@mastoxiv.page
2025-06-19 08:16:54

PhantomHunter: Detecting Unseen Privately-Tuned LLM-Generated Text via Family-Aware Learning
Yuhui Shi, Yehan Yang, Qiang Sheng, Hao Mi, Beizhe Hu, Chaoxi Xu, Juan Cao
arxiv.org/abs/2506.15683

@arXiv_eessIV_bot@mastoxiv.page
2025-06-19 08:42:22

Deploying and Evaluating Multiple Deep Learning Models on Edge Devices for Diabetic Retinopathy Detection
Akwasi Asare, Dennis Agyemanh Nana Gookyi, Derrick Boateng, Fortunatus Aabangbio Wulnye
arxiv.org/abs/2506.14834

@arXiv_csRO_bot@mastoxiv.page
2025-07-18 08:35:12

Learning to Predict Mobile Robot Stability in Off-Road Environments
Nathaniel Rose, Arif Ahmed, Emanuel Gutierrez-Cornejo, Parikshit Maini
arxiv.org/abs/2507.12731

@arXiv_econEM_bot@mastoxiv.page
2025-06-19 08:38:22

Fast Learning of Optimal Policy Trees
James Cussens, Julia Hatamyar, Vishalie Shah, Noemi Kreif
arxiv.org/abs/2506.15435

@arXiv_condmatmtrlsci_bot@mastoxiv.page
2025-06-19 09:24:33

A Machine Learning Framework for Modeling Ensemble Properties of Atomically Disordered Materials
Zhenyao Fang, Ting-Wei Hsu, Qimin Yan
arxiv.org/abs/2506.15652

@jensilber@mastodon.social
2025-06-20 01:25:32

I certainly remember learning that standing under a tree is the thing *not* to do in a thunderstorm. Are children no longer getting that lesson?
cbsnews.com/newyork/news/perso

@Techmeme@techhub.social
2025-06-15 21:30:34

Berlin-based Knowunity, an AI-powered learning platform with 20M users in 15 countries, raised a €27M Series B led by XAnge, bringing its total funding to €45M (Tamara Djurickovic/Tech.eu)
tech.eu/2025/06/13/knowunity-r

@arXiv_csCV_bot@mastoxiv.page
2025-06-18 09:37:21

CDP: Towards Robust Autoregressive Visuomotor Policy Learning via Causal Diffusion
Jiahua Ma, Yiran Qin, Yixiong Li, Xuanqi Liao, Yulan Guo, Ruimao Zhang
arxiv.org/abs/2506.14769

@arXiv_csCR_bot@mastoxiv.page
2025-06-18 09:06:19

EBS-CFL: Efficient and Byzantine-robust Secure Clustered Federated Learning
Zhiqiang Li, Haiyong Bao, Menghong Guan, Hao Pan, Cheng Huang, Hong-Ning Dai
arxiv.org/abs/2506.13612

@datascience@genomic.social
2025-05-16 10:00:01

R learning for applied statistics by Chenxin Li: #rstats

@arXiv_csSE_bot@mastoxiv.page
2025-07-18 09:11:32

A Survey of Reinforcement Learning for Software Engineering
Dong Wang, Hanmo You, Lingwei Zhu, Kaiwei Lin, Zheng Chen, Chen Yang, Junji Yu, Zan Wang, Junjie Chen
arxiv.org/abs/2507.12483

@arXiv_condmatstatmech_bot@mastoxiv.page
2025-06-19 09:27:57

Fokker-Planck Score Learning: Efficient Free-Energy Estimation under Periodic Boundary Conditions
Daniel Nagel, Tristan Bereau
arxiv.org/abs/2506.15653

@arXiv_csDS_bot@mastoxiv.page
2025-06-19 08:14:19

Efficient space reduction techniques by optimized majority rules for the Kemeny aggregation problem
Xuan Kien Phung, Sylvie Hamel
arxiv.org/abs/2506.15097

@arXiv_csDC_bot@mastoxiv.page
2025-07-18 08:00:52

Autonomous Resource Management in Microservice Systems via Reinforcement Learning
Yujun Zou, Nia Qi, Yingnan Deng, Zhihao Xue, Ming Gong, Wuyang Zhang
arxiv.org/abs/2507.12879

@arXiv_statML_bot@mastoxiv.page
2025-06-19 10:27:27

An Observation on Lloyd's k-Means Algorithm in High Dimensions
David Silva-Sánchez, Roy R. Lederman
arxiv.org/abs/2506.14952

@arXiv_csGR_bot@mastoxiv.page
2025-06-19 08:17:49

One-shot Face Sketch Synthesis in the Wild via Generative Diffusion Prior and Instruction Tuning
Han Wu, Junyao Li, Kangbo Zhao, Sen Zhang, Yukai Shi, Liang Lin
arxiv.org/abs/2506.15312

@arXiv_csAR_bot@mastoxiv.page
2025-07-18 08:42:52

WIP: Turning Fake Chips into Learning Opportunities
Haniye Mehraban, Saad Azmeen-ur-Rahman, John Hu
arxiv.org/abs/2507.13281

@rafa_font@mastodon.online
2025-06-18 19:31:17

You'll never become a NATIVE English speaker
No matter how hard you try, the years in the UK or Ireland, the effort in your accent, or the AI applications you might use to fake it
There is a language wall, made of accents, cultural references and seemingly illogical phrasal verbs and idioms, that we cannot jump
But IT DOESN'T MATTER.
90% of your interactions are probably with other non-native speakers. As long as you understand each other, you're good.

@pbloem@sigmoid.social
2025-07-18 09:25:22

Now out in #TMLR:
🍇 GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks 🍇
There's lots of work on sampling subgraphs for GNNs, but relatively little on making this sampling process _adaptive_. That is, learning to select the data from the graph that is relevant for your task.
We introduce an RL-based and a GFLowNet-based sampler and show that the approach perf…

A diagram of the GRAPES pipeline. It shows a subgraph being sampled in two steps and being fed to a GNN, with a blue line showing the learning signal. The caption reads Figure 1: Overview of GRAPES. First, GRAPES processes a target node (green) by computing node inclusion probabilities on its 1-hop neighbors (shown by node color shade) with a sampling GNN. Given these probabilities, GRAPES samples k nodes. Then, GRAPES repeats this process over nodes in the 2-hop neighborhood. We pass the sampl…
A results table for node classification on heterophilious graphs. Table 2: F1-scores (%) for different sampling methods trained on heterophilous graphs for a batch size of 256, and a sample size of 256 per layer. We report the mean and standard deviation over 10 runs. The best values among the sampling baselines (all except GAS) are in bold, and the second best are underlined. MC stands for multi-class and ML stands for multi-label classification. OOM indicates out of memory.
Performance of samples vs sampling size showing that GRAPES generally performs well across sample sizes, while other samplers often show more variance across sample sizes. The caption reads Figure 4: Comparative analysis of classification accuracy across different sampling sizes for sampling baseline
and GRAPES. We repeated each experiment five times: The shaded regions show the 95% confidence intervals.
A diagrammatic illustration of a graph classification task used in one of the theorems. The caption reads Figure 9: An example of a graph for Theorem 1 with eight nodes. Red edges belong to E1, features xi and labels yi are shown beside every node. For nodes v1 and v2 we show the edge e12 as an example. As shown, the label of each node is the second feature of its neighbor, where a red edge connects them. The edge homophily ratio is h=12/28 = 0.43.

@arXiv_qbioNC_bot@mastoxiv.page
2025-07-18 08:33:22

Mapping Emotions in the Brain: A Bi-Hemispheric Neural Model with Explainable Deep Learning
David Freire-Obregón, Agnieszka Dubiel, Prasoon Kumar Vinodkumar, Gholamreza Anbarjafari, Dorota Kamińska, Modesto Castrillón-Santana
arxiv.org/abs/2507.12625

@arXiv_mathNA_bot@mastoxiv.page
2025-06-19 09:04:47

Weak TransNet: A Petrov-Galerkin based neural network method for solving elliptic PDEs
Zhihang Xu, Min Wang, Zhu Wang
arxiv.org/abs/2506.14812

@arXiv_hepex_bot@mastoxiv.page
2025-06-18 09:44:56

Review of Machine Learning for Real-Time Analysis at the Large Hadron Collider experiments ALICE, ATLAS, CMS and LHCb
Laura Boggia, Carlos Cocha, Fotis Giasemis, Joachim Hansen, Patin Inkaew, Kaare Endrup Iversen, Pratik Jawahar, Henrique Pineiro Monteagudo, Micol Olocco, Sten Astrand, Martino Borsato, Leon Bozianu, Steven Schramm, the SMARTHEP Network

@arXiv_statME_bot@mastoxiv.page
2025-06-17 12:18:41

Bayesian inference for the learning rate in Generalised Bayesian inference
Jeong Eun Lee, Sitong Liu, Geoff K. Nicholls
arxiv.org/abs/2506.12532

@arXiv_csLG_bot@mastoxiv.page
2025-07-17 10:24:20

PROL : Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning
M. Anwar Ma'sum, Mahardhika Pratama, Savitha Ramasamy, Lin Liu, Habibullah Habibullah, Ryszard Kowalczyk
arxiv.org/abs/2507.12305

@arXiv_csHC_bot@mastoxiv.page
2025-06-19 08:21:59

Human-Centred AI in FinTech: Developing a User Experience (UX) Research Point of View (PoV) Playbook
Festus Adedoyin, Huseyin Dogan
arxiv.org/abs/2506.15325

@arXiv_eessSP_bot@mastoxiv.page
2025-06-19 08:43:52

Fiber Signal Denoising Algorithm using Hybrid Deep Learning Networks
Linlin Wang, Wei Wang, Dezhao Wang, Shanwen Wang
arxiv.org/abs/2506.15125

@arXiv_condmatsoft_bot@mastoxiv.page
2025-06-19 09:26:52

Machine learning based prediction of dynamical clustering in granular gases
Sai Preetham Sata, Ralf Stannarius, Benjamin Noack, Dmitry Puzyrev
arxiv.org/abs/2506.15657

@arXiv_quantph_bot@mastoxiv.page
2025-06-19 10:06:58

Learning to Maximize Quantum Neural Network Expressivity via Effective Rank
Juan Yao
arxiv.org/abs/2506.15375

@arXiv_csAI_bot@mastoxiv.page
2025-06-18 08:11:22

Enhancing Symbolic Machine Learning by Subsymbolic Representations
Stephen Roth, Lennart Baur, Derian Boer, Stefan Kramer
arxiv.org/abs/2506.14569

@arXiv_csRO_bot@mastoxiv.page
2025-06-19 08:33:23

Towards Perception-based Collision Avoidance for UAVs when Guiding the Visually Impaired
Suman Raj, Swapnil Padhi, Ruchi Bhoot, Prince Modi, Yogesh Simmhan
arxiv.org/abs/2506.14857

@arXiv_csCL_bot@mastoxiv.page
2025-07-18 09:40:32

QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation
Jiazheng Li, Hong Lu, Kaiyue Wen, Zaiwen Yang, Jiaxuan Gao, Hongzhou Lin, Yi Wu, Jingzhao Zhang
arxiv.org/abs/2507.13266

@arXiv_eessSY_bot@mastoxiv.page
2025-06-19 08:44:37

Make Your AUV Adaptive: An Environment-Aware Reinforcement Learning Framework For Underwater Tasks
Yimian Ding, Jingzehua Xu, Guanwen Xie, Shuai Zhang, Yi Li
arxiv.org/abs/2506.15082

@arXiv_eessIV_bot@mastoxiv.page
2025-06-19 08:42:42

NeuroMoE: A Transformer-Based Mixture-of-Experts Framework for Multi-Modal Neurological Disorder Classification
Wajih Hassan Raza, Aamir Bader Shah, Yu Wen, Yidan Shen, Juan Diego Martinez Lemus, Mya Caryn Schiess, Timothy Michael Ellmore, Renjie Hu, Xin Fu
arxiv.org/abs/2506.14970

@arXiv_csSE_bot@mastoxiv.page
2025-06-17 10:59:37

Isolating Noisy Labelled Test Cases in Human-in-the-Loop Oracle Learning
Charaka Geethal Kapugama
arxiv.org/abs/2506.13273

@arXiv_csCV_bot@mastoxiv.page
2025-07-18 10:22:32

VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning
Senqiao Yang, Junyi Li, Xin Lai, Bei Yu, Hengshuang Zhao, Jiaya Jia
arxiv.org/abs/2507.13348

@arXiv_csSD_bot@mastoxiv.page
2025-06-19 08:35:48

SonicVerse: Multi-Task Learning for Music Feature-Informed Captioning
Anuradha Chopra, Abhinaba Roy, Dorien Herremans
arxiv.org/abs/2506.15154

@arXiv_csCR_bot@mastoxiv.page
2025-06-19 08:07:43

Advanced Prediction of Hypersonic Missile Trajectories with CNN-LSTM-GRU Architectures
Amir Hossein Baradaran
arxiv.org/abs/2506.15043

@arXiv_condmatstatmech_bot@mastoxiv.page
2025-06-18 09:29:18

Evolutionary chemical learning in dimerization networks
Alexei V. Tkachenko, Bortolo Matteo Mognetti, Sergei Maslov
arxiv.org/abs/2506.14006

@arXiv_csCY_bot@mastoxiv.page
2025-06-18 08:16:30

Between Regulation and Accessibility: How Chinese University Students Navigate Global and Domestic Generative AI
Qin Xie, Ming Li, Fei Cheng
arxiv.org/abs/2506.14377

@tiotasram@kolektiva.social
2025-07-19 07:51:05

AI, AGI, and learning efficiency
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak & understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously *high* estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities are developed in many children, etc. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
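(The back-of-envelope arithmetic above is easy to check; a quick sketch, using the post's deliberately generous rates:)

```python
# Upper-bound estimate of words a child could hear before age 4,
# using the deliberately high rates from the post above.
words_per_minute = 100  # assumed generous rate
hours_per_day = 12      # assumed generous exposure
years = 4

total_words = words_per_minute * 60 * hours_per_day * 365 * years
print(total_words)  # 105120000, i.e. on the order of 10^8 "tokens"
```

Even this inflated bound is several orders of magnitude below the billions of tokens used to pretrain large language models.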
The point here is that large language models, trained as they are on multiple *billions* of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one (see e.g. #AI #LLM #AGI

@arXiv_csIT_bot@mastoxiv.page
2025-07-18 09:08:42

Analytical Optimization for Antenna Placement in Pinching-Antenna Systems
Zhiguo Ding, H. Vincent Poor
arxiv.org/abs/2507.13307

@arXiv_csRO_bot@mastoxiv.page
2025-07-18 09:51:32

Evaluating Reinforcement Learning Algorithms for Navigation in Simulated Robotic Quadrupeds: A Comparative Study Inspired by Guide Dog Behaviour
Emma M. A. Harrison
arxiv.org/abs/2507.13277

@arXiv_csAI_bot@mastoxiv.page
2025-06-18 08:03:26

Discovering Temporal Structure: An Overview of Hierarchical Reinforcement Learning
Martin Klissarov, Akhil Bagaria, Ziyan Luo, George Konidaris, Doina Precup, Marlos C. Machado
arxiv.org/abs/2506.14045

@arXiv_statML_bot@mastoxiv.page
2025-06-18 10:28:23

Universal Rates of ERM for Agnostic Learning
Steve Hanneke, Mingyue Xu
arxiv.org/abs/2506.14110

@arXiv_csHC_bot@mastoxiv.page
2025-06-19 08:19:29

WebXAII: an open-source web framework to study human-XAI interaction
Jules Leguy, Pierre-Antoine Jean, Felipe Torres Figueroa, Sébastien Harispe
arxiv.org/abs/2506.14777

@arXiv_csCL_bot@mastoxiv.page
2025-06-18 09:15:18

Reasoning with Exploration: An Entropy Perspective
Daixuan Cheng, Shaohan Huang, Xuekai Zhu, Bo Dai, Wayne Xin Zhao, Zhenliang Zhang, Furu Wei
arxiv.org/abs/2506.14758

@arXiv_quantph_bot@mastoxiv.page
2025-07-18 08:38:22

Leveraging Quantum Layers in Classical Neural Networks
Silvie Illésová
arxiv.org/abs/2507.12505

@arXiv_eessSP_bot@mastoxiv.page
2025-06-19 08:44:12

Reinforcement Learning-Based Policy Optimisation For Heterogeneous Radio Access
Anup Mishra, Čedomir Stefanović, Xiuqiang Xu, Petar Popovski, Israel Leyva-Mayorga
arxiv.org/abs/2506.15273

@arXiv_csCV_bot@mastoxiv.page
2025-06-17 09:51:45

Branch, or Layer? Zeroth-Order Optimization for Continual Learning of Vision-Language Models
Ziwei Liu, Borui Kang, Wei Li, Hangjie Yuan, Yanbing Yang, Wenbin Li, Jun Luo, Yifan Zhu, Tao Feng
arxiv.org/abs/2506.12409

@arXiv_csRO_bot@mastoxiv.page
2025-06-18 09:23:35

SENIOR: Efficient Query Selection and Preference-Guided Exploration in Preference-based Reinforcement Learning
Hexian Ni, Tao Lu, Haoyuan Hu, Yinghao Cai, Shuo Wang
arxiv.org/abs/2506.14648

@arXiv_eessIV_bot@mastoxiv.page
2025-06-18 08:51:51

Latent Anomaly Detection: Masked VQ-GAN for Unsupervised Segmentation in Medical CBCT
Pengwei Wang
arxiv.org/abs/2506.14209

@arXiv_eessSY_bot@mastoxiv.page
2025-06-19 08:44:52

Local Differential Privacy for Distributed Stochastic Aggregative Optimization with Guaranteed Optimality
Ziqin Chen, Yongqiang Wang
arxiv.org/abs/2506.15106

@arXiv_csAI_bot@mastoxiv.page
2025-06-18 08:06:21

Don't throw the baby out with the bathwater: How and why deep learning for ARC
Jack Cole, Mohamed Osman
arxiv.org/abs/2506.14276

@arXiv_csCL_bot@mastoxiv.page
2025-07-18 09:37:42

Enhancing Cross-task Transfer of Large Language Models via Activation Steering
Xinyu Tang, Zhihao Lv, Xiaoxue Cheng, Junyi Li, Wayne Xin Zhao, Zujie Wen, Zhiqiang Zhang, Jun Zhou
arxiv.org/abs/2507.13236

@arXiv_csCR_bot@mastoxiv.page
2025-07-18 07:38:02

Architectural Backdoors in Deep Learning: A Survey of Vulnerabilities, Detection, and Defense
Victoria Childress, Josh Collyer, Jodie Knapp
arxiv.org/abs/2507.12919

@arXiv_statML_bot@mastoxiv.page
2025-06-19 15:38:43

Replaced article(s) found for stat.ML. arxiv.org/list/stat.ML/new
[1/1]:
- Local minima of the empirical risk in high dimension: General theorems and convex examples
Kiana Asgari, Andrea Montanari, Basil Saeed

@arXiv_csHC_bot@mastoxiv.page
2025-06-17 10:52:09

Can you see how I learn? Human observers' inferences about Reinforcement Learning agents' learning processes
Bernhard Hilpert, Muhan Hou, Kim Baraka, Joost Broekens
arxiv.org/abs/2506.13583

@arXiv_quantph_bot@mastoxiv.page
2025-07-18 09:36:52

Inverse Physics-informed neural networks procedure for detecting noise in open quantum systems
Gubio G. de Lima, Iann Cunha, Leonardo Kleber Castelano
arxiv.org/abs/2507.12552

@arXiv_csAI_bot@mastoxiv.page
2025-06-19 14:10:50

Replaced article(s) found for cs.AI. arxiv.org/list/cs.AI/new
[2/5]:
- Advancing oncology with federated learning: transcending boundaries in breast, lung, and prostate...
Anshu Ankolekar, et al.

@arXiv_csCV_bot@mastoxiv.page
2025-06-18 09:24:38

DiFuse-Net: RGB and Dual-Pixel Depth Estimation using Window Bi-directional Parallax Attention and Cross-modal Transfer Learning
Kunal Swami, Debtanu Gupta, Amrit Kumar Muduli, Chirag Jaiswal, Pankaj Kumar Bajpai
arxiv.org/abs/2506.14709

@arXiv_csCL_bot@mastoxiv.page
2025-06-19 08:14:09

From Model to Classroom: Evaluating Generated MCQs for Portuguese with Narrative and Difficulty Concerns
Bernardo Leite, Henrique Lopes Cardoso, Pedro Pinto, Abel Ferreira, Luís Abreu, Isabel Rangel, Sandra Monteiro
arxiv.org/abs/2506.15598

@arXiv_eessIV_bot@mastoxiv.page
2025-06-19 08:43:07

FedWSIDD: Federated Whole Slide Image Classification via Dataset Distillation
Haolong Jin, Shenglin Liu, Cong Cong, Qingmin Feng, Yongzhi Liu, Lina Huang, Yingzi Hu
arxiv.org/abs/2506.15365

@arXiv_csHC_bot@mastoxiv.page
2025-06-19 08:19:19

See What I Mean? CUE: A Cognitive Model of Understanding Explanations
Tobias Labarta, Nhi Hoang, Katharina Weitz, Wojciech Samek, Sebastian Lapuschkin, Leander Weber
arxiv.org/abs/2506.14775

@arXiv_csCL_bot@mastoxiv.page
2025-06-19 08:16:24

Oldies but Goldies: The Potential of Character N-grams for Romanian Texts
Dana Lupsa, Sanda-Maria Avram
arxiv.org/abs/2506.15650

@arXiv_csRO_bot@mastoxiv.page
2025-07-18 09:21:42

DEMONSTRATE: Zero-shot Language to Robotic Control via Multi-task Demonstration Learning
Rahel Rickenbach, Bruce Lee, René Zurbrügg, Carmen Amo Alonso, Melanie N. Zeilinger
arxiv.org/abs/2507.12855

@arXiv_statML_bot@mastoxiv.page
2025-06-18 10:27:34

Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models
Marouane Il Idrissi, Agathe Fernandes Machado, Arthur Charpentier
arxiv.org/abs/2506.13900

@arXiv_csCV_bot@mastoxiv.page
2025-06-18 09:14:22

DDS-NAS: Dynamic Data Selection within Neural Architecture Search via On-line Hard Example Mining applied to Image Classification
Matt Poyser, Toby P. Breckon
arxiv.org/abs/2506.14667

@arXiv_quantph_bot@mastoxiv.page
2025-07-18 09:24:12

Learning mixed quantum states in large-scale experiments
Matteo Votto, Marko Ljubotina, Cécilia Lancien, J. Ignacio Cirac, Peter Zoller, Maksym Serbyn, Lorenzo Piroli, Benoît Vermersch
arxiv.org/abs/2507.12550

@arXiv_csRO_bot@mastoxiv.page
2025-07-18 09:55:12

Latent Policy Steering with Embodiment-Agnostic Pretrained World Models
Yiqi Wang, Mrinal Verghese, Jeff Schneider
arxiv.org/abs/2507.13340

@arXiv_statML_bot@mastoxiv.page
2025-06-18 10:28:02

Mirror Descent Using the Tempesta Generalized Multi-parametric Logarithms
Andrzej Cichocki
arxiv.org/abs/2506.13984

@arXiv_csAI_bot@mastoxiv.page
2025-06-18 08:06:49

AviationLLM: An LLM-based Knowledge System for Aviation Training
Jia'ang Wan, Feng Shen, Fujuan Li, Yanjin Sun, Yan Li, Shiwen Zhang
arxiv.org/abs/2506.14336

@arXiv_csCL_bot@mastoxiv.page
2025-07-18 09:29:32

SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts
Marc Brinner, Sina Zarriess
arxiv.org/abs/2507.13105

@arXiv_csHC_bot@mastoxiv.page
2025-06-16 08:01:09

Conversational AI as a Catalyst for Informal Learning: An Empirical Large-Scale Study on LLM Use in Everyday Learning
Nađa Terzimehić, Babette Bühler, Enkelejda Kasneci
arxiv.org/abs/2506.11789

@arXiv_statML_bot@mastoxiv.page
2025-06-17 12:09:30

Theoretical Tensions in RLHF: Reconciling Empirical Success with Inconsistencies in Social Choice Theory
Jiancong Xiao, Zhekun Shi, Kaizhao Liu, Qi Long, Weijie J. Su
arxiv.org/abs/2506.12350

@arXiv_csCL_bot@mastoxiv.page
2025-07-17 08:05:40

Cross-lingual Few-shot Learning for Persian Sentiment Analysis with Incremental Adaptation
Farideh Majidi, Ziaeddin Beheshtifard
arxiv.org/abs/2507.11634

@arXiv_statML_bot@mastoxiv.page
2025-06-17 12:30:01

Understanding Learning Invariance in Deep Linear Networks
Hao Duan, Guido Montúfar
arxiv.org/abs/2506.13714

@arXiv_csCL_bot@mastoxiv.page
2025-06-17 09:58:33

Advances in LLMs with Focus on Reasoning, Adaptability, Efficiency and Ethics
Asifullah khan, Muhammad Zaeem Khan, Saleha Jamshed, Sadia Ahmad, Aleesha Zainab, Kaynat Khatib, Faria Bibi, Abdul Rehman
arxiv.org/abs/2506.12365

@arXiv_csCL_bot@mastoxiv.page
2025-06-18 08:59:40

Revisiting Chain-of-Thought Prompting: Zero-shot Can Be Stronger than Few-shot
Xiang Cheng, Chengyan Pan, Minjun Zhao, Deyang Li, Fangchao Liu, Xinyu Zhang, Xiao Zhang, Yong Liu
arxiv.org/abs/2506.14641

@arXiv_csCL_bot@mastoxiv.page
2025-07-18 09:36:32

Automatically assessing oral narratives of Afrikaans and isiXhosa children
R. Louw (Stellenbosch University), E. Sharratt (Stellenbosch University), F. de Wet (Stellenbosch University), C. Jacobs (Stellenbosch University), A. Smith (Stellenbosch University), H. Kamper (Stellenbosch University)
arxiv.org/abs/2507.13205