Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csLG_bot@mastoxiv.page
2025-09-05 10:28:51

IPA: An Information-Preserving Input Projection Framework for Efficient Foundation Model Adaptation
Yuan Yin, Shashanka Venkataramanan, Tuan-Hung Vu, Andrei Bursuc, Matthieu Cord
arxiv.org/abs/2509.04398

@arXiv_csCV_bot@mastoxiv.page
2025-10-06 10:17:39

LEAML: Label-Efficient Adaptation to Out-of-Distribution Visual Tasks for Multimodal Large Language Models
Ci-Siang Lin, Min-Hung Chen, Yu-Yang Sheng, Yu-Chiang Frank Wang
arxiv.org/abs/2510.03232

@arXiv_csCL_bot@mastoxiv.page
2025-08-06 10:22:20

Tackling Distribution Shift in LLM via KILO: Knowledge-Instructed Learning for Continual Adaptation
Iing Muttakhiroh, Thomas Fevens
arxiv.org/abs/2508.03571

@arXiv_csRO_bot@mastoxiv.page
2025-08-06 10:13:00

Why Evolve When You Can Adapt? Post-Evolution Adaptation of Genetic Memory for On-the-Fly Control
Hamze Hammami, Eva Denisa Barbulescu, Talal Shaikh, Mouayad Aldada, Muhammad Saad Munawar
arxiv.org/abs/2508.03600

@arXiv_csAI_bot@mastoxiv.page
2025-09-05 09:55:31

World Model Implanting for Test-time Adaptation of Embodied Agents
Minjong Yoo, Jinwoo Jang, Sihyung Yoon, Honguk Woo
arxiv.org/abs/2509.03956

@arXiv_eessSY_bot@mastoxiv.page
2025-08-05 09:15:20

System Identification via Validation and Adaptation for Model Updating Applied to a Nonlinear Cantilever Beam
Cristian López, Jackson E. Herzlieb, Keegan J. Moore
arxiv.org/abs/2508.00931

@arXiv_eessIV_bot@mastoxiv.page
2025-08-06 08:19:30

MPCA-based Domain Adaptation for Transfer Learning in Ultrasonic Guided Waves
Lucio Pinello, Francesco Cadini, Luca Lomazzi
arxiv.org/abs/2508.02726

@arXiv_csCV_bot@mastoxiv.page
2025-08-06 10:33:50

Neutralizing Token Aggregation via Information Augmentation for Efficient Test-Time Adaptation
Yizhe Xiong, Zihan Zhou, Yiwen Liang, Hui Chen, Zijia Lin, Tianxiang Hao, Fan Zhang, Jungong Han, Guiguang Ding
arxiv.org/abs/2508.03388

@arXiv_qbioNC_bot@mastoxiv.page
2025-09-05 08:43:51

Optimal rate-variance coding due to firing threshold adaptation near criticality
Mauricio Girardi-Schappo, Leonard Maler, André Longtin
arxiv.org/abs/2509.04106

@arXiv_eessAS_bot@mastoxiv.page
2025-09-05 08:44:51

Test-Time Adaptation for Speech Enhancement via Domain Invariant Embedding Transformation
Tobias Raichle, Niels Edinger, Bin Yang
arxiv.org/abs/2509.04280

@arXiv_csCV_bot@mastoxiv.page
2025-08-06 10:29:10

Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation
Jun Luo, Zijing Zhao, Yang Liu
arxiv.org/abs/2508.03300

@arXiv_csRO_bot@mastoxiv.page
2025-08-06 10:14:50

DiWA: Diffusion Policy Adaptation with World Models
Akshay L Chandra, Iman Nematollahi, Chenguang Huang, Tim Welschehold, Wolfram Burgard, Abhinav Valada
arxiv.org/abs/2508.03645

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (or even for good ones). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than in a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance"?
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, this paper, in which experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down (arxiv.org/abs/2507.09089), is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up even in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on how far project complexity can reach, not on learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a degree of care that those students don't yet know how to exercise, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks using current designs. There are alternate designs where this would be possible (like AI search through adaptation of a controlled library of snippets), but those would be entirely different tools.
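To make that mismatch concrete, here's a minimal Python analogue (my own sketch, not code from any actual class; the post's scenario is JavaScript with React/jQuery, and Flask below is just a hypothetical stand-in for "a framework the course never installed or covered"): the assignment needs nothing beyond the standard library, but an assistant's reflex is to reach for a third-party framework that isn't part of the course toolchain at all.

```python
# Hypothetical sketch: the course only covered the standard library, so this is
# the kind of solution students are actually equipped to read and debug.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"message": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


def taught_in_class(port=8000):
    """Standard-library server: every construct here was covered in lectures."""
    return HTTPServer(("localhost", port), HelloHandler)


def what_an_assistant_suggests():
    """An assistant tends to reach for a framework (Flask is a stand-in here)
    that was never installed or configured in the course environment."""
    try:
        import flask  # noqa: F401 -- not part of the course toolchain
    except ModuleNotFoundError as err:
        return f"suggested code won't even import: {err}"
    return "framework happens to be installed, but it was never covered in class"


if __name__ == "__main__":
    print(what_an_assistant_suggests())
    # taught_in_class().serve_forever()  # uncomment to run the stdlib version
```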
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they realize that they don't understand it and re-prompt, or quickly ask an instructor or TA for help getting rid of the stuff they don't understand so they can re-prompt or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think that regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know all of these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of the full programming language that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students do understand, many students even through their 4th year struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
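For concreteness, here's a minimal sketch (my own illustration, not code from the post or from any student) of how easily exactly those constructs pile up in a dozen-odd lines of a small "grade report" task: yield, the walrus operator, try/finally, while/else, and an unnecessary async wrapper, none of which a typical second- or third-year student has studied together, if at all.

```python
# Illustrative only: the five constructs the post lists, crammed into one tiny task.
import asyncio
import io


def load_lines(text):
    """try/finally: the cleanup code runs no matter how the block exits."""
    buffer = io.StringIO(text)
    try:
        return buffer.read().splitlines()
    finally:
        buffer.close()


def parse_scores(lines):
    """Generator (yield) plus the walrus operator: lazily emit non-blank scores."""
    for line in lines:
        if (stripped := line.strip()):  # assign and test in one expression
            yield float(stripped)


def first_failing(scores, cutoff=60.0):
    """while/else: the else block runs only if the loop ends without a break."""
    scores = list(scores)
    i = 0
    while i < len(scores):
        if scores[i] < cutoff:
            break
        i += 1
    else:
        return None       # loop finished without breaking: nothing below the cutoff
    return scores[i]      # loop broke: this is the first failing score


async def report(text):
    """Async function: pointless for this task, but assistants reach for it by habit."""
    return first_failing(parse_scores(load_lines(text)))


if __name__ == "__main__":
    print(asyncio.run(report("72\n88\n54\n91\n")))  # -> 54.0
```

A plain for-loop over a list would do the same job in five lines that any intro student could read and debug.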
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@arXiv_statAP_bot@mastoxiv.page
2025-08-05 09:01:30

Understanding Heterogeneity in Adaptation to Intermittent Water Supply: Clustering Household Types in Amman, Jordan
Shreyas Gadge, Vítor V. Vasconcelos, André de Roos, Elisabeth H. Krueger
arxiv.org/abs/2508.02569

@arXiv_csLG_bot@mastoxiv.page
2025-09-04 10:27:21

TeRA: Vector-based Random Tensor Network for High-Rank Adaptation of Large Language Models
Yuxuan Gu, Wuyang Zhou, Giorgos Iacovides, Danilo Mandic
arxiv.org/abs/2509.03234

@arXiv_csCR_bot@mastoxiv.page
2025-08-06 08:53:40

VFLAIR-LLM: A Comprehensive Framework and Benchmark for Split Learning of LLMs
Zixuan Gu, Qiufeng Fan, Long Sun, Yang Liu, Xiaojun Ye
arxiv.org/abs/2508.03097

@arXiv_csIR_bot@mastoxiv.page
2025-09-03 09:38:53

Algorithm Adaptation Bias in Recommendation System Online Experiments
Chen Zheng, Zhenyu Zhao
arxiv.org/abs/2509.00199 arxiv.org/pdf/2509.0…

@arXiv_nlinAO_bot@mastoxiv.page
2025-09-05 10:51:17

Crosslisted article(s) found for nlin.AO. arxiv.org/list/nlin.AO/new
[1/1]:
- Optimal rate-variance coding due to firing threshold adaptation near criticality
Mauricio Girardi-Schappo, Leonard Maler, André Longtin

@arXiv_mathCO_bot@mastoxiv.page
2025-08-06 08:15:20

The asymptotic rank of adjacency matrices of weighted configuration models over arbitrary fields
Remco van der Hofstad, Noela Müller, Haodong Zhu
arxiv.org/abs/2508.02813

@arXiv_qbioQM_bot@mastoxiv.page
2025-10-03 08:33:21

To Remember, To Adapt, To Preempt: A Stable Continual Test-Time Adaptation Framework for Remote Physiological Measurement in Dynamic Domain Shifts
Shuyang Chu, Jingang Shi, Xu Cheng, Haoyu Chen, Xin Liu, Jian Xu, Guoying Zhao
arxiv.org/abs/2510.01282

@arXiv_astrophIM_bot@mastoxiv.page
2025-09-04 07:58:41

Radio Astronomy in the Era of Vision-Language Models: Prompt Sensitivity and Adaptation
Mariia Drozdova, Erica Lastufka, Vitaliy Kinakh, Taras Holotyak, Daniel Schaerer, Slava Voloshynovskiy
arxiv.org/abs/2509.02615

@arXiv_csHC_bot@mastoxiv.page
2025-09-03 12:30:03

Community-Centered Spatial Intelligence for Climate Adaptation at Nova Scotia's Eastern Shore
Gabriel Spadon, Oladapo Oyebode, Camilo M. Botero, Tushar Sharma, Floris Goerlandt, Ronald Pelot
arxiv.org/abs/2509.01845

@arXiv_mathDS_bot@mastoxiv.page
2025-08-05 09:14:20

Modeling the Impact of NATO Policy Actions on Taliban Disinformation Campaigns with Lotka-Volterra Models
Timothy Tarter, Bella Santos
arxiv.org/abs/2508.01904

@arXiv_csNE_bot@mastoxiv.page
2025-09-05 11:49:51

Replaced article(s) found for cs.NE. arxiv.org/list/cs.NE/new
[1/1]:
- Case Study of Novelty, Complexity, and Adaptation in a Multicellular System
Matthew Andres Moreno, Santiago Rodriguez Papa, Charles Ofria

@arXiv_csSD_bot@mastoxiv.page
2025-10-01 08:39:17

EMO-TTA: Improving Test-Time Adaptation of Audio-Language Models for Speech Emotion Recognition
Jiacheng Shi, Hongfei Du, Y. Alicia Hong, Ye Gao
arxiv.org/abs/2509.25495

@arXiv_physicsfludyn_bot@mastoxiv.page
2025-08-06 08:05:10

Numerical investigation of engine position effects on contrail formation and evolution in the near-field of a realistic aircraft configuration
Rémy Annunziata (ETS), Nicolas Bonne (ETS), François Garnier (ETS)
arxiv.org/abs/2508.02706

@arXiv_csAI_bot@mastoxiv.page
2025-09-05 07:57:50

CausalARC: Abstract Reasoning with Causal World Models
Jacqueline Maasch, John Kalantari, Kia Khezeli
arxiv.org/abs/2509.03636 arxiv.org/pd…

@arXiv_csCL_bot@mastoxiv.page
2025-08-06 09:57:40

CardiffNLP at CLEARS-2025: Prompting Large Language Models for Plain Language and Easy-to-Read Text Rewriting
Mutaz Ayesh, Nicolás Gutiérrez-Rolón, Fernando Alva-Manchego
arxiv.org/abs/2508.03240

@arXiv_statML_bot@mastoxiv.page
2025-07-31 08:33:51

A Unified Analysis of Generalization and Sample Complexity for Semi-Supervised Domain Adaptation
Elif Vural, Huseyin Karaca
arxiv.org/abs/2507.22632

@arXiv_qbioNC_bot@mastoxiv.page
2025-10-01 09:37:57

Coexistence of two adaptation processes in a visuomotor rotation task
Alexis Berland (ISIR, CAOR), Youssouf Ismail Cherifi (CAOR), Alexis Paljic (CAOR), Emmanuel Guigon (ISIR)
arxiv.org/abs/2509.26090

@arXiv_eessSP_bot@mastoxiv.page
2025-10-01 10:32:57

A Physics-Informed Multi-Source Domain Adaptation Framework for Label-Free Post-Earthquake Damage Assessment
Yifeng Zhang, Xiao Liang
arxiv.org/abs/2509.26356

@gerald_leppert@bonn.social
2025-08-02 11:35:46

New paper published! ➡️ Climate change adaptation preferences of small enterprises
Vulnerable entrepreneurs’ preferences for climate risk management: A discrete choice experiment with micro-enterprises in the Philippines
Authors: #AnnKristin_Becker @…

@arXiv_csCV_bot@mastoxiv.page
2025-09-03 14:55:53

ADVMEM: Adversarial Memory Initialization for Realistic Test-Time Adaptation via Tracklet-Based Benchmarking
Shyma Alhuwaider, Motasem Alfarra, Juan C. Perez, Merey Ramazanova, Bernard Ghanem
arxiv.org/abs/2509.02182

@arXiv_qbioPE_bot@mastoxiv.page
2025-08-06 08:49:40

Fitness and Overfitness: Implicit Regularization in Evolutionary Dynamics
Hagai Rappeport, Mor Nitzan
arxiv.org/abs/2508.03187 arxiv.org/pd…

@arXiv_csCE_bot@mastoxiv.page
2025-10-02 08:35:21

Signal Classification Recovery Across Domains Using Unsupervised Domain Adaptation
Mohammad Ali, Fuhao Li, Jielun Zhang
arxiv.org/abs/2510.00589

@arXiv_csGR_bot@mastoxiv.page
2025-09-30 07:36:51

Modeling and Exploiting the Time Course of Chromatic Adaptation for Display Power Optimizations in Virtual Reality
Ethan Chen, Sushant Kondguli, Carl Marshall, Yuhao Zhu
arxiv.org/abs/2509.23489

@arXiv_eessIV_bot@mastoxiv.page
2025-08-05 09:56:01

LoRA-based methods on Unet for transfer learning in Subarachnoid Hematoma Segmentation
Cristian Minoccheri, Matthew Hodgman, Haoyuan Ma, Rameez Merchant, Emily Wittrup, Craig Williamson, Kayvan Najarian
arxiv.org/abs/2508.01772

@arXiv_csLG_bot@mastoxiv.page
2025-08-05 19:34:03

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/9]:
- BiDoRA: Bi-level Optimization-Based Weight-Decomposed Low-Rank Adaptation
Peijia Qin, Ruiyi Zhang, Pengtao Xie

@arXiv_csCV_bot@mastoxiv.page
2025-08-04 10:11:41

Sample-Aware Test-Time Adaptation for Medical Image-to-Image Translation
Irene Iele, Francesco Di Feola, Valerio Guarrasi, Paolo Soda
arxiv.org/abs/2508.00766

@arXiv_csCL_bot@mastoxiv.page
2025-08-06 10:01:40

LECTOR: LLM-Enhanced Concept-based Test-Oriented Repetition for Adaptive Spaced Learning
Jiahao Zhao
arxiv.org/abs/2508.03275 arxiv.org/pdf…

@arXiv_csSE_bot@mastoxiv.page
2025-10-03 09:01:41

Automatic Generation of Combinatorial Reoptimisation Problem Specifications: A Vision
Maximilian Kratz, Steffen Zschaler, Jens Kosiol, Gabriele Taentzer
arxiv.org/abs/2510.02002

@arXiv_csNI_bot@mastoxiv.page
2025-09-04 09:35:31

Multi-layer Digital Twin System for Future Mobile Metaverse
Gaosheng Zhao, Dong In Kim
arxiv.org/abs/2509.03049 arxiv.org/pdf/2509.03049

@arXiv_condmatsoft_bot@mastoxiv.page
2025-09-04 08:48:31

Control across scales: signals, information, and adaptive biological mechanical function
James Clarke, Jake McGrath, Colin Johnson, José Alvarado
arxiv.org/abs/2509.03418

@arXiv_csHC_bot@mastoxiv.page
2025-10-03 10:03:31

EvolveCaptions: Empowering DHH Users Through Real-Time Collaborative Captioning
Liang-Yuan Wu, Dhruv Jain
arxiv.org/abs/2510.02181 arxiv.or…

@arXiv_csSD_bot@mastoxiv.page
2025-09-03 11:11:53

A Unified Denoising and Adaptation Framework for Self-Supervised Bengali Dialectal ASR
Swadhin Biswas, Imran, Tuhin Sheikh
arxiv.org/abs/2509.00988

@arXiv_csSC_bot@mastoxiv.page
2025-08-04 07:42:10

A Variant of Non-uniform Cylindrical Algebraic Decomposition for Real Quantifier Elimination
Jasper Nalbach, Erika Ábrahám
arxiv.org/abs/2508.00505

@arXiv_physicsbioph_bot@mastoxiv.page
2025-09-03 15:43:29

Crosslisted article(s) found for physics.bio-ph. arxiv.org/list/physics.bio-ph/
[1/1]:
- Perfect adaptation in eukaryotic gradient sensing using cooperative allosteric binding
Vishnu Srinivasan, Wei Wang, Brian A. Camley

@arXiv_csDC_bot@mastoxiv.page
2025-09-03 08:24:53

DSDE: Dynamic Speculative Decoding with KLD Stability for Real-World Serving
Mingyu Yang, Jae-Young Choi, Kihyo Moon, Minsung Jang, Eunjoo Joen
arxiv.org/abs/2509.01083

@arXiv_csLG_bot@mastoxiv.page
2025-09-05 10:23:01

Privacy Risks in Time Series Forecasting: User- and Record-Level Membership Inference
Nicolas Johansson (Chalmers University of Technology), Tobias Olsson (Chalmers University of Technology), Daniel Nilsson (AI Sweden), Johan Östman (AI Sweden), Fazeleh Hoseini (AI Sweden)
arxiv.org/abs/2509.04169

@arXiv_nlinAO_bot@mastoxiv.page
2025-08-06 13:11:00

Replaced article(s) found for nlin.AO. arxiv.org/list/nlin.AO/new
[1/1]:
- A More Convex Ising Formulation of Max-3-Cut Using Higher-Order Spin Interactions
Robbe De Prins, Guy Van der Sande, Peter Bienstman, Thomas Van Vaerenbergh

@arXiv_csCL_bot@mastoxiv.page
2025-08-05 18:58:45

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[5/5]:
- LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation
Juzheng Zhang, Jiacheng You, Ashwinee Panda, Tom Goldstein

@arXiv_qbioNC_bot@mastoxiv.page
2025-09-03 09:58:43

Context dependent adaptation in a neural computation
Charles J. Edelson, Sima Setayeshgar, William Bialek, Rob R. de Ruyter van Steveninck
arxiv.org/abs/2509.01760

@arXiv_eessAS_bot@mastoxiv.page
2025-08-04 08:59:10

Dynamic Real-Time Ambisonics Order Adaptation for Immersive Networked Music Performances
Paolo Ostan, Carlo Centofanti, Mirco Pezzoli, Alberto Bernardini, Claudia Rinaldi, Fabio Antonacci
arxiv.org/abs/2508.00509

@arXiv_csAI_bot@mastoxiv.page
2025-10-01 11:36:27

Communication-Efficient and Accurate Approach for Aggregation in Federated Low-Rank Adaptation
Le-Tuan Nguyen, Minh-Duong Nguyen, Seon-Geun Jeong, Dung D. Le, Quoc-Viet Pham
arxiv.org/abs/2509.26399

@arXiv_csRO_bot@mastoxiv.page
2025-10-03 10:32:11

ARMADA: Autonomous Online Failure Detection and Human Shared Control Empower Scalable Real-world Deployment and Adaptation
Wenye Yu, Jun Lv, Zixi Ying, Yang Jin, Chuan Wen, Cewu Lu
arxiv.org/abs/2510.02298

@arXiv_csCV_bot@mastoxiv.page
2025-10-03 10:42:01

microCLIP: Unsupervised CLIP Adaptation via Coarse-Fine Token Fusion for Fine-Grained Image Classification
Sathira Silva, Eman Ali, Chetan Arora, Muhammad Haris Khan
arxiv.org/abs/2510.02270

@arXiv_csCL_bot@mastoxiv.page
2025-08-06 10:21:50

EmbedGrad: Gradient-Based Prompt Optimization in Embedding Space for Large Language Models
Xiaoming Hou, Jiquan Zhang, Zibin Lin, DaCheng Tao, Shengli Zhang
arxiv.org/abs/2508.03533

@arXiv_nlinAO_bot@mastoxiv.page
2025-08-05 17:18:00

Replaced article(s) found for nlin.AO. arxiv.org/list/nlin.AO/new
[1/1]:
- Why collective behaviours self-organise to criticality: A primer on information-theoretic and the...
Qianyang Chen, Mikhail Prokopenko

@arXiv_csCE_bot@mastoxiv.page
2025-08-04 08:51:31

Online Fine-Tuning of Carbon Emission Predictions using Real-Time Recurrent Learning for State Space Models
Julian Lemmel, Manuel Kranzl, Adam Lamine, Philipp Neubauer, Radu Grosu, Sophie Neubauer
arxiv.org/abs/2508.00804

@arXiv_csCV_bot@mastoxiv.page
2025-10-03 09:40:41

VirDA: Reusing Backbone for Unsupervised Domain Adaptation with Visual Reprogramming
Duy Nguyen, Dat Nguyen
arxiv.org/abs/2510.01660 arxiv.…

@arXiv_csCL_bot@mastoxiv.page
2025-09-05 09:50:31

MLSD: A Novel Few-Shot Learning Approach to Enhance Cross-Target and Cross-Domain Stance Detection
Parush Gera, Tempestt Neal
arxiv.org/abs/2509.03725

@arXiv_csNE_bot@mastoxiv.page
2025-09-04 10:48:48

Crosslisted article(s) found for cs.NE. arxiv.org/list/cs.NE/new
[1/1]:
- StableSleep: Source-Free Test-Time Adaptation for Sleep Staging with Lightweight Safety Rails
Hritik Arasu, Faisal R Jahangiri

@arXiv_statML_bot@mastoxiv.page
2025-09-26 09:14:31

Unsupervised Domain Adaptation with an Unobservable Source Subpopulation
Chao Ying, Jun Jin, Haotian Zhang, Qinglong Tian, Yanyuan Ma, Yixuan Li, Jiwei Zhao
arxiv.org/abs/2509.20587

@arXiv_csLG_bot@mastoxiv.page
2025-07-31 08:37:31

Prototype-Guided Pseudo-Labeling with Neighborhood-Aware Consistency for Unsupervised Adaptation
Eman Ali, Chetan Arora, Muhammad Haris Khan
arxiv.org/abs/2507.22075

@arXiv_csCL_bot@mastoxiv.page
2025-09-05 09:54:41

Measuring How (Not Just Whether) VLMs Build Common Ground
Saki Imai, Mert İnan, Anthony Sicilia, Malihe Alikhani
arxiv.org/abs/2509.03805

@arXiv_eessAS_bot@mastoxiv.page
2025-08-05 08:12:10

An Age-Agnostic System for Robust Speaker Verification
Jiusi Zheng, Vishwas Shetty, Natarajan Balaji Shankar, Abeer Alwan
arxiv.org/abs/2508.01637

@arXiv_eessSP_bot@mastoxiv.page
2025-09-30 09:35:31

Time-Frequency Analysis of Non-Uniformly Sampled Signals via Sample Density Adaptation
Ashwini Kulkarni, Santosh Nannuru
arxiv.org/abs/2509.22891

@arXiv_csHC_bot@mastoxiv.page
2025-10-01 09:08:48

User Prompting Strategies and ChatGPT Contextual Adaptation Shape Conversational Information-Seeking Experiences
Haoning Xue, Yoo Jung Oh, Xinyi Zhou, Xinyu Zhang, Berit Oxley
arxiv.org/abs/2509.25513

@arXiv_csCV_bot@mastoxiv.page
2025-08-06 10:33:00

FedPromo: Federated Lightweight Proxy Models at the Edge Bring New Domains to Foundation Models
Matteo Caligiuri, Francesco Barbato, Donald Shenaj, Umberto Michieli, Pietro Zanuttigh
arxiv.org/abs/2508.03356

@arXiv_nlinAO_bot@mastoxiv.page
2025-09-05 11:55:34

Replaced article(s) found for nlin.AO. arxiv.org/list/nlin.AO/new
[1/1]:
- Mixed-feedback oscillations in the foraging dynamics of arboreal turtle ants
Alia Valentine, Deborah M. Gordon, Anastasia Bizyaeva

@arXiv_eessIV_bot@mastoxiv.page
2025-09-04 09:24:11

Foundation Model-Driven Classification of Atypical Mitotic Figures with Domain-Aware Training Strategies
Piotr Giedziun, Jan Sołtysik, Mateusz Górczany, Norbert Ropiak, Marcin Przymus, Piotr Krajewski, Jarosław Kwiecień, Artur Bartczak, Izabela Wasiak, Mateusz Maniewski
arxiv.org/abs/2509.02601

@arXiv_csLG_bot@mastoxiv.page
2025-09-01 09:59:32

QR-LoRA: QR-Based Low-Rank Adaptation for Efficient Fine-Tuning of Large Language Models
Jessica Liang, Anirudh Bharadwaj
arxiv.org/abs/2508.21810

@arXiv_csAI_bot@mastoxiv.page
2025-07-31 08:45:51

Cross-Border Legal Adaptation of Autonomous Vehicle Design based on Logic and Non-monotonic Reasoning
Zhe Yu, Yiwei Lu, Burkhard Schafer, Zhe Lin
arxiv.org/abs/2507.22432

@arXiv_eessAS_bot@mastoxiv.page
2025-09-05 11:00:35

Crosslisted article(s) found for eess.AS. arxiv.org/list/eess.AS/new
[1/1]:
- Speech-Based Cognitive Screening: A Systematic Evaluation of LLM Adaptation Strategies
Taherinezhad, Nezhad, Karimi, Rashidi, Zolnour, Dadkhah, Haghbin, AzadMaleki, Zolnoori

@arXiv_eessIV_bot@mastoxiv.page
2025-09-04 08:33:41

Ensemble of Pathology Foundation Models for MIDOG 2025 Track 2: Atypical Mitosis Classification
Mieko Ochi, Bae Yuan
arxiv.org/abs/2509.02591

@arXiv_csHC_bot@mastoxiv.page
2025-09-24 09:46:14

Preference-Guided Multi-Objective UI Adaptation
Yao Song, Christoph Gebhardt, Yi-Chi Liao, Christian Holz
arxiv.org/abs/2509.18960 arxiv.or…

@arXiv_csRO_bot@mastoxiv.page
2025-09-03 12:34:23

Disentangled Multi-Context Meta-Learning: Unlocking robust and Generalized Task Learning
Seonsoo Kim, Jun-Gill Kang, Taehong Kim, Seongil Hong
arxiv.org/abs/2509.01297

@arXiv_csCL_bot@mastoxiv.page
2025-07-30 10:30:21

Culinary Crossroads: A RAG Framework for Enhancing Diversity in Cross-Cultural Recipe Adaptation
Tianyi Hu, Andrea Morales-Garzón, Jingyi Zheng, Maria Maistro, Daniel Hershcovich
arxiv.org/abs/2507.21934

@arXiv_nlinAO_bot@mastoxiv.page
2025-08-05 08:28:10

[2025-08-05 Tue (UTC), 2 new articles found for nlin.AO Adaptation and Self-Organizing Systems]
toXiv_bot_toot

@arXiv_csAI_bot@mastoxiv.page
2025-10-03 07:38:10

ToolBrain: A Flexible Reinforcement Learning Framework for Agentic Tools
Quy Minh Le, Minh Sao Khue Luu, Khanh-Tung Tran, Duc-Hai Nguyen, Hoang-Quoc-Viet Pham, Quan Le, Hoang Thanh Lam, Hoang D. Nguyen
arxiv.org/abs/2510.00023

@arXiv_csCV_bot@mastoxiv.page
2025-07-28 10:14:01

EA-ViT: Efficient Adaptation for Elastic Vision Transformer
Chen Zhu, Wangbo Zhao, Huiwen Zhang, Samir Khaki, Yuhao Zhou, Weidong Tang, Shuo Wang, Zhihang Yuan, Yuzhang Shang, Xiaojiang Peng, Kai Wang, Dawei Yang
arxiv.org/abs/2507.19360

@arXiv_csHC_bot@mastoxiv.page
2025-09-30 11:28:21

A Robust Multi-Scale Framework with Test-Time Adaptation for sEEG-Based Speech Decoding
Suli Wang, Yang-yang Li, Siqi Cai, Haizhou Li
arxiv.org/abs/2509.24700

@arXiv_nlinAO_bot@mastoxiv.page
2025-08-06 08:14:30

[2025-08-06 Wed (UTC), no new articles found for nlin.AO Adaptation and Self-Organizing Systems]
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-08-26 12:27:36

Type-Compliant Adaptation Cascades: Adapting Programmatic LM Workflows to Data
Chu-Cheng Lin, Daiyi Peng, Yifeng Lu, Ming Zhang, Eugene Ie
arxiv.org/abs/2508.18244

@arXiv_csAI_bot@mastoxiv.page
2025-10-03 09:20:41

MAGIC-MASK: Multi-Agent Guided Inter-Agent Collaboration with Mask-Based Explainability for Reinforcement Learning
Maisha Maliha, Dean Hougen
arxiv.org/abs/2510.00274

@arXiv_csCV_bot@mastoxiv.page
2025-07-25 10:21:22

SIDA: Synthetic Image Driven Zero-shot Domain Adaptation
Ye-Chan Kim, SeungJu Cha, Si-Woo Kim, Taewhan Kim, Dong-Jin Kim
arxiv.org/abs/2507.18632

@arXiv_csRO_bot@mastoxiv.page
2025-09-25 09:42:42

EgoBridge: Domain Adaptation for Generalizable Imitation from Egocentric Human Data
Ryan Punamiya, Dhruv Patel, Patcharapong Aphiwetsa, Pranav Kuppili, Lawrence Y. Zhu, Simar Kareer, Judy Hoffman, Danfei Xu
arxiv.org/abs/2509.19626

@arXiv_nlinAO_bot@mastoxiv.page
2025-09-05 07:53:01

[2025-09-05 Fri (UTC), no new articles found for nlin.AO Adaptation and Self-Organizing Systems]
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-09-25 10:49:22

Predictive Coding-based Deep Neural Network Fine-tuning for Computationally Efficient Domain Adaptation
Matteo Cardoni, Sam Leroux
arxiv.org/abs/2509.20269

@arXiv_csCV_bot@mastoxiv.page
2025-08-25 09:39:20

Domain Adaptation via Feature Refinement
Savvas Karatsiolis, Andreas Kamilaris
arxiv.org/abs/2508.16124 arxiv.org/pdf/2508.16124

@arXiv_csRO_bot@mastoxiv.page
2025-08-29 10:10:41

Learning on the Fly: Rapid Policy Adaptation via Differentiable Simulation
Jiahe Pan, Jiaxu Xing, Rudolf Reiter, Yifan Zhai, Elie Aljalbout, Davide Scaramuzza
arxiv.org/abs/2508.21065

@arXiv_csCL_bot@mastoxiv.page
2025-07-31 09:55:01

Resource-Efficient Adaptation of Large Language Models for Text Embeddings via Prompt Engineering and Contrastive Fine-tuning
Benedikt Roth, Stephan Rappensperger, Tianming Qiu, Hamza Imamović, Julian Wörmann, Hao Shen
arxiv.org/abs/2507.22729

@arXiv_csLG_bot@mastoxiv.page
2025-08-25 10:02:30

Closer to Reality: Practical Semi-Supervised Federated Learning for Foundation Model Adaptation
Guangyu Sun, Jingtao Li, Weiming Zhuang, Chen Chen, Chen Chen, Lingjuan Lyu
arxiv.org/abs/2508.16568

@arXiv_csCV_bot@mastoxiv.page
2025-09-01 09:41:02

MedShift: Implicit Conditional Transport for X-Ray Domain Adaptation
Francisco Caetano, Christiaan Viviers, Peter H. H. de With, Fons van der Sommen
arxiv.org/abs/2508.21435

@arXiv_csLG_bot@mastoxiv.page
2025-09-29 11:33:27

One Prompt Fits All: Universal Graph Adaptation for Pretrained Models
Yongqi Huang, Jitao Zhao, Dongxiao He, Xiaobao Wang, Yawen Li, Yuxiao Huang, Di Jin, Zhiyong Feng
arxiv.org/abs/2509.22416

@arXiv_csCV_bot@mastoxiv.page
2025-10-02 15:15:19

Replaced article(s) found for cs.CV. arxiv.org/list/cs.CV/new
[4/4]:
- SeMoBridge: Semantic Modality Bridge for Efficient Few-Shot Adaptation of CLIP
Christoph Timmermann, Hyunse Lee, Woojin Lee

@arXiv_csCV_bot@mastoxiv.page
2025-09-25 08:35:32

Parameter-Efficient Multi-Task Learning via Progressive Task-Specific Adaptation
Neeraj Gangwar, Anshuka Rangi, Rishabh Deshmukh, Holakou Rahmanian, Yesh Dattatreya, Nickvash Kani
arxiv.org/abs/2509.19602

@arXiv_csCV_bot@mastoxiv.page
2025-09-23 13:09:41

SmaRT: Style-Modulated Robust Test-Time Adaptation for Cross-Domain Brain Tumor Segmentation in MRI
Yuanhan Wang, Yifei Chen, Shuo Jiang, Wenjing Yu, Mingxuan Liu, Beining Wu, Jinying Zong, Feiwei Qin, Changmiao Wang, Qiyuan Tian
arxiv.org/abs/2509.17925

@arXiv_csCV_bot@mastoxiv.page
2025-09-19 10:34:21

Lost in Translation? Vocabulary Alignment for Source-Free Domain Adaptation in Open-Vocabulary Semantic Segmentation
Silvio Mazzucco, Carl Persson, Mattia Segu, Pier Luigi Dovesi, Federico Tombari, Luc Van Gool, Matteo Poggi
arxiv.org/abs/2509.15225