Personalized Feature Translation for Expression Recognition: An Efficient Source-Free Domain Adaptation Method
Masoumeh Sharafi, Soufiane Belharbi, Houssem Ben Salem, Ali Etemad, Alessandro Lameiras Koerich, Marco Pedersoli, Simon Bacon, Eric Granger
https://arxiv.org/abs/2508.09202
Adaptique: Multi-objective and Context-aware Online Adaptation of Selection Techniques in Virtual Reality
Chao-Jung Lai, Mauricio Sousa, Tianyu Zhang, Ludwig Sidenmark, Tovi Grossman
https://arxiv.org/abs/2508.08505
Enhancing Human-Robot Collaboration: A Sim2Real Domain Adaptation Algorithm for Point Cloud Segmentation in Industrial Environments
Fatemeh Mohammadi Amin, Darwin G. Caldwell, Hans Wernher van de Venn
https://arxiv.org/abs/2506.09552
DINOMotion: advanced robust tissue motion tracking with DINOv2 in 2D-Cine MRI-guided radiotherapy
Soorena Salari, Catherine Spino, Laurie-Anne Pharand, Fabienne Lathuiliere, Hassan Rivaz, Silvain Beriault, Yiming Xiao
https://arxiv.org/abs/2508.10260
Predictive Position Control for Movable Antenna Arrays in UAV Communications: A Spatio-Temporal Transformer-LSTM Framework
Kan Yu, Kaixuan Li, Xiaowu Liu, Qixun Zhang, Zhiyong Feng
https://arxiv.org/abs/2508.10720
Not Only Consistency: Enhance Test-Time Adaptation with Spatio-temporal Inconsistency for Remote Physiological Measurement
Xiao Yang, Yuxuan Fan, Can Liu, Houcheng Su, Weichen Guo, Jiyao Wang, Dengbo He
https://arxiv.org/abs/2507.07908
Enabling On-demand Guaranteed QoS for Real Time Video Streaming from Vehicles in 5G Advanced with CAPIF & NEF APIs
Pietro Piscione, Leonardo Lossi, Maziar Nekovee, Chathura Galkandage, Phil O Connor, Simon Davies
https://arxiv.org/abs/2508.09150
Replaced article(s) found for nlin.AO. https://arxiv.org/list/nlin.AO/new
[1/1]:
- Waveform proportionality and Taylor's law in coupled Lorenz systems
Yuzuru Mitsui, Hiroshi Kori
Facilitating Longitudinal Interaction Studies of AI Systems
Tao Long, Sitong Wang, Émilie Fabre, Tony Wang, Anup Sathya, Jason Wu, Savvas Petridis, Dingzeyu Li, Tuhin Chakrabarty, Yue Jiang, Jingyi Li, Tiffany Tseng, Ken Nakagaki, Qian Yang, Nikolas Martelaro, Jeffrey V. Nickerson, Lydia B. Chilton
https://arxiv.org/abs/2508.10252
Bridging Classical and Quantum Computing for Next-Generation Language Models
Yi Pan, Hanqi Jiang, Junhao Chen, Yiwei Li, Huaqin Zhao, Lin Zhao, Yohannes Abate, Yingfeng Wang, Tianming Liu
https://arxiv.org/abs/2508.07026
Bridging Simulation and Experiment: A Self-Supervised Domain Adaptation Framework for Concrete Damage Classification
Chen Xu, Giao Vu, Ba Trung Cao, Zhen Liu, Fabian Diewald, Yong Yuan, Günther Meschke
https://arxiv.org/abs/2508.04538
idk why people seem shocked by the homestuck adaptation, like as a total outsider if u asked me to make up a way to expand the "homestuck brand" a year ago id sleepwalk into getting vivziepop and toby fox in just off vibes of the types online who devote their souls to that shit
#TuneTuesday (June 10)
For #PrideMonth 🏳️🌈, I’m selecting #HeatherSmall’s song “Proud”, notably used in the American adaptation of the TV show
Replaced article(s) found for nlin.AO. https://arxiv.org/list/nlin.AO/new
[1/1]:
- Rhythmic sharing: A bio-inspired paradigm for zero-shot adaptive learning in neural networks
Hoony Kang, Wolfgang Losert
A Hamilton-Jacobi approach for the evolutionary dynamics of a model with gene transfer: characterizing monomorphic dynamics for non-concave fitness functions
Alejandro Gárriz (UGR), Sepideh Mirrahimi (IMT)
https://arxiv.org/abs/2508.07886
MAESTRO: Masked AutoEncoders for Multimodal, Multitemporal, and Multispectral Earth Observation Data
Antoine Labatie, Michael Vaccaro, Nina Lardiere, Anatol Garioud, Nicolas Gonthier
https://arxiv.org/abs/2508.10894
Degrowth: a dead end or the way out? Capital’s future scam
Aurora Despierta. In the series Prospects for Degrowth. This article by the Spanish writer Aurora Despierta is her adaptation (highly summarised for translation into English), for the series Prospects for Degrowth, of the original article ‘Decrecer ¿callejón o salida? La futura estafa del capital’ (2-5-2025). Translated by Mark Burton and Anna Gregoletto. For a radically anti-capitalist degrowth that cannot be…
Crosslisted article(s) found for nlin.AO. https://arxiv.org/list/nlin.AO/new
[1/1]:
- Dynamic mode decomposition for detecting transient activity via sparsity and smoothness regulariz...
Yutaro Tanaka, Hiroya Nakao
Civil Society in the Loop: Feedback-Driven Adaptation of (L)LM-Assisted Classification in an Open-Source Telegram Monitoring Tool
Milena Pustet, Elisabeth Steffen, Helena Mihaljević, Grischa Stanjek, Yannis Illies
https://arxiv.org/abs/2507.06734
Mitigating Multi-Sequence 3D Prostate MRI Data Scarcity through Domain Adaptation using Locally-Trained Latent Diffusion Models for Prostate Cancer Detection
Emerson P. Grabke, Babak Taati, Masoom A. Haider
https://arxiv.org/abs/2507.06384
going through my months-old screenshots and this motherfucker is a recurring pattern
[2025-08-15 Fri (UTC), 1 new article found for nlin.AO Adaptation and Self-Organizing Systems]
toXiv_bot_toot
O_FT@EvalLLM2025: A Comparative Study of Data Choices and Learning Strategies for Adapting Language Models to a Domain
Ismaël Rousseau, Claire Perroux, Pierre Adam, Thomas Girault, Lionel Delphin-Poulat, Morgan Veyret, Gwénolé Lecorvé, Géraldine Damnati
https://
[2025-07-14 Mon (UTC), 2 new articles found for nlin.AO Adaptation and Self-Organizing Systems]
Discrepancy-Aware Contrastive Adaptation in Medical Time Series Analysis
Yifan Wang, Hongfeng Ai, Ruiqi Li, Maowei Jiang, Ruiyuan Kang, Jiahua Dong, Cheng Jiang, Chenzhong Li
https://arxiv.org/abs/2508.05572
Why Evolve When You Can Adapt? Post-Evolution Adaptation of Genetic Memory for On-the-Fly Control
Hamze Hammami, Eva Denisa Barbulescu, Talal Shaikh, Mouayad Aldada, Muhammad Saad Munawar
https://arxiv.org/abs/2508.03600
LoRAShield: Data-Free Editing Alignment for Secure Personalized LoRA Sharing
Jiahao Chen, Junhao Li, Yiming Wang, Zhe Ma, Yi Jiang, Chunyi Zhou, Qingming Li, Tianyu Du, Shouling Ji
https://arxiv.org/abs/2507.07056
[2025-08-14 Thu (UTC), no new articles found for nlin.AO Adaptation and Self-Organizing Systems]
Skip a Layer or Loop it? Test-Time Depth Adaptation of Pretrained LLMs
Ziyue Li, Yang Li, Tianyi Zhou
https://arxiv.org/abs/2507.07996
arXiv:2507.07996v1 Announce Type: new
Abstract: Can a pretrained neural network adapt its architecture to different inputs without any finetuning? Do we need all layers for simple tasks, and are they adequate for challenging tasks? We found that the layers of a pretrained large language model (LLM) can be manipulated as separate modules to build a better and even shallower model customized for each test sample. In particular, each layer from the pretrained model can be skipped/pruned or repeated multiple times as recurrent neural networks (RNN), and stacked with others in arbitrary orders, yielding a chain-of-layers (CoLa) per sample. This compositional space greatly expands the scope of existing works on looped/recurrent pretrained modules, layer pruning, or early-exit networks. We develop a Monte Carlo Tree Search (MCTS) protocol to explore and identify the optimal CoLa for each sample from math and commonsense reasoning benchmarks. Compared to a static model of a fixed depth, CoLa allows shortcut paths (fast thinking), recurrence of the same layer(s) (slow thinking), and combining both, offering more flexible, dynamic architectures for different inputs. We conduct an extensive analysis of the MCTS-optimized CoLa, which leads to two key findings: (1) For >75% of samples with correct predictions by the original LLM, we can find shorter CoLa, suggesting a large space for improving inference efficiency; (2) For >60% of samples with originally incorrect predictions, we can identify CoLa achieving correct predictions, suggesting a large space of performance enhancement. Our results highlight the shortcomings of using a fixed architecture of pre-trained LLMs for inference on different samples and pave the way to unlock the generalization power of test-time depth adaptation.
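The per-sample chain-of-layers idea in the abstract can be illustrated with a toy sketch: pretrained layers are treated as interchangeable modules, and a "chain" is just a sequence of layer indices in which an index may be repeated (recurrence) or omitted (skipping). The layer functions and chains below are hypothetical stand-ins, not the paper's model or its MCTS search.

```python
def apply_cola(hidden, layers, chain):
    """Run `hidden` through `layers` in the order given by `chain`.

    `chain` is a list of layer indices; an index may appear several
    times (looping a layer) or not at all (pruning/skipping it).
    """
    for idx in chain:
        hidden = layers[idx](hidden)
    return hidden

# Three toy "layers" acting on a scalar hidden state.
layers = [lambda h: h + 1, lambda h: h * 2, lambda h: h - 3]

# Static model: every layer once, in order: ((0 + 1) * 2) - 3 = -1
print(apply_cola(0, layers, [0, 1, 2]))

# A shallower chain that skips layer 2 (fast thinking): (0 + 1) * 2 = 2
print(apply_cola(0, layers, [0, 1]))

# A chain that loops layer 1 twice (slow thinking): ((0 + 1) * 2) * 2 = 4
print(apply_cola(0, layers, [0, 1, 1]))
```

In the paper's setting, the search over such chains is done per test sample with MCTS; this sketch only shows why the compositional space subsumes layer pruning, looped modules, and early exit as special cases of one chain representation.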
Neutralizing Token Aggregation via Information Augmentation for Efficient Test-Time Adaptation
Yizhe Xiong, Zihan Zhou, Yiwen Liang, Hui Chen, Zijia Lin, Tianxiang Hao, Fan Zhang, Jungong Han, Guiguang Ding
https://arxiv.org/abs/2508.03388
Culinary Crossroads: A RAG Framework for Enhancing Diversity in Cross-Cultural Recipe Adaptation
Tianyi Hu, Andrea Morales-Garzón, Jingyi Zheng, Maria Maistro, Daniel Hershcovich
https://arxiv.org/abs/2507.21934
Replaced article(s) found for nlin.AO. https://arxiv.org/list/nlin.AO/new
[1/1]:
- Dynamical systems on torus related to general Heun equations: phase-lock areas and constriction b...
Artem Alexandrov, Alexey Glutsyuk
Replaced article(s) found for cs.CV. https://arxiv.org/list/cs.CV/new
[6/9]:
- UltraAD: Fine-Grained Ultrasound Anomaly Classification via Few-Shot CLIP Adaptation
Yue Zhou, Yuan Bi, Wenjuan Tong, Wei Wang, Nassir Navab, Zhongliang Jiang
φ-Adapt: A Physics-Informed Adaptation Learning Approach to 2D Quantum Material Discovery
Hoang-Quan Nguyen, Xuan Bac Nguyen, Sankalp Pandey, Tim Faltermeier, Nicholas Borys, Hugh Churchill, Khoa Luu
https://arxiv.org/abs/2507.05184
Box Pose and Shape Estimation and Domain Adaptation for Large-Scale Warehouse Automation
Xihang Yu, Rajat Talak, Jingnan Shi, Ulrich Viereck, Igor Gilitschenski, Luca Carlone
https://arxiv.org/abs/2507.00984
Efficient Industrial sLLMs through Domain Adaptive Continual Pretraining: Method, Evaluation and Applications
Seonwu Kim, Yohan Na, Kihun Kim, Hanhee Cho, Geun Lim, Mintae Kim, Seongik Park, Ki Hyun Kim, Youngsub Han, Byoung-Ki Jeon
https://arxiv.org/abs/2507.06795
[2025-06-13 Fri (UTC), no new articles found for nlin.AO Adaptation and Self-Organizing Systems]
EA-ViT: Efficient Adaptation for Elastic Vision Transformer
Chen Zhu, Wangbo Zhao, Huiwen Zhang, Samir Khaki, Yuhao Zhou, Weidong Tang, Shuo Wang, Zhihang Yuan, Yuzhang Shang, Xiaojiang Peng, Kai Wang, Dawei Yang
https://arxiv.org/abs/2507.19360
[2025-08-13 Wed (UTC), no new articles found for nlin.AO Adaptation and Self-Organizing Systems]
Replaced article(s) found for cs.CV. https://arxiv.org/list/cs.CV/new
[3/4]:
- Adaptation of Multi-modal Representation Models for Multi-task Surgical Computer Vision
Soham Walimbe, Britty Baby, Vinkle Srivastav, Nicolas Padoy