Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@EarthOrgUK@mastodon.energy
2025-09-24 19:51:03

On the JProfiler Java Performance-tuning Tool: Review (2012) - Tuning my Java Web Server application with JProfiler to run it on smaller servers, save energy, and cut its carbon footprint... - earth.org.uk/note-on-JProfiler

@arXiv_csLG_bot@mastoxiv.page
2025-07-24 08:27:29

TD-Interpreter: Enhancing the Understanding of Timing Diagrams with Visual-Language Learning
Jie He, Vincent Theo Willem Kenbeek, Zhantao Yang, Meixun Qu, Ezio Bartocci, Dejan Ničković, Radu Grosu
arxiv.org/abs/2507.16844

@arXiv_csAI_bot@mastoxiv.page
2025-09-24 10:31:04

Data Efficient Adaptation in Large Language Models via Continuous Low-Rank Fine-Tuning
Xiao Han, Zimo Zhao, Wanyu Wang, Maolin Wang, Zitao Liu, Yi Chang, Xiangyu Zhao
arxiv.org/abs/2509.18942

@arXiv_csCV_bot@mastoxiv.page
2025-09-25 10:04:22

Robust RGB-T Tracking via Learnable Visual Fourier Prompt Fine-tuning and Modality Fusion Prompt Generation
Hongtao Yang, Bineng Zhong, Qihua Liang, Zhiruo Zhu, Yaozong Zheng, Ning Li
arxiv.org/abs/2509.19733

@arXiv_eessIV_bot@mastoxiv.page
2025-07-25 09:37:02

Parameter-Efficient Fine-Tuning of 3D DDPM for MRI Image Generation Using Tensor Networks
Binghua Li, Ziqing Chang, Tong Liang, Chao Li, Toshihisa Tanaka, Shigeki Aoki, Qibin Zhao, Zhe Sun
arxiv.org/abs/2507.18112

@arXiv_csCL_bot@mastoxiv.page
2025-08-25 10:02:40

CYCLE-INSTRUCT: Fully Seed-Free Instruction Tuning via Dual Self-Training and Cycle Consistency
Zhanming Shen, Hao Chen, Yulei Tang, Shaolin Zhu, Wentao Ye, Xiaomeng Hu, Haobo Wang, Gang Chen, Junbo Zhao
arxiv.org/abs/2508.16100

@arXiv_csCR_bot@mastoxiv.page
2025-07-25 08:24:42

TimelyHLS: LLM-Based Timing-Aware and Architecture-Specific FPGA HLS Optimization
Nowfel Mashnoor, Mohammad Akyash, Hadi Kamali, Kimia Azar
arxiv.org/abs/2507.17962

@arXiv_csLG_bot@mastoxiv.page
2025-08-25 09:59:30

RL Is Neither a Panacea Nor a Mirage: Understanding Supervised vs. Reinforcement Learning Fine-Tuning for LLMs
Hangzhan Jin, Sicheng Lv, Sifan Wu, Mohammad Hamdaqa
arxiv.org/abs/2508.16546

@arXiv_csCL_bot@mastoxiv.page
2025-09-24 10:34:34

When Long Helps Short: How Context Length in Supervised Fine-tuning Affects Behavior of Large Language Models
Yingming Zheng, Hanqi Li, Kai Yu, Lu Chen
arxiv.org/abs/2509.18762

@arXiv_csCR_bot@mastoxiv.page
2025-07-25 09:29:42

Layer-Aware Representation Filtering: Purifying Finetuning Data to Preserve LLM Safety Alignment
Hao Li, Lijun Li, Zhenghao Lu, Xianyi Wei, Rui Li, Jing Shao, Lei Sha
arxiv.org/abs/2507.18631