Optimizing Neural Networks with Learnable Non-Linear Activation Functions via Lookup-Based FPGA Acceleration
Mengyuan Yin, Benjamin Chen Ming Choong, Chuping Qu, Rick Siow Mong Goh, Weng-Fai Wong, Tao Luo
https://arxiv.org/abs/2508.17069
Learned activation functions in models like Kolmogorov-Arnold Networks (KANs) outperform fixed-activation architectures in accuracy and interpretability; however, their computational complexity poses critical challenges for energy-constrained edge AI deployments. Conventional CPUs/GPUs incur prohibitive latency and power costs when evaluating higher-order activations, limiting deployability under ultra-tight energy budgets. We address this via a reconfigurable lookup architecture with …
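
The abstract is truncated above, but the core idea it names, replacing direct evaluation of a learned non-linear activation with a table lookup, can be illustrated with a minimal sketch. The Python snippet below is only an illustration under assumed design choices (a uniform sampling grid, a clamped input range, and linear interpolation between entries); build_lut, lut_activation, and the stand-in learned_act function are hypothetical names, not the paper's implementation.

    import numpy as np

    def build_lut(activation_fn, x_min=-4.0, x_max=4.0, n_entries=256):
        """Sample a learned 1-D activation on a uniform grid to form a lookup table."""
        grid = np.linspace(x_min, x_max, n_entries)
        return grid, activation_fn(grid)

    def lut_activation(x, grid, table):
        """Evaluate the activation by indexing the LUT with linear interpolation."""
        x = np.clip(x, grid[0], grid[-1])  # clamp to the sampled range, as hardware would
        return np.interp(x, grid, table)

    # Stand-in for a learned per-edge activation (e.g., a KAN spline); purely illustrative.
    learned_act = lambda x: x * np.tanh(np.exp(0.5 * x))

    grid, table = build_lut(learned_act)
    x = np.random.randn(8)
    print(np.max(np.abs(lut_activation(x, grid, table) - learned_act(x))))  # small approximation error

On an FPGA, such a table would presumably live in on-chip memory, so a different learned activation can be loaded by rewriting the table contents rather than re-synthesizing the design, which is what makes a lookup-based architecture reconfigurable.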