Who has two thumbs and just installed Netscape Navigator into a Windows NT 4.0 VM to find out what the <blink> tag timings actually are? Yes, this guy.
0.9s shown followed by 0.73s hidden
xAI engineer Sulaiman Ghori says he has "left" the company days after appearing on a podcast last week claiming that xAI had been skirting regulations and more (AJ Dellinger/Gizmodo)
https://gizmodo.com/engineer-at-elon-m
Fantastic timing: the podcast Videnskabens Vindere has just released an episode about Von Laue, whose Nobel Prize medal was kept hidden from the Nazis in Denmark: https://podcasts.apple.com/dk/podcast/periodisk-videnskabens-vindere/id1666568050?…
Speaking of sounds from this corner of the world, we have a new album from Baby Cohete, a young pop and alternative band, titled «El ataque de los niños Bomba»
https://album.link/s/3Vsghr8BImVrjX4em45XbO
Easy Adaptation: An Efficient Task-Specific Knowledge Injection Method for Large Models in Resource-Constrained Environments
Dong Chen, Zhengqing Hu, Shixing Zhao, Yibo Guo
https://arxiv.org/abs/2512.17771 https://arxiv.org/pdf/2512.17771 https://arxiv.org/html/2512.17771
arXiv:2512.17771v1 Announce Type: new
Abstract: While the enormous parameter scale endows Large Models (LMs) with unparalleled performance, it also limits their adaptability to specific tasks. Parameter-Efficient Fine-Tuning (PEFT) has emerged as a critical approach for effectively adapting LMs to a diverse range of downstream tasks. However, existing PEFT methods face two primary challenges. (1) High resource cost: although PEFT significantly reduces resource demands compared to full fine-tuning, it still requires substantial time and memory, making it impractical in resource-constrained environments. (2) Parameter dependency: PEFT methods rely on updating a subset of an LM's parameters to incorporate task-specific knowledge. Yet, amid increasing competition in the LM landscape, many companies have adopted closed-source policies for their leading models, offering access only via Application Programming Interfaces (APIs), where fine-tuning is often cost-prohibitive and difficult to sustain because the process is extremely slow. Although small models perform far worse than LMs in general, they can achieve superior results on particular distributions while requiring only minimal resources. Motivated by this insight, we propose Easy Adaptation (EA), which designs Specific Small Models (SSMs) to complement the underfitted data distribution for LMs. Extensive experiments show that EA matches the performance of PEFT on diverse tasks without accessing LM parameters, and requires only minimal resources.
toXiv_bot_toot
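The abstract's core idea can be sketched as a router: serve queries that fall in a known task distribution with a cheap Specific Small Model, and fall back to the closed-source LM's API otherwise. This is only an illustration of the stated insight, not the paper's actual mechanism; the names `fits_task_distribution`, `ssm`, and `call_lm_api` are hypothetical placeholders.

```python
def ea_route(query, ssm, call_lm_api, fits_task_distribution):
    """Hypothetical sketch of the Easy Adaptation idea: complement an
    API-only Large Model with a task-specific small model (SSM).

    No LM parameters are touched; the SSM covers the distribution the
    LM underfits, and everything else goes to the LM API.
    """
    if fits_task_distribution(query):
        return ssm(query)        # cheap, local, task-specialized
    return call_lm_api(query)    # general-purpose fallback


# Toy stand-ins just to exercise the routing logic:
ssm = lambda q: "ssm:" + q
call_lm_api = lambda q: "lm:" + q
in_task = lambda q: q.startswith("task")

print(ea_route("task: classify this ticket", ssm, call_lm_api, in_task))
print(ea_route("write me a sonnet", ssm, call_lm_api, in_task))
```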
🇺🇦 #NowPlaying on KEXP's #VarietyMix
Tomo Nakayama:
🎵 Hidakamura
#TomoNakayama
https://tomomusic.bandcamp.com/album/gilda-hidakamura
https://open.spotify.com/track/0RRIku9NShxFaooKulzflp
Device Activity Tracker — WhatsApp & Signal Activity Tracker via RTT Analysis
A phone number can reveal whether a device is active, in standby, or offline (and more). This PoC demonstrates how delivery-receipt RTT timing leaks sensitive device-activity patterns. (WhatsApp / Signal)
📱 https://github.com/…
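The classification step behind such a side channel can be sketched as thresholding on delivery-receipt round-trip times. The thresholds and sample values below are invented for illustration; real RTT distributions depend on the app, network, and push-notification behavior, and this is not the linked PoC's code.

```python
import statistics

# Hypothetical thresholds (seconds) for illustration only:
ACTIVE_MAX_S = 1.0    # receipt returns quickly: app likely in foreground
STANDBY_MAX_S = 8.0   # receipt delayed by push wake-up: device in standby

def classify_device_state(rtt_samples_s):
    """Guess device activity from delivery-receipt RTT samples.

    Uses the median to dampen one-off network jitter; an empty sample
    list (no receipts at all) is treated as offline.
    """
    if not rtt_samples_s:
        return "offline"
    median = statistics.median(rtt_samples_s)
    if median <= ACTIVE_MAX_S:
        return "active"
    if median <= STANDBY_MAX_S:
        return "standby"
    return "offline"

print(classify_device_state([0.3, 0.4, 0.5]))  # active
print(classify_device_state([3.0, 5.0, 6.5]))  # standby
```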