China's MiniMax releases M2.1, an upgrade to its open-source M2 model that it says has "significantly enhanced" coding capabilities in Rust, Java, and others (MiniMax)
https://www.minimax.io/news/minimax-m21
Chargers will need to sustain offensive progress from win over Cowboys to be Super Bowl contenders https://www.foxsports.com/articles/nfl/chargers-will-need-to-sustain-offensive-progress-from-win-over-cowboys…
A year after the Trump administration began the dismantlement of USAID,
it is initiating a new round of significant cuts to foreign assistance.
This time, programs that survived the initial purge precisely because they were judged to be lifesaving are slated for cancellation.
According to an internal State Department email obtained by The Atlantic,
the administration will soon end all of the humanitarian funding it is currently providing as part of a “responsible exi…
CAG-Avatar: Cross-Attention Guided Gaussian Avatars for High-Fidelity Head Reconstruction
Zhe Chang, Haodong Jin, Yan Song, Hui Yu
https://arxiv.org/abs/2601.14844 https://arxiv.org/pdf/2601.14844 https://arxiv.org/html/2601.14844
arXiv:2601.14844v1 Announce Type: new
Abstract: Creating high-fidelity, real-time drivable 3D head avatars is a core challenge in digital animation. While 3D Gaussian Splatting (3D-GS) offers unprecedented rendering speed and quality, current animation techniques often rely on a "one-size-fits-all" global tuning approach in which all Gaussian primitives are uniformly driven by a single expression code. This simplistic approach fails to disentangle the distinct dynamics of different facial regions, such as deformable skin versus rigid teeth, leading to significant blurring and distortion artifacts. We introduce Conditionally-Adaptive Gaussian Avatars (CAG-Avatar), a framework that resolves this key limitation. At its core is a Conditionally Adaptive Fusion Module built on cross-attention. This mechanism empowers each 3D Gaussian to act as a query, adaptively extracting relevant driving signals from the global expression code based on its canonical position. This "tailor-made" conditioning strategy drastically enhances the modeling of fine-grained, localized dynamics. Our experiments confirm a significant improvement in reconstruction fidelity, particularly for challenging regions such as teeth, while preserving real-time rendering performance.
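The per-Gaussian conditioning the abstract describes can be sketched as ordinary cross-attention: each Gaussian's canonical position forms a query that attends over tokens of the global expression code. This is a minimal illustrative sketch, not the paper's implementation; the shapes, projection matrices, and token split of the expression code are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def per_gaussian_cross_attention(positions, expr_tokens, Wq, Wk, Wv):
    """Each Gaussian's canonical position is projected to a query;
    the expression-code tokens supply keys and values, so every
    Gaussian extracts its own driving signal (assumed layout)."""
    Q = positions @ Wq                 # (N, d) one query per Gaussian
    K = expr_tokens @ Wk               # (T, d)
    V = expr_tokens @ Wv               # (T, d)
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)  # (N, T)
    return attn @ V                    # (N, d) per-Gaussian driving signal

rng = np.random.default_rng(0)
N, T, p, e, d = 5, 8, 3, 16, 32        # Gaussians, tokens, pos dim, expr dim, model dim
positions = rng.normal(size=(N, p))
expr_tokens = rng.normal(size=(T, e))  # global expression code split into T tokens
Wq = rng.normal(size=(p, d))
Wk = rng.normal(size=(e, d))
Wv = rng.normal(size=(e, d))
signals = per_gaussian_cross_attention(positions, expr_tokens, Wq, Wk, Wv)
print(signals.shape)  # (5, 32)
```

The point of the design, as the abstract frames it, is that Gaussians in different regions (skin vs. teeth) can attend to different parts of the expression code instead of all receiving the same global signal.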
Easy Adaptation: An Efficient Task-Specific Knowledge Injection Method for Large Models in Resource-Constrained Environments
Dong Chen, Zhengqing Hu, Shixing Zhao, Yibo Guo
https://arxiv.org/abs/2512.17771 https://arxiv.org/pdf/2512.17771 https://arxiv.org/html/2512.17771
arXiv:2512.17771v1 Announce Type: new
Abstract: While the enormous parameter scale endows Large Models (LMs) with unparalleled performance, it also limits their adaptability to specific tasks. Parameter-Efficient Fine-Tuning (PEFT) has emerged as a critical approach for adapting LMs to a diverse range of downstream tasks. However, existing PEFT methods face two primary challenges. (1) High resource cost: although PEFT significantly reduces resource demands compared to full fine-tuning, it still requires substantial time and memory, making it impractical in resource-constrained environments. (2) Parameter dependency: PEFT methods rely on updating a subset of the LM's parameters to incorporate task-specific knowledge. Yet, amid increasing competition in the LM landscape, many companies have adopted closed-source policies for their leading models, offering access only via Application Programming Interfaces (APIs), where fine-tuning is often cost-prohibitive and difficult to sustain because the process is extremely slow. Although small models generally perform far worse than LMs, they can achieve superior results on particular distributions while requiring only minimal resources. Motivated by this insight, we propose Easy Adaptation (EA), which trains Specific Small Models (SSMs) to complement LMs on the data distributions they underfit. Extensive experiments show that EA matches the performance of PEFT on diverse tasks without accessing LM parameters, while requiring only minimal resources.
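One natural reading of the EA idea is a router: a Specific Small Model handles inputs from the narrow distribution it was trained on, and everything else falls back to the closed-source LM API. This is an illustrative sketch of that reading only, not the paper's method; `ssm_sentiment`, `lm_api`, and the confidence threshold are all hypothetical placeholders.

```python
def ssm_sentiment(text):
    """Toy Specific Small Model: confident only on its narrow
    keyword distribution, near-zero confidence elsewhere."""
    pos, neg = {"great", "good", "love"}, {"bad", "awful", "hate"}
    words = set(text.lower().split())
    hits_p, hits_n = len(words & pos), len(words & neg)
    if hits_p + hits_n == 0:
        return "unknown", 0.0
    label = "positive" if hits_p >= hits_n else "negative"
    return label, (hits_p + hits_n) / len(words)

def lm_api(text):
    """Stand-in for a closed-source LM reached over an API; in the
    sketch it just returns a canned answer."""
    return "positive"

def easy_adapt_route(text, threshold=0.3):
    """Use the SSM when it is confident; otherwise pay for the LM call."""
    label, conf = ssm_sentiment(text)
    if conf >= threshold:
        return label, "ssm"
    return lm_api(text), "lm"

print(easy_adapt_route("great movie"))            # ('positive', 'ssm')
print(easy_adapt_route("the plot was intricate")) # ('positive', 'lm')
```

Under this reading, no LM parameters are ever touched, which matches the abstract's claim that EA injects task-specific knowledge without parameter access.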
Why the Baltic Sea still chokes after decades of nutrient controls #BalticSea
OVH Cloud (#AS16276) is raising prices for "additional" IPv4 addresses from $2 USD to $2.39/month, a nearly 20% increase, effective April 1, 2026 (no joke). They stated:
"In recent months, the global market has seen the cost of critical infrastructure components climb significantly and steadily. These cost pressures directly impact server hardware availability, and also affect the overall susta…
Progressive candidate Nida Allam has launched a primary challenge to a House Democrat from North Carolina,
seeking a redo of the 2022 contest that saw significant interference by the pro-Israel lobby and corporate interests.
Nida Allam announced her campaign on Thursday,
with backing from Sen. Bernie Sanders (I-Vermont) and a slate of progressive groups at launch.
She’s running on a platform supporting issues like Medicare for All and affordable housing,
and has …