Tootfinder

@deprogrammaticaipsum@mas.to
2025-11-16 09:16:29

"The scene (“Hidden Figures”, 2016) ends with Mr. Stafford famously claiming “that’s old!” as if the Pythagorean theorem was suddenly not useful anymore after 3500 years… Mr. Stafford’s reaction is canonical and very appropriate; it is the same that most devs have upon learning the fact that COBOL is running most credit card transactions, or when frontend engineers discover that static or server-rendered HTML websites do not need 10 MB of JavaScript on the browser."

@arXiv_csGR_bot@mastoxiv.page
2026-01-23 07:35:52

SplatBus: A Gaussian Splatting Viewer Framework via GPU Interprocess Communication
Yinghan Xu, Théo Morales, John Dingliana
arxiv.org/abs/2601.15431 arxiv.org/pdf/2601.15431 arxiv.org/html/2601.15431
arXiv:2601.15431v1 Announce Type: new
Abstract: Radiance field-based rendering methods have attracted significant interest from the computer vision and computer graphics communities. They enable high-fidelity rendering with complex real-world lighting effects, but at the cost of long rendering times. 3D Gaussian Splatting addresses this with a rasterisation-based approach for real-time rendering, enabling applications such as autonomous driving, robotics, virtual reality, and extended reality. However, current 3DGS implementations are difficult to integrate into traditional mesh-based rendering pipelines, a common requirement for interactive applications and artistic exploration. To address this limitation, this software solution uses Nvidia's interprocess communication (IPC) APIs to integrate easily with existing 3DGS implementations and lets the results be viewed in external clients such as Unity, Blender, Unreal Engine, and OpenGL viewers. The code is available at github.com/RockyXu66/splatbus.
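
The abstract names Nvidia's IPC APIs but not a concrete interface, so here is a minimal, hedged sketch of the underlying idea in PyTorch: a renderer process hands a GPU-resident frame to a viewer process through CUDA IPC handles rather than host-memory copies. Everything below (process names, the queue protocol, tensor shapes) is an illustrative assumption, not the SplatBus API.

```python
# Sketch of the GPU interprocess-communication idea behind SplatBus:
# a renderer process shares a CUDA frame buffer with a viewer process
# without copying through host memory. PyTorch's multiprocessing uses
# CUDA IPC (cudaIpcGetMemHandle / cudaIpcOpenMemHandle) under the hood.
# All names here (render_loop, viewer_loop, the queue protocol) are
# illustrative assumptions, not the SplatBus API.
import torch
import torch.multiprocessing as mp


def viewer_loop(frame_queue: mp.Queue) -> None:
    """Consumer process: receives CUDA tensors via IPC and 'displays' them."""
    while True:
        frame = frame_queue.get()   # zero-copy: transfers an IPC handle, not bytes
        if frame is None:           # sentinel: renderer shut down
            break
        # A real client (Unity/Blender/OpenGL viewer) would bind this memory
        # to a texture; here we just read a value to show the data is visible.
        print("viewer got frame, mean =", frame.mean().item())


def render_loop(frame_queue: mp.Queue, n_frames: int = 3) -> None:
    """Producer process: renders splats into a GPU buffer and shares it."""
    for i in range(n_frames):
        # Stand-in for the 3DGS rasteriser output: an RGBA image on the GPU.
        frame = torch.full((512, 512, 4), float(i), device="cuda")
        frame_queue.put(frame)      # passes a CUDA IPC handle to the consumer
    frame_queue.put(None)


if __name__ == "__main__":
    mp.set_start_method("spawn")    # required when sharing CUDA tensors
    queue: mp.Queue = mp.Queue()
    viewer = mp.Process(target=viewer_loop, args=(queue,))
    viewer.start()
    render_loop(queue)
    viewer.join()
```

The design point this illustrates: because only a memory handle crosses the process boundary, the viewer can live in an entirely different runtime (a game engine, a DCC tool) without linking against the renderer's code.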

@jlpiraux@wallonie-bruxelles.social
2026-01-18 15:08:46

The use of private jets to travel to the WEF tripled between 2023 and 2025
rts.ch/info/suisse/2026/articl

@arXiv_csGR_bot@mastoxiv.page
2026-01-22 08:05:37

CAG-Avatar: Cross-Attention Guided Gaussian Avatars for High-Fidelity Head Reconstruction
Zhe Chang, Haodong Jin, Yan Song, Hui Yu
arxiv.org/abs/2601.14844 arxiv.org/pdf/2601.14844 arxiv.org/html/2601.14844
arXiv:2601.14844v1 Announce Type: new
Abstract: Creating high-fidelity, real-time drivable 3D head avatars is a core challenge in digital animation. While 3D Gaussian Splatting (3D-GS) offers unprecedented rendering speed and quality, current animation techniques often rely on a "one-size-fits-all" global tuning approach, where all Gaussian primitives are uniformly driven by a single expression code. This simplistic approach fails to disentangle the distinct dynamics of different facial regions, such as deformable skin versus rigid teeth, leading to significant blurring and distortion artifacts. We introduce Conditionally-Adaptive Gaussian Avatars (CAG-Avatar), a framework that resolves this key limitation. At its core is a Conditionally Adaptive Fusion Module built on cross-attention. This mechanism empowers each 3D Gaussian to act as a query, adaptively extracting relevant driving signals from the global expression code based on its canonical position. This "tailor-made" conditioning strategy drastically enhances the modeling of fine-grained, localized dynamics. Our experiments confirm a significant improvement in reconstruction fidelity, particularly for challenging regions such as teeth, while preserving real-time rendering performance.
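
To make the "each Gaussian as a query" mechanism concrete, here is a small PyTorch sketch of cross-attention where canonical Gaussian positions form the queries and a tokenised global expression code supplies the keys and values. All dimensions, the tokenisation scheme, and the module name are assumptions for illustration; the paper's actual Conditionally Adaptive Fusion Module may differ.

```python
# Hedged sketch of the cross-attention conditioning described in the
# abstract: each 3D Gaussian queries a global expression code, so every
# primitive extracts the driving signal relevant to its canonical position.
# Dimensions and the tokenisation of the expression code are assumptions.
import torch
import torch.nn as nn


class AdaptiveExpressionFusion(nn.Module):
    def __init__(self, feat_dim: int = 64, n_tokens: int = 16):
        super().__init__()
        # Embed each Gaussian's canonical (x, y, z) centre into a query.
        self.query_proj = nn.Linear(3, feat_dim)
        # Assumed: the global expression code splits into n_tokens tokens
        # of feat_dim channels each, used as keys and values.
        self.n_tokens, self.feat_dim = n_tokens, feat_dim
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4,
                                          batch_first=True)

    def forward(self, canon_pos: torch.Tensor,
                expr_code: torch.Tensor) -> torch.Tensor:
        # canon_pos: (N, 3) canonical Gaussian centres
        # expr_code: (n_tokens * feat_dim,) global expression code
        q = self.query_proj(canon_pos).unsqueeze(0)            # (1, N, d)
        kv = expr_code.view(1, self.n_tokens, self.feat_dim)   # (1, T, d)
        out, _ = self.attn(q, kv, kv)   # per-Gaussian driving signal
        return out.squeeze(0)           # (N, d)


# Usage: 10k Gaussians each receive a tailor-made driving feature,
# instead of all sharing one globally broadcast expression code.
fusion = AdaptiveExpressionFusion()
signals = fusion(torch.randn(10_000, 3), torch.randn(16 * 64))
print(signals.shape)  # torch.Size([10000, 64])
```

The contrast with the "one-size-fits-all" baseline is the query step: a global-tuning approach would broadcast the same expression vector to every primitive, whereas here the attention weights differ per Gaussian, letting rigid teeth and deformable skin attend to different parts of the code.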