2025-12-02 07:20:45
Oh wow, Bärbel Bas remembers the "S" in #SPD:
(And SPIEGEL only reports on it because the employers think it's crap. Capitalist Realism in action ...)
watching bf play Elden Ring, it strikes me that we're close to peak graphics. the realism dial can still be turned up a bit, but games are generally better when they're a step away from reality.
what's missing, though, is physicality. when characters swing weapons at enemies, it's clearly just models playing canned animations; there's no actual contact being simulated.
a couple decades from now, perhaps average 3d games will have the processing budget for procedural ani…
Also good
Sleaford Mods Ft. Sue Tompkins - No Touch (Official Video)
https://www.youtube.com/watch?v=N1aeyshn5Z0&t=62
> No Touch is colourful and steeped in heavy realism, a kitchen sink drama. It’s a song about isolation, loneliness, and crushing self-harm,…
OFERA: Blendshape-driven 3D Gaussian Control for Occluded Facial Expression to Realistic Avatars in VR
Seokhwan Yang, Boram Yoon, Seoyoung Kang, Hail Song, Woontack Woo
https://arxiv.org/abs/2602.01748 https://arxiv.org/pdf/2602.01748 https://arxiv.org/html/2602.01748
arXiv:2602.01748v1 Announce Type: new
Abstract: We propose OFERA, a novel framework for real-time expression control of photorealistic Gaussian head avatars for VR headset users. Existing approaches attempt to recover occluded facial expressions using additional sensors or internal cameras, but sensor-based methods increase device weight and discomfort, while camera-based methods raise privacy concerns and suffer from limited access to raw data. To overcome these limitations, we leverage the blendshape signals provided by commercial VR headsets as expression inputs. Our framework consists of three key components: (1) Blendshape Distribution Alignment (BDA), which applies linear regression to align the headset-provided blendshape distribution to a canonical input space; (2) an Expression Parameter Mapper (EPM) that maps the aligned blendshape signals into an expression parameter space for controlling Gaussian head avatars; and (3) a Mapper-integrated Avatar (MiA) that incorporates EPM into the avatar learning process to ensure distributional consistency. Furthermore, OFERA establishes an end-to-end pipeline that senses and maps expressions, updates Gaussian avatars, and renders them in real-time within VR environments. We show that EPM outperforms existing mapping methods on quantitative metrics, and we demonstrate through a user study that the full OFERA framework enhances expression fidelity while preserving avatar realism. By enabling real-time and photorealistic avatar expression control, OFERA significantly improves telepresence in VR communication. A project page is available at https://ysshwan147.github.io/projects/ofera/.
toXiv_bot_toot
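A rough sketch of what the Blendshape Distribution Alignment (BDA) step from the abstract could look like: ordinary least squares fitting a linear (affine) map from headset-provided blendshape frames to a canonical input space. The function names, the affine model, and the paired-frames setup are assumptions for illustration, not details from the paper.

```python
import numpy as np

def fit_blendshape_alignment(headset_bs, canonical_bs):
    """Fit an affine map aligning the headset blendshape distribution
    to a canonical space, via ordinary least squares.

    headset_bs, canonical_bs: (N, K) arrays of paired blendshape frames.
    Returns W of shape (K+1, K) including a bias row.
    """
    n, _ = headset_bs.shape
    # Augment with a bias column so the fit includes an offset term.
    x = np.hstack([headset_bs, np.ones((n, 1))])
    w, *_ = np.linalg.lstsq(x, canonical_bs, rcond=None)
    return w

def align(headset_frame, w):
    """Map one headset blendshape frame into the canonical space."""
    return np.append(headset_frame, 1.0) @ w
```

In this toy form the regression needs paired frames from both distributions; how the paper obtains that correspondence is not stated in the abstract.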
AI image generators like Nano Banana have increased realism by mimicking phone-camera traits such as contrast, exposure, and sharpening, helping them avoid the uncanny valley (Allison Johnson/The Verge)
https://www.theverge.com/column/843883/ai-image-generators-better-worse
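As a toy illustration of two of the phone-camera traits mentioned (contrast and sharpening; exposure is omitted), a minimal sketch of that kind of post-processing on a grayscale image. The function and its parameters are invented for illustration and say nothing about any actual generator's pipeline.

```python
import numpy as np

def phone_camera_look(img, contrast=1.15, sharpen_amount=0.6):
    """Toy phone-camera processing: a linear contrast boost around
    mid-gray plus unsharp-mask sharpening.

    img: (H, W) grayscale array with values in [0, 1].
    """
    # Contrast: push values away from mid-gray, then clip to range.
    out = np.clip(0.5 + (img - 0.5) * contrast, 0.0, 1.0)
    # Unsharp mask: subtract a 3x3 box blur and add the residual back.
    h, w = out.shape
    pad = np.pad(out, 1, mode="edge")
    blur = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(out + sharpen_amount * (out - blur), 0.0, 1.0)
```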
“Ow” Isn’t Enough: Writers, We Can Do Better #CrimeFiction
https://www.bobmuellerwriter.com/ow-isnt-enough-writers-we-can-do-better/
PAColorHolo: A Perceptually-Aware Color Management Framework for Holographic Displays
Chun Chen, Minseok Chae, Seung-Woo Nam, Myeong-Ho Choi, Minseong Kim, Eunbi Lee, Yoonchan Jeong, Jae-Hyeung Park
https://arxiv.org/abs/2601.14766 https://arxiv.org/pdf/2601.14766 https://arxiv.org/html/2601.14766
arXiv:2601.14766v1 Announce Type: new
Abstract: Holographic displays offer significant potential for augmented and virtual reality applications by reconstructing wavefronts that enable continuous depth cues and natural parallax without vergence-accommodation conflict. However, despite advances in pixel-level image quality, current systems struggle to achieve perceptually accurate color reproduction, an essential component of visual realism. These challenges arise from complex system-level distortions caused by coherent laser illumination, spatial light modulator imperfections, chromatic aberrations, and camera-induced color biases. In this work, we propose a perceptually-aware color management framework for holographic displays that jointly addresses input-output color inconsistencies through color space transformation, adaptive illumination control, and neural network-based perceptual modeling of the camera's color response. We validate the effectiveness of our approach through numerical simulations, optical experiments, and a controlled user study. The results demonstrate substantial improvements in perceptual color fidelity, laying the groundwork for perceptually driven holographic rendering in future systems.
toXiv_bot_toot
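The "color space transformation" component is not specified in the abstract; as a generic example of the kind of step such a color management pipeline builds on, here is the standard sRGB to CIE XYZ conversion (transfer-function linearization followed by the IEC 61966-2-1 primaries matrix). This is a textbook conversion, not the paper's method.

```python
import numpy as np

# Standard sRGB (D65) -> CIE XYZ matrix, per IEC 61966-2-1.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def srgb_to_linear(rgb):
    """Undo the piecewise sRGB transfer function (gamma)."""
    rgb = np.asarray(rgb, dtype=float)
    return np.where(rgb <= 0.04045,
                    rgb / 12.92,
                    ((rgb + 0.055) / 1.055) ** 2.4)

def srgb_to_xyz(rgb):
    """Convert sRGB values in [0, 1] to CIE XYZ tristimulus values."""
    return srgb_to_linear(rgb) @ SRGB_TO_XYZ.T
```

Sanity check: sRGB white (1, 1, 1) should map to the D65 white point, roughly XYZ = (0.9505, 1.0000, 1.0890).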