Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@weltenkreuzer@social.tchncs.de
2025-12-02 07:20:45

Oh, Bärbel Bas remembers the "S" in #SPD:
(And SPIEGEL only reports on it because the employers think it's crap. Capitalist Realism in action ...)

"Bas had told of gentlemen »in their comfortable armchairs, one or the other in a tailored suit«; the rejection had been clearly palpable. She had thought of the people who depend on solidarity, who have often worked physically hard and for poor pay all their lives. »And that moment showed me once again where the lines in this country really run. Not between young and old. They run not between young and old, but between poor and rich, between th…

@gray17@mastodon.social
2026-01-01 22:22:25

watching bf play Elden Ring, it strikes me that we're close to peak graphics. the realism dial can still be turned up a bit, but games are generally better when they're a step away from reality.
what's missing though is physicality. when characters swing weapons at enemies, it's clearly just sprites doing canned animation, and there's no actual contact happening.
a couple decades from now, perhaps average 3d games will have processing budget for procedural ani…

@alm10965@mastodon.social
2026-01-31 15:58:45

Also good
Sleaford Mods Ft. Sue Tompkins - No Touch (Official Video)
youtube.com/watch?v=N1aeyshn5Z
> No Touch is colourful and steeped in heavy realism, a kitchen sink drama. It’s a song about isolation, loneliness, and crushing self-harm,…

@arXiv_csGR_bot@mastoxiv.page
2026-02-03 08:20:05

OFERA: Blendshape-driven 3D Gaussian Control for Occluded Facial Expression to Realistic Avatars in VR
Seokhwan Yang, Boram Yoon, Seoyoung Kang, Hail Song, Woontack Woo
arxiv.org/abs/2602.01748 arxiv.org/pdf/2602.01748 arxiv.org/html/2602.01748
arXiv:2602.01748v1 Announce Type: new
Abstract: We propose OFERA, a novel framework for real-time expression control of photorealistic Gaussian head avatars for VR headset users. Existing approaches attempt to recover occluded facial expressions using additional sensors or internal cameras, but sensor-based methods increase device weight and discomfort, while camera-based methods raise privacy concerns and suffer from limited access to raw data. To overcome these limitations, we leverage the blendshape signals provided by commercial VR headsets as expression inputs. Our framework consists of three key components: (1) Blendshape Distribution Alignment (BDA), which applies linear regression to align the headset-provided blendshape distribution to a canonical input space; (2) an Expression Parameter Mapper (EPM) that maps the aligned blendshape signals into an expression parameter space for controlling Gaussian head avatars; and (3) a Mapper-integrated Avatar (MiA) that incorporates EPM into the avatar learning process to ensure distributional consistency. Furthermore, OFERA establishes an end-to-end pipeline that senses and maps expressions, updates Gaussian avatars, and renders them in real-time within VR environments. We show that EPM outperforms existing mapping methods on quantitative metrics, and we demonstrate through a user study that the full OFERA framework enhances expression fidelity while preserving avatar realism. By enabling real-time and photorealistic avatar expression control, OFERA significantly improves telepresence in VR communication. A project page is available at ysshwan147.github.io/projects/.
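The abstract's BDA step, aligning the headset-provided blendshape distribution to a canonical input space via linear regression, can be sketched as follows. This is an illustrative least-squares fit under assumed names and array shapes, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical sketch of Blendshape Distribution Alignment (BDA):
# fit a linear map (with bias) from headset blendshape signals to a
# canonical input space using least-squares regression.
def fit_bda(headset_bs, canonical_bs):
    """headset_bs, canonical_bs: (num_frames, num_blendshapes) arrays of
    paired calibration samples. Returns (W, b) so that
    headset_bs @ W + b approximates canonical_bs."""
    X = np.hstack([headset_bs, np.ones((headset_bs.shape[0], 1))])  # append bias column
    # Solve the least-squares problem X @ theta ~ canonical_bs
    theta, *_ = np.linalg.lstsq(X, canonical_bs, rcond=None)
    return theta[:-1], theta[-1]  # weights W, bias b

def apply_bda(headset_bs, W, b):
    """Map raw headset blendshape frames into the canonical space."""
    return headset_bs @ W + b
```

At runtime the fitted map would be applied per frame before the signals reach the expression parameter mapper; the function and variable names here are assumptions for illustration.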
toXiv_bot_toot

@Techmeme@techhub.social
2025-12-15 08:50:38

AI image generators like Nano Banana have increased realism by mimicking phone camera traits in contrast, exposure, and sharpening to avoid the uncanny valley (Allison Johnson/The Verge)
theverge.com/column/843883/ai-

@bobmueller@mastodon.world
2025-12-25 16:45:00

“Ow” Isn’t Enough: Writers, We Can Do Better #CrimeFiction
bobmuellerwriter.com/ow-isnt-e

@ripienaar@devco.social
2025-11-13 20:02:05

Dunno, maybe this kids' ride could have dialed the realism on the pig down a bit

@arXiv_csGR_bot@mastoxiv.page
2026-01-22 07:44:12

PAColorHolo: A Perceptually-Aware Color Management Framework for Holographic Displays
Chun Chen, Minseok Chae, Seung-Woo Nam, Myeong-Ho Choi, Minseong Kim, Eunbi Lee, Yoonchan Jeong, Jae-Hyeung Park
arxiv.org/abs/2601.14766 arxiv.org/pdf/2601.14766 arxiv.org/html/2601.14766
arXiv:2601.14766v1 Announce Type: new
Abstract: Holographic displays offer significant potential for augmented and virtual reality applications by reconstructing wavefronts that enable continuous depth cues and natural parallax without vergence-accommodation conflict. However, despite advances in pixel-level image quality, current systems struggle to achieve perceptually accurate color reproduction--an essential component of visual realism. These challenges arise from complex system-level distortions caused by coherent laser illumination, spatial light modulator imperfections, chromatic aberrations, and camera-induced color biases. In this work, we propose a perceptually-aware color management framework for holographic displays that jointly addresses input-output color inconsistencies through color space transformation, adaptive illumination control, and neural network-based perceptual modeling of the camera's color response. We validate the effectiveness of our approach through numerical simulations, optical experiments, and a controlled user study. The results demonstrate substantial improvements in perceptual color fidelity, laying the groundwork for perceptually driven holographic rendering in future systems.
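As background for the color space transformation the abstract mentions, a generic building block of color management is converting display RGB into a device-independent space such as CIE XYZ. The sketch below uses the standard sRGB (D65) transfer function and matrix; it is illustrative textbook material, not PAColorHolo's calibrated pipeline:

```python
import numpy as np

# Standard sRGB -> CIE XYZ (D65) conversion: an example of the kind of
# color space transformation a color management pipeline builds on.
SRGB_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])

def srgb_to_linear(rgb):
    """Undo the sRGB transfer function (gamma) on values in [0, 1]."""
    rgb = np.asarray(rgb, dtype=float)
    return np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)

def srgb_to_xyz(rgb):
    """Convert sRGB values (last axis = R, G, B) to CIE XYZ."""
    return srgb_to_linear(rgb) @ SRGB_TO_XYZ.T
```

A calibrated holographic system would replace the fixed matrix with one measured for its laser primaries, but the structure (linearize, then apply a 3×3 transform) is the same.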