2026-02-11 07:28:36
Opening Sequences: The Narrative Architecture of TV Titles https://call-for-papers.sas.upenn.edu/cfp/2026/02/09/opening-sequences-the-narrative-architecture-of-tv-titles
For millennia, the “source code” of human survival was written in the soil, held in the physical seeds exchanged by farmers across the Eastern Ghats, the Andes and the African savannah. Today, that code has been digitised into ATGC sequences, stored on servers largely located in the Global North and increasingly governed by intellectual property regimes that risk excluding the very communities that developed this biological heritage. The issue is no longer lim…
Learning to Build Shapes by Extrusion
Thor Vestergaard Christiansen, Karran Pandey, Alba Reinders, Karan Singh, Morten Rieger Hannemose, J. Andreas Bærentzen
https://arxiv.org/abs/2601.22858 https://arxiv.org/pdf/2601.22858 https://arxiv.org/html/2601.22858
arXiv:2601.22858v1 Announce Type: new
Abstract: We introduce Text Encoded Extrusion (TEE), a text-based representation that expresses mesh construction as sequences of face extrusions rather than polygon lists, and a method for generating 3D meshes from TEE using a large language model (LLM). By learning extrusion sequences that assemble a mesh, similar to the way artists create meshes, our approach naturally supports arbitrary output face counts and produces manifold meshes by design, in contrast to recent transformer-based models. The learnt extrusion sequences can also be applied to existing meshes - enabling editing in addition to generation. To train our model, we decompose a library of quadrilateral meshes with non-self-intersecting face loops into constituent loops, which can be viewed as their building blocks, and finetune an LLM on the steps for reassembling the meshes by performing a sequence of extrusions. We demonstrate that our representation enables reconstruction, novel shape synthesis, and the addition of new features to existing meshes.
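To make the extrusion-sequence idea concrete, here is a minimal Python sketch of building a quad mesh by repeatedly extruding a face along its normal; the class names and the single-face extrusion rule are illustrative assumptions, not the authors' TEE format.

import numpy as np

class QuadMesh:
    """Toy quad mesh built by a sequence of face extrusions (illustration only)."""
    def __init__(self):
        self.vertices = []  # 3D points
        self.faces = []     # 4-tuples of vertex indices

    def add_vertex(self, p):
        self.vertices.append(np.asarray(p, dtype=float))
        return len(self.vertices) - 1

    def add_face(self, idx):
        self.faces.append(tuple(idx))
        return len(self.faces) - 1

    def face_normal(self, f):
        a, b, c, _ = (self.vertices[i] for i in self.faces[f])
        n = np.cross(b - a, c - a)
        return n / (np.linalg.norm(n) + 1e-12)

    def extrude_face(self, f, distance):
        # Move face f along its normal and stitch four side quads between the
        # old and new vertex rings; the moved face becomes the new cap.
        old = self.faces[f]
        n = self.face_normal(f)
        new = [self.add_vertex(self.vertices[i] + distance * n) for i in old]
        for k in range(4):
            self.add_face([old[k], old[(k + 1) % 4], new[(k + 1) % 4], new[k]])
        self.faces[f] = tuple(new)

# Usage: one extrusion step turns a single quad into an open box (8 vertices, 5 faces).
m = QuadMesh()
f = m.add_face([m.add_vertex(p) for p in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]])
m.extrude_face(f, 1.0)
print(len(m.vertices), len(m.faces))

A text serialization of such a step sequence (e.g. "face 0: extrude 1.0") is, roughly, the kind of token stream the abstract describes finetuning an LLM on.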
How Private Are DNA Embeddings? Inverting Foundation Model Representations of Genomic Sequences
Sofiane Ouaari, Jules Kreuer, Nico Pfeifer
https://arxiv.org/abs/2603.06950 https…
Thirty years ago around this time, comet #Hyakutake amazed observers in the northern skies - one day even passing close to Polaris - with a close approach to Earth and a super-long bright plasma tail: https://www.youtube.com/watch?v=H6uCHBqfZlk is a 15-minute excerpt from a documentary (I actually have the VHS cassette ...) about creating fantastic timelapse sequences of that tail from images shot on film. See also https://www.facebook.com/photo/?fbid=1629899344915003 and https://www.facebook.com/richard.berry.777/posts/pfbid0xQ9uVTK9fKeeuFeCedm1U82dYPPWbnJiuz8bVEt5UiE2QFEgn7GWfjNueZpjxCmXl and https://www.facebook.com/photo/?fbid=26701568342780915 and https://www.facebook.com/don.davis1/posts/pfbid02YeuQ39p4igwG5GCk34iCSxwEevdpUKkCMBeF5QA7TBdfya3YuQ8sSsGnNUuxNFgGl for other still images.
SOM-VQ: Topology-Aware Tokenization for Interactive Generative Models
Alessandro Londei, Denise Lanzieri, Matteo Benati
https://arxiv.org/abs/2602.21133 https://arxiv.org/pdf/2602.21133 https://arxiv.org/html/2602.21133
arXiv:2602.21133v1 Announce Type: new
Abstract: Vector-quantized representations enable powerful discrete generative models but lack semantic structure in token space, limiting interpretable human control. We introduce SOM-VQ, a tokenization method that combines vector quantization with Self-Organizing Maps to learn discrete codebooks with explicit low-dimensional topology. Unlike standard VQ-VAE, SOM-VQ uses topology-aware updates that preserve neighborhood structure: nearby tokens on a learned grid correspond to semantically similar states, enabling direct geometric manipulation of the latent space. We demonstrate that SOM-VQ produces more learnable token sequences in the evaluated domains while providing an explicit navigable geometry in code space. Critically, the topological organization enables intuitive human-in-the-loop control: users can steer generation by manipulating distances in token space, achieving semantic alignment without frame-level constraints. We focus on human motion generation - a domain where kinematic structure, smooth temporal continuity, and interactive use cases (choreography, rehabilitation, HCI) make topology-aware control especially natural - demonstrating controlled divergence and convergence from reference sequences through simple grid-based sampling. SOM-VQ provides a general framework for interpretable discrete representations applicable to music, gesture, and other interactive generative domains.
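For a rough sense of the topology-aware update, here is a short Python sketch; the grid size, learning rate and Gaussian neighborhood are illustrative assumptions, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 16                      # codes arranged on an 8x8 grid
codebook = rng.normal(size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

def som_vq_assign_and_update(z, lr=0.1, sigma=1.5):
    """Assign latent z to its nearest code and pull grid-neighbours toward z."""
    dists = np.linalg.norm(codebook - z, axis=-1)            # distance to every code
    bmu = np.unravel_index(np.argmin(dists), dists.shape)    # best-matching unit
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-grid_dist**2 / (2 * sigma**2))               # neighborhood kernel on the grid
    codebook[:] += lr * h[..., None] * (z - codebook)        # topology-aware update
    return bmu                                               # the token is a 2D grid coordinate

token = som_vq_assign_and_update(rng.normal(size=dim))
print("token:", token)  # nearby tokens come to encode similar latents over training

Because tokens live on an explicit grid, steering generation by nudging token coordinates (as the abstract describes for human motion) amounts to small moves on that grid.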
Single-molecule #peptide #sequencing through reverse translation of peptides into DNA
https://www.nature.com/articles/s41587…
A ceaseless urge beneath all appearance; an impulse not merely to persist but to intensify; life as duration, as movement, as an unbroken unfolding in time. As a technology of becoming. Life as sequence - procedural quests of forward momentum.
A language underwrites the motion; lines on lines inscribing themselves across a dark field. Pulses like pixels, moving in canon. Constructions that branch, loop, and repeat. Patterns generate patterns, as though the law of existence were simply…
Replaced article(s) found for math.CT. https://arxiv.org/list/math.CT/new
[1/1]:
- Spectral sequences via linear presheaves
Muriel Livernet, Sarah Whitehouse
https://arxiv.org/abs/2406.02777 https://mastoxiv.page/@arXiv_mathAT_bot/112568449409497918
- Coslice Colimits in Homotopy Type Theory
Perry Hart (Favonia), Kuen-Bang Hou (Favonia)
https://arxiv.org/abs/2411.15103 https://mastoxiv.page/@arXiv_csLO_bot/113542326926097595
- Perfect generation for regular algebraic stacks
Pat Lank
https://arxiv.org/abs/2601.04053 https://mastoxiv.page/@arXiv_mathAG_bot/115858482440761076
Towards the complete description of stationary states of a Bose-Einstein condensate in a one-dimensional quasiperiodic lattice: A coding approach
G. L. Alfimov, A. P. Fedotov, Ya. A. Murenkov, D. A. Zezyulin
https://arxiv.org/abs/2602.17172 https://arxiv.org/pdf/2602.17172 https://arxiv.org/html/2602.17172
arXiv:2602.17172v1 Announce Type: new
Abstract: We consider stationary states of an effectively one-dimensional Bose-Einstein condensate in a quasiperiodic lattice. We formulate sufficient conditions for a one-to-one correspondence between the stationary states with a fixed chemical potential and the set of bi-infinite sequences over a finite alphabet. These conditions can be checked numerically. A bi-infinite sequence can be interpreted as a code of the corresponding solution. A numerical example demonstrates the coding approach using an alphabet of three symbols.
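For orientation, one plausible form of the setup, assuming the standard stationary Gross-Pitaevskii equation with a two-colour quasiperiodic potential (the exact potential, scaling and alphabet size are assumptions, not taken from the abstract):

\mu \psi(x) = -\psi''(x) + \left[ V_1 \cos(2x) + V_2 \cos(2\alpha x + \varphi) \right] \psi(x) + \sigma |\psi(x)|^2 \psi(x), \qquad \alpha \notin \mathbb{Q}.

In a coding approach of this kind, each bounded stationary state at fixed \mu is labelled by a bi-infinite word (\dots, s_{-1}, s_0, s_1, \dots) over a finite alphabet, e.g. roughly one symbol per potential well, and the sufficient conditions of the paper are what make this labelling one-to-one.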
Screen, Match, and Cache: A Training-Free Causality-Consistent Reference Frame Framework for Human Animation
Jianan Wang, Nailei Hei, Li He, Huanzhen Wang, Aoxing Li, Haofen Wang, Yan Wang, Wenqiang Zhang
https://arxiv.org/abs/2601.22160 https://arxiv.org/pdf/2601.22160 https://arxiv.org/html/2601.22160
arXiv:2601.22160v1 Announce Type: new
Abstract: Human animation aims to generate temporally coherent and visually consistent videos over long sequences, yet modeling long-range dependencies while preserving frame quality remains challenging. Inspired by the human ability to leverage past observations for interpreting ongoing actions, we propose FrameCache, a training-free three-stage framework consisting of Screen, Cache, and Match. In the Screen stage, a multi-dimensional, quality-aware mechanism with adaptive thresholds dynamically selects informative frames; the Cache stage maintains a reference pool using a dynamic replacement-hit strategy, preserving both diversity and relevance; and the Match stage extracts behavioral features to perform motion-consistent reference matching for coherent animation guidance. Extensive experiments on standard benchmarks demonstrate that FrameCache consistently improves temporal coherence and visual stability while integrating seamlessly with diverse baselines. Despite these encouraging results, further analysis reveals that its effectiveness depends on baseline temporal reasoning and real-synthetic consistency, motivating future work on compatibility conditions and adaptive cache mechanisms. Code will be made publicly available.
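A toy Python sketch of the three stages, to make the flow concrete; the capacity, quality thresholding and cosine matching here are illustrative assumptions, not the paper's quality-aware screening or behavioral-feature matching.

import numpy as np

class ReferenceCache:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.pool = []            # list of (feature_vector, quality_score)

    def screen(self, feature, quality, threshold=0.5):
        """Screen stage: admit only frames whose quality clears the threshold."""
        return quality >= threshold

    def cache(self, feature, quality):
        """Cache stage: insert the frame, evicting the lowest-quality entry when full."""
        if len(self.pool) >= self.capacity:
            worst = min(range(len(self.pool)), key=lambda i: self.pool[i][1])
            if self.pool[worst][1] >= quality:
                return            # new frame is no better than what we would evict
            self.pool.pop(worst)
        self.pool.append((feature, quality))

    def match(self, query):
        """Match stage: return the cached reference most similar to the current motion feature."""
        if not self.pool:
            return None
        sims = [float(f @ query) / (np.linalg.norm(f) * np.linalg.norm(query) + 1e-12)
                for f, _ in self.pool]
        return self.pool[int(np.argmax(sims))][0]

# Usage on random stand-in features.
rng = np.random.default_rng(0)
cache = ReferenceCache()
for _ in range(20):
    feat, q = rng.normal(size=64), rng.uniform()
    if cache.screen(feat, q):
        cache.cache(feat, q)
print("pool size:", len(cache.pool), "match found:", cache.match(rng.normal(size=64)) is not None)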
On strong law of large numbers for weakly stationary φ-mixing set-valued random variable sequences
Luc Tri Tuyen
https://arxiv.org/abs/2601.09197 https://
Untied Ulysses: Memory-Efficient Context Parallelism via Headwise Chunking
Ravi Ghadia, Maksim Abraham, Sergei Vorobyov, Max Ryabinin
https://arxiv.org/abs/2602.21196 https://arxiv.org/pdf/2602.21196 https://arxiv.org/html/2602.21196
arXiv:2602.21196v1 Announce Type: new
Abstract: Efficiently processing long sequences with Transformer models usually requires splitting the computations across accelerators via context parallelism. The dominant approaches in this family of methods, such as Ring Attention or DeepSpeed Ulysses, enable scaling over the context dimension but do not focus on memory efficiency, which limits the sequence lengths they can support. More advanced techniques, such as Fully Pipelined Distributed Transformer or activation offloading, can further extend the possible context length at the cost of training throughput. In this paper, we present UPipe, a simple yet effective context parallelism technique that performs fine-grained chunking at the attention head level. This technique significantly reduces the activation memory usage of self-attention, breaking the activation memory barrier and unlocking much longer context lengths. Our approach reduces intermediate tensor memory usage in the attention layer by as much as 87.5% for 32B Transformers, while matching previous context parallelism techniques in terms of training speed. UPipe can support a context length of 5M tokens when training Llama3-8B on a single 8×H100 node, improving upon prior methods by over 25%.
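As a single-device illustration of why head-wise chunking saves activation memory, here is a minimal NumPy sketch; it omits context parallelism, communication and pipelining entirely, which are the paper's actual contribution.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def chunked_attention(q, k, v, heads_per_chunk=2):
    """q, k, v: (num_heads, seq_len, head_dim). Process a few heads at a time so the
    full (num_heads, seq, seq) score tensor is never materialized at once."""
    num_heads, seq_len, head_dim = q.shape
    out = np.empty_like(q)
    for h0 in range(0, num_heads, heads_per_chunk):
        h1 = min(h0 + heads_per_chunk, num_heads)
        scores = q[h0:h1] @ k[h0:h1].transpose(0, 2, 1) / np.sqrt(head_dim)
        out[h0:h1] = softmax(scores) @ v[h0:h1]
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(8, 128, 64)) for _ in range(3))
full = chunked_attention(q, k, v, heads_per_chunk=8)     # all heads at once
chunked = chunked_attention(q, k, v, heads_per_chunk=2)  # 4x smaller live score tensor
print(np.allclose(full, chunked))  # True: chunking changes memory use, not the result

Only heads_per_chunk of the (seq_len x seq_len) score matrices are live at any moment, which is the intuition behind the reported reduction in intermediate tensor memory.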
I do love the 60s sequences from this video. It's very very well done. #TOTP
🇺🇦 #NowPlaying on BBCRadio3's #NewMusicShow
Anna Thorvaldsdottir, Jessie-May Wilson, Margot Maurel & Charles Curtin:
🎵 Sequences
#AnnaThorvaldsdottir #JessieMayWilson #MargotMaurel #CharlesCurtin