Tootfinder

Opt-in global Mastodon full text search. Join the index!

@relcfp@mastodon.social
2026-02-11 07:28:36

Opening Sequences: The Narrative Architecture of TV Titles call-for-papers.sas.upenn.edu/

For millennia, the “source code” of human survival was written in the soil — held in the physical seeds exchanged by farmers across the Eastern Ghats, the Andes and the African savannah. Today, that code has been digitised into ATGC sequences, stored on servers largely located in the Global North and increasingly governed by intellectual property regimes that risk excluding the very communities that developed this biological heritage. The issue is no longer lim…

@bourgwick@heads.social
2026-04-05 23:01:30

it's probably because i'm midway through my 4th decadal read of "neuromancer" & they're both from '84, but this is feeling like a soundtrack to the sprawl, the gershwin side for noir turns (& ironic violent slo-mo sequences), the james brown side playing on a jukebox in the chatsubo. @…

The Residents' George & James LP

@arXiv_csGR_bot@mastoxiv.page
2026-02-02 08:41:09

Learning to Build Shapes by Extrusion
Thor Vestergaard Christiansen, Karran Pandey, Alba Reinders, Karan Singh, Morten Rieger Hannemose, J. Andreas Bærentzen
arxiv.org/abs/2601.22858 arxiv.org/pdf/2601.22858 arxiv.org/html/2601.22858
arXiv:2601.22858v1 Announce Type: new
Abstract: We introduce Text Encoded Extrusion (TEE), a text-based representation that expresses mesh construction as sequences of face extrusions rather than polygon lists, and a method for generating 3D meshes from TEE using a large language model (LLM). By learning extrusion sequences that assemble a mesh, similar to the way artists create meshes, our approach naturally supports arbitrary output face counts and produces manifold meshes by design, in contrast to recent transformer-based models. The learnt extrusion sequences can also be applied to existing meshes - enabling editing in addition to generation. To train our model, we decompose a library of quadrilateral meshes with non-self-intersecting face loops into constituent loops, which can be viewed as their building blocks, and finetune an LLM on the steps for reassembling the meshes by performing a sequence of extrusions. We demonstrate that our representation enables reconstruction, novel shape synthesis, and the addition of new features to existing meshes.
toXiv_bot_toot
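The extrusion idea in the abstract can be made concrete with a toy sketch. This is not the paper's code or its TEE text format; the names `Mesh` and `extrude` are illustrative. A mesh is built from quads only, and each construction step extrudes one face along its normal, which keeps every step manifold by construction:

```python
import numpy as np

class Mesh:
    """Vertices plus quad faces; extrusion is the only construction step."""
    def __init__(self, verts, faces):
        self.verts = [np.asarray(v, dtype=float) for v in verts]
        self.faces = [list(f) for f in faces]  # quads as vertex-index lists

    def extrude(self, face_idx, dist):
        # Duplicate the quad's vertices offset along its normal, stitch four
        # side quads between the old and new rings, and move the cap outward.
        face = self.faces[face_idx]
        p = [self.verts[i] for i in face]
        n = np.cross(p[1] - p[0], p[3] - p[0])
        n /= np.linalg.norm(n)
        base = len(self.verts)
        self.verts.extend(q + dist * n for q in p)
        cap = [base + k for k in range(4)]
        for k in range(4):
            a, b = face[k], face[(k + 1) % 4]
            self.faces.append([a, b, cap[(k + 1) % 4], cap[k]])
        self.faces[face_idx] = cap  # the cap face moves to the new ring

# One extrusion of a unit quad: 4 new vertices, 4 side quads plus the cap.
m = Mesh([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)], [[0, 1, 2, 3]])
m.extrude(0, 1.0)
```

A sequence of such `extrude` calls is exactly the kind of step list an LLM could be finetuned to emit, which is the gist of the representation the abstract describes.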

@arXiv_qbioGN_bot@mastoxiv.page
2026-03-10 09:01:49

How Private Are DNA Embeddings? Inverting Foundation Model Representations of Genomic Sequences
Sofiane Ouaari, Jules Kreuer, Nico Pfeifer
arxiv.org/abs/2603.06950

@cosmos4u@scicomm.xyz
2026-03-25 02:27:25

30 years ago this week, comet #Hyakutake dazzled the northern skies - even passing close to Polaris one day - with a close approach to Earth and a super-long bright plasma tail: youtube.com/watch?v=H6uCHBqfZlk is a 15-min. excerpt from a documentary (I actually have the VHS cassette ...) about creating fantastic timelapse sequences of the latter from images shot on film. See also facebook.com/photo/?fbid=16298 and facebook.com/richard.berry.777 and facebook.com/photo/?fbid=26701 and facebook.com/don.davis1/posts/ for other still images.

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:43:41

SOM-VQ: Topology-Aware Tokenization for Interactive Generative Models
Alessandro Londei, Denise Lanzieri, Matteo Benati
arxiv.org/abs/2602.21133 arxiv.org/pdf/2602.21133 arxiv.org/html/2602.21133
arXiv:2602.21133v1 Announce Type: new
Abstract: Vector-quantized representations enable powerful discrete generative models but lack semantic structure in token space, limiting interpretable human control. We introduce SOM-VQ, a tokenization method that combines vector quantization with Self-Organizing Maps to learn discrete codebooks with explicit low-dimensional topology. Unlike standard VQ-VAE, SOM-VQ uses topology-aware updates that preserve neighborhood structure: nearby tokens on a learned grid correspond to semantically similar states, enabling direct geometric manipulation of the latent space. We demonstrate that SOM-VQ produces more learnable token sequences in the evaluated domains while providing an explicit navigable geometry in code space. Critically, the topological organization enables intuitive human-in-the-loop control: users can steer generation by manipulating distances in token space, achieving semantic alignment without frame-level constraints. We focus on human motion generation - a domain where kinematic structure, smooth temporal continuity, and interactive use cases (choreography, rehabilitation, HCI) make topology-aware control especially natural - demonstrating controlled divergence and convergence from reference sequences through simple grid-based sampling. SOM-VQ provides a general framework for interpretable discrete representations applicable to music, gesture, and other interactive generative domains.
toXiv_bot_toot
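The topology-aware update the abstract describes is in the spirit of the classic Self-Organizing Map rule: the best-matching code and its grid neighbors all move toward each input, so nearby tokens come to encode similar states. A minimal NumPy sketch; the grid size, learning rate, and neighborhood width here are arbitrary choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 4
codebook = rng.normal(size=(grid_h, grid_w, dim))
# Grid coordinates of every code, used to measure distance ON the grid.
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)  # (H, W, 2)

def som_update(x, lr=0.1, sigma=1.5):
    """Move the best-matching code and its grid neighborhood toward x."""
    d = np.linalg.norm(codebook - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)         # best-matching unit
    grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)  # squared grid distance
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))            # neighborhood kernel
    codebook[...] = codebook + lr * h[..., None] * (x - codebook)
    return bmu

for x in rng.normal(size=(200, dim)):
    som_update(x)
```

After training, walking to an adjacent grid cell changes the decoded state only slightly, which is what makes the "steer generation by manipulating distances in token space" claim plausible.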

@lpryszcz@genomic.social
2026-03-20 10:06:40

Single-molecule #peptide #sequencing through reverse translation of peptides into DNA
nature.com/articles/s41587…

Fig. 1 | Overview of peptide sequencing through reverse translation.
In stage 1, peptides are conjugated with peptide identity barcode sequences
immobilized on magnetic beads. A modified Edman degradation reaction is
performed using a PITC molecule modified with an azide group, enabling click
coupling to a DBCO-modified, biotinylated primer that hybridizes directly to
the peptide identity barcode sequence. This primer is extended to record the
peptide identity barcode before N-termin…

@Cognessence@social.linux.pizza
2026-02-15 07:19:21

A ceaseless urge beneath all appearance; an impulse not merely to persist but to intensify; life as duration, as movement, as an unbroken unfolding in time. As a technology of becoming. Life as sequence - procedural quests of forward momentum.
A language underwrites the motion; lines on lines inscribing themselves across a dark field. Pulses like pixels, moving in canon. Constructions that branch, loop, and repeat. Patterns generate patterns, as though the law of existence were simply…

Ink drawing; plantal structures; sequences; cellular; original title "growth and becoming"

@arXiv_mathCT_bot@mastoxiv.page
2026-03-25 09:56:09

Replaced article(s) found for math.CT. arxiv.org/list/math.CT/new
[1/1]:
- Spectral sequences via linear presheaves
Muriel Livernet, Sarah Whitehouse
arxiv.org/abs/2406.02777 mastoxiv.page/@arXiv_mathAT_bo
- Coslice Colimits in Homotopy Type Theory
Perry Hart (Favonia), Kuen-Bang Hou (Favonia)
arxiv.org/abs/2411.15103 mastoxiv.page/@arXiv_csLO_bot/
- Perfect generation for regular algebraic stacks
Pat Lank
arxiv.org/abs/2601.04053 mastoxiv.page/@arXiv_mathAG_bo
toXiv_bot_toot

@arXiv_nlinPS_bot@mastoxiv.page
2026-02-20 08:29:21

Towards the complete description of stationary states of a Bose-Einstein condensate in a one-dimensional quasiperiodic lattice: A coding approach
G. L. Alfimov, A. P. Fedotov, Ya. A. Murenkov, D. A. Zezyulin
arxiv.org/abs/2602.17172 arxiv.org/pdf/2602.17172 arxiv.org/html/2602.17172
arXiv:2602.17172v1 Announce Type: new
Abstract: We consider stationary states of an effectively one-dimensional Bose-Einstein condensate in a quasiperiodic lattice. We formulate sufficient conditions for a one-to-one correspondence between the stationary states with a fixed chemical potential and the set of bi-infinite sequences over a finite alphabet. These conditions can be checked numerically. A bi-infinite sequence can be interpreted as a code of the corresponding solution. A numerical example demonstrates the coding approach using an alphabet of three symbols.
toXiv_bot_toot

@arXiv_csGR_bot@mastoxiv.page
2026-02-02 08:35:40

Screen, Match, and Cache: A Training-Free Causality-Consistent Reference Frame Framework for Human Animation
Jianan Wang, Nailei Hei, Li He, Huanzhen Wang, Aoxing Li, Haofen Wang, Yan Wang, Wenqiang Zhang
arxiv.org/abs/2601.22160 arxiv.org/pdf/2601.22160 arxiv.org/html/2601.22160
arXiv:2601.22160v1 Announce Type: new
Abstract: Human animation aims to generate temporally coherent and visually consistent videos over long sequences, yet modeling long-range dependencies while preserving frame quality remains challenging. Inspired by the human ability to leverage past observations for interpreting ongoing actions, we propose FrameCache, a training-free three-stage framework consisting of Screen, Cache, and Match. In the Screen stage, a multi-dimensional, quality-aware mechanism with adaptive thresholds dynamically selects informative frames; the Cache stage maintains a reference pool using a dynamic replacement-hit strategy, preserving both diversity and relevance; and the Match stage extracts behavioral features to perform motion-consistent reference matching for coherent animation guidance. Extensive experiments on standard benchmarks demonstrate that FrameCache consistently improves temporal coherence and visual stability while integrating seamlessly with diverse baselines. Despite these encouraging results, further analysis reveals that its effectiveness depends on baseline temporal reasoning and real-synthetic consistency, motivating future work on compatibility conditions and adaptive cache mechanisms. Code will be made publicly available.
toXiv_bot_toot
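As a rough illustration of the Cache stage only, here is a sketch of a fixed-size reference pool with a replacement-hit rule: a new frame evicts its nearest cached neighbor only when it scores higher. The feature vectors, the quality score, and the distance metric are all placeholders, not the paper's actual criteria:

```python
import numpy as np

class ReferencePool:
    """Fixed-capacity pool of reference features with replace-on-quality-win."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.feats, self.scores = [], []

    def offer(self, feat, score):
        if len(self.feats) < self.capacity:
            self.feats.append(feat)
            self.scores.append(score)
            return True
        # The nearest cached entry competes with the candidate (the "hit").
        d = [np.linalg.norm(feat - f) for f in self.feats]
        j = int(np.argmin(d))
        if score > self.scores[j]:      # replace only on a quality win,
            self.feats[j] = feat        # so diversity is preserved: a frame
            self.scores[j] = score      # only displaces its near-duplicate
            return True
        return False

pool = ReferencePool(capacity=4)
rng = np.random.default_rng(1)
for _ in range(20):
    pool.offer(rng.normal(size=8), score=float(rng.random()))
```

Replacing only the nearest neighbor (rather than the globally worst entry) is what keeps the pool diverse: redundant frames compete with each other instead of crowding out distinct ones.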

@arXiv_mathPR_bot@mastoxiv.page
2026-01-15 08:34:16

On strong law of large numbers for weakly stationary $\varphi$-mixing set-valued random variable sequences
Luc Tri Tuyen
arxiv.org/abs/2601.09197

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:45:11

Untied Ulysses: Memory-Efficient Context Parallelism via Headwise Chunking
Ravi Ghadia, Maksim Abraham, Sergei Vorobyov, Max Ryabinin
arxiv.org/abs/2602.21196 arxiv.org/pdf/2602.21196 arxiv.org/html/2602.21196
arXiv:2602.21196v1 Announce Type: new
Abstract: Efficiently processing long sequences with Transformer models usually requires splitting the computations across accelerators via context parallelism. The dominant approaches in this family of methods, such as Ring Attention or DeepSpeed Ulysses, enable scaling over the context dimension but do not focus on memory efficiency, which limits the sequence lengths they can support. More advanced techniques, such as Fully Pipelined Distributed Transformer or activation offloading, can further extend the possible context length at the cost of training throughput. In this paper, we present UPipe, a simple yet effective context parallelism technique that performs fine-grained chunking at the attention head level. This technique significantly reduces the activation memory usage of self-attention, breaking the activation memory barrier and unlocking much longer context lengths. Our approach reduces intermediate tensor memory usage in the attention layer by as much as 87.5$\%$ for 32B Transformers, while matching previous context parallelism techniques in terms of training speed. UPipe can support the context length of 5M tokens when training Llama3-8B on a single 8$\times$H100 node, improving upon prior methods by over 25$\%$.
toXiv_bot_toot
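The memory argument can be illustrated on a single device: computing attention a few heads at a time means only that chunk's (T, T) score matrices are live at once, instead of all H of them. A NumPy sketch under that simplification (the real method additionally shards the head chunks across accelerators, which is not reproduced here):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def attention_headwise(q, k, v, chunk=2):
    """q, k, v: (heads, T, d). Process `chunk` heads at a time."""
    H, T, d = q.shape
    out = np.empty_like(q)
    for h0 in range(0, H, chunk):
        sl = slice(h0, h0 + chunk)
        scores = q[sl] @ k[sl].transpose(0, 2, 1) / np.sqrt(d)
        out[sl] = softmax(scores) @ v[sl]  # peak buffer: (chunk, T, T)
    return out

rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, 8, 16, 4))           # 8 heads, T=16, d=4
full = attention_headwise(q, k, v, chunk=8)        # all heads at once
chunked = attention_headwise(q, k, v, chunk=2)     # 4x smaller score buffer
```

The outputs are identical because heads are independent; only the peak size of the intermediate score tensor changes, which is the lever the abstract's 87.5% activation-memory reduction pulls on.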

@losttourist@social.chatty.monster
2026-03-20 20:52:34

I do love the 60s sequences from this video. It's very very well done. #TOTP

@BBC3MusicBot@mastodonapp.uk
2026-03-15 00:08:02

🇺🇦 #NowPlaying on BBCRadio3's #NewMusicShow
Anna Thorvaldsdottir, Jessie-May Wilson, Margot Maurel & Charles Curtin:
🎵 Sequences
#AnnaThorvaldsdottir #JessieMayWilson #MargotMaurel #CharlesCurtin