2025-11-17 08:16:32
"Can Academic Libraries Lead the Quantum Revolution?" https://katinamagazine.org/content/article/future-of-work/2025/can-libraries-lead-the-quantum-revolution
– Good grief, what else are we supposed to take care of…
"Can Academic Libraries Lead the Quantum Revolution?" https://katinamagazine.org/content/article/future-of-work/2025/can-libraries-lead-the-quantum-revolution
– Herjeh, um was sollen wir uns denn noch alles kümmern…
AAR> Buddhist Art-making and Bodily Ethics in Queer and Trans Bangkok https://networks.h-net.org/group/announcements/20132757/aar-paper-buddhist-art-making-and-bodily-ethics-queer-and-trans
This piece nails it.
#BigTech #SiliconValley https://dice.camp/@brunobord/115558496764119435…
Online Workshop: Technology & Society in Japan and Beyond (Fri: Nov 07, 2025) Susanne Brucksch…
via Input 4 RELCFP
Today at #CHR2025, I will be presenting our work on the evaluation of the historical adequacy of masked language models (MLMs) for #Latin. There are several models like this, and they represent the current state of the art for a number of downstream tasks, like semantic change and text reuse detection. However, a h…
Imagine ChatGPT, but instead of predicting text it just linked you to the top 3 documents most influential on the probabilities that would have been used to predict that text.
Could even generate some info about which parts of each would have been combined how.
There would still be issues with how training data is sourced and filtered, but these could be solved by crawling normally, respecting robots.txt, and by paying filterers a fair wage with a more relaxed work schedule and mental health support.
The energy issues are mainly about wild future investment and wasteful query spam, not optimized present-day per-query usage.
Is this "just search?"
Yes, but it would have some advantages for a lot of use cases, mainly in synthesizing results across multiple documents and in leveraging a language model more fully to find relevant stuff.
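For concreteness, a minimal sketch of the "just search" baseline this would build on: plain dense retrieval that returns the top 3 documents for a query. The sentence-transformers library, the all-MiniLM-L6-v2 model, and the toy corpus are illustrative assumptions; this does not attempt the harder part, attributing an LLM's output probabilities to specific training documents.

import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Document one: notes on garden soil drainage.",
    "Document two: a primer on composting.",
    "Document three: troubleshooting tomato blight.",
    "Document four: watering schedules for raised beds.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")            # assumed small, widely used model
doc_vecs = model.encode(corpus, normalize_embeddings=True)

def top_documents(query, k=3):
    # embed the query and rank documents by cosine similarity
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    return [(corpus[i], float(scores[i])) for i in np.argsort(-scores)[:k]]

print(top_documents("why are my tomatoes dying?"))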
When we talk about the harms of current corporate LLMs, the opportunity cost of NOT building things like this is part of that.
The equivalent for art would have been so amazing too! "Here are some artists that can do what you want, with examples pulled from their portfolios."
It would be a really cool coding assistant that I'd actually encourage my students to use (with some guidelines).
#AI #GenAI #LLMs
We used to write a blog post at the end of every semester celebrating our interns. It was lovely to appreciate their work and find nice things to say about them. But when we shifted to letting THEM talk about their work and their experience, the post became much more powerful. Here they are, writing about what they got out of their internships and the different things they learned. ❤️
I’ve worked over the past year to reduce the amount of noise in my consciousness on a daily basis.
By that I mean information noise, not literal sound "noise". (That problem was solved long ago by some good earplugs and noise-canceling earphones.)
I’ve gotten used to spending less time on social media, regularly blocking most apps on my devices (anything with a feed, news, most work communication apps, etc.), and putting my phone and other devices aside for extended periods of time. I often go out to work with my iPad's WiFi explicitly turned off, selecting cafes that don’t offer WiFi at all.
Negotiated better boundaries at work and in personal life where I exchange messages with people less often but try to make those interactions more meaningful, and people rarely expect me to respond to requests in less than 24 hours. Spent a lot of time setting up custom notification settings on all apps that would allow it, so I get fewer pings. With software, choosing fewer cloud-based options and using tools that are simple and require as few interruptions as possible.
Accustomed myself to lower-tech versions of doing things I like to do: reading on paper, writing by hand, drawing in physical sketchbooks, got a typewriter for typing without a screen. Choosing to call people on audio more, trying to make more of an effort to see people in person. Going to museums to look at art instead of browsing Pinterest. Defaulting to the library when looking for information.
I’m commenting on this now for two reasons:
1. I am pretty proud of myself for how much I’ve actually managed to reduce the constant stream of modern life esp. as a remote worker in tech!
2. Now that I’ve reached a breaking point of reducing enough noise that it’s NOTICEABLE - I am struck by the silence. I don’t know what to do with it. I don’t know how to navigate it and fill it. I made this space to be able to read and write and think more deeply - for now I feel stuck in limbo where I’m just reacquainting myself with the concept of having any space in my mind at all.
We don't allow cats on our dinner table, and they full well know that (their punishment when caught is corporal cuddling).
While I was out of the house just now and my wife asleep, the 13yo took this picture.
Oh yeah, my wife's laptop often seems to mysteriously crash overnight…
#CatsOfMastodon
"Imagining the Future Library" by Masud Khokhar @ Katina Magazine:
https://katinamagazine.org/content/article/future-of-work/2025/imagining-the-future-library
"In an algorithmic economy, our understanding of knowledge …
Due to a slight gift mishap, I find myself with a duplicate copy of volumes 1 - 3 of Donald Knuth’s “The Art of Computer Programming” post-Christmas.
If anyone in the PDX area would cherish a copy of this work, I’d like to talk to you!
I’d rather see it go to a loving home than put too much effort into maximizing its sale value.
#PDX
Computer programmers be like:
1. LLMs generate code for me and sometimes it even works
2. This is also true when I write code manually
3. Computer programming is clearly the hardest of all possible human endeavors, you have to be a complete genius like I am to do it because it is really, really hard
4. Therefore LLMs are geniuses
5. They will certainly work really well for all these lesser fields you don’t have to be a genius for like I am, like writing summaries of text or medical research or making art or…
Disappointing to see that even Electric Sheep has a slop angle now https://electricsheep.org/
Bummed I could not see this wonderful sight in person today (reasons) though I was nearby all day. Perhaps a solace that I happened to work on this Bhagavad Gita verse (11.11) today:
दिव्यमालयांबरधरं दिव्यगन्धानुलेपनम् ।
सर्वाश्चर्यमयं देवं अनन्तं विश्वतोमुखम् ॥
Wearing divine garlands and clothes, anointed with divine scents;
Lord, filled with all wonders, the unending, facing all universe [all seeing].
#Udupi #Krishna #India #travel #art #gold #solace #gita #Sanskrit
RIP Tom Stoppard. I’m grateful for his masterpiece "Arcadia", and proud of the work we did on it. It's a deep reflection on the nature of truth, love, and art, and on those who run roughshod over them - very relevant today.
https://theater-u34.de/arkadien/
One of our favorite year-end features is the #UniversityOfGeorgia photographers' round-up of their favorite photos of the past year. Even when our museum isn't in it, it's fun to see their great work and to read their thoughts on the combination of art and luck that makes for a great photo.
Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
https://arxiv.org/abs/2512.17696 https://arxiv.org/pdf/2512.17696 https://arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
toXiv_bot_toot
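A minimal sketch of the general idea, assuming PyTorch: an attention layer whose logits receive an additive bias from a learnable stationary covariance kernel over sensor coordinates, so nearby sensors are favored while the data-driven query-key term supplies the non-stationary residual. The exponential kernel and all names are my assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class SpatialBiasAttention(nn.Module):
    # Self-attention whose logits get an additive, learnable geostatistical
    # prior: bias(i, j) = scale * exp(-d_ij / range).
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.log_range = nn.Parameter(torch.zeros(1))   # learnable decay range
        self.scale = nn.Parameter(torch.ones(1))        # learnable kernel magnitude

    def forward(self, x, coords):
        # x: (batch, n_sensors, dim); coords: (n_sensors, 2) sensor locations
        dists = torch.cdist(coords, coords)                            # pairwise distances
        bias = self.scale * torch.exp(-dists / self.log_range.exp())   # stationary prior
        # a float attn_mask is added to the attention logits, so spatially
        # proximal sensors get a head start before the softmax
        out, _ = self.attn(x, x, x, attn_mask=bias)
        return out

x = torch.randn(2, 50, 32)                  # 2 sequences, 50 sensors, 32-dim features
coords = torch.rand(50, 2)
y = SpatialBiasAttention(32)(x, coords)     # (2, 50, 32)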
This week I want to hold in mind some work that I love, as inspiration and guide. I’m reposting a love letter to the precarious objects of Chilean poet and artist Cecilia Vicuña.
https://salrandolph.substack.com/p/cecilia-vicunas-precarios
Mark Rothko on the "recipe of a work of art," lecture at the Pratt Institute, 1958:
https://www.tumblr.com/teledyn/801036007022690304?source=share
Just finished "Two Tribes" by Emily Bowen Cohen. It's a bit didactic and I didn't love the art, but if was interesting as a discussion of mixed heritage and out got into a lot of good details; I feel like it might be super interesting to a pre-teen audience. It reminded me a lot of "Twin Cities" by Jose Pimenta as well as some of Pimenta's other work, but IMO Pimenta is the superior artist and storyteller.
#AmReading #ReadingNow
@… I see that a painting of Dr John Rae, owned by the Hudson Bay Co is up for sale - relatively reasonable price...crowdfund?
https://www.
Art. 88b of the Digital Omnibus summarizes 25 years of my work in IT.
Weighted Stochastic Differential Equation to Implement Wasserstein-Fisher-Rao Gradient Flow
Herlock Rahimi
https://arxiv.org/abs/2512.17878 https://arxiv.org/pdf/2512.17878 https://arxiv.org/html/2512.17878
arXiv:2512.17878v1 Announce Type: new
Abstract: Score-based diffusion models currently constitute the state of the art in continuous generative modeling. These methods are typically formulated via overdamped or underdamped Ornstein-Uhlenbeck-type stochastic differential equations, in which sampling is driven by a combination of deterministic drift and Brownian diffusion, resulting in continuous particle trajectories in the ambient space. While such dynamics enjoy exponential convergence guarantees for strongly log-concave target distributions, it is well known that their mixing rates deteriorate exponentially in the presence of nonconvex or multimodal landscapes, such as double-well potentials. Since many practical generative modeling tasks involve highly non-log-concave target distributions, considerable recent effort has been devoted to developing sampling schemes that improve exploration beyond classical diffusion dynamics.
A promising line of work leverages tools from information geometry to augment diffusion-based samplers with controlled mass reweighting mechanisms. This perspective leads naturally to Wasserstein-Fisher-Rao (WFR) geometries, which couple transport in the sample space with vertical (reaction) dynamics on the space of probability measures. In this work, we formulate such reweighting mechanisms through the introduction of explicit correction terms and show how they can be implemented via weighted stochastic differential equations using the Feynman-Kac representation. Our study provides a preliminary but rigorous investigation of WFR-based sampling dynamics, and aims to clarify their geometric and operator-theoretic structure as a foundation for future theoretical and algorithmic developments.
toXiv_bot_toot
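For orientation, here is the textbook form of the Wasserstein-Fisher-Rao gradient flow the abstract refers to, in my own notation (not necessarily the paper's): for a free energy F[\rho] with first variation \delta F / \delta \rho, the density evolves by a transport term plus a centered reaction term, and the reaction term can be realized by multiplicative particle weights in a Feynman-Kac fashion rather than by moving particles.

\partial_t \rho_t
  = \nabla \cdot \Big( \rho_t \, \nabla \frac{\delta F}{\delta \rho}[\rho_t] \Big)
  - \alpha \, \rho_t \Big( \frac{\delta F}{\delta \rho}[\rho_t]
      - \mathbb{E}_{\rho_t}\Big[ \frac{\delta F}{\delta \rho}[\rho_t] \Big] \Big),
\qquad
\frac{d}{dt} \log w_t
  = -\alpha \Big( \frac{\delta F}{\delta \rho}[\rho_t](X_t)
      - \mathbb{E}_{\rho_t}\Big[ \frac{\delta F}{\delta \rho}[\rho_t] \Big] \Big).

The first term is the Wasserstein (transport) part driving particle motion; the second is the Fisher-Rao (reaction) part, which the weighted-SDE / Feynman-Kac view implements by letting each particle X_t carry a multiplicative weight w_t instead of duplicating or killing particles.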
In the age of "#AI" assisted programming and "vibe coding", I don't feel like calling myself a programmer anymore. In fact, I think that "an artist" is more appropriate.
All the code I write is mine entirely. It might be buggy, it might be inconsistent, but it reflects my personality. I've put my metaphorical soul into it. It's a work of art.
If people want to call themselves "software developers", and want their work described as a glorified copy-paste, so be it. I'm a software artist now.
EDIT: "craftsperson" is also a nice term, per the comments.
#NoAI #NoLLM #LLM
Exploiting ID-Text Complementarity via Ensembling for Sequential Recommendation
Liam Collins, Bhuvesh Kumar, Clark Mingxuan Ju, Tong Zhao, Donald Loveland, Leonardo Neves, Neil Shah
https://arxiv.org/abs/2512.17820 https://arxiv.org/pdf/2512.17820 https://arxiv.org/html/2512.17820
arXiv:2512.17820v1 Announce Type: new
Abstract: Modern Sequential Recommendation (SR) models commonly utilize modality features to represent items, motivated in large part by recent advancements in language and vision modeling. To do so, several works completely replace ID embeddings with modality embeddings, claiming that modality embeddings render ID embeddings unnecessary because they can match or even exceed ID embedding performance. On the other hand, many works jointly utilize ID and modality features, but posit that complex fusion strategies, such as multi-stage training and/or intricate alignment architectures, are necessary for this joint utilization. However, underlying both these lines of work is a lack of understanding of the complementarity of ID and modality features. In this work, we address this gap by studying the complementarity of ID- and text-based SR models. We show that these models do learn complementary signals, meaning that either should provide performance gain when used properly alongside the other. Motivated by this, we propose a new SR method that preserves ID-text complementarity through independent model training, then harnesses it through a simple ensembling strategy. Despite this method's simplicity, we show it outperforms several competitive SR baselines, implying that both ID and text features are necessary to achieve state-of-the-art SR performance but complex fusion architectures are not.
toXiv_bot_toot
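A minimal sketch of what "independent training plus simple ensembling" could look like in practice, assuming per-item scores are already available from the two models; the z-normalization and the convex combination weight are my choices, not necessarily the authors'.

import numpy as np

def ensemble_scores(id_scores, text_scores, alpha=0.5):
    # z-normalize each model's scores, then take a convex combination
    z = lambda s: (s - s.mean()) / (s.std() + 1e-8)
    return alpha * z(id_scores) + (1 - alpha) * z(text_scores)

# usage: rank items for one user by the ensembled score
id_scores = np.random.randn(1000)      # scores from an ID-embedding SR model
text_scores = np.random.randn(1000)    # scores from a text-embedding SR model
top_items = np.argsort(-ensemble_scores(id_scores, text_scores))[:10]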
You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
https://arxiv.org/abs/2512.17678 https://arxiv.org/pdf/2512.17678 https://arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovering gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
toXiv_bot_toot
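A minimal sketch of joint, differentiable feature selection of this flavor, assuming PyTorch: a relaxed Bernoulli gate over genes masks the input to the classifier, an expected-L0 penalty pushes the gate toward a small subset, and both are trained in one backward pass. The gating scheme and all names are illustrative assumptions, not the YOTO architecture.

import torch
import torch.nn as nn

class GatedSelector(nn.Module):
    def __init__(self, n_genes, n_classes, l0_weight=1e-3):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_genes))   # per-gene selection logits
        self.clf = nn.Linear(n_genes, n_classes)
        self.l0_weight = l0_weight

    def forward(self, x, temperature=0.5):
        # relaxed Bernoulli gate: a differentiable surrogate for a hard gene subset
        u = torch.rand_like(self.logits).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log(1 - u)
        gate = torch.sigmoid((self.logits + noise) / temperature)
        return self.clf(x * gate)                          # only gated genes contribute

    def sparsity_penalty(self):
        # expected number of selected genes (soft L0 penalty)
        return self.l0_weight * torch.sigmoid(self.logits).sum()

# usage sketch: selection and prediction trained jointly
model = GatedSelector(n_genes=2000, n_classes=5)
x = torch.randn(32, 2000)                  # a batch of expression profiles
y = torch.randint(0, 5, (32,))
loss = nn.functional.cross_entropy(model(x), y) + model.sparsity_penalty()
loss.backward()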