Tootfinder

Opt-in global Mastodon full text search. Join the index!

@v_i_o_l_a@openbiblio.social
2025-11-17 08:16:32

"Can Academic Libraries Lead the Quantum Revolution?" katinamagazine.org/content/art
– Good grief, what else are we supposed to take care of…

@relcfp@mastodon.social
2025-11-18 20:14:20

AAR> Buddhist Art-making and Bodily Ethics in Queer and Trans Bangkok networks.h-net.org/group/annou

@aral@mastodon.ar.al
2025-11-16 16:08:13

This piece nails it.
#BigTech #SiliconValley dice.camp/@brunobord/115558496

@relcfp@mastodon.social
2025-11-18 16:10:26

AAR> Buddhist Art-making and Bodily Ethics in Queer and Trans Bangkok
ift.tt/EvFfbhC
Online Workshop: Technology & Society in Japan and Beyond (Fri: Nov 07, 2025) Susanne Brucksch…
via Input 4 RELCFP

@mapto@qoto.org
2025-12-11 07:47:09

Today at #CHR2025, I will be presenting our work on the evaluation of the historical adequacy of masked language models (MLMs) for #Latin. There are several models like this, and they represent the current state of the art for a number of downstream tasks, like semantic change and text reuse detection. However, a h…

A poster for the paper, which can be found at https://doi.org/10.63744/sLAHYnQdA8fu
@tiotasram@kolektiva.social
2025-11-09 12:09:40

Imagine ChatGPT but instead of predicting text it just linked you to the top 3 documents most influential on the probabilities that would have been used to predict that text (a toy sketch follows below).
Could even generate some info about which parts of each would have been combined how.
There would still be issues with how training data is sourced and filtered, but these could be solved by crawling normally, respecting robots.txt, and by paying filterers a fair wage with a more relaxed work schedule and mental health support.
The energy issues are mainly about wild future investment and wasteful query spam, not optimized present-day per-query usage.
Is this "just search?"
Yes, but it would have some advantages for a lot of use cases, mainly in synthesizing results across multiple documents and in leveraging a language model more fully to find relevant stuff.
When we talk about the harms of current corporate LLMs, the opportunity cost of NOT building things like this is part of that.
The equivalent for art would have been so amazing too! "Here are some artists that can do what you want, with examples pulled from their portfolios."
It would be a really cool coding assistant that I'd actually encourage my students to use (with some guidelines).
#AI #GenAI #LLMs
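
A minimal sketch of the idea in Python - with the big caveat that true influence attribution over an LLM's training data is an open research problem, so plain cosine similarity over document embeddings stands in for "most influential" here; the corpus, query, and function name are all illustrative:

import numpy as np

def top_k_influential(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3):
    """Return indices and scores of the k documents most similar to the query."""
    # Normalize so the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    top = np.argsort(scores)[::-1][:k]
    return top, scores[top]

# Toy corpus: random vectors standing in for real document embeddings.
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 64))
query = docs[42] + 0.1 * rng.normal(size=64)  # a query near document 42
idx, scores = top_k_influential(query, docs)
print(idx, scores)  # document 42 should rank first

Swapping the similarity score for a real attribution method (e.g. influence functions) is the hard research part the post gestures at.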

@georgiamuseum@glammr.us
2025-12-04 13:25:15

We used to write a blog post at the end of every semester celebrating our interns. It was lovely to appreciate their work and find nice things to say about them. But when we shifted to letting THEM talk about their work and their experience, the post became much more powerful. Here they are, writing about what they got out of their internships and the different things they learned. ❤️

8 photographs collaged together into a 2 x 4 grid of interns at the Georgia Museum of Art during the fall semester of 2025
@bobmueller@mastodon.world
2025-10-29 14:30:07

Is it talent?
instagram.com/reel/DQM0-w2jEOd

Duaa Izzidien - Visual Storyteller & Artist on Instagram: "I had far too much fun creating this reel and couldn’t bring myself to delete any of it to make it shorter and more algorithm friendly 🙈 If you watched it all the way to the end - well done! You’ve just demonstrated the very thing the reel is about - showing up is the only talent that matters. My arrows don’t always hit the target. My paintings don’t always turn out how I planned. And honestly? Life rarely goes the way I intend. But really that isn’t what matters. In Islam we say that actions are by intentions and that sometimes means letting go of controlling our outcomes. We can control our intentions, our effort, showing up - but we have to remember that the result doesn’t actually come from those actions. Sometimes the arrow misses because there’s a better lesson waiting or perhaps it’s to remind you to stay humble and remember that ‘you’ are not the architect of your success. Sometimes the painting goes “wrong” because it’s becoming something more beautiful than you imagined. Sometimes life doesn’t work out the way you planned because there’s something different, better, round the corner for you. A huge thank you to @thabitoon_archers and @mamluk.academy for teaching me. You’ve taught me far more than just archery - you’ve taught me a rich history and life lessons that bring peace. (any mistakes in my form are entirely mine!). Want to learn how to use art as a tool for trusting and letting go of control? DM me ‘CREATE’ and I’ll show you these techniques. #showingisenough #trusttheprocess #archery #archerygirl #traditionalarchery #archerylife #overwhelm #personalgrowth #innerstrength #growthmindset #breakthrough #findingmyself #resilience #transformation #letgoofcontrol"

@mariyadelano@hachyderm.io
2025-10-20 20:41:14

I’ve worked over the past year to reduce the amount of noise in my consciousness on a daily basis.
By that I mean information noise, not literal audible “noise”. (That problem was solved long ago by some good earplugs and noise-canceling earphones.)
I’ve gotten used to spending less time on social media, regularly blocking most apps on my devices (anything with a news feed, most work communication apps, etc.), and putting my phone and other devices aside for extended periods of time. I often go out to work with my iPad’s WiFi explicitly turned off, and I pick cafes that don’t offer WiFi at all.
Negotiated better boundaries at work and in personal life where I exchange messages with people less often but try to make those interactions more meaningful, and people rarely expect me to respond to requests in less than 24 hours. Spent a lot of time setting up custom notification settings on all apps that would allow it, so I get fewer pings. With software, choosing fewer cloud-based options and using tools that are simple and require as few interruptions as possible.
Accustomed myself to lower-tech versions of doing things I like to do: reading on paper, writing by hand, drawing in physical sketchbooks, got a typewriter for typing without a screen. Choosing to call people on audio more, trying to make more of an effort to see people in person. Going to museums to look at art instead of browsing Pinterest. Defaulting to the library when looking for information.
I’m commenting on this now for two reasons:
1. I am pretty proud of myself for how much I’ve actually managed to reduce the constant stream of modern life, especially as a remote worker in tech!
2. Now that I’ve finally reduced enough noise that the difference is NOTICEABLE - I am struck by the silence. I don’t know what to do with it. I don’t know how to navigate it and fill it. I made this space to be able to read and write and think more deeply - but for now I feel stuck in limbo where I’m just reacquainting myself with the concept of having any space in my mind at all.

@andres4ny@social.ridetrans.it
2025-11-28 05:22:26

We don't allow cats on our dinner table, and they full well know that (their punishment when caught is corporal cuddling).
While I was out of the house just now and my wife asleep, the 13yo took this picture.
Oh yeah, my wife's laptop often seems to mysteriously crash overnight…
#CatsOfMastodon

A dining room table with a bunch of random stuff on it; laptop, water bottles, a box of markers/art stuff, a kid's backpack, some plastic bags, etc. There are also 3 cats on there. Erie, a mostly black tuxedo cat sits comfortably on the keyboard of the laptop (which has its screen on and is unlocked, because she's obviously hard at work). Clove, a much whiter tuxedo cat (and Erie's mom) sits next to the backpack. Twig, a tabby, loafs on top of some plastic bags. All 3 cats are looking at the ca…
@v_i_o_l_a@openbiblio.social
2025-11-03 13:35:38

"Imagining the Future Library" by Masud Khokhar @ Katina Magazine:
katinamagazine.org/content/art
"In an algorithmic economy, our understanding of knowledge …

@philip@mastodon.mallegolhansen.com
2026-01-04 03:25:16

Due to a slight gift mishap, I find myself with a duplicate copy of volumes 1-3 of Donald Knuth’s “The Art of Computer Programming” post-Christmas.
If anyone in the PDX area would cherish a copy of this work, I’d like to talk to you!
I’d rather see it going to a loving home than put too much effort into maximizing sales value.
#PDX

@thomasfuchs@hachyderm.io
2025-11-02 13:02:26

Computer programmers be like:
1. LLMs generate code for me and sometimes it even works
2. This is also true when I write code manually
3. Computer programming is clearly the hardest of all possible human endeavors; you have to be a complete genius like I am to do it because it is really, really hard
4. Therefore LLMs are geniuses
5. They will certainly work really well for all these lesser fields you don’t have to be a genius for, like writing summaries of text or medical research or making art or…

@theodric@social.linux.pizza
2025-12-23 09:00:52

Disappointing to see that even Electric Sheep has a slop angle now electricsheep.org/

@smurthys@hachyderm.io
2025-11-16 17:06:24

Bummed I could not see this wonderful sight in person today (reasons) though I was nearby all day. Perhaps a solace that I happened to work on this Bhagavad Gita verse (11.11) today:
दिव्यमालयांबरधरं दिव्यगन्धानुलेपनम् ।
सर्वाश्चर्यमयं देवं अनन्तं विश्वतोमुखम् ॥
Wearing divine garlands and clothes, anointed with divine scents;
Lord, filled with all wonders, the unending, facing all universe [all seeing].
#Udupi #Krishna #India #travel #art #gold #solace #gita #Sanskrit

@sperbsen@discuss.systems
2025-11-30 10:35:45

RIP Tom Stoppard. I’m grateful for his masterpiece "Arcadia", and proud of the work we did on it. It's a deep reflection on the nature of truth, love, and art, and on those who run roughshod over them - very relevant today.
theater-u34.de/arkadien/

@georgiamuseum@glammr.us
2025-12-22 13:39:04

One of our favorite year-end features is the #UniversityOfGeorgia photographers' round-up of their favorite photos of the past year. Even when our museum isn't in it, it's fun to see their great work and to read their thoughts on the combination of art and luck that makes for a great photo.

The sky is reflected in the windows of Stegeman Coliseum during the spring graduate Commencement ceremony at the University of Georgia.
@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:50

Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
arxiv.org/abs/2512.17696 arxiv.org/pdf/2512.17696 arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
toXiv_bot_toot
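
A minimal sketch of the kernel-biased attention the abstract describes, under assumptions of mine rather than the paper's exact formulation: a stationary exponential covariance over sensor coordinates enters the attention logits additively, with the range parameter rho standing in for the learnable kernel (a fixed constant here):

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(q, k, v, coords, rho=1.0):
    """Single-head attention with an exponential covariance bias.

    q, k, v: (n, d) token representations, one token per sensor.
    coords:  (n, 2) sensor locations; rho is the kernel range parameter.
    """
    n, d = q.shape
    logits = (q @ k.T) / np.sqrt(d)            # data-driven, non-stationary part
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    prior = np.exp(-dist / rho)                # stationary exponential kernel
    logits = logits + np.log(prior + 1e-9)     # soft bias toward nearby sensors
    return softmax(logits, axis=-1) @ v

rng = np.random.default_rng(0)
n, d = 8, 16
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
coords = rng.uniform(0, 10, size=(n, 2))
print(spatial_attention(q, k, v, coords, rho=2.0).shape)  # (8, 16)

In a trainable version, rho would be a parameter recovered by backpropagation - the "Deep Variography" effect the abstract reports.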

@salrandolph@zirk.us
2025-11-20 14:56:12

This week I want to hold in mind some work that I love, as inspiration and guide. I’m reposting a love letter to the precarious objects of Chilean poet and artist Cecilia Vicuña.
salrandolph.substack.com/p/cec

@teledyn@mstdn.ca
2025-11-24 21:25:34

Mark Rothko on the "recipe of a work of art," lecture at the Pratt Institute, 1958:
tumblr.com/teledyn/80103600702

@tiotasram@kolektiva.social
2025-11-01 01:23:05

Just finished "Two Tribes" by Emily Bowen Cohen. It's a bit didactic and I didn't love the art, but it was interesting as a discussion of mixed heritage and it got into a lot of good details; I feel like it might be super interesting to a pre-teen audience. It reminded me a lot of "Twin Cities" by Jose Pimenta as well as some of Pimenta's other work, but IMO Pimenta is the superior artist and storyteller.
#AmReading #ReadingNow

@LaChasseuse@mastodon.scot
2025-11-20 11:08:41

@… I see that a painting of Dr John Rae, owned by the Hudson's Bay Co, is up for sale - relatively reasonable price... crowdfund?

Dr. John Rae Meets with Eskimos (Franklin Expedition)
Artist: Charles Fraser Comfort
Created: 1949
Estimate: $10,000 - $15,000 CAD
According to Heffel, this work depicts a recreation of an encounter between Scottish explorer Dr. John Rae and Inuit hunters.
@rigo@mamot.fr
2025-11-20 16:42:14

Art. 88b of the Digital Omnibus summarizes 25 years of my work in IT.

@bobmueller@mastodon.world
2025-11-20 15:30:05

Fascinating. #beatbox
instagram.com/reel/DQ7Lx-Gjvpn

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:40

Weighted Stochastic Differential Equation to Implement Wasserstein-Fisher-Rao Gradient Flow
Herlock Rahimi
arxiv.org/abs/2512.17878 arxiv.org/pdf/2512.17878 arxiv.org/html/2512.17878
arXiv:2512.17878v1 Announce Type: new
Abstract: Score-based diffusion models currently constitute the state of the art in continuous generative modeling. These methods are typically formulated via overdamped or underdamped Ornstein-Uhlenbeck-type stochastic differential equations, in which sampling is driven by a combination of deterministic drift and Brownian diffusion, resulting in continuous particle trajectories in the ambient space. While such dynamics enjoy exponential convergence guarantees for strongly log-concave target distributions, it is well known that their mixing rates deteriorate exponentially in the presence of nonconvex or multimodal landscapes, such as double-well potentials. Since many practical generative modeling tasks involve highly non-log-concave target distributions, considerable recent effort has been devoted to developing sampling schemes that improve exploration beyond classical diffusion dynamics.
A promising line of work leverages tools from information geometry to augment diffusion-based samplers with controlled mass reweighting mechanisms. This perspective leads naturally to Wasserstein-Fisher-Rao (WFR) geometries, which couple transport in the sample space with vertical (reaction) dynamics on the space of probability measures. In this work, we formulate such reweighting mechanisms through the introduction of explicit correction terms and show how they can be implemented via weighted stochastic differential equations using the Feynman-Kac representation. Our study provides a preliminary but rigorous investigation of WFR-based sampling dynamics, and aims to clarify their geometric and operator-theoretic structure as a foundation for future theoretical and algorithmic developments.
toXiv_bot_toot
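
A toy illustration (my own construction, not the paper's algorithm) of weighted SDE sampling on a double-well potential: overdamped Langevin transport plus Feynman-Kac weights, with multinomial resampling standing in for the Fisher-Rao birth-death of mass. The killing rate g(x) = V(x) is an arbitrary choice for the demo:

import numpy as np

def V(x):
    return (x**2 - 1.0) ** 2          # double-well potential

def grad_V(x):
    return 4.0 * x * (x**2 - 1.0)

rng = np.random.default_rng(0)
n, dt, steps = 500, 0.01, 2000
x = rng.normal(0.0, 1.0, size=n)      # particle positions
logw = np.zeros(n)                    # log Feynman-Kac weights

for _ in range(steps):
    # Wasserstein (transport) part: one Euler-Maruyama Langevin step.
    drift = np.clip(grad_V(x), -50, 50)  # clip to keep explicit Euler stable
    x = x - drift * dt + np.sqrt(2 * dt) * rng.normal(size=n)
    # Fisher-Rao (reaction) part: reweight mass by the toy killing rate V(x).
    logw -= V(x) * dt
    # Resample when the effective sample size collapses.
    w = np.exp(logw - logw.max())
    w /= w.sum()
    if 1.0 / np.sum(w**2) < n / 2:
        x = x[rng.choice(n, size=n, p=w)]
        logw = np.zeros(n)

print(x.mean(), x.std())  # mass concentrates near the wells at x = ±1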

@mgorny@social.treehouse.systems
2025-12-26 12:32:13

In the age of "#AI" assisted programming and "vibe coding", I don't feel like calling myself a programmer anymore. In fact, I think that "an artist" is more appropriate.
All the code I write is mine entirely. It might be buggy, it might be inconsistent, but it reflects my personality. I've put my metaphorical soul into it. It's a work of art.
If people want to call themselves "software developers" and have their work described as glorified copy-paste, so be it. I'm a software artist now.
EDIT: "craftsperson" is also a nice term, per the comments.
#NoAI #NoLLM #LLM

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:10

Exploiting ID-Text Complementarity via Ensembling for Sequential Recommendation
Liam Collins, Bhuvesh Kumar, Clark Mingxuan Ju, Tong Zhao, Donald Loveland, Leonardo Neves, Neil Shah
arxiv.org/abs/2512.17820 arxiv.org/pdf/2512.17820 arxiv.org/html/2512.17820
arXiv:2512.17820v1 Announce Type: new
Abstract: Modern Sequential Recommendation (SR) models commonly utilize modality features to represent items, motivated in large part by recent advancements in language and vision modeling. To do so, several works completely replace ID embeddings with modality embeddings, claiming that modality embeddings render ID embeddings unnecessary because they can match or even exceed ID embedding performance. On the other hand, many works jointly utilize ID and modality features, but posit that complex fusion strategies, such as multi-stage training and/or intricate alignment architectures, are necessary for this joint utilization. However, underlying both these lines of work is a lack of understanding of the complementarity of ID and modality features. In this work, we address this gap by studying the complementarity of ID- and text-based SR models. We show that these models do learn complementary signals, meaning that either should provide performance gain when used properly alongside the other. Motivated by this, we propose a new SR method that preserves ID-text complementarity through independent model training, then harnesses it through a simple ensembling strategy. Despite this method's simplicity, we show it outperforms several competitive SR baselines, implying that both ID and text features are necessary to achieve state-of-the-art SR performance but complex fusion architectures are not.
toXiv_bot_toot
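
The proposed recipe is simple enough to sketch directly, though the details here (z-normalization, the blend weight alpha) are my guesses rather than the paper's exact choices: train the ID-based and text-based models independently, then ensemble their per-item scores:

import numpy as np

def ensemble_scores(id_scores: np.ndarray, text_scores: np.ndarray,
                    alpha: float = 0.5) -> np.ndarray:
    """Blend per-item scores from two independently trained SR models.

    Scores are z-normalized first so neither model's scale dominates.
    """
    def z(s):
        return (s - s.mean()) / (s.std() + 1e-8)
    return alpha * z(id_scores) + (1 - alpha) * z(text_scores)

rng = np.random.default_rng(0)
n_items = 1000
id_scores = rng.normal(size=n_items)     # stand-in for an ID-embedding model
text_scores = rng.normal(size=n_items)   # stand-in for a text-encoder model
blended = ensemble_scores(id_scores, text_scores, alpha=0.6)
print(np.argsort(blended)[::-1][:10])    # recommend the 10 best-scored items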

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:30

You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
arxiv.org/abs/2512.17678 arxiv.org/pdf/2512.17678 arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovering gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
toXiv_bot_toot
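
A minimal sketch of the end-to-end selection idea, with assumptions of mine (a relaxed-Bernoulli gate and an L0-style penalty; YOTO's actual architecture, multi-task heads included, is in the paper): the same gates that mask genes for the classifier receive gradients from the prediction loss, so selection and prediction train jointly.

import torch
import torch.nn as nn

class GatedSelector(nn.Module):
    def __init__(self, n_genes: int, n_classes: int, temperature: float = 0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_genes))  # per-gene gate log-odds
        self.clf = nn.Linear(n_genes, n_classes)
        self.t = temperature

    def forward(self, x):
        if self.training:
            # Relaxed Bernoulli (binary concrete) sample, reparameterized
            # so gradients flow back into the gate logits.
            u = torch.rand_like(self.logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log1p(-u)
            gate = torch.sigmoid((self.logits + noise) / self.t)
        else:
            gate = (torch.sigmoid(self.logits) > 0.5).float()  # hard subset
        return self.clf(x * gate)    # only gated genes reach the classifier

    def sparsity_penalty(self):
        return torch.sigmoid(self.logits).sum()  # expected number of genes kept

model = GatedSelector(n_genes=2000, n_classes=5)
x = torch.randn(32, 2000)                  # toy expression matrix
y = torch.randint(0, 5, (32,))
loss = nn.functional.cross_entropy(model(x), y) + 1e-3 * model.sparsity_penalty()
loss.backward()                            # one backward pass trains both parts
print(loss.item())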