Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@Mediagazer@mstdn.social
2026-03-22 05:06:05

Sources: PinkNews, which touts itself as the "largest LGBTQ led media brand", is planning widespread redundancies and a move to a "reporter-free newsroom" (Jamie Wareham/QueerAF)
wearequeeraf.com/pinknews-to-m

@mszll@datasci.social
2025-12-22 11:39:52

Targeted cooling of urban cycling networks for heat-resilient mobility
#cycling!

Land-cover comparison for Midtown Manhattan under the (a) Base and (b) Tree-Planting scenarios, showing added canopy cover across the hottest 1,000 street segments.
@nfdi4culture@nfdi.social
2026-01-22 10:51:15

🚀 CfP: For all those interested in the development of systematic approaches and advanced technologies for handling heterogeneous, diverse, and challenging humanities data:
📅 Until March 3rd you can apply for the International Workshop of Semantic Digital Humanities (@semdh@sigmoid.social). The event enables collaboration and networking across diverse fields and the Semantic Web, CH, and DH communities.
➡️ CfP:

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:30

You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
arxiv.org/abs/2512.17678 arxiv.org/pdf/2512.17678 arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovering gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
toXiv_bot_toot
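The core idea in the abstract — discrete gene selection and prediction trained jointly in one differentiable model, with sparsity so only selected genes contribute — can be illustrated with a minimal sketch. This is not the paper's code: it assumes a toy dataset, per-gene sigmoid gates with an L1-style penalty as the differentiable selection mechanism, and a plain logistic predictor; all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for single-cell data: 200 cells x 20 genes,
# where only genes 0 and 1 carry label signal.
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

a = np.zeros(20)   # gate logits: which genes get selected
w = np.zeros(20)   # predictor weights, trained jointly with the gates

lr, lam = 0.5, 0.02
for _ in range(1000):
    gate = 1 / (1 + np.exp(-a))        # soft per-gene gate in (0, 1)
    z = (X * gate) @ w                 # only gated genes reach the predictor
    p = 1 / (1 + np.exp(-z))           # logistic prediction
    g = (p - y) / len(y)               # gradient of cross-entropy w.r.t. z
    w -= lr * (X * gate).T @ g                           # prediction update
    a -= lr * (w * (X.T @ g) + lam) * gate * (1 - gate)  # selection update

gate = 1 / (1 + np.exp(-a))
p = 1 / (1 + np.exp(-((X * gate) @ w)))
bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```

Because the prediction loss flows back into the gate logits while the sparsity penalty pushes uninformative gates toward zero, the informative genes end up with visibly larger gates than the noise genes — the "closed feedback loop" between selection and prediction the abstract describes, in miniature.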

@relcfp@mastodon.social
2026-01-21 06:10:49

Echoes of Shakespeare: Intertextual Dialogues across Centuries
ift.tt/qfKI9M8
updated: Tuesday, January 20, 2026 - 1:35pm
full name / name of organization: Rachel Wifall / Saint…
via Input 4 RELCFP

In Hong Kong, companies have received “America 250” forms from the U.S. consulate soliciting donations.
In Japan, companies have heeded the call and committed to tens of millions of dollars in contributions.
In Singapore, the American ambassador pressed for donations before a room full of executives at a dinner at one of the city-state’s most expensive hotels.
The solicitations from the diplomatic outposts come as Trump's allies are aggressively raising money for a…

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:50

Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
arxiv.org/abs/2512.17696 arxiv.org/pdf/2512.17696 arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
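The mechanism the abstract describes — adding a covariance-kernel term to the attention logits so spatially close sensors attend to each other more strongly — can be sketched in a few lines. This is an assumption-laden illustration, not the paper's architecture: it uses a fixed (not learned) exponential-kernel decay parameter `rho`, a single attention head, and random toy sensor data; in the paper the decay parameters are recovered end-to-end by backpropagation ("Deep Variography").

```python
import numpy as np

rng = np.random.default_rng(1)

def spatial_attention(X, coords, rho, d_k=8):
    """Single-head self-attention over sensor tokens with an additive
    geostatistical bias: the log of an exponential covariance kernel,
    exp(-d_ij / rho), is added to the attention logits so that nearby
    sensors attend to each other more strongly."""
    n, f = X.shape
    Wq = rng.normal(scale=0.1, size=(f, d_k))
    Wk = rng.normal(scale=0.1, size=(f, d_k))
    Wv = rng.normal(scale=0.1, size=(f, d_k))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # pairwise sensor distances d_ij
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # content term + stationary spatial prior: log exp(-d/rho) = -d/rho
    scores = Q @ K.T / np.sqrt(d_k) - dist / rho
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V, A

# 16 sensors on a 4x4 grid, 4 features (e.g. recent readings) each
coords = np.stack(np.meshgrid(np.arange(4.0), np.arange(4.0)), -1).reshape(-1, 2)
X = rng.normal(size=(16, 4))
out, A = spatial_attention(X, coords, rho=1.0)
```

With small content weights the spatial prior dominates, so a sensor's attention to an adjacent grid neighbor exceeds its attention to the far corner — the "soft topological constraint" favoring proximal interactions, while the data-driven QK term remains free to model non-stationary structure.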

@relcfp@mastodon.social
2026-02-15 17:47:14

2026 Wenshan x TSA International Conference: Shakespeare Across Centuries: Reception, Resonance, and Reinvention call-for-papers.sas.upenn.edu/
