Sounds exhausting. But also like what it means to take opinion-forming within a party seriously. #bdk25
"The Securing American Funding and Expertise from Adversarial Research Exploitation (SAFE) Act would deny federal funding to any U.S. scientist who collaborates with anyone “affiliated with a hostile foreign entity,” a category that includes four countries: China, Russia, Iran, and North Korea. The prohibited activities would include joint research, co-authorship on papers, and advising a foreign graduate student or postdoctoral fellow. The language is retroactive, meaning any interactions during the previous 5 years could make a scientist ineligible for future federal funding."
https://www.science.org/content/article/u-s-congress-considers-sweeping-ban-chinese-collaborations
regal
My 4th word was BILBO, as in Baggins, but it seems it also has two other meanings:
1. A rapier; a sword; so named from Bilbao, in Spain.
2. A long bar or bolt of iron with sliding shackles, and a lock at the end, to confine the feet of prisoners or offenders, esp. on board of ships.
Polarization entrepreneurs like Poschardt, Böhmermann, and Passmann serve their target audiences simple camp-building – in-depth discourse is degraded to headlines. Netzpolitik shrewdly observes how the debate turns into a click machine. Quality falls by the wayside. @netzpolitik #Polarisierung #Debattenkultur
You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
https://arxiv.org/abs/2512.17678 https://arxiv.org/pdf/2512.17678 https://arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovering gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
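The abstract describes jointly learning a discrete gene subset and a predictor in one differentiable model. A common way to make such subset selection differentiable is a Gumbel-softmax (concrete) selection layer; the sketch below illustrates that general technique only, with hypothetical names and sizes — it is not the YOTO architecture, whose actual design may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConcreteGeneSelector(nn.Module):
    """Differentiable selection of k genes via Gumbel-softmax relaxation.

    Each of the k selection "slots" learns a categorical distribution over
    all genes; training uses a soft relaxation so gradients reach the
    selection logits, while inference hardens to a discrete subset so that
    only the selected genes contribute to the prediction.
    """

    def __init__(self, n_genes: int, k: int, temperature: float = 0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(k, n_genes))
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Soft one-hot weights per slot; gradients flow into self.logits,
            # so the prediction loss directly shapes which genes get picked.
            weights = F.gumbel_softmax(self.logits, tau=self.temperature, dim=-1)
        else:
            # Hard argmax at inference: an exact discrete gene subset.
            weights = F.one_hot(self.logits.argmax(dim=-1), x.shape[-1]).float()
        # (batch, n_genes) @ (n_genes, k) -> (batch, k) selected expression values
        return x @ weights.T

# Hypothetical sizes: 2000 genes, 32 selected, 10 cell-type classes.
model = nn.Sequential(
    ConcreteGeneSelector(n_genes=2000, k=32),
    nn.Linear(32, 10),  # predictor sees only the selected genes
)
x = torch.randn(4, 2000)          # toy batch of 4 cells
logits = model(x)                 # shape: (4, 10)
```

Because the classifier operates only on the k selected features, no separate downstream model needs retraining once selection converges — the same end-to-end property the abstract emphasizes.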