Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@inthehands@hachyderm.io
2025-12-22 17:04:12

In the 1950s, the Air Force realized that planes were crashing because cockpits didn’t actually fit the pilots’ bodies. Wrong size = danger!! They commissioned a researcher to develop a new, more correct set of standard dimensions for the seat, yoke, etc.
That researcher, Gilbert S. Daniels, came up with 10 body measurements that matter to cockpit size. He gathered measurements of several thousand pilots. And the number of people who were at the average for all ten measurements? Zero. Not a single one.
“Average” proved to be a statistical construct, not a thing that actually exists as a person.
99percentinvisible.org/episode
3/

@dnddeutsch@pnpde.social
2025-10-23 07:05:29

One of my servers is currently being crippled by AI bots. ShadowD3rk, ed3lmaus, OpenRPG, and a few other (sub)domains may therefore be hard to reach right now.
The requests all land at Iocaine and are handled quite effectively there, but the sheer volume of requests still pushes the server to its limit.
The next time you justify AI to yourselves, think about effects like these too. The bots are a huge problem for "small" servers…

@castarco@hachyderm.io
2025-11-22 08:08:13

I know this is me being an example of the Dunning-Kruger effect, but I can't stop believing that the whole #email infrastructure is completely fucked up and should be rewritten from scratch.
Can anyone pull me out of this intellectual hole by explaining why we haven't collectively invented something better already? (Not better than email as a concept; I'm referring to the protocols & standards around it.)
What are my unknown unknowns here?

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 11:50:31

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/3]:
- Sharp Structure-Agnostic Lower Bounds for General Functional Estimation
Jikai Jin, Vasilis Syrgkanis
arxiv.org/abs/2512.17341 mastoxiv.page/@arXiv_statML_bo
- Timely Information Updating for Mobile Devices Without and With ML Advice
Yu-Pin Hsu, Yi-Hsuan Tseng
arxiv.org/abs/2512.17381 mastoxiv.page/@arXiv_csNI_bot/
- SWE-Bench: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open...
Wang, Ramalho, Celestino, Pham, Liu, Sinha, Portillo, Osunwa, Maduekwe
arxiv.org/abs/2512.17419 mastoxiv.page/@arXiv_csSE_bot/
- Perfect reconstruction of sparse signals using nonconvexity control and one-step RSB message passing
Xiaosi Gu, Ayaka Sakata, Tomoyuki Obuchi
arxiv.org/abs/2512.17426 mastoxiv.page/@arXiv_statML_bo
- MULTIAQUA: A multimodal maritime dataset and robust training strategies for multimodal semantic s...
Jon Muhovič, Janez Perš
arxiv.org/abs/2512.17450 mastoxiv.page/@arXiv_csCV_bot/
- When Data Quality Issues Collide: A Large-Scale Empirical Study of Co-Occurring Data Quality Issu...
Emmanuel Charleson Dapaah, Jens Grabowski
arxiv.org/abs/2512.17460 mastoxiv.page/@arXiv_csSE_bot/
- Behavioural Effects of Agentic Messaging: A Case Study on a Financial Service Application
Olivier Jeunen, Schaun Wheeler
arxiv.org/abs/2512.17462 mastoxiv.page/@arXiv_csIR_bot/
- Linear Attention for Joint Power Optimization and User-Centric Clustering in Cell-Free Networks
Irched Chafaa, Giacomo Bacci, Luca Sanguinetti
arxiv.org/abs/2512.17466 mastoxiv.page/@arXiv_eessSY_bo
- Translating the Rashomon Effect to Sequential Decision-Making Tasks
Dennis Gross, Jørn Eirik Betten, Helge Spieker
arxiv.org/abs/2512.17470 mastoxiv.page/@arXiv_csAI_bot/
- Alternating Direction Method of Multipliers for Nonlinear Matrix Decompositions
Atharva Awari, Nicolas Gillis, Arnaud Vandaele
arxiv.org/abs/2512.17473 mastoxiv.page/@arXiv_eessSP_bo
- TwinSegNet: A Digital Twin-Enabled Federated Learning Framework for Brain Tumor Analysis
Almustapha A. Wakili, Adamu Hussaini, Abubakar A. Musa, Woosub Jung, Wei Yu
arxiv.org/abs/2512.17488 mastoxiv.page/@arXiv_csCV_bot/
- Resource-efficient medical image classification for edge devices
Mahsa Lavaei, Zahra Abadi, Salar Beigzad, Alireza Maleki
arxiv.org/abs/2512.17515 mastoxiv.page/@arXiv_eessIV_bo
- PathBench-MIL: A Comprehensive AutoML and Benchmarking Framework for Multiple Instance Learning i...
Brussee, Valkema, Weijer, Doeleman, Schrader, Kers
arxiv.org/abs/2512.17517 mastoxiv.page/@arXiv_csCV_bot/
- HydroGym: A Reinforcement Learning Platform for Fluid Dynamics
Christian Lagemann, et al.
arxiv.org/abs/2512.17534 mastoxiv.page/@arXiv_physicsfl
- When De-noising Hurts: A Systematic Study of Speech Enhancement Effects on Modern Medical ASR Sys...
Chondhekar, Murukuri, Vasani, Goyal, Badami, Rana, SN, Pandia, Katiyar, Jagadeesh, Gulati
arxiv.org/abs/2512.17562 mastoxiv.page/@arXiv_csSD_bot/
- Enabling Disaggregated Multi-Stage MLLM Inference via GPU-Internal Scheduling and Resource Sharing
Lingxiao Zhao, Haoran Zhou, Yuezhi Che, Dazhao Cheng
arxiv.org/abs/2512.17574 mastoxiv.page/@arXiv_csDC_bot/
- SkinGenBench: Generative Model and Preprocessing Effects for Synthetic Dermoscopic Augmentation i...
N. A. Adarsh Pritam, Jeba Shiney O, Sanyam Jain
arxiv.org/abs/2512.17585 mastoxiv.page/@arXiv_eessIV_bo
- MAD-OOD: A Deep Learning Cluster-Driven Framework for an Out-of-Distribution Malware Detection an...
Tosin Ige, Christopher Kiekintveld, Aritran Piplai, Asif Rahman, Olukunle Kolade, Sasidhar Kunapuli
arxiv.org/abs/2512.17594 mastoxiv.page/@arXiv_csCR_bot/
- Confidence-Credibility Aware Weighted Ensembles of Small LLMs Outperform Large LLMs in Emotion De...
Menna Elgabry, Ali Hamdi
arxiv.org/abs/2512.17630 mastoxiv.page/@arXiv_csCL_bot/
- Generative Multi-Objective Bayesian Optimization with Scalable Batch Evaluations for Sample-Effic...
Madhav R. Muthyala, Farshud Sorourifar, Tianhong Tan, You Peng, Joel A. Paulson
arxiv.org/abs/2512.17659 mastoxiv.page/@arXiv_statML_bo
toXiv_bot_toot

@memeorandum@universeodon.com
2025-12-20 02:56:01

Bill Clinton breaks silence on damning Epstein file photos with blistering accusation about Trump: Live updates (Brittany Chain/Daily Mail)
dailymail.co.uk/news/article-1
memeorandum.com/251219/p115#a2

@leftsidestory@mstdn.social
2025-09-24 00:30:02

On The Road - To Xi’An/ Departure 🔜
在路上 - 去西安/ 离 🔜
📷 Pentax MX
🎞️Kodak Double-X 5222
#filmphotography #Photography #blackandwhite

Kodak Double-X 5222 (FF)

English Alt Text:
A black-and-white urban scene featuring a large stone bridge or elevated roadway spanning the center. Below, a plaza with people walking and vehicles parked. Traditional East Asian buildings with tiered roofs line the area, contrasting with modern buildings and a communications tower in the background. A fence in the foreground displays Chinese characters. The image captures cultural contrast and active public space.

Chinese alt text:
A black-and-white city photo; at the center is a large stone bridge or elevated…
Kodak Double-X 5222 (FF)

English Alt Text:
A black-and-white photo taken inside a large building with tall glass windows. Outside, traditional-style architecture is visible. Inside, people sit or stand near the windows. One person leans against a column, gazing out. Others sit on the floor, some chatting or using phones. The reflective floor adds depth. The scene contrasts modern interior design with historical exterior elements, evoking quiet observation and cultural reflection.

Chinese alt text:
This is a…
Kodak Double-X 5222 (FF)

English Alt Text:
A black-and-white image of a bustling indoor public space, likely a train station or airport. The ceiling is high with decorative panels and large murals on the walls. Crowds of people move through the space, many carrying bags or luggage. The lighting is bright, and the scene feels dynamic and timeless. The architecture and human activity suggest a place of transit and movement.

Chinese alt text:
A black-and-white photo of a busy indoor public space, possibly a train station or airport. The ceiling is high and decorated with patterned panels, and there are large murals on the walls. Crowds of people move through the space, …
Kodak Double-X 5222 (FF)

English Alt Text:
A black-and-white photo of a platform. A group of people stands near the edge, partially obscured by a bright horizontal light streak across the middle of the image. The light creates a dramatic effect, hiding the upper bodies and faces of the individuals. Some appear to be talking, others waiting. A train is visible in the background with horizontal stripes and windows. The platform floor has a striped pattern, adding geometric contrast. The overall …

@jacobgudiol@mastodonsweden.se
2025-11-20 20:34:14

Better efficacy for the mRNA flu vaccine compared with the established flu vaccine.
However, more people also had acute side effects immediately after vaccination with the mRNA vaccine.
Pfizer's mRNA flu vaccine shows 34.5% greater efficacy than standard shot in phase 3 fierce…

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:50

Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
arxiv.org/abs/2512.17696 arxiv.org/pdf/2512.17696 arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
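One way to picture the mechanism the abstract describes (my own illustration, not the paper's code): add the log of an exponential covariance kernel, parameterized by a range `rho`, to the usual scaled dot-product logits, so the softmax favors spatially proximal sensors while the QK term supplies the non-stationary, data-driven residual. The kernel choice and the additive-in-log-space combination are assumptions here.

```python
import numpy as np

def spatial_attention(q, k, v, coords, rho):
    """Self-attention with a stationary geostatistical bias (sketch).

    q, k, v: (n, d) arrays; coords: (n, 2) sensor locations;
    rho: spatial range parameter of an exponential covariance kernel
    (learnable via backprop in a real implementation)."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)  # non-stationary, data-driven term
    # pairwise distances between sensors
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # soft prior: log of exp(-dist/rho) favors near neighbours in the softmax
    logits = logits + np.log(np.exp(-dist / rho) + 1e-9)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)  # row-stochastic attention weights
    return w @ v, w
```

As `rho` shrinks, the prior term dominates and attention collapses onto nearby sensors; as it grows, the mechanism approaches ordinary dot-product attention.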
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:30

You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
arxiv.org/abs/2512.17678 arxiv.org/pdf/2512.17678 arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovering gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
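A minimal sketch of the "selection and prediction in one differentiable objective" idea (my own illustration; the gate mechanism, linear predictor, and L1 penalty are assumptions, not the authors' exact design): temperature-sharpened sigmoid gates mask the input so only approximately selected genes reach the predictor, and a sparsity penalty on the gates is trained jointly with the task loss.

```python
import numpy as np

def gated_predict(x, gate_logits, w, tau=0.1):
    """Joint gene selection + prediction sketch.

    x: (batch, genes) expression matrix; gate_logits: (genes,) learnable
    selection scores; w: (genes,) linear predictor weights."""
    gates = 1.0 / (1.0 + np.exp(-gate_logits / tau))  # -> {0,1} as tau -> 0
    return (x * gates) @ w, gates

def joint_loss(pred, y, gates, lam=0.01):
    """Task loss plus an L1 penalty on the gates, so selection and
    prediction are optimized by a single training objective."""
    return np.mean((pred - y) ** 2) + lam * np.sum(gates)
```

Because the gates multiply the input directly, genes with near-zero gates contribute nothing at inference, which is the property the abstract highlights (no separate downstream classifier needed).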
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:33:00

Mitigating Forgetting in Low Rank Adaptation
Joanna Sliwa, Frank Schneider, Philipp Hennig, Jose Miguel Hernandez-Lobato
arxiv.org/abs/2512.17720 arxiv.org/pdf/2512.17720 arxiv.org/html/2512.17720
arXiv:2512.17720v1 Announce Type: new
Abstract: Parameter-efficient fine-tuning methods, such as Low-Rank Adaptation (LoRA), enable fast specialization of large pre-trained models to different downstream applications. However, this process often leads to catastrophic forgetting of the model's prior domain knowledge. We address this issue with LaLoRA, a weight-space regularization technique that applies a Laplace approximation to Low-Rank Adaptation. Our approach estimates the model's confidence in each parameter and constrains updates in high-curvature directions, preserving prior knowledge while enabling efficient target-domain learning. By applying the Laplace approximation only to the LoRA weights, the method remains lightweight. We evaluate LaLoRA by fine-tuning a Llama model for mathematical reasoning and demonstrate an improved learning-forgetting trade-off, which can be directly controlled via the method's regularization strength. We further explore different loss landscape curvature approximations for estimating parameter confidence, analyze the effect of the data used for the Laplace approximation, and study robustness across hyperparameters.
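The regularizer described here can be sketched as follows (my own illustration, assuming the Laplace approximation reduces to an EWC-style diagonal quadratic penalty; the paper's actual curvature estimate may differ): a diagonal curvature estimate, such as the Fisher diagonal, weights squared deviations of the LoRA weights from their pre-fine-tuning values, so updates along high-curvature directions, where the model is confident, are discouraged.

```python
import numpy as np

def laplace_penalty(params, prior_params, fisher_diag, strength=1.0):
    """Diagonal-Laplace quadratic regularizer over the LoRA weights only.

    params / prior_params: lists of arrays (current and pre-fine-tuning
    LoRA weights); fisher_diag: matching list of per-parameter curvature
    estimates; strength: the regularization knob controlling the
    learning-forgetting trade-off."""
    total = 0.0
    for p, p0, f in zip(params, prior_params, fisher_diag):
        total += np.sum(f * (p - p0) ** 2)
    return 0.5 * strength * total
```

During fine-tuning this term would be added to the task loss; because it touches only the low-rank adapter weights, the overhead stays small relative to a full-model Laplace approximation.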
toXiv_bot_toot