Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_csGT_bot@mastoxiv.page
2025-12-10 07:58:51

Beyond Revenue and Welfare: Counterfactual Analysis of Spectrum Auctions with Application to Canada's 3800 MHz Allocation
Sara Jalili Shani, Kris Joseph, Michael B. McNally, James R. Wright
arxiv.org/abs/2512.08106 arxiv.org/pdf/2512.08106 arxiv.org/html/2512.08106
arXiv:2512.08106v1 Announce Type: new
Abstract: Spectrum auctions are the primary mechanism through which governments allocate scarce radio frequencies, with outcomes that shape competition, coverage, and innovation in telecommunications markets. While traditional models of spectrum auctions often rely on strong equilibrium assumptions, we take a more parsimonious approach by modeling bidders as myopic and straightforward: in each round, firms simply demand the bundle that maximizes their utility given current prices. Despite its simplicity, this model proves effective in predicting the outcomes of Canada's 2023 auction of 3800 MHz spectrum licenses. Using detailed round-by-round bidding data, we estimate bidders' valuations through a linear programming framework and validate that our model reproduces key features of the observed allocation and price evolution. We then use these estimated valuations to simulate a counterfactual auction under an alternative mechanism that incentivizes deployment in rural and remote regions, aligning with one of the key objectives set out in the Canadian Telecommunications Act. The results show that the proposed mechanism substantially improves population coverage in underserved areas. These findings demonstrate that a behavioral model with minimal assumptions is sufficient to generate reliable counterfactual predictions, making it a practical tool for policymakers to evaluate how alternative auction designs may influence future outcomes. In particular, our study demonstrates a method for counterfactual mechanism design, providing a framework to evaluate how alternative auction rules could advance policy goals such as equitable deployment across Canada.
toXiv_bot_toot
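
The abstract's two moving parts, myopic best-response bidding and LP-based valuation estimation, can be illustrated in miniature. Below is a hypothetical Python sketch, not the authors' code: the license names, prices, and additive valuation form are invented for illustration, and the LP simply recovers per-license values consistent with revealed preference at each observed round.

    # Hypothetical sketch (not the paper's code) of (1) a myopic bidder who
    # demands the utility-maximizing bundle at current prices, and (2) a
    # revealed-preference LP recovering per-license values from observed bids.
    from itertools import chain, combinations

    import numpy as np
    from scipy.optimize import linprog

    LICENSES = ["A", "B", "C"]  # toy license set, purely illustrative

    def bundles(items):
        """All subsets of the license set."""
        return list(chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1)))

    def myopic_demand(values, prices):
        """Myopic bidder: demand the bundle maximizing additive value minus
        posted prices, given only the current round's prices."""
        return max(bundles(LICENSES),
                   key=lambda S: sum(values[i] - prices[i] for i in S))

    def estimate_values(observed):
        """Revealed-preference LP: find per-license values v >= 0 such that
        each observed bundle was a best response to that round's prices,
        i.e. v(alt) - p(alt) <= v(chosen) - p(chosen) for every alternative."""
        n = len(LICENSES)
        idx = {lic: k for k, lic in enumerate(LICENSES)}
        A_ub, b_ub = [], []
        for prices, chosen in observed:
            for alt in bundles(LICENSES):
                if set(alt) == set(chosen):
                    continue
                row = np.zeros(n)
                for i in alt:
                    row[idx[i]] += 1.0
                for i in chosen:
                    row[idx[i]] -= 1.0
                A_ub.append(row)
                b_ub.append(sum(prices[i] for i in alt)
                            - sum(prices[i] for i in chosen))
        # Any feasible point is consistent with the observed demands;
        # minimizing total value yields a conservative lower-bound estimate.
        res = linprog(c=np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * n)
        return dict(zip(LICENSES, res.x)) if res.success else None

    # One observed round: at these prices the bidder demanded {A, B}.
    obs = [({"A": 10.0, "B": 5.0, "C": 20.0}, ("A", "B"))]
    print(estimate_values(obs))  # e.g. {'A': 10.0, 'B': 5.0, 'C': 0.0}
    print(myopic_demand({"A": 12.0, "B": 7.0, "C": 4.0},
                        {"A": 10.0, "B": 5.0, "C": 20.0}))  # ('A', 'B')

With richer bundle structure (as in a real clock auction) the same revealed-preference inequalities apply; only the parameterization of the valuation changes.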

@toxi@mastodon.thi.ng
2025-11-01 10:17:56

TIL about the Perverse Incentive (aka Cobra Effect), which describes how "incentives are often designed to achieve short-term goals, but in the long run, they lead to bigger problems or undermine the original objectives":
en.wikipedia.org/wiki/Perverse

@arXiv_mathOC_bot@mastoxiv.page
2025-10-14 09:34:58

Robust Exploratory Stopping under Ambiguity in Reinforcement Learning
Junyan Ye, Hoi Ying Wong, Kyunghyun Park
arxiv.org/abs/2510.10260 arx…

@karlauerbach@sfba.social
2025-10-15 00:16:41

How does it feel to have given $60 of your money to bail out Argentina?
What, you did not know that $60 was pulled out of your pocket (and the pocket of every man, woman, and child in the US) to be gifted unto Argentina?
pbs.org/newsho…

@arXiv_eessSY_bot@mastoxiv.page
2025-10-14 11:30:28

Robust Closed-Form Control for MIMO Nonlinear Systems under Conflicting Time-Varying Hard and Soft Constraints
Farhad Mehdifar, Charalampos P. Bechlioulis, Dimos V. Dimarogonas
arxiv.org/abs/2510.11393

@arXiv_csDB_bot@mastoxiv.page
2025-10-15 07:33:51

Aixel: A Unified, Adaptive and Extensible System for AI-powered Data Analysis
Meihui Zhang, Liming Wang, Chi Zhang, Zhaojing Luo
arxiv.org/abs/2510.12642

@arXiv_csAI_bot@mastoxiv.page
2025-10-15 09:49:11

PromptFlow: Training Prompts Like Neural Networks
Jingyi Wang, Hongyuan Zhu, Ye Niu, Yunhui Deng
arxiv.org/abs/2510.12246 arxiv.org/pdf/251…

@arXiv_csSE_bot@mastoxiv.page
2025-10-15 08:32:42

Task-Aware Reduction for Scalable LLM-Database Systems
Marcus Emmanuel Barnes, Taher A. Ghaleb, Safwat Hassan
arxiv.org/abs/2510.11813 arxi…

@arXiv_mathOC_bot@mastoxiv.page
2025-10-14 09:28:18

Distributionally Robust Control with End-to-End Statistically Guaranteed Metric Learning
Jingyi Wu, Chao Ning, Yang Shi
arxiv.org/abs/2510.10214

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:30

You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
arxiv.org/abs/2512.17678 arxiv.org/pdf/2512.17678 arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovering gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
toXiv_bot_toot
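
The abstract's core trick, selecting discrete gene subsets inside a single differentiable model, is commonly implemented with straight-through or relaxed stochastic gates. Here is a minimal hypothetical sketch in that spirit; it is not the authors' YOTO implementation (which is multi-task and trained on real scRNA-seq data), and the gate design, penalty weight, and toy data are all assumptions.

    # Hypothetical sketch of differentiable gene-subset selection, not YOTO:
    # a learnable 0/1 gate per gene is trained jointly with a classifier that
    # only ever sees the gated input, and an L1-style penalty on the gate
    # probabilities pushes the selection toward a compact subset.
    import torch
    import torch.nn as nn

    class GatedSelector(nn.Module):
        def __init__(self, n_genes, n_classes, hidden=64):
            super().__init__()
            # Start with all gates open so the sparsity penalty prunes gradually.
            self.logits = nn.Parameter(torch.full((n_genes,), 2.0))
            self.clf = nn.Sequential(
                nn.Linear(n_genes, hidden), nn.ReLU(),
                nn.Linear(hidden, n_classes))

        def gates(self):
            probs = torch.sigmoid(self.logits)
            hard = (probs > 0.5).float()
            # Straight-through: hard 0/1 mask on the forward pass,
            # soft sigmoid gradient on the backward pass.
            return hard + probs - probs.detach()

        def forward(self, x):
            # Only selected genes contribute to the prediction head.
            return self.clf(x * self.gates())

    torch.manual_seed(0)
    x = torch.randn(256, 100)              # 256 cells x 100 genes (random)
    y = (x[:, 3] + x[:, 7] > 0).long()     # label depends on genes 3 and 7
    model = GatedSelector(n_genes=100, n_classes=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for step in range(500):
        loss = nn.functional.cross_entropy(model(x), y)
        loss = loss + 1e-2 * torch.sigmoid(model.logits).sum()  # sparsity
        opt.zero_grad(); loss.backward(); opt.step()
    kept = (torch.sigmoid(model.logits) > 0.5).nonzero().flatten().tolist()
    print("selected genes:", kept)         # ideally a small set incl. 3 and 7

Because the hard mask zeroes out unselected genes during training, the classifier that comes out of this loop already operates on the selected subset, which mirrors the abstract's point that no separate downstream classifier needs to be trained.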