Trump was sued on Friday by preservationists asking a federal court to halt his White House ballroom project. The National Trust for Historic Preservation, a privately funded group, is asking the U.S. District Court to block the project, which has already involved razing the East Wing, until it goes through comprehensive independent design reviews and environmental assessments and wins approval from Congress.
Riccati-ZORO: An efficient algorithm for heuristic online optimization of internal feedback laws in robust and stochastic model predictive control
Florian Messerer, Yunfan Gao, Jonathan Frey, Moritz Diehl
https://arxiv.org/abs/2511.10473 https://arxiv.org/pdf/2511.10473 https://arxiv.org/html/2511.10473
arXiv:2511.10473v1 Announce Type: new
Abstract: We present Riccati-ZORO, an algorithm for tube-based optimal control problems (OCPs). Tube OCPs predict a tube of trajectories in order to capture predictive uncertainty. The tube induces a constraint tightening via additional backoff terms. This backoff can significantly affect the performance, and thus implicitly defines a cost of uncertainty. Optimizing the feedback law used to predict the tube can significantly reduce the backoffs, but its online computation is challenging.
Riccati-ZORO jointly optimizes the nominal trajectory and uncertainty tube based on a heuristic uncertainty cost design. The algorithm alternates between two subproblems: (i) a nominal OCP with fixed backoffs, (ii) an unconstrained tube OCP, which optimizes the feedback gains for a fixed nominal trajectory. For the tube optimization, we propose a cost function informed by the proximity of the nominal trajectory to constraints, prioritizing reduction of the corresponding backoffs. These ideas are developed in detail for ellipsoidal tubes under linear state feedback. In this case, the decomposition into the two subproblems yields a substantial reduction of the computational complexity with respect to the state dimension from $\mathcal{O}(n_x^6)$ to $\mathcal{O}(n_x^3)$, i.e., the complexity of a nominal OCP.
We investigate the algorithm in numerical experiments, and provide two open-source implementations: a prototyping version in CasADi and a high-performance implementation integrated into the acados OCP solver.
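For the ellipsoidal-tube, linear-state-feedback setting described above, the two building blocks can be sketched generically: a backward Riccati recursion producing feedback gains, and a forward propagation of ellipsoid shape matrices from which halfspace backoffs follow. This is a minimal illustration of those ingredients, not the Riccati-ZORO algorithm or its acados implementation; the system matrices, weights `Q` and `R`, disturbance covariance `W`, and constraint vector `c` below are placeholder assumptions.

```python
import numpy as np

def riccati_gains(A, B, Q, R, N):
    # Backward Riccati recursion for time-varying LQR feedback gains.
    S = Q.copy()
    gains = []
    for _ in range(N):
        K = -np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
        Acl = A + B @ K
        S = Q + K.T @ R @ K + Acl.T @ S @ Acl
        gains.append(K)
    return gains[::-1]  # reorder to run forward in time

def propagate_tube(A, B, gains, W, P0):
    # Forward propagation of ellipsoid shape matrices under
    # x+ = (A + B K) x + w, with w of covariance W.
    P, tubes = P0.copy(), [P0.copy()]
    for K in gains:
        Acl = A + B @ K
        P = Acl @ P @ Acl.T + W
        tubes.append(P.copy())
    return tubes

def backoffs(tubes, c):
    # Backoff for a halfspace constraint c^T x <= b is sqrt(c^T P c).
    return [float(np.sqrt(c @ P @ c)) for P in tubes]

# Demo: unstable first state, feedback vs. no feedback (placeholder numbers).
A = np.array([[1.1, 0.0], [0.0, 0.5]])
B = np.array([[1.0], [0.0]])
Q, R = np.eye(2), np.eye(1)
W, P0 = 0.01 * np.eye(2), 0.01 * np.eye(2)
c = np.array([1.0, 0.0])
gains = riccati_gains(A, B, Q, R, 10)
b_feedback = backoffs(propagate_tube(A, B, gains, W, P0), c)
b_open = backoffs(propagate_tube(A, B, [np.zeros((1, 2))] * 10, W, P0), c)
```

Comparing `b_feedback` against `b_open` shows how optimizing the feedback law shrinks the tube, and hence the constraint tightening the abstract calls the cost of uncertainty.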
toXiv_bot_toot
from my link log —
An orbital house of cards: frequent satellite megaconstellation close conjunctions.
https://arxiv.org/abs/2512.09643
saved 2025-12-11
Source: Microsoft is in talks to design future custom chips with Broadcom, which would involve Microsoft switching its business from Marvell (Abram Brown/The Information)
https://www.theinformation.com/briefings/microsoft-discusses-custom-chips-broadcom
I was unreasonably giddy when I stumbled across the real-time discussion of implementation of a feature I use regularly in Shotcut — the proposal, the math, the initial implementation, and then how it evolved into the feature I use.
There's something so cool about seeing people nerd out about stuff and seeing development happen in the open.
Local Computation Algorithms for (Minimum) Spanning Trees on Expander Graphs
Pan Peng, Yuyang Wang
https://arxiv.org/abs/2602.07394 https://arxiv.org/pdf/2602.07394 https://arxiv.org/html/2602.07394
arXiv:2602.07394v1 Announce Type: new
Abstract: We study \emph{local computation algorithms (LCAs)} for constructing spanning trees. In this setting, the goal is to locally determine, for each edge $ e \in E $, whether it belongs to a spanning tree $ T $ of the input graph $ G $, where $ T $ is defined implicitly by $ G $ and the randomness of the algorithm. It is known that LCAs for spanning trees do not exist in general graphs, even for simple graph families. We identify a natural and well-studied class of graphs -- \emph{expander graphs} -- that do admit \emph{sublinear-time} LCAs for spanning trees. This is perhaps surprising, as previous work on expanders only succeeded in designing LCAs for \emph{sparse spanning subgraphs}, rather than full spanning trees. We design an LCA with probe complexity $ O\left(\sqrt{n}\left(\frac{\log^2 n}{\phi^2} d\right)\right)$ for graphs with conductance at least $ \phi $ and maximum degree at most $ d $ (not necessarily constant), which is nearly optimal when $\phi$ and $d$ are constants, since $\Omega(\sqrt{n})$ probes are necessary even for expanders. Next, we show that for the natural class of \emph{Erd\H{o}s--R\'enyi graphs} $ G(n, p) $ with $ np = n^{\delta} $ for any constant $ \delta > 0 $ (which are expanders with high probability), the $ \sqrt{n} $ lower bound can be bypassed. Specifically, we give an \emph{average-case} LCA for such graphs with probe complexity $ \tilde{O}(\sqrt{n^{1 - \delta}})$.
Finally, we extend our techniques to design LCAs for the \emph{minimum spanning tree (MST)} problem on weighted expander graphs. Specifically, given a $d$-regular unweighted graph $\bar{G}$ with sufficiently strong expansion, we consider the weighted graph $G$ obtained by assigning to each edge an independent and uniform random weight from $\{1,\ldots,W\}$, where $W = O(d)$. We show that there exists an LCA that is consistent with an exact MST of $G$, with probe complexity $\tilde{O}(\sqrt{n}d^2)$.
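The query model here -- answer "is edge $e$ in $T$?" consistently across probes, with $T$ never materialized -- can be illustrated for the MST part via the cycle property: with distinct weights, an edge belongs to the unique MST iff its endpoints are not connected using only strictly lighter edges. A minimal sketch of that local membership test, not the paper's algorithm; the naive search below carries no sublinear probe guarantee, unlike the $\tilde{O}(\sqrt{n}d^2)$ bound above.

```python
from collections import deque

def in_mst(edge, adj):
    # Cycle property: with distinct weights, edge (u, v, w) is in the MST
    # iff u and v are NOT connected using only edges of weight < w.
    # adj maps each vertex to a list of (neighbor, weight) pairs.
    u, v, w = edge
    seen, frontier = {u}, deque([u])
    while frontier:
        x = frontier.popleft()
        for y, wy in adj[x]:
            if wy < w and y not in seen:
                if y == v:
                    return False  # lighter path exists: edge is not in the MST
                seen.add(y)
                frontier.append(y)
    return True
```

Because the answer for each edge depends only on the (fixed) graph, all queries are automatically consistent with the same implicit tree, which is exactly the consistency requirement an LCA must meet.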
The #EU Cyber Resilience Act (Regulation (EU) 2024/2847) now has an Implementing Act:
https://digital-strategy.ec.europa.eu/en/factpages/cyber-resilience-act-imple…
Correlation of Rankings in Matching Markets
Rémi Castera, Patrick Loiseau, Bary S. R. Pradelski
https://arxiv.org/abs/2512.05304 https://arxiv.org/pdf/2512.05304 https://arxiv.org/html/2512.05304
arXiv:2512.05304v1 Announce Type: new
Abstract: We study the role of correlation in matching markets, where multiple decision-makers simultaneously face selection problems from the same pool of candidates. We propose a model in which a candidate's priority scores across different decision-makers exhibit varying levels of correlation dependent on the candidate's sociodemographic group. Such differential correlation can arise in school choice due to the varying prevalence of selection criteria, in college admissions due to test-optional policies, or due to algorithmic monoculture, that is, when decision-makers rely on the same algorithms and data sets to evaluate candidates. We show that higher correlation for one of the groups generally improves the outcome for all groups, leading to higher efficiency. However, students from a given group are more likely to remain unmatched as their own correlation level increases. This implies that it is advantageous to belong to a low-correlation group. Finally, we extend the tie-breaking literature to multiple priority classes and intermediate levels of correlation. Overall, our results point to differential correlation as a previously overlooked systemic source of group inequalities in school, university, and job admissions.
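The direction of the effect -- members of a high-correlation group are more likely to remain unmatched -- can be seen in a stripped-down Monte Carlo with two decision-makers and a fixed acceptance threshold. This ignores capacities, preferences, and stable matching entirely, so it only illustrates the one-draw-versus-two-draws intuition; the threshold and sample size are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
t = 1.0  # common admission threshold for both decision-makers

# High-correlation group: a candidate has the SAME score at both schools,
# so rejection is a single draw.
s = rng.standard_normal(n)
unmatched_corr = np.mean(s < t)

# Low-correlation group: independent scores give two independent chances,
# so remaining unmatched requires losing both draws.
s1, s2 = rng.standard_normal(n), rng.standard_normal(n)
unmatched_indep = np.mean((s1 < t) & (s2 < t))
```

With a standard normal score the correlated group is unmatched with probability about Φ(t), while the independent group is unmatched with probability about Φ(t)², which is strictly smaller -- consistent with the abstract's claim that it is advantageous to belong to a low-correlation group.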
AWS launches DevOps Agent, an AI-enabled tool designed to help clients quickly identify root causes of outages and implement fixes, available in preview (Jordan Novet/CNBC)
https://www.cnbc.com/2025/12/02/amazon-launches-cloud-ai-tool…
Learning Paths to Multi-Sector Equilibrium: Belief Dynamics Under Uncertain Returns to Scale
Stefano Nasini, Rabia Nessah, Bertrand Wigniolle
https://arxiv.org/abs/2512.07013 https://arxiv.org/pdf/2512.07013 https://arxiv.org/html/2512.07013
arXiv:2512.07013v1 Announce Type: new
Abstract: This paper explores the dynamics of learning in a multi-sector general equilibrium model where firms operate under incomplete information about their production returns to scale. Firms iteratively update their beliefs using maximum a posteriori estimation, derived from observed production outcomes, to refine their knowledge of their returns to scale. The implications of these learning dynamics for market equilibrium, and the conditions under which firms can effectively learn their true returns to scale, are the key objects of this study. Our results shed light on how idiosyncratic shocks influence the learning process and demonstrate that input decisions encode all pertinent information for belief updates. Additionally, we show that long-memory (path-dependent) learning, which keeps track of all past estimates, ends up performing worse than a short-memory (path-independent) approach.
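The iterative belief updating described above can be illustrated with a conjugate Gaussian MAP update for a single scale parameter under a linear observation model -- a minimal sketch under Gaussian assumptions, not the paper's multi-sector general-equilibrium model, and it does not reproduce the long- versus short-memory comparison. All numerical values are placeholders.

```python
import numpy as np

def map_update(mu, var, a, y, noise_var):
    # Conjugate Gaussian update for the observation y = theta * a + eps,
    # eps ~ N(0, noise_var). The posterior mean is the MAP estimate.
    prec = 1.0 / var + a * a / noise_var
    new_var = 1.0 / prec
    new_mu = new_var * (mu / var + a * y / noise_var)
    return new_mu, new_var

# Toy simulation: a firm refines its belief about a scale parameter theta
# from noisy production outcomes (placeholder values, not the paper's model).
rng = np.random.default_rng(1)
theta_true, noise_var = 0.7, 0.1
mu, var = 0.0, 1.0  # Gaussian prior belief
for _ in range(200):
    a = 1.0  # observed (log) input level
    y = theta_true * a + rng.normal(0.0, np.sqrt(noise_var))
    mu, var = map_update(mu, var, a, y, noise_var)
```

After repeated observations the belief concentrates near the true parameter, which is the effective-learning regime the abstract asks about; in the paper, whether this happens depends on the equilibrium feedback between input decisions and observed outcomes.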