
2025-07-29 09:17:01
Some open questions and conjectures about visibility and iteration in bounded convex domains in $\mathbb C^N$
Filippo Bracci, Ahmed Yekta Ökten
https://arxiv.org/abs/2507.19967
Budda Baker 'excited' for Cardinals' future despite 'more losing than winning' over first eight seasons https://www.nfl.com/news/budda-baker-excited-for-cardinals-future-despite-more-losing-t…
Block Coordinate Descent Network Simplex for Optimal Transport
Lingrui Li, Nobuo Yamashita
https://arxiv.org/abs/2506.21231 https://a…
An Efficient Alternating Minimization Algorithm for Computing Quantum Rate-Distortion Function
Lingyi Chen, Deheng Yuan, Wenyi Zhang, Hao Wu, Huihui Wu
https://arxiv.org/abs/2507.19920
Fourth-Order Compact FDMs for Steady and Time-Dependent Nonlinear Convection-Diffusion Equations
Qiwei Feng, Catalin Trenchea
https://arxiv.org/abs/2507.18799 https://
Unfolding Iterators: Specification and Verification of Higher-Order Iterators, in OCaml
Ion Chirica, Mário Pereira
https://arxiv.org/abs/2506.20310 h…
Deciding Robust Instances of an Escape Problem for Dynamical Systems in Euclidean Space
Eike Neumann
https://arxiv.org/abs/2506.21481 https://
Utility-Driven Speculative Decoding for Mixture-of-Experts
Anish Saxena, Po-An Tsai, Hritvik Taneja, Aamer Jaleel, Moinuddin Qureshi
https://arxiv.org/abs/2506.20675
Complexity of PXP scars revisited
Pawel Caputa, Xuhao Jiang, Sinong Liu
https://arxiv.org/abs/2506.21156 https://arxiv.org/pdf/2506.2…
Figured out what I was doing wrong in the curve25519 refactoring: the state signal I was using to dispatch register file reads also glitched high for one cycle (not sure if this is technically a glitch since it's synchronous but whatever) at the end of each main loop iteration.
This is harmless if you have combinatorial reads, but if you have synchronous reads with latency it leads to the "read data ready" signal being asserted an extra time and some extra math operation…
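The failure mode described above can be sketched with a toy cycle-accurate model (pure Python, illustrative only; the signal names and per-iteration timing are assumptions, not the actual curve25519 RTL):

```python
# Toy model of the bug (all names illustrative): a dispatch signal that also
# pulses high for one spurious cycle at the end of each main-loop iteration.

def count_ops(iterations=3, read_latency=1):
    """Count math ops triggered by 'read data ready' pulses.

    With combinational reads (latency 0) the spurious pulse's ready falls in
    the end-of-iteration cycle, which the consumer never samples, so it is
    harmless. With a one-cycle synchronous read, that ready is delayed into
    the first cycle of the next iteration, where the consumer *is* sampling,
    and an extra operation fires.
    """
    dispatch_per_iter = [1, 0, 0, 1]   # cycle 0: real read; cycle 3: glitch
    sampling_per_iter = [1, 1, 1, 0]   # consumer ignores the last cycle

    dispatch = dispatch_per_iter * iterations
    sampling = sampling_per_iter * iterations

    ops = 0
    for cycle, d in enumerate(dispatch):
        if not d:
            continue
        ready_cycle = cycle + read_latency
        if ready_cycle < len(sampling) and sampling[ready_cycle]:
            ops += 1
    return ops
```

With zero-latency reads each iteration fires exactly one op; with latency 1 the glitch's delayed ready lands in a sampled cycle and an extra op fires per iteration.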
Quantum Power Iteration Unified Using Generalized Quantum Signal Processing
Viktor Khinevich, Yasunori Lee, Nobuyuki Yoshioka, Wataru Mizukami
https://arxiv.org/abs/2507.11142
@… finance apps have been this way for years and are only getting worse. QuickBooks Online is even worse as you pay for the account and can’t remove all the ads from the dashboard. In their latest iteration they are now appearing on every screen except reports, and I fully expect them to show up there as well. This following yet another price increase for features …
Chain-of-Experts: Unlocking the Communication Power of Mixture-of-Experts Models
Zihan Wang, Rui Pan, Jiarui Yao, Robert Csordas, Linjie Li, Lu Yin, Jiajun Wu, Tong Zhang, Manling Li, Shiwei Liu
https://arxiv.org/abs/2506.18945
EigenWave: An Optimal O(N) Method for Computing Eigenvalues and Eigenvectors by Time-Filtering the Wave Equation
Daniel Appelo, Jeffrey W. Banks, William D. Henshaw, Ngan Le, Donald W. Schwendeman
https://arxiv.org/abs/2507.18282
Symmetry breaking in time-dependent billiards
Anne Kétri Pasquinelli da Fonseca, Edson Denis Leonel
https://arxiv.org/abs/2505.20488 https://
Faster Fixed-Point Methods for Multichain MDPs
Matthew Zurek, Yudong Chen
https://arxiv.org/abs/2506.20910 https://arxiv.org/pdf/2506…
Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to that ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw man version of your argument, but when I in response asked some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time they wanted to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here, and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: https://chelseatroy.com/2024/08/28/does-ai-benefit-the-world/ which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is) then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. 
Perhaps if our objection to a technology was absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I believe strongly enough that the overall outcomes of objection will be positive that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.
Choosing iteration maps for the parallel Pollard rho method
Finn Rudolph
https://arxiv.org/abs/2506.12844 https://arxiv.org/pdf/2506.…
Monotonicity properties of hyperbolic projections in holomorphic iteration
Argyrios Christodoulou, Konstantinos Zarvalis
https://arxiv.org/abs/2506.19562 h…
AI-Driven Tools in Modern Software Quality Assurance: An Assessment of Benefits, Challenges, and Future Directions
Ihor Pysmennyi, Roman Kyslyi, Kyrylo Kleshch
https://arxiv.org/abs/2506.16586
I’m late to the party but congratulations @…!
Christian Selig, developer of Apollo, joins Digg https://techcrunch.c…
Non-Euclidean Enriched Contraction Theory for Monotone Operators and Monotone Dynamical Systems
Diego Deplano, Sergio Grammatico, Mauro Franceschelli
https://arxiv.org/abs/2506.17990
The Minnesota assassinations, the attempted arson at Governor Shapiro's home, and the violent arrest of Senator Padilla are all straight out of the Jim Crow terrorism playbook. Same goes with cops aiming for journalists covering the ICE protests.
None of these are new tactics, just a new iteration.
-- Max Kennerly
There may be exactly $n$ $Q$-points
Lorenz Halbeisen, Silvan Horvath, Tan Özalp
https://arxiv.org/abs/2507.15123 https://arxiv…
Quasinormal modes and grey-body factors of axial gravitational perturbations of regular black holes in asymptotically safe gravity
Qi-Long Shi, Rui Wang, Wei Xiong, Peng-Cheng Li
https://arxiv.org/abs/2506.16217
Solving nonconvex Hamilton--Jacobi--Isaacs equations with PINN-based policy iteration
Hee Jun Yang, Min Jung Kim, Yeoneung Kim
https://arxiv.org/abs/2507.15455
🔧 #Generators excel at lazy iteration and memory efficiency, implementing Iterator interface for foreach loops
⚡ #Fibers enable cooperative multitasking and nested suspension, perfect for #CLI
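The lazy-iteration idea the post describes is language-agnostic; here it is sketched in Python as a stand-in for the PHP generators the post is actually about:

```python
# Lazy iteration: a generator produces values one at a time on demand,
# so memory use stays constant no matter how large the sequence is.
def squares(n):
    for i in range(n):
        yield i * i

gen = squares(10**9)                  # nothing is materialized here
first_three = [next(gen) for _ in range(3)]
```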
Deciding Termination of Simple Randomized Loops
Éléanore Meyer, Jürgen Giesl
https://arxiv.org/abs/2506.18541 https://
An Iterative PDE Based Illumination Restoration Scheme for Image Enhancement
Dragos-Patru Covei
https://arxiv.org/abs/2506.12560 https://
Iteration Steps of 3x+1 Problem
Youchun Luo
https://arxiv.org/abs/2506.23070 https://arxiv.org/pdf/2506.23070
Collaborative Editable Model
Kaiwen Tang, Aitong Wu, Yao Lu, Guangda Sun
https://arxiv.org/abs/2506.14146 https://arxiv.org/pdf/2506.…
Language Models Improve When Pretraining Data Matches Target Tasks
David Mizrahi, Anders Boesen Lindbo Larsen, Jesse Allardice, Suzie Petryk, Yuri Gorokhov, Jeffrey Li, Alex Fang, Josh Gardner, Tom Gunter, Afshin Dehghan
https://arxiv.org/abs/2507.12466
Value-Set Iteration: Computing Optimal Correlated Equilibria in Infinite-Horizon Multi-Player Stochastic Games
Jiarui Gan, Rupak Majumdar
https://arxiv.org/abs/2506.07186
Revisiting Randomization in Greedy Model Search
Xin Chen, Jason M. Klusowski, Yan Shuo Tan, Chang Yu
https://arxiv.org/abs/2506.15643 https://
PromptCanvas: Composable Prompting Workspaces Using Dynamic Widgets for Exploration and Iteration in Creative Writing
Rifat Mehreen Amin, Oliver Hans Kühle, Daniel Buschek, Andreas Butz
https://arxiv.org/abs/2506.03741
Two-dimensional greedy randomized Kaczmarz methods for solving large-scale linear systems
Tao Li, Meng-Long Xiao, Xin-Fang Zhang
https://arxiv.org/abs/2506.20940
Information Entropy-Based Scheduling for Communication-Efficient Decentralized Learning
Jaiprakash Nagar, Zheng Chen, Marios Kountouris, Photios A. Stavrou
https://arxiv.org/abs/2507.17426
Data warehouses, lakes, lakehouses, and more – our choices significantly affect operational costs and development speed. Join Lars Albertsson at Berlin Buzzwords to explore how different data processing paradigms impact deployment, failure handling, and data quality. Learn strategies to minimise costs and latency, bridge between paradigms, and enhance development iteration and operational efficiency.
Learn more:
Information Preserving Line Search via Bayesian Optimization
Robin Labryga, Tomislav Prusina, Sören Laue
https://arxiv.org/abs/2507.15485 https://…
Numerical and data-driven modeling of spall failure in polycrystalline ductile materials
Indrashish Saha, Lori Graham-Brady
https://arxiv.org/abs/2507.03706
A near-complete resolution of the exponential-time complexity of k-opt for the traveling salesman problem
Sophia Heimann, Hung P. Hoang, Stefan Hougardy
https://arxiv.org/abs/2507.12304
Checkmate: Zero-Overhead Model Checkpointing via Network Gradient Replication
Ankit Bhardwaj, Weiyang Wang, Jeremy Carin, Adam Belay, Manya Ghobadi
https://arxiv.org/abs/2507.13522
Efficient Parallel Training Methods for Spiking Neural Networks with Constant Time Complexity
Wanjin Feng, Xingyu Gao, Wenqian Du, Hailong Shi, Peilin Zhao, Pengcheng Wu, Chunyan Miao
https://arxiv.org/abs/2506.12087
The escaping set in transcendental dynamics
Walter Bergweiler, Lasse Rempe
https://arxiv.org/abs/2507.11370 https://arxiv.org/pdf/250…
Boosting Accelerated Proximal Gradient Method with Adaptive Sampling for Stochastic Composite Optimization
Dongxuan Zhu, Weihuan Huang, Caihua Chen
https://arxiv.org/abs/2507.18277
Reference-Free Iterative Learning Model Predictive Control with Neural Certificates
Wataru Hashimoto, Kazumune Hashimoto, Masako Kishida, Shigemasa Takai
https://arxiv.org/abs/2507.14025
Structured Program Synthesis using LLMs: Results and Insights from the IPARC Challenge
Shraddha Surana, Ashwin Srinivasan, Michael Bain
https://arxiv.org/abs/2506.13820
Finding the Smallest Possible Exact Aggregation of a Markov Chain
Patrick Sonnentag
https://arxiv.org/abs/2507.11157 https://arxiv.or…
The Arrow-Hurwicz iteration for virtual element discretizations of the incompressible Navier-Stokes equations
Binbin Du, Shenxiang Cheng, Yue Yu, Chuanjun Chen
https://arxiv.org/abs/2507.12036
So I just threw together a first iteration of the switch engine PCB schematic in one crazed all-nighter (thanks, non-24-hour sleep schedule, lol; it's 0900 and I'm almost ready for bed).
It's missing some details like mounting holes and such, plus a few odds and ends like a 2.5V LDO and level shifter on the rs232 console, but I think it's 80% or so done. Lots of copy paste from kup-lulz and lcbringup which was of course the intent, those projects were intended to de-risk the s…
This https://arxiv.org/abs/2410.08476 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csNI_…
Forward Reverse Kernel Regression for the Schrödinger bridge problem
Denis Belomestny, John Schoenmakers
https://arxiv.org/abs/2507.00640 https:/…
Accelerating Large-Scale Regularized High-Order Tensor Recovery
Wenjin Qin, Hailin Wang, Jingyao Hou, Jianjun Wang
https://arxiv.org/abs/2506.09594 https:/…
Frank-Wolfe algorithm for star-convex functions
R. Diaz Millan, Orizon Pereira Ferreira, Julien Ugon
https://arxiv.org/abs/2507.17272 https://arxiv.org/pdf…
Lower Bounds for Error Coefficients of Griesmer Optimal Linear Codes via Iteration
Chaofeng Guan, Shitao Li, Gaojun Luo, Zhi Ma, Hong Wang
https://arxiv.org/abs/2507.05567
Alpay Algebra V: Multi-Layered Semantic Games and Transfinite Fixed-Point Simulation
Bugra Kilictas, Faruk Alpay
https://arxiv.org/abs/2507.07868 https://
This https://arxiv.org/abs/2505.12990 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_qu…
This https://arxiv.org/abs/2408.05197 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_mat…
Sub-sampled Trust-Region Methods with Deterministic Worst-Case Complexity Guarantees
Max L. N. Goncalves, Geovani N. Grapiglia
https://arxiv.org/abs/2507.17556 https://
Full normalization for $\kappa^+$-supercompactness
Farmer Schlutzenberg
https://arxiv.org/abs/2506.08287 https://arxiv.org/pdf/2506.0…
A parameterized block-splitting preconditioner for indefinite least squares problem
Davod Khojasteh Salkuyeh
https://arxiv.org/abs/2507.16938 https://arxiv…
Continuous Policy and Value Iteration for Stochastic Control Problems and Its Convergence
Qi Feng, Gu Wang
https://arxiv.org/abs/2506.08121 https://…
This https://arxiv.org/abs/2505.11862 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csLG_…
Shifted HSS preconditioners for the indefinite Helmholtz equation
Colin J Cotter, Kars Knook, Joshua Hope-Collins
https://arxiv.org/abs/2506.18694 https://…
K-ADAPT-VQE: Optimizing Molecular Ground State Searches by Chunking Operators
Tatiana Bespalova, Oumaya Ladhari, Guido Masella
https://arxiv.org/abs/2506.09658
AFLOW4: heading toward disorder
Simon Divilov, Hagen Eckert, Scott D. Thiel, Sean D. Griesemer, Rico Friedrich, Nicholas H. Anderson, Michael J. Mehl, David Hicks, Marco Esters, Nico Hotz, Xiomara Campilongo, Arrigo Calzolari, Stefano Curtarolo
https://arxiv.org/abs/2507.03422
A Denotational Semantics for Quantum Loops
Nicola Assolini, Alessandra Di Pierro
https://arxiv.org/abs/2506.23320 https://arxiv.org/p…
Accelerating Newton-Schulz Iteration for Orthogonalization via Chebyshev-type Polynomials
Ekaterina Grishina, Matvey Smirnov, Maxim Rakhuba
https://arxiv.org/abs/2506.10935
Parallel Polyhedral Projection Method for the Convex Feasibility Problem
Pablo Barros, Roger Behling, Vincent Guigues
https://arxiv.org/abs/2506.15895 http…
Error estimates and adaptivity for a least-squares method applied to the Monge-Ampère equation
Alexandre Caboussat, Anna Peruso, Marco Picasso
https://arxiv.org/abs/2507.17569
Cost-Efficient LLM Training with Lifetime-Aware Tensor Offloading via GPUDirect Storage
Ziqi Yuan, Haoyang Zhang, Yirui Eric Zhou, Apoorve Mohan, I-Hsin Chung, Seetharami Seelam, Jian Huang
https://arxiv.org/abs/2506.06472
An inertial iteratively regularized extragradient method for bilevel variational inequality problems
M. Marques Alves, Kangming Chen, Ellen H. Fukuda
https://arxiv.org/abs/2507.16640
Variational quantum algorithms with invariant probabilistic error cancellation on noisy quantum processors
Yulin Chi, Hongyi Shi, Wen Zheng, Haoyang Cai, Yu Zhang, Xinsheng Tan, Shaoxiong Li, Jianwei Wang, Jiangyu Cui, Man-Hong Yung, Yang Yu
https://arxiv.org/abs/2506.07039
This https://arxiv.org/abs/2503.21224 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csLG_…
Dissipativity-based time domain decomposition for optimal control of hyperbolic PDEs
Bálint Farkas, Birgit Jacob, Manuel Schaller, Merlin Schmitz
https://arxiv.org/abs/2507.07812
Low-rank Momentum Factorization for Memory Efficient Training
Pouria Mahdavinia, Mehrdad Mahdavi
https://arxiv.org/abs/2507.08091 https://arxiv.org/pdf/2507.08091 https://arxiv.org/html/2507.08091
arXiv:2507.08091v1 Announce Type: new
Abstract: Fine-tuning large foundation models presents significant memory challenges due to stateful optimizers like AdamW, often requiring several times more GPU memory than inference. While memory-efficient methods like parameter-efficient fine-tuning (e.g., LoRA) and optimizer state compression exist, recent approaches like GaLore bridge these by using low-rank gradient projections and subspace moment accumulation. However, such methods may struggle with fixed subspaces or computationally costly offline resampling (e.g., requiring full-matrix SVDs). We propose Momentum Factorized SGD (MoFaSGD), which maintains a dynamically updated low-rank SVD representation of the first-order momentum, closely approximating its full-rank counterpart throughout training. This factorization enables a memory-efficient fine-tuning method that adaptively updates the optimization subspace at each iteration. Crucially, MoFaSGD leverages the computed low-rank momentum factors to perform efficient spectrally normalized updates, offering an alternative to subspace moment accumulation. We establish theoretical convergence guarantees for MoFaSGD, proving it achieves an optimal rate for non-convex stochastic optimization under standard assumptions. Empirically, we demonstrate MoFaSGD's effectiveness on large language model alignment benchmarks, achieving a competitive trade-off between memory reduction (comparable to LoRA) and performance compared to state-of-the-art low-rank optimization methods. Our implementation is available at https://github.com/pmahdavi/MoFaSGD.
toXiv_bot_toot
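The low-rank momentum idea in the abstract can be sketched minimally (NumPy, illustrative only: MoFaSGD itself updates the SVD factors incrementally and applies spectrally normalized updates, neither of which this naive full-SVD re-truncation does):

```python
import numpy as np

def lowrank_momentum_step(U, s, Vt, grad, lr=0.1, beta=0.9, rank=2):
    """One illustrative step keeping momentum as a rank-r SVD factorization.

    Naive version for clarity: rebuild the momentum matrix, re-truncate it
    with a full SVD, and use the low-rank momentum as the update direction.
    """
    momentum = beta * (U * s) @ Vt + (1 - beta) * grad
    Uf, sf, Vtf = np.linalg.svd(momentum, full_matrices=False)
    U, s, Vt = Uf[:, :rank], sf[:rank], Vtf[:rank]
    delta = -lr * (U * s) @ Vt         # low-rank momentum drives the update
    return U, s, Vt, delta

# Start from zero momentum for a 4x3 parameter block.
m, n, r = 4, 3, 2
U0, s0, Vt0 = np.zeros((m, r)), np.zeros(r), np.zeros((r, n))
U1, s1, Vt1, delta = lowrank_momentum_step(U0, s0, Vt0, np.ones((m, n)))
```

The stored state is only the three factors (O((m+n)r) memory instead of O(mn)), which is the memory saving the abstract describes.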
A polynomial projective algorithm for convex feasibility problems with positive-definite constraints
Sergei Chubanov
https://arxiv.org/abs/2506.15484 https…
Stochastic gradient descent based variational inference for infinite-dimensional inverse problems
Jiaming Sui, Junxiong Jia, Jinglai Li
https://arxiv.org/abs/2506.08380
Glocal Smoothness: Line Search can really help!
Curtis Fox, Aaron Mishkin, Sharan Vaswani, Mark Schmidt
https://arxiv.org/abs/2506.12648 https://
Faster stochastic cubic regularized Newton methods with momentum
Yiming Yang, Chuan He, Xiao Wang, Zheng Peng
https://arxiv.org/abs/2507.13003 https://
Theoretical analysis and numerical solution to a vector equation $Ax-\|x\|_1x=b$
Yuezhi Wang, Gwi Soo Kim, Jie Meng
https://arxiv.org/abs/2507.04971 https:…
Heterogeneous and anisotropic elastic parameter estimation using a novel semi-analytical forward solver
Xiaopeng Zhu, Zhongyi Huang
https://arxiv.org/abs/2506.15185
Monotone and nonmonotone linearized block coordinate descent methods for nonsmooth composite optimization problems
Yassine Nabou, Lahcen El Bourkhissi, Sebastian U. Stich, Tuomo Valkonen
https://arxiv.org/abs/2506.12397
Faber polynomials in a deltoid region and power iteration momentum methods
Peter Cowal, Nicholas F. Marshall, Sara Pollock
https://arxiv.org/abs/2507.01885
Rate of metastability of an iterative algorithm for quadratic optimization
Paulo Firmino
https://arxiv.org/abs/2506.11342 https://arx…
Convergence of Momentum-Based Optimization Algorithms with Time-Varying Parameters
Mathukumalli Vidyasagar
https://arxiv.org/abs/2506.11904 https://…
An optimal two-side Robin-Robin domain decomposition method for H(div)-elliptic problem
Na Xuyang
https://arxiv.org/abs/2506.12485 https://
SVD method for sparse recovery
Long Li, Liang Ding
https://arxiv.org/abs/2506.11379 https://arxiv.org/pdf/2506.11379
Faithful-Newton Framework: Bridging Inner and Outer Solvers for Enhanced Optimization
Alexander Lim, Fred Roosta
https://arxiv.org/abs/2506.13154 https://
SuperADMM: Solving Quadratic Programs Faster with Dynamic Weighting ADMM
P. C. N. Verheijen, D. Goswami, M. Lazar
https://arxiv.org/abs/2506.11608 https://…
Worst-Case Complexity of High-Order Algorithms for Pareto-Front Reconstruction
Andrea Cristofari, Marianna De Santis, Stefano Lucidi, Giampaolo Liuzzi
https://arxiv.org/abs/2506.11929
A semi-Lagrangian scheme for First-Order Mean Field Games based on monotone operators
Elisabetta Carlini, Valentina Coscetti
https://arxiv.org/abs/2506.10509
A Cubic Regularization Method for Multiobjective Optimization
Douglas S. Gonçalves, Max L. N. Gonçalves, Jefferson G. Melo
https://arxiv.org/abs/2506.08181
An Adaptive Order Caputo Fractional Gradient Descent Method for Multi-objective Optimization Problems
Barsha Shaw, Md Abu Talhamainuddin Ansary
https://arxiv.org/abs/2507.07674
A derivative-free regularization algorithm for equality constrained nonlinear least squares problems
Xi Chen, Jinyan Fan
https://arxiv.org/abs/2507.05623 h…
An inexact inertial projective splitting algorithm with strong convergence
M. Marques Alves, J. E. Navarro Caballero, R. T. Marcavillaca
https://arxiv.org/abs/2507.05382
This https://arxiv.org/abs/2403.18213 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_mat…
A variable dimension sketching strategy for nonlinear least-squares
Stefania Bellavia, Greta Malaspina, Benedetta Morini
https://arxiv.org/abs/2506.03965 h…
This https://arxiv.org/abs/2501.04034 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_mat…