Accurate Solution to Overdetermined Linear Equations with Errors Using L1 Norm Minimization


It has been known for many years that a robust solution to an overdetermined system of linear equations Ax ≈ b is obtained by minimizing the L1 norm of the residual error. A correct solution x to the linear system can often be obtained in this way, in spite of large errors (outliers) in some elements of the (m × n) matrix A and the data vector b. This is in contrast to a least squares solution, where even one large error will typically cause a large error in x. In this paper we give necessary and sufficient conditions under which the correct solution is obtained when there are some errors in A and b. Based on the sufficient condition, it is shown that if k rows of [A b] contain large errors, the correct solution is guaranteed if (m − n)/n ≥ 2k/σ, where σ > 0 is a lower bound on singular values related to A. Since m typically represents the number of measurements, this inequality shows how many data points are needed to guarantee a correct solution in the presence of large errors in some of the data. This inequality is, in fact, an upper bound, and computational results are presented which show that the correct solution will be obtained, with high probability, for much smaller values of m − n.
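The robustness property described above can be illustrated with a small numerical sketch. The example below is not from the paper; it is a hypothetical demonstration in which the L1-residual minimizer is computed by the standard linear-programming reformulation (minimize the sum of slack variables t bounding |Ax − b| componentwise), using SciPy's `linprog`. The problem sizes, the seed, and the outlier magnitudes are arbitrary choices for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical demo: recover x from an overdetermined system Ax ~ b
# when a few entries of the data vector b contain large errors (outliers).
rng = np.random.default_rng(0)
m, n = 40, 3
A = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
b[:2] += 50.0  # k = 2 gross outliers in b

# Least squares: the outliers typically pull the solution away from x_true.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# L1 minimization as a linear program:
#   minimize sum(t)  subject to  -t <= A x - b <= t
# with decision variables [x (n entries), t (m entries)].
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
x_l1 = res.x[:n]

print("least squares error:", np.linalg.norm(x_ls - x_true))
print("L1 error:", np.linalg.norm(x_l1 - x_true))
```

With m large relative to n and only k = 2 corrupted rows, the L1 solution typically coincides with x_true up to solver tolerance, while the least squares solution is visibly perturbed, which is the qualitative behavior the abstract describes.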