Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA)

A Deterministic Linear Program Solver in Current Matrix Multiplication Time

pp. 259–278
  • Abstract

    Interior point algorithms for solving linear programs have been studied extensively for a long time [e.g. Karmarkar 1984; Lee, Sidford FOCS’14; Cohen, Lee, Song STOC’19]. For linear programs of the form min_{Ax=b, x≥0} c^⊤x with n variables and d constraints, the generic case d = Ω(n) has recently been settled by Cohen, Lee and Song [STOC’19]. Their algorithm can solve linear programs in Õ(n^ω log(n/δ)) expected time, where δ is the relative accuracy. This is essentially optimal, as all known linear system solvers require up to O(n^ω) time for solving Ax = b. However, for deterministic solvers, the best upper bound is Vaidya's 30-year-old O(n^2.5 log(n/δ)) bound [FOCS’89]. In this paper we show that one can also settle the deterministic setting by derandomizing Cohen et al.'s Õ(n^ω log(n/δ)) time algorithm. This yields a worst-case Õ(n^ω log(n/δ)) time bound instead of an expected one, together with a simplified analysis that reduces the length of the proof of their central path method by roughly half. Derandomizing this algorithm was also posed as an open question in Song's PhD thesis.
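    Written out in full (a restatement of the standard primal form above, following the setup of Cohen, Lee and Song, included here only for concreteness), the programs in question are

        \min_{x \in \mathbb{R}^n} c^\top x \quad \text{subject to} \quad Ax = b,\ x \ge 0,

    with A ∈ R^{d×n}, i.e. d linear equality constraints on n nonnegative variables.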

    The main tool to achieve our result is a new data-structure that can maintain the solution to a linear system in subquadratic time. More accurately, we are able to maintain √U A^⊤ (AUA^⊤)^{-1} A √U v in subquadratic time under ℓ_2 multiplicative changes to the diagonal matrix U and the vector v. This type of change is common for interior point algorithms. Previous algorithms [e.g. Vaidya STOC’89; Lee, Sidford FOCS’15; Cohen, Lee, Song STOC’19] required Ω(n^2) time for this task. In [Cohen, Lee, Song STOC’19] the matrix is maintained in subquadratic time, but multiplying it with a dense vector to solve the linear system still required Ω(n^2) time. To improve the complexity of their linear program solver, they restricted the solver to only multiply sparse vectors via a random sampling argument. In comparison, our data-structure maintains the entire matrix-vector product, not just the matrix. Interestingly, it can be viewed as a simple modification of Cohen et al.'s data-structure, yet it significantly simplifies the analysis of their central path method and makes the whole algorithm deterministic.
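    As a purely illustrative sketch (not the paper's data-structure), the following numpy snippet recomputes the maintained quantity √U A^⊤ (AUA^⊤)^{-1} A √U v from scratch. Doing this in every iteration of an interior point method forms and solves a dense d × d system and therefore costs at least quadratic time per iteration; removing that per-iteration bottleneck, by keeping the product up to date under multiplicative changes to U and v, is exactly what the new data-structure is for. The function name and the random instance below are hypothetical choices made for the sketch.

        import numpy as np

        def projected_vector(A, u, v):
            """Naively recompute sqrt(U) A^T (A U A^T)^{-1} A sqrt(U) v, where U = diag(u).
            Illustrative from-scratch baseline only; the paper maintains this quantity dynamically."""
            su = np.sqrt(u)                  # diagonal of sqrt(U); u must be entrywise positive
            M = (A * u) @ A.T                # A U A^T, a dense d x d matrix
            y = A @ (su * v)                 # A sqrt(U) v
            x = np.linalg.solve(M, y)        # (A U A^T)^{-1} A sqrt(U) v
            return su * (A.T @ x)            # sqrt(U) A^T (A U A^T)^{-1} A sqrt(U) v

        # Example usage on a random instance with d constraints and n variables.
        rng = np.random.default_rng(0)
        d, n = 50, 200
        A = rng.standard_normal((d, n))
        u = rng.uniform(0.5, 2.0, size=n)    # positive diagonal of U
        v = rng.standard_normal(n)
        p = projected_vector(A, u, v)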