News for July 2018

Four papers for July. New sublinear graph algorithms, distribution testing under new models, and sublinear matrix algorithms. Onward ho…
(Sorry Amit, for missing your paper on sublinear matrix algorithms.)

Metric Sublinear Algorithms via Linear Sampling, by Hossein Esfandiari and Michael Mitzenmacher (arXiv). Consider a weighted clique \(G = (V,E)\) where \(V\) is a set of points in a metric space and edge weights are metric distances. In this setting, sublinear algorithms are those that make \(o(n^2)\) edge queries. This paper studies problems like densest subgraph and maxcut in this setting. The key method is a sparsifying algorithm that achieves the following. (I paraphrase their language.) Consider a positive parameter \(\alpha\), and let \(w(e)\) denote the weight of edge \(e\). The aim is to get a subgraph \(H\) that contains each edge \(e\) of \(G\) independently with probability \(\min(w(e)/\alpha, 1)\). Furthermore, this subgraph should be obtained in time linear in the number of edges in \(H\) (hence the title of the paper). For problems like 1/2-approximating the densest subgraph and PTASes for maxcut, the results show that for a carefully chosen \(\alpha\), approximate solutions on \(H\) give solutions of comparable quality on \(G\). These results cleanly generalize to settings where the edge weights satisfy the triangle inequality only up to a multiplicative penalty.
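
To make the sampling rule concrete, here is a minimal sketch in Python (the names `points`, `dist`, and `alpha` are my placeholders, not from the paper). It spells out the edge-inclusion probabilities, but being naive it spends \(O(n^2)\) time; the paper's whole contribution is realizing this same distribution in time linear in the number of edges actually sampled.

```python
import random

def sparsify_naive(points, dist, alpha):
    """Naive O(n^2) illustration of the sampling rule: keep each edge (i, j)
    independently with probability min(w(i, j) / alpha, 1).
    The paper realizes this distribution in time linear in the number of
    *sampled* edges; this sketch makes no attempt to do so."""
    n = len(points)
    H = []
    for i in range(n):
        for j in range(i + 1, n):
            w = dist(points[i], points[j])  # metric distance = edge weight
            if random.random() < min(w / alpha, 1.0):
                H.append((i, j, w))
    return H
```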

Sublinear Algorithms for (\(\Delta\) + 1) Vertex Coloring, by Sepehr Assadi, Yu Chen, and Sanjeev Khanna (arXiv). Arguably, the first thing you learn about vertex coloring is that a graph with maximum degree \(\Delta\) admits a \((\Delta+1)\)-coloring, which can be found in linear time. But what about sublinear time/space? I like this! You take a simple classical fact, throw in sublinear constraints, and it opens up a rich theory. This paper shows a non-adaptive \(O(n^{3/2})\)-time algorithm for this problem, and gives a nearly matching lower bound. There are also results for streaming and parallel computation, but let’s focus on the sublinear result. It is remarkable that there is no loss in colors in going to sublinear time. (In contrast, the paper shows an \(\Omega(n^2)\) lower bound for constructing a maximal matching.) The main technical tool is a list-coloring result, where each vertex is given a list of colors and must choose its own color from that list. Obviously, if each list is all of \([\Delta + 1]\), such a coloring is possible. The paper proves that even if each list is an independent \(O(\log n)\)-sized random sample of \([\Delta+1]\), a valid coloring still exists with high probability. The final algorithm is pretty involved, and uses this list-coloring result as a building block.
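
The list-coloring statement is easy to play with empirically. Below is a toy simulation under my own assumptions (adjacency lists, a greedy completion); note the paper only guarantees that a valid coloring from these random lists *exists* with high probability, so a greedy failure here does not contradict the theorem.

```python
import math
import random

def palette_sparsification_demo(adj, delta, c=3):
    """Toy demo: each vertex samples roughly c*log(n) colors from
    {0, ..., delta}, then we try to complete a proper coloring greedily
    from these lists. (Greedy completion is a heuristic for illustration,
    not the paper's algorithm.)"""
    n = len(adj)
    k = max(1, int(c * math.log(n + 1)))
    lists = [random.sample(range(delta + 1), min(k, delta + 1))
             for _ in range(n)]
    color = [None] * n
    for v in range(n):
        # Colors already used by v's colored neighbors.
        taken = {color[u] for u in adj[v] if color[u] is not None}
        for col in lists[v]:
            if col not in taken:
                color[v] = col
                break
        if color[v] is None:
            return None  # greedy failed; a valid coloring may still exist
    return color
```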

Anaconda: A Non-Adaptive Conditional Sampling Algorithm for Distribution Testing, by Gautam Kamath and Christos Tzamos (ECCC). The standard model for distribution testing is access to samples from the unknown distribution \(\mathcal{D}\) with support \([n]\). This model has attracted much attention, with a rich set of results, and the complexity of the classic problems of uniformity, identity, and equivalence testing is well understood. But there are alternate models, such as the model of conditional samples (Chakraborty-Fischer-Goldhirsh-Matsliah ’13 and Canonne-Ron-Servedio ’14): for any subset \(S \subseteq [n]\), we can get a random sample from \(\mathcal{D}\) restricted to \(S\). This adds an algorithmic dimension to distribution testing. This paper studies the power of non-adaptive conditional (NACOND) queries. The main result is that uniformity, identity, and equivalence are testable with \(\mathrm{poly}(\log n)\) queries. (There are existing \(\Omega(\log n)\) lower bounds for all these problems.) The heart of these algorithms is a procedure, ANACONDA, that tries to find a set \(S\) in which some element has high probability relative to the mass of \(S\).
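
For concreteness, here is one way to simulate the conditional oracle for a known distribution (a tester, of course, only gets oracle access to the unknown \(\mathcal{D}\)). The behavior when \(S\) has zero mass is a modeling convention that varies across papers; returning a uniform element of \(S\) is one common choice, assumed here.

```python
import random

def cond_sample(p, S):
    """Draw one sample from the distribution p (a dict mapping elements of
    [n] to probabilities), conditioned on the subset S. If p(S) = 0, we
    return a uniform element of S -- one common convention in this model."""
    S = list(S)
    weights = [p.get(i, 0.0) for i in S]
    total = sum(weights)
    if total == 0:
        return random.choice(S)
    return random.choices(S, weights=weights, k=1)[0]
```

A non-adaptive tester must fix all its query sets \(S_1, \dots, S_q\) up front, before seeing any samples, which is exactly what makes the \(\mathrm{poly}(\log n)\) bounds here interesting.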

Sublinear-Time Quadratic Minimization via Spectral Decomposition of Matrices, by Amit Levi and Yuichi Yoshida (arXiv). When it comes to fundamental problems, it’s hard to beat quadratic minimization. Given a matrix \(A \in \mathbb{R}^{n\times n}\), we wish to find \(v \in \mathbb{R}^n\) that minimizes \(v^T A v\). (Over unit vectors, this is basically an eigenvalue/eigenvector problem.) One may have additional terms in the objective, depending on \(v^T v\) or \(v^T b\) (for a fixed vector \(b\)). This paper gives sublinear algorithms for this problem. A natural approach is to simply subsample \(k\) rows and columns to get a submatrix \(B\), solve the problem for \(B\), and hope for the best. This idea has a rich history, going back to seminal work of Frieze-Kannan. Recently, Hayashi-Yoshida showed that a constant \(k\) (depending only on the error parameters) suffices for a non-trivial approximation to this problem. Unfortunately, the resulting error depends on the \(\ell_\infty\)-norm of the solution. This paper shows that for polylogarithmic \(k\), one can get an error depending on the \(\ell_2\)-norm of the solution. This is a significant improvement, especially for sparse solution vectors. The main technical workhorse is a new matrix decomposition theorem, which shows that any matrix can be written as the sum of a few block matrices and a low-norm “error” matrix. Admirably, the paper includes a number of experiments showing the effectiveness of this technique for eigenvalue computations. It’s very nice to see how ideas from sublinear algorithms might have a practical impact.
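
To see the baseline the paper improves on, here is a rough NumPy sketch of the naive subsampling template: solve the unit-norm quadratic minimization on a random principal submatrix and rescale. The \(n/k\) rescaling is my own back-of-the-envelope choice for dense matrices with \(O(1)\) entries, not the paper's estimator or guarantee.

```python
import numpy as np

def subsampled_min_quadratic(A, k, rng=None):
    """Frieze-Kannan-style heuristic: estimate min over unit vectors v of
    v^T A v by solving the same problem on a random k x k principal
    submatrix B and rescaling by n/k. Purely illustrative; the paper's
    actual estimator and error analysis are more refined."""
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    idx = rng.choice(n, size=k, replace=False)
    B = A[np.ix_(idx, idx)]
    B = (B + B.T) / 2  # the quadratic form only sees the symmetric part
    return (n / k) * np.linalg.eigvalsh(B)[0]  # eigvalsh sorts ascending
```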
