# News for June 2019

We’ve got four papers this month: a mix of distribution testing, matrix problems, and a paper on the power of sampling.

On the Complexity of Estimating the Effective Support Size, by Oded Goldreich (ECCC). In distribution testing, a classic problem is that of approximating the support size. By a (now) classic result of Valiant-Valiant, the complexity of this problem is $$\Theta(n/\log n)$$. This paper raises the question of approximating the “effective” support size, given access to more than just samples from the distribution. The $$\epsilon$$-support of a discrete distribution $$\mathcal{D}$$ is the smallest support size among all distributions $$\mathcal{D}'$$ such that $$\|\mathcal{D} - \mathcal{D}'\|_1 \leq \epsilon$$ (or, equivalently, TV-distance). Denote this as $$supp_\epsilon(\mathcal{D})$$. One can also consider a bicriteria version: given approximation parameter $$f$$ and thresholds $$\epsilon_1 < \epsilon_2$$, we need to provide a number in the range $$[supp_{\epsilon_2}(\mathcal{D}), f\cdot supp_{\epsilon_1}(\mathcal{D})]$$. The primary model studied allows for random samples and evaluation queries (where one gets the probability of any known element of the domain). In this model, for arbitrary $$\epsilon_1, \epsilon_2$$, there is a continuum of algorithms trading off query complexity with approximation. At one end, for $$f = O(\log n)$$, the query complexity is $$\widetilde{O}(1/\epsilon_1)$$. At the other end, for $$f=1$$, the query complexity is $$O(\log^* n)$$. (Here, $$n = supp_{\epsilon_1}(\mathcal{D})$$.) There are lower bounds showing the necessity of evaluation queries for subpolynomial query complexities.
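
The power of evaluation queries is easiest to see through a simple identity: for $$x \sim \mathcal{D}$$, $$\mathbb{E}[1/\mathcal{D}(x)]$$ is exactly the support size, since each supported element $$x$$ contributes $$\mathcal{D}(x) \cdot (1/\mathcal{D}(x))$$. Here is a minimal sketch of that estimator, purely to illustrate the model (this is not the paper's algorithm; `sample` and `evaluate` are hypothetical names for the two oracles, and for effective support size one must also handle the huge variance contributed by tiny-probability elements):

```python
import random

def estimate_support_size(sample, evaluate, num_samples=2000):
    """Estimate support size from samples plus evaluation queries,
    via the identity E_{x ~ D}[1 / D(x)] = |supp(D)|."""
    total = 0.0
    for _ in range(num_samples):
        x = sample()                 # random sample x ~ D
        total += 1.0 / evaluate(x)   # evaluation query: D(x)
    return total / num_samples

# Toy distribution: uniform over {0, ..., 9}, so the answer is 10.
probs = {i: 0.1 for i in range(10)}
rng = random.Random(0)
est = estimate_support_size(lambda: rng.randrange(10),
                            lambda x: probs[x])
```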

Communication and Memory Efficient Testing of Discrete Distributions, by Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, and Sankeerth Rao (arXiv). This paper adds additional computational constraints to the distribution testing problem. In the streaming setting, the algorithm is only allowed a single pass over the random samples. It has $$m$$ bits of storage, much smaller than $$n$$, the domain size. The aim is to minimize the number of samples required for uniformity (and closeness) testing. For uniformity testing in the streaming setting, the paper gives an algorithm that uses $$\widetilde{O}(m + n/m)$$ samples. The standard collision algorithm requires $$\Theta(\sqrt{n})$$ storage (to store the samples), while this result gives a non-trivial bound for $$m \ll \sqrt{n}$$. There are lower bounds showing that these bounds are basically tight. In the distributed distribution testing problem, there are a number of processors, each holding $$\ell$$ samples. A referee asks the processors questions, and their answers are broadcast to everyone. The aim is to minimize communication cost. For uniformity testing in this setting, there is a protocol using $$O(\sqrt{n\log n/\ell})$$ bits of communication. As a sanity check, note that for $$\ell = \sqrt{n}$$, one processor could simply run the collision algorithm locally to report the result.
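
For reference, here is a minimal sketch of the standard (batch, memory-unconstrained) collision tester that the streaming algorithm improves upon. The threshold and sample sizes are illustrative, not tuned: the idea is just that the expected collision rate is $$\|\mathcal{D}\|_2^2$$, which equals $$1/n$$ exactly when $$\mathcal{D}$$ is uniform.

```python
import random
from itertools import combinations

def collision_uniformity_test(samples, n, eps=0.25):
    """Accept iff the empirical collision rate is close to 1/n.

    The expected collision rate equals ||D||_2^2: exactly 1/n for
    the uniform distribution, and noticeably larger for any
    distribution far from uniform in l_1 distance."""
    m = len(samples)
    pairs = m * (m - 1) // 2
    collisions = sum(1 for a, b in combinations(samples, 2) if a == b)
    return collisions / pairs <= (1 + eps * eps) / n

rng = random.Random(1)
n = 100
uniform_samples = [rng.randrange(n) for _ in range(2000)]
skewed_samples = [rng.randrange(n // 10) for _ in range(2000)]  # 10 elements
```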

Querying a Matrix through Matrix-Vector Products by Xiaoming Sun, David P. Woodruff, Guang Yang, and Jialin Zhang (arXiv). Consider an $$n \times d$$ matrix $$M$$ over some field $$\mathbb{F}$$. One gets access to this matrix through matrix-vector products, and wishes to test some matrix property. This is a natural model in many settings, and generalizes the classic setting of query access. A subtle point is that one can only right multiply with “query” vectors, and there are problems where left multiplication can change the complexity. (A nice example in the paper is testing whether a square matrix is symmetric. With both left and right multiplications, this is easy, since we can directly access rows and columns. With access only to columns, it is non-trivial.) This paper studies a number of problems, broadly classified as linear algebra problems, statistics problems, and graph problems. Some highlights are: testing symmetry can be done in $$O(1)$$ queries, and the maximum eigenvalue can be approximated in $$O(\log n)$$ queries adaptively (but there is an $$\Omega(n)$$ non-adaptive lower bound). For graph problems, here’s an interesting discovery. If $$M$$ is the adjacency matrix, connectivity requires $$\Omega(n/\log n)$$ queries. But if $$M$$ is the signed edge-vertex incidence matrix, then this can be done in $$poly(\log n)$$ queries. This paper provides a number of interesting directions and problems to study.
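
One way to see that constant-query symmetry testing is plausible: the identity $$x^{\top}(Ay) = y^{\top}(A^{\top}x)$$ means that with only right multiplications one can compute both $$x^{\top}Ay$$ and $$x^{\top}A^{\top}y$$ and compare them. A sketch under the assumption of real entries and random $$\pm 1$$ query vectors (the paper works over general fields, and the helper names here are mine):

```python
import random

def is_probably_symmetric(matvec, n, trials=5, rng=None):
    """Test symmetry of an n x n matrix A given only products A*v.

    For random vectors x, y, compare x·(Ay) with y·(Ax); these agree
    for all x, y iff A is symmetric, and random +/-1 vectors catch
    any asymmetry with constant probability per trial."""
    rng = rng or random.Random(0)
    for _ in range(trials):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        y = [rng.choice((-1, 1)) for _ in range(n)]
        Ay, Ax = matvec(y), matvec(x)
        if sum(a * b for a, b in zip(x, Ay)) != \
           sum(a * b for a, b in zip(y, Ax)):
            return False
    return True

def dense_matvec(A):
    """Wrap an explicit matrix as a matrix-vector product oracle."""
    return lambda v: [sum(A[i][j] * v[j] for j in range(len(v)))
                      for i in range(len(A))]

sym = [[1, 2], [2, 5]]
asym = [[1, 2], [3, 5]]
```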

The Adversarial Robustness of Sampling by Omri Ben-Eliezer and Eylon Yogev (arXiv). This isn’t a property testing paper, but how can one ignore a paper on understanding the power of sampling? It’s convenient to think of the following setting. An algorithm gets a stream of $$n$$ numbers, and has to answer some questions about the stream (say, the median or other quantiles). The simplest strategy is to take a small random sample of the stream. But what if an adversary was generating the stream, depending on the current sample? Under what circumstances is the sample still “representative” of the stream? The specific results of this paper require getting into set systems and VC dimension, which I’ll leave for the sake of simplicity. Let’s go back to the median question. To get an approximate median, a constant number of samples suffice. A consequence of the main result is that if one takes $$O(\log n)$$ samples, then this is robust to adversarial streams. On the other hand, lower bounds show that constant sized samples can be fooled by an adversary. The paper is a really interesting read, and is a nice take on “standard facts” that we take for granted.

# News for February 2019

A lean month by our (now high) standards. Three papers, on local computational algorithms for spanners, sublinear set cover, and quantum distribution testing.

Local Computation Algorithms for Spanners, by Merav Parter, Ronitt Rubinfeld, Ali Vakilian, Anak Yodpinyanee (arXiv). Consider the problem of computing a $$k$$-spanner for a graph $$G$$. This is a subgraph $$H$$, where (for any pair of vertices) the shortest path distance in $$H$$ is at most $$k$$ times the corresponding distance in $$G$$. A Local Computation Algorithm (LCA) essentially gives sublinear query access to (some spanner) $$H$$, without any preprocessing on $$G$$. In other words, an LCA takes as input an edge $$(i,j)$$ in $$G$$, and in sublinear time says “yes” or “no”. Despite no preprocessing, it is guaranteed that all the “yes” answers form a spanner, and the number of “yes” answers is small. One of the main challenges in LCAs for graph problems is getting a polynomial dependence on $$\Delta$$, the maximum degree. (Many results end up with an exponential dependence on $$\Delta$$, essentially assuming bounded degree.) This paper gives an LCA outputting spanners of optimal size for $$k=3,5$$, with query complexity $$n^{1-\alpha}$$ (for a constant $$\alpha$$ depending on $$k$$). The second result works for general $$k$$, but has query complexity $$O(\Delta^4n^{2/3})$$.

Set Cover in Sub-linear Time, by Piotr Indyk, Sepideh Mahabadi, Ronitt Rubinfeld, Ali Vakilian, Anak Yodpinyanee (arXiv). Set cover needs no introduction. This paper considers sublinear algorithms for set cover. It is convenient to think of the instance as a binary $$m \times n$$ matrix, for $$m$$ sets over a universe of size $$n$$. The query model gives access to the $$j$$th non-zero entry in a desired column/row. The aim of this paper is to actually construct a set cover, and not just approximate its size (the latter problem was initiated in Nguyen-Onak). The first result shows a black-box conversion of any set cover algorithm into one with sublinear query complexity. Any $$\alpha$$-approximation for set cover can be converted into an $$O(\alpha)$$-approximation that makes $$O(m(n/k)^{\beta} + nk)$$ queries (where $$k$$ is the optimal set cover size, and $$\beta < 1$$ depends on $$\alpha$$). For large $$k$$, the paper gives an $$O(mn/k)$$ query, $$O(\log n)$$-approximation algorithm. There are various lower bounds showing that these dependencies are necessary.

Distributional Property Testing in a Quantum World by András Gilyén and Tongyang Li (arXiv). There are two aspects to quantum distribution testing. First, we can think of faster distribution testing (for classical problems) using quantum computation. Second, we can generalize the problem of distribution testing to its quantum equivalent, called density operator testing (this paper gives the first results for this quantum generalization). A distribution is a non-negative vector in $$\mathbb{R}^n$$ with unit $$l_1$$-norm. The quantum generalization, a density operator, is a PSD matrix in $$\mathbb{C}^n$$ with unit trace. This paper has a number of results along both aspects. For example, for the well-known problem of testing equality of distributions under total variation distance, the paper gives an $$\widetilde{O}(\sqrt{n}/\epsilon)$$ tester (an improvement over classical lower bounds). For testing equality of density operators under the same distance, the paper gives an $$O(n/\epsilon)$$ tester (note that the trivial bound is $$O(n^2)$$, since the operator is a matrix).

# News for November 2018

Alas! Sorry for the delay, but November was a good month with ~~five~~ seven papers. We missed a few papers from September, so please look at the updated post for September as well. (Update: Thanks to Omri Ben-Eliezer and an anonymous poster for pointing out two missed papers.)

From Local to Robust Testing via Agreement Testing, by Irit Dinur, Tali Kaufman, and Noga Ron-Zewi (ECCC). Consider a code, which is simply a subset $$C \subseteq \mathbb{F}^n_q$$. (Typically, we think of linear codes, which form a linear subspace.) The study of locally testable codes is a rich area. Consider an input $$x \in \mathbb{F}^n_q$$. A local tester for code/property $$C$$ queries, according to some distribution, a subset $$K$$ of $$Q$$ coordinates in $$x$$, and rejects if the “view” $$x|_K$$ is inconsistent with every codeword. The rejection probability should be proportional to the distance of $$x$$ to $$C$$, denoted $$dist(x,C)$$. A robust tester demands that the average distance of $$x|_K$$ to the corresponding property $$C|_K$$ be proportional to $$dist(x,C)$$. This is a much stronger demand than that of just local testing. The main result is that local testability implies robust testability for the special class of lifted codes. The proof goes via agreement testing, where an algorithm gets access to some fragments of $$x$$, and must decide if these fragments are consistent with a codeword.

Testing local properties of arrays, by Omri Ben-Eliezer (ECCC). Consider a function $$f:[n]^d \to \Sigma$$, where $$\Sigma$$ is finite. Now, suppose we define a property $$\mathcal{P}$$ with respect to a set of forbidden $$[k]^d$$ (consecutive) subarrays. Such a property is called $$k$$-local. The inspiration for this work is a line of research on testing of image properties. Note that for $$k=2,3$$, this notion subsumes a number of properties, such as monotonicity, $$c$$-Lipschitz, submodularity, convexity, etc. The main result is a one-sided, non-adaptive tester for all such properties. Ignoring $$\epsilon, k$$ dependencies, the query complexity is $$O(\log n)$$ for $$d=1$$ and $$O(n^{d-1})$$ for $$d > 1$$. Remarkably, there is no dependence on the alphabet size $$|\Sigma|$$. Moreover, the bound above is optimal for one-sided, non-adaptive testers. The basic idea is to pick blocks of various sizes, and query all points on the boundary. Then, the tester can determine (by sort of brute force search) if this is consistent with some function in $$\mathcal{P}$$. The key is to prove that for a function that is far from $$\mathcal{P}$$, there are many blocks that cannot be consistent with $$\mathcal{P}$$.

Erasures vs. Errors in Local Decoding and Property Testing, by Sofya Raskhodnikova, Noga Ron-Zewi and Nithin Varma (ECCC). The main theme of this paper is the notion of erasure-resilient testing. Consider property $$\mathcal{P}$$. Suppose an $$\alpha$$-fraction of the coordinates of the input $$x$$ are hidden. Thus, queries to these coordinates of $$x$$ return nothing. Our aim is to accept if there is some “filling in” of the hidden coordinates that makes $$x$$ satisfy $$\mathcal{P}$$, and reject if, for all fillings, $$x$$ remains far from $$\mathcal{P}$$. If $$\alpha = 0$$, this is standard property testing. Suppose there exists an $$(\alpha, \alpha+\epsilon)$$-tolerant tester for $$\mathcal{P}$$. (I’m fudging factors a bit here for readability.) We get an $$\epsilon$$-tester with an $$\alpha$$-fraction of erasures, by simply filling $$x$$ in arbitrarily. So this model is in between vanilla and tolerant testing. The main result is a separation. There is a property that is constant-query testable under erasures, but requires $$n^{\Omega(1)}$$ queries to tolerant test (with the above parameters). One of the main technical results is a list decoder for the Hadamard code that can tolerate erasures of any $$\alpha < 1$$.

Domain Reduction for Monotonicity Testing: A o(d) Tester for Boolean Functions on Hypergrids, by Hadley Black, Deeparnab Chakrabarty, and C. Seshadhri (ECCC). Consider monotonicity testing of Boolean functions on the hypergrid, $$f:[n]^d \to \{0,1\}$$. It is known that there are $$O(d)$$ testers with complexity independent of $$n$$, while for $$n=2$$, there are $$o(d)$$ testers. The main result is a tester with both these properties. The paper gives an $$O(d^{5/6})$$ tester for such functions. The main technical ingredient is a domain reduction theorem. Consider restrictions of $$f$$ to a uniformly random $$[poly(d\epsilon^{-1})]^d$$ hypergrid. The paper proves that if $$f$$ is $$\epsilon$$-far from monotone, then (with high probability) so is the restriction. Thus, for the purposes of monotonicity testing, one can simply assume that $$n = poly(d)$$. This gives a black-box method to get testers independent of $$n$$.

On the Testability of Graph Partition Properties, by Yonatan Nakar and Dana Ron (ECCC). Consider property testing on dense graphs. The seminal result of Goldreich-Goldwasser-Ron studies graph partitioning properties. Such a property is defined as follows. Fix constant $$k$$. We wish to partition the vertices of graph $$G$$ into $$V_1, V_2, \ldots, V_k$$. For each $$i$$, $$|V_i|/n \in [\rho^L_i, \rho^U_i]$$. Furthermore, for each $$i, j$$, the number of edges between $$V_i, V_j$$ must be in $$[\rho^L_{i,j}n^2, \rho^U_{i,j}n^2]$$. (All the $$\rho$$s are parameters of the property.) This subsumes properties like $$k$$-colorability, and the classic result is a $$poly(1/\epsilon)$$-tester for these properties. Note that the edge conditions are absolute, with respect to $$n^2$$, and not with respect to $$|V_i|\cdot |V_j|$$. This paper allows for relative bounds of the form $$[\rho^L_{i,j}|V_i|\cdot |V_j|, \rho^U_{i,j}|V_i|\cdot |V_j|]$$. This is far more expressive, subsuming properties like having large cliques, or large complete bipartite graphs. One of the main results is a (two-sided) $$poly(1/\epsilon)$$ tester for all such properties. Furthermore, there is a characterization of such properties that are one-sided testable in $$poly(1/\epsilon)$$ queries. For such properties, it is proven that the density conditions (between $$V_i, V_j$$) must either be unconstrained, or have $$\rho^L_{i,j} = \rho^U_{i,j}$$ equal to either 0 or 1.

Limits of Ordered Graphs and Images, by Omri Ben-Eliezer, Eldar Fischer, Amit Levi, and Yuichi Yoshida (arXiv). The theory of graph limits is a deep topic, with connections to property testing, regularity lemmas, and analysis of real-world networks. Roughly speaking, consider a sequence $$\{G_n\}$$ of graphs where the (relative) frequency of any fixed subgraph converges. Then, we can define a limit of the graphs, called a graphon, which is a measurable function $$W:[0,1]^2 \to [0,1]$$. This paper develops the appropriate limit objects for ordered graphs. Note that images can be represented as ordered graphs. An ordered graph is a symmetric function $$G:[n]^2 \to \{0,1\}$$ (with the trivial automorphism group). The graphon definitions and limits do not generalize here: there is a sequence of ordered graphs where the ordered subgraph frequencies converge, but one cannot associate any graphon as the limit. The paper introduces a new object called an orderon, which is a function $$W:([0,1]^2)^2 \to [0,1]$$. Orderons can be shown to be the limits of sequences of ordered graphs.

Two Party Distribution Testing: Communication and Security, by Alexandr Andoni, Tal Malkin, and Negev Shekel Nosatzki (arXiv). This paper gives a new take on distribution testing. Suppose Alice and Bob each have access to $$t$$ samples from (unknown) distributions $$p$$ and $$q$$ respectively. Their aim is to distinguish $$p = q$$ from $$\|p-q\|_{TV} > \epsilon$$. How much communication is required? In the standard setting, it is known that $$n^{2/3}$$ (ignoring $$\epsilon$$) samples are necessary and sufficient. In the communication setting, the bound turns out to basically be $$\Theta((n/t)^2)$$. Observe that for $$t=n^{2/3}$$, the communication is $$n^{2/3}$$ (meaning that Alice simply sends all her samples to Bob). But for larger $$t$$, the communication decreases: since Alice can learn more about her distribution, she can use sketching techniques to reduce her communication. There is an analogous result for testing independence of two distributions. Furthermore, one can even require these protocols to be secure, so that Alice and Bob learn nothing about each other’s samples (other than the final answer).

# News for July 2018

~~Three~~ Four papers for July. New sublinear graph algorithms, distribution testing under new models, and sublinear matrix algorithms. Onward ho…
(Sorry Amit, for missing your paper on sublinear matrix algorithms.)

Metric Sublinear Algorithms via Linear Sampling, by Hossein Esfandiari and Michael Mitzenmacher (arXiv). Consider a weighted clique $$G = (V,E)$$ where $$V$$ is a set of points in a metric space and edge weights are metric distances. In this setting, sublinear algorithms are those that make $$o(n^2)$$ edge queries. This paper studies problems like densest subgraph and maxcut in this setting. The key method is a sparsifying algorithm that achieves the following. (I paraphrase their language.) Consider a positive parameter $$\alpha$$, and let $$w(e)$$ denote the weight of edge $$e$$. The aim is to get a subgraph $$H$$ that contains every edge $$e$$ (in $$G$$) with independent probability $$\min(w(e)/\alpha, 1)$$. Furthermore, this subgraph should be obtained in time linear in the number of edges in $$H$$ (hence the title of the paper). For problems like 1/2-approximating the densest subgraph and PTASes for maxcut, the results show that for a carefully chosen $$\alpha$$, approximate solutions on $$H$$ give solutions of comparable quality on $$G$$. These results cleanly generalize to settings where edge weights satisfy triangle inequality with some multiplicative penalty.
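
The sparsification rule itself is one line: keep each edge independently with probability $$\min(w(e)/\alpha, 1)$$. A toy sketch of that rule (reading edges from an explicit list rather than querying a metric oracle, and ignoring the implementation details that make the paper's version run in time linear in $$|H|$$):

```python
import random

def sparsify(edges, alpha, rng=None):
    """Keep each edge (u, v, w) independently with probability
    min(w / alpha, 1): the linear-sampling rule."""
    rng = rng or random.Random(0)
    return [(u, v, w) for (u, v, w) in edges
            if rng.random() < min(w / alpha, 1.0)]

edges = [("a", "b", 5.0), ("b", "c", 0.01), ("a", "c", 1.0)]
kept = sparsify(edges, alpha=2.0)  # edges with w >= alpha always survive
```

Note that heavy edges are kept with certainty, so the sparsifier never loses the edges that dominate objectives like densest subgraph.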

Sublinear Algorithms for ($$\Delta$$ + 1) Vertex Coloring, by Sepehr Assadi, Yu Chen, and Sanjeev Khanna (arXiv). Arguably, the first thing you learn about vertex coloring is that a graph with maximum degree $$\Delta$$ admits a $$(\Delta+1)$$-coloring, that can be found in linear time. But what about sublinear time/space? I like this! You take a simple classical fact, throw in sublinear constraints, and it opens up a rich theory. This paper gives a non-adaptive $$O(n^{3/2})$$-time algorithm for this problem, and gives a nearly matching lower bound. There are also results for streaming and parallel computation, but let’s focus on the sublinear result. It is remarkable that there is no loss in colors in going to sublinear time. (In contrast, the paper shows an $$\Omega(n^2)$$ lower bound for constructing a maximal matching.) The main technical tool is a list coloring result, where each vertex is given a list of colors and must choose its own color from that list. Obviously, if each list is $$[\Delta + 1]$$, such a coloring is possible. The paper proves that even if each list is an independent $$O(\log n)$$-sized sample of $$[\Delta+1]$$, a valid coloring is still possible. The final algorithm is pretty involved, and uses this meta-algorithm as a building block.
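
The list coloring theorem suggests the following shape of algorithm: sample $$O(\log n)$$ colors per vertex, then find a proper coloring using only sampled colors. The sketch below uses a naive greedy pass in place of the paper's actual (much more involved) procedure, so it can get stuck where the theorem would not; it is only meant to make the "palette sampling" step concrete:

```python
import math
import random

def palette_sparsification_coloring(adj, delta, rng=None):
    """Sample O(log n) colors per vertex from {0, ..., delta}, then
    greedily assign each vertex a sampled color unused by its
    already-colored neighbors. Greedy can fail in general; the
    theorem only promises that SOME valid list-coloring exists whp."""
    rng = rng or random.Random(0)
    n = len(adj)
    k = max(1, math.ceil(4 * math.log(n + 1)))  # illustrative constant
    lists = {v: rng.sample(range(delta + 1), min(k, delta + 1))
             for v in adj}
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        free = [c for c in lists[v] if c not in used]
        if not free:
            return None  # greedy got stuck
        color[v] = free[0]
    return color

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # a triangle, delta = 2
coloring = palette_sparsification_coloring(adj, delta=2)
```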

Anaconda: A Non-Adaptive Conditional Sampling Algorithm for Distribution Testing, by Gautam Kamath and Christos Tzamos (ECCC). The standard model for distribution testing is access to samples from the unknown distribution $$\mathcal{D}$$ with support $$[n]$$. This has attracted much attention with a rich set of results, and the complexities of the classic problems of uniformity, identity, and equivalence testing are well understood. But there are alternate models, such as the model of conditional samples (Chakraborty-Fischer-Goldhirsh-Matsliah ’13 and Canonne-Ron-Servedio ’14). For any subset $$S \subseteq [n]$$, we can get a random sample from $$\mathcal{D}$$ restricted to $$S$$. This adds an algorithmic dimension to distribution testing. This paper studies the power of non-adaptive conditional (NACOND) queries. The main result is that uniformity, identity, and equivalence are testable with $$\mathrm{poly}(\log n)$$ queries. (There are existing $$\Omega(\log n)$$ lower bounds for all these problems.) The heart of these algorithms is a procedure ANACONDA that tries to find a set $$S$$ where some element has a high probability, relative to the mass of $$S$$.
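
For concreteness, a conditional-sampling oracle is easy to simulate when the distribution is known explicitly; testers in this model interact with something like the following (the names are mine, and $$S$$ is assumed to have nonzero mass under $$\mathcal{D}$$):

```python
import random

def make_cond_sampler(probs, rng=None):
    """Build a COND oracle for a known distribution {element: prob}:
    cond(S) returns a sample from the distribution conditioned on S.
    Assumes S has nonzero probability mass."""
    rng = rng or random.Random(0)
    def cond(S):
        items = [(x, probs[x]) for x in S if probs.get(x, 0) > 0]
        total = sum(p for _, p in items)
        r = rng.random() * total
        for x, p in items:
            r -= p
            if r <= 0:
                return x
        return items[-1][0]  # guard against floating-point drift
    return cond

cond = make_cond_sampler({i: 0.1 for i in range(10)})
```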

Sublinear-Time Quadratic Minimization via Spectral Decomposition of Matrices, by Amit Levi and Yuichi Yoshida (arXiv). When it comes to fundamental problems, it’s hard to beat quadratic minimization. Given a matrix $$A \in \mathbb{R}^{n\times n}$$, we wish to find $$v \in \mathbb{R}^n$$ that minimizes $$v^TAv$$. (This is basically a singular value/vector problem.) One may have additional terms in the objective, depending on $$v^Tv$$ or $$v^Tb$$ (for fixed vector $$b$$). This paper gives sublinear algorithms for this problem. A natural approach is to simply subsample $$k$$ rows and columns to get submatrix $$B$$, solve the problem for $$B$$, and hope for the best. This idea has a rich history from seminal work of Frieze-Kannan. Recently, Hayashi-Yoshida showed that constant $$k$$ (only depending on error parameters) suffices for getting a non-trivial approximation for this problem. Unfortunately, the solution error depends on the $$\ell_\infty$$-norm of the solution. This paper shows that for polylogarithmic $$k$$, one can get an error depending on the $$\ell_2$$-norm of the solution. This is a significant improvement, especially for sparse solution vectors. The main technical workhorse is a new matrix decomposition theorem, showing how any matrix can be written as a sum of a few block matrices and a low-norm “error” matrix. Admirably, the paper includes a number of experiments, showing the effectiveness of this technique for eigenvalue computations. It’s very nice to see how ideas from sublinear algorithms might have a practical impact.

# News for April 2018

~~Three~~ Four papers for April: a new take on linearity testing, results on sublinear algorithms with advice, histogram testing, and distributed inference problems. (Edit: Sorry Clément, for missing your paper on distributed inference!)

Testing Linearity against Non-Signaling Strategies, by Alessandro Chiesa, Peter Manohar, and Igor Shinkar (ECCC). This paper gives a new model for property testing, through the notion of non-signaling strategies. The exact definitions are quite subtle, but here’s a condensed view. For $$S \subseteq \{0,1\}^n$$, let an $$S$$-partial function be one that is only defined on $$S$$. Fix a “consistency” parameter $$k$$. Think of the “input” as a collection of distributions, $$\{\mathcal{F}_S\}$$, where each $$|S| \leq k$$ and $$\mathcal{F}_S$$ is a distribution of $$S$$-partial functions. We have a local consistency requirement: $$\mathcal{F}_S$$ and $$\mathcal{F}_T$$ must agree (as distributions) on restrictions to $$S \cap T$$. In some sense, if we only view pairs of these distributions of partial functions, it appears as if they come from a single distribution of total functions. Let us focus on the classic linearity tester of Blum-Luby-Rubinfeld in this setting. We pick random $$x, y \in \{0,1\}^n$$ as before, and query the points $$x, y, x+y$$ on a function $$f \sim \mathcal{F}_{\{x,y,x+y\}}$$. The main question addressed is what one can say about $$\{\mathcal{F}_S\}$$, if this linearity test passes with high probability. Intuitively (but technically incorrect), the main result is that $$\{\mathcal{F}_S\}$$ is approximated by a “quasi-distribution” of linear functions.
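
As a reminder, the BLR test itself is just three queries per round; over $$\{0,1\}^n$$ (with inputs encoded as $$n$$-bit integers, so that $$x+y$$ is bitwise XOR) it looks like this. This is the classical test for a single total function, not the non-signaling setting analyzed in the paper:

```python
import random

def blr_linearity_test(f, n, trials=100, rng=None):
    """Classic Blum-Luby-Rubinfeld test on f: {0,1}^n -> {0,1},
    inputs encoded as n-bit integers: accept iff
    f(x) + f(y) = f(x + y) (all mod 2, i.e. XOR) in every trial."""
    rng = rng or random.Random(0)
    for _ in range(trials):
        x, y = rng.getrandbits(n), rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

# A parity (linear over GF(2)) and a majority (far from linear).
parity = lambda x: bin(x & 0b1011).count("1") % 2
majority = lambda x: 1 if bin(x & 0b111).count("1") >= 2 else 0
```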

An Exponential Separation Between MA and AM Proofs of Proximity, by Tom Gur, Yang P. Liu, and Ron D. Rothblum (ECCC). This result follows a line of work on understanding sublinear algorithms in proof systems. Think of the standard property testing setting. There is a property $$\mathcal{P}$$ of $$n$$-bit strings, an input $$x \in \{0,1\}^n$$, and a proximity parameter $$\epsilon > 0$$. We add a proof $$\Pi$$ that the tester (or the verifier) is allowed to use, and we define soundness and completeness in the usual sense of Arthur-Merlin protocols. For an $$\mathbb{MA}$$-proof of proximity, the proof $$\Pi$$ can only depend on the string $$x$$. In an $$\mathbb{AM}$$-proof of proximity, the proof can additionally depend on the random coins of the tester (which determine the indices of $$x$$ queried). Classic complexity results can be used to show that the latter subsumes the former, and this paper gives a strong separation. Namely, there is a property $$\mathcal{P}$$ where any $$\mathbb{MA}$$-proof of proximity protocol (or tester) requires $$\Omega(n^{1/4})$$ queries to the input $$x$$, but there exists an $$\mathbb{AM}$$-proof of proximity protocol making $$O(\log n)$$ queries. Moreover, this property is quite natural; it is simply the encoding of permutations.

Testing Identity of Multidimensional Histograms, by Ilias Diakonikolas, Daniel M. Kane, and John Peebles (arXiv). A distribution over $$[0,1]^d$$ is a $$k$$-histogram if the domain can be partitioned into $$k$$ axis-aligned cuboids where the probability density function is constant. Recent results show that such histograms can be learned in $$k \log^{O(d)}k$$ samples (ignoring dependencies on accuracy/error parameters). Can we do any better for identity testing? This paper gives an affirmative answer. Given a known $$k$$-histogram $$p$$, one can test (in $$\ell_1$$-distance) whether an unknown $$k$$-histogram $$q$$ is equal to $$p$$ in (essentially) $$\sqrt{k} \log^{O(d)}(dk)$$ samples. There is a nearly matching lower bound, when $$k = \exp(d)$$.

Distributed Simulation and Distributed Inference, by Jayadev Acharya, Clément L. Canonne, and Himanshu Tyagi (arXiv   ECCC). This paper introduces a model of distributed simulation, which generalizes distribution testing and distributed density estimation. Consider some unknown distribution $$\mathcal{D}$$ with support $$[k]$$, and a “referee” who wishes to generate a single sample from $$\mathcal{D}$$ (alternately, she may wish to determine if $$\mathcal{D}$$ has some desired property). The referee can communicate with “players”, each of whom can generate a single independent sample from $$\mathcal{D}$$. The catch is that each player can communicate at most $$\ell < \log_2 k$$ bits (otherwise, the player can simply communicate the sampled element). How many players are needed for the referee to generate a single sample? The paper first proves that this task is basically impossible with a (worst-case) finite number of players, but can be done with $$O(k/2^\ell)$$ players in expectation (and this is optimal). This can be plugged into standard distribution testing results, to get inference results in this distributed, low-communication setting. For example, the paper shows that identity testing can be done with $$O(k^{3/2}/2^\ell)$$ players.

# News for January 2018

And now, for the first papers of 2018! It’s a slow start with only four papers (or, technically, three “standard property testing” papers and one non-standard paper).

Adaptive Boolean Monotonicity Testing in Total Influence Time, by Deeparnab Chakrabarty and C. Seshadhri (arXiv ECCC). The problem of testing monotonicity of Boolean functions $$f:\{0,1\}^n \to \{0,1\}$$ has seen a lot of progress recently. After the breakthrough result of Khot-Minzer-Safra giving a $$\widetilde{O}(\sqrt{n})$$ non-adaptive tester, Belovs-Blais proved the first polynomial lower bound for adaptive testers, recently improved to $$\Omega(n^{1/3})$$ by Chen, Waingarten, and Xie. The burning question: does adaptivity help? This result gives an adaptive tester that runs in $$O(\mathbf{I}(f))$$ time, where $$\mathbf{I}(f)$$ is the total influence of $$f$$. Thus, we can beat these lower bounds (and the non-adaptive complexity) for low influence functions.
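
For context, the simplest non-adaptive tester in this line of work is the classic edge tester: sample a uniformly random hypercube edge and reject if it is violated (the KMS result refines this idea with directed random walks). A minimal sketch, with inputs encoded as $$n$$-bit integers:

```python
import random

def edge_tester(f, n, trials=200, rng=None):
    """Non-adaptive edge tester for monotonicity of
    f: {0,1}^n -> {0,1} (inputs as n-bit integers): sample a
    uniformly random hypercube edge and reject if it is violated."""
    rng = rng or random.Random(0)
    for _ in range(trials):
        x, i = rng.getrandbits(n), rng.randrange(n)
        lo, hi = x & ~(1 << i), x | (1 << i)
        if f(lo) > f(hi):
            return False  # f decreases along coordinate i
    return True

monotone = lambda x: 1 if bin(x).count("1") >= 2 else 0  # monotone threshold
anti = lambda x: 1 - (x & 1)                             # decreasing in bit 0
```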

Adaptive Lower Bound for Testing Monotonicity on the Line, by Aleksandrs Belovs (arXiv). More monotonicity testing! But this time on functions $$f:[n] \to [r]$$. Classic results on property testing show that monotonicity can be tested in $$O(\varepsilon^{-1}\log n)$$ time. A recent extension of these ideas by Pallavoor-Raskhodnikova-Varma replaces the $$\log n$$ with $$\log r$$, an improvement for small ranges. This paper proves an almost matching lower bound of $$\Omega(\log r/\log\log r)$$. The main construction can be used to give a substantially simpler proof of an $$\Omega(d\log n)$$ lower bound for monotonicity testing on hypergrids $$f:[n]^d \to \mathbb{N}$$. The primary contribution is giving explicit lower bound constructions, avoiding the Ramsey-theoretic arguments previously used for monotonicity lower bounds.
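
The $$O(\log n)$$ upper bound these lower bounds are chasing can be achieved by the classic “tree tester”: pick a random index, walk the binary-search path to it, and reject if any comparison along the path contradicts monotonicity. A sketch of that known upper bound (not this paper's lower-bound construction):

```python
import random

def tree_tester(f, n, trials=100, rng=None):
    """Tree tester for monotonicity of f: [0, n) -> numbers. Walk
    the binary-search path toward a random index i, rejecting if
    any point on the path is out of order relative to f(i)."""
    rng = rng or random.Random(0)
    for _ in range(trials):
        i = rng.randrange(n)
        lo, hi = 0, n - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if mid < i:
                if f(mid) > f(i):
                    return False  # ancestor below i, but larger value
                lo = mid + 1
            elif mid > i:
                if f(mid) < f(i):
                    return False  # ancestor above i, but smaller value
                hi = mid - 1
            else:
                break  # reached i with no contradiction
    return True
```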

Earthmover Resilience and Testing in Ordered Structures, by Omri Ben-Eliezer and Eldar Fischer (arXiv). While there has been much progress on understanding the constant-time testability of graphs, the picture is not so clear for ordered structures (such as strings/matrices). There are a number of roadblocks (unlike the graph setting): there are no canonical testers for, say, string properties, there are testable properties that are not tolerantly testable, and Szemeredi-type regular partitions may not exist for such properties. The main contribution of this paper is to find a natural, useful condition on ordered properties such that the above roadblocks disappear, and thus we have strong testability results. The paper introduces the notion of Earthmover Resilient (ER) properties. Basically, a graph property is a property of symmetric matrices that is invariant under permutations of the base elements (rows/columns). An ER property is one that is invariant under mild perturbations of the base elements. The natural special cases of ER properties are those over strings and matrices, and the class includes all graph properties as well as the image properties studied in this context. There are a number of characterization results. Most interestingly, for ER properties of images (binary matrices) and edge-colored ordered graphs, the following are equivalent: existence of canonical testers, tolerant testability, and regular reducibility.

Nondeterministic Sublinear Time Has Measure 0 in P, by John Hitchcock and Adewale Sekoni (arXiv). Not your usual property testing paper, but on sublinear (non-deterministic) time nonetheless. Consider the complexity class $$NTIME(n^\delta)$$, for $$\delta < 1$$. This paper shows that this complexity class is a "negligible" fraction of $$P$$. (The analogous result was known for $$\delta < 1/11$$ by Cai-Sivakumar-Strauss.) This requires a technical concept of measure for languages and complexity classes. While I don’t claim to understand the details, the math boils down to understanding the following process. Consider some language $$\mathcal{L}$$ and a martingale betting process that repeatedly tries to guess the membership of strings $$x_1, x_2, \ldots$$ in a well-defined order. If one can define such a betting process, with limited computational resources, that has unbounded gains, then $$\mathcal{L}$$ has measure 0 with respect to that (limited) resource.

# News for October 2017

After the flood of papers over the past few months, it’s reduced to “merely” six papers this October. And only two papers on distribution testing 🙂 (Update: Gautam pointed out another paper that also does distribution testing under Wasserstein distance.)

Proofs of Proximity for Distribution Testing, by Alessandro Chiesa and Tom Gur (ECCC). Take 1 on distribution testing. As usual, we have a distribution $$\mathcal{D}$$ over $$[n]$$, a property $$\mathbf{P}$$ of distributions, and a proximity parameter $$\varepsilon > 0$$. But our tester is now aided by a purported proof (or a prover). If $$\mathcal{D} \in \mathbf{P}$$, there must exist a proof/prover strategy that makes the tester accept. If $$\mathcal{D}$$ is $$\varepsilon$$-far from $$\mathbf{P}$$, for any proof/prover strategy, the tester must reject with high probability. This paper studies a number of settings: deterministic vs randomized testers, proofs (a la $$\mathbb{NP}$$ or $$\mathbb{MA}$$) vs provers (a la $$\mathbb{IP}$$). There are a number of very intriguing results, so here are some highlights. For every property, there is a near-linear length proof that allows for $$O(\sqrt{n})$$-sample testers. For every property, either the proof length or the tester's sample complexity is at least $$\Omega(\sqrt{s})$$, where $$s$$ is the optimal complexity of a vanilla tester. But there exist prover strategies that can beat this lower bound.

Wasserstein Identity Testing, by Shichuan Deng, Wenzheng Li, and Xuan Wu (arXiv). For Take 2 on distribution testing, this paper considers the problem for continuous distributions. In discrete distribution testing, the sample complexity depends on the support size; that approach breaks down in the continuous setting, so this paper focuses on identity testing under Wasserstein distance (as opposed to $$\ell_p$$-norms). A previous paper of Do Ba-Nguyen-Nguyen-Rubinfeld considers the same problem, where the domain is assumed to be $$[\Delta]^d$$. In this paper, we assume that the domain $$X$$ is endowed with a metric, allowing for the definition of Wasserstein/earthmover distance between distributions. The final result is technical, depends on the size of nets in $$X$$, and is shown to be optimal for testing equality with a known distribution $$p$$. The paper also gives an instance-optimal tester for (almost) all $$p$$, a la Valiant-Valiant for the discrete case.

Improved Bounds for Testing Forbidden Order Patterns, by Omri Ben-Eliezer and Clément Canonne (arXiv). Any function $$f:D \to \mathbb{R}$$, where $$D$$ is a total order, can be thought of as inducing a permutation, based on the order of function values. Consider $$f:[n] \to \mathbb{R}$$ and a permutation $$\pi \in S_k$$. The property of $$\pi$$-free permutations is the set of $$f$$ such that no restriction of $$f$$ (to a subdomain of size $$k$$) induces $$\pi$$. Newman et al proved that this property can be tested non-adaptively with (ignoring $$\varepsilon$$) $$O(n^{1-1/k})$$ samples. Furthermore, for non-monotone $$\pi$$, there is a non-adaptive $$\Omega(\sqrt{n})$$ lower bound. This paper has a number of results that shed significant light on the non-adaptive complexity. The upper bound is improved to $$O(n^{1-1/(k-1)})$$, and there exists a class of permutations (for every $$k$$) for which this is the optimal complexity. Furthermore, for a random $$\pi$$, the paper shows a lower bound of $$\Omega(n^{1-1/(k-3)})$$. There is an intriguing conjecture on the optimal complexity for any $$\pi$$, which has an intricate dependence on the structure of $$\pi$$. On the adaptive side, there is an interesting hierarchy for testing $$(1,3,2)$$-freeness, depending on the number of rounds of adaptivity. There is an $$O(n^{1/(r+1)})$$ tester with $$r$$ rounds of adaptivity, and any such tester requires $$\Omega(n^{1/2^{r+3}})$$ queries.
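
For small $$n$$ and $$k$$, $$\pi$$-freeness can be checked exactly by brute force, which is a handy way to build intuition for the property. This is purely an illustrative sketch (exponential in general, nothing like the paper's sublinear testers), and all names are made up:

```python
from itertools import combinations

def induced_pattern(values):
    """Permutation (as a tuple of ranks) induced by a sequence of distinct values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r + 1
    return tuple(ranks)

def is_pi_free(f, pi):
    """Exhaustively check that no k-point restriction of f induces the pattern pi."""
    k = len(pi)
    for idx in combinations(range(len(f)), k):
        if induced_pattern([f[i] for i in idx]) == pi:
            return False
    return True
```

For example, the sequence $$1,3,2,4$$ contains the pattern $$(1,3,2)$$ (via its first three values), while any increasing sequence is free of every non-monotone pattern.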

A $$o(d)\cdot \mathrm{poly}\log n$$ Monotonicity Tester for Boolean Functions over the Hypergrid $$[n]^d$$, by Hadley Black, Deeparnab Chakrabarty, and C. Seshadhri (arXiv). Monotonicity testing has seen much progress in the past few years. This paper focuses on monotonicity testing over the hypergrid, of functions $$f:[n]^d \to \{0,1\}$$. For $$n=2$$ (the hypercube), the result of Khot-Minzer-Safra gives a $$\widetilde{O}(\sqrt{d})$$-query (non-adaptive) tester. Previously, the best known tester for general $$n$$ took $$O(d\log d)$$ queries, by Berman-Raskhodnikova-Yaroslavtsev. This paper breaks the barrier of $$d$$ queries for the hypergrid, by giving an $$O(d^{5/6}\log n)$$-query tester. The main technical ingredient is a new isoperimetric inequality for the directed “augmented” hypergrid, where pairs differing on one coordinate by a power of 2 are also connected. The intriguing open question that remains is the existence of testers with query complexity sublinear in $$d$$ and independent of $$n$$.

Provable and practical approximations for the degree distribution using sublinear graph samples, by Talya Eden, Shweta Jain, Ali Pinar, Dana Ron, and C. Seshadhri (arXiv). The degree distribution of a simple, undirected graph $$G$$ is the sequence $$\{n_d\}$$, where $$n_d$$ is the number of vertices of degree $$d$$. The properties of the degree distribution have played a significant role in network science and the mathematical study of real-world graphs. It is often the first quantity computed in a “data analysis” of a massive graph. This paper addresses the problem of getting (bicriteria) estimates for all values of the degree distribution, in sublinear time. The main result gives sublinear time algorithms for computing degree distributions with sufficiently “fat” tails, as measured by certain fatness indices. For the important special case, when the degree distribution is a power law ($$n_d \propto 1/d^{\gamma}$$), this result yields strictly sublinear algorithms. Interestingly, the main result involves the h-index of the degree sequence, inspired by bibliographic metrics. This problem is practically important, and the paper demonstrates the quality of the sublinear algorithm in practice.
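
To fix notation, here is a small sketch computing the degree distribution $$\{n_d\}$$ and the h-index of a degree sequence for a graph given as an adjacency list. (The helper names are made up, and this reads the whole graph; the paper's point is to estimate these quantities from sublinear samples.)

```python
from collections import Counter

def degree_distribution(adj):
    """Compute {d: n_d}, the number of vertices of each degree d."""
    return Counter(len(neighbors) for neighbors in adj.values())

def h_index(degrees):
    """Largest h such that at least h vertices have degree >= h."""
    h = 0
    for i, d in enumerate(sorted(degrees, reverse=True), start=1):
        if d >= i:
            h = i
    return h
```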

On the Complexity of Sampling Vertices Uniformly from a Graph, by Flavio Chierichetti and Shahrzad Haddadan (arXiv). This paper isn’t your traditional property testing paper, but is very relevant to those of us interested in graph property testing. One common query model in graph property testing is access to uniform random vertices. In practice though (think of a graph like the Facebook friendship network or the web graph), this is quite challenging. We typically have access to a few seed vertices, and we can “crawl” the graph. A natural approach is to perform a random walk (in the hope of mixing quickly) to generate random vertices. We might attempt rejection sampling or Metropolis-Hastings on top of that to get uniform random vertices. A recent result of Chierichetti et al gives an algorithm for this problem using $$O(t_{mix} d_{avg})$$ samples, where $$t_{mix}$$ is the mixing time (of the random walk on the graph) and $$d_{avg}$$ is the average degree. This paper proves that this bound is optimal, for most reasonable choices of these parameters.
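
A minimal sketch of the Metropolis-Hastings idea mentioned above: propose a simple random-walk step and accept it with probability $$\min(1, \deg(u)/\deg(v))$$, which makes the stationary distribution uniform over vertices. This is an illustration with hypothetical names (it assumes a connected graph with no isolated vertices), not the paper's algorithm; the lower bound is precisely about how many such steps are unavoidable.

```python
import random

def mh_step(adj, u):
    """One Metropolis-Hastings step; the stationary distribution is uniform over vertices."""
    v = random.choice(adj[u])
    # Accept with probability min(1, deg(u)/deg(v)); otherwise stay at u.
    if random.random() < min(1.0, len(adj[u]) / len(adj[v])):
        return v
    return u

def sample_near_uniform_vertex(adj, start, steps):
    """Run the filtered walk for `steps` steps; close to uniform once the walk mixes."""
    u = start
    for _ in range(steps):
        u = mh_step(adj, u)
    return u
```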

# News for July 2017

We’re seeing lots of papers in the summer. I guess the heat and sun (and more time?) are bringing out the good stuff. Distribution testing, from testing to streaming, hypergraph testing, and a bunch of papers on graph testing. And just for the record: in what follows, $$n$$ is the support size in distribution testing, it’s the number of vertices in graph testing, and it’s the length when the input is a string. And $$m$$ is the number of edges when the input is a graph. Amen. (Update: added a paper on testing Dyck languages. 11 papers this month, I think that’s a PTReview record.)

Which Distribution Distances are Sublinearly Testable?, by Constantinos Daskalakis, Gautam Kamath, and John Wright (arXiv). Distribution testing is seeing much progress these days. Many questions in this area can be formulated as: given a known distribution $$q$$, how many samples from an unknown distribution $$p$$ are required to determine the distance between $$p$$ and $$q$$? For the “identity testing” setting, we are given a discrete distribution $$q$$, two distance measures $$d_1, d_2$$, and proximity parameters $$\varepsilon_1, \varepsilon_2$$. How many samples from the unknown distribution $$p$$ are required to distinguish $$d_1(p,q) \leq \varepsilon_1$$ from $$d_2(p,q) \geq \varepsilon_2$$? Observe the asymmetric nature of the question. The choices for the distances are most of the standard ones: TV, KL-divergence, Hellinger, $$\chi^2$$, and $$\ell_2$$. And the paper basically completes the entire picture by giving matching upper and lower bounds (in terms of the support size $$n$$) and showing that certain settings are untestable with sublinear sample complexity. The results also cover the “equivalence/closeness testing” case, where $$q$$ is also unknown. There are many interesting aspects of this collection of results. For example, when $$d_1$$ is KL-divergence instead of $$\chi^2$$, the complexity can jump from $$\Theta(\sqrt{n})$$ to $$\Theta(n/\log n)$$.
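
For reference, the five distances are easy to write down for discrete distributions given as probability vectors (a sketch with made-up function names; the testing problem, of course, only has sample access to $$p$$, and the KL/$$\chi^2$$ formulas below assume $$q$$ is supported wherever $$p$$ is):

```python
import math

def tv(p, q):
    """Total variation distance."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl(p, q):
    """KL divergence; skips terms with p_i = 0, assumes q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def hellinger(p, q):
    """Hellinger distance."""
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q)))

def chi2(p, q):
    """Chi-squared divergence; assumes q has full support."""
    return sum((pi - qi) ** 2 / qi for pi, qi in zip(p, q))

def l2(p, q):
    """Euclidean (l2) distance."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
```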

We have two independent papers with similar results (and titles!).

Differentially Private Identity and Closeness Testing of Discrete Distributions, by Maryam Aliakbarpour, Ilias Diakonikolas, and Ronitt Rubinfeld (arXiv). Differentially Private Testing of Identity and Closeness of Discrete Distributions, by Jayadev Acharya, Ziteng Sun, and Huanyu Zhang (arXiv). Both these papers study the problem of identity testing over TV-distance, in the differentially private setting. In essence, the output of the testing algorithm should be essentially unchanged if any single sample in the input is replaced. The main result, achieved in both papers, is such an algorithm for identity (and closeness) testing whose sample complexity is quite close to the best non-differentially private algorithms. At a high level, the approaches used are also similar, specifically using a result of Goldreich on black-box reductions from identity to uniformity testing, and designing a private version of Paninski’s uniformity tester.

Estimating parameters associated with monotone properties, by Carlos Hoppen, Yoshiharu Kohayakawa, Richard Lang, Hanno Lefmann, and Henrique Stagni (arXiv). The study of monotone graph properties has a rich history in graph property testing. This paper focuses on constant-query estimation of monotone graph parameters. A monotone graph property is one that is closed under taking subgraphs, and a monotone parameter is one that is non-increasing over subgraphs. Given a family of (constant-size) graphs $$\mathcal{F}$$, a graph is $$\mathcal{F}$$-free if it contains no subgraph in $$\mathcal{F}$$. For any graph $$G$$, define the parameter $$z_\mathcal{F}$$ to be (essentially) the log of the number of $$\mathcal{F}$$-free spanning subgraphs of $$G$$. This parameter has been studied in the context of subgraph counting problems. This paper studies the complexity of estimating $$z_\mathcal{F}$$ up to additive error $$\varepsilon$$. Most results of this flavor involve complexities that are towers in $$1/\varepsilon$$, but this result builds on connections with the breakthrough subgraph removal lemma of Fox, to yield complexities that are towers in $$\log(1/\varepsilon)$$.

Testable Bounded Degree Graph Properties Are Random Order Streamable, by Morteza Monemizadeh, S. Muthukrishnan, Pan Peng, and Christian Sohler (arXiv). This paper proves a nice connection between sublinear and streaming algorithms. A number of properties, such as connectivity, subgraph-freeness, minor-closed properties, are constant-time testable in the bounded-degree graph setting. The main result here shows that any such property can also be tested using constant space (in terms of words) when the graph is a random order stream. In contrast, existing work by Huang and Peng proves $$\Omega(n)$$ space lower bounds for some of these properties for adversarial edge order. The main technical ingredient is an algorithm that maintains distributions of constant-sized rooted subgraphs in the random-order streaming setting.

On Testing Minor-Freeness in Bounded Degree Graphs With One-Sided Error, by Hendrik Fichtenberger, Reut Levi, Yadu Vasudev, and Maximilian Wötzel (arXiv). A major open problem in testing of bounded-degree graphs is the one-sided testability of minor-closed properties. Benjamini, Schramm, and Shapira first proved that minor-closed properties are two-sided testable, and conjectured that the one-sided complexity is $$O(\sqrt{n})$$. Czumaj et al prove this result for the specific property of being $$C_k$$-minor free (where $$C_k$$ is the $$k$$-cycle). This paper gives an $$O(n^{2/3})$$ algorithm for testing whether a graph is $$k \times 2$$-grid minor-free. The latter class includes graphs that are cycles with non-intersecting chords and cactus graphs. A common approach in one-sided graph testing is to use the existence of a “nice” decomposition of the graph into small well-behaved pieces, and argue that the tester only has to look within these pieces. This result employs a recent partitioning scheme of Lenzen and Levi into pieces of size $$O(n^{1/3})$$. Unfortunately, this partitioning may remove many edges, and the tester is forced to find minors among these cut edges. The combinatorics of the $$k \times 2$$-grid minor ensures that any such large cut always contains such a minor, and it can be detected efficiently.

Testing bounded arboricity, by Talya Eden, Reut Levi, and Dana Ron (arXiv). The arboricity of a graph is the minimum number of spanning forests that a graph can be decomposed into. It is a 2-approximation of the maximum average degree of a subgraph, and thus is a measure of whether “local” density exists. For example, the arboricity of all minor-closed graphs is constant. The main result in this paper is a property tester for arboricity. Formally, the paper provides an algorithm that accepts if the graph is $$\varepsilon$$-close to having arboricity $$\alpha$$, and rejects if the graph is $$c\varepsilon$$-far from having arboricity $$3\alpha$$ (for large constant $$c$$). Ignoring dependencies on $$\varepsilon$$, the running time is $$O(n/\sqrt{m} + n\alpha/m)$$, which is proven to be optimal. The result builds on distributed graph algorithms for forest decompositions. The constant of $$3$$ can be brought down arbitrarily close to $$2$$, and it is an intriguing question if it can be reduced further.

On Approximating the Number of k-cliques in Sublinear Time, by Talya Eden, Dana Ron, and C. Seshadhri (arXiv). This paper studies the classic problem of approximating the number of $$k$$-cliques of a graph, in the adjacency list representation. The main result is an algorithm that $$(1+\varepsilon)$$-approximates $$C_k$$ (the number of $$k$$-cliques) in time $$O(n/C_k^{1/k} + m^{k/2}/C_k)$$. (This expression ignores polylogs and $$\varepsilon$$ factors.) Somewhat surprisingly, this strange expression is the optimal complexity. It is a generalization of previous results of Goldreich-Ron (average degree/edge counting, $$k=2$$) and Eden et al (triangle counting, $$k=3$$). The algorithm is quite intricate, and is achieved by building on various sampling techniques in these results.
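
To fix what quantity is being approximated, here is a brute-force exact $$k$$-clique counter over an adjacency-list graph. This exponential-time illustration (with made-up names) is nothing like the paper's sublinear algorithm; it just defines $$C_k$$ operationally:

```python
from itertools import combinations

def count_k_cliques(adj, k):
    """Count k-cliques exactly: k-subsets of vertices with all pairs adjacent."""
    nodes = list(adj)
    return sum(
        1
        for subset in combinations(nodes, k)
        if all(v in adj[u] for u, v in combinations(subset, 2))
    )
```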

A characterization of testable hypergraph properties, by Felix Joos, Jaehoon Kim, Daniela Kühn, and Deryk Osthus (arXiv). A fundamental result of Alon, Fischer, Newman, and Shapira characterized all constant-time testable (dense) graph properties, through a connection to the regularity lemma. This paper essentially extends that characterization to $$k$$-uniform hypergraph properties. Essentially, the main results shows the testability of property $$\bf{P}$$ is equivalent to $$\bf{P}$$ being “regular-reducible”. This is a rather involved definition that captures properties where hypergraph regularity can be used for testing. Furthermore, this can also be used to prove that any testable property also has a constant-time distance estimation algorithm.

Distributed Testing of Conductance, by Hendrik Fichtenberger and Yadu Vasudev (arXiv). There has been recent interest in property testing in the distributed graph framework. The problem studied here is graph conductance. Specifically, we wish to accept a graph with conductance $$\Phi$$ and reject a graph that is far from having conductance $$\Phi^2$$. The setting is the CONGEST framework, where each vertex has a processor and can only communicate $$O(\log n)$$ bits to its neighbors in a single round. The main result gives an algorithm with $$O(\log n)$$ rounds, which is shown to be optimal. Standard testing algorithms for conductance basically perform a collection of random walks from randomly chosen seed vertices. An important insight in this paper is that such testing algorithms can be simulated in the CONGEST model without maintaining all the random walks explicitly: it suffices to transfer a small amount of information tracking overall statistics of the walks, which is enough for the testing problem.

Improved bounds for testing Dyck languages, by Eldar Fischer, Frédéric Magniez, Tatiana Starikovskaya (arXiv). Testing membership in languages is a classic problem in property testing. Fix a language $$L$$ over an alphabet $$\Sigma$$. Given a string $$x$$ of length $$n$$, the tester should accept if $$x \in L$$ and reject if $$x$$ is far (in terms of Hamming distance) from $$L$$. This paper focuses on the simple, yet fundamental setting where $$L$$ is a Dyck language, a set of strings of perfectly balanced parentheses. Let $$D_k$$ be the Dyck language using $$k$$ types of parentheses. The classic paper of Alon et al on regular language testing also gives a constant-time tester for $$k=1$$, and subsequent work of Parnas, Ron, Rubinfeld proves, for $$k \geq 2$$, testing complexity bounds of $$O(n^{2/3})$$ and $$\Omega(n^{1/11})$$. This paper gives significant improvements in testing complexity: an upper bound of $$O(n^{2/5+\delta})$$ (for any $$\delta > 0$$) and a lower bound of $$\Omega(n^{1/5})$$. The lower bound proof introduces new techniques for proving similarity of the transcript distributions of a deterministic algorithm over two input distributions (the key step in any application of Yao’s minimax lemma). The proof constructs a third transcript distribution using a special “wild” transcript that represents the (low-probability) event of the algorithm “learning” too much. A careful construction of this distribution can be used to upper bound the distance between the original transcript distributions.
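
For concreteness, exact membership in $$D_k$$ is a simple stack computation; the testing question is how well membership can be decided from few queries to the string. An illustrative sketch with hypothetical names (here with three bracket types, i.e. $$D_3$$):

```python
def in_dyck(s, pairs=None):
    """Exact membership check for a Dyck language with several bracket types."""
    pairs = pairs or {')': '(', ']': '[', '}': '{'}  # closer -> matching opener
    openers = set(pairs.values())
    stack = []
    for c in s:
        if c in openers:
            stack.append(c)
        elif c in pairs:
            if not stack or stack.pop() != pairs[c]:
                return False  # mismatched or unopened bracket
        else:
            return False  # character outside the alphabet
    return not stack  # everything opened must be closed
```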

# News for April 2017

April was a busy month for property testing. A mixture of distribution testing, junta testing, Fourier transforms, matrix problems, and graph testing. Let’s go!
(Update: thanks to Omri Ben Eliezer for correcting some errors in our post. Also, he pointed out that we missed posting about a property testing paper by Gishboliner-Shapira that appeared in Nov 2016.)

Fourier-Based Testing for Families of Distributions, by Clement Canonne, Ilias Diakonikolas, and Alistair Stewart (ECCC). Readers of PTReview will be familiar with the great progress made over the past few years in distribution testing. We wish to determine some property $$\mathcal{P}$$ of a discrete distribution $$D$$ of support size $$n$$, using iid samples from the distribution. The challenge is that the number of samples should be $$o(n)$$. This paper gives a general framework for any property $$\mathcal{P}$$ whose members (distributions) all have sparse Fourier transforms. One of the main tools is a tester for determining if $$D$$ has small Fourier support (and for finding the restriction to this support). This is a really neat unifying method that subsumes much of the past work. Moreover, it leads to new testers for a variety of classes of distributions, including multinomial distributions.

Testing from One Sample: Is the casino really using a riffle shuffle, by Constantinos Daskalakis, Nishanth Dikkala, and Nick Gravin (arXiv). This paper introduces a novel twist on distribution testing: suppose $$\mathcal{D}$$ was generated through a Markov Chain, and we only get to see a sequence of samples generated through a random walk. Observe that we do not have the classic iid assumption any more. What can be said now? There is a challenge in defining “closeness” of distributions, which is done by looking at the variation distance between sequences generated through random walks of a carefully chosen length. The focus is on identity testing in two different regimes. First, with no assumptions on the Markov chain, there are general results that take $$O(n)$$ queries (note that the Markov chain has size $$O(n^2)$$). Secondly, for special “sparse” Markov Chains such as various models of card shuffling, one can get the number of samples to be logarithmic in size of the state space. Thus, one effectively gets to see a single shuffle process (which is like a walk in a Markov Chain), and can still decide if it is generating a uniform distribution.

Settling the query complexity of non-adaptive junta testing, by Xi Chen, Rocco A. Servedio, Li-Yang Tan, Erik Waingarten, and Jinyu Xie (arXiv). Ah, progress on a classic open problem! A function $$f:\{0,1\}^n \to \{0,1\}$$ is a $$k$$-junta if it only depends on $$k$$ variables. After a long line of work, Blais provided a non-adaptive $$\widetilde{O}(k^{3/2})$$ tester, and a separate result of Blais gave an adaptive (ignoring $$\epsilon$$ factors) $$\widetilde{O}(k)$$ tester. Prior to this paper, the best non-adaptive lower bound was essentially $$\Omega(k)$$, ignoring polylog factors. This paper finally provides a matching non-adaptive lower bound of $$\Omega(k^{3/2})$$, up to polylog factors! This basically settles our knowledge (adaptive and non-adaptive) of this problem, up to polylogs.

An Adaptive Sublinear-Time Block Sparse Fourier Transform, by Volkan Cevher, Michael Kapralov, Jonathan Scarlett, and Amir Zandieh (arXiv).
(We should have caught this when it was first published in Feb 2017, but at least we caught the update!) Consider a vector $$X$$ whose Fourier transform we wish to compute. Suppose there are only $$k$$ non-zero Fourier coefficients (typically, this is what we care about). We can actually compute these coefficients using $$O(k\log n)$$ queries, which is strongly sublinear when $$k \ll n$$. An important question is when the sample complexity can go below $$k\log n$$, which is optimal in general. Often the sparse Fourier coefficients have structure that can be exploited. This paper studies block sparsity, where all non-zero coefficients occur in $$k_0$$ contiguous blocks (in Fourier space) of size $$k_1$$. Thus, the total sparsity is $$k = k_0k_1$$. This paper designs an algorithm with adaptive query complexity $$O(k_0k_1 + k_0\log n)$$, thereby circumventing the $$k\log n$$ barrier for $$k_0 \ll k$$. Interestingly, the paper gives lower bounds showing that adaptivity is necessary for removing the $$\log n$$ factor.

Sublinear Time Low-Rank Approximation of Positive Semidefinite Matrices, by Cameron Musco and David P. Woodruff (arXiv). Few problems occur as often as computing a low-rank approximation of a matrix $$A$$. Formally, one wishes to compute a rank-$$k$$ matrix $$B$$ such that $$\|A-B\|_F$$ is minimized. SVD is the obvious method to do this, but is computationally expensive. There are a number of sampling methods, but all of them require reading all of $$A$$. This paper gives a truly sublinear-time algorithm (for small $$k$$) for PSD matrices. Formally, the algorithm presented here requires $$O(n \cdot \mathrm{poly}(k))$$ running time, where $$n$$ is the dimension of $$A$$. It is well-known from previous work that there exists a subset of $$k$$ columns that forms a good low-rank approximation, yet finding them requires reading the whole matrix to compute various sampling strategies. The intuition here is that large entries cannot “hide” in arbitrary locations in a PSD matrix.

Testing hereditary properties of ordered graphs and matrices, by Noga Alon, Omri Ben-Eliezer, and Eldar Fischer (arXiv). Classic results by Alon-Shapira and Alon-Fischer-Newman-Shapira have shown that hereditary graph properties are testable in constant time. There is a deep connection between testing such properties and the existence of subgraph removal lemmas. The latter assert that, given some family $$\mathcal{F}$$ of forbidden subgraphs, if an $$n$$-vertex graph is $$\epsilon$$-far from being $$\mathcal{F}$$-free, then it must contain $$\delta n^q$$ copies of some $$F \in \mathcal{F}$$ (where $$F$$ has $$q$$ vertices, $$\delta$$ is a function only of $$\epsilon$$). This paper generalizes these theorems to general matrices over finite domains and ordered edge-colored graphs, where vertices in the graphs have a fixed ordering. Thus, one does not require the symmetry (over isomorphism) of graph properties for testability. These theorems lead to testability results for large classes of properties defined through forbidden substructures.

Removal Lemmas with Polynomial Bounds, by Lior Gishboliner and Asaf Shapira (arXiv). (Paper originally appeared in Nov 2016, but we missed it.) As mentioned above, there is a close connection between property testing of dense graphs and graph removal lemmas. Yet the property testers that result from these lemmas, like the celebrated triangle removal lemma, have terrible tower-like dependences on the parameter $$\epsilon$$. When can we get polynomial dependences on $$\epsilon$$ (which is called “easy testability”)? This fundamental question is answered in this paper. Consider the case when $$\mathcal{F}$$ is finite. If $$\mathcal{F}$$ contains a bipartite graph, the complement of a bipartite graph, and a split graph, then there are polynomially bounded graph removal lemmas (and hence easy property testers) for this family. Furthermore, one cannot drop any of these requirements! (A split graph is one where vertices can be partitioned into a set spanning a clique, and a set spanning an independent set.) There are generalizations to infinite families of graphs. An important corollary of this result is a proof of easy testability of semi-algebraic properties. This captures properties of graphs where vertices are points in Euclidean space and edges are present when the points satisfy some polynomial constraint (in the coordinates).

# News for January 2017

2017 starts off rather slow for property testing. Though, we have an intriguing paper to report – an experimental analysis of a classic sublinear algorithm.

Evaluating a Sublinear-time Algorithm for the Minimum Spanning Tree Weight Problem, by Gabriele Santi and Leonardo De Laurentiis (arXiv). The Chazelle-Rubinfeld-Trevisan Minimum Spanning Tree algorithm is a classic in sublinear algorithms. This algorithm provides a $$(1+\varepsilon)$$ approximation to the MST weight in time independent of the number of vertices (although it does depend on the average degree). But how does this compare with Prim’s algorithm on real instances, on a real (not theoretical) computer? This intriguing paper does a detailed experimental comparison. Having done experimental graph algorithms myself, I can attest to the various challenges: how to choose a test set of graphs? How to set error parameters? Can data structure optimization on the coding side beat asymptotic improvements? This paper does a series of experiments on synthetic graph generators (such as the Erdős-Rényi, Barabási-Albert, and Watts-Strogatz models). They validate the basic CRT algorithm at scale, by showing that it is faster than Prim for graphs with more than a million edges. Their experiments suggest that the sublinear-time algorithm gives little benefit when $$\varepsilon \leq 0.2$$. The paper has many experiments for a variety of settings, and the authors do a comprehensive study of the various parameters. I’d definitely recommend it to anyone interested in exploring how property testing might influence algorithms in the real world.
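
The identity at the heart of the CRT algorithm: for a connected graph with integer edge weights in $$\{1,\ldots,W\}$$, the MST weight equals $$n - W + \sum_{i=1}^{W-1} c_i$$, where $$c_i$$ is the number of connected components of the subgraph of edges with weight at most $$i$$. The sublinear algorithm estimates each $$c_i$$ by local sampling; the sketch below (hypothetical names, exact and decidedly non-sublinear) just verifies the identity itself with a union-find:

```python
class DSU:
    """Tiny union-find with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def mst_weight_via_components(n, edges, W):
    """MST weight = n - W + sum_{i=1}^{W-1} c_i for integer weights in {1..W}."""
    total = n - W
    for i in range(1, W):
        dsu = DSU(n)
        for u, v, w in edges:
            if w <= i:
                dsu.union(u, v)
        total += len({dsu.find(x) for x in range(n)})
    return total
```

For instance, on a triangle with weights $$1, 1, 2$$ the formula gives $$3 - 2 + c_1 = 1 + 1 = 2$$, matching the MST of the two weight-1 edges.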