News for April 2022

We have…I don’t know, I’ve lost count of the number of papers this month. It’s a big bonanza. Sublinear algorithms for edit distance, planar graphs, distributions, bipartite graphs, groups, error correcting codes, Bayesian nets, polynomials…

Let’s proceed with the spread.

Improved Sublinear-Time Edit Distance for Preprocessed Strings by Karl Bringmann, Alejandro Cassis, Nick Fischer, and Vasileios Nakos (arXiv). The edit distance between strings is a classic and important problem in algorithms. You might recall the classic \(O(n^2)\) algorithm to compute the edit distance between strings of length \(n\). It has been shown that getting an \(O(n^{2-\delta})\) time algorithm is SETH-hard. But what can be done in sublinear time? This paper considers the preprocessed version: suppose we can perform near-linear preprocessing on the strings. We now want to distinguish between edit distance at most \(k\) and at least \(k\cdot n^{o(1)}\). This paper shows that with near-linear preprocessing on the strings, one can solve this problem in \(k \cdot n^{o(1)}\) time.
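
For reference, here is a minimal sketch of the textbook quadratic-time dynamic program for edit distance, which is the baseline these sublinear results improve on; this is not the paper's algorithm, and the function name and interface are our own.

```python
def edit_distance(s: str, t: str) -> int:
    """Textbook O(|s| * |t|) dynamic program for edit distance
    (insertions, deletions, and substitutions), using a rolling row."""
    n, m = len(s), len(t)
    dp = list(range(m + 1))  # dp[j] = edit distance between the empty prefix of s and t[:j]
    for i in range(1, n + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, m + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            prev_diag, dp[j] = dp[j], min(
                dp[j] + 1,         # delete s[i-1]
                dp[j - 1] + 1,     # insert t[j-1]
                prev_diag + cost,  # substitute (or match)
            )
    return dp[m]

# Example: edit_distance("kitten", "sitting") == 3
```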

Optimal Closeness Testing of Discrete Distributions Made Complex Simple by (our own) Clément L. Canonne and Yucheng Sun (arXiv). Given two distributions \(p, q\) over support \([k]\), the aim is to distinguish between (i) the distributions being equal, and (ii) the total variation distance between \(p, q\) being at least \(\epsilon\). The tester should have a failure probability of at most \(\delta\). A recent work nails down the sample complexity with respect to all parameters. This paper gives a simpler proof of the main result. Earlier proofs used Poissonization tricks and fairly clever arguments about Poisson random variables. This proof is much more transparent, and uses an identity that relates the expectation of a random variable to its characteristic function. A nice feature of this proof is that it works directly with the multinomial distribution, which means a fixed number of samples (rather than choosing the number of samples from a distribution).
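
As a point of comparison (and emphatically not the optimal tester discussed above), here is a naive plug-in closeness tester that compares empirical distributions in total variation distance; the sample sizes and acceptance threshold are purely illustrative.

```python
from collections import Counter

def empirical_tv_distance(samples_p, samples_q, k):
    """Plug-in estimate of the TV distance between two distributions over [k],
    given i.i.d. samples from each."""
    cp, cq = Counter(samples_p), Counter(samples_q)
    return 0.5 * sum(abs(cp[x] / len(samples_p) - cq[x] / len(samples_q))
                     for x in range(k))

def naive_closeness_tester(samples_p, samples_q, k, eps):
    """Accept 'p = q' if the empirical TV distance is below eps / 2.
    This needs far more samples than the optimal testers; it is only meant
    to make the problem statement concrete."""
    return empirical_tv_distance(samples_p, samples_q, k) < eps / 2
```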

Tolerant Bipartiteness Testing in Dense Graphs by Arijit Ghosh, Gopinath Mishra, Rahul Raychaudhury, and Sayantan Sen (arXiv). Testing bipartiteness of dense graphs is about as classic as it gets. We wish to distinguish a bipartite graph from one that requires \(\varepsilon n^2\) edge removals to make it bipartite. Readers of this blog should know that there is a \(\widetilde{O}(\varepsilon^{-2})\)-query property tester for this problem. (Ok, so now you know.) This paper studies the tolerant version of bipartiteness testing. Note that this is equivalent to approximating the maxcut, up to additive error \(\varepsilon n^2\). Classic approximation algorithms show that the latter can be done in \(\widetilde{O}(\varepsilon^{-6})\) queries and \(\exp(\widetilde{O}(\varepsilon^{-2}))\) time. This paper considers the easier problem of distinguishing whether the distance to bipartiteness is at most \(\varepsilon\) or at least \(2 \varepsilon\). This problem is solved in \(\widetilde{O}(\varepsilon^{-3})\) queries and \(\exp(\widetilde{O}(\varepsilon^{-1}))\) time.
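
To make the max-cut connection concrete, here is a hedged toy sketch of the classical sampling approach (not the algorithm of this paper): brute-force the max cut of a small random induced subgraph and read off its density. The sample size and number of trials are illustrative only.

```python
import random
from itertools import product

def estimate_maxcut_density(adj, n, s=12, trials=5):
    """Estimate the max-cut density of a dense n-vertex graph, given an
    adjacency oracle adj(u, v) -> bool, by brute-forcing the max cut of
    random induced subgraphs and averaging their densities."""
    densities = []
    for _ in range(trials):
        S = random.sample(range(n), s)
        best = 0
        for labels in product([0, 1], repeat=s):  # try every bipartition of S
            cut = sum(1 for i in range(s) for j in range(i + 1, s)
                      if labels[i] != labels[j] and adj(S[i], S[j]))
            best = max(best, cut)
        densities.append(best / (s * (s - 1) / 2))
    return sum(densities) / trials  # roughly maxcut(G) / (n choose 2)
```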

Properly learning monotone functions via local reconstruction by Jane Lange, Ronitt Rubinfeld, Arsen Vasilyan (arXiv). Ah yes, monotone functions. An ongoing love (obsession? interest?) for property testing people. This paper studies the problem of proper learning of Boolean valued monotone functions over the Boolean hypercube. Given access to uniform random evaluations of a monotone function \(f:\{0,1\}^n \to \{0,1\}\), we wish to compute a monotone function \(g\) that approximates the original function. Classic results from Fourier analysis show that an approximation can be learned using \(\exp(\sqrt{n}/\varepsilon)\) queries. But this approximation function might not be monotone, and only yields improper learning. This paper gives a proper learner that outputs a monotone approximation, in roughly the same query complexity. This result directly gives a constant tolerance monotonicity tester for Boolean functions. The paper uses recent results from distributed algorithms and local computation. It also leads to tolerant testers for monotonicity over posets with small diameter.
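
For context, the classical edge tester for monotonicity, which predates and is much weaker than the proper learner of this paper, fits in a few lines: sample random hypercube edges and check that the function never decreases along them. The query count below is illustrative.

```python
import random

def edge_tester_monotone(f, n, num_queries=1000):
    """Classic edge tester: sample random hypercube edges (x, y) that differ
    in one coordinate, and reject if f decreases along the edge.
    f maps a tuple in {0,1}^n to {0,1}."""
    for _ in range(num_queries):
        x = [random.randint(0, 1) for _ in range(n)]
        i = random.randrange(n)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1
        if f(tuple(lo)) > f(tuple(hi)):
            return False  # found a violated edge: f is not monotone
    return True  # no violation found (one-sided test)
```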

Massively Parallel Computation and Sublinear-Time Algorithms for Embedded Planar Graphs by Jacob Holm and Jakub Tětek (arXiv). Sublinear algorithms for planar graphs is another ongoing love (at least for me). This paper considers a new take on this problem: suppose we have access to a geometric embedding of a planar graph \(G\). Can we get sublinear algorithms for a variety of problems? This paper first shows how to construct a convenient decomposition, called an \(r\)-division, in sublinear time. This division can be used to approximate Lipschitz graph parameters, such as maximum matching size, maximum independent set, etc. The paper also shows how to compute an \(r\)-division in the MPC model, which yields \(O(1)\)-round algorithms for many classic graph problems (connected components, matchings, etc.). There is a (conditional) lower bound showing that, without an embedding, it is not possible to solve such problems in \(O(1)\) rounds (and sublinear space per processor).

Independence Testing for Bounded Degree Bayesian Network by Arnab Bhattacharyya, Clément L. Canonne (again, our own), and Joy Qiping Yang (arXiv). Given a distribution \(\mathcal{P}\) on the Boolean hypercube \(\{0,1\}^n\), the problem is to determine whether \(\mathcal{P}\) is a product distribution. In general, this problem requires \(\Omega(2^n)\) samples. Suppose \(\mathcal{P}\) has a sparse, “efficient” description. Can we do better? This paper shows that when \(\mathcal{P}\) is generated by a Bayesian network (with bounded indegree), then the independence testing problem can be solved with \(\widetilde{O}(n/\varepsilon^2)\) samples. Think of a Bayesian network as a DAG, where each vertex generates a Bernoulli random variable. The variable at a vertex depends only on the outcomes at its in-neighborhood.
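
A hedged toy example of what “generated by a Bayesian network” means; the network, conditional probability tables, and interface below are made up purely for illustration.

```python
import random

def sample_bayes_net(n, parents, cpt):
    """Forward-sample one vector from a Bayesian network over {0,1}^n.
    parents[i] is a tuple of indices j < i that bit i depends on;
    cpt[i] maps an assignment of parents[i] to Pr[bit i = 1]."""
    x = [0] * n
    for i in range(n):  # assumes vertices are listed in topological order
        pa = tuple(x[j] for j in parents[i])
        x[i] = 1 if random.random() < cpt[i][pa] else 0
    return tuple(x)

# A tiny network with indegree at most 1: bit 1 depends on bit 0.
parents = {0: (), 1: (0,)}
cpt = {0: {(): 0.5}, 1: {(0,): 0.9, (1,): 0.2}}
# sample_bayes_net(2, parents, cpt) -> e.g. (0, 1)
```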

Low Degree Testing over the Reals by Vipul Arora, Arnab Bhattacharyya, Noah Fleming, Esty Kelman, and Yuichi Yoshida (arXiv, ECCC). The problem of testing low-degree polynomials goes back to the birth of property testing. This paper studies real valued polynomials, in the distribution free setting. Formally, we have query access to a function \(f: \mathbb{R}^d \to \mathbb{R}\). The distance is measured with respect to an unknown distribution \(\mathcal{D}\) over the domain. This paper shows that the real low degree testing problem can be solved in \(poly(d\varepsilon^{-1})\) queries (under some reasonableness conditions on the distribution). The approach goes via “self-correct and test”: try to compute a low degree polynomial that fits some sampled data, and then check how far the self-corrected version is from another sample.
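
Here is a one-dimensional caricature of “self-correct and test” (assuming the numpy polynomial API; the degree, sample sizes, and threshold are illustrative and not from the paper): fit a low-degree polynomial to one batch of samples, then check it against a fresh batch.

```python
import numpy as np

def self_correct_and_test(f, sampler, degree, eps, fit_size=200, test_size=200):
    """Fit a degree-`degree` polynomial to f on points drawn from `sampler`,
    then estimate the probability that f disagrees with the fit on fresh points."""
    xs = np.array([sampler() for _ in range(fit_size)])
    ys = np.array([f(x) for x in xs])
    coeffs = np.polynomial.polynomial.polyfit(xs, ys, degree)  # least-squares fit

    fresh = np.array([sampler() for _ in range(test_size)])
    vals = np.polynomial.polynomial.polyval(fresh, coeffs)
    disagreement = np.mean(np.abs(vals - np.array([f(x) for x in fresh])) > 1e-6)
    return disagreement <= eps  # accept if the fit explains most fresh points
```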

Testing distributional assumptions of learning algorithms by Ronitt Rubinfeld and Arsen Vasilyan (arXiv). Consider the problem of learning a halfspace over \(\mathbb{R}^n\). If the underlying distribution is Gaussian, then this class can be learned in \(n^{poly(\varepsilon^{-1})}\) samples. If the distribution is arbitrary, no \(2^{o(n)}\) algorithm is known despite much research. This paper introduces the notion of having a tester-learner pair. The tester first checks if the input distribution is “well-behaved” (Gaussian-like). If the tester passes, then we run the learner. Indeed, this perspective goes back to some of the original motivations for property testing (when is testing faster than learning). The intriguing aspect of this problem is that we do not have efficient testers for determining if an input distribution is Gaussian. This paper circumvents that problem by estimating certain moments of the distribution. If these moments agree with the moments of a Gaussian, then the learner is guaranteed to succeed. We get the best of both worlds: if the input distribution is Gaussian, the learning is done correctly. If the learner succeeds, then the output (hypothesis) is guaranteed to be correct, regardless of the input distribution.
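
A hedged, one-dimensional sketch of the moment-checking idea (the moments, tolerances, and overall tester used in the paper are considerably more delicate): compare low-order empirical moments of the sample against those of a standard Gaussian, and only run the Gaussian-specific learner if they match.

```python
import numpy as np

def looks_gaussian(samples, tol=0.1):
    """Crude check that a one-dimensional sample has roughly standard-Gaussian
    low-order moments: mean 0, variance 1, third moment 0, fourth moment 3.
    The tolerance is illustrative; this is not the paper's tester."""
    x = np.asarray(samples, dtype=float)
    empirical = [x.mean(), (x ** 2).mean(), (x ** 3).mean(), (x ** 4).mean()]
    gaussian = [0.0, 1.0, 0.0, 3.0]
    return all(abs(m - g) <= tol for m, g in zip(empirical, gaussian))

# If looks_gaussian(...) passes for the relevant projections of the data,
# one would then run the Gaussian-specific halfspace learner.
```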

Testability in group theory by Oren Becker, Alexander Lubotzky, and Jonathan Mosheiff (arXiv). This paper is the journal version of a result of the authors, and it gives a group theoretic presentation of a property testing result. Consider the following problem. The input is a pair of permutations \((\sigma_1, \sigma_2)\) over \([n]\). The aim is to test whether they commute: \(\sigma_1 \sigma_2 = \sigma_2 \sigma_1\). Another result of the authors gives a tester that makes \(O(\varepsilon^{-1})\) queries. They refer to this problem as “testing the relation” \(XY = YX\). This paper gives a grand generalization of that result, best explained by another example. Consider another relation/property denoted \(\{XZ = ZX, YZ = ZY\}\). This property consists of all triples of permutations \((\sigma_1, \sigma_2, \sigma_3)\), where \(\sigma_3\) commutes with the other two. A consequence of the main theorem is that this property is not testable with query complexity independent of \(n\). The main result of this paper is a characterization of testable relations, which goes via studying the expansion of an infinite graph associated with the relation.
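
The natural sampling tester for the relation \(XY = YX\) is easy to state (a hedged sketch; the number of queries below is illustrative): check on random points whether the two compositions agree.

```python
import random

def test_commuting(sigma1, sigma2, num_queries=100):
    """Test whether two permutations of [n] (given as 0-indexed lists) commute,
    by checking sigma1(sigma2(i)) == sigma2(sigma1(i)) on random points i."""
    n = len(sigma1)
    for _ in range(num_queries):
        i = random.randrange(n)
        if sigma1[sigma2[i]] != sigma2[sigma1[i]]:
            return False  # found a point where the two compositions disagree
    return True  # compositions agreed on all sampled points
```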

Testing Positive Semidefiniteness Using Linear Measurements by Deanna Needell, William Swartworth, and David P. Woodruff (arXiv). The input is a \(d \times d\) real, symmetric matrix \(M\) and we wish to determine if it is positive semidefinite (all eigenvalues are nonnegative). For the testing problem, we reject when the minimum eigenvalue is at most \(-\varepsilon \|M\|_2\). (The paper also considers general Schatten \(p\)-norms.) This paper gives a list of results for non-adaptive vs adaptive, and one-sided vs two-sided testers. There are two access models considered: a single query consists of either a (i) matrix-vector product \(Mx\) or (ii) vector-matrix-vector product \(y^TMx\). Typical models that query entries of the matrix require strong bounds on the entries, which is less reasonable in practical situations. An interesting discovery is that the non-adaptive, one-sided complexity is \(\Theta(\sqrt{d}\varepsilon^{-1})\) while the two-sided bound is independent of \(d\).
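
In the vector-matrix-vector query model, the natural one-sided tester is very simple to state (a hedged sketch, not the paper's optimal tester; the number of queries is illustrative): query random quadratic forms and reject as soon as one is negative.

```python
import numpy as np

def test_psd_quadratic_forms(M, num_queries=50):
    """One-sided PSD tester using vector-matrix-vector queries x^T M x:
    if M is PSD every quadratic form is nonnegative, so we reject only
    when we witness a negative one."""
    d = M.shape[0]
    for _ in range(num_queries):
        x = np.random.randn(d)   # random Gaussian query vector
        if x @ M @ x < 0:        # one vector-matrix-vector query
            return False         # certificate that M is not PSD
    return True
```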

Relaxed Locally Decodable and Correctable Codes: Beyond Tensoring by Gil Cohen and Tal Yankovitz (ECCC). Locally decodable and correctable codes are a fundamental object of study in property testing (and TCS in general). Consider a locally correctable code (LCC). Given a string \(x\), the decoder/corrector makes \(q\) queries to \(x\), and outputs a symbol. We can think of the output collectively as a string \(y\). If \(x\) is a codeword, then \(y = x\). Otherwise, \(dist(y,z) \leq \varepsilon\), where \(z\) is some codeword close to \(x\). In the relaxed version, the corrector is allowed to output \(\bot\), denoting that it has discovered corruption. The distance is only measured in the coordinates where the corrector does not output \(\bot\). Thus, the corrector gets a “free pass” if it outputs \(\bot\). But note that when \(x\) is a codeword, the output must be exactly \(x\). This paper gives a Relaxed LCC with query complexity \((\log n)^{O(\log\log\log n)}\), a significant improvement over the previous best \((\log n)^{O(\log\log n)}\). It is known from previous work that the query complexity must be \(\Omega(\sqrt{\log n})\).
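
To make the relaxed corrector's behavior concrete, here is a hedged toy sketch built on the Hadamard code (a genuine 2-query LCC, and not the construction of this paper): correct a coordinate via two random queries, and output \(\bot\) whenever two independent attempts disagree.

```python
import random

def relaxed_hadamard_correct(w, a, k):
    """Toy relaxed corrector for the Hadamard code with message length k.
    w is the (possibly corrupted) received word of length 2^k, indexed by
    integers whose bits encode points of {0,1}^k; we want coordinate a.
    On codewords, w[r ^ a] ^ w[r] equals the correct value <m, a> mod 2."""
    def one_attempt():
        r = random.randrange(2 ** k)
        return w[r ^ a] ^ w[r]
    v1, v2 = one_attempt(), one_attempt()
    return v1 if v1 == v2 else None  # None plays the role of the bot symbol
```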

Verifying The Unseen: Interactive Proofs for Label-Invariant Distribution Properties by Tal Herman and Guy Rothblum (arXiv). This paper considers the distribution testing problem in the context of interactive proofs. The verifier, who wishes to test a property of a distribution \(\mathcal{P}\), interacts with a prover who knows the distribution. The guarantee required is the standard one for interactive proof systems: in the YES case, an honest prover should be able to convince the verifier. In the NO case, no prover can convince the verifier with high probability. There are two important parameters of interest: the sample complexity of the verifier, and the communication complexity of the messages. It is useful to consider the two extremes. In one extreme, the verifier can simply solve the problem herself, ignoring the prover. This could require \(\Theta(n/\log n)\) queries (for the hardest properties like entropy and support size). Another extreme is for the honest prover to simply send an approximate description of the distribution, which takes \(O(n)\) bits. The verifier can then just test equality to the prover's message, which only takes \(\Theta(\sqrt{n})\) queries. This paper shows a 2-round protocol for any (label-invariant) property where both the communication and the sample complexity can be made \(\Theta(\sqrt{n})\). This result shows the power of interaction for distribution testing problems.
