News for December 2020

Happy 2021! We now cover the last papers of 2020: a (smaller) spread with graph property testing, distribution property testing, and circuit complexity.

Almost Optimal Bounds for Sublinear-Time Sampling of k-Cliques: Sampling Cliques is Harder Than Counting by Talya Eden, Dana Ron, and Will Rosenbaum (arXiv). In many settings, sampling and counting are tightly coupled; the classic example is permanent estimation and sampling random bipartite matchings. This paper investigates the problem of sampling uniform random cliques, and unearths an interesting separation between approximately uniform sampling and approximate counting. Let's consider the fundamental problem of edge and triangle counting in graphs of constant arboricity. (This class, which contains all minor-closed families, has deep connections to clique counting; clique counting can be done in linear time for bounded arboricity graphs.) The input connected graph \(G\) has \(n\) vertices, \(m\) edges, and \(t\) triangles. We assume the standard query model, with uniform random vertex, neighbor, and degree queries. Our aim is to get \((1+\varepsilon)\)-approximations of \(m\) and \(t\) (and, in general, of any \(k\)-clique count). It is known from previous work that edge estimation can be done in \(O(1)\) time and triangle estimation in \(O(n/t)\) time (ignoring log and \(\varepsilon\) dependencies); moreover, these bounds are optimal. Can one get similar bounds for approximately sampling a uniform random edge/triangle? Previous work of Eden-Rosenbaum achieves such a bound for sampling edges. Surprisingly, this paper shows that triangle sampling is fundamentally harder, and requires \(\Omega(n^{3/4}/\sqrt{t})\) queries. The main result gives a precise and optimal bound for uniformly sampling \(k\)-cliques in graphs of arboricity \(\alpha\).
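For concreteness, here is a minimal sketch of the standard query model above, in Python with hypothetical class and method names: the algorithm interacts with the graph only through uniform random vertex queries, degree queries, and neighbor queries, and is charged one unit per call.

```python
import random

class GraphOracle:
    """Hypothetical adjacency-list wrapper exposing only the
    queries allowed in the standard sublinear-time model."""

    def __init__(self, adj):
        # adj: dict mapping each vertex to a list of its neighbors
        self.adj = adj
        self.vertices = list(adj)

    def random_vertex(self):
        """Uniform random vertex query."""
        return random.choice(self.vertices)

    def degree(self, v):
        """Degree query: number of neighbors of v."""
        return len(self.adj[v])

    def neighbor(self, v, i):
        """Neighbor query: the i-th neighbor of v (0-indexed)."""
        return self.adj[v][i]
```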

Testing and reconstruction via decision trees by Guy Blanc, Jane Lange, and Li-Yang Tan (arXiv). A property tester is a (sublinear) algorithm that distinguishes objects with a property from objects that are far from it. Reconstruction is a much stronger sublinear-algorithmic notion, which actually provides query access to a close object with the property. Suppose we have query access to an \(n\)-variate Boolean function \(f\), and the property \(\mathcal{P}\) of interest is the class of size-\(s\) decision trees. A reconstructor gives query access to a function \(g \in \mathcal{P}\) such that \(dist(f,g) = O(OPT_f)\), where \(OPT_f\) is the distance of \(f\) to \(\mathcal{P}\). Observe that a reconstructor automatically gives a distance estimator: just estimate \(dist(f,g)\) by random sampling, as in the sketch below. One of the main results of this paper is a (weaker, bicriteria) reconstructor, where \(g\) is only guaranteed to have decision-tree complexity \(s^{O(\log^2 s)}\). Each query to \(g\) is computed in \(poly(\log s, 1/\varepsilon)\cdot n\) time. This reconstructor leads to a number of testing results, as well as reconstructors for other properties such as low Fourier degree. Another interesting result proven here: if there exists an efficient reconstructor that outputs \(g\) of size \(s\), then there exists an efficient proper learner for decision trees (a major open problem).
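A toy illustration of that observation, as a hedged sketch with hypothetical names: given query access to \(f\) and to the reconstructed \(g\) (both Boolean functions on \(\{0,1\}^n\)), plain Monte Carlo sampling estimates \(dist(f,g)\) to additive error \(\varepsilon\) using \(O(1/\varepsilon^2)\) queries.

```python
import random

def estimate_distance(f, g, n, num_samples=10000):
    """Estimate dist(f, g) = Pr_x[f(x) != g(x)] over uniform
    x in {0,1}^n by Monte Carlo sampling. By a Chernoff bound,
    O(1/eps^2) samples suffice for an additive-eps estimate
    with constant success probability."""
    disagreements = 0
    for _ in range(num_samples):
        x = tuple(random.randint(0, 1) for _ in range(n))
        if f(x) != g(x):
            disagreements += 1
    return disagreements / num_samples
```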

Testing Product Distributions: A Closer Look by Arnab Bhattacharyya, Sutanu Gayen, Saravanan Kandasamy, and N. V. Vinodchandran (arXiv). This paper is on distribution testing, where all distributions involved are promised to be product distributions. Let \(P, Q\) be distributions over \(\Sigma^n\), where \(\Sigma\) is some (small) finite set. The problems studied are identity testing (\(Q\) is known/fixed, and one gets samples from an unknown \(P\)) and closeness testing (both unknown). It is known from previous work that these problems can basically be solved with \(O(n|\Sigma|\varepsilon^{-2})\) samples. Note that the domain size is exponential in \(n\); thus, these bounds give an exponential improvement over identity/closeness testing in the vanilla (non-product) setting. Tolerant testing is of special significance here: because the domain is so large, it makes sense to allow for some distance even in the “yes” case. The main result of this paper is to completely map out the complexities of identity/closeness testing where we wish to distinguish \(d_1(P,Q) < \varepsilon_1\) from \(d_2(P,Q) > \varepsilon_2\), where \(d_1, d_2\) are potentially different distance measures; the paper considers total variation (TV), KL, Hellinger, and \(\chi^2\) distances. An important improvement is for identity testing where \(d_1\) is Hellinger and \(d_2\) is TV distance: for certain values of \(\varepsilon_1, \varepsilon_2\), one can get a tester of optimal sample complexity \(O(\sqrt{n|\Sigma|})\), where previous bounds only gave \(O(n|\Sigma|)\). The paper also proves a number of new lower bounds for different \((d_1, d_2)\) pairs.
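Part of why product structure is so helpful is that some of these distances decompose over coordinates. For instance, the Bhattacharyya coefficient is multiplicative over products, so the squared Hellinger distance satisfies \(H^2(P,Q) = 1 - \prod_i (1 - H^2(P_i,Q_i))\), and distances between product distributions can be computed from the \(n\) marginals alone. A minimal sketch of this fact (hypothetical function names; each marginal given as a probability vector over \(\Sigma\)):

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two distributions over the
    same finite alphabet, given as probability vectors."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def hellinger_sq_product(P_marginals, Q_marginals):
    """Squared Hellinger distance H^2(P, Q) between two product
    distributions over Sigma^n, computed from the n marginals.
    Uses multiplicativity of the Bhattacharyya coefficient,
    BC(P, Q) = prod_i BC(P_i, Q_i), and the identity H^2 = 1 - BC."""
    bc = 1.0
    for p, q in zip(P_marginals, Q_marginals):
        bc *= bhattacharyya(p, q)
    return 1.0 - bc
```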