# A Pedagogical reference to kick off the New Year

Our own Clément Canonne has written a beautiful survey which is now available in FnT book format from now publishers. This appears to be a very promising read — especially for the Distribution Testers among you. Today’s post is a mere advertisement for this beautiful survey/book which is clearly the result of a dedicated pursuit.

Let me now dig into this survey a teeny tiny bit. One among the many cool features of this survey is that it uses one central example (testing goodness-of-fit) to give a unified treatment of the diverse tools and techniques used in distribution testing. Another plus for me is the historical notes section that accompanies every chapter. In particular, I really liked jumping into the informative history section at the end of Chapter 2, which has an almost story-like feel to it. If the above points do not catch your fancy, then please try opening the survey. You will be hard-pressed to find a book that is typeset in such an aesthetically pleasing way, with colored fonts to emphasize various parameters in several intricate proofs. Happy Reading!

# News for November 2022

All the best to everyone for a Happy 2023. The holiday season is ripe with lots of papers to engage our readers. So, we have nine papers and we hope some of those will catch your fancy. As a new year treat, we also feature Gilmer’s constant lower bound on the union-closed sets problem of Frankl. On we go.

Sublinear Time Algorithms and Complexity of Approximate Maximum Matching by Soheil Behnezhad, Mohammad Roghani, Aviad Rubinstein (arXiv) This paper significantly advances our understanding of the maximum matching problem in the sublinear regime. Your goal is to estimate the size of the maximum matching and you may assume that you have query access to the adjacency list of your graph. Our posts from Dec 2021 and June 2022 reported some impressive progress on this problem. The upshot of these works is that you can beat greedy matching and obtain a $$\frac{1}{2} + \Omega(1)$$ approximation to the maximum matching size in sublinear time. Let me first go over the algorithmic results from the current paper. The paper shows the following two algorithmic results:

(1) An algorithm that runs in time $$n^{2 - \Omega_{\varepsilon}(1)}$$ and returns a $$2/3 - \varepsilon$$ approximation to the maximum matching size in general graphs, and

(2) An algorithm that runs in time $$n^{2 - \Omega_{\varepsilon}(1)}$$ and returns a $$2/3 + \varepsilon$$ approximation to the maximum matching size in bipartite graphs.

The question remained — can we show a lower bound that grows superlinearly with $$n$$? The current work achieves this and shows that even on bipartite graphs, you must make at least $$n^{1.2 - o(1)}$$ queries to the adjacency list to obtain a $$2/3 + \Omega(1)$$ approximation. (An aside: a concurrent work by Bhattacharya-Kiss-Saranurak from December also obtains similar algorithmic results for approximating the maximum matching size in general graphs.)

Directed Isoperimetric Theorems for Boolean Functions on the Hypergrid and an $$\widetilde{O}(n \sqrt d)$$ Monotonicity Tester by Hadley Black, Deeparnab Chakrabarty, C. Seshadhri (arXiv) Boolean monotonicity testing is as classic as classic gets in property testing. Encouraged by the success of isoperimetric theorems over the hypercube domain and the monotonicity testers powered by them, one may wish to obtain efficient monotonicity testers for the hypergrid $$[n]^d$$. Indeed, the same gang of authors showed in a previous work that a Margulis-style directed isoperimetric theorem can be extended from the lowly hypercube to the hypergrid. This resulted in a tester with $$\widetilde{O}(d^{5/6})$$ queries. The more intricate task of proving a directed Talagrand-style isoperimetric theorem, of the kind that underlies the Khot-Minzer-Safra breakthrough, was a challenge. Was. The featured work extends this isoperimetric theorem from the hypercube to the hypergrid, which gives a tester with query complexity $$\widetilde{O}(n \sqrt d)$$, an improvement over the $$\widetilde{O}(d^{5/6})$$ bound for domains where $$n$$ is (say) some small constant. But as they say, when it rains, it pours. This brings us to a concurrent paper with the same result.

Improved Monotonicity Testers via Hypercube Embeddings by Mark Braverman, Subhash Khot, Guy Kindler, Dor Minzer (arXiv) Similar to the paper above, this paper also obtains monotonicity testers over the hypergrid domain, $$[n]^d$$, with $$\widetilde{O}(n^3 \sqrt d)$$ queries. This paper also presents monotonicity testers over the standard hypercube domain — $$\{0,1\}^d$$ in the $$p$$-biased setting. In particular, their tester issues $$\widetilde{O}(\sqrt d)$$ queries to successfully test monotonicity on the $$p$$-biased cube. Coolly enough, this paper also proves directed Talagrand style isoperimetric inequalities both over the hypergrid and the $$p$$-biased hypercube domains.
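For readers newer to this area, it may help to see the object these isoperimetric theorems ultimately power. The classic one-sided edge tester on the hypercube $$\{0,1\}^d$$ (the textbook starting point, not the hypergrid testers of the two papers above) samples random edges and rejects upon seeing a violation; directed isoperimetry is what guarantees that an $$\varepsilon$$-far function has many violated edges. A minimal sketch, where `f`, `d`, and the query budget are hypothetical inputs:

```python
import random

def edge_tester(f, d, queries):
    """Classic one-sided edge tester on {0,1}^d: sample a uniformly random
    edge in a random direction i and reject upon seeing f(..0..) > f(..1..)."""
    for _ in range(queries):
        x = [random.randint(0, 1) for _ in range(d)]
        i = random.randrange(d)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1
        if f(tuple(lo)) > f(tuple(hi)):
            return False  # found an edge violating monotonicity
    return True  # no violation found; accept
```

A monotone function is always accepted; for a function far from monotone, enough samples catch a violated edge with good probability.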

Toeplitz Low-Rank Approximation with Sublinear Query Complexity by Michael Kapralov, Hannah Lawrence, Mikhail Makarov, Cameron Musco, Kshiteej Sheth (arXiv) Another intriguing paper for the holiday month. So, take a Toeplitz matrix. Did you know that any positive semidefinite (PSD) Toeplitz matrix admits a near-optimal (in the Frobenius norm) low-rank approximation which is itself Toeplitz? This is a remarkable statement. The featured paper proves this result and uses it to get more algorithmic mileage. In particular, suppose you are given a $$d \times d$$ PSD Toeplitz matrix $$T$$. Armed with the techniques from the paper, you get algorithms that return a Toeplitz matrix $$\widetilde{T}$$ whose rank is only slightly bigger than the target rank $$k$$ and which approximates $$T$$ in the Frobenius norm nearly as well as the best rank-$$k$$ approximation. Moreover, the algorithm only issues a number of queries sublinear in the size of $$T$$.

Sampling an Edge in Sublinear Time Exactly and Optimally by Talya Eden, Shyam Narayanan and Jakub Tětek (arXiv) Regular readers of PTReview are no strangers to the fundamental task of sampling a random edge from a graph which you can access via query access to its vertices. Of course, you don’t have direct access to the edges of this graph. This paper considers the task of sampling a truly uniform edge from the graph $$G = (V,E)$$ with $$|V| = n, |E| = m$$. In STOC 22, Tětek and Thorup presented an algorithm for a relaxation of this problem where you want an $$\varepsilon$$-approximately uniform edge. This algorithm runs in time $$O\left(\frac{n}{\sqrt{m}} \cdot \log(1/\varepsilon) \right)$$. The featured paper presents an algorithm that samples an honest to goodness uniform edge in expected time $$O(n/\sqrt{m})$$. This closes the problem as we already know a matching lower bound. Indeed, just consider a graph in which $$O(\sqrt m)$$ vertices induce a clique and all the remaining vertices are isolated. You need to sample at least $$\Omega(n/\sqrt m)$$ vertices before you see any edge.
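For intuition, here is the folklore rejection-sampling routine that outputs an exactly uniform edge when an upper bound `d_max` on the maximum degree is known. This is not the featured paper's algorithm: its expected number of trials is $$n \cdot d_{max}/2m$$, which matches the optimal $$O(n/\sqrt m)$$ only when $$d_{max} \approx \sqrt m$$. The dictionary `adj` is a stand-in for adjacency-list query access:

```python
import random

def sample_edge(adj, d_max):
    """Rejection sampling: pick a uniform vertex v and a uniform index
    i in {0, ..., d_max - 1}; accept if v has more than i neighbors.
    Each directed edge is accepted with probability 1/(n * d_max), so
    conditioned on acceptance the returned edge is exactly uniform."""
    vertices = list(adj)
    while True:
        v = random.choice(vertices)
        i = random.randrange(d_max)
        if i < len(adj[v]):
            return (v, adj[v][i])
```

The paper's contribution is, roughly, to get the same exactly-uniform guarantee without paying for the gap between $$d_{max}$$ and $$\sqrt m$$.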

Support Size Estimation: The Power of Conditioning by Diptarka Chakraborty, Gunjan Kumar, Kuldeep S. Meel (arXiv) This work considers the classic problem of support size estimation with a slight twist. You are given access to a stronger (conditioning based) sampling oracle. Let me highlight one of the results from this paper. So, you are given a distribution $$D$$ where $$supp(D) \subseteq [n]$$. You want to obtain an estimate of $$|supp(D)|$$ that lies within $$|supp(D)| \pm \varepsilon n$$ with high probability. Suppose you are also given access to the following sampling oracle. You may choose any subset $$S \subseteq [n]$$ and you may request a sample $$x \sim D\vert_S$$. An element $$x \in S$$ is returned with probability $$D\vert_S(x) = D(x)/D(S)$$ (for simplicity of this post, let us assume $$D(S) > 0$$). In addition, this oracle also reveals the value $$D(x)$$. The paper shows that the algorithmic task of obtaining a high probability estimate of the support size (to within $$\pm \varepsilon n$$) with this sampling oracle admits a lower bound of $$\Omega(\log \log n)$$ calls to the sampling oracle.
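To make the oracle concrete, here is a toy, fully explicit implementation of the conditional sampling oracle described above. The class name and interface are made up for illustration; in the actual model the algorithm never sees $$D$$ itself, only the oracle's answers:

```python
import random

class CondOracle:
    """Toy conditioning oracle: given a subset S, return x ~ D restricted
    to S (probability D(x)/D(S)), together with the revealed value D(x)."""

    def __init__(self, D):
        self.D = D  # dict: element -> probability

    def sample(self, S):
        items = [(x, self.D.get(x, 0.0)) for x in S]
        total = sum(p for _, p in items)
        assert total > 0, "the model assumes D(S) > 0"
        r = random.uniform(0, total)
        acc = 0.0
        for x, p in items:
            acc += p
            if r <= acc:
                return x, p
        return items[-1]
```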

Computing (1+epsilon)-Approximate Degeneracy in Sublinear Time by Valerie King, Alex Thomo, Quinton Yong (arXiv) Degeneracy is an important graph parameter relevant to several problems in algorithmic graph theory. A graph $$G = (V,E)$$ is $$\delta$$-degenerate if every induced subgraph of $$G$$ contains a vertex with degree at most $$\delta$$. The featured paper presents algorithms for a $$(1 + \varepsilon)$$-approximation to the degeneracy of $$G$$ where you are given access to $$G$$ via its adjacency list.
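As a reference point, the degeneracy can be computed exactly by the classic peeling procedure: repeatedly delete a minimum-degree vertex and record the largest minimum degree seen along the way. This, of course, reads the entire graph; the point of the featured paper is to approximate the answer in sublinear time. A simple (quadratic-time) sketch:

```python
def degeneracy(adj):
    """Exact degeneracy via peeling: repeatedly delete a minimum-degree
    vertex; the degeneracy equals the largest minimum degree encountered."""
    adj = {v: set(ns) for v, ns in adj.items()}  # private mutable copy
    best = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        best = max(best, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return best
```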

Learning and Testing Latent-Tree Ising Models Efficiently by Davin Choo, Yuval Dagan, Constantinos Daskalakis, Anthimos Vardis Kandiros (arXiv) Ising models are emerging as a rich and fertile frontier for Property Testing and Learning Theory researchers (at least to the uninitiated ones like me). This paper considers latent-tree Ising models. These are Ising models that can only be observed at their leaf nodes. One of the results in this paper gives an algorithm for testing whether the leaf distributions attached to two latent-tree Ising models are close or far in TV distance.

A constant lower bound for the union-closed sets conjecture by Justin Gilmer (arXiv) The union-closed sets conjecture of Frankl states that for any union-closed set system $$\mathcal{F} \subseteq 2^{[n]}$$, there is a mysterious element $$i \in [n]$$ that shows up in at least a $$c = 1/2$$ fraction of the sets in $$\mathcal{F}$$. Gilmer took a first swipe at this problem and gave a constant lower bound of $$c = 0.01$$. This has already been improved by at least four different groups to $$\frac{3-\sqrt{5}}{2}$$, a bound which is the limit of Gilmer’s method. (Gilmer’s paper, by the way, takes all of only 9 pages!)

The key lemma Gilmer proves is the following. Suppose you sample two sets $$A, B \sim \mathcal{D}_n$$ (iid) from some distribution $$\mathcal{D}_n$$ over the subsets of $$[n]$$. Suppose that for every index $$i \in [n]$$, the probability that the element $$i$$ shows up in the random set $$A$$ is at most $$0.01$$. Then you have $$H(A \cup B) \geq 1.26 H(A)$$. This is all you need to finish Gilmer’s proof (of $$c = 0.01$$). The remaining argument is as follows. Suppose, by way of contradiction, that no element shows up in at least a $$0.01$$ fraction of the sets in the union-closed family $$\mathcal{F}$$, and pick $$A, B$$ independently and uniformly from $$\mathcal{F}$$. The key lemma then gives $$H(A \cup B) > H(A) = \log |\mathcal{F}|$$. But since $$\mathcal{F}$$ is union-closed, $$A \cup B$$ is itself a member of $$\mathcal{F}$$, so $$H(A \cup B) \leq \log |\mathcal{F}|$$, a contradiction. The proof of the key lemma is also fairly slick and uses pretty simple information theoretic tools.

# News for July 2022

Last month saw a flurry of activity in Property Testing. We had thirteen papers!! Without further ado, let us dig in.

Testing of Index-Invariant Properties in the Huge Object Model (by Sourav Chakraborty, Eldar Fischer, Arijit Ghosh, Gopinath Mishra, and Sayantan Sen)(arXiv) This paper explores a class of distribution testing problems in the Huge Object Model introduced by Goldreich and Ron (see our coverage of the model here). A quick refresher of this model: so, suppose you want to test whether a distribution $$\mathcal{D}$$ supported over, say the boolean hypercube $$\{0,1\}^n$$ has a certain property $$\mathcal{P}$$. You pick a string $$x \sim \mathcal{D}$$ where the length of $$x$$ is $$n$$. In situations where $$n$$ is really large, you might not want to read all of $$x$$ and you may instead want to read only a few bits from it. To this end, Goldreich and Ron formulated a model where you have query access to the strings you sample. The distribution $$\mathcal{D}$$ is deemed to be $$\varepsilon$$-far from $$\mathcal{P}$$ if $$EMD(\mathcal{D}, \mathcal{P}) \geq \varepsilon$$ (here $$EMD$$ denotes the earthmover distance with respect to the relative Hamming distance between bitstrings). In this model, one parameter of interest is the query complexity of your tester.

One of the results in the featured paper above shows the following: Let $$\sf{MONOTONE}$$ denote the class of monotone distributions supported over $$\{0,1\}^n$$ (a distribution $$D$$ belongs to the class $$\sf{MONOTONE}$$ if $$D(x) \leq D(y)$$ whenever $$0^n \preceq x \preceq y \preceq 1^n$$). Let $$\mathcal{B}_d$$ denote the class of distributions supported over $$\{0,1\}^n$$ whose supports have VC dimension at most $$d$$. Let $$\mathcal{P} = \sf{MONOTONE} \cap \mathcal{B}_d$$. Then, for any $$\varepsilon > 0$$, you can test whether a distribution $$\mathcal{D} \in \mathcal{P}$$ or whether it is $$\varepsilon$$-far from $$\mathcal{P}$$ with query complexity $$poly(1/\varepsilon)$$. In fact, the paper shows this for a much richer class $$\mathcal{P}$$: the class of so-called index-invariant distributions with bounded VC dimension. The paper also shows the necessity of both of these conditions for efficient testability. Do check it out!

Identity Testing for High-Dimensional Distributions via Entropy Tensorization (by Antonio Blanca, Zongchen Chen, Daniel Štefankovič, and Eric Vigoda)(arXiv)

This paper considers a classic in distribution testing, namely the problem of testing whether the hidden input distribution $$\pi$$ is identical to an explicitly given distribution $$\mu$$. Both distributions are supported over a set $$\Omega$$. The caveat is that $$\Omega$$ is some high-dimensional set (think $$\Omega = [k]^n$$) whose size grows exponentially in $$n$$. In this case, identity testing has sample complexity $$\Omega(k^{n/2})$$ even when $$\mu$$ is the uniform distribution. In an attempt to overcome this apparent intractability of identity testing in high dimensions, this paper takes the following route: in addition to the standard sample access to $$\pi$$, you also assume access to a stronger sampling oracle for $$\pi$$. And now you would like to understand for which class of explicitly given distributions $$\mu$$ you can expect algorithms with efficient sample complexity (assuming the algorithm is equipped with this stronger sampling oracle). For any $$i \in [n]$$ and $$\omega \in \Omega$$, the stronger oracle considered in this work allows you to sample $$x \sim \pi_{\omega(-i)}$$ where $$\pi_{\omega(-i)}$$ denotes the conditional marginal distribution of $$\pi$$ over the $$i$$-th coordinate when the remaining coordinates have been fixed according to $$\omega$$.

The paper shows that if the known distribution $$\mu$$ satisfies a certain approximate tensorization of entropy criterion, then identity testing against such distributions $$\mu$$ can be done with $$\tilde{O}(n/\varepsilon)$$ queries. Thanks to the spectral independence toolkit pioneered by Anari et al., it turns out that the approximate tensorization property holds for a rich class of distributions. (A side note to self: it looks like I am running out of reasons to postpone learning about new tools like spectral independence.)

Near-Optimal Bounds for Testing Histogram Distributions (by Clément L. Canonne, Ilias Diakonikolas, Daniel M. Kane, and Sihan Liu)(arXiv) Histograms comprise one of the most natural and widely used ways of summarizing relevant aspects of massive datasets. Let $$\Omega$$ denote the $$n$$-element domain $$\{1, 2, \ldots, n\}$$. A $$k$$-histogram is a distribution whose probability mass function is piecewise constant over some partition of $$\Omega$$ into $$k$$ intervals. This paper studies the sample complexity of the following fundamental task: given sample access to a distribution $$\mathcal{P}$$ supported over $$\Omega$$, decide whether $$\mathcal{P}$$ is a $$k$$-histogram or whether $$\mathcal{P}$$ is $$\varepsilon$$-far from every $$k$$-histogram. The main result of the paper is a (near) sample-optimal algorithm for this problem. Specifically, this paper shows that $$k$$-histogram testing has sample complexity $$\Theta\left(\sqrt{nk}/\varepsilon + k/\varepsilon^2 + \sqrt{n}/\varepsilon^2\right)$$.
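To pin down the property being tested: for an explicitly given pmf, being a $$k$$-histogram just amounts to having at most $$k$$ maximal constant runs. (The testing problem is much harder, of course, since you only get samples from $$\mathcal{P}$$.) A quick sketch, with `tol` a hypothetical tolerance for floating-point comparisons:

```python
def is_k_histogram(p, k, tol=1e-12):
    """Check whether the pmf p (a list over {1,...,n}) is piecewise constant
    over at most k consecutive intervals, i.e. has at most k constant runs."""
    runs = 1
    for a, b in zip(p, p[1:]):
        if abs(a - b) > tol:
            runs += 1  # the value changed: a new interval starts here
    return runs <= k
```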

Comments on “Testing Conditional Independence of Discrete Distributions” (by Ilmun Kim)(arXiv) Probability is full of subtleties and conditional probability is perhaps the biggest landmine of subtleties in this venerable discipline. The featured paper closely examines some subtleties in Theorem 1.3 of the CDKS18 paper on testing conditional independence of discrete distributions. Essentially, this theorem undertakes the following endeavor: you would like to test whether a bivariate discrete distribution has independent marginals conditioned on values assumed by a third random variable. Theorem 1.3 of CDKS18 asserts that there exists a computationally efficient tester for conditional independence with small sample complexity. The featured paper fixes the sample complexity bound claimed in Theorem 1.3 of CDKS18.

Cryptographic Hardness of Learning Halfspaces with Massart Noise (by Ilias Diakonikolas, Daniel M. Kane, Pasin Manurangsi, and Lisheng Ren)(arXiv) The study of robust supervised learning in high dimensions has seen a lot of impressive progress in the last few years. The paper under review presents computational hardness results for the task of learning halfspaces in this overarching framework. Let us unpack this paper slowly. So, let us recall the classic task of learning halfspaces in $$\mathbb{R}^n$$. You know the drill. I have a known concept class $$\mathcal{C}$$ (comprising boolean functions) in my hand. Unbeknownst to you, I have a boolean function $$f \in \mathcal{C}$$. You get as input a multiset $$\{(x_i, f(x_i))\}_{i \in [s]}$$ of labeled examples where $$x_i \sim \mathcal{D}_x$$ and $$\mathcal{D}_x$$ is fixed but arbitrary. Your goal is to develop an algorithm that returns a hypothesis with a small misclassification rate. The classic stuff.

Now, consider the same setup with a little twist: the so-called Massart noise setup. The labels $$f(x_i)$$ are no longer reliable and the label on each $$x_i$$ gets flipped adversarially with probability $$\eta_i \leq \eta < 1/2$$. In a breakthrough, Diakonikolas, Gouleakis, and Tzamos made the first algorithmic progress on this problem and gave algorithms with running time $$poly(n/\varepsilon)$$ and misclassification rate $$\eta + \varepsilon$$. The current paper shows a lower-bound result. Assuming the hardness of the so-called “Learning With Errors” problem, this paper shows that under Massart noise, no polynomial-time learning algorithm can achieve a misclassification rate of $$o(\eta)$$.

Locally-iterative (Δ+1)-Coloring in Sublinear (in Δ) Rounds (by Xinyu Fu, Yitong Yin, and Chaodong Zheng)(arXiv) A time-honored problem in Distributed Computing is distributed graph coloring. Let us first understand what problem this paper studies. So, you are given a graph $$G = (V,E)$$ with maximum degree $$\Delta$$. In a seminal work, Szegedy and Vishwanathan introduced the framework of locally-iterative algorithms as a natural family of distributed graph coloring algorithms. These algorithms proceed in $$r$$ rounds. In each round, every vertex $$v$$ updates its color, where the new color of $$v$$ is a function of the current color of $$v$$ and the current colors of its neighbors. The current paper shows that, in the locally-iterative framework, you can in fact obtain a proper coloring of $$G$$ with $$\Delta(G) + 1$$ colors in $$r = O(\Delta^{3/4} \log \Delta) + \log^* n$$ rounds.
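As background, the sequential greedy algorithm (which is neither distributed nor locally-iterative) already certifies that $$\Delta + 1$$ colors always suffice: each vertex takes the smallest color missing from its already-colored neighborhood. The paper's challenge is to match this color count with few rounds of purely local updates. A sketch of the greedy baseline:

```python
def greedy_coloring(adj):
    """Sequential greedy (Delta+1)-coloring: each vertex, in turn, takes the
    smallest color not used by its already-colored neighbors. Since a vertex
    has at most Delta neighbors, color Delta is always available."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```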

Learning Hierarchical Structure of Clusterable Graphs (by Michael Kapralov, Akash Kumar, Silvio Lattanzi, Aida Mousavifar)(arXiv) [Disclaimer: I am one of the authors of this paper.] Hierarchical clustering of graph data is a fundamentally important task in the current big data era. In 2016, Dasgupta introduced the notion of Dasgupta cost which essentially allows one to measure the quality of a hierarchical clustering. This paper presents algorithms that can estimate the Dasgupta cost of a graph coming from a special family of $$k$$-clusterable graphs in the semi-supervised setting. These graphs consist of $$k$$ clusters: subsets of vertices that induce expanders and are sparsely connected to each other. We are given query access to the adjacency list of $$G$$. Also, for an initial “warmup” set of randomly chosen vertices, we are told the clusters they belong to. Armed with this setup, this paper presents algorithms that run in time $$\approx \sqrt{n}$$ and return an estimate of the Dasgupta cost of $$G$$ which is within a $$\approx \sqrt{\log k}$$ factor of the optimum cost.

Finding a Hidden Edge (by Ron Kupfer and Noam Nisan)(arXiv) Let us consider as a warmup (as done in the paper) the following toy problem. You have a graph on $$n$$ vertices whose edge set $$E$$ is hidden from you. Your objective is to return any $$(i,j) \in E$$. The only queries you are allowed are of the following form. You may consider any subset $$Q \subseteq V \times V$$ and you can ask whether $$Q$$ contains any edge. A simple binary search solves this question with $$\log m$$ queries (where $$m = {n \choose 2}$$). However, if you want a non-adaptive algorithm for this problem (unlike binary search), you can show that any deterministic algorithm must issue $$m$$ non-adaptive queries. It turns out randomness can help you get away with only $$O(\log^2 m)$$ non-adaptive queries for this special toy problem. Now, let me describe the problem considered in this work in earnest. Suppose the only queries you are allowed are of the following form: you may pick any $$S \subseteq V$$ and you may ask whether the graph induced on $$S$$ contains an edge. The paper’s main result is an algorithm for finding an edge in $$G$$ which issues a number of non-adaptive queries that is nearly linear in $$n$$. The paper also presents an almost matching lower bound.
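The warmup binary search can be sketched as follows, with `has_edge` a stand-in for the oracle that reports whether a set of vertex pairs contains an edge of the hidden graph:

```python
from itertools import combinations

def find_edge(n, has_edge):
    """Adaptive binary search over candidate pairs: keep halving a pool that
    is known to contain an edge. Uses about log2(C(n, 2)) + 1 queries."""
    pool = list(combinations(range(n), 2))
    if not has_edge(pool):
        return None  # the hidden graph is empty
    while len(pool) > 1:
        half = pool[:len(pool) // 2]
        # one half must still contain an edge; recurse into it
        pool = half if has_edge(half) else pool[len(pool) // 2:]
    return pool[0]
```

The point of the paper, of course, is that with only induced-subgraph queries and no adaptivity, this logarithmic budget is unattainable.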

On One-Sided Testing Affine Subspaces (by Nader Bshouty)(ECCC) Dictatorship testing is one of the classics in property testing of boolean functions. A more generalized problem considers testing whether the presented function is a $$k$$-monomial. If you are a regular reader of the posts on PTReview, you might have seen this problem before. It essentially asks you to test whether a boolean function $$f \colon \mathcal{F}^n \to \{0,1\}$$ is an indicator of an $$(n-k)$$-dimensional affine/linear subspace of $$\mathcal{F}^n$$ (here $$\mathcal{F}$$ denotes a finite field). Namely, you would like to test whether the set $$f^{-1}(1)$$ is an $$(n-k)$$-dimensional affine subspace of $$\mathcal{F}^n$$. The paper under review improves the state-of-the-art query complexity for this problem from a previous value of $$O\left(|\mathcal{F}|/\varepsilon\right)$$ to $$\tilde{O}\left(1/\varepsilon\right)$$.

Non-Adaptive Edge Counting and Sampling via Bipartite Independent Set Queries (by Raghavendra Addanki, Andrew McGregor, and Cameron Musco)(arXiv) If you have been around the PTReview corner for a while, you know that sublinear time estimation of graph properties is one of our favorite pastimes here. Classic work in this area considers the following queries: vertex degree queries, $$i$$-th neighbor queries, and edge existence queries. This classic query model has received a lot of attention and thanks to the work of Eden and Rosenbaum we know algorithms for near-uniform edge sampling with query complexity $$O(n/\sqrt{m}) \cdot poly(\log n) \cdot poly(1/\varepsilon)$$. Motivated by a desire to obtain more query-efficient algorithms, Beame et al. introduced an augmented query model where you are also allowed the following queries: you may pick $$L, R \subseteq V$$ and you get a yes/no response indicating whether there exists an edge in $$E(L, R)$$. These are also called the bipartite independent set (BIS) queries. The featured paper shows that with (BIS) queries you get non-adaptive algorithms for near-uniform edge sampling with query complexity being a mere $$\widetilde{O}(\varepsilon^{-4} \log^6 n)$$. The main result of the paper gives a non-adaptive algorithm for estimating the number of edges in $$G$$ with query complexity (under BIS) being a mere $$\widetilde{O}(\varepsilon^{-5} \log^5 n)$$.

A Query-Optimal Algorithm for Finding Counterfactuals (by Guy Blanc, Caleb Koch, Jane Lange, Li-Yang Tan)(arXiv) Given an abstract space $$X^d$$, an instance $$x^* \in X^d$$ and a model $$f$$ (which you think of as a boolean function over $$X^d$$), a point $$x’ \in X^d$$ is called a counterfactual to $$x^*$$ if $$x^*, x’$$ differ in few features (i.e., have a small Hamming distance) and $$f(x^*) \neq f(x’)$$. Ideally, you would like to find counterfactuals that are as close to $$x^*$$ as possible in Hamming distance. The main result of this paper is the following: take a monotone model $$f \colon \{0,1\}^d \to \{0,1\}$$ and an instance $$x^* \in \{0,1\}^d$$ with small sensitivity (say $$\alpha$$). Then there exists an algorithm that makes at most $$\alpha^{\Delta(x^*)}$$ queries to $$f$$ and returns all optimal counterfactuals for $$x^*$$. Here $$\Delta(x^*) = \min_{x \in \{0,1\}^d} \{\Delta_H(x, x^*) \colon f(x) \neq f(x^*) \}$$. The paper also proves a matching query complexity lower bound, witnessed by some monotone model $$f$$.
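To fix the definitions, here is the brute-force computation of $$\Delta(x^*)$$ together with the set of all optimal counterfactuals. It queries $$f$$ on all $$2^d$$ points; the paper's point is that for monotone models with small sensitivity, only $$\alpha^{\Delta(x^*)}$$ queries are needed:

```python
from itertools import product

def optimal_counterfactuals(f, x_star):
    """Brute force over {0,1}^d: find the minimum Hamming distance at which
    the model's prediction flips, and return every point achieving it."""
    d = len(x_star)
    best, out = d + 1, []
    for x in product((0, 1), repeat=d):
        if f(x) != f(x_star):
            dist = sum(a != b for a, b in zip(x, x_star))
            if dist < best:
                best, out = dist, [x]  # strictly closer counterfactual
            elif dist == best:
                out.append(x)
    return best, out
```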

A Sublinear-Time Quantum Algorithm for Approximating Partition Functions (by Arjan Cornelissen and Yassine Hamoudi)(arXiv) For a classical Hamiltonian $$H \colon \Omega \to \{0,1, \ldots, n\}$$, at inverse temperature $$\beta$$, the probability, under the so-called Gibbs distribution, assigned to a state $$x \in \Omega$$ is proportional to $$\exp(-\beta H(x))$$. The partition function is given by $$Z(\beta) = \sum_{x \in \Omega} \exp(-\beta H(x))$$. At high temperatures (or low values of $$\beta$$) the partition function is typically easy to compute. However, the low-temperature regime is often challenging. You use MCMC methods to compute $$Z(\infty)$$. In particular, you write this as the telescoping product $$Z(\infty) = Z(\beta_0) \cdot \prod_{i = 0}^{\ell - 1} \frac{Z(\beta_{i+1})}{Z(\beta_i)}$$ where $$0 = \beta_0 < \beta_1 < \ldots < \beta_{\ell} = \infty$$ is some increasing sequence of inverse temperatures with limited fluctuations in the Gibbs distribution between two consecutive values, and you use MCMC methods to estimate each of the $$\ell$$ ratios in the above product. The main result of this paper is a quantum algorithm that, on input a Gibbs distribution generated by a Markov chain with a large spectral gap, applies the quantum walk operator a number of times that is sublinear in the logarithm of the size of the state space and returns an estimate of $$Z(\infty)$$ up to additive error $$\pm \varepsilon Z(\infty)$$.
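The telescoping identity is easy to sanity-check on a toy Hamiltonian by computing each $$Z(\beta_i)$$ exactly and replacing $$\beta_\ell = \infty$$ with a finite inverse temperature. (In the actual algorithm each ratio is estimated via sampling rather than computed exactly; the energies and temperature schedule below are made up for illustration.)

```python
import math

def Z(H, beta):
    """Exact partition function Z(beta) = sum_x exp(-beta * H(x)),
    where H is the list of state energies."""
    return sum(math.exp(-beta * h) for h in H)

# Toy Hamiltonian: energies of the states of Omega.
H = [0, 0, 1, 2, 2, 3]
betas = [0.0, 0.5, 1.0, 2.0, 4.0]  # increasing inverse temperatures

# Telescoping: Z(beta_l) = Z(beta_0) * prod_i Z(beta_{i+1}) / Z(beta_i).
prod = Z(H, betas[0])
for b0, b1 in zip(betas, betas[1:]):
    prod *= Z(H, b1) / Z(H, b0)

assert abs(prod - Z(H, betas[-1])) < 1e-9
```

Note also that $$Z(0) = |\Omega|$$, which is why the product can start from an easy-to-compute quantity.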

A Near-Cubic Lower Bound for 3-Query Locally Decodable Codes from Semirandom CSP Refutation (by Omar Alrabiah, Venkatesan Guruswami, Pravesh Kothari, and Peter Manohar)(ECCC) If you made it till here, it is time for a treat. Let us close (hopefully, I did not miss any papers this time!) with a breakthrough in Locally Decodable Codes. So, for 2-query LDCs, we know fairly tight bounds on the block length. For 3-query LDCs, on the other hand, we know a sub-exponential upper bound on the block length. However, the best-known lower bound on the block length was merely quadratic. The featured paper improves this to a near-cubic lower bound on the block length. The main tool used to achieve this is a surprising connection between the existence of locally decodable codes and the refutation of semirandom Boolean CSP instances. This looks like a fantastic read to close off this month’s report!

# News for March 2022

This was a relatively sleepy month with only two property testing papers. Do let us know if we missed any. Let us dig in. (EDIT: Two updates.)

1. I missed two papers. One on the estimation of quantum entropies and the other on algorithms and lower bounds for estimating MST and TSP costs.
2. Finally, I forgot to welcome our new editor. Welcome onboard, Nithin Varma!!

Private High-Dimensional Hypothesis Testing by Shyam Narayanan (arXiv) This paper continues the study of distribution testing under the constraints brought forth by differential privacy, extending the work of Canonne-Kamath-McMillan-Ullman-Zakynthinou (henceforth CKMUZ, covered in our May 2019 post). In particular, the paper presents algorithms with optimal sample complexity for private identity testing of $$d$$-dimensional Gaussians. In more detail, the paper shows that this can be done with a mere $$\widetilde{O}\left( \frac{d^{1/2}}{\alpha^2} + \frac{ d^{1/3} }{ \alpha^{4/3} \cdot \varepsilon^{2/3}} + \frac{1}{\alpha \cdot \varepsilon} \right)$$ samples. Here $$\alpha$$ is the proximity parameter and $$\varepsilon$$ is the privacy parameter. Combined with a previous result of Acharya-Sun-Zhang, the paper proves that private identity testing of $$d$$-dimensional Gaussians is doable with a sample complexity smaller than that of private identity testing of discrete distributions over a domain of size $$d$$, thereby refuting a conjecture of CKMUZ.

Differentially Private All-Pairs Shortest Path Distances: Improved Algorithms and Lower Bounds by Badih Ghazi, Ravi Kumar, Pasin Manurangsi and Jelani Nelson (arXiv) Adam Sealfon considered the classic All Pairs Shortest Path problem (the APSP problem) with privacy considerations in 2016. In the $$(\varepsilon, \delta)$$-DP framework, Sealfon presented an algorithm which, on input an edge-weighted graph $$G=(V,E,w)$$, adds Laplace noise to all edge weights and computes the shortest paths on this noisy graph. The output of the algorithm satisfies that the estimated distance between every pair is within an additive $$\pm O(n \log n/\varepsilon)$$ of the actual distance (the absolute value of this parameter is called the accuracy of the algorithm). Moreover, this error is tight up to a logarithmic factor if the algorithm is required to release the shortest paths. The current paper shows that you can privately release all the pairwise distances while suffering only sublinear accuracy if you release the edge weights in place of the shortest paths. In particular, this paper presents an $$\varepsilon$$-DP algorithm with sublinear $$\widetilde{O}(n^{2/3})$$ accuracy.

Quantum algorithms for estimating quantum entropies by Youle Wang, Benchi Zhao, Xin Wang (arXiv) So, remember our post from December on sublinear quantum algorithms for estimation of quantum (von Neumann) entropy? The current paper begins by noting that the research so far (along the lines of the work above) assumes access to a quantum query model for the input state which we do not yet know how to construct efficiently. This paper addresses this issue and gives quantum algorithms that estimate the von Neumann entropy of an $$n$$-qubit quantum state $$\rho$$ using independent copies of the input state.

Sublinear Algorithms and Lower Bounds for Estimating MST and TSP Cost in General Metrics by Yu Chen, Sanjeev Khanna, Zihan Tan (arXiv) As mentioned in the title, this paper studies sublinear algorithms for the metric MST and the metric TSP problems. The paper obtains a wide assortment of results and shows that both these problems admit an $$\alpha$$-approximation algorithm which uses $$O(n/\alpha)$$ space. This algorithm assumes that the input is given as a stream of $$n \choose 2$$ metric entries. Under this model, the paper also presents an $$\Omega(n/\alpha^2)$$ space lower bound. Let me highlight one more result from the paper. In a previous post (from June 2020), we covered a result detailing a better than $$2$$-approximation for graphic TSP and $$(1,2)$$-TSP which runs in sublinear time. This paper extends this result and obtains a better than $$2$$-approximation for TSP on a relatively richer class of metrics.

# News for November 2021

The holiday season is just around the corner. Best wishes to you and your loved ones in advance from us here at PTReview. This month, we had five papers in all. To prepare for the festive mood associated with the onset of December, also included are brief updates on the recently disproven implicit graph conjecture. Let’s dig in. (And please, do let us know if we missed your paper.)

(EDIT: Thanks to our readers for pointing out a paper we missed.)

Downsampling for Testing and Learning in Product Distributions by Nathaniel Harms, Yuichi Yoshida (arXiv). Contemplating connections among algorithms used for testing a diverse collection of function properties, this paper provides a unified and generalized view of a technique which the authors call downsampling. As the paper explains, the name is motivated by analogy to image/signal processing tasks. Very often, these tasks involve two steps. In the first step, you break the input domain into “grid cells” and use oracle calls to the function to obtain a crude approximation over all these cells. In the second step, you learn this gridded-up or coarsened function cell-by-cell.

This two-step attack could just be what the doctor ordered for your favorite function property testing problem: in particular, it has been put to work for approximately testing visual properties, approximating distance to monotonicity in high dimensions, testing $$k$$-monotone functions, and more. However, if you want to obtain property testers using this approach in the distribution-free setup, your ordeal might be far from over. The unknown distribution your domain is equipped with can mess with the geometric arguments your gridding approach hopes to exploit. This is precisely the setup considered in the paper (i.e., distribution-free testing of function properties).

The essence of downsampling is captured by a slick proposition that prescribes coarsening as your go-to weapon whenever both of the following are small:
1) the fraction of cells on which $$f$$ is not constant, and
2) a measure of the non-uniformity of the unknown distribution $$D$$ your domain is equipped with.
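As a cartoon of the two-step attack (entirely my own, with made-up names like `coarsen`; not code from the paper), here is how one might grid up $$[0,1)^d$$ and record a crude, majority-vote label of a boolean function on each cell:

```python
import itertools
import random

def coarsen(f, d, g, samples_per_cell=20, rng=None):
    """Step 1 of downsampling, cartoon version: split [0,1)^d into g**d
    grid cells and record a crude (majority-vote) label of f on each,
    using a few oracle calls per cell."""
    rng = rng or random.Random(0)
    labels = {}
    for cell in itertools.product(range(g), repeat=d):
        # sample points uniformly inside this cell and take a majority vote
        votes = sum(f([(c + rng.random()) / g for c in cell])
                    for _ in range(samples_per_cell))
        labels[cell] = int(2 * votes >= samples_per_cell)
    return labels
```

Step 2 would then test or learn the coarsened function cell-by-cell; the proposition above controls the loss incurred by this discretization.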

Equipped with this machinery, the paper tackles the task of designing distribution-free testers for boolean monotonicity with the underlying domain being $$\mathbb{R}^d$$. The argument is pretty short and improves upon the sample complexity of the corresponding result in the paper by Black-Chakrabarti-Seshadhri. Do check it out, looks like some nice ammo to add to your toolkit.

Let’s stay on board for sublinear time algorithms for gap-edit distance.

Gap Edit Distance via Non-Adaptive Queries: Simple and Optimal by Elazar Goldenberg, Tomasz Kociumaka, Robert Krauthgamer, Barna Saha (arXiv). The edit distance problem needs no introduction. This paper studies the problem of approximating edit distance in sublinear time. In particular, this is formalized via the gapped version of the problem, which entails the following computational task: fix $$k \in \mathbb{N}$$ and some $$c > 0$$. Given a pair $$x, y$$ of strings over a finite alphabet, decide whether the edit distance between $$x, y$$ is at most $$k$$ or at least $$k^c$$. This paper resolves the non-adaptive query complexity of this problem and proves that the above gapped version can be decided with $$O\left(\frac{n}{k^{c - 1/2}}\right)$$ queries. The paper also proves that this bound is optimal up to polylogarithmic factors.

Next up, we have time-optimal algorithms for maximum matching and vertex cover! Sweet, right?

Time-Optimal Sublinear Algorithms for Matching and Vertex Cover by Soheil Behnezhad (arXiv). This paper gives algorithms for the classic problem of estimating the size of vertex cover and maximum matching in graphs in sublinear time (in both adjacency matrix and adjacency list models). This is another nice read for the holiday season — after all, the paper obtains time-optimal algorithms (up to polylog factors) in both of these models for multiplicative-additive $$(2, \varepsilon n)$$ estimates. Let me set up some historical context to appreciate the maximum matching result better.

In a classic work, Parnas and Ron gave sublinear time algorithms that obtain a $$(2, \varepsilon n)$$ estimate to the size of a maximum matching in a graph using query access to the adjacency list. Their sublinear time algorithm is inspired by ideas with roots in distributed algorithms. In particular, their algorithm returns in time
$$\Delta^{O(\log {\frac{\Delta}{\varepsilon}})}$$ (where $$\Delta = \Delta(G)$$ denotes the degree bound) an estimate $$\text{EST}$$ to the size of the max-matching where $$OPT \leq \text{EST} \leq 2 \cdot OPT + \varepsilon n$$. Unfortunately, algorithms in this Parnas-Ron framework must necessarily suffer a quasi-polynomial running time because of lower bounds from distributed algorithms. Using a randomized greedy approach to estimating the size of a maximum matching, Yoshida, Yamamoto, and Ito gave the first algorithm which returns a $$(2, \varepsilon n)$$-estimate to the maximum matching size in time $$poly(\Delta/\varepsilon)$$. This is where the story essentially froze — though there were results that improved the dependence on $$\Delta$$ to $$O(\Delta^2)$$. Unfortunately, this is not truly sublinear for large $$\Delta$$. In another direction, Kapralov-Mitrovic-Norouzi-Fard-Tardos gave an algorithm that estimates the maximum matching size in time $$O(\Delta)$$ (which is truly sublinear) but it no longer guarantees a $$(2, \varepsilon n)$$-approximation and instead returns an $$(O(1), \varepsilon n)$$-approximation. The current paper, as remarked above, achieves the best of both worlds.

Final stop, updates on the implicit graph conjecture.

The Implicit Graph Conjecture is False by Hamed Hatami, Pooya Hatami (arXiv). While not a property testing paper per se, it is always good to see an old conjecture resolved one way or the other. In this paper, the Hatami brothers (Pooya and Hamed) refute a three-decade-old conjecture — the one mentioned in the title. Here is a brief history. Sampath Kannan, Moni Naor and Steven Rudich defined the notion of implicit representation of graphs. Take a family $$\mathcal{F}$$ of graphs and consider an $$n$$-vertex graph $$G \in \mathcal{F}$$. You say this family admits an efficient implicit representation if there exists an assignment of $$O(\log n)$$-length labels to all the vertices in $$V(G)$$ such that the adjacency between every pair of vertices is a function of the labels of the corresponding pair. Crucially, the labeling function may depend on the family, but not on individual graphs in the family. What is cool about families admitting such an efficient implicit representation is that the number of $$n$$-vertex graphs in such a family cannot be any bigger than $$2^{O(n \log n)}$$ — that is, such families have at most factorial growth rate. The implicit graph conjecture asserts that for every hereditary graph family, the converse also holds. Namely, if the hereditary family has at most factorial growth rate, then the family admits an efficient implicit representation. The key, as shown in this paper (which spans all of six pages!), is to choose as your hereditary graph class the closure of a random collection of graphs. The authors show that now your hand is forced and your representation can no longer be efficient. However, annoyingly, your hereditary graph class need not have a factorial growth rate: the closure of a random collection expands to contain all graphs and has growth rate $$2^{\Omega(n^2)}$$. The cool thing is, you can avoid this issue by choosing a random collection of slightly sparse random graphs (with $$n^{2-\varepsilon}$$ edges).
Interestingly, this gives you enough ammo to finally control the growth rate, which in turn allows the authors to slay this conjecture.
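For intuition about the definition (my example, not the paper's), here is an efficient implicit representation of a standard hereditary family, interval graphs: label every vertex by its interval's endpoints, which takes $$O(\log n)$$ bits when endpoints lie in $$[2n]$$, and decide adjacency from the two labels alone.

```python
def interval_labels(intervals):
    # each vertex's label is just its pair of interval endpoints
    return {v: (lo, hi) for v, (lo, hi) in intervals.items()}

def adjacent(label_u, label_v):
    # in an interval graph, u ~ v iff their intervals intersect;
    # crucially, this decision uses the two labels and nothing else
    (l1, r1), (l2, r2) = label_u, label_v
    return l1 <= r2 and l2 <= r1
```

The labeling rule depends only on the family (interval graphs), not on the individual graph, exactly as the Kannan-Naor-Rudich definition demands.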

Sublinear quantum algorithms for estimating von Neumann entropy by Tom Gur, Min-Hsiu Hsieh, Sathyawageeswar Subramanian (arXiv). This paper presents quantum algorithms for the problem of obtaining multiplicative estimates of the Shannon entropy of familiar classical distributions and the more exotic von Neumann entropy of mixed quantum systems. In particular, the paper presents an $$\widetilde{O}(n^{\frac{1+\eta }{2\gamma^2 }})$$-query quantum algorithm that achieves a $$\gamma$$-multiplicative approximation to the Shannon entropy of an input distribution $$\mathbf{p}$$ supported on a universe of size $$n$$, provided $$H(\mathbf{p}) \geq \gamma/\eta$$. As if this were not already cool enough, the paper also presents sublinear quantum algorithms for estimating the von Neumann entropy as well. This is supplemented by a lower bound of $$\Omega(n^{\frac{1}{3 \gamma^2}})$$ queries for achieving a $$\gamma$$-factor approximation to either of these entropies.

# News for August 2021

This month saw four papers. One on group testing, another on distribution testing, yet another which makes progress on testing problems on decision trees and the last one on graph property testing. Without further ado, let’s dive in.

Group Testing with Non-identical Infection Probabilities by Mustafa Doger, Sennur Ulukus (arXiv) Consider the classic group testing problem. Here the setup is the following. You are given a bunch of individuals from a population $$\mathcal{P}$$. You have an infection vector which records the infection status of each individual in the population, where the $$i$$-th individual is infected with probability $$p_i$$. You want to recover all the infected individuals. You are allowed to group individuals together and you can test the entire group in a single shot. If the group tests negative, you are happy: all the tested individuals are off the hook. Otherwise, if the group tests positive, you need more tests for further classification. This paper proposes a greedy way to build the pools of individuals you test. The pools are built adaptively: that is, future pools are built using the knowledge of how the preceding tests fared. The key result in the paper upper bounds the number of tests performed in terms of the entropy of the infection vector.
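The simplest adaptive pooling strategy, classical binary splitting, illustrates the setup (this sketch is mine and ignores the infection probabilities; the paper's greedy pools are built more carefully using the $$p_i$$'s):

```python
def find_infected(population, is_positive):
    """Adaptively recover all infected individuals via pooled tests.
    is_positive(pool) returns True iff the pool has an infected member."""
    infected = []
    stack = [list(population)]
    while stack:
        pool = stack.pop()
        if not is_positive(pool):
            continue               # one test clears the whole pool
        if len(pool) == 1:
            infected.append(pool[0])
            continue
        mid = len(pool) // 2       # positive pool: split and recurse
        stack += [pool[:mid], pool[mid:]]
    return sorted(infected)
```

When infections are rare, most pools clear in one test, which is the source of the entropy-style savings the paper quantifies.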

Uniformity Testing in the Shuffle Model: Simpler, Better, Faster by Clément L. Canonne, Hongyi Lyu (arXiv) Differentially private distribution testing as a research area has been gathering momentum steadily over the last few years. If you read last month’s post, you might recall there is a wide variety of DP models, each corresponding to a different “threat model”. Among the most explored, the most stringent is the “local model”, the least stringent is the “central model”, and sitting in between is the so-called “shuffle model“. This paper simplifies the analysis of the uniformity testing algorithm under the shuffle model and presents an algorithm with sample complexity $$O(k^{3/4})$$ for testing uniformity over the support $$[k]$$.

On Learning and Testing Decision Tree by Nader H. Bshouty, Catherine A. Haddad-Zaknoon (arXiv) In our December 2020 post, we covered a result of Blanc et al., which proves the following: Suppose you are given a boolean function $$f$$ and the property $$\mathcal{P}$$ of size-$$s$$ decision trees. The result of Blanc et al gives you a function $$g$$ with $$dist(f,g) = O(dist (f, \mathcal{P}))$$, where $$g$$ is guaranteed to have decision tree complexity $$s^{O(\log^2 s)}$$. This result implies a bi-criteria tester which distinguishes whether $$f \in \mathcal{P}$$ or $$f$$ is $$\varepsilon$$-far from having decision tree complexity $$\phi(s) = s^{O(\log^3 s)}$$. The current paper improves this result by presenting a property tester with $$\phi(s) = s^{O(\log^2 s)}$$.

The complexity of testing all properties of planar graphs, and the role of isomorphism by Sabyasachi Basu, Akash Kumar, C. Seshadhri (arXiv) (Disclaimer: I am one of the authors of this paper). This paper presents a result that I, in my biased opinion, find interesting. So, here is the setup. You are given a bounded degree planar graph. And I cook up some God-forsaken property and ask you to test it. Turns out, no matter how devilishly I cooked up the property, you can test it with $$\exp(O(\varepsilon^{-2}))$$ queries. The nice happenstance is that you also have a matching lower bound of $$\exp(\Omega(\varepsilon^{-2}))$$ queries! And interestingly, this lower bound is witnessed by the very natural property of testing isomorphism to a fixed graph, which means that isomorphism is the hardest property of planar graphs.

# News for May 2021

We hope you are all staying safe. With massive vaccination programs across the globe we hope you and your loved ones are getting back to what used to be normal. With that out of the way, let us circle back to Property Testing. This month was less sleepy as compared to the two preceding months and we saw six papers in total (two of them explore problems in quantum property testing). Without further ado, let us take a deeper dive.

GSF-locality is not sufficient for proximity-oblivious testing, by Isolde Adler, Noleen Kohler, Pan Peng (arXiv) The notion of proximity oblivious testers was made explicit in the seminal work of Goldreich and Ron in 2009 [GR09]. A proximity oblivious tester for a graph property is a constant query tester that rejects a graph with probability that monotonically increases with its distance to the property. (Edit: Correction) A property is called proximity oblivious testable (or PO testable) if it has a one-sided proximity oblivious tester. [GR09] characterized the PO testable properties in the bounded degree model: a property $$\Pi$$ is PO testable if and only if it is a “local” property of a certain kind which satisfies a certain non-propagation condition. [GR09] conjectured that all such “local” properties satisfy this non-propagation condition. This paper refutes that conjecture.

Coming up next. More action on triangle freeness.

Testing Triangle Freeness in the General Model in Graphs with Arboricity $$O(\sqrt n)$$, by Reut Levi (arXiv) PTReview readers are likely aware that triangle freeness has been a rich source of problems for developing new sublinear time algorithms. This paper considers the classic problem of testing triangle freeness in general graphs. In the dense case, algorithms with running time depending only on $$\varepsilon$$ are known thanks to the work of Alon, Fischer, Krivelevich and Szegedy. In the bounded degree case, Goldreich and Ron gave testers with query complexity $$O(1/\varepsilon)$$. This paper explores the general graph case and proves an upper bound of $$O(\Gamma/d_{avg} + \Gamma)$$ where $$\Gamma$$ is the arboricity of the graph. The author also shows that this upper bound is tight for graphs with arboricity at most $$O(\sqrt n)$$. Curiously enough, the algorithm does not take the arboricity of the graph as input, and yet $$\Gamma$$ (the arboricity) shows up in both the upper and lower bounds.

Testing Dynamic Environments: Back to Basics, by Yonatan Nakar and Dana Ron (arXiv) Goldreich and Ron introduced the problem of testing “dynamic environments” in 2014. Here is the setup for this problem. You are given an environment that evolves according to a local rule. Your goal is to query some of the states in the system at some points in time and determine if the system is evolving according to some fixed rule or is far from it. In this paper, the authors consider environments defined by elementary cellular automata which evolve according to threshold rules, as one of the first steps towards understanding what makes a dynamic environment testable efficiently. The main result proves the following: if your local rules satisfy some conditions, you can use a meta-algorithm with query complexity $$poly(1/\varepsilon)$$ which is non-adaptive and has one-sided error. And all the threshold rules indeed satisfy these conditions, which means they can be tested efficiently.

Identity testing under label mismatch, by Clement Canonne and Karl Wimmer (arXiv) This paper considers a classic problem in distribution testing with the following twist. Let $$q$$ denote a distribution supported on $$[n]$$. You are given access to samples from another distribution $$p$$ where $$p = q \circ \pi$$ for some unknown permutation $$\pi$$. Thus, I relabel the data and give you access to samples from the relabeled dataset. Under this promise, note that identity testing becomes a trivial problem if $$q$$ is known to be uniform over $$[n]$$ (the uniform distribution is invariant under relabeling). The authors develop algorithms for testing and tolerant testing of distributions under this additional promise of $$p$$ being a permutation of some known distribution $$q$$. The main result shows an exponential gap between the sample complexities of testing and tolerant testing under this promise. In particular, identity testing under the permutation promise has sample complexity $$\Theta(\log^2 n)$$ whereas tolerant identity testing under this promise has sample complexity $$\Theta(n^{1-o(1)})$$.
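To see why relabeling changes the game: under the permutation promise, the only statistic of the samples that matters is the multiset of empirical frequencies, since any labeling information is destroyed by $$\pi$$. A toy helper (mine, not the paper's actual tester) makes this invariance concrete:

```python
from collections import Counter

def sorted_profile(samples):
    """The sorted vector of empirical frequencies -- the only statistic
    of the samples that survives an unknown relabeling of the domain."""
    n = len(samples)
    return sorted(c / n for c in Counter(samples).values())
```

A naive tester would compare this profile against the sorted probabilities of $$q$$; the point of the paper is that the non-tolerant version of this comparison can be done with only $$\Theta(\log^2 n)$$ samples.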

Testing symmetry on quantum computers, by Margarite L. LaBorde and Mark M. Wilde (arXiv) This paper develops algorithms which test symmetries of quantum states and of channels generated by quantum circuits. These tests additionally quantify how symmetric these states (or channels) are. For testing what are called “Bose states”, the paper presents efficient algorithms. The tests for other kinds of symmetry presented in the paper rely on some aid from a quantum prover.

Quantum proofs of proximity, by Marcel Dall’Agnol, Tom Gur, Subhayan Roy Moulik, Justin Thaler (ECCC) The sublinear time (quantum) computation model has been gathering momentum steadily over the past several years. This paper seeks to understand the power of $${\sf QMA}$$ proofs of proximity for property testing (recall $${\sf QMA}$$ is the quantum analogue of $${\sf NP}$$). On the algorithmic front, the paper develops sufficient conditions for properties to admit efficient $${\sf QMA}$$ proofs of proximity. On the complexity front, the paper demonstrates a property which admits an efficient $${\sf QMA}$$ proof but does not admit an $${\sf MA}$$ or an interactive proof of proximity.

# News for February 2021

We got quite some action last month. We saw five papers. A lot of action in graph world and some action in quantum property testing which we hope you will find appetizing. Also included is a result on sampling uniformly random graphlets.

Testing Hamiltonicity (and other problems) in Minor-Free Graphs, by Reut Levi and Nadav Shoshan (arXiv). Graph Property Testing has been explored pretty well for dense graphs (and reasonably well for bounded degree graphs). However, testing properties in the general case still remains an elusive goal. This paper makes contributions in this direction and as a first result it gives an algorithm for testing Hamiltonicity in minor free graphs (with two sided error) with running time $$poly(1/\varepsilon)$$. Let me begin by pointing out that Hamiltonicity is an irksome property to test in the following senses.

• It is neither monotone nor additive. So the partition oracle based algorithms do not immediately imply a tester (with running time depending only on $$\varepsilon$$) for Hamiltonicity. This annoyance bugs you even in the bounded degree case.
• Czumaj and Sohler characterized what graph properties are testable with one-sided error in general planar graphs. In particular, they show a property of general planar graphs is testable iff this property can be reduced to testing for a finite family of finite forbidden subgraphs. Again, Hamiltonicity does not budge to this result.
• There are (concurrent) results by Goldreich and Adler-Kohler which show that with one-sided error, Hamiltonicity cannot be tested with $$o(n)$$ queries.

The paper shows that the distance to Hamiltonicity can be exactly captured in terms of a certain combinatorial parameter. Thereafter, the paper estimates this parameter after cleaning up the graph a little. This allows them to estimate the distance to Hamiltonicity and thus also implies a tolerant tester (restricted to minor-free graphs).

Testing properties of signed graphs, by Florian Adriaens, Simon Apers (arXiv). Suppose I give you a graph $$G=(V,E)$$ where every edge comes with a label, either “positive” or “negative”. Such signed graphs are used to model various scientific phenomena. For example, you can use them to classify interactions between individuals in social networks into two categories, such as friendly or antagonistic.

This paper considers property testing problems on signed graphs. The notion of farness from the property extends naturally to these graphs (both in the dense graph model and the bounded degree model). The paper explores three problems in both of these models: signed triangle freeness, balance, and clusterability. Below I will zoom into the tester for clusterability in the bounded degree setting developed in the paper. A signed graph is considered clusterable if you can partition the vertex set into some number of components such that the edges within any component are all positive and the edges running across components are all negative.

The paper exploits a forbidden subgraph characterization of clusterability which shows that any cycle with exactly one negative edge is a certificate of non-clusterability of $$G$$. The tester runs multiple random walks from a handful of start vertices to search for these “bad cycles”, building on ideas in the seminal work of Goldreich and Ron for testing bipartiteness. The authors put all of these ideas together and give a $$\widetilde{O}(\sqrt n)$$ time one-sided tester for clusterability in signed graphs.
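An exact, whole-graph version of this characterization is easy to state with union-find, which may help parse what the sublinear tester is hunting for (this sketch is mine, not from the paper): a bad cycle exists precisely when some negative edge joins two vertices already connected by positive edges.

```python
def clusterable(n, edges):
    """edges: list of (u, v, sign) with sign +1 or -1. A signed graph is
    clusterable iff no negative edge lies inside a component of the
    positive subgraph (else a positive path plus that edge is a cycle
    with exactly one negative edge)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v, s in edges:
        if s > 0:
            parent[find(u)] = find(v)       # union along positive edges
    return all(find(u) != find(v) for u, v, s in edges if s < 0)
```

The tester, of course, cannot afford to read the whole graph; it looks for a bad cycle locally via random walks.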

Local Access to Random Walks, by Amartya Shankha Biswas, Edward Pyne, Ronitt Rubinfeld (arXiv). Suppose I give you a gigantic graph (with bounded degree) which does not fit in your main memory, and I want you to solve some computational problem which requires you to perform longish random walks of length $$t$$. And lots of them. It would be convenient to not spend $$\Omega(t)$$ units of time performing every single walk. Perhaps it would work just as well for you to have an oracle which provides query access to a $$Position(G,s,t)$$ primitive which returns the position of a walk from $$s$$ at a time $$t$$ of your choice. Of course, you would want the sequence of vertices returned to behave consistently with some actual random walk sampled from the distribution of random walks starting at $$s$$. Question is: can I build you this primitive? This paper answers this question in the affirmative and shows that for graphs with spectral gap $$\Delta$$, this can be achieved with running time $$\widetilde{O}(\sqrt n/\Delta)$$ per query. And you get the guarantee that the joint distribution of the vertices you return at queried times is $$1/poly(n)$$-close in $$\ell_1$$ to the true distribution over such walks. Thus, for a random $$d$$-regular graph, you get running times of the order $$\widetilde{O}(\sqrt n)$$ per query. The authors also show tightness of this result: to get subconstant error in $$\ell_1$$, you necessarily need $$\Omega(\sqrt n/\log n)$$ queries in expectation.
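The consistency requirement becomes concrete with the naive baseline below (my sketch, not the paper's $$\widetilde{O}(\sqrt n/\Delta)$$ data structure): lazily extend one walk and memoize it, so repeated queries never contradict each other, at the cost of $$O(t)$$ work the first time position $$t$$ is requested.

```python
import random

class WalkOracle:
    """Naive local-access oracle: lazily materialize a single random
    walk from s and answer Position queries from the memoized prefix.
    Answers are mutually consistent, but a fresh query at time t may
    cost O(t) work -- exactly what the paper's structure avoids."""
    def __init__(self, adj, s, seed=0):
        self.adj = adj
        self.walk = [s]
        self.rng = random.Random(seed)
    def position(self, t):
        while len(self.walk) <= t:
            self.walk.append(self.rng.choice(self.adj[self.walk[-1]]))
        return self.walk[t]
```

The paper's contribution is answering such queries out of order in $$\widetilde{O}(\sqrt n/\Delta)$$ time each while keeping the joint distribution of answers close to a genuine walk.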

Efficient and near-optimal algorithms for sampling connected subgraphs, by Marco Bressan (arXiv). As the title suggests, this paper considers efficient algorithms for sampling a uniformly random $$k$$-graphlet from a given graph $$G$$ (for $$k \geq 3$$). Recall, a $$k$$-graphlet refers to a collection of $$k$$ vertices which induces a connected subgraph of $$G$$. The algorithm considered in the paper is pretty simple. You just define a Markov Chain $$\mathcal{G}_k$$ with all $$k$$-graphlets as its state space. Two states in $$\mathcal{G}_k$$ are adjacent iff their intersection is a $$(k-1)$$-graphlet. To obtain a uniformly random sample, a classical idea is to just run this Markov Chain and obtain an $$\varepsilon$$-uniform sample. However, the gap between upper and lower bounds on the mixing time of this walk is of the order $$\rho^{k-1}$$, where $$\rho = \Delta/\delta$$ is the ratio of the maximum and minimum degrees. The paper closes this gap up to logarithmic factors and shows that the mixing time of the walk is at most $$t_{mix}(G) \rho^{k-1} \log(n/\varepsilon)$$. It also proves an almost matching lower bound. Further, the paper presents an algorithm with even better running time to return an almost uniform $$k$$-graphlet. This exploits a previous observation: sampling a uniformly random $$k$$-graphlet is equivalent to sampling a uniformly random edge in $$\mathcal{G}_{k-1}$$. The paper then proves a lemma which upper bounds the relaxation time of walks in $$\mathcal{G}_k$$ in terms of that of walks in $$\mathcal{G}_{k-1}$$. And then you upper bound the mixing time in terms of the relaxation time to get an improved expected running time of the order $$O(t_{mix}(G) \cdot \rho^{k-2} \cdot \log(n/\varepsilon))$$.
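One plausible implementation of a single transition of $$\mathcal{G}_k$$ (my sketch of the lazy-walk idea, with made-up names; the paper's chain is the same in spirit): drop a random vertex of the current graphlet, add a random outside neighbor of the remainder, and stay put if the move would disconnect the induced subgraph.

```python
import random
from collections import deque

def induces_connected(vertices, adj):
    # BFS check: do `vertices` induce a connected subgraph?
    vs = set(vertices)
    start = next(iter(vs))
    seen, q = {start}, deque([start])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w in vs and w not in seen:
                seen.add(w)
                q.append(w)
    return seen == vs

def graphlet_step(S, adj, rng):
    """One lazy step of a walk on k-graphlets: swap out one vertex for a
    neighbor of the remaining k-1, rejecting disconnecting moves."""
    S = sorted(S)
    out = rng.choice(S)
    rest = [v for v in S if v != out]
    candidates = sorted({w for v in rest for w in adj[v]} - set(S))
    if not candidates:
        return frozenset(S)
    new = rest + [rng.choice(candidates)]
    return frozenset(new) if induces_connected(new, adj) else frozenset(S)
```

Running many such steps from an arbitrary starting graphlet and outputting the final state is the classical MCMC sampler whose mixing time the paper pins down.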

Toward Instance-Optimal State Certification With Incoherent Measurements, by Sitan Chen, Jerry Li, Ryan O’Donnell (arXiv). The problem of quantum state certification has gathered interest over the last few years. Here is the setup: you are given a quantum state $$\sigma \in \mathbb{C}^{d \times d}$$ and you are also given $$N$$ copies of an unknown state $$\rho$$. You want to distinguish between the following two cases: does $$\rho = \sigma$$, or is $$\sigma$$ at least $$\varepsilon$$-far from $$\rho$$ in trace norm? Badescu et al showed in a recent work that if entangled measurements are allowed, you can do this with a mere $$O(d/\varepsilon^2)$$ copies of $$\rho$$. But using entangled measurements comes with its own share of problems. On the other hand, if you disallow entanglement, as Bubeck et al show, you need $$\Omega(d^{3/2}/\varepsilon^2)$$ copies. This paper asks: for which states $$\sigma$$ can you improve upon this bound? The work takes inspiration from “instance optimal” bounds for identity testing. The authors show a fairly general result which (yet again) confirms that the quantum world is indeed weird. In particular, the main result of the paper implies that the copy complexity of (the quantum analog of) identity testing in the quantum world (with non-adaptive queries) grows as $$\Theta(d^{1/2}/\varepsilon^2)$$. That is, the number of quantum measurements you need increases with $$d$$ (which is the stark opposite of the behavior you get in the classical world).

# News for November 2020

Highlights in property testing this month include developments across the board. We have a nice potpourri of results which will catch your fancy: from a result which shows the tightness of the quadratic gap between query complexities of adaptive and non-adaptive testers for dense graph properties, to isoperimetric Talagrand-like inequalities for real-valued functions over the hypercube, to results which consider a face-off between quantum computing and distribution testing, and more. Looks like the festive season came early for PTReview readers.

Non-adaptive vs Adaptive Queries in the Dense Graph Testing Model (by Oded Goldreich and Avi Wigderson) (ECCC) It has long been established in the dense graph property testing literature that the query complexities of adaptive testers and non-adaptive testers are only a quadratic factor apart. This work asks whether this gap is necessarily tight, and answers in the affirmative. The main conceptual tools used in the paper are the notion of robustly self-ordered graphs and local self-ordering procedures for these graphs (which were developed by the same authors and covered in our September post). These tools are used to port lower bounds for property testing problems on bit strings to graph properties.

Direct Sum and Partitionability Testing over General Groups (by Andrej Bogdanov and Gautam Prakriya) (ECCC) In their celebrated work, Blum, Luby and Rubinfeld gave a four query test to determine whether a function $$f \colon \mathbb{F}_2^n \to \mathbb{F}_2$$ is affine. This paper considers a generalization of the notion of affinity to functions $$f \colon \{0,1\}^n \to G$$ where $$G$$ is an Abelian group. The generalization sought asks: does $$f$$ have the form $$f = \sum x_ig_i + g_0$$ for group elements $$g_0, g_1, \cdots, g_n \in G$$? In this setting, the BLR analysis does not apply as the domain and range are not equipped with the same group structure. This work presents a four query test for the above problem. In fact, the test is more general and can be used to decide whether a function $$f \colon D_1 \times D_2 \ldots \times D_n \to G$$ from a product domain $$D_1 \times D_2 \ldots \times D_n$$ to an Abelian group $$G$$ is a direct sum, i.e., has the form $$f(x_1, x_2, \cdots, x_n) = \sum f_i(x_i)$$, which resolves a conjecture of Dinur and Golubev. The work also presents results for testing whether a function $$f$$ defined from a product domain to an Abelian group $$G$$ is a direct product.
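For reference, the classical four-query affinity test over $$\mathbb{F}_2$$ that this work generalizes checks $$f(x) \oplus f(y) \oplus f(z) = f(x \oplus y \oplus z)$$ on random triples (the constant term cancels because it appears an even number of times on each side). The sketch below is that classical test in my own words, not the paper's group-valued one:

```python
import random

def affinity_test(f, n, trials=50, rng=None):
    """Four-query affinity test over F_2: an affine f always passes;
    an f that is far from affine fails some trial with constant
    probability, so `trials` repetitions reject it w.h.p."""
    rng = rng or random.Random(0)
    for _ in range(trials):
        x, y, z = (tuple(rng.randint(0, 1) for _ in range(n))
                   for _ in range(3))
        w = tuple(a ^ b ^ c for a, b, c in zip(x, y, z))
        if f(x) ^ f(y) ^ f(z) != f(w):
            return False
    return True
```

The challenge the paper tackles is that when the range is a general Abelian group $$G$$, the XOR trick above has no direct analogue, so a new four-query test and analysis are needed.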

Isoperimetric Inequalities for Real-Valued Functions with Applications to Monotonicity Testing (by Hadley Black, Iden Kalemaj and Sofya Raskhodnikova) (ArXiv) This paper generalizes the celebrated isoperimetric inequalities of boolean functions over the hypercube to the setting of real valued functions. This generalized isoperimetry is then put to work to obtain a monotonicity tester for $$f \colon \{0,1\}^n \to \mathbb{R}$$. The tester makes $$\min(\tilde{O}(r \sqrt n), O(n))$$ non-adaptive queries (where $$r$$ is the size of the image of $$f$$) and has one-sided error. The authors generalize the KMS inequality via a Boolean decomposition theorem which allows them to represent any real valued function over the cube as a collection of Boolean functions over the cube which capture $$f$$’s distance to monotonicity as well as the structure of violations of monotonicity in $$f$$.

Expander Random Walks: A Fourier Analytic Approach (by Gil Cohen, Noam Peri and Amnon Ta-Shma) (ECCC) While this is not exactly a property testing paper, how can you not report a paper which proves a significant strengthening of the classic CLT for Markov chains (alright, for random walks on expanders) with respect to the TV distance? Let us now unpack the above a little. So, consider the following setup: suppose you have a $$d$$-regular expander $$G$$. Now, imagine running the following process $$\textsf{(P)}$$:

1) Label half the vertices in $$G$$ as $$+1$$ and the other half as $$-1$$ arbitrarily.
2) Take a $$t-1$$ step random walk which involves $$t$$ vertices: say $$v_1, v_2, \ldots v_t$$.
3) Finally, return the boolean label of all of the visited vertices. This gives a bit-vector $$x \in \{-1,1\}^t$$.
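Simulating the process $$\textsf{(P)}$$ is straightforward; here is a small sketch (mine; taking the start vertex uniformly at random, which is one natural reading of the setup):

```python
import random

def walk_labels(adj, labels, t, rng=None):
    """Process (P): pick a uniform start vertex, take a (t-1)-step
    random walk on the regular graph given by adj, and return the
    +/-1 labels of the t visited vertices."""
    rng = rng or random.Random(0)
    v = rng.choice(sorted(adj))
    out = [labels[v]]
    for _ in range(t - 1):
        v = rng.choice(adj[v])
        out.append(labels[v])
    return out
```

One then compares the distribution of this label vector against $$t$$ independent uniform $$\pm 1$$ bits, which is exactly the comparison behind the discrepancy bounds below.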

Now, let us ask: Which class of functions get fooled by this process?

The main result of the paper shows that this process fools

1) Symmetric functions
2) The class $$AC^0$$
3) Functions $$f$$ which are computable by low width read once branching programs.

In more detail, let $$f$$ be a symmetric function and let $$G$$ be a $$\lambda$$-spectral expander. For the process $$\textsf{(P)}$$ defined above, it follows from the spectral expansion of $$G$$ that $$discrepancy(f) = |E_{x \sim \textsf{(P)}} [f(x)] - E_{x \sim U_t} [f(x)]|$$ is small.

Now note that the total variation distance is precisely equal to the maximum discrepancy you get over all symmetric functions $$f$$. This leads to the connection that allows the authors to upgrade the CLT for expander random walks: they show a CLT for these Markov chains with respect to the TV distance (as opposed to the CLT with respect to the Kolmogorov distance, which is what was known from the work of Kipnis and Varadhan from 1986).

New Sublinear Algorithms and Lower Bounds for LIS Estimation (by Ilan Newman and Nithin Varma) (ArXiv) As the title suggests, this paper considers the task of estimating the length of the longest increasing subsequence in an array. In particular, it gives a non-adaptive algorithm which estimates the LIS length within an additive error of $$\pm \varepsilon n$$ in $$\tilde{O}\left(\frac{r}{\varepsilon^3}\right)$$ queries (where $$r$$ is the number of distinct elements in $$A$$). On the lower bound side, the paper presents a lower bound of $$(\log n)^{\Omega(1/\varepsilon)}$$ non-adaptive queries. The lower bound construction uses ideas from the lower bound of Ben-Eliezer, Canonne, Letzter and Waingarten on testing monotone pattern freeness.
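For context, the exact quantity being approximated has a classical $$O(n \log n)$$ algorithm (patience sorting), sketched below; the sublinear algorithm trades this exactness for an additive $$\pm \varepsilon n$$ guarantee with $$\tilde{O}(r/\varepsilon^3)$$ queries.

```python
from bisect import bisect_left

def lis_length(a):
    """Classical O(n log n) exact LIS length (strictly increasing):
    tails[i] holds the smallest possible tail value of an increasing
    subsequence of length i+1 seen so far."""
    tails = []
    for x in a:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # extend the longest subsequence
        else:
            tails[i] = x      # found a smaller tail for length i+1
    return len(tails)
```
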

Stability and testability: equations in permutations (by Oren Becker, Alexander Lubotzky, and Jonathan Mosheiff) (ArXiv) Consider the following question. Suppose I give you two permutations $$A, B \in Sym(n)$$ and I want to decide whether these permutations commute or whether they are far from every pair of permutations which commute. Can this question be answered in time independent of $$n$$? The authors answer this question in the affirmative. They show a simple Sample and Substitute procedure which essentially does the following: take samples $$x_1, x_2, \ldots, x_k \sim [n]$$ and accept iff $$ABx_i = BAx_i$$ for each $$i$$. The authors call this particular set of equations/constraints (checking whether $$AB = BA$$) stable, as it admits a Sample and Substitute style tester with query complexity independent of $$n$$. This allows the authors to consider the group theoretic notion of stability as a property testing concept and to examine stability from a computational standpoint.
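The Sample and Substitute tester described above is simple enough to write down in full. A minimal sketch (the function name and the representation of permutations as lists are mine):

```python
import random

def sample_and_substitute(A, B, k, seed=0):
    """Sample-and-Substitute tester for the equation AB = BA.

    A, B: permutations of [n] given as lists (A[i] is the image of i).
    Sample k points x ~ [n] and accept iff A(B(x)) == B(A(x)) for
    all of them. The query complexity depends only on k, not on n.
    """
    rng = random.Random(seed)
    n = len(A)
    for _ in range(k):
        x = rng.randrange(n)
        if A[B[x]] != B[A[x]]:
            return False  # witness that A and B disagree at x
    return True

# Two commuting permutations: a cyclic shift and its square.
n = 12
A = [(i + 1) % n for i in range(n)]  # shift by 1
B = [(i + 2) % n for i in range(n)]  # shift by 2; commutes with A
assert sample_and_substitute(A, B, k=20)
```

Stability of the equation is exactly what justifies this tester: a pair that passes with high probability must be close to a genuinely commuting pair.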

StoqMA meets distribution testing (by Yupan Liu) (ArXiv) This paper shows a containment result: a certain complexity class $$\textsf{eStoqMA}$$ is shown to coincide with $$\textsf{MA}$$. Let us informally define these classes. A language $$L \in \textsf{StoqMA}$$ if there exists a verifier (a reversible quantum circuit) such that for every $$x \in L$$ there is a witness quantum state which makes the verifier accept with probability at least $$2/3$$, while for every $$x \not\in L$$, every quantum state makes the verifier reject with probability at least $$2/3$$. The paper characterizes this class from a distribution testing point of view, and this connection allows the author to conclude that $$\textsf{eStoqMA} = \textsf{MA}$$. Here, $$\textsf{eStoqMA}$$ is the subclass of $$\textsf{StoqMA}$$ in which all YES instances have an easy witness.

(Edit: Added later) We missed the following two papers. Thanks to our readers for pointing it out.

Near-Optimal Learning of Tree-Structured Distributions by Chow-Liu (by Arnab Bhattacharyya, Sutanu Gayen, Eric Price and N.V. Vinodchandran) (ArXiv) Graphical models are a convenient way to describe high dimensional data in terms of its dependence structure. An important question in this area concerns learning/inferring the underlying graphical model from i.i.d. samples. This paper considers the problem when the underlying graph is in fact a tree; thus, the challenge is to learn a tree-structured distribution. The main result is that given sample access to a tree-structured distribution, the classical Chow-Liu algorithm recovers an $$\varepsilon$$-approximate tree with good probability from $$O(n/\varepsilon)$$ samples.
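The estimator analyzed here is the classical Chow-Liu recipe: compute empirical pairwise mutual informations and take a maximum-weight spanning tree. Below is a minimal sketch for binary variables, with a pure-Python Kruskal MST; this illustrates the algorithm only, not the paper's finite-sample analysis.

```python
import math
import random
from itertools import combinations

def mutual_information(samples, i, j):
    """Empirical mutual information (in nats) between coordinates i and j."""
    n = len(samples)
    pij, pi, pj = {}, {}, {}
    for s in samples:
        a, b = s[i], s[j]
        pij[(a, b)] = pij.get((a, b), 0) + 1
        pi[a] = pi.get(a, 0) + 1
        pj[b] = pj.get(b, 0) + 1
    return sum((c / n) * math.log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

def chow_liu_tree(samples, dim):
    """Edges of a maximum-weight spanning tree under empirical mutual
    information (Kruskal with union-find)."""
    weights = sorted(((mutual_information(samples, i, j), i, j)
                      for i, j in combinations(range(dim), 2)), reverse=True)
    parent = list(range(dim))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    edges = []
    for _, i, j in weights:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
    return edges

# Sample from a binary Markov chain X0 -> X1 -> X2 (a path-shaped tree).
rng = random.Random(1)
def sample_path():
    x0 = rng.randrange(2)
    x1 = x0 if rng.random() < 0.9 else 1 - x0
    x2 = x1 if rng.random() < 0.9 else 1 - x1
    return (x0, x1, x2)

samples = [sample_path() for _ in range(5000)]
edges = chow_liu_tree(samples, 3)
```

With enough samples the empirical mutual informations order correctly, and the recovered tree is the true path 0 - 1 - 2.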

Relaxed Locally Correctable Codes with Improved Parameters (by Vahid R. Asadi and Igor Shinkar) (ECCC) In their work on Robust PCPs of Proximity, Ben-Sasson et al. asked whether it is possible to obtain a $$q$$-query relaxed locally decodable code whose block length is strictly smaller than the best known bounds on block lengths for locally decodable codes. Recall that with relaxed LDCs, the decoder is allowed to abort (instead of outputting the $$i^{th}$$ bit of the message) when it detects that the received word is not a valid codeword. This paper makes progress on this problem and shows that you can construct $$O(q)$$-query relaxed LDCs which encode a message of length $$k$$ into a codeword of block length $$O(k^{1 + 1/q})$$. This matches some of the known lower bounds for constant query LDCs, which thus makes progress towards understanding the gap between LDCs and relaxed LDCs in the regime $$q = O(1)$$.

# News for August 2020

Last month saw action in property testing across the board: graphs, distributions and functions were all objects of study in papers that came out last month. Also included is a separation result between quantum and classical query complexities resolving a conjecture of Aaronson and Ambainis. Here is a brief report.

Testing asymmetry in bounded degree graphs, by Oded Goldreich (ECCC). This paper studies a natural graph property hitherto not considered in the property testing literature: namely, testing whether a graph is asymmetric or far from being asymmetric. A graph is said to be asymmetric if its automorphism group is trivial (that is, it only contains the identity permutation). A side result of the paper is that this problem is easy in the dense graph model, because all dense graphs are $$O(\log n/n)$$-close to being asymmetric. To see this, the paper points out a simple randomized process which takes $$G$$ as input and returns an asymmetric graph after changing very few edges. The process does the following: take a set $$S \subseteq V$$ with $$|S| = O(\log n)$$ nodes, replace the subgraph it induces with a random graph, and also randomize all the edges between $$S$$ and $$V \setminus S$$. One can show that in this modified graph, any automorphism (whp) maps $$S$$ to itself, and every vertex in $$V \setminus S$$ (whp) has a neighborhood pattern in $$S$$ that is unique to it. In particular, no automorphism can map a vertex $$v \in V \setminus S$$ to any other vertex of $$V \setminus S$$, which finishes the argument. The main result explores the bounded degree model. By a simple argument, you can show that testing asymmetry is easy if all the connected components have size $$s(n) \leq o\left(\frac{\log n}{\log {\log n}}\right)$$. Thus, the challenging case is when some connected components are larger. In this case, Goldreich shows the following: if all the components have size at most $$s(n)$$ where $$s(n) \geq \Omega\left(\frac{\log n}{\log {\log n}}\right)$$, then you can test asymmetry (with one-sided error) in $$O(n^{1/2} \cdot s(n)/\epsilon)$$ queries.
Moreover, the paper also shows a lower bound of $$\Omega\left((n/s(n))^{1/2}\right)$$ queries for two-sided error testers, which holds as long as $$\epsilon \leq O(1/s(n))$$. As the paper also points out, this leaves open the question of determining the query complexity of testing asymmetry in the bounded degree model in the general case.
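The randomized asymmetrization process for dense graphs sketched above is easy to write down. Here is a minimal sketch on adjacency matrices; the constant in $$|S| = O(\log n)$$ is an arbitrary illustrative choice, not the paper's.

```python
import math
import random

def asymmetrize(adj, seed=0):
    """Randomize a small seed set S, as in the process sketched above:
    pick |S| = O(log n) vertices, then replace the subgraph induced on S
    and all edges between S and V \\ S with random edges.

    adj: an n x n symmetric 0/1 adjacency matrix (list of lists).
    Returns the modified matrix and the set S.
    """
    rng = random.Random(seed)
    n = len(adj)
    s = max(1, int(2 * math.log2(n)))  # |S| = O(log n); constant 2 is illustrative
    S = set(rng.sample(range(n), s))
    new = [row[:] for row in adj]
    for u in range(n):
        for v in range(u + 1, n):
            if u in S or v in S:  # covers both S x S and S x (V \ S) pairs
                bit = rng.randrange(2)
                new[u][v] = new[v][u] = bit
    return new, S

n = 64
G = [[0] * n for _ in range(n)]  # start from the empty graph for illustration
H, S = asymmetrize(G, seed=1)
```

Only $$O(n \log n)$$ of the $$\binom{n}{2}$$ pairs are touched, which is where the $$O(\log n / n)$$-closeness claim comes from.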

On testability of first order properties in bounded degree graphs, by Isolde Adler, Noleen Köhler and Pan Peng (arXiv). One of the well understood motivations for property testing begins in the following manner. Decision problems are typically hard to solve if they involve quantifiers, $$\exists$$ or $$\forall$$. One way around this hardness is to perform the following ritual: relax the problem, define a notion of distance between objects in your universe, and ask a promise problem instead. Indeed, if you take your favorite property testing problem, you will note that it fits precisely in this template. How about making this business more rigorous in the bounded degree graph model? This is precisely the content of this work, which considers the face-off between (first order) logic and property testing in the bounded degree model. The authors show some interesting results. They begin by showing that in the bounded degree model, all first order graph properties which involve a single quantifier $$Q \in \{\forall, \exists\}$$ are testable with constant query complexity.
If you find this baffling, it would be good to remind yourself that not all graph properties can be expressed in the language of first order logic with a single quantifier! So, you can rest easy: the graph properties you know are not constant time testable are most assuredly not expressible with a single quantifier in first order logic. However, this work shows more. It turns out that “any FO property that is defined by a formula with quantifier prefix $$\exists^* \forall^*$$ is testable”. Moreover, there do exist FO graph properties defined by the quantifier prefix $$\forall^* \exists^*$$ which are not testable. Thus, this work achieves results in the bounded degree model which are analogous to the results of Alon et al. [1] in the dense graph model. On a final note, I find the following one sentence summary of the techniques used to prove the lower bound rather intriguing: “[the paper] obtains the lower bound by a first-order formula that defines a class of bounded-degree expanders, based on zig-zag products of graphs.”

On graphs of bounded degree that are far from being Hamiltonian, by Isolde Adler and Noleen Köhler (arXiv). This paper explores the question of testing Hamiltonicity in the bounded degree model. The main result of the paper is that Hamiltonicity is not testable with one-sided error with $$o(n)$$ queries. PTReview readers might recall from our July post a concurrent paper by Goldreich [2] which achieves the same lower bound on query complexity in the two-sided error model (the authors call attention to [2] as well). One of the interesting features of this result is that the lower bound is obtained by an explicit deterministic reduction, as opposed to the usual randomized reduction. As the authors point out, this offers more insight into the structural complexity of instances that are far from being Hamiltonian. We note that this also differs from how the lower bound is derived in [2], which goes via local hardness reductions from a promise problem of 3-CNF satisfiability.

An optimal tester for $$k$$-linear, by Nader Bshouty (ECCC). This paper explores two related questions. We call a function $$f \colon \{0,1\}^n \to \{0,1\}$$ $$k$$-linear if it equals $$\sum_{i \in S} x_i \pmod 2$$ for some $$S \subseteq [n]$$ of size exactly $$k$$. A boolean function is said to be $$k$$-linear* if it is $$j$$-linear for some $$j \in \{0,1,2, \ldots, k\}$$. The paper proves the following theorems.

1. There exists a non-adaptive one-sided distribution free tester for $$k$$-linear* with query complexity $$O\left(k \log k + \frac{1}{\varepsilon}\right)$$. This matches the two-sided lower bound (where the underlying distribution is uniform) by Blais et al. [3].
2. Using a reduction from $$k$$-linear* to $$k$$-linear, the paper shows one can obtain a non-adaptive two-sided distribution free tester for $$k$$-linear with the same query complexity as the above result. The lower bound from Blais et al. applies here as well (in fact, they prove a lower bound for $$k$$-linearity).
3. Next up, the paper has a couple of lower bound results to accompany this. One of these results reveals the price you pay for being one-sided and exact (that is, you insist on the function being exactly $$k$$-linear). Turns out, now you have a non-adaptive one-sided uniform distribution lower bound of $$\widetilde{\Omega}(k) \log n + \Omega(1/\varepsilon)$$. If you allow adaptivity instead, the paper shows a lower bound of $$\widetilde{\Omega}(\sqrt k)\log n + \Omega(1/\varepsilon)$$.
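To fix the definitions, here is a toy illustration of $$k$$-linear functions: a BLR-style consistency check, plus the observation that for a genuinely linear $$f$$, membership of $$i$$ in $$S$$ can be read off from $$f(e_i)$$. This naive check spends $$\Theta(n)$$ queries on unit vectors, nowhere near the paper's $$O(k \log k + 1/\varepsilon)$$ bound; it is only meant to make the objects concrete.

```python
import random

def make_k_linear(S):
    """Parity over the index set S: f(x) = sum_{i in S} x_i mod 2."""
    return lambda x: sum(x[i] for i in S) % 2

def blr_linearity_check(f, n, trials, seed=0):
    """BLR-style check: for random x, y test f(x) + f(y) = f(x XOR y) mod 2.
    One-sided: a truly linear f always passes."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.randrange(2) for _ in range(n)]
        y = [rng.randrange(2) for _ in range(n)]
        xy = [a ^ b for a, b in zip(x, y)]
        if (f(x) + f(y)) % 2 != f(xy):
            return False
    return True

def relevant_set(f, n):
    """If f is linear, then i is in S iff f(e_i) = 1 (query unit vectors)."""
    return {i for i in range(n)
            if f([1 if j == i else 0 for j in range(n)]) == 1}

f = make_k_linear({1, 4, 7})  # a 3-linear function on 10 variables
```

For this `f`, the BLR check always passes and `relevant_set` recovers $$S = \{1, 4, 7\}$$, so `f` is $$3$$-linear (and hence $$k$$-linear* for any $$k \geq 3$$).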

Amortized Edge Sampling, by Talya Eden, Saleet Mossel and Ronitt Rubinfeld (arXiv). Consider the following setup. You are given query access to the adjacency list of a graph $$G$$ with $$n$$ vertices and $$m$$ edges: you can make degree queries and neighbor queries. Suppose I ask you to sample a single edge of this graph from a distribution that is pointwise $$\varepsilon$$-close to the uniform distribution. Eden and Rosenbaum already showed that you can achieve this with $$\widetilde{\Theta}(n/\sqrt m)$$ queries. Starting from this jumping-off point, the authors ask whether you can circumvent this lower bound if you want to return multiple samples, each from a distribution which is again pointwise close to uniform. The paper answers this question in the affirmative and shows that if you know the number of samples $$q$$ in advance, you can get away with a bound of $$O^*(\sqrt{q} \cdot n/\sqrt{m})$$ on the total number of queries needed, i.e., an amortized $$O^*(n/\sqrt{qm})$$ queries per sample.
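As a baseline for what the paper improves on, here is the textbook rejection sampler that returns an exactly uniform edge using degree and neighbor queries, assuming a known degree bound $$d_{max}$$ (this is a standard warm-up, not the paper's amortized algorithm).

```python
import random

def sample_edge(nbrs, d_max, rng):
    """Sample a uniformly random (directed) edge by rejection:
    pick a uniform vertex u and a uniform slot j in [d_max]; accept
    the j-th neighbor of u if it exists, else retry. Each directed
    edge (u, v) is hit with probability exactly 1/(n * d_max) per
    attempt, so conditioned on acceptance the edge is uniform.
    """
    n = len(nbrs)
    while True:
        u = rng.randrange(n)          # degree query target
        j = rng.randrange(d_max)
        if j < len(nbrs[u]):          # neighbor query succeeds
            return (u, nbrs[u][j])

# Path graph 0 - 1 - 2.
nbrs = [[1], [0, 2], [1]]
rng = random.Random(0)
samples = {frozenset(sample_edge(nbrs, 2, rng)) for _ in range(200)}
```

The expected number of attempts per sample is $$n \cdot d_{max} / (2m)$$, which is why smarter techniques are needed when degrees are skewed and many samples are requested.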

On the High Accuracy Limitation of Adaptive Property Estimation, by Yanjun Han (arXiv). Take a discrete distribution $$\mathcal{P}$$ with support size $$k$$ and consider the task of estimating some symmetric property of $$\mathcal{P}$$ to a small additive error $$\pm \varepsilon$$. Here, a symmetric property refers to a “nice” functional defined over the probability simplex, i.e., a function $$F \colon \Delta_k \to \mathbb{R}$$ of the form $$F(p) = \sum_{i=1}^{k} f(p_i)$$ where $$f \colon (0,1) \to \mathbb{R}$$. A naive attack on these estimation tasks goes through the following ritual: you get your hands on the empirical distribution, you plug it into $$F$$, and you hope for the best. Turns out, you are out of luck if the function $$f$$ is non-smooth: in these cases you end up with a suboptimal estimator. Previous works have also looked at more sophisticated estimators, like local moment matching (LMM) and the profile maximum likelihood (PML) estimator. Turns out, using the LMM or PML estimator leads to optimal sample complexity for a handful of symmetric properties as long as $$\varepsilon \geq n^{-1/3}$$. This paper considers the question of what you can say for very small values of $$\varepsilon$$, namely $$n^{-1/2} \leq \varepsilon \leq n^{-1/3}$$. (The $$n^{-1/2}$$ appears because there are estimators that exploit knowledge of $$f$$, and $$\varepsilon$$ can be driven all the way down to $$n^{-1/2}$$ for these estimators.) The paper focuses on estimators which do not exploit any structure in $$f$$. In particular, it specializes this question to PML and shows a fundamental limitation: the PML approach fails to be sample optimal for the entire range of $$\varepsilon$$ and is sample optimal only for $$\varepsilon \gg n^{-1/3}$$. This confirms a conjecture of Han and Shiragur (and refutes a conjecture of Acharya et al., who postulated that PML is sample optimal for the entire range of $$\varepsilon$$).
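The "plug-in ritual" from the paragraph above, instantiated for Shannon entropy (a standard symmetric property, with $$f(p) = -p \log p$$), makes the difficulty easy to see: on a uniform distribution over $$k$$ symbols with only about $$2k$$ samples, the plug-in estimate falls short of the true value $$\log k$$. A minimal sketch:

```python
import math
import random
from collections import Counter

def plugin_entropy(samples):
    """The naive plug-in estimator: form the empirical distribution and
    evaluate F(p) = sum_i f(p_i) with f(p) = -p log p (Shannon entropy)."""
    n = len(samples)
    return sum(-(c / n) * math.log(c / n) for c in Counter(samples).values())

# Uniform over k = 100 symbols, only n = 200 samples: the estimate is
# biased downward from the true entropy log(100) ~ 4.605 nats.
rng = random.Random(0)
k, n = 100, 200
est = plugin_entropy([rng.randrange(k) for _ in range(n)])
```

Correcting this bias without exploiting the specific shape of $$f$$ is exactly what universal estimators like PML try to do, and the paper pins down how far that can go.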

$$k$$-Forrelation Optimally Separates Quantum and Classical Query Complexity, by Nikhil Bansal and Makrand Sinha (arXiv). Understanding the power of quantum queries over classical queries is a well motivated problem with a rich history. One of the biggest questions in this area asks for the largest separation between classical and quantum query complexities. In a breakthrough, Aaronson and Ambainis [4] showed a fundamental simulation result which confirmed that you can simulate $$q$$ quantum queries with $$O(N^{1 - 1/2q})$$ classical queries in the randomized decision tree model of computation as long as $$q = O(1)$$. In the same paper, the authors also showed that the standard forrelation* problem exhibits a $$1$$ versus $$\widetilde{\Omega}(\sqrt n)$$ separation. This means that for $$q = 1$$, you essentially have an optimal separation. But what about $$q > 1$$? To this end, Aaronson and Ambainis conjectured that a certain problem which they called $$k$$-forrelation, computable with $$q = k/2$$ queries, requires at least $$\widetilde{\Omega}(n^{1-1/k})$$ classical queries. The current work precisely confirms this conjecture.

(*) The forrelation problem asks you to decide whether one Boolean function is highly correlated with the Fourier transform of a second function.
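The footnoted definition can be made concrete by brute force for tiny inputs. Below, $$\Phi_{f,g} = N^{-3/2} \sum_{x,y} f(x)(-1)^{x \cdot y} g(y)$$ with $$N = 2^n$$ is the forrelation quantity (this normalization follows the Aaronson-Ambainis setup; treat the exact convention as an assumption of this sketch):

```python
from itertools import product

def forrelation(f, g, n):
    """Brute-force forrelation: Phi = N^{-3/2} * sum_{x,y} f(x) (-1)^{x.y} g(y),
    measuring how correlated g is with the (Boolean) Fourier transform of f.
    f, g map n-bit tuples to +/-1; N = 2^n. Exponential time, tiny n only."""
    N = 2 ** n
    total = 0
    for x in product((0, 1), repeat=n):
        for y in product((0, 1), repeat=n):
            ip = sum(a & b for a, b in zip(x, y)) % 2  # inner product mod 2
            total += f(x) * (-1) ** ip * g(y)
    return total / N ** 1.5

# Sanity check with f = g = +1: sum_x (-1)^{x.y} equals N for y = 0 and
# vanishes otherwise, so Phi = N / N^{3/2} = N^{-1/2}.
n = 3
phi = forrelation(lambda x: 1, lambda y: 1, n)
```

Deciding whether $$\Phi_{f,g}$$ is large or near zero is the promise problem that gives the $$1$$ versus $$\widetilde{\Omega}(\sqrt n)$$ separation mentioned above.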

(Edit: Added Later) Simon Apers points out a paper by Sherstov, Storozhenko and Wu that we missed. (Thanks Simon!) Here is a brief report on that paper.

An optimal separation of randomized and quantum query complexity (by Alexander Sherstov, Andrey Storozhenko and Pei Wu) (arXiv) Similar to the paper by Bansal and Sinha [BS20] mentioned above, this paper also resolves the conjecture of Aaronson and Ambainis, proving the same result. As the paper notes, the techniques in the two works are completely different and incomparable. On the one hand, [BS20] proves the separation for an explicit function, as opposed to a function chosen uniformly at random from a certain set as considered in this work. On the other hand, the separation result shown in [BS20] only applies when the query algorithm returns the correct answer with probability at least $$1/2 + 1/\mathrm{poly}(\log n)$$; in contrast, the results in this paper apply even when the query algorithm is only required to be correct with probability a constant bounded away from $$1/2$$. In addition, this work also proves the $$\ell$$-Fourier weight conjecture of Tal, which is of independent interest beyond quantum computing.

So, all in all, it looks like we had a great month, with two concurrent papers resolving the Aaronson-Ambainis conjecture (yet again after two concurrent papers on testing Hamiltonicity)!

References:

[1] Noga Alon, Eldar Fischer, Michael Krivelevich, and Mario Szegedy. Efficient testing of large graphs. Combinatorica, 20(4):451–476, 2000.

[2] Oded Goldreich. On testing Hamiltonicity in the bounded degree graph model. Electronic Colloquium on Computational Complexity (ECCC), (18), 2020.

[3] Eric Blais, Joshua Brody, and Kevin Matulef. Property testing lower bounds via communication complexity. CCC 2011

[4] Scott Aaronson and Andris Ambainis. Forrelation: A problem that optimally separates quantum from classical computing. SIAM J. Comput., 47(3):982–1038, 2018