News for February 2022

This month has seen a flurry of activity in sublinear algorithms, and a diverse collection of papers has appeared, with topics ranging from differentially private sublinear algorithms to local testers for multiplicity codes. Apologies to the readers for the delay in putting this post together!

Almost-Optimal Sublinear-Time Edit Distance in the Low Distance Regime by Karl Bringmann, Alejandro Cassis, Nick Fischer and Vasileios Nakos (arXiv)

This paper considers the problem of gap edit distance, i.e., of determining if the edit distance between two strings \(x\) and \(y\) is at most \(k\) or at least \(K\). Their main result is an algorithm that runs in time \(O(n/k + \text{poly}(k))\) and solves the problem for \(K = k \cdot 2^{\tilde{O}(\sqrt{\log k})}\). The paper improves upon earlier results of Goldenberg, Krauthgamer and Saha (2019) and Kociumaka and Saha (2020) who solved the problem for \(K = k^2\) with the same asymptotic guarantee on the query complexity.

One of the interesting takeaways from the paper is that the complexity of solving the gap Hamming distance and gap edit distance are similar in the low distance regime. For both, the complexity of solving the \((k, k^{1+o(1)})\)-gap problem is \(n/k^{1 \pm o(1)}\). This needs to be contrasted with the fact that solving \((k, \Omega(n))\)-gap edit distance requires \(\Omega(\sqrt{k})\) queries as shown by Batu, Ergun, Kilian, Magen and Raskhodnikova (2003), whereas \((k, \Omega(n))\)-gap Hamming distance can be solved in \(O(1)\) time.

These results are incomparable to those obtained by Goldenberg, Kociumaka, Krauthgamer and Saha (discussed in our November 2021 post), who give a nonadaptive algorithm with complexity \(O(n/k^{1.5})\) for the \((k, k^2)\)-gap edit distance problem. The algorithm in the present paper is adaptive and works faster for smaller values of \(k\).

Privately Estimating Graph Parameters in Sublinear time by Jeremiah Blocki, Elena Grigorescu, Tamalika Mukherjee (arXiv)

The design of differentially private approximation algorithms for optimization problems on graphs is a well-studied topic. This paper opens up an exciting research direction by initiating a systematic study of the design of differentially private sublinear-time algorithms. The setting is that graphs are viewed as databases, and two graphs are neighboring if they differ in an edge (or a node). An algorithm \(A\) is \(\epsilon\)-differentially private if for every pair of edge-neighboring (or node-neighboring) graphs \(G, G'\) and for every subset \(S\) of outputs, \(\Pr[A(G) \in S] \leq \exp(\epsilon) \cdot \Pr[A(G') \in S]\).

The paper presents \(\epsilon\)-differentially private sublinear-time algorithms for well-studied problems such as estimating the average degree, the size of a min vertex cover and the size of a maximum matching. These algorithms access the input graphs via neighbor queries and degree queries.

In addition to providing a strong privacy guarantee, their algorithms nearly match the approximation and complexity guarantees of their non-differentially private counterparts. The main idea seems to be the formalization of a sensitivity notion, which they refer to as Global Coupled Sensitivity, and bounding it for the known sublinear-time algorithms for the aforementioned problems. Finally, they add Laplace noise calibrated with this sensitivity value to the output of the algorithms to make them differentially private.
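
To make that last step concrete, here is a minimal sketch of the privatization recipe in Python. The estimator call and the sensitivity bound `gcs` are hypothetical stand-ins for one of the paper's sublinear estimators and its Global Coupled Sensitivity bound; only the Laplace mechanism itself is standard.

```python
import numpy as np

def make_private(estimate: float, sensitivity: float, epsilon: float) -> float:
    """Laplace mechanism: releasing estimate + Lap(sensitivity/epsilon) is
    epsilon-differentially private whenever `sensitivity` upper-bounds how
    much the (coupled) estimate can change between neighboring graphs."""
    return estimate + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical usage: est = estimate_matching_size(G)   # a sublinear estimator
#                     private_est = make_private(est, gcs, epsilon=1.0)
```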

Testability and Local Certification of Monotone Properties in Minor-closed Classes by Louis Esperet and Sergey Norin (arXiv)

One of the major interests in graph property testing is to characterize which properties are testable, i.e., can be \(\epsilon\)-tested with query complexity that depends only on the parameter \(\epsilon\). The question of testability is well-understood in the dense graph model as well as the bounded degree model. This paper concerns itself with testability questions in the general model or the sparse model of graph property testing, where graphs are represented as adjacency lists with no bound on the maximum degree.

The authors prove that every monotone property of minor-closed graph classes is testable with one-sided error, where a property is monotone if it is closed under taking subgraphs and a graph class is minor-closed if it is closed under taking minors. A crucial fact to be noted here is that the tester is allowed to make only uniformly random neighbor queries.

This result is a significant generalization of a 2019 result by Czumaj and Sohler, who proved that for every finite set of graphs \(\mathcal{H}\), every \(\mathcal{H}\)-free property of minor-closed graph classes is testable with one-sided error, where a graph satisfies \(\mathcal{H}\)-freeness if none of its subgraphs belong to \(\mathcal{H}\).

The authors also show an interesting consequence of their results for the design of short local certification schemes for monotone properties of minor-closed graph classes. Roughly speaking, they show the existence of a prover-verifier system for the aforementioned testing problem where proofs of length \(O(\log n)\) are assigned to each vertex, and the verifier needs to observe only the proofs assigned to a vertex and its neighbors.

The plane test is a local tester for Multiplicity Codes by Dan Karliner, Roie Salama, and Amnon Ta-Shma (ECCC)

Multiplicity codes are a generalization of Reed-Muller codes and were first studied by Guruswami and Wang (2013) and Kopparty, Saraf and Yekhanin (2014). The messages here are polynomials of degree \(d\) in \(m\) variables, and the codeword corresponding to a polynomial \(p\) is the evaluation of \(p\) and of all of its directional derivatives of order up to \(s\) over all the points in \(\mathbb{F}_q^m\), where \(q\) is a prime power.

Even though multiplicity codes are known to be locally decodable, it was open whether they are locally testable. Local testers for Reed-Muller codes work by restricting the evaluations to a uniformly random line in \(\mathbb{F}_q^m\) and checking whether the restriction corresponds to the evaluations of a degree \(d\) univariate polynomial. The authors first show that such a tester does not work for multiplicity codes when \(d\) is large. They then show that the plane test is a good local tester for multiplicity codes even for larger values of \(d\). Specifically, the plane test checks whether the restriction of a given word, which is purportedly the evaluation of a polynomial of degree \(d\) and of its derivatives, to a uniformly random plane in \(\mathbb{F}_q^m\) is a bivariate multiplicity code of degree \(d\).
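
For intuition, here is a minimal sketch (in Python, assuming \(q = P\) is prime and \(P > d+1\)) of one round of the classic line test for Reed-Muller codes; the plane test of the paper additionally carries the derivative data of multiplicity codes, which this sketch omits.

```python
import random

P = 101  # field size: assume q = P is prime and P > d + 1

def interp_eval(xs, ys, t):
    """Evaluate at t the unique polynomial of degree <= len(xs)-1 through
    the points (xs[i], ys[i]); all arithmetic over F_P (Lagrange form)."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (t - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def line_test(word, m, d):
    """One round of the line test: restrict `word` (a map F_P^m -> F_P) to a
    random affine line, then accept iff the restriction is a polynomial of
    degree <= d (interpolate from d+1 values, verify the remaining ones)."""
    a = [random.randrange(P) for _ in range(m)]  # base point of the line
    b = [random.randrange(P) for _ in range(m)]  # direction of the line
    vals = [word(tuple((ai + t * bi) % P for ai, bi in zip(a, b)))
            for t in range(P)]
    ts = list(range(P))
    return all(interp_eval(ts[:d + 1], vals[:d + 1], t) == vals[t]
               for t in range(d + 1, P))

# e.g. word = lambda x: (x[0] * x[0] + 3 * x[1]) % P is accepted when d >= 2
```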

We conclude the post with a short note from Nader Bshouty and Oded Goldreich on a fundamental characterization result in property testing.

On properties that are non-trivial to test by Nader H. Bshouty and Oded Goldreich (ECCC)

A property on binary strings is nontrivial if for infinitely many \(n\), the property contains at least one string of length \(n\) and at most \(2^{n - \Omega(n)}\) strings of length \(n\). The note shows that every nontrivial property requires \(\Omega(1/\epsilon)\) queries to \(\epsilon\)-test.

News for January 2022

A slow month to start 2022, as far as property testing (and myself) are concerned — “only” 3 papers, and a delay of several days in posting this. Let’s jump in with quantum testing!

Testing matrix product states, by Mehdi Soleimanifar and John Wright (arXiv). Suppose you are given a state \(|\psi\rangle\) of \(n\) qudits, and want to know “how entangled” this whole thing is: for instance, is \(|\psi\rangle\) a product state (no entanglement between the \(n\) qudits)? More generally, the “amount of entanglement” allowed is captured by an integer \(r\), the bond dimension, where a product state corresponds to \(r=1\), and larger \(r\) allows for more entanglement. This paper then considers the following property testing question: how many copies of \(|\psi\rangle\) are needed to test whether it has bond dimension at most \(r\), or is \(\varepsilon\)-far from every such state (in trace distance)? While the case \(r=1\) had been previously considered, this paper considers the general case; in particular, it shows a qualitative gap between \(r=1\) (for which a constant number of copies, \(O(1/\varepsilon^2)\), suffices) and \(r\geq 2\) (for which they show that the number of copies needed is \(\Omega(\sqrt{n}/\varepsilon^2)\), and \(O(n r^2/\varepsilon^2)\)).

Constant-time one-shot testing of large-scale graph states, by Hayata Yamasaki and Sathyawageeswar Subramanian (arXiv). In this paper, the authors consider the task of testing whether the physical error rate of a given system is below a given threshold — namely, the threshold below which fault-tolerant measurement-based quantum computation (MBQC) becomes feasible. Casting this into the framework of property testing, the paper shows that measuring only a constant number of qubits of the input state is enough to test whether the error rate is low.

And, to conclude, a paper which escaped us in December, on private distribution testing:

Pure Differential Privacy from Secure Intermediaries, by Albert Cheu and Chao Yan (arXiv). Throwback to April 2020 and August 2021, which covered results on distribution testing (uniformity testing!) under the shuffle model of differential privacy. Namely, there was an upper bound of $$ O( k^{2/3}/(\alpha^{4/3}\varepsilon^{2/3})\log^{1/3}(1/\delta) + k^{1/2}/(\alpha\varepsilon) \log^{1/2}(1/\delta) + k^{1/2}/\alpha^2)$$ samples for testing uniformity of distributions over \([k]\), to distance \(\alpha\), under \((\varepsilon,\delta)\)-shuffle privacy (so, approximate privacy: \(\delta>0\)). A partial lower bound existed for pure differential privacy, i.e., when \(\delta=0\): however, no upper bound was known for pure shuffle privacy.
Until now: this new paper shows that pure DP basically comes at no cost, by providing an \((\varepsilon,0)\)-shuffle private testing algorithm with sample complexity $$ O( k^{2/3}/(\alpha^{4/3}\varepsilon^{2/3}) + k^{1/2}/(\alpha\varepsilon) + k^{1/2}/\alpha^2).$$ The paper actually does a lot more, focusing on a different problem, private summation; and the testing upper bound is a corollary of the new methods they develop in the process.

News for December 2021

Happy 2022 everyone! We now cover the last papers of 2021. Our readers have pointed out a number of papers from previous months that we missed (sorry). To adequately cover (and publicize) these papers, we’ve decided to just incorporate those papers on this post rather than update previous posts.

We have a breakthrough on Locally Testable Codes, sublinear algorithms for testing pattern freeness, new thoughts on convexity testing, and a perspective on sampling community structure. And here’s the final list for 2021.

Locally Testable Codes with constant rate, distance, and locality by Irit Dinur, Shai Evra, Ron Livne, Alexander Lubotzky, and Shahar Mozes (ECCC, posted Nov 2021). A Locally Testable Code (LTC) is one where the set of codewords has a property tester. The study of LTCs started with the PCP theorem, but blossomed into a deep research agenda. The holy grail is a \(c^3\)-LTC: one with constant rate, constant distance, and constant query property tester. Just to recall, the rate \(r \in (0,1)\) is the ratio of the message length to the encoding length. The distance \(\delta \in (0,1)\) is the (normalized) Hamming distance between codewords (we typically expect this to be constant for an error correcting code). And, most familiar to us, \(q\) is the number of queries required to property test codewords. The aim is to get all these parameters to be constants independent of the message length. The classic linearity test of BLR basically proves that the Hadamard code is 3-query testable. But the Hadamard code has poor (exponentially small) rate. Low-degree tests prove that Reed-Muller codes are constant-query testable, but they have inverse polynomial rate. Further work constructed LTCs with inverse polylogarithmic rate, meaning that the encoding has length \(n\cdot poly(\log n)\) for message length \(n\). There was even some belief that any LTC must have a rate going to zero. This paper invalidates that belief, and gives a truly \(c^3\)-construction. The construction is based on a 2-dimensional simplicial complex based on Cayley graphs. This is basically a graph with hyperedges corresponding to “squares” of vertices. Concurrent and independent work of Breuckmann-Eberhardt and Panteleev-Kalachev also have similar constructions, though their perspective is of quantum codes.

On the Locally Testable Code of Dinur et al. (2021) by Oded Goldreich (ECCC). As the title says, this short paper gives a high level discussion of the previous result. It abstracts out some of the graph theoretic aspects, to explain connections to more combinatorial constructions. The construction and analysis is based on expander codes. These codes (e.g. low-density parity check, LDPC codes) have constant rate and distance, but we do not know how to test them. A string far from a codeword might only fail on (say) one parity check constraint. Interpreting the Dinur et al. construction as an expander code, we essentially get a collection of correlated parity constraints. Hence, a string that is far from a codeword violates many constraints, leading to a tester. (Of course, this is an extremely watered down version of the analysis idea.) The construction takes two Cayley graphs on a non-Abelian group (which form the vertices). Specific 4-cycles in this graph (together with a constant sized tensor code) are used to make the final codewords.
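
To see the obstacle concretely, consider the naive tester one would write for any parity-check code: sample a few constraints and verify them. The toy sketch below (with the parity-check matrix stored as a list of row supports) is sound only when far-from-the-code words violate many constraints, which is precisely what the Dinur et al. structure enforces and plain expander codes do not.

```python
import random

def naive_ldpc_test(rows, x, trials=50):
    """Toy membership tester for the code {x : Hx = 0 over F_2}, with H given
    as a list of rows, each row being the set of coordinates it checks.
    Each trial costs only |row| queries to x, but the tester is sound only
    if words far from the code violate MANY of the constraints."""
    for _ in range(trials):
        row = random.choice(rows)
        if sum(x[j] for j in row) % 2 != 0:
            return False  # caught a violated parity constraint
    return True

# e.g. repetition-code checks on 4 bits: rows = [{0, 1}, {1, 2}, {2, 3}]
# naive_ldpc_test(rows, [1, 1, 1, 1]) -> True; [1, 0, 1, 1] is likely rejected.
```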

Strongly Sublinear Algorithms for Testing Pattern Freeness by Ilan Newman and Nithin Varma (arXiv, posted Jun 2021). Pattern freeness is a major generalization of monotonicity testing. The input is (query access to) a function \(f: [n] \mapsto \mathbb{R}\). Let \(\pi: [k] \mapsto [k]\) be a permutation. The input is \(\pi\)-free if no restriction of \(f\) to \(k\) elements induces the permutation \(\pi\). Newman-Rabinovich-Rajendraprasad-Sohler introduced this problem. When \(k=2\), this property is precisely monotonicity, for which (many) \(O(\varepsilon^{-1} \log n)\) property testers are known. Is such a bound achievable for \(k \geq 3\)? Interestingly, for non-adaptive algorithms, an optimal bound of \(\Theta(n^{1-1/(k-1)})\) (ignoring \(\varepsilon, k\) dependencies) was achieved by Ben-Eliezer and Canonne. This paper shows that there is an adaptive algorithm that makes \(n^{o(1)}\) queries, thereby proving a separation between adaptive and non-adaptive algorithms. A key idea is to visualize the input as a set of points in the plane, and construct a coarse grid that partitions the points. There is a highly non-trivial case analysis on top of this that leads to an \(O(\sqrt{n})\)-query algorithm (which already beats the non-adaptive lower bound).
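
To pin down the problem statement, here is the trivial one-sided sampling tester in Python (not the paper's adaptive algorithm; the whole point of the paper is to beat this kind of non-adaptive sampling).

```python
import random

def induces_pattern(values, pi):
    """Do these k (distinct) values appear in the relative order given by
    the permutation pi, written as a tuple like (2, 1, 3)?"""
    rank = {v: r + 1 for r, v in enumerate(sorted(values))}
    return tuple(rank[v] for v in values) == tuple(pi)

def naive_pattern_tester(f, pi, trials=2000):
    """Trivial one-sided tester for pi-freeness of f: [n] -> R (a list with
    distinct entries): sample random k-sets of positions, reject iff some
    sample induces pi. Shown only to fix the problem statement."""
    n, k = len(f), len(pi)
    for _ in range(trials):
        idxs = sorted(random.sample(range(n), k))
        if induces_pattern([f[i] for i in idxs], pi):
            return False  # found a copy of pi: f is not pi-free
    return True

# e.g. pi = (2, 1): a copy of pi is an inversion, so this tests monotonicity.
```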

Parameterized Convexity Testing by Abhiruk Lahiri, Ilan Newman, and Nithin Varma (arXiv, posted Oct 2021). Given a function \(f:[n] \mapsto \mathbb{R}\), the property of interest is (discrete) convexity. The discrete derivatives should be monotonically non-decreasing. It is known that there is a \(O(\varepsilon^{-1}\log n)\) query non-adaptive tester for this property. This paper studies this problem from the perspective of parameterized property testing, where the range is restricted. The main result is that if the number of distinct discrete derivatives is at most \(s\), there is a \(O(\varepsilon^{-1} \log s)\) query non-adaptive tester. Moreover, this bound is optimal even for adaptive algorithms. This picture parallels that of monotonicity. Interestingly, when there are only two distinct discrete derivatives, the tester can be made deterministic.
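
For reference, the definitions in play are a few lines of Python; the exact checker below reads all of \(f\), whereas the paper's tester gets away with \(O(\varepsilon^{-1}\log s)\) queries.

```python
def discrete_derivatives(f):
    """Discrete derivatives of f: [n] -> R, given as a list of n reals."""
    return [b - a for a, b in zip(f, f[1:])]

def is_discrete_convex(f):
    """f is discretely convex iff its discrete derivatives are non-decreasing.
    The paper's parameter s is len(set(discrete_derivatives(f)))."""
    d = discrete_derivatives(f)
    return all(x <= y for x, y in zip(d, d[1:]))
```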

Modularity and edge sampling by Colin McDiarmid and Fiona Skerman (arXiv). Not your typical sublinear algorithms paper, but certainly of interest to us. The main problem is to understand when a random sample of edges in a graph can reveal information about the “community structure”. The modularity of a graph is a measure of this structure. Essentially, a partition has low modularity if the number of edges crossing the partition is close to that of a random graph with the same degree distribution. A partition of high modularity has significantly fewer crossing edges, and is thus a valid “clustering” of the graph. The modularity of the graph is obtained by maximizing over partitions. (The normalization of modularity is chosen so that the maximizing partition generally has a constant number of sets.) This paper proves a concentration inequality for the modularity of a constant-size sample of uniformly random edges. The modularity of a sample of \(\Theta(\varepsilon^{-5})\) edges is an additive \(\varepsilon\)-approximation of the overall modularity, whp.

News for November 2021

The holiday season is just around the corner. Best wishes to you and your loved ones in advance from us here at PTReview. This month, we had five papers in all. To prepare for the festive mood associated with the onset of December, also included are brief updates on the recently disproven implicit graph conjecture. Let’s dig in. (And please, do let us know if we missed your paper).

(EDIT: Thanks to our readers for pointing out a paper we missed.)

Downsampling for Testing and Learning in Product Distributions by Nathaniel Harms, Yuichi Yoshida (arXiv). Contemplating connections between algorithms used for testing a diverse collection of function properties, this paper provides a unified and generalized view of a technique which the authors call downsampling. As the paper explains, the name is motivated by analogy to image/signal processing tasks. Very often, these tasks involve two steps. In the first step, you break the input domain into “grid cells”. You use oracle calls to the function to obtain a crude approximation over all these cells. In the second step, you learn this gridded-up or coarsened function cell-by-cell.

This two-step attack could just be what the doctor ordered for your favorite function property testing problem: in particular, it has been put to work for approximately testing visual properties, approximating distance to monotonicity in high dimensions, testing \(k\)-monotone functions, and more. However, if you wanted to obtain property testers using this approach in the distribution-free setup, your ordeal might be far from over. The unknown distribution your domain is equipped with can mess with geometric arguments your gridding approach hopes to exploit. This is precisely the setup considered in the paper (i.e, distribution-free testing of function properties).

The essence of downsampling is captured by a slick proposition that prescribes coarsening as your go-to weapon if both of the following are small:
1) the fraction of cells on which \(f\) is not constant, and
2) a measure of how non-uniform the unknown distribution \(D\) your domain is equipped with is.

Equipped with this machinery, the paper tackles the task of designing distribution-free testers for boolean monotonicity with the underlying domain being \(\mathbb{R}^d\). The argument is pretty short and improves upon the sample complexity of the corresponding result in the paper by Black-Chakrabarti-Seshadhri. Do check it out; it looks like some nice ammo to add to your toolkit.
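
As a toy illustration of ingredient (1) above, here is a sketch that grids the unit square and estimates the fraction of cells on which \(f\) fails to be constant, sampling uniformly within each cell; the distribution-free setting of the paper requires data-dependent cells, which this sketch glosses over.

```python
import random

def frac_nonconstant_cells(f, r, samples_per_cell=8):
    """Grid [0,1)^2 into r x r cells; flag a cell when two sampled points
    disagree. One-sidedly estimates the fraction of cells on which the
    function f(x, y) is not constant (ingredient (1) of downsampling)."""
    flagged = 0
    for i in range(r):
        for j in range(r):
            pts = [(random.uniform(i / r, (i + 1) / r),
                    random.uniform(j / r, (j + 1) / r))
                   for _ in range(samples_per_cell)]
            if len({f(x, y) for (x, y) in pts}) > 1:
                flagged += 1
    return flagged / r ** 2

# e.g. f = lambda x, y: int(x + y > 1)   # a halfspace: ~2/r of cells flagged
```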

Let’s stay on board for sublinear time algorithms for gap-edit distance.

Gap Edit Distance via Non-Adaptive Queries: Simple and Optimal by Elazar Goldenberg, Tomasz Kociumaka, Robert Krauthgamer, Barna Saha (arXiv). The edit distance problem needs no introduction. This paper studies the problem of approximating edit distance in sublinear time. In particular, this is formalized by introducing the gapped version of the problem. This entails the following computational task: Fix \(k \in \mathbb{N}\) and some \(c > 0\). Given a pair \(x, y\) of strings over a finite alphabet, decide whether the edit distance between \(x,y\) is at most \(k\) or whether it is at least \(k^c\). This paper resolves the non-adaptive query complexity of edit distance and proves that the above gapped version can be decided in at most \(O\left(\frac{n}{k^{c - 1/2}}\right)\) queries. The paper also proves that this bound is almost optimal up to some polylog factors.
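
For contrast with the query model: deciding exactly whether the edit distance is at most \(k\), reading both strings in full, is the textbook banded dynamic program below (a standard sketch, not from the paper), which runs in \(O(nk)\) time.

```python
def edit_distance_at_most_k(x, y, k):
    """Exact check of edit(x, y) <= k via the classic banded DP: only cells
    (i, j) with |i - j| <= k can matter, so each DP row keeps at most 2k+1
    entries. Runs in O((|x| + |y|) * k) time, reading all of x and y."""
    n, m = len(x), len(y)
    if abs(n - m) > k:
        return False
    INF = k + 1  # values are capped at k+1; anything above k means "far"
    prev = {j: j for j in range(min(m, k) + 1)}  # DP row i = 0
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - k), min(m, i + k) + 1):
            if j == 0:
                cur[j] = min(i, INF)
                continue
            best = min(
                prev.get(j, INF) + 1,                         # delete x[i-1]
                cur.get(j - 1, INF) + 1,                      # insert y[j-1]
                prev.get(j - 1, INF) + (x[i-1] != y[j-1]),    # match/substitute
            )
            cur[j] = min(best, INF)
        prev = cur
    return prev.get(m, INF) <= k
```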

Next up, we have time-optimal algorithms for maximum matching and vertex cover! Sweet, right?

Time-Optimal Sublinear Algorithms for Matching and Vertex Cover by Soheil Behnezhad (arXiv). This paper gives algorithms for the classic problems of estimating the size of a vertex cover and of a maximum matching in graphs in sublinear time (in both adjacency matrix and adjacency list models). This is another nice read for the holiday season — after all, the paper obtains time-optimal algorithms (up to polylog factors) in both of these models for multiplicative-additive \((2, \varepsilon n)\) approximations. Let me set up some historical context to appreciate the maximum matching result better.

In a classic work, Parnas and Ron gave sublinear time algorithms that obtain a \((2, \varepsilon n)\) estimate of the size of a maximum matching in a graph using query access to the adjacency list. Their sublinear time algorithm is inspired by ideas with roots in distributed algorithms. In particular, their algorithm returns, in time \(\Delta^{O(\log(\Delta/\varepsilon))}\) (where \(\Delta = \Delta(G)\) denotes the degree bound), an estimate \(\text{EST}\) of the size of the max-matching where \(OPT \leq \text{EST} \leq 2 \cdot OPT + \varepsilon n\). Unfortunately, algorithms in this Parnas-Ron framework must necessarily suffer a quasi-polynomial running time because of lower bounds from distributed algorithms. Using a randomized greedy approach to estimating the size of a maximum matching, Yoshida, Yamamoto, and Ito gave the first algorithm which returns a \((2, \varepsilon n)\)-estimate of the maximum matching in time \(poly(\Delta/\varepsilon)\). This is where the story essentially froze — though there were results that improved the dependence on \(\Delta\) to \(O(\Delta^2)\). Unfortunately, this is not truly sublinear for large \(\Delta\). In another direction, Kapralov-Mitrović-Norouzi-Fard-Tardos gave an algorithm that estimates the maximum matching size in time \(O(\Delta)\) (which is truly sublinear) but it no longer guarantees a \((2, \varepsilon n)\)-approximation and instead returns an \((O(1), \varepsilon n)\)-approximation. The current paper, as remarked above, achieves the best of both worlds.
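
Here is a toy sketch of the randomized-greedy oracle underlying the Yoshida-Yamamoto-Ito approach (not Behnezhad's time-optimal algorithm): assign every edge an i.i.d. random rank; an edge is in the greedy matching iff none of its lower-ranked neighboring edges is. Estimating the matching size then amounts to querying this oracle on a few random edges.

```python
import random
from functools import lru_cache

def greedy_matching_oracle(adj, seed=42):
    """Local oracle for the matching built by scanning edges in increasing
    order of i.i.d. random ranks. adj maps each vertex to its neighbor
    list; edges are sorted vertex pairs. Expected query cost per call
    depends on the maximum degree."""
    rng, rank = random.Random(seed), {}

    def r(e):  # lazily draw a consistent random rank for edge e
        if e not in rank:
            rank[e] = rng.random()
        return rank[e]

    @lru_cache(maxsize=None)
    def in_matching(e):
        u, v = e
        nbrs = {tuple(sorted((u, w))) for w in adj[u]}
        nbrs |= {tuple(sorted((v, w))) for w in adj[v]}
        nbrs.discard(e)
        # e is matched iff every adjacent edge of smaller rank is unmatched.
        return all(not in_matching(f)
                   for f in sorted(nbrs, key=r) if r(f) < r(e))
    return in_matching

# oracle = greedy_matching_oracle({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]})
# oracle((1, 2))  # is edge (1, 2) in the random greedy matching?
```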

Final stop, updates on the implicit graph conjecture.

The Implicit Graph Conjecture is False by Hamed Hatami, Pooya Hatami (arXiv). While not a property testing paper per se, it is always good to see an old conjecture resolved one way or the other. In this paper, the Hatami brothers (Pooya and Hamed) refute a three-decade-old conjecture — the one mentioned in the title. Here is a brief history. Sampath Kannan, Moni Naor and Steven Rudich defined the notion of implicit representation of graphs. Take a family \(\mathcal{F}\) of graphs and consider an \(n\)-vertex graph \(G \in \mathcal{F}\). You say this family admits an efficient implicit representation if there exists an assignment of \(O(\log n)\)-length labels to all the vertices in \(V(G)\) such that the adjacency between every pair of vertices is a function of the labels of the corresponding pair. Crucially, the labeling function may depend on the family, but not on individual graphs in the family. What is cool about families admitting such an efficient implicit representation is that the number of \(n\)-vertex graphs in such a family cannot be any bigger than \(2^{O(n \log n)}\) — that is, such families have at most factorial growth rate. The implicit graph conjecture asserts that for every hereditary graph family, the converse also holds. Namely, if the hereditary family has at most factorial growth rate, then the family admits efficient implicit representations. The key, as shown in this paper (which spans all of six pages!), is to choose as your hereditary graph class the closure of a random collection of graphs. The authors show that now your hand is forced and your representation can no longer be efficient. However, annoyingly, your hereditary graph class need not have a factorial growth rate: the closure of a random collection expands to contain all graphs and has growth rate \(2^{\Omega(n^2)}\). The cool thing is, you can avoid this issue by choosing a random collection of slightly sparse random graphs (with \(n^{2-\varepsilon}\) edges). Interestingly, this gives you enough ammo to finally control the growth rate, which in turn allows the authors to slay this conjecture.
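
The counting direction of the story is worth spelling out: since the decoding function is fixed for the whole family, an \(n\)-vertex graph with labels of length \(c \log n\) is entirely determined by its \(n\) labels, whence $$\#\{G \in \mathcal{F} : |V(G)| = n\} \;\leq\; \left(2^{c \log n}\right)^n = 2^{c\, n \log n},$$ which is exactly the “at most factorial growth rate” bound mentioned above.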

(EDIT: Omitted Stop, added retroactively).

Sublinear quantum algorithms for estimating von Neumann entropy by Tom Gur, Min-Hsiu Hsieh, Sathyawageeswar Subramanian (arXiv). This paper presents quantum algorithms for the problem of obtaining multiplicative estimates of the Shannon entropy of familiar classical distributions and the more exotic von Neumann entropy of mixed quantum systems. In particular, the paper presents an \(\widetilde{O}(n^{\frac{1+\eta}{2\gamma^2}})\)-query quantum algorithm that achieves a \(\gamma\)-multiplicative approximation of the Shannon entropy of an input distribution \(\mathbf{p}\) supported on a universe of size \(n\), where \(H(\mathbf{p}) \geq \gamma/\eta\). As if this were not already cool enough, the paper also presents sublinear quantum algorithms for estimating the von Neumann entropy as well. This is supplemented by a lower bound of \(\Omega(n^{\frac{1}{3 \gamma^2}})\) queries for achieving a \(\gamma\)-factor approximation for either of these two kinds of entropies.

News for October 2021

The month of October was quite busy, with seven papers, spanning (hyper)graphs, proofs, probability distributions, and sampling.

Better Sum Estimation via Weighted Sampling, by Lorenzo Beretta and Jakub Tětek (arXiv). This paper considers the following question: “given a large universe of items, each with an unknown weight, estimate the total weight to a multiplicative \(1\pm \varepsilon\).” The key is in the type of access you have to those items: here, the authors consider the setting where items can be sampled proportionally to their unknown weights, and show improved bounds on the sample/query complexity in this model. And there's something for everyone: they also discuss connections to edge estimation in graphs (assuming random edge queries) and to distribution testing (specifically, in the “dual” or “probability-revealing” models of Canonne–Rubinfeld and Onak–Sun).
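
As a taste of the model, here is perhaps the simplest estimator in it, assuming (unlike the paper's more refined results) that the universe size \(n\) is also known: under proportional sampling, \(\mathbb{E}[1/w_i] = \sum_i (w_i/W)(1/w_i) = n/W\), so inverting an empirical average of \(1/w_i\) estimates \(W\). A warm-up sketch, not the paper's algorithm:

```python
import random

def estimate_total_weight(sample, weight, n, m=20000):
    """Estimate W = total weight, given: sample() returning item i with
    probability w_i / W, weight(i) returning w_i, and known universe size n.
    Uses the identity E[1 / w_i] = n / W under proportional sampling."""
    inv_sum = sum(1.0 / weight(sample()) for _ in range(m))
    return n / (inv_sum / m)

# ws = [random.random() + 0.1 for _ in range(1000)]
# est = estimate_total_weight(lambda: random.choices(range(1000), weights=ws)[0],
#                             lambda i: ws[i], n=1000)   # est ~ sum(ws)
```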

This gives us an easy segue to distribution testing, which is the focus of the next two papers.

As Easy as ABC: Adaptive Binning Coincidence Test for Uniformity Testing, by Sudeep Salgia, Qing Zhao, and Lang Tong (arXiv). Most of the work in distribution testing (from the computer science community) focuses on discrete probability distributions, for several reasons. Including a technical one: total variation distance is rather fickle with continuous distributions, unless one makes some assumption on the unknown distribution. This paper does exactly this: assuming the unknown distribution has a Lipschitz density function, it shows how to test uniformity by adaptively discretizing the domain, achieving a near-optimal sample complexity.

Exploring the Gap between Tolerant and Non-tolerant Distribution Testing, by Sourav Chakraborty, Eldar Fischer, Arijit Ghosh, Gopinath Mishra, and Sayantan Sen (arXiv). It is known that tolerant testing of distributions can be much harder than “standard” testing – for instance, for identity testing, the sample complexity can blow up by nearly a quadratic factor, from \(\sqrt{n}\) to \(\frac{n}{\log n}\)! But is that the worst that can happen, in general, for other properties? This work explores this question, and answers it in some notable cases of interest, such as for label-invariant (symmetric) properties.

And now, onto graphs!

Approximating the Arboricity in Sublinear Time, by Talya Eden, Saleet Mossel, and Dana Ron (arXiv). The arboricity of a graph is the minimal number of spanning forests required to cover all its edges. Many graph algorithms, especially sublinear-time ones, can be parameterized by this quantity: which is very useful, but what do you do if you don’t know the arboricity of your graph? Well, then you estimate it. Which this paper shows how to do efficiently, given degree and neighbor queries. Moreover, the bound they obtain — \(\tilde{O}(n/\alpha)\) queries to obtain a constant-factor approximation of the unknown arboricity \(\alpha\) — is optimal, up to logarithmic factors in the number of vertices \(n\).

Sampling Multiple Nodes in Large Networks: Beyond Random Walks, by Omri Ben-Eliezer, Talya Eden, Joel Oren, and Dimitris Fotakis (arXiv). Another thing which one typically wants to do with very large graphs is sample nodes from them, either uniformly or according to some prescribed distribution. This is a core building block in many other algorithms; unfortunately, approaches to do so via random walks will typically require a number of queries scaling with the mixing time \(t_{\rm mix}(G)\) of the graph \(G\), which might be very small for nicely expanding graphs, but not so great in many practical settings. This paper proposes and experimentally evaluates a different algorithm which bypasses this linear dependence on \(t_{\rm mix}(G)\), by first going through a random-walk-based “learning” phase (learn something about the structure of the graph) before using this learned structure to perform faster sampling, focusing on small connected components.

Why stop at graphs? Hypergraphs!

Hypergraph regularity and random sampling, by Felix Joos, Jaehoon Kim, Daniela Kühn, Deryk Osthus (arXiv). The main result in this paper is a hypergraph analogue of a result of Alon, Fischer, Newman and Shapira (for graphs), which roughly states that if a hypergraph satisfies some regularity condition, then so does with high probability a randomly sampled sub-hypergraph — and conversely. This in turn has direct implications to characterizing which hypergraph properties are testable: see the companion paper, by the same authors.
(Note: this paper is a blast from the past, as the result it shows was originally established in the linked companion paper, from 2017; however, the authors split this paper in two this October, leading to this new, standalone paper.)

And, to conclude, Arthur, Merlin, and proofs:

Sample-Based Proofs of Proximity, by Guy Goldberg, Guy Rothblum (ECCC). Finally, consider the setting of interactive proofs of proximity (IPPs), where the prover is as usual computationally unbounded, but the verifier must run in sublinear time (à la property testing). This has received significant interest in the past years: but what if the verifier didn't even get to make queries, but only got access to uniformly random locations of the input? These SIPPs (sample-based IPPs) and their non-interactive counterparts SAMPs (sample-based Merlin-Arthur proofs of proximity) are the objects of study of this paper, which introduces and motivates them in the context, for instance, of delegation of computation for sample-based algorithms.

News for September 2021

A nice array of papers for this month, spanning distribution testing, dense graph property testing, sublinear graph algorithms, and various mixtures of them. Let us now sample our spread! (Adding another semistreaming paper which achieved similar results to another posted paper. -Ed)

A Lower Bound on the Complexity of Testing Grained Distributions by Oded Goldreich and Dana Ron (ECCC). A discrete distribution is called \(m\)-grained if all probabilities are integer multiples of \(1/m\). This paper studies the complexity of testing this property of distributions. For simplicity, consider the property of being \(n/2\)-grained, where the support size is \(n\). The classic lower bound for testing uniformity shows that \(\Omega(\sqrt{n})\) samples are required to distinguish the uniform distribution from a distribution uniform on \(n/2\) elements. Thus, we get a lower bound of \(\Omega(\sqrt{n})\) for testing \(n/2\)-grainedness (if I am permitted to use that word). This paper proves a lower bound of \(\Omega(n^c)\), for all constant \(c < 1\). It is conjectured that the lower bound is actually \(\Omega(n/\log n)\), which would match the upper bound (for any label-invariant property).

Testing Distributions of Huge Objects by Oded Goldreich and Dana Ron (ECCC). This paper introduces a new model that marries distribution testing with property testing on strings. The “object” of interest is a distribution \(\mathcal{D}\) over strings of length \(n\). We wish to test if \(\mathcal{D}\) possesses some property. The tester can get a random string \(x\) from the distribution, and can query any desired index of \(x\). The distance between distributions is defined using the earthmover distance (where we use the Hamming distance between strings). This model is called the DoHO (Distributions of Huge Objects) model. There are many questions posed and connections drawn to classical property testing and distribution testing. What I find interesting is a compelling application: the distribution \(\mathcal{D}\) may represent noisy or perturbed versions of a single object. The DoHO model gives a natural generalization of standard property testing to noisy objects. This paper considers problems such as testing if \(\mathcal{D}\) is: a random perturbation of a string, or a random cyclic shift, or a random isomorphism of a graph.

Sublinear Time and Space Algorithms for Correlation Clustering via Sparse-Dense Decompositions by Sepehr Assadi and Chen Wang (arXiv). Correlation clustering is a classic problem where edges in a graph are labeled ‘+’ or ‘-’, denoting whether these edges should be uncut or cut. The aim is to cluster the graph minimizing the total “disagreements” (cut ‘+’ edges or uncut ‘-’ edges). This paper gives an \(O(1)\)-approximation algorithm that runs in \(O(n\log^2n)\) time; this is the first sublinear time approximation algorithm for this problem. Correlation clustering has seen results from the property testing/sublinear algorithms community before, first by Bonchi, Garcia Soriano, and Kutzkov. But previous results were essentially in the dense graph model, giving \(O(\epsilon n^2)\) error assuming adjacency matrix input. This paper considers access to the adjacency list of ‘+’ edges. Interestingly (from a technical standpoint), the key tool is a new sparse-dense decomposition. Such decompositions emerged from the seminal work of Assadi-Chen-Khanna for sublinear \((\Delta+1)\)-coloring, and it is great to see applications beyond coloring.

Sublinear-Time Computation in the Presence of Online Erasures by Iden Kalemaj, Sofya Raskhodnikova, and Nithin Varma (arXiv). Can property testing be done when portions of the input are hidden? This question was first raised by Dixit-Raskhodnikova-Thakurta-Varma, who gave a model of erasure-resilient testing. There is an adversary who hides (erases) part of the input function; queries to those parts just yield a dummy symbol. This paper defines an online version of this model. There is an erasure parameter \(t\). On each query by the property tester, the adversary can erase \(t\) values of the function. Consider the classic property of linearity of functions \(f:\{0,1\}^d \rightarrow \{0,1\}\). The BLR tester queries triples \((x, y, x \oplus y)\). Observe how this tester is easily defeated by our adversary, by erasing the value \(f(x\oplus y)\). One of the main results of this paper is a \(O(1/\varepsilon)\)-query tester for linearity, that works for any constant erasure parameter \(t\). Note that this matches the bound for the standard setting. There are a number of results for other classic properties, such as monotonicity (sortedness) and Lipschitz.
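
For concreteness, here is a sketch of the standard BLR test, which is exactly the tester the online adversary defeats by erasing \(f(x \oplus y)\) after seeing the first two queries.

```python
import random

def blr_linearity_test(f, d, rounds=200):
    """Classic BLR linearity test for f: {0,1}^d -> {0,1}, with points
    encoded as d-bit integers: check f(x) XOR f(y) == f(x XOR y) on random
    x, y. An online-erasure adversary that sees the queries x and y can
    erase the value at x XOR y before the third query is made."""
    for _ in range(rounds):
        x, y = random.getrandbits(d), random.getrandbits(d)
        if f(x) ^ f(y) != f(x ^ y):
            return False  # a violated triple witnesses non-linearity
    return True

# Any parity passes, e.g. f = lambda z: bin(z & 0b1011).count("1") % 2
```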

Sublinear Time Eigenvalue Approximation via Random Sampling by Rajarshi Bhattacharjee, Cameron Musco, and Archan Ray (arXiv). Consider the problem of estimating all the eigenvalues of a real, symmetric \(n \times n\) matrix \(M\) with bounded entries, in sublinear time. The main result shows that the eigenvalues of a uniformly random principal submatrix of size \(O(\epsilon^{-4}\log n)\) can be used to approximate all eigenvalues of \(M\) up to additive error \(\epsilon n\). One can think of this as a sort of concentration inequality for eigenvalues. This result follows (and builds upon) work of Bakshi-Chepurko-Jayaram on property testing semidefiniteness. The key idea is that eigenvectors corresponding to large eigenvalues have small infinity norm: intuitively, since all entries in \(M\) are bounded, such an eigenvector must have its mass spread out among many coordinates. Hence, we can get information about it by randomly sampling a few coordinates. The paper also shows that the approach of taking principal submatrices requires taking \(\Omega(\epsilon^{-2})\) columns/rows.
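
In code, the estimator itself is strikingly short (a sketch; the paper's contribution is the analysis justifying the \(n/s\) rescaling and pinning down the submatrix size \(s\)).

```python
import numpy as np

def eigenvalue_estimates(M, s, rng=None):
    """Estimate the spectrum of a symmetric matrix M with bounded entries:
    sample a uniformly random s x s principal submatrix and rescale its
    eigenvalues by n/s. The rescaled values (padded with zeros) track the
    eigenvalues of M up to additive error on the order of eps * n."""
    rng = rng or np.random.default_rng()
    n = M.shape[0]
    idx = rng.choice(n, size=s, replace=False)
    return (n / s) * np.linalg.eigvalsh(M[np.ix_(idx, idx)])
```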

Deterministic Graph Coloring in the Streaming Model by Sepehr Assadi, Andrew Chen, and Glenn Sun (arXiv). This is technically not a sublinear algorithms paper (well ok, streaming is sublinear, but we tend not to cover the streaming literature. Maybe we should? – Ed.) But I think that the connections are of interest to our community of readers. The main tool of the sublinear \((\Delta+1)\)-coloring algorithm of Assadi-Chen-Khanna is the palette sparsification lemma (\(\Delta\) is the maximum degree). This lemma shows that vertices can randomly shorten their “palette” of colors, after which all colorings from these palettes lead to very few monochromatic edges. This is an immensely powerful tool, since one immediately gets sublinear complexity algorithms in many models: adjacency list, streaming, distributed. Is the randomness necessary? Note that these algorithms use \(\Omega(n)\) time/space, so it is conceivable that deterministic sublinear algorithms exist. This paper shows that randomization is necessary in the semi-streaming model (space \(O(n \cdot poly(\log n))\)). Indeed, there exists no deterministic semi-streaming algorithm that can achieve even \(\exp(\Delta^{o(1)})\) colors.

Adversarially Robust Coloring for Graph Streams by Amit Chakrabarti, Prantar Ghosh, and Manuel Stoeckl (arXiv). This paper studies the same problem as the above, but presents the results in a different way. In a randomized algorithm, we normally think of an adversary that fixes a (hard) input, after which the algorithm makes its random decisions. An adaptive adversary is one that changes the input (stream) based on the decisions of the algorithm. In this definition, a robust algorithm is one that gives correct answers even on adversarially generated streams. A deterministic algorithm is automatically robust. This paper shows that there do not exist robust semi-streaming algorithms that can achieve \((\Delta+1)\)-colorings. The quantitative lower bound is weaker (\(\Omega(\Delta^2)\) colors), but it is against a stronger adversary.

News for August 2021

This month saw four papers. One on group testing, another on distribution testing, yet another which makes progress on testing problems on decision trees and the last one on graph property testing. Without further ado, let’s dive in.

Group Testing with Non-identical Infection Probabilities by Mustafa Doger, Sennur Ulukus (arXiv) Consider the classic group testing problem. Here the setup is the following. You are given a bunch of individuals from a population \(\mathcal{P}\). You have an infection vector which records the infection status of each individual in the population, where the \(i\)-th individual is infected with probability \(p_i\). You want to recover all the infected individuals. You are allowed to group individuals together and you can test the entire group in a single shot. If the group tests negative, you are happy: all the tested individuals are off the hook. Otherwise, if the group tests positive, you need more tests for further classification. This paper proposes a greedy way to build the pools of individuals you would test. The pools are built adaptively: as in, future pools are built using the knowledge of how the preceding tests fared. The key result in the paper upper-bounds the number of tests performed in terms of the entropy of the infection vector.
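
As a baseline for what adaptivity buys, here is generic binary splitting in Python (not the paper's probability-aware greedy pooling), which identifies all \(t\) infected individuals with roughly \(O(t \log |\mathcal{P}|)\) tests.

```python
def find_infected(pool, test):
    """Adaptive binary splitting: test(group) returns True iff the group
    contains at least one infected member. A negative pool is cleared with
    a single test; a positive pool is split in half and recursed on."""
    if not pool or not test(pool):
        return []
    if len(pool) == 1:
        return list(pool)
    mid = len(pool) // 2
    return find_infected(pool[:mid], test) + find_infected(pool[mid:], test)

# infected = {3, 7}
# find_infected(list(range(16)), lambda g: any(i in infected for i in g))  # [3, 7]
```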

Uniformity Testing in the Shuffle Model: Simpler, Better, Faster by Clément L. Canonne, Hongyi Lyu (arXiv) Differentially private distribution testing as a research area has been gathering momentum steadily over the last few years. If you read our last month’s post, you might recall there are a wide variety of models of DP, each corresponding to a different “threat model”. The most stringent among the most explored models is the “local model”, the least stringent is the “central model”, and there is an intermediate threat model, the so-called “shuffle model“. This paper simplifies the analysis of the uniformity testing algorithm under the shuffle model and presents an algorithm with sample complexity \(O(k^{3/4})\) for testing uniformity over \([k]\).

On Learning and Testing Decision Tree by Nader H. Bshouty, Catherine A. Haddad-Zaknoon (arXiv) In our December 2020 post, we covered a result of Blanc et al., which proves the following: Suppose you are given a boolean function \(f\) and the property \(\mathcal{P}\) of size-\(s\) decision trees. The result of Blanc et al. gives you a function \(g\) with \(dist(f,g) = O(dist (f, \mathcal{P}))\), where \(g\) is guaranteed to have decision tree complexity \(s^{O(\log^3 s)}\). This result implies a bi-criteria tester for the following property: is \(f \in \mathcal{P}\), or is \(f\) \(\varepsilon\)-far from having decision tree complexity \(\phi(s) = s^{O(\log^3 s)}\)? The current paper improves this result by presenting a property tester with \(\phi(s) = s^{O(\log^2 s)}\).

The complexity of testing all properties of planar graphs, and the role of isomorphism by Sabyasachi Basu, Akash Kumar, C. Seshadhri (arXiv) (Disclaimer: I am one of the authors of this paper). This paper presents a result that I, in my biased opinion, find interesting. So, here is the setup. You are given a bounded degree planar graph. And I cook up some God-forsaken property and ask you to test it. Turns out, no matter how devilishly I cooked up the property, you can test it with \(\exp(O(\varepsilon^{-2}))\) queries. The nice happenstance is that you also have a matching lower bound of \(\exp(\Omega(\varepsilon^{-2}))\) queries! And interestingly, this lower bound is witnessed by the very natural property of isomorphism to a fixed graph, which means that isomorphism is the hardest property of planar graphs.

Workshop on Algorithms for Large Data: We found WALD(O), and so can you!

Ainesh Bakshi, Rajesh Jayaram, and Samson Zhou are organizing a 3-day Workshop on Algorithms for Large Data (nicely abbreviated as WALD(O), the O standing for Online), featuring many talks which should be of interest to the readers of this blog, as well as an open problems session, a poster session, and a junior/senior lunch. As the organizers describe it:

This workshop aims to foster collaborations between researchers across multiple disciplines through a set of central questions and techniques for algorithm design for large data. We will focus on topics such as sublinear algorithms, randomized numerical linear algebra, streaming and sketching, and learning and testing.

The workshop will take place on August 23 — August 25 (ET). Attendance is free, but registration is required by August 20th. More details at https://waldo2021.github.io/

News for July 2021

This month saw three papers appear online, together covering a rather broad range of topics: testing of regular languages, distribution testing under differential privacy, and local testability from high-dimensional expanders. Let’s dive in!

Property Testing of Regular Languages with Applications to Streaming Property Testing of Visibly Pushdown Languages, by Gabriel Bathie and Tatiana Starikovskaya (paper). Let \(L\subseteq \Sigma^\ast\) be a regular language recognized by an automaton with \(m\) states and \(k\) connected components: given as input a word \(u\in \Sigma^n\), what is the query complexity of testing membership in \(L\) in Hamming distance? Edit distance? Or, more generally, weighted edit distance, where each letter of the word \(u\) comes with a weight? In this paper, the authors focus on non-adaptive, one-sided error testing algorithms, for which they show an upper bound of \(q=O(k m \log(m/\varepsilon)/\varepsilon)\) queries (with running time \(O(m^2 q)\)), which they complement with a query complexity lower bound of \(\Omega(\log(1/\varepsilon)/\varepsilon)\), thus matching the upper bound for languages recognized by constant-size automata. The guarantee for the upper bound is with respect to weighted edit distance, and thus implies the same upper bound for testing with respect to Hamming distance.
To conclude, the authors use an existing connection to streaming property testing to obtain new algorithms for property testing of visibly pushdown languages (VPL) in the streaming model, along with a new lower bound in that model.

High dimensional expansion implies amplified local testability, by Tali Kaufman and Izhar Oppenheim (arXiv). This paper sets out to show that codes that arise from high-dimensional expanders are locally testable (membership in the code can be tested using very few queries). To do so, the authors define a new notion of high-dimensional expanding system (HDE system), as well as that of amplified local testability, a stronger notion than local testability; they then prove that a code based on an HDE system satisfies this stronger notion. Moreover, they show that many well-known families of codes are, in fact, HDE system codes, and therefore satisfy this stronger notion of local testability as well.

Finally, a survey on differential privacy, with a foray into distribution testing:

Differential Privacy in the Shuffle Model: A Survey of Separations, by Albert Cheu (arXiv). If you are familiar with differential privacy (DP), you may recall that there are several notions of DP, each meant to address a different “threat model” (depending on whom you trust with your data). Shuffle DP is one of them, intermediate between “central” DP and the more stringent “local” DP. Long story short: with shuffle DP, the tradeoff between privacy and accuracy can be strictly in-between what's achievable in central and local DP, and that's the case for one of the usual suspects of distribution testing, uniformity testing (“I want to test if the data is uniformly distributed, but now, with privacy of that data in mind”). The survey discusses what is known about this in Sections 3.3 and 6, and what the implications are; but there are quite a few questions left unanswered… In short: a very good introduction to shuffle privacy, and to open problems in that area!

News for June 2021

A quieter month after last month’s bonanza. One (applied!) paper on distribution testing, a paper on tolerant distribution testing, and a compendium of open problems. (Ed: alas, I missed the paper on tolerant distribution testing, authored by one of our editors. Sorry, Clément!)

Learning-based Support Estimation in Sublinear Time by Talya Eden, Piotr Indyk, Shyam Narayanan, Ronitt Rubinfeld, Sandeep Silwal, and Tal Wagner (arXiv). A classic problem in distribution testing is that of estimating the support size \(n\) of an unknown distribution \(\mathcal{D}\). (Assume that all elements in the support have probability at least \(1/n\).) A fundamental result of Valiant-Valiant (2011) proves that the sample complexity of this problem is \(\Theta(n/\log n)\). A line of work has emerged in trying to reduce this complexity, with additional sources of information. Canonne-Rubinfeld (2014) showed that, if one can query the exact probabilities of elements, then the complexity can be made independent of \(n\). This paper studies a robust version of this assumption: suppose we can only get constant factor approximations to the probabilities. Then, the main result is that we can get a query complexity of \(n^{1-1/\log(\varepsilon^{-1})} \ll n/\log n\) (where the constant \(\varepsilon\) denotes the additive approximation to the support size). This paper also does empirical experiments to show that the new algorithm is indeed better in practice. Moreover, it shows that existing methods degrade rapidly with poorer probability estimates, while the new algorithm maintains its accuracy even with such estimates.
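
To see why exact probability queries trivialize the problem: for \(x \sim \mathcal{D}\), \(\mathbb{E}[1/\mathcal{D}(x)] = \sum_{x \in \mathrm{supp}(\mathcal{D})} \mathcal{D}(x) \cdot (1/\mathcal{D}(x))\), which is exactly the support size. A sketch of the resulting estimator, along with why it breaks in the approximate regime this paper tackles:

```python
def support_size(sample, prob, m=5000):
    """Estimate |support(D)| given sample() ~ D and exact probability
    queries prob(x) = D(x): average 1/D(x) over m samples. With only
    constant-factor approximations to D(x), this estimator is off by that
    same constant factor, far short of the additive eps*n guarantee the
    paper is after, hence the need for a subtler algorithm."""
    return sum(1.0 / prob(sample()) for _ in range(m)) / m
```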

The Price of Tolerance in Distribution Testing by Clément L. Canonne, Ayush Jain, Gautam Kamath, and Jerry Li (arXiv). While we have seen many results in distribution testing, the subject of tolerance is one that hasn’t received as much attention. Consider the problem of testing if an unknown distribution \(p\) (over domain \([n]\)) is the same as a known distribution \(q\). We wish to distinguish \(\varepsilon_1\)-close from \(\varepsilon_2\)-far, under total variation distance. When \(\varepsilon_1\) is zero, this is the standard property testing setting, and classic results yield \(\Theta(\sqrt{n})\) sample complexity. If \(\varepsilon_1 = \varepsilon_2/2\), then we are looking for a constant factor approximation to the distance, and the complexity is \(\Theta(n/\log n)\). Surprisingly, nothing better was known in between. Until this paper, that is. The main result gives a complete characterization of the sample complexity (up to log factors), for all values of \(\varepsilon_1, \varepsilon_2\). Remarkably, the sample complexity has an additive term \((n/\log n) \cdot (\varepsilon_1/\varepsilon^2_2)\). Thus, when \(\varepsilon_1 > \sqrt{\varepsilon_2}\), the sample complexity is \(\Theta(n/\log n)\). When \(\varepsilon_1\) is smaller, the main result gives a smooth bound on the sample complexity. One of the main challenges is that existing results use two very different techniques for the property testing vs. constant-factor approximation regimes. The former uses simpler \(\ell_2\)-statistics (e.g. collision counting), while the latter is based on polynomial approximations (estimating moments). The upper bound in this paper shows that simpler statistics based on just the first two moments suffice to get results for all regimes of \(\varepsilon_1, \varepsilon_2\).

Open Problems in Property Testing of Graphs by Oded Goldreich (ECCC). As the title clearly states, this is a survey covering a number of open problems in graph property testing. The broad division is based on the query model: dense graphs, bounded degree graphs, and general graphs. A reader will see statements of various classic open problems, such as the complexity of testing triangle freeness for dense graphs and characterizing properties that can be tested in \(poly(\varepsilon^{-1})\) queries. Arguably, there are more open problems (and fewer results) for testing in bounded degree graphs, where we lack broad characterizations of testable properties. An important, though less famous (?), open problem is that of the complexity of testing isomorphism. It would appear that the setting of general graphs, where we know the least, may be the next frontier for graph property testing. A problem that really caught my eye: can we transform testers that work for bounded degree graphs into those that work for bounded arboricity graphs? The latter is a generalization of bounded degree that has appeared in a number of recent results on sublinear graph algorithms.