Author Archives: Clement Canonne

News for April 2020

April is now behind us, and we hope you and your families are all staying safe and healthy. We saw six (make that seven; see the edit below) property testing papers appear online last month, so at least there is some reading ahead of us! A mixture of privacy, quantum, high-dimensional distributions, and juntas (juntæ?). A lot of distribution testing, overall.

Connecting Robust Shuffle Privacy and Pan-Privacy, by Victor Balcer, Albert Cheu, Matthew Joseph, and Jieming Mao (arXiv). This paper considers a recent notion of differential privacy called shuffle privacy, where users have sensitive data, a central untrusted server wants to do something with that data (for instance, say… testing its distribution), and a trusted middle-man/entity shuffles the users’ messages u.a.r. to bring in a bit more anonymity. As it turns out, testing uniformity (or identity) of distributions in the shuffle privacy model is (i) much harder than without privacy constraints; (ii) much harder than with ‘usual’ (weaker) differential privacy; (iii) much easier than with local privacy; and (iv) related to the sample complexity under another privacy notion, pan-privacy. It’s an exciting brand new world out there!

(Note: for the reader interested in keeping track of identity/uniformity testing of probability distributions under various privacy models, I wrote a very short summary of the current results here.)

Entanglement is Necessary for Optimal Quantum Property Testing, by Sebastien Bubeck, Sitan Chen, and Jerry Li (arXiv). The analogue of uniformity testing, in the quantum world, is testing whether a quantum state is equal to (or far from) the maximally mixed state. It’s known that this task has “quantum sample complexity” (number of measurements) \(\Theta(d/\varepsilon^2)\) (i.e., a square-root dependence on \(d^2\), the number of entries of the density matrix describing the state). But this requires entangled measurements, which may be tricky to get (or, in my case, understand): what happens if the measurements can be adaptive, but not entangled? In this work, the authors show that, under this weaker access model, \(\Omega(d^{4/3}/\varepsilon^2)\) measurements are necessary: adaptivity alone won’t cut it. It may still help, though: with neither entanglement nor adaptivity, the authors also show an \(\Omega(d^{3/2}/\varepsilon^2)\) lower bound on the number of measurements.

Testing Data Binnings, by Clément Canonne and Karl Wimmer (ECCC). More identity testing! Not private and not quantum for this one, but… not quite identity testing either. To paraphrase the abstract: this paper introduces (and gives near-matching bounds for) the related question of identity up to binning, where the reference distribution \(q\) is over \(k \ll n\) elements: the question is then whether there exists a suitable binning of the domain \([n]\) into \(k\) intervals such that, once binned, \(p\) is equal to \(q\).
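
To make the property concrete, here is a small, purely illustrative sketch for the exact version where both \(p\) and \(q\) are fully known (the paper, of course, only gets samples from \(p\)): the binned \(p\) can equal \(q\) precisely when every partial sum of \(q\) appears among the prefix sums of \(p\). Names and the tolerance handling are mine:

```python
def equal_up_to_binning(p, q, tol=1e-9):
    """Decide whether the (fully known) distribution p over {0,...,n-1} can be
    binned into len(q) consecutive intervals so that the binned distribution
    equals q.  Empty intervals are allowed here, for simplicity."""
    # prefix sums of p: prefixes[j] = p(0) + ... + p(j)
    prefixes, s = [], 0.0
    for pi in p:
        s += pi
        prefixes.append(s)
    # the binned p equals q iff every partial sum of q (the total mass the
    # first j bins must receive) is realized by some prefix sum of p
    s = 0.0
    for qj in q[:-1]:
        s += qj
        if not any(abs(s - pref) <= tol for pref in prefixes):
            return False
    return True

# toy usage
print(equal_up_to_binning([0.1, 0.2, 0.3, 0.4], [0.3, 0.7]))  # True
print(equal_up_to_binning([0.1, 0.2, 0.3, 0.4], [0.5, 0.5]))  # False
```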

Hardness of Identity Testing for Restricted Boltzmann Machines and Potts models, by Antonio Blanca, Zongchen Chen, Daniel Štefankovič, and Eric Vigoda (arXiv). Back to identity testing of distributions, but this time for high-dimensional, structured ones. Specifically, this paper focuses on the undirected graphical models known as restricted Boltzmann machines (and on Potts models), and provides efficient algorithms for identity testing as well as conditional hardness lower bounds, depending on the type of correlations allowed in the graphical models.

Robust testing of low-dimensional functions, by Anindya De, Elchanan Mossel, and Joe Neeman (arXiv). Junta testing is a classical, central problem in property testing, with motivations and applications in machine learning and complexity. The related (and equally well-motivated) question of junta testing of functions on \(\mathbb{R}^d\) (instead of the Boolean hypercube) was recently studied by the same authors; and the related (and, again, equally well-motivated) question of tolerant junta testing on the Boolean hypercube was also recently studied (among other works) by the same authors. Well, this paper does it all, and tackles the challenging (and, for a change, equally well-motivated!) question of tolerant testing of juntas on \(\mathbb{R}^d\).

Differentially Private Assouad, Fano, and Le Cam, by Jayadev Acharya, Ziteng Sun, and Huanyu Zhang (arXiv). Back to probability distributions and privacy. This paper provides differentially private analogues of the classical eponymous statistical inference results (Assouad’s lemma, Fano’s inequality, and Le Cam’s method). In particular, it gives ready-to-use, blackbox tools to prove testing and learning lower bounds for distributions in the differentially private setting, and shows how to use them to easily derive, and rederive, several lower bounds.

Edit: We missed one!

Learning and Testing Junta Distributions with Subcube Conditioning, by Xi Chen, Rajesh Jayaram, Amit Levi, and Erik Waingarten (arXiv). This paper focuses on the subcube conditioning model of (high-dimensional) distribution testing, where the algorithm can fix some variables to values of its choosing and get samples conditioned on that restriction. Extending and refining techniques from a previous work by a (sub+super)set of the authors, the paper shows how to optimally learn and test junta distributions in this framework—with exponential savings with respect to the usual i.i.d. sampling model.
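
To make the access model concrete, here is a toy sketch of what a subcube conditional oracle provides; the dict-based interface and the naive rejection-sampling implementation are mine (and rejection sampling is of course exponentially slow precisely when this access is most useful):

```python
import random

def subcube_conditional_sample(sample_p, restriction, max_tries=100000):
    """Draw one sample from p (accessed only through the sampler `sample_p`,
    which returns tuples in {0,1}^n), conditioned on the coordinates fixed by
    `restriction` (a dict {coordinate index: bit}).  Naive rejection sampling;
    a real oracle would be implemented natively by whoever holds the data."""
    for _ in range(max_tries):
        x = sample_p()
        if all(x[i] == b for i, b in restriction.items()):
            return x
    raise RuntimeError("conditioning event seems to have tiny probability")

# toy usage: a product distribution where each bit is 1 with probability 0.3
n = 5
p = lambda: tuple(int(random.random() < 0.3) for _ in range(n))
print(subcube_conditional_sample(p, {0: 1, 3: 0}))
```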

News for January 2020

The first month of 2020 is behind us, and it’s already looking good! Four papers last month, spanning quantum, graphs, and probability distributions.

On Efficient Distance Approximation for Graph Properties, by Nimrod Fiat and Dana Ron (arXiv). In the dense graph (i.e., adjacency-matrix) model, one is given a distance parameter \(\varepsilon\) and granted query access to the adjacency matrix of a graph \(G\), and seeks to determine something about the distance of \(G\) to a prespecified property \(\mathcal{P}\) of interest (i.e., the fraction of the matrix that needs to be changed for \(G\) to satisfy the property). Testing requires distinguishing whether that distance is zero, or at least \(\varepsilon\); many results over the past years have shown that a host of properties can be tested with a number of queries depending only on \(\varepsilon\) (and not on \(n=|G|\)). This work focuses on the harder problem of distance estimation, or, equivalently, tolerant testing: that is, estimating this distance up to \(\pm\varepsilon\). The authors introduce a general framework to get distance approximation algorithms for a “complicated” property by decomposing it into simpler properties and using algorithms for those. Applying this framework to a few flagship properties, they then show that \(P_3\)-freeness, induced \(P_4\)-freeness, and cordiality are properties which admit efficient distance estimation algorithms (independent of \(n\), and polynomial in \(1/\varepsilon\)).

Minimax Optimal Conditional Independence Testing, by Matey Neykov, Sivaraman Balakrishnan, and Larry Wasserman (arXiv). Given i.i.d. draws from a triple \((X,Y,Z)\), how hard is it to check whether \(X \perp Y \mid Z\), that is, “\(X\) and \(Y\) are independent conditioned on \(Z\)” (or far from it)? This is the problem of conditional independence testing, which we covered back in the day for the case where \(X,Y,Z\) are discrete. Well, this new work takes the fight out of the discrete world: extending the results and techniques from the discrete case, it provides optimal bounds for the continuous case, first when \(Z\) takes values in \([0,1]\), and then when all three of \(X,Y,Z\) are continuous.

How symmetric is too symmetric for large quantum speedups?, by Shalev Ben-David and Supartha Podder (arXiv); and Can graph properties have exponential quantum speedup?, by Andrew M. Childs and Daochen Wang (arXiv). Both these works independently study the relation between the (bounded-error) randomized and quantum query complexities of any graph property \(f\), in the dense graph (adjacency-matrix) model. In particular, how much advantage do quantum algorithms provide for those?
As it turns out, not so much: for those functions, both papers show the two quantities are always polynomially related (namely, \(Q(f) \leq R(f) \leq O(Q(f)^{6})\)) in the dense-graph model. As a corollary, this implies that testing any such property won’t benefit much from quantum (that is, at most polynomially)… at least in this model. In the adjacency list model (also known as the bounded-degree graph model), the first paper conjectures that exponential query complexity improvements are possible; and the second paper provides an example establishing exactly that. Altogether, this fully settles an open problem of Ambainis, Childs, and Liu, and of Montanaro and de Wolf.

News for October 2019

Last month was a lean month, with only three papers: one on direct product testing, one on finding forbidden patterns in a sequence, and one (an update of a paper which we had missed in the Spring) on quantum distribution testing.

Direct sum testing – the general case, by Irit Dinur and Konstantin Golubev (ECCC). Say a function \(f\colon \prod_{i=1}^d [n_i] \to \mathbb{F}_2\) is a direct product if it can be factored as \(f(x_1,\dots,x_d)=\sum_{i=1}^d f_i(x_i)\), where \(f_i\colon [n_i]\to\mathbb{F}_2\).
This paper provides a 4-query tester (i.e., a proximity oblivious tester (POT)) for the direct product property, reminiscent of (and relying on) the BLR linearity test: specifically, draw two subsets \(S,T\subseteq [d]\) and two inputs \(x,y\in \prod_{i=1}^d [n_i]\) u.a.r., and accept iff
\(f(x)+f(x_Sy)+f(x_Ty)+f(x_{S\Delta T}y) = 0\,.\)
The main theorem of the paper shows that the probability that this simple test rejects is lower bounded (up to a constant factor) by the distance of \(f\) to direct-product-ness. (The authors also provide a different POT making \(d+1\) queries, but with a simpler analysis.)
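
For concreteness, here is a small sketch of this four-query test; one caveat: I take \(x_Sy\) below to mean the input that agrees with \(y\) on the coordinates of \(S\) and with \(x\) elsewhere (with this convention, every direct product passes with probability one, since each coordinate is taken from \(y\) an even number of times across the four queries). Names and the sanity check are mine:

```python
import random

def direct_sum_test(f, ns, trials=1):
    """Run the 4-query test described above for f: [n_1] x ... x [n_d] -> {0,1}.
    Here hybrid(x, y, S) is the input agreeing with y on S and with x elsewhere.
    Returns True iff every trial accepts."""
    d = len(ns)

    def hybrid(x, y, S):
        return tuple(y[i] if i in S else x[i] for i in range(d))

    for _ in range(trials):
        x = tuple(random.randrange(n) for n in ns)
        y = tuple(random.randrange(n) for n in ns)
        S = {i for i in range(d) if random.random() < 0.5}
        T = {i for i in range(d) if random.random() < 0.5}
        # accept iff f(x) + f(x_S y) + f(x_T y) + f(x_{S Delta T} y) = 0 over F_2
        parity = f(x) ^ f(hybrid(x, y, S)) ^ f(hybrid(x, y, T)) ^ f(hybrid(x, y, S ^ T))
        if parity != 0:
            return False
    return True

# sanity check: a genuine direct product/sum always passes
ns = [3, 4, 5]
fs = [lambda xi, a=a: (xi * a) % 2 for a in (1, 2, 3)]
f = lambda x: sum(fi(xi) for fi, xi in zip(fs, x)) % 2
assert direct_sum_test(f, ns, trials=100)
```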

Finding monotone patterns in sublinear time, by Omri Ben-Eliezer, Clément Canonne, Shoham Letzter, and Erik Waingarten (ECCC). Given a function \(f\colon [n]\to\mathbb{R}\), a monotone subsequence of size \(k\) is a \(k\)-tuple of indices \(i_1 < \dots <i_k\) such that \(f(i_j) < f(i_{j+1})\) for all \(j\). This work considers (non-adaptive) one-sided testing of monotone-subsequence-freeness, or, equivalently, the task of finding such a monotone subsequence in a function promised to contain many of them. (This, in particular, generalizes the problem of one-sided monotonicity testing, which is the case \(k=2\).) The main result is a full characterization of the query complexity of this question (for constant \(k\)): strange as the exponent may seem, \(\Theta_\varepsilon( (\log n)^{\lfloor \log_2 k\rfloor} )\) queries are necessary and sufficient. The proof relies on a structural dichotomy result, stating that any far-from-free sequence either contains “easy to find” increasing subsequences with increasing gaps between the elements, or has a specific hierarchical structure.

Quantum Closeness Testing: A Streaming Algorithm and Applications, by Nengkun Yu (arXiv). This paper is concerned with quantum distribution testing in the local model, which only allows a very restricted (albeit, as the author argues, more natural and easier to implement) type of measurements, and is particularly well-suited to a streaming setting. The main contribution of this paper is to show a connection to classical distribution testing, allowing one to obtain quantum distribution testing upper bounds from their classical counterparts. In more detail, the paper shows that, from local measurements on two \(d\)-dimensional quantum states \(\rho,\sigma\), one can provide access to two classical distributions \(p,q\) on \(\approx d^2\) elements such that (i) \(\| p-q\|_2 \approx \|\rho-\sigma\|_2/d\) and (ii) \(\| p\|_2,\| q\|_2 = O(1/d)\).
Using this connection, the paper proceeds to establish a variety of upper bounds for testing several distribution properties in the local quantum model.

Open Problems From the Workshop on Data Summarization and the Workshop on Local Algorithms

A blast from the (recent) past: the open problems from the Workshop on Data Summarization, held in March 2018 at the University of Warwick, have been posted online and are available here.

Fast-forward to this summer: the recently concluded 3rd Workshop on Local Algorithms, which took place last month at ETH Zurich, featured an open problems session. The list of open problems can be found at this link, or here as a PDF.

News for July 2019

We saw quite a lot of activity last month, with a total of 9 papers on property testing or related areas.

Let’s start with a paper on PCPs:
Revisiting Alphabet Reduction in Dinur’s PCP, by Venkatesan Guruswami, Jakub Opršal, and Sai Sandeep (ECCC). As mentioned in the title, this work provides an alternate proof of one of the two parts of Dinur’s proof of the PCP theorem. In that proof, the argument alternates between two main steps: (i) amplifying the soundness gap, at the price of increasing the alphabet size (and the instance size); (ii) reducing the alphabet size, at the price of increasing the instance size. From the observation that step (i) creates some structure in the instances that can be leveraged, the authors provide an alternative construction for (ii), leading to an arguably simpler analysis of the soundness of the underlying test. A nice throwback to some of the roots of property testing.

Then, four papers on distribution testing:
Testing Mixtures of Discrete Distributions, by Maryam Aliakbarpour, Ravi Kumar, and Ronitt Rubinfeld (arXiv). One of the most studied problems in distribution testing is identity testing (a.k.a. goodness-of-fit): given a reference distribution \(q\) over a domain of size \(n\) and i.i.d. samples from an unknown \(p\), decide between \(p=q\) and \(d_\textrm{TV}(p,q) > \varepsilon\). While this is now well-understood and can be done with a strongly sublinear number of samples (namely, \(\sqrt{n}\)), the case of tolerant testing is much sadder: Valiant and Valiant proved that \(\Theta(n/\log n)\) samples are required for distinguishing \(d_\textrm{TV}(p,q) < \varepsilon/2\) from \(d_\textrm{TV}(p,q) > \varepsilon\). So, noise-tolerance is almost as hard as it gets… except, as this work suggests and studies, if one has some guarantees on the noise! What if the noise in the completeness case were just that \(q\) is perturbed by some (known, or available via samples as well) noise coming from a second distribution \(\eta\)? I.e., the question is now to test whether \(p\) is of the form \(\alpha q + (1-\alpha) \eta\) for some \(\alpha\in [0,1]\), or whether it is \(\varepsilon\)-far from all mixtures of \(q\) and \(\eta\). The authors have various results on this question and some extensions, but the upshot is: “with structured noise, strongly sublinear sample complexity is once again possible.”

Can Distributed Uniformity Testing Be Local?, by Uri Meir, Dor Mintzer, and Rotem Oshman (ACM PODC Proceedings). More on identity testing, specifically uniformity testing. No noise here, but another constraint: the samples are distributed. There are \(k\) players, each holding \(q\) i.i.d. samples from a distribution \(p\) over \([n]\); each can send \(\ell=1\) bit to a central referee, in a non-interactive fashion. Is \(p\) the uniform distribution, or is it \(\varepsilon\)-far from it? The authors here consider the case where \(k,n,\varepsilon\) are fixed, and prove lower bounds on the number \(q=q(k,n,\varepsilon)\) of samples each player must hold in order to perform this uniformity testing task. (Note: their lower bounds hold even in the public-randomness setting.) Motivated by the LOCAL model, they focus on 3 cases of interest, for which they obtain tight or near-tight bounds on \(q\) (in view of upper bounds from previous work of a subset of the authors): (1) when the referee’s decision has to be the \(\textsf{AND}\) of all \(k\) bits she receives; (2) when the referee can do something more involved, and use a threshold function; and (3) when the referee can use an arbitrary function of those \(k\) messages. Underlying those lower bounds is a neat application of Boolean Fourier analysis, recasting the analysis of the standard “Paninski” lower bound instance and the players’ decision functions as a question on Boolean functions.

Domain Compression and its Application to Randomness-Optimal Distributed Goodness-of-Fit, by Jayadev Acharya, Clément Canonne, Yanjun Han, Ziteng Sun, and Himanshu Tyagi (arXiv, ECCC). Remember the paper just above? Now, focus on case (3), where the referee can use any function of the messages, allow each player to send \(\ell\) bits instead of one, but give only one sample (instead of \(q\)) to each player. This becomes a setting we have talked about before on this blog (specifically, with three papers over three months). Those three papers established the optimal number of players \(k\), in terms of the domain size \(n\), the distance parameter \(\varepsilon\), and the number of bits per player \(\ell\), in both the private-coin and public-coin settings. This new work interpolates between those two extremes, and gives the tight answer to the general question: if there are \( 0\leq s \leq \log n\) bits of public randomness available, what is the optimal number of players to perform identity testing? Behind the upper bounds is a new notion of (randomness-efficient) domain compression the authors introduce: how to reduce the domain size of the probability distributions while preserving the \(\ell_1\) distances between them as much as possible?
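
One natural public-coin instance of this primitive, purely for illustration (the paper’s construction is more refined and, crucially, randomness-efficient): hash the domain \([n]\) into \(L\) buckets via a shared random partition, which, roughly speaking, preserves a \(\sqrt{L/n}\) fraction of the \(\ell_1\) distance between two distributions with constant probability. A toy sketch, with names and parameters mine:

```python
import random
from collections import Counter

def compress_domain(samples, n, target_size, seed):
    """Map samples over {0,...,n-1} to a domain of size `target_size` using a
    random partition of the domain; `seed` plays the role of the public
    randomness shared by all players, so everyone uses the same partition."""
    rng = random.Random(seed)
    bucket = [rng.randrange(target_size) for _ in range(n)]  # shared partition
    return [bucket[x] for x in samples]

# two players using the SAME seed obtain consistent compressed samples
samples_p = [random.randrange(100) for _ in range(1000)]
samples_q = [random.randrange(100) for _ in range(1000)]
print(Counter(compress_domain(samples_p, 100, 10, seed=42)))
print(Counter(compress_domain(samples_q, 100, 10, seed=42)))
```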

Towards Testing Monotonicity of Distributions Over General Posets, by Maryam Aliakbarpour, Themis Gouleakis, John Peebles, Ronitt Rubinfeld, and Anak Yodpinyanee (arXiv). Away from identity testing, let’s now consider another central question, testing monotonicity of distributions. Specifically, a distribution \(p\) over a given poset \((\mathcal{X}, \prec)\) is said to be monotone if \(p(x) \leq p(y)\) whenever \(x\prec y\). While the question is fully understood for the familiar case of the line \(\mathcal{X} = \{1,2,\dots,n\}\), the case of general posets is much murkier, and more or less uncharted. This paper initiates a general study of the question, improving on previously known bounds for the matching poset and Boolean hypercube, and providing several new results and techniques for the general case, to establish both upper and lower bounds on the sample complexity.

And now… graphs!
Expansion Testing using Quantum Fast-Forwarding and Seed Sets, by Simon Apers (arXiv). This paper is concerned with (bicriteria) testing of vertex expansion of a given graph \(G=(V,E)\) in the bounded-degree graph model. Loosely speaking, given a value \(\Phi\) and query access (i.e., sampling a node u.a.r., querying the degree of a node, and querying the \(i\)-th neighbor of a node) to a graph \(G\) with maximum degree \(d=\Theta(1)\), the goal is to distinguish between (i) \(G\) has vertex expansion at least \(\Phi\) and (ii) \(G\) is \(\varepsilon\)-far from any \(G’\) with vertex expansion at least \(\Phi^2\) (simplifying and dropping some technicalities). The previous state-of-the-art was a \(\tilde{O}_{\varepsilon}(n^{1/2}/\Phi^2)\) query complexity for classical algorithms, and a \(\tilde{O}_{\varepsilon}(\min(n^{1/3}/\Phi^2,n^{1/2}/\Phi^2))\) query complexity upper bound for quantum testers. The current work improves on this by providing a \(\tilde{O}_{\varepsilon}(n^{1/3}/\Phi)\) quantum tester. Graphs expand—and so does the gap between classical and quantum testing.

Walking Randomly, Massively, and Efficiently, by Jakub Łącki, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski (arXiv). One very useful primitive, in many graph property testing results and of course well beyond property testing, is the ability to perform random walks on graphs. In the case of large, distributed (or merely impractically massive) graphs, however, it is not clear how to implement this primitive efficiently. This work addresses this question in the MPC (Massively Parallel Computation) model, and shows how to perform, independently from each of the \(n\) nodes of a graph, an \(\ell\)-length random walk with only \(O(\log \ell)\) rounds of communication and \(O(n^\alpha)\) space per machine, for any constant \(\alpha \in(0,1)\). This round complexity matches a known (conditional) lower bound, and thus the result is as good as it gets — further, the authors obtain such MPC algorithms for both undirected and directed graphs, leveraging the undirected case to handle the (significantly more complicated) directed one. As an application of this efficient random walk primitive, they provide MPC property testing algorithms for both bipartiteness and vertex expansion (same definition as in the paper above: vertex expansion \(\Phi\) vs. \(\varepsilon\)-far from any \(G’\) with vertex expansion \(\Phi^2\)) in the general graph model. (Beyond property testing, another application—the main one in the paper—is computing PageRank in both undirected and directed graphs.)

A Lower Bound on Cycle-Finding in Sparse Digraphs, by Xi Chen, Tim Randolph, Rocco Servedio, and Timothy Sun (arXiv). In this paper, the authors tackle the problem of one-sided testing of acyclicity of directed graphs, in the bounded-degree graph model (equivalently, the task of finding a cycle in a digraph promised to be far from acyclic). Previously, an \(\Omega(\sqrt{n})\) lower bound for this task had been shown by Bender and Ron, based on a birthday-paradox-type argument; this work improves on that hardness result, showing that \(\tilde{\Omega}(n^{5/9})\) queries are necessary. Interestingly, whether any \(o(n)\) query complexity is achievable remains open.

Constant-Time Dynamic \((\Delta+1)\)-Coloring and Weight Approximation for Minimum Spanning Forest: Dynamic Algorithms Meet Property Testing, by Monika Henzinger and Pan Peng (arXiv). Finally, the last paper covered this month is concerned with dynamic graph algorithms: updates to a graph (which can be both edge additions and deletions) arrive sequentially, and the objective is to maintain, e.g., a coloring, or an approximation of some function of the graph, at any time, using as little time per update as possible. This paper advocates the use of techniques from property testing (loosely speaking, as their locality and query-efficiency translate to fast update times) to design better dynamic graph algorithms. It illustrates this idea by obtaining several new results, in particular an (amortized) \(O(1)\)-update-time randomized algorithm to maintain a \((\Delta+1)\)-coloring in a graph with degree bound \(\Delta\). The algorithm relies on a random ranking of the nodes which, combined with a suitable randomized update rule for recoloring a vertex when needed, ensures that the recolorings will not (with high probability) “propagate” too much. This is the first time that this idea, previously used in several testing and sublinear-time papers, is used in the context of dynamic graph algorithms—adding an edge between the two areas, hopefully only the first of many.

Update: we did miss a paper! Make it ten in July…
Nearly optimal edge estimation with independent set queries, by Xi Chen, Amit Levi, and Erik Waingarten (arXiv). Consider the following type of access to an unknown graph \(G=(V,E)\) on \(n=|V|\) nodes: on query \(S\subseteq V\), the oracle returns 1 if \(S\) is an independent set, and 0 otherwise. (This sounds quite a bit like what is allowed in the group testing literature.) How many such queries are necessary to get a multiplicative estimate of the number of edges, \(m=|E|\)? In this paper, the authors improve on the previous bound of \(\min(\sqrt{m}, n^2/m)\) (ignoring logarithmic factors), obtaining a \(\min(\sqrt{m}, n/\sqrt{m})\) upper bound — complemented by a (nearly, up to those pesky log factors) matching lower bound.
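
To make the access model concrete, here is a small sketch of the oracle together with the naive baseline that only queries pairs (a size-2 query simply reveals whether that pair is an edge, so roughly \(n^2/m\) such queries suffice for a constant-factor estimate); this is emphatically not the paper’s algorithm, which exploits larger queries. Names are mine:

```python
import random

def make_is_oracle(edges):
    """Independent-set oracle for the graph with the given edge list: on query
    S (an iterable of vertices), return True iff S contains no edge."""
    edge_set = {frozenset(e) for e in edges}
    def oracle(S):
        S = list(S)
        return all(frozenset((S[i], S[j])) not in edge_set
                   for i in range(len(S)) for j in range(i + 1, len(S)))
    return oracle

def naive_edge_estimate(oracle, n, num_queries):
    """Baseline: sample random pairs and count how many are NOT independent
    (i.e., are edges); rescale by the total number of pairs."""
    hits = sum(1 for _ in range(num_queries)
               if not oracle(random.sample(range(n), 2)))
    return hits / num_queries * n * (n - 1) / 2

# toy usage on a small random graph
n = 50
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if random.random() < 0.2]
oracle = make_is_oracle(edges)
print(len(edges), round(naive_edge_estimate(oracle, n, 5000)))
```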

If we missed a paper this month, or misrepresented some result above, please mention it in the comments.

News for April 2019

After a quite slow month of March, things sped up quite significantly in April: six different papers, ranging from graph testing to function testing to quantum distribution testing!

Update (05/04): We missed one. Seven!

Junta correlation is testable, by Anindya De, Elchanan Mossel, and Joe Neeman (arXiv). Junta testing really has seen a lot of attention and progress over the past couple of years! In this new work, the focus is on the tolerant version of (Boolean) junta testing, where the goal is to decide whether a given function \(f\colon \{-1,1\}^n\to \{-1,1\}\) is close to some \(k\)-junta, or far from any such simple function. The current paper improves on previous work of Blais, Canonne, Eden, Levi, and Ron, which showed how to distinguish between \(\varepsilon\)-close to \(k\)-junta and \(16\varepsilon\)-far from \(k\)-junta in \(O_{k,\varepsilon}(1)\) queries, in two ways: (i) the gap factor of 16 can now be made arbitrarily small, yielding the first fully tolerant non-trivial junta tester; and (ii) the new algorithm also identifies the underlying “core junta” (up to permutation of the coordinates). Besides the other results contained in the paper (such as results for a relaxation of the tolerant question akin to that of Blais et al.), a key aspect of this paper is to go beyond the use of (set) influence as a proxy for distance to junta-ness, which underlay all previous work, thus potentially opening a new avenue towards fully tolerant, \(\mathrm{poly}(k,1/\varepsilon)\)-query junta testing algorithms.

Testing Tensor Products, by Irit Dinur and Konstantin Golubev (arXiv). In this paper, the authors consider the following testing question: given query access to a Boolean function \(f\colon [n]^d \to \mathbb{F}_2\), test whether \(f\) is of the form \(f(x) = f_1(x_1)+\dots+f_d(x_d)\) (equivalently, this is the task of testing whether a \(d\)-dimensional tensor with \(\pm 1\) entries has rank \(1\)). They provide two different proximity-oblivious testers (POTs) for this task: a \(4\)-query one, reminiscent of (and building upon) the BLR linearity test; as well as a \((d+1)\)-query POT (which they dub the “Shapka Test,” one can assume out of their love of warm and comfortable hats), whose analysis is significantly simpler.

Changing direction, the next paper we saw is concerned with quantum testing of (regular, good ol’ classical) probability distributions:

Quantum Algorithms for Classical Probability Distributions, by Aleksandrs Belovs (arXiv). This work on quantum distribution testing considers 4 of the existing models of access to samples from an unknown probability distribution, analyzes their relationship, and argues their merits. Further, the author establishes that for the simple hypothesis testing question of distinguishing between two (known, fixed) probability distributions \(p,q\), in all four models the optimal sample complexity is proportional to the inverse of the Hellinger distance between \(p\) and \(q\)—in contrast to the classical setting, where it is known to be proportional to the squared inverse of this distance.
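
(For reference, with one common normalization of the Hellinger distance, conventions differing only by constant factors:
\[ d_{\mathrm{H}}(p,q)^2 \;=\; \tfrac{1}{2}\sum_{i}\bigl(\sqrt{p_i}-\sqrt{q_i}\bigr)^2 \;=\; 1-\sum_i \sqrt{p_i q_i}\,, \]
so the classical sample complexity scales as \(1/d_{\mathrm{H}}(p,q)^2\), while the quantum models considered here get away with something proportional to \(1/d_{\mathrm{H}}(p,q)\).)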

Turning to graphs, the next work considers clustering of bounded-degree graphs, a topic we recently discussed as well:

Robust Clustering Oracle and Local Reconstructor of Cluster Structure of Graphs, by Pan Peng (arXiv). Consider the following noisy model: an unknown bounded-degree graph \(G\) (of maximum degree \(d\)) which is \((k, \phi_{\rm in},\phi_{\rm out})\)-clusterable (i.e., clusterable into at most \(k\) clusters of inner and outer conductances bounded by \(\phi_{\rm in}\) and \(\phi_{\rm out}\), respectively) is adversarially modified in at most \(\varepsilon d n\) edges between clusters. This noise model, introduced in this work, leads to the natural question of “recovering the underlying graph \(G\)”: this is what the author tackles, by designing sublinear-time local clustering oracles and local reconstruction algorithms (local filters) in this setting. Further, since the noise model is reminiscent of property testing in the bounded-degree graph setting, the implications of these results for testing clusterability are also discussed (see Section 1.4).

A Faster Local Algorithm for Detecting Bounded-Size Cuts with Applications to Higher-Connectivity Problems, by Sebastian Forster and Liu Yang (arXiv). The authors focus on the problem of finding an edge (or vertex) cut in a given graph from a local viewpoint, i.e., in the setting of local computation algorithms, and provide a slew of results in detecting bounded-size cuts. This in turn has direct implications for testing \(k\)-edge and \(k\)-vertex connectivity, directed and undirected, in both the bounded-degree and general (unbounded degree) models, improving on or subsuming the current state-of-the-art across the board (see Section 1.2.4 of the paper, and the very helpful Tables 3 and 4, for a summary of the improvements).

Random walks and forbidden minors II: A \(\mathrm{poly}(d\varepsilon^{-1})\)-query tester for minor-closed properties of bounded degree graphs, by Akash Kumar, C. Seshadhri, and Andrew Stolman (ECCC). Following their breakthrough from last May, the authors are at it again! Leveraging a random-walk approach, they resolve the following open question in testing bounded-degree graphs: is there a \(\mathrm{poly}(1/\varepsilon)\)-query tester for planarity (and, more generally, for minor-closed graph properties)? The previous state-of-the-art, due to Levi and Ron, had a quasipolynomial dependence on \(1/\varepsilon\).
Without spoiling too much: the answer to the open question is yes — and, further, the authors settle it by showing an even more general result on testing \(H\)-minor-freeness (for any constant-size graph \(H\)).

Update (05/04):

Testing Unateness Nearly Optimally, by Xi Chen and Erik Waingarten (arXiv). Recall that a function \(f\colon \{0,1\}^n\to \{0,1\}\) is said to be unate if there exists some \(s\in\{0,1\}^n\) such that \(f(\cdot \oplus s)\) is monotone; i.e., if \(f\) is either non-increasing or non-decreasing in each coordinate. Testing unateness has seen a surge of interest over the past year or so; this work essentially settles the question, up to polylogarithmic factors in \(n\) (and the exact dependence on \(\varepsilon\)). Namely, the authors present and analyze an \(\tilde{O}(n^{2/3}/\varepsilon^2)\)-query adaptive tester for unateness, which nearly matches the \(\tilde{\Omega}(n^{2/3})\)-query lower bound for adaptive testers previously established by Chen, Waingarten, and Xie.
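
As a purely illustrative (and exponential-time) way to make the definition concrete, the coordinate-wise characterization can be checked by brute force on tiny \(n\); the function below and its examples are mine, and have nothing to do with the sublinear tester of the paper:

```python
from itertools import product

def is_unate(f, n):
    """Brute-force check that f: {0,1}^n -> {0,1} is unate: for every
    coordinate i, flipping x_i from 0 to 1 never both increases f on some
    input and decreases it on another."""
    for i in range(n):
        goes_up = goes_down = False
        for x in product((0, 1), repeat=n):
            if x[i] == 1:
                continue
            y = x[:i] + (1,) + x[i + 1:]
            lo, hi = f(x), f(y)
            goes_up |= hi > lo
            goes_down |= hi < lo
        if goes_up and goes_down:
            return False  # coordinate i is neither non-decreasing nor non-increasing
    return True

# examples: x0 OR (NOT x2) is unate; parity is not
print(is_unate(lambda x: x[0] | (1 - x[2]), 3))   # True
print(is_unate(lambda x: (x[0] + x[1]) % 2, 2))   # False
```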

News for March 2019

Last month was very calm. Eerily calm in fact: no property testing or related sublinear-time algorithm in view!* Gird your loins for the April batch, I reckon?

\({}^\ast\) That we saw, at least. (If we missed some, please mention it in the comments below!)

Update: As mentioned in the comments, we did indeed miss two (related) works on distribution estimation from a competitive viewpoint. Namely, for a large class of properties (entropy, distance to uniformity, support size…), Hao, Orlitsky, Suresh, and Wu provide in the first paper (arXiv 1904.00070) an “instance-optimal(ish)” estimator which does “as well” with \(m/\sqrt{\log m}\) samples as the natural and naive empirical estimator would do with \(m\) (thus, in some sense, amplifying the data size). In the second paper (arXiv 1903.01432), Hao and Orlitsky improve this and remove the “ish” to get the optimal amplification \(m/\log m\).

News for December 2018

Happy new year, and best wishes to those close and \(\varepsilon\)-far! December concluded the year with 4 new preprints, spanning quite a lot of the property testing landscape:

Testing Stability Properties in Graphical Hedonic Games, by Hendrik Fichtenberger and Anja Rey (arXiv). The authors of this paper consider the problem of deciding whether a given hedonic game possesses some “coalition stability” in a property testing framework. Namely, recall that a hedonic game is a game where players (nodes) form coalitions (subsets of nodes) based on their individual preferences and local information about the considered coalition, thus resulting in a partition of the original graph.
Several notions exist to evaluate how good such a partition is, based on how “stable” the given coalitions are. This work focuses on hedonic games corresponding to bounded-degree graphs, introducing and studying the property testing question of deciding (for several such notions of stability) whether a given game admits a stable coalition structure, or is far from admitting such a partition.

Spectral methods for testing cluster structure of graphs, by Sandeep Silwal and Jonathan Tidor (arXiv). Staying among bounded-degree graphs, we turn to testing clusterability of graphs, the focus of this paper. Given an \(n\)-node graph \(G\) of degree at most \(d\) and parameters \(k, \phi\), say that \(G\) is \((k, \phi)\)-clusterable if it can be partitioned into \(k\) parts of inner conductance at least \(\phi\).
Analyzing properties of a random walk on \(G\), this work gives a bicriterion guarantee (\((k, \phi)\)-clusterable vs. \(\varepsilon\)-far from \((k, \phi^\ast)\)-clusterable, where \(\phi^\ast \approx \varepsilon^2\phi^2\)) for the case \(k=2\), improving on previous work by Czumaj, Peng, and Sohler’15.

We then switch from graphs to probability distributions with our third paper:

Inference under Information Constraints I: Lower Bounds from Chi-Square Contraction, by Jayadev Acharya, Clément Canonne, and Himanshu Tyagi (arXiv). (Disclaimer: I’m one of the authors.) In this paper, the first of an announced series of three, the authors generalize the settings of two previous works we covered here and there to consider the general question of distribution testing and learning when the \(n\) i.i.d. samples are distributed among \(n\) players, each of whom can only communicate their sample to the central algorithm subject to some pre-specified local information constraint (e.g., privacy, or noise, or communication budget). The paper develops a general lower bound framework to study such questions, with a systematic focus on the power of public vs. private randomness between the \(n\) parties, and instantiates it to obtain tight bounds in the aforementioned locally private and communication-limited settings. (Spoiler: public randomness strictly helps, but not always.)

Finally, after games, graphs, and distributions, our fourth paper of the month concerns testing of functions:

Partial Function Extension with Applications to Learning and Property Testing, by Umang Bhaskar and Gunjan Kumar (arXiv). This work focuses on a problem quite related to property testing, that of partial function extension: given as input \(n\) point/value pairs from a purported function on a domain \(X\) of size \(|X| > n\), one is tasked with deciding whether there exists (resp., with finding) a function \(f\) on \(X\) consistent with these \(n\) values which further satisfies a specific property, such as linearity or convexity. This is indeed very reminiscent of property testing, where one gets to query these \(n\) points and must decide (approximate) consistency with such a well-behaved function. Here, the authors study the computational hardness of this partial function extension problem, specifically for properties such as subadditivity and XOS (a subclass of subadditivity); and as corollaries obtain new property testers for the classes of subadditive and XOS functions.

As usual, if you know of some work we missed from last December, let us know in the comments!

News for September 2018

Last month seemed slower than usual, with at first only one property testing paper; we did miss a few, though, so hopefully the list below is now up to date.

Property testing and expansion in cubical complexes, by David Garber and Uzi Vishne (arXiv). Consider the question of testing whether an arbitrary function \(f\colon V\times V \to\{-1,1\}\) is of the form \(f(x,y) = h(x)h(y)\) for some \(h\colon V\to\{-1,1\}\). An intuitive one-sided test, shown to work by Kaufman and Lubotzky (2014), is to pick uniformly random \(x,y,z\in V\) and check that \(f(x,y)f(y,z)f(z,x)=1\). This paper considers the high-dimensional generalization of testing the property that a function \(f\colon V\times V \times V\times V \to\{-1,1\}\) is of the form \(f(w,x,y,z) = \alpha\cdot h(w,x)h(y,x)h(y,z) h(w,z)\), for some \(h\colon V\times V\to\{-1,1\}\) and sign \(\alpha\in\{-1,1\}\). The authors derive necessary and sufficient conditions for testability of this property, by formulating it in the language of incidence geometry and exploiting this connection.
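
Here is a minimal sketch of that one-dimensional test, just to make the setup concrete (the names and the sanity check are mine):

```python
import random

def product_form_test(f, V, trials=100):
    """Run the 3-query test: f: V x V -> {-1,1} passes a trial if
    f(x,y) * f(y,z) * f(z,x) == 1 for uniformly random x, y, z.
    Any f of the form f(x,y) = h(x)h(y) passes every trial."""
    for _ in range(trials):
        x, y, z = (random.choice(V) for _ in range(3))
        if f(x, y) * f(y, z) * f(z, x) != 1:
            return False
    return True

# sanity check: a function of the form h(x)h(y) always passes
V = list(range(20))
h = {v: random.choice((-1, 1)) for v in V}
good = lambda x, y: h[x] * h[y]
print(product_form_test(good, V))   # True
```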

Local Computation Algorithms for the Lovász Local Lemma, by Dimitris Achlioptas, Themis Gouleakis, and Fotis Iliopoulos (arXiv). There has been significant work in the past decade on constructive versions of the Lovász Local Lemma (LLL), since the seminal work of Moser-Tardos. This paper designs new Local Computation Algorithms (LCAs) for the LLL. It’s best to consider the problem of \(k\)-SAT. Consider a \(k\)-CNF \(\phi\) with \(n\) variables and \(m\) clauses, where every variable is in at most \(d\) clauses. By the LLL, if \(d \leq 2^k/(ke)\), then \(\phi\) is satisfiable. An LCA would compute any bit of a satisfying assignment, by making sublinear queries into \(\phi\). This was first studied by Rubinfeld-Tamir-Vardi-Xie. Their LCA would make polylogarithmically many queries, but requires a stronger condition than what the LLL needs. This paper gives the first sublinear LCA under precisely the LLL conditions, though the number of queries is \(n^\beta\) (for \(\beta < 1\)). The main result is an LCA for an abstract LLL formulation, which also leads to LCAs for graph coloring. Roughly speaking, for a graph with maximum degree \(\Delta\) where all neighborhoods are sufficiently far from cliques, the LLL shows that the chromatic number bound of \(\Delta + 1\) can be beaten. This result gives an LCA for graph coloring under these LLL conditions.

Sublinear Time Low-Rank Approximation of Distance Matrices, by Ainesh Bakshi and David P. Woodruff (arXiv). Consider two sets of points \(P\) and \(Q\) in a metric space, with \(m\) and \(n\) points respectively. The \(m \times n\) distance matrix \(A\) contains all pairwise distances between them. This paper studies approximating \(A\) using a low-rank representation, without reading all the entries of \(A\). The main result is as follows. For rank parameter \(k\), let \(A_k\) be the closest (in Frobenius norm) rank-\(k\) approximation to \(A\). There is an \(O(m^{1+\gamma} + n^{1+\gamma}\,\mathrm{poly}(k\epsilon^{-1}))\)-time algorithm (for arbitrary \(\gamma > 0\)) that outputs a rank-\(k\) matrix \(B\) with the following property: \(\|A-B\|^2_F \leq \|A-A_k\|^2_F + \epsilon \|A\|^2_F\). Interestingly, there is a lower bound showing that an \(o(mn)\)-time algorithm cannot get a multiplicative approximation. One technical ingredient is a method to sample column norms of \(A\), under an approximate triangle inequality constraint. This allows one to compute smaller matrices that approximate \(A\), on which one can directly compute an approximate rank-\(k\) decomposition.

On Solving Linear Systems in Sublinear Time, by Alexandr Andoni, Robert Krauthgamer, and Yosef Pogrow (arXiv). Solving Laplacian linear systems is an immensely deep area, with lots of exciting recent work. This paper studies sublinear algorithms for such problems. Consider a Laplacian matrix \(L\) (think of \(I - A/d\), for the adjacency matrix \(A\) of a \(d\)-regular graph). The aim is to solve \(Lx = b\), for \(x, b \in {\mathbb R}^n\). Let the solution be \(x^*\). The main result shows that one can approximate any entry in sublinear time. Specifically, for any coordinate \(i\), one can output an approximation \(\hat{x}_i\) such that \(|\hat{x}_i - x^*_i| \leq \epsilon\|x^*\|_\infty\). The running time is essentially \(d\epsilon^{-2}\kappa^3\), where \(\kappa\) is a bound on the condition number of \(L\). There are generalizations for Symmetric Diagonally Dominant (SDD) matrices, a generalization of Laplacians. There is an \(\Omega(n^{1/d^2})\) lower bound for solving general PSD systems, and a lower bound showing that \(\Omega(\kappa^2)\) queries into \(b\) are necessary.
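
The summary above does not spell out the algorithm, but the flavor can be conveyed by the textbook connection between such systems and random walks: for \(L = I - A/d\), one can hope to approximate a single coordinate of the solution via the truncated Neumann series \(\sum_{t=0}^{T} (A/d)^t b\), each term of which is an expectation over a length-\(t\) random walk started at \(i\). A heavily hedged sketch of that idea (not the paper’s algorithm; parameters and names are mine):

```python
import random

def estimate_coordinate(adj, b, i, T=60, walks=4000):
    """Monte-Carlo estimate of the i-th coordinate of the truncated Neumann
    series sum_{t=0}^{T} (A/d)^t b, for a d-regular graph given by adjacency
    lists `adj`.  For a connected, non-bipartite regular graph and b summing
    to zero, this truncated series approaches a solution of (I - A/d) x = b
    as T grows; each walk contributes b evaluated along its trajectory."""
    total = 0.0
    for _ in range(walks):
        v, acc = i, b[i]
        for _ in range(T):
            v = random.choice(adj[v])   # one uniform random-walk step
            acc += b[v]
        total += acc
    return total / walks

# toy usage: the 5-cycle (2-regular, non-bipartite), with b orthogonal to all-ones
adj = {v: [(v - 1) % 5, (v + 1) % 5] for v in range(5)}
b = [1.0, -1.0, 0.0, 0.0, 0.0]
print(estimate_coordinate(adj, b, i=0))
```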

News for June 2018

The summer gets off to a flying start, with three property testing papers, spanning differential privacy, distribution testing, and juntas in Gaussian space!

On closeness to \(k\)-wise uniformity, by Ryan O’Donnell and Yu Zhao (arXiv)
In this paper, the authors consider the following structural question about probability distributions over the Boolean hypercube \(\{-1,1\}^n\): “what is the relation between the total variation distance \(\delta\) to \(k\)-wise independence, and the bound \(\varepsilon\) on the Fourier coefficients of the distribution on degrees up to \(k\)?”
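
(To fix notation, with one standard convention for the Fourier coefficients of a distribution \(p\) over \(\{-1,1\}^n\),
\[ \widehat{p}(S) \;=\; \mathbb{E}_{x\sim p}\Bigl[\,\prod_{i\in S} x_i\Bigr], \qquad S\subseteq [n], \]
\(p\) is \(k\)-wise independent (i.e., \(k\)-wise uniform) exactly when \(\widehat{p}(S)=0\) for all \(1\le |S|\le k\), and the relaxed \((\varepsilon,k)\)-wise notion instead only requires \(|\widehat{p}(S)|\le\varepsilon\) for those \(S\); the question above asks how small the distance \(\delta\) to the nearest \(k\)-wise independent distribution must then be.)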

While this question might seem a bit esoteric at first glance, it has direct and natural applications to derandomization, and of course to distribution testing (namely, to test \(k\)-wise independence and its generalization, \((\varepsilon, k)\)-wise independence of distributions over the hypercube).

The main contribution here is to improve (by a \((\log n)^{O(k)}\) factor) the bounds on \(\delta (n,k,\varepsilon)\) over the previous work by Alon et al. [AAK+07], making them either tight (for \(k\) even) or near-tight. To do so, the authors introduce a new hammer to the game, using linear programming duality in the proof of both their upper and lower bounds.

Property Testing for Differential Privacy, by Anna Gilbert and Audra McMillan (arXiv)
Differential privacy, as introduced by Dwork et al., needs no introduction. Property testing, especially on this website, needs even less. What about a combination of the two? Namely, given black-box access to an algorithm claiming to perform a differentially private computation, how to test whether this is indeed the case?

Introducing and considering this quite natural question for the first time [01/31/2019: see erratum below], this work shows, roughly speaking, that testing differential privacy is hard. Specifically, they show that for many notions of differential privacy (pure, approximate, and their distributional counterparts), testing is either impossible or possible but not with a sublinear number of queries (even when the tester is provided with side information about the black-box). In other terms, as the authors put it: trusting the privacy of an algorithm “requires compromise by either the verifier or algorithm owner” (and, in the latter case, even then it’s not a simple matter).

Is your data low-dimensional?, by Anindya De, Elchanan Mossel, and Joe Neeman (arXiv)
(Well, is it?) To state it upfront, I am biased here, as it is a problem I was very eager to see investigated to begin with. To recap, the question is as follows: “given query access to some unknown Boolean-valued function \(f\colon \mathbb{R}^n \to \{-1,1\}\) over the high-dimensional space \(\mathbb{R}^n\) endowed with the Gaussian measure, how can one check whether \(f\) only depends on “few” (i.e., \(k \ll n\)) variables?”

This is the continuous, Gaussian version of the (quite famous) junta testing problem, which has gathered significant attention over the past years (the Gaussian version has, to the best of my knowledge, never been investigated). Now, the above formulation has a major flaw: specifically, it is uninteresting. In Gaussian space*, who cares about the particular basis I expressed my input vector in? So a more relevant question, and the one that the authors tackle, is the more robust and natural one: “given query access to some unknown Boolean-valued function \(f\colon \mathbb{R}^n \to \{-1,1\}\) over the high-dimensional space \(\mathbb{R}^n\) endowed with the Gaussian measure, how can one check whether \(f\) only depends on a low-dimensional linear combination of the variables?” Or, put differently, does all the relevant information for \(f\) live in a low-dimensional subspace?

De, Mossel, and Neeman show how one can do this, non-adaptively, with a query complexity independent of the dimension \(n\) (hurray!), but instead polynomial in \(k\), the distance parameter \(\varepsilon\), and the surface area \(s\) of \(f\). And since this last parameter may seem quite arbitrary, they also proceed to show that a polynomial dependence on this \(s\) is indeed required.

*”In Gaussian space, no one can hear you change basis?”


Erratum (01/31/2019): It was brought to our attention that our overview of “Property Testing for Differential Privacy” was overlooking a key part of the literature; specifically, a work of Dixit et al. (TCC 2013), which introduces this very question. From the abstract:

How does one ensure that [those third-parties] have implemented their algorithms in a way which meet the specifications of the privacy requirements? […] In this work, we propose a new approach to the above problem which we call privacy testing. We do this by formulating the above problem in the well-studied framework of property testing.