Videos from the WoLA 2020 workshop

The 4th Workshop on Local Algorithms (WoLA 2020) recently concluded: aimed at “fostering dialogue and cross-pollination of ideas between the various communities” related to local algorithms, broadly construed, it featured invited and contributed talks on a variety of topics, many (if not most) very relevant to the sublinear algorithms and property testing community.

Because of the online format of the workshop (imposed by the current circumstances), all talks were recorded and posted online. As such, all videos are available on the workshop’s website and YouTube channel: a good list of resources to peruse!

News for July 2020

We hope you’re all staying safe and healthy! To bring you some news (and distraction?) during this… atypical summer, here are the recent papers on property testing and sublinear algorithms we saw appear this month. Graphs, probability distributions, functions… there is something for everyone.

On Testing Hamiltonicity in the Bounded Degree Graph Model, by Oded Goldreich (ECCC). The title sort of gives it away: this relatively short paper shows that testing whether an unknown bounded-degree graph has a Hamiltonian path (or Hamiltonian cycle) in the bounded-degree model requires a number of queries linear in \(n\), the number of nodes. The results also hold for directed graphs (with respect to directed Hamiltonian path or cycle), and are shown via a local reduction to a promise problem of satisfiability of 3CNF formulae. Also included: a complete proof of the linear lower bound for another problem, Independent Set Size; and an open problem: what is the query complexity of testing graph isomorphism in the bounded-degree model?

Local Access to Sparse Connected Subgraphs Via Edge Sampling, by Rogers Epstein (arXiv). Given access to a connected graph \(G=(V,E)\), can we efficiently provide access to some sparse connected subgraph \(G’=(V,E’)\subseteq G\) with \(|E’|\ll |E|\)? This question, well-studied in particular for the case where \(G\) has bounded degree and the goal is to achieve \(|E’|\leq (1+\varepsilon)|V|\), is the focus of this paper, which provides a trade-off between the query complexity of the oracle and \(|E’|\). Specifically, for every parameter \(T\), one can give oracle access to \(G’\) with \(|E’|=O(|V|T)\), at query complexity \(\tilde{O}(|E|/T)\).

Switching gears, we move from graphs to probability distributions:

Tolerant Distribution Testing in the Conditional Sampling Model, by Shyam Narayanan (arXiv). In the conditional sampling model for distribution testing, which we have covered a few times on this blog, the algorithm at each step gets to specify a subset \(S\) of the domain, and observes a sample from the distribution conditioned on \(S\). As it turns out, this can speed things up a lot: as Canonne, Ron, and Servedio (2015) showed, even tolerant uniformity testing, which with i.i.d. samples requires a near-linear (in the domain size \(n\)) number of samples, can be done in a constant number of conditional queries. Well, sort of constant: no dependence on \(n\), but the dependence on the distance parameter \(\varepsilon\) was, in CRS15, quite bad: \(\tilde{O}(1/\varepsilon^{20})\). This work gets rid of that badness, and establishes the (nearly) optimal \(\tilde{O}(1/\varepsilon^{2})\) query complexity! Among other results, it also generalizes this to tolerant identity testing (with \(\tilde{O}(1/\varepsilon^{4})\) queries), for which no constant-query upper bound was previously known. Things have become truly sublinear.
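For concreteness, here is a minimal Python sketch of the conditional-sampling access model itself (the class and method names are ours, and the simulation below is just a toy stand-in for an actual oracle):

```python
import random

class CondOracle:
    """Toy simulation of the COND access model: at each step the tester
    specifies a subset S of the domain and receives one sample drawn
    from p conditioned on S."""

    def __init__(self, p):
        self.p = p  # list of probabilities over the domain {0, ..., n-1}

    def cond_sample(self, S):
        S = list(S)
        weights = [self.p[i] for i in S]
        if sum(weights) == 0:
            # Conventions differ on zero-mass sets; here, uniform over S.
            return random.choice(S)
        return random.choices(S, weights=weights, k=1)[0]

# An ordinary i.i.d. sample is just a conditional query on the full domain:
# CondOracle(p).cond_sample(range(len(p)))
```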

Interactive Inference under Information Constraints, by Jayadev Acharya, Clément Canonne, Yuhan Liu, Ziteng Sun, and Himanshu Tyagi (arXiv). Say you want to do uniformity/identity testing (or learning, but let’s focus on testing) on a discrete distribution, but you can’t actually observe the i.i.d. samples: instead, you can only perform some limited, “local” measurement on each sample. How hard is the task, compared to what you could do with the full samples? This setting, which captures things like distributed testing under communication or local privacy constraints, erasure channels, etc., was well understood from recent work in the non-adaptive setting. But what if the “measurements” can be made adaptively? This paper shows general lower bounds for identity testing and learning, as a function of the type of local measurement allowed: as a corollary, this gives tight bounds for communication constraints and local privacy, and shows the first separation between adaptive and non-adaptive uniformity testing, for a type of “leaky” membership query measurement.

Efficient Parameter Estimation of Truncated Boolean Product Distributions, by Dimitris Fotakis, Alkis Kalavasis, and Christos Tzamos (arXiv). Suppose there is a fixed and unknown subset \(S\) of the hypercube, a “truncation” set, which you can only access via membership queries; and you receive i.i.d. samples from an unknown product distribution on the hypercube, truncated to that set \(S\) (for instance, because your polling strategy or experimental measurements have limitations). Can you still learn that distribution efficiently? Can you test it for various properties, as you typically really would like to? (or is it just me?) This paper identifies a natural sufficient condition on \(S\), which the authors call fatness, under which the answer is a resounding yes. Specifically, if \(S\) satisfies this condition, one can actually generate honest-to-goodness i.i.d. samples (non-truncated) from the true distribution, given truncated samples!

Leaving distribution testing, our last paper is on testing functions in the distribution-free model:

Downsampling for Testing and Learning in Product Distributions, by Nathaniel Harms and Yuichi Yoshida (arXiv). Suppose you want to test (or learn) a class of Boolean functions \(\mathcal{C}\) over some domain \(\Omega^n\), with respect to some (unknown) product distribution (i.e., in the distribution-free testing model, or PAC-learning model). This paper develops a general technique, downsampling, which allows one to reduce such distribution-free testing of \(\mathcal{C}\) under a product distribution to testing \(\mathcal{C}\) over \([r]^d\) under the uniform distribution, for a suitable parameter \(r=r(d,\varepsilon,\mathcal{C})\). This allows the authors, among many other testing and learning results, to easily re-establish (and, in the second case, improve upon) recent results on testing monotonicity over \([n]^d\) (uniform distribution) and over \(\mathbb{R}^d\) (distribution-free).

Policy on reporting papers

While we at PTReview always look through the posted papers, we do not check for correctness, though we do make a serious attempt to make sure each paper is reasonable. In a few instances, we have decided not to post a (topically relevant) paper because it looked plainly wrong. Our position is that the benefit of the doubt goes to the authors, and a borderline paper should be posted. We are only curating relevant tech reports, not passing judgment on results.

In some borderline cases, readers familiar with the subject have complained to us that a paper should not be considered a scientific contribution (because of, say, unspecified algorithms, or blatantly incorrect or unverifiable central claims). These are cases where we were also unsure of the paper, and we have usually removed or not posted such papers.

If the authors feel that their paper should nonetheless be posted, they should email us at little.oh.of.n@gmail.com. As long as the paper is not complete nonsense and appears to cite the relevant history, we will defer to the authors’ wishes.

News for June 2020

Sublinear algorithms in times of social distancing…always something exciting. This month we have a slew of results on sublinear algorithms for classic graph problems.

(Ed: We have removed a previously posted paper due to correctness concerns raised by our readers. Please look at the post on our paper policy.)

Palette Sparsification Beyond (∆ + 1) Vertex Coloring by Noga Alon and Sepehr Assadi (arXiv). A basic fact from graph theory is that any graph has a \((\Delta+1)\)-coloring, where \(\Delta\) is the maximum degree. Followers of property testing are likely familiar with a fantastic result of Assadi-Chen-Khanna (ACK) that gives a sublinear algorithm for \((\Delta+1)\)-coloring. (The running time is \(\widetilde{O}(n^{3/2})\), where \(n\) is the number of vertices.) The key tool is a palette sparsification theorem: suppose each vertex is given a “palette” of \((\Delta+1)\) colors. Each vertex randomly sparsifies its palette by sampling \(O(\log n)\) colors, and is constrained to only use these colors. Remarkably, whp the graph can still be properly colored. This tool is at the heart of sublinear time/space algorithms for coloring. This paper gives numerous extensions of this theorem, where one can trade off a larger initial palette for a smaller final sample. Another extension is for triangle-free graphs, where the initial palette is of size \(O(\Delta/\ln \Delta)\) and the sample is of size \(O(\Delta^\gamma + \sqrt{\ln n})\) (for any parameter \(\gamma < 1\)). This leads to an \(O(n^{3/2 + \gamma})\)-time algorithm for \(O(\Delta/\ln \Delta)\)-coloring of triangle-free graphs.
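The random sparsification step itself fits in a few lines; here is a hedged Python sketch (the constant in the \(O(\log n)\) sample size is an arbitrary placeholder, not the one from the theorem):

```python
import math
import random

def sparsify_palettes(n, max_degree, c=5):
    """Palette sparsification step: every vertex samples c*log(n) colors
    u.a.r. from the common palette {0, ..., Delta}; the ACK theorem says
    that whp the graph remains properly colorable using only these lists.
    The constant c is an illustrative placeholder."""
    palette = range(max_degree + 1)  # the (Delta + 1)-color palette
    s = min(len(palette), math.ceil(c * math.log(n + 1)))
    return {v: random.sample(palette, s) for v in range(n)}
```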

When Algorithms for Maximal Independent Set and Maximal Matching Run in Sublinear-Time by Sepehr Assadi and Shay Solomon (arXiv). Taking off from sublinear coloring algorithms, one can ask whether there are sublinear time algorithms for Maximal Independent Set (MIS) and Maximal Matching (MM). Alas, ACK prove that this is impossible in general. This paper investigates when one can get sublinear time algorithms for these problems. For a graph \(G\), let \(\beta(G)\) be the “neighborhood independence number”: the size of the largest independent set contained in a vertex neighborhood. This paper shows that both problems can be solved in \(\widetilde{O}(n \beta(G))\) time. Examples of natural classes of graphs where \(\beta(G)\) is constant: line graphs and unit-disk graphs. An interesting aspect is that the MIS algorithm is actually deterministic! It’s the simple marking algorithm that rules out the neighborhoods of chosen vertices; the analysis shows that not much time is wasted in re-marking the same vertex.
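For reference, the marking rule itself is the classic greedy procedure; a small sketch, written as a plain linear-time algorithm (ignoring the paper's careful sublinear-time implementation):

```python
def greedy_mis(adj):
    """Greedy maximal independent set: scan the vertices in order, add a
    vertex if it has not been ruled out, and mark ("rule out") its whole
    neighborhood. adj: dict mapping each vertex to its list of neighbors."""
    in_mis, marked = set(), set()
    for v in adj:
        if v in marked:
            continue
        in_mis.add(v)
        marked.update(adj[v])  # rule out the neighborhood of v
    return in_mis
```

The paper's contribution is showing that, when \(\beta(G)\) is small, the total re-marking work can be charged off carefully; the sketch only shows the rule, not that analysis.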

Sublinear Algorithms and Lower Bounds for Metric TSP Cost Estimation by Yu Chen, Sampath Kannan, and Sanjeev Khanna (arXiv). This paper studies sublinear algorithms for the metric TSP problem. The input is an \(n \times n\) distance matrix. One can 2-approximate the TSP cost by computing the MST, and a result of Czumaj-Sohler gives a \((1+\varepsilon)\)-approximation algorithm for the latter, running in \(O(n\varepsilon^{-O(1)})\) time. The main question is: can one beat the factor-2 approximation in sublinear time? This paper considers the graphic TSP setting, where the distance matrix corresponds to the shortest path metric of an unweighted graph. One result is a \((2-\varepsilon_0)\)-approximation algorithm (for an explicit constant \(\varepsilon_0\)) that runs in \(\widetilde{O}(n)\) time. For the important \((1,2)\)-TSP setting (all distances are either 1 or 2), the paper gives an \(O(n^{1.5})\)-time 1.63-approximation algorithm. Interestingly, there is a lower bound showing that \((1+\varepsilon)\)-approximations, for arbitrarily small \(\varepsilon\), cannot be achieved in \(o(n^2)\) time. One of the key tools is a sublinear algorithm for estimating the maximum matching size, itself a well-studied problem in the community.
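For context, here is the classic MST-based 2-approximation the sublinear algorithms compete against: build an MST and shortcut a preorder walk of it, which by the triangle inequality at most doubles the cost. A self-contained sketch (quadratic time, so purely for reference, not sublinear):

```python
import heapq

def tsp_two_approx(dist):
    """Classic 2-approximation for metric TSP: Prim's MST + preorder walk.
    dist: symmetric n x n matrix obeying the triangle inequality."""
    n = len(dist)
    children = {v: [] for v in range(n)}
    visited = {0}
    heap = [(dist[0][v], 0, v) for v in range(1, n)]
    heapq.heapify(heap)
    while len(visited) < n:
        d, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        children[u].append(v)  # tree edge (u, v)
        for w in range(n):
            if w not in visited:
                heapq.heappush(heap, (dist[v][w], v, w))
    # Preorder walk of the MST, shortcutting repeated vertices.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return tour, cost
```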

News for May 2020

Last month saw activity across a diverse collection of topics in sublinear algorithms. In particular, we had the following six papers. (Sorry, I initially missed one.)

One-Sided Error Testing of Monomials and Affine Subspaces by Oded Goldreich and Dana Ron (ECCC). This work focuses on one-sided testing of two kinds of problems (and their variants):
1. Testing Monomials: Suppose you are given a function \(f \colon \{0,1\}^n \to \{0,1\}\). Is \(f = \wedge_{i \in I} x_i\) for some \(I \subseteq [n]\) (that is, is \(f\) a monotone monomial)?
2. Testing Affine Subspaces: Consider the task of testing whether a function \(f \colon \mathcal{F}^n \to \{0,1\}\) is the indicator of an \((n-k)\)-dimensional affine subspace for some \(k\) (where \(\mathcal{F}\) is a finite field).

The paper shows that the general problem, in which the arity of the monomial (resp. the co-dimension of the subspace) is not specified, has one-sided query complexity \(\widetilde{O}(1/\varepsilon)\). The same holds for testing whether the arity of the monomial is at most \(k\) (resp. the co-dimension of the subspace is at most \(k\)). Finally, the exact problem, which asks whether the arity of the monomial is exactly \(k\) (resp. the co-dimension of the space is exactly \(k\)), has one-sided query complexity \(\Omega(\log n)\). For this exact version, however, size-oblivious two-sided testers are known; thus, as the authors remark, two-sided error is inherent in the exact version of the problem.

Sampling Arbitrary Subgraphs Exactly Uniformly in Sublinear Time by Hendrik Fichtenberger, Mingze Gao, Pan Peng (arXiv). Readers of PT Review are no strangers to the problem of counting cliques in sublinear time (with a certain query model). Building on tools from [1], Eden-Ron-Seshadhri [2] gave the first algorithms for counting the number of copies of \(K_r\) in a graph \(G\) to within a \((1 \pm \varepsilon)\) multiplicative factor. En route to this result, they also gave a procedure to sample cliques incident to some special set \(S \subseteq V(G)\). The query model in [2] allows the following queries: a uniformly random vertex query, degree queries, \(i^{th}\)-neighbor queries, and pair queries answering whether a pair \((u,v)\) forms an edge. The work under consideration shows a result which I personally find remarkable: given the additional ability to obtain a uniformly random edge, for any graph \(H\), one can sample a subgraph of \(G\) isomorphic to \(H\) exactly uniformly at random. Let that sink in: exactly uniformly, not just approximately.

Finding Planted Cliques in Sublinear Time by Jay Mardia, Hilal Asi, Kabir Aladin Chandrasekher (arXiv). Planted Clique is a time-honored problem in average-case complexity. This classic problem asks the following: you are given a graph \(G \sim \mathcal{G}(n, 1/2)\) in which a subset of \(k\) vertices has been selected and the subgraph it induces replaced by a clique. In principle, it is possible to recover the planted clique as soon as \(k > (2 + \varepsilon) \log n\), but polynomial time algorithms are only known when \(k \geq \Omega(\sqrt n)\), even after you throw SDPs at the problem. Moreover, the algorithms recovering the planted \(k\)-clique were so far known to take \(\widetilde{O}(n^2)\) time. This work shows that you can actually get algorithms running in time \(\widetilde{O}(n^{3/2})\) whenever \(k \geq \Omega(\sqrt{n \log n})\). The key idea is to first obtain a “core” part of the clique, of size \(O(\log n)\), in time \(\widetilde{O}(n^2/k)\); this is followed by a clique-completion routine which marks all vertices connected to the entire core as potentially in the clique. The paper also shows a conditional lower bound: given query access to the adjacency matrix of the graph, a natural family of non-adaptive algorithms cannot recover a planted \(k\)-clique in time \(o\big((n/k)^3\big)\) (for \(k \geq \widetilde{\Omega}(\sqrt n)\)).
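Schematically, the completion phase is straightforward (a toy sketch of the second phase only; the core-finding phase, and the paper's actual thresholds, are the technical heart):

```python
def complete_clique(adj_query, n, core):
    """Clique completion: mark every vertex adjacent to the entire core as
    a clique candidate. adj_query(u, v) is an adjacency-matrix query;
    'core' is the O(log n)-size set found in the first phase. A toy
    sketch, not the paper's exact procedure."""
    candidates = {v for v in range(n)
                  if v not in core and all(adj_query(v, u) for u in core)}
    return set(core) | candidates
```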

A robust multi-dimensional sparse Fourier transform in the continuous setting by Yaonan Jin, Daogao Liu and Zhao Song (arXiv). Suppose you are given an unknown signal whose Fourier spectrum is \(k\)-sparse (that is, there are at most \(k\) dominant Fourier coefficients, and all the others are zero or close to zero). Significant research effort has been devoted to learning such signals, leading to works which study this problem in the multi-dimensional discrete setting and in the one-dimensional continuous setting. The \(d\)-dimensional continuous case \((d = \Theta(1))\) was largely unexplored. This work makes progress on this frontier under some natural assumptions on the unknown signal. In particular, the paper assumes that the frequencies (vectors \(f_i \in \mathbb{R}^d\)) are well separated, satisfying \(\|f_i - f_j\|_2 \geq \eta\) for \(i \neq j\), and that all of them sit inside a bounded box, \(\{f_i\}_{i \in [k]} \subseteq [-F, F]^d\).
The authors assume sample access to the signal, in the sense that the algorithm can sample the signal’s value at any desired timestep \(\tau\). With this setup, the authors show that all the dominant frequencies can be recovered with \(O_d(k \cdot \log(F/\eta))\) samples, taken over a relatively small time horizon.

Extrapolating the profile of a finite population by Soham Jana, Yury Polyanskiy, Yihong Wu (arXiv). Consider the following setup. You are given a universe of \(k\) balls, which come in up to \(k\) different colors: say there are \(\theta_j\) balls of color \(j\) for each \(j \in [k]\). One of the fundamental problems in statistics is to take a sample of \(m\) balls from the universe and estimate the “population profile” (that is, the number of balls of each color). Historically, it is known that unless an overwhelming majority of the universe has been seen, one cannot estimate the empirical distribution of colors. This paper shows that in the sublinear regime \(m = \omega(k/\log k)\), it is possible to consistently estimate the population profile in total variation. And once you have a handle on the empirical distribution of the population, you can go ahead and learn lots of interesting label-invariant properties of your universe (things like entropy, number of distinct elements, etc.).

(Edit added later)

Testing Positive Semi-Definiteness via Random Submatrices by Ainesh Bakshi, Nadiia Chepurko, Rajesh Jayaram (arXiv). Suppose I give you a symmetric matrix \(A \in \mathbb{R}^{n \times n}\). If \(A\) is PSD, then all of its principal submatrices are PSD as well. But what if \(A\) is \(\varepsilon\)-far from the PSD cone (in a sense defined presently)? What can you say about the eigenvalues of the principal submatrices of \(A\) then? In this paper, the authors tackle precisely this question. The paper defines a matrix \(A\) to be \(\varepsilon\)-far in \(\ell_2^2\) distance from the PSD cone if \(\min_{B \succeq 0,\ B \in \mathbb{R}^{n \times n}}\|A - B\|_F^2 \geq \varepsilon n^2\). You are allowed to randomly sample a bunch of principal submatrices (of order roughly \(O(1/\varepsilon) \times O(1/\varepsilon)\)) and check whether they are PSD. Armed with this setup, the paper gives a non-adaptive one-sided tester for this problem making \(\widetilde{O}(1/\varepsilon^4)\) queries, and supplements this result with a lower bound of \(\widetilde{\Omega}(1/\varepsilon^2)\) queries.
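The tester is easy to picture in code; a minimal NumPy sketch (the submatrix size, repetition count, and numerical tolerance below are illustrative placeholders, not the paper's parameters):

```python
import numpy as np

def test_psd_via_submatrices(A, eps, reps=30):
    """One-sided tester sketch: sample random principal submatrices of
    order ~O(1/eps) and reject iff one fails to be PSD. A PSD input is
    never rejected; the paper's analysis shows far-from-PSD inputs are
    caught whp. Parameter choices here are placeholders."""
    n = A.shape[0]
    k = min(n, max(2, int(np.ceil(1.0 / eps))))
    rng = np.random.default_rng()
    for _ in range(reps):
        idx = rng.choice(n, size=k, replace=False)
        sub = A[np.ix_(idx, idx)]
        if np.linalg.eigvalsh(sub).min() < -1e-9:  # not PSD: a witness
            return False
    return True
```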

If I missed something, please let me know. This is my first post on PT Review and I might have botched up a few things.

References

[1] Talya Eden, Amit Levi, Dana Ron, and C. Seshadhri. Approximately Counting Triangles in Sublinear Time. Proceedings of the 56th Annual Symposium on Foundations of Computer Science (FOCS), 2015.

[2] Talya Eden, Dana Ron, and C. Seshadhri. On Approximating the Number of k-Cliques in Sublinear Time. Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC), 2018.

News for April 2020

April is now behind us, and we hope you and your families are all staying safe and healthy. We saw seven property testing papers appear online last month, so at least there is some reading ahead of us! A mixture of privacy, quantum, high-dimensional distributions, and juntas (juntæ?). A lot of distribution testing, overall.

Connecting Robust Shuffle Privacy and Pan-Privacy, by Victor Balcer, Albert Cheu, Matthew Joseph, and Jieming Mao (arXiv). This paper considers a recent notion of differential privacy called shuffle privacy, where users have sensitive data, a central untrusted server wants to do something with that data (for instance, say… testing its distribution), and a trusted middle-man/entity shuffles the users’ messages u.a.r. to bring in a bit more anonymity. As it turns out, testing uniformity (or identity) of distributions in the shuffle privacy model is (i) much harder than without privacy constraints; (ii) much harder than with the ‘usual’ (weaker) notion of differential privacy; (iii) much easier than with local privacy; and (iv) related to the sample complexity under another privacy notion, pan-privacy. It’s a brand new exciting world out there!

(Note: for the reader interested in keeping track of identity/uniformity testing of probability distributions under various privacy models, I wrote a very short summary of the current results here.)

Entanglement is Necessary for Optimal Quantum Property Testing, by Sebastien Bubeck, Sitan Chen, and Jerry Li (arXiv). The analogue of uniformity testing, in the quantum world, is testing whether a quantum state is equal (or far from) the maximally mixed state. It’s known that this task has “quantum sample complexity” (number of measurements) \(\Theta(d/\varepsilon^2)\) (i.e., square-root dependence on the dimension of the state, \(d^2\)). But this requires entangled measurements, which may be tricky to get (or, in my case, understand): what happens if the measurements can be adaptive, but not entangled? In this work, the authors show that, under this weaker access model, \(\Omega(d^{4/3}/\varepsilon^2)\) measurements are necessary: adaptivity alone won’t cut it. It may still help, though: with neither entanglement nor adaptivity, the authors also show a stronger \(\Omega(d^{3/2}/\varepsilon^2)\) lower bound on the number of measurements.

Testing Data Binnings, by Clément Canonne and Karl Wimmer (ECCC). More identity testing! Not private and not quantum for this one, but… not quite identity testing either. To paraphrase the abstract: this paper introduces (and gives nearly matching bounds for) the related question of identity up to binning, where the reference distribution \(q\) is over \(k \ll n\) elements: the question is then whether there exists a suitable binning of the domain \([n]\) into \(k\) intervals such that, once binned, \(p\) is equal to \(q\).
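To illustrate the definition only (this helper is ours; the paper's work is in testing whether a suitable binning exists):

```python
def bin_distribution(p, boundaries):
    """Collapse a distribution p over [n] into interval masses: with cut
    points 0 = b_0 < b_1 < ... < b_k = n, bin j receives the total mass
    of the interval [b_j, b_{j+1})."""
    return [sum(p[boundaries[j]:boundaries[j + 1]])
            for j in range(len(boundaries) - 1)]

# p equals q "up to binning" if some choice of cut points satisfies
# bin_distribution(p, boundaries) == q.
```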

Hardness of Identity Testing for Restricted Boltzmann Machines and Potts models, by Antonio Blanca, Zongchen Chen, Daniel Štefankovič, and Eric Vigoda (arXiv). Back to identity testing of distributions, but this time for high-dimensional, structured ones. Specifically, this paper focuses on the undirected graphical models known as restricted Boltzmann machines, and provides efficient algorithms for identity testing along with conditional hardness lower bounds, depending on the type of correlations allowed in the graphical models.

Robust testing of low-dimensional functions, by Anindya De, Elchanan Mossel, and Joe Neeman (arXiv). Junta testing is a classical, central problem in property testing, with motivations and applications in machine learning and complexity. The related (and equally well-motivated) question of junta testing of functions on \(\mathbb{R}^d\) (instead of the Boolean hypercube) was recently studied by the same authors; and the related (and, again, equally well-motivated) question of tolerant junta testing on the Boolean hypercube was also recently studied (among other works) by the same authors. Well, this paper does it all, and tackles the challenging (and, for a change, equally well-motivated!) question of tolerant testing of juntas on \(\mathbb{R}^d\).

Differentially Private Assouad, Fano, and Le Cam, by Jayadev Acharya, Ziteng Sun, and Huanyu Zhang (arXiv). Back to probability distributions and privacy. This paper provides differentially private analogues of the classical eponymous statistical inference results (Assouad’s lemma, Fano’s inequality, and Le Cam’s method). In particular, it gives ready-to-use, blackbox tools to prove testing and learning lower bounds for distributions in the differentially private setting, and shows how to use them to easily derive, and rederive, several lower bounds.

Edit: We missed one!

Learning and Testing Junta Distributions with Subcube Conditioning, by Xi Chen, Rajesh Jayaram, Amit Levi, Erik Waingarten (arXiv). This paper focuses on the subcube conditioning model of (high-dimensional) distribution testing, where the algorithm can fix some variables to values of its choosing and get samples conditioned on those variables. Extending and refining techniques from a previous work by a (sub+super)set of the authors, the paper shows how to optimally learn and test junta distributions in this framework—with exponential savings with respect to the usual i.i.d. sampling model.

News for March 2020

I hope all of you are keeping safe and healthy in these difficult times. Thank heavens we have our research and our math to keep our sanity…

This month has seen two papers, one on testing variable partitions and one on distributed isomorphism testing.

Learning and Testing Variable Partitions by Andrej Bogdanov and Baoxiang Wang (arXiv). Consider a function \(f:\Sigma^n \to G\), where \(G\) is an Abelian group. Let \(V\) denote the set of variables. The function \(f\) is \(k\)-separable if there is a partition \(V_1, V_2, \ldots, V_k\) of \(V\) such that \(f(V)\) can be expressed as the sum \(f_1(V_1) + f_2(V_2) + \ldots + f_k(V_k)\). This is an obviously natural property to study, though the specific application mentioned in the paper is high-dimensional reinforcement learning control. There are a number of learning results, but we’ll focus on the main testing result: the property of \(k\)-separability can be tested with \(O(kn^3/\varepsilon)\) queries for \(\Sigma = \mathbb{Z}_q\) (where the distance between functions is the usual Hamming distance), and there is an analogous result (with a different query complexity) for \(\Sigma = \mathbb{R}\). It is also shown that testing 2-separability requires \(\Omega(n)\) queries, even with two-sided error. The paper, to its credit, also has empirical studies of the learning algorithm, with applications to reinforcement learning.
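For instance, here is a toy 2-separable function over \(\Sigma^4\) (our own example, purely to illustrate the definition):

```python
# f splits into a sum of two functions on disjoint variable blocks,
# with G = Z_q: f(x) = f1(x0, x1) + f2(x2, x3) (mod q).
def f(x, q=5):
    f1 = (x[0] * x[1]) % q       # depends only on the block {x0, x1}
    f2 = (x[2] + 2 * x[3]) % q   # depends only on the block {x2, x3}
    return (f1 + f2) % q
```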

Distributed Testing of Graph Isomorphism in the CONGEST model by Reut Levi and Moti Medina (arXiv). This result follows a recent line of research on distributed property testing algorithms, where the aim is to minimize the number of rounds of (synchronous) communication needed for a property testing problem. Let \(G_U\) denote the graph representing the distributed network; the aim is to test whether an input graph \(G_K\) is isomorphic to \(G_U\). The main property testing results are as follows. For the dense graph case, isomorphism can be property tested (with two-sided error) in \(O(D + (\varepsilon^{-1}\log n)^2)\) rounds, where \(D\) is the diameter of the graph and \(n\) is the number of nodes. (And, as a reader of this blog, you probably know what \(\varepsilon\) is already…) There is a standard \(\Omega(D)\) lower bound for distributed testing problems. For various classes of sparse graphs (like bounded-degree minor-free classes), constant-query standard (centralized) property testers for isomorphism are known, and this paper provides a simulation argument showing that standard \(q\)-query property testers can be implemented in the distributed model in \(O(Dq)\) rounds (this holds for any property, not just isomorphism). These simulations thus imply \(O(D)\)-round property testers for isomorphism for bounded-degree minor-free classes.

News for February 2020

Despite a wealth of February conference deadlines, papers were fairly sparse. We found two (EDIT: three; EDIT EDIT: four) papers for the month of February; please share if we missed any relevant papers.

Monotone probability distributions over the Boolean cube can be learned with sublinear samples, by Ronitt Rubinfeld and Arsen Vasilyan (arXiv). By now, it is well known that assuming an (unknown) distribution enjoys some sort of structure can lead to more efficient algorithms for learning and testing. Often one proves that the structure permits a convenient representation, and exploits this representation to solve the problem at hand. This paper studies the learning of monotone distributions over the Boolean hypercube. The authors exploit and extend a structural statement about monotone Boolean functions by Blais, Håstad, Servedio, and Tan, using it to provide sublinear algorithms for estimating the support size, distance to uniformity, and the distribution itself.

Locally Private Hypothesis Selection, by Sivakanth Gopi, Gautam Kamath, Janardhan Kulkarni, Aleksandar Nikolov, Zhiwei Steven Wu, and Huanyu Zhang (arXiv). Given a collection of \(k\) distributions and a set of samples from one of them, can we identify which distribution it is? This paper studies this problem (and an agnostic generalization of it) under the constraint of local differential privacy. The authors show that this problem requires \(\Omega(k)\) samples, in contrast to the \(O(\log k)\) complexity in the non-private model. Furthermore, they give \(\tilde O(k)\)-sample upper bounds in various interactivity models.

Efficient Distance Approximation for Structured High-Dimensional Distributions via Learning, by Arnab Bhattacharyya, Sutanu Gayen, Kuldeep S. Meel, and N. V. Vinodchandran (arXiv). Given samples from two distributions, can you estimate the total variation distance between them? This paper gives a framework for solving this problem for structured distribution classes, including Ising models, Bayesian networks, Gaussians, and causal models. The approach can be decomposed into properly learning the distributions, followed by estimating the distance between the two hypotheses. Challenges arise when densities are hard to compute exactly.

Profile Entropy: A Fundamental Measure for the Learnability and Compressibility of Discrete Distributions, by Yi Hao and Alon Orlitsky (arXiv). The histogram of a dataset is the collection of frequency counts of domain elements. The profile of a dataset can be succinctly described as the histogram of the histogram. Recent works have shown that, in some sense, discarding information about your dataset by looking solely at the profile can be beneficial for certain problems in which it is “universal”. This work explores two new quantities, the entropy and dimension of the profile, which turn out to play a key role in quantifying the performance of estimators based on the profile.
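Concretely, the profile is just two nested frequency counts; a quick illustration:

```python
from collections import Counter

def profile(samples):
    """Histogram: element -> multiplicity. Profile ('histogram of the
    histogram'): multiplicity -> number of elements with that
    multiplicity, discarding all labels."""
    histogram = Counter(samples)
    return Counter(histogram.values())

# Example: profile("abracadabra") == {1: 2, 2: 2, 5: 1}
# (c and d appear once; b and r twice; a five times)
```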

News for January 2020

The first month of 2020 is behind us, and it’s already looking good! Four papers last month, spanning quantum, graphs, and probability distributions.

On Efficient Distance Approximation for Graph Properties, by Nimrod Fiat and Dana Ron (arXiv). In the dense graph (i.e., adjacency-matrix) model, one is given a distance parameter \(\varepsilon\) and query access to the adjacency matrix of a graph \(G\), and seeks to determine something about the distance of \(G\) to a prespecified property \(\mathcal{P}\) of interest (i.e., the fraction of the matrix that needs to be changed for \(G\) to satisfy the property). Testing requires distinguishing whether that distance is zero or at least \(\varepsilon\); many results over the past years have shown that a host of properties can be tested with a number of queries depending only on \(\varepsilon\) (and not on the number of vertices \(n\)). This work focuses on the harder problem of distance estimation or, equivalently, tolerant testing: estimating this distance up to \(\pm\varepsilon\). The authors introduce a general framework to get distance approximation algorithms for a “complicated” property by decomposing it into simpler properties and using algorithms for those. Applying this framework to a few flagship properties, they then show that \(P_3\)-freeness, induced \(P_4\)-freeness, and cordiality admit efficient distance estimation algorithms (independent of \(n\), and polynomial in \(1/\varepsilon\)).

Minimax Optimal Conditional Independence Testing, by Matey Neykov, Sivaraman Balakrishnan, and Larry Wasserman (arXiv). Given i.i.d. draws from a triple \((X,Y,Z)\), how hard is it to check whether \(X \perp Y \mid Z\), that is, whether \(X\) and \(Y\) are independent conditioned on \(Z\) (or far from it)? This is the problem of conditional independence testing, which we covered back in the day for the case where \(X,Y,Z\) are discrete. Well, this new work takes the fight out of the discrete world: extending the results and techniques from the discrete case, it provides optimal bounds for the continuous case, first where \(Z\) takes values in \([0,1]\), and then where all three of \(X,Y,Z\) are continuous.

How symmetric is too symmetric for large quantum speedups?, by Shalev Ben-David and Supartha Podder (arXiv); and Can graph properties have exponential quantum speedup?, by Andrew M. Childs and Daochen Wang (arXiv). Both these works independently study the relation between the (bounded-error) randomized and quantum query complexities of graph properties \(f\) in the dense graph (adjacency-matrix) model. In particular, how much of an advantage do quantum algorithms provide for these?
As it turns out, not so much: for those functions, both papers show that the two quantities are always polynomially related, \(Q(f) \leq R(f) \leq O(Q(f)^6)\), in the dense-graph model. As a corollary, testing any such property won’t benefit much from quantum algorithms (that is, at most polynomially)… at least in this model. In the adjacency-list model (also known as the bounded-degree graph model), the first paper conjectures that exponential query complexity improvements are possible, and the second paper provides an example establishing it. Altogether, this fully settles an open problem of Ambainis, Childs, and Liu, and of Montanaro and de Wolf.