News for August 2020

Last month saw action in property testing across the board: graphs, distributions, and functions were all objects of study in the papers that came out. Also included is a separation result between quantum and classical query complexities resolving a conjecture of Aaronson and Ambainis. Here is a brief report.

Testing asymmetry in bounded degree graphs, by Oded Goldreich (ECCC). This paper studies a natural graph property hitherto not considered in the property testing literature: testing whether a graph is asymmetric or far from being asymmetric. A graph is said to be asymmetric if its automorphism group is trivial (that is, it contains only the identity permutation). A side result of the paper is that this problem is easy in the dense graph model, because every dense graph is \(O(\log n/n)\)-close to being asymmetric. To see this, the paper describes a simple randomized process which takes \(G\) as input and returns an asymmetric graph after changing very few edges: take a set \(S \subseteq V\) with \(|S| = O(\log n)\) nodes, replace the subgraph it induces with a random graph, and randomize all the edges between \(S\) and \(V \setminus S\). One can show that in this modified graph, any automorphism (whp) must map \(S\) to itself, and that each vertex in \(V \setminus S\) behaves (whp) in a manner peculiar to it alone. In particular, any automorphism must fix every vertex of \(V \setminus S\), which finishes the argument. The main result concerns the bounded degree model. By a simple argument, one can show that testing asymmetry is easy if all the connected components have size \(s(n) = o\left(\frac{\log n}{\log \log n}\right)\). Thus, the challenging case is when some connected components are larger. In this case, Goldreich shows the following: if all the components have size at most \(s(n)\), where \(s(n) = \Omega\left(\frac{\log n}{\log \log n}\right)\), then you can test asymmetry (with one-sided error) using \(O(n^{1/2} \cdot s(n)/\epsilon)\) queries. Moreover, the paper shows a two-sided-error lower bound of \(\Omega\left((n/s(n))^{1/2}\right)\) queries, which holds as long as \(\epsilon = O(1/s(n))\). As the paper points out, this leaves open the question of determining the query complexity of testing asymmetry in the bounded degree model in general.
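
Here is a minimal Python sketch of that asymmetrization process, purely to illustrate the structural claim (it is not an efficient implementation, and the constant c in \(|S| = c \log n\) is a placeholder of mine, not a value from the paper). Note that only pairs with an endpoint in \(S\) are touched — \(O(n \log n)\) pairs out of \(\binom{n}{2}\), which is where the \(O(\log n / n)\) dense-model distance comes from.

```python
import math
import random

def asymmetrize(adj, c=3):
    # adj: dict mapping each vertex to a set of its neighbors.
    # Pick S of size ~ c * log2(n); c is a placeholder constant.
    n = len(adj)
    V = list(adj)
    s = min(n, c * max(1, math.ceil(math.log2(n))))
    S = set(random.sample(V, s))
    # Re-randomize exactly the pairs with at least one endpoint in S:
    # the subgraph induced by S, plus all edges between S and V minus S.
    for i, u in enumerate(V):
        for v in V[i + 1:]:
            if u in S or v in S:
                if random.random() < 0.5:
                    adj[u].add(v); adj[v].add(u)
                else:
                    adj[u].discard(v); adj[v].discard(u)
    return adj
```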

On Testability of First Order Properties in Bounded Degree Graphs, by Isolde Adler, Noleen Köhler and Pan Peng (arXiv). One of the well understood motivations for property testing begins in the following manner. Decision problems are typically hard to solve if they involve a quantifier — either \(\exists\) or \(\forall\). One way around this hardness is to perform the following ritual: relax the problem by dropping the quantifier, define a notion of distance between objects in your universe, and ask a promise problem instead. Indeed, if you take your favorite property testing problem, you will note that it fits precisely in this template. How about making this business more rigorous in the bounded degree graph model? This is precisely the content of this work, which considers the face-off between first order (FO) logic and property testing in the bounded degree model. The authors show some interesting results. They begin by showing that in the bounded degree model, all first order graph properties which involve a single quantifier \(Q \in \{\forall, \exists\}\) are testable with constant query complexity.
If you find this baffling, remind yourself that not all graph properties can be expressed in the language of first order logic with a single quantifier! So you can rest easy: the graph properties you know are not constant-time testable are most assuredly not expressible with a single quantifier in first order logic. However, this work shows more. It turns out that “any FO property that is defined by a formula with quantifier prefix \(\exists^* \forall^*\) is testable”. Moreover, there exist FO graph properties defined by the quantifier prefix \(\forall^* \exists^*\) which are not testable. Thus, this work achieves results in the bounded degree model analogous to the results in the dense graph model by Alon et al. [1]. On a final note, I find the following one-sentence summary of the technique used to prove the lower bound rather intriguing: “[the paper] obtains the lower bound by a first-order formula that defines a class of bounded-degree expanders, based on zig-zag products of graphs.”

On graphs of bounded degree that are far from being Hamiltonian, by Isolde Adler and Noleen Köhler (arXiv). This paper explores the question of testing Hamiltonicity in the bounded degree model. The main result is that Hamiltonicity is not testable with one-sided error with \(o(n)\) queries. PTReview readers might recall from our July post a concurrent paper by Goldreich [2] which achieves the same lower bound on query complexity with two-sided error (the authors call attention to [2] as well). One interesting feature of this result is that the lower bound is obtained by an explicit deterministic reduction, as opposed to the usual randomized reduction. As the authors point out, this offers more insight into the structural complexity of instances that are far from being Hamiltonian. We note that this also differs from how the lower bound is derived in [2] — which is via a local reduction to a promise problem of 3CNF satisfiability.

An optimal tester for \(k\)-linear, by Nader Bshouty (ECCC). This paper explores two related questions. We call a function \(f \colon \{0,1\}^n \to \{0,1\}\) \(k\)-linear if it equals \(\sum_{i \in S} x_i \pmod 2\) for some \(S \subseteq [n]\) of size exactly \(k\). A Boolean function is said to be \(k\)-linear* if it is \(j\)-linear for some \(j \in \{0,1,2, \ldots, k\}\). The paper proves the following theorems (see the code sketch after the list for these definitions made concrete).

  1. There exists a non-adaptive one-sided distribution-free tester for \(k\)-linear* with query complexity \(O\left(k \log k + \frac{1}{\varepsilon}\right)\). This matches the two-sided lower bound (where the underlying distribution is uniform) by Blais et al. [3].
  2. Using a reduction from \(k\)-linear* to \(k\)-linear, the paper shows one can obtain a non-adaptive two-sided distribution-free tester for \(k\)-linear with the same query complexity as the above result. The lower bound from Blais et al. applies here also (in fact, they prove a lower bound on \(k\)-linearity).
  3. Next up, the paper has a couple of lower bound results to accompany this. One of these reveals the price you pay for being one-sided and exact (that is, insisting that the function is exactly \(k\)-linear): you now have a non-adaptive one-sided lower bound (under the uniform distribution) of \(\widetilde{\Omega}(k) \log n + \Omega(1/\varepsilon)\). If you allow adaptivity instead, the paper shows a lower bound of \(\widetilde{\Omega}(\sqrt k)\log n + \Omega(1/\varepsilon)\).
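
To make the definitions above concrete, here is a small brute-force sketch in Python. The names and the (exponential-time) checker are mine, purely for illustration — this is emphatically not the tester from the paper.

```python
from itertools import combinations, product

def k_linear(S):
    # The parity of the coordinates in S: a |S|-linear function.
    return lambda x: sum(x[i] for i in S) % 2

def is_k_linear_star(f, n, k):
    # Sanity check, exponential in n: is f the parity of some S with |S| <= k?
    points = list(product([0, 1], repeat=n))
    for j in range(k + 1):
        for S in combinations(range(n), j):
            g = k_linear(S)
            if all(f(x) == g(x) for x in points):
                return True
    return False

# Example: a 2-linear function is 3-linear* but not 1-linear*.
f = k_linear((0, 2))
assert is_k_linear_star(f, n=4, k=3) and not is_k_linear_star(f, n=4, k=1)
```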

Amortized Edge Sampling, by Talya Eden, Saleet Mossel and Ronitt Rubinfeld (arXiv). Consider the following setup. You are given query access to the adjacency list of a graph \(G\) with \(n\) vertices and \(m\) edges: you can make degree queries and neighbor queries. Suppose I ask you to sample a single edge of this graph from a distribution that is pointwise \(\varepsilon\)-close to uniform. Eden and Rosenbaum already showed how to achieve this with a budget of \(\widetilde{\Theta}(n/\sqrt m)\) queries. Starting from this jumping-off point, the authors ask whether you can circumvent this lower bound if you want to return multiple samples, each drawn from a distribution pointwise close to uniform. The paper answers this question in the affirmative, showing that if you know the number of samples \(q\) in advance, you can get away with a total of \(O^*(\sqrt{q} \cdot n/\sqrt m)\) queries — an amortized bound of \(O^*(n/\sqrt{qm})\) queries per sample.
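
As a point of comparison, here is the basic rejection-sampling baseline for uniform edge sampling — not the paper's amortized algorithm, nor the Eden–Rosenbaum algorithm it improves upon, and it additionally assumes a known degree bound d_max. It illustrates the query model: uniform vertex, degree, and neighbor queries.

```python
import random

def sample_edge_uniform(vertices, degree, neighbor, d_max):
    # degree(v) and neighbor(v, i) are the query-access oracles.
    # Each attempt accepts v with probability d(v)/d_max, so every directed
    # edge is output with probability 1/(n * d_max): exactly uniform edges,
    # at an expected query cost of O(n * d_max / m).
    while True:
        v = random.choice(vertices)          # uniform vertex query
        d = degree(v)                        # degree query
        if d and random.random() < d / d_max:
            i = random.randrange(d)
            return (v, neighbor(v, i))       # i-th neighbor query
```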

On the High Accuracy Limitation of Adaptive Property Estimation, by Yanjun Han (arXiv). Take a discrete distribution \(\mathcal{P}\) with support size \(k\) and consider the task of estimating some symmetric property of \(\mathcal{P}\) to within a small \(\pm \varepsilon\) additive error. Here, a symmetric property refers to a “nice” functional defined over the probability simplex: a function \(F \colon \Delta_k \to \mathbb{R}\) of the form \(F(p) = \sum_{i=1}^{k} f(p_i)\), where \(f \colon (0,1) \to \mathbb{R}\). A naive attack on these estimation tasks goes through the following ritual: you get your hands on the empirical distribution, plug it into \(F\), and hope for the best. Turns out you are out of luck if the function \(f\) is non-smooth: in these cases you end up with a suboptimal estimator. Previous works have also looked at more sophisticated estimators, such as local moment matching (LMM) and the profile maximum likelihood (PML) estimator. Turns out that using the LMM or PML estimator leads to optimal sample complexity for a handful of symmetric properties, as long as \(\varepsilon \geq n^{-1/3}\). This paper considers the question of what you can say for very small values of \(\varepsilon\), where \(n^{-1/2} \leq \varepsilon \leq n^{-1/3}\). (The \(n^{-1/2}\) appears because, for estimators that exploit knowledge of \(f\), the error \(\varepsilon\) can be driven all the way down to \(n^{-1/2}\).) The paper focuses on estimators which do not exploit any structure in \(f\). In particular, it specializes this question to PML and shows a fundamental limitation: the PML approach fails to be sample optimal over the entire range of \(\varepsilon\), and is sample optimal only for \(\varepsilon \gg n^{-1/3}\). This confirms a conjecture of Han and Shiragur (and refutes a conjecture of Acharya et al., who postulated that PML is sample optimal over the entire range of \(\varepsilon\)).
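
To spell out the naive ritual, here is the plug-in estimator specialized to Shannon entropy, i.e. \(f(p) = -p \log p\) — a non-smooth \(f\) of exactly the kind for which plugging in the empirical distribution is suboptimal. A minimal Python sketch:

```python
import math
from collections import Counter

def plug_in_entropy(samples):
    # Form the empirical distribution and plug it straight into
    # F(p) = sum_i -p_i log p_i. Simple, but suboptimal for non-smooth f.
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in Counter(samples).values())
```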

\(k\)-Forrelation Optimally Separates Quantum and Classical Query Complexity, by Nikhil Bansal and Makrand Sinha (arXiv). Understanding the power of quantum queries over classical queries is a well motivated problem with a rich history. One of the biggest questions in this area asks for the largest separation between classical and quantum query complexities. In a breakthrough, Aaronson and Ambainis [4] showed a fundamental simulation result which confirmed that you can simulate \(q\) quantum queries with \(O(n^{1-1/2q})\) classical queries in the randomized decision tree model of computation, as long as \(q = O(1)\). In the same paper, the authors also showed that the standard forrelation* problem exhibits a \(1\) versus \(\widetilde{\Omega}(\sqrt n)\) separation. This means that for \(q = 1\), you essentially have an optimal separation. But what about \(q > 1\)? To this end, Aaronson and Ambainis conjectured that a certain problem which they called \(k\)-forrelation — which can be computed with \(q = k/2\) quantum queries — requires \(\widetilde{\Omega}(n^{1-1/k})\) classical queries. The current work confirms precisely this conjecture.

(*) The forrelation problem asks you to decide whether one Boolean function is highly correlated with the Fourier transform of a second function.
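
To make the footnote concrete, here is a brute-force evaluation of the forrelation quantity \(\Phi_{f,g}\) of Aaronson and Ambainis — exponential time, for illustration only.

```python
from itertools import product

def forrelation(f, g, n):
    # Phi = 2^{-3n/2} * sum_{x,y} f(x) * (-1)^{x.y} * g(y), for
    # f, g: {0,1}^n -> {+1,-1}. Phi close to 1 means g is (nearly)
    # the Boolean Fourier transform of f.
    total = 0
    for x in product([0, 1], repeat=n):
        for y in product([0, 1], repeat=n):
            dot = sum(a * b for a, b in zip(x, y))
            total += f(x) * (-1) ** dot * g(y)
    return total / 2 ** (3 * n / 2)

# Worked example: with f = g = constant 1, the inner sum over y vanishes
# unless x = 0, so Phi = 2^n / 2^{3n/2} = 2^{-n/2}.
```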

(Edit: Added later) Simon Apers points out a paper by Sherstov, Storozhenko and Wu that we missed. (Thanks, Simon!) Here is a brief report on that paper.

An optimal separation of randomized and quantum query complexity, by Alexander Sherstov, Andrey Storozhenko and Pei Wu (arXiv). Similar to the paper by Bansal and Sinha [BS20] mentioned above, this paper also resolves the conjecture of Aaronson and Ambainis, proving the same result. As the paper notes, the techniques in the two works are completely different and incomparable. On the one hand, [BS20] proves the separation for an explicit function, as opposed to a function chosen uniformly at random from a certain set, as considered in this work. On the other hand, the separation in [BS20] only applies when the query algorithm returns the correct answer with probability at least \(1/2 + 1/\mathrm{poly}(\log n)\) — in contrast, the results in this paper apply even when the query algorithm is only required to be correct with probability a constant bounded away from \(1/2\). In addition, this work also proves the \(\ell\)-Fourier weight conjecture of Tal, which is of independent interest beyond quantum computing.

So, all in all, it looks like we had a great month, with two concurrent papers resolving the Aaronson–Ambainis conjecture (yet again, after two concurrent papers on testing Hamiltonicity)!

References:

[1] Noga Alon, Eldar Fischer, Michael Krivelevich, and Mario Szegedy. Efficient testing of large graphs. Combinatorica, 20(4):451–476, 2000.

[2] Oded Goldreich. On testing Hamiltonicity in the bounded degree graph model. Electronic Colloquium on Computational Complexity (ECCC), (18), 2020.

[3] Eric Blais, Joshua Brody, and Kevin Matulef. Property testing lower bounds via communication complexity. CCC, 2011.

[4] Scott Aaronson and Andris Ambainis. Forrelation: A problem that optimally separates quantum from classical computing. SIAM J. Comput., 47(3):982–1038, 2018.

Videos from the WoLA 2020 workshop

The 4th Workshop on Local Algorithms (WoLA 2020) recently concluded: aimed at “fostering dialogue and cross-pollination of ideas between the various communities” related to local algorithms, broadly construed, it featured invited and contributed talks on a variety of topics, many (if not most) very relevant to the sublinear algorithms and property testing community.

Because of the online format of the workshop (imposed by the current circumstances), all talks were recorded and posted online. As such, all videos are available on the workshop’s website and YouTube channel: a good list of resources to peruse!

News for July 2020

We hope you’re all staying safe and healthy! To bring you some news (and distraction?) during this… atypical summer, here are the recent papers on property testing and sublinear algorithms we saw appear this month. Graphs, probability distributions, functions… there is something for everyone.

On Testing Hamiltonicity in the Bounded Degree Graph Model, by Oded Goldreich (ECCC). The title sort of gives it away: this relatively short paper shows that testing whether an unknown bounded-degree graph has a Hamiltonian path (or Hamiltonian cycle) in the bounded-degree model requires a number of queries linear in \(n\), the number of nodes. The results also hold for directed graphs (with respect to directed Hamiltonian path or cycle), and are shown via a local reduction to a promise problem of satisfiability of 3CNF formulae. Also included: a complete proof of the linear lower bound for another problem, Independent Set Size; and an open problem: what is the query complexity of testing graph isomorphism in the bounded-degree model?

Local Access to Sparse Connected Subgraphs Via Edge Sampling, by Rogers Epstein (arXiv). Given access to a connected graph \(G=(V,E)\), can we efficiently provide access to some sparse connected subgraph \(G'=(V,E')\subseteq G\) with \(|E'|\ll |E|\)? This question, well-studied in particular for the case where \(G\) has bounded degree and the goal is to achieve \(|E'|\leq (1+\varepsilon)|V|\), is the focus of this paper, which provides a trade-off between the query complexity of the oracle and \(|E'|\). Specifically, for every parameter \(T\), one can give oracle access to a subgraph \(G'\) with \(|E'|=O(|V|\cdot T)\), with query complexity \(\tilde{O}(|E|/T)\).

Switching gears, we move from graphs to probability distributions:

Tolerant Distribution Testing in the Conditional Sampling Model, by Shyam Narayanan (arXiv). In the conditional sampling model for distribution testing, which we have covered a few times on this blog, the algorithm at each step gets to specify a subset \(S\) of the domain, and observe a sample from the distribution conditioned on \(S\). As it turns out, this can speed things up a lot: as Canonne, Ron, and Servedio (2015) showed, even tolerant uniformity testing, which with i.i.d. samples requires a near-linear (in the domain size \(n\)) number of samples, can be done in a constant number of conditional queries. Well, sort of constant: no dependence on \(n\), but the dependence on the distance parameter \(\varepsilon\) was, in CRS15, quite bad: \(\tilde{O}(1/\varepsilon^{20})\). This work gets rid of this badness, and shows the (nearly) optimal \(\tilde{O}(1/\varepsilon^{2})\) query complexity! Among other results, it also generalizes it to tolerant identity testing (\(\tilde{O}(1/\varepsilon^{4})\)), for which previously no constant-query upper bound was known. Things have become truly sublinear.
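
To make the access model concrete, here is a minimal simulation of a conditional sampling oracle in Python. Here \(p\) is known to the simulator; in the testing model it is, of course, the unknown distribution being queried. (What the oracle does when \(p(S)=0\) varies across papers, hence the hedge in the code.)

```python
import random

def conditional_sample(p, S):
    # p: dict mapping domain elements to probabilities (the simulated
    # distribution); S: the conditioning set chosen by the tester.
    support = [x for x in S if p.get(x, 0) > 0]
    if not support:
        raise ValueError("p(S) = 0 -- the oracle's behavior here varies by paper")
    weights = [p[x] for x in support]
    return random.choices(support, weights=weights, k=1)[0]
```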

Interactive Inference under Information Constraints, by Jayadev Acharya, Clément Canonne, Yuhan Liu, Ziteng Sun, and Himanshu Tyagi (arXiv). Say you want to do uniformity/identity testing (or learn, but let’s focus on testing) on a discrete distribution, but you can’t actually observe the i.i.d. samples: instead, you can only do some sort of limited, “local” measurement on each sample. How hard is the task, compared to what you’d do if you fully had the samples? This setting, which captures things like distributed testing with communication or local privacy constraints, erasure channels, etc., was well-understood from previous recent work in the non-adaptive setting. But what if the “measurements” could be made adaptively? This paper shows general lower bounds for identity testing and learning, as a function of the type of local measurement allowed: as a corollary, this gives tight bounds for communication constraints and local privacy, and shows the first separation between adaptive and non-adaptive uniformity testing, for a type of “leaky” membership query measurement.

Efficient Parameter Estimation of Truncated Boolean Product Distributions, by Dimitris Fotakis, Alkis Kalavasis, and Christos Tzamos (arXiv). Suppose there is a fixed and unknown subset \(S\) of the hypercube, a “truncation” set, which you can only access via membership queries; and you receive i.i.d. samples from an unknown product distribution on the hypercube, truncated on that set \(S\) (for instance, because your polling strategy or experimental measurements have limitations). Can you still learn that distribution efficiently? Can you test it for various properties, as you typically really would like to? (or is it just me?) This paper identifies a natural sufficient condition on \(S\), which the authors call fatness, under which the answer is a resounding yes. Specifically, if \(S\) satisfies this condition, one can actually generate honest-to-goodness i.i.d. samples (non-truncated) from the true distribution, given truncated samples!

Leaving distribution testing, our last paper is on testing functions in the distribution-free model:

Downsampling for Testing and Learning in Product Distributions, by Nathaniel Harms and Yuichi Yoshida (arXiv). Suppose you want to test (or learn) a class of Boolean functions \(\mathcal{C}\) over some domain \(\Omega^n\), with respect to some (unknown) product distribution (i.e., in the distribution-free testing model, or PAC-learning model). This paper develops a general technique, downsampling, which allows one to reduce such distribution-free testing of \(\mathcal{C}\) under a product distribution to testing \(\mathcal{C}\) over \([r]^d\) under the uniform distribution, for a suitable parameter \(r=r(d,\varepsilon,\mathcal{C})\). This allows the authors, among many other testing and learning results, to easily re-establish (and, in the second case, improve upon) recent results on testing of monotonicity over \([n]^d\) (uniform distribution) and over \(\mathbb{R}^d\) (distribution-free).

Policy on reporting papers

While we at PTReview always look through the posted papers, we do not check for correctness. We do make a serious attempt to ensure the papers are reasonable. In a few instances, we have decided not to post a (topically relevant) paper because it looked absolutely wrong. Our position is that the benefit of the doubt goes to the author, and a borderline paper should be posted: we are only curating relevant tech reports, not passing judgment on results.

In some borderline cases, readers familiar with the subject have complained to us that a paper should not be considered a scientific contribution (because of, say, unspecified algorithms, or blatantly incorrect or unverifiable central claims). These are cases where we were also unsure of the paper. We have usually removed/not posted such papers.

If the authors of a paper feel that their paper should nonetheless be posted, they should email us at little.oh.of.n@gmail.com. As long as the paper is not complete nonsense and appears to cite relevant history, we will defer to the authors’ wishes.

News for June 2020

Sublinear algorithms in times of social distancing…always something exciting. This month we have a slew of results on sublinear algorithms for classic graph problems.

(Ed: We have removed a previously posted paper due to correctness concerns raised by our readers. Please look at the post on our paper policy.)

Palette Sparsification Beyond (∆ + 1) Vertex Coloring by Noga Alon and Sepehr Assadi (arXiv). A basic fact from graph theory is that any graph has a \((\Delta+1)\)-coloring, where \(\Delta\) is the maximum degree. Followers of property testing are likely familiar with a fantastic result of Assadi-Chen-Khanna (ACK), which gives a sublinear algorithm for \((\Delta+1)\)-coloring. (The running time is \(\widetilde{O}(n^{3/2})\), where \(n\) is the number of vertices.) The key tool is a palette sparsification theorem: suppose each vertex is given a “palette” of \((\Delta+1)\) colors. Each vertex randomly sparsifies its palette by sampling \(O(\log n)\) colors, and is constrained to only use these colors. Remarkably, whp the graph can still be properly colored. This tool is at the heart of sublinear time/space algorithms for coloring. This paper gives numerous extensions of this theorem, where one can trade off a larger initial palette for a smaller final sample. Another extension is for triangle-free graphs, where the initial palette is of size \(O(\Delta/\ln \Delta)\) and the sample is of size \(O(\Delta^\gamma + \sqrt{\ln n})\) (for any parameter \(\gamma < 1\)). This leads to an \(O(n^{3/2 + \gamma})\) time algorithm for \(O(\Delta/\ln \Delta)\)-coloring of triangle-free graphs.
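
The sampling step of palette sparsification is simple enough to state in a few lines of Python. A hedged sketch: the constant c below is a placeholder of mine, not the constant from the ACK theorem, and actually finding the proper list-coloring whose existence the theorem guarantees is a separate algorithmic matter.

```python
import math
import random

def sparsify_palettes(n, delta, c=5):
    # Every vertex's palette is {0, ..., delta}; each vertex independently
    # keeps a uniformly random subset of size ~ c * log2(n). The theorem
    # says that, whp, a proper coloring using only these lists exists.
    k = min(delta + 1, c * max(1, math.ceil(math.log2(n))))
    return {v: random.sample(range(delta + 1), k) for v in range(n)}
```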

When Algorithms for Maximal Independent Set and Maximal Matching Run in Sublinear-Time by Sepehr Assadi and Shay Solomon (arXiv). Taking off from sublinear coloring algorithms, one can ask if there are sublinear time algorithms for Maximal Independent Set (MIS) and Maximal Matching (MM). Alas, ACK prove that this is impossible. This paper investigates when one can get a sublinear time algorithm for these problems. For a graph \(G\), let \(\beta(G)\) be the “neighborhood independence number”: the size of the largest independent set contained in a vertex neighborhood. This paper shows that both problems can be solved in \(\widetilde{O}(n \cdot \beta(G))\) time. Examples of natural classes of graphs where \(\beta(G)\) is constant: line graphs and unit-disk graphs. An interesting aspect is that the MIS algorithm is actually deterministic! It’s the simple marking algorithm that rules out neighborhoods of chosen vertices; the analysis shows that not much time is wasted in re-marking the same vertex.

Sublinear Algorithms and Lower Bounds for Metric TSP Cost Estimation by Yu Chen, Sampath Kannan, and Sanjeev Khanna (arXiv). This paper studies sublinear algorithms for the metric TSP problem. The input is an \(n \times n\) distance matrix. One can 2-approximate the TSP cost by computing the MST, and a result of Czumaj-Sohler gives a \((1+\varepsilon)\)-approximation algorithm for the latter, running in \(O(n\varepsilon^{-O(1)})\) time. The main question is: can one beat the 2-factor approximation in sublinear time? This paper considers the graphic TSP setting, where the distance matrix corresponds to the shortest path metric of an unweighted graph. One result is a \((2-\varepsilon_0)\)-approximation algorithm (for an explicit constant \(\varepsilon_0\)) that runs in \(\widetilde{O}(n)\) time. For the important \((1,2)\)-TSP setting (all distances are either 1 or 2), the paper gives an \(O(n^{1.5})\) time 1.63-approximation algorithm. Interestingly, there is a lower bound showing that \((1+\varepsilon)\)-approximations, for arbitrarily small \(\varepsilon\), cannot be achieved in \(o(n^2)\) time. One of the key tools is a sublinear algorithm for estimating the maximum matching size, itself a well-studied problem in the community.
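
For reference, here is the classic MST-based 2-approximation that the paper aims to beat — note that this baseline reads the whole matrix, i.e. \(\Theta(n^2)\) time, which is exactly what the sublinear setting forbids. A sketch using networkx:

```python
import networkx as nx

def mst_tsp_2_approx(dist):
    # dist: n x n metric distance matrix. Compute an MST, then shortcut a
    # depth-first traversal of it into a tour. By the triangle inequality,
    # tour cost <= 2 * MST cost <= 2 * OPT.
    n = len(dist)
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=dist[i][j])
    T = nx.minimum_spanning_tree(G)
    order = list(nx.dfs_preorder_nodes(T, source=0))
    return sum(dist[order[i]][order[(i + 1) % n]] for i in range(n))
```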

News for May 2020

Last month saw activity across a diverse collection of topics in sublinear algorithms. In particular, we had the following five six papers. (Sorry, I missed one)

One-Sided Error Testing of Monomials and Affine Subspaces by Oded Goldreich and Dana Ron (ECCC). This work focuses on one-sided testing of two kinds of problems (and their variants):
1. Testing monomials: Suppose you are given a function \(f \colon \{0,1\}^n \to \{0,1\}\). Is \(f = \wedge_{i \in I} x_i\) for some \(I \subseteq [n]\) (that is, is \(f\) a monotone monomial)?
2. Testing affine subspaces: Consider the task of testing whether \(f \colon \mathcal{F}^n \to \{0,1\}\) is the indicator of an \((n-k)\)-dimensional affine subspace for some \(k\) (where \(\mathcal{F}\) is a finite field).

The paper shows that the general problem — the one in which the arity of the monomial (resp. the co-dimension of the subspace) is not specified — has one-sided query complexity \(\widetilde{O}(1/\varepsilon)\). The same holds for testing whether the arity of the monomial is at most \(k\) (resp. the co-dimension of the subspace is at most \(k\)). Finally, the exact problem, which seeks to test whether the arity of the monomial is exactly \(k\) (resp. the co-dimension of the subspace is exactly \(k\)), has query complexity \(\Omega(\log n)\). For two-sided testers, however, size-oblivious testers are known for this problem. Thus, as the authors remark, two-sided error is inherent for the exact version of the problem.

Sampling Arbitrary Subgraphs Exactly Uniformly in Sublinear Time by Hendrik Fichtenberger, Mingze Gao, Pan Peng (arXiv). Readers of PT Review are no strangers to the problem of counting cliques in sublinear time (with a certain query model). Building on tools from [1], Eden-Ron-Seshadhri [2] gave the first algorithms for counting the number of copies of \(K_r\) in a graph \(G\) to within a \((1 \pm \varepsilon)\) multiplicative factor. En route to this result, they also gave a procedure to sample cliques incident to some special set \(S \subseteq V(G)\). The query model in [2] allowed the following queries: a uniformly random vertex query, degree queries, \(i^{th}\)-neighbor queries, and pair queries which answer whether a pair \((u,v)\) forms an edge. The work under consideration shows a result which I personally find remarkable: given the additional ability to obtain a uniformly random edge, for any graph \(H\) we can obtain a uniformly random copy of \(H\) in \(G\). Let that sink in: this work shows that you can sample copies of \(H\) exactly uniformly from the graph \(G\).

Finding Planted Cliques in Sublinear Time by Jay Mardia, Hilal Asi, Kabir Aladin Chandrasekher (arXiv). Planted clique is a time-honored problem in average case complexity. This classic problem asks the following: you are given \(G \sim \mathcal{G}(n, 1/2)\) in which a subset of \(k\) vertices has been selected and a clique placed on the subgraph it induces. In principle, it is possible to recover the planted clique if \(k > (2 + \varepsilon) \log n\), but it seems you get polynomial time algorithms only when \(k \geq \Omega(\sqrt n)\), even after throwing SDPs at the problem. Moreover, so far, the algorithms which recover the planted \(k\)-clique were known to take \(\widetilde{O}(n^2)\) time. This work shows that you actually get algorithms running in time \(\widetilde{O}(n^{3/2})\) if \(k \geq \Omega(\sqrt{n \log n})\). The key idea is to first obtain a “core” of the clique, of size \(O(\log n)\), in time \(\widetilde{O}(n^2/k)\). This is followed up with a clique completion routine which marks all vertices connected to the entire core as potentially being in the clique. The paper also shows a conditional lower bound: given query access to the adjacency matrix of the graph, a natural family of non-adaptive algorithms cannot recover a planted \(k\)-clique in time \(o\left(\left(\frac{n}{k}\right)^3\right)\) (for \(k \geq \widetilde{\Omega}(\sqrt n)\)).
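
As a toy illustration, here is the planted clique model together with the clique-completion step described above. Finding the core in sublinear time is where all of the paper's actual work lies; this sketch is mine, not the paper's algorithm.

```python
import random

def plant_clique(n, k):
    # Generate G(n, 1/2) and plant a k-clique on a random vertex subset.
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < 0.5:
                adj[u].add(v); adj[v].add(u)
    K = set(random.sample(range(n), k))
    for u in K:
        adj[u] |= K - {u}
    return adj, K

def complete_from_core(adj, core):
    # Clique completion: mark every vertex adjacent to the entire core
    # as potentially belonging to the planted clique.
    return {v for v in adj if core <= (adj[v] | {v})}
```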

A robust multi-dimensional sparse Fourier transform in the continuous setting by Yaonan Jin, Daogao Liu and Zhao Song (arXiv). Suppose you are given an unknown signal whose Fourier spectrum is \(k\)-sparse (that is, there are at most \(k\) dominant Fourier coefficients, and all the others are zero or close to zero). Significant research effort has been devoted to learning such signals, leading to works which study this problem in the multi-dimensional discrete setting and in the one-dimensional continuous setting. The \(d\)-dimensional continuous case \((d = \Theta(1))\) was largely unexplored. This work makes progress on this frontier under some natural assumptions on the unknown signal. In particular, the paper assumes that the frequencies — which are vectors \(f_i \in \mathbb{R}^d\) — are well separated, satisfying \(\|f_i - f_j\|_2 \geq \eta\) for \(i \neq j\), and that all of \(\{f_i\}_{i \in [k]}\) sit inside a bounded box \([-F, F]^d\). The authors assume sample access to the signal, in the sense that the algorithm can query the signal’s value at any desired timestep \(\tau\). With this setup, the authors show that all the dominant frequencies can be recovered with \(O_d(k \cdot \log(F/\eta))\) samples by considering a relatively small time horizon.
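
For concreteness, the underlying signal model is, up to normalization conventions that may differ from the paper's, a noisy \(k\)-sparse exponential sum:

\[ x(t) = \sum_{j=1}^{k} v_j \, e^{2\pi i \langle f_j, t \rangle} + g(t), \qquad t \in [0, T]^d, \]

where the \(v_j\) are the dominant coefficients, the frequencies \(f_j \in [-F, F]^d\) are pairwise \(\eta\)-separated, and \(g\) is a noise term.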

Extrapolating the profile of a finite population by Soham Jana, Yury Polyanskiy, Yihong Wu (arXiv). Consider the following setup. You are given a universe of balls, each of which comes in one of up to \(k\) different colors; say there are \(\theta_j\) balls of color \(j\) for each \(j \in [k]\). One of the fundamental problems in statistics considers taking a sample of \(m\) balls from the universe and attempting to estimate the “population profile” (that is, the number of balls of each color). Historically, it is known that unless an overwhelming majority of the universe has been seen, one cannot estimate the empirical distribution of colors. This paper shows that in the sublinear regime \(m = \omega(k/\log k)\), it is possible to consistently estimate the population profile in total variation distance. And once you have a handle on the empirical distribution of the population, you can go ahead and learn lots of interesting label-invariant properties of your universe (things like entropy, number of distinct elements, etc.).

(Edit added later)

Testing Positive Semi-Definiteness via Random Submatrices by Ainesh Bakshi, Nadiia Chepurko, Rajesh Jayaram (arXiv). Suppose I give you a symmetric matrix \(A \in \mathbb{R}^{n \times n}\). If \(A\) is PSD, then so are all of its principal submatrices. But what if \(A\) is \(\varepsilon\)-far from the PSD cone (in a sense I will define shortly)? What can you say about the eigenvalues of the principal submatrices of \(A\) then? In this paper, the authors tackle precisely this question. The paper defines a matrix \(A\) to be \(\varepsilon\)-far in \(\ell_2^2\) distance from the PSD cone if \(\min_{B \succeq 0, B \in \mathbb{R}^{n \times n}}\|A - B\|_F^2 \geq \varepsilon n^2\). You are allowed to randomly sample a bunch of principal submatrices (of order roughly \(O(1/\varepsilon)\) by \(O(1/\varepsilon)\)) and check whether they are PSD. Armed with this setup, the paper gives a non-adaptive one-sided tester for this problem which makes \(\widetilde{O}(1/\varepsilon^4)\) queries. The paper also supplements this result with a lower bound of \(\widetilde{\Omega}(1/\varepsilon^2)\) queries.
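
A minimal sketch of the random-submatrix tester, assuming numpy; the number of repetitions and the exact submatrix size below are placeholders rather than the paper's parameters.

```python
import numpy as np

def test_psd_submatrices(A, eps, trials=10, tol=1e-9, seed=None):
    # Sample random principal submatrices of order ~ 1/eps and reject iff
    # one of them has a (significantly) negative eigenvalue. One-sided: if
    # A is PSD, every principal submatrix is PSD, so we never reject.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    k = min(n, int(np.ceil(1 / eps)))
    for _ in range(trials):
        idx = rng.choice(n, size=k, replace=False)
        sub = A[np.ix_(idx, idx)]
        if np.linalg.eigvalsh(sub).min() < -tol:
            return False  # a witness that A is not PSD
    return True
```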

If I missed something, please let me know. This is my first post on PT Review and I might have botched up a few things.

References

[1] Talya Eden, Amit Levi, Dana Ron and C. Seshadhri. Approximately Counting Triangles in Sublinear Time. 56th Annual Symposium on Foundations of Computer Science (FOCS), 2015.

[2] Talya Eden, Dana Ron and C. Seshadhri. On Approximating the Number of \(k\)-Cliques in Sublinear Time. Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC), 2018.

News for April 2020

April is now behind us, and we hope you and your families are all staying safe and healthy. We saw six seven property testing papers appear online last month, so at least there is some reading ahead of us! A mixture of privacy, quantum, high-dimensional distributions, and juntas (juntæ?). A lot of distribution testing, overall.

Connecting Robust Shuffle Privacy and Pan-Privacy, by Victor Balcer, Albert Cheu, Matthew Joseph, and Jieming Mao (arXiv). This paper considers a recent notion of differential privacy called shuffle privacy, where users have sensitive data, a central untrusted server wants to do something with that data (for instance, say… testing its distribution), and a trusted middle-man/entity shuffles the users’ messages u.a.r. to bring in a bit more anonymity. As it turns out, testing uniformity (or identity) of distributions in the shuffle privacy model is (i) much harder than without privacy constraints; (ii) much harder than with ‘usual’ (weaker) differential privacy; (iii) much easier than with local privacy; and (iv) related to the sample complexity under another privacy notion, pan-privacy. It’s an exciting brand new world out there!

(Note: for the reader interested in keeping track of identity/uniformity testing of probability distributions under various privacy models, I wrote a very short summary of the current results here.)

Entanglement is Necessary for Optimal Quantum Property Testing, by Sebastien Bubeck, Sitan Chen, and Jerry Li (arXiv). The analogue of uniformity testing, in the quantum world, is testing whether a quantum state is equal to (or far from) the maximally mixed state. It’s known that this task has “quantum sample complexity” (number of measurements) \(\Theta(d/\varepsilon^2)\) — i.e., square-root dependence on the number of parameters of the state, \(d^2\). But this requires entangled measurements, which may be tricky to get (or, in my case, understand): what happens if the measurements can be adaptive, but not entangled? In this work, the authors show that, under this weaker access model, \(\Omega(d^{4/3}/\varepsilon^2)\) measurements are necessary: adaptivity alone won’t cut it. It may still help, though: with neither entanglement nor adaptivity, the authors also show an \(\Omega(d^{3/2}/\varepsilon^2)\) measurement lower bound.

Testing Data Binnings, by Clément Canonne and Karl Wimmer (ECCC). More identity testing! Not private and not quantum for this one, but… not quite identity testing either. To paraphrase the abstract: this paper introduces (and gives nearly matching bounds for) the question of identity up to binning, where the reference distribution \(q\) is over \(k \ll n\) elements: the question is then whether there exists a suitable binning of the domain \([n]\) into \(k\) intervals such that, once binned, \(p\) is equal to \(q\).
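
A quick illustration of what “binning” means here, in Python; the boundary choice below is exactly the existentially quantified object the tester has to contend with.

```python
def bin_distribution(p, boundaries):
    # p: distribution over [n], as a list of probabilities; boundaries:
    # sorted cut points splitting [n] into k intervals. "Identity up to
    # binning" asks whether SOME boundary choice makes the binned p equal q.
    bins, start = [], 0
    for b in list(boundaries) + [len(p)]:
        bins.append(sum(p[start:b]))
        start = b
    return bins

# bin_distribution([0.1, 0.2, 0.3, 0.4], [2]) == [0.3, 0.7] (up to rounding)
```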

Hardness of Identity Testing for Restricted Boltzmann Machines and Potts models, by Antonio Blanca, Zongchen Chen, Daniel Štefankovič, and Eric Vigoda (arXiv). Back to identity testing of distributions, this time for high-dimensional structured ones. Specifically, this paper focuses on the undirected graphical models known as restricted Boltzmann machines, and provides efficient algorithms for identity testing, as well as conditional hardness lower bounds, depending on the type of correlations allowed in the graphical model.

Robust testing of low-dimensional functions, by Anindya De, Elchanan Mossel, and Joe Neeman (arXiv). Junta testing is a classical, central problem in property testing, with motivations and applications in machine learning and complexity. The related (and equally well-motivated) question of junta testing of functions on \(\mathbb{R}^d\) (instead of the Boolean hypercube) was recently studied by the same authors; and the related (and, again, equally well-motivated) question of tolerant junta testing on the Boolean hypercube was also recently studied (among other works) by the same authors. Well, this paper does it all, and tackles the challenging (and, for a change, equally well-motivated!) question of tolerant testing of juntas on \(\mathbb{R}^d\).

Differentially Private Assouad, Fano, and Le Cam, by Jayadev Acharya, Ziteng Sun, and Huanyu Zhang (arXiv). Back to probability distributions and privacy. This paper provides differentially private analogues of the classical eponymous statistical inference results (Assouad’s lemma, Fano’s inequality, and Le Cam’s method). In particular, it gives ready-to-use, blackbox tools to prove testing and learning lower bounds for distributions in the differentially private setting, and shows how to use them to easily derive, and rederive, several lower bounds.

Edit: We missed one!

Learning and Testing Junta Distributions with Subcube Conditioning, by Xi Chen, Rajesh Jayaram, Amit Levi, Erik Waingarten (arXiv). This paper focuses on the subcube conditioning model of (high-dimensional) distribution testing, where the algorithm can fix some variables to values of its choosing and get samples conditioned on those variables. Extending and refining techniques from a previous work by a (sub+super)set of the authors, the paper shows how to optimally learn and test junta distributions in this framework—with exponential savings with respect to the usual i.i.d. sampling model.

News for March 2020

I hope all of you are keeping safe and healthy in these difficult times. Thank heavens we have our research and our math to keep our sanity…

This month has seen two papers, one on testing variable partitions and one on distributed isomorphism testing.

Learning and Testing Variable Partitions by Andrej Bogdanov and Baoxiang Wang (arXiv). Consider a function \(f:\Sigma^n \to G\), where \(G\) is an Abelian group. Let \(V\) denote the set of variables. The function \(f\) is \(k\)-separable if there is a partition \(V_1, V_2, \ldots, V_k\) of \(V\) such that \(f(V)\) can be expressed as a sum \(f_1(V_1) + f_2(V_2) + \ldots + f_k(V_k)\). This is an obviously natural property to study, though the specific application mentioned in the paper is high-dimensional reinforcement learning control. There are a number of learning results, but we’ll focus on the main testing result: the property of \(k\)-separability can be tested with \(O(kn^3/\varepsilon)\) queries, for \(\Sigma = \mathbb{Z}_q\) (the distance between functions being the usual Hamming distance). There is an analogous result (with different query complexity) for \(\Sigma = \mathbb{R}\). It is also shown that testing 2-separability requires \(\Omega(n)\) queries, even with two-sided error. The paper, to its credit, also has empirical studies of the learning algorithm, with applications to reinforcement learning.
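
For illustration, here is a toy 3-separable function over \(\Sigma = \mathbb{Z}_5\) with \(G = \mathbb{Z}_5\) — a hypothetical example of mine, not one from the paper.

```python
# f: (Z_5)^5 -> Z_5, decomposing over the variable partition
# V1 = {x0, x1}, V2 = {x2}, V3 = {x3, x4}.
def f(x):
    f1 = (x[0] * x[1]) % 5          # depends only on V1
    f2 = (3 * x[2]) % 5             # depends only on V2
    f3 = ((x[3] + x[4]) ** 2) % 5   # depends only on V3
    return (f1 + f2 + f3) % 5
```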

Distributed Testing of Graph Isomorphism in the CONGEST model by Reut Levi and Moti Medina (arXiv). This result follows a recent line of research in distributed property testing algorithms. The main aim is to minimize the number of rounds of (synchronous) communication for a property testing problem. Let \(G_U\) denote the graph representing the distributive network. The aim is to test whether an input graph \(G_K\) is isomorphic to \(G_U\). The main property testing results are as follows. For the dense graph case, isomorphism can be property tested (with two-sided error) in \(O(D + (\varepsilon^{-1}\log n)^2) \) rounds, where \(D\) is the diameter of the graph and \(n\) is the number of nodes. (And, as a reader of this blog, you probably know what \(\varepsilon\) is already…). There is a standard \(\Omega(D)\) lower bound for distributed testing problems. For various classes of sparse graphs (like bounded-degree minor-free classes), constant time isomorphism (standard) property testers are known. This paper provides a simulation argument showing that standard/centralized \(q\)-query property testers can be implemented in the distributed model, in \(O(Dq)\) rounds (this holds for any property, not just isomorphism). Thus, these simulations imply \(O(D)\)-round property testers for isomorphism for bounded-degree minor-free classes.

News for February 2020

Despite a wealth of February conference deadlines, papers were fairly sparse. We found two EDIT: three EDIT EDIT: four papers for the month of February, please share if we missed any relevant papers.

Monotone probability distributions over the Boolean cube can be learned with sublinear samples, by Ronitt Rubinfeld and Arsen Vasilyan (arXiv). By now, it is well known that assuming an (unknown) distribution enjoys some sort of structure can lead to more efficient algorithms for learning and testing. Often one proves that the structure permits a convenient representation, and exploits this representation to solve the problem at hand. This paper studies the learning of monotone distributions over the Boolean hypercube. The authors exploit and extend a structural statement about monotone Boolean functions by Blais, Håstad, Servedio, and Tan, using it to provide sublinear algorithms for estimating the support size, distance to uniformity, and the distribution itself.

Locally Private Hypothesis Selection, by Sivakanth Gopi, Gautam Kamath, Janardhan Kulkarni, Aleksandar Nikolov, Zhiwei Steven Wu, and Huanyu Zhang (arXiv). Given a collection of \(k\) distributions and a set of samples from one of them, can we identify which distribution it is? This paper studies this problem (and an agnostic generalization of it) under the constraint of local differential privacy. The authors show that this problem requires \(\Omega(k)\) samples, in contrast to the \(O(\log k)\) complexity in the non-private model. Furthermore, they give \(\tilde O(k)\)-sample upper bounds in various interactivity models.

Efficient Distance Approximation for Structured High-Dimensional Distributions via Learning, by Arnab Bhattacharyya, Sutanu Gayen, Kuldeep S. Meel, and N. V. Vinodchandran (arXiv). Given samples from two distributions, can you estimate the total variation distance between them? This paper gives a framework for solving this problem for structured distribution classes, including Ising models, Bayesian networks, Gaussians, and causal models. The approach can be decomposed into properly learning the distributions, followed by estimating the distance between the two hypotheses. Challenges arise when densities are hard to compute exactly.

Profile Entropy: A Fundamental Measure for the Learnability and Compressibility of Discrete Distributions, by Yi Hao and Alon Orlitsky (arXiv). The histogram of a dataset is the collection of frequency counts of domain elements. The profile of a dataset can be succinctly described as the histogram of the histogram. Recent works have shown that, in some sense, discarding information about your dataset by looking solely at the profile can be beneficial for certain problems in which it is “universal”. This work explores two new quantities, the entropy and dimension of the profile, which turn out to play a key role in quantifying the performance of estimators based on the profile.
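
Since the definition is compact, here is the profile computed in a few lines of Python:

```python
from collections import Counter

def profile(samples):
    # Histogram: element -> its frequency in the sample.
    freq = Counter(samples)
    # Profile: frequency -> how many distinct elements have that frequency.
    return dict(Counter(freq.values()))

# profile("abracadabra") == {5: 1, 2: 2, 1: 2}
# (a appears 5 times; b and r twice each; c and d once each)
```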