# News for January 2021

The first month of 2021 has brought with it 5 papers, covering graph testing, Boolean function testing, and distribution testing — as well as database theory. Let’s dive into it.

Random walks and forbidden minors III: $$\mathrm{poly}(d/\varepsilon)$$-time partition oracles for minor-free graph classes, by Akash Kumar, C. Seshadhri, and Andrew Stolman (arXiv). Bounded-degree graphs from minor-closed families have a very nice property: denoting by $$d$$ the degree bound and $$n$$ the number of vertices, it is always possible to partition any such graph into components of constant size, $$O_\varepsilon(1)$$, just by removing a linear number of edges, merely $$\varepsilon dn$$ ($$\varepsilon$$ being a small parameter). This is a crucial result in many graph property testing algorithms, those which rely on something called a “partition oracle”: loosely speaking, a routine which makes “few” queries to the graph, and is able to indicate which component of an underlying partition any given vertex belongs to. But what is “few” here? The first partition oracles made $$d^{\mathrm{poly}(d,1/\varepsilon)}$$ queries to the graph to answer any such request. This was later significantly improved to $$d^{\log(d/\varepsilon)}$$. Using spectral graph theory techniques previously developed by the authors (hence the “III” in the title), this work settles the question, achieving partition oracles which are as good as it gets: making only $$\mathrm{poly}(d,1/\varepsilon)$$ queries to the graph! This in turn has immediate consequences for graph property testing, which the paper details.
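To make the interface concrete, here is a minimal toy sketch (not the paper’s algorithm): a *global* greedy partition into bounded-size components, together with the way a tester would consume a partition oracle’s answers to estimate the fraction of cut edges. A real partition oracle answers the same per-vertex queries locally, with $$\mathrm{poly}(d,1/\varepsilon)$$ queries and no global pass; all names below are illustrative.

```python
import random
from collections import deque

def build_partition(adj, comp_size_bound):
    """Toy *global* partition: greedy BFS chunks of bounded size.
    (A real partition oracle answers comp_of[v] queries locally,
    without ever seeing the whole graph.)"""
    comp_of = {}
    next_id = 0
    for s in adj:
        if s in comp_of:
            continue
        queue = deque([s])
        size = 0
        while queue and size < comp_size_bound:
            v = queue.popleft()
            if v in comp_of:
                continue
            comp_of[v] = next_id
            size += 1
            queue.extend(u for u in adj[v] if u not in comp_of)
        next_id += 1
    return comp_of

def estimate_cut_fraction(adj, comp_of, samples=1000):
    """How a tester uses the oracle: sample random (vertex, neighbour)
    pairs and check whether the partition separates them."""
    verts = list(adj)
    cut = total = 0
    for _ in range(samples):
        v = random.choice(verts)
        if not adj[v]:
            continue
        u = random.choice(adj[v])
        total += 1
        cut += comp_of[v] != comp_of[u]
    return cut / max(total, 1)
```

On a path graph this greedy pass chops the path into consecutive chunks, cutting one edge per chunk boundary; a property tester only ever needs the `comp_of` answers, never the partition itself.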

And since we are on the topic of oracles… more exciting news on that front:

Spectral Clustering Oracles in Sublinear Time, by Grzegorz Gluch, Michael Kapralov, Silvio Lattanzi, Aida Mousavifar, Christian Sohler (arXiv). Given a graph $$G$$ and an integer $$k$$, one often wants to partition the graph into components $$C_1,C_2,\dots,C_k$$, such that each $$C_i$$ is well-connected and has few edges to the other $$C_j$$’s. But can we get ultra-efficient algorithms for such spectral clusterings? Specifically, can we design oracles for them: sublinear-time algorithms which provide implicit access to an underlying “good” spectral clustering $$\hat{C}_1,\hat{C}_2,\dots,\hat{C}_k$$, by returning on any query vertex $$v$$ the index of the cluster $$\hat{C}_i$$ to which $$v$$ belongs? This paper introduces the question, and answers it in the affirmative: in more detail, it provides a spectral clustering oracle which, for any $$\varepsilon>0$$, has preprocessing time $$2^{\mathrm{poly}(k/\varepsilon)}n^{1/2+O(\varepsilon)}$$, query time $$n^{1/2+O(\varepsilon)}$$, and space $$n^{1/2+O(\varepsilon)}$$; and provides access to a clustering with relative error $$O(\varepsilon\log k)$$ per cluster. The paper also allows tradeoffs between query time and space, and discusses applications to the Local Computation Algorithms (LCA) model.
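For intuition about what the oracle’s answers should look like, here is a tiny, decidedly *non*-sublinear spectral clustering sketch for the special case $$k=2$$: split vertices by the sign of the Fiedler vector (the eigenvector of the graph Laplacian for its second-smallest eigenvalue). The paper’s oracle provides implicit access to such a clustering per queried vertex, without ever computing eigenvectors of the whole graph.

```python
import numpy as np

def spectral_bipartition(adj_matrix):
    """Toy k = 2 spectral clustering on a dense adjacency matrix.
    Returns a 0/1 cluster label per vertex, by the sign of the
    Fiedler vector of the (unnormalized) Laplacian L = D - A."""
    deg = adj_matrix.sum(axis=1)
    lap = np.diag(deg) - adj_matrix
    # eigh returns eigenvalues in ascending order; column 0 is the
    # all-ones direction (eigenvalue 0 for a connected graph).
    _, eigvecs = np.linalg.eigh(lap)
    fiedler = eigvecs[:, 1]
    return (fiedler >= 0).astype(int)
```

On two cliques joined by a single bridge edge, the Fiedler vector is positive on one clique and negative on the other, so this recovers the planted clusters; a sublinear oracle must reach the same verdict while looking only at a small neighbourhood of the queried vertex.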

Next stop, distribution testing…

The Sample Complexity of Robust Covariance Testing, by Ilias Diakonikolas and Daniel M. Kane (arXiv). Suppose you have i.i.d. samples from some high-dimensional Gaussian $$\mathcal{N}(0,\Sigma)$$ in $$d$$ dimensions, and want to test whether the unknown covariance matrix $$\Sigma$$ is the identity, versus $$\varepsilon$$-far from it (in Frobenius norm). Good news: we know how to do that, and $$\Theta(d/\varepsilon^2)$$ samples are necessary and sufficient. (To learn $$\Sigma$$, that’d be $$\Theta(d^2/\varepsilon^2)$$.) Bad news: you don’t have i.i.d. samples from some high-dimensional Gaussian $$\mathcal{N}(0,\Sigma)$$; what you have is i.i.d. samples from a noisy version of it, $$(1-\alpha)\mathcal{N}(0,\Sigma) + \alpha B$$, where $$B$$ is an arbitrary “bad” distribution (not necessarily Gaussian itself). You still want to test whether the covariance $$\Sigma$$ is the identity, but now you have that extra $$\alpha$$ fraction of noisy samples, and you need to do that testing robustly… The good news is, you can still do that by learning the covariance matrix $$\Sigma$$, robustly, with $$O(d^2/\varepsilon^2)$$ samples. The bad news is the main result of this paper: that’s also the best you can do. That is, $$\Omega(d^2)$$ samples are necessary: if you have to be robust to noise, testing is no longer easier than learning…
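The noiseless testing task is easy to state in code. Below is a plug-in sketch of the natural test statistic, the squared Frobenius distance between the empirical covariance and the identity; this is just the quantity being tested, not the paper’s (robust) algorithm, and the threshold used here is illustrative.

```python
import numpy as np

def covariance_identity_stat(samples):
    """Squared Frobenius distance between the empirical covariance of
    mean-zero samples and the identity. A tester rejects "Sigma = I"
    when this statistic exceeds a threshold calibrated to epsilon."""
    n, d = samples.shape
    emp_cov = samples.T @ samples / n   # plug-in estimate of Sigma
    return np.linalg.norm(emp_cov - np.eye(d), ord="fro") ** 2
```

Under $$\Sigma = I$$ the statistic concentrates near $$0$$ (at scale roughly $$d^2/n$$), while for $$\Sigma = 2I$$ it concentrates near $$\|I\|_F^2 = d$$; the subtlety the paper addresses is what happens once an $$\alpha$$ fraction of the samples is adversarial.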

Onto Boolean functions!

Junta Distance Approximation with Sub-Exponential Queries, by Vishnu Iyer, Avishay Tal, and Michael Whitmeyer (ECCC). If you follow this blog, you may have seen over the past couple of years a flurry of results about tolerant junta testing: “given query access to some Boolean function $$f\colon\{-1,1\}^n\to\{-1,1\}$$, how close is $$f$$ to only depending on $$k \ll n$$ of its variables?”
This paper contains several results on this problem, including an improved bicriteria tolerant testing algorithm: an efficient algorithm making $$\mathrm{poly}(k,1/\varepsilon)$$ queries which distinguishes between functions $$\varepsilon$$-close to a $$k$$-junta and those $$1.1\varepsilon$$-far from every $$k'$$-junta, where $$k' = O(k/\varepsilon^2)$$. But the main result of the paper, and the one giving it its name, is for the non-relaxed version where $$k'=k$$: while all previous works had query complexity $$2^{O(k)}$$, here the authors show how to break that exponential barrier, giving a fully tolerant testing algorithm with query complexity $$2^{\tilde{O}(\sqrt{k})}$$!
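To pin down the quantity being approximated, here is a brute-force computation of the distance from $$f$$ to the closest $$k$$-junta, feasible only for tiny $$n$$ since it enumerates the whole truth table; the point of the paper is to approximate this distance with only $$2^{\tilde{O}(\sqrt{k})}$$ queries to $$f$$.

```python
from itertools import combinations

def distance_to_k_junta(truth_table, n, k):
    """Exact (exponential-time) distance from f: {0,1}^n -> {0,1} to the
    closest k-junta. For each choice of k coordinates, the optimal junta
    takes a majority vote of f over each setting of those coordinates."""
    best = 1.0
    for coords in combinations(range(n), k):
        buckets = {}
        for idx, val in enumerate(truth_table):
            key = tuple((idx >> c) & 1 for c in coords)
            buckets.setdefault(key, []).append(val)
        disagreements = 0
        for vals in buckets.values():
            ones = sum(vals)
            # best junta value on this bucket: the majority bit
            disagreements += min(ones, len(vals) - ones)
        best = min(best, disagreements / len(truth_table))
    return best
```

For example, $$f(x) = x_0 \oplus x_1$$ on three variables is a $$2$$-junta (distance $$0$$), but is at distance exactly $$1/2$$ from every $$1$$-junta, since XOR is uncorrelated with each individual variable.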

And finally, a foray into database theory:

Towards Approximate Query Enumeration with Sublinear Preprocessing Time, by Isolde Adler and Polly Fahey (arXiv). In this paper, the authors are concerned with the task of (approximate) query enumeration on databases, aiming for ultra-efficient (i.e., sublinear-time) algorithms. Leveraging techniques from property testing (specifically, in the bounded-degree graph model), they show the following:
On input databases of bounded degree and bounded tree-width, every (fixed) first-order definable query can be enumerated approximately in time linear in the output size, after only a sublinear-time preprocessing phase.

That’s all for this month! If you noticed a paper we missed, please let us know in the comments.