**Learning-based Support Estimation in Sublinear Time** by Talya Eden, Piotr Indyk, Shyam Narayanan, Ronitt Rubinfeld, Sandeep Silwal, and Tal Wagner (arXiv). A classic problem in distribution testing is that of estimating the support size \(n\) of an unknown distribution \(\mathcal{D}\). (Assume that all elements in the support have probability at least \(1/n\).) A fundamental result of Valiant-Valiant (2011) proves that the sample complexity of this problem is \(\Theta(n/\log n)\). A line of work has emerged that tries to reduce this complexity using additional sources of information. Canonne-Rubinfeld (2014) showed that, if one can query the exact probabilities of elements, then the complexity can be made independent of \(n\). This paper studies a robust version of this assumption: suppose we can only get constant-factor approximations to the probabilities. The main result is that we can then get a query complexity of \(n^{1-1/\log(\varepsilon^{-1})} \ll n/\log n\) (where the constant \(\varepsilon\) denotes the additive approximation to the support size). The paper also presents empirical experiments showing that the new algorithm is indeed better in practice. Moreover, existing methods degrade rapidly with poorer probability estimates, while the new algorithm maintains its accuracy even with such estimates.
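To see why probability queries help so much, note the identity \(\mathbb{E}_{x \sim \mathcal{D}}[1/\mathcal{D}(x)] = \sum_x \mathcal{D}(x) \cdot (1/\mathcal{D}(x)) = n\). Here is a minimal sketch of the resulting estimator in the *exact*-probability setting of Canonne-Rubinfeld (the toy distribution and all names are mine; the paper's contribution is the much harder robust version, where queried probabilities are only correct up to constant factors):

```python
import random

# E_{x ~ D}[1 / D(x)] = sum_x D(x) * (1 / D(x)) = |support of D|,
# so averaging 1/D(x) over i.i.d. samples estimates the support size.
def estimate_support(sample, prob_of):
    return sum(1.0 / prob_of(x) for x in sample) / len(sample)

# Toy distribution on {0, ..., 99}: element 0 is 50x heavier than the rest.
random.seed(0)
n = 100
weights = [50.0] + [1.0] * (n - 1)
total = sum(weights)
probs = [w / total for w in weights]
sample = random.choices(range(n), weights=probs, k=20000)

est = estimate_support(sample, lambda x: probs[x])
print(est)  # close to 100
```

Note the estimator never needs to see most of the support; a few tens of thousands of samples suffice here regardless of how skewed the distribution is, since each sample contributes an unbiased term.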

**The Price of Tolerance in Distribution Testing** by Clément L. Canonne, Ayush Jain, Gautam Kamath, and Jerry Li (arXiv). While we have seen many results in distribution testing, the subject of tolerance is one that hasn’t received as much attention. Consider the problem of testing whether an unknown distribution \(p\) (over domain \([n]\)) is the same as a known distribution \(q\). We wish to distinguish \(\varepsilon_1\)-close from \(\varepsilon_2\)-far, under total variation distance. When \(\varepsilon_1\) is zero, this is the standard property testing setting, and classic results yield \(\Theta(\sqrt{n})\) sample complexity. If \(\varepsilon_1 = \varepsilon_2/2\), then we are looking for a constant-factor approximation to the distance, and the complexity is \(\Theta(n/\log n)\). Surprisingly, nothing was known in between. Until this paper, that is. The main result gives a complete characterization of the sample complexity (up to log factors), for all values of \(\varepsilon_1, \varepsilon_2\). Remarkably, the sample complexity has an additive term \((n/\log n) \cdot (\varepsilon_1/\varepsilon^2_2)^2\). Thus, when \(\varepsilon_1 \geq \varepsilon^2_2\), the sample complexity is \(\Theta(n/\log n)\). When \(\varepsilon_1\) is smaller, the main result gives a smooth dependence of the sample complexity on \(\varepsilon_1\). One of the main challenges is that existing results use two very different techniques for the property testing vs. constant-factor approximation regimes. The former uses simpler \(\ell_2\)-statistics (e.g. collision counting), while the latter is based on polynomial approximations (estimating moments). The upper bound in this paper shows that simpler statistics based on just the first two moments suffice to get results for all regimes of \(\varepsilon_1, \varepsilon_2\).
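For a feel of the “simpler \(\ell_2\)-statistics” mentioned above, here is the textbook collision-counting estimator for \(\|p\|_2^2\), whose value is exactly \(1/n\) when \(p\) is uniform and strictly larger otherwise. This is only the classical statistic, not the paper's tester; the toy distributions are mine:

```python
import random
from collections import Counter

def l2_norm_estimate(sample):
    # Pr[X = Y] = sum_x p(x)^2 for independent X, Y ~ p, so the fraction
    # of colliding pairs among the samples estimates ||p||_2^2.
    m = len(sample)
    collisions = sum(c * (c - 1) // 2 for c in Counter(sample).values())
    return collisions / (m * (m - 1) / 2)

random.seed(1)
n = 50
uniform_sample = [random.randrange(n) for _ in range(3000)]
skewed_sample = [random.randrange(n // 10) for _ in range(3000)]  # uniform on n/10 elements

est_uniform = l2_norm_estimate(uniform_sample)   # ~ 1/50 = 0.02
est_skewed = l2_norm_estimate(skewed_sample)     # ~ 1/5  = 0.20
print(est_uniform, est_skewed)
```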

**Open Problems in Property Testing of Graphs** by Oded Goldreich (ECCC). As the title clearly states, this is a survey covering a number of open problems in graph property testing. The broad division is based on the query model: dense graphs, bounded degree graphs, and general graphs. A reader will see statements of various classic open problems, such as the complexity of testing triangle freeness for dense graphs and characterizing properties that can be tested in \(poly(\varepsilon^{-1})\) queries. Arguably, there are more open problems (and fewer results) for testing in bounded degree graphs, where we lack broad characterizations of testable properties. An important, though less famous (?), open problem is that of the complexity of testing isomorphism. It would appear that the setting of general graphs, where we know the least, may be the next frontier for graph property testing. A problem that really caught my eye: can we transform testers that work for bounded degree graphs into those that work for bounded arboricity graphs? The latter is a generalization of bounded degree that has appeared in a number of recent results on sublinear graph algorithms.

- James Aspnes (Yale) on *Population Protocols*
- Uri Stemmer (Ben-Gurion University) on *The Local Model of Differential Privacy: A Survey*
- Mary Wootters (Stanford University) on *Lifted Codes and Disjoint Repair Groups*
- Christian Sohler (University of Cologne) on *Property Testing in Planar Graphs*
- Elaine Shi (Carnegie Mellon University) on *Game-Theoretically Secure Protocols Inspired by Blockchains*
- Jelani Nelson (UC Berkeley) on *Optimal bounds for approximate counting*

Thanks again to the speakers and organizers, and looking forward to WOLA’22!

**GSF-locality is not sufficient for proximity-oblivious testing**, by Isolde Adler, Noleen Kohler, Pan Peng (arXiv). The notion of proximity-oblivious testers was made explicit in the seminal work of Goldreich and Ron in 2009 [GR09]. A proximity-oblivious tester for a graph property is a constant-query tester that rejects a graph with probability that increases monotonically with its distance to the property. (**Edit**: *Correction*) A property is called proximity-oblivious testable (or PO testable) if it has a one-sided proximity-oblivious tester. [GR09] gave a characterization of the PO testable properties in the bounded-degree model: a property \(\Pi\) is PO testable *if and only if* it is a “local” property of some kind which satisfies a certain non-propagation condition. [GR09] conjectured that all such “local” properties satisfy this non-propagation condition. This paper refutes that conjecture from [GR09].

Coming up next. More action on triangle freeness.

**Testing Triangle Freeness in the General Model in Graphs with Arboricity \(O(\sqrt n)\)**, by Reut Levi (arXiv). PTReview readers are likely to be aware that triangle freeness has been a rich source of problems for developing new sublinear-time algorithms. This paper considers the classic problem of testing triangle freeness in general graphs. In the dense case, algorithms with running time depending only on \(\varepsilon\) are known thanks to the work of Alon, Fischer, Krivelevich and Szegedy. In the bounded degree case, Goldreich and Ron gave testers with query complexity \(O(1/\varepsilon)\). This paper explores the general graph case and proves an upper bound of \(O(\Gamma/d_{avg} + \Gamma)\) where \(\Gamma\) is the arboricity of the graph. The author also shows that this upper bound is tight for graphs with arboricity at most \(O(\sqrt n)\). Curiously enough, the algorithm does not take the arboricity of the graph as an input, and yet \(\Gamma\) shows up in both the upper and lower bounds.

**Testing Dynamic Environments: Back to Basics**, by Yonatan Nakar and Dana Ron (arXiv). Goldreich and Ron introduced the problem of testing “dynamic environments” in 2014. Here is the setup for this problem. You are given an environment that evolves according to a local rule. Your goal is to query some of the states in the system at some point in time and determine whether the system is evolving according to some fixed rule or is far from it. In this paper, the authors consider environments defined by elementary cellular automata which evolve according to threshold rules, as one of the first steps towards understanding what makes a dynamic environment efficiently testable. The main result proves the following: if the local rules satisfy certain *conditions*, you can use a meta-algorithm with query complexity \(poly(1/\varepsilon)\) which is non-adaptive and has one-sided error. And all the threshold rules indeed satisfy these *conditions*, which means they can be tested efficiently.

**Identity testing under label mismatch**, by Clement Canonne and Karl Wimmer (arXiv). This paper considers a classic problem in distribution testing with the following twist. Let \(q\) denote a distribution supported on \([n]\). You are given access to samples from another distribution \(p\) where \(p = q \circ \pi\) for some unknown permutation \(\pi\). Thus, I relabel the data and give you access to samples from the relabeled dataset. Under this promise, note that identity testing becomes a trivial problem if \(q\) is known to be uniform over \([n]\). The authors develop algorithms for testing and tolerant testing of distributions under this additional promise of \(p\) being a permutation of some known distribution \(q\). The main result shows an exponential gap between the sample complexities of testing and tolerant testing under this promise. In particular, identity testing under the permutation promise has sample complexity \(\Theta(\log^2 n)\), whereas tolerant identity testing under this promise has sample complexity \(\Theta(n^{1-o(1)})\).
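To see what the permutation promise buys, note that any *label-invariant* statistic of \(p\) must agree with that of \(q\) under the promise. The naive test below compares sorted empirical frequencies against the sorted probabilities of \(q\); it is only meant to illustrate the idea (and is exponentially less sample-efficient than the paper's \(\Theta(\log^2 n)\) tester; the toy numbers are mine):

```python
import random
from collections import Counter

def sorted_profile_distance(sample, q):
    # Under the promise p = q ∘ π, the multiset of probabilities of p
    # equals that of q, so we compare sorted empirical frequencies
    # against sorted probabilities of q: a label-invariant statistic.
    n = len(q)
    counts = Counter(sample)
    empirical = sorted(counts.get(x, 0) / len(sample) for x in range(n))
    return sum(abs(a - b) for a, b in zip(empirical, sorted(q))) / 2

random.seed(2)
q = [0.4, 0.3, 0.2, 0.1]
pi = [2, 0, 3, 1]                       # the unknown relabeling
p = [q[pi[x]] for x in range(4)]        # p = q ∘ π
sample = random.choices(range(4), weights=p, k=5000)

dist = sorted_profile_distance(sample, q)
print(dist)  # small, since p really is a relabeling of q
```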

**Testing symmetry on quantum computers**, by Margarite L. LaBorde and Mark M. Wilde (arXiv). This paper develops algorithms which test symmetries of quantum states and of channels generated by quantum circuits. These tests additionally quantify how symmetric these states (or channels) are. For testing what are called “Bose states,” the paper presents efficient algorithms. The tests for other kinds of symmetry presented in the paper rely on some aid from a quantum prover.

**Quantum proofs of proximity**, by Marcel Dall’Agnol, Tom Gur, Subhayan Roy Moulik, Justin Thaler (ECCC). The sublinear-time (quantum) computation model has been gathering momentum steadily over the past several years. This paper seeks to understand the power of \({\sf QMA}\) proofs of proximity for property testing (recall that \({\sf QMA}\) is the quantum analogue of \({\sf NP}\)). On the algorithmic front, the paper develops sufficient conditions for properties to admit efficient \({\sf QMA}\) proofs of proximity. On the complexity front, the paper demonstrates a property which admits an efficient \({\sf QMA}\) proof of proximity but does not admit an \({\sf MA}\) or an interactive proof of proximity.

*Local algorithms — that is, algorithms that compute and make decisions on parts of the output considering only a portion of the input — have been studied in a number of areas in theoretical computer science and mathematics. Some of the related areas include sublinear-time algorithms, distributed algorithms, streaming algorithms, (massively) parallel algorithms, inference in large networks, and graphical models. These communities have similar goals but a variety of approaches, techniques, and methods. This workshop is aimed at fostering dialogue and cross-pollination of ideas between the various communities.*

This year, the workshop will include:

- **A poster session**: Please submit your poster proposal (title and abstract) by **May 26th**. Everyone is invited to contribute. This session will take place on gather.town.
- **Invited long talks**: the tentative schedule is available, and features talks by James Aspnes, Jelani Nelson, Elaine Shi, Christian Sohler, Uri Stemmer, and Mary Wootters.
- **Junior-Senior social meetings**
- **An AMA (Ask Me Anything) session**, moderated by Merav Parter
- **A Slack channel**
- **An Open Problems session**

The Program Committee of WOLA 2021 consists of:

- Venkatesan Guruswami (CMU)
- Elchanan Mossel (MIT)
- Merav Parter (Weizmann Institute of Science)
- Sofya Raskhodnikova **(chair)** (Boston University)
- Gregory Valiant (Stanford)

and the organizing committee:

- Sebastian Brandt (ETH)
- Yannic Maus (Technion)
- Slobodan Mitrović (MIT)

For more detail, see the website; to stay up to date with the latest announcements concerning WOLA, join our mailing list!

**Graph Streaming Lower Bounds for Parameter Estimation and Property Testing via a Streaming XOR Lemma**, by Sepehr Assadi and Vishvajeet N (arXiv). This paper establishes space vs. passes trade-off lower bounds for streaming algorithms, for a variety of graph tasks: that is, lower bounds of the sort “any \(m\)-pass streaming algorithm for task \(\mathcal{T}\) must use memory at least \(f(m)\).” The tasks considered include graph parameter estimation (size of the maximum matching, value of the max cut, weight of the MST) and property testing for sparse graphs (connectivity, bipartiteness, and cycle-freeness). The authors obtain exponentially improved lower bounds for these via reductions from a relatively standard problem, (noisy) gap cycle counting, for which they establish their main lower bound. As a key component of their proof, they prove a general direct product result (XOR lemma) for the streaming setting, showing that the advantage for solving the XOR of \(\ell\) copies of a streaming predicate \(f\) decreases exponentially with \(\ell\).

**Robust Self-Ordering versus Local Self-Ordering** by Oded Goldreich (ECCC). In Nov 2020, we covered a paper that uses a tool called *self-ordered graphs*, which transferred bit-string function lower bounds to graph property testing. Consider a labeled graph. A graph is self-ordered if its automorphism group only contains the identity element (it has no non-trivial automorphisms). A graph is robustly self-ordered if every non-trivial permutation of the vertices leads to a (labeled) graph that is sufficiently “far” according to edit distance. Given a self-ordered graph \(G\), a local self-ordering procedure is the following. Given access to a copy \(G’\) of \(G\) and a vertex \(v \in V(G’)\), this procedure determines the (unique) vertex in \(V(G)\) that corresponds to \(v\) with sublinear queries to \(G’\). In other words, it can locally “label” the graph. Intuitively, one would think that more robustly self-ordered graphs will be easier to locally label. This paper studies the relation between robust and local self-ordering. Curiously, this paper refutes the above intuition for bounded-degree graphs, and (weakly) confirms it for dense graphs. Roughly speaking, there are bounded-degree graphs that are highly robustly self-ordered, for which any local self-ordering procedure requires \(\omega(\sqrt{n})\) queries. Moreover, there are bounded-degree graphs with \(O(\log n)\)-query local self-ordering procedures, yet they are not robustly self-ordered even for weak parameters. For dense graphs, the existence of fast non-adaptive local self-ordering procedures implies robust self-ordering.

**Testing identity of collections of quantum states: sample complexity analysis** by Marco Fanizza, Raffaele Salvia, and Vittorio Giovannetti (arXiv). This paper takes identity testing to the quantum setting. One should think of a \(d\)-dimensional quantum state as a \(d \times d\) density matrix (with some special properties). To learn the state entirely up to error \(\varepsilon\) would require \(O(\varepsilon^{-2} d^2)\) samples/measurements. A recent result of Badescu-O’Donnell-Wright proves that identity testing to a known state can be done significantly faster, using \(O(\varepsilon^{-2} d)\) measurements. This paper takes this result a step further by considering a set of \(N\) quantum states. A “sample” is like a classical sample, where one gets a sample from a distribution over the quantum states. The YES (“uniform”) case is when all the states are identical. The NO (“far from uniform”) case is when they are “far” from being the same state. This paper proves that \(O(\varepsilon^{-2}\sqrt{N}d)\) samples suffice for distinguishing these cases.

**Testing Hamiltonicity (and other problems) in Minor-Free Graphs**, by Reut Levi and Nadav Shoshan (arXiv). Graph property testing has been explored pretty well for dense graphs (and reasonably well for bounded-degree graphs). However, testing properties in the general case still remains an elusive goal. This paper makes contributions in this direction; as a first result, it gives an algorithm for testing Hamiltonicity *in minor-free graphs* (with two-sided error) with running time \(poly(1/\varepsilon)\). Let me begin by pointing out that Hamiltonicity is an irksome property to test, in the following senses.

- It is neither monotone nor additive. So the partition-oracle-based algorithms do not immediately imply a tester (with running time depending only on \(\varepsilon\)) for Hamiltonicity. This annoyance bugs you even in the bounded-degree case.
- Czumaj and Sohler characterized which graph properties are testable with one-sided error in general planar graphs. In particular, they show that a property of general planar graphs is testable *iff* it can be reduced to testing for a finite family of finite forbidden subgraphs. Again, Hamiltonicity does not budge to this result.
- There are (concurrent) results by Goldreich and Adler-Kohler which show that, with one-sided error, Hamiltonicity cannot be tested with \(o(n)\) queries.

The paper shows that the distance to Hamiltonicity can be exactly captured in terms of a certain combinatorial parameter. Thereafter, the paper estimates this parameter after cleaning up the graph a little. This allows them to estimate the distance to Hamiltonicity, and thus also implies a tolerant tester (restricted to minor-free graphs).

**Testing properties of signed graphs**, by Florian Adriaens, Simon Apers (arXiv). Suppose I give you a graph \(G=(V,E)\) where every edge comes with a label, either “positive” or “negative”. Such signed graphs are used to model various scientific phenomena. For example, you can use them to model interactions between individuals in social networks as falling into two categories, friendly or antagonistic.

This paper considers property testing problems on signed graphs. The notion of farness from a property extends naturally to these graphs (both in the dense graph model and the bounded-degree model). The paper explores three problems in both of these models: signed triangle freeness, balance, and clusterability. Below, I will zoom in on the tester for clusterability in the bounded-degree setting developed in the paper. A signed graph is considered clusterable if you can partition the vertex set into some number of components such that the edges within any component are all positive and the edges running across components are all negative.

The paper exploits a forbidden-subgraph characterization of clusterability, which shows that any cycle with exactly one negative edge is a certificate of non-clusterability of \(G\). The tester runs multiple random walks from a handful of start vertices to search for these “bad cycles”, building on ideas in the seminal work of Goldreich and Ron for testing bipartiteness. The authors put all of these ideas together and give a \(\widetilde{O}(\sqrt n)\)-time one-sided tester for clusterability in signed graphs.
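For intuition about the forbidden-subgraph characterization, here is the exact (non-sublinear) clusterability check it implies: group vertices by positive-edge connectivity and verify that no negative edge lands inside a group. This is just a global sanity check of my own; the paper's point is to detect the bad cycles with only \(\widetilde{O}(\sqrt n)\) queries:

```python
def is_clusterable(n, signed_edges):
    """A signed graph is clusterable iff no cycle has exactly one negative
    edge; equivalently, after grouping vertices by positive-edge
    connectivity, no negative edge may fall inside a group."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Union vertices joined by positive edges.
    for u, v, sign in signed_edges:
        if sign > 0:
            parent[find(u)] = find(v)

    # Clusterable iff every negative edge crosses two groups.
    return all(find(u) != find(v) for u, v, sign in signed_edges if sign < 0)

# Triangle with exactly one negative edge: a certificate of non-clusterability.
print(is_clusterable(3, [(0, 1, +1), (1, 2, +1), (0, 2, -1)]))  # False
# Two "friendly" pairs joined by negative edges: clusterable.
print(is_clusterable(4, [(0, 1, +1), (2, 3, +1), (0, 2, -1), (1, 3, -1)]))  # True
```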

**Local Access to Random Walks**, by Amartya Shankha Biswas, Edward Pyne, Ronitt Rubinfeld (arXiv). Suppose I give you a gigantic graph (with bounded degree) which does not fit in your main memory, and I want you to solve some computational problem which requires you to perform longish random walks of length \(t\). And lots of them. It would be convenient not to spend \(\Omega(t)\) units of time performing every single walk. Perhaps it would work just as well for you to have query access to a \(Position(G,s,t)\) oracle which returns the position at time \(t\) of a walk from \(s\), for any \(t\) of your choice. Of course, you would want the sequence of vertices returned to behave consistently with some actual random walk sampled from the distribution of random walks starting at \(s\). The question is: can I build you this primitive? This paper answers this question in the affirmative and shows that for graphs with spectral gap \(\Delta\), this can be achieved with running time \(\widetilde{O}(\sqrt n/\Delta)\) per query. And you get the guarantee that the joint distribution of the vertices returned at the queried times is \(1/poly(n)\)-close in \(\ell_1\) to the distribution of an actual random walk. Thus, for a random \(d\)-regular graph, you get running times of the order \(\widetilde{O}(\sqrt n)\) per query. The authors also show tightness of this result: to get subconstant error in \(\ell_1\), you necessarily need \(\Omega(\sqrt n/\log n)\) queries in expectation.
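For contrast, here is the trivial baseline such an oracle improves upon: memoize a single walk and extend it lazily on demand (this sketch is mine, not the paper's construction). Answers are mutually consistent across queries, but the first query at time \(t\) costs \(\Theta(t)\) work, which is exactly what the paper's \(\widetilde{O}(\sqrt n/\Delta)\)-per-query construction avoids:

```python
import random

class NaivePositionOracle:
    """Baseline Position(G, s, t) oracle: lazily extend one memoized walk.
    Consistent across queries, but a new farthest time t costs Theta(t)."""
    def __init__(self, adj, s, seed=0):
        self.adj = adj              # adjacency lists of the graph
        self.walk = [s]             # the memoized walk so far
        self.rng = random.Random(seed)

    def position(self, t):
        # Extend the walk one random step at a time until time t exists.
        while len(self.walk) <= t:
            self.walk.append(self.rng.choice(self.adj[self.walk[-1]]))
        return self.walk[t]

# A 6-cycle; queries may arrive in any order and stay mutually consistent.
adj = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}
oracle = NaivePositionOracle(adj, s=0)
p10 = oracle.position(10)
p5 = oracle.position(5)
assert oracle.position(10) == p10  # repeated queries agree
```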

**Efficient and near-optimal algorithms for sampling connected subgraphs**, by Marco Bressan (arXiv). As the title suggests, this paper considers efficient algorithms for sampling a uniformly random \(k\)-graphlet from a given graph \(G\) (for \(k \geq 3\)). Recall, a \(k\)-graphlet refers to a set of \(k\) vertices which induce a connected subgraph of \(G\). The algorithm considered in the paper is pretty simple. You define a Markov chain \(\mathcal{G}_k\) with all \(k\)-graphlets as its state space. Two states in \(\mathcal{G}_k\) are adjacent *iff* their intersection is a \((k-1)\)-graphlet. To obtain a uniformly random sample, a classical idea is to just run this Markov chain and obtain an \(\varepsilon\)-uniform sample. However, the gap between the upper and lower bounds on the mixing time of this walk is of the order \(\rho^{k-1}\), where \(\rho = \Delta/\delta\) is the ratio of the maximum and minimum degrees. The paper closes this gap up to logarithmic factors and shows that the mixing time of the walk is at most \(t_{mix}(G) \rho^{k-1} \log(n/\varepsilon)\). It also proves an almost matching lower bound. Further, the paper presents an algorithm with even better running time to return an almost uniform \(k\)-graphlet. This exploits a previous observation: sampling a uniformly random \(k\)-graphlet is equivalent to sampling a uniformly random edge in \(\mathcal{G}_{k-1}\). The paper then proves a lemma which relates the relaxation time of the walk on \(\mathcal{G}_k\) to that of the walk on \(\mathcal{G}_{k-1}\). Upper-bounding the mixing time in terms of the relaxation time then gives an improved expected running time of the order \(O(t_{mix}(G) \cdot \rho^{k-2} \cdot \log(n/\varepsilon))\).
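To make the chain concrete, here is a toy implementation of one step on \(\mathcal{G}_k\), with a standard lazy Metropolis correction so that the stationary distribution is uniform over \(k\)-graphlets even though states have different numbers of neighbors. This sketch is mine; the paper analyzes the mixing time of walks of this flavor rather than prescribing this exact implementation:

```python
import random
from collections import deque

def is_connected(adj, S):
    """BFS check that the subgraph induced by vertex set S is connected."""
    S = set(S)
    start = next(iter(S))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u in S and u not in seen:
                seen.add(u)
                queue.append(u)
    return seen == S

def graphlet_neighbors(adj, S):
    """States adjacent to S in the chain: k-graphlets T whose
    intersection with S is a connected (k-1)-graphlet."""
    out = set()
    for v in S:
        rest = S - {v}
        if not is_connected(adj, rest):
            continue
        boundary = {u for w in rest for u in adj[w]} - S
        for u in boundary:
            T = frozenset(rest | {u})
            if is_connected(adj, T):
                out.add(T)
    return out

def metropolis_step(adj, S, rng):
    """One lazy Metropolis step: uniform stationary distribution."""
    if rng.random() < 0.5:                      # laziness
        return S
    nbrs = graphlet_neighbors(adj, S)
    if not nbrs:
        return S
    T = rng.choice(sorted(nbrs, key=sorted))    # uniform proposal
    if rng.random() < len(nbrs) / len(graphlet_neighbors(adj, T)):
        return T
    return S

# Path graph 0-1-2-3; its 3-graphlets are {0,1,2} and {1,2,3}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
rng = random.Random(3)
S = frozenset({0, 1, 2})
for _ in range(100):
    S = metropolis_step(adj, S, rng)
print(sorted(S))
```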

**Toward Instance-Optimal State Certification With Incoherent Measurements**, by Sitan Chen, Jerry Li, Ryan O’Donnell (arXiv). The problem of quantum state certification has gathered interest over the last few years. Here is the setup: you are given a quantum state \(\sigma \in \mathbb{C}^{d \times d}\) and you are also given \(N\) copies of an unknown state \(\rho\). You want to distinguish between the following two cases: does \(\rho = \sigma\), or is \(\sigma\) at least \(\varepsilon\)-far from \(\rho\) in trace norm? Badescu et al. showed in a recent work that if entangled measurements are allowed, you can do this with a mere \(O(d/\varepsilon^2)\) copies of \(\rho\). But using entangled measurements comes with its own share of problems. On the other hand, if you disallow entanglement, as Bubeck et al. show, you need \(\Omega(d^{3/2}/\varepsilon^2)\) measurements. This paper asks: for which states \(\sigma\) can you improve upon this bound? The work takes inspiration from “instance optimal” bounds for identity testing. The authors show a fairly general result which (yet again) confirms that the quantum world is indeed weird. In particular, the main result of the paper implies that the copy complexity of (the quantum analog of) identity testing in the quantum world (with non-adaptive queries) grows as \(\Theta(d^{1/2}/\varepsilon^2)\). That is, the number of quantum measurements you need increases with \(d\) (which is the stark opposite of the behavior you get in the classical world).

**Random walks and forbidden minors III: \(\mathrm{poly}(d/\varepsilon)\)-time partition oracles for minor-free graph classes**, by Akash Kumar, C. Seshadhri, and Andrew Stolman (arXiv). Minor-closed bounded-degree graphs have a very nice property: denoting by \(d\) the degree bound and \(n\) the number of vertices, it is always possible to partition any such graph into components of *constant* size, \(O_\varepsilon(1)\), just by removing a linear number of edges, merely \(\varepsilon dn\) (\(\varepsilon\) being a small parameter). This is a crucial result in many graph property testing algorithms, those which rely on something called a “partition oracle”: loosely speaking, a routine which makes “few” queries to the graph, and is able to indicate which component of an underlying partition any given vertex belongs to. But what is “few” here? The first partition oracles made \(d^{\mathrm{poly}(d,1/\varepsilon)}\) queries to the graph to answer any such request. This later got significantly improved to \(d^{\mathrm{log}(d/\varepsilon)}\). Using spectral graph theory techniques previously developed by the authors (hence the “III” in the title), this work settles the question, achieving partition oracles which are as good as it gets: making only \(\mathrm{poly}(d,1/\varepsilon)\) queries to the graph! This in turn has immediate consequences for graph property testing, which the paper details.

And since we are on the topic of oracles… more exciting news on that front:

**Spectral Clustering Oracles in Sublinear Time**, by Grzegorz Gluch, Michael Kapralov, Silvio Lattanzi, Aida Mousavifar, Christian Sohler (arXiv). Given a graph \(G\) and an integer \(k\), one often wants to partition the graph into components \(C_1,C_2,\dots,C_k\), such that each \(C_i\) is well-connected and has few edges to the other \(C_j\)’s. But can we get *ultra* efficient algorithms for such spectral clusterings? Specifically, can we design oracles for them: sublinear-time algorithms which provide implicit access to an underlying “good” spectral clustering \(\hat{C}_1,\hat{C}_2,\dots,\hat{C}_k\), by returning on any query vertex \(v\) the index of the cluster \(\hat{C}_i\) to which \(v\) belongs? This paper introduces the question, and answers it in the affirmative: in more detail, it provides a spectral clustering oracle which, for any \(\varepsilon>0\), has preprocessing time \(2^{\mathrm{poly}(k/\varepsilon)}n^{1/2+O(\varepsilon)}\), query time \(n^{1/2+O(\varepsilon)}\), and space \(n^{1/2+O(\varepsilon)}\); and provides access to a clustering with relative error \(O(\varepsilon\log k)\) per cluster. The paper also allows tradeoffs between query time and space, and discusses applications to the Local Computation Algorithms (LCA) model.

Next stop, distribution testing…

**The Sample Complexity of Robust Covariance Testing**, by Ilias Diakonikolas and Daniel M. Kane (arXiv). Suppose you have i.i.d. samples from some high-dimensional Gaussian \(\mathcal{N}(0,\Sigma)\) in \(d\) dimensions, and want to test whether the unknown covariance matrix \(\Sigma\) is the identity, versus \(\varepsilon\)-far from it (in Frobenius norm). Good news: we know how to do that, and \(\Theta(d/\varepsilon^2)\) samples are necessary and sufficient. (To *learn* \(\Sigma\), that’d be \(\Theta(d^2/\varepsilon^2)\).) Bad news: you don’t have i.i.d. samples from some high-dimensional Gaussian \(\mathcal{N}(0,\Sigma)\); what you have is i.i.d. samples from a noisy version of it, \((1-\alpha)\mathcal{N}(0,\Sigma) + \alpha B\), where \(B\) is an arbitrary “bad” distribution (not necessarily Gaussian itself). You still want to test whether the covariance \(\Sigma\) is the identity, but now you have that extra \(\alpha\) fraction of noisy samples, and you need to do that testing robustly… The good news is, you can still do that by learning the covariance matrix \(\Sigma\), robustly, with \(O(d^2/\varepsilon^2)\) samples. The bad news is the main result of this paper: that’s also the best you can do. That is, \(\Omega(d^2)\) samples are necessary: if you have to be robust to noise, testing is no longer easier than learning.
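To see why robustness is the sticking point, here is the naive (non-robust) Frobenius test statistic in a toy setting of my own: a small fraction of adversarial samples swings it arbitrarily, which is the phenomenon the lower bound formalizes:

```python
import random

def sample_covariance(xs, d):
    """Empirical covariance of mean-zero samples (pure-Python, for illustration)."""
    n = len(xs)
    return [[sum(x[i] * x[j] for x in xs) / n for j in range(d)]
            for i in range(d)]

def frobenius_dist_to_identity(sigma):
    d = len(sigma)
    return sum((sigma[i][j] - (1.0 if i == j else 0.0)) ** 2
               for i in range(d) for j in range(d)) ** 0.5

random.seed(4)
d, n = 3, 20000
clean = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
dist_clean = frobenius_dist_to_identity(sample_covariance(clean, d))

# An alpha-fraction of adversarial points swings the statistic wildly.
noisy = clean[:]
for k in range(n // 20):                # alpha = 5% outliers along e_1
    noisy[k] = [20.0, 0.0, 0.0]
dist_noisy = frobenius_dist_to_identity(sample_covariance(noisy, d))

print(dist_clean, dist_noisy)  # small vs. huge
```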

Onto Boolean functions!

**Junta Distance Approximation with Sub-Exponential Queries**, by Vishnu Iyer, Avishay Tal, and Michael Whitmeyer (ECCC). If you follow this blog, you may have seen over the past couple years a flurry of results about *tolerant junta testing*: “given query access to some Boolean function \(f\colon\{-1,1\}^n\to\{-1,1\}\), how close is \(f\) to only depending on \(k \ll n\) of its variables?”

This paper contains several results on this problem, including an improved bicriteria tolerant testing algorithm: an efficient algorithm to distinguish functions that are \(\varepsilon\)-close to a \(k\)-junta from those that are \(1.1\varepsilon\)-far from every \(k’\)-junta, making \(\mathrm{poly}(k,1/\varepsilon)\) queries (with \(k’ = O(k/\varepsilon^2)\)). But the main result of the paper, and the one giving it its name, is for the non-relaxed version where \(k’=k\): while all previous works had a query complexity \(2^{O(k)}\), here the authors show how to break that exponential barrier, giving a fully tolerant testing algorithm with query complexity \(2^{\tilde{O}(\sqrt{k})}\)!
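For intuition (this is the folklore non-tolerant approach, not the paper's algorithm): a \(k\)-junta has at most \(k\) variables of noticeable influence, and influences can be estimated with random queries. A minimal sketch, with a toy 2-junta of my own choosing:

```python
import random

def estimate_influence(f, n, i, rng, trials=2000):
    """Estimate Inf_i(f) = Pr_x[f(x) != f(x with bit i flipped)]."""
    hits = 0
    for _ in range(trials):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        y = x[:]
        y[i] = -y[i]
        hits += f(x) != f(y)
    return hits / trials

# f depends only on variables 0 and 3: a 2-junta on n = 10 variables.
f = lambda x: x[0] * x[3]
rng = random.Random(5)
relevant = [i for i in range(10)
            if estimate_influence(f, 10, i, rng) > 0.1]
print(relevant)  # [0, 3]
```

Note this only certifies the *non-tolerant* case: a function that is merely close to a junta can still spread tiny influence over many variables, which is what makes the tolerant problem so much harder.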

And finally, a foray into database theory:

**Towards Approximate Query Enumeration with Sublinear Preprocessing Time**, by Isolde Adler and Polly Fahey (arXiv). In this paper, the authors are concerned with the task of (approximate) query enumeration on databases, aiming for ultra-efficient (i.e., sublinear-time) algorithms. Leveraging techniques from property testing (specifically, in the bounded-degree graph model), they show the following: *On input databases of bounded degree and bounded tree-width, every (fixed) first-order definable query can be enumerated approximately in time linear in the output size, after only a sublinear-time preprocessing phase*.

That’s all for this month! If you noticed a paper we missed, please let us know in the comments.

**Almost Optimal Bounds for Sublinear-Time Sampling of k-Cliques: Sampling Cliques is Harder Than Counting** by Talya Eden, Dana Ron, and Will Rosenbaum (arXiv). In many settings, sampling and counting are tightly coupled; the classic example being permanent estimation and sampling random bipartite matchings. This paper investigates the problem of sampling uniformly random cliques, and unearths an interesting separation between approximately uniform sampling and approximate counting. Let’s consider the fundamental problem of edge and triangle counting in graphs of constant arboricity. (This class, which contains all minor-closed families, has deep connections to clique counting; clique counting can be done in linear time for bounded-arboricity graphs.) The input connected graph \(G\) has \(n\) vertices, \(m\) edges, and \(t\) triangles. We assume the standard query model, with uniform random vertex, neighbor, and degree queries. Our aim is to get \((1+\varepsilon)\)-approximations for \(m\) and \(t\) (in general, any \(k\)-clique count). It is known from previous work that edge estimation can be done in \(O(1)\) time and triangle estimation can be done in \(O(n/t)\) time (ignoring log and \(\varepsilon\) dependencies). Moreover, these bounds are optimal. Can one get similar bounds for approximately sampling a uniform random edge/triangle? Previous work of Eden-Rosenbaum achieves such a bound for sampling edges. Surprisingly, this paper shows that triangle sampling is fundamentally harder, and requires \(\Omega(n^{3/4}/\sqrt{t})\) queries. The main result gives a precise and optimal bound for uniformly sampling \(k\)-cliques in graphs of arboricity \(\alpha\).

**Testing and reconstruction via decision trees** by Guy Blanc, Jane Lange, and Li-Yang Tan (arXiv). A property tester is a (sublinear) algorithm that distinguishes objects with a property from those that are far from it. A reconstructor is a much stronger sublinear algorithm, which actually provides query access to the closest object. Suppose we have query access to an \(n\)-variate Boolean function \(f\), and the property \(\mathcal{P}\) of interest is (computability by) size-\(s\) decision trees. A reconstructor would give query access to a function \(g \in \mathcal{P}\) such that \(dist(f,g) = O(OPT_f)\), where \(OPT_f\) is the distance of \(f\) to \(\mathcal{P}\). Observe that a reconstructor automatically gives a distance estimator (just estimate \(dist(f,g)\) by random sampling). One of the main results of this paper is a (weaker, bicriteria) reconstructor, where \(g\) is guaranteed to have decision-tree complexity \(s^{O(\log^2 s)}\). Each query to \(g\) is computed in \(poly(\log s, 1/\varepsilon)\cdot n\) time. This reconstructor leads to a number of testing results, and to reconstructors for other properties like low Fourier degree. Another interesting result proven: if there exists an efficient reconstructor that gets \(g\) of size \(s\), then there exists an efficient proper learner for decision trees (a big open problem).
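The “distance estimator for free” observation is worth spelling out: given query access to both \(f\) and the reconstructed \(g\), uniform sampling estimates \(dist(f,g)\). A minimal sketch (the toy functions are mine):

```python
import random

def estimate_distance(f, g, n, rng, samples=5000):
    # dist(f, g) = Pr_x[f(x) != g(x)] over uniform x, so a reconstructor
    # (query access to g) yields a distance estimator essentially for free.
    disagreements = sum(
        f(x) != g(x)
        for x in (tuple(rng.choice((-1, 1)) for _ in range(n))
                  for _ in range(samples)))
    return disagreements / samples

f = lambda x: x[0]
g = lambda x: x[0] if x[1] == 1 else -x[0]   # disagrees exactly when x[1] == -1
rng = random.Random(6)
dist = estimate_distance(f, g, 5, rng)
print(dist)  # close to 0.5
```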

**Testing Product Distributions: A Closer Look** by Arnab Bhattacharyya, Sutanu Gayen, Saravanan Kandasamy, and N. V. Vinodchandran (arXiv). This paper is on distribution testing, where all distributions involved are promised to be *product distributions*. Let \(P, Q\) be distributions over \(\Sigma^n\), where \(\Sigma\) is some (small) finite set. The problems studied are identity testing (\(Q\) is known/fixed, and one gets samples from unknown \(P\)) and closeness testing (both unknown). It is known from previous work that these problems can basically be solved with \(O(n|\Sigma|\varepsilon^{-2})\) samples. Note that the domain size is exponential in \(n\); thus, these bounds give an exponential improvement over identity/closeness testing in the vanilla (non-product) setting. Tolerant testing is of special significance here. Because the domain is so large, it makes sense to allow for some distance even in the "yes" case. The main result of this paper is to completely map out the complexities for identity/closeness testing where we wish to distinguish \(d_1(P,Q) < \varepsilon_1\) from \(d_2(P,Q) > \varepsilon_2\), where \(d_1, d_2\) are potentially different distance measures. For example, the paper considers total variation (TV), KL, Hellinger, and \(\chi^2\) distances. An important improvement achieved is for identity testing where \(d_1\) is Hellinger and \(d_2\) is TV distance: for certain values of \(\varepsilon_1, \varepsilon_2\), one can get a tester of optimal sample complexity \(O(\sqrt{n|\Sigma|})\). Previous bounds only gave an \(O(n|\Sigma|)\) sample complexity. There are a number of new lower bounds for different combinations of \(d_1, d_2\) distance pairs.
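As a small concrete aside, one reason product distributions are pleasant to work with is that some of these distances factorize across coordinates. For instance, the squared Hellinger distance can be computed coordinate-wise via the Bhattacharyya coefficient. A sketch of this standard identity (not a piece of the paper's algorithm):

```python
import math

def hellinger_sq_product(P, Q):
    """Squared Hellinger distance between two product distributions over
    Sigma^n, where P and Q are lists of per-coordinate probability vectors.
    Uses H^2(P, Q) = 1 - BC(P, Q), and the fact that the Bhattacharyya
    coefficient factorizes: BC(P, Q) = prod_i sum_a sqrt(P_i[a] * Q_i[a])."""
    bc = 1.0
    for p_i, q_i in zip(P, Q):
        bc *= sum(math.sqrt(p * q) for p, q in zip(p_i, q_i))
    return 1.0 - bc
```

So Hellinger-type quantities for product distributions reduce to \(n\) small per-coordinate computations, even though the domain has size \(|\Sigma|^n\).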

**Non-adaptive vs Adaptive Queries in the Dense Graph Testing Model** (by Oded Goldreich and Avi Wigderson) (ECCC) It has been long established in the dense graph property testing literature that the query complexities of adaptive and non-adaptive testers are only a quadratic factor apart. This work asks whether this gap is necessarily tight, and answers in the affirmative. The main conceptual tools used in the paper are the notions of *robustly self-ordered graphs* and *local self-ordering procedures* for these graphs (which were developed by the same authors and covered in our September post). These tools are used to port lower bounds on property testing problems for bit strings to graph properties.

**Direct Sum and Partitionability Testing over General Groups **(by Andrej Bogdanov and Gautam Prakriya) (ECCC) In their celebrated work, Blum, Luby and Rubinfeld gave a four query test to determine whether a function \(f \colon \mathbb{F}_2^n \to \mathbb{F}_2\) is affine. This paper considers a generalization of the notion of affinity to functions \(f \colon \{0,1\}^n \to G\), where \(G\) is an Abelian group. The generalization sought asks: does \(f\) have the form \(f = \sum x_ig_i + g_0\) for group elements \(g_0, g_1, \cdots, g_n \in G\)? In this setting, the BLR analysis does not apply, as the domain and range are not equipped with the same group structure. This work presents a four query test for the above problem. In fact, the test is more general and can be used to decide whether a function \(f \colon D_1 \times D_2 \times \cdots \times D_n \to G\) from a product domain to an Abelian group \(G\) is a direct sum, i.e., whether it has the form \(f(x_1, x_2, \cdots, x_n) = \sum f_i(x_i)\); this resolves a conjecture of Dinur and Golubev. The work also presents results for testing whether a function \(f\) defined from a product domain to an Abelian group \(G\) is a direct product.
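A natural four query consistency check for direct sums: sample two inputs, exchange a random subset of coordinates, and compare sums. The sketch below illustrates this style of test (not necessarily the paper's exact query distribution), using Python ints under addition as a stand-in for the Abelian group \(G\):

```python
import random

def direct_sum_test(f, domains, rng=random):
    """One round of a four-query direct-sum consistency check.

    f maps tuples from D_1 x ... x D_n to an additive Abelian group
    (here: Python ints under +). Sample x, y and a random coordinate
    subset S, and form z, w by exchanging the coordinates in S. A direct
    sum f = sum_i f_i(x_i) always satisfies f(x) + f(y) = f(z) + f(w),
    since each coordinate's contribution merely moves between the sides.
    """
    n = len(domains)
    x = tuple(rng.choice(D) for D in domains)
    y = tuple(rng.choice(D) for D in domains)
    S = {i for i in range(n) if rng.random() < 0.5}
    z = tuple(y[i] if i in S else x[i] for i in range(n))
    w = tuple(x[i] if i in S else y[i] for i in range(n))
    return f(x) + f(y) == f(z) + f(w)
```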

**Isoperimetric Inequalities for Real-Valued Functions with Applications to Monotonicity Testing** (by Hadley Black, Iden Kalemaj and Sofya Raskhodnikova) (ArXiv) This paper generalizes the celebrated isoperimetric inequalities for Boolean functions over the hypercube to the setting of real-valued functions. This generalized isoperimetry is then put to work to obtain a monotonicity tester for \(f \colon \{0,1\}^n \to \mathbb{R}\). The tester makes \(\min(\tilde{O}(r \sqrt n), O(n))\) non-adaptive queries (where \(r\) is the size of the image of \(f\)) and has one-sided error. The authors generalize the KMS inequality via a Boolean decomposition theorem, which allows them to represent any real-valued function over the cube as a collection of Boolean functions over the cube that capture \(f\)'s distance to monotonicity as well as the structure of violations to monotonicity in \(f\).
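For contrast with the paper's tester, the classic one-sided "edge tester" for monotonicity is easy to state: sample a random hypercube edge and reject iff it is violated. A minimal sketch of this baseline (it is not the paper's algorithm; for Boolean functions, the classic analysis shows \(O(n/\varepsilon)\) rounds suffice):

```python
import random

def edge_monotonicity_test(f, n, num_rounds, rng=random):
    """The classic edge tester: sample a uniformly random hypercube edge
    (a point plus a coordinate) and reject iff the edge violates
    monotonicity. One-sided error: monotone functions always pass."""
    for _ in range(num_rounds):
        x = [rng.randrange(2) for _ in range(n)]
        i = rng.randrange(n)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1
        if f(tuple(lo)) > f(tuple(hi)):
            return False  # found a violated edge: definitely not monotone
    return True
```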

**Erasure-Resilient Sublinear-Time Graph Algorithms** (by Amit Levi, Ramesh Krishnan S. Pallavoor, Sofya Raskhodnikova and Nithin Varma) (ArXiv) In a previous work, a subset of the authors explored how to equip property testers with erasure-resilience for function properties. With graph properties, a different picture emerges. In the dense graph model, where you have query access to the adjacency matrix, the situation is *still* fine: adjacency matrices are functional representations of graphs, so if you have a *black belt* in making property testers erasure-resilient for function properties, you would be able to test properties of dense graphs too. However, if I give you query access to the graph through an adjacency list, the picture changes. These are non-functional representations of graphs and therefore need new conceptual tools. This paper begins the study of erasure resilience in the adjacency list model. It focuses on two computational tasks: testing connectivity and estimating average degree. Let me showcase the results in the paper on testing connectivity. It is shown that you encounter a threshold phenomenon in testing connectivity. As long as the fraction of erasures is small, you get testers which run in time independent of the size of the graph. But when the fraction of erasures exceeds a certain cutoff, it is shown that the tester needs a number of queries linear in the size of the adjacency list.

**Expander Random Walks: A Fourier Analytic Approach** (by Gil Cohen, Noam Peri and Amnon Ta-Shma) (ECCC) While this is not exactly a property testing paper, how can you not report a paper which proves a significant strengthening of the classic CLT for Markov chains (alright, for random walks on expanders) with respect to the TV distance? Let us now unpack the above a little. Consider the following setup: suppose you have a \(d\)-regular expander \(G\). Now, imagine running the following process \(\textsf{(P)}\):

1) Label half the vertices in \(G\) as \(+1\) and the other half as \(-1\) arbitrarily.

2) Take a \((t-1)\)-step random walk, which involves \(t\) vertices: say \(v_1, v_2, \ldots, v_t\).

3) Finally, return the Boolean labels of all of the visited vertices. This gives a bit-vector \(x \in \{-1,1\}^t\).
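The process \(\textsf{(P)}\) is easy to simulate; a toy sketch, representing the graph by adjacency lists and the labelling as a \(\pm 1\) map on vertices (of course, the interesting content is the analysis, not the simulation):

```python
import random

def walk_labels(adj, labels, t, rng=random):
    """Simulate process (P): start at a uniformly random vertex (the
    stationary distribution, since the graph is regular), take a
    (t-1)-step random walk on the graph given by adjacency lists `adj`,
    and return the +/-1 labels of the t visited vertices."""
    v = rng.choice(list(adj))
    out = [labels[v]]
    for _ in range(t - 1):
        v = rng.choice(adj[v])  # uniform neighbor step
        out.append(labels[v])
    return out
```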

Now, let us ask: which classes of functions are fooled by this process?

The main result of the paper shows that this process fools

1) Symmetric functions

2) The class \(AC^0\)

3) Functions \(f\) which are computable by low-width read-once branching programs.

In more detail, let \(f\) be a symmetric function and let \(G\) be a \(\lambda\)-spectral expander. For the process \(\textsf{(P)}\) defined above, it follows from the spectral expansion of \(G\) that \(discrepancy(f) = |E_{x \sim \textsf{(P)}}[f(x)] - E_{x \sim U_t}[f(x)]|\) is small.

Now note that the total variation distance is precisely the maximum discrepancy over all symmetric functions \(f\). This is the connection that allows the authors to upgrade the CLT for expander random walks: they show a CLT for Markov chains with respect to the TV distance (as opposed to the CLT with respect to the Kolmogorov distance, which is what was known from the work of Kipnis and Varadhan from 1986).

**New Sublinear Algorithms and Lower Bounds for LIS Estimation** (by Ilan Newman and Nithin Varma) (ArXiv) As the title suggests, this paper considers the task of estimating the length of the longest increasing subsequence (LIS) of an array \(A\). In particular, it gives a non-adaptive algorithm which estimates the LIS length within an additive error of \(\pm \varepsilon n\) using \(\tilde{O}\left(\frac{r}{\varepsilon^3}\right)\) queries (where \(r\) is the number of distinct elements in \(A\)). On the lower bound side, the paper presents a lower bound of \((\log n)^{\Omega(1/\varepsilon)}\) non-adaptive queries. The lower bound construction uses ideas from the lower bound of Ben-Eliezer, Canonne, Letzter and Waingarten on testing monotone pattern freeness.
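For reference, the LIS length can be computed exactly in \(O(n \log n)\) time by patience sorting, the natural full-access baseline such sublinear estimators are compared against:

```python
from bisect import bisect_left

def lis_length(arr):
    """Exact length of the longest strictly increasing subsequence, via
    patience sorting. tails[k] holds the smallest possible tail value of
    an increasing subsequence of length k+1; each element either extends
    the longest subsequence found so far or improves some tail."""
    tails = []
    for x in arr:
        k = bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)   # x extends the longest subsequence
        else:
            tails[k] = x      # x is a better (smaller) tail of length k+1
    return len(tails)
```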

**Stability and testability: equations in permutations** (by Oren Becker, Alexander Lubotzky, and Jonathan Mosheiff) (ArXiv) Consider the following question: suppose I give you two permutations \(A, B \in Sym(n)\), and I want to decide whether these permutations commute or whether they are far from every pair of commuting permutations. Can this question be answered in time independent of \(n\)? The authors answer this question in the affirmative. They show a simple *Sample and Substitute* procedure which essentially does the following: take samples \(x_1, x_2, \cdots, x_k \sim [n]\), and accept *iff* \(ABx_i = BAx_i\) for each \(i\). The authors call this particular set of equations/constraints (checking whether \(AB = BA\)) stable, as it admits a Sample and Substitute style tester with query complexity independent of \(n\). This allows the authors to consider the group-theoretic notion of stability as a property testing concept and to examine the notion of stability from a computational standpoint.
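The Sample and Substitute tester is almost a one-liner; a sketch, with permutations represented as Python lists (`A[i]` is the image of `i`):

```python
import random

def sample_and_substitute(A, B, k, rng=random):
    """Sample-and-Substitute tester for commutation: A and B are
    permutations of {0, ..., n-1} given as lists. Sample k uniform points
    and accept iff A(B(x)) = B(A(x)) on each sampled point."""
    n = len(A)
    return all(A[B[x]] == B[A[x]]
               for x in (rng.randrange(n) for _ in range(k)))
```

If \(A, B\) are \(\varepsilon\)-far from commuting (in normalized Hamming distance on \([n]\)), then by stability a constant fraction of points witness a violation, so a constant number of samples suffices.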

**StoqMA meets distribution testing** (by Yupan Liu) (ArXiv) This paper shows a containment result: a certain complexity class \(\textsf{eStoqMA}\) is shown to be a subset of \(\textsf{StoqMA}\). Let us informally define these classes. A language \(L\) is in \(\textsf{StoqMA}\) if there exists a verifier (which is a reversible quantum circuit) such that, for every \(x \in L\), there exists a witness quantum state which makes the verifier accept with probability at least \(2/3\); and for every \(x \not\in L\), the verifier rejects \(x\) with probability at least \(2/3\) on every quantum state. The paper characterizes this class from a distribution testing point of view, and this connection allows the author to conclude that \(\textsf{eStoqMA} = \textsf{MA}\). Here, \(\textsf{eStoqMA}\) is the sub-class of \(\textsf{StoqMA}\) where all YES instances have an easy *witness*.

**(Edit: Added later)** We missed the following two papers. Thanks to our readers for pointing them out.

**Near-Optimal Learning of Tree-Structured Distributions by Chow-Liu** (by Arnab Bhattacharyya, Sutanu Gayen, Eric Price and N.V. Vinodchandran) (ArXiv) Graphical models are a convenient way to describe high-dimensional data in terms of its dependence structure. An important question in this area concerns learning/inferring the underlying graphical model from *iid* samples. This paper considers the problem when the underlying graph is in fact a tree; thus, the challenge is to learn a tree-structured distribution. The main result is that given sample access to a tree-structured distribution, you can recover an \(\varepsilon\)-*approximate* tree with good probability using \(O(n/\varepsilon)\) samples.
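The Chow-Liu algorithm itself (from 1968; the paper's contribution is the sharp sample-complexity analysis, not the algorithm) is simple: estimate all pairwise empirical mutual informations and output a maximum-weight spanning tree. A self-contained sketch:

```python
import math
from collections import Counter

def chow_liu_tree(samples):
    """Chow-Liu sketch: given iid samples (tuples over a finite alphabet),
    estimate all pairwise empirical mutual informations and return the
    edge list of a maximum-weight spanning tree over the n variables."""
    m, n = len(samples), len(samples[0])

    def mutual_info(i, j):
        # Empirical mutual information (in nats) between coordinates i, j.
        pij = Counter((s[i], s[j]) for s in samples)
        pi = Counter(s[i] for s in samples)
        pj = Counter(s[j] for s in samples)
        return sum((c / m) * math.log((c / m) / ((pi[a] / m) * (pj[b] / m)))
                   for (a, b), c in pij.items())

    # Kruskal's algorithm on edges sorted by decreasing mutual information.
    edges = sorted(((mutual_info(i, j), i, j)
                    for i in range(n) for j in range(i + 1, n)), reverse=True)
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```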

**Relaxed Locally Correctable Codes with Improved Parameters** (by Vahid R. Asadi and Igor Shinkar) (ECCC) In their work on Robust PCPs of Proximity, Ben-Sasson et al. asked whether it is possible to obtain a \(q\)-query *(relaxed)* Locally Decodable Code whose block length is strictly smaller than the best known bounds on block lengths for Locally Decodable Codes. Recall that with relaxed LDCs, the decoder is allowed to abort outputting the \(i^{th}\) bit of the message if it detects that the received word is not a valid codeword. This paper makes progress on this problem and shows that you can actually construct \(O(q)\)-query relaxed LDCs which encode a message of length \(k\) with block length \(O(k^{1 + 1/q})\). This matches some of the known lower bounds for constant-query LDCs, and thus makes progress towards understanding the gap between LDCs and relaxed LDCs in the regime \(q = O(1)\).