**On Learning and Testing Dynamic Environments** by Oded Goldreich and Dana Ron (ECCC). Given a dynamic environment that evolves according to a local rule, the goal is to query some of the states of the system at some point in time and determine whether the system is evolving according to some fixed rule or is far from it. A harder problem is to learn the evolution of the environment. They give upper and lower bounds for various versions of these problems. In particular, they show that a lot can be achieved even in this setting using a sublinear number of queries. This is the first result of its kind, and many more results in this area will likely follow soon.

# News for February 2014

So what did we see in February 2014?

**Local Algorithms for Sparse Spanning Graphs** by Reut Levi, Dana Ron, and Ronitt Rubinfeld (arXiv). Given a graph \(G = (V,E)\), the aim is to find a small sparse spanning subgraph \(G'\) in the *local algorithm* setting: given any edge \((u,v) \in E\), we must determine in sublinear time whether this edge exists in \(G'\). The main result is an \(\Omega(|V|)\) lower bound, and a matching algorithm when \(G\) is either an expander or hyperfinite.

**A Characterization of Locally Testable Affine-Invariant Properties via Decomposition Theorems** by Yuichi Yoshida (arXiv). This result gives a complete characterization of (two-sided) locally testable affine-invariant properties, based on existing decomposition theorems for functions into structured and pseudorandom parts. It is analogous to the result of Alon, Fischer, Newman, and Shapira on the characterization of testability for dense graphs.

**Testing probability distributions underlying aggregated data** by Clément Canonne and Ronitt Rubinfeld (arXiv). There has been much recent work on distribution testing, and this work discusses stronger models where better algorithms can be obtained. Given a distribution \(D\) over \([n]\), we can get samples from this distribution, and also query the probability \(D(i)\) of any \(i \in [n]\). With these “dual” queries, one gets distribution testers for a variety of problems surpassing previous lower bounds. Interestingly, this is stronger than the conditional model introduced recently (here and here).

**Strong Locally Testable Codes with Relaxed Local Decoders** by Oded Goldreich, Tom Gur, and Ilan Komargodski (ECCC). For a *locally testable code* (LTC), the property of being a codeword is efficiently (constant-time) testable. A *locally decodable code* (LDC) is a code from which individual bits of the original message can be recovered in constant time. A *relaxed local decoder* is allowed to declare failure on a small fraction of the message bits. This relaxation bypasses the difficult (and prominent) open problem of constructing short LDCs. This result gives a construction of near-linear-length strong LTCs with a relaxed local decoder, and it leads to a property that is hard to test but easy to verify (in the MA proofs of proximity sense; refer to last month’s open problem).

# Open problem for February 2014: Better approximations for the distance to monotonicity

*Slightly delayed, Feb’s open problem is by one of the three possible “yours truly”s, Sesh. Looking for more reader participation, hint, hint.*

**Basic setting:** Consider \(f:\{0,1\}^n \rightarrow R\), where \(R\) is some ordered range. There is a natural coordinate-wise partial order, denoted by \(\prec\). The function is *monotone* if \(f(x) \leq f(y)\) for all \(x \prec y\). The *distance to monotonicity*, \(\epsilon_f\), is the minimum fraction of values that must be changed to make \(f\) monotone. This is an old problem in property testing.

**Open problem:** Is there an efficient constant-factor approximation algorithm for \(\epsilon_f\)? In other words, is there a \(poly(n/\epsilon_f)\)-time procedure that can output a value \(\epsilon' = \Theta(\epsilon_f)\)?

**State of the art:** Existing monotonicity testers give an \(O(n)\)-approximation for \(\epsilon_f\), so there is much, much room for improvement. (I’d be happy to see an \(O(\log n)\)-approximation.) Basically, it is known that the number of edges of \(\{0,1\}^n\) that violate monotonicity is at least \(\epsilon_f 2^{n-1}\) [GGL+00], [CS13]. A simple exercise (given below) shows that there are at most \(n\epsilon_f 2^n\) violated edges. So one can simply estimate the number of violated edges to get an \(O(n)\)-approximation.

(Consider \(S \subseteq \{0,1\}^n\) such that modifying \(f\) on \(S\) makes it monotone. Every violated edge must have an endpoint in \(S\).)
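To make the \(O(n)\)-approximation concrete, here is a quick Python sketch (mine, not from any of the referenced papers) that estimates the fraction of violated hypercube edges by sampling; the example function and parameters are purely illustrative:

```python
import random

def violated_edge_fraction(f, n, samples=20000, seed=0):
    """Estimate the fraction of hypercube edges on which f violates
    monotonicity, i.e. f(lo) > f(hi) where hi = lo with bit i set."""
    rng = random.Random(seed)
    violated = 0
    for _ in range(samples):
        x = rng.getrandbits(n)   # random point of the cube
        i = rng.randrange(n)     # random edge direction
        lo = x & ~(1 << i)       # endpoint with bit i cleared
        hi = x | (1 << i)        # endpoint with bit i set
        if f(lo) > f(hi):
            violated += 1
    return violated / samples

# Illustrative function: the number of ones, but with bit 0's influence
# flipped, so exactly the direction-0 edges (a 1/n fraction) are violated.
n = 10
f = lambda x: bin(x).count("1") - 2 * (x & 1)
print(violated_edge_fraction(f, n))  # close to 1/n = 0.1
```

Multiplying the estimate by the total number of edges \(n2^{n-1}\) and normalizing by \(2^n\) gives a quantity sandwiched between \(\epsilon_f/2\) and \(n\epsilon_f\), which is exactly the \(O(n)\)-approximation described above.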

**Related work:** Fattal and Ron [FR10] is a great place to look at various related problems, especially for hypergrid domains.

**References**

[CS13] D. Chakrabarty and C. Seshadhri. *Optimal Bounds for Monotonicity and Lipschitz Testing over Hypercubes and Hypergrids*. Symposium on Theory of Computing, 2013.

[FR10] S. Fattal and D. Ron. *Approximating the distance to monotonicity in high dimensions*. ACM Transactions on Algorithms, 2010.

[GGL+00] O. Goldreich, S. Goldwasser, E. Lehman, D. Ron, and A. Samorodnitsky. *Testing Monotonicity*. Combinatorica, 2000.

# News for January 2014

As we already covered in earlier posts, quite a few notable property testing papers were presented at the SODA and ITCS conferences this month. In addition, an exciting new set of results on a variant of the linearity testing problem was also posted on ECCC.

**Direct Sum Testing** by Roee David, Irit Dinur, Elazar Goldenberg, Guy Kindler, and Igor Shinkar. One of the true gems of property testing is linearity testing: as Blum, Luby, and Rubinfeld first showed, we can test whether a boolean function is linear with a (very natural) 3-query test. A fascinating variant of this problem that is still largely open to this day is the following: is linearity still testable when we consider functions defined on a *subset* of the hypercube? This paper considers the problem when the subsets in question are layers of the hypercube, a problem that is closely connected to PCP constructions and direct sum operations on multi-player games. The main result shows that we can also test linearity for functions defined on layers of the hypercube with a natural 3-query tester.
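For readers who haven’t seen it, the classical BLR test over the full hypercube fits in a few lines of Python (the layered setting of the paper is more subtle); the example functions below are my own illustrations:

```python
import random

def blr_linearity_test(f, n, trials=200, seed=1):
    """The 3-query BLR test: pick x, y uniformly at random and check
    f(x) + f(y) = f(x + y) over GF(2). Linear functions always pass;
    functions far from linear are rejected with constant probability
    in each trial."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.getrandbits(n)
        y = rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False  # reject: found a violated triple
    return True  # accept

n = 8
parity = lambda x: bin(x & 0b1011).count("1") % 2     # linear: parity of a subset
majority = lambda x: int(bin(x).count("1") > n // 2)  # far from linear

print(blr_linearity_test(parity, n))    # the linear function passes
print(blr_linearity_test(majority, n))  # majority gets rejected
```

Note the one-sided error: a truly linear function is never rejected, which is what makes amplification of the rejection probability straightforward.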

# Open problem for Jan 2014: Explicit lower bound for MAPs

*The open problem of the month is a new feature on PTReview. Every month we put up an open problem contributed by our readers. To learn more about getting your problem broadcast, check out the link “About Open Problem of the Month” at the top of the page.*

*The first ever open problem post is by Tom Gur, on Merlin-Arthur proofs of proximity. The post doubles as a great survey of this new topic. Thanks, Tom!*

**Primary reference:** *Non-interactive proofs of proximity* by Gur and Rothblum, ECCC, 2013.

**Basic setting:** Recent work by Ron Rothblum and myself [GR13] initiates the study of \(\mathsf{MA}\) proof-system analogues of property testing, referred to as \(\mathsf{MA}\) *proofs of proximity* (\(\mathsf{MAP}\)s). A proof-aided tester for a property \(\Pi\) is given *oracle* access to an input \(x\) and *free* access to a proof \(w\). A valid \(\mathsf{MAP}\) must satisfy the following: given a proximity parameter \(\epsilon>0\), for inputs \(x \in \Pi\) there exists a proof that the tester accepts with high probability, while for inputs \(x\) that are \(\epsilon\)-far from \(\Pi\) no proof makes the tester accept, except with some small probability of error.

**Open problem:** Can one show an *explicit* property where \(\mathsf{MAP}\)s do not give additional power over standard testers? Formally, show an explicit property that requires any \(\mathsf{MAP}\) to make \(\Omega(|x|)\) queries even when given a long (e.g., length \(|x|/100\)) proof.

**Background:** Goldreich, Goldwasser, and Ron [GGR98] showed that “almost all” properties require probing a constant fraction of the tested object. This limitation was the primary motivation for \(\mathsf{MAP}\)s. Observe that given a sufficiently long proof, every property can be tested very efficiently by an \(\mathsf{MAP}\): consider a proof that fully describes the object. The tester has free access to the proof, so all it has to do is verify the consistency of the proof with the object (which can be done by making only \(O(1/\epsilon)\) random queries); it can check for free that the object described in the proof has the desired property.

Indeed, the name of the game is to construct \(\mathsf{MAP}\)s with short proofs and significantly lower query complexity than standard testers. There exists a property that has an \(\mathsf{MAP}\) using a logarithmic-length proof and only a *constant* number of queries, but requires a nearly linear number of queries to test in the standard model (see Section 3 of [GR13]).

Nevertheless, \(\mathsf{MAP}\)s with short proofs are far from omnipotent. A random property requires any \(\mathsf{MAP}\) to perform \(\Omega(n)\) queries (where \(n\) is the length of the input), even when given an \(n/100\)-length proof (Section 5 of [GR13]). The open problem is to find an explicit property that matches this lower bound. The best lower bound for *explicit* properties is far weaker; specifically, Fischer, Goldhirsh, and Lachish [FGL13] showed that testing a certain family of linear codes (namely, codes with “large” dual distance) requires any \(\mathsf{MAP}\) with a proof of length \(p \ge 1\) to make \(\Omega(n/p)\) queries.

**References**

[FGL13] E. Fischer, Y. Goldhirsh, and O. Lachish. *Partial tests, universal tests and decomposability*. Innovations in Theoretical Computer Science (ITCS), 2014.

[GGR98] O. Goldreich, S. Goldwasser, and D. Ron. *Property testing and its connection to learning and approximation*. Journal of the ACM (JACM), 1998.

[GR13] T. Gur and R. Rothblum. *Non-interactive proofs of proximity*. Electronic Colloquium on Computational Complexity (ECCC), 2013.

# SODA 2014

SODA 2014 had a bunch of property testing papers, all but one of which have been covered by our monthly reports. (This, arguably, shows that the monthly reports have some value.) I’ll just give the list of papers, and only give a summary for the “new” papers.

**Testing equivalence between distributions using conditional samples** by Clément Canonne, Dana Ron and Rocco A. Servedio

**Optimal Algorithms for Testing Closeness of Discrete Distributions** by Siu On Chan, Ilias Diakonikolas, Gregory Valiant and Paul Valiant

**Testing Surface Area** by Chenggang Wu, Ryan O’Donnell, Amir Nayyeri, and Pravesh Kothari

**Hereditary properties of permutations are strongly testable** by Tereza Klimosova and Daniel Kral (arXiv). This paper solves an important open problem in permutation testing: it shows that all hereditary properties of permutations are testable when distance is measured by the Kendall-Tau metric. Such a result was already known for the rectangular metric. The Kendall-Tau metric is the right analogue of the standard edit distance for graph properties, so this result is the analogue of the testability of hereditary graph properties.

# ITCS 2014

ITCS 2014 had a nice selection of property testing papers, some of which were covered in our monthly posts. One of our reporters was around, and sent us this report.

**Partial tests, universal tests and decomposability** by Eldar Fischer, Oded Lachish and Yonatan Goldhirsh. Yonatan talked about the notion of “partial testing”, where one wishes to test a subproperty of a property. This paper has results on when properties cannot be decomposed into testable subproperties and, as Yonatan hinted, uses sunflowers! This work is related to recent work of Gur and Rothblum on Merlin-Arthur proofs of proximity.

**High Dimensional Expanders and Property Testing** by Tali Kaufman and Alexander Lubotzky. Consider a function \(f\) on the vertices of a graph, and suppose we wish to check whether \(f\) is constant. The simple edge tester queries a random edge and rejects if its endpoints have distinct values. The success probability of this tester is governed by the expansion of the graph. This paper generalizes this notion to high-dimensional simplicial complexes, which yields a tester for the tensor power property.
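The edge tester in question is essentially a one-liner; here is a small illustrative Python sketch (toy graph and naming my own):

```python
import random

def edge_tester(label, edges, trials=50, seed=3):
    """Query a uniformly random edge and reject if its endpoints
    disagree; for expanders, the single-trial rejection probability
    scales with the distance of the labeling from constant."""
    rng = random.Random(seed)
    for _ in range(trials):
        u, v = rng.choice(edges)
        if label[u] != label[v]:
            return False  # reject: found a bichromatic edge
    return True  # accept

# 6-cycle (a weak expander, but enough for illustration).
edges = [(i, (i + 1) % 6) for i in range(6)]
print(edge_tester({i: 1 for i in range(6)}, edges))      # constant labeling: accept
print(edge_tester({i: i % 2 for i in range(6)}, edges))  # alternating labeling: reject
```

The point of the paper is what replaces “edge” and “expansion” when the graph becomes a high-dimensional simplicial complex.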

**Parameterized Testability** by Kazuo Iwama and Yuichi Yoshida. Many NP-hard problems are fixed-parameter tractable (FPT). For instance, determining whether a graph has a vertex cover of size \(k\) can be done in \(2^k \cdot poly(n)\) time, as opposed to the trivial \(n^{O(k)}\) bound. Yuichi talked about getting fixed-parameter *testing* algorithms, where the query complexity of the tester is constant when the parameter \(k\) is constant. The paper has a number of results, notably testing algorithms for \(k\)-feedback vertex set and \(k\)-path freeness.
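The FPT algorithm for vertex cover is the classic bounded search tree, which is short enough to sketch in Python (my own illustration, not from the paper):

```python
def has_vertex_cover(edges, k):
    """Branch on an arbitrary uncovered edge (u, v): one of its
    endpoints must be in any cover, so try both. The recursion
    depth is at most k, giving roughly 2^k branches, each with
    polynomial work."""
    if not edges:
        return True   # every edge is covered
    if k == 0:
        return False  # edges remain but no budget left
    u, v = edges[0]
    return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
            has_vertex_cover([e for e in edges if v not in e], k - 1))

triangle = [(0, 1), (1, 2), (0, 2)]
print(has_vertex_cover(triangle, 1))  # False: a triangle needs 2 vertices
print(has_vertex_cover(triangle, 2))  # True
```

The parameterized-testing question asks for the analogous phenomenon in query complexity rather than running time.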

# On Quantum Property Testing

*(A guest post by Ashley Montanaro on quantum property testing, a blog post about his survey. Thanks, Ashley!)*

**Quantum property testing**

As readers of this blog will be aware, the field of property testing is a vibrant and active area of research, with hundreds of papers having been written covering a vast range of topics. One direction which has perhaps been underexplored in recent years is the extent to which quantum computers could be useful in property testing. In this blog post, I will briefly discuss some examples of this kind. This is all based on a recent survey of quantum property testing, which is joint work with Ronald de Wolf. For technical details about everything I discuss here, and for references, please see this paper.

There are several ways in which one could generalise the idea of property testing to the quantum world. Perhaps the most obvious way, and the one which I will discuss in this post, is to consider quantum algorithms for testing properties of classical objects, with the goal being to find algorithms which are significantly faster than any possible classical property tester. We use the standard framework of property testing; the only difference is that our tester is allowed to be quantum.

A generalisation in a different direction which I will not discuss here is to testing whether *quantum* objects of some sort (such as quantum states or operations) have some property. This idea can be split into two cases, the first of which can be thought of as *classical* testers for quantum properties. While at first sight this idea might not seem to make much sense, in fact it is quite natural. For example, imagine we are given access to a quantum system which is claimed to be in a particular state, and we would like to verify that this is indeed the case. Further imagine that the only access to the system we are allowed is to perform measurements of some form, receiving classical measurement data. The problem of performing this kind of verification has been of both theoretical and practical interest to both physicists and computer scientists. Another, and very general, case we could consider is finding *quantum* testers for quantum properties. Here ideas and intuition from classical property testing can help develop algorithms for purely quantum tasks. See the survey and references therein for much more about the testing of quantum properties in each of these cases.

**Quantum testers for classical properties**

Quantum computers offer the prospect of dramatically faster algorithms for certain problems than are possible for classical computers. Property testing is a natural target for such algorithms, for at least a couple of reasons.

First, many of the most famous quantum algorithms known (such as Shor’s algorithm and Grover’s algorithm) operate in the query complexity model. Here the measure of complexity which we try to minimise is the number of queries to bits of the input used to compute some function of the input. Precisely the same model is usually used in property testing.

Second, it is known that quantum computers can only achieve super-polynomial speedups in the query complexity model for computing *partial* functions; that is, solving problems where we are promised that the algorithm will never receive certain inputs. In a property-testing problem we have such a promise: namely, the input either has some property, or is “far” from having that property.

This gives us hope that quantum algorithms could provide super-polynomial speedups for property-testing problems. However, it is not obvious that the (rather specific) nature of the promise in property testing lends itself to quantum speedup; indeed, it was only in 2002 that the first separation between quantum and classical property testers was demonstrated by Buhrman et al. In fact, this work gave two separations: \(O(1)\) vs. \(\Omega(\log n)\) for testing subsets of Hadamard codewords; and \(O(\log n \log \log n)\) vs. \(\Omega(n)\) for a property-testing variant of Simon’s problem.

Following these initial results, there has been a steady stream of other efficient quantum testers for classical properties. These are often based on applying in the property-testing context a quantum algorithm or primitive which has been useful elsewhere. One simple, and yet sometimes overlooked, way to obtain a more efficient quantum property tester from a classical property tester is via the important primitive of amplitude amplification, which is a more efficient quantum variant of classical probability amplification. Imagine we have a classical property tester for some property \(P\) such that the tester has perfect completeness (i.e. accepts objects with property \(P\) with certainty), uses \(q\) queries, and rejects objects \(\epsilon\)-far from \(P\) with probability \(p\). Then, using amplitude amplification, we obtain a quantum tester which rejects objects \(\epsilon\)-far from \(P\) with *constant* probability using \(O(q/\sqrt{p})\) queries. (Classical probability amplification would give \(O(q/p)\) queries.) For example, this can be applied to the standard classical tester for linearity of boolean functions to obtain a quantum property tester with constant success probability using \(O(1/\sqrt{\epsilon})\) queries, a quadratic improvement.
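To see the quadratic gap numerically, here is a back-of-the-envelope Python calculation (mine, not from the survey) of how many repetitions each approach needs to boost a per-run rejection probability \(p\) to a constant:

```python
import math

def classical_repetitions(p, delta=1 / 3):
    # Repeat the one-sided tester until the probability of missing a
    # far input, (1 - p)^t, drops below delta: t = O(1/p) repetitions.
    return math.ceil(math.log(delta) / math.log(1 - p))

def quantum_repetitions(p):
    # Amplitude amplification rotates a success amplitude of sqrt(p)
    # up to a constant with O(1/sqrt(p)) Grover-style iterations.
    return math.ceil(1 / math.sqrt(p))

p = 1e-4
print(classical_repetitions(p))  # on the order of 1/p
print(quantum_repetitions(p))    # on the order of 1/sqrt(p)
```

With \(q\) queries per run this is exactly the \(O(q/p)\) vs. \(O(q/\sqrt{p})\) gap mentioned above; for the BLR linearity tester, where the rejection probability is \(\Theta(\epsilon)\), it yields the \(O(1/\sqrt{\epsilon})\)-query quantum tester.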

Some other examples where quantum algorithms which originated elsewhere have been used as part of efficient quantum property testers include:

- An algorithm of Chakraborty et al. for a variant of testing periodicity of a sequence, where the quantum algorithm uses \(O(1)\) queries but any classical algorithm must make \(\Omega(n^{1/4})\) queries. The algorithm is based on Shor’s algorithm, and the classical lower bound generalises prior work of Lachish and Newman. Quantum testers for periodic functions on groups, and for some other group-theoretic properties, had previously been given by Friedl et al.
- A quantum algorithm of Atici and Servedio for testing \(k\)-juntas (boolean functions depending on at most \(k\) variables) using \(O(k/\sqrt{\epsilon})\) queries, which is a somewhat better complexity than the (later) best known classical bound of \(O(k \log k + k/\epsilon)\). The algorithm is based on the primitive of sampling from the Fourier spectrum of a boolean function, one of the earliest algorithmic ideas in quantum computing. The original algorithm of Atici and Servedio had a linear dependence on \(1/\epsilon\), but this can be improved to a square root using amplitude amplification.
- The use of quantum counting by Bravyi et al. and Chakraborty et al. to give polynomial speedups for testing a number of properties of probability distributions: whether two distributions are equal, whether a distribution is equal to a fixed distribution, whether two distributions have disjoint support, etc.
- The use of Ambainis’s efficient quantum algorithm for element distinctness to give polynomial speedups for testing bipartiteness and expansion of bounded-degree graphs. The basic idea here is to use the algorithm to efficiently find collisions in short random walks on the graph.

The last of these highlights an important gap in our knowledge: could there exist an exponential quantum speedup for testing a property of graphs? It is known that for problems with “too much” symmetry, quantum algorithms can achieve only a polynomial speedup over classical algorithms in the query complexity model. Graph properties (being invariant under permutations of the vertices) have an intermediate level of symmetry, and it is not clear whether a super-polynomial speedup is achievable in this setting.

**Lower bounds**

As well as the development of new quantum algorithms, an integral part of many of the above works demonstrating quantum-classical separations is the proof of new lower bounds on the number of classical queries required for property testing. It is also of interest to prove lower bounds on the *quantum* query complexity of property testing. This can be a challenging task. Indeed, one of the most successful techniques for proving quantum lower bounds, the adversary method originally introduced by Ambainis, suffers from a “property testing barrier” which prevents its application to property testing problems. This technique has subsequently been generalised to the negative-weight adversary method, which overcomes this barrier and indeed completely characterises quantum query complexity. However, this bound can be difficult to apply in practice and I am not aware of any examples of its use in property testing.

A different technique which has been successfully applied to lower-bound the quantum query complexity of various property testing problems is the method of polynomials. Here the idea is to exploit the fact that the acceptance probability of any quantum algorithm making a small number of queries is a low-degree multivariate polynomial. By proving a lower bound on the degree of any bounded polynomial which takes a large value on inputs with property \(P\), and a small value on inputs far from having property \(P\), we get a lower bound on the number of queries required by any quantum algorithm which tests \(P\).

One example where this idea can be applied is the testing of properties which correspond to linear codes. It turns out that for any code whose dual code has minimum distance \(d\), we get an \(\Omega(d)\) quantum lower bound on the number of queries required to test membership in the code. One such case is the Reed-Muller code of order \(d\), or equivalently the set of degree-\(d\) polynomials over \(F_2\); this argument can be used to show that any quantum algorithm that tests the property of being a degree-\(d\) polynomial must make \(\Omega(2^d)\) queries. Another, and much more complicated, example of the use of this method is a lower bound by Ambainis et al. on the quantum query complexity of testing expansion in graphs, which rules out an exponential quantum speedup for testing this property.

**Open questions**

The field of quantum property testing is just getting started and there are a number of interesting open questions. In particular, the quantum complexity of testing many properties which have been studied classically is wide open. Some examples include monotonicity, decision tree complexity, and almost all properties of graphs. In some cases, we might hope to find significant (e.g. exponential) speedups over any possible classical tester. The quantum property testers which have been found so far are largely based on applying quantum algorithms which originated elsewhere; but we could also hope to develop entirely new algorithmic techniques to attack these problems.

As well as considering specific problems, there are more wide-ranging questions which could be addressed. Can we achieve significant quantum speedups for testing properties with a high degree of symmetry – and more generally, what can we say about the structure of properties which admit efficient quantum testers? Can we find better (or simpler) lower-bound techniques for quantum property testers? Blais et al. have proven classical lower bounds based on an elegant technique using communication complexity – can their ideas be extended to prove quantum lower bounds based on quantum communication complexity?

Thinking more widely still, the question of just what it is about the structure of certain problems which makes them amenable to quantum speedup is of great interest and far from being resolved. Property testing provides a lens with which to study this question, whether by finding new examples of quantum speedup (or the lack thereof), or by uncovering deeper structural features of properties which determine whether they have efficient quantum testers. In both cases, there is the potential for new developments in quantum property testing to significantly influence quantum algorithmics more generally.

# News for December 2013

Here’s a report on the new papers we saw in December 2013.

**Property Testing on Linked Lists** by Peyman Afshani, Kevin Matulef and Bryan T. Wilkinson (ECCC). Monotonicity testing has been an important problem for many years. In the usual settings, the input is either accessed by random sampling or given in an easy-to-access form (like an array). This paper considers a model that sits in between: the input is given as a linked list. They show that, in this model, monotonicity testing requires \(\Theta(n^{1/3})\) queries.

**Direct Product Testing** by Irit Dinur and David Steurer (ECCC). This paper considers the question: “Does there exist a two-query test that distinguishes between direct products and functions that are far from direct products with constant probability?” This is a very important problem with applications to gap amplification in PCPs. They answer the question in the affirmative.

**Property-Testing in Sparse Directed Graphs: 3-Star-Freeness and Connectivity** by Frank Hellweg and Christian Sohler (arXiv). There are different query models for property testing in sparse directed graphs. In one such model, proposed by Bender and Ron, only outgoing edges can be queried. This paper obtains upper bounds on the query complexity of testing subgraph freeness and strong connectivity in this model.

**High Dimensional Expanders and Property Testing** by Tali Kaufman and Alexander Lubotzky (arXiv). Expanders have always played an important role in property testing: usually they are used either for designing testers or for proving their correctness, and we sometimes test for the expansion property itself. This paper instead links the expansion of one structure to the testability of another property. The authors study the higher-dimensional expansion property, as studied by Gromov, Linial, and Meshulam, and show that a simplicial complex is a high-dimensional expander if and only if a suitable property is testable.

# News for November 2013

The action in November is mainly in algebraic property testing.

**On Testing Affine-Invariant Properties** by Arnab Bhattacharyya (ECCC). As mentioned here, Arnab put together an excellent survey on testing, ahem, affine-invariant properties. It gives a great overview of this area for a non-expert, and is highly recommended for anyone wanting to get a sense of the existing results and open problems.

**Algorithmic Regularity for Polynomials and Applications** by Arnab Bhattacharyya, Pooya Hatami, and Madhur Tulsiani (Arxiv). Not a pure property testing paper per se. But you can’t complain about papers related to regularity lemmas in a property testing blog. Think of the Szemerédi regularity lemma as the following: given any partition of a graph, one can “refine” it, so that the resulting partition has “few” pieces and is “regular”. Analogously, the regularity lemma for polynomials says that given any set of polynomials, one can “refine” it to get a “small regular” set of polynomials. This has direct applications in Reed-Muller testing (there’s your property testing connection!). This paper is on giving an algorithm that actually constructs this regular set of polynomials. Among other things, this algorithm can be used in decoding of Reed-Muller codes beyond the list-decoding radius.