
News for March 2015

This month’s post will liberally link to previous posts, because it turns out we have written enough in the past to motivate this month’s work. Our news this March has more monotonicity testing, non-deterministic testing, and image testing. Let me test your patience no more…

On the Complexity of Nondeterministically Testable Hypergraph Parameters

by Marek Karpinski and Ronald Markó (arXiv). The notion of nondeterministic graph property testing was introduced by Lovász and Vesztergombi and further developed by Gishboliner and Shapira, and by the authors. Our August 2014 news post explains these results, rather (ahem) beautifully, so check that out. This result generalizes nondeterministic property testing to hypergraphs, and proves its equivalence to deterministic (standard) property testing. Unlike their previous result, this one does not give an explicit bound on the complexity of a deterministic tester for a given nondeterministically testable property. (We only know that it is “some” constant.) This is analogous to the original work of Lovász and Vesztergombi, which was non-constructive. These non-constructive results use limit theorems for dense (hyper)graphs. This certainly leaves the open question of getting explicit complexity bounds.

Quantum Algorithm for Monotonicity Testing on the Hypercube by Aleksandrs Belovs and (PTReview’s very own) Eric Blais (arXiv). Much has been said about Boolean monotonicity testing in our reports, and you can refresh everything by checking out the open problem for Feb 2015. This result gives an \(O(n^{1/4}\epsilon^{-1/2})\)-query tester in the quantum testing model (check Ashley Montanaro’s survey on PTReview). Standard quantum amplitude amplification applied to the classical edge tester and to the recent Khot-Minzer-Safra monotonicity tester yields \(O(\sqrt{n/\epsilon})\) and \(O(n^{1/4}/\epsilon)\) testers, respectively. The point of this result is to get the best dependence on both \(n\) and \(\epsilon\). For \(\epsilon = 1/\sqrt{n}\), the new quantum algorithm gives a cubic speedup over existing classical testers. From a quantum computing perspective, there are few problems with a polynomial, super-quadratic speedup, and Boolean monotonicity testing now enters that list.


Constant-Time Testing and Learning of Image Properties by Piotr Berman, Meiram Murzabulatov, and Sofya Raskhodnikova (arXiv). The notion of testing image properties was first introduced by Raskhodnikova and further developed by Tsur and Ron. There have been some really interesting results on applying these methods in practice for computer vision. This paper studies different query models for the property tester. Typically, the input is a grid of (black and white) pixels, and we wish to determine if the black region is connected, convex, etc. Previous testers were allowed to query any pixel of choice and could be adaptive. This paper focuses on the restricted sample-based models, where the tester only sees uniformly random pixels, and variants thereof. Interestingly, this limited power suffices to test, and even tolerantly test, the properties of being a half-plane, connectedness, and convexity. There are numerous results for the different models and for tolerant vs. standard testers. Such restricted models would likely have more impact in practice, where arbitrary pixel queries may not be feasible. Maybe this result will lead to more work in computer vision!
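
To make the sample-based model concrete, here is a minimal “test by learning” skeleton in Python. It is a sketch of the generic approach (learn a hypothesis from the class, then check agreement on fresh samples), not the paper’s actual algorithm; the subroutine `learn`, the sample sizes, and the acceptance threshold are all illustrative assumptions.

```python
import random

def sample_based_tolerant_test(pixel, n, learn, eps1, eps2, rng=random,
                               train_samples=2000, test_samples=2000):
    """Generic 'test by learning' sketch in the sample-based model.

    pixel(i, j) is the color of pixel (i, j) in an n x n image, but the tester
    only evaluates it at uniformly random locations. `learn` is an assumed
    subroutine that fits a hypothesis from the target class (e.g., a
    half-plane) to labeled samples and returns a function (i, j) -> color.
    Accept if the image agrees with the learned hypothesis on all but roughly
    an (eps1 + eps2) / 2 fraction of fresh random pixels. Sample sizes and the
    threshold are placeholders, not the bounds proved in the paper.
    """
    train = []
    for _ in range(train_samples):
        i, j = rng.randrange(n), rng.randrange(n)
        train.append(((i, j), pixel(i, j)))
    hypothesis = learn(train)

    disagreements = 0
    for _ in range(test_samples):
        i, j = rng.randrange(n), rng.randrange(n)
        if pixel(i, j) != hypothesis(i, j):
            disagreements += 1
    return disagreements / test_samples <= (eps1 + eps2) / 2
```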

Open problem for February 2015

Today’s post by Clément Canonne.

Following the Boolean monotonicity testing bonanza, here’s an open problem. In short, does adaptivity help for monotonicity testing of Boolean functions?

Problem: Consider the problem of monotonicity testing for Boolean functions on the hypercube. Given oracle access to \(f\colon \{0,1\}^n \to \{0,1\}\), we wish to decide if \(f\) is (i) monotone vs. (ii) \(\epsilon\)-far from monotone (in Hamming distance). For either the one-sided or two-sided version of the problem, what is the exact status of adaptive testers?

State of the art:
Fischer et al. [FLN+02] showed that one-sided non-adaptive testers require \(\Omega(\sqrt{n})\) queries. This implies an \(\Omega(\log n)\) lower bound for one-sided adaptive testers.
Chen et al. [CDST15] proved that two-sided non-adaptive testers require (essentially) \(\Omega(\sqrt{n})\) queries. This implies an \(\Omega(\log n)\) lower bound for two-sided adaptive testers.
Khot et al. [KMS15] recently gave a one-sided non-adaptive tester making \(\tilde{O}(\sqrt{n}/\epsilon^2)\) queries. The story is essentially complete for non-adaptive testing.

Comments: As of now, it is not clear whether adaptivity can help. Berman et al. [BRY14] showed the benefit of adaptivity for Boolean monotonicity testing over the domain \([n]^2\) (switch the \(2\) and the \(n\) from the hypercube). A gap provably exists between adaptive and non-adaptive testers: \(O(1/\epsilon)\) vs. \(\Omega(\log(1/\epsilon)/\epsilon)\).

References:

[FLN+02] E. Fischer, E. Lehman, I. Newman, S. Raskhodnikova, R. Rubinfeld, and A. Samorodnitsky. Monotonicity testing over general poset domains. Symposium on Theory of Computing, 2002

[BRY14] P. Berman, S. Raskhodnikova, and G. Yaroslavtsev. \(L_p\) testing. Symposium on Theory of Computing, 2014

[CDST15] X. Chen, A. De, R. Servedio, and L.-Y. Tan. Boolean function monotonicity testing requires (almost) \(n^{1/2}\) non-adaptive queries. Symposium on Theory of Computing, 2015

[KMS15] S. Khot, D. Minzer, and S. Safra. On monotonicity testing and Boolean Isoperimetric type theorems. ECCC, 2015


Erratum: a previous version of this post stated (incorrectly) a lower bound of \(\Omega(\sqrt{n}/\epsilon^2)\). This has been corrected to \(\Omega(\sqrt{n})\).

News for December 2014

Happy new year! I just looked back at our archives, and saw that PTReview has been running since July 2013. I’m happy (and maybe mildly surprised) that it’s still going strong. And so the chronicling of \(o(n)\) continues…

Much coolness we have to report: permutation testing, linearity testing, distribution testing, and monotonicity testing. Without further ado:

Large permutations and parameter testing by Roman Glebov, Carlos Hoppen, Tereza Klimosova, Yoshiharu Kohayakawa, Daniel Kral, and Hong Liu (arXiv). Just as typical dense graph testing involves checking properties on a random, constant-sized induced subgraph, we can look at permutation testers that test properties of permutations by sampling sub-permutations. The theory of dense graph testing is closely tied to the Szemerédi regularity lemma, the notion of graph limits, and the behavior of subgraph densities. (Check out this survey by Borgs et al.) An analogous theory for permutations has been built by a subset of the authors in a series of papers (survey). There is a notion of permutation properties/parameters that are forcible, meaning that two permutations with similar frequencies of (a fixed, finite set of) sub-permutations have close values of the parameter. This seems naturally connected to testing. Indeed, a permutation parameter that is forcible can be approximated in constant time by simply approximating the frequencies of the sub-permutations. This paper shows that the converse is false: there is a constant-time approximable parameter that is not forcible.
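
As a concrete illustration of the sampling primitive, here is a minimal Python sketch of estimating sub-permutation (pattern) frequencies by sampling; a forcible parameter would then be computed from the resulting frequency vector. The function name and sample count are ours, not from the paper.

```python
import random

def sampled_pattern_frequencies(perm, k, num_samples=10000, rng=random):
    """Estimate the frequency of each k-point sub-permutation (pattern) of
    `perm` by sampling random k-subsets of positions.

    `perm` is a list containing a permutation of 0..n-1. Returns a dict
    mapping each observed pattern (a tuple of ranks) to its estimated frequency.
    """
    n = len(perm)
    counts = {}
    for _ in range(num_samples):
        positions = sorted(rng.sample(range(n), k))
        values = [perm[i] for i in positions]
        # Relabel the sampled values by their relative order to get the pattern.
        rank = {v: r for r, v in enumerate(sorted(values))}
        pattern = tuple(rank[v] for v in values)
        counts[pattern] = counts.get(pattern, 0) + 1
    return {p: c / num_samples for p, c in counts.items()}
```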

A Self-Tester for Linear Functions over the Integers with an Elementary Proof of Correctness by Sheela Devadas and Ronitt Rubinfeld (arXiv). The venerable property of linearity testing needs no introduction, and has enough history to fill up this month’s post (and the next and the next). But what if the domain is \(n\)-bit integers and we care about running time in terms of bit operations? Suppose we wish to check a program that is supposed to compute \(f(x) = b\cdot x\) (for fixed, known \(b\)). Inputs like \(2^k\) are easy for the checker to evaluate, and can be used to get faster checkers. This paper gives testers for linearity, multivariate linear functions, and multilinear functions, where the computation time is always linear in the sample complexity.
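
To give a flavor of what such a checker might look like, here is a hedged Python sketch combining the classical additivity check with the “easy” inputs \(2^k\). This is only a plausible sanity check, not the paper’s self-tester, and it ignores the careful accounting of bit operations that is the paper’s main point.

```python
import random

def self_test_linear(program, b, num_bits, trials=100, rng=random):
    """Sanity-check a program that should compute f(x) = b*x on num_bits-bit
    integers (a sketch; assumes the program also accepts (num_bits+1)-bit sums).

    Returns False if a violation is found (the program is definitely buggy);
    returns True if all sampled checks pass.
    """
    # Anchor on inputs of the form 2^k, which the checker can evaluate exactly.
    for k in range(num_bits):
        if program(1 << k) != (b << k):
            return False
    # Classical additivity check on random pairs: f(x) + f(y) should equal f(x + y).
    for _ in range(trials):
        x = rng.randrange(1 << num_bits)
        y = rng.randrange(1 << num_bits)
        if program(x) + program(y) != program(x + y):
            return False
    return True
```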

\(\ell_p\) Testing and Learning of Discrete Distributions by Bo Waggoner (arXiv). Distribution testing needs little introduction, and we have seen much progress over our time at PTReview. Let’s start with the basics. The seminal work of Batu et al. and later Paninski showed that the sample complexity of testing uniformity of a distribution \(\mathcal{D}\) over universe \([n]\) is \(\Theta(\sqrt{n}/\epsilon^2)\). Meaning, the sample complexity of checking whether \(\|\mathcal{D} - \mathcal{U}\|_1 > \epsilon\) (where \(\mathcal{U}\) is uniform on \([n]\)) is \(\Theta(\sqrt{n}/\epsilon^2)\). But what if we had \(\ll \sqrt{n}/\epsilon^2\) samples? From prior work, nothing could be inferred. (Update: As Ilias Diakonikolas pointed out to me, the \(\ell_2\) results were previously known, both from the Batu et al. work and from a recent paper by Chan et al. that settles the \(\ell_2\) question.) This paper changes that, and says that we can still infer something about other \(\ell_p\) norms, \(\|\mathcal{D} - \mathcal{U}\|_p\). What impressed me about this paper is the detailed understanding of the interplay between \(n, \epsilon, p\). For example, the sample complexity of uniformity testing in the \(\ell_p\)-norm for any \(p > 1\) is independent of \(n\). There are many, many results in this paper, with an intriguing threshold phenomenon at \(p=2\). For distribution testing in practice, I would think that this result would be of much significance.
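
For readers who have not seen it, the classical collision-based uniformity tester (the \(\ell_2\)-flavored idea underlying the \(\Theta(\sqrt{n}/\epsilon^2)\) bound) is easy to sketch in Python; the acceptance threshold below is illustrative, and this is not the \(\ell_p\) machinery of Waggoner’s paper.

```python
from itertools import combinations

def collision_uniformity_test(sample, n, epsilon):
    """Collision-based uniformity test over [n] (a sketch).

    The collision probability of the uniform distribution is exactly 1/n, while
    a distribution that is far from uniform has a noticeably larger collision
    probability. Count collisions among all pairs of samples and accept if the
    empirical collision rate is close to 1/n. The threshold is illustrative.
    """
    m = len(sample)
    collisions = sum(1 for a, b in combinations(sample, 2) if a == b)
    collision_rate = collisions / (m * (m - 1) / 2)
    return collision_rate <= (1 + epsilon ** 2) / n
```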

New algorithms and lower bounds for monotonicity testing by Xi Chen, Rocco A. Servedio, and Li-Yang Tan (arXiv). Ah yes, Boolean monotonicity testing. Consider the standard coordinate-wise partial order on \(\{0,1\}^n\), denoted by \(\prec\). A function \(f:\{0,1\}^n \rightarrow \{0,1\}\) is monotone if \(\forall x \prec y, f(x) \leq f(y)\). The complexity of property testing (Boolean) monotonicity is one of those tantalizing, simple-to-state questions that is still quite open. I’ll spare you the full story and the epsilons, but here’s the status. The best upper bound is a non-adaptive, one-sided \(O(n^{7/8})\) tester by Chakrabarty and Seshadhri. The best lower bound is a non-adaptive, one-sided lower bound of \(\Omega(\sqrt{n})\) by Fischer et al. This implies an \(\Omega(\log\log n)\) lower bound for general testers. This paper changes all of this. The authors prove an \(\Omega(n^{1/5})\) lower bound for two-sided non-adaptive testers, leading to an exponentially better \(\Omega(\log n)\) lower bound for general testers. The main insight is to focus on monotone vs. non-monotone families of linear threshold functions, and to show that procedures making few (non-adaptive) queries cannot distinguish between such families. The main hammer is a collection of recent Central Limit Theorems. As an added bonus, this paper improves the upper bound for monotonicity testing to \(O(n^{5/6})\), with a modified tester (and better analysis) of Chakrabarty and Seshadhri. But can the lower bound be improved?

Boolean function monotonicity testing requires (almost) \(n^{1/2}\) non-adaptive queries by Xi Chen, Anindya De, Rocco A. Servedio, and Li-Yang Tan (arXiv). Yes, the lower bound can be improved. This paper gets the non-adaptive two-sided bound to (almost) \(\Omega(\sqrt{n})\), matching the one-sided Fischer et al. bound. The paper proves improved Central Limit Theorems, tailored for this application. The authors, and I with them, believe that this is the true complexity. On that intriguing note, we end 2014!

News for August 2014

We’ve got a nice collection of papers this August, ranging from graph property testing and locally testable codes to new lower bound approaches.

Deconstructions of Reductions from Communication Complexity to Property Testing using Generalized Parity Decision Trees by Roei Tell (ECCC). Blais, Brody, and Matulef (BBM) introduced the beautiful and powerful method of using communication complexity to derive property testing lower bounds. This result is all about understanding this approach in greater depth, and really getting at the crux of the reductions. The BBM approach uses rather heavy hammers (communication complexity lower bounds), so could we get away with weaker hammers to get property testing lower bounds? The key insight is that less powerful models can be reduced to existing property testing problems, and these reductions are equivalent to the BBM reductions. While this (as of now) does not lead to new bounds, it does shed significant light on existing ones. The weaker model is essentially that of (generalized) parity decision trees. Such reductions were also discovered by Bhrushundi, Chakraborty, and Kulkarni, who proved \(\Omega(k)\) lower bounds for testing \(k\)-linearity.

Locally Correctable and Testable Codes Approaching the Singleton Bound by Or Meir (ECCC). Locally Testable Codes (LTCs) are codes that have a constant (or polylogarithmic) time tester. It is known that, with minor caveats, such codes cannot have constant rate (a number of papers on this topic: Katz and Trevisan, Dinur and Kaufman, Ben-Sasson and Viderman). But when the query complexity can be \(n^\beta\) (where \(n\) is the codeword size), the rate can be arbitrarily close to \(1\) (Viderman, and more refs in the paper). This result takes this fact to the very extreme. The Singleton bound constrains the tradeoff between the rate and the relative distance of a code. Amazingly, this optimal tradeoff can be approached by an LTC (which is also a Locally Correctable Code) testable with \(n^\beta\) queries!

Complexity of Nondeterministic Graph Parameter Testing by Marek Karpinski and Ronald Markó (arXiv). It is easiest to think of nondeterministic graph testing through the example of testing bipartiteness for dense graphs. Suppose we wish to test \(G\) for bipartiteness. As a “certificate”, we are provided a candidate partition \(V = V_1 \cup V_2\). There is a “simple” constant-time tester that can test whether \(G\) is bipartite with respect to this partition. (Just sample two uniformly random vertices and, if they are adjacent, check that the edge respects the bipartition.) So we say that bipartiteness is non-deterministically testable: if \(G\) is far from being bipartite, no certificate can convince the simple tester. The full definition is somewhat complex, but it is basically the same idea. (In this context, “testability” always refers to constant-time testability.) Lovász and Vesztergombi proved the striking result that any non-deterministically testable property is also deterministically testable, but their proof did not give an explicit bound on the query complexity. Gishboliner and Shapira fixed that, and gave a tower-type bound for the query complexity. So if the query complexity of the underlying “simple” tester is \(q\), then the non-deterministic property has a tester whose query complexity is a tower of height \(q\). This result goes much further, and shows the final query complexity is at most triply exponential in \(q\). This is quite significant, since it uses a weaker regularity approach than previous results.
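
Here is a minimal Python sketch of that “simple” certificate tester for bipartiteness in the dense-graph model; the number of samples is an illustrative choice rather than a tuned bound.

```python
import random

def test_bipartition_certificate(adjacent, n, partition, epsilon, rng=random):
    """Check a claimed bipartition of a dense graph on n vertices (a sketch).

    adjacent(u, v) queries the adjacency matrix; partition[v] is the side
    (0 or 1) claimed by the certificate. If G is epsilon-far from being
    bipartite with this partition, then roughly an epsilon fraction of vertex
    pairs are edges violating the partition, so O(1/epsilon) sampled pairs
    catch one with constant probability.
    """
    for _ in range(int(10 / epsilon)):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and adjacent(u, v) and partition[u] == partition[v]:
            return False  # a violating edge: the certificate fails
    return True
```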

News for May 2014

May seems to be a slow month for property testing, with only one paper on locally testable codes. Nonetheless, we have unearthed a paper from April (missed despite our best efforts) on a related topic of hidden set approximations.

The Power of an Example: Hidden Set Size Approximation Using Group Queries and Conditional Sampling by Dana Ron and Gilad Tsur (arXiv). Consider a known universe \(U\) and an unknown subset \(S\). Our aim is to approximately determine \(|S|\). We can perform subset queries: for a query subset \(T\), an oracle returns whether \(T \cap S \neq \emptyset\), or (in a more powerful model) returns a uniformly random element of \(T \cap S\). This paper gives a detailed study of this problem under various settings. There is a fascinating array of upper and lower bounds, including situations where \(U\) is ordered and \(T\) must be an interval, the difference between adaptivity and non-adaptivity, arbitrary subset queries, and much more. The topic seems to be quite rich, and it should yield some nice sets of problems to study further. It appears that some lower bounds are quite open. Students, pay attention!
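
As a taste of the weaker (intersection-only) query model, here is a naive Python sketch that estimates \(|S|\) by sweeping the sampling probability and watching when random subsets start missing \(S\); it conveys the flavor of the problem but is not the paper’s algorithm, and its query count is far from optimal.

```python
import math
import random

def estimate_hidden_set_size(universe, intersects, trials=200, rng=random):
    """Rough estimate of |S| using only intersection queries (a naive sketch).

    intersects(T) returns True iff T intersects the hidden set S. If each
    element is put into T independently with probability p, then
    Pr[T misses S] = (1 - p)^|S|, so the value of p at which this probability
    crosses 1/2 pins down |S| up to a constant factor.
    """
    n = len(universe)
    p = 1.0
    while p > 1.0 / (2 * n):
        misses = 0
        for _ in range(trials):
            T = [x for x in universe if rng.random() < p]
            if not intersects(T):
                misses += 1
        if misses >= trials / 2:
            return max(1, round(math.log(2) / p))  # (1-p)^|S| ~ 1/2 => |S| ~ ln(2)/p
        p /= 2
    return n  # random subsets almost always hit S, so S is essentially the whole universe
```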

Limitations on Testable Affine-Invariant Codes in the High-Rate Regime by Venkatesan Guruswami, Madhu Sudan, Ameya Velingker, and Carol Wang (ECCC). Locally Testable Codes (LTCs) are codes that (typically) have a constant-time tester. Interestingly, recent work on testing codes using a linear number of queries (some small constant fraction of the code length) has connections to the Small Set Expansion (SSE) problem, which in turn is related to the famous Unique Games Conjecture (UGC). A paper by Barak et al. gives a construction of a small set expander (a graph where all small sets have high conductance) whose Laplacian has many small eigenvalues (so it’s “difficult” to find a low conductance cut). This construction uses high-rate LTCs that are linear-time testable. In some sense, the better the rate, the better the construction one gets. This paper asks how far this idea can go. So the goal is to construct the best-rate linear-time testable LTC. Reed-Muller codes fall in this category, but their rate is quite far from optimal. Recent work by Guo et al. gives a slightly better construction, but this is far from the (near) optimal rate achieved by BCH codes (which are not known to be testable). All in all, this paper shows that for affine-invariant codes, the Guo et al. construction is essentially the best testable high-rate code one can get. I have obviously done no justice to the depth of the connections, the intricacies of the parameters, or the overall coolness of this subject. So go read the paper!

News for February 2014

So what did we see in February 2014?

Local Algorithms for Sparse Spanning Graphs by Reut Levi, Dana Ron, and Ronitt Rubinfeld (arXiv). Given a graph \(G = (V,E)\), the aim is to find a small sparse spanning subgraph \(G'\) in the local-algorithm setting. So, given any edge \((u,v) \in E\), we must determine in sublinear time whether this edge exists in \(G'\). The main result is an \(\Omega(|V|)\) lower bound, and a matching algorithm when \(G\) is either an expander or hyperfinite.

A Characterization of Locally Testable Affine-Invariant Properties via Decomposition Theorems by Yuichi Yoshida (arXiv). This result gives a complete characterization of (two-sided) locally testable affine-invariant properties. These are based on existing decomposition theorems for functions into structured and pseudorandom parts. This result is analogous to that of Alon, Fischer, Newman, and Shapira (link) on characterizations for testability over dense graphs.

Testing probability distributions underlying aggregated data by Clément Canonne and Ronitt Rubinfeld (arXiv). There has been much recent work on distribution testing, and this work discusses stronger models where better algorithms can be obtained. Given a distribution \(D\) over \([n]\), we can get samples from this distribution, and also query the probability \(D(i)\) of any \(i \in [n]\). With these “dual” queries, one gets distribution testers for a variety of problems surpassing previous lower bounds. Interestingly, this is stronger than the conditional model introduced recently (here and here).
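
To see why dual access is so powerful, here is a toy Python sketch for the simplest case, testing uniformity: sample an element, query its exact probability, and reject on any noticeable deviation from \(1/n\). The constants are illustrative and this is not an algorithm from the paper.

```python
def dual_access_uniformity_test(sample_from_D, query_D, n, epsilon):
    """Toy uniformity tester in the dual-access model (a sketch).

    sample_from_D() draws i ~ D; query_D(i) returns the exact probability D(i).
    If D is uniform, every sampled element has D(i) = 1/n. If D is epsilon-far
    from uniform (in l_1), the elements whose probability deviates noticeably
    from 1/n carry a constant fraction of D's mass, so O(1/epsilon) samples
    suffice to catch a deviation. Constants here are illustrative.
    """
    for _ in range(int(20 / epsilon)):
        i = sample_from_D()
        if abs(query_D(i) - 1.0 / n) > epsilon / (4 * n):
            return False
    return True
```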

Strong Locally Testable Codes with Relaxed Local Decoders by Oded Goldreich, Tom Gur, and Ilan Komargodski (ECCC). For a locally testable code (LTC), the property of being a codeword is efficiently (constant-time) testable. A locally decodable code (LDC) is a code from which individual bits of the original message can be recovered in constant time. A relaxed local decoder is allowed to declare failure when recovering a small fraction of the message bits. This allows for bypassing the difficult (and prominent) open problem of constructing short LDCs. This result gives a construction of near-linear length LTCs with a relaxed local decoder. It also leads to a property that is hard to test but easy to verify (in the MA proofs of proximity sense; refer to last month’s open problem).

Open problem for February 2014: Better approximations for the distance to monotonicity

Slightly delayed, Feb’s open problem is by one of the three possible “yours truly”s, Sesh. Looking for more reader participation, hint, hint.

Basic setting: Consider \(f:\{0,1\}^n \rightarrow R\), where \(R\) is some ordered range. There is a natural coordinate-wise partial order denoted by \(\prec\). The function is monotone if for all \(x \prec y\), \(f(x) \leq f(y)\). The distance to monotonicity, \(\epsilon_f\), is the minimum fraction of values that must be changed to make \(f\) monotone. This is an old problem in property testing.

Open problem: Is there an efficient constant factor approximation algorithm for \(\epsilon_f\)? In other words, is there a \(poly(n/\epsilon_f)\) time procedure that can output a value \(\epsilon' = \Theta(\epsilon_f)\)?

State of the art: Existing monotonicity testers give an \(O(n)\)-approximation for \(\epsilon_f\), so there is much, much room for improvement. (I’d be happy to see an \(O(\log n)\)-approximation.) Basically, it is known that the number of edges of \(\{0,1\}^n\) that violate monotonicity is at least \(\epsilon_f 2^{n-1}\) [GGL+00], [CS13]. A simple exercise (given below) shows that there are at most \(n\epsilon_f 2^n\) violated edges. So just estimate the number of violated edges to get an \(O(n)\)-approximation.
(Consider \(S \subseteq \{0,1\}^n\) such that modifying \(f\) on \(S\) makes it monotone. Every violated edge must have an endpoint in \(S\).)
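
Putting the pieces together, here is a small Python sketch of the \(O(n)\)-approximation described above: estimate the fraction of violated hypercube edges by sampling, and rescale. The sample size is a placeholder; it needs to be \(poly(n/\epsilon_f)\) for the estimate to concentrate.

```python
import random

def approx_distance_to_monotonicity(f, n, samples=100000, rng=random):
    """O(n)-approximation sketch: estimate the number of violated edges of the
    hypercube and rescale.

    A uniformly random edge is obtained by picking x in {0,1}^n and a
    coordinate i, and looking at the pair that differs only in bit i. With V
    violated edges, the bounds above give eps_f * 2^(n-1) <= V <= n * eps_f * 2^n,
    so the returned value V / 2^n lies in [eps_f / 2, n * eps_f]. The sample
    size is a placeholder and must be poly(n / eps_f) in general.
    """
    violated = 0
    for _ in range(samples):
        x = rng.randrange(1 << n)
        i = rng.randrange(n)
        upper = x | (1 << i)       # endpoint with bit i set
        lower = upper ^ (1 << i)   # endpoint with bit i cleared
        if f(lower) > f(upper):    # a violated edge
            violated += 1
    total_edges = n * (1 << (n - 1))
    estimated_violated = (violated / samples) * total_edges
    return estimated_violated / (1 << n)
```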

Related work: Fattal and Ron [FR10] is a great place to look at various related problems, especially for hypergrid domains.

References

[CS13] D. Chakrabarty and C. Seshadhri. Optimal Bounds for Monotonicity and Lipschitz Testing over Hypercubes and Hypergrids. Symposium on Theory of Computing, 2013.

[FR10] S. Fattal and D. Ron. Approximating the distance to monotonicity in high dimensions. ACM Transactions on Algorithms, 2010.

[GGL+00] O. Goldreich, S. Goldwasser, E. Lehman, D. Ron, and A. Samorodnitsky. Testing Monotonicity. Combinatorica, 2000.

Open problem for Jan 2014: Explicit lower bound for MAPs

The open problem of the month is a new feature on PTReview. Every month we put up an open problem contributed by our readers. To learn more about getting your problem broadcast, check out the link “About Open Problem of the Month” at the top of the page.

The first ever open problem post is by Tom Gur, on Merlin-Arthur proofs of proximity. The post doubles as a great survey of this new topic. Thanks Tom!

Primary reference: Non-interactive proofs of proximity by Gur and Rothblum, ECCC, 2013.

Basic setting: Recent work by Ron Rothblum and myself [GR13] initiates the study of \(\mathsf{MA}\) proof-system analogues of property testing, referred to as \(\mathsf{MA}\) proofs of proximity (\(\mathsf{MAP}\)s). A proof-aided tester for property \(\Pi\) is given oracle access to an input \(x\) and free access to a proof \(w\). A valid \(\mathsf{MAP}\) must satisfy the following: given a proximity parameter \(\epsilon>0\), for inputs \(x \in \Pi\) there exists a proof that makes the tester accept with high probability, while for inputs \(x\) that are \(\epsilon\)-far from \(\Pi\), no proof makes the tester accept, except with some small probability of error.

Open problem: Can one show an explicit property for which \(\mathsf{MAP}\)s do not give additional power over standard testers? Formally, can one exhibit an explicit property that requires any \(\mathsf{MAP}\) to make \(\Omega(|x|)\) queries, even when given a long (e.g., length \(|x|/100\)) proof?

Background: Goldreich, Goldwasser, and Ron [GGR98] showed that “almost all” properties require probing a constant fraction of the tested object. This limitation was the primary motivation for \(\mathsf{MAP}\)s. Observe that given a sufficiently long proof, every property can be tested very efficiently by an \(\mathsf{MAP}\). Consider a proof that fully describes the object. The tester has free access to the proof, so all it has to do is verify the consistency of the proof and the object (which can be done by only making \(O(1/\epsilon)\) random queries). The tester can check for free that the object described in the proof has the desired property.
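
A minimal Python sketch of this observation (names and constants ours): the proof claims to be the entire object, the tester checks the claim has the property for free, and then spot-checks the real input against it at \(O(1/\epsilon)\) random positions.

```python
import random

def map_with_full_description(query_x, n, proof, has_property, epsilon, rng=random):
    """Trivial MAP sketch: the proof w claims to be the whole n-symbol input.

    query_x(i) is the oracle to the input x; has_property(w) is the free check
    on the explicitly given proof. If x is epsilon-far from the property, then
    either w lacks the property (reject outright), or x and w differ on at
    least an epsilon fraction of positions, so each random position exposes
    the inconsistency with probability at least epsilon.
    """
    if len(proof) != n or not has_property(proof):
        return False
    for _ in range(int(2 / epsilon)):
        i = rng.randrange(n)
        if query_x(i) != proof[i]:
            return False  # x disagrees with its claimed description
    return True
```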

Indeed, the name of the game is to construct \(\mathsf{MAP}\)s with short proofs and significantly lower query complexity than standard testers. There exists a property that has an \(\mathsf{MAP}\) using a logarithmic-length proof and only a constant number of queries, but requires a nearly linear number of queries for standard testing (see Section 3 of [GR13]).

Nevertheless, \(\mathsf{MAP}\)s with short proofs are far from being omnipotent. A random property requires any \(\mathsf{MAP}\) to perform \(\Omega(n)\) queries (where \(n\) is the length of the input), even when given a proof of length \(n/100\) (Section 5 of [GR13]). The open problem is to find an explicit property that matches this lower bound. The best lower bound for explicit properties is far weaker; specifically, Fischer, Goldhirsh, and Lachish [FGL13] showed that testing a certain family of linear codes (namely, codes with “large” dual distance) requires any \(\mathsf{MAP}\) with a proof of length \(p \ge 1\) to make \(\Omega(n/p)\) queries.

References

[FGL13] E. Fischer, Y. Goldhirsh, and O. Lachish. Partial tests, universal tests and decomposability. Innovations in Theoretical Computer Science (ITCS), 2013.

[GGR98] O. Goldreich, S. Goldwasser, and D. Ron. Property testing and its connection to learning and approximation. Journal of the ACM (JACM), 1998.

[GR13] T. Gur and R. Rothblum. Non-interactive proofs of proximity. Electronic Colloquium on Computational Complexity (ECCC), 2013.

SODA 2014

SODA 2014 had a bunch of property testing papers, all but one of which have been covered by our monthly reports. (This, arguably, shows that the monthly reports have some value.) I’ll just give the list of papers, and only summarize the “new” one.

Testing equivalence between distributions using conditional samples by Clément Canonne, Dana Ron and Rocco A. Servedio

Optimal Algorithms for Testing Closeness of Discrete Distributions by Siu On Chan, Ilias Diakonikolas, Gregory Valiant and Paul Valiant

Testing Surface Area by Chenggang Wu, Ryan O’Donnell, Amir Nayyeri, and Pravesh Kothari

Hereditary properties of permutations are strongly testable by Tereza Klimosova and Daniel Kral (arXiv). This paper solves an important open problem in permutation testing, and shows that all hereditary properties of permutations are testable when the distance is given by the Kendall tau metric. Such a result was already known for the rectangular metric. The Kendall tau metric is the right analogue of the standard edit distance for graph properties, and so this result becomes the analogue of the testability of hereditary graph properties.

ITCS 2014

ITCS 2014 had a nice selection of property testing papers, some of which were covered in our monthly posts. One of our reporters was around, and sent us this report.

Partial tests, universal tests and decomposability by Eldar Fischer, Oded Lachish and Yonatan Goldhirsh. Yonatan talked about the notion of “partial testing”, where one wishes to test a subproperty of a property. This paper has results on when properties cannot be decomposed into testable subproperties, and, as Yonatan hinted at, uses sunflowers! This work is related to recent work of Gur and Rothblum on Merlin-Arthur proofs of proximity.

High Dimensional Expanders and Property Testing by Tali Kaufman and Alexander Lubotzky. Consider a function \(f\) on the vertices of a graph, and suppose we wish to check if \(f\) is constant. The simple edge tester queries a random edge and rejects if its endpoints have distinct values. The rejection probability of this tester (on a far-from-constant function) is governed by the expansion of the graph. This paper generalizes this notion to high-dimensional simplicial complexes, and this gives a tester for the tensor power property.
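
For concreteness, the edge tester from the previous paragraph looks like this (a Python sketch with an illustrative number of repetitions; the interesting content of the paper is the high-dimensional generalization, not this toy).

```python
import random

def edge_test_constant(f, edges, trials=50, rng=random):
    """Simple edge tester for 'f is constant' on a graph given by its edge list.

    Pick a uniformly random edge and reject if f disagrees on its endpoints.
    For a function that is far from constant, the single-trial rejection
    probability is governed by the expansion of the graph (how many edges
    cross the smaller side of any cut); repeating amplifies it.
    """
    for _ in range(trials):
        u, v = rng.choice(edges)
        if f(u) != f(v):
            return False  # f is certainly not constant
    return True
```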

Parameterized Testability by Kazuo Iwama and Yuichi Yoshida. Many NP-hard problems are fixed-parameter tractable (FPT). Determining if a graph has a vertex cover of size \(k\) can be done in \(poly(n) + 2^{O(k)}\) time, as opposed to the trivial \(n^k\) bound. Yuichi talked about getting fixed-parameter testing algorithms, where the query complexity is constant when the parameter \(k\) is constant. The paper has a number of results, notably testing algorithms for \(k\)-feedback vertex set and \(k\)-path freeness.
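
For readers less familiar with FPT, the classical branching algorithm behind the \(2^{O(k)}\) bound for vertex cover is worth seeing once; this is a textbook sketch, unrelated to the testing results of the paper.

```python
def has_vertex_cover(edges, k):
    """Classical FPT branching for vertex cover (a textbook sketch).

    Any uncovered edge (u, v) forces u or v into the cover, so branch on both
    choices with budget k - 1. The recursion tree has at most 2^k leaves, and
    each node does work linear in the number of edges.
    """
    if not edges:
        return True          # nothing left to cover
    if k == 0:
        return False         # edges remain but the budget is exhausted
    u, v = edges[0]
    remaining_without_u = [e for e in edges if u not in e]
    remaining_without_v = [e for e in edges if v not in e]
    return (has_vertex_cover(remaining_without_u, k - 1) or
            has_vertex_cover(remaining_without_v, k - 1))
```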