# Policy on reporting papers

While we at PTReview always look through the posted papers, we do not check for correctness. We do, however, make a serious attempt to ensure that each paper is reasonable. In a few instances, we have decided not to post a (topically relevant) paper, because it looked plainly wrong. Our position is: the benefit of the doubt goes to the author, and a borderline paper should be posted. We are only curating relevant tech reports, not passing judgment on results.

In some borderline cases, readers familiar with the subject have complained to us that a paper should not be considered a scientific contribution (because of, say, unspecified algorithms, or blatantly incorrect or unverifiable central claims). These are cases where we were also unsure of the paper. We have usually removed, or not posted, such papers.

If the authors feel that their paper should nonetheless be posted, they should email us at little.oh.of.n@gmail.com. As long as the paper is not complete nonsense and appears to cite the relevant history, we will defer to the authors’ wishes.

# News for June 2020

Sublinear algorithms in times of social distancing…always something exciting. This month we have a slew of results on sublinear algorithms for classic graph problems.

(Ed: We have removed a previously posted paper due to correctness concerns raised by our readers. Please look at the post on our paper policy.)

Palette Sparsification Beyond (∆ + 1) Vertex Coloring by Noga Alon and Sepehr Assadi (arXiv). A basic fact from graph theory is that any graph has a $$(\Delta+1)$$-coloring, where $$\Delta$$ is the maximum degree. Followers of property testing are likely familiar with a fantastic result of Assadi-Chen-Khanna (ACK) that gives a sublinear algorithm for $$(\Delta+1)$$-coloring. (The running time is $$\widetilde{O}(n^{3/2})$$, where $$n$$ is the number of vertices.) The key tool is a palette sparsification theorem: suppose each vertex is given a “palette” of $$(\Delta+1)$$ colors. Each vertex randomly sparsifies its palette by sampling $$O(\log n)$$ colors, and is constrained to only use these colors. Remarkably, whp the graph can still be properly colored. This tool is at the heart of sublinear time/space algorithms for coloring. This paper gives numerous extensions of this theorem, where one can trade off a larger initial palette for a smaller final sample. Another extension is for triangle-free graphs, where the initial palette has size $$O(\Delta/\ln \Delta)$$ and the sample has size $$O(\Delta^\gamma + \sqrt{\ln n})$$ (for a parameter $$\gamma < 1$$). This leads to an $$O(n^{3/2 + \gamma})$$ time algorithm for $$O(\Delta/\ln \Delta)$$-coloring of triangle-free graphs.
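To make the palette sparsification experiment concrete, here is a toy Python sketch. The function names, the constant in the sample size, and the greedy finishing step are all my own illustration: the theorem guarantees that a proper coloring from the sampled lists exists whp, while the greedy list-coloring below is only a heuristic attempt to find one, not the ACK procedure.

```python
import math
import random

def palette_sparsify(graph, delta):
    """Each vertex samples O(log n) colors from a (delta+1)-color palette.

    `graph` is an adjacency dict {v: set_of_neighbors}. The ACK theorem
    says that whp the graph remains properly colorable from these lists;
    this function only performs the sampling step.
    """
    n = len(graph)
    sample_size = min(delta + 1, max(1, math.ceil(4 * math.log(n + 1))))
    palette = range(1, delta + 2)  # colors 1 .. delta+1
    return {v: set(random.sample(list(palette), sample_size)) for v in graph}

def greedy_list_color(graph, lists):
    """Heuristic greedy list-coloring; may fail even when a coloring exists."""
    coloring = {}
    for v in graph:
        used = {coloring[u] for u in graph[v] if u in coloring}
        available = lists[v] - used
        if not available:
            return None
        coloring[v] = min(available)
    return coloring
```

On small dense graphs the sampled lists are the full palette, so the greedy step succeeds; the interesting regime, of course, is when $$O(\log n) \ll \Delta$$.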

When Algorithms for Maximal Independent Set and Maximal Matching Run in Sublinear-Time by Sepehr Assadi and Shay Solomon (arXiv). Taking off from sublinear coloring algorithms, one can ask if there are sublinear time algorithms for Maximal Independent Set (MIS) and Maximal Matching (MM). Alas, ACK prove that this is impossible. This paper investigates when one can get a sublinear time algorithm for these problems. For a graph $$G$$, let $$\beta(G)$$ be the “neighborhood independence number”: the size of the largest independent set contained in a vertex neighborhood. This paper shows that both problems can be solved in $$\widetilde{O}(n \beta(G))$$ time. Examples of natural classes of graphs where $$\beta(G)$$ is constant: line graphs and unit-disk graphs. An interesting aspect is that the MIS algorithm is actually deterministic! It’s the simple marking algorithm that rules out the neighborhoods of chosen vertices; the analysis shows that not much time is wasted in re-marking the same vertex.
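The marking algorithm itself fits in a few lines of Python. This is the naive linear-time version; the point of the paper is implementing this idea in $$\widetilde{O}(n\beta(G))$$ time without scanning every edge.

```python
def greedy_mis(graph):
    """Naive marking algorithm for maximal independent set.

    Scan vertices in a fixed order; add a vertex to the MIS if it has
    not been ruled out, then mark (rule out) its whole neighborhood.
    `graph` is an adjacency dict {v: set_of_neighbors}.
    """
    mis, ruled_out = set(), set()
    for v in graph:                 # fixed scan order => deterministic
        if v in ruled_out:
            continue
        mis.add(v)
        ruled_out.update(graph[v])  # mark the neighborhood
    return mis
```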

Sublinear Algorithms and Lower Bounds for Metric TSP Cost Estimation by Yu Chen, Sampath Kannan, and Sanjeev Khanna (arXiv). This paper studies sublinear algorithms for the metric TSP problem. The input is an $$n \times n$$ distance matrix. One can 2-approximate the TSP by computing the MST, and a result of Czumaj-Sohler gives a $$(1+\varepsilon)$$-approximation algorithm for the latter, running in $$O(n\varepsilon^{-O(1)})$$ time. The main question is: can one beat the 2-factor approximation in sublinear time? This paper considers the graphic TSP setting, where the distance matrix corresponds to the shortest path metric of an unweighted graph. One result is a $$(2-\varepsilon_0)$$-approximation algorithm (for an explicit constant $$\varepsilon_0$$) that runs in $$\widetilde{O}(n)$$ time. For the important $$(1,2)$$ TSP setting (all distances are either 1 or 2), the paper gives an $$O(n^{1.5})$$ time 1.63-approximation algorithm. Interestingly, there is a lower bound showing that $$(1+\varepsilon)$$-approximations, for arbitrarily small $$\varepsilon$$, cannot be achieved in $$o(n^2)$$ time. One of the key tools is sublinear algorithms for estimating the maximum matching size, itself a well-studied problem in the community.
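For intuition on the 2-factor baseline: for any metric, the MST weight lower bounds the TSP cost, and shortcutting a doubled MST gives a tour of cost at most twice the MST. The sketch below computes this exactly (in $$O(n^2)$$ time) with Prim's algorithm; Czumaj-Sohler's contribution is estimating the MST weight sublinearly, which this toy version does not attempt.

```python
def mst_weight(dist):
    """Prim's algorithm on a full distance matrix; O(n^2) time."""
    n = len(dist)
    in_tree = [False] * n
    best = [float("inf")] * n
    best[0], total = 0.0, 0.0
    for _ in range(n):
        v = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[v] = True
        total += best[v]
        for u in range(n):
            if not in_tree[u] and dist[v][u] < best[u]:
                best[u] = dist[v][u]
    return total

def tsp_bounds(dist):
    """For a metric: MST <= optimal TSP tour <= 2 * MST (shortcutting)."""
    w = mst_weight(dist)
    return w, 2 * w
```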

# Welcome Akash Kumar!

Let’s welcome our latest editor, Akash Kumar. Akash will be taking the place of Gautam Kamath, who has decided to pass the torch on. Let’s also thank Gautam for all the help with PTReview.

# News for March 2020

I hope all of you are keeping safe and healthy in these difficult times. Thank heavens we have our research and our math to keep our sanity…

This month has seen two papers, one on testing variable partitions and one on distributed isomorphism testing.

Learning and Testing Variable Partitions by Andrej Bogdanov and Baoxiang Wang (arXiv). Consider a function $$f:\Sigma^n \to G$$, where $$G$$ is an Abelian group. Let $$V$$ denote the set of variables. The function $$f$$ is $$k$$-separable if there is a partition $$V_1, V_2, \ldots, V_k$$ of $$V$$ such that $$f(V)$$ can be expressed as the sum $$f_1(V_1) + f_2(V_2) + \ldots + f_k(V_k)$$. This is an obviously natural property to study, though the specific application mentioned in the paper is high-dimensional reinforcement learning control. There are a number of learning results, but we’ll focus on the main testing result. The property of $$k$$-separability can be tested with $$O(kn^3/\varepsilon)$$ queries for $$\Sigma = \mathbb{Z}_q$$ (the distance between functions is the usual Hamming distance). There is an analogous result (with a different query complexity) for $$\Sigma = \mathbb{R}$$. It is also shown that testing 2-separability requires $$\Omega(n)$$ queries, even with two-sided error. The paper, to its credit, also has empirical studies of the learning algorithm with applications to reinforcement learning.
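For $$k=2$$, separability has a clean algebraic signature one can check by sampling: $$f$$ decomposes across a partition $$(A,B)$$ iff $$f(x) + f(y) = f(z) + f(w)$$ for all $$x,y$$, where $$z$$ and $$w$$ swap the $$B$$-coordinates of $$x$$ and $$y$$. Here is a toy randomized checker built on that identity; it is my own illustration for a *known* partition, whereas the paper's tester must handle unknown partitions and comes with very different guarantees.

```python
import random

def mix(x, y, part_a):
    """Take coordinates in part_a from x and the remaining ones from y."""
    return tuple(x[i] if i in part_a else y[i] for i in range(len(x)))

def looks_separable(f, n, part_a, sigma, trials=200):
    """Randomized plausibility check that f(V) = f1(V_A) + f2(V_B).

    Uses the identity f(x) + f(y) == f(mix(x,y)) + f(mix(y,x)), which
    holds for all x, y iff f is separable across the given partition.
    """
    a = set(part_a)
    for _ in range(trials):
        x = tuple(random.choice(sigma) for _ in range(n))
        y = tuple(random.choice(sigma) for _ in range(n))
        if f(x) + f(y) != f(mix(x, y, a)) + f(mix(y, x, a)):
            return False
    return True
```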

Distributed Testing of Graph Isomorphism in the CONGEST model by Reut Levi and Moti Medina (arXiv). This result follows a recent line of research in distributed property testing algorithms. The main aim is to minimize the number of rounds of (synchronous) communication for a property testing problem. Let $$G_U$$ denote the graph representing the distributed network. The aim is to test whether an input graph $$G_K$$ is isomorphic to $$G_U$$. The main property testing results are as follows. For the dense graph case, isomorphism can be property tested (with two-sided error) in $$O(D + (\varepsilon^{-1}\log n)^2)$$ rounds, where $$D$$ is the diameter of the graph and $$n$$ is the number of nodes. (And, as a reader of this blog, you probably know what $$\varepsilon$$ is already…). There is a standard $$\Omega(D)$$ lower bound for distributed testing problems. For various classes of sparse graphs (like bounded-degree minor-free classes), constant-query (standard, centralized) property testers for isomorphism are known. This paper provides a simulation argument showing that standard/centralized $$q$$-query property testers can be implemented in the distributed model in $$O(Dq)$$ rounds (this holds for any property, not just isomorphism). Thus, these simulations imply $$O(D)$$-round property testers for isomorphism for bounded-degree minor-free classes.

# News for December 2019

Happy new year! And now for the last post of 2019 papers. We have found a diverse collection of six papers, ranging from classic property testing topics to new perspectives on sublinear computation.

Sublinear Optimal Policy Value Estimation in Contextual Bandits by Weihao Kong, Gregory Valiant, Emma Brunskill (arXiv). This isn’t our usual sublinear paper, but it is definitely of interest to us sublinear folk. Let’s start with a stripped down definition of the problem (or rather, game). There are $$K$$ “arms”, where the $$i$$th arm is represented by an unknown vector $$\beta_i \in \mathbb{R}^d$$. We are presented with a “context”, which is a vector $$x \in \mathbb{R}^d$$. Our job is to choose an arm $$i \in [K]$$. We get the reward $$x \cdot \beta_i$$ (with some noise added). The contexts appear from a known distribution. To aid us, we observe the rewards of $$N$$ iid contexts, so we observe a total of $$T = KN$$ rewards. There has been much work on figuring out the minimum value of $$T$$ required to learn the optimal policy. One requires at least $$d$$ (the dimension) samples to estimate any of the arm vectors. This paper shows that one can actually estimate the expected reward of the optimal policy, without being able to describe the policy, with a number of samples sublinear in $$d$$ (technically, $$\widetilde{O}(\sqrt{d})$$). We see this a lot in property testing, where producing the “optimal” solution for a problem requires linear-in-dimension samples, but estimating the optimal value is much cheaper (consider, for example, the situation of linearity testing, where we wish to find the closest linear function).

Sublinear Time Numerical Linear Algebra for Structured Matrices by Xiaofei Shi and David P. Woodruff (arXiv). This follows the recent line of advances in sublinear time linear algebra. Given a matrix $$A \in \mathbb{R}^{n \times d}$$, the aim is to get algorithms that only look at $$o(nnz(A))$$ entries (where $$nnz(A)$$ is the number of non-zeroes, or the support). Consider the classic task of low rank approximation. Unfortunately, suppose one entry is extremely large, and the others are extremely small. One has to find this large entry for any reasonable approximation, which (in the worst case) requires $$nnz(A)$$ queries into $$A$$. Thus, previous papers make structural assumptions (such as $$A$$ being a Vandermonde matrix) to get sublinear bounds. This paper gives a clean black box method to get a variety of such results. Basically, one can replace the usual $$nnz(A)$$ term in many algorithms by $$T(A)$$, which is the time to compute the matrix-vector product $$Ay$$, for $$y \in \mathbb{R}^d$$. In many cases $$T(A) = \widetilde{O}(n)$$, which can be significantly smaller than $$nnz(A)$$. This paper gives such results for low-rank approximations and many regression problems.

Robust and Sample Optimal Algorithms for PSD Low-Rank Approximation by Ainesh Bakshi, Nadiia Chepurko, and David P. Woodruff. Consider the low rank problem discussed above. As mentioned in the previous paragraph, we need structural assumptions on $$A$$. Previous results gave sublinear time low-rank approximations assuming that $$A$$ is positive semidefinite (PSD). The aim is to get a rank $$k$$ matrix $$B$$ such that $$\|A-B\|^2_2$$ is at most $$(1+\epsilon)$$ times the optimal such approximation. The previous algorithm of Musco-Woodruff makes $$\widetilde{O}(nk/\epsilon^{2.5})$$ queries into $$A$$, while there is a lower bound of $$\Omega(nk/\epsilon)$$. This gap between the complexities is resolved in this paper with an upper bound of $$\widetilde{O}(nk/\epsilon)$$ queries.

Constructive derandomization of query algorithms by Guy Blanc, Jane Lange, and Li-Yang Tan (arXiv). This paper discusses an intriguing angle on sublinear algorithms: when can they be derandomized? Abstractly, consider a randomized algorithm $$R$$ that makes $$q$$ queries. Think of $$R$$ as a function $$R(x,r)$$, where $$x$$ is the input and $$r$$ is the randomness. We would like to design a deterministic algorithm $$D$$ making, ideally, close to $$q$$ queries, that approximates $$\mathbb{E}_r[R(x,r)]$$. For starters, consider some distribution over $$x$$, and suppose we want $$\mathbb{E}_x[|D(x) - \mathbb{E}_r[R(x,r)]|] < \epsilon$$. By (the easy direction of) Yao’s minimax lemma, one can show the existence of such an algorithm $$D$$ that makes $$O(q/\epsilon)$$ queries. But how to explicitly construct it? Indeed, the first result of this paper gives a “meta-algorithm” that takes as input the description of $$R$$ (which is of size $$N$$), runs in time $$poly(N)2^{O(q/\epsilon)}$$, and outputs a description of $$D$$. When $$R$$ satisfies the stronger property of “bounded error”, one can get an $$O(q^3)$$-query algorithm $$D$$ that approximates $$\mathbb{E}_r[R(x,r)]$$ for all $$x$$ (again, the existence is proven by a classic theorem of Nisan). Overall, this paper gives a method to derandomize sublinear time algorithms, and I wonder if there could be some applications of this method for proving lower bounds. After all, Yao’s minimax theorem is the tool for property testing lower bounds, and any perspective on Yao’s theorem is likely of relevance.

Testing Membership for Timed Automata by Richard Lassaigne and Michel de Rougemont (arXiv). Property testing for regular languages is a classic result in sublinear algorithms. This paper focuses on the more complex notion of timed automata. The technical definition is quite complicated, but here’s an overview. There is a finite automaton and a collection of “clocks”. Imagine a string being processed, where each alphabet symbol appears with a new timestamp. Thus, the input word is called a “timed word”. The transitions of the automaton involve the new symbol read, as well as constraints involving the clock times and the timestamp. Thus, we can enforce conditions like “only transition if another symbol is read within a single time unit”. In general, deciding whether a timed word is accepted by a timed automaton is NP-complete. This paper studies the property testing viewpoint. The paper gives a new definition of “timed edit distance” between timed words. The main result shows that one can distinguish timed words accepted by a timed automaton from timed words that are far (according to timed edit distance), by querying a constant number of word positions.

On the query complexity of estimating the distance to hereditary graph properties by Carlos Hoppen, Yoshiharu Kohayakawa, Richard Lang, Hanno Lefmann, and Henrique Stagni (arXiv). This paper concerns the classic setting of property testing of dense graphs. It is well-known that all hereditary graph properties are testable; moreover, one can estimate the distance to such a property in time that depends only on $$\varepsilon$$. Unfortunately, the query complexities have large tower dependencies on $$\varepsilon$$, arising from the use of the Szemeredi regularity lemma. The question of property testing in dense graphs can be reduced to finding “removal” lemmas (such as the classic triangle removal lemma). Such a lemma states that if at least $$\varepsilon n^2$$ edges need to be removed from $$G$$ to destroy all “forbidden subgraphs”, then there must be “many” forbidden subgraphs in $$G$$. There is much recent research on finding families of forbidden subgraphs, where the “many” (in the above statement) is at least $$poly(\varepsilon)$$ times the trivial upper bound. This paper shows that one can also estimate the distance to any hereditary property, with a query complexity that depends directly on the corresponding removal lemma parameters. As a compelling example, one can estimate the distance to being chordal in $$\exp(1/\varepsilon)$$ queries, a significant improvement over the standard tower bounds.

# News for Sept 2019

Six papers this month: results on testing separations, linearity testing in $$\mathbb{R}^n$$, testing for regular languages, graph property testing, topological property testing, and Boolean rank.

Hard properties with (very) short PCPPs and their applications, by Omri Ben-Eliezer, Eldar Fischer, Amit Levi, and Ron D. Rothblum (arXiv). Probably the most significant takeaway from this work is a (largest possible) separation between standard and tolerant property testing. PCPPs (Probabilistically Checkable Proofs of Proximity) are the “NP” variant of property testing, where the tester is aided by a proof string. Consider property $$\mathcal{P}$$. If $$x \in \mathcal{P}$$, there must be a proof string that makes the tester accept (with probability 1). If $$x$$ is far from $$\mathcal{P}$$ (in the usual property testing sense), for any proof string, the tester must reject with sufficiently high probability. PCPPs have played a role in the classical constructions of PCPs, but have also found uses in getting a better understanding of property testing itself. And this paper shows how PCPP constructions can be used to get property testing separations. The main result in this paper is a property $$\mathcal{P}$$ that (basically) requires $$\Omega(n)$$ queries to “property test”, but has a PCPP system where the proof length is $$\widetilde{O}(n)$$. ($$n$$ is the input length.) The main construction uses collections of random linear codes. Significantly, these constructions show a strong separation between standard vs tolerant property testing, and standard vs erasure-resilient property testing. (The latter is a recent variant by Dixit et al, where certain parts of the input are hidden from the tester.) There is a property that is testable in a constant number of queries, but requires $$\widetilde{\Omega}(n)$$ queries to test tolerantly (for any non-trivial choice of parameters). An analogous result holds for erasure-resilient testing.

Distribution-Free Testing of Linear Functions on R^n, by Noah Fleming and Yuichi Yoshida (arXiv). Linearity testing is arguably the canonical problem in property testing, yet there is still much to be learned about it. This paper considers functions $$f: \mathbb{R}^n \to \mathbb{R}$$, and the distribution-free setting. (In this setting, distance is measured according to an unknown distribution $$\mathcal{D}$$ over the input, and the tester can access samples from this distribution. For $$\mathbb{R}^n$$, the “standard” distribution would be the $$n$$-dimensional Gaussian.) The main result is that linearity testing can be done in the distribution-free setting with $$\widetilde{O}(1/\varepsilon)$$ queries, assuming that the distribution is continuous. The primary technical tool, an interesting result in its own right, is that additivity $$(f(x+y) = f(x) + f(y))$$ can be tested in $$\widetilde{O}(1/\varepsilon)$$ queries. The significance of the testing result is cemented by an $$\Omega(n)$$ lower bound for sample-based testers.
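The additivity check itself is the easy part to sketch: sample $$x, y$$, and compare $$f(x+y)$$ against $$f(x)+f(y)$$. Here is a toy version that samples uniformly from a box and uses a numerical tolerance (both of which are my simplifications; the paper's distribution-free, continuous-distribution analysis is where the actual work lies):

```python
import random

def additivity_test(f, n, trials=100, scale=1000.0):
    """BLR-style check of f(x + y) == f(x) + f(y) on random inputs in R^n.

    Rejects (returns False) as soon as one sampled pair violates
    additivity beyond a small numerical tolerance.
    """
    for _ in range(trials):
        x = [random.uniform(-scale, scale) for _ in range(n)]
        y = [random.uniform(-scale, scale) for _ in range(n)]
        s = [a + b for a, b in zip(x, y)]
        if abs(f(s) - (f(x) + f(y))) > 1e-6 * scale:
            return False
    return True
```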

Sliding window property testing for regular languages by Moses Ganardi, Danny Hucke, Markus Lohrey, Tatiana Starikovskaya (arXiv). Fix a regular language $$\mathcal{R}$$. Consider the streaming model, and the basic question of recognizing whether the string (being streamed) is in $$\mathcal{R}$$. Simple, you will say! Run the DFA recognizing $$\mathcal{R}$$ in constant space. Now, suppose there is a sliding window length of $$n$$. The aim is to determine if the past $$n$$ symbols (the “active window”) form a string in $$\mathcal{R}$$. Surprisingly (at least to me), there is a full characterization of the space required for randomized algorithms, and (depending on $$\mathcal{R}$$), it is either $$\Theta(1)$$, $$\Theta(\log\log n)$$, $$\Theta(\log n)$$, or $$\Theta(n)$$. In the interest of beating these lower bounds, suppose we wish to property test on the active window. It turns out the answer is quite nuanced. There are deterministic $$O(\log n)$$-space testers and randomized two-sided $$O(1/\varepsilon)$$-space testers for all regular languages. For randomized one-sided testers, there are multiple possibilities for the optimal space complexity, and there is a full characterization of these regular languages.

A characterization of graph properties testable for general planar graphs with one-sided error (It’s all about forbidden subgraphs) by Artur Czumaj and Christian Sohler (arXiv). Property testing of sparse graphs has been receiving more attention, but most results focus on the bounded degree setting. Unfortunately, many of these results break quite dramatically on sparse graphs with unbounded degrees. This paper focuses on property testing within the class of unbounded degree planar graphs. (Meaning, the input is always assumed to be planar.) The results achieve a significant goal: as the title suggests, there is a complete characterization of properties that are constant-query testable with one-sided error. The easier part is in showing that all such properties can be reduced to testing $$H$$-freeness. The harder (remarkable) result is that $$H$$-freeness can be tested in general planar graphs with a constant number of queries. (This is non-trivial even for triangle-freeness.) And, as is easy to conjecture but hard to prove, these results carry over to all minor-closed families. As a small indication of the challenge, most testers for bounded-degree graphs work by doing constant depth BFSes. When high degree vertices are present, this method fails, and we really need new ideas to deal with such graphs.

Near Coverings and Cosystolic Expansion – an example of topological property testing by Irit Dinur and Roy Meshulam (ECCC). In most algebraic settings, property testing results can be seen as local to global theorems. When do local constraints on a large object imply a global condition? This paper gives a topological instantiation of this phenomenon. We need to define the cover of a simplicial complex $$X$$. For concreteness, think of a 2D simplicial complex $$X$$, which is a hypergraph with hyperedges of size at most 3, where subsets of hyperedges are also present. A 2-cover is a simplicial complex $$X’$$ with the following property. It has two copies of each vertex of $$X$$. Each hyperedge of $$X$$ must have two “corresponding” disjoint copies in $$X’$$. Let the copies of vertex $$v$$ be $$v_0, v_1$$. Then, for every hyperedge (say) $$(u,v,w)$$ of $$X$$, there must be two disjoint hyperedges in $$X’$$ involving copies of the corresponding vertices. One can consider the property testing twist: if the neighborhoods of “most” vertices $$v$$ in $$X$$ satisfy this condition (with respect to the neighborhoods of the copies of $$v$$ in $$X’$$), then is $$X’$$ close to being a cover of $$X$$? Indeed, this paper proves that such a “property testing condition” holds iff $$X$$ is a high-dimensional expander.

Property testing of the Boolean and binary rank by Michal Parnas, Dana Ron, and Adi Shraibman (arXiv). The Boolean rank of a matrix $$M$$ is a fundamental quantity that appears in many lower bound constructions. (Recall that an $$n \times m$$ Boolean matrix $$M$$ has rank at most $$d$$ if $$M$$ can be expressed as $$X \cdot Y$$, where $$X \in \mathbb{F}_2^{n \times d}$$ and $$Y \in \mathbb{F}_2^{d \times m}$$.) In the real-valued setting, results show that one can property test rank in $$poly(d/\varepsilon)$$ queries. This paper proves an analogous result for the Boolean rank. There is a surprise element here: over the reals, the rank can be computed in polynomial time, and many of the geometric intuitions can be brought over to the property testing problem. On the other hand, the Boolean rank is NP-hard to compute exactly, yet we can still get a tester with $$poly(d)$$ query complexity. The paper also gives results for the binary rank. For the binary rank, we require the component matrices $$X, Y$$ to be Boolean, but the algebraic operations are over the reals. In this case, the tester has query complexity $$2^{2d}$$ (with varying dependencies on $$\varepsilon$$ for adaptive/non-adaptive testers). The intriguing open problem is whether $$poly(d)$$-query testers exist for the binary rank.

# News for June 2019

We’ve got four papers this month. A mix of distribution testing, matrix problems, and a different paper on the power of sampling.

On the Complexity of Estimating the Effective Support Size, by Oded Goldreich (ECCC). In distribution testing, a classic problem is that of approximating the support size. By a (now) classic result of Valiant-Valiant, the complexity of this problem is $$\Theta(n/\log n)$$. This paper raises the question of approximating the “effective” support, and that too with more than just samples from the distribution. The $$\epsilon$$-support of a discrete distribution $$\mathcal{D}$$ is the smallest support size among all distributions $$\mathcal{D}’$$ such that $$\|\mathcal{D} – \mathcal{D}’\|_1 \leq \epsilon$$ (or TV-distance). Denote this as $$supp_\epsilon(\mathcal{D})$$. One can also consider a bicriteria version. Given an approximation parameter $$f$$ and thresholds $$\epsilon_1 < \epsilon_2$$, we need to provide a number in the range $$[supp_{\epsilon_2}(\mathcal{D}), f\cdot supp_{\epsilon_1}(\mathcal{D})]$$. The primary model studied allows for random samples and evaluation queries (where one gets the probability of any known element of the domain). In this model, for arbitrary $$\epsilon_1, \epsilon_2$$, there is a continuum of algorithms, trading off query complexity with approximation. At one end, for $$f = O(\log n)$$, the query complexity is $$\widetilde{O}(1/\epsilon_1)$$. At the other end, for $$f=1$$, the query complexity is $$O(\log^* n)$$. (Here, $$n = supp_{\epsilon_1}(\mathcal{D})$$.) There are lower bounds showing the necessity of evaluation queries for subpolynomial query complexities.
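When the distribution is fully known, the $$\epsilon$$-support is easy to compute, which makes the definition concrete. A small sketch of my own, under the TV-distance convention (under the $$l_1$$ convention, the discard budget would be $$\epsilon/2$$); the sublinear question in the paper is, of course, estimating this quantity from sample and evaluation access only.

```python
def effective_support(probs, eps):
    """Effective support of a *fully known* distribution.

    Discard the lightest elements as long as the total discarded mass
    stays at most eps (TV-distance convention); the heavy elements that
    remain form the smallest support achievable within distance eps.
    """
    dropped, kept = 0.0, 0
    for p in sorted(probs):       # lightest first
        if dropped + p <= eps:
            dropped += p          # this element can be thrown away
        else:
            kept += 1
    return kept
```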

Communication and Memory Efficient Testing of Discrete Distributions, by Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, and Sankeerth Rao (arXiv). This paper adds additional computational constraints to the distribution testing problem. In the streaming setting, the algorithm is only allowed a single pass over the random samples. It has $$m$$ bits of storage, much smaller than $$n$$, the distribution size. The aim is to minimize the number of samples required for uniformity (and closeness) testing. For uniformity testing in the streaming setting, the paper gives an algorithm that uses $$\widetilde{O}(m + n/m)$$ samples. The standard collision algorithm requires $$\Theta(\sqrt{n})$$ storage (to store the samples), while this result gives a non-trivial bound for $$m \ll \sqrt{n}$$. There are lower bounds showing that these bounds are basically tight. In the distributed distribution testing problem, there are a number of processors, each holding $$\ell$$ samples. A referee asks a question to the processors, whose answers are broadcast to everyone. The aim is to minimize communication cost. For uniformity testing in this setting, there is a protocol using $$O(\sqrt{n\log n/\ell})$$ bits of communication. As a sanity check, note that for $$\ell = \sqrt{n}$$, one processor could simply run the collision algorithm locally to report the result.
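For readers who haven't seen it, the collision algorithm mentioned above is simple at heart: the uniform distribution has collision probability exactly $$1/n$$, while anything $$\epsilon$$-far from uniform (in TV distance) has collision probability at least $$(1+4\epsilon^2)/n$$, so one thresholds the empirical collision rate in between. A sketch with illustrative (not optimal) constants:

```python
import random
from itertools import combinations

def collision_uniformity_test(samples, n, eps=0.25):
    """Classic collision tester for uniformity over a domain of size n.

    Accepts iff the empirical collision rate is below a threshold
    sitting between 1/n (uniform) and (1 + 4*eps^2)/n (far case).
    """
    m = len(samples)
    pairs = m * (m - 1) // 2
    collisions = sum(1 for a, b in combinations(samples, 2) if a == b)
    return collisions / pairs <= (1 + 2 * eps ** 2) / n
```

With $$\Theta(\sqrt{n}/\epsilon^2)$$ samples the threshold separates the two cases with constant probability; the streaming twist in the paper is that one cannot afford to store all these samples.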

Querying a Matrix through Matrix-Vector Products by Xiaoming Sun, David P. Woodruff, Guang Yang, and Jialin Zhang (arXiv). Consider $$n \times d$$ matrix $$M$$ over some field $$\mathbb{F}$$. One gets access to this matrix through matrix-vector products, and wishes to test some matrix property. This is a natural model in many settings, and generalizes the classic setting of query access. A subtle point is that one can only right multiply with “query” vectors, and there are problems where left multiplication can change the complexity. (A nice example in the paper is testing if a square matrix is symmetric. With both left and right multiplications, this is easy, since we can directly access rows and columns. By only accessing columns, this is non-trivial.) This paper studies a number of problems, broadly classified as linear algebra problems, statistics problems, and graph problems. Some highlights are: testing symmetry can be done in $$O(1)$$ queries, and the maximum eigenvalue can be approximated in $$O(\log n)$$ queries adaptively (but there is an $$\Omega(n)$$ non-adaptive lower bound). For graph problems, here’s an interesting discovery. If $$M$$ is the adjacency matrix, connectivity requires $$\Omega(n/\log n)$$ queries. But if $$M$$ is the signed edge-vertex matrix, then this can be done in $$poly(\log n)$$ queries. This paper provides a number of interesting directions and problems to study.
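To see why symmetry is testable with right multiplications only, note that $$u^T(Mv) = v^T(Mu)$$ for all $$u,v$$ iff $$M$$ is symmetric, and both sides need only products of the form $$My$$. Below is a Freivalds-style sketch of this idea (a plausible implementation for intuition, not necessarily the paper's algorithm): a single random sign-vector trial catches an asymmetric matrix with constant probability.

```python
import random

def matvec(M, v):
    """Right multiplication M @ v; the only access we assume to M."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def looks_symmetric(M, trials=20):
    """Compare u^T (M v) with v^T (M u) for random sign vectors u, v.

    Equality for all u, v holds iff M is symmetric, since the
    difference equals u^T (M - M^T) v.
    """
    n = len(M)
    for _ in range(trials):
        u = [random.choice((-1, 1)) for _ in range(n)]
        v = [random.choice((-1, 1)) for _ in range(n)]
        if dot(u, matvec(M, v)) != dot(v, matvec(M, u)):
            return False
    return True
```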

The Adversarial Robustness of Sampling by Omri Ben-Eliezer and Eylon Yogev (arXiv). This isn’t a property testing paper, but how can one ignore a paper on understanding the power of sampling? It’s convenient to think of the following setting. An algorithm gets a stream of $$n$$ numbers, and has to answer some questions about the stream (say, the median or other quantiles). The simplest strategy is to take a small random sample of the stream. But what if an adversary was generating the stream, depending on the current sample? Under what circumstances is the sample still “representative” of the stream? The specific results of this paper require getting into set systems and VC dimension, which I’ll leave for the sake of simplicity. Let’s go back to the median question. To get an approximate median, a constant number of samples suffice. A consequence of the main result is that if one takes $$O(\log n)$$ samples, then this is robust to adversarial streams. On the other hand, lower bounds show that constant sized samples can be fooled by an adversary. The paper is a really interesting read, and is a nice take on “standard facts” that we take for granted.
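As a baseline for the oblivious (non-adversarial) setting, here is the sampling estimator the whole discussion starts from: the median of a uniform random sample approximates the stream's median. This sketch is my own illustration of the oblivious case; the content of the paper is that against an adaptive adversary a constant-size sample can be fooled, while $$O(\log n)$$ samples remain robust.

```python
import random

def approx_median(stream, k=500):
    """Estimate the median of a stream from a uniform sample of size k.

    For an oblivious stream, the sample median's rank concentrates
    around 1/2, so O(1/eps^2) samples give an eps-approximate median
    with high probability. No claim is made about adaptive adversaries.
    """
    data = list(stream)
    sample = sorted(random.sample(data, min(k, len(data))))
    return sample[len(sample) // 2]
```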

# News for February 2019

A lean month by our (now high) standards. Three papers, on local computational algorithms for spanners, sublinear set cover, and quantum distribution testing.

Local Computation Algorithms for Spanners, by Merav Parter, Ronitt Rubinfeld, Ali Vakilian, Anak Yodpinyanee (arXiv). Consider the problem of computing a $$k$$-spanner for a graph $$G$$. This is a subgraph $$H$$, where (for any pair of vertices) the shortest path distance in $$H$$ is at most $$k$$ times the corresponding distance in $$G$$. A Local Computation Algorithm (LCA) essentially gives sublinear query access to (some spanner) $$H$$, without any preprocessing on $$G$$. In other words, an LCA takes as input an edge $$(i,j)$$ in $$G$$, and in sublinear time says “yes” or “no”. Despite no preprocessing, it is guaranteed that all the “yes” answers form a spanner, and the number of “yes” answers is small. One of the main challenges in LCAs for graph problems is getting a polynomial dependence on $$\Delta$$, the maximum degree. (Many results end up with an exponential dependence on $$\Delta$$, essentially assuming bounded-degree.) This paper gives an LCA outputting spanners of optimal size for $$k=3,5$$, with query complexity $$n^{1-\alpha}$$ (for a constant $$\alpha$$ depending on $$k$$). The second result works for general $$k$$, but has a query complexity of $$O(\Delta^4n^{2/3})$$.

Set Cover in Sub-linear Time, by Piotr Indyk, Sepideh Mahabadi, Ronitt Rubinfeld, Ali Vakilian, Anak Yodpinyanee (arXiv). Set cover needs no introduction. This paper considers sublinear algorithms for set cover. It is convenient to think of the instance as a binary $$m \times n$$ matrix, for $$m$$ sets over a universe of size $$n$$. The query model gives access to the $$j$$th non-zero entry in a desired column/row. The aim of this paper is to actually construct a set cover, and not just approximate its size (the latter problem was initiated in Nguyen-Onak). The first result shows a black-box conversion of any set cover algorithm into one of sublinear query complexity. Any $$\alpha$$-approximation for set cover can be converted into an $$O(\alpha)$$-approximation that makes $$O(m(n/k)^{\beta} + nk)$$ queries (where $$k$$ is the optimal set cover size, and $$\beta < 1$$ depends on $$\alpha$$). For large $$k$$, the paper gives an $$O(mn/k)$$ query, $$O(\log n)$$-approximation algorithm. There are various lower bounds given, showing that these dependencies are necessary.

Distributional Property Testing in a Quantum World by András Gilyén and Tongyang Li (arXiv). There are two aspects to quantum distribution testing. First, we can think of faster distribution testing (for classical problems) using quantum computation. Second, we can generalize the problem of distribution testing to its quantum equivalent, called density operator testing (this paper gives the first results for this quantum generalization). A distribution is a non-negative vector in $$\mathbb{R}^n$$ with unit $$l_1$$-norm. The quantum generalization, a density operator, is a PSD matrix in $$\mathbb{C}^{n \times n}$$ with unit trace. This paper has a number of results along both aspects. For example, for the well-known problem of testing equality of distributions under total variation distance, the paper gives an $$\widetilde{O}(\sqrt{n}/\epsilon)$$ tester (an improvement over classical lower bounds). For testing equality of density operators under the same distance, the paper gives an $$O(n/\epsilon)$$ tester (note that the trivial bound is $$O(n^2)$$, since the operator is a matrix).
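
The two distance notions line up neatly: for diagonal density operators, the trace distance collapses to the total variation distance of the diagonals. A quick numerical sketch, using numpy’s nuclear norm (sum of singular values) for the trace norm:

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two distributions on [n]."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def trace_distance(rho, sigma):
    """Quantum analogue for density operators: half the trace norm
    (nuclear norm) of rho - sigma."""
    return 0.5 * np.linalg.norm(rho - sigma, ord='nuc')
```

For example, embedding $$p, q$$ as diagonal density matrices gives exactly the same distance, which is why density operator testing strictly generalizes distribution testing.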

# News for November 2018

Alas! Sorry for the delay, but November was a good month with ~~five~~ seven papers. We missed a few papers from September, so please look at the updated post for September as well. (Update: Thanks to Omri Ben-Eliezer and an anonymous poster for pointing out two missed papers.)

From Local to Robust Testing via Agreement Testing, by Irit Dinur, Tali Kaufman, and Noga Ron-Zewi (ECCC). Consider a code, which is simply a subset $$C \subseteq \mathbb{F}^n_q$$. (Typically, we think of linear codes, which form a linear subspace.) The study of locally testable codes is a rich area. Consider an input $$x \in \mathbb{F}^n_q$$. A local tester for code/property $$C$$ queries, according to some distribution, a subset $$K$$ of coordinates in $$x$$, and rejects if the “view” $$x|_K$$ is inconsistent with every codeword. The rejection probability should be proportional to the distance of $$x$$ to $$C$$, denoted $$dist(x,C)$$. A robust tester demands that the average distance of $$x|_K$$ to the corresponding property $$C|_K$$ be proportional to $$dist(x,C)$$. This is a much stronger demand than that of just local testing. The main result is that local testability implies robust testability for the special class of lifted codes. The proof goes via agreement testing, where an algorithm gets access to some fragments of $$x$$, and must decide if these fragments are consistent with a codeword.
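
The gap between the two notions can be written out explicitly. With $$K$$ drawn from the tester’s query distribution, local and robust testability demand, respectively:

```latex
% Local testability: reject with probability proportional to the distance
\Pr_{K}\bigl[\, x|_K \text{ is inconsistent with every codeword} \,\bigr]
  \;\ge\; c \cdot \mathrm{dist}(x, C)

% Robust testability: the *average local distance* itself must be
% proportional to the distance
\mathbb{E}_{K}\bigl[\, \mathrm{dist}(x|_K,\, C|_K) \,\bigr]
  \;\ge\; c' \cdot \mathrm{dist}(x, C)
```

Robustness is strictly stronger: a view can be inconsistent with every codeword while still being very close to $$C|_K$$, which satisfies the first condition but contributes almost nothing to the second.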

Testing local properties of arrays, by Omri Ben-Eliezer (ECCC). Consider a function $$f:[n]^d \to \Sigma$$, where $$\Sigma$$ is finite. Now, suppose we define a property $$\mathcal{P}$$ with respect to a set of forbidden $$[k]^d$$ (consecutive) subarrays. Such a property is called $$k$$-local. The inspiration for this work is a line of research on testing of image properties. Note that for $$k=2,3$$, this notion subsumes a number of properties, such as monotonicity, $$c$$-Lipschitz continuity, submodularity, and convexity. The main result is a one-sided, non-adaptive tester for all such properties. Ignoring $$\epsilon, k$$ dependencies, the query complexity is $$O(\log n)$$ for $$d=1$$ and $$O(n^{d-1})$$ for $$d > 1$$. Remarkably, there is no dependence on the alphabet size $$|\Sigma|$$. Moreover, the bound above is optimal for one-sided, non-adaptive testers. The basic idea is to pick blocks of various sizes and query all points on the boundary. Then, the tester can determine (by a sort of brute-force search) whether the boundary values are consistent with some function in $$\mathcal{P}$$. The key is to prove that for a function that is far from $$\mathcal{P}$$, there are many blocks that cannot be consistent with $$\mathcal{P}$$.
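
For intuition on what “$$k$$-local” means, here is a sketch of the (linear-time, decidedly non-sublinear) membership check for $$d=1$$, with monotonicity encoded as a 2-local property; the alphabet and forbidden patterns are illustrative assumptions.

```python
def is_in_local_property(x, forbidden, k):
    """Exact membership check for a k-local property on a 1-D array:
    x satisfies P iff no length-k consecutive window matches a forbidden
    pattern. (Shows the definition, not the sublinear tester.)"""
    return all(tuple(x[i:i + k]) not in forbidden
               for i in range(len(x) - k + 1))

# Monotonicity over alphabet {0, 1, 2} as a 2-local property:
# forbid every strictly decreasing consecutive pair.
FORBIDDEN = {(a, b) for a in range(3) for b in range(3) if a > b}
```

The tester in the paper never scans all windows like this; it infers the existence of violating windows inside a block from the block’s boundary values alone.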

Erasures vs. Errors in Local Decoding and Property Testing, by Sofya Raskhodnikova, Noga Ron-Zewi and Nithin Varma (ECCC). The main theme of this paper is the notion of erasure-resilient testing. Consider a property $$\mathcal{P}$$. Suppose an $$\alpha$$-fraction of the coordinates of input $$x$$ are hidden, so queries to these coordinates of $$x$$ return nothing. Our aim is to accept if there is some “filling in” of the hidden coordinates that makes $$x$$ satisfy $$\mathcal{P}$$, and reject if, for all fillings, $$x$$ remains far from $$\mathcal{P}$$. If $$\alpha = 0$$, this is standard property testing. Suppose there exists an $$(\alpha, \alpha+\epsilon)$$-tolerant tester for $$\mathcal{P}$$. (I’m fudging factors a bit here for readability.) We get an $$\epsilon$$-tester with an $$\alpha$$-fraction of erasures by simply filling in $$x$$ arbitrarily. So this model sits between vanilla and tolerant testing. The main result is a separation: there is a property that is constant-query testable under erasures, but requires $$n^{\Omega(1)}$$ queries to tolerant test (with the above parameters). One of the main technical results is a list decoder for the Hadamard code that can tolerate erasures of any $$\alpha \lt 1$$.
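
The tolerant-to-erasure-resilient reduction described above is almost a one-liner. A sketch, where `None` marks an erased coordinate and `tolerant_tester` stands in for any $$(\alpha, \alpha+\epsilon)$$-tolerant tester (both names are hypothetical):

```python
def erasure_resilient_test(x, tolerant_tester, fill=0):
    """Reduction sketch: fill every erased coordinate arbitrarily, then run
    a tolerant tester. If some completion of x satisfies P, the arbitrary
    fill is at most alpha-far from P; if every completion is far from P,
    the fill stays far. Tolerance absorbs the damage done by filling."""
    filled = [fill if v is None else v for v in x]
    return tolerant_tester(filled)
```

The paper’s separation shows the converse direction fails badly: erasure-resilient testing can be exponentially cheaper than tolerant testing.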

Domain Reduction for Monotonicity Testing: A o(d) Tester for Boolean Functions on Hypergrids, by Hadley Black, Deeparnab Chakrabarty, and C. Seshadhri (ECCC). Consider monotonicity testing of Boolean functions on the hypergrid, $$f:[n]^d \to \{0,1\}$$. It is known that there are $$O(d)$$-query testers with complexity independent of $$n$$, while for $$n=2$$, there are $$o(d)$$-query testers. The main result is a tester with both these properties: the paper gives an $$O(d^{5/6})$$ tester for such functions. The main technical ingredient is a domain reduction theorem. Consider restrictions of $$f$$ to a uniform random $$[poly(d\epsilon^{-1})]^d$$ hypergrid. The paper proves that if $$f$$ is $$\epsilon$$-far from monotone, then (with high probability) so is the restriction. Thus, for the purposes of monotonicity testing, one can simply assume that $$n = poly(d)$$. This gives a black-box method to get testers independent of $$n$$.
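
The domain reduction step itself is simple to state in code. A sketch, assuming query access to $$f$$ as a Python function; the parameter `side` stands for the $$poly(d\epsilon^{-1})$$ grid length, and the restriction below is exactly what the theorem analyzes:

```python
import random

def restrict_to_random_subgrid(f, n, d, side):
    """Domain-reduction sketch: per dimension, draw a uniform random sorted
    sample of `side` coordinates from [n]; return the restricted function
    on the [side]^d sub-hypergrid. The theorem says that a function
    epsilon-far from monotone stays (roughly) as far, whp, under this
    restriction."""
    axes = [sorted(random.sample(range(n), side)) for _ in range(d)]

    def g(y):  # y indexes into the sampled coordinates, y in [side]^d
        return f(tuple(axes[i][y[i]] for i in range(d)))

    return g, axes
```

Any $$n$$-independent hypercube-style tester can then be run on `g`, which is the black-box method the post describes.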

On the Testability of Graph Partition Properties, by Yonatan Nakar and Dana Ron (ECCC). Consider property testing on dense graphs. The seminal result of Goldreich-Goldwasser-Ron studies graph partitioning properties. Such a property is defined as follows. Fix a constant $$k$$. We wish to partition the vertices of graph $$G$$ into $$V_1, V_2, \ldots, V_k$$. For each $$i$$, $$|V_i|/n \in [\rho^L_i, \rho^U_i]$$. Furthermore, for each $$i, j$$, the number of edges between $$V_i, V_j$$ must be in $$[\rho^L_{i,j}n^2, \rho^U_{i,j}n^2]$$. (All the $$\rho$$s are parameters of the property.) This subsumes properties like $$k$$-colorability, and the classic result is a $$poly(1/\epsilon)$$-tester for these properties. Note that the edge conditions are absolute, with respect to $$n^2$$, and not with respect to $$|V_i|\cdot |V_j|$$. This paper allows for relative bounds of the form $$[\rho^L_{i,j}|V_i|\cdot |V_j|, \rho^U_{i,j}|V_i|\cdot |V_j|]$$. This is far more expressive, subsuming properties like having large cliques or large complete bipartite graphs. One of the main results is a (two-sided) $$poly(1/\epsilon)$$ tester for all such properties. Furthermore, there is a characterization of such properties that are one-sided testable in $$poly(1/\epsilon)$$ queries: for these, each density condition (between $$V_i, V_j$$) must either be unconstrained, or must have $$\rho^L_{i,j} = \rho^U_{i,j}$$ equal to either 0 or 1.
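
To pin down the relative variant, here is a sketch of the (non-sublinear) check that a *given* partition witnesses the property. `None` marks an unconstrained pair, within-part density conditions are omitted for brevity, and the representation and names are illustrative assumptions:

```python
def satisfies_partition_property(adj, parts, size_bounds, dens_bounds):
    """Check a candidate k-partition: |V_i| must lie in
    [rho_i^L, rho_i^U] * n, and for i < j the number of edges between
    V_i and V_j must lie in [rho_ij^L, rho_ij^U] * |V_i| * |V_j|
    (the relative variant studied in this paper)."""
    n = len(adj)
    for i, P in enumerate(parts):
        lo, hi = size_bounds[i]
        if not (lo * n <= len(P) <= hi * n):
            return False
    for i in range(len(parts)):
        for j in range(i + 1, len(parts)):
            if dens_bounds.get((i, j)) is None:  # unconstrained pair
                continue
            lo, hi = dens_bounds[(i, j)]
            e = sum(adj[u][v] for u in parts[i] for v in parts[j])
            cap = len(parts[i]) * len(parts[j])
            if not (lo * cap <= e <= hi * cap):
                return False
    return True
```

Setting $$\rho^L_{i,j} = \rho^U_{i,j} = 1$$ between two parts, for instance, expresses “contains a large complete bipartite graph”, which the absolute $$n^2$$ normalization cannot.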

Limits of Ordered Graphs and Images, by Omri Ben-Eliezer, Eldar Fischer, Amit Levi, and Yuichi Yoshida (arXiv). The theory of graph limits is a deep topic, with connections to property testing, regularity lemmas, and analysis of real-world networks. Roughly speaking, consider a sequence $$\{G_n\}$$ of graphs where the (relative) frequency of any fixed subgraph converges. Then, we can define a limit of the graphs, called a graphon, which is a measurable function $$W:[0,1]^2 \to [0,1]$$. This paper discovers appropriate limit objects for ordered graphs. Note that images can be represented as ordered graphs. An ordered graph is a symmetric function $$G:[n]^2 \to \{0,1\}$$ (with the trivial automorphism group). The graphon definitions and limits do not generalize here: there is a sequence of ordered graphs where the ordered subgraph frequencies converge, but no graphon can be associated as the limit. The paper introduces a new object called an orderon, which is a function $$W:([0,1]^2)^2 \to [0,1]$$. Orderons can be shown to be the limits of sequences of ordered graphs.

Two Party Distribution Testing: Communication and Security, by Alexandr Andoni, Tal Malkin, and Negev Shekel Nosatzki (arXiv). This paper gives a new take on distribution testing. Suppose Alice and Bob each have access to $$t$$ samples from (unknown) distributions $$p$$ and $$q$$ respectively. Their aim is to distinguish $$p = q$$ from $$\|p-q\|_{TV} > \epsilon$$. How much communication is required? In the standard setting, it is known that $$n^{2/3}$$ samples (ignoring $$\epsilon$$) are necessary and sufficient. In the communication setting, the bound turns out to basically be $$\Theta((n/t)^2)$$. Observe that for $$t=n^{2/3}$$, the communication is $$n^{2/3}$$ (corresponding to Alice simply sending all her samples to Bob). But for larger $$t$$, the communication decreases: since Alice can learn more about her distribution, she can use sketching techniques to reduce her communication. There is an analogous result for testing independence of two distributions. Furthermore, one can even require these protocols to be secure, so that Alice and Bob learn nothing about each other’s samples (other than the final answer).

# News for July 2018

~~Three~~ Four papers for July. New sublinear graph algorithms, distribution testing under new models, and sublinear matrix algorithms. Onward ho…
(Sorry Amit, for missing your paper on sublinear matrix algorithms.)

Metric Sublinear Algorithms via Linear Sampling, by Hossein Esfandiari and Michael Mitzenmacher (arXiv). Consider a weighted clique $$G = (V,E)$$ where $$V$$ is a set of points in a metric space and edge weights are metric distances. In this setting, sublinear algorithms are those that make $$o(n^2)$$ edge queries. This paper studies problems like densest subgraph and maxcut in this setting. The key method is a sparsifying algorithm that achieves the following. (I paraphrase their language.) Consider a positive parameter $$\alpha$$, and let $$w(e)$$ denote the weight of edge $$e$$. The aim is to get a subgraph $$H$$ that contains every edge $$e$$ (in $$G$$) with independent probability $$\min(w(e)/\alpha, 1)$$. Furthermore, this subgraph should be obtained in time linear in the number of edges in $$H$$ (hence the title of the paper). For problems like 1/2-approximating the densest subgraph and PTASes for maxcut, the results show that for a carefully chosen $$\alpha$$, approximate solutions on $$H$$ give solutions of comparable quality on $$G$$. These results cleanly generalize to settings where edge weights satisfy the triangle inequality with some multiplicative penalty.
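
The sampling rule itself fits in a line per edge. A naive sketch below (the paper’s whole point is to avoid this $$O(n^2)$$ enumeration and spend time linear in $$|H|$$; this version only shows what distribution $$H$$ is drawn from, with an assumed distance callback):

```python
import random

def sparsify(points, dist, alpha, seed=0):
    """Keep each edge e of the metric clique independently with probability
    min(w(e) / alpha, 1). Heavy edges survive; light edges are pruned."""
    rng = random.Random(seed)
    H = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            w = dist(points[i], points[j])
            if rng.random() < min(w / alpha, 1.0):
                H.append((i, j, w))
    return H
```

Tuning $$\alpha$$ trades the size of $$H$$ against how faithfully $$H$$ preserves objectives like densest subgraph and maxcut.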

Sublinear Algorithms for ($$\Delta$$ + 1) Vertex Coloring, by Sepehr Assadi, Yu Chen, and Sanjeev Khanna (arXiv). Arguably, the first thing you learn about vertex coloring is that a graph with maximum degree $$\Delta$$ admits a $$(\Delta+1)$$-coloring, which can be found in linear time. But what about sublinear time/space? I like this! You take a simple classical fact, throw in sublinear constraints, and it opens up a rich theory. This paper shows a non-adaptive $$\widetilde{O}(n^{3/2})$$-time algorithm for this problem, and gives a nearly matching lower bound. There are also results for streaming and parallel computation, but let’s focus on the sublinear result. It is remarkable that there is no loss in colors in going to sublinear time. (In contrast, the paper shows an $$\Omega(n^2)$$ lower bound for constructing a maximal matching.) The main technical tool is a list coloring result, where each vertex is given a list of colors and must choose its own from that list. Obviously, if each list is $$[\Delta + 1]$$, such a coloring is possible. The paper proves that even if each list is an independent $$O(\log n)$$-sized sample of $$[\Delta+1]$$, a valid coloring is still possible. The final algorithm is pretty involved, and uses this meta-algorithm as a building block.
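
The list coloring statement is easy to simulate. A sketch, with an illustrative constant in the $$O(\log n)$$ sample size; note the theorem guarantees a proper list coloring *exists* with high probability, while the greedy routine below may fail even when one exists:

```python
import math
import random

def sample_palettes(n, Delta, seed=0):
    """Palette sparsification sketch: every vertex independently samples
    roughly c * log n colors (c = 2 here, an arbitrary choice) from the
    shared palette [Delta + 1]."""
    rng = random.Random(seed)
    k = min(Delta + 1, 2 * max(1, math.ceil(math.log2(n))))
    return {v: rng.sample(range(Delta + 1), k) for v in range(n)}

def greedy_list_color(adj, lists):
    """Greedy list coloring from the sampled lists; returns None on
    failure. Existence is what the theorem promises, not that greedy
    succeeds."""
    color = {}
    for v in sorted(lists):
        used = {color[u] for u in adj[v] if u in color}
        avail = [c for c in lists[v] if c not in used]
        if not avail:
            return None
        color[v] = avail[0]
    return color
```

The point of the sparsified palettes is that only edges whose endpoints share a sampled color matter, which is what enables the sublinear query bound.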

Anaconda: A Non-Adaptive Conditional Sampling Algorithm for Distribution Testing, by Gautam Kamath and Christos Tzamos (ECCC). The standard model for distribution testing is access to samples from the unknown distribution $$\mathcal{D}$$ with support $$[n]$$. This has attracted much attention, with a rich set of results, and the complexity of the classic problems of uniformity, identity, and equivalence testing is well understood. But there are alternate models, such as the model of conditional samples (Chakraborty-Fischer-Goldhirsh-Matsliah ’13 and Canonne-Ron-Servedio ’14). For any subset $$S \subseteq [n]$$, we can get a random sample from $$\mathcal{D}$$ restricted to $$S$$. This adds an algorithmic dimension to distribution testing. This paper studies the power of non-adaptive conditional (NACOND) queries. The main result is that uniformity, identity, and equivalence are testable with $$\mathrm{poly}(\log n)$$ queries. (There are existing $$\Omega(\log n)$$ lower bounds for all these problems.) The heart of these algorithms is a procedure ANACONDA that tries to find a set $$S$$ in which some element has high probability relative to the mass of $$S$$.
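
A conditional sampling oracle is simple to state in code. A sketch (the dictionary representation and names are assumptions); non-adaptivity means all the query sets $$S$$ must be fixed before any samples come back:

```python
import random

def conditional_sample(probs, S, rng):
    """COND oracle: draw one sample from the distribution `probs`
    (a dict elem -> probability mass) conditioned on the subset S,
    via inverse-CDF sampling over S's renormalized mass."""
    mass = sum(probs[e] for e in S)
    r = rng.random() * mass
    acc = 0.0
    for e in sorted(S):
        acc += probs[e]
        if r <= acc:
            return e
    return max(S)  # guard against floating-point round-off
```

An adaptive tester may pick the next $$S$$ based on earlier draws; the NACOND model studied here forbids exactly that, which is what makes the $$\mathrm{poly}(\log n)$$ upper bounds surprising.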

Sublinear-Time Quadratic Minimization via Spectral Decomposition of Matrices, by Amit Levi and Yuichi Yoshida (arXiv). When it comes to fundamental problems, it’s hard to beat quadratic minimization. Given a matrix $$A \in \mathbb{R}^{n\times n}$$, we wish to find $$v \in \mathbb{R}^n$$ that minimizes $$v^TAv$$. (This is basically a singular value/vector problem.) One may have additional terms in the objective, depending on $$v^Tv$$ or $$v^Tb$$ (for a fixed vector $$b$$). This paper gives sublinear algorithms for this problem. A natural approach is to simply subsample $$k$$ rows and columns to get a submatrix $$B$$, solve the problem for $$B$$, and hope for the best. This idea has a rich history, going back to seminal work of Frieze-Kannan. Recently, Hayashi-Yoshida showed that constant $$k$$ (depending only on error parameters) suffices for getting a non-trivial approximation for this problem. Unfortunately, the solution error depends on the $$\ell_\infty$$-norm of the solution. This paper shows that for polylogarithmic $$k$$, one can get an error depending on the $$\ell_2$$-norm of the solution. This is a significant improvement, especially for sparse solution vectors. The main technical workhorse is a new matrix decomposition theorem, which shows how any matrix can be written as the sum of a few block matrices and a low-norm “error” matrix. Admirably, the paper includes a number of experiments demonstrating the effectiveness of this technique for eigenvalue computations. It’s very nice to see how ideas from sublinear algorithms might have a practical impact.
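
The sample-and-solve template is compact. A sketch, specialized to the pure quadratic objective $$\min_{\|v\|=1} v^TAv$$ (a minimum-eigenvalue computation) with an illustrative $$n/k$$ rescaling of the subsampled answer; the paper’s actual guarantees and normalization are more delicate than this:

```python
import numpy as np

def subsampled_min_eig(A, k, seed=0):
    """Sample-and-solve sketch: subsample k row/column indices, solve the
    k x k problem exactly (smallest eigenvalue of the symmetrized
    principal submatrix), and rescale by n/k as a crude estimate of
    min_{|v|=1} v^T A v on the full matrix."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    idx = rng.choice(n, size=k, replace=False)
    B = A[np.ix_(idx, idx)]
    B = (B + B.T) / 2  # symmetrize so eigvalsh applies
    return (n / k) * np.linalg.eigvalsh(B)[0]
```

On highly structured inputs (e.g. rank-one matrices) the estimate is exact; the paper’s decomposition theorem is what controls the error on general matrices.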