Happy 2022, everyone! We now cover the last papers of 2021. Our readers have pointed out a number of papers from previous months that we missed (sorry). To adequately cover (and publicize) these papers, we've decided to simply incorporate them into this post rather than update previous posts.
We have a breakthrough on Locally Testable Codes, sublinear algorithms for testing pattern freeness, new thoughts on convexity testing, and a perspective on sampling community structure. And here’s the final list for 2021.
Locally Testable Codes with constant rate, distance, and locality by Irit Dinur, Shai Evra, Ron Livne, Alexander Lubotzky, and Shahar Mozes (ECCC, posted Nov 2021). A Locally Testable Code (LTC) is one where the set of codewords has a property tester. The study of LTCs started with the PCP theorem, but blossomed into a deep research agenda. The holy grail is a \(c^3\)-LTC: one with constant rate, constant distance, and a constant-query property tester. Just to recall, the rate \(r \in (0,1)\) is the ratio of the message length to the encoding length. The distance \(\delta \in (0,1)\) is the minimum (normalized) Hamming distance between distinct codewords (we typically expect this to be constant for an error correcting code). And, most familiar to us, \(q\) is the number of queries required to property test codewords. The aim is to get all these parameters to be constants independent of the message length. The classic linearity test of BLR basically proves that the Hadamard code is 3-query testable, but the Hadamard code has poor (exponentially small) rate. Low-degree tests prove that Reed-Muller codes are constant-query testable, but they have inverse polynomial rate. Further work constructed LTCs with inverse polylogarithmic rate, meaning that the encoding has length \(n \cdot \mathrm{poly}(\log n)\) for message length \(n\). There was even some belief that any LTC must have rate going to zero. This paper invalidates that belief and gives a truly \(c^3\)-construction. The construction is based on a 2-dimensional complex built from Cayley graphs; this is basically a graph with hyperedges corresponding to “squares” of vertices. Concurrent and independent work of Breuckmann-Eberhardt and Panteleev-Kalachev gives similar constructions, though from the perspective of quantum codes.
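To recall the flavor of local testing, here is a minimal sketch (in Python) of the BLR 3-query linearity test mentioned above; the function `f`, the dimension `n`, and the number of repetitions are illustrative placeholders, not parameters from the paper.

```python
import random

def blr_linearity_test(f, n, trials=100):
    """Minimal sketch of the BLR 3-query linearity test over F_2^n.

    f: maps a tuple of n bits to a bit (a purported Hadamard codeword,
       viewed as a function).
    Each round makes 3 queries: pick random x, y and check that
    f(x) + f(y) = f(x + y) (mod 2). Reject if any check fails.
    """
    for _ in range(trials):
        x = tuple(random.randint(0, 1) for _ in range(n))
        y = tuple(random.randint(0, 1) for _ in range(n))
        xy = tuple((a + b) % 2 for a, b in zip(x, y))  # coordinate-wise XOR
        if (f(x) + f(y)) % 2 != f(xy):
            return False  # found a violated linearity constraint
    return True  # all sampled constraints passed

# Example: a true parity function always passes.
if __name__ == "__main__":
    f = lambda x: (x[0] + x[2]) % 2
    print(blr_linearity_test(f, n=5))
```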
On the Locally Testable Code of Dinur et al. (2021) by Oded Goldreich (ECCC). As the title says, this short paper gives a high-level discussion of the previous result. It abstracts out some of the graph-theoretic aspects to explain connections to more combinatorial constructions. The construction and analysis are based on expander codes. These codes (e.g., low-density parity check (LDPC) codes) have constant rate and distance, but we do not know how to test them: a string far from every codeword might fail only (say) one parity check constraint. Interpreting the Dinur et al. construction as an expander code, we essentially get a collection of correlated parity constraints. Hence, a string that is far from every codeword violates many constraints, leading to a tester. (Of course, this is an extremely watered down version of the analysis idea.) The construction takes two Cayley graphs on a non-Abelian group (which form the vertices). Specific 4-cycles in this graph (together with a constant-sized tensor code) are used to make the final codewords.
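To make the “testing by parity checks” intuition concrete, here is a minimal sketch of the natural constraint-sampling tester for a code given by a parity-check matrix; the representation of `H` and the example are illustrative. As the discussion above notes, for a plain LDPC/expander code this tester can fail, since a far word may violate only a single constraint; the point of the correlated constraints is to rule that out.

```python
import random

def sample_parity_constraints(H, w, trials=50):
    """Minimal sketch of a constraint-sampling tester for a binary code.

    H: list of parity constraints, each a list of coordinate indices.
    w: the word being tested (list of 0/1 entries).
    Queries only the coordinates of the sampled constraints and accepts
    iff every sampled parity check is satisfied.
    """
    m = len(H)
    for _ in range(trials):
        constraint = H[random.randrange(m)]
        if sum(w[i] for i in constraint) % 2 != 0:
            return False  # a sampled parity check is violated
    return True

# Tiny illustrative example: the repetition code {000, 111},
# with checks x0 + x1 = 0 and x1 + x2 = 0.
if __name__ == "__main__":
    H = [[0, 1], [1, 2]]
    print(sample_parity_constraints(H, [1, 1, 1]))  # codeword: accepts
    print(sample_parity_constraints(H, [1, 0, 1]))  # violates both checks
```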
Strongly Sublinear Algorithms for Testing Pattern Freeness by Ilan Newman and Nithin Varma (arXiv, posted Jun 2021). Pattern freeness is a major generalization of monotonicity testing. The input is (query access to) a function \(f: [n] \mapsto \mathbb{R}\). Let \(\pi: [k] \mapsto [k]\) be a permutation. The input is \(\pi\)-free if no restriction of \(f\) to \(k\) indices induces the permutation \(\pi\). Newman-Rabinovich-Rajendraprasad-Sohler introduced this problem. When \(k=2\), this property is precisely monotonicity, for which (many) \(O(\varepsilon^{-1} \log n)\)-query property testers are known. Is such a bound achievable for \(k \geq 3\)? Interestingly, for non-adaptive algorithms, an optimal bound of \(\Theta(n^{1-1/(k-1)})\) (ignoring \(\varepsilon\) and \(k\) dependencies) was achieved by Ben-Eliezer and Canonne. This paper shows that there is an adaptive algorithm that makes \(n^{o(1)}\) queries, thereby proving a separation between adaptive and non-adaptive algorithms. A key idea is to visualize the input as a set of points in the plane, and construct a coarse grid that partitions the points. There is a highly non-trivial case analysis on top of this that leads to an \(O(\sqrt{n})\)-query algorithm (which already beats the non-adaptive lower bound).
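For concreteness, here is a minimal sketch (plain Python, assuming distinct function values) of what it means for \(f\) to contain the pattern \(\pi\), together with the naive tester that samples random \(k\)-tuples of indices. This is only for illustration; it is not the adaptive algorithm of the paper.

```python
import random
from itertools import combinations

def induces_pattern(values, pi):
    """Do these k values (assumed distinct), read left to right, have the
    same relative order as the permutation pi (given 1-indexed)?"""
    ranks = [sorted(values).index(v) + 1 for v in values]
    return ranks == list(pi)

def contains_pattern(f, n, pi):
    """Brute force: does f on {0, ..., n-1} contain pi as an order pattern?"""
    k = len(pi)
    return any(induces_pattern([f(i) for i in idx], pi)
               for idx in combinations(range(n), k))

def naive_pattern_tester(f, n, pi, trials=200):
    """Naive sampling tester: reject if a random index k-tuple induces pi.
    (Illustrative only; not the Newman-Varma algorithm.)"""
    k = len(pi)
    for _ in range(trials):
        idx = sorted(random.sample(range(n), k))
        if induces_pattern([f(i) for i in idx], pi):
            return False
    return True

# Example: (2,1)-freeness is exactly monotonicity (no "descents").
if __name__ == "__main__":
    f = lambda i: i * i          # monotone, hence (2,1)-free
    g = lambda i: (i * 7) % 10   # has descents
    print(contains_pattern(f, 20, (2, 1)), contains_pattern(g, 20, (2, 1)))
```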
Parameterized Convexity Testing by Abhiruk Lahiri, Ilan Newman, and Nithin Varma (arXiv, posted Oct 2021). Given a function \(f:[n] \mapsto \mathbb{R}\), the property of interest is (discrete) convexity: the discrete derivatives should be monotonically non-decreasing. It is known that there is an \(O(\varepsilon^{-1}\log n)\)-query non-adaptive tester for this property. This paper studies the problem from the perspective of parameterized property testing, where the range is restricted. The main result is that if the number of distinct discrete derivatives is at most \(s\), there is an \(O(\varepsilon^{-1} \log s)\)-query non-adaptive tester. Moreover, this bound is optimal even for adaptive algorithms. This picture parallels that of monotonicity. Interestingly, when there are only two distinct discrete derivatives, the tester can be made deterministic.
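As a reminder of the property itself, here is a minimal sketch of discrete convexity in terms of discrete derivatives (this is just the exact check over all of \([n]\); the testers above, of course, query only a small number of positions).

```python
def discrete_derivatives(f, n):
    """Discrete derivatives f(i+1) - f(i) for i = 0, ..., n-2."""
    return [f(i + 1) - f(i) for i in range(n - 1)]

def is_discretely_convex(f, n):
    """f: [n] -> R is discretely convex iff its discrete derivatives
    are monotonically non-decreasing."""
    d = discrete_derivatives(f, n)
    return all(d[i] <= d[i + 1] for i in range(len(d) - 1))

# Example: i^2 is convex; a concave "tent" function is not.
if __name__ == "__main__":
    print(is_discretely_convex(lambda i: i * i, 10))        # True
    print(is_discretely_convex(lambda i: -abs(i - 5), 10))  # False
```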
Modularity and edge sampling by Colin McDiarmid and Fiona Skerman (arXiv). Not your typical sublinear algorithms paper, but certainly of interest to us. The main problem is to understand when a random sample of edges in a graph can reveal information about the “community structure”. The modularity of a graph is a measure of this structure. Essentially, a partition has low modularity if the number of edges crossing the partition is close to that of a random graph with the same degree distribution. A partition of high modularity has significantly fewer crossing edges, and is thus a valid “clustering” of the graph. The modularity of the graph is obtained by maximizing over partitions. (The normalization of modularity is chosen so that the optimizing partition generally has a constant number of sets.) This paper proves a concentration inequality for the modularity of a constant-size sample of uniformly random edges. The modularity of a sample of \(\Theta(\varepsilon^{-5})\) edges is an additive \(\varepsilon\)-approximation of the overall modularity, whp.
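To illustrate the quantity involved, here is a minimal sketch using the standard modularity score of a fixed partition (the paper concerns the maximum over partitions of the sampled subgraph; the function names and the toy estimator below are illustrative, not the paper's analysis).

```python
import random
from collections import defaultdict

def modularity(edges, part):
    """Modularity of a given partition (dict: vertex -> community label)
    of a graph given as a list of undirected edges (u, v):
        Q = sum over communities A of  e(A)/m - (vol(A) / (2m))^2,
    where e(A) counts edges inside A and vol(A) sums degrees in A.
    """
    m = len(edges)
    internal = defaultdict(int)  # e(A)
    volume = defaultdict(int)    # vol(A)
    for u, v in edges:
        volume[part[u]] += 1
        volume[part[v]] += 1
        if part[u] == part[v]:
            internal[part[u]] += 1
    return sum(internal[a] / m - (volume[a] / (2 * m)) ** 2 for a in volume)

def sampled_modularity(edges, part, s):
    """Score of the same partition on s edges sampled uniformly at random
    (with replacement). Illustrative estimator only."""
    sample = [random.choice(edges) for _ in range(s)]
    return modularity(sample, part)

# Toy example: two triangles joined by a single edge.
if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
    print(modularity(edges, part))
    print(sampled_modularity(edges, part, 1000))
```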