
News for September 2025

With October already here (!), time to look back at September, which brought us seven property testing papers, with a healthy mix of error-correcting codes, graph testing, distribution testing, and computational geometry:

Robust Local Testability of Tensor Products of Constant-Rate Algebraic Geometry Codes, by Sumegha Garg and Akash Sengupta (ECCC). Given two error-correcting codes \(\mathcal{C}_1,\mathcal{C}_2\), their tensor product code \(\mathcal{C}_1\otimes\mathcal{C}_2\) can be represented as the set of matrices \(M\) whose rows are codewords of \(\mathcal{C}_1\) and whose columns are codewords of \(\mathcal{C}_2\). How can one (very quickly) test whether an arbitrary matrix \(M\) is a codeword of \(\mathcal{C}_1\otimes\mathcal{C}_2\)? That is, when is such a tensor code a Locally Testable Code (LTC)?
A natural approach is to pick either a row or a column of \(M\) uniformly at random, and check whether it is a codeword of \(\mathcal{C}_1\) or \(\mathcal{C}_2\), respectively. If this natural test works (that is, if the distance of a random row or column from the corresponding code is, in expectation, proportional to the distance of \(M\) from the tensor code), then \(\mathcal{C}_1\otimes\mathcal{C}_2\) is said to be robustly locally testable. Which brings us to this paper: its main result shows that the tensor codes obtained from constant-rate Algebraic Geometry (AG) codes (a generalization of Reed–Solomon codes) are, indeed, robustly locally testable.
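To make the test concrete, here is a minimal sketch of one round of the natural row/column test, in Python; the membership oracles `is_codeword_1` and `is_codeword_2` are hypothetical helpers (in practice, checking membership amounts to evaluating the parity checks of the respective code):

```python
import random

def natural_tensor_test(M, is_codeword_1, is_codeword_2):
    """One round of the natural test for the tensor code C1 (x) C2.

    M is a matrix given as a list of rows; is_codeword_1 and
    is_codeword_2 are (hypothetical) membership oracles for C1 and C2.
    """
    n_rows, n_cols = len(M), len(M[0])
    if random.random() < 0.5:
        row = M[random.randrange(n_rows)]          # uniformly random row
        return is_codeword_1(row)                  # rows should lie in C1
    j = random.randrange(n_cols)
    col = [M[i][j] for i in range(n_rows)]         # uniformly random column
    return is_codeword_2(col)                      # columns should lie in C2
```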

Expansion without Connectivity: A Property Testing Perspective, by Irit Dinur and Oded Goldreich (ECCC). Testing whether a (bounded-degree) \(n\)-vertex graph is an expander is among the earliest tasks considered in property testing, dating back to the influential work of Goldreich and Ron (2000) which connected it to the distribution testing question of uniformity testing. This paper looks at an important twist on the question: what if, instead of testing whether a graph is an expander, one wanted to assess whether it is expanding? This sounds like a rephrasing of the same question, but with a crucial difference: here, a graph doesn’t need to be connected to be expanding, as long as each of its connected components is (by itself) an expander. After motivating this variant of the question, the authors provide a lower bound of \(\Omega(\sqrt{n})\) queries for testing expansion, and complement it with an \(O(n^{2/3+0.0001})\)-query algorithm solving (a bicriterion version of) the testing task. Interestingly, the upper bound involves another connection to distribution testing, this time to the task of generalized uniformity testing introduced by Batu and Canonne (2017).
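As a refresher on the Goldreich–Ron connection mentioned above (a sketch of the classic collision-based idea, not of this paper’s algorithm): on a \(d\)-regular expander, short lazy random walks mix to near-uniform over the component, so counting collisions among walk endpoints is effectively a uniformity test. The parameters `walk_len` and `num_walks` below are illustrative placeholders:

```python
import random
from collections import Counter

def collision_statistic(graph, start, walk_len, num_walks):
    """Count pairwise collisions among endpoints of lazy random walks
    from `start`. On an expander (component), the endpoint distribution
    is near-uniform, so few collisions occur; poor expansion shows up
    as a noticeably larger collision count. `graph` maps each vertex
    to its neighbor list (assumed d-regular for this sketch)."""
    endpoints = []
    for _ in range(num_walks):
        v = start
        for _ in range(walk_len):
            # lazy step: stay put with probability 1/2, else move
            if graph[v] and random.random() < 0.5:
                v = random.choice(graph[v])
        endpoints.append(v)
    counts = Counter(endpoints)
    return sum(c * (c - 1) // 2 for c in counts.values())
```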

Plane vs. Plane Low Degree Test, by Amey Bhangale and Silas Richelson (ECCC). Low-degree tests, one of the key ingredients of the original PCP theorem and a central object of study in related areas of TCS, must, given query access to a function \(f\colon \mathbb{F}_q^m \to \mathbb{F}_q\), test whether \(f\) is a degree-\(d\) polynomial, or far from every such low-degree function. The “plane vs. plane” test is a natural approach to this task: given a description of the function as a table listing its claimed degree-\(d\) restriction \(f_P\) on every plane \(P\), pick two random planes \(P, P'\), and check whether the two restrictions agree on the intersection \(P\cap P'\). The question can then be rephrased as that of characterizing the soundness of the test: if this “local test” accepts with probability \(\varepsilon\), must \(f\) be \(\textrm{poly}(\varepsilon)\)-close to a “global” low-degree polynomial? This work significantly improves the state of the art on this question in the low-agreement regime (i.e., when the soundness parameter \(\varepsilon\) can be very small: as small as \(\Omega(d/q)\)).
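For concreteness, here is a sketch of one round of such a plane-vs-plane consistency check, with field elements represented as integers mod a prime \(q\); the oracle `plane_table` (mapping a plane to its claimed restriction) is a hypothetical interface, and, for simplicity, the two restrictions are compared on random points of the common line rather than as polynomials:

```python
import random

def plane_vs_plane_test(plane_table, q, m, trials=1):
    """One round of a plane-vs-plane consistency check (a sketch of the
    folklore test, not this paper's exact formalization). Field elements
    are integers mod q (q prime); `plane_table` is a hypothetical oracle
    mapping a plane, encoded as (base point a, directions d0, d1), to a
    function evaluating the claimed degree-d restriction at a point."""
    rand_vec = lambda: tuple(random.randrange(q) for _ in range(m))
    a, d0 = rand_vec(), rand_vec()       # the common line {a + t*d0}
    d1, d2 = rand_vec(), rand_vec()      # two random planes through it
    f_P = plane_table((a, d0, d1))
    f_Q = plane_table((a, d0, d2))
    for _ in range(trials):              # compare restrictions on the line
        t = random.randrange(q)
        pt = tuple((ai + t * d0i) % q for ai, d0i in zip(a, d0))
        if f_P(pt) != f_Q(pt):
            return False                 # the two local views disagree
    return True
```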

Distribution Testing in the Presence of Arbitrarily Dominant Noise with Verification Queries, by Hadley Black and Christopher Ye (arXiv). The authors introduce a new setting for distribution testing, where the (unknown) distribution \(p\) to be tested is noisy (the “Huber contamination model”): that is, instead of getting i.i.d. samples from \(p\), the algorithm gets i.i.d. samples from some \(\tilde{p} = \alpha p + (1-\alpha)q\), where \(q\) is an arbitrary “noise distribution.” But all isn’t lost, as the algorithm can also query, for any sample, whether it comes from the “good” part of the mixture \(p\), or the “bad” part \(q\) (there is no “ugly” part). The obvious baseline is then to issue a verification query for every sample, discard the bad ones, and run any out-of-the-box testing algorithm on what remains (as sketched below): this only costs an extra \(1/\alpha\) factor in the sample complexity. The authors show that one can do (much) better than this baseline! In the case of uniformity/identity and closeness testing, their results show that one can trade additional noisy samples for fewer verification queries.
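A minimal sketch of that baseline, assuming hypothetical interfaces `draw_sample` (one sample from the mixture), `verify` (one verification query), and `base_tester` (any off-the-shelf tester):

```python
def filter_then_test(draw_sample, verify, base_tester, m):
    """The obvious baseline: verify every sample, keep only the ~alpha
    fraction coming from p, and feed m clean samples to any standard
    tester. In expectation this costs about m/alpha draws and m/alpha
    verification queries -- the 1/alpha overhead the paper improves on."""
    clean = []
    while len(clean) < m:
        x = draw_sample()
        if verify(x):            # one verification query per sample
            clean.append(x)      # keep only samples drawn from p
    return base_tester(clean)
```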

On the Structure of Replicable Hypothesis Testers, by Anders Aamand, Maryam Aliakbarpour, Justin Y. Chen, Shyam Narayanan, and Sandeep Silwal (arXiv). A paper from July, which we missed then. Say you want to perform your favorite distribution testing task, but in a replicable fashion (as defined by Impagliazzo, Lei, Pitassi, and Sorrell (2022)): that is, run again on a fresh set of samples, your algorithm should produce the same output with high probability. “Easy”! After all, your algorithm already accepts things it should accept (in the property) with high probability, and rejects those it should reject (far from the property) with high probability… Yet not so easy: the replicability requirement also asks the algorithm to be consistent on the in-between distributions, those that are close to the property but not in it! This question was introduced by Liu and Ye in 2024: this new paper significantly improves both the results themselves and our understanding of replicable testing algorithms. In particular, it provides a characterization of what any such algorithm must satisfy, and uses it to design a “canonical tester” in the replicable setting.
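For intuition, a standard recipe for making a statistic-based tester replicable (the general idea behind such constructions, not this paper’s specific one) is to compare the statistic against a randomly shifted threshold drawn from the algorithm’s shared internal randomness:

```python
import random

def replicable_test(samples, statistic, lo, hi, shared_seed):
    """Compare a well-concentrated test statistic to a threshold drawn
    uniformly from [lo, hi] using the algorithm's *shared* internal
    randomness. Two runs on fresh samples reuse the same seed, hence
    the same threshold, and can only disagree if the statistic lands
    within its (small) deviation of the threshold -- an event that is
    unlikely over the random choice of threshold.
    `statistic` is a hypothetical helper, e.g. a collision count."""
    rng = random.Random(shared_seed)        # shared across both runs
    threshold = rng.uniform(lo, hi)         # randomly shifted cutoff
    return statistic(samples) <= threshold  # accept iff statistic small
```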

Testing Depth First Search Numbering, by Artur Czumaj, Christian Sohler, and Stefan Walzer (arXiv). In the bounded-degree graph model for testing, the algorithm is given query access to the adjacency function (given a vertex \(v\), one can ask for the \(i\)-th neighbor of \(v\)). In this paper, the authors augment this model by adding labels to the vertices, along with the ability to query the label of any vertex; but, crucially, not the reverse access, which would be to get the vertex corresponding to a chosen label. The main motivation and focus of the paper is then to test whether the (numerical) labels of the input graph correspond to a valid DFS numbering, as one would obtain by running a DFS on the graph, or whether the graph is \(\varepsilon\)-far from any graph for which these labels are a DFS numbering. The authors provide both upper and lower bounds for testing DFS numbering, showing that \(\Theta(n^{1/3})\) queries are necessary and sufficient (for constant \(\varepsilon\)).
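To make the property concrete, here is an exact (linear-time, decidedly non-sublinear) verification sketch of what it means for labels to form a valid DFS numbering; the stack simulation below is our own illustration, not the paper’s tester, which of course avoids reading the whole graph:

```python
def is_dfs_numbering(adj, label):
    """Exact check that `label` is a valid DFS numbering: label[v] in
    {0,...,n-1} must be the position of v in some DFS traversal (a DFS
    forest, if the graph is disconnected). `adj[v]` is v's neighbor list."""
    n = len(adj)
    order = [None] * n
    for v in range(n):
        if not (0 <= label[v] < n) or order[label[v]] is not None:
            return False                  # labels must be a permutation
        order[label[v]] = v
    neighbors = [set(adj[v]) for v in range(n)]
    stack = [order[0]]                    # current DFS path
    for k in range(1, n):
        v = order[k]
        while stack and v not in neighbors[stack[-1]]:
            # DFS may backtrack past u only once all of u's
            # neighbors have already been visited (label < k)
            u = stack.pop()
            if any(label[w] >= k for w in adj[u]):
                return False
        stack.append(v)                   # v extends the current path,
    return True                           # or roots a new tree if stack emptied
```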

And finally, another paper we originally missed earlier this year, now updated on the arXiv:

Property Testing of Curve Similarity, by Peyman Afshani, Maike Buchin, Anne Driemel, Marena Richter, and Sampson Wong (arXiv). The authors initiate the study of property testing for the Fréchet distance (continuous or discrete): given two \(n\)-vertex curves \(P,Q\) and parameters \(\delta, \varepsilon\), the algorithm must accept if \(P,Q\) have Fréchet distance at most \(\delta\), and reject if they are “\(\varepsilon\)-far from having Fréchet distance \(\delta\)” (where this “farness” is formally defined by requiring that the best coupling path between \(P\) and \(Q\) in their \(\delta\)-free space matrix have cost at least \(\varepsilon n\)).
The authors provide two different (incomparable) algorithms, analyzed in terms of a locality parameter \(t\): the first shows that, when this parameter is known, \(\tilde{O}(t/\varepsilon)\) queries are sufficient; the second achieves query complexity \(\tilde{O}(t^2 \max(t,\log n)/\varepsilon)\) when \(t\) is unknown. The authors conclude with a few open questions, which readers of this blog keen to explore property testing in computational geometry may find interesting!
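For readers new to the Fréchet distance, here is the textbook \(O(n^2)\) dynamic program of Eiter and Mannila for its discrete variant, included purely as background on the quantity being tested (the whole point of the paper being to approximately decide it in sublinear time):

```python
from functools import lru_cache

def discrete_frechet(P, Q, dist):
    """Discrete Frechet distance between point sequences P and Q under
    an arbitrary metric `dist`, via the classic Eiter-Mannila recursion.
    (Recursion depth is about len(P) + len(Q): fine for short curves.)"""
    @lru_cache(maxsize=None)
    def d(i, j):
        # cost of the best coupling of the prefixes P[:i+1], Q[:j+1]:
        # at each step, advance on P, on Q, or on both simultaneously
        cur = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return cur
        if i == 0:
            return max(cur, d(0, j - 1))
        if j == 0:
            return max(cur, d(i - 1, 0))
        return max(cur, min(d(i - 1, j), d(i, j - 1), d(i - 1, j - 1)))
    return d(len(P) - 1, len(Q) - 1)
```

For instance, `discrete_frechet([0, 1, 5], [0, 2, 6], lambda a, b: abs(a - b))` returns 1 for these two one-dimensional “curves.”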