Category Archives: Monthly digest

News for August 2025

Our press release this month features seven papers. Let’s take a look: a survey on efficiently testable graph properties; two papers on new directions in graph property testing; one that, after some meditations on distribution testing, introduces a novel direction; another on testing properties of strings; and two papers exploring property testing with a quantum twist. Without further delay, let us analyze our spread.

Polynomial Property Testing by Lior Gishboliner and Asaf Shapira (arXiv) This survey traces the development of property testers for dense graph properties through the ages. The motivating question echoes throughout the survey: what is it that makes a dense graph property efficiently testable? Noting that any tester for your favorite property can be simulated by a canonical tester with at most a quadratic overhead in the number of queries, the survey focuses on properties which admit canonical testers that make at most \(poly(1/\varepsilon)\) queries. This part of the story is narrated in Section 3 of the paper, which begins by reviewing the seminal 2002 result of Alon that kicked off research in this area: the property of \(H\)-freeness can be tested efficiently \(\textit{iff}\) \(H\) is bipartite. Earlier this year, Gishboliner and Shapira extended this result to testing \(H\)-freeness in \(k\)-uniform hypergraphs: such a property is efficiently testable \(\textit{iff}\) \(H\) is \(k\)-partite. The survey then presents a result which extends this to the setting of induced subgraphs; in particular, it characterizes the graphs \(H\) for which induced \(H\)-freeness is efficiently testable. The section concludes with a discussion of testing graph properties of bounded \(\textit{VC}\)-dimension.

Quality control in sublinear time: a case study via random graphs by Cassandra Marcussen, Ronitt Rubinfeld, and Madhu Sudan (arXiv) This paper introduces a general framework in which you can situate the following kind of question. Suppose you are given a probability parameter \(p\) and a value \(k \in \mathbb{N}\). Additionally, suppose you are given a graph \(G = (V,E)\) and asked — is it true that the quantity

\(\rho_k(G) := C_k(G)/\mathbf{E}_{G' \sim \mathcal{G}_{n,p}}[C_k(G')] \approx 1\)

or is it the case that

\(|\rho_k(G) - 1| > \varepsilon\)

(where the function \(C_k\) counts the number of \(k\)-cliques in a graph)? The paper presents algorithms for this problem which issue at most \(\Big(\frac{1}{p}\Big)^{O(k)}\) queries.
Now, let me explain the broader framework this paper introduces. You want to think of \(\rho_k(G)\) as measuring the quality of your graph \(G\): you want to understand whether \(G\) looks like a genuine sample from \(\mathcal{G}_{n,p}\) (that is, whether \(G\) is a “high-quality graph”), as measured by the function \(C_k\). As the authors explain via an analogy, this is supposed to capture the following sentiment: you go to a bookstore and you want to buy a book with quality close to \(1\). The goal, thus, is to choose a high-quality book. Note that this goal is weaker than both of the following goals: testing whether

\(\text{quality(book)} \approx 1\) for an adversarial book,

OR whether

\(\text{book} \sim \texttt{Some Distribution of your choice over books in store}\).

The authors situate quite a few other problems in this overarching framework and present algorithms for these.
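To make the quality ratio concrete, here is a minimal Monte Carlo sketch (toy code under my own conventions, not the authors’ query-efficient algorithm): it estimates \(C_k(G)\) by sampling \(k\)-subsets of vertices, and normalizes by the exact expectation \(\binom{n}{k} p^{\binom{k}{2}}\) of the clique count under \(\mathcal{G}_{n,p}\).

```python
# Hypothetical toy sketch: estimate rho_k(G) = C_k(G) / E[C_k] by sampling
# k-subsets of vertices; nothing here reflects the paper's (1/p)^O(k) queries.
import itertools, math, random

def is_clique(adj, S):
    return all(v in adj[u] for u, v in itertools.combinations(S, 2))

def estimate_rho_k(adj, n, p, k, trials=200_000):
    hits = sum(is_clique(adj, random.sample(range(n), k)) for _ in range(trials))
    est_Ck = (hits / trials) * math.comb(n, k)            # estimate of C_k(G)
    expected_Ck = math.comb(n, k) * p ** math.comb(k, 2)  # E[C_k] under G(n,p)
    return est_Ck / expected_Ck

# Toy usage: a genuine sample from G(n,p) should give rho_k close to 1.
n, p, k = 300, 0.3, 3
adj = {v: set() for v in range(n)}
for u, v in itertools.combinations(range(n), 2):
    if random.random() < p:
        adj[u].add(v); adj[v].add(u)
print(estimate_rho_k(adj, n, p, k))
```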

Sublinear-Time Approximation for Graph Frequency Vectors in Hyperfinite Graphs by Gregory Moroie (arXiv) A useful first step for several algorithms on planar graphs of bounded degree is to exploit the fact that such graphs admit a hyperfinite decomposition. Using a primitive called the partition oracle, one can estimate in sublinear time various important graph parameters (like max-cut value, independent set size, and so on) to within a multiplicative \((1 \pm \varepsilon)\). Often, these algorithms proceed via a graph simplification step, where one approximates the input graph with a crude sketch tracking frequency counts of various subgraphs on the pieces produced by the partition oracle, each of which has size at most \(k = O(1/\varepsilon^2)\). The featured paper gives an algorithm which, in time \(poly(1/\varepsilon, d^k)\), finds a graph \(H\) that can be used to approximate the corresponding frequency vector of \(G\).

Instance-Optimal Uniformity Testing and Tracking by Guy Blanc, Clément L. Canonne, and Erik Waingarten (arXiv) After some meditations on the task of testing uniformity of distributions with support \([n]\), the featured paper suggests that despite the impressive algorithmic success on this task, there is a lot that one still desires. In particular, one fundamental reason for dissatisfaction with the sample-optimal results for this task is that often, in real-life situations, the only thing we are interested in is evidence of non-uniformity — without any distance notion in the picture. So, the paper formalizes the following question. Suppose we have a setup where at each time step \(t \in \mathbb{N}\), we receive as input a sample \(x_t \sim \boldsymbol{p}\), where \(\boldsymbol{p}\) is an unknown probability distribution supported on \([n]\). We want our algorithm to run continuously and reject if, at any point in time, it discovers \(\boldsymbol{p} \neq \texttt{UNIF}[n]\). The goal is to do this using as few samples as possible. To rule out trivial algorithms which straight-up reject \(\boldsymbol{p}\), we require that the probability that the algorithm EVER rejects \(\texttt{UNIF}[n]\) itself is at most some sufficiently small \(\delta > 0\). Denote by \(OPT(\boldsymbol{p})\) the sample complexity of the best possible algorithm for this task, one which knows the entire distribution profile \(\boldsymbol{p}\) in hindsight. The featured paper devises an algorithm for this problem which is \(poly(\log (OPT))\)-competitive against this best algorithm.
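Here is a heuristic sketch of the tracking setup, with an ad-hoc rejection threshold standing in for the paper’s carefully calibrated one: watch the stream, and at doubling checkpoints compare the collision count against its expectation under \(\texttt{UNIF}[n]\), with slack growing with the epoch so that a truly uniform stream is (heuristically) never rejected.

```python
# Heuristic sketch only: the epoch thresholds below are ad hoc, whereas the
# paper's algorithm sets them to provably bound the ever-reject probability
# and to be poly(log OPT)-competitive with the best p-aware algorithm.
import random
from collections import Counter

def track_uniformity(stream, n, delta=0.01):
    counts, seen, next_checkpoint, epoch = Counter(), 0, 256, 1
    for x in stream:
        counts[x] += 1
        seen += 1
        if seen == next_checkpoint:
            pairs = seen * (seen - 1) / 2
            collisions = sum(c * (c - 1) / 2 for c in counts.values())
            expected = pairs / n  # E[collisions] under UNIF[n]
            slack = 3.0 * (expected ** 0.5 + 1) * (epoch / delta) ** 0.5
            if collisions > expected + slack:
                return seen  # rejected: evidence of non-uniformity
            next_checkpoint *= 2
            epoch += 1
    return None  # never rejected

n = 1000
biased = (random.randrange(n // 2) for _ in range(10 ** 6))  # far from uniform
print(track_uniformity(biased, n))  # rejects after a few thousand samples
```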

Counting Distinct Square Substrings in Sublinear Time by Panagiotis Charalampopoulos, Manal Mohamed, Jakub Radoszewski, Wojciech Rytter, Tomasz Waleń, and Wiktor Zuba (arXiv) A square in a string is a substring of the form \(XX\), where \(X\) is a nonempty sequence of characters. The featured paper studies how to count the number of distinct square substrings in a string of length \(n\) over an alphabet of size \(\sigma\), given in packed form. The authors design a sublinear-time algorithm running in \(O(n/\log_\sigma n)\) time in the word-RAM model by exploiting the structure of runs (maximal periodic fragments). To handle these runs systematically, the paper uses sparse Lyndon roots — where a Lyndon root is essentially the lexicographically smallest rotation of a repeated block — so that each repetition has a unique representative. The paper further groups certain overlapping runs into pyramidal structures to make counting efficient. This avoids scanning every position of the string and shows that distinct squares can be counted strictly faster than reading the entire string symbol by symbol, giving the first sublinear-time algorithm for this problem.
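To pin down the object being counted, here is a brute-force baseline (it inspects quadratically many candidate substrings, nothing like the paper’s packed sublinear-time algorithm):

```python
# Naive baseline for sanity-checking the definition on tiny inputs only.
def distinct_squares(s):
    n, squares = len(s), set()
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            if s[i:i + half] == s[i + half:i + 2 * half]:
                squares.add(s[i:i + 2 * half])
    return len(squares)

print(distinct_squares("abaabaabaaba"))  # its squares include "aa" and "abaaba"
```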

Resource-efficient algorithm for estimating the trace of quantum state powers by Myeongjin Shin, Junseo Lee, Seungwoo Lee, and Kabgyun Jeong (arXiv) The problem of estimating the trace of quantum state powers refers to computing \(\text{Tr}(\rho^k)\), where \(\rho\) is a quantum state (density matrix) and \(k\) is an integer — this quantity is key for tasks such as evaluating nonlinear functions of quantum states, Rényi entropy, and detecting entanglement. The featured paper presents a resource-efficient algorithm that reduces the quantum hardware requirements by using only \(O(\widetilde r)\) qubits and \(O(\widetilde r)\) multi-qubit gates, where \(\widetilde{r} = \min\big(\text{rank}(\rho), \lceil \ln(2k / \varepsilon)\rceil\big)\). The featured paper builds on tools from the mathematical foundations of entanglement spectroscopy (something called the Newton-Girard method): it estimates the traces of low powers \(\text{Tr}(\rho^i)\) for \(i = 1, \dots, \widetilde r\) using shallow quantum circuits and then uses classical recursion to reconstruct or approximate \(\text{Tr}(\rho^k)\) even when \(k\) is large — achieving rank-dependent complexity that works well for low-rank states.
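The classical half of that pipeline can be written down directly. Below is a sketch under the stated low-rank assumption: for a rank-\(r\) state, the elementary symmetric polynomials \(e_j\) of the eigenvalues vanish for \(j > r\), so the power sums \(p_i = \text{Tr}(\rho^i)\) satisfy a depth-\(r\) linear recurrence; Newton’s identities recover \(e_1, \dots, e_r\) from \(p_1, \dots, p_r\), and the recurrence then extends to any \(k\).

```python
# Sketch of the classical Newton--Girard recursion (the quantum circuits that
# estimate the low-power traces are, of course, not modeled here).
import numpy as np

def extend_power_sums(p, k):
    """p = [Tr(rho), ..., Tr(rho^r)] for a rank-r state; returns Tr(rho^k)."""
    r = len(p)
    e = [1.0]  # e_0
    for j in range(1, r + 1):
        # Newton's identities: j * e_j = sum_{i=1}^{j} (-1)^{i-1} e_{j-i} p_i
        e.append(sum((-1) ** (i - 1) * e[j - i] * p[i - 1]
                     for i in range(1, j + 1)) / j)
    ps = list(p)
    while len(ps) < k:
        # e_j = 0 for j > r gives: p_k = sum_{i=1}^{r} (-1)^{i-1} e_i p_{k-i}
        ps.append(sum((-1) ** (i - 1) * e[i] * ps[-i] for i in range(1, r + 1)))
    return ps[k - 1]

# Toy check against a random rank-3 density matrix on 8 dimensions.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))
rho = A @ A.T
rho /= np.trace(rho)
p = [np.trace(np.linalg.matrix_power(rho, i)) for i in range(1, 4)]
print(extend_power_sums(p, 10), np.trace(np.linalg.matrix_power(rho, 10)))
```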

Adversarially robust quantum state learning and testing by Maryam Aliakbarpour, Vladimir Braverman, Nai-Hui Chia, and Yuhan Liu (arXiv) The featured paper studies quantum state learning under a strong corruption model called \(\gamma\)-adversarial corruption, where an adversary can arbitrarily modify a \(\gamma\)-fraction of measurement outcomes, going beyond the usual SPAM noise assumptions. The authors give a non-adaptive measurement algorithm that learns a rank-\(r\) state up to trace-distance error \(\widetilde O(\gamma \sqrt{r})\), and also prove a matching lower bound of \(\Omega(\gamma \sqrt{r})\), showing the guarantee is optimal. As the abstract notes (and piques your curiosity from the get-go), the results are at once optimistic and pessimistic: for constant-rank states, the error bound is dimension-independent, so adversarial corruption can be tolerated without dependence on the Hilbert space dimension; but for general full-rank states, the error necessarily scales as \(\gamma \sqrt{d}\), which means even a vanishingly small corruption rate, around \(1/\sqrt{d}\), is enough to destroy any non-adaptive learning algorithm.

The paper underscores that dimension-dependent error is inevitable in this adversarial setting because single-copy measurements carry very limited information, so any algorithm must amplify small statistical differences to extract the state. This sensitivity leaves the estimation process inherently vulnerable to adversarially introduced errors unless the state has low rank, in which case the amount of information required to recover the state is substantially smaller.

News for July 2025

Finally, a post on the first! This month we have four papers, on understanding Nash Equilibria, deeper meditations on distribution testing, sublinear average degree estimation, and a paper on degree distributions that I’m quite excited to see.

Protocols for Verifying Smooth Strategies in Bandits and Games by Miranda Christ, Daniel Reichman, and Jonathan Shafer (arXiv). Consider a \(k\)-player game where each player has \(n\) options. Given a (mixed) strategy for each player, we would like to know if this is an approximate Nash equilibrium. But think of \(n\) as extremely large, so algorithms need to be sublinear in \(n\). This is a natural requirement, since in many real-world games, the options available to each player come from an extremely large space. This paper studies this setting from the verification perspective. Assume the algorithm/verifier can communicate with an all-powerful prover, and has access to the game matrix/tensor. We want the verifier to make sublinear queries to the game and have sublinear communication (in bits) with the prover. The main result shows that, under some “smoothness” condition on the near-equilibrium strategies, one can actually get such sublinear verifiers for Nash equilibria. The starting results in the paper are for the simpler multi-armed bandit setting, and are then generalized to more complex games.

Faster Estimation of the Average Degree of a Graph Using Random Edges and Structural Queries by Lorenzo Beretta, Deeparnab Chakrabarty, and C. Seshadhri (arXiv). The problem of estimating the average degree of a graph in sublinear time is an old and classic problem. In a seminal work, Goldreich-Ron gave \((1+\varepsilon)\)-approximations with \(\widetilde{O}(\sqrt{n})\) queries, where \(n\) is the number of vertices. (Up to \(\log n, \varepsilon\) factors, the bound is optimal.) This works in the classic model where an algorithm can sample uar vertices, query degrees, and get neighbors. Suppose uar edges are also available. In that case, Motwani-Panigrahy-Xu give an algorithm that makes \(\widetilde{O}(n^{1/3})\) queries. What if one can also query pairs to check if they are edges? This is a standard operation in sublinear graph algorithms. Interestingly, this paper gives an \(\widetilde{O}(n^{1/4})\) query algorithm for this setting. And, if one can get the entire adjacency list in a single query (common in the study of web networks), then there is an \(\widetilde{O}(n^{1/5})\) query algorithm for this setting. This paper, following Beretta-Tetek, also studies the case where the number of vertices \(n\) is unknown, in which case the previous complexities change. Overall, there is a nuanced picture of the complexity of estimating the average degree.
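As a taste of why random edges help, here is a folklore estimator (not this paper’s algorithm): if \(u\) is a uniformly random endpoint of a uniformly random edge, then \(\mathbf{E}[1/\deg(u)] = n/2m\), so the reciprocal of an empirical average of \(1/\deg\) estimates the average degree \(2m/n\).

```python
# Folklore sketch: average degree from random edge + degree queries.
# Each vertex v is hit with probability deg(v)/2m, so E[1/deg] = n/2m.
import random

def avg_degree_estimate(sample_edge, deg, samples=10_000):
    total = 0.0
    for _ in range(samples):
        u, v = sample_edge()        # uniformly random edge query
        w = random.choice((u, v))   # random endpoint
        total += 1.0 / deg(w)       # degree query
    return samples / total          # converges to 2m/n

# Toy usage on an explicit edge list.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
print(avg_degree_estimate(lambda: random.choice(edges), degree.get),
      2 * len(edges) / len(degree))  # estimate vs. true average degree
```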

Towards Tight Bounds for Estimating Degree Distribution in Streaming and Query Models by Arijit Bishnu, Debarshi Chanda, Gopinath Mishra (arXiv). In network science, which is the study of real-world large graphs, probably one of the most important graph parameters studied is the degree distribution. Formally, let \(N(d)\) be the number of vertices of degree \(d\). The set \(\{N(d)\}\) is sometimes called the “degree distribution”, though to be precise, it is the “complementary cumulative degree histogram” (ccdh). There is a rich applied history of estimating the ccdh, but the only truly sublinear bound was given by Eden-Jain-Pinar-Ron-Seshadhri. The actual complexity was a little involved, and yours truly had posed (in WOLA 2019) the question of determining the optimal bound. Which, I’m happy to say, this paper has nailed down. Ignoring log factors, the complexity is \(\Theta(m/h)\), where \(m\) is the number of edges and \(h\) is the, well, \(h\)-index of the degree distribution (the largest number \(k\) such that there are at least \(k\) vertices of degree at least \(k\)). The algorithm, which is similar to the previous result, is eminently practical and uses a combination of vertex and edge sampling to estimate different parts of the distribution. Moreover, the algorithm can be implemented in the streaming setting.
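To fix the definitions, here is the trivial offline computation of the ccdh and its \(h\)-index from a full degree list, i.e., exactly the linear-time baseline that the \(\Theta(m/h)\)-query algorithm gets to avoid:

```python
# Offline baseline: compute the ccdh (number of vertices of degree >= d) and
# the h-index of the degree sequence by reading every degree.
from collections import Counter

def ccdh_and_h_index(degrees):
    hist = Counter(degrees)
    max_d = max(hist)
    ccdh, running = [], 0
    for d in range(max_d, 0, -1):
        running += hist.get(d, 0)
        ccdh.append(running)
    ccdh.reverse()  # ccdh[d-1] = number of vertices of degree >= d
    h = max((d for d in range(1, max_d + 1) if ccdh[d - 1] >= d), default=0)
    return ccdh, h

degrees = [5, 4, 4, 3, 2, 1, 1, 1]
# h = 3: four vertices have degree >= 3, but only three have degree >= 4
print(ccdh_and_h_index(degrees))
```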

Location-Invariant Properties of Functions versus Properties of Distributions: United in Testing but Separated in Verification by Oded Goldreich and Guy Rothblum (ECCC). Let us build up the distribution testing context. Consider a distribution \(\mathcal{D}\) over universe \([n]\). In the standard distribution testing setup, the tester gets samples from this distribution and has to (approximately) decide some property. One could imagine that there is an underlying space/function that actually generates this distribution. So think of \(f:[m] \to [n]\), where \(\mathcal{D}(i)\) is the fraction of the domain values \(j\) where \(f(j) = i\). A distribution property can be translated to the property/set of functions that induce the desired distributions. In the latter setting, there is more power, since a tester can explicitly query \(j \in [m]\) to get \(f(j)\). (While, in the distribution setting, the tester merely gets \(f(j)\) for a uar \(j \in [m]\).) It turns out that property testing in both settings is basically equivalent. This paper shows, in contrast, that there is a gap between these settings for the verification problem. Consider “doubly sublinear interactive protocols”, where the verifier must make a number of queries sublinear in the vanilla testing complexity, and the honest prover must make a number of queries sublinear in the learning complexity. For the classic uniformity testing problem, there is no such interactive protocol in the distribution testing setting. But, interestingly, there is a “doubly sublinear interactive protocol” in the functional setting, where the verifier can use \(o(\sqrt{n})\) queries to verify uniformity.

News for June 2025

Another (sigh) delayed post. A moderately busy month, with 4 papers spanning sublinear graph algorithms and, of course, distribution testing.

A 0.51-Approximation of Maximum Matching in Sublinear \(n^{1.5}\) Time by Sepideh Mahabadi, Mohammad Roghani, and Jakub Tarnawski (arXiv). The title says it all. This paper gives a 0.51-approximation for the maximum matching in, well, \(O(n^{1.5})\) time. Here’s the back story. There are two kinds of sublinear maximum matching algorithms. The first kind (achieved after a long line of impressive results) gets \((1- \varepsilon)\)-approximations in time \(O(n^{2 - \Omega_{\varepsilon}(1)})\). This means that in sublinear \(o(n^2)\) time, one can get arbitrarily good approximations. The second kind beats the simple \(0.5\)-approximation (obtained from maximal matchings), but aims for closer to \(O(n)\) time. Behnezhad-Roghani-Rubinstein-Saberi gave a \((0.5+\varepsilon)\)-approximation algorithm running in \(n^{1+\varepsilon}\) time. But the largest value of \(\varepsilon\) possible in these results is tiny. Can one get a “significant” improvement over 0.5-approximations in time “significantly less” than \(n^2\)? The answer is yes, as the title explains.

Testing (Conditional) Mutual Information by Jan Seyfried, Sayantan Sen, and Marco Tomamichel (arXiv, ECCC). Consider a distribution over pairs \((A, B)\). The mutual information \(I(A,B)\) of \(A\) and \(B\) is the minimum KL-divergence between the original distribution and any product distribution over the support of \(A\) and \(B\). So the mutual information is zero iff \((A,B)\) is a product distribution, or equivalently, \(A\) and \(B\) are independent. A natural distribution testing question is to distinguish \(A\) and \(B\) being independent from \(I(A,B) > \varepsilon\). One can take this problem a step further. Suppose the distribution is over triples \((A,B,C)\). Then can one distinguish conditional independence (\(I(A, B | C) = 0\)) from sufficient conditional dependence, meaning \(I(A, B | C) > \varepsilon\)? This problem was studied by (our very own) Canonne together with Diakonikolas, Kane, and Stewart, and it is known that \(O(d^{7/4})\) samples suffice. Here, \(d\) denotes the maximum alphabet size for each coordinate, so the overall domain size is \(d^3\). This paper gives a much more nuanced complexity, which depends separately on the support sizes of each coordinate. It is interesting that each coordinate plays a different role in the final complexity.
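As a reference point for the quantity being tested, here is the naive plug-in estimate of \(I(A,B)\) (which, unlike the testers above, needs enough samples to resolve the whole joint distribution): the KL divergence between the empirical joint distribution and the product of its empirical marginals.

```python
# Plug-in mutual information estimate from samples; a baseline only, with
# none of the sample-efficiency the featured paper is after.
import math
import random
from collections import Counter

def empirical_mutual_information(pairs):
    n = len(pairs)
    joint = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in joint.items())

# Independent coordinates give I ~ 0; equal coordinates give I ~ log(4).
indep = [(random.randrange(4), random.randrange(4)) for _ in range(100_000)]
equal = [(x, x) for x in (random.randrange(4) for _ in range(100_000))]
print(empirical_mutual_information(indep), empirical_mutual_information(equal))
```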

Tight simulation of a distribution using conditional samples by Tomer Adar (arXiv). Consider a distribution over \(\{0,1\}^n\). Many distribution testing results would not yield useful algorithms for this setting, since the domain is exponentially large. Inspired by this setting, the subcube conditional model was introduced by Bhattacharyya-Chakraborty. In this setting, we can generate samples conditioned on any subcube. The aim of this paper goes beyond property testing. Can we actually estimate specific values of the distribution with a few queries? Can we simulate the distribution, which means to “locally compute” a distribution that is close to the input? Specifically, given a domain point \(x\), we want to compute a value \(\mu(x)\) such that \(\mu\) represents a distribution close to the input. This paper gives such a result, where each \(\mu(x)\) can be computed in \(O(n^2/\varepsilon^2)\) queries. The final distribution has KL-divergence at most \(\varepsilon^2\) from the original.
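The natural route to such a \(\mu\) is the chain rule, \(\mu(x) = \prod_i \Pr[x_i \mid x_1, \dots, x_{i-1}]\), with each conditional estimated from prefix-conditioned (subcube) samples. A toy sketch of this idea follows; the paper’s contribution is the refined, query-tight version of this plan.

```python
# Chain-rule sketch: estimate mu(x) coordinate by coordinate from samples
# conditioned on prefixes of x (prefixes are special subcubes).
import random

def simulate_point_mass(x, cond_sampler, samples_per_level=2000):
    """cond_sampler(prefix) draws a full string conditioned on that prefix."""
    mu, prefix = 1.0, ()
    for i, bit in enumerate(x):
        draws = [cond_sampler(prefix) for _ in range(samples_per_level)]
        p_hat = sum(1 for y in draws if y[i] == bit) / samples_per_level
        mu *= p_hat                # estimate of Pr[x_i = bit | x_1..x_{i-1}]
        prefix = prefix + (bit,)
    return mu

# Toy input distribution: n independent 0.7-biased bits.
n, bias = 6, 0.7
def cond_sampler(prefix):
    return prefix + tuple(int(random.random() < bias) for _ in range(n - len(prefix)))

x = (1, 1, 0, 1, 0, 1)
print(simulate_point_mass(x, cond_sampler), bias ** 4 * (1 - bias) ** 2)
```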

Sampling and Identity-Testing Without Approximate Tensorization of Entropy by William Gay, William He, Nicholas Kocurek, and Ryan O’Donnell (arXiv). Again, consider a distribution \(D\) over a large product domain \(\Sigma^n\). Our aim is identity testing, so we wish to determine if \(D = D'\) or \(|D - D'| > \varepsilon\), where \(D'\) is some explicitly known distribution. The model is much weaker than subcube conditional access: essentially, we can only get samples from subcubes of dimension (at least) \(n-1\). Blanca-Chen-Štefankovič-Vigoda studied identity testing where \(D\) satisfies a condition called “approximate tensorization of entropy” (ATE). By Jensen’s inequality, if \(D\) is a product distribution, the entropy of \(D\) is at most the sum of entropies of the marginal distributions. If the entropy of \(D\) is (say) at most a constant factor of this sum, we say it satisfies the ATE. Blanca-Chen-Štefankovič-Vigoda prove that \(\widetilde{O}(n/\varepsilon)\) samples suffice for identity testing. This paper studies input distributions that are mixtures of (say, a constant number of) distributions satisfying ATE. Interestingly, these mixtures do not satisfy the ATE. But this paper shows that one can still get \(\widetilde{O}(n/\varepsilon)\)-query identity testers.

News for May 2025

Apologies for the delay in publishing this digest. May has seen a lot of activity in property testing, with quite a lot from the quantum side: 8 papers in total!

Improved Sample Upper and Lower Bounds for Trace Estimation of Quantum State Powers (arXiv) by Kean Chen and Qisheng Wang. Say you have access to copies of a quantum state \(\rho\) (lucky you!) and want to estimate some property of \(\rho\): specifically, you’d like an additive estimate of \(\mathrm{Tr} \rho^q\), for some \(q > 1\) of your choosing. (This quantity, for various \(q\), has connections to Rényi and Tsallis entropies.) How many copies of the state do you need? The authors significantly strengthen previous bounds on the sample complexity of the task, providing a tight answer for \(q > 2\), and improved results for \(1 < q < 2\). Besides their results, they also provide a list of future directions.

On estimating the quantum \(\ell_\alpha\) distance (arXiv), by Yupan Liu and Qisheng Wang. Now you’re given access to two quantum states \(\rho_0,\rho_1\), and you want to estimate their \(\alpha\)-Schatten distance, for a fixed \(\alpha>1\) — the “quantum analogue” of estimating the \(\ell_\alpha\) distance between two (classical) discrete probability distributions. The authors provide computationally efficient algorithms with constant sample complexity (for constant additive error), and characterize the computational complexity of the task as a function of \(\alpha\) (as \(\alpha \to 1\)).

Quantum Hamiltonian Certification (arXiv), by Minbo Gao, Zhengfeng Ji, Qisheng Wang, Wenjun Yu, and Qi Zhao. Now, you are given access to the time evolution operator \(e^{-i Ht}\), for an unknown Hamiltonian \(H\) over \(n\) qubits. Can you efficiently test (certify) whether \(H\) is close to (or far from) a reference Hamiltonian \(H_0\)? The paper considers this tolerant certification task with respect to the (normalized) Frobenius distance, and provides tight bounds on the evolution time (the measure of “sample complexity” in this time-evolution setting). The authors also discuss consequences of their results for other norms, and give an ancilla-free algorithm for the tolerant testing task.

Hamiltonian Locality Testing via Trotterized Postselection (arXiv), by John Kallaugher and Daniel Liang. When it Hamiltonian-rains, it Hamiltonian-pours! This paper considers another tolerant testing task on Hamiltonians, where instead of certifying whether \(H\) is close to a reference \(H_0\), the goal is to tolerantly test whether it is close to being “local” (vs. far from any local Hamiltonian), where a Hamiltonian is “local” if it can be decomposed as a sum of small-weight Pauli operators. The authors provide significantly improved upper bounds on the time evolution needed for this task, as well as a tight lower (and upper) bound for algorithms allowed reverse time evolution.

And for the last quantum paper of the month (that we could find), quantum tomography:

Sample-optimal learning of quantum states using gentle measurements (arXiv), by Cristina Butucea, Jan Johannes, and Henning Stein. A gentle measurement, as formalized by Aaronson and Rothblum, is a measurement which only “mildly” collapses the quantum state. In their paper, Aaronson and Rothblum established two-way connections between gentle measurements and differential privacy. In this work, the authors study quantum tomography with gentle measurements, and provide what looks like a connection to local differential privacy, along with a new information-theoretic tool: a quantum strong data processing inequality (SDPI) for gentle measurements.

And with this perfect segue, we now switch from quantum to local differential privacy:

Locally Differentially Private Two-Sample Testing (arXiv), by Alexander Kent, Thomas B. Berrett, and Yi Yu. This paper considers the question of two-sample testing (maybe more familiar to our readers under the name closeness testing) under local differential privacy, both for discrete and smooth continuous distributions. While some previous results on identity testing under LDP carry over to closeness testing, this work extends them to the smooth continuous case and, with an eye to practicality, focuses on a particular class of testers: permutation tests.

… and to Boolean functions!

Testing Juntas Optimally with Samples (arXiv), by Lorenzo Beretta, Nathaniel Harms, and Caleb Koch. It is a truth universally acknowledged, that a single month in possession of a good number of property testing results, must be in want of a junta testing paper. And lo and behold: junta testing, indeed! The contributions of this paper are two-fold: first, the authors provide a tight query complexity bound for junta testing in the distribution-free setting (the testing analogue of the PAC learning setting, where distance is measured with respect to an arbitrary, unknown, ambient distribution). Second, they give a lower bound showing that for tolerant junta testing in the distribution-free setting, one may as well learn the whole thing: testing is as hard as learning.

And to conclude… anti-concentration inequalities, and applications.

Algebraic aspects of the polynomial Littlewood-Offord problem (arXiv), by Zhihan Jin, Matthew Kwan, Lisa Sauermann, and Yiting Wang. While this paper is concerned primarily with the (polynomial) Littlewood–Offord problem, and how to improve a recent theorem of Meka, Nguyen and Vu under additional structural assumptions, the authors remark that some of the lemmas they establish along the way have direct implications for property testing of matrices and tensors. This is discussed in Section 1.1.4 of the paper.

News for April 2025

The bonanza of property testing results continues this month, with seven new papers on the arXiv! And with range, too: from regular languages to quantum states, with detours through distribution testing, Boolean functions, and relative error property testing. Oh, and yes, also, (quantum) magic!

The Trichotomy of Regular Property Testing (arXiv), by Gabriel Bathie, Nathanaël Fijalkow, and Corto Mascle. In property testing of regular languages, initiated by Alon, Krivelevich, Newman, and Szegedy in 2001, one has to decide whether an input word belongs to the target language \(L\), or is at distance at least \(\varepsilon\) from every \(x \in L\) (where the distance is typically Hamming or the (easier) edit distance). Surprisingly, it was shown that every regular language can be tested with \(\tilde{O}(1/\varepsilon)\) queries, independent of the input size. But is that tight? And is that \(\tilde{O}\) necessary? Many years of work later, the main result of this paper settles the question, showing that for testing under the Hamming distance there are only three options: either a language is trivial (0 queries needed!), or easy (\(\Theta(1/\varepsilon)\) queries necessary and sufficient), or just plain hard (\(\Theta(\log(1/\varepsilon)/\varepsilon)\) queries necessary and sufficient). Nothing else!

A Mysterious Connection Between Tolerant Junta Testing and Agnostically Learning Conjunctions (arXiv), by Xi Chen, Shyamal Patel, and Rocco Servedio. (1) Take two well-studied, notoriously challenging, and seemingly unrelated problems about Boolean functions: agnostically learning conjunctions (that is, learning conjunctions with noise), and tolerantly testing juntas. (2) Unearth a new connection between the two tasks. (3) Write a very entertaining, illuminating introduction about this. (4) Oh, also, provide a new \(\tilde{O}(2^{n^{1/3}})\)-time agnostic learning algorithm for conjunctions, improving the previous best known result after 18 years; then use the new connection to obtain a \(\tilde{O}(2^{k^{1/3}})\)-query tolerant tester for juntas, thus showing a polynomial separation between adaptive and non-adaptive algorithms for this task. (5) You get this paper.

Distribution Testing Meets Sum Estimation (arXiv), by Pinki Pradhan and Sampriti Roy. In the sum estimation problem, there is a set of \(n\) elements, each with a non-negative weight, and the goal is to estimate the total weight \(W\) while minimizing the number of weights queried. Under a model which allows both weighted and uniform sampling, this is known to be achievable with \(O(n^{1/3})\) queries (Beretta and Tětek). This paper considers the task under two additional assumptions: first, the weights are non-increasing, and second, the algorithm is allowed conditional weighted and uniform sampling (i.e., the same two types of sampling, but conditioned on any subset \(S\) of its choosing). In this setting, the authors show how to estimate the total weight to \(1\pm\varepsilon\) with only \(O(\log n \cdot \text{poly}(1/\varepsilon))\) queries.

Efficient witnessing and testing of magic in mixed quantum states (arXiv), by Tobias Haug and Poetri Sonya Tarabunga. Magic is real, and it’s quantifiable: roughly speaking, it quantifies the amount of “non-stabilizerness” of a state. I won’t pretend to fully understand what this means, but this paper shows that one can test the magic of low-entropy \(n\)-qubit states (i.e., distinguish between low-magic states and high-magic states) with only polynomially (in \(n\)) many copies of the state.

Mildly-Interacting Fermionic Unitaries are Efficiently Learnable (arXiv), by Vishnu Iyer. More quantum property testing! In this paper, the author shows how to test whether an \(n\)-mode fermionic unitary has Gaussian dimension at least \(k\) (or is \(\varepsilon\)-far from it in Frobenius norm) in time \(\text{poly}(n, 1/\varepsilon)\). (This is then used as a building block to efficiently learn such unitaries.)

Testing Juntas and Junta Subclasses with Relative Error (arXiv), by Xi Chen, William Pires, Toniann Pitassi, and Rocco Servedio. In this relatively (!) new model of property testing, which we covered last October, the notion of farness between two Boolean functions is relative to their number of satisfying assignments. Testing with relative error is at least as hard as in the standard setting, and could in principle be strictly harder; this paper shows that, for junta testing, it is not. Even with relative error, \(\tilde{O}(k/\varepsilon)\) queries are necessary and sufficient for junta testing! Using ideas from “standard testing” (specifically, from the “testing by implicit learning” framework), their results further extend to testing a large number of subclasses of juntas.

Relative-error testing of conjunctions and decision lists (arXiv), by Xi Chen, William Pires, Toniann Pitassi, and Rocco Servedio. Same team, same model — more results! After testing juntas in the relative error model, the authors continue their systematic exploration of testing questions, this time focusing on decision lists and conjunctions. They obtain an \(\tilde{O}(1/\varepsilon)\)-query tester with two-sided error for the former, and an \(O(1/\varepsilon)\)-query tester with one-sided error for the latter: both matching the best-known query complexity in the standard model.

News for February 2025

After a few quiet months on PTReview, the community is back with a bang. We have a selection of nine papers this month, spanning hypergraphs, vertex coloring, triangle counting, a new take on linearity testing, and a sublinear foray into transformers.

Property Testing in Bounded Degree Hypergraphs by Hugo Aaronson, Gaia Carenini, and Atreyi Chanda (arXiv). We all know about the rich history of property testing and sublinear algorithms for graphs. The space for hypergraphs is far less studied, and likely has an even richer theory. Consider the input being an \(n\)-vertex \(k\)-uniform hypergraph, so every hyperedge is a set of \(k\) vertices. This paper is the first to define the bounded degree setting. The hypergraph is represented as an adjacency list, wherein, for each vertex, we have the list of hyperedges containing that vertex. Distance between hypergraphs is defined in terms of the symmetric difference between the sets of hyperedges. The primary focus of the paper is on the fundamental property of \(k\)-colorability (note that \(k\) is also the hyperedge size). The main lower bound result is that for \(k \geq 3\), this property requires \(\Omega(n)\) queries to test. This is in contrast to the graph setting (\(k = 2\)), where bipartiteness is known to be testable for bounded degree graphs. This paper also shows that if the input has bounded treewidth, then \(k\)-colorability is testable with one-sided error in a constant number of queries.

Simple Sublinear Algorithms for \((\Delta + 1)\) Vertex Coloring via Asymmetric Palette Sparsification by Sepehr Assadi and Helia Yazdanyar (arXiv). Consider access to the adjacency list of a graph with \(n\) vertices and maximum degree \(\Delta\). Most of our readers will be familiar with the result of Assadi-Chen-Khanna proving that \((\Delta + 1)\)-coloring can be done in \(\widetilde{O}(n^{3/2})\) time. The key tool is a palette sparsification lemma, which shows that each vertex can independently pick a small random list of colors, such that there is a proper coloring consistent with these lists. This paper proves a weaker version of this lemma with a significantly simpler proof. The result is an asymmetric version, where vertices choose color lists of varying sizes; the average list length goes up by a log factor. But the proof is simpler, and it also allows the final coloring to be found by the plain greedy algorithm (constrained to these lists), as in the sketch below.
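Here is a sketch of that recipe, with a hypothetical \(O(\log n)\) list size standing in for the lemma’s exact bounds: every vertex samples a small random list from \(\{1, \dots, \Delta+1\}\), and a single greedy pass then colors each vertex from its own list.

```python
# Sketch of (asymmetric) palette sparsification + greedy; the list size below
# is a stand-in, and a run may leave a few vertices uncolored if lists clash.
import math
import random

def palette_sparsification_coloring(adj):
    n = len(adj)
    Delta = max(len(nbrs) for nbrs in adj.values())
    list_size = min(Delta + 1, max(1, int(4 * math.log(n + 1))))
    lists = {v: random.sample(range(1, Delta + 2), list_size) for v in adj}
    color = {}
    for v in adj:  # greedy, constrained to each vertex's sampled list
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next((c for c in lists[v] if c not in used), None)
    return color

# Toy usage on a random graph.
n, p = 50, 0.2
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v); adj[v].add(u)
colors = palette_sparsification_coloring(adj)
print(sum(c is None for c in colors.values()), "vertices left uncolored")
```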

Improved Sublinear Algorithms for Classical and Quantum Graph Coloring by Asaf Ferber, Liam Hardiman, and Xiaonan Chen (arXiv). Another result on \((\Delta +1)\)-coloring. A result of Morris and Song shows that there are simpler algorithms that give a \((1+\varepsilon)\Delta\)-coloring. Adapting their techniques, this paper gives a sublinear algorithm for \((\Delta+1)\)-coloring that runs in \(O(n^{3/2}\sqrt{\log n})\) time, a \(\sqrt{\log n}\) factor improvement over the previous results. The algorithm can be accelerated in the quantum model using Grover’s search, to get a running time of \(\widetilde{O}(n^{4/3})\). Finally, the paper gives a \(\widetilde{O}(n^{5/4})\)-time quantum algorithm for getting a \((1+\varepsilon)\Delta\)-coloring.

Monotonicity Testing of High-Dimensional Distributions with Subcube Conditioning by Deeparnab Chakrabarty, Xi Chen, Simeon Ristic, C. Seshadhri, and Erik Waingarten (arXiv). In my first foray into distribution testing, this paper studies the problem of testing monotonicity of distributions over the hypercube, \(\{-1,1\}^n\). Observe that the domain is considered to be exponentially large, and the aim is to get distribution testers that use \(\textrm{poly}(n)\) samples. To get such bounds, this paper assumes subcube conditional samples: given any subhypercube of the domain, the algorithm can generate samples restricted to this subhypercube. A distribution is called monotone if the corresponding probability mass function is monotone (with respect to the standard partial order on \(\{-1,1\}^n\)). For this setting, this paper shows that the property of monotonicity can be tested with \(\widetilde{O}(n/\varepsilon^2)\) samples. Moreover, this bound is shown to be optimal, up to polylog factors. One of the main tools is a new directed isoperimetric theorem for real-valued functions on the hypercube.

Compression Barriers in Autoregressive Transformers by Themistoklis Haris and Krzysztof Onak (arXiv). While this paper does not give a sublinear time algorithm, it uses many techniques from the sublinear algorithms toolkit to prove lower bounds. And it is on transformers, arguably one of the most important components for machine learning on sequences and Large Language Models. Consider a sequence of \(n\) vectors \(v_1, v_2, \ldots, v_n \in \mathbb{R}^d\), which one can think of as vector embeddings of \(n\) words (coming from, say, some generated text). Based on extensive learning and model fine tuning, assume that the model has learned some “key” vectors \(k_1, k_2, \ldots, k_n \in \mathbb{R}^d\). The next word is generated from a “query vector” \(q\), using which we compute \(M^{-1} \sum_{\ell \leq n} \exp(q \cdot k_\ell) v_\ell\) (where \(M\) is a normalization factor). Suppose there are \(n\) query vectors, and the aim is to generate the above sum for all the query vectors (these are like the next \(n\) words). In practice, there are many schemes to speed up computation by compressing the key vectors. This paper shows that, in the worst case, a computation of the next word/vector takes \(\Omega(nd)\) time and space. Thus, any sublinear compression/speed up method must fundamentally use properties of the data.
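For concreteness, here is that attention step written out in numpy: the plain \(O(nd)\)-per-query reference computation which, by the paper’s lower bound, cannot be sped up by compressing the keys in the worst case.

```python
# Reference computation of one attention output (no compression tricks).
import numpy as np

def attention_output(q, K, V):
    """q: (d,) query; K: (n, d) keys; V: (n, d) values."""
    scores = np.exp(K @ q)   # exp(q . k_l) for each l
    M = scores.sum()         # normalization factor
    return (scores @ V) / M  # M^{-1} * sum_l exp(q . k_l) v_l

rng = np.random.default_rng(1)
n, d = 16, 4
K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d))
q = rng.normal(size=d)
print(attention_output(q, K, V))
```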

Arboricity and Random Edge Queries Matter for Triangle Counting using Sublinear Queries by Arijit Bishnu, Debarshi Chanda, Gopinath Mishra (arXiv). Triangle counting is a classic problem, with a rich history in sublinear algorithms. The earliest work gives an algorithm running in \(\widetilde{O}(n/t^{1/3} + m^{3/2}/t)\) time, where \(n, m\) are the usual graph parameters and \(t\) is the triangle count. (Polylog and \(\varepsilon\) factors are ignored.) This complexity can be improved to \(\widetilde{O}(n/t^{1/3} + m\alpha/t)\), where \(\alpha\) is the graph arboricity/degeneracy. The standard adjacency list model can be augmented with random edge queries, in which case, it was known that \(O(m^{3/2}/t)\) time algorithms exist. This paper shows that \(O(m\alpha/t)\)-time algorithms exist for the model with random edge queries, which combines the previous results into an even better algorithm.

Improved Sublinear-time Moment Estimation using Weighted Sampling by Anup Bhattacharya, Pinki Pradhan (arXiv). Consider an array of \(n\) non-negative weights \(w_1, \ldots, w_n\). The \(t\)-th moment is the sum \(\sum_{i \leq n} w^t_i\). The aim is to estimate this moment using proportional samples. This means that an algorithm can generate index samples where index \(i\) has probability proportional to \(w_i\). The first result on this problem, for \(t=1\), was by Motwani-Panigrahy-Xu. Beretta-Tětek do a detailed study of this problem for the case \(t=1\) (sum estimation), and show that this problem can be solved in \(O(\sqrt{n})\) queries. When the weights are vertex degrees and we have graph access, there are results showing the existence of \(\widetilde{O}(n^{1-1/t})\) algorithms for \(t > 1\). This paper gives such a result for arbitrary non-negative weights with \(t \geq 1\), and proves a matching lower bound. Moreover, this paper considers the setting where \(t < 1\). Interestingly, there is an algorithm of query complexity \(O(\sqrt{n} + n^{1/t-1})\), and a lower bound showing that \(\Omega(n)\) samples are needed when \(t \leq 1/2\).

Biased Linearity Testing in the 1% Regime by Subhash Khot and Kunal Mittal (ECCC). The classic problem of linearity testing never stops yielding new insights. A function \(f:\{0,1\}^n \to \{-1,1\}\) is linear if \(f(x) = (-1)^{a \cdot x}\) for some \(a \in \{0,1\}^n\). The classic 3-query BLR test distinguishes linear functions from those that are far from linear. Specifically, if the tester passes with probability \(1-\delta\), then the function \(f\) is \(O(\delta)\)-close to linear. This is called testing in the “99% regime”. Suppose we want a tester with guarantees in the “1% regime”. This means that if the tester passes with probability \(> \delta\), then \(f\) has at least \(\varepsilon\) correlation with some linear function (where \(\varepsilon\) is some function of \(\delta\)). Such results were known for the uniform distribution on the hypercube. This paper studies product distributions on the hypercube, where all marginals are identically \(p\)-biased (the probability of choosing a \(1\) is \(p\)). Such 1%-testers were known for \(p \in (1/3,2/3)\) from recent work. This paper gives a \((1+1/p)\)-query tester for all values of \(p\), and proves that the number of queries must be at least \(1+1/p\).
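For contrast with the biased 1% regime, here is the classic 3-query BLR test in the uniform-distribution (99%) regime, whose single check a genuinely linear function passes with probability \(1\):

```python
# The classic BLR test: accept iff f(x) * f(y) * f(x XOR y) == 1.
import random

def blr_test(f, n, rounds=1000):
    """f maps a tuple in {0,1}^n to {-1,+1}; returns the empirical pass rate."""
    passed = 0
    for _ in range(rounds):
        x = tuple(random.randrange(2) for _ in range(n))
        y = tuple(random.randrange(2) for _ in range(n))
        xy = tuple(a ^ b for a, b in zip(x, y))
        passed += f(x) * f(y) * f(xy) == 1
    return passed / rounds

a = (1, 0, 1, 1)
linear_f = lambda x: (-1) ** sum(ai * xi for ai, xi in zip(a, x))
print(blr_test(linear_f, 4))  # 1.0 for a genuinely linear function
```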

News for January 2025

January 2025 was by many measures a very… eventful month; as far as property testing is concerned, not so much, with only two papers (and a third we had previously missed). Uneventful is not a bad thing, sometimes!

Many pentagons in triple systems, by Dhruv Mubayi and Jozsef Solymosi (arXiv). This paper is interested in quantifying the number of copies of \(C_k\) in 3-uniform hypergraphs. In the process, the authors establish a quantitative result very relevant to property testing, at least for those with an interest in testing triangle-freeness in dense graphs, improving on a result of Gishboliner, Shapira and Wigderson: namely, that if an \(n\)-vertex graph is \(\varepsilon\)-far from triangle-freeness, then for every \(\ell \geq 2\) it must contain \(\Omega(\varepsilon^{3\ell} n^{2\ell+1})\) copies of \(C_{2\ell+1}\).

Testing Noise Assumptions of Learning Algorithms, by Surbhi Goel, Adam R. Klivans, Konstantinos Stavropoulos, and Arsen Vasilyan (arXiv). Testable learning has seen a surge of interest since its (recent) introduction by Rubinfeld and Vasilyan (2023). In this framework, a learning algorithm which works under some data distribution assumption (e.g., the data is from a spherical cow Gaussian) is not “off the hook” when that assumption isn’t met, as is the case in classical learning. Instead, the algorithm must put its money where its algorithmic mouth is: if the data does indeed satisfy the assumption, then it must output a hypothesis that satisfies the learning guarantee; if the data does not satisfy the assumption, it is allowed to abort and output an error flag; but if it does output a hypothesis, then that hypothesis must satisfy the learning guarantee, regardless of whether the distributional assumption is met. In this sense, the algorithm must act like a property tester for the distributional assumption made on the data.
This paper extends the testable learning framework from assumptions on the data distribution to assumptions on the noisy data generation model: the assumption to be tested (and used) is no longer only on the distribution of the data (regardless of the associated labels), but on the distribution of the pair (data, label), including the way the label may be corrupted. In particular, the authors focus as an application on learning high-dimensional origin-centered halfspaces, where the assumption is that the data is from a Gaussian distribution, with labels perturbed by Massart noise.

Learning multivariate Gaussians with imperfect advice, by Arnab Bhattacharyya, Davin Choo, Philips George John, and Themis Gouleakis (arXiv). Suppose you want to learn the mean (or, if the covariance isn’t known, even better, the mean and covariance) of a high-dimensional Gaussian from i.i.d. samples. You’re in luck: we know how to do it, and the algorithm is very simple! You’re not in luck, though: you’ll need a lot of these i.i.d. samples to achieve non-trivial accuracy. A number either linear or quadratic in the dimension, depending on whether you’re learning only the mean vector or the whole thing.
But say you’re in “luck”: a “good” (but not very trustworthy) friend comes to your help, claiming they already know the mean and covariance, and tells you what they (claim they) are. Can you use this possibly unreliable advice to learn the Gaussian better? This is the setting of learning with advice, and, in this paper, the authors show that yes, when learning Gaussians, you can! And, what’s even better (for this blog), the algorithm they design uses as a core subroutine a tolerant tester, which allows them to carefully check the quality of the “advice”.

News for December 2024

Happy New Year to you and your loved ones! Welcome to the first post of 2025. This month, we feature four papers: one on property testing in the huge object model, two on distribution testing, and a fourth that, while not strictly a property testing paper, was too intriguing to ignore. The last paper recounts the resolution of a fascinating bet between Peter Sarnak and Noga Alon. With that teaser, let’s dive in!

Testing vs Estimation for Index-Invariant Properties in the Huge Object Model by Sourav Chakraborty, Eldar Fischer, Arijit Ghosh, Amit Levi, Gopinath Mishra, and Sayantan Sen (arXiv) Let me begin by reminding you of the setup for distribution testing in the huge object model, which was introduced by Goldreich and Ron. Suppose you want to test whether a distribution \(\mathcal{D}\), supported over the Boolean hypercube, satisfies a certain property \(\mathcal{P}\). To do this, you sample a string \(x \sim \mathcal{D}\), where \(x\) is of length \(n\). The huge object model considers situations where \(n\) is very large, and so you instead assume query access to the sampled strings. In this model, the distribution \(\mathcal{D}\) is considered \(\varepsilon\)-far from \(\mathcal{P}\) if the earthmover distance (EMD) between \(\mathcal{D}\) and the closest distribution satisfying \(\mathcal{P}\), measured with respect to the relative Hamming distance between bitstrings, is at least \(\varepsilon\). In our July 2022 post, we covered a paper by a subset of the authors which presented efficient testers in this model for index-invariant properties whose supports have bounded VC dimension.

The main result of the featured paper shows the following: for an index-invariant property, you can basically upgrade a plain tester to a tolerant tester. Thus, the ability to efficiently \(\varepsilon\)-test an index-invariant distribution property in the huge object model translates into the ability to estimate the distance to the property.

Optimal Algorithms for Augmented Testing of Discrete Distributions by Maryam Aliakbarpour, Piotr Indyk, Ronitt Rubinfeld, and Sandeep Silwal (arXiv) Let us take the standard setup of testing discrete distributions supported over \([n]\), with a slight twist. Suppose you can assume that the underlying distribution \(\boldsymbol{p}\) is not entirely unknown. As the paper argues, this might be a realistic assumption in settings dealing with network traffic data or search engine queries. The main result of this paper shows that, with a good proxy \(\boldsymbol{p}'\) for the input distribution \(\boldsymbol{p}\) (i.e., a situation where, say, \(\|\boldsymbol{p}'-\boldsymbol{u}\|_1 \gg \|\boldsymbol{p}' - \boldsymbol{p}\|_1\) and \(\|\boldsymbol{p}' - \boldsymbol{p}\|_1\) is small enough), you get testers for uniformity with sample complexity \(O(1)\). (Here, \(\boldsymbol{u}\) denotes the uniform distribution over \([n]\).) In this framework, the authors also present algorithms with improved sample complexity for identity testing and closeness testing.

Settling the complexity of testing grainedness of distributions, and application to uniformity testing in the Huge Object model by Clément Canonne, Sayantan Sen, Qiping Yang (ECCC) A discrete distribution is called \(m\)-grained if all probabilities are integer multiples of \(1/m\). If you are a regular PTReview reader, you might recall our News for September 2021, which featured a paper by Goldreich and Ron proving an \(\Omega(n^c)\) lower bound for testing \(m\)-grainedness, for any \(c < 1\). Goldreich and Ron also conjectured that the true lower bound is actually \(\Omega(n/\log n)\) (when \(m = \Theta(n)\)). The current work resolves this conjecture, settling the complexity of this problem.

Ramanujan Property and Edge Universality of Random Regular Graphs by Jiaoyang Huang, Theo Mckenzie, and Horng-Tzer Yau (arXiv) So, yes, here is the (non-property-testing) paper I wanted to tell you about. Let me start with the juicy bit. Peter Sarnak and Noga Alon had a bet about the following situation: fix \(d \in \mathbb{N}\) and take a random \(d\)-regular graph on \(n\) vertices. Sarnak conjectured that the probability this graph is in fact a Ramanujan expander goes to zero as \(n \to \infty\), whereas Alon conjectured that this probability tends to one as \(n \to \infty\). The featured paper shows that while this probability decreases with increasing \(n\), it approaches a limiting value which is around \(0.69\). You can watch this juicy bit here.

News for November 2024

Alas, another late post! But we compensate for that. With a rich hoard of ten papers this month, covering quantum property testing, distribution testing, matchings, Steiner trees, linearity testing, and dealing with corruptions. And some papers on multiple topics simultaneously.

Uniformity testing when you have the source code by Clément L. Canonne, Robin Kothari, and Ryan O’Donnell (arXiv). Consider the classic problem of uniformity testing, where the domain is \([d]\). We all know by now that the complexity of uniformity testing is \(\Theta(\sqrt{d}/\varepsilon^2)\), where \(\varepsilon\) is the proximity parameter. Suppose the distribution is the output of an algorithm (like a hash function) and we have access to the source code. (The source code is thought of as a circuit that outputs a single domain element.) Can we beat this bound? Surprisingly, the answer is yes, when the tester is allowed to be quantum. A single “sample” is basically a run of the code. The main result is an upper bound of \(O(d^{1/3}/\varepsilon^{4/3})\) (with some additional terms for the low \(\varepsilon\) setting). Tight lower bounds for this setting are still unknown, so this research direction of “distribution testing with the source code” may lead to many more results.

Coherence in Property Testing: Quantum-Classical Collapses and Separations by Fernando Jeronimo, Nir Magrafta, Joseph Slote, and Pei Wu (ECCC). The distribution testing problem again, where the domain is \(\{0,1\}^n\) and thought of as exponentially large. To distinguish (say) a uniform distribution with support size \(2^{n/8}\) vs support size \(2^{n/4}\) would require exponentially many (\(2^{n/16}\)) samples. From a quantum standpoint, we can restrict the input distribution to be coherent. Intuitively, this is analogous to being restricted to a subcube. Even then, the paper shows that the testing problem requires exponentially many samples. To show a separation between classical and quantum algorithms, the paper gives a model of interactive protocols for distribution testing (which has been studied before in the classical setting, like Chiesa-Gur). In this setting, the paper gives a quantum algorithm that runs in \(\textrm{poly}(n)\) time with polynomially many proof strings, and can approximate the support size of coherent distributions. Classical algorithms, on the other hand, still require exponentially many queries, even with access to proof strings.

Testing classical properties from quantum data by Matthias C. Caro, Preksha Naik, and Joseph Slote (arXiv). Property testing gains its power because of query access, since the tester can “zero in” on the relevant portion of the data. Sample based testers often have far worse complexities. Such testers only get access to random samples of the data/function. Consider access to a function \(f:\{0,1\}^n \rightarrow \{0,1\}\). The classic property of monotonicity can be tested in \(\sqrt{n}\) queries (ignoring the proximity parameter dependencies), but requires \(2^{\Theta(\sqrt{n})}\) samples. The paper studies sample based property testing, except with quantum “samples”. In the quantum world, the function \(f\) is stored/represented as a quantum superposition of states. Quantum samples can obtain richer information, like sampling the Fourier coefficients of \(f\). (Quantum queries give even stronger access.) This allows for many classical query-based property testers to become quantum sample-based testers. This paper gives results for many fundamental properties like monotonicity, symmetry, and triangle freeness.

Tolerant Testing of Stabilizer States with Mixed State Inputs by Vishnu Iyer and Daniel Liang (arXiv). Another quantum testing paper, but here, the property itself is quantum. The input is a quantum state, and the aim is to test the property of being a stabilizer state. All previous work on testing stabilizer states assumed that the input is a pure state. (One should think of a pure state as a sort of “basis” state, while a mixed state is a probabilistic mixture of pure states.) Given noise, one would typically expect the input to be mixed. This paper gives the first tolerant tester for stabilizer states where the input is allowed to be a mixed state.

Stochastic Matching via In-n-Out Local Computation Algorithms by Amir Azarmehr, Soheil Behnezhad, Alma Ghafari, and Ronitt Rubinfeld (arXiv). This is not a sublinear result by itself, but has connections that should interest our readers. The main problem is stochastic matching. We are given a graph \(G\). An input \(G_p\) is generated by keeping each edge independently with probability \(p\). The aim is to construct a sparse subgraph \(H\) such that the expected matching size of \(H \cap G_p\) is close to the expected matching size of \(G_p\). After a long line of work, this paper shows that there exists a subgraph \(H\) of \(\textrm{poly}(1/p)\) maximum degree whose expected matching size is a \((1-\varepsilon)\)-approximation of the overall expectation. The main analysis technique is to study a local computation algorithm (LCA) for computing the maximum matching. An LCA essentially gives local access to a large output (say, a matching or coloring) such that each piece of the output (say, a matching edge) can be computed with sublinear lookups to the input. The standard complexity measure is the number of lookups into the input needed to compute a matching edge. But this paper also looks at the “in complexity”, which is the number of queries that look up a given edge. When both complexities are small, one can show a “decorrelation” of the overall output, which is used in the analysis.

Nearly Tight Bounds on Testing of Metric Properties by Yiqiao Bao, Sampath Kannan, and Erik Waingarten (arXiv). Consider an \(n \times n\) matrix of “distances” between \(n\) points. The aim is to test the fundamental property of the distances forming a metric. Despite a long line of work studying property testing of distances, points, etc., there has surprisingly been no result on this problem. The earliest work in this line is by Parnas-Ron, studying the property of being a tree metric or ultrametric (which satisfy a stronger version of the triangle inequality). This paper gives an \(O(n^{2/3}/\varepsilon^{4/3})\)-query algorithm for property testing metrics. There is also a nearly matching lower bound, if one assumes that the dependence on \(\varepsilon\) is polynomial. For the problems of testing tree metrics (and ultrametrics), the paper gives an \(O(1/\varepsilon^2)\)-query algorithm, an improvement on the previous bound of \(O(1/\varepsilon^6)\).
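A natural first attempt, to contrast with the paper’s tester: sample random triples and look for a violated triangle inequality. This sketch can miss instances that are far from metric but have few violating triples, which is precisely what makes the problem nontrivial.

```python
# Naive spot-check (not the paper's algorithm): random triple sampling.
import random

def spot_check_metric(D, n, trials=10_000):
    for _ in range(trials):
        i, j, k = random.sample(range(n), 3)
        if D[i][k] > D[i][j] + D[j][k]:
            return (i, j, k)  # witness of a triangle-inequality violation
    return None  # no violation found among the sampled triples

# Toy usage: a 4-point "metric" with one corrupted entry.
D = [[0, 1, 2, 9],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [9, 2, 1, 0]]
print(spot_check_metric(D, 4))  # finds e.g. (0, 1, 3), since 9 > 1 + 2
```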

Sublinear Metric Steiner Tree via Improved Bounds for Set Cover by Sepideh Mahabadi, Mohammad Roghani, Jakub Tarnawski, Ali Vakilian (arXiv). This paper is on the classic metric Steiner tree problem. The input is an \(n \times n\) matrix of distances in a metric and a subset \(S\) of points. The aim is to estimate the minimum weight Steiner tree (a tree that connects all the terminals in \(S\)). Getting a \(2\)-approximation is quite easy, since one can just consider a spanning tree on \(S\). A result of Chen-Khanna-Tan gives a \((2-\eta)\)-approximation algorithm that makes \(O(n^{13/7})\) queries (for some positive constant \(\eta\)). This paper gets the same approximation with \(O(n^{5/3})\) queries. The main technical work goes into designing a sublinear algorithm for a version of the set cover problem, where the input set system is given as a matrix.

On Optimal Testing of Linearity by Vipul Arora, Esty Kelman, Uri Meir (arXiv). In property testing, it’s hard to get more fundamental than linearity testing. What is there new to say about this well-studied problem? This paper studies linearity testing under the online manipulations model of Kalemaj-Raskhodnikova-Varma. In this model, after every query of the tester, an adversary corrupts up to \(t\) entries of the input. Previous results show that linearity testing can be done in \(O(\log t + 1/\varepsilon)\) queries. But, the paper notes, all previous results require \(t \leq 2^{n/c}\) for some \(c \geq 2\). How far can we push \(t\)? Remarkably, the main result shows that \(t\) can be as large as \(2^n/\varepsilon^2\) and linearity testing is still feasible under online manipulations. The next result of this paper concerns the property of being a low-degree polynomial over the reals; the paper gives an optimal \(O(1/\varepsilon)\)-query tester for the property of being a degree-\(d\) polynomial.

Online versus Offline Adversaries in Property Testing by Esty Kelman, Ephraim Linder, and Sofya Raskhodnikova (arXiv). This paper is related to the online manipulations model discussed above. There is also an offline “erasure” model, wherein some coordinates/bits of the input are erased by an adversary. The original paper of Kalemaj-Raskhodnikova-Varma showed that sortedness of arrays can be tested with offline erasures, but cannot be tested with online erasures even when \(t=1\). This paper proves a complementary theorem. There are properties that can be tested with \(O(1)\) queries for \(t = O(n/\log n)\) in the online manipulations model, but require \(\Omega(\log n)\) queries in the offline setting with \(\Theta(n/\log\log n)\) erasures. So the online and offline models are incomparable in power. There is a similar result for a setting where \(t\) is constant. The lower bounds are shown using a property regarding repeated characters in strings.

News for October 2024

Four* papers on property testing last month! Lower bounds, upper bounds, distribution testing, quantum, and a new testing model!

* at least four. If we missed any, please let us know in the comments!

Lower Bounds for Convexity Testing, by Xi Chen, Anindya De, Shivam Nadimpalli, Rocco Servedio, and Erik Waingarten (arXiv). You’re given a membership oracle to a set \(S\) in \(\mathbb{R}^n\) (that is, query access to its indicator function \(f_S\colon \mathbb{R}^n\to \{0,1\}\)), and asked to decide if this set is convex, or “far from it”. This is a very natural and seemingly basic question— of course, we need to define what “far” means here, and the natural (normal, one may say) choice of underlying measure in \(\mathbb{R}^n\) is the standard Gaussian measure: \(d(S,T) = \Pr_{\mathcal{N}(0,I_n)}[ x \in S\Delta T]\).
Previous algorithms for this convexity testing question (and its tolerant testing analogue) are non-adaptive, and have \(2^{\tilde{O}(\sqrt{n})}\) query complexity. This paper shows that this is not just unfortunate, but also necessary: every non-adaptive tolerant tester for this question must make \(2^{\Omega(\sqrt[4]{n})}\) queries, and every (possibly adaptive) one-sided tester must have polynomial query complexity.

Replicable Uniformity Testing, by Sihan Liu and Christopher Ye (arXiv). In property testing, the algorithm must say YES with high probability on inputs which have the property, and NO with high probability on those which are far. On anything else, the algorithm is off the hook and can output either. This is typically considered to be fine, and, in any case, necessary to be able to obtain ultra-efficient algorithms. But what if, in this third case, we wanted to put the algorithm partially back on that hook, and required it to be consistent? The algorithm can answer either YES or NO, sure, but if I run it again on that same input, it should give the same answer with high probability. This is in line with a recent line of works on replicable algorithms, and is non-trivial to achieve in (the standard model of) distribution testing, where a distribution testing algorithm only gets to see random samples from the distribution, and thus needs to have a replicable behavior over that randomness. This paper introduces the question of replicable distribution testing, and provides both upper and lower bounds (essentially matching, with an asterisk) for the flagship task of uniformity testing.

Quantum property testing in sparse directed graphs, by Simon Apers, Frédéric Magniez, Sayantan Sen, and Dániel Szabó (arXiv). Graph property testing has a long and rich history in the classical setting, spanning more than two decades. There are several testing models, depending on whether the graph is dense or sparse, and directed or not: and even in the sparse, directed case, it is important to sometimes only allow outgoing edge queries. All these variants capture different meaningful scenarios, and relations and separations between them are known. This paper opens the direction of quantum testing for sparse graphs, either directed or not. The authors investigate what advantage quantum computers can bring for graph testing in this setting, and show one natural property for which a quadratic speedup exists: \(o(\sqrt{n})\) quantum queries in the outgoing-edge-query-only (unidirectional) sparse model, while classically \(\Omega(n)\) are necessary. They also show that this is not always the case: quantum testing of 3-colorability, as in the classical case, does not admit an \(o(n)\)-query tester.

Relative-error monotonicity testing, by Xi Chen, Anindya De, Yizhi Huang, Yuhao Li, Shivam Nadimpalli, Rocco Servedio, and Tianqi Yang (arXiv). Property testing of Boolean functions is defined “absolutely”: the distance between two functions is the fraction of the domain on which they differ, i.e., \(\displaystyle\frac{|f^{-1}(\{1\})\Delta g^{-1}(\{1\})|}{2^n}\)
This makes sense when the functions have a reasonable number of satisfying assignments, but may be much less meaningful for sparse functions, which are non-zero on only a \(o(1)\) fraction of the inputs—for instance, functions where “all the action” is concentrated in a tiny subcube of the hypercube. All these functions are vanishingly close to each other! To address this, the authors introduce a new distance notion, relative-error, where the distance from \(g\) to \(f\) is scaled by the sparsity of \(f\):
\(\displaystyle\frac{|f^{-1}(\{1\})\Delta g^{-1}(\{1\})|}{|f^{-1}(\{1\})|}\)
This requires a slightly different access model to avoid trivial impossibility results, so the tester is augmented with sampling access to satisfying assignments of \(f\), on top of query access to \(f\) (as otherwise it may just never even find one satisfying assignment). After introducing and motivating this testing model, the paper initiates its study in the specific case of testing monotonicity of Boolean functions.
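In code, the two distance notions differ only in the normalization; here is a direct transcription of the two formulas above, with functions given as dense 0/1 arrays (feasible only for small \(n\), of course):

```python
# Absolute vs. relative distance between Boolean functions f, g (0/1 arrays).
import numpy as np

def absolute_distance(f, g):
    return np.mean(f != g)  # |f^{-1}(1) symdiff g^{-1}(1)| / 2^n

def relative_distance(f, g):
    return np.sum(f != g) / np.sum(f == 1)  # scaled by the sparsity of f

rng = np.random.default_rng(2)
n = 16
f = (rng.random(2 ** n) < 0.001).astype(int)  # a sparse function
g = np.zeros(2 ** n, dtype=int)               # identically zero
print(absolute_distance(f, g), relative_distance(f, g))  # ~0.001 vs. 1.0
```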