News for August 2025

Our press release this month features seven papers. Let’s take a look: a survey on efficiently testable graph properties; two papers on new directions in graph property testing; one that, after some meditations on distribution testing, introduces a novel direction; another on testing properties of strings; and two papers exploring property testing with a quantum twist. Without further delay, let us analyze our spread.

Polynomial Property Testing by Lior Gishboliner and Asaf Shapira (arXiv) This survey traces the development of property testers for dense graph properties through the ages. The motivating question echoes throughout the survey: what is it that makes a dense graph property efficiently testable? Noting that any tester for your favorite property can be simulated by a canonical tester with at most a quadratic overhead in the number of queries, the survey focuses on properties which admit canonical testers that make at most \(poly(1/\varepsilon)\) queries. This part of the story is narrated in Section 3 of the paper, which begins by reviewing the seminal result of Alon from 2002 that kicked off research in this area: the property of \(H\)-freeness can be tested efficiently \(\textit{iff}\) \(H\) is bipartite. Earlier this year, Gishboliner and Shapira extended this result to testing \(H\)-freeness in \(k\)-uniform hypergraphs: such a property is efficiently testable \(\textit{iff}\) \(H\) is \(k\)-partite. The survey then presents a result which extends this to the setting of induced subgraphs, characterizing the graphs \(H\) for which induced \(H\)-freeness is efficiently testable. The section concludes with a discussion of testing graph properties of bounded \(\textit{VC}\)-dimension.
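
To make the canonical-tester notion concrete, here is a minimal sketch (in Python, with an assumed adjacency-query interface adj(u, v)) of the sample-and-inspect tester for \(H\)-freeness: sample \(q\) vertices, query all pairs among them, and reject iff a copy of \(H\) appears. The survey's question is precisely how small \(q\) can be: \(poly(1/\varepsilon)\) versus tower-type bounds.

    import itertools, random

    def canonical_H_freeness_tester(adj, n, H_edges, h, q):
        # adj(u, v) -> bool is adjacency-matrix query access to the input graph;
        # H_edges lists the edges of H on vertices {0, ..., h-1}.
        S = random.sample(range(n), q)
        for cand in itertools.combinations(S, h):    # brute-force search for H
            for phi in itertools.permutations(cand):
                if all(adj(phi[a], phi[b]) for (a, b) in H_edges):
                    return "REJECT"                  # found a copy of H
        return "ACCEPT"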

Quality control in sublinear time: a case study via random graphs by Cassandra Marcussen, Ronitt Rubinfeld, and Madhu Sudan (arXiv) This paper introduces a general framework in which you can situate the following kind of question. Suppose you are given a probability parameter \(p\) and a value \(k \in \mathbb{N}\). Additionally, suppose I give you a graph \(G = (V,E)\) and I ask — is it true that the quantity

\(\rho_k(G) := C_k(G)/\mathbf{E}_{G' \sim \mathcal{G}_{n,p}}[C_k(G')] \approx 1,\)

or is it the case that

\(|\rho_k(G) - 1| > \varepsilon\)

(where the function \(C_k\) counts the number of \(k\)-cliques in \(G\))? The paper presents algorithms for this problem which issue at most \(\Big(\frac{1}{p}\Big)^{O(k)}\) queries.
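
As a toy illustration of the quantity being tested (and emphatically not the paper's sublinear algorithm, which makes only \((1/p)^{O(k)}\) queries), one can compute \(\rho_k(G)\) exactly for small graphs, using the closed form \(\mathbf{E}[C_k(\mathcal{G}_{n,p})] = \binom{n}{k} p^{\binom{k}{2}}\):

    import itertools
    from math import comb

    def clique_count(A, k):
        # A is an adjacency matrix (list of lists); counts k-cliques by brute force.
        n = len(A)
        return sum(
            1 for S in itertools.combinations(range(n), k)
            if all(A[u][v] for u, v in itertools.combinations(S, 2))
        )

    def quality(A, k, p):
        # rho_k(G) = C_k(G) / E[C_k(G(n,p))], with E[C_k] = C(n,k) * p^C(k,2).
        n = len(A)
        return clique_count(A, k) / (comb(n, k) * p ** comb(k, 2))
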
Now, let me explain the broader framework this paper introduces. You want to think of \(\rho_k(G)\) as measuring the quality of your graph \(G\): you want to understand whether \(G\) looks like a genuine sample from \(\mathcal{G}_{n,p}\) (that is, whether \(G\) is a “high-quality graph”), as measured by the function \(C_k\). In general, as the authors explain via an analogy, this is supposed to capture the following sentiment: you go to a bookstore and you want to buy a book with quality close to \(1\). The goal, thus, is to choose a high-quality book. Note that this goal is weaker than both of the following goals: testing whether

\(\text{quality(book)} \approx 1\) for an adversarial book,

OR whether

\(\text{book} \sim \texttt{Some Distribution of your choice over books in store}\).

The authors situate quite a few other problems in this overarching framework and present algorithms for these.

Sublinear-Time Approximation for Graph Frequency Vectors in Hyperfinite Graphs by Gregory Moroie (arXiv) A useful first step for several algorithms on planar graphs of bounded degree is to exploit the fact that such graphs admit a hyperfinite decomposition. Using a primitive called the partition oracle, one can estimate in sublinear time various important graph parameters — like the max-cut value, the independent set size, and so on — to within a multiplicative \((1 \pm \varepsilon)\) factor. Often, a helpful step in these algorithms is a graph simplification step, where one approximates the input graph with a crude sketch which tracks frequency counts of various subgraphs on the pieces produced by the partition oracle, each of which has size at most \(k = O(1/\varepsilon^2)\). The featured paper gives algorithms which, in time \(poly(1/\varepsilon, d^k)\), find a graph \(H\) which can be used to approximate the corresponding frequency vector of \(G\).

Instance-Optimal Uniformity Testing and Tracking by Guy Blanc, Clément L. Canonne, and Erik Waingarten (arXiv) After some meditations on the task of testing uniformity of distributions with support \([n]\), the featured paper suggests that despite the impressive algorithmic success on this task, there is a lot that one still desires. In particular, one fundamental reason for dissatisfaction with the sample-optimal results for the above task is that often, in real-life situations, the only thing we are interested in is any evidence of non-uniformity — without any distance notion in the picture. So, the paper formalizes the following question. Suppose we have a setup where at each time step \(t \in \mathbb{N}\), we receive as input a sample \(x_t \sim \boldsymbol{p}\), where \(\boldsymbol{p}\) is an unknown probability distribution supported on \([n]\). We want our algorithm to run continuously and return reject if at any point in time it discovers \(\boldsymbol{p} \neq \texttt{UNIF}[n]\). The goal is to do this in as little time as possible. To rule out trivial algorithms which straight-up reject \(\boldsymbol{p}\), we require that the probability that the algorithm EVER rejects \(\texttt{UNIF}[n]\) itself is at most some sufficiently small \(\delta > 0\). Denote by \(OPT(\boldsymbol{p})\) the sample complexity of the best possible algorithm for this task which knows the entire distribution profile \(\boldsymbol{p}\) in hindsight. The featured paper devises an algorithm for this problem which is \(poly(\log (OPT))\)-competitive against this best algorithm.
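
Schematically, the tracking setup can be wrapped around any off-the-shelf uniformity tester. The sketch below is a natural epoch-doubling baseline, not the paper's instance-optimal algorithm: it controls the overall false-rejection probability by a union bound over epochs.

    def track_uniformity(sample, uniformity_test, delta):
        # sample() draws one fresh observation x_t ~ p; uniformity_test(xs, d)
        # is any tester that wrongly rejects the uniform distribution with
        # probability at most d. Epoch t uses 2^t fresh samples with failure
        # budget delta / 2^t, so the total false-rejection probability is at
        # most sum_t delta / 2^t <= delta. On uniform input, this runs forever.
        t = 1
        while True:
            xs = [sample() for _ in range(2 ** t)]
            if not uniformity_test(xs, delta / 2 ** t):
                return "REJECT"
            t += 1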

Counting Distinct Square Substrings in Sublinear Time by Panagiotis Charalampopoulos, Manal Mohamed, Jakub Radoszewski, Wojciech Rytter, Tomasz Waleń, and Wiktor Zuba (arXiv) A square in a string is a substring of the form \(XX\), where \(X\) is a nonempty sequence of characters. The featured paper studies how to count the number of distinct square substrings in a string of length \(n\) over an alphabet of size \(\sigma\), given in packed form. The authors design a sublinear-time algorithm running in \(O(n/\log_\sigma n)\) time in the word-RAM model by exploiting the structure of runs (maximal periodic fragments). To handle these runs systematically, the paper uses sparse Lyndon roots — where a Lyndon root is essentially the lexicographically smallest rotation of a repeated block — so that each repetition has a unique representative. The paper further groups certain overlapping runs into pyramidal structures to make counting efficient. This avoids scanning every position of the string and shows that distinct squares can be counted strictly faster than reading the entire string symbol by symbol, giving the first sublinear-time algorithm for this problem.
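
For reference, here is the naive, definition-level baseline that the paper beats (a sketch, not their method): enumerating all substrings of the form \(XX\) takes super-linear time, whereas the featured algorithm runs in \(O(n/\log_\sigma n)\).

    def count_distinct_squares(s):
        # Brute force, straight from the definition: a square is a substring XX.
        n = len(s)
        squares = set()
        for i in range(n):
            for half in range(1, (n - i) // 2 + 1):
                if s[i:i + half] == s[i + half:i + 2 * half]:
                    squares.add(s[i:i + 2 * half])
        return len(squares)

    # count_distinct_squares("abaaba") == 2, counting "aa" and "abaaba".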

Resource-efficient algorithm for estimating the trace of quantum state powers by Myeongjin Shin, Junseo Lee, Seungwoo Lee, and Kabgyun Jeong (arXiv) The problem of estimating the trace of quantum state powers refers to computing \(\text{Tr}(\rho^k)\), where \(\rho\) is a quantum state (density matrix) and \(k\) is an integer — this quantity is key for tasks such as evaluating nonlinear functions of quantum states, Rényi entropy, and detecting entanglement. The featured paper presents a resource-efficient algorithm that reduces the quantum hardware requirements by using only \(O(\widetilde r)\) qubits and \(O(\widetilde r)\) multi-qubit gates, where \(\widetilde{r} = \min\big(\text{rank}(\rho), \lceil \ln(2k / \varepsilon)\rceil\big)\). The featured paper builds on tools from the mathematical foundations of entanglement spectroscopy (something called the Newton-Girard method): it estimates the traces of low powers \(\text{Tr}(\rho^i)\) for \(i = 1, \dots, \widetilde r\) using shallow quantum circuits and then uses classical recursion to reconstruct or approximate \(\text{Tr}(\rho^k)\) even when \(k\) is large — achieving a rank-dependent complexity that works well for low-rank states.
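
The classical half of this pipeline is just Newton's identities. Here is a sketch of that bookkeeping (the quantum part, estimating the low-power traces with shallow circuits, is not shown; exact values below stand in for those estimates): given \(p_i = \text{Tr}(\rho^i)\) for \(i \le r\), where \(r = \text{rank}(\rho)\), recover the elementary symmetric polynomials of the eigenvalues and unroll the induced linear recurrence up to \(p_k\).

    def trace_power(p, k):
        # p[i-1] = Tr(rho^i) for i = 1..r (the low-power traces); returns Tr(rho^k).
        r = len(p)
        if k <= r:
            return p[k - 1]
        # Newton-Girard: i * e_i = sum_{j=1}^{i} (-1)^(j-1) e_{i-j} p_j, with e_0 = 1.
        e = [1.0]
        for i in range(1, r + 1):
            e.append(sum((-1) ** (j - 1) * e[i - j] * p[j - 1]
                         for j in range(1, i + 1)) / i)
        # Rank r means e_i = 0 for i > r, so the power sums satisfy the recurrence
        # p_i = sum_{j=1}^{r} (-1)^(j-1) e_j p_{i-j} for all i > r.
        ps = list(p)
        for i in range(r + 1, k + 1):
            ps.append(sum((-1) ** (j - 1) * e[j] * ps[i - j - 1]
                          for j in range(1, r + 1)))
        return ps[-1]

    # Example: rank-2 state with eigenvalues (0.5, 0.5):
    # trace_power([1.0, 0.5], 3) returns 0.25 = 2 * 0.5**3.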

Adversarially robust quantum state learning and testing by Maryam Aliakbarpour, Vladimir Braverman, Nai-Hui Chia, and Yuhan Liu (arXiv) The featured paper studies quantum state learning under a strong corruption model called \(\gamma\)-adversarial corruption, where an adversary can arbitrarily modify a \(\gamma\)-fraction of measurement outcomes, going beyond the usual SPAM noise assumptions. The authors give a non-adaptive measurement algorithm that learns a rank-\(r\) state up to trace-distance error \(\widetilde O(\gamma \sqrt{r})\), and also prove a matching lower bound \(\Omega(\gamma \sqrt{r})\), showing the guarantee is optimal. As the abstract notes (and piques your curiosity from the get-go), the results are at once optimistic and pessimistic: for constant-rank states, the error bound is dimension-independent, so adversarial corruption can be tolerated without dependence on the Hilbert space dimension; but for general full-rank states, the error necessarily scales as \(\gamma \sqrt{d}\), which means even a vanishingly small corruption rate, around \(1/\sqrt{d}\), is enough to destroy any non-adaptive learning algorithm.

The paper underscores that dimension-dependent error is inevitable in this adversarial setting because single-copy measurements carry very limited information, so any algorithm must amplify small statistical differences to extract the state. This sensitivity leaves the estimation process inherently vulnerable to adversarially introduced errors unless the state has low rank, in which case the amount of information required to recover the state is substantially smaller.

News for July 2025

Finally, a post on the first! This month we have four papers, on understanding Nash Equilibria, deeper meditations on distribution testing, sublinear average degree estimation, and a paper on degree distributions that I’m quite excited to see.

Protocols for Verifying Smooth Strategies in Bandits and Games by Miranda Christ, Daniel Reichman, and Jonathan Shafer (arXiv). Consider a \(k\)-player game where each player has \(n\) options. Given a (mixed) strategy for each player, we would like to know if this is an approximate Nash equilibrium. But think of \(n\) as extremely large, so algorithms need to be sublinear in \(n\). This is a natural requirement, since in many real-world games, the options come from an extremely large space. This paper studies this setting from the verification perspective. Assume the algorithm/verifier can communicate with an all-powerful prover, and has access to the game matrix/tensor. We want the verifier to make sublinear queries to the game and have sublinear communication (in bits) with the prover. The main result shows that, under some “smoothness” condition on the near-equilibrium strategies, one can actually get such sublinear verifiers for Nash equilibria. The starting results in the paper are for the simpler multi-armed bandit setting, which are then generalized to more complex games.

Faster Estimation of the Average Degree of a Graph Using Random Edges and Structural Queries by Lorenzo Beretta, Deeparnab Chakrabarty, and C. Seshadhri (arXiv). The problem of estimating the average degree of a graph in sublinear time is an old and classic problem. In a seminal work, Goldreich-Ron gave \((1+\varepsilon)\)-approximations with \(\widetilde{O}(\sqrt{n})\) queries, where \(n\) is the number of vertices. (Up to \(\log n, \varepsilon\) factors, the bound is optimal.) This works in the classic model where an algorithm can sample uar vertices, query degrees, and get neighbors. Suppose uar edges are also available. In that case, Motwani-Panigrahy-Xu give an algorithm that makes \(\widetilde{O}(n^{1/3})\) queries. What if one can also query pairs to check if they are edges? This is a standard operation in sublinear graph algorithms. Interestingly, this paper gives an \(\widetilde{O}(n^{1/4})\) query algorithm for this setting. And, if one can get the entire adjacency list in a single query (common in the study of web networks), the complexity improves further to \(\widetilde{O}(n^{1/5})\) queries. This paper, following Beretta-Tetek, also studies the case where the number of vertices \(n\) is unknown, in which case the previous complexities change. Overall, a nuanced picture of the complexity of estimating the average degree emerges.

Towards Tight Bounds for Estimating Degree Distribution in Streaming and Query Models by Arijit Bishnu, Debarshi Chanda, Gopinath Mishra (arXiv). In network science, which is the study of real-world large graphs, probably the most important graph parameter studied is the degree distribution. Formally, let \(N(d)\) be the number of vertices of degree \(d\). The set \(\{N(d)\}\) is sometimes called the “degree distribution”, though, to be precise, the quantity typically estimated is the “complementary cumulative degree histogram” (ccdh): the number of vertices of degree at least \(d\). There is a rich applied history of estimating the ccdh, but the only truly sublinear bound was given by Eden-Jain-Pinar-Ron-Seshadhri. The actual complexity was a little involved, and yours truly had posed (in WOLA 2019) the question of determining the optimal bound. Which, I’m happy to say, this paper has nailed down. Ignoring log factors, the complexity is \(\Theta(m/h)\), where \(m\) is the number of edges and \(h\) is the, well, \(h\)-index of the degree distribution (the largest number \(k\) such that there are at least \(k\) vertices of degree at least \(k\)). The algorithm, which is similar to the previous result, is eminently practical and uses a combination of vertex and edge sampling to estimate different parts of the distribution. Moreover, the algorithm can be implemented in the streaming setting.
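
The two quantities in the bound are easy to pin down in code; a minimal sketch, given the full degree sequence (which, of course, the sublinear algorithm never sees in full):

    from collections import Counter

    def ccdh(degrees):
        # ccdh(d) = number of vertices of degree at least d.
        cnt = Counter(degrees)
        out, running = {}, 0
        for d in range(max(degrees), 0, -1):
            running += cnt[d]
            out[d] = running
        return out

    def h_index(degrees):
        # Largest k such that at least k vertices have degree at least k.
        ds = sorted(degrees, reverse=True)
        return max([i for i, d in enumerate(ds, start=1) if d >= i], default=0)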

Location-Invariant Properties of Functions versus Properties of Distributions: United in Testing but Separated in Verification by Oded Goldreich and Guy Rothblum (ECCC). Let us build up the distribution testing context. Consider a distribution \(\mathcal{D}\) over universe \([n]\). In the standard distribution testing setup, the tester gets samples from this distribution and has to (approximately) decide some property. One could imagine that there is an underlying space/function that actually generates this distribution: think of \(f:[m] \to [n]\), where \(\mathcal{D}(i)\) is the fraction of domain values \(j\) where \(f(j) = i\). A distribution property can be translated to the property/set of functions that induce the desired distributions. In the latter setting, there is more power, since a tester can explicitly query \(j \in [m]\) to get \(f(j)\). (While, in the distribution setting, the tester merely gets \(f(j)\) for a uar \(j \in [m]\).) It turns out that property testing in both settings is basically equivalent. This paper shows, to the contrary, that there is a gap between these settings for the verification problem. Consider “doubly sublinear interactive protocols”, where the verifier must make a number of queries sublinear in the vanilla testing complexity, and the honest prover must make a number of queries sublinear in the learning complexity. For the classic uniformity testing problem, there is no such interactive protocol in the distribution testing setting. But, interestingly, there is a “doubly sublinear interactive protocol” in the functional setting, where the verifier needs only \(o(\sqrt{n})\) queries to verify uniformity.

News for June 2025

Another (sigh) delayed post. A moderately busy month, with 4 papers on sublinear graph algorithms and, of course, distribution testing.

A 0.51-Approximation of Maximum Matching in Sublinear \(n^{1.5}\) Time by Sepideh Mahabadi, Mohammad Roghani, and Jakub Tarnawski (arXiv). The title says it all. This paper gives a 0.51-approximation for the maximum matching in, well, \(O(n^{1.5})\) time. Here’s the back story. There are two kinds of sublinear maximum matching algorithms. The first kind (achieved after a long line of impressive results) gets \((1- \varepsilon)\)-approximations in time \(O(n^{2 - \Omega_{\varepsilon}(1)})\). This means that in sublinear \(o(n^2)\) time, one can get arbitrarily good approximations. The second kind beats the simple \(0.5\)-approximation (obtained from maximal matchings), but aims for closer to \(O(n)\) time. Behnezhad-Roghani-Rubinstein-Saberi gave a \((0.5+\varepsilon)\)-approximation algorithm running in \(n^{1+\varepsilon}\) time. But the largest value of \(\varepsilon\) possible in these results is tiny. Can one get a “significant” improvement over 0.5-approximations in time “significantly less” than \(n^2\)? The answer is yes, as the title explains.
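
For context, the \(0.5\)-approximation baseline everyone is trying to beat is just greedy maximal matching (a linear-time sketch; any maximal matching has at least half the edges of a maximum one):

    def greedy_maximal_matching(edges):
        # One pass over the edge list; take an edge whenever both endpoints are free.
        matched, M = set(), []
        for (u, v) in edges:
            if u not in matched and v not in matched:
                M.append((u, v))
                matched.update((u, v))
        return M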

Testing (Conditional) Mutual Information by Jan Seyfried, Sayantan Sen, and Marco Tomamichel (arXiv, ECCC). Consider a distribution over pairs \((A, B)\). The mutual information \(I(A,B)\) of \(A\) and \(B\) is the minimum KL-divergence between the original distribution and any product distribution over the supports of \(A\) and \(B\). So the mutual information is zero iff \((A,B)\) is a product distribution, or equivalently, \(A\) and \(B\) are independent. A natural distribution testing question is to distinguish \(A\) and \(B\) being independent from \(I(A,B) > \varepsilon\). One can take this problem a step further. Suppose the distribution is over triples \((A,B,C)\). Then can one distinguish conditional independence (\(I(A, B | C) = 0\)) from sufficient conditional dependence, so \(I(A, B | C) > \varepsilon\)? This problem was studied by (including our very own) Canonne-Diakonikolas-Kane-Stewart, and it is known that \(O(d^{7/4})\) samples suffice. Here, \(d\) denotes the maximum alphabet size of each coordinate, so the overall domain size is \(d^3\). This paper gives a much more nuanced complexity that depends separately on the support sizes of each coordinate. It is interesting that each coordinate plays a different role in the final complexity.
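
Since the minimizing product distribution is simply the product of the marginals, the quantity being tested has a one-line formula. A sketch for a fully known joint probability table (the testing problem, of course, only gets samples):

    import numpy as np

    def mutual_information(joint):
        # joint[a, b] = Pr[A = a, B = b]; returns I(A,B) = KL(joint || p_A x p_B), in nats.
        pa = joint.sum(axis=1, keepdims=True)   # marginal of A
        pb = joint.sum(axis=0, keepdims=True)   # marginal of B
        mask = joint > 0
        return float((joint[mask] * np.log(joint[mask] / (pa * pb)[mask])).sum())

    # mutual_information(np.array([[0.5, 0.0], [0.0, 0.5]])) == log(2): fully dependent.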

Tight simulation of a distribution using conditional samples by Tomer Adar (arXiv). Consider a distribution over \(\{0,1\}^n\). Many distribution testing results do not yield useful algorithms for this setting, since the domain is exponentially large. Inspired by this setting, the subcube conditional model was introduced by Bhattacharyya-Chakraborty. In this model, we can generate samples conditioned on any subcube. The aim of this paper goes beyond property testing. Can we actually estimate specific values of the distribution with a few queries? Can we simulate the distribution, which means to “locally compute” a distribution that is close to the input? Specifically, given a domain point \(x\), we want to compute a value \(\mu(x)\) such that \(\mu\) represents a distribution close to the input. This paper gives such a result, where each \(\mu(x)\) can be computed with \(O(n^2/\varepsilon^2)\) queries. The final distribution has KL-divergence at most \(\varepsilon^2\) from the original.
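
The natural first attempt is the chain rule: write \(\mu(x) = \prod_i \Pr[x_i \mid x_1, \ldots, x_{i-1}]\) and estimate each factor with subcube-conditional samples. The sketch below shows this shape, with cond_sample(prefix) as an assumed interface returning a full sample conditioned on the subcube fixing that prefix; making this idea robust (tiny conditional probabilities, error accumulation in KL) is where the paper's work lies.

    def estimate_mass(x, cond_sample, m):
        # x is a bit string (tuple of 0/1); cond_sample(prefix) draws a full sample
        # from the input distribution, conditioned on agreeing with prefix.
        mass = 1.0
        for i in range(len(x)):
            prefix = x[:i]
            hits = sum(cond_sample(prefix)[i] == x[i] for _ in range(m))
            mass *= hits / m     # empirical estimate of Pr[x_i | x_1..x_{i-1}]
        return mass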

Sampling and Identity-Testing Without Approximate Tensorization of Entropy by William Gay, William He, Nicholas Kocurek, and Ryan O’Donnell (arXiv). Again, consider a distribution \(D\) over a large product domain \(\Sigma^n\). Our aim is identity testing: we wish to determine whether \({D} = {D}'\) or \(|{D} - {D}'| > \varepsilon\), where \({D}'\) is some explicitly known distribution. The model is much weaker than subcube conditional access: essentially, we can only get samples from subcubes of dimension (at least) \(n-1\). Blanca-Chen-Štefankovič-Vigoda studied identity testing where \({D}\) satisfies a condition called “approximate tensorization of entropy” (ATE). By Jensen’s inequality, if \({D}\) is a product distribution, the entropy of \({D}\) is at most the sum of entropies of the marginal distributions. If the entropy of \({D}\) is (say) at most a constant factor of this sum, we say it satisfies the ATE. Blanca-Chen-Štefankovič-Vigoda prove that \(\widetilde{O}(n/\varepsilon)\) samples suffice for identity testing. This paper studies input distributions that are mixtures of (say, a constant number of) distributions satisfying ATE. Interestingly, these mixtures do not satisfy the ATE. But this paper shows that one can still get \(\widetilde{O}(n/\varepsilon)\)-query identity testers.

News for May 2025

Apologies for the delay in publishing this digest. May has seen a lot of activity in property testing, with quite a lot from the quantum side: 8 papers in total!

Improved Sample Upper and Lower Bounds for Trace Estimation of Quantum State Powers (arXiv) by Kean Chen and Qisheng Wang. Say you have access to copies of a quantum state \(\rho\) (lucky you!) and want to estimate some property of \(\rho\): specifically, you’d like an additive estimate of \(\mathrm{Tr} \rho^q\), for some \(q> 1\) of your choosing. (This quantity, for various \(q\), has connections to Rényi and Tsallis entropies.) How many copies of the state do you need? The authors significantly strengthen previous bounds on the sample complexity of the task, providing a tight answer for \(q > 2\) and improved results for \(1< q < 2\). They also provide a list of future directions.

On estimating the quantum \(\ell_\alpha\) distance (arXiv), by Yupan Liu and Qisheng Wang. Now you’re given access to two quantum states \(\rho_0,\rho_1\), and you want to estimate their \(\alpha\)-Schatten distance, for a fixed \(\alpha>1\) — the “quantum analogue” of estimating the \(\ell_\alpha\) distance between two (classical) discrete probability distributions. The authors provide computationally efficient algorithms with constant sample complexity (for constant additive error), and characterize the computational complexity of the task as a function of \(\alpha\) (as \(\alpha \to 1\)).

Quantum Hamiltonian Certification (arXiv), by Minbo Gao, Zhengfeng Ji, Qisheng Wang, Wenjun Yu, and Qi Zhao. Now, you are given access to the time evolution operator \(e^{-i Ht}\), for an unknown Hamiltonian \(H\) over \(n\) qubits. Can you efficiently test (certify) whether \(H\) is close to (or far from) a reference Hamiltonian \(H_0\)? The paper considers this tolerant certification task with respect to the (normalized) Frobenius distance, and provides tight bounds on the evolution time (the measure of “sample complexity” in this time-evolution setting). The authors also discuss consequences of their results for other norms, and give an ancilla-free algorithm for the tolerant testing task.

Hamiltonian Locality Testing via Trotterized Postselection (arXiv), by John Kallaugher and Daniel Liang. When it Hamiltonian-rains, it Hamiltonian-pours! This paper considers another tolerant testing task on Hamiltonians, where instead of certifying whether \(H\) is close to a reference \(H_0\), the goal is to tolerantly test whether it is close to being “local” (vs. far from any local Hamiltonian), where a Hamiltonian is “local” if it can be decomposed as the sum of small-weight Pauli operators. The authors provide significantly improved upper bounds on the time evolution needed for this task, as well as a tight lower (and upper) bound for algorithms allowed reverse time evolution.

And for the last quantum paper of the month (that we could find), quantum tomography:

Sample-optimal learning of quantum states using gentle measurements (arXiv), by Cristina Butucea, Jan Johannes, and Henning Stein. A gentle measurement, as formalized by Aaronson and Rothblum, is a measurement which only “mildly” collapses the quantum state. In their paper, Aaronson and Rothblum established two-way connections between gentle measurements and differential privacy. In this work, the authors study quantum tomography with gentle measurements, and provide what looks like a connection to local differential privacy, along with a new information-theoretic tool: a quantum strong data processing inequality (SDPI) for gentle measurements.

And with this perfect segue, we now switch from quantum to local differential privacy:

Locally Differentially Private Two-Sample Testing (arXiv), by Alexander Kent, Thomas B. Berrett, and Yi Yu. This paper considers the question of two-sample testing (maybe more familiar to our readers under the name closeness testing) under local differential privacy, both for discrete and smooth continuous distributions. While some previous results on identity testing under LDP carry over to closeness testing, this work extends them to the smooth continuous case and, with an eye towards practicality, focuses on a particular class of testers: permutation tests.

… and to Boolean functions!

Testing Juntas Optimally with Samples (arXiv), by Lorenzo Beretta, Nathaniel Harms, and Caleb Koch. It is a truth universally acknowledged, that a single month in possession of a good number of property testing results, must be in want of a junta testing paper. And lo and behold: junta testing, indeed! The contributions of this paper are two-fold: first, the authors provide a tight query complexity bound for junta testing in the distribution-free setting (the testing analogue of the PAC learning setting, where distance is measured with respect to an arbitrary, unknown, ambient distribution). Second, they give a lower bound showing that for tolerant junta testing in the distribution-free setting, one may as well learn the whole thing: testing is as hard as learning.

And to conclude… anti-concentration inequalities, and applications.

Algebraic aspects of the polynomial Littlewood-Offord problem (arXiv), by Zhihan Jin, Matthew Kwan, Lisa Sauermann, and Yiting Wang. While this paper is concerned primarily with the (polynomial) Littlewood–Offord problem, and how to improve a recent theorem of Meka, Nguyen and Vu under additional structural assumptions, the authors remark that some of the lemmas they establish along the way have direct implications for property testing of matrices and tensors. This is discussed in Section 1.1.4 of the paper.

Announcing WoLA 2025 in Chicago

The 9th edition of WoLA, the Workshop on Local Algorithms, will be taking place on August 18-20, 2025 at the Toyota Technological Institute (TTIC), Chicago, IL.

For those unfamiliar with WoLA:

Local algorithms, that is, algorithms that compute and make decisions on parts of the output considering only a portion of the input, have been studied in a number of areas in theoretical computer science and mathematics. Some of these areas include sublinear-time algorithms, distributed algorithms, inference in large networks and graphical models. These communities have similar goals but a variety of approaches, techniques, and methods. This workshop is aimed at fostering dialogue and cross-pollination of ideas between the various communities.

You are all invited and we would love to see you there!  For more on (free) registration (by ⏰ August 10), how to submit a poster, list of invited speakers, and local arrangements, see the website.

Program Committee: Arnab Bhattacharyya (University of Warwick), Clément Canonne (University of Sydney), Elena Grigorescu (University of Waterloo), Moti Medina (Bar Ilan University), Rocco Servedio (Columbia University), Asaf Shapira (Tel Aviv University) [Chair], Ali Vakilian (TTIC), Yuichi Yoshida (NII)

News for April 2025

The bonanza of property testing results continues this month, with seven new papers on the arXiv! And with range, too: from regular languages to quantum states, with detours through distribution testing, Boolean functions, and relative error property testing. Oh, and yes, also, (quantum) magic!

The Trichotomy of Regular Property Testing (arXiv), by Gabriel Bathie, Nathanaël Fijalkow, and Corto Mascle. In property testing of regular languages, initiated by Alon, Krivelevich, Newman, and Szegedy in 2001, one has to decide whether an input word belongs to the target language \(L\), or is at distance at least \(\varepsilon\) from every \(x \in L\) (where the distance is typically Hamming or the (easier) edit distance). Surprisingly, it was shown that every regular language could be tested with \(\tilde{O}(1/\varepsilon)\) queries, independent of the input size. But is that tight? And is that \(\tilde{O}\) necessary? Many years of work later, the main result of this paper settles the question, showing that for testing under the Hamming distance there are only three options: either a language is trivial (0 queries needed!), or easy (\(\Theta(1/\varepsilon)\) queries necessary and sufficient), or just plain hard (\(\Theta(\log(1/\varepsilon)/\varepsilon)\) queries necessary and sufficient). Nothing else!

A Mysterious Connection Between Tolerant Junta Testing and Agnostically Learning Conjunctions (arXiv), by Xi Chen, Shyamal Patel, and Rocco Servedio. (1) Take two well-studied, notoriously challenging and seemingly unrelated problems about Boolean functions: agnostically learning conjunctions (that is, learning conjunctions with noise), and tolerantly testing juntas. (2) Unearth a new connection between the two tasks. (3) Write a very entertaining, illuminating introduction about this. (4) Oh, also, provide a new \(\tilde{O}(2^{n^{1/3}})\)-time agnostic learning algorithm for conjunctions, improving the previous best known result after 18 years; use this new connection to obtain a \(\tilde{O}(2^{k^{1/3}})\)-query tolerant tester for juntas, thus showing a polynomial separation between adaptive and non-adaptive algorithms for this task. (5) You get this paper.

Distribution Testing Meets Sum Estimation (arXiv), by Pinki Pradhan and Sampriti Roy. In the sum estimation problem, there is a set of \(n\) elements, each with a non-negative weight, and the goal is to estimate the total weight \(W\) while minimizing the number of weights queried. Under a model which allows both weighted and uniform sampling, this is known to be achievable with \(O(n^{1/3})\) queries (Beretta and Tětek). This paper considers the task under two additional assumptions: first, the weights are non-increasing; second, the algorithm is allowed conditional weighted and uniform sampling (i.e., the same two types of sampling, but conditioned on any subset \(S\) of its choosing). In this setting, the authors show how to estimate the total weight to within \(1\pm\varepsilon\) with only \(O((\log n)\cdot\text{poly}(1/\varepsilon))\) queries.

Efficient witnessing and testing of magic in mixed quantum states (arXiv), by Tobias Haug and Poetri Sonya Tarabunga. Magic is real, and it’s quantifiable: roughly speaking, it quantifies the amount of “non-stabilizerness” of a state. I won’t pretend to fully understand what this means, but this paper shows that one can test the magic of low-entropy \(n\)-qubit states (i.e., distinguish between low-magic and high-magic states) with only polynomially (in \(n\)) many copies of the state.

Mildly-Interacting Fermionic Unitaries are Efficiently Learnable (arXiv), by Vishnu Iyer. More quantum property testing! In this paper, the author shows how to test whether an \(n\)-mode fermionic unitary has Gaussian dimension at least \(k\) (or is \(\varepsilon\)-far from it in Frobenius norm) in time \(\text{poly}(n, 1/\varepsilon)\). (This is then used as a building block to efficiently learn such unitaries.)

Testing Juntas and Junta Subclasses with Relative Error (arXiv), by Xi Chen, William Pires, Toniann Pitassi, and Rocco Servedio. In this relatively (!) new model of property testing, which we covered last October, the notion of farness between two Boolean functions is relative to their number of satisfying assignments. Testing with relative error is at least as hard as in the standard setting, and could be strictly harder: this paper shows that, for the case of testing juntas, this is not the case. Even with relative error testing, \(\tilde{O}(k/\varepsilon)\) queries are necessary and sufficient for junta testing! Using ideas from “standard testing” (specifically, from the “testing by implicit learning” framework), their results further extend to testing a large number of subclasses of juntas.

Relative-error testing of conjunctions and decision lists (arXiv), by Xi Chen, William Pires, Toniann Pitassi, and Rocco Servedio. Same team, same model — more results! After testing juntas in the relative error model, the authors continue their systematic exploration of the testing questions, this time focusing on testing decision lists and conjunctions. They are able to obtain a \(\tilde{O}(1/\varepsilon)\)-tester with two-sided error for the former, and an \(O(1/\varepsilon)\)-tester with one-sided error for the latter: both matching the best-known query complexity in the standard model.

News for March 2025

After a good spell last month, the community saw no reason to relent. This month, we had five papers: from tolerant testers for fundamentally important graph problems, to privately testing distributions, to more explorations in the conditional sampling model, to counting and sampling motifs. Also included as a bonus is a result on multipass streaming lower bounds for approximating max-cut.

A Tolerant Independent Set Tester (arXiv) (by Cameron Seth) Consider the setting of testing graph properties in the dense graph model, where you have access to the adjacency matrix of the graph and you may check whether a pair \((i,j)\) is connected by an edge or not. In this model, consider the following task: I want you to design a sublinear time procedure which, on input a dense graph, runs in time \(poly(\rho/\varepsilon)\) and returns YES if the graph is \(\varepsilon_1 = \widetilde{\Omega}(\varepsilon)\)-close to having an independent set of size at least \(\rho n\), and returns NO if the graph is \(\varepsilon_2 = \varepsilon\)-far from all graphs that have an independent set of size at least \(\rho n\). The verdict is required in both cases to hold with probability at least \(2/3\). Alright, that’s what the title pretty much gave away already. So, how does the paper establish this result?

To this end, the paper uses a remarkably simple algorithm. Pick a random subset \(S \subseteq V\) of size \(s = \widetilde{O}(\rho^3/\varepsilon^2)\) and focus on counting how many edges you see in various subsets of \(S\), all of size \(\rho s\). If you see some subset of size \(\rho s\) which induces at most \(\varepsilon_1 s^2\) edges, you accept \(G\). If this condition fails, you reject \(G\). The analysis presented in the paper is intricate and proceeds by proving a new container lemma for sparse graphs (rather than independent sets). It is a good read and you should totally take a look at this paper.
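
In code, the sample-and-count skeleton looks as follows (a sketch: the exhaustive search over subsets is for illustration only, and the threshold hides the constants where the paper's container-lemma analysis does the real work):

    import itertools, random

    def tolerant_is_tester(adj, n, rho, s, edge_threshold):
        # adj(u, v) -> bool is adjacency-matrix query access; s ~ O~(rho^3 / eps^2).
        S = random.sample(range(n), s)
        t = int(rho * s)
        for T in itertools.combinations(S, t):    # illustration only: exponential
            edges = sum(1 for u, v in itertools.combinations(T, 2) if adj(u, v))
            if edges <= edge_threshold:           # threshold ~ eps_1 * s^2
                return "ACCEPT"                   # found a sparse rho*s-sized subset
        return "REJECT"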

Better Private Distribution Testing by Leveraging Unverified Auxiliary Data (arXiv) (by Maryam Aliakbarpour, Arnav Burudgunte, Clément Canonne (our own!), and Ronitt Rubinfeld) If our January post is fresh in your mind, you might recall the theme: assuming you know a little about the distribution \(\mathbf{p}\) presented to you allows you to give improved algorithms for classic distribution testing tasks such as identity testing and uniformity testing (I am assuming the support of my distributions is \([n]\)). This paper considers these problems in the setting of differential privacy. So, you want to design private algorithms for these classic tasks, where you get to assume that you know something about the target distribution — that is, you assume it is not completely unknown, just like we had in our January post. The paper presents sample-efficient private algorithms both for identity testing and closeness testing.

Optimal mass estimation in the conditional sampling model (arXiv) (by Tomer Adar, Eldar Fischer, and Amit Levi) As the title suggests, this paper considers the task of estimating the probability mass of all elements in the support of a probability distribution sitting on a finite universe \(\mathcal{U}\). Of course, you have to assume stronger oracle access for this task, and the oracle people assume access to is the conditional sampling oracle: you get to specify a subset \(S \subseteq \mathcal{U}\), and you receive a sample from \(S\) drawn with the correct conditional probability relative to the underlying distribution. Meel-Kumar-Pote gave an algorithm for this task which requests \(poly(\log n)/poly(\varepsilon)\) samples. The featured paper, on the other hand, presents an algorithm for this problem which requests only \(O(\log(\log n)/poly(\varepsilon))\) samples. The authors also show a matching lower bound, effectively closing this problem.

Approximately Counting and Sampling Hamiltonian Motifs in Sublinear Time (arXiv) (by Talya Eden, Reut Levi, Dana Ron, and Ronitt Rubinfeld) Suppose I give you a large graph \(G = (V,E)\) and a small subgraph \(H\) whose number of occurrences in \(G\) I wish to (approximately) count. You may assume that you have access to an oracle which can give you a random vertex, the \(i\)-th neighbor of your favorite vertex, the degree of a vertex, and can also answer whether a particular pair of vertices is connected by an edge. This is the standard model we use for coming up with sublinear time algorithms for graph problems. It is sometimes contrasted with the augmented model where, in addition to all of the above, you also get access to an oracle which can give you uniformly random edges of the graph \(G\). As you can imagine, the augmented model allows you to count the number of occurrences of \(H\) for a much richer class of small subgraphs \(H\). Indeed, this used to be the status quo till this paper: in the augmented model, we can pretty much count the number of occurrences of all small subgraphs \(H\), whereas in the standard model, we only knew how to count edges, stars, and cliques. The featured paper makes progress towards closing this divide and presents algorithms for approximately counting the number of copies of \(H\) for a much richer class of small subgraphs \(H\). One thing you might want to try is to simulate the algorithms in the augmented model by hacking up algorithms for returning uniformly random edges. However, that approach is a no-go, as the running time to sample u.a.r. edges can be prohibitive for a sublinear time implementation in the standard model. This paper develops ideas which bypass such annoyances.

Multi-Pass Streaming Lower Bounds for Approximating Max-Cut (arXiv) (by Yumou Fei, Dor Minzer, Shuo Wang) This result, as the title revealed, is from the streaming literature. But it should be of interest to our readers, and so we cover it here. The essential summary is this: if you want a better-than-\(1/2\) approximation algorithm for the max-cut problem in the streaming setting — like a \(1/2 + \varepsilon\) approximation — be ready to pay up with \(poly(n)\) passes. In more detail, the technical statement reads as follows: any randomized, \(k\)-pass, \((1/2 + \varepsilon)\)-approximation algorithm for max-cut requires \(\Omega(n^{1/3}/k)\) space. The lower bound results in this paper are inspired by the communication complexity lower bound for a certain problem investigated in the seminal work of Kapralov-Krachun from 2019, where the authors showed that any single-pass streaming algorithm for max-cut aiming to produce a better-than-\(1/2\) approximation must cough up \(\Omega(n)\) space. This requires a lot of care, as in the multipass streaming setting, you cannot hope to show your communication complexity lower bounds by a naive application of the discrepancy method. The delicate finesse needed is brought in by a clever use of hypercontractivity. Looks like this will be an interesting treat for our intrepid readers.

News for February 2025

After a few quiet months on PTReview, the community is back with a bang. We have a selection of nine papers this month, spanning hypergraphs, vertex coloring, triangle counting, a new take on linearity testing, and a sublinear foray into transformers.

Property Testing in Bounded Degree Hypergraphs by Hugo Aaronson, Gaia Carenini, and Atreyi Chanda (arXiv). We all know about the rich history of property testing and sublinear algorithms for graphs. The space for hypergraphs is far less studied, and likely has an even richer theory. Consider the input being an \(n\)-vertex \(k\)-uniform hypergraph, so every hyperedge is a subset of \(k\) vertices. This paper is the first to define the bounded degree setting. The hypergraph is represented as an adjacency list, wherein, for each vertex, we have the list of hyperedges containing that vertex. Distance between hypergraphs is defined in terms of the symmetric difference between the sets of hyperedges. The primary focus of the paper is on the fundamental property of \(k\)-colorability (note that \(k\) is also the hyperedge size). The main lower bound result is that for \(k \geq 3\), this property requires \(\Omega(n)\) queries to test. This is in contrast to the graph setting (\(k = 2\)), where bipartiteness is known to be testable for bounded degree graphs. This paper also shows that if inputs have bounded treewidth, then \(k\)-colorability is testable with one-sided error with a constant number of queries.

Simple Sublinear Algorithms for \((\Delta + 1)\) Vertex Coloring via Asymmetric Palette Sparsification by Sepehr Assadi and Helia Yazdanyar (arXiv). Consider access to the adjacency list of a graph with \(n\) vertices and maximum degree \(\Delta\). Most of our readers will be familiar with the result of Assadi-Chen-Khanna proving that \((\Delta + 1)\)-coloring can be done in \(\widetilde{O}(n^{3/2})\) time. The key tool is a palette sparsification lemma, which shows that each vertex can independently pick a small list of colors such that the final coloring is consistent with these lists. This paper proves a weaker version of this lemma with a significantly simpler proof. The result is an asymmetric version, where vertices choose color lists of varying sizes. The average list length goes up by a log factor. But the proof is simpler, and it also allows the final coloring to be found by the greedy algorithm (constrained to these lists).
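
The lemma's algorithmic punchline fits in a few lines; a sketch under the simplifying assumption that every vertex samples a list of the same size (the paper's asymmetric version varies the list sizes):

    import random

    def palette_sparsified_greedy(adj_list, delta, list_size):
        # Each vertex independently samples list_size colors from {0, ..., delta};
        # then color greedily, constrained to the sampled lists. The palette
        # sparsification lemma says suitable list sizes make this succeed whp.
        n = len(adj_list)
        lists = [random.sample(range(delta + 1), list_size) for _ in range(n)]
        color = [None] * n
        for v in range(n):
            used = {color[u] for u in adj_list[v] if color[u] is not None}
            free = [c for c in lists[v] if c not in used]
            if not free:
                return None      # greedy got stuck (whp this does not happen)
            color[v] = free[0]
        return color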

Improved Sublinear Algorithms for Classical and Quantum Graph Coloring by Asaf Ferber, Liam Hardiman, and Xiaonan Chen (arXiv). Another result on \((\Delta +1)\)-coloring. A result of Morris and Song shows that there are simpler algorithms that give a \((1+\varepsilon)\Delta\)-coloring. Adapting their techniques, this paper gives a sublinear algorithm for \((\Delta+1)\)-coloring that runs in \(O(n^{3/2}\sqrt{\log n})\) time, a \(\sqrt{\log n}\)-factor improvement over previous results. The algorithm can be accelerated in the quantum model using Grover’s search, to get a running time of \(\widetilde{O}(n^{4/3})\). Finally, the paper gives a \(\widetilde{O}(n^{5/4})\)-time quantum algorithm for getting a \((1+\varepsilon)\Delta\)-coloring.

Monotonicity Testing of High-Dimensional Distributions with Subcube Conditioning by Deeparnab Chakrabarty, Xi Chen, Simeon Ristic, C. Seshadhri, and Erik Waingarten (arXiv). In my first foray into distribution testing, this paper studies the problem of testing monotonicity of distributions over the hypercube, \(\{-1,1\}^n\). Observe that the domain is considered to be exponentially large, and the aim is to get distribution testers that take \(\textrm{poly}(n)\) samples. To get such bounds, this paper assumes subcube conditional samples: given any subcube of the domain, the algorithm can generate samples restricted to that subcube. A distribution is called monotone if the corresponding probability mass function is monotone (with respect to the standard partial order on \(\{-1,1\}^n\)). For this setting, this paper shows that the property of monotonicity can be tested with \(\widetilde{O}(n/\varepsilon^2)\) samples. Moreover, this bound is shown to be optimal, up to polylog factors. One of the main tools is a new directed isoperimetric theorem for real-valued functions on the hypercube.

Compression Barriers in Autoregressive Transformers by Themistoklis Haris and Krzysztof Onak (arXiv). While this paper does not give a sublinear time algorithm, it uses many techniques from the sublinear algorithms toolkit to prove lower bounds. And it is on transformers, arguably one of the most important components of machine learning on sequences and Large Language Models. Consider a sequence of \(n\) vectors \(v_1, v_2, \ldots, v_n \in \mathbb{R}^d\), which one can think of as vector embeddings of \(n\) words (coming from, say, some generated text). Based on extensive learning and model fine-tuning, assume that the model has learned some “key” vectors \(k_1, k_2, \ldots, k_n \in \mathbb{R}^d\). The next word is generated from a “query vector” \(q\), using which we compute \(M^{-1} \sum_{\ell \leq n} \exp(q \cdot k_\ell) v_\ell\) (where \(M\) is a normalization factor). Suppose there are \(n\) query vectors, and the aim is to compute the above sum for all the query vectors (these are like the next \(n\) words). In practice, there are many schemes to speed up computation by compressing the key vectors. This paper shows that, in the worst case, computing the next word/vector takes \(\Omega(nd)\) time and space. Thus, any sublinear compression/speed-up method must fundamentally use properties of the data.
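
The computation in question is ordinary softmax attention; in numpy (a sketch of the object whose compression the paper rules out, not of any algorithm from the paper):

    import numpy as np

    def attend(q, K, V):
        # One generation step: M^{-1} * sum_l exp(q . k_l) v_l, with K, V of shape (n, d).
        w = np.exp(K @ q)            # unnormalized attention weights, shape (n,)
        return (w / w.sum()) @ V     # convex combination of the value vectors

    # Answering n queries naively costs O(n^2 d); the paper shows that, in the worst
    # case, no compression of the keys can beat Omega(nd) time and space per step.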

Arboricity and Random Edge Queries Matter for Triangle Counting using Sublinear Queries by Arijit Bishnu, Debarshi Chanda, Gopinath Mishra (arXiv). Triangle counting is a classic problem, with a rich history in sublinear algorithms. The earliest work gives an algorithm running in \(\widetilde{O}(n/t^{1/3} + m^{3/2}/t)\) time, where \(n, m\) are the usual graph parameters and \(t\) is the triangle count. (Polylog and \(\varepsilon\) factors are ignored.) This complexity can be improved to \(\widetilde{O}(n/t^{1/3} + m\alpha/t)\), where \(\alpha\) is the graph arboricity/degeneracy. The standard adjacency list model can be augmented with random edge queries, in which case, it was known that \(O(m^{3/2}/t)\) time algorithms exist. This paper shows that \(O(m\alpha/t)\)-time algorithms exist for the model with random edge queries, which combines the previous results into an even better algorithm.

Improved Sublinear-time Moment Estimation using Weighted Sampling by Anup Bhattacharya, Pinki Pradhan (arXiv). Consider an array of \(n\) non-negative weights \(w_1, \ldots, w_n\). The \(t\)-th moment is the sum \(\sum_{i \leq n} w^t_i\). The aim is to estimate this moment using proportional samples: the algorithm can generate index samples where index \(i\) has probability proportional to \(w_i\). The first result on this problem, for \(t=1\), was by Motwani-Panigrahy-Xu. Beretta-Tetek did a detailed study of this problem for the case \(t=1\) (sum estimation), and showed that it can be solved with \(O(\sqrt{n})\) queries. When the weights are vertex degrees and we have graph access, there are results showing the existence of \(\widetilde{O}(n^{1-1/t})\) algorithms for \(t > 1\). This paper gives such a result for any non-negative weights with \(t \geq 1\), and proves a matching lower bound. Moreover, this paper considers the setting where \(t < 1\). Interestingly, there is an algorithm of query complexity \(O(\sqrt{n} + n^{1/t-1})\), and a lower bound showing that \(\Omega(n)\) samples are needed when \(t \leq 1/2\).

Biased Linearity Testing in the 1% Regime by Subhash Khot and Kunal Mittal (ECCC). The classic problem of linearity testing never stops yielding new insights. A function \(f:\{0,1\}^n \to \{-1,1\}\) is linear if \(f(x) = (-1)^{a \cdot x}\) for some \(a \in \{0,1\}^n\). The classic 3-query BLR test distinguishes linear functions from those that are far from linear. Specifically, if the tester passes with probability \(1-\delta\), then the function \(f\) is \(O(\delta)\)-close to linear. This is called testing in the “99% regime”. Suppose we want a tester with guarantees in the “1% regime”. This means that if the tester passes with probability \(> \delta\), then \(f\) has at least \(\varepsilon\) correlation with some linear function (where \(\varepsilon\) is some function of \(\delta\)). Such results were known for the uniform distribution on the hypercube. This paper studies product distributions on the hypercube, where all marginals are identically \(p\)-biased (the probability of choosing a \(1\) is \(p\)). Such 1%-testers were known for \(p \in (1/3,2/3)\) from recent work. This paper gives a \((1+1/p)\)-query tester for all values of \(p\), and proves that the number of queries must be at least \(1+1/p\).
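
For reference, one round of the classic BLR test (the 99%-regime workhorse under the uniform distribution) is sketched below; the featured paper's \(p\)-biased testers necessarily look different, using roughly \(1 + 1/p\) queries.

    import random

    def blr_round(f, n):
        # In the +/-1 convention, f is linear iff f(x) f(y) f(x XOR y) = 1 for all x, y.
        x = [random.randint(0, 1) for _ in range(n)]
        y = [random.randint(0, 1) for _ in range(n)]
        z = [a ^ b for a, b in zip(x, y)]
        return f(x) * f(y) * f(z) == 1   # True means this round passes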

News for January 2025

January 2025 was by many measures a very… eventful month; as far as property testing is concerned, not so much, with only two papers (and a third we had previously missed). Uneventful is not a bad thing, sometimes!

Many pentagons in triple systems, by Dhruv Mubayi and Jozsef Solymosi (arXiv). This paper is interested in quantifying the number of copies of \(C_k\) in 3-uniform hypergraphs. In the process, the authors establish a quantitative result very relevant to property testing, at least for those with an interest in testing triangle-freeness in dense graphs, improving on a result of Gishboliner, Shapira and Wigderson: namely, that if an \(n\)-vertex graph is \(\varepsilon\)-far from triangle-freeness, then for every \(\ell \geq 2\) it must contain \(\Omega(\varepsilon^{3\ell} n^{2\ell+1})\) copies of \(C_{2\ell+1}\).

Testing Noise Assumptions of Learning Algorithms, by Surbhi Goel, Adam R. Klivans, Konstantinos Stavropoulos, and Arsen Vasilyan (arXiv). Testable learning has seen a surge of interest since its (recent) introduction by Rubinfeld and Vasilyan (2023). In this framework, a learning algorithm which works under some data distribution assumption (e.g., the data is from a spherical cow Gaussian) is not “off the hook” when that assumption isn’t met, as is the case in classical learning. Instead, the algorithm must put its money where its algorithmic mouth is: if the data does indeed satisfy the assumption, then it must output a hypothesis that satisfies the learning guarantee; if the data does not satisfy the assumption, it is allowed to abort and output an error flag; but if it does output a hypothesis then, regardless of whether the distributional assumption is met, that hypothesis must satisfy the learning guarantee. In this sense, the algorithm must act like a property tester for the distributional assumption made on the data.
This paper extends the testable learning framework from assumptions on the data distribution to assumptions on the noisy data generation model: the assumption to be tested (and used) is no longer only on the distribution of the data (regardless of the associated labels), but on the distribution of the pair (data, label), including the way the label may be corrupted. In particular, the authors focus, as an application, on learning high-dimensional origin-centered halfspaces, where the assumption is that the data is from a Gaussian distribution, with labels perturbed by Massart noise.

Learning multivariate Gaussians with imperfect advice, by Arnab Bhattacharyya, Davin Choo, Philips George John, and Themis Gouleakis (arXiv). Suppose you want to learn the mean (or, if the covariance isn’t known, even better, the mean and covariance) of a high-dimensional Gaussian from i.i.d. samples. You’re in luck: we know how to do it, and the algorithm is very simple! You’re not in luck, though: you’ll need a lot of these i.i.d. samples to achieve non-trivial accuracy, a number either linear or quadratic in the dimension, depending on whether you’re learning only the mean vector or the whole thing.
But say you’re in “luck”: a “good” (but not very trustworthy) friend comes to your help, claiming they already know the mean and covariance, and tells you what they (claim they) are. Can you use this possibly unreliable advice to learn the Gaussian better? This is the setting of learning with advice, and, in this paper, the authors show that yes, when learning Gaussians, you can! And, what’s even better (for this blog), the algorithm they design uses as a core subroutine a tolerant tester, which allows them to carefully check the quality of the “advice”.

News for December 2024

Happy New Year to you and your loved ones! Welcome to the first post of 2025. This month, we feature four papers: one on property testing in the huge object model, two on distribution testing, and a fourth that, while not strictly a property testing paper, was too intriguing to ignore. The last paper recounts the resolution of a fascinating bet between Peter Sarnak and Noga Alon. With that teaser, let’s dive in!

Testing vs Estimation for Index-Invariant Properties in the Huge Object Model by Sourav Chakraborty, Eldar Fischer, Arijit Ghosh, Amit Levi, Gopinath Mishra, and Sayantan Sen (arXiv) Let me begin by reminding you of the setup for distribution testing in the huge object model, which was introduced by Goldreich and Ron. Here is a brief refresher of this model. Suppose you want to test whether a distribution \(\mathcal{D}\), supported over the Boolean hypercube, satisfies a certain property \(\mathcal{P}\). To do this, you sample a string \(x \sim \mathcal{D}\), where \(x\) is of length \(n\). The huge object model considers situations where \(n\) is very large, and so you instead assume query access to the sampled strings. In this model, the distribution \(\mathcal{D}\) is considered \(\varepsilon\)-far from \(\mathcal{P}\) if the earthmover distance (EMD) between \(\mathcal{D}\) and the closest distribution satisfying \(\mathcal{P}\), measured with respect to the relative Hamming distance between bitstrings, is at least \(\varepsilon\). In our July 2022 post, we covered a paper by a subset of the authors which presented efficient testers in this model for index-invariant properties whose support had bounded VC dimension.

The main result of the featured paper shows the following: for an index-invariant property, you can basically upgrade a plain tester to a tolerant tester. Thus, the ability to efficiently \(\varepsilon\)-test an index-invariant distribution property in the huge object model translates into the ability to estimate the distance from the property.

Optimal Algorithms for Augmented Testing of Discrete Distributions by Maryam Aliakbarpour, Piotr Indyk, Ronitt Rubinfeld, and Sandeep Silwal (arXiv) Let us take the standard setup of discrete distribution testing supported over \([n]\), with a slight twist. Suppose you can assume that the underlying distribution \(\boldsymbol{p}\) is not entirely unknown. As the paper argues, this might be a realistic assumption for distributions arising from network traffic data or search engine queries. The main result of this paper shows that, with a good proxy \(\boldsymbol{p}'\) for the input distribution \(\boldsymbol{p}\), i.e., a situation where, say, \(\|\boldsymbol{p}'-\boldsymbol{u}\|_1 \gg \|\boldsymbol{p}' - \boldsymbol{p}\|_1\) and \(\|\boldsymbol{p}' - \boldsymbol{p}\|_1\) is small enough, you get testers for uniformity testing with sample complexity \(O(1)\). (Here, \(\boldsymbol{u}\) denotes the uniform distribution over \([n]\).) In this framework, the authors also present algorithms with improved sample complexity for identity testing and closeness testing.

Settling the complexity of testing grainedness of distributions, and application to uniformity testing in the Huge Object model by Clement Canonne, Sayantan Sen, Qiping Yang (ECCC) A discrete distribution is called \(m\)-grained if all probabilities are integer multiples of \(1/m\). If you are a regular PTReview reader, you might recall our News for September 2021, which featured a paper by Goldreich and Ron proving an \(\Omega(n^c)\) lower bound, for every \(c < 1\), for testing \(m\)-grainedness. Goldreich and Ron also conjectured that the true lower bound is actually \(\Omega(n/\log n)\) (when \(m = \Theta(n)\)). The current work resolves this conjecture, settling the complexity of this problem.

Ramanujan Property and Edge Universality of Random Regular Graphs by Jiaoyang Huang, Theo Mckenzie, and Horng-Tzer Yau (arXiv) So, yes, here is the non-property-testing paper I wanted to tell you about. Let me start with the juicy bit. Peter Sarnak and Noga Alon had a bet about the following situation: fix \(d \in \mathbb{N}\) and take a random \(d\)-regular graph on \(n\) vertices. Sarnak conjectured that the probability that this graph is in fact a Ramanujan expander goes to zero as \(n \to \infty\), whereas Alon conjectured that this probability tends to one as \(n \to \infty\). The featured paper shows that while this probability decreases with increasing \(n\), it approaches a limiting value of around \(0.69\). You can watch this juicy bit here.