A nice array of papers for this month, spanning distribution testing, dense graph property testing, sublinear graph algorithms, and various mixtures thereof. Let us now sample our spread! (We have also added a semi-streaming paper that achieved results similar to another posted paper. -Ed)
A Lower Bound on the Complexity of Testing Grained Distributions by Oded Goldreich and Dana Ron (ECCC). A discrete distribution is called \(m\)-grained if all probabilities are integer multiples of \(1/m\). This paper studies the complexity of testing this property of distributions. For simplicity, consider the property of being \(n/2\)-grained, where \(n\) is the support size. The classic lower bound for testing uniformity shows that \(\Omega(\sqrt{n})\) samples are required to distinguish the uniform distribution over \([n]\) from a distribution uniform on \(n/2\) elements. Thus, we get a lower bound of \(\Omega(\sqrt{n})\) for testing \(n/2\)-grainedness (if I am permitted to use that word). This paper proves a lower bound of \(\Omega(n^c)\) for every constant \(c < 1\). It is conjectured that the true lower bound is \(\Omega(n/\log n)\), which would match the upper bound (which holds for any label-invariant property).
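To make the definition concrete, here is a small Python sketch (my own illustration, not from the paper): a check that an explicitly given probability vector is \(m\)-grained, together with the hard pair behind the lower bound intuition, namely the uniform distribution over \([n]\) versus a distribution uniform on a random set of \(n/2\) elements.

```python
import random

def is_m_grained(probs, m, tol=1e-9):
    """Check whether every probability is an integer multiple of 1/m."""
    return all(abs(p * m - round(p * m)) < tol for p in probs)

n = 1000
uniform_all = [1.0 / n] * n                       # each p_i = 1/n: not (n/2)-grained
support = set(random.sample(range(n), n // 2))
uniform_half = [2.0 / n if i in support else 0.0  # each p_i in {0, 2/n}: (n/2)-grained
                for i in range(n)]

print(is_m_grained(uniform_all, n // 2))    # False
print(is_m_grained(uniform_half, n // 2))   # True
```

Of course, a tester only gets samples, never the explicit probability vector; the code above is just to fix what the property means.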
Testing Distributions of Huge Objects by Oded Goldreich and Dana Ron (ECCC). This paper introduces a new model that marries distribution testing with property testing on strings. The “object” of interest is a distribution \(\mathcal{D}\) over strings of length \(n\). We wish to test whether \(\mathcal{D}\) possesses some property. The tester can get a random string \(x\) from the distribution, and can query any desired index of \(x\). The distance between distributions is defined using the earthmover distance (where the underlying metric is the Hamming distance between strings). This model is called the DoHO (Distributions of Huge Objects) model. Many questions are posed and connections drawn to classical property testing and distribution testing. What I find most compelling is the following application: the distribution \(\mathcal{D}\) may represent noisy or perturbed versions of a single object, so the DoHO model gives a natural generalization of standard property testing to noisy objects. This paper considers problems such as testing whether \(\mathcal{D}\) is a random perturbation of a string, a random cyclic shift, or a random isomorphism of a graph.
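To spell out the access model, here is a toy Python sketch (my own rendition, with an explicit representation of \(\mathcal{D}\) purely for illustration) of the two kinds of access a DoHO tester gets: drawing a fresh sample string, and querying individual positions of a drawn sample.

```python
import random

class DoHOOracle:
    """Toy oracle for the DoHO model: the hidden object is a distribution D
    over length-n strings; the tester never sees a sample in full."""

    def __init__(self, strings, weights):
        self.strings, self.weights = strings, weights
        self.samples = []     # samples drawn so far (hidden from the tester)
        self.queries = 0      # number of position-queries charged to the tester

    def sample(self):
        """Draw a fresh string x ~ D and return a handle to it."""
        self.samples.append(random.choices(self.strings, weights=self.weights)[0])
        return len(self.samples) - 1

    def query(self, handle, i):
        """Return the i-th symbol of the sample behind `handle`."""
        self.queries += 1
        return self.samples[handle][i]

# A tester is charged both for samples drawn and for positions queried.
oracle = DoHOOracle(["0101", "0110"], weights=[0.5, 0.5])
h = oracle.sample()
print(oracle.query(h, 2))
```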
Sublinear Time and Space Algorithms for Correlation Clustering via Sparse-Dense Decompositions by Sepehr Assadi and Chen Wang (arXiv). Correlation clustering is a classic problem where edges in a graph are labeled ‘+’ or ‘-’, denoting whether these edges should be uncut or cut. The aim is to cluster the graph while minimizing the total number of “disagreements” (cut ‘+’ edges or uncut ‘-’ edges). This paper gives an \(O(1)\)-approximation algorithm that runs in \(O(n\log^2 n)\) time; this is the first sublinear time approximation algorithm for this problem. Correlation clustering has seen results from the property testing/sublinear algorithms community before, beginning with Bonchi, Garcia Soriano, and Kutzkov. But previous results were essentially in the dense graph model, giving additive \(O(\epsilon n^2)\) error and assuming adjacency matrix input. This paper instead assumes access to the adjacency list of the ‘+’ edges. Interestingly (from a technical standpoint), the key tool is a new sparse-dense decomposition. Such decompositions emerged from the seminal work of Assadi-Chen-Khanna on sublinear \((\Delta+1)\)-coloring, and it is great to see applications beyond coloring.
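To pin down the objective, here is a short Python sketch (mine, not the paper's algorithm) of the disagreement count that correlation clustering minimizes.

```python
def disagreements(plus_edges, minus_edges, cluster_of):
    """Number of '+' edges that are cut plus number of '-' edges left uncut."""
    cut_plus = sum(1 for u, v in plus_edges if cluster_of[u] != cluster_of[v])
    uncut_minus = sum(1 for u, v in minus_edges if cluster_of[u] == cluster_of[v])
    return cut_plus + uncut_minus

# Tiny example: a triangle with one '-' edge forces at least one disagreement.
plus, minus = [(0, 1), (1, 2)], [(0, 2)]
print(disagreements(plus, minus, {0: 'A', 1: 'A', 2: 'A'}))  # 1: the '-' edge is uncut
print(disagreements(plus, minus, {0: 'A', 1: 'A', 2: 'B'}))  # 1: the '+' edge (1,2) is cut
```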
Sublinear-Time Computation in the Presence of Online Erasures by Iden Kalemaj, Sofya Raskhodnikova, and Nithin Varma (arXiv). Can property testing be done when portions of the input are hidden? This question was first raised by Dixit-Raskhodnikova-Thakurta-Varma, who gave a model of erasure-resilient testing. There is an adversary who hides (erases) part of the input function; queries to those parts just yield a dummy symbol. This paper defines an online version of this model, with an erasure parameter \(t\): after each query made by the property tester, the adversary can erase \(t\) values of the function. Consider the classic property of linearity of functions \(f:\{0,1\}^d \rightarrow \{0,1\}\). The BLR tester queries triples \((x, y, x \oplus y)\) and checks that \(f(x) \oplus f(y) = f(x \oplus y)\). Observe how this tester is easily defeated by our adversary, who simply erases the value \(f(x\oplus y)\) once \(x\) and \(y\) are queried. One of the main results of this paper is an \(O(1/\varepsilon)\)-query tester for linearity that works for any constant erasure parameter \(t\); note that this matches the bound for the standard setting. There are a number of results for other classic properties, such as monotonicity (sortedness) and the Lipschitz property.
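For reference, here is a minimal Python sketch of the standard (erasure-free) BLR test, with inputs encoded as \(d\)-bit integers. This is the baseline being attacked, not the paper's online-erasure-resilient tester, which has to be considerably cleverer.

```python
import random

def blr_linearity_test(f, d, eps):
    """Standard BLR test: repeatedly pick x, y and check f(x) XOR f(y) == f(x XOR y).
    The third query point x XOR y is determined by the first two, which is exactly
    what an online-erasure adversary exploits by erasing f(x XOR y) in time."""
    for _ in range(int(3 / eps) + 1):
        x, y = random.getrandbits(d), random.getrandbits(d)
        if f(x) ^ f(y) != f(x ^ y):
            return False      # reject: found a violated triple
    return True               # accept

# A parity (i.e., linear) function always passes.
parity = lambda z: bin(z).count('1') % 2
print(blr_linearity_test(parity, d=20, eps=0.1))   # True
```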
Sublinear Time Eigenvalue Approximation via Random Sampling by Rajarshi Bhattacharjee, Cameron Musco, and Archan Ray (arXiv). Consider the problem of estimating all the eigenvalues of a real, symmetric \(n \times n\) matrix \(M\) with bounded entries, in sublinear time. The main result shows that the eigenvalues of a uniformly random principal submatrix of size \(O(\epsilon^{-4}\log n)\) can be used to approximate all eigenvalues of \(M\) up to additive error \(\epsilon n\). One can think of this as a sort of concentration inequality for eigenvalues. This result follows (and builds upon) work of Bakshi-Chepurko-Jayaram on property testing of semidefiniteness. The key idea is that eigenvectors corresponding to large eigenvalues have small infinity norm: intuitively, since all entries of \(M\) are bounded, such an eigenvector must have its mass spread out over many coordinates. Hence, we can get information about it by randomly sampling a few coordinates. The paper also shows that the approach of taking principal submatrices requires \(\Omega(\epsilon^{-2})\) rows/columns.
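Here is a numpy sketch of the sampling approach. This is my simplification: the paper's estimator involves further steps (for instance, in how the submatrix eigenvalues are aligned with those of \(M\)), so the bare \(n/k\) rescaling below is only illustrative, and the errors are on the additive \(\epsilon n\) scale rather than relative.

```python
import numpy as np

def eigenvalue_estimates(M, k, seed=0):
    """Estimate large eigenvalues of a symmetric matrix M with bounded entries
    from a uniformly random k x k principal submatrix, rescaled by n/k."""
    n = M.shape[0]
    rng = np.random.default_rng(seed)
    S = rng.choice(n, size=k, replace=False)          # random subset of rows/columns
    return (n / k) * np.linalg.eigvalsh(M[np.ix_(S, S)])

# Example: rank-1 structure (all-ones) plus +/-1 noise, entries bounded by 1.
# The large eigenvalue (about n/2) is recovered up to additive error ~ eps*n.
rng = np.random.default_rng(1)
n, k = 2000, 200
noise = np.sign(rng.standard_normal((n, n)))
A = 0.5 * np.ones((n, n)) + 0.5 * (np.triu(noise, 1) + np.triu(noise, 1).T)
print(eigenvalue_estimates(A, k)[-1], np.linalg.eigvalsh(A)[-1])
```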
Deterministic Graph Coloring in the Streaming Model by Sepehr Assadi, Andrew Chen, and Glenn Sun (arXiv). This is technically not a sublinear algorithms paper (well, ok, streaming is sublinear, but we tend not to cover the streaming literature. Maybe we should? – Ed.) But I think the connections are of interest to our community of readers. The main tool of the sublinear \((\Delta+1)\)-coloring algorithm of Assadi-Chen-Khanna is the palette sparsification lemma (\(\Delta\) is the maximum degree). This lemma shows that each vertex can randomly shorten its “palette” to \(O(\log n)\) colors, after which, with high probability, the graph can still be properly colored using only these shortened palettes. This is an immensely powerful tool, since it immediately yields sublinear complexity algorithms in many models: adjacency list, streaming, distributed. Is the randomness necessary? Note that these algorithms use \(\Omega(n)\) time/space, so it is conceivable that deterministic sublinear algorithms exist. This paper shows that randomization is necessary in the semi-streaming model (space \(O(n \cdot \mathrm{poly}(\log n))\)). Indeed, there is no deterministic semi-streaming algorithm that can achieve even an \(\exp(\Delta^{o(1)})\)-coloring.
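Here is a toy Python sketch of the sparsification step itself (the constant in the list size is made up; the substance of the lemma, that a proper coloring from these short lists exists with high probability, and the algorithms for actually finding it, are where the real work lies).

```python
import math
import random

def sparsified_palettes(num_vertices, Delta, c=5):
    """Palette sparsification step: each vertex independently samples a list of
    O(log n) colors from {1, ..., Delta + 1}.  The lemma guarantees that, w.h.p.,
    the graph admits a proper coloring using only these lists."""
    list_size = min(Delta + 1, c * (int(math.log(num_vertices)) + 1))
    palette = range(1, Delta + 2)
    return {v: set(random.sample(palette, list_size)) for v in range(num_vertices)}

def conflict_edges(edges, lists):
    """Edges whose endpoints share a sampled color; only these can ever become
    monochromatic, so only these need to be stored."""
    return [(u, v) for u, v in edges if lists[u] & lists[v]]
```

Roughly, each edge survives into the conflict set with probability about \((\log^2 n)/\Delta\), so only about \(O(n\log^2 n)\) edges need to be retained in expectation, which is what makes the lemma so useful for streaming and other sublinear models.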
Adversarially Robust Coloring for Graph Streams by Amit Chakrabarti, Prantar Ghosh, and Manuel Stoeckl (arXiv). This paper studies the same problem as the above, but presents the results in a different way. For a randomized algorithm, we normally think of an adversary that fixes a (hard) input, after which the algorithm makes its random decisions. An adaptive adversary is one that changes the input (stream) based on the decisions of the algorithm. In this definition, a robust algorithm is one that gives correct answers even on adversarially generated input. A deterministic algorithm is automatically robust. This paper shows that no robust semi-streaming algorithm can achieve a \((\Delta+1)\)-coloring. The quantitative lower bound is weaker than in the previous paper (\(\Omega(\Delta^2)\) colors), but it holds in the setting of a stronger, adaptive adversary.