We’ve got four papers this month: a mix of distribution testing, matrix problems, and a paper on the power of sampling.
On the Complexity of Estimating the Effective Support Size, by Oded Goldreich (ECCC). In distribution testing, a classic problem is that of approximating the support size. By a (now) classic result of Valiant-Valiant, the complexity of this problem is \(\Theta(n/\log n)\). This paper raises the question of approximating the “effective” support size, using more than just samples from the distribution. The \(\epsilon\)-support of a discrete distribution \(\mathcal{D}\) is the smallest support among all distributions \(\mathcal{D}'\) such that \(\|\mathcal{D} - \mathcal{D}'\|_1 \leq \epsilon\) (or TV distance). Denote this as \(supp_\epsilon(\mathcal{D})\). One can also consider a bicriteria version: given an approximation parameter \(f\) and thresholds \(\epsilon_1 < \epsilon_2\), we need to provide a number in the range \([supp_{\epsilon_2}(\mathcal{D}), f\cdot supp_{\epsilon_1}(\mathcal{D})]\). The primary model studied allows for random samples and evaluation queries (where one gets the probability of any known element of the domain). In this model, for arbitrary \(\epsilon_1, \epsilon_2\), there is a continuum of algorithms trading off query complexity against approximation. At one end, for \(f = O(\log n)\), the query complexity is \(\widetilde{O}(1/\epsilon_1)\). At the other end, for \(f=1\), the query complexity is \(O(\log^* n)\). (Here, \(n = supp_{\epsilon_1}(\mathcal{D})\).) There are lower bounds showing the necessity of evaluation queries for subpolynomial query complexities.
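To pin down the definition, here is a minimal sketch (the function name is mine, purely illustrative) for the idealized setting where the whole probability vector is known: dropping the lightest tail of total mass at most \(\epsilon\) is the optimal way to shrink the support. The paper’s algorithms, of course, only get samples and evaluation queries.

```python
import numpy as np

def effective_support_size(p, eps):
    """Smallest support of a distribution within TV distance eps of p.

    Dropping the lightest tail of total mass <= eps (and renormalizing) is
    optimal, so keep the heaviest elements until the discarded mass fits in eps.
    """
    q = np.sort(np.asarray(p, dtype=float))[::-1]   # probabilities, heaviest first
    tail = 1.0 - np.cumsum(q)                        # mass left after keeping the first k
    return int(np.argmax(tail <= eps) + 1)           # smallest k with tail mass <= eps

# Example: a heavy head plus a long, light tail.
p = np.array([0.3, 0.3, 0.2] + [0.2 / 1000] * 1000)
print(effective_support_size(p, 0.25))   # 3: the light tail can be discarded
```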
Communication and Memory Efficient Testing of Discrete Distributions, by Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, and Sankeerth Rao (arXiv). This paper adds computational constraints to the distribution testing problem. In the streaming setting, the algorithm is only allowed a single pass over the random samples, and has \(m\) bits of storage, much smaller than \(n\), the domain size. The aim is to minimize the number of samples required for uniformity (and closeness) testing. For uniformity testing in the streaming setting, the paper gives an algorithm that uses \(\widetilde{O}(m + n/m)\) samples. The standard collision algorithm requires \(\Theta(\sqrt{n})\) storage (to store the samples), while this result gives a non-trivial bound for \(m \ll \sqrt{n}\). There are lower bounds showing that these bounds are basically tight. In the distributed distribution testing problem, there are a number of processors, each holding \(\ell\) samples. A referee asks a question to the processors, whose answers are broadcast to everyone. The aim is to minimize communication cost. For uniformity testing in this setting, there is a protocol using \(O(\sqrt{n\log n/\ell})\) bits of communication. As a sanity check, note that for \(\ell = \sqrt{n}\), one processor could simply run the collision algorithm locally and report the result.
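For reference, here is a hedged sketch of the classical collision-based uniformity tester that the storage comparison refers to. This is not the paper’s streaming algorithm, and the threshold and sample counts below are only illustrative.

```python
import numpy as np
from itertools import combinations

def collision_uniformity_test(samples, n, eps):
    """Classical collision tester (illustrative constants).

    The fraction of colliding pairs estimates ||D||_2^2, which equals 1/n for the
    uniform distribution and is noticeably larger when D is eps-far from uniform.
    """
    s = len(samples)
    collisions = sum(1 for a, b in combinations(samples, 2) if a == b)
    rate = collisions / (s * (s - 1) / 2)
    return "uniform" if rate <= (1 + 2 * eps**2) / n else "far from uniform"

# Usage: roughly sqrt(n)/eps^2 samples make the estimate concentrate.
n = 10_000
samples = np.random.default_rng(0).integers(0, n, size=2_000)  # truly uniform samples
print(collision_uniformity_test(samples, n, eps=0.25))
```

Note that the tester must hold all \(\Theta(\sqrt{n})\) samples at once to count pairwise collisions, which is exactly the storage bottleneck the streaming result gets around.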
Querying a Matrix through Matrix-Vector Products by Xiaoming Sun, David P. Woodruff, Guang Yang, and Jialin Zhang (arXiv). Consider an \(n \times d\) matrix \(M\) over some field \(\mathbb{F}\). One gets access to this matrix through matrix-vector products, and wishes to test some matrix property. This is a natural model in many settings, and generalizes the classic setting of query access. A subtle point is that one can only right multiply with “query” vectors, and there are problems where allowing left multiplication changes the complexity. (A nice example in the paper is testing whether a square matrix is symmetric. With both left and right multiplications, this is easy, since we can directly access rows and columns. With access only to combinations of columns, it is non-trivial.) This paper studies a number of problems, broadly classified as linear algebra problems, statistics problems, and graph problems. Some highlights: testing symmetry can be done in \(O(1)\) queries, and the maximum eigenvalue can be approximated in \(O(\log n)\) adaptive queries (but there is an \(\Omega(n)\) non-adaptive lower bound). For graph problems, here’s an interesting discovery. If \(M\) is the adjacency matrix, connectivity requires \(\Omega(n/\log n)\) queries. But if \(M\) is the signed edge-vertex incidence matrix, then connectivity can be decided in \(\mathrm{poly}(\log n)\) queries. The paper provides a number of interesting directions and problems to study.
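To make the model concrete, here is one natural constant-query symmetry check using only right multiplications, over the reals with random query vectors. It is a sketch in the spirit of the \(O(1)\)-query result, not necessarily the paper’s exact algorithm, and the names and tolerance are mine.

```python
import numpy as np

def looks_symmetric(matvec, n, trials=5, tol=1e-8, rng=None):
    """Randomized symmetry check using only right matrix-vector products.

    For random x, y we compare y^T(Mx) with x^T(My); these agree for all x, y
    exactly when M = M^T, and a random pair exposes asymmetry with high probability.
    """
    rng = rng or np.random.default_rng()
    for _ in range(trials):
        x, y = rng.standard_normal(n), rng.standard_normal(n)
        if abs(y @ matvec(x) - x @ matvec(y)) > tol:   # two matvec queries per trial
            return False
    return True

# Usage, with an explicit matrix standing in for the matrix-vector oracle.
M = np.array([[1.0, 2.0], [3.0, 4.0]])              # not symmetric
print(looks_symmetric(lambda v: M @ v, n=2))          # False (w.h.p.)
```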
The Adversarial Robustness of Sampling by Omri Ben-Eliezer and Eylon Yogev (arXiv). This isn’t a property testing paper, but how can one ignore a paper on understanding the power of sampling? It’s convenient to think of the following setting. An algorithm gets a stream of \(n\) numbers, and has to answer some questions about the stream (say, the median or other quantiles). The simplest strategy is to take a small random sample of the stream. But what if an adversary were generating the stream adaptively, depending on the current sample? Under what circumstances is the sample still “representative” of the stream? The specific results of this paper require getting into set systems and VC dimension, which I’ll leave out for the sake of simplicity. Let’s go back to the median question. To get an approximate median, a constant number of samples suffices. A consequence of the main result is that a sample of \(O(\log n)\) elements remains robust even against adversarial streams. On the other hand, lower bounds show that constant-sized samples can be fooled by an adversary. The paper is a really interesting read, and a nice take on “standard facts” that we take for granted.
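For concreteness, here is a minimal sketch of the non-adversarial baseline alluded to above: the median of a small uniform sample is already an approximate median of the stream. The question the paper answers is when such an estimate survives an adaptive adversary; the function below (names mine) only illustrates the static guarantee.

```python
import numpy as np

def sample_median(stream, k, rng=None):
    """Estimate the stream's median from a uniform sample of k elements.

    In the static (non-adversarial) setting, k = O(1/eps^2) samples give an element
    whose rank is within eps*n of the true median with constant probability.
    """
    rng = rng or np.random.default_rng()
    return np.median(rng.choice(stream, size=k, replace=False))

# Usage: a stream of a million distinct numbers, a sample of a few hundred.
stream = np.random.default_rng(1).permutation(1_000_000)
est = sample_median(stream, k=400)
print(abs(est - np.median(stream)) / len(stream))   # rank error, typically a few percent
```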