A comparatively slow month, as summer draws to a close: we found three papers online. Please let us know if we missed any! (Edit: And we added two papers missed from June.)
Testing convexity of functions over finite domains, by Aleksandrs Belovs, Eric Blais, and Abhinav Bommireddi (arXiv). This paper studies the classic problem of convexity testing, and proves a number of interesting results on the adaptive and non-adaptive complexity of this problem in single- and multi-dimensional settings. In the single-dimensional setting, on domain \([n]\), they show that adaptivity doesn’t help: the complexity is \(O(\log n)\) in both cases. However, in the simplest two-dimensional setting, a domain of \([3] \times [n]\), they give a polylogarithmic upper bound in the adaptive setting but a polynomial lower bound in the non-adaptive setting, establishing a strong separation. Finally, they provide a lower bound for \([n]^d\) which scales exponentially in the dimension. This leaves a tantalizing open question: is it possible to avoid the curse of dimensionality when testing convexity?
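For intuition, a function \(f : [n] \to \mathbb{R}\) is convex exactly when all of its second differences are non-negative. Below is a minimal Python sketch of this local condition together with a naive spot-checker; the function names and the sampling strategy are our own illustration, not the paper’s tester, which must be considerably cleverer to achieve the \(O(\log n)\) bound.

```python
import random

def has_local_violation(f, i):
    # Convexity of f on {0, ..., n-1} at interior point i requires
    # f(i-1) + f(i+1) >= 2*f(i), i.e., a non-negative second difference.
    return f(i - 1) + f(i + 1) < 2 * f(i)

def naive_convexity_check(f, n, trials=100):
    """Spot-check convexity by sampling random interior points.
    Illustration only: a function can be far from convex while having
    few local violations, so this does not match the paper's guarantees."""
    for _ in range(trials):
        i = random.randint(1, n - 2)
        if has_local_violation(f, i):
            return False  # found a certificate of non-convexity
    return True  # no violation observed (does not prove convexity)
```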
Testing Isomorphism in the Bounded-Degree Graph Model, by Oded Goldreich (ECCC). This work investigates the problem of testing isomorphism of graphs, focusing on the special case where the connected components are only polylogarithmically large (the general bounded-degree case is left open). In one natural setting, a reference graph is given explicitly as input, we have query access to a second graph, and we must test whether the two are isomorphic. This can be shown to be equivalent (up to polylogarithmic factors) to testing (from queries) whether a sequence is a permutation of a reference sequence. In turn, this can be shown to be equivalent to the classic distribution testing question of testing (from samples) whether a distribution is equal to some reference distribution. The same sequence of equivalences almost works for the case where there is no reference graph/sequence/distribution, and we only have query/query/sample access to the objects. The one exception is the reduction from testing distributions to testing whether a sequence is a permutation, which breaks down due to the difference between sampling with and without replacement. However, the author still proves the lower bound that such a reduction would have implied, by adapting Valiant’s proof for the corresponding distribution testing problem.
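To see the flavor of the second equivalence: query access to a length-\(n\) sequence immediately simulates sample access to its empirical distribution, and a sequence is close to a permutation of the reference exactly when its empirical distribution is close to the reference distribution. A hedged Python sketch, where query_seq is a hypothetical query oracle of our own naming:

```python
import random

def sample_via_query(query_seq, n):
    """Simulate one sample from the empirical distribution of a length-n
    sequence, at the cost of one query: draw a uniform index and query it.
    'query_seq' is an assumed oracle interface, not an API from the paper."""
    return query_seq(random.randrange(n))
```

Running an identity tester on these simulated samples then tests whether the sequence is a permutation of the reference; it is the reverse direction, turning samples back into a queryable sequence, where the with-/without-replacement issue arises.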
Learning Very Large Graphs with Unknown Vertex Distributions, by Gábor Elek (arXiv). In this note, the author studies a variant of distribution-free property testing on graphs, in which (roughly) neighboring vertices have probabilities of bounded ratio, and a query reveals this ratio. Applications to local graph algorithms and connections to dynamical systems are also discussed.
EDIT: We apparently missed two papers from June; the first was accepted to NeurIPS 2019, the second to COLT 2019.
The Broad Optimality of Profile Maximum Likelihood, by Yi Hao and Alon Orlitsky (arXiv). Recently, Acharya, Das, Orlitsky, and Suresh (ICML 2017) showed that the Profile Maximum Likelihood (PML) estimator enables a unified framework for estimating a number of distribution properties, including support size, support coverage, entropy, and distance to uniformity, obtaining estimates which are competitive with the best possible. The approach is rather clean: simply compute the PML of the distribution (i.e., the distribution maximizing the likelihood of the data when the labels are discarded and only the multiplicities of elements are kept), and apply the plug-in estimator (i.e., if you want to estimate entropy, compute the entropy of the resulting PML distribution). The present work shows that PML is even more broadly applicable: this approach works for any property which is additive, symmetric, and appropriately Lipschitz. They also show specific results for many other properties which have been considered in the past, including Rényi entropy, distribution estimation, and identity testing.
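To make the pipeline concrete, here is a small Python sketch of the two straightforward steps: computing the profile of a sample, and applying a plug-in estimator to a candidate distribution. Finding the PML distribution itself is the hard optimization step, which this sketch simply assumes some external (approximate) solver provides.

```python
from collections import Counter
from math import log

def profile(sample):
    """The profile: discard labels, keep only how many distinct symbols
    appeared once, twice, three times, and so on."""
    multiplicities = Counter(sample).values()
    return sorted(Counter(multiplicities).items())

def plugin_entropy(dist):
    """Plug-in step: given a candidate distribution (e.g., an approximate
    PML from an assumed external solver), report its Shannon entropy."""
    return -sum(p * log(p) for p in dist if p > 0)

# profile(['a', 'b', 'b', 'c', 'a', 'd']) == [(1, 2), (2, 2)]
# i.e., two symbols appearing once and two symbols appearing twice.
```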
Sample-Optimal Low-Rank Approximation of Distance Matrices, by Piotr Indyk, Ali Vakilian, Tal Wagner, and David Woodruff (arXiv). Computing a rank-\(k\) approximation of an \(n \times m\) matrix \(M\) is about as classic a problem as it gets. Suppose we wanted a running time of \(O(n+m)\), which is sublinear in the size of the matrix. In general, this is not feasible, since a single large entry could dominate the matrix norm. This paper studies the case where \(M\) is a distance matrix: there is an underlying point set in a metric space, and the \((i,j)\)th entry of \(M\) is the distance between the \(i\)th and \(j\)th points. Previous work showed the existence of \(O((n+m)^{1+\gamma})\)-time algorithms (for an arbitrarily small constant \(\gamma > 0\), with polynomial dependence on \(k\) and the error parameters). This work gives an algorithm that runs in \(\widetilde{O}(n+m)\) time. The main idea is to sample rows and columns according to their norms.
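As a rough illustration of norm-based sampling (the generic template, with our own function name and parameters, not the paper’s algorithm), the column-sampling step can be sketched in Python as follows; part of the paper’s actual contribution is estimating these norms in sublinear time by exploiting the metric structure.

```python
import numpy as np

def sample_columns_by_norm(M, s, seed=0):
    """Sample s columns of M with probability proportional to their squared
    norms, rescaled so that C @ C.T is an unbiased estimator of M @ M.T."""
    rng = np.random.default_rng(seed)
    norms_sq = (M ** 2).sum(axis=0)       # squared column norms
    probs = norms_sq / norms_sq.sum()     # sampling distribution
    idx = rng.choice(M.shape[1], size=s, p=probs)
    return M[:, idx] / np.sqrt(s * probs[idx])  # rescaled sketch C

# The top-k left singular vectors of the n x s sketch C then yield a rank-k
# approximation of M. This toy version reads all of M to compute exact norms;
# the sublinear-time algorithm instead estimates them from few entries.
```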