Last month was very calm. Eerily calm, in fact: no property testing or related sublinear-time algorithms in sight!* Gird your loins for the April batch, I reckon.
\({}^\ast\) (That we saw, at least. If we missed any, please mention them in the comments below!)
Update: As mentioned in the comments, we did indeed miss two (related) works on distribution estimation from a competitive viewpoint. Namely, for a large class of properties (entropy, distance to uniformity, support size…), Hao, Orlitsky, Suresh, and Wu provide in the first paper (arXiv 1904.00070) an “instance-optimal(ish)” estimator which does “as well” with \(m/\sqrt{\log m}\) samples as the natural and naive empirical estimator would with \(m\) (thus, in some sense, amplifying the data size). In the second paper (arXiv 1903.01432), Hao and Orlitsky improve on this and remove the “ish”, obtaining the optimal amplification \(m/\log m\).
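To make the baseline concrete: the “empirical” estimator referenced above is the plug-in approach, which substitutes the empirical distribution of the samples into the property of interest. Below is a minimal sketch for the case of entropy (in Python, with hypothetical parameters); it only illustrates the baseline that the amplification guarantees are measured against, not the papers’ own estimators.

```python
import numpy as np

def empirical_entropy(samples):
    # Plug-in ("empirical") estimator: form the empirical
    # distribution p_hat of the samples, then return
    # H(p_hat) = -sum_x p_hat(x) * log p_hat(x).
    _, counts = np.unique(samples, return_counts=True)
    p_hat = counts / counts.sum()
    return -np.sum(p_hat * np.log(p_hat))

# Hypothetical illustration: m samples from a uniform distribution
# over k symbols. When m is comparable to k, the plug-in estimate
# noticeably underestimates the true entropy log(k) -- the regime
# where the improved estimators above gain their advantage.
rng = np.random.default_rng(0)
k, m = 1000, 2000
samples = rng.integers(0, k, size=m)
print(empirical_entropy(samples), np.log(k))
```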
This appeared last week in stat.ML: https://arxiv.org/pdf/1904.00070.pdf. The authors propose a new approach to estimating any additive Lipschitz property of a distribution. Their main result relates the performance of the optimal estimator to that of the empirical estimator. They evaluate their estimator on entropy, support size, and distance to uniformity, among other properties.
The authors of the above-mentioned paper have substantially improved their results; see https://arxiv.org/pdf/1903.01432.pdf.
Thanks for the links!