News for January 2025

January 2025 was by many measures a very… eventful month; as far as property testing is concerned, not so much, with only two papers (and a third we had previously missed). Uneventful is not a bad thing, sometimes!

Many pentagons in triple systems, by Dhruv Mubayi and Jozsef Solymosi (arXiv). This paper is interested in quantifying the number of copies of \(C_k\) in 3-uniform hypergraphs. In the process, the authors establish a quantitative result very relevant to property testing, at least for those with an interest in testing triangle-freeness in dense graphs, improving on a result of Gishboliner, Shapira and Wigderson: namely, that if an \(n\)-vertex graph is \(\varepsilon\)-far from triangle-freeness, then for every \(\ell \geq 2\) it must contain \(\Omega(\varepsilon^{3\ell} n^{2\ell+1})\) copies of \(C_{2\ell+1}\).
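
To see what such a counting bound buys algorithmically, here is a minimal sketch (in Python; the adjacency oracle `adj(u, v)` and all function names are ours, not the paper's): if a graph \(\varepsilon\)-far from triangle-freeness contains \(\Omega(\varepsilon^{3\ell} n^{2\ell+1})\) copies of \(C_{2\ell+1}\), then for constant \(\ell\) a uniformly random set of \(2\ell+1\) vertices spans such a copy with probability \(\Omega(\varepsilon^{3\ell})\), so a number of samples polynomial in \(1/\varepsilon\) finds one with high probability.

```python
import itertools
import random

def spans_odd_cycle(adj, verts):
    """Does `verts` (a list of 2l+1 vertices) span a copy of C_{2l+1}?
    For constant l, brute-forcing all cyclic orderings is constant time."""
    k = len(verts)
    first = verts[0]
    for rest in itertools.permutations(verts[1:]):
        order = (first,) + rest
        if all(adj(order[i], order[(i + 1) % k]) for i in range(k)):
            return True
    return False

def find_long_odd_cycle(adj, n, ell, eps, c=10.0):
    """If the n-vertex graph is eps-far from triangle-freeness, the
    Omega(eps^{3*ell} * n^{2*ell+1}) bound on the number of copies of
    C_{2*ell+1} means each random (2*ell+1)-subset spans a copy with
    probability Omega(eps^{3*ell}); repeat accordingly."""
    k = 2 * ell + 1
    for _ in range(int(c * eps ** (-3 * ell)) + 1):
        verts = random.sample(range(n), k)
        if spans_odd_cycle(adj, verts):
            return verts  # witness: vertices spanning a copy of C_{2*ell+1}
    return None
```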

Testing Noise Assumptions of Learning Algorithms, by Surbhi Goel, Adam R. Klivans, Konstantinos Stavropoulos, and Arsen Vasilyan (arXiv). Testable learning has seen a surge of interest since its (recent) introduction by Rubinfeld and Vasilyan (2023). In this framework, a learning algorithm which works under some data distribution assumption (e.g., the data is from a spherical cow Gaussian) is not “off the hook” when that assumption isn’t met, as is the case in classical learning. Instead, the algorithm must put its money where its algorithmic mouth is: if the data does indeed satisfy the assumption, then it must output a hypothesis meeting the learning guarantee; if the data does not satisfy the assumption, it is allowed to abort and output an error flag; but whenever it does output a hypothesis, that hypothesis must satisfy the learning guarantee, regardless of whether the distributional assumption is met. In this sense, the algorithm must act like a property tester for the distributional assumption made on the data.
This paper extends the testable learning framework from assumptions on the data distribution to assumptions on the (noisy) data generation model: the assumption to be tested (and used) is no longer only on the marginal distribution of the data (regardless of the associated labels), but on the joint distribution of the (data, label) pairs, including the way the labels may be corrupted. In particular, as an application, the authors focus on learning high-dimensional origin-centered halfspaces, under the assumption that the data comes from a Gaussian distribution, with labels perturbed by Massart noise.
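
In code, the contract looks roughly like the following sketch (Python, with hypothetical names standing in for the paper's actual tester and learner; this is the shape of the framework, not the authors' algorithm):

```python
def tester_learner(samples, assumption_test, learn):
    """Shape of a tester-learner. The contract has two clauses:
    (1) completeness: if the samples really do follow the modeled
        (data, label) generation process, reject only with small probability;
    (2) soundness: *whenever* a hypothesis is returned, it must satisfy
        the learning guarantee, whether or not the assumption holds.
    """
    if not assumption_test(samples):
        return None  # error flag: assumption looks violated, abort
    # Test passed: relying on the assumption is now safe, and any
    # hypothesis returned here must meet the learning guarantee.
    return learn(samples)
```

The new twist in this paper is that `assumption_test` gets to examine the labels too (here, consistency with Gaussian marginals plus Massart noise), not just the marginal distribution of the data.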

Learning multivariate Gaussians with imperfect advice, by Arnab Bhattacharyya, Davin Choo, Philips George John, and Themis Gouleakis (arXiv). Suppose you want to learn the mean (or, if the covariance isn’t known, even better, the mean and the covariance) of a high-dimensional Gaussian from i.i.d. samples. You’re in luck: we know how to do it, and the algorithm is very simple! You’re not in luck, though: you’ll need a lot of these i.i.d. samples to achieve non-trivial accuracy, a number either linear or quadratic in the dimension, depending on whether you’re learning only the mean vector or the whole thing.
But say you’re in “luck”: a “good” (but not very trustworthy) friend comes to your aid, claiming they already know the mean and covariance, and tells you what they (claim they) are. Can you use this possibly unreliable advice to learn the Gaussian better? This is the setting of learning with advice, and, in this paper, the authors show that yes, when learning Gaussians, you can! And, what’s even better (for this blog), the algorithm they design uses as a core subroutine a tolerant tester, which allows them to carefully check the quality of the “advice”.
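
As a cartoon of that advice-checking pattern (emphatically not the paper's algorithm or its guarantees; the names and the crude combining rule below are ours), one can hold out a small batch of samples, tolerantly test whether the advised mean is close to the empirical one, and only then let the advice influence the estimate:

```python
import numpy as np

def mean_with_advice(samples, advised_mean, threshold):
    """Toy advice-gated mean estimation. `samples` is an (m, d) array of
    i.i.d. draws; `threshold` is the tolerance of the test. Tolerance is
    essential here: advice that is merely close to the truth should
    still pass, and still help."""
    holdout = samples[: max(1, len(samples) // 10)]
    # Tolerant check: is the advice consistent with the held-out batch?
    if np.linalg.norm(holdout.mean(axis=0) - advised_mean) <= threshold:
        # Advice passed: shrink the empirical mean toward it
        # (one naive way of exploiting good advice).
        return 0.5 * (samples.mean(axis=0) + advised_mean)
    return samples.mean(axis=0)  # advice rejected: ignore it entirely
```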