February this year was slightly superlinear, with 29 days instead of the usual 28. As a result… 5 property testing papers! (Including one overlooked from January.)
Testing Calibration in Subquadratic Time, by Lunjia Hu, Kevin Tian, and Chutong Yang (arXiv). The authors consider the question of model calibration, where a binary predictor is said to be calibrated if \(\mathbb{E}[ y\mid v=t ] = t\) for all \(t\), where \(y\) is the observed outcome and \(v\) is the prediction. This notion, central to algorithmic fairness, comes with a host of challenges: one of them being to assess whether a given predictor is indeed calibrated, and to quantify by how much it deviates from calibration. Following work by Błasiok, Gopalan, Hu, and Nakkiran which introduced a notion of distance to calibration, the paper defines the (property testing) task of calibration testing, with connections to distribution testing, and provides subquadratic-time algorithms (in the sample complexity) for the task. The authors also obtain analogous results for tolerant calibration testing, which they also introduce.
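To make the definition concrete, here is a minimal sketch of the naive, binned plug-in check of \(\mathbb{E}[y \mid v = t] = t\): bucket the predictions, and compare each bucket's average outcome with its average prediction. This is just for intuition, and is emphatically not the paper's subquadratic algorithm (whose guarantees are with respect to the distance to calibration); the function name and binning scheme are mine.

```python
import numpy as np

def binned_calibration_error(v, y, n_bins=10):
    """Naive plug-in check of E[y | v = t] = t: bucket the predictions,
    then compare each bucket's average outcome with its average prediction,
    weighting by the bucket's mass. (For intuition only; this is the
    textbook binned calibration error, not the paper's subquadratic
    algorithm, whose guarantees are relative to distance to calibration.)"""
    v, y = np.asarray(v, dtype=float), np.asarray(y, dtype=float)
    bins = np.clip((v * n_bins).astype(int), 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.mean() * abs(y[mask].mean() - v[mask].mean())
    return err

# A calibrated predictor (y ~ Bernoulli(v)) should have error close to 0.
rng = np.random.default_rng(0)
v = rng.uniform(size=10_000)
y = (rng.uniform(size=10_000) < v).astype(float)
print(binned_calibration_error(v, y))      # small (close to 0)
print(binned_calibration_error(1 - v, y))  # large: badly miscalibrated
```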
The role of shared randomness in quantum state certification with unentangled measurements, by Jayadev Acharya and Yuhan Liu (arXiv). In this paper (from January), the authors consider the following question, the quantum analogue of identity testing from the classical distribution testing world: what is the copy complexity (≈sample complexity) of certifying (≈testing) whether an unknown quantum state (≈quantum analogue of a probability distribution) is equal to a known, reference quantum state? And, crucially, what about doing this when our quantum hands are tied, i.e., without using entanglement — but possibly with adaptive measurements? This is not a new question, and we previously covered a couple of papers on this in April 2020 and Feb 2021. What is new here is that the authors show it’s not about adaptivity! Mirroring what happens in the classical (distributed) case, the key here turns out to be shared randomness: that is, whether the measurements are made independently (in which case \(\Theta(d^2)\) copies are necessary and sufficient), or chosen randomly but jointly (in which case the copy complexity is \(\Theta(d^{3/2})\)).
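For a feel of what randomized-but-unentangled measurements can buy, here is a toy numerical sketch in the shared-randomness spirit: each copy of the unknown state \(\rho\) is measured in a freshly drawn random basis, and the outcome is scored against the Born probabilities of the known reference \(\sigma\) in that same basis (the basis being the "shared randomness", known both when measuring and when scoring). The statistic below estimates \(\mathrm{tr}(\rho\sigma)\), which one can compare against \(\mathrm{tr}(\sigma^2)\); it is a cartoon of the measurement model, not the paper's \(\Theta(d^{3/2})\)-copy tester, and all names are mine.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_basis(d):
    """An (approximately) Haar-random orthonormal basis, via QR of a
    complex Gaussian matrix. Column phases don't affect the statistic."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(z)
    return q

def overlap_estimate(rho, sigma, copies=5000):
    """Estimate tr(rho @ sigma) from single-copy measurements of rho in
    random bases, using E_U[sum_j p_rho(j) p_sigma(j)] =
    (1 + tr(rho sigma)) / (d + 1) for Haar-random U and unit-trace states.
    The random basis U is the 'shared randomness': drawn at random, but
    known both when measuring rho and when scoring the outcome against
    sigma's Born probabilities. (A cartoon, not the paper's tester.)"""
    d = rho.shape[0]
    scores = []
    for _ in range(copies):
        U = random_basis(d)
        p_rho = np.real(np.diag(U.conj().T @ rho @ U)).clip(min=0)
        p_sig = np.real(np.diag(U.conj().T @ sigma @ U))
        outcome = rng.choice(d, p=p_rho / p_rho.sum())
        scores.append(p_sig[outcome])
    return (d + 1) * np.mean(scores) - 1

# Certification idea: accept iff the estimate is close to tr(sigma^2).
d = 4
e0, e1 = np.eye(d)[0], np.eye(d)[1]
sigma = np.outer(e0, e0)                          # reference state |0><0|
print(overlap_estimate(sigma, sigma))             # ~1.0 = tr(sigma^2)
print(overlap_estimate(np.outer(e1, e1), sigma))  # ~0.0: far from sigma
```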
Low Acceptance Agreement Tests via Bounded-Degree Symplectic HDXs, by Yotam Dikstein, Irit Dinur, and Alexander Lubotzky (ECCC) and Constant Degree Direct Product Testers with Small Soundness, by Mitali Bafna, Noam Lifshitz, and Dor Minzer (ECCC). [Two independent works]
Let \(X\) be a (small) set of \(k\)-element subsets of \([n]\), and \(\{f_S\colon S\to \Sigma\}_{S\in X}\) a family of partial functions. Is there a way to “stitch together” all the functions \(f_S\) into a global one \(G\colon [n] \to \Sigma\)? A testing algorithm for this is called an agreement test, and the most natural one goes as follows: pick \(S,T\in X\) at random (say, with fixed, small intersection), and accept if, and only if, \(f_{S}\) and \(f_T\) agree on \(S\cap T\). Does this work? In which parameter regime (i.e., how does the acceptance probability \(\varepsilon\) relate to the closeness to a global function \(G\))? How large does \(X\) need to be? Both papers show that the above agreement test works in the small-soundness regime (small \(\varepsilon\)), with \(|X| = O(n)\). Or, as the authors of the first of the two papers put it: “In words, we show that ε agreement implies global structure”.
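As a toy illustration, here is the natural test above in code, under simplifying assumptions of mine: any intersecting pair \(S, T\) is fair game (rather than pairs with a fixed intersection size), and \(X\) and the \(f_S\) are stored as plain Python containers.

```python
import itertools
import random

def agreement_test(X, f, trials=200):
    """The natural agreement test: pick two intersecting subsets S, T in X
    at random, and accept iff f[S] and f[T] agree on S ∩ T. Returns the
    empirical acceptance probability. (Simplified sketch: the actual tests
    fix the intersection size and the distribution over pairs.)"""
    pairs = [(S, T) for S, T in itertools.combinations(X, 2) if S & T]
    accepted = sum(
        all(f[S][x] == f[T][x] for x in S & T)
        for S, T in (random.choice(pairs) for _ in range(trials))
    )
    return accepted / trials

# Sanity check: restrictions of a single global function g always agree.
n, k = 6, 3
g = {x: x % 2 for x in range(n)}
X = [frozenset(S) for S in itertools.combinations(range(n), k)]
f = {S: {x: g[x] for x in S} for S in X}
print(agreement_test(X, f))  # prints 1.0
```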
Efficient learning of quantum states prepared with few fermionic non-Gaussian gates, by Antonio Anna Mele and Yaroslav Herasymenko (arXiv). While most of the paper’s focus is on tomography (learning) of a specific class of quantum states, the authors also provide in Appendix A an algorithm for a property testing question: namely, testing the Gaussian dimension of a quantum state, via tolerant testing of \(t\)-compressible \(n\)-qubit Gaussian states in trace distance (Theorem 48). I do not fully grasp what all this means, to be honest, so I’ll stop here.