# News for November 2023

November does not seem to have been a busy month for sublinear-time algorithms! As usual, please let us know if we missed any paper.

Nearly Optimal Bounds for Sample-Based Testing and Learning of k-Monotone Functions, by Hadley Black (arXiv). (This paper is from Oct, which we unfortunately missed. -Ed) Monotone functions are a common object of study in property testing. One of the big promises of property testing is that it is potentially easier than learning. Consider a monotone function $$f:\{0,1\}^d \to \{0,1\}$$, and the problem of sample-based monotonicity testing, where the tester only has access to uniform random samples. The classic Bshouty-Tamon Fourier spectrum result proves that $$\exp(O(\sqrt{d}/\varepsilon))$$ samples suffice to learn $$f$$ up to error $$\varepsilon$$. But could testing be significantly easier? The only non-trivial lower bound for sample-based testers was $$\Omega(\sqrt{2^d/\varepsilon})$$, which holds for $$\varepsilon \ll d^{-3/2}$$. Until this paper. One of the main results is a lower bound of $$\exp(\Omega(\sqrt{d}/\varepsilon))$$, which matches the learning upper bound up to polynomial factors. Thus, with random samples, the learning and testing complexities are within polynomials of each other. Interestingly, sample-based testing with one-sided error requires $$2^{\Omega(d)}$$ samples, and is therefore harder than two-sided testing. That runs counter to most monotonicity testing results, wherein the one-sided and two-sided tester complexities are (roughly) the same. The paper also gives analogous results for the property of $$k$$-monotonicity, a generalization of monotonicity.
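To fix notation, here is what "monotone" means for such a function: flipping any coordinate from 0 to 1 can never decrease the function's value. The following brute-force check (our own illustrative helper, not the paper's sample-based tester) makes this concrete by examining every edge of the hypercube:

```python
from itertools import product

def is_monotone(f, d):
    """Brute-force check that f: {0,1}^d -> {0,1} is monotone, by
    verifying f does not decrease along any edge of the hypercube
    (i.e., when a single coordinate is flipped from 0 to 1)."""
    for x in product((0, 1), repeat=d):
        for i in range(d):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                if f(x) > f(y):  # flipping a 0 to 1 decreased f: violation
                    return False
    return True

# Majority on 3 bits is monotone; XOR on 2 bits is not.
maj = lambda x: int(sum(x) >= 2)
xor = lambda x: x[0] ^ x[1]
print(is_monotone(maj, 3))  # True
print(is_monotone(xor, 2))  # False
```

Of course, this exhaustive check reads all $$2^d$$ points; the whole point of the testing question above is what can be decided from far fewer uniform random samples.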

Testing Intersecting and Union-Closed Families, by Xi Chen, Anindya De, Yuhao Li, Shivam Nadimpalli, Rocco A. Servedio (arXiv). A function $$f:2^{[n]} \to \{0,1\}$$ is monotone if $$f^{-1}(1)$$ is a monotone set system. Testing monotonicity of such functions is a cornerstone of research in property testing, and this line of work has yielded novel insights on the isoperimetry of the Boolean hypercube. Khot, Minzer and Safra (FOCS 2015; SICOMP 2018) gave a $$\tilde{O}(\sqrt{n}/\epsilon^2)$$-query nonadaptive $$\epsilon$$-tester for monotonicity, which is complemented by a $$\tilde{\Omega}(n^{1/3})$$ lower bound by Chen, Waingarten and Xie (STOC 2017). The current paper studies testing two other properties of the set system $$f^{-1}(1)$$: being intersecting (every two sets in the family share an element) and being union-closed (the union of any two sets in the family is also in the family). They show that, in sharp contrast to monotonicity, neither of these properties admits a nonadaptive $$\epsilon$$-tester with $$\text{poly}(n,1/\epsilon)$$ query complexity. Complementing these strong lower bounds, they also give nonadaptive one-sided error $$\epsilon$$-testers with query complexity $$\text{poly}(n^{\sqrt{n \log (1/\epsilon)}},1/\epsilon)$$ for both properties.
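The two properties are easy to state exactly; here is a minimal sketch of exhaustive checkers for small families (the function names are ours, and unlike the paper's testers these read the whole family rather than a few queries):

```python
from itertools import combinations

def is_intersecting(family):
    """True iff every pair of sets in the family shares an element."""
    return all(a & b for a, b in combinations(family, 2))

def is_union_closed(family):
    """True iff the union of any two sets in the family is again in it."""
    fam = set(family)
    return all((a | b) in fam for a, b in combinations(family, 2))

# A family over [n], with sets encoded as frozensets.
F = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 2, 3})]
print(is_intersecting(F))  # True: every pair of sets overlaps
print(is_union_closed(F))  # True: {1,2} | {2,3} = {1,2,3} is present
```

The hardness results above say that distinguishing families with these properties from families far from them is qualitatively more expensive, query-wise, than the analogous task for monotonicity.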

Next, we make a leap to the world of quantum sublinear-time algorithms.

(Quantum) complexity of testing signed graph clusterability, by Kuo-Chin Chen, Simon Apers, and Min-Hsiu Hsieh (arXiv). A signed graph is one in which each edge is labeled either positive or negative. Such a graph is clusterable if its vertex set can be partitioned so that every edge with both endpoints in the same part is positive and every edge whose endpoints lie in different parts is negative. This paper proves two main results on property testing clusterability of signed graphs in the bounded-degree model. First, every classical tester for the problem requires $$\Omega(\sqrt{n})$$ queries. Second, there is a quantum algorithm for the same problem with query complexity $$\tilde{O}(n^{1/3})$$, thereby showing that quantum algorithms are strictly more powerful than their classical counterparts for clusterability testing.
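With the definition above, exact clusterability has a simple characterization: since positive edges must stay within a part, each part is a union of connected components of the positive subgraph, so the graph is clusterable exactly when no negative edge joins two vertices in the same positive component (the positive components then form a valid clustering). A minimal union-find sketch of this check (an illustration of the property, not of the paper's sublinear tester; the helper name is ours):

```python
def is_clusterable(n, pos_edges, neg_edges):
    """Check clusterability of a signed graph on vertices 0..n-1:
    clusterable iff no negative edge lies inside a connected
    component of the positive subgraph."""
    parent = list(range(n))

    def find(v):  # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, v in pos_edges:          # merge positive components
        parent[find(u)] = find(v)
    return all(find(u) != find(v) for u, v in neg_edges)

# Triangle with two positive edges and one negative edge: the negative
# edge sits inside a single positive component, so not clusterable.
print(is_clusterable(3, [(0, 1), (1, 2)], [(0, 2)]))  # False
print(is_clusterable(4, [(0, 1), (2, 3)], [(1, 2)]))  # True
```

This exact check reads the whole graph; the testing question is how few (classical or quantum) queries suffice to distinguish clusterable graphs from those far from clusterable.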

Quantum speedups for linear programming via interior point methods, by Simon Apers and Sander Gribling (arXiv). This paper gives a quantum algorithm that, given a linear program with $$d$$ variables and $$n$$ constraints, outputs a feasible solution whose value is at most $$\epsilon$$ more than the optimum, and runs in time $$\tilde{O}(\sqrt{n} \cdot \text{poly}\log(1/\epsilon))$$ for constant $$d$$, i.e., in time sublinear in the “height” of the program. Other quantum algorithms for solving linear programs in this approximate sense have running times that either depend on $$\text{poly}(1/\epsilon)$$ or on the condition number of the linear systems being solved.