Tight Bounds on the Round Complexity of the Distributed Maximum Coverage Problem

Authors: Sepehr Assadi, Sanjeev Khanna.
Conference: The 29th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'18).
Abstract: We study the maximum k-set coverage problem in the following distributed setting. A collection of input sets S1, ..., Sm over a universe [n] is partitioned across p machines, and the goal is to find k sets whose union covers the maximum number of elements. The computation proceeds in rounds where in each round machines communicate information to each other. Specifically, in each round, all machines simultaneously send a message to a central coordinator, who then communicates back to all machines a summary to guide the computation for the next round. At the end of the last round, the coordinator outputs the answer. The main measures of efficiency in this setting are the approximation ratio of the returned solution, the communication cost of each machine, and the number of rounds of computation.
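For intuition on the (centralized) problem the paper distributes, here is a minimal sketch of the classic greedy algorithm for maximum k-set coverage, which repeatedly picks the set with the largest marginal gain and achieves an (e/(e-1))-approximation. This is only illustrative background, not the paper's distributed protocol; the function name `greedy_max_coverage` is our own.

```python
def greedy_max_coverage(sets, k):
    """Greedy (e/(e-1))-approximation for maximum k-set coverage.

    sets: list of Python sets over some universe.
    k: number of sets to pick.
    Returns (chosen_indices, covered_elements).
    """
    covered = set()
    chosen = []
    for _ in range(k):
        # Marginal gain of each set: how many new elements it covers.
        gains = [len(s - covered) for s in sets]
        i = max(range(len(sets)), key=lambda j: gains[j])
        if gains[i] == 0:  # nothing new can be covered
            break
        chosen.append(i)
        covered |= sets[i]
    return chosen, covered


# Example: with k = 2, greedy covers the whole 6-element universe here.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}]
chosen, covered = greedy_max_coverage(sets, 2)
```

The paper's question is how well this kind of guarantee can be preserved when the sets are split across p machines and communication per round is limited.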

Our main result in this paper is an asymptotically tight bound on the tradeoff between these three measures for the distributed maximum coverage problem. We first show that any r-round protocol for this problem either incurs a communication cost of k · m^{Ω(1/r)} or only achieves an approximation factor of k^{Ω(1/r)}. This in particular implies that any protocol that simultaneously achieves a good approximation ratio (O(1) approximation) and good communication cost (Õ(n) communication per machine) essentially requires a logarithmic (in k) number of rounds. We complement our lower bound result by showing that there exists an r-round protocol that achieves an (e/(e-1))-approximation (essentially best possible) with a communication cost of k·m^{O(1/r)}, as well as an r-round protocol that achieves a k^{O(1/r)}-approximation with only Õ(n) communication per machine (essentially best possible).

We further use our results in this distributed setting to obtain new bounds for the maximum coverage problem in two other main models of computation for massive datasets, namely, the dynamic streaming model and the MapReduce model.