A theorist’s dream is to show that the hard instances, or obstructions, for an (optimal) algorithm can be reused as gadgets in tight hardness reductions, which in turn proves the optimality of the algorithm. An example of such a result is due to Prasad Raghavendra, who showed that for every constraint satisfaction problem (CSP), there is an SDP-based algorithm achieving the best possible approximation factor assuming the Unique Games Conjecture (UGC). We show that a similar phenomenon occurs in CSPs with global modular constraints.^{1}

A global modular constraint is a linear equation (over all the variables) modulo *M*, for some fixed constant *M*. Take any polynomial-time solvable Boolean CSP, such as 2SAT, LIN$_2$, or HORNSAT. For concreteness, consider LIN$_2$: a system of linear equations modulo 2. By Gaussian elimination over $\mathbb F_2$, we can decide in polynomial time whether it is satisfiable. Now suppose we are given an additional linear equation modulo *M* (for some fixed constant *M* not equal to 2). Can we decide in polynomial time whether the new system is satisfiable? Surprisingly, we show that the answer depends on the prime factorization of *M*: for example, the problem remains polynomial-time solvable for *M* = 3, but not for *M* = 15. We show that for such problems, the obstructions to a natural algorithm and the gadgets useful for hardness reductions (assuming ETH) are closely connected to the complexity (such as degree or sparsity) of polynomial representations of OR mod *M*. Thus, good lower bounds on the complexity of such polynomials imply good algorithms, and constructions of low-complexity representations imply good hardness results. We also show some connections to submodular minimization with global modular constraints.
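As an illustrative sketch (not code from the paper), the LIN$_2$ step above can be made concrete: Gaussian elimination over $\mathbb F_2$ decides satisfiability, with the system unsatisfiable exactly when some row reduces to $0 = 1$. The function name and input format here are hypothetical, chosen for the example.

```python
# Sketch: deciding satisfiability of a LIN_2 instance (a linear system over F_2)
# via Gaussian elimination. Illustrative only; not the paper's algorithm.

def lin2_satisfiable(system):
    """system: list of (coeffs, rhs), where coeffs is a 0/1 list and rhs is 0 or 1."""
    rows = [(list(c), r) for c, r in system]  # work on a copy
    n = len(rows[0][0]) if rows else 0
    pivot_row = 0
    for col in range(n):
        # find a row with a 1 in this column, at or below pivot_row
        for i in range(pivot_row, len(rows)):
            if rows[i][0][col] == 1:
                rows[pivot_row], rows[i] = rows[i], rows[pivot_row]
                break
        else:
            continue  # no pivot in this column
        pc, pr = rows[pivot_row]
        # eliminate this column from every other row (XOR = addition mod 2)
        for i in range(len(rows)):
            if i != pivot_row and rows[i][0][col] == 1:
                c, r = rows[i]
                rows[i] = ([a ^ b for a, b in zip(c, pc)], r ^ pr)
        pivot_row += 1
    # unsatisfiable iff some row has reduced to "0 = 1"
    return all(not (all(a == 0 for a in c) and r == 1) for c, r in rows)

# x1 + x2 = 1, x2 + x3 = 1, x1 + x3 = 1: summing all three equations gives
# 0 = 1 over F_2, so the system is unsatisfiable.
print(lin2_satisfiable([([1, 1, 0], 1), ([0, 1, 1], 1), ([1, 0, 1], 1)]))  # False
print(lin2_satisfiable([([1, 1, 0], 1), ([0, 1, 1], 1)]))                  # True
```

Adding a single global constraint modulo some *M* ≠ 2 destroys this clean linear-algebraic structure, which is precisely where the dichotomy in the abstract arises.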

Joint work with Joshua Brakensiek and Venkatesan Guruswami. The full paper can be found at https://arxiv.org/abs/1902.04740.

A recording of the talk is available at zoom.us

^{1} Our results are still quite far from establishing such a complete theory; this work is only a beginning.