Random Sampling in Discrepancy Theory
Xue Chen, April 13, 11:00
The general discrepancy problem asks how to find a coloring \(\mathbf{x} \in \{\pm 1\}^n\) minimizing \(\| \mathbf{A} \mathbf{x}\|_{\infty}\) for a given matrix \(\mathbf{A}\) of dimension \(m \times n\). Classical results study this problem for various forms of \(\mathbf{A}\). When \(\mathbf{A} \in \{0,1\}^{m \times n}\) is the incidence matrix of a set system with \(n\) elements and \(m=O(n)\) subsets, Spencer's celebrated result shows \(\min_{\mathbf{x}} \| \mathbf{A} \mathbf{x}\|_{\infty} = O(\sqrt{n})\). For a bounded-degree set system, where each element appears in at most \(d\) subsets, Beck and Fiala showed that the discrepancy is \(O(d)\).
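As a point of comparison, a standard Chernoff-plus-union-bound argument shows that a uniformly random coloring already achieves discrepancy \(O(\sqrt{n \log m})\) with high probability, which Spencer's theorem improves by a \(\sqrt{\log n}\) factor when \(m = O(n)\). The following is a minimal sketch (not from the text) that measures the discrepancy of a random coloring on a randomly generated 0/1 incidence matrix; the matrix size and random-instance setup are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: discrepancy of a uniformly random +/-1 coloring
# on a random set system with m = n sets over n elements (assumed sizes).
rng = np.random.default_rng(0)

n = 1024                                 # number of elements
m = n                                    # number of sets, m = O(n)
A = rng.integers(0, 2, size=(m, n))      # random 0/1 incidence matrix

x = rng.choice([-1, 1], size=n)          # uniformly random coloring
disc = np.max(np.abs(A @ x))             # discrepancy ||Ax||_inf of this coloring

# Chernoff + union bound: a random coloring gives O(sqrt(n log m)) w.h.p.;
# Spencer's theorem guarantees the stronger O(sqrt(n)) bound.
print(f"random coloring discrepancy: {disc}")
print(f"sqrt(n log m) scale:         {np.sqrt(n * np.log(m)):.1f}")
print(f"Spencer scale sqrt(n):       {np.sqrt(n):.1f}")
```

Running the sketch, the observed discrepancy of a random coloring tracks the \(\sqrt{n \log m}\) scale rather than the \(\sqrt{n}\) scale, which is the gap that Spencer's argument closes.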