Location: MSRI, Simons Auditorium
If a random variable X is easier to simulate than to analyze, one way to estimate its expected value E(X) is to generate n samples distributed according to the law of X and take their average. If the samples are independent, then (assuming X has finite variance) the estimate will have typical error O(1/sqrt(n)). But often one can do better by introducing appropriate forms of negative dependence. In the toy context of simulating Markov chains to estimate their absorption probabilities, I'll describe a scheme that uses maximally anticorrelated identically distributed Bernoulli random variables (aka rotor-routers) and has typical error O((log n)/n), and a related scheme with typical error O(1/n). This might seem to be optimal, and indeed one cannot expect the average (X_1+...+X_n)/n to differ from its expected value E(X) by less than O(1/n). However, by using weighted averages that assign X_i less weight when i is near 1 or n and greater weight when i is near n/2, one can get estimators for E(X) with typical error significantly smaller than O(1/n).
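As an illustration of the two ideas in the abstract, here is a minimal sketch on a toy example of my own choosing: symmetric gambler's ruin on {0,...,N}, where the absorption probability at N starting from state s is exactly s/N. The rotor-router version replaces each coin flip with a deterministic rotor that alternates between sending walkers right and left, so consecutive visits to a state are maximally anticorrelated. The Hann window used in `hann_weighted_mean` is only one illustrative choice of center-heavy weights; the abstract does not specify which weighting the talk actually uses, and all function names here are hypothetical.

```python
import math
import random

def iid_samples(n, start, N, rng):
    """n independent symmetric walks on {0,...,N}, absorbed at 0 or N.
    Returns the indicator stream X_1,...,X_n with X_i = 1 iff walk i
    is absorbed at N. Plain averaging gives typical error O(1/sqrt(n))."""
    out = []
    for _ in range(n):
        s = start
        while 0 < s < N:
            s += 1 if rng.random() < 0.5 else -1
        out.append(1 if s == N else 0)
    return out

def rotor_samples(n, start, N):
    """Same walks, but driven by rotor-routers: each interior state holds
    a rotor that alternates deterministically between stepping right and
    left, derandomizing the walk while preserving the visit frequencies."""
    rotor = [0] * (N + 1)  # 0 -> send the next walker right, 1 -> left
    out = []
    for _ in range(n):
        s = start
        while 0 < s < N:
            step = 1 if rotor[s] == 0 else -1
            rotor[s] ^= 1  # flip the rotor after each use
            s += step
        out.append(1 if s == N else 0)
    return out

def hann_weighted_mean(xs):
    """Weighted average assigning X_i little weight when i is near 1 or n
    and the most weight near i = n/2 (here via a Hann window; this is an
    illustrative kernel, not necessarily the one from the talk)."""
    n = len(xs)
    w = [math.sin(math.pi * (i + 0.5) / n) ** 2 for i in range(n)]
    return sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)

# Exact absorption probability at N from state `start` is start/N = 1/3.
xs = rotor_samples(9999, start=1, N=3)
print(sum(xs) / len(xs))        # plain average of the rotor stream
print(hann_weighted_mean(xs))   # center-weighted (smoothed) average
```

On this chain the rotor stream is eventually periodic, so both the plain and weighted averages of `rotor_samples` land far closer to 1/3 than the O(1/sqrt(n)) fluctuations of `iid_samples` with the same n.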
The methods and ideas are mostly probabilistic and combinatorial. No prior knowledge of rotor-routing or smoothing kernels, and no familiarity with (or fondness for) statistics, will be assumed.