The Bias–Variance Tradeoff

Just like in statistics, machine-learning models (typically called hypotheses) are subject to a bias–variance tradeoff: models with low variance (high stability, little variability across training sets) tend to exhibit high bias (they are systematically far off the mark), whereas models with low bias (accurate on average) can exhibit high variance.

In US politics, conservatives “stay the course” even if it’s the wrong course, perhaps because admitting error is unpleasant. This is analogous to high bias and low variance. Progressives, on the other hand, may be seen weighing the validity of multiple (including opposing) viewpoints. This is dismissed as flip-flopping (an instability of opinion, hence high variance), yet it can lead to choosing a stance that is better supported by evidence (low bias).

These, of course, are not the only two combinations. In human affairs the remaining two also occur, though they are rare in technology.

[After-the-fact clarification: High bias and high variance is not hard to find in human affairs: being far off the mark while wildly adjusting one’s position is neither unusual nor necessarily bad. Critical thinking requires adjusting one’s views; whether that adjustment takes the form of a conscious, well-informed homing in or a wild, unprincipled random walk is not addressed by the bias–variance tradeoff. The coexistence of low bias and low variance also occurs in people, and those people are quite impressive. It helps to remember, for this case, that low variance does not mean no variance, so we can still expect self-critique and adjustment from such individuals.]