All things being equal, things are never equal. We rely on experience and "gut instinct." Is that the best way? This article asks us to think about how and why we decide.
Nobel Laureate Danny Kahneman et al. explore the impact of "Noise" in decision-making.
"Most people...simply eyeball each line and produce a quick judgment." You did and judged Susie the stronger player.
Kahneman lays out the following:
- A set of predictor variables is used to predict a target outcome
- Human judges produce clinical predictions
- A rule (such as multiple regression) uses the same predictors to produce mechanical predictions of the same outcome
- The overall accuracy of clinical and mechanical predictions is compared
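Here is a minimal sketch of that comparison on simulated data. The predictors, weights, and noise levels are my assumptions for illustration, not numbers from the book: a "judge" sees the same cues as the regression but adds inconsistency, and the regression's correlation with the outcome comes out higher.

```python
# Sketch of the clinical-vs-mechanical comparison on simulated data.
# All numbers (weights, noise levels) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 400
X = rng.uniform(1, 10, size=(n, 3))            # three scout-rated predictors
true_w = np.array([0.5, 0.3, 0.2])
outcome = X @ true_w + rng.normal(0, 1.0, n)   # the target outcome

# "Clinical" prediction: the judge uses the right cues but adds noise.
clinical = X @ true_w + rng.normal(0, 2.5, n)

# "Mechanical" prediction: multiple regression on the same predictors,
# fit on half the sample and scored on the other half to keep it honest.
half = n // 2
model = LinearRegression().fit(X[:half], outcome[:half])
mechanical = model.predict(X[half:])

print("clinical r:  ", np.corrcoef(clinical[half:], outcome[half:])[0, 1])
print("mechanical r:", np.corrcoef(mechanical, outcome[half:])[0, 1])
```

Evaluating the rule on data it never saw matters; fitting and scoring on the same sample would flatter the regression.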
Paul Meehl examined studies comparing clinical and mechanical predictions and found that "mechanical rules were generally superior to human judgment."
In a given data set, the data can be flawed (e.g., a small sample size) or simply random. Other complications:
- Criteria might deserve heavier weighting or not correlate with success.
- A great leader at the bottom of the roster may or may not change outcomes.
- On a rating scale, the difference between a 6 and a 7 may not equal the difference between a 9 and a 10 (see the sketch after this list).
- Some "off the chart" skill might dominate.
- Positive traits tend to correlate.
- It's rare for a high basketball IQ player to be completely deficient in skill (I had one once).
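To make the rating-scale caveat concrete, here is a hypothetical sketch. If on-court value rises convexly with a 1-10 rating, a one-point jump at the top of the scale is worth far more than the same jump in the middle. The curve is an assumption, not measured data.

```python
import math

def production(rating, k=0.35):
    """Hypothetical convex mapping from a 1-10 scout rating to on-court value."""
    return math.exp(k * rating)

print(production(7) - production(6))    # value of improving from 6 to 7
print(production(10) - production(9))   # value of improving from 9 to 10 (far larger)
```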
Models have been used to predict success in graduate schools and the likelihood that accused criminals will jump bail.
We know the three top predictors of NBA success are college attended, performance, and age at the time of the draft (younger is better). A player's Wonderlic score or personality profile is interesting but not necessarily predictive.
Some factors, such as scoring, may overpower others. Should we concern ourselves with "possession enders" such as the Four Factors (effective field goal percentage, turnovers, rebounding, and free throws)? Or, if we already have several dominant scorers or rebounders, should we look for complementary players?
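For reference, a short sketch of Dean Oliver's Four Factors computed from box-score totals. The sample numbers are made up, and 0.44 is the conventional weight for free throws that end possessions.

```python
# Dean Oliver's Four Factors from box-score totals (illustrative inputs).
def four_factors(fgm, fga, fg3m, ftm, fta, tov, orb, opp_drb):
    efg = (fgm + 0.5 * fg3m) / fga              # shooting
    tov_pct = tov / (fga + 0.44 * fta + tov)    # turnovers
    orb_pct = orb / (orb + opp_drb)             # offensive rebounding
    ft_rate = ftm / fga                         # getting to the line
    return {"eFG%": efg, "TOV%": tov_pct, "ORB%": orb_pct, "FT rate": ft_rate}

print(four_factors(fgm=38, fga=85, fg3m=10, ftm=18, fta=24,
                   tov=13, orb=11, opp_drb=32))
```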
Kahneman shares, "There is so much noise in judgment that a noise-free model of a judge achieves more accurate predictions than the actual judge does."
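The idea behind that quote can be sketched directly: fit a regression to predict the judge's own ratings (not the outcome), then use that noise-free "model of the judge" in place of the judge. On simulated data (an assumption, as before), the model outpredicts the judge it was built from.

```python
# Sketch of a "model of the judge": regress the judge's own ratings on the
# predictors, then use that noise-free model instead of the judge.
# Weights and noise levels are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))               # predictors
w = np.array([0.6, 0.3, 0.1])             # the judge's implicit weighting
outcome = X @ w + rng.normal(0, 1.0, n)   # what actually happens
judge = X @ w + rng.normal(0, 2.0, n)     # the judge: policy plus noise

# Fit the model to predict the JUDGE, not the outcome.
model_of_judge = LinearRegression().fit(X, judge).predict(X)

print("judge vs outcome:         ", np.corrcoef(judge, outcome)[0, 1])
print("model of judge vs outcome:", np.corrcoef(model_of_judge, outcome)[0, 1])
```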
Are we opposed to algorithms? Can simple rules replace our judgment? Can we simplify?
"Robin Dawes...achieved a breakthrough...he proposed giving all predictors equal weights. His surprising discovery was that these equal-weight models are about as accurate as "proper" regression models, and far superior to clinical judgments."
The key is having inputs (predictors) that correlate with results. For example, a data point like "charges taken" feels meaningful but might not correlate with winning.
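A minimal sketch of Dawes's equal-weight idea, assuming you know only each predictor's direction. The stats and signs here are illustrative, not a recommended scouting formula.

```python
# Dawes's equal-weight ("improper") model: standardize each predictor so
# higher always means better, then just add them up.
import numpy as np

def equal_weight_score(X, signs):
    """X: (n, k) predictor matrix; signs: +1/-1 per predictor so that
    higher always means better. Returns the unweighted composite score."""
    z = (X - X.mean(axis=0)) / X.std(axis=0)    # put predictors on one scale
    return (z * np.asarray(signs)).sum(axis=1)  # equal weights: just sum

# Example: points, rebounds, turnovers (turnovers hurt, so sign = -1).
X = np.array([[18.0, 7.0, 3.1],
              [12.0, 9.5, 1.4],
              [21.0, 4.0, 4.2]])
print(equal_weight_score(X, signs=[+1, +1, -1]))
```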
The "broken leg" exception tells us to override the model when we have superior information, e.g. a "broken leg." That could include criminal or other "deal-breaker" character history.
Why don't we use prediction algorithms?
- Machines are not infallible.
- We trust our own decisions (see Moneyball).
- We fear replacement by algorithms.
- Lack of knowledge about algorithms and the data behind them hinders us.
At the least, including objective performance measures gives us information to challenge our "lying eyes."
Kahneman closes one chapter with this: "When there is a lot of data, machine-learning algorithms will do better than humans and better than simple models...they are free of noise and they do not attempt to apply complex, usually invalid insights about the predictors."
Spencer Haywood got a tryout at the University of Detroit. The coach told him that if he sank fifteen consecutive free throws, he would earn a scholarship. The rest is history. "In God we trust; all others need data."
Lagniappe: ATO Backscreen Lob