A Roll of the Dice, Part 2

Scouting beats stats, but scouting plus stats beats anything else. What’s the right way to make front office decisions? There’s actually a science to it. The second in a three-part series about the draft.

Simmons and Gladwell concluded their discussion about drafting with the following exchange:

Simmons: “You know, there’s no real science to it—”

Gladwell: “Well that’s my point.”

But there is. It’s the science of decision making under uncertainty, and it’s what the draft is all about.

Michael Lewis missed something. His 2003 book, Moneyball, had received critical acclaim, shot up bestseller lists, and helped accelerate the data revolution in sports. But, he wrote in the introduction to his latest book, The Undoing Project, while he captured the emerging use of statistics in baseball, he hadn’t seen the larger picture. “I’d set out to tell a story about the way markets worked, or failed to work, especially when they were valuing people. But buried somewhere inside it was another story, one that I’d left unexplored and untold, about the way the human mind worked, or failed to work, when it was forming judgments and making decisions. When faced with uncertainty – about investments or people or anything else – how did it arrive at its conclusions?”

The Undoing Project is a book about Daniel Kahneman and Amos Tversky, two psychologists who researched the systematic biases in human decision making the way nobody had before. But Lewis didn’t start the book by telling their story. Instead, the opening chapter details the drafting strategies of the Houston Rockets, zooming out from Moneyball to bring the larger picture into focus.

Data analysis has generally been treated in the NBA like a foreign object. In the minds of many it is a magic box in the corner of the room: you ask it questions and then decide how much to trust the answers. Some mysterious process produces those answers, and they aren’t always correct – but to be a modern NBA executive you have to consult the magic box and sometimes choose to listen to it.

This misunderstanding of the value of statistical analysis is a side effect of the “scouting vs. statistics” debate.¹ It is a framing the media has run with, and one that has unfortunately lodged itself deep in the minds of both fans and NBA executives: that these are two separate processes, two separate camps, and that a decision maker must choose between them at any given time.

But this is a dangerous misconception. It’s not about listening to one vs. the other. It’s about making a decision, and understanding how different pieces of evidence factor into that decision — no matter whether that evidence takes the form of a scout’s observation or the output of a statistical model.

Statistical analysis can’t be done for its own sake. It needs to flow out of an understanding of how humans make decisions, because it is just one aspect of what Kahneman and Tversky famously called “judgment under uncertainty.”

Judgment Under Uncertainty

It’s something we all do all of the time. There are the regular, small decisions: what clothes to wear that day, what route to drive, which restaurant to go to. Then there are the less common, larger decisions: which college to attend, which job to take, who to marry. All of them involve some degree of uncertainty, some number of important unknowns — and yet we have to decide.

This is the primary work of a basketball operations employee, whether GM, coach or video room intern. All are there to make, or help make, basketball decisions. Who to draft, whether to agree to a trade, how much money to offer a player, which lineup to use, how to defend the pick-and-roll. All of these are very important decisions and all of them have highly uncertain outcomes. But, again, we have to decide somehow. So how do we do this? How should we do this?

People have been discussing, researching, and debating this for a long time, of course. What Kahneman and Tversky² brought to light were the systematic mistakes we make in our decisions. They exposed the ways our minds would, over and over again, fall into certain traps, sometimes even when we already knew they were there. They showed that just going with the gut frequently leads to errors.

And here is where statistical analysis fits into the decision making process: as a tool to be used when we reach the limits of human capabilities. Think of it like a boat. Used in the right places and in the right ways — for example, to get across a river — it’s indispensable. But in the wrong places and the wrong ways — say, to get up a mountain — it will take you nowhere, and make you look ridiculous in the process.

This is the science of decision making under uncertainty. It encompasses everything from behavioral economics to cognitive psychology, from statistical thinking to group dynamics, and it has a lot to say about how to improve the accuracy of front office decision making. Perhaps nowhere more than at the draft.

Freestyle Chess

As we saw in Part 1, you don’t have to study the history of the draft for long to see how much we don’t know. The draft is all about uncertainty: how will a player’s skills translate to the NBA? What is the player like as a person, and how will that impact his development? What is the likelihood that injuries hold him back from consistently producing on the court?

Making decisions in a complex environment like this would be difficult enough. But player evaluation in the draft also happens to be a task that is very difficult to improve at. Developing intuition is easiest in a simple, predictable environment where there is clear and frequent feedback.³ Think about playing a card game or a video game: you get better at it without much effort because the rules are always the same and you find out pretty quickly when you make a good or bad decision.

The draft is the opposite of this: a lot goes into whether a player ends up being successful, the nature of the game is subtly shifting over time, and you often don’t find out the results of your decisions for years. It’s almost as if it were designed to keep people from getting better at it. Which is a large part of why there are so many mistakes, and why the league as a whole doesn’t seem to be getting much better at drafting over time.⁴

Yet, as we saw in Part 1, it’s not an impossible task — there is a rough order to the draft. It’s an order that comes from traditional scouting, expertise that has real value. More value, in fact, than statistics alone: if you forecast the success of prospects based solely on their college stats, and compare that to forecasting their success based solely on where they were picked, you’ll find that draft position is the more accurate predictor. Scouting beats stats.
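As a rough sketch of how that comparison might be run (this isn’t the actual study — the data file, column names, and outcome measure below are all hypothetical):

```python
# Hypothetical sketch: compare two simple forecasts of NBA success.
# Assumes a CSV with made-up columns: college box score stats, draft
# pick, and a career outcome measure such as win shares.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("prospects.csv")  # hypothetical data file

outcome = df["career_win_shares"]  # how the player actually turned out

# Forecast 1: college stats only
stats_features = df[["college_pts", "college_reb", "college_ast", "college_ts_pct"]]
stats_r2 = cross_val_score(LinearRegression(), stats_features, outcome,
                           cv=5, scoring="r2").mean()

# Forecast 2: draft position only (a proxy for the league's scouting consensus)
pick_feature = df[["draft_pick"]]
pick_r2 = cross_val_score(LinearRegression(), pick_feature, outcome,
                          cv=5, scoring="r2").mean()

print(f"College stats alone:  R^2 = {stats_r2:.2f}")
print(f"Draft position alone: R^2 = {pick_r2:.2f}")
# The claim in the text: the draft-position forecast comes out ahead.
```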

When I moved out to Portland and started working full-time with the Trail Blazers, I have to admit I was a bit skeptical of this idea of scouting expertise. There were so many mistakes and misjudgments by the league as a whole — how much more could scouts really see than a hardcore fan? I quickly learned the answer: a lot. I had been carefully watching film for a few years at that point, and yet whenever I sat down to watch film with one of our executives, they’d consistently point out things I would have missed on my own. It took me years to feel like I could see many of the same things they did. Their experience was evident.

This expertise comes from years of pattern recognition. The scouting brain has been trained on countless hours of watching basketball, and so it can recognize potential in one player that another lacks. But there’s a danger here as well. This recognition is often subconscious, and it might pick up on patterns that don’t matter: focusing too much on what a player looks like and how he carries himself rather than on what truly predicts success. It’s the reason so many pre-draft comparisons are based on race, body type, or school instead of style of play.

That’s why the best decisions are made by blending these two areas appropriately. Tyler Cowen, a professor of economics at George Mason University and co-author of the fascinating blog Marginal Revolution, uses freestyle chess as an example of the power of this “experts + computers” dynamic. Ever since the chess computer Deep Blue beat Garry Kasparov in 1997, chess has been held up as an example of how computers are becoming better than humans at many tasks. And it’s true: the best computer chess programs now easily beat the best human players. Yet, Cowen notes, the best chess games in history have tended to be played not by computers, but by human-computer teams in the “freestyle” tournaments that allow this: experts using many different chess programs as aids.

We can look to this as a model for how to make front office decisions. Analytics are there to compensate for the limitations of expert decision makers: stats can capture more information than any observer can, models can weight that information more accurately, and numbers don’t have the psychological quirks that can lead to irrational decisions. People, on the other hand, know the sport. They can use their knowledge of how the game works to understand where the data comes from, which questions are the right ones to ask, and whether the answers provide real insight. A decision making process that combines the two will be more accurate than either on its own. Scouting plus stats beats anything else.
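To make the idea of combining the two concrete, here’s a minimal sketch of one way a model’s projection and a scout’s grade could be blended into a single estimate. The function, scales, and weights are invented for illustration — a real front office would tune them against historical drafts:

```python
# Minimal sketch of blending a statistical projection with scouting judgment.
# The 0.4 weight below is illustrative, not any team's actual number.

def blended_projection(model_score: float, scout_grade: float,
                       model_weight: float = 0.4) -> float:
    """Weighted average of a model's projection and a scout's grade,
    both assumed to be on the same 0-100 scale."""
    return model_weight * model_score + (1 - model_weight) * scout_grade

# Example: the model is lukewarm on a prospect the scouts love.
print(blended_projection(model_score=55.0, scout_grade=80.0))  # 70.0
```

Even a crude weighted average like this has the right structure: both signals enter every evaluation, rather than the decision maker picking one source to trust on a given night.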

In his opening chapter Lewis describes how Rockets GM Daryl Morey came to this realization after a mistake in the draft, one where Morey felt he had relied too much on the results of his draft model. “And thus began a process of Morey trying as hard as he’d ever tried at anything in his life to blend subjective human judgment with his model. The trick wasn’t to just build a better model. It was to listen both to it and to the scouts at the same time.”

“At the same time” is the key here. It’s not about just trading off between listening to one or the other. The two work hand in hand: experts understanding the strengths and weaknesses of the data, the data employed to mitigate the weaknesses of the experts. When making decisions under uncertainty, you have to know how and when to use the tools at your disposal. After all, you don’t want to try to climb a mountain in a boat.


  1. A debate that Lewis himself helped start with the publication of Moneyball.
  2. Building off the work of those who came before them, such as Herbert Simon.
  3. As theorized by Daniel Kahneman and Gary Klein in their 2009 paper on expertise.
  4. Per Michael Lopez’s graph, shown in Part 1.