A few years ago, I read Superforecasting by Phil Tetlock and Dan Gardner, and it changed the way I see the world. Tetlock has spent decades researching predictions, and in Superforecasting he relates what separates the truly accurate forecasters from the dart-throwing chimps. That in and of itself makes the book worth reading. But what captured my attention was even more fundamental: the importance of measuring forecasts in the first place. As Tetlock and Gardner write:
Imagine a world in which people love to run, but they have no idea how fast the average person runs, or how fast the best could run, because…there are no independent race officials and timekeepers measuring results. How likely is it that the running times are improving in this world? Not very.
That idea gnawed at me. It had never quite hit me so squarely how little we actually measured in the NBA world. We had hundreds of people within the league, whether scouts or front-office executives, whose primary business it was to make predictions. And yet these predictions were only roughly measured, if at all. That’s even more true outside of NBA teams. So much of analyzing sports, whether as a media member or a fan, is making predictions—but nobody is keeping track.1
And so I resolved that as soon as I had the opportunity I would try to bring Superforecasting-style prediction to the basketball world. I finally had that chance last year. The result was Predict, a competition where anyone could try to predict the answers to basketball questions like: “Will the Warriors win the championship?” or “Will Luka Doncic win the Western Conference Rookie of the Month for December?” I created a platform for people to sign up, lock in their odds, update at any time, and be scored in the same way Tetlock’s forecasters were.
I’ll admit, I didn’t quite know what to expect. I was excited by the chance to actually test my own NBA knowledge against others, but also realistic enough to know that it could go quite badly for me. After all, here I am, making my living analyzing the NBA—what if I got things wildly wrong?
And yet, as I wrote at the start of the competition, as embarrassing as that outcome would be, it would also be the point. Far worse, in my mind, to be analyzing the game poorly and not know it, which is exactly what happens when you hide your ignorance from the public, and from yourself.
With the first season of Predict finished, I will say the experience of playing was both humbling and validating. Contestants on Predict are awarded points based on how accurate they were relative to the crowd. If their prediction is better than the crowd's, they gain points; if it's worse, they lose points. So how did I do? Well, I certainly didn't beat the crowd, finishing with -236 points on the 31 questions I answered.
But that score is a little misleading, because it turns out that beating the crowd is a really difficult task. Only 73 of the 1,260 participants who answered at least 15 questions ended with a positive score. And my score of -236 was still better than 92% of those 1,200+ people who answered enough questions. I'll repeat: it's really difficult to beat the crowd.
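To make the scoring idea concrete, here is a minimal sketch of crowd-relative scoring in the spirit of Tetlock's Brier scores. The actual Predict formula isn't spelled out above, so the squared-error scoring rule, the 100-point scaling, and the function names below are my assumptions for illustration, not the site's real math.

```python
# Sketch of crowd-relative scoring, Tetlock/Brier style.
# The real Predict formula isn't described in this post, so the squared-error
# rule and the 100-point scaling here are assumptions made for illustration.

def brier(prob: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome (lower is better)."""
    return (prob - outcome) ** 2

def crowd_relative_score(my_prob: float, crowd_prob: float, outcome: int,
                         scale: float = 100.0) -> float:
    """Positive if my forecast beat the crowd's, negative if it was worse."""
    return scale * (brier(crowd_prob, outcome) - brier(my_prob, outcome))

# Example: I give the Warriors a 45% title chance, the crowd says 55%,
# and they don't win the championship (outcome = 0). I beat the crowd.
print(crowd_relative_score(0.45, 0.55, 0))  # +10.0
```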
Yet even though I did well in the competition, it didn’t feel that way going through it. Making predictions in the face of uncertainty is uncomfortable. You realize all of the ways you could be wrong, all of the things you don’t know. And then when they’re revealed to you, when you find out the answer, you kick yourself for not realizing it earlier.
That's life, of course. We make decisions all of the time, big and small, in the face of great uncertainty. The only way we avoid a state of constant regret is by deluding ourselves, often through hindsight bias. Hindsight bias is why everything seems obvious once we know the answer.2 So we tell ourselves that we really were right with our prediction; we just missed one small part. Or that some truly random, hard-to-predict thing happened and threw everything off. That keeps us feeling good about ourselves, and prevents us from being overwhelmed by the knowledge of how little we really know.
And it’s why playing a game like Predict is so revealing. You learn about basketball, about yourself, and about the world, in a way you only can if you are constantly making predictions, recording them, and then being evaluated on them. It makes you vulnerable, yes. It can be quite uncomfortable. But there aren’t many better ways to learn and grow.
Take my experience playing the Playoff Predict competition, where the participants predicted the chance a given team would win each playoff series. I struggled there, clocking in at just the 37th percentile. And that has made me reflect deeply on what I got wrong and why. I think I was overconfident, anchoring too much to my initial predictions as the series went along. I also overrated my own analysis of the potential matchups: knowing that one team has an advantage in a specific area is useful, but only if the coaches and players make decisions the way you'd expect, and that frequently doesn't happen. Plus there's always the fact that it was a small sample, and maybe I just got a little unlucky.
That last point is a little tongue-in-cheek, of course—I’m not going to chalk up my poor performance to bad luck while claiming my successful predictions were skill. But it’s also a little true. The nature of this kind of competition is that luck does play a role, especially if there aren’t many questions.
At the end of the season I surveyed the top 10 scorers in the season-long Predict competition to see how they approached it and what we could learn from their success. Many of them emphasized the role luck played in their results, which is both the humility you'd expect from someone who would succeed at this type of competition and, frankly, true. Since the scoring system heavily weights the season-long questions, hitting big on one or two of them while avoiding a major miss was enough to vault you to the top of the leaderboard.
Still, there were other commonalities among the top scorers that suggest their success was not all luck and small sample size. First, they rarely put 0% or 100% as their prediction. They knew that the world is too uncertain to guarantee that something will, or won't, happen. Garrett, who came in 10th place, told me:
I’m a skeptical person by nature. I’ve played a fair amount of poker, blackjack, etc. in my day and I can’t tell you how many times I’ve heard someone say an event will, or will definitely not occur, only to see them swiftly proven wrong. I say that to illustrate that I went into these questions with the basic assumption of “well I could see it going either way really.”
Andy, who came in 7th, said:
For the most part, my approach on this was to initially not have anything at 100% or 0% since things can be so random and things like injuries, teams tanking, trades, etc. could swing things so much.
That approach may have meant leaving points on the table when they got a question right, but it protected them from the kind of massive screw-up that would have torpedoed their chances of finishing near the top.
Second, most of them used a strategy of anchoring and adjusting. They would find something they thought was relevant to base their initial prediction on, and then adjust it based on their own opinions. For example, first-place finisher Siddharth said that to come up with the odds the Warriors would win the championship, he started with the Vegas odds and then factored in that he was more pessimistic than the public on Golden State's chances for a variety of reasons. So he anchored to a public estimate and adjusted downward from there.
He was not the only one of the top finishers to use this kind of strategy. This helped keep them moored to a guess that was backed by evidence while still allowing them to factor in their own opinion when appropriate.
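As a purely hypothetical illustration of that anchor-and-adjust approach, here is how converting a market line into a probability and then shading it might look. The specific line and the size of the adjustment below are invented for the example; only the general technique reflects what the top finishers described.

```python
# Hypothetical anchor-and-adjust example. The -150 line and the 8-point
# adjustment are made up for illustration; only the technique (convert a
# market price to a probability, then nudge it) comes from the contestants.

def implied_prob_from_moneyline(moneyline: int) -> float:
    """Convert American moneyline odds to an implied probability (ignoring the vig)."""
    if moneyline < 0:
        return -moneyline / (-moneyline + 100)
    return 100 / (moneyline + 100)

anchor = implied_prob_from_moneyline(-150)   # 0.60: the market's view
my_estimate = anchor - 0.08                  # 0.52: shade down if you're more pessimistic
print(round(anchor, 2), round(my_estimate, 2))
```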
Third, the top finishers were willing to update their predictions as events unfolded—but not too much. Almost none of them kept their predictions the same the whole way through the season. If something big happened that changed their estimate of the likelihood, they updated their prediction. But they also tried not to overreact, understanding that maybe something like the KD/Draymond spat in the middle of the season wasn’t a good sign for the Warriors, but it didn’t necessarily doom them to lose this year on its own.
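One toy way to picture that "update, but don't overreact" habit is to move only part of the way toward what a piece of news seems to imply. The weight below is an arbitrary choice for the example, not something any contestant reported using.

```python
# Toy illustration of tempered updating: shift a probability only part of the
# way toward what new information suggests. The 0.3 weight is arbitrary.

def tempered_update(current: float, news_implied: float, weight: float = 0.3) -> float:
    """Move a probability estimate partway toward what the news implies."""
    return current + weight * (news_implied - current)

# If I had the Warriors at 50% and thought the mid-season drama, taken at face
# value, implied more like 40%, a tempered update only moves me to 47%.
print(round(tempered_update(0.50, 0.40), 2))  # 0.47
```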
These are principles we can all learn from in our future predictions. Which will be important because, yes, Predict will be back for a second season.
In the spirit of learning from the past, there will be some tweaks to the format next year, part of which will be based on your feedback and suggestions. What changes would you want to see? What recommendations do you have? Please let me know in the Discussion section.
Congratulations to Siddharth for coming in first place, and to the 72 others who finished better than the crowd. And if you didn't do well or didn't play at all, I hope you'll join us next season when we wipe the slate clean and see if we can all learn from this past year, following Tetlock's principles of using measurement to help us improve.
1. Surely plenty of these predictions are just for the sake of entertainment. But we still value accuracy from our experts (or at least pretend to).
2. I didn't come up with that line. It's the fantastic title of Duncan Watts' book.