
Ben Brostoff


27 Dec 2018
Stop Resulting

In 2018, the best lesson I learned was to stop creating narratives based on results.

Creating a narrative based on results is what former professional poker player and World Series of Poker Champion Annie Duke calls “resulting” in her book Thinking In Bets. One example Duke returns to throughout the book is some poker players’ tendency to use the results of a hand to evaluate the decisions that took place during it.

Players who fall into the trap of resulting often want to preserve a narrative that emphasizes their own (assumed) above-average skill. These players believe that when they won a big hand, the decisions behind it were skill and not luck-based. Conversely, to preserve the narrative, the same players assume their losses were luck and not skill-based.

In reality, luck plays a role in the outcome of any individual poker hand. As Duke notes in the book, if a player is an 80% favorite to win a hand before more cards hit the table, losing once all the cards are out doesn't mean the decision to play was wrong; it is simply the 20% of the time another player wins.

The outcome of a hand like that isn't part of a narrative, but part of a statistical distribution. The player who lost needs to place the decision to play in the context of the 80%, and the player who won should recognize they had only a 20% chance of victory in the pre-flop situation.
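To make the distribution point concrete, here is a minimal simulation of Duke's 80/20 example (the probabilities come from the book; the code and trial count are my own illustration):

```python
import random

def play_hand(win_probability: float) -> bool:
    """Resolve one hand as a single Bernoulli trial."""
    return random.random() < win_probability

trials = 100_000
wins = sum(play_hand(0.80) for _ in range(trials))
print(f"80% favorite won {wins / trials:.1%} of {trials} hands")
# Roughly 20% of these hands are losses. A resulter looks at any one
# losing hand and concludes the decision to play was bad; the
# distribution says losses this often are exactly what to expect.
```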

Resulters ignore or manipulate this context in order to draw a lesson from an outcome that fits a narrative. Resulting is clearly detrimental to learning because it fails to consider significant portions of the why behind decisions — views of uncertainty at the time the decision was made, and whether those views were accurate.

My opinion — based on my own life — is that most junior and mid-level jobs prepare people to be resulters. The jobs I had early out of college involved implementing well-defined specs in code (ex. hide the button when the user isn't authorized) or making template-based Excel models (ex. reduce the cash flow 20% in the stress test). Quality of work was easy to measure — the code or the model did what was expected.

The outcomes in the businesses I worked at were also easy to measure (whether I was looking at the right outcomes is for another blog post). Looking at user traffic, deals won, or capital raised (and at what valuation) was the scoreboard to me. So, my logical conclusion most times until this year was:

If I execute well, my company will have a higher probability of success

Now, this may come off as controversial, but for the majority of jobs at the majority of companies, I believe this statement to be wrong. Most people at companies execute ideas created by others and execute them fairly well. Based on the ever-improving quality of tools in finance and web development — the two industries I’m most familiar with — good execution is becoming easier and easier.

When was the last time you saw a start-up that did not have a flashy-looking product or an impressive demo (signs of great execution)? Yet the statistics are clear — most start-ups fail. Clearly, execution isn't the whole story, or success would be more common. It may not even be positively correlated with success, as evidenced by start-ups where an unpolished prototype representing a pivot saved the day, supplanting a polished existing product.

I started my own software consulting business this year and gained an appreciation for the impact of confusing outcomes with process. My first introduction to this idea was getting clients. Working through a combination of LinkedIn, AngelList, HN's Who's Hiring, networking events, and friends and family, I found some introductory calls would go swimmingly and others would be complete failures.

My actions before the calls were the same — same research process, same introduction, mostly same pitch (with some tweaks over time as I learned more).

I initially equated the bad outcomes with a lack of skill and the good ones with skill, but over time it became clear that companies have varying degrees of openness to contracting. Some companies I encountered only had developers who were contractors; others only had full-time developers. There was no easy way to learn this without doing the call in the first place.

I now accept, based on the data I've gathered, that the percentage of companies willing to hire contractors for the services I provide is probably below 25%. This is not to say I haven't tanked some calls and performed well on others; it's just that I hear the "we generally don't hire contractors" line or some variation of it 75% of the time. Some of these companies can still be convinced, but the odds of success are far lower than with a company that has had positive experiences with contractors.
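A rough sketch of why outcomes alone say little about the pitch (the 25% base rate is my estimate above; the close rate is a made-up number for illustration):

```python
# Hypothetical funnel: pitch quality only matters once a company is
# past the "do we hire contractors at all?" filter.
base_rate = 0.25          # estimated share of companies open to contractors
close_rate_if_open = 0.5  # assumed close rate when pitching an open company

p_win_per_call = base_rate * close_rate_if_open
print(f"Expected win rate per cold call: {p_win_per_call:.1%}")  # 12.5%
# Most calls are lost before the pitch starts, so judging any single
# call's pitch by its outcome is resulting.
```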

Another example — early in my search for clients, I found a majority of companies asking for help with devops, specifically Kubernetes. I had to pass on these opportunities, as my Kubernetes experience is minimal. I became concerned that the only consulting opportunities might involve Kubernetes, despite the fact that the sample size at that point was probably five 30-minute discussions.

I can say now with a full year of calls under my belt that help with Kubernetes or devops tasks represents a small proportion of the consulting asks I get. The spike in asks at the beginning was the result of a small sample size. Deriving information from this sample was ill-advised and a classic example of watching the tape.
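As a sketch of how noisy a five-call sample is (the 10% long-run rate and the sample sizes below are assumptions for illustration, not measured numbers):

```python
import random

def observed_share(true_rate: float, n_calls: int) -> float:
    """Share of n_calls that happen to be Kubernetes asks."""
    hits = sum(random.random() < true_rate for _ in range(n_calls))
    return hits / n_calls

# Suppose 10% of all asks are really about Kubernetes.
print([observed_share(0.10, 5) for _ in range(5)])
# Five-call samples routinely show 0%, 20%, or even 40%.
print([observed_share(0.10, 100) for _ in range(5)])
# Hundred-call samples cluster near the true 10%.
```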

I used to see rejections by companies, or opportunities I declined because I couldn't help, as personal failures, and sometimes still do. But I think this is an insidious form of resulting that will hurt my business long-term, just as it would hurt to assume knowledge of one JavaScript framework will be worth as much in three years as it is today. Deep diving into whatever technology is popular and continually using the same tools can both be harmful when driven by knee-jerk interpretations of results. Execution without constant evaluation of what is being executed is, as discussed in Naval Ravikant's recent podcast with Kapil Gupta, hard work for hard work's sake.

Changes in priorities and plans are evidence that the leaders of a business are evaluating the probabilities of success for different strategies, be it correctly or incorrectly. Importantly, this view is different from what I used to believe. It used to hugely frustrate me when I worked on some complicated interface for months only for it to be thrown away. I would attribute the code churn to bad communication between sales and engineering — an execution failure. Perhaps this explanation is accurate some of the time. But plans also change because new information forces strategy changes.

Resulters blame execution and bad decisions for why certain strategies don't pan out. They fail to recognize that their evaluation is based on outcomes: the execution was good if the salesperson made the sale; the decision to play the hand was bad if the player lost the hand.

I now believe figuring out what needs to be done requires viewing process and outcomes separately. The risk of tying good and bad outcomes to skill through a narrative is rejecting useful strategies and embracing harmful ones. In 2019, I want to eliminate this risk by separating process and outcomes.

