On performance at work

Unlocking Potential is a newsletter by me, Francisco H. de Mello, CEO of Qulture.Rocks (YC W18)


After a long hiatus, which I intend to talk about later, I wanted to write about a topic I've been thinking about a lot: performance, more specifically performance at work.

Every entrepreneur on earth wants her company to perform at its best, or at its fullest potential. In order to make that happen, each and every person in the company has to perform at their best.

But what is performance? How can we influence performance? What are its components and determinants?

That's what I want to discuss in this essay. I hope you come out on the other side with a better mental model of what performance is and how to drive it.

Performance = NPV of behavior(s)

Let's start with a working definition of what performance actually means.

The title above is not quite true but points to the truth [1]. Performance is the value of the behaviors of an employee in a given period of time. Let's break down the definition.

First, we talk about behaviors. Behaviors are the work. It's writing lines of text (or code), calling customers, designing UIs, posting on social media, running webinars… you get the picture.

Second, we talk about value. Here, it helps to think that each relevant behavior we take at work has a net present value: it produces a tilt in the company's future cash flows, even if a minimal one. We hope, of course, that the value is positive (added) and not negative (subtracted). People work for organizations because they are able to offer value to the organization, ideally more value than they cost in salaries, benefits, and so on.

If we analyze all these relevant behaviors for a given period - let's say a year - we can theoretically add up their NPVs, compare said NPVs with the NPVs of previous periods, compare the NPVs of different employees, and so on.
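As a toy illustration of the bookkeeping this definition implies (every behavior, number, and NPV below is hypothetical, invented for the example):

```python
# Toy model of "performance = sum of the NPVs of behaviors" over a period.
# Each behavior an employee performed carries an estimated NPV: the tilt
# it produces in the company's future cash flows. All figures are made up.

behaviors_2023 = {
    "closed_acme_deal": 120.0,      # positive: new recurring revenue
    "refactored_billing": 35.0,     # positive: prevents future bugs
    "shipped_buggy_release": -20.0, # negative: support costs, churn risk
}

def performance(behaviors: dict[str, float]) -> float:
    """Aggregate performance for a period: the sum of each behavior's NPV."""
    return sum(behaviors.values())

print(performance(behaviors_2023))  # 135.0
```

With numbers like these in hand (however theoretical), comparing periods or comparing employees reduces to comparing sums.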

Results: Abstracting performance

We frequently equate performance with results, but results are abstractions created to make our lives easier. One such abstraction we frequently call a “result” is improving KPIs like sales or profits. We could also say that for somebody as experienced as salesperson A, selling 100 dollars is about right, and that, therefore, sales of 90 mean something about her performance.

But just as the map is not the territory, results are not value added per se. If I sell X dollars of product, the amount sold is not my performance. Results are proxies, and we have to be very aware of that status and its limitations.

There are two problems with blindly equating (in the case of salespeople) sales dollars with performance. The first problem is that several factors outside the salesperson's control may have contributed to the number being 90 (and not 50 or 120). For example, a lead referred by the CEO may have converted into a 20-dollar deal. The government may have pushed a bill regulating the salesperson's industry in a way that made her product necessary (or obsolete). The salesperson may have caught COVID-19 and spent two weeks at home, unable to work because of the symptoms. You get the picture. The point here is that two salespeople with results of 90 dollars in sales in a given period may have very distinct performances, because very different factors may be at play influencing each result.

The second problem is that many factors may influence how much value the 90-dollar sales figure actually generates for the company. In other words, presuming that the 90-dollar figure is a good measure of how much value was added to the company is frequently dangerous. For example, the salesperson may have (even knowingly) sold a deal that is very likely to churn in the short run; most of her deals may be in a low-growth industry that is unlikely to bring upsells over the lifetime of the contract; or the pricing may just be off (too cheap, eating margins, or too expensive, creating reputational risk). The point here is: the same 90-dollar figure produced by two different salespeople may mean a very different value added to the organization.

By the way, people also use goals as proxies for performance. If my goal was to sell 100 dollars and I sold 90 dollars, I hit 90% of my goal. That abstracts performance one level further: we decide how much someone should bring in terms of results and measure “performance” against that expectation. So goals are not performance per se either.

I know you must be thinking I'm going too deep into the sales examples. The thing is, I wanted to start with the case where equating performance with a metric or the attainment of a goal (or quota, in industry parlance) is easiest to defend.

If we move to other functions within a company, the gap becomes wider.

Let's think of what performance means for a developer. Based on our definition, performance would mean the developer's behaviors that produce positive value for the organization. Shipping code that becomes a feature customers use would be an example of a performance sample. Doing so with clean, understandable, reusable code is an even better example, since the developer prevents problems that would otherwise happen in the future (and subtract from future cash flows). But would you feel comfortable equating the number of lines of code merged by the developer with value added to the organization? How about if you threw in an additional measure of code health? Or the number of comments said code gets from code reviewers within the team? Even then, I think you wouldn't want to do that. It's just too crude an abstraction.

Even though I'd say 99% of top sales organizations blend the concepts of performance and quota attainment so thoroughly that there seems to be no distinction between the two, I think it's OK to do so in the case of sales: weighing costs and benefits, you'll be fine. I just wanted to point out how severe the limitations are, especially because for most other functions they are even more severe.

The determinants of performance

How can we enhance performance? What does it take for someone to perform? Or, better: what makes, ceteris paribus, one person perform better and another perform worse? That's the realm of performance determinants.

There are three determinants of performance: declarative knowledge, procedural knowledge, and motivation.

In a nutshell, these mean, respectively, knowing what to do, knowing how to do it, and wanting to do it.

Declarative knowledge, or knowing what to do, is partly a matter of education: I know, for example, what a DCF is, and how an income statement, cash flow statement, and balance sheet work. It is also knowing what I must do in the specific case at hand: I need to perform a valuation analysis of Acme Inc., by Friday, with the goal of helping my boss decide whether our firm should invest in Acme Inc. According to Wikipedia (which has some great, easy-to-grasp examples):

In epistemology, descriptive knowledge (also known as propositional knowledge, knowing-that, declarative knowledge,[1][2] or constative knowledge)[3][4] is knowledge that can be expressed in a declarative sentence or an indicative proposition.[5] "Knowing-that" can be contrasted with "knowing-how" (also known as "procedural knowledge"), which is knowing how to perform some task, including knowing how to perform it skillfully.[1] It can also be contrasted with "knowing of" (better known as "knowledge by acquaintance"), which is non-propositional knowledge of something which is constituted by familiarity with it or direct awareness of it. By definition, descriptive knowledge is knowledge of particular facts, as potentially expressed by our theories, concepts, principles, schemas, and ideas.[6] The descriptive knowledge that a person possesses constitutes her understanding of the world and the way that it works.[6]

Procedural knowledge, or knowing how to do it, is knowing how to apply declarative knowledge. It's a bit fuzzier, but means I know how to actually go about doing the DCF, or, better, how to apply the declarative knowledge I have in practice. Wikipedia does a great job of describing procedural knowledge:

Procedural knowledge (also known as knowing-how, and sometimes referred to as practical knowledge, imperative knowledge, or performative knowledge)[1] is the knowledge exercised in the performance of some task. Unlike descriptive knowledge (also known as "declarative knowledge" or "propositional knowledge" or "knowing-that"), which involves knowledge of specific facts or propositions (e.g. "I know that snow is white"), procedural knowledge involves one's ability to do something (e.g. "I know how to change a flat tire"). A person doesn't need to be able to verbally articulate their procedural knowledge in order for it to count as knowledge, since procedural knowledge requires only knowing how to correctly perform an action or exercise a skill.

Motivation, to complete our triad, means wanting to do something. We can further break motivation down into three components. One is the choice to do something: I either want to do it or not; it's binary. Another is the level of effort I'm willing to expend in doing it (if the choice was “yes”). The final one is the duration of that effort: for how long I'm willing to sustain the chosen level of effort. Another way to put it is direction, amplitude, and duration (Campbell 1993).

I want to run the marathon, I want to go hard, and I'm willing to go hard for the three or four hours necessary to complete the race [2].

Measuring performance

Now we get to an important consequence of our discussion of what performance actually is: measuring performance. Measuring performance is really important because it allows us to a) improve performance and b) reward performance.

One common way to measure performance is to measure what organizations believe are two different aspects of performance: behaviors, on the one hand, and results, on the other hand.

HR professionals usually think these are actually two different aspects or dimensions of performance. Results are “what” someone does. Behaviors are “how” someone does the “what.” But understanding what performance really is allowed me to have a better perspective on this [3].

When I started to think deeply about performance, I realized that behaviors and results are not “what” and “how,” or different aspects of performance, but actually different measures of the same thing.

Behaviors are what we should actually be focusing on. We should try to look at all the relevant behaviors someone had in a given period, and assess how much value these behaviors added to (and subtracted from) the organization. And this is a really hard task. So hard that we've tried to find a shortcut, and that shortcut is results.

Results are proxies for the value those behaviors generated for the organization; they are pretty flawed, even though they are much easier to analyze. It's much easier to gauge how much a salesperson actually sold in dollar terms (or how much she sold against her quota) and triangulate some measure of performance than to shadow her all year long and observe her behaviors.

But pairing both doesn't mean looking at the “what” and the “how.” It means looking at performance with two different tools that aim to measure the same thing. In the sales example, that even makes sense: since both tools are flawed, how about pairing them and getting the average?
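A minimal sketch of that pairing, under the assumption (mine, not the essay's) that both measures can be normalized to a 0–1 scale before averaging; the function name and numbers are hypothetical:

```python
# Pairing two flawed measures of the same thing: performance.
# A manager's behavior rating (0-1) is averaged with quota attainment
# (capped at 1 so over-attainment doesn't mask weak behaviors).
# All numbers are hypothetical illustrations.

def paired_score(behavior_rating: float, quota_attainment: float) -> float:
    """Average two noisy measurements of the same underlying performance."""
    return (behavior_rating + min(quota_attainment, 1.0)) / 2

print(paired_score(behavior_rating=0.8, quota_attainment=0.9))  # roughly 0.85
```

The point of the cap is a design choice worth noticing: if both tools measure the same thing, an extreme reading on one shouldn't be allowed to dominate the pair.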

In other functions, we don't even have reasonable results, so we may just look at behaviors. That would be useful for developers, designers, finance and accounting professionals, brand marketers, etc.

Improving performance: working “harder” or “smarter”

How does performance improve? Two ways: effort or development.

This is really interesting.

A salesperson (let's call her A, for consistency's sake) can improve her production by working 30 minutes more every day to make an additional call, resulting, given a constant call-to-deal conversion rate, in additional dollars or logos sold.

Salesperson A has another way to improve her performance: tweak her sales pitch in order to improve her conversion rates and, given a constant commitment of hours, sell more.

We could call these two alternatives "working harder" and "working smarter," to be aggressively simplistic.

You can argue that improvements from "working smarter" are more durable because it's harder for them to recede. Once you've adopted the better pitch, why would you revert to the old one? "Working harder," on the other hand, is more volatile: if you're not feeling well, or lost a bit of motivation, you can just work less and then produce less.
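The two paths can be compared with a back-of-the-envelope model (every figure here is a hypothetical illustration, not data from the essay):

```python
# "Working harder" vs "working smarter" for salesperson A.
# All figures are hypothetical illustrations.

def monthly_sales(calls_per_day: float, conversion_rate: float,
                  deal_size: float, workdays: int = 20) -> float:
    """Expected monthly sales: calls x conversion rate x deal size."""
    return calls_per_day * workdays * conversion_rate * deal_size

baseline = monthly_sales(calls_per_day=10, conversion_rate=0.05, deal_size=10)

# "Working harder": 30 extra minutes a day buys one more call.
harder = monthly_sales(calls_per_day=11, conversion_rate=0.05, deal_size=10)

# "Working smarter": same hours, but a tweaked pitch lifts conversion.
smarter = monthly_sales(calls_per_day=10, conversion_rate=0.06, deal_size=10)

print(baseline, harder, smarter)
```

In this toy setup the harder path lifts sales by 10% while the smarter path lifts them by 20%, and only the harder path evaporates the day A stops putting in the extra half hour.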


This was supposed to be an essay, in the sense of an unstructured exploration of a topic without a clear thesis or conclusion. But if you could take one thing with you, I'd urge you to take this: performance is the value that people's behaviors add to an organization. Results and goal attainment are not performance, but proxies for performance. In order to really assess performance, you'd have to shadow people through all work-related situations and then sum up the value added.

[1] Most of what I discuss in this article can be traced back to Campbell (1993). It's a great piece of work [4].

[2] This example still isn't as crisp as I wanted it to be. It doesn't feel 100% right, especially the “duration of effort” part.

[3] I usually hate these “what” and “how” analogies. In the realm of OKRs, John Doerr explains that objectives are “what” we want to achieve, and key results are “how” we will achieve these objectives. So much trouble has been caused by this terrible explanation.

[4] Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1993). A theory of performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 35–70). San Francisco, CA: Jossey-Bass.