Published 15 May, 2020 · 5 minute read
When you think about products, not projects, you run into a problem: how do I know whether the product was successful? Is it worth continuing to invest in its development? With my teams, I've learned that no matter how many years of data you have to help you measure value, it is genuinely difficult for people to adopt a culture of defining, launching, analyzing, measuring, controlling, and relaunching; I often find myself asking product managers whether what we did worked. Silence or uncertainty is not an uncommon answer, and that is what motivated me to write this post. Here are some criteria I find relevant to that end.
Purpose, objectives, and key results
The difference between a product and a project is that the former begins to live when it goes into production, while the latter dies when it is launched. Elevating a product so that it has a clear purpose, pursued relentlessly since its conception, is fundamental to that product's success. That said, purpose alone isn't enough. You also need clear, aligned business objectives and key results (OKRs) that allow for a clear measurement of what must be achieved once the product is in the market.
Return on Investment
Half of the products I have built for my customers are hypotheses; in other words, they are bets that something will work. There is no certainty that the product works as intended, or that customers have successfully adopted it. So the first metric should always be the return on investment.
Therefore, when the stakes are high, the return on investment should be evaluated as a business (or cash-flow) case: what the bet cost against what the product generates once it is launched. This is why it's always good to launch as fast as possible. Remember that a good ROI model considers factors beyond the work of your teams; I suggest validating those factors with your finance area to make the model accurate.
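As a back-of-the-envelope illustration of the cash-flow framing above, here is a minimal sketch. The figures and the helper function are hypothetical, and a real model would fold in licensing, infrastructure, and marketing costs alongside team cost:

```python
def roi(total_gain: float, total_cost: float) -> float:
    """Return on investment as a ratio: (gain - cost) / cost."""
    return (total_gain - total_cost) / total_cost

# Hypothetical example: the product generated $180k against $120k invested.
print(f"ROI: {roi(180_000, 120_000):.0%}")  # ROI: 50%
```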
This is the most complicated area. Defining value in a product is one of the most complex things to do. Often you generate indicators and objectives to pursue, but once the product launches, those objectives get lost in the day-to-day work and the concerns of separate stakeholders, and nobody measures them. Nobody knows for sure what should be measured. Later, a larger budget is approved, and "therefore" new functionality must be developed, without much thought about whether the previous functionality is even pulling its weight.
This is why I emphasize that business objectives must be aligned first. Then, with that input, determine the generic metrics that follow, in order to help establish the value of the product in the hands of the client:
The Net Promoter Score (NPS) is an indicator introduced by Bain & Co. that determines the level of promotion of a product or service. Simply put, it measures how willing your customers are to recommend the product to a friend or acquaintance. It is calculated by recording the share of detractors (scores 0 to 6), passives (7 to 8), and promoters (9 to 10), then subtracting the percentage of detractors from the percentage of promoters. Determining this prepares you for a powerful discussion about the quality of your product.
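The calculation can be sketched in a few lines. The survey responses below are hypothetical; the standard bands are detractors 0-6, passives 7-8, promoters 9-10:

```python
def nps(scores: list[int]) -> float:
    """NPS: percentage of promoters (9-10) minus percentage of detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical survey of 10 responses: 5 promoters, 3 detractors, 2 passives.
responses = [10, 9, 9, 8, 7, 6, 3, 10, 2, 9]
print(nps(responses))  # → 20.0
```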
Stars – NSI – CSAT
The Net Satisfaction Index (NSI) and Customer Satisfaction (CSAT) measure the satisfaction generated by products in the hands of users. They are easy to understand but dangerous to measure, because capturing a product-value measurement at the wrong point in the flow can produce bad sampling. The secret lies in knowing the right place to determine the value of the experience, which is not always at the end of a flow, where most evaluations are typically requested.
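CSAT is commonly reported as the share of responses at or above a satisfaction threshold on a 1-5 scale. A minimal sketch, with hypothetical ratings and the common "4 or 5 counts as satisfied" convention as an assumption:

```python
def csat(ratings: list[int], threshold: int = 4) -> float:
    """CSAT on a 1-5 scale: percentage of responses at or above the threshold."""
    satisfied = sum(1 for r in ratings if r >= threshold)
    return 100 * satisfied / len(ratings)

# Hypothetical post-interaction survey: 5 of 8 responses are 4 or above.
print(csat([5, 4, 4, 3, 2, 5, 1, 4]))  # → 62.5
```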
While these metrics are important, star ratings are often very telling. Unlike the two previous metrics, customer review stars provide segmentation that supports a little more analysis in one place, without the need for much tabulation. Determining rankings or values at the level of matching groups can help evaluate the experience (UX) or the interactions (UI), as well as better understand the perceived service in line with the digital service delivered on the device.
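Grouping star ratings by segment, as described above, can be sketched like this. The segments and ratings are hypothetical; in practice the segments might be platforms, user cohorts, or product modules:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical star ratings (1-5), each tagged with a user segment.
reviews = [
    ("mobile", 5), ("mobile", 4), ("web", 2),
    ("web", 3), ("mobile", 5), ("web", 4),
]

by_segment: dict[str, list[int]] = defaultdict(list)
for segment, stars in reviews:
    by_segment[segment].append(stars)

# Average stars per segment hints at where the experience breaks down.
for segment, stars in sorted(by_segment.items()):
    print(f"{segment}: {mean(stars):.2f} stars")
```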
Many people fight with delivery dates. In conversations among people who make software, it has become a pastime to mock, respectfully of course, client-defined delivery dates set long before a project has even started. But over the years I've learned to love deadlines. They're good for pushing things through, and if professional-services teams have a good delivery schedule, products can be handled smoothly and with good results. Although I veered off a bit, it's important to define metrics that answer the classic question: when?
Disclaimer: I hate measuring user story points. I don't think they help much, but in this case they can be valuable for determining certain kinds of factors, such as the number of story points delivered over time. For example, with a graph of points measured over time, I can evaluate the average amount of functionality released by each team.
The rate of delivery can help open the discussion, but never use it as a factor to determine the average time it takes your teams to launch things; it is not a speed metric. It is a context-specific analysis for decision-making, based on the types of functionality that get released. Remember that a user story point is not necessarily a feature, so don't generate biases.
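The points-over-time view described above can be sketched as follows. The teams and per-iteration figures are hypothetical, and, as the caveat says, the averages are discussion-openers, not speed scores:

```python
from statistics import mean

# Hypothetical story points completed per iteration, per team.
points_per_iteration = {
    "team-a": [21, 18, 25, 22],
    "team-b": [13, 15, 12, 16],
}

# Average per iteration: context for discussion, not a speed ranking.
for team, points in points_per_iteration.items():
    print(f"{team}: average {mean(points):.1f} points per iteration")
```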
This metric helps when you want to know a little more about the details of customer/user interaction with your product. Like the delivery metric, it must be evaluated with caution, because it depends on the purpose or objective of the product. A high amount of interaction time is not good when you expect to facilitate a process for the customer, but it is good when you want them to consume content. Measure and analyze based on the type of scenario.
– Hours of use.
– Recurrence of use.
– Time per module or functionality.
– % of abandonment.
– User segments that engage with certain functionalities.
– Time spent on certain functionalities.
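A couple of the indicators in the list above, % of abandonment and time per flow, can be derived from simple session records. The session shape and the data below are hypothetical; in practice these would come from your analytics events:

```python
# Hypothetical session records: (user_id, completed_flow, seconds_in_flow).
sessions = [
    ("u1", True, 120),
    ("u2", False, 45),
    ("u3", True, 90),
    ("u4", False, 200),
]

abandoned = sum(1 for _, completed, _ in sessions if not completed)
abandonment_rate = 100 * abandoned / len(sessions)
avg_time = sum(seconds for *_, seconds in sessions) / len(sessions)

print(f"Abandonment: {abandonment_rate:.0f}%, avg time in flow: {avg_time:.0f}s")
```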
In closing, I would like to emphasize that data is misleading if taken as a set of rules written in stone. Data is essential, but not deterministic. As Professor Daniel Kahneman put it so well:
"A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth. Authoritarian institutions have always known this fact."
Beware of turning this into a religion. A product that is built and released must be evaluated constantly, because data and metrics shift and change once the product is delivered into the hands of human beings who are just as complex as you and me.