The Tell-tale Sign of Great Product Management
The tell-tale sign of great product management is that it seems to read your mind.
Picture an app or digital product you use all the time.
It mostly works well, but there's this small thing that annoys you to no end:
- You don't get redirected after concluding an action; you need to do it manually every time.
- You need to provide the same piece of information multiple times.
- You need to manually perform an action that could be done automatically for you.
Then, the next time you open the app, it works exactly the way you wanted it to.
This is no accident: it's the work of Product Managers (PMs) behind the scenes, who somehow found out that this specific issue was affecting users (including you) and then kicked off the process to fix it.
This is the essence of product management:
- User Research + Data analysis
- Negotiation + Prioritization
- Project management + Execution
- Experimentation + Testing
- Rollout + Monitoring
Let's look at each of these areas in more detail:
User research and data analysis
The first step is to discover that there's a problem. And it's not as trivial as one might expect, given the complexity and scale of modern digital products.
This discovery step can be done via user research or via data analysis:
- User research refers to reaching out to customers and interviewing them or watching them use the app. You collect feedback, organize it, then plan how to address it.
- Data analysis refers to looking at usage logs and trying to identify common patterns and possible problems. Two common approaches are Funnel Analysis (seeing where customers "drop off" while completing a task) and Process Mining, the principled analysis of log data to find which paths users take through an application and where optimizations can be made (see the sketch after this list).
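To make funnel analysis concrete, here's a minimal sketch in Python with pandas. The event log, its column names, and the funnel steps are all invented for illustration; a real analysis would read from the product's actual logging pipeline.

```python
import pandas as pd

# Hypothetical event log: one row per user action (all names are invented).
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 4],
    "step": ["open_cart", "enter_address", "pay",
             "open_cart", "enter_address",
             "open_cart", "enter_address", "pay",
             "open_cart"],
})

# The funnel we care about, in order.
funnel = ["open_cart", "enter_address", "pay"]

# Count distinct users reaching each step, then report the share of
# users lost between consecutive steps (the "drop-off").
reached = [events.loc[events["step"] == s, "user_id"].nunique() for s in funnel]
for i in range(1, len(funnel)):
    drop = 1 - reached[i] / reached[i - 1]
    print(f"{funnel[i-1]} -> {funnel[i]}: "
          f"{reached[i]}/{reached[i-1]} users continued ({drop:.0%} drop-off)")
```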
Dogfooding (i.e., having creators use the products they build) is another way to find problems, but it's inherently biased and doesn't scale to large products.
Negotiation and Prioritization
This is the stage where well-known documents such as PRDs (Product Requirements Documents) are circulated among stakeholders.
This is where the PM argues why this specific issue should be solved before other items on the roadmap.
This step should ideally be principled and backed by data, but in practice it often involves subjective opinions and gut-based decisions.
Once the issue's effort and value have been estimated, the feature can be ranked against other competing initiatives and added to the team's roadmap.
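As a toy illustration of ranking, here's one possible value-over-effort score in Python. The initiatives and numbers are entirely made up, and real teams often use richer schemes such as RICE, but the idea is the same: higher estimated value per unit of effort rises toward the top of the roadmap.

```python
# Toy prioritization: rank initiatives by estimated value per unit of effort.
# All names and scores below are invented for the example.
initiatives = [
    {"name": "auto-redirect after checkout", "value": 8, "effort": 2},
    {"name": "dark mode", "value": 5, "effort": 5},
    {"name": "bulk export", "value": 6, "effort": 3},
]

for item in sorted(initiatives, key=lambda i: i["value"] / i["effort"], reverse=True):
    print(f'{item["name"]}: score {item["value"] / item["effort"]:.1f}')
```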
Project management and Execution
Once the feature is properly defined and prioritized on the team's roadmap, it will eventually be implemented (assuming no other project disrupts the plans).
Execution in the domain of digital products usually involves coding, and the work is organized as a software project.
Software projects are notoriously hard to estimate, however, so it's common for them to run late or fail altogether. Changes in complex systems often have unpredictable effects and sometimes incur technical debt¹.
Assuming the project went well and the feature is correctly coded, we can start testing.
Experimentation and Testing
Writing the code doesn't mean the work is done.
New features are never made available to customers immediately after they're ready. This would be too risky, especially for larger, established firms.
Instead, companies enable enhancements for a small portion of customers² to observe the impact at a smaller scale before making the new feature available to the general audience.
Two common experimentation/testing patterns are:
- A/B testing, where the new feature is enabled for a random (small) percentage of the customer base. This enables unbiased analyses and mitigates risk in case of problems (see the bucketing sketch after this list).
- Early access users are customers who volunteer to test out new features. They agree to provide feedback in exchange for a chance to have a say in how the product is developed.
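Here's a minimal sketch, in Python, of how a user could be deterministically assigned to an A/B bucket. The experiment name and the 5% split are assumptions for the example; production systems usually delegate this to an experimentation platform.

```python
import hashlib

def ab_bucket(user_id: str, experiment: str, treatment_pct: int = 5) -> str:
    """Deterministically assign a user to 'treatment' or 'control'.

    Hashing user_id together with the experiment name yields a stable,
    roughly uniform bucket in [0, 100) without storing any state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "treatment" if bucket < treatment_pct else "control"

print(ab_bucket("user-42", "new-redirect-flow"))  # stable across calls
```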
During testing, the team checks for problems such as:
- Increased number of support tickets;
- Higher levels of system errors or latency;
- Statistically significant worsening of CSAT metrics (see the sketch below).
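For that last item, one possible check is a two-sample t-test comparing CSAT scores between control and treatment. The scores below are fabricated for illustration, and the appropriate test depends on the metric's distribution and sample size.

```python
from scipy import stats

# Fabricated CSAT scores (1-5 scale) collected during the experiment.
control = [4, 5, 4, 4, 5, 3, 4, 5, 4, 4]
treatment = [3, 4, 3, 4, 3, 3, 4, 3, 4, 3]

# One-sided two-sample t-test: is the treatment mean significantly lower?
t_stat, p_value = stats.ttest_ind(treatment, control, alternative="less")
if p_value < 0.05:
    print(f"CSAT worsened significantly (p = {p_value:.3f}); investigate before rollout")
else:
    print(f"No significant CSAT regression detected (p = {p_value:.3f})")
```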
Rollout and steady-state monitoring
Once the experimentation phase is over, one can gradually increase the percentage of customers that can use the new feature. This is called the rollout.
It's important to roll out changes gradually: it lets the cloud-based systems behind the new feature scale gracefully, and it gives the team ample room to roll back if something goes wrong.
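Feature flags make this ramp-up straightforward. Below is a sketch of percentage-based gating, reusing the hashing trick from the A/B example above; the ramp schedule is invented for illustration.

```python
import hashlib

# Invented ramp schedule: day of rollout -> percentage of users enabled.
RAMP = {1: 1, 3: 5, 7: 25, 14: 50, 21: 100}

def is_enabled(user_id: str, flag: str, rollout_pct: int) -> bool:
    """Gate a feature flag to a stable percentage of users."""
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < rollout_pct

# On day 7, roughly 25% of users see the feature. Dialing rollout_pct
# back to 0 "rolls back" instantly, without a new deployment.
print(is_enabled("user-42", "new-redirect-flow", RAMP[7]))
```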
Once the feature has been fully rolled out, the torch is passed to the monitoring team. This usually means analysts looking at logs and other system usage data, often supported by dashboards and/or recurring reports reviewed weekly or monthly.
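For a flavor of what such a recurring report might look like, here's a tiny Python sketch that computes a weekly error rate from a request log and flags regressions. The log format and the 1% threshold are assumptions for the example.

```python
import pandas as pd

# Hypothetical request log (column names invented): timestamp + HTTP status.
log = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01", "2024-01-02",
                                 "2024-01-08", "2024-01-09"]),
    "status": [200, 500, 200, 200],
})

# Recurring report: weekly error rate, flagged when above a 1% threshold.
log["is_error"] = log["status"] >= 500
weekly = log.resample("W", on="timestamp")["is_error"].mean()
for week, rate in weekly.items():
    flag = "ALERT" if rate > 0.01 else "ok"
    print(f"week of {week.date()}: error rate {rate:.1%} [{flag}]")
```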
Re: Faster horses
"If I had asked people what they wanted, they would have said faster horses". Often attributed to Henry Ford, the father of automobiles.
People will say that a great PM doesn't just provide users with faster horses, but answers questions users didn't even know they had.
Two things can be true at the same time.
It's perfectly possible for PMs to magically fix a problem or mitigate an annoyance, but in a way that's different from what you expected. An automobile.
But sometimes the problem is clear and the solution is easy. A faster horse is OK sometimes.
¹: Simply put, technical debt is code that fixes a problem in the short term but will need "fixing" down the line.
²: At a system level, this is usually done with feature flags.