Make Metrics Matter
Artificial intelligence, data science and analytics: these functions and data products tend to get the most airtime and attention when we think about whether our organisation is data-driven. A focus on sophistication and maturity goes hand-in-hand with assessing the newest technologies and approaches. As in other disciplines and industries, we can overlook the workhorses that are always there and used by the many rather than the chosen few.
When I speak to companies about "data culture" I ask about their reporting and business intelligence first. This is not because I don't think analytics or modelling teams are important. It is because culture goes much deeper than job titles, formal teams, or expensive projects, and into the DNA of how every single employee thinks about and uses data. The reports we rely on and the metrics we monitor make up the lion's share of the "data-driven decision making" in our organisation. The pipelines and sources of truth that form our business and operational foundations are how data really flows through the company. If we don't understand that, the rest is just noise.
Rather than being distracted by the latest algorithm or the shiniest tool, our data teams have the opportunity to really amplify the effectiveness of data in their organisation if they focus first on making the core metrics matter.
To do this, two things are vital:
- Understand the limitations of metrics.
- Pay attention to the behaviour of people.

What gets measured gets managed
This is one of the most famous quotes in business literature and, as with anything that garners too much fame, one of the most scrutinised.
Strategy thought leader Roger Martin wrote an article on this and other popular quotes, in which he highlights how readily we follow the wisdom of quotes even when we don't know who said them or in what context.
I like a snappy quote to land a message as much as the next person, but what I find interesting about this one in particular is that some sources show extended versions of the quote that completely change its intention.
The Deming Institute version (1993) states:
"It is wrong to suppose that if you can't measure it, you can't manage it – a costly myth."
Before that, Simon Caulkin summarised a 1956 paper by V.F. Ridgway as follows (source):
"What gets measured gets managed – even when it's pointless to measure and manage it, and even if it harms the purpose of the organisation to do so."
I share these alternatives to the "Drucker" version because I want my readers to challenge their belief that we truly manage what we measure, or that doing so is always a good idea.
In the last two years we have seen a lot of layoffs across the tech industry and a new-found focus on efficiency. Many in my network have shared that their leadership has become obsessed with productivity metrics to justify their work. For some in client-facing roles, this looks like monitoring how many touchpoints they have with their clients on a weekly and monthly basis. In many cases, the clients are the least happy with this because they have neither the time nor the need for the increased interactions and would much rather see a focus on quality than quantity. But that is harder to define and measure consistently. A leadership belief that more interactions are positively related to the outcomes they care about, in some cases no doubt based on top-level insights and correlations, has led to dissatisfaction on both sides of the desk and meaningless busywork.
I previously worked in a team that had "Measurement" in the title and bought into the belief, hook, line and sinker, that we live and die by our KPIs. To some extent I still do, but now with the benefit of more experience and a changed perspective on what that really means.
Sometimes we measure the wrong thing and that shows in our decisions and actions. Sometimes we measure the right thing and we still don't get what we intend to. So why do we bother?
It is about being data-driven. It is about being informed and having direction in our strategies. It is about having a shared source of truth for all of our teams to follow. But metrics can only deliver those things if we are realistic about the limitations of a) our metrics and b) all metrics. Unfortunately, our data teams are sometimes the blindest to this truth.
Metrics always involve a distortion from our actual objective, because our complex reality cannot be exactly measured.
The irony of analyst blindness to metric limitations is that analysts are acutely aware of the specific limitations and weaknesses of the data they work with every day. Nonetheless, familiarity with those weaknesses, or being so immersed in the data, means they can forget that others aren't as fluent in the gaps, or that the data may not be fit for the purpose people are using it for.
Additionally, there can be a fear that if we shout too loudly about the weaknesses of our data, people will associate those weaknesses with us too.
We will always have metrics. They are necessary for the reasons I outline above. But they are infinitely more valuable when we are equipped to select or define the one that fits our objective best.
Metrics are a distortion because sometimes our objective cannot be exactly tracked or measured, so we choose the "closest" trackable event to monitor. This "proxy" is a translation that needs to take place, but it is often not spoken about, or not done consciously, because we have used the proxy for so long.
Some example reasons a proxy is required (a short sketch follows the list):
- The outcomes take too long to deliver so we need leading indicators.
- We want to assess people processes that aren't automatically recorded.
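To make the proxy idea concrete, here is a minimal hypothetical sketch in Python (pandas). The data, the "three active days in week one" rule and its link to retention are illustrative assumptions, not a recommended definition.

```python
import pandas as pd

# Hypothetical example: 12-month retention is the real objective, but it
# takes a year to observe. A "week-1 activation" rate is the proxy
# (leading indicator) we can track today.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 3],
    "days_since_signup": [0, 3, 0, 0, 1, 6],
})

# Proxy: share of new users active on 3+ distinct days in their first week
week1 = events[events["days_since_signup"] < 7]
active_days = week1.groupby("user_id")["days_since_signup"].nunique()
activation_rate = (active_days >= 3).mean()
print(f"Week-1 activation (proxy for retention): {activation_rate:.0%}")
```

The point is that we report the proxy today because the real objective, twelve-month retention, cannot be observed for another year.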
We also have to consider that the data itself is not clean, not complete or not consistent. So we can have one metric but many different versions of it, depending on the source or who is doing the calculating. And so we get further and further away from our initial objective. Not through ill intent, but because data is logged by people, or by processes and workflows built by people, and so it is imperfect.
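To illustrate how one metric can splinter into many versions, here is a hypothetical sketch in which two reasonable definitions of "monthly churn", applied to the same five customers, give two different numbers. Both definitions exist in practice; the data is invented.

```python
import pandas as pd

# Hypothetical example: the same five customers, two churn definitions
customers = pd.DataFrame({
    "active_at_start": [True, True, True, True, False],
    "active_at_end":   [True, False, False, True, True],
})
lost = (customers["active_at_start"] & ~customers["active_at_end"]).sum()

# Version A: customers lost / customers at the start of the month
churn_a = lost / customers["active_at_start"].sum()

# Version B: customers lost / average customer base over the month
avg_base = (customers["active_at_start"].sum()
            + customers["active_at_end"].sum()) / 2
churn_b = lost / avg_base

print(f"Version A: {churn_a:.0%}, Version B: {churn_b:.0%}")  # 50% vs 57%
```

Neither version is wrong; the danger is two teams quoting different numbers for "churn" without knowing they are using different formulas.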
Jeff Bezos has been quoted as saying:
"Usually, when data and anecdotes don't match, the latter tend to be right"
Take Marketing. As a domain, marketing is very aware of the imperfection of its data and metrics. It is such a human discipline, relying on the unpredictability and variance of human reactions as well as a mix of channels with varying levels of trackability and engagement patterns (billboards, TV, digital, etc.). Nonetheless, once marketing effectiveness metrics are defined they can become dogma – especially when marketing is accountable to finance teams for budgets.
The telephone game
And so we pass our metric from the data team, to the marketing team, to finance, and at each stage it gains distance from the people who most understand its context and its gaps. I've written more about the Marketing use case specifically here for those who are interested.
The need for metrics to travel across functions can often affect how we choose our metrics in the first place. Rather than being picked because they are the most appropriate, we might choose them because they most easily relate to a metric another department is using. Or we have been told we need one metric, even though multiple might make more sense to describe the nuance of a situation.
So we fit our metrics to the audience who will use them and the language that they speak. I am actually a big advocate for the latter because, at their core, I believe metrics are about a shared language. That being said, we need to be conscious of how we define and communicate them and ensure our teams understand the implications of the decisions we have made.
I have seen multiple examples of senior leaders making decisions because those decisions are easier to justify or report using their organisation's shared metrics, or the accepted wisdom about risk that exists there. All while they actually believe there is a better alternative, but an empiricist or availability bias makes them choose the safer option. There are situations where (even as data people) it may be better to accept and be intentional about "good enough" rather than seek perfection and find it in irrelevance. People are rarely purely rational, and they give themselves permission to act that way if they have the data to back it up.
You can trust people not to be trustworthy
I mean this in the nicest possible way. People are people: flawed, irrational and unpredictable. If we assume they will do exactly what we would do, we are off to a bad start.
I've spoken about the various reasons metrics have limitations in their design. People are to credit, or blame, for some of those reasons, but the impact gets even more pronounced when people are told to act on those metrics.
When we want to drive efficiency and/or effectiveness in our organisations we often set KPIs for our teams with targets to help guide them in their productivity and prioritisation. When we do this we are essentially simplifying our mission and objectives down to one or more metrics that we want them to follow. Even if these metrics were a perfect representation of the mission, when we rely on people to act on them we add new variables into the mix: Individual interpretations and preferences.
Goodhart's Law, as well as the idea of "perverse incentives", helps to illustrate what I mean.
Goodhart's Law:
"When a measure becomes a target, it ceases to be a good measure."
Analysts may relate to this phenomenon through a parallel in modelling problems: trying to make a prediction using a variable overly entwined with your outcome, often called target leakage.
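A minimal sketch of that modelling parallel, assuming scikit-learn and an invented "account closed" flag that is effectively recorded after the outcome. The leaky feature makes the model look near-perfect while telling us nothing we could act on in advance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
churned = rng.integers(0, 2, n)  # the outcome we want to predict

# Honest feature: weakly related to churn
engagement = -0.5 * churned + rng.normal(size=n)

# Leaky feature: effectively recorded *after* the outcome,
# e.g. an "account closed" flag (an invented example)
closed_flag = churned + rng.normal(scale=0.1, size=n)

X_honest = engagement.reshape(-1, 1)
X_leaky = np.column_stack([engagement, closed_flag])
model = LogisticRegression(max_iter=1000)

print(cross_val_score(model, X_honest, churned).mean())  # roughly 0.6
print(cross_val_score(model, X_leaky, churned).mean())   # close to 1.0
```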
Put simply, you set a target as a way of understanding progress towards a greater goal, but people will focus on delivering the target even if they are not delivering your intended goal.

The Cobra Effect explains how a solution based on a metric we haven't thought through can actually worsen the problem we set out to solve. The story goes that Delhi was overrun with cobras and the British rulers at the time decided to offer a bounty for dead cobras to incentivise the people of Delhi to solve the problem. The bounty was considered so generous that the people started breeding cobras so that they had more to kill, thus adding to the initial issue rather than solving it.
You might think this is an extreme example, but we see more subtle versions play out in business every single day. If you incentivise people on a metric tied to their compensation, they will work out how best to maximise that compensation for minimum effort, and they will do so very shrewdly for the period against which they are goaled.
Early in my career I worked in a Customer Analytics team and was given the problem of reducing the monthly churn rate for one line of business. My manager advised that I build a churn propensity model on a monthly cycle for those customers. The output would be used for email marketing campaigns offering bonuses to attract high-risk customers back. After a few months of running these campaigns we noticed that the same customers were getting the bonuses each month. On investigation we saw that the model was using "recency" within the calendar month as one of its key features. Our emails would go out in the first few days of the month, the customers would come back for the bonus and then not be seen until the next email campaign. And so the cycle continued. We had unwittingly created a customer behaviour rather than addressing one that existed naturally. Monthly churn was no longer a meaningful measure (if it ever had been).
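For illustration only, here is a heavily simplified, hypothetical sketch of that feedback loop. The 30-day month, the day-2 email and the risk threshold are invented numbers; the mechanism is that the bonus visit itself resets the recency feature.

```python
# Hypothetical sketch: the bonus visit resets the recency feature,
# so the same customer is flagged as "high risk" every month.
RISK_THRESHOLD = 21  # days inactive this month => "high churn risk"

last_active_day = 0  # customer starts the first month inactive
for month in range(1, 4):
    recency = 30 - last_active_day   # recency within the calendar month
    high_risk = recency >= RISK_THRESHOLD
    print(f"Month {month}: recency={recency}, bonus email sent={high_risk}")
    if high_risk:
        # email goes out in the first days of the month; the customer
        # returns on day 2 for the bonus and is not seen again
        last_active_day = 2
```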
In general, there is a distance between what you actually wanted to happen and how it is ultimately executed, and you can't be in every room to police it.

When we combine all of these different effects, we can lose sight of our original mission or objective very quickly and end up with something quite different. As data professionals, we aren't in every room, and if we pause to reflect I am sure all of us can think of an example where a data point we generated travelled and was transformed along the way.
Take the driving seat in getting this right
We want our organisations to be data-driven. We wouldn't have gotten into these careers if that wasn't something we believed in, and it is good for the longevity of our careers if it is the case.
But if we want to spend our time on high-impact projects and sophisticated techniques, we first have to ensure that everyone is using metrics and KPIs in the right way, to the benefit of the overall health of the organisation and its objectives. This has both an indirect and a direct impact on our work, because those KPIs are also what our models will be assessed against, so we need to trust they are the right thing. It is a topic for a whole other article, but one of the key things to look out for is misunderstanding of correlation vs. causation at a macro level. This is a concept analytics teams understand deeply, but we shouldn't assume that understanding travels with every insight we deliver. Take responsibility for communicating clearly why a chosen metric is useful, but also for socialising how not to use or interpret it.
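As a small illustration of that macro-level trap, here is a hypothetical sketch: two metrics that are generated independently but share an upward trend, say headcount and revenue, correlate strongly; looking at month-on-month changes exposes the absence of any real relationship.

```python
import numpy as np

# Hypothetical illustration: two independently generated metrics that
# merely share an upward trend look strongly "related".
rng = np.random.default_rng(42)
months = np.arange(36)
headcount = 100 + 2.0 * months + rng.normal(scale=5, size=36)
revenue = 50 + 1.5 * months + rng.normal(scale=4, size=36)

# Raw series: correlation is high, purely because both trend upwards
print(np.corrcoef(headcount, revenue)[0, 1])   # high, e.g. ~0.95

# Month-on-month changes: the "relationship" disappears
print(np.corrcoef(np.diff(headcount), np.diff(revenue))[0, 1])  # near zero
```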
If we can educate our business stakeholders on the right metrics to use and the right ways to use them, then we are doing better than many and have a great foundation for everything else. Being able to take an objective, big-picture view of proposed metrics and their strengths and weaknesses will make us better data professionals in the long run. The first steps are educating ourselves on what the business really needs and how our colleagues are likely to act, then helping to build metrics that matter based on those key truths.
If joining these dots is something you or your leadership team need help with, then check out my offering on kate-minogue.com.
Through a unique combined focus on People, Strategy and Data, I am available for a range of consulting and advisory engagements to support and enhance how you deliver on your strategy across Business, Data and Execution challenges and opportunities. Follow me here or on LinkedIn to learn more.