Why You Should Stop Aspiring to IT Service Desk Industry Metric Benchmarks

As an industry, we seem to love statistics and metrics. And the IT service desk is probably the most metric-intensive part of IT, where it can often seem as though every little thing that service desk agents do is measured and then reported on – from the number of tickets handled per hour and Level 1 ticket closure rates, to cost per ticket and end-user (customer) satisfaction with the service they have just received.

However, knowing their internal score against each metric, and how they have improved performance over time, is often not enough for IT service desks (or their line management); instead, they want to know how they compare to their peers or against industry benchmarks. But how relevant are industry benchmarks, and should service desks be aspiring to meet (or exceed) them?

How many of these “popular” service desk metrics does your company use?

HDI – a professional association for the technical support industry – surveys its members annually to identify the average scores across a number of common service desk and IT service management (ITSM) metrics, some of which are detailed in the diagram below, taken from the 2015 HDI Support Center Practices & Salary Report:

Source: HDI, 2015 HDI Support Center Practices & Salary Report (Q4, 2015)

But how helpful is comparing your organization to industry averages such as these?

Knowing how other organizations are performing is always interesting, and potentially valuable, data to have. But how helpful are industry averages in reality? Consider a couple of the above metrics, which say that:

  1. First contact resolution (FCR) is 66% for incidents and 66.9% for service requests. This sounds reasonable given that a few years back the industry was touting 70% as an aspirational FCR level (and, before you ask, I have no idea where the 70% target actually came from). But it’s anyone’s guess as to what’s included when creating this measurement for each organization surveyed. For instance, if password reset has been automated by an organization, and thus password reset requests never hit the service desk, a big slice of FCR-friendly tickets is no longer available to bump up the organization’s FCR percentage (the sketch after this list illustrates the effect). And let’s not forget that some organizations that are still handling them manually will consider password resets as incidents while others will consider them as service requests.
  2. Average handling time for phone calls is 8-10 minutes for incidents and 5-8 minutes for service requests. Again, the absence of password reset tickets from the mix will adversely affect a service desk’s average score. But what about the impact of, say, the complexity of the organization’s application estate? By this I mean: is the service desk only dealing with issues related to standard, shrink-wrapped applications or cloud services, or is there also a plethora of home-grown business applications to contend with? If it’s the latter, then the average handling time statistics could go either up or down depending on how these applications are treated. For instance, some organizations might expect the corporate IT service desk to deal with them, passing them to Level 2 support if judged “too hard” after 20-30 minutes of work. Alternatively, the organization might have a dedicated business application support team, where issues are logged by the IT service desk in just a couple of minutes, passed over to the application support team, and the original ticket then closed.
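
To make the password-reset effect concrete, here’s a minimal sketch in Python. Every ticket volume and handling time below is an invented assumption for illustration; none of these are figures from the HDI report:

```python
# Illustrative only: every number below is an assumption, not HDI data.

# Before automation: 1,000 tickets/month, 300 of which are password
# resets, all resolved at first contact in roughly 3 minutes.
total_tickets = 1_000
pw_resets = 300
other_fcr = 420            # non-password tickets resolved at first contact
other_avg_minutes = 10.0   # average handling time of the other 700 tickets
pw_avg_minutes = 3.0

fcr_before = (pw_resets + other_fcr) / total_tickets
aht_before = (pw_resets * pw_avg_minutes
              + (total_tickets - pw_resets) * other_avg_minutes) / total_tickets

# After automating password resets, those tickets never reach the desk.
fcr_after = other_fcr / (total_tickets - pw_resets)
aht_after = other_avg_minutes

print(f"FCR: {fcr_before:.0%} -> {fcr_after:.0%}")                        # 72% -> 60%
print(f"Avg handling time: {aht_before:.1f} -> {aht_after:.1f} minutes")  # 7.9 -> 10.0
```

On paper, the more automated (and arguably better) service desk now scores worse on both benchmarks.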

With so many variables in play, it’s genuinely difficult to compare apples with apples against industry benchmarks.

Cost per ticket is probably one of the flakiest benchmarks

Cost per ticket is another great example of potentially not “comparing apples with apples.”

When comparing your organization’s all-in service desk cost per ticket to that of other companies, or to industry averages, the likelihood is that it won’t be a true comparison. For instance, what has been included (and excluded) in gauging the cost of service desk operations, and what has been included (and excluded) in the number of applicable tickets?

Focusing on the former, some service desk costs are obvious: for example, the people costs associated with service desk agents. But, even here, is it just the salaries, or does it include other employer costs such as pension contributions? Then what else should one include?

  • The software and hardware used – including automation technologies that reduce manual labor costs?
  • The cost of facilities including furniture, floor space, lighting, heating, and rates (hopefully bundled into a simple per desk fee)?
  • Corporate employee overhead costs such as the per-person cost of having an HR function to deal with people issues?
  • The service desk manager’s total employment cost, plus a proportion of their manager’s cost, and even a proportion of their manager’s manager’s cost (they might spend 5% of their time on service desk opportunities and issues)? Although the reality is that individuals’ salaries can often be too sensitive to be shared outside of HR.
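
To see how much these inclusions matter, here’s a rough sketch. Every line item and figure below is an invented assumption, purely to show how the same desk can report very different numbers:

```python
# Invented figures; each line item is an assumption for illustration.
tickets_per_year = 24_000

narrow_costs = {
    "agent salaries": 400_000,
}
additional_costs = {
    "employer costs (pension, etc.)": 60_000,
    "software and hardware": 50_000,
    "facilities (per-desk fee)": 40_000,
    "HR and corporate overheads": 20_000,
    "management time": 30_000,
}

narrow_total = sum(narrow_costs.values())
all_in_total = narrow_total + sum(additional_costs.values())

print(f"Narrow cost per ticket: ${narrow_total / tickets_per_year:.2f}")  # $16.67
print(f"All-in cost per ticket: ${all_in_total / tickets_per_year:.2f}")  # $25.00
```

The same service desk reports anywhere from $16.67 to $25.00 per ticket depending purely on what’s counted – a 50% swing before any real performance difference enters the picture.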

So, as with the two previous examples, are IT support organizations ever truly able to compare like with like when aspiring to industry benchmarks? And even this is only one part of the problem with using industry averages for comparison purposes.

Average scores don’t necessarily reflect an average company

Comparing metrics in isolation is also problematic. For instance, trying to beat the industry average score for customer satisfaction while also trying to reduce handling times and the cost per ticket (to below the industry averages for each) is going to be difficult. Why? Because as handling times and costs are reduced, there’s a likelihood that customer satisfaction will suffer. After all, these metrics are connected.

But the real issue is an even larger one: potentially wayward highs and lows are netted off when creating industry averages, such that the average scores belong to an “average” organization that probably doesn’t exist in reality. Or, if it did exist, it wouldn’t necessarily be an optimal IT service desk operation. And even then, is your organization also an average organization? Probably not.
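
To see how netting-off hides reality, consider two invented peer groups (all figures are assumptions for illustration):

```python
# Two invented peer groups; all figures are assumptions for illustration.
# Group A: large, highly automated desks. Group B: small, manual desks.
group_a = [8, 9, 10, 11]    # cost per ticket ($) for four large desks
group_b = [28, 30, 32, 34]  # cost per ticket ($) for four small desks

all_desks = group_a + group_b
industry_average = sum(all_desks) / len(all_desks)

print(f"Industry average: ${industry_average:.2f} per ticket")  # $20.25
# No desk in either group operates anywhere near $20.25.
```

The published “average” of $20.25 describes none of the eight desks: a small desk chasing it is aiming at the impossible, and a large desk beating it may still be underperforming its true peers.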

In many ways, industry averages need to be more granular, or focused, allowing an inquisitive and aspirational organization to understand things from its own perspective (or situation), such as the following (a quick data-slicing sketch follows the list):

  • The average ticket handling time for organizations with a customer satisfaction score of 90% or above
  • The cost per ticket for organizations handling 500 tickets per month (which is probably going to be higher than that for an organization handling 5,000 with the help of automation)
  • The cost per ticket for organizations handling 5,000 tickets per month in the pharmaceutical industry
  • The time to answer the phone, or to respond to emails, in the smallest of IT support teams
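
If you did have access to raw peer data, this kind of slicing is straightforward. The sketch below assumes a hypothetical list of peer records; the field names and values are invented for illustration:

```python
# Hypothetical peer dataset; all field names and values are invented.
peers = [
    {"org": "A", "tickets_per_month": 5_200, "industry": "pharma",
     "csat": 0.93, "avg_handle_min": 7.5, "cost_per_ticket": 11.0},
    {"org": "B", "tickets_per_month": 480, "industry": "retail",
     "csat": 0.88, "avg_handle_min": 9.0, "cost_per_ticket": 27.0},
    {"org": "C", "tickets_per_month": 4_900, "industry": "pharma",
     "csat": 0.91, "avg_handle_min": 8.2, "cost_per_ticket": 12.5},
]

# Average handling time for desks with a CSAT score of 90% or above.
high_csat = [p for p in peers if p["csat"] >= 0.90]
print(sum(p["avg_handle_min"] for p in high_csat) / len(high_csat))  # 7.85

# Cost per ticket for pharma desks handling roughly 5,000 tickets/month.
pharma = [p for p in peers
          if p["industry"] == "pharma"
          and 4_000 <= p["tickets_per_month"] <= 6_000]
print(sum(p["cost_per_ticket"] for p in pharma) / len(pharma))  # 11.75
```

The hard part isn’t the filtering; it’s getting hold of peer data that’s granular, trustworthy, and collected on a comparable basis in the first place.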

It’s only by trying to compare apples with apples that such industry data becomes relevant to the organizations wishing to benchmark against it and potentially improve. Comparing against industry averages culled from organizations with little similarity to your own can, at best, only fuel misguided aspirations.

Don’t get me wrong, it’s great to have access to industry average metric data such as this provided by HDI, but people have to use it with their eyes open – in particular, questioning its relevance to their organization’s current situation.

As to whether more granular data is out there: it is, but you’ll probably either have to pay for it or use an ITSM tool that can provide aggregated, anonymized customer data, and the ability to analyze it, for free.