On-time performance (OTP) rate is one of the most critical and commonly measured supply chain KPIs for eCommerce retailers and carriers alike. Few, if any, scorecards leave it off, and it is a building block for many other equally important measurements, like On-Time-in-Full (OTIF) and Perfect Order Rate.

When it comes to measuring on-time performance in the last mile, we frequently see folks making the same mistake – exclusively measuring against a service-level agreement (SLA) rather than an estimated delivery date (EDD) or, better yet, the promise date made to the consumer. In fact, if you believe that your carrier rarely drops below 90-95% for OTP, you may be blindly putting your team at risk for customer complaints and lost business.

As we head into another holiday season, particularly one that is guaranteed to have more than its share of disruption, it’s critical that retailers take a hard look at how they measure performance if they want to minimize customer impact and effectively benchmark themselves against others.

The 3 “Flavors” of On-Time Performance Rate and Knowing When to Use Each

At Convey we believe that there are three definitions of “on-time” for retailers that need to be measured and used in specific ways. But…wait, on-time just means on-time… right? How is it possible there are different definitions? Well, it depends on who you are asking. For eCommerce retailers, where loyalty is critical to long-term success, on-time performance has a lot more to do with consumer experience than traditional supply chain methodology allows.

For retailers, success means hitting the expectation set with the shopper – not the agreement made with the carrier. On-time rates have long been benchmarked for the big 3 parcel carriers (UPS, FedEx, and USPS) by various experts in the industry like Shipmatrix, Shipware and Shipsights. Even carriers themselves will occasionally comment publicly.

The problem is that these numbers rarely align with consumer expectations. What might appear to be a small dip (such as the drop to the low 90s following the onset of the pandemic in May) will result in a disproportionate deluge of customer calls – most of which have little to do with a carrier-reported delay. In fact, Convey has found that when a customer submits negative feedback and lists a “delay” as the reason, 90% of the time there is no corresponding delay exception reported by the carrier. Think about your own experiences: if average OTP is 96% or better, why does every single person have their own collection of delivery horror stories? Retailers who rely on this legacy definition are guaranteed to end up with a scorecard that’s all green despite many unhappy customers and a care team that’s overburdened with issues they can’t explain, much less overcome.

Let’s walk through the three relevant but very distinct OTP measures you should understand:

1. Performance to SLA: The Status Quo

This is the traditional supply chain model for measuring on-time performance, and it originates from logistics as a back-of-house function focused on carrier rate negotiations and routing. In this model, a shipment is “on-time” if the carrier delivers within the timeline allowed for the service level selected, and late deliveries beyond an agreed threshold (often set in the high 90s for the major carriers) may be eligible for reimbursement of some of the shipping fees. It’s worth noting that guaranteed service refunds (GSRs) are subject to an extensive list of carrier-reported exception codes, and those excepted shipments are not counted in this version of carrier OTP. A variant of this carrier SLA OTP is the “raw” calculation, which includes the exceptions even though they don’t guarantee a rebate. This exposure is also one of the reasons major carriers chose to suspend money-back guarantees for their lower-tier service offerings at the onset of the pandemic – and those guarantees will likely remain suspended through this holiday season and perhaps beyond.

This method of calculating OTP is the one many audit firms, consultants focused on carrier negotiations and cost containment (like Shipmatrix), and carriers themselves use when they report on this data. If you are negotiating a rebate with a carrier, this is the metric to use. However, it should never be used to gauge delivery performance through the eyes of your customer.

Use this metric when: negotiating rebates with carriers.
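
To make the arithmetic behind this flavor concrete, here is a minimal Python sketch of how the GSR-style and “raw” variants of SLA OTP could be computed. The field names (`delivered_at`, `sla_deadline`, `exception_code`) and the exception list are hypothetical placeholders for illustration, not any carrier’s actual data model or exception codes.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical exception codes that void the money-back guarantee.
# Real carrier lists are much longer and carrier-specific.
GSR_EXEMPT_CODES = {"WEATHER", "CUSTOMS", "NATURAL_DISASTER"}

@dataclass
class Shipment:
    delivered_at: Optional[datetime]       # actual delivery scan, None if still in transit
    sla_deadline: datetime                 # latest delivery allowed by the purchased service level
    exception_code: Optional[str] = None   # carrier-reported exception, if any

def sla_otp(shipments, include_exceptions=False):
    """On-time % against the carrier SLA.

    include_exceptions=False mimics the GSR-style view: shipments with an
    exempt exception code are dropped from the denominator entirely.
    include_exceptions=True is the 'raw' variant that keeps them in.
    """
    if include_exceptions:
        eligible = list(shipments)
    else:
        eligible = [s for s in shipments if s.exception_code not in GSR_EXEMPT_CODES]
    if not eligible:
        return 0.0
    on_time = sum(
        1 for s in eligible
        if s.delivered_at is not None and s.delivered_at <= s.sla_deadline
    )
    return 100.0 * on_time / len(eligible)
```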

2. Performance to the Estimated Delivery Date (EDD): The Best Measurement of Carrier Performance for Consumers (And Convey’s Definition of Choice)

Retailers are in the best position to align with the customer’s perception when they measure on-time performance against the first delivery date given for a shipment. Even if no date was promised at checkout, when a consumer sees that initial delivery date for her new air fryer, she begins getting excited and perhaps even plans a meal around it – changing the date later on will result in disappointment.

As soon as an item ships and receives its first scan, the carrier provides both the consumer and the retailer with an EDD via the tracking page. This date may or may not align with the service-level guarantee measured by the first model, but it nevertheless sets the customer’s expectation of when the package will arrive. In fact, our data shows that most standard ground shipments arrive on day 3 despite having a guarantee of up to 5 days.

Additionally, carriers vary widely in how often they change an EDD once an item is in transit. While some are very good at setting and meeting their initial estimated delivery dates, others make a habit of “moving the goalposts,” which in and of itself creates customer satisfaction issues. Pre-COVID, we saw revised estimated delivery dates on about 12% of shipments; while the numbers started to normalize somewhat in August, this summer revised dates affected more than 20% of parcel shipments.

If the carrier misses this original delivery date, Convey counts it as late – and so does the consumer. This is why Convey feels strongly that measuring against SLAs is not a true indicator of the carrier’s performance in the eyes of the customer. Consumers don’t care about SLAs – they just want their package when they were told it would arrive, and 94% will blame the retailer for a failure here.

For this reason, retailers who care about the customer experience should measure carrier on-time performance against the original expected delivery date. In this model, the “low 90s” reported at the beginning of the pandemic actually looked closer to the mid-70s, which was far more consistent with the widespread reports of delays across the nation and with the collective consumer experience.

Use this metric when: measuring carrier performance against consumer expectations.
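
As a rough sketch of how this second flavor might be computed, assuming you persist every EDD the carrier reports for a parcel (a hypothetical `edd_history` list whose first entry is the original estimate), the same data also yields the revised-EDD rate discussed above:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Parcel:
    edd_history: List[date]       # every EDD the carrier reported, in order; [0] is the original
    delivered_on: Optional[date]  # actual delivery date, None if still undelivered

def edd_otp(parcels):
    """On-time % against the ORIGINAL EDD: a delivery after the first date
    the customer saw counts as late, even if a later, revised EDD was met."""
    if not parcels:
        return 0.0
    on_time = sum(
        1 for p in parcels
        if p.delivered_on is not None and p.delivered_on <= p.edd_history[0]
    )
    return 100.0 * on_time / len(parcels)

def revised_edd_rate(parcels):
    """Share of parcels whose EDD moved at least once while in transit --
    the 'moving the goalposts' behavior described above."""
    if not parcels:
        return 0.0
    revised = sum(1 for p in parcels if len(set(p.edd_history)) > 1)
    return 100.0 * revised / len(parcels)
```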

3. Performance to Promise: The Best for Retailer Optimization of Click-to-Deliver Fulfillment Times and Customer Expectations

Similar to performance to EDD, on-time performance to promise date (PD) is a critical tool for retailers looking to optimize their operations against customer expectations set at purchase.

Shoppers are surprisingly unforgiving when they feel the delivery promise was misrepresented at purchase (83% expect some form of compensation if it is missed), and this on-time rate is therefore mission-critical for retailers who want to create and foster a loyal customer base. The metric encompasses both fulfillment and shipping delays, allowing retailers to understand how well they are making and keeping promises to their customers across all last-mile functions.

Nailing on-time to promise is critical to ensuring long-term trust and brand loyalty – particularly when purchase decisions are time-sensitive, like an item of clothing needed for an event or dog food needed to make sure a beloved pet doesn’t go hungry. With free (and relatively fast) shipping now table stakes, consistency and trust in delivery performance are the next key point of differentiation for retailers.

Use this metric when: working to optimize front-end promise dates or benchmarking overall delivery speed against other retailers.
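
And a comparable sketch for on-time to promise, assuming hypothetical `ordered_on`, `promised_by`, and `delivered_on` fields captured at checkout and delivery. Because the clock starts at the order rather than the carrier’s first scan, fulfillment delays count against this number just as much as transit delays:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Order:
    ordered_on: date              # checkout date -- the "click"
    promised_by: date             # delivery promise shown at checkout
    delivered_on: Optional[date]  # actual delivery date, None if still open

def promise_otp(orders):
    """On-time % against the promise made at checkout, spanning both
    fulfillment and transit."""
    if not orders:
        return 0.0
    on_time = sum(
        1 for o in orders
        if o.delivered_on is not None and o.delivered_on <= o.promised_by
    )
    return 100.0 * on_time / len(orders)

def avg_click_to_deliver_days(orders):
    """Average days from checkout to doorstep for delivered orders -- a
    useful companion number when tuning front-end promise dates."""
    delivered = [o for o in orders if o.delivered_on is not None]
    if not delivered:
        return 0.0
    return sum((o.delivered_on - o.ordered_on).days for o in delivered) / len(delivered)
```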

Summary

Each of these metrics has a time and a place, but all too often in our conversations about OTP with retail clients, industry leaders, and analysts, the discussion defaults to the traditional and overly optimistic methodology of tying OTP to the carrier SLA – a view that simply doesn’t match the customer’s reality. Successful retailers have developed scorecards that encompass all three of these on-time metrics – and in turn take responsibility for securing all the data necessary to do so. Understanding both the carrier perspective and the consumer perspective will help you manage your partners in a way that drives the outcomes you are really trying to achieve.

For more real-time data around how carriers are performing relative to revised delivery dates, delays, and customer expectations head over to our Parcel Network Pulse Dashboard.
