Edition 118, March 2022

Fraud Prevention: Rethinking the Problem Before Jumping to the Solution

By Gerardo Pelayo, Innover Digital

Can out-of-warranty events that get completed as if they were under warranty be significantly and systematically reduced?

When equipment malfunctions or an asset breaks down, service providers face as challenging a problem as ever. High customer expectations leave little time to plan and react effectively, which may involve coordinating and deploying field service technicians, tools, spare parts and/or swap inventory. The challenge is exacerbated by the complexity of the installed base being served, which stems partly from the variety of models and configurations, but also from differences in warranty status across the components within an asset, or across assets for the same customer. The joint implication is that there are more pieces to reconcile and less time to do so.

One may then argue that there’s more data than ever to address this problem, which is true; yet more data often means more data sources that need to be synchronized, more reports that need to be consumed, and more alerts that need to be validated and acted upon. While data, analytical models and digital tools are all powerful levers, their success is constrained by decision-makers’ bandwidth and the time available to trigger an action. In other words, these levers are necessary for scale and depth of analysis, but they are not sufficient on their own to operationalize a better way of preventing fraud.

Rather than pitching a specific solution, the paragraphs below offer a set of guidelines for identifying and prioritizing what decision-makers need to make the digital possibilities real and impactful for the use case presented at the outset of this article: if a service is out of warranty, then don’t treat it as if it were under warranty. That said, the thought process can be applied to other use cases in reverse logistics and beyond.

1. Understand the reasons for fraud that you can actually do something about
Whether it’s positioned as root cause analysis, clustering, or pattern recognition, identifying common elements across service events is what enables impact at scale, because the same business rules, performance measurement, and decision-making criteria can then be applied to each group. The levers to achieve this range from descriptive analytics, made available through a user-friendly dashboard and interpreted by subject-matter experts, to machine learning models that automatically categorize service events based on their likelihood of belonging to any given group, combined with the application of a business rule. For example, in the simplest scenario of a binary “fraud alert” prediction, the model may consider an event more likely than not to be fraud, but to avoid incorrectly classifying legitimate events as fraud, the business rule could require a minimum confidence of 85% before raising an alert.
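
As a minimal sketch of that last point, the snippet below separates the model’s probability estimate from the business rule that decides when to raise an alert. The synthetic data, the choice of logistic regression, and the 0.85 threshold are all illustrative assumptions, not a prescribed implementation.

    # Minimal sketch: flag an event as fraud only when the model's
    # predicted probability clears a business-defined threshold.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)

    # Synthetic service events: two numeric features, roughly 5% fraud.
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=1000) > 2.3).astype(int)

    model = LogisticRegression().fit(X, y)

    CONFIDENCE_THRESHOLD = 0.85  # business rule, not a statistical default

    def flag_event(features):
        """Raise an alert only when predicted fraud probability >= threshold."""
        p_fraud = model.predict_proba([features])[0][1]
        return p_fraud >= CONFIDENCE_THRESHOLD

    # An event may be "more likely than not" fraud (p > 0.5) and still
    # not be flagged, because the rule demands higher confidence.
    print(flag_event([1.5, 1.0]))

The design point is that the threshold lives outside the model: the business can tighten or loosen it as investigation capacity changes, without retraining anything.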

It’s true that more advanced methods and tools can comb through data at scale and find patterns that were not explicitly being looked for. But even the most advanced methods will fall flat when the resulting categories can’t be acted upon, i.e., right answers to the wrong question. The key prerequisite for success is for the data science team to understand the targeted business outcomes and process, so that modeling outputs make business sense and the model is not only analytically correct, but also useful.

2. Use a predictability measure that aligns with your resources and the relative impact of fraud
There are multiple ways to measure the performance of the same predictive output, all of them technically correct, yet many of them can nudge the business toward very misleading conclusions. Take one of the most commonly used: accuracy. A model that is 90% accurate in classifying events as “fraud vs. not fraud” sounds great at first. However, if only 5% of your events are fraud, then a trivial model that predicts every single event as not fraud would achieve 95% accuracy, beating the 90% model without catching a single fraudulent event.
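
The arithmetic is easy to verify. The short sketch below scores that trivial “always not fraud” predictor on synthetic labels with a 5% fraud rate; the scikit-learn metric functions are standard, everything else is illustrative.

    # The accuracy trap: with ~5% fraud, a "model" that never predicts
    # fraud scores ~95% accuracy while detecting nothing.
    import numpy as np
    from sklearn.metrics import accuracy_score, recall_score

    rng = np.random.default_rng(0)
    y_true = (rng.random(10_000) < 0.05).astype(int)  # ~5% of events are fraud
    y_pred = np.zeros_like(y_true)                    # always predict "not fraud"

    print(f"accuracy: {accuracy_score(y_true, y_pred):.1%}")  # ~95%
    print(f"recall:   {recall_score(y_true, y_pred):.1%}")    # 0% of fraud caught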

Therefore, it’s critical to link the way the output’s performance is measured to the way the analytical output will be used. One may want to minimize the chance of raising false fraud alerts, or to minimize the chance that a fraudulent event goes unnoticed, and the metrics should reflect these business priorities. If it’s relatively expensive, from a resourcing or a customer experience perspective, to investigate every flagged event, then minimizing false alerts (precision) takes on a higher priority. But if fraud events are associated with the unjustified use of expensive parts and recurring disruptions to resource planning, then minimizing unidentified fraud events (recall) becomes the key metric. In summary, it doesn’t only matter how often the model gets it right; it especially matters where the model is getting it right.
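
One way to see this trade-off is to score the same predictions at two different alert thresholds. In the illustrative sketch below (labels, scores and thresholds are made up), a stricter threshold raises precision, meaning fewer false alerts, at the cost of recall, meaning more missed fraud.

    # Precision tracks "don't raise false alerts"; recall tracks
    # "don't miss fraud". Labels, scores and thresholds are illustrative.
    from sklearn.metrics import precision_score, recall_score

    y_true  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # 3 fraud events out of 10
    p_fraud = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.2, 0.1, 0.1, 0.05]

    for threshold in (0.5, 0.75):
        y_pred = [int(p >= threshold) for p in p_fraud]
        print(f"threshold={threshold}: "
              f"precision={precision_score(y_true, y_pred):.2f}, "
              f"recall={recall_score(y_true, y_pred):.2f}")
    # threshold=0.5  -> precision=0.67, recall=0.67
    # threshold=0.75 -> precision=1.00, recall=0.33

Which threshold is “better” is not a modeling question; it follows from the relative cost of investigating a false alert versus letting a fraudulent event through.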

3. Choose the solution strategy that works best, not necessarily the most advanced one
The last key barrier to successful operationalization is simply having the right fit between the selected solution and the team’s goals, business context and capabilities. Analytical outputs are never the end-point; at their best, they’re a step that provides the team with timely and impactful intelligence that didn’t exist before. But that is only achieved when the business is able to understand those outputs quickly, translate them into action plans and consistently implement the recommended path, without investing more resources in the solution than the problem itself was wasting.

The above may mean setting up a solutions desk in a low-cost region to manage targeted events, or it could mean a highly automated business process flow that connects the different data systems and triggers a set of pre-defined or AI-informed actions. Deciding which solution path makes more sense depends on: (i) the expected workload (the higher it is, the more automated processes are justified); (ii) the complexity of the actions involved and their associated probability of human error and delays; (iii) the speed of implementation; and (iv) the fit between the skills needed to make the solution work and the skills the team already has, or can reasonably acquire.
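
To make such a comparison explicit, one could score each path against the four criteria above. The weighted scorecard below is a purely hypothetical illustration: the two options, the weights and the 1-5 scores are all made up, not derived from the article.

    # Hypothetical weighted scorecard for comparing solution paths.
    # Criteria follow the article; options, weights and scores are made up.
    CRITERIA_WEIGHTS = {
        "expected_workload": 0.35,        # (i)
        "action_complexity": 0.25,        # (ii)
        "speed_of_implementation": 0.20,  # (iii)
        "skills_alignment": 0.20,         # (iv)
    }

    options = {
        "solutions_desk": {"expected_workload": 2, "action_complexity": 4,
                           "speed_of_implementation": 5, "skills_alignment": 4},
        "automated_flow": {"expected_workload": 5, "action_complexity": 5,
                           "speed_of_implementation": 2, "skills_alignment": 2},
    }

    for name, scores in options.items():
        total = sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())
        print(f"{name}: weighted score = {total:.2f}")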

Available levers in this digital era should be understood and utilized diligently. The above provides guidelines for assessing, selecting and integrating these levers into the business workflow to drive quick and sustainable impact.

Gerardo Pelayo
Dr. Gerardo Pelayo is the Vice President of Supply Chain Solutions at Innover Digital. He has a demonstrated track record balancing implementation with thought leadership and innovation. With over 15 years of international work and academic experience in supply chain and business transformation, his current focus is on operationalizing data and digital accelerators to achieve results with speed at scale. He holds a B.S. in Industrial and Systems Engineering from Monterrey Tech, and a Ph.D. in Supply Chain Management (cum laude) from the MIT-Zaragoza Logistics Program.