“In the machine learning era, it becomes easy for everyday users to build their own AI solutions without understanding the mechanics of how they work. This makes it difficult for business teams to deploy a predictive or prescriptive model that can be trusted.
One of the biggest concerns that people have when building predictive or prescriptive models is that they are unsure how the machine makes decisions. It’s important to note that it isn’t just business users who are struggling to understand their models. ML engineers are also facing this issue. According to a survey done by Mxnet in 2017, 35% of AI experts felt that their colleagues did not understand how their models were made while 47% were uncomfortable presenting the results to these colleagues.”
The text above is the output of an AI writing tool asked to draft an opening for the title “Machine learning interpretability”. Quite well put, one would say.
We live in an age where most of the technologies around us are driven by machine learning models, or weak AI. Yet ML adoption remains challenging because businesses struggle to interpret model results and justify them to their customers. The hardest part is not building an accurate model but explaining it, and convincing business stakeholders and consumers of how it works. Moreover, with the explosion of data generated through Instagram likes, Facebook shares, tweets, mobile GPS pings, Google searches, behavioral information in cookies, online payments, and more, businesses must not only identify and understand the influencing micro-factors but also evaluate what works for each micro-segment of their consumer base. In this article, we take a closer look at some interesting applications of this idea in the e-commerce domain.
Businesses want to understand the ‘why’
A constant challenge data scientists face while building these models is judging whether the features used are any good, and whether the model is trustworthy and unbiased from a business standpoint. On top of this, compliance, fair-lending, ethical-AI, and GDPR requirements demand explanations of model outputs to ensure that no biases exist within the model structure. The purpose is now extending to augmenting humans at scale, moving from generalized model outputs to the specifics of a business use case: shifting the question from “which consumer segment will buy this product?” to “what product will John buy?”
The above illustration sums up the maturity of these models and the expectations of business consumers. On one hand, standard rule-based or econometric models are simple enough to understand, but they lack the accuracy we seek from an impact perspective. On the other, neural networks deliver accuracy but make it difficult to explain the why or to contextualize business actions. Recent advances in data science are bridging this gap with explainable AI (XAI): techniques that stack-rank features locally or globally to explain a model’s outcomes, moving less interpretable models toward the more interpretable side. At Enquero, we are leveraging these advances in XAI to explain model predictions, enabling businesses to rethink traditional problems such as customer segmentation and drive meaningful impact.
A new perspective towards enabling micro-segmentation
In today’s world, it is a given that every enterprise has a customer segmentation architecture in place, integrated with its customer relationship management (CRM) system or customer data platform (CDP). These range from a simple clustering exercise to full customer lifetime value models (gamma-gamma, beta-geometric/negative binomial distribution, Weibull time-to-event) built on recency-frequency-monetary (RFM) characteristics. One of the main problems with these implementations is that they are either limited to numerical features or focused only on the core transactional characteristics of customers, which often diminishes the impact of an enterprise’s marketing strategies.
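The traditional approach can be sketched in a few lines. The following is a minimal illustration of RFM-based clustering using scikit-learn; the data is synthetic and the feature ranges are illustrative, not drawn from any real engagement.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_customers = 500

# Synthetic recency-frequency-monetary (RFM) table: one row per customer.
rfm = np.column_stack([
    rng.integers(1, 365, n_customers),   # recency: days since last purchase
    rng.poisson(5, n_customers),         # frequency: number of orders
    rng.gamma(2.0, 50.0, n_customers),   # monetary: average order value
])

# Standardize so no single feature dominates the distance metric.
X = StandardScaler().fit_transform(rfm)

# A simple K-means segmentation -- the "traditional" approach described above.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
segments = kmeans.fit_predict(X)

print(np.bincount(segments))  # customers per segment
```

Note that every feature here is numeric and purely transactional, which is precisely the limitation described above.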
From an implementation perspective, marketing strategies focus on improving lifetime value, preventing churn, increasing membership enrollment, encouraging active engagement, and optimizing marketing spend. But the traditional way of implementing them fails to capture features centered around business actions. The profiled consumer clusters are usually used in conjunction with, say, a churn model. This may partially answer the global question of consumer behavior (via key-driver or feature-importance analysis), but when deciding on a course of action, the global explanation turns out to be ineffective for local implementations. For example, a model might suggest that most people prefer a particular segment of a luxury product, and we diligently target that consumer segment with an apt marketing campaign. Despite the effort, such campaigns are bound to fail because we missed out on leveraging local (customer-specific) explanations.
To address this, we can extend the churn model with model explanations. One of the most widely used methods is SHAP (SHapley Additive exPlanations), which quantifies the impact of each feature on a prediction relative to a baseline value.
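The core idea is additivity: the baseline (average) prediction plus the per-feature contributions recovers the model’s output for a given customer. For a linear model with independent features, Shapley values have a simple closed form, which lets us sketch the idea without the `shap` library; in practice you would use something like `shap.TreeExplainer` on a tree ensemble. The weights and data below are illustrative, not from a real churn model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear churn-risk model: score = w . x + b
w = np.array([0.8, -1.2, 0.5])     # weights for three customer features
b = 0.1
X = rng.normal(size=(200, 3))      # background data (customer features)

baseline = X.mean(axis=0) @ w + b  # expected model output E[f(X)]

def shap_values_linear(x):
    """Exact Shapley values for a linear model with independent features:
    phi_i = w_i * (x_i - E[x_i])."""
    return w * (x - X.mean(axis=0))

x = X[0]                           # explain one customer
phi = shap_values_linear(x)

# Additivity: baseline + sum of contributions recovers the prediction.
prediction = x @ w + b
assert np.isclose(baseline + phi.sum(), prediction)
print(phi)  # positive values push churn risk up, negative values down
```

The sign and magnitude of each `phi` value is exactly the “impact compared to a baseline” described above, computed per customer rather than globally.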
Micro-segmentation for a major e-commerce retailer
For any leading e-commerce retailer, understanding the key features that retain and engage customers is crucial to success. In one such live use case, as part of boosting our customer’s marketing strategies and micro-segmented targeting, our team improved their existing inactive-user segmentation by extending the customer view beyond demographics and transactional information to psychographic and app-usage data covering membership state, age, and usage patterns. Our simple micro-segmentation framework now helps solve such segmentation problems and deliver the right solution for any e-commerce retailer.
The diagram below illustrates the explanation of a customer-inactivity model: it shows which features stop a customer from engaging. The implications are huge, as we can now drill down to the individual customer to gain micro-level insights. So instead of looking at the whole segment, you can look at a specific customer and understand what drove that customer away.
One clear advantage of flattening the SHAP scores from, say, an XGBoost model is that we can use the standardized scores of the features (including mixed data types) to segment customers. This allows us to create actionable, clearly labeled customer micro-segments specific to inactive customers.
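One way to sketch this step: take the per-customer matrix of SHAP values (one row per customer, one column per feature), standardize it, and cluster on the attributions instead of the raw features. The SHAP matrix below is synthetic, with two planted behavior patterns; in a real pipeline it would come from something like `shap.TreeExplainer(model).shap_values(X)`.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Synthetic stand-in for a per-customer SHAP matrix (n_customers x n_features),
# with two planted attribution patterns so the clusters are recoverable.
pattern_a = rng.normal(loc=[1.0, -0.5, 0.0, 0.2], scale=0.3, size=(150, 4))
pattern_b = rng.normal(loc=[-0.8, 0.6, 0.4, -0.1], scale=0.3, size=(150, 4))
shap_matrix = np.vstack([pattern_a, pattern_b])

# Standardize the attributions, then cluster customers by *why* the model
# scored them, not by their raw feature values.
Z = StandardScaler().fit_transform(shap_matrix)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)

# Profile each micro-segment by its mean attribution per feature.
for k in range(2):
    mean_shap = shap_matrix[labels == k].mean(axis=0).round(2)
    print(f"segment {k}: mean SHAP = {mean_shap}")
```

Because SHAP values share a common scale (contribution to the model output), this sidesteps the mixed-data-type problem that plagues clustering on raw features.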
The figure above demonstrates the difference between two customer clusters in our use case. The features and clusters become easier to profile, and we can further understand the core features that drive each cluster’s behavior. A clear advantage is the ability to group customers into sub-segments and identify which marketing campaigns best fit each group.
Adopting explainable AI is not just about understanding why a model predicted what it did, or about creating micro-segments that make marketing initiatives more customer-centric. It is also about detecting model biases and identifying rogue models, or loopholes a model learns over time from skewed data. Suppose a static customer journey in an enterprise CRM rewards customers with discounts whenever they add items to their shopping carts and do not purchase within the next 24 hours. Such a static rule is easily exploited. With advances in CRM platforms and custom applications, these static rules can be replaced with dynamic customer journeys powered by machine learning and XAI. This eases CRM design restrictions, maps each marketing campaign to its best-fit micro-segment, and treats each segment in a personalized manner.
Here are some areas beyond e-commerce where XAI can help.
- We can rank consumer complaints by severity, highlighting the specific messages in a complaint that explain the pain points
- In the financial domain, we can clearly illustrate each feature used in a credit scoring model and ensure that no inherent biases exist
- Insurance companies can ensure that premiums are not driven by a consumer’s race, gender, or region
- Businesses can identify the true drivers of machine failures, which are not always the core components
XAI is the need of the hour: it helps businesses understand the reason behind an outcome and justify the next course of action. It is crucial, then, to explain your AI decisions to your business functions, not just build models mechanically.
Vikram Raju - Principal AI/ML Architect | Innovation and Design Thinking
With a decade of experience conceptualizing and building next-gen advanced analytics solutions across retail and finance, Vikram is known for replacing textbook solutions with innovative, research-driven prototypes that unleash competitive advantage for today’s and tomorrow’s business problems.