NEW ARTICLE FROM TECHNICAL DIRECTOR FSCORELAB ON RB.RU

OBSTACLES TO THE WIDESPREAD ADOPTION OF MACHINE LEARNING

One of the main obstacles to the widespread adoption of machine learning in business is the trade-off between the interpretability and the complexity of an algorithm. The more complex the model's internal structure, the deeper the relationships between variables it can capture, but the harder it becomes for people to understand, especially for those not directly involved in machine learning and statistics.

The most common example of a simple model is linear regression. Each coefficient of a linear regression shows how much the predicted value changes, on average, when the corresponding variable increases by one unit.

For example, Figure 1 shows the simplest model for predicting the number of days a loan repayment is overdue, using a single variable: the client's age. According to this model, each additional year of the client's age reduces the predicted delay by 0.36 days on average.
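
As a rough illustration of how such a coefficient is read, the sketch below fits a one-variable linear regression on synthetic data; the data and the recovered coefficient of about -0.36 are purely illustrative, not the actual model behind Figure 1.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
age = rng.integers(21, 70, size=500).reshape(-1, 1)             # client age in years (synthetic)
# Synthetic target: delay shrinks with age, plus noise (purely illustrative numbers)
delay_days = 40 - 0.36 * age.ravel() + rng.normal(0, 5, size=500)

model = LinearRegression().fit(age, delay_days)
print(model.coef_[0])   # close to -0.36: one extra year of age lowers the predicted delay by ~0.36 days
```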

However, even with linear models things are not so simple. If a model contains several highly correlated variables (for example, age and the number of closed loans in the credit history), directly interpreting their coefficients is no longer a trivial task. The situation is even harder with nonlinear models, where the dependencies between the variables and the prediction can take a complex, nonmonotonic form with many interactions.
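
To make the correlation problem concrete, here is a hedged sketch on synthetic data (not taken from the article): two almost collinear features are refit on different subsamples, and the individual coefficients drift from fit to fit even though their combined effect stays roughly the same.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
age = rng.normal(40, 10, size=1000)
closed_loans = 0.2 * age + rng.normal(0, 0.5, size=1000)   # almost collinear with age
delay = 40 - 0.36 * age + rng.normal(0, 5, size=1000)      # the target really depends on age only

X = np.column_stack([age, closed_loans])
for seed in range(3):                                      # refit on different random subsamples
    idx = np.random.default_rng(seed).choice(1000, size=300, replace=False)
    print(LinearRegression().fit(X[idx], delay[idx]).coef_)
# The two coefficients trade off against each other across fits,
# so reading either one in isolation is misleading.
```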

Such models are often called “black boxes” in machine learning. The model receives a set of variables as input and computes its prediction from them, but how that decision was made and which factors influenced it are questions whose answers are usually hidden inside the “black box” of the algorithm.
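
For contrast, a minimal sketch of such a black box, assuming a gradient boosting model trained on synthetic data: a prediction comes out, but there is no per-variable coefficient to read off the way there is in linear regression.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))                                  # five anonymous client features
y = np.sin(X[:, 0]) * X[:, 1] + rng.normal(0, 0.1, size=1000)   # nonlinear target with an interaction

black_box = GradientBoostingRegressor().fit(X, y)
print(black_box.predict(X[:1]))   # a prediction comes out...
# ...but there is no black_box.coef_ to tell us which variables drove it or how.
```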

WHAT IS INTERPRETABILITY?

Before describing the existing interpretation methods, it is worth discussing what exactly is meant by the interpretability of a model and why it is necessary at all to interpret and explain the predictions of ML models.

The term “model interpretability” is an umbrella term: it covers a whole set of characteristics and definitions.

In its most general form, interpretability can be defined as the degree to which a person can understand the reason a particular decision was made (for example, a decision to issue a loan to a client).

This definition is a good starting point, but it leaves many important questions unanswered. For example, who exactly is meant by a “person”: an expert in the given subject area or an ordinary person “off the street”? What factors should be taken into account when evaluating interpretability: the time and mental effort spent, the depth of understanding of the model’s internal workings, a person’s trust in it?

Although there is currently no unified approach to this concept, we intuitively feel what makes an ML model more or less interpretable. It is any information, presented in a form accessible to perception, that improves our understanding of which factors influenced a particular prediction and the behaviour of the model as a whole, and how. This information can take many forms, for example visualizations and charts or textual explanations.
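
One hedged example of what such information might look like in practice (an illustration on synthetic data, not one of the specific methods the full article describes) is permutation importance, which reports how much the model’s quality degrades when each input variable is shuffled.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 3))                                # three synthetic client features
y = 2 * X[:, 0] - X[:, 1] + rng.normal(0, 0.1, size=1000)     # the third feature is irrelevant

model = GradientBoostingRegressor().fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2"], imp.importances_mean):
    print(name, round(score, 3))   # the irrelevant feature_2 should score near zero
```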

 

FULL VERSION OF THE ARTICLE >>
