https://arxiv.org/pdf/2212.07432v1.pdf

A counterfactual explanation is a type of explanation for a model's prediction: it tells you the smallest change that would need to be made to the input data to flip the model's prediction.
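A minimal sketch of that idea for a linear classifier (the weights and inputs here are made up for illustration): the smallest L2 change that flips the prediction is the projection of the input onto the decision boundary, plus a tiny step past it.

```python
import numpy as np

# Hypothetical linear classifier f(x) = w.x + b (assumed weights, not from the paper).
w = np.array([2.0, -1.0])
b = -0.5

def predict(x):
    return int(np.dot(w, x) + b > 0)

def counterfactual(x, eps=1e-3):
    """Smallest L2 change that flips a linear classifier's prediction:
    project x onto the hyperplane w.x + b = 0, then step slightly past it."""
    f = np.dot(w, x) + b
    step = (f / np.dot(w, w)) * w     # component of x normal to the boundary
    return x - step * (1 + eps)       # cross to the other side of the boundary

x = np.array([1.0, 1.0])              # f(x) = 0.5 > 0, so class 1
x_cf = counterfactual(x)
print(predict(x), predict(x_cf))      # prediction flips: 1 0
```

For nonlinear models there is no closed form, so counterfactual methods typically solve an optimization problem with the same objective: minimal distance to the input subject to a changed prediction.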

SVMs are advantageous because they work well with high-dimensional data, have theoretical guarantees on stability and sample complexity, tend to generalize well (due to low Rademacher complexity), and find decision boundaries (hyperplanes) that avoid overfitting (a low 2-norm weight vector, hence a wide margin). SVMs tend to perform relatively well even when there isn't a lot of data.
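The low-2-norm / wide-margin connection can be seen directly: the margin width of a linear SVM is 2/||w||, so penalizing ||w|| widens the margin. A small numpy-only sketch (toy data and hyperparameters are assumptions, not from the paper) that trains a linear soft-margin SVM by subgradient descent on the regularized hinge loss and reports the resulting margin:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy separable data (assumed, for illustration): two Gaussian blobs.
X = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

# Minimize lam/2 * ||w||^2 + mean(hinge(1 - y * (w.x + b))) by subgradient descent.
w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
for _ in range(2000):
    active = y * (X @ w + b) < 1                 # points violating the margin
    if active.any():
        grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / len(X)
        grad_b = -y[active].sum() / len(X)
    else:
        grad_w, grad_b = lam * w, 0.0
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = (np.sign(X @ w + b) == y).mean()
margin_width = 2 / np.linalg.norm(w)             # wide margin <=> small ||w||
print(accuracy, margin_width)
```

The regularization term is exactly what keeps ||w|| small, which is the "wide margin" property the paper wants to exploit for counterfactuals.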

There is little to no prior work on how to use the wide-margin property to change the labels of instances with undesirable predictions, or on how this can be used to enhance interpretability.

<aside> 💡 features are basically explanations in classification models

</aside>

Actionability is defined as giving the user a recommendation for the easiest way to change an undesirable model prediction, based on a set of rules.

The case study uses a diabetes dataset. How the factors interact is not well understood, and controlling all of them is unrealistic. The goal is to tweak the fewest factors by the smallest amount.
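A sketch of the "fewest factors, smallest amount" goal in its simplest form (a single-feature counterfactual for a linear model; the feature names and weights are hypothetical, not the paper's actual diabetes model): for a linear score, the feature with the largest |w_i| needs the smallest move to cross the decision boundary.

```python
import numpy as np

# Hypothetical linear risk model over three named factors (assumed weights).
features = ["glucose", "bmi", "age"]
w = np.array([0.9, 0.4, 0.1])
b = -1.0

def predict(x):
    return int(x @ w + b > 0)

def sparse_counterfactual(x, eps=1e-3):
    """Flip the prediction by changing only one feature, as little as possible:
    the feature with the largest |w_i| requires the smallest change."""
    f = x @ w + b
    i = int(np.argmax(np.abs(w)))
    x_cf = x.copy()
    x_cf[i] -= (f / w[i]) * (1 + eps)   # cross the boundary along axis i only
    return i, x_cf

x = np.array([1.5, 0.5, 0.2])           # f(x) = 0.57 > 0: undesirable prediction
i, x_cf = sparse_counterfactual(x)
print(features[i], predict(x), predict(x_cf))
```

Real counterfactual methods additionally restrict which features are actionable (e.g. age cannot be changed), which is the rule-based constraint the notes above describe.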

<aside> 💡 SVMs are used for critical decisions in medical and financial fields, so interpretability and the quality of explanations must be high. This paper addresses that.

</aside>