Document Type : Original Article

Authors

1 Department of Management and Economics, Science and Research Branch, Islamic Azad University, Tehran, Iran.

2 Faculty of Industrial Engineering, K.N. Toosi University of Technology, Tehran, Iran.

3 Faculty of Industrial Engineering, South Tehran Branch, Islamic Azad University, Tehran, Iran.

Abstract

Customer churn prediction has gained significant attention due to increasing competition among mobile service providers. Machine learning algorithms are commonly used to predict churn; however, the complexity of customer data leaves room to improve their performance, and the limited interpretability of their results undermines managers' trust. In this study, a step-by-step framework consisting of three layers is proposed to predict customer churn with high interpretability. The first layer applies data preprocessing techniques, the second layer proposes a novel classification model that combines supervised and unsupervised algorithms, and the third layer uses evaluation criteria to improve interpretability. The proposed model outperforms existing models in both predictive and descriptive scores. The novelties of this paper lie in proposing a hybrid machine learning model for customer churn prediction and in evaluating its interpretability using extracted indicators. Results demonstrate the superiority of the clustered-dataset versions of the models over their non-clustered counterparts, with KNN achieving a recall score of almost 99% for the first layer and the clustered decision tree achieving a 96% recall score for the second layer. Additionally, parameter sensitivity and stability are found to be effective interpretability evaluation metrics.
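The hybrid idea described in the abstract (cluster the customers with an unsupervised algorithm, feed the cluster assignments to supervised classifiers such as KNN and a decision tree, and score them by recall) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic dataset, the choice of k-means with three clusters, and all hyperparameters are assumptions made for the example.

```python
# Sketch of a cluster-then-classify pipeline for churn prediction:
# 1) segment customers with k-means, 2) append the cluster label as a
# feature, 3) train KNN and a decision tree, 4) compare recall scores.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a churn dataset (label 1 = churner, imbalanced).
X, y = make_classification(n_samples=1000, n_features=8,
                           weights=[0.7, 0.3], random_state=0)

# Unsupervised step: cluster customers and use the label as an extra feature.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, clusters])

X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, stratify=y,
                                          random_state=0)

# Supervised step: fit each classifier on the cluster-augmented features
# and report recall, the criterion emphasized in the abstract.
for name, model in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                    ("Decision tree", DecisionTreeClassifier(max_depth=5,
                                                             random_state=0))]:
    model.fit(X_tr, y_tr)
    rec = recall_score(y_te, model.predict(X_te))
    print(f"{name} recall on clustered features: {rec:.2f}")
```

Recall is a natural headline metric here because, in churn problems, missing an actual churner (a false negative) is usually more costly than flagging a loyal customer.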

Keywords
