TY - GEN
T1 - Design and Implementation of Machine Learning Evaluation Metrics on HPCC Systems
AU - Suryanarayanan, A.
AU - Chala, Arjuna
AU - Xu, Lili
AU - Shobha, G.
AU - Shetty, Jyoti
AU - Dev, Roger
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/12
Y1 - 2019/12
N2 - The HPCC Systems Production Machine Learning bundles provide a diverse set of features that allow the parallelized creation and training of machine learning models, and a large set of evaluation metrics that can be used to test a trained model to ascertain its performance. To help monitor the models more closely, however, a new set of evaluation methods incorporating cluster analysis, feature selection, and other commonly used tests has been proposed, implemented, and tested. The implementations are written entirely in Enterprise Control Language and support the various features provided by the Machine Learning bundles, such as the Myriad Interface. This paper provides a comprehensive summary of the evaluation metrics currently available in the library before presenting the details of the design and implementation of the new evaluation methods. It then goes on to present the results of testing these implementations against their counterparts in the Python scikit-learn library, along with a few data visualisations demonstrating some uses of the implemented evaluation metrics.
AB - The HPCC Systems Production Machine Learning bundles provide a diverse set of features that allow the parallelized creation and training of machine learning models, and a large set of evaluation metrics that can be used to test a trained model to ascertain its performance. To help monitor the models more closely, however, a new set of evaluation methods incorporating cluster analysis, feature selection, and other commonly used tests has been proposed, implemented, and tested. The implementations are written entirely in Enterprise Control Language and support the various features provided by the Machine Learning bundles, such as the Myriad Interface. This paper provides a comprehensive summary of the evaluation metrics currently available in the library before presenting the details of the design and implementation of the new evaluation methods. It then goes on to present the results of testing these implementations against their counterparts in the Python scikit-learn library, along with a few data visualisations demonstrating some uses of the implemented evaluation metrics.
KW - HPCC Systems
KW - Machine Learning
KW - Performance Evaluation Metrics
UR - http://www.scopus.com/inward/record.url?scp=85083073401&partnerID=8YFLogxK
U2 - 10.1109/CSITSS47250.2019.9031056
DO - 10.1109/CSITSS47250.2019.9031056
M3 - Conference contribution
AN - SCOPUS:85083073401
T3 - CSITSS 2019 - 2019 4th International Conference on Computational Systems and Information Technology for Sustainable Solution, Proceedings
BT - CSITSS 2019 - 2019 4th International Conference on Computational Systems and Information Technology for Sustainable Solution, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 4th International Conference on Computational Systems and Information Technology for Sustainable Solution, CSITSS 2019
Y2 - 20 December 2019 through 21 December 2019
ER -