Evaluation of Hyperparameter Optimization Techniques in Deep Learning considering Accuracy, Runtime and Computational Efficiency Metrics

Authors

  • John Bush Idoko, Near East University, North Cyprus, Mersin-10, Turkey
  • Mohammad Khaleel Sallam Ma’aitah, Applied Science Private University, Amman, Jordan
  • Almuntadher Alwhelat, Department of Computer Engineering, Al-Farabi University College, Baghdad 10022, Iraq
  • Kennedy Smart, Nuralogix Corporation, 250 University Ave Suite 209, Toronto, Canada
  • Zainab Alwaeli, Department of Computer Engineering, Near East University, North Cyprus, Mersin-10, Turkey

Keywords:

Hyperparameter Optimization, Deep Learning, Approximation Algorithms, FNN, MNIST Dataset

Abstract

Hyperparameter optimization is one of the most important steps in training deep learning models, since model performance metrics such as accuracy, generalizability, and computational efficiency depend closely on it. This work explores five hyperparameter optimization techniques: Grid Search, Random Search, Genetic Algorithm, Particle Swarm Optimization, and Simulated Annealing, applied to a feedforward neural network (FFNN) trained on the MNIST dataset. Two main configurations of 20 and 50 epochs are considered, focusing on three key metrics: accuracy, runtime, and computational efficiency. Results show that approximation algorithms such as the Genetic Algorithm and Simulated Annealing achieve a favorable trade-off between accuracy and runtime, making them considerably more computationally practical than classical methods such as Grid Search and Random Search. For example, the Genetic Algorithm achieved the highest accuracy of 98.60% at 50 epochs, while Simulated Annealing produced the fastest run at 357.52 seconds. These results demonstrate the flexibility and efficiency of approximation algorithms when searching high-dimensional hyperparameter spaces under scarce resources. This work also presents a trade-off analysis between exhaustive classical techniques and adaptive approximation techniques. The modular Python implementation provides a basic structure that can be extended to more complex datasets and architectures. By bridging computational efficiency with practical efficacy, this work offers actionable guidance for both practitioners and researchers applying deep learning, pointing toward hyperparameter optimization methodologies best suited to given constraints and objectives.
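To make the comparison concrete, the sketch below illustrates one of the techniques named in the abstract, Random Search, in pure Python. It is a minimal, hypothetical example, not the paper's implementation: the search space, the `surrogate_score` function (a toy stand-in for validation accuracy of an FFNN on MNIST), and all identifiers are illustrative assumptions.

```python
import math
import random

# Hypothetical search space standing in for FFNN hyperparameters
# (learning rate and hidden-layer width); values are illustrative.
SPACE = {
    "lr": [1e-4, 1e-3, 1e-2, 1e-1],
    "hidden": [32, 64, 128, 256],
}

def surrogate_score(lr, hidden):
    """Toy stand-in for validation accuracy: peaks at lr=1e-3, hidden=128.

    A real evaluation would train the FFNN on MNIST and return its
    validation accuracy; this closed-form surrogate keeps the sketch fast.
    """
    return 1.0 - 0.1 * abs(math.log10(lr) + 3) - abs(hidden - 128) / 1000

def random_search(n_trials, seed=0):
    """Sample n_trials random configurations and keep the best-scoring one."""
    rng = random.Random(seed)
    best_score, best_cfg = float("-inf"), None
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in SPACE.items()}
        score = surrogate_score(cfg["lr"], cfg["hidden"])
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_score, best_cfg

best_score, best_cfg = random_search(n_trials=20)
print(best_score, best_cfg)
```

Grid Search would instead enumerate all 16 combinations exhaustively, while the adaptive methods (Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing) would bias later samples toward promising regions; this sketch only shows the simplest randomized baseline.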

Author Biography

  • Mohammad Khaleel Sallam Ma’aitah, Applied Science Private University, Amman, Jordan

    Head of Electrical Engineering Department, Electrical Engineering / Robotics and Artificial Intelligence Engineering, Faculty of Engineering & Technology, Applied Science Private University, Amman, Jordan

Published

30-06-2025

Section

Articles

How to Cite

Idoko, J. B., Ma’aitah, M. K. S., Alwhelat, A., Smart, K., & Alwaeli, Z. (2025). Evaluation of Hyperparameter Optimization Techniques in Deep Learning considering Accuracy, Runtime and Computational Efficiency Metrics. Journal of Soft Computing and Data Mining, 6(1), 182-199. https://penerbit.uthm.edu.my/ojs/index.php/jscdm/article/view/21926