Performance Evaluation of Classical Machine Learning Models Versus Deep Neural Networks

Authors

  • Naveen
  • Chander Shikhavat

Keywords

Classical Machine Learning, Deep Neural Networks, Performance Evaluation, Model Comparison

Abstract

The rapid growth of data-driven applications has spurred interest in comparing traditional machine learning models with deep neural networks across a variety of analytical tasks. Classical methods such as logistic regression, decision trees, support vector machines, and k-nearest neighbors have long been valued for their simplicity, interpretability, and efficiency, while deep neural networks have gained popularity for their capacity to learn intricate nonlinear patterns from large, high-dimensional datasets. This paper presents a comparative analysis of classical machine learning models and deep neural networks with respect to accuracy, scalability, computational cost, and interpretability. Experimental analysis is carried out on representative datasets to evaluate classification performance across a range of data sizes and feature complexity levels. The findings indicate that classical models frequently achieve competitive performance on smaller or well-structured datasets, whereas deep neural networks perform better in large-scale, complex data settings.
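
The experimental setup the abstract describes can be illustrated with a minimal sketch. The Python script below (assuming scikit-learn; the paper's actual datasets, network architectures, and hyperparameters are not given on this page) trains the four classical models named in the abstract alongside a small multilayer perceptron standing in for a deep neural network, recording test accuracy and training time as rough proxies for the accuracy and computing-cost criteria.

# Minimal sketch of the kind of comparison the abstract describes.
# scikit-learn is an assumption; the paper's actual datasets, models,
# and hyperparameters are not specified on this page.
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic dataset; vary n_samples / n_features to probe scale effects.
X, y = make_classification(n_samples=5000, n_features=40,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "svm_rbf": SVC(kernel="rbf"),
    "knn": KNeighborsClassifier(n_neighbors=5),
    # Small MLP standing in for a deep neural network.
    "mlp": MLPClassifier(hidden_layer_sizes=(128, 64),
                         max_iter=500, random_state=0),
}

for name, model in models.items():
    start = time.perf_counter()          # crude proxy for computing cost
    model.fit(X_train, y_train)
    elapsed = time.perf_counter() - start
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name:20s} accuracy={acc:.3f} train_time={elapsed:.2f}s")

Varying n_samples and n_features in make_classification is a crude way to probe the data-size and feature-complexity regimes the abstract refers to; the published experiments presumably use real datasets and a proper deep network rather than this stand-in.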

Published

2025-09-30

Issue

Vol. 2 No. 3 (2025)

Section

Original Research Articles

How to Cite

Performance Evaluation of Classical Machine Learning Models Versus Deep Neural Networks. (2025). International Journal of Artificial Intelligence, Computer Science, Management and Technology, 2(3), 28-32. https://ijacmt.com/index.php/j/article/view/35