Feature Engineering and Selection Strategies for Improving Machine Learning Accuracy

Authors

  • Meena Kumari, Independent Researcher

Keywords:

Feature Engineering, Feature Selection, Machine Learning Accuracy, Dimensionality Reduction

Abstract

Feature engineering and feature selection contribute significantly to the precision and dependability of machine learning models. With complex, high-dimensional datasets, the quality of the input features frequently affects model performance more than the choice of algorithm itself. Feature engineering transforms raw data into informative representations, whereas feature selection identifies the most significant variables, eliminating noise and redundant information. Feature engineering techniques include normalization, encoding, feature construction, and domain-driven transformations; feature selection strategies fall into filter, wrapper, and embedded methods. This article discusses how these approaches improve model generalization, decrease overfitting, and enhance computational efficiency across a variety of machine learning problems. An analysis of the impact of feature engineering and selection on both traditional and advanced learning models is presented, along with comparative observations for each.
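As a minimal illustration of the filter approach mentioned above, the sketch below ranks features by their absolute Pearson correlation with the target and keeps the top k. The function names and toy data are illustrative only; a real pipeline would more likely use a library routine such as scikit-learn's SelectKBest.

```python
# Filter-style feature selection sketch: score each feature independently
# of any model, then keep the k highest-scoring features.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_top_k(X, y, k):
    """X: list of sample rows; y: targets.
    Returns the indices of the k features most correlated with y."""
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        column = [row[j] for row in X]
        scores.append((abs(pearson(column, y)), j))
    scores.sort(reverse=True)           # strongest correlations first
    return sorted(j for _, j in scores[:k])

# Toy example: feature 0 tracks the target exactly, features 1 and 2 less so.
X = [[1, 5, 0], [2, 3, 1], [3, 8, 0], [4, 1, 1]]
y = [1, 2, 3, 4]
print(select_top_k(X, y, 2))            # feature 0 plus the next-best feature
```

Because filter methods score each feature in isolation, they are fast and model-agnostic, but they can miss interactions between features that wrapper or embedded methods would detect.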



Published

2025-09-30

Issue

Section

Original Research Articles

How to Cite

Feature Engineering and Selection Strategies for Improving Machine Learning Accuracy. (2025). International Journal of Artificial Intelligence, Computer Science, Management and Technology, 2(3), 23-27. https://ijacmt.com/index.php/j/article/view/34
