Publication Date
Spring 2014
Degree Type
Master's Project
Degree Name
Master of Science (MS)
Department
Computer Science
Abstract
The Naïve Bayes model is a special case of Bayesian networks with strong independence assumptions. It is typically used for classification problems. The model is trained on the given data to estimate the parameters required for classification. This approach to classification is popular because it is simple yet efficient and accurate. Although the Naïve Bayes model performs well on most problem instances, there is a set of problems on which it is less accurate than other classifiers, such as decision tree algorithms. One possible reason is the model's strong independence assumption. This project searches for dependencies between features and studies the consequences of applying those dependencies when classifying instances. We propose two algorithms, Backward Sequential Joining and Backward Sequential Elimination, that can be applied to improve the accuracy of the Naïve Bayes model. We then compare the accuracies of the different algorithms and draw conclusions from the results.
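To illustrate the kind of search that Backward Sequential Elimination performs, the following is a minimal sketch that greedily drops features from a Naïve Bayes classifier as long as cross-validated accuracy improves. It assumes scikit-learn's GaussianNB and cross_val_score are available; the stopping rule, scoring scheme, and Naïve Bayes variant are illustrative assumptions, not the project's actual implementation.

```python
# A minimal sketch of Backward Sequential Elimination around a Naive Bayes
# classifier (illustrative only; not the project's exact procedure).
# Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score


def bse_naive_bayes(X, y, cv=5):
    """Greedily remove features while cross-validated accuracy improves."""
    remaining = list(range(X.shape[1]))
    best_score = cross_val_score(GaussianNB(), X[:, remaining], y, cv=cv).mean()

    improved = True
    while improved and len(remaining) > 1:
        improved = False
        for f in list(remaining):
            trial = [c for c in remaining if c != f]
            score = cross_val_score(GaussianNB(), X[:, trial], y, cv=cv).mean()
            if score > best_score:
                # Dropping feature f helps, so keep it removed and
                # restart the scan over the smaller feature set.
                best_score, remaining = score, trial
                improved = True
                break
    return remaining, best_score


if __name__ == "__main__":
    from sklearn.datasets import load_iris
    X, y = load_iris(return_X_y=True)
    kept, acc = bse_naive_bayes(X, y)
    print("kept features:", kept, "cv accuracy: %.3f" % acc)
```

Backward Sequential Joining would follow the same greedy outer loop but, instead of deleting a feature, would merge a pair of features into a single joint feature whenever doing so raises the cross-validated accuracy.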
Recommended Citation
Valsan, Sanya, "Backward Sequential Feature Elimination And Joining Algorithms In Machine Learning" (2014). Master's Projects. 364.
DOI: https://doi.org/10.31979/etd.wbh4-kcgg
https://scholarworks.sjsu.edu/etd_projects/364