Author

Mehal Patel

Publication Date

Spring 2014

Degree Type

Master's Project

Department

Computer Science

Abstract

Bayesian classifiers assign an unseen observation to one of a set of probable class categories (also called class labels). Classification applications have one or more features and one or more class variables. The Naïve Bayes classifier is one of the simplest classifiers used in practice. Although Naïve Bayes performs well in practice (in terms of prediction accuracy), it assumes strong independence among features given the class variable. This assumption may reduce prediction accuracy when two or more features are dependent given the class variable. To improve prediction accuracy, we can relax the Naïve Bayes assumption and allow dependencies among features given the class variable. Capturing feature dependencies is likely to improve prediction accuracy for classification applications in which two or more features are correlated given the class variable. The purpose of this project is to exploit these feature dependencies, discovered from the input data, to improve prediction. Probabilistic graphical model concepts are used to learn Bayesian classifiers effectively and efficiently.
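The independence assumption the abstract describes can be made concrete with a small sketch. This is not the project's implementation; it is a minimal categorical Naïve Bayes in plain Python that scores each label as P(C) multiplied by the per-feature conditionals P(f_i | C), treating features as conditionally independent given the class. The toy weather data, function names, and Laplace smoothing choice are illustrative assumptions.

```python
# Minimal Naive Bayes sketch (illustrative only, not the project's code).
# Scores each label as P(C) * prod_i P(f_i | C), i.e. features are treated
# as conditionally independent given the class variable.
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (feature_tuple, label).
    Returns class counts and per-(feature index, label) value counts."""
    priors = Counter(label for _, label in examples)
    cond = defaultdict(Counter)  # (feature_index, label) -> value counts
    for feats, label in examples:
        for i, value in enumerate(feats):
            cond[(i, label)][value] += 1
    return priors, cond

def predict(priors, cond, feats, vocab_sizes):
    """Pick the label maximizing the Naive Bayes score.
    vocab_sizes[i] = number of distinct values feature i can take,
    used for add-one (Laplace) smoothing of unseen values."""
    total = sum(priors.values())
    best_label, best_score = None, -1.0
    for label, count in priors.items():
        score = count / total  # prior P(C)
        for i, value in enumerate(feats):
            counts = cond[(i, label)]
            # Laplace smoothing so unseen feature values get nonzero mass
            score *= (counts[value] + 1) / (sum(counts.values()) + vocab_sizes[i])
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy data: (outlook, temperature) -> play decision
examples = [
    (("sunny", "hot"), "no"),
    (("sunny", "mild"), "no"),
    (("rain", "mild"), "yes"),
    (("rain", "cool"), "yes"),
]
priors, cond = train(examples)
print(predict(priors, cond, ("rain", "mild"), vocab_sizes=[2, 3]))  # -> yes
```

When features are correlated given the class (e.g. outlook and temperature carry overlapping information), the product above over-counts that shared evidence; relaxing the independence assumption, as the project proposes, addresses exactly this.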