The Naive Bayes algorithm has been a staple of machine-learning classification for decades. Simple and efficient, it makes unrealistic independence assumptions about the data, yet it performs remarkably well, often nearly matching the performance of far more complex modern algorithms. Only recently have researchers come to understand the theoretical reasons for this unreasonably good performance. In 2004, Professor Harry Zhang of the University of New Brunswick articulated the notion of a dependence-derivative factor, which defines more precisely how much Naive Bayes is harmed by particular violations of its independence assumption. In this project, I present a way to use Zhang's dependence derivatives to create classifiers similar to Naive Bayes, but with a network structure more resilient to these violations.
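To make the independence assumption concrete, here is a minimal sketch of a categorical Naive Bayes classifier (an illustrative toy, not the project's implementation): the class-conditional probability P(x | c) is approximated as the product of per-feature likelihoods, which is exactly the assumption the project's dependence-derivative-based structures aim to relax.

```python
from collections import defaultdict

def train(samples, labels):
    """samples: list of feature tuples; labels: list of class labels."""
    class_counts = defaultdict(int)
    feat_counts = defaultdict(int)  # (class, feature_index, value) -> count
    for x, c in zip(samples, labels):
        class_counts[c] += 1
        for i, v in enumerate(x):
            feat_counts[(c, i, v)] += 1
    return class_counts, feat_counts

def predict(x, class_counts, feat_counts):
    n = sum(class_counts.values())
    best_c, best_p = None, -1.0
    for c, cc in class_counts.items():
        p = cc / n  # prior P(c)
        for i, v in enumerate(x):
            # Laplace-smoothed likelihood P(x_i = v | c); the product over
            # features is the "naive" conditional-independence assumption
            # (binary feature values assumed here, hence the +2 denominator)
            p *= (feat_counts[(c, i, v)] + 1) / (cc + 2)
        if p > best_p:
            best_c, best_p = c, p
    return best_c

# Tiny hypothetical dataset: two binary features, classes "A" and "B"
X = [(1, 1), (1, 0), (0, 0), (0, 1)]
y = ["A", "A", "B", "B"]
cc, fc = train(X, y)
print(predict((1, 1), cc, fc))  # → A
```

When features are in fact correlated given the class, the product above double-counts evidence; quantifying that harm is what Zhang's dependence-derivative analysis addresses.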
Access restricted to On-Campus only
Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.
Barrow, Lionel R., "A Method for Automatically Generating Network Structure in Bayesian Classifiers" (2011). Senior Projects Spring 2011. 181.