Date of Submission

Spring 2011

Academic Program

Mathematics

Advisor

Cliona Golden

Abstract/Artist's Statement

The Naive Bayesian algorithm for classification has been a staple in machine learning for decades. Simple and efficient, the algorithm makes unrealistic independence assumptions about the data; yet it performs very well, often nearly matching the performance of far more complex modern algorithms. Only recently have researchers understood the theoretical reasons for this unreasonably good performance. In 2004, Professor Harry Zhang of the University of New Brunswick articulated the notion of a dependence-derivative factor, which defines more precisely how much Naive Bayes is harmed by certain violations of its independence assumption. In this project, I present a way to use Zhang's dependence derivatives to create classifiers similar to Naive Bayes, but with a network structure more resilient to these violations.
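To make the independence assumption concrete, the following is a minimal sketch (not the thesis's own implementation) of a Naive Bayes classifier over categorical features with Laplace smoothing; the class name, the toy weather data in the usage note, and the smoothing parameter `alpha` are illustrative assumptions, not taken from the source.

```python
from collections import defaultdict
import math

class NaiveBayes:
    """Minimal categorical Naive Bayes with Laplace smoothing.

    Assumes each feature is conditionally independent given the class --
    the "naive" assumption whose violations Zhang's dependence-derivative
    analysis quantifies.
    """

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing strength

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.class_counts = defaultdict(int)
        # counts[(c, i, v)] = times feature i took value v within class c
        self.counts = defaultdict(int)
        self.values = defaultdict(set)  # observed values per feature index
        for xs, c in zip(X, y):
            self.class_counts[c] += 1
            for i, v in enumerate(xs):
                self.counts[(c, i, v)] += 1
                self.values[i].add(v)
        self.n = len(y)
        return self

    def predict(self, xs):
        best, best_lp = None, -math.inf
        for c in self.classes:
            # log P(c) + sum_i log P(x_i | c): the product form is exactly
            # the conditional-independence assumption.
            lp = math.log(self.class_counts[c] / self.n)
            for i, v in enumerate(xs):
                num = self.counts[(c, i, v)] + self.alpha
                den = self.class_counts[c] + self.alpha * len(self.values[i])
                lp += math.log(num / den)
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

For example, trained on a tiny hypothetical dataset such as `X = [("sunny","hot"), ("sunny","mild"), ("rain","mild"), ("rain","cool")]` with labels `y = ["no","no","yes","yes"]`, the classifier scores each class by multiplying per-feature likelihoods, even when the features are in fact correlated.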

Distribution Options

Access restricted to On-Campus only

Creative Commons License

This work is licensed under a Creative Commons Attribution 3.0 License.
