Wednesday, July 15, 2009

Expectation Maximization

I am going over the following explanation and example of expectation maximization from the IR book (book page: http://www-csli.stanford.edu/~hinrich/information-retrieval-book.html ; PDF: http://nlp.stanford.edu/IR-book/pdf/irbookonlinereading.pdf ). It is a very readable book; all books should be at least that readable (or, preferably, like the Head First series).

Turns out the 2007 version had erroneous calculations, and it did not occur to me to check the errata. I am now working with the 2009 version of the book.

My notes begin with K-Means, because Expectation Maximization is a generalization of K-Means (the notes also mention an edit distance, but ignore that). Honestly speaking, I should have begun with EM, but that will come in the next, more refined version.
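To make the K-Means half concrete, here is a minimal sketch in Python. This is my own illustration, not code from the book: the "E-step" analogue hard-assigns each point to its nearest centroid, and the "M-step" analogue recomputes each centroid as the mean of its assigned points. The deterministic "first k points" initialization is just for simplicity; real implementations seed randomly or with k-means++.

```python
def kmeans(points, k, iters=20):
    """Plain K-Means on a list of coordinate tuples (illustrative sketch)."""
    # Simplistic deterministic init: take the first k points as centroids.
    centroids = [points[i] for i in range(k)]
    for _ in range(iters):
        # "E-step": hard-assign every point to its closest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # "M-step": move each non-empty centroid to its cluster's mean.
        for j, cluster in enumerate(clusters):
            if cluster:
                centroids[j] = tuple(sum(xs) / len(cluster)
                                     for xs in zip(*cluster))
    return centroids
```

With two well-separated blobs, e.g. `kmeans([(0, 0), (0, 1), (10, 10), (10, 11)], 2)`, the centroids settle at (0, 0.5) and (10, 10.5) within a few iterations.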

The book's pages:

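The sense in which EM generalizes K-Means is that it softens the hard assignments into fractional "responsibilities." Here is a sketch of that soft version, again my own illustration (a soft K-Means, i.e. an isotropic mixture flavor) rather than the book's worked example; `beta`, an inverse-variance-like stiffness, is an assumed parameter.

```python
import math

def soft_kmeans(points, centroids, beta=1.0, iters=20):
    """EM-flavored clustering: fractional assignments instead of hard ones."""
    for _ in range(iters):
        # E-step: responsibility of centroid k for point p is proportional
        # to exp(-beta * squared distance), normalized over the centroids.
        resp = []
        for p in points:
            w = [math.exp(-beta * sum((a - b) ** 2 for a, b in zip(p, c)))
                 for c in centroids]
            total = sum(w)
            resp.append([x / total for x in w])
        # M-step: each centroid becomes the responsibility-weighted mean.
        dim = len(points[0])
        centroids = [
            tuple(sum(r[k] * p[d] for r, p in zip(resp, points)) /
                  sum(r[k] for r in resp)
                  for d in range(dim))
            for k in range(len(centroids))
        ]
    return centroids
```

As `beta` grows, the responsibilities approach 0/1 and the updates collapse back to ordinary K-Means, which is exactly the generalization relationship the notes describe.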
It is a waste of human intellect to learn one framework after another, circling from one technology to the next. They all do essentially the same thing, just differently.

But machine learning, and fields like it, hold promise. This one algorithm, and the many better ones beyond it, matter. Perhaps this kind of learning has the potential to liberate humankind.

