Monday, February 25, 2008

Worth Reading

Darroch, J.N. & Ratcliff, D. Generalized iterative scaling for log-linear models. Annals of Mathematical Statistics, 1972
comment: A quite readable paper on a special case of probability estimation (log-linear models, a kind of exponential-family distribution) that can be fitted by an iterative procedure somewhat like EM, though not exactly the same. The convergence argument rests on the non-negativity of the Kullback-Leibler divergence between probability distributions: instead of maximizing an expected complete-data likelihood as EM does, each iteration drives this divergence down. An interesting comparison could be made between this algorithm and EM, but I don't have time to do that, and maybe nobody is interested in that specific aspect.
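To make the iterative-scaling idea concrete, here is a minimal sketch of the GIS update on a toy problem. Everything below (the four outcomes, the features, the empirical counts) is made up for illustration; the only thing taken from the paper is the update rule itself, which requires that the feature values sum to the same constant C at every outcome.

```python
import math

# Domain of 4 outcomes; two binary features per outcome, plus a "slack"
# feature so the feature values sum to C = 2 everywhere (GIS requirement).
# These features and counts are hypothetical toy data.
features = {
    'a': [1, 1, 0],
    'b': [1, 0, 1],
    'c': [0, 1, 1],
    'd': [0, 0, 2],
}
C = 2  # constant feature sum

# Made-up empirical distribution over outcomes.
empirical = {'a': 0.4, 'b': 0.3, 'c': 0.2, 'd': 0.1}

def model_probs(w):
    # Log-linear model: p(x) proportional to exp(sum_i w_i * f_i(x)).
    scores = {x: math.exp(sum(wi * fi for wi, fi in zip(w, f)))
              for x, f in features.items()}
    z = sum(scores.values())
    return {x: s / z for x, s in scores.items()}

def expected_counts(dist):
    # Expected feature values under a distribution over outcomes.
    e = [0.0] * len(next(iter(features.values())))
    for x, f in features.items():
        for i, fi in enumerate(f):
            e[i] += dist[x] * fi
    return e

w = [0.0, 0.0, 0.0]
target = expected_counts(empirical)
for _ in range(5000):
    model_exp = expected_counts(model_probs(w))
    # GIS update: w_i += (1/C) * log(empirical_count_i / model_count_i)
    w = [wi + math.log(t / m) / C
         for wi, t, m in zip(w, target, model_exp)]

p = model_probs(w)
```

At convergence the model's expected feature counts match the empirical ones, which is exactly the constraint the maximum-entropy solution satisfies; the non-negativity of the KL divergence is what guarantees each update makes progress.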

Andrew McCallum, Dayne Freitag & Fernando Pereira, Maximum Entropy Markov Models for Information Extraction and Segmentation. ICML 2000
comment: A probabilistic graphical model like an HMM, but it is not generative; it is descriptive instead (I name it this way). You construct the model by putting together the state transitions you want, conditioned on the observations, rather than by modeling how the symbols are generated. Yet decoding is as efficient as in an HMM.
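The "put together the transitions you want" point can be sketched as follows: each previous state gets a conditional distribution P(next state | previous state, observation), and decoding is ordinary Viterbi over those conditionals. The tags, observations, and probability tables below are hypothetical stand-ins; a real MEMM would learn each conditional with a maximum-entropy classifier.

```python
import math

# P(next | prev, obs): hand-written conditional tables (hypothetical).
# In an MEMM these would come from per-state maxent classifiers over
# features of the observation.
def p_next(prev, obs):
    if obs == 'run':
        return {'N': 0.3, 'V': 0.7} if prev == 'N' else {'N': 0.6, 'V': 0.4}
    return {'N': 0.8, 'V': 0.2} if prev == 'N' else {'N': 0.7, 'V': 0.3}

def viterbi(obs_seq, start='N'):
    # Standard Viterbi, exactly as for an HMM, but scoring with the
    # conditional P(next | prev, obs) instead of transition * emission.
    best = {start: (0.0, [start])}  # state -> (log-prob, path incl. start)
    for obs in obs_seq:
        nxt = {}
        for prev, (lp, path) in best.items():
            for s, prob in p_next(prev, obs).items():
                cand = lp + math.log(prob)
                if s not in nxt or cand > nxt[s][0]:
                    nxt[s] = (cand, path + [s])
        best = nxt
    return max(best.values())[1]

tags = viterbi(['dogs', 'run'])  # path includes the start state 'N'
```

Because the model is conditioned on the observations rather than generating them, there is no emission distribution to normalize over the symbol alphabet, yet the dynamic program has the same cost as HMM decoding.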
