class HiddenMarkovModelTagger(TaggerI):
Constructor: HiddenMarkovModelTagger(symbols, states, transitions, outputs, ...)
Hidden Markov model class, a generative model for labelling sequence data. These models define the joint probability of a sequence of symbols and their labels (state transitions) as the product of the starting state probability, the probability of each state transition, and the probability of each observation being generated from each state. This is described in more detail in the module documentation.
This implementation is based on the HMM description in Chapter 8 of Huang, Acero and Hon, Spoken Language Processing, and includes an extension for training shallow HMM parsers or specialized HMMs as in Molina et al., 2002. A specialized HMM modifies training data by applying a specialization function to create a new training set that is more appropriate for sequential tagging with an HMM. A typical use case is chunking.
Parameters | |
symbols | the set of output symbols (alphabet) |
states | a set of states representing state space |
transitions | transition probabilities; Pr(s_i | s_j) is the probability of a transition to state i given the model is in state j |
outputs | output probabilities; Pr(o_k | s_i) is the probability of emitting symbol k when entering state i |
priors | initial state distribution; Pr(s_i) is the probability of starting in state i |
transform | an optional function for transforming training instances, defaults to the identity function. |
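For illustration, a minimal training-and-tagging sketch using the train class method described below (it assumes NLTK's treebank sample corpus is installed; the corpus choice and slice size are arbitrary):

    from nltk.corpus import treebank
    from nltk.tag.hmm import HiddenMarkovModelTagger

    # Train a supervised HMM tagger on part of the tagged Treebank sample.
    train_sents = treebank.tagged_sents()[:3000]
    tagger = HiddenMarkovModelTagger.train(train_sents)

    # Tag a new, pre-tokenized sentence.
    print(tagger.tag("Today is a good day .".split()))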
Class Method | train |
Train a new HiddenMarkovModelTagger using the given labeled and unlabeled training instances. Testing will be performed if test instances are provided. |
Method | __init__ |
Undocumented |
Method | __repr__ |
Undocumented |
Method | best_path |
Returns the state sequence of the optimal (most probable) path through the HMM. Uses the Viterbi algorithm to calculate this path by dynamic programming. |
Method | best_path_simple |
Returns the state sequence of the optimal (most probable) path through the HMM. Uses the Viterbi algorithm to calculate this path by dynamic programming. This uses a simple, direct method, and is included for teaching purposes. |
Method | entropy |
Returns the entropy over labellings of the given sequence. This is given by: |
Method | log_probability |
Returns the log-probability of the given symbol sequence. If the sequence is labelled, then returns the joint log-probability of the symbol, state sequence. Otherwise, uses the forward algorithm to find the log-probability over all label sequences. |
Method | point_entropy |
Returns the pointwise entropy over the possible states at each position in the chain, given the observation sequence. |
Method | probability |
Returns the probability of the given symbol sequence. If the sequence is labelled, then returns the joint probability of the symbol, state sequence. Otherwise, uses the forward algorithm to find the probability over all label sequences. |
Method | random_sample |
Randomly sample the HMM to generate a sentence of a given length. This samples the prior distribution then the observation distribution and transition distribution for each subsequent observation and state... |
Method | reset_cache |
Undocumented |
Method | tag |
Tags the sequence with the highest probability state sequence. This uses the best_path method to find the Viterbi path. |
Method | test |
Tests the HiddenMarkovModelTagger instance. |
Class Method | _train |
Undocumented |
Method | _backward_probability |
Return the backward probability matrix, a T by N array of log-probabilities, where T is the length of the sequence and N is the number of states. Each entry (t, s) gives the probability of being in state s at time t after observing the partial symbol sequence from t... |
Method | _best_path |
Undocumented |
Method | _best_path_simple |
Undocumented |
Method | _create_cache |
The cache is a tuple (P, O, X, S) where: |
Method | _exhaustive_entropy |
Undocumented |
Method | _exhaustive_point_entropy |
Undocumented |
Method | _forward_probability |
Return the forward probability matrix, a T by N array of log-probabilities, where T is the length of the sequence and N is the number of states. Each entry (t, s) gives the probability of being in state s at time t after observing the partial symbol sequence up to and including t. |
Method | _output_logprob |
Return the log probability of the given symbol being observed in the given state. |
Method | _outputs_vector |
Return a vector with log probabilities of emitting a symbol when entering states. |
Method | _sample_probdist |
Undocumented |
Method | _tag |
Undocumented |
Method | _transitions_matrix |
Return a matrix of transition log probabilities. |
Method | _update_cache |
Undocumented |
Instance Variable | _cache |
Undocumented |
Instance Variable | _outputs |
Undocumented |
Instance Variable | _priors |
Undocumented |
Instance Variable | _states |
Undocumented |
Instance Variable | _symbols |
Undocumented |
Instance Variable | _transform |
Undocumented |
Instance Variable | _transitions |
Undocumented |
Inherited from TaggerI:
Method | evaluate |
Score the accuracy of the tagger against the gold standard. Strip the tags from the gold standard text, retag it using the tagger, then compute the accuracy score. |
Method | tag_sents |
Apply self.tag() to each element of sentences, i.e. [self.tag(sent) for sent in sentences]. |
Method | _check_params |
Undocumented |
def train(cls, labeled_sequence, test_sequence=None, unlabeled_sequence=None, **kwargs):
Train a new HiddenMarkovModelTagger using the given labeled and unlabeled training instances. Testing will be performed if test instances are provided.
Parameters | |
labeled_sequence | a sequence of labeled training instances, i.e. a list of sentences, each represented as a list of (word, tag) tuples |
test_sequence | a sequence of labeled test instances |
unlabeled_sequence | a sequence of unlabeled training instances, i.e. a list of sentences represented as words |
transform:function | an optional function for transforming training instances, defaults to the identity function, see transform() |
estimator:class or function | an optional function or class that maps a condition's frequency distribution to its probability distribution, defaults to a Lidstone distribution with gamma = 0.1 |
verbose:bool | boolean flag indicating whether training should be verbose or include printed output |
max_iterations | number of Baum-Welch iterations to perform |
**kwargs | Undocumented |
Returns | |
HiddenMarkovModelTagger | a hidden Markov model tagger |
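For example, a sketch of supervised training with a held-out test set (the corpus choice and slice sizes are illustrative only); with verbose=True the accuracy on test_sequence is printed after training:

    from nltk.corpus import treebank
    from nltk.tag.hmm import HiddenMarkovModelTagger

    tagged = treebank.tagged_sents()
    train_sents, test_sents = tagged[:3000], tagged[3000:3300]

    # Train on the labeled data and evaluate on the held-out sentences.
    tagger = HiddenMarkovModelTagger.train(
        train_sents, test_sequence=test_sents, verbose=True
    )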
Returns the state sequence of the optimal (most probable) path through the HMM. Uses the Viterbi algorithm to calculate this path by dynamic programming.
Parameters | |
unlabeled_sequence | the sequence of unlabeled symbols |
Returns | |
sequence of any | the state sequence |
Returns the state sequence of the optimal (most probable) path through the HMM. Uses the Viterbi algorithm to calculate this path by dynamic programming. This uses a simple, direct method, and is included for teaching purposes.
Parameters | |
unlabeled_sequence | the sequence of unlabeled symbols |
Returns | |
sequence of any | the state sequence |
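For comparison with tag(): best_path returns only the tag sequence, while tag() pairs each symbol with its tag. A small sketch, assuming a tagger trained as in the example above (the tags shown in the comments are illustrative):

    words = "the cat sat on the mat".split()

    print(tagger.best_path(words))  # e.g. ['DT', 'NN', 'VBD', 'IN', 'DT', 'NN']
    print(tagger.tag(words))        # e.g. [('the', 'DT'), ('cat', 'NN'), ...]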
Returns the entropy over labellings of the given sequence. This is given by:
H(O) = - sum_S Pr(S | O) log Pr(S | O)
where the summation ranges over all state sequences, S. Let Z = Pr(O) = sum_S Pr(S, O), where the summation ranges over all state sequences and O is the observation sequence. The entropy can then be re-expressed as:
H = - sum_S Pr(S | O) log [ Pr(S, O) / Z ]
  = log Z - sum_S Pr(S | O) log Pr(S, O)
  = log Z - sum_S Pr(S | O) [ log Pr(S_0) + sum_t log Pr(S_t | S_{t-1}) + sum_t log Pr(O_t | S_t) ]
The order of summation for the log terms can be flipped, allowing dynamic programming to be used to calculate the entropy. Specifically, we use the forward and backward probabilities (alpha, beta) giving:
H = log Z - sum_{s0} alpha_0(s0) beta_0(s0) / Z * log Pr(s0)
          - sum_{t, si, sj} alpha_t(si) Pr(sj | si) Pr(O_{t+1} | sj) beta_{t+1}(sj) / Z * log Pr(sj | si)
          - sum_{t, st} alpha_t(st) beta_t(st) / Z * log Pr(O_t | st)
This simply uses alpha and beta to find the probabilities of partial sequences, constrained to include the given state(s) at some point in time.
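The quantity being computed can be checked by brute force on a toy model. The sketch below enumerates every state sequence using plain dicts of probabilities; the dict-based parameters are illustrative stand-ins, not the tagger's actual internal representation:

    import itertools
    from math import log

    def brute_force_entropy(prior, trans, out, states, observations):
        """H(O) = -sum_S Pr(S|O) log Pr(S|O), enumerated over all state sequences S.

        prior[s], trans[s][s2] and out[s][o] are ordinary dicts of probabilities
        (hypothetical stand-ins for the HMM's distributions).
        """
        joint = {}
        for seq in itertools.product(states, repeat=len(observations)):
            p = prior[seq[0]] * out[seq[0]][observations[0]]
            for t in range(1, len(observations)):
                p *= trans[seq[t - 1]][seq[t]] * out[seq[t]][observations[t]]
            joint[seq] = p
        z = sum(joint.values())  # Z = Pr(O) = sum_S Pr(S, O)
        return -sum((p / z) * log(p / z) for p in joint.values() if p > 0)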
Returns the log-probability of the given symbol sequence. If the sequence is labelled, then returns the joint log-probability of the symbol, state sequence. Otherwise, uses the forward algorithm to find the log-probability over all label sequences.
Parameters | |
sequence:Token | the sequence of symbols which must contain the TEXT property, and optionally the TAG property |
Returns | |
float | the log-probability of the sequence |
Returns the pointwise entropy over the possible states at each position in the chain, given the observation sequence.
Returns the probability of the given symbol sequence. If the sequence is labelled, then returns the joint probability of the symbol, state sequence. Otherwise, uses the forward algorithm to find the probability over all label sequences.
Parameters | |
sequence:Token | the sequence of symbols which must contain the TEXT property, and optionally the TAG property |
Returns | |
float | the probability of the sequence |
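Both probability and log_probability accept a sequence of (text, tag) pairs; using None as the tag marks the sequence as unlabelled, so the forward algorithm is used. A small sketch, assuming a trained tagger as above:

    tagged = [("the", "DT"), ("cat", "NN"), ("sat", "VBD")]

    # Joint log-probability of this particular tagging ...
    print(tagger.log_probability(tagged))

    # ... versus the log-probability summed over all possible taggings.
    print(tagger.log_probability([(word, None) for word, _ in tagged]))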
Randomly sample the HMM to generate a sentence of a given length. This samples the prior distribution then the observation distribution and transition distribution for each subsequent observation and state. This will mostly generate unintelligible garbage, but can provide some amusement.
Parameters | |
rng:Random (or any object with a random() method) | random number generator |
length:int | desired output length |
Returns | |
list | the randomly created state/observation sequence, generated according to the HMM's probability distributions. The SUBTOKENS have TEXT and TAG properties containing the observation and state respectively. |
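For example (the seed and length are arbitrary), assuming a trained tagger:

    import random

    # Generate a random 10-token (observation, state) sequence from the model.
    print(tagger.random_sample(random.Random(42), 10))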
Overrides nltk.tag.api.TaggerI.tag.
Tags the sequence with the highest probability state sequence. This uses the best_path method to find the Viterbi path.
Parameters | |
unlabeled_sequence | the sequence of unlabeled symbols |
Returns | |
list | a labelled sequence of symbols |
Tests the HiddenMarkovModelTagger instance.
Parameters | |
test_sequence | a sequence of labeled test instances |
verbose:bool | boolean flag indicating whether testing should be verbose or include printed output |
**kwargs | Undocumented |
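A minimal call, reusing the held-out sentences from the training sketch above:

    # Prints per-token accuracy; with verbose=True the tagged output for each
    # test sentence is also printed.
    tagger.test(test_sents, verbose=True)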
def _train(cls, labeled_sequence, test_sequence=None, unlabeled_sequence=None, transform=_identity, estimator=None, **kwargs):
Undocumented
Return the backward probability matrix, a T by N array of log-probabilities, where T is the length of the sequence and N is the number of states. Each entry (t, s) gives the log probability of observing the remaining symbol sequence from t+1 .. T, given the model is in state s at time t.
Parameters | |
unlabeled_sequence | the sequence of unlabeled symbols |
Returns | |
array | the backward log probability matrix |
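The underlying recursion can be sketched in log space as follows; this is an illustrative reimplementation with assumed matrix inputs, not the method itself:

    import numpy as np

    def backward_logprobs(log_trans, log_out, obs_indices):
        """Illustrative backward recursion in log space (not NLTK's own code).

        Given N x N transition and N x V output log-probability matrices and a
        list of observation indices, returns beta with
        beta[t, i] = log P(o_{t+1} .. o_{T-1} | state_t = i).
        """
        T, N = len(obs_indices), log_trans.shape[0]
        beta = np.zeros((T, N))  # log 1 = 0 at the final time step
        for t in range(T - 2, -1, -1):
            for i in range(N):
                beta[t, i] = np.logaddexp.reduce(
                    log_trans[i] + log_out[:, obs_indices[t + 1]] + beta[t + 1]
                )
        return beta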
The cache is a tuple (P, O, X, S) where:
- S maps symbols to integers, i.e. it is the inverse mapping from self._symbols; for each symbol s in self._symbols, the following holds: self._symbols[S[s]] == s
- O is the log output probabilities: O[i,k] = log( P(token[t]=sym[k]|tag[t]=state[i]) )
- X is the log transition probabilities: X[i,j] = log( P(tag[t]=state[j]|tag[t-1]=state[i]) )
- P is the log prior probabilities: P[i] = log( P(tag[0]=state[i]) )
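As a purely hypothetical illustration of how these four arrays fit together, the joint log-probability of a tagged sentence could be accumulated as:

    def joint_logprob_from_cache(P, O, X, S, words, tag_indices):
        """Hypothetical helper: score a tagged sentence with the cached arrays.

        words are symbols and tag_indices are state indices; both names are
        illustrative, not part of the class's API.
        """
        logp = P[tag_indices[0]] + O[tag_indices[0], S[words[0]]]
        for t in range(1, len(words)):
            prev, cur = tag_indices[t - 1], tag_indices[t]
            logp += X[prev, cur] + O[cur, S[words[t]]]
        return logp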
Return the forward probability matrix, a T by N array of log-probabilities, where T is the length of the sequence and N is the number of states. Each entry (t, s) gives the log probability of being in state s at time t after observing the partial symbol sequence up to and including t.
Parameters | |
unlabeled_sequence | the sequence of unlabeled symbols |
Returns | |
array | the forward log probability matrix |
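The recursion behind such a matrix can be sketched in log space as follows; again an illustrative reimplementation with assumed matrix inputs, not the method itself:

    import numpy as np

    def forward_logprobs(log_prior, log_trans, log_out, obs_indices):
        """Illustrative forward recursion in log space (not NLTK's own code).

        Given a length-N log-prior vector, N x N transition and N x V output
        log-probability matrices, returns alpha with
        alpha[t, s] = log P(o_0 .. o_t, state_t = s).
        """
        T, N = len(obs_indices), len(log_prior)
        alpha = np.empty((T, N))
        alpha[0] = log_prior + log_out[:, obs_indices[0]]
        for t in range(1, T):
            for j in range(N):
                alpha[t, j] = (
                    np.logaddexp.reduce(alpha[t - 1] + log_trans[:, j])
                    + log_out[j, obs_indices[t]]
                )
        return alpha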