class documentation

Hidden Markov model class, a generative model for labelling sequence data. These models define the joint probability of a sequence of symbols and their labels (state transitions) as the product of the starting state probability, the probability of each state transition, and the probability of each observation being generated from each state. This is described in more detail in the module documentation.

This implementation is based on the HMM description in Chapter 8 of Huang, Acero and Hon, Spoken Language Processing, and includes an extension for training shallow HMM parsers or specialized HMMs as in Molina et al., 2002. A specialized HMM modifies training data by applying a specialization function to create a new training set that is more appropriate for sequential tagging with an HMM. A typical use case is chunking.
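
A rough illustration of this factorization with hand-written toy distributions (none of the names below come from this class; it only spells out the product described above):

    priors = {"rainy": 0.6, "sunny": 0.4}
    transitions = {"rainy": {"rainy": 0.7, "sunny": 0.3},
                   "sunny": {"rainy": 0.4, "sunny": 0.6}}
    outputs = {"rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
               "sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

    def joint_probability(symbols, states):
        """P(O, S) = P(s_0) P(o_0 | s_0) * prod_t P(s_t | s_{t-1}) P(o_t | s_t)."""
        p = priors[states[0]] * outputs[states[0]][symbols[0]]
        for t in range(1, len(symbols)):
            p *= transitions[states[t - 1]][states[t]] * outputs[states[t]][symbols[t]]
        return p

    print(joint_probability(["walk", "shop"], ["sunny", "rainy"]))  # 0.4 * 0.6 * 0.4 * 0.4 = 0.0384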

Parameters
symbols: the set of output symbols (alphabet)
states: a set of states representing state space
transitions: transition probabilities; Pr(s_i | s_j) is the probability of transitioning to state i given the model is in state j
outputs: output probabilities; Pr(o_k | s_i) is the probability of emitting symbol k when entering state i
priors: initial state distribution; Pr(s_i) is the probability of starting in state i
transform: an optional function for transforming training instances, defaults to the identity function.
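
A construction sketch using these parameters. The toy weather numbers are made up, and DictionaryProbDist / DictionaryConditionalProbDist from nltk.probability are assumed here as one way to supply the required distribution objects:

    from nltk.probability import DictionaryProbDist, DictionaryConditionalProbDist
    from nltk.tag.hmm import HiddenMarkovModelTagger

    states = ["rainy", "sunny"]
    symbols = ["walk", "shop", "clean"]

    priors = DictionaryProbDist({"rainy": 0.6, "sunny": 0.4})
    transitions = DictionaryConditionalProbDist({
        "rainy": DictionaryProbDist({"rainy": 0.7, "sunny": 0.3}),
        "sunny": DictionaryProbDist({"rainy": 0.4, "sunny": 0.6}),
    })
    outputs = DictionaryConditionalProbDist({
        "rainy": DictionaryProbDist({"walk": 0.1, "shop": 0.4, "clean": 0.5}),
        "sunny": DictionaryProbDist({"walk": 0.6, "shop": 0.3, "clean": 0.1}),
    })

    model = HiddenMarkovModelTagger(symbols, states, transitions, outputs, priors)
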
Class Method train Train a new HiddenMarkovModelTagger using the given labeled and unlabeled training instances. Testing will be performed if test instances are provided.
Method __init__ Undocumented
Method __repr__ Undocumented
Method best_path Returns the state sequence of the optimal (most probable) path through the HMM. Uses the Viterbi algorithm to calculate this path by dynamic programming.
Method best_path_simple Returns the state sequence of the optimal (most probable) path through the HMM. Uses the Viterbi algorithm to calculate this path by dynamic programming. This uses a simple, direct method, and is included for teaching purposes.
Method entropy Returns the entropy over labellings of the given sequence. This is given by:
Method log_probability Returns the log-probability of the given symbol sequence. If the sequence is labelled, then returns the joint log-probability of the symbol, state sequence. Otherwise, uses the forward algorithm to find the log-probability over all label sequences.
Method point_entropy Returns the pointwise entropy over the possible states at each position in the chain, given the observation sequence.
Method probability Returns the probability of the given symbol sequence. If the sequence is labelled, then returns the joint probability of the symbol, state sequence. Otherwise, uses the forward algorithm to find the probability over all label sequences.
Method random_sample Randomly sample the HMM to generate a sentence of a given length. This samples the prior distribution then the observation distribution and transition distribution for each subsequent observation and state...
Method reset_cache Undocumented
Method tag Tags the sequence with the highest probability state sequence. This uses the best_path method to find the Viterbi path.
Method test Tests the HiddenMarkovModelTagger instance.
Class Method _train Undocumented
Method _backward_probability Return the backward probability matrix, a T by N array of log-probabilities, where T is the length of the sequence and N is the number of states. Each entry (t, s) gives the probability of being in state s at time t after observing the partial symbol sequence from t...
Method _best_path Undocumented
Method _best_path_simple Undocumented
Method _create_cache The cache is a tuple (P, O, X, S) where:
Method _exhaustive_entropy Undocumented
Method _exhaustive_point_entropy Undocumented
Method _forward_probability Return the forward probability matrix, a T by N array of log-probabilities, where T is the length of the sequence and N is the number of states. Each entry (t, s) gives the probability of being in state s at time t after observing the partial symbol sequence up to and including t.
Method _output_logprob No summary
Method _outputs_vector Return a vector with log probabilities of emitting a symbol when entering states.
Method _sample_probdist Undocumented
Method _tag Undocumented
Method _transitions_matrix Return a matrix of transition log probabilities.
Method _update_cache Undocumented
Instance Variable _cache Undocumented
Instance Variable _outputs Undocumented
Instance Variable _priors Undocumented
Instance Variable _states Undocumented
Instance Variable _symbols Undocumented
Instance Variable _transform Undocumented
Instance Variable _transitions Undocumented

Inherited from TaggerI:

Method evaluate Score the accuracy of the tagger against the gold standard. Strip the tags from the gold standard text, retag it using the tagger, then compute the accuracy score.
Method tag_sents Apply self.tag() to each element of sentences. I.e.:
Method _check_params Undocumented
@classmethod
def train(cls, labeled_sequence, test_sequence=None, unlabeled_sequence=None, **kwargs): (source)

Train a new HiddenMarkovModelTagger using the given labeled and unlabeled training instances. Testing will be performed if test instances are provided.

Parameters
labeled_sequence: list(list) - a sequence of labeled training instances, i.e. a list of sentences represented as tuples
test_sequence: list(list) - a sequence of labeled test instances
unlabeled_sequence: list(list) - a sequence of unlabeled training instances, i.e. a list of sentences represented as words
transform: function - an optional function for transforming training instances, defaults to the identity function, see transform()
estimator: class or function - an optional function or class that maps a condition's frequency distribution to its probability distribution, defaults to a Lidstone distribution with gamma = 0.1
verbose: bool - boolean flag indicating whether training should be verbose or include printed output
max_iterations: int - number of Baum-Welch iterations to perform
**kwargs - Undocumented
Returns
HiddenMarkovModelTagger - a hidden Markov model tagger
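
A usage sketch, assuming the Penn Treebank sample shipped with NLTK has been downloaded (the 3000-sentence split is arbitrary):

    from nltk.corpus import treebank
    from nltk.tag.hmm import HiddenMarkovModelTagger

    tagged_sents = treebank.tagged_sents()
    train_sents, test_sents = tagged_sents[:3000], tagged_sents[3000:]

    # Supervised training; passing test_sequence also reports accuracy.
    tagger = HiddenMarkovModelTagger.train(train_sents, test_sequence=test_sents)
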
def __init__(self, symbols, states, transitions, outputs, priors, transform=_identity): (source)

Undocumented

def __repr__(self): (source)

Undocumented

def best_path(self, unlabeled_sequence): (source)

Returns the state sequence of the optimal (most probable) path through the HMM. Uses the Viterbi algorithm to calculate this path by dynamic programming.

Parameters
unlabeled_sequence: list - the sequence of unlabeled symbols
Returns
sequence of any - the state sequence
def best_path_simple(self, unlabeled_sequence): (source)

Returns the state sequence of the optimal (most probable) path through the HMM. Uses the Viterbi algorithm to calculate this path by dynamic programming. This uses a simple, direct method, and is included for teaching purposes.

Parameters
unlabeled_sequence: list - the sequence of unlabeled symbols
Returns
sequence of any - the state sequence
def entropy(self, unlabeled_sequence): (source)

Returns the entropy over labellings of the given sequence. This is given by:

H(O) = - sum_S Pr(S | O) log Pr(S | O)

where the summation ranges over all state sequences, S. Let Z = Pr(O) = sum_S Pr(S, O), where the summation ranges over all state sequences and O is the observation sequence. As such the entropy can be re-expressed as:

H = - sum_S Pr(S | O) log [ Pr(S, O) / Z ]
  = log Z - sum_S Pr(S | O) log Pr(S, O)
  = log Z - sum_S Pr(S | O) [ log Pr(S_0) + sum_t log Pr(S_t | S_{t-1}) + sum_t log Pr(O_t | S_t) ]

The order of summation for the log terms can be flipped, allowing dynamic programming to be used to calculate the entropy. Specifically, we use the forward and backward probabilities (alpha, beta) giving:

H = log Z - [ sum_s0 alpha_0(s0) beta_0(s0) / Z * log Pr(s0)
            + sum_t,si,sj alpha_t(si) Pr(sj | si) Pr(O_t+1 | sj) beta_t(sj) / Z * log Pr(sj | si)
            + sum_t,st alpha_t(st) beta_t(st) / Z * log Pr(O_t | st) ]

This simply uses alpha and beta to find the probabilities of partial sequences, constrained to include the given state(s) at some point in time.
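
The quantity can also be checked by brute force on a toy model, enumerating every state sequence instead of using alpha and beta (hand-written numbers, natural logarithms; only an illustration of the definition above, not this method's implementation):

    import itertools, math

    priors = {"rainy": 0.6, "sunny": 0.4}
    trans = {"rainy": {"rainy": 0.7, "sunny": 0.3},
             "sunny": {"rainy": 0.4, "sunny": 0.6}}
    out = {"rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
           "sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

    def joint(obs, states):
        p = priors[states[0]] * out[states[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= trans[states[t - 1]][states[t]] * out[states[t]][obs[t]]
        return p

    obs = ["walk", "shop", "clean"]
    all_S = list(itertools.product(priors, repeat=len(obs)))
    Z = sum(joint(obs, S) for S in all_S)  # Z = Pr(O)
    H = -sum(joint(obs, S) / Z * math.log(joint(obs, S) / Z) for S in all_S)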

def log_probability(self, sequence): (source)

Returns the log-probability of the given symbol sequence. If the sequence is labelled, then returns the joint log-probability of the symbol, state sequence. Otherwise, uses the forward algorithm to find the log-probability over all label sequences.

Parameters
sequence: Token - the sequence of symbols which must contain the TEXT property, and optionally the TAG property
Returns
float - the log-probability of the sequence
def point_entropy(self, unlabeled_sequence): (source)

Returns the pointwise entropy over the possible states at each position in the chain, given the observation sequence.

def probability(self, sequence): (source)

Returns the probability of the given symbol sequence. If the sequence is labelled, then returns the joint probability of the symbol, state sequence. Otherwise, uses the forward algorithm to find the probability over all label sequences.

Parameters
sequence: Token - the sequence of symbols which must contain the TEXT property, and optionally the TAG property
Returns
float - the probability of the sequence
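
A usage sketch with the toy model built under the constructor parameters above; it assumes the convention (true of recent NLTK versions, but worth checking against the version in use) that a sequence is a list of (symbol, tag) pairs, with the tag set to None for unlabeled input:

    labelled = [("walk", "sunny"), ("shop", "rainy")]
    unlabelled = [("walk", None), ("shop", None)]

    print(model.probability(labelled))    # joint Pr(O, S)
    print(model.probability(unlabelled))  # Pr(O), summed over all state sequences
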
def random_sample(self, rng, length): (source)

Randomly sample the HMM to generate a sentence of a given length. This samples the prior distribution then the observation distribution and transition distribution for each subsequent observation and state. This will mostly generate unintelligible garbage, but can provide some amusement.

Parameters
rng: Random (or any object with a random() method) - random number generator
length: int - desired output length
Returns
list - the randomly created state/observation sequence, generated according to the HMM's probability distributions. The SUBTOKENS have TEXT and TAG properties containing the observation and state respectively.
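
A usage sketch with the toy model built earlier (the fixed seed only makes the sample reproducible):

    import random

    print(model.random_sample(random.Random(0), 10))  # ten sampled (observation, state) pairs
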
def reset_cache(self): (source)

Undocumented

def tag(self, unlabeled_sequence): (source)

Tags the sequence with the highest probability state sequence. This uses the best_path method to find the Viterbi path.

Parameters
unlabeled_sequence: list - the sequence of unlabeled symbols
Returns
list - a labelled sequence of symbols
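
A usage sketch (assumes the NLTK treebank sample is available, as in the train() example above):

    from nltk.corpus import treebank
    from nltk.tag.hmm import HiddenMarkovModelTagger

    tagger = HiddenMarkovModelTagger.train(treebank.tagged_sents()[:3000])
    sentence = "Today is a good day .".split()
    print(tagger.tag(sentence))        # [(word, tag), ...] along the Viterbi path
    print(tagger.best_path(sentence))  # the tag sequence alone
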
def test(self, test_sequence, verbose=False, **kwargs): (source)

Tests the HiddenMarkovModelTagger instance.

Parameters
test_sequence: list(list) - a sequence of labeled test instances
verbose: bool - boolean flag indicating whether testing should be verbose or include printed output
**kwargs - Undocumented
@classmethod
def _train(cls, labeled_sequence, test_sequence=None, unlabeled_sequence=None, transform=_identity, estimator=None, **kwargs): (source)

Undocumented

def _backward_probability(self, unlabeled_sequence): (source)

Return the backward probability matrix, a T by N array of log-probabilities, where T is the length of the sequence and N is the number of states. Each entry (t, s) gives the probability of being in state s at time t after observing the partial symbol sequence from t .. T.

Parameters
unlabeled_sequence: list - the sequence of unlabeled symbols
Returns
array - the backward log probability matrix
def _best_path(self, unlabeled_sequence): (source)

Undocumented

def _best_path_simple(self, unlabeled_sequence): (source)

Undocumented

def _create_cache(self): (source)

The cache is a tuple (P, O, X, S) where:

  • S maps symbols to integers. I.e., it is the inverse mapping from self._symbols; for each symbol s in self._symbols, the following is true:

    self._symbols[S[s]] == s
    
  • O is the log output probabilities:

    O[i,k] = log( P(token[t]=sym[k]|tag[t]=state[i]) )
    
  • X is the log transition probabilities:

    X[i,j] = log( P(tag[t]=state[j]|tag[t-1]=state[i]) )
    
  • P is the log prior probabilities:

    P[i] = log( P(tag[0]=state[i]) )
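
A hypothetical sketch of how arrays shaped like this cache could drive a Viterbi decode (plain numpy; not the class's actual implementation):

    import numpy as np

    def viterbi(symbols, P, O, X, S):
        seq = [S[sym] for sym in symbols]       # map symbols to integer indices
        T, N = len(seq), len(P)
        delta = np.empty((T, N))
        back = np.zeros((T, N), dtype=int)
        delta[0] = P + O[:, seq[0]]             # log prior + log emission
        for t in range(1, T):
            scores = delta[t - 1][:, None] + X  # scores[i, j]: best path ending in i, then i -> j
            back[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + O[:, seq[t]]
        path = [int(delta[T - 1].argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return list(reversed(path))             # state indices for t = 0 .. T-1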
    
def _exhaustive_entropy(self, unlabeled_sequence): (source)

Undocumented

def _exhaustive_point_entropy(self, unlabeled_sequence): (source)

Undocumented

def _forward_probability(self, unlabeled_sequence): (source)

Return the forward probability matrix, a T by N array of log-probabilities, where T is the length of the sequence and N is the number of states. Each entry (t, s) gives the probability of being in state s at time t after observing the partial symbol sequence up to and including t.

Parameters
unlabeled_sequence: list - the sequence of unlabeled symbols
Returns
array - the forward log probability matrix
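
A hypothetical log-space forward recursion over matrices shaped like the (P, O, X, S) cache described under _create_cache (plain numpy; not the class's actual code):

    import numpy as np

    def forward(symbols, P, O, X, S):
        seq = [S[sym] for sym in symbols]
        T, N = len(seq), len(P)
        alpha = np.empty((T, N))
        alpha[0] = P + O[:, seq[0]]
        for t in range(1, T):
            # logsumexp over predecessor states i, for each current state j
            alpha[t] = np.logaddexp.reduce(alpha[t - 1][:, None] + X, axis=0) + O[:, seq[t]]
        return alpha  # log Pr(O) = logaddexp.reduce(alpha[T - 1])
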
def _output_logprob(self, state, symbol): (source)
Returns
float - the log probability of the symbol being observed in the given state
def _outputs_vector(self, symbol): (source)

Return a vector with log probabilities of emitting a symbol when entering states.

def _sample_probdist(self, probdist, p, samples): (source)

Undocumented

def _tag(self, unlabeled_sequence): (source)

Undocumented

def _transitions_matrix(self): (source)

Return a matrix of transition log probabilities.

def _update_cache(self, symbols): (source)

Undocumented

_cache = (source)

Undocumented

_outputs = (source)

Undocumented

_priors = (source)

Undocumented

_states = (source)

Undocumented

_symbols = (source)

Undocumented

_transform = (source)

Undocumented

_transitions = (source)

Undocumented