Package Bio :: Package HMM :: Module MarkovModel :: Class HiddenMarkovModel

Class HiddenMarkovModel


object --+
         |
        HiddenMarkovModel

Represent a hidden Markov model that can be used for state estimation.
Instance Methods

__init__(self, initial_prob, transition_prob, emission_prob, transition_pseudo, emission_pseudo)
Initialize a Markov model.

get_blank_transitions(self)
Get the default transitions for the model.

get_blank_emissions(self)
Get the starting default emissions for each sequence.

transitions_from(self, state_letter)
Get all destination states to which there are transitions from the state_letter source state.

transitions_to(self, state_letter)
Get all source states from which there are transitions to the state_letter destination state.

viterbi(self, sequence, state_alphabet)
Calculate the most probable state path using the Viterbi algorithm.

_log_transform(self, probability)
Return the log transform of the given probability dictionary.

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties

Inherited from object: __class__

Method Details

__init__(self, initial_prob, transition_prob, emission_prob, transition_pseudo, emission_pseudo)
(Constructor)


Initialize a Markov model.

Note: You should use the MarkovModelBuilder class instead of instantiating this class directly (see the sketch after the argument list below).

Arguments:

o initial_prob -- A dictionary of initial probabilities for all states.

o transition_prob -- A dictionary of transition probabilities for all possible transitions in the sequence.

o emission_prob -- A dictionary of emission probabilities for all possible emissions from the sequence states.

o transition_pseudo -- Pseudo-counts used for the transitions when counting to estimate the transition probabilities.

o emission_pseudo -- Pseudo-counts used for the emissions when counting to estimate the emission probabilities.

Overrides: object.__init__
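
The note above points to MarkovModelBuilder as the supported way to create one of these models. Below is a minimal, hedged sketch of that route; the two states ("F"/"L", a fair and a loaded die), the emission letters and all probability values are illustrative, and alphabet handling differs between Biopython releases (Alphabet objects in older ones, plain letter tuples in newer ones).

    from Bio.HMM.MarkovModel import MarkovModelBuilder

    # Illustrative letters: a fair/loaded pair of states emitting die rolls 1-6.
    state_letters = ("F", "L")
    emission_letters = ("1", "2", "3", "4", "5", "6")

    builder = MarkovModelBuilder(state_letters, emission_letters)
    builder.allow_all_transitions()            # permit every state-to-state move
    builder.set_equal_probabilities()          # even baseline for transitions/emissions
    builder.set_initial_probabilities({"F": 0.8, "L": 0.2})

    # Override a few scores; untouched entries keep their baseline values.
    builder.set_transition_score("F", "L", 0.05)
    builder.set_transition_score("L", "F", 0.10)
    builder.set_emission_score("L", "6", 0.5)

    model = builder.get_markov_model()         # a HiddenMarkovModel instance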

get_blank_transitions(self)


Get the default transitions for the model.

Returns a dictionary of all of the default transitions between any two letters in the sequence alphabet. The dictionary is structured with keys as (letter1, letter2) and values as the starting number of transitions.
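
For example, with the hypothetical two-state ("F", "L") model sketched under __init__ and all transitions allowed, the returned mapping would have this shape (the starting counts shown are placeholders):

    blank = model.get_blank_transitions()
    # e.g. {("F", "F"): 0, ("F", "L"): 0, ("L", "F"): 0, ("L", "L"): 0}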

get_blank_emissions(self)


Get the starting default emissions for each sequence.

This returns a dictionary of the default emissions for each letter. The dictionary is structured with keys as (seq_letter, emission_letter) and values as the starting number of emissions.
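
Continuing the same hypothetical model, where states "F" and "L" emit die rolls "1" through "6", the shape would be (placeholder counts again):

    blank = model.get_blank_emissions()
    # e.g. {("F", "1"): 0, ("F", "2"): 0, ..., ("L", "5"): 0, ("L", "6"): 0}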

transitions_from(self, state_letter)


Get all destination states to which there are transitions from the state_letter source state.

This returns all letters which the given state_letter can transition to. An empty list is returned if state_letter has no outgoing transitions.

transitions_to(self, state_letter)


Get all source states from which there are transitions to the state_letter destination state.

This returns all letters from which the given state_letter can be reached. An empty list is returned if state_letter is unreachable.
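
A small usage sketch covering this method and transitions_from above, against the hypothetical fully connected two-state model (the order of the returned lists is not guaranteed):

    model.transitions_from("F")   # destination states reachable from "F", e.g. ["F", "L"]
    model.transitions_to("L")     # source states with a transition into "L", e.g. ["F", "L"]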

viterbi(self, sequence, state_alphabet)


Calculate the most probable state path using the Viterbi algorithm.

This implements the Viterbi algorithm (described on pages 55-57 of Durbin et al., on which this implementation is based) to decode the state path given a sequence of emissions; a usage sketch follows the argument list below.

Arguments:

o sequence -- A Seq object with the emission sequence that we want to decode.

o state_alphabet -- The alphabet of the possible state sequences that can be generated.
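
A hedged decoding sketch using the hypothetical model built under __init__ above; the emission sequence is made up, and whether plain strings, Seq objects or Alphabet-based sequences are expected depends on the Biopython release.

    from Bio.Seq import Seq

    rolls = Seq("1662663266")                  # made-up emission sequence
    states, score = model.viterbi(rolls, state_letters)
    print(states)                              # most probable state path (one state letter per emission)
    print(score)                               # score of that path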

_log_transform(self, probability)


Return the log transform of the given probability dictionary.

When calculating the Viterbi equation, add logs of probabilities rather than multiplying probabilities, to avoid underflow errors. This method returns a new dictionary with the same keys as the given dictionary and log-transformed values.
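
A minimal sketch of that idea (not the library's exact code): keep the keys and take the natural log of each value so the Viterbi recursion can add scores; the handling of zero probabilities here is an assumption and may differ from the real method.

    from math import log

    def log_transform(probability):
        # Map each probability to its natural log; zero gets -inf as a stand-in.
        return {key: (log(value) if value > 0 else float("-inf"))
                for key, value in probability.items()}

    log_transform({("F", "F"): 0.95, ("F", "L"): 0.05})
    # -> {('F', 'F'): -0.0512..., ('F', 'L'): -2.9957...}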