Bio.HMM.DynamicProgramming module
Dynamic Programming algorithms for general usage.
This module contains classes that implement general-purpose dynamic programming algorithms, namely the forward and backward algorithms for hidden Markov models.
- class Bio.HMM.DynamicProgramming.AbstractDPAlgorithms(markov_model, sequence)
Bases:
object
An abstract class to calculate forward and backward probabilities.
This class should not be instantiated directly, but should be used through a derived class which implements proper scaling of variables.
This class is just meant to encapsulate the basic forward and backward algorithms, and allow derived classes to deal with the problems of multiplying probabilities.
Derived classes of this class must implement:
_forward_recursion – Calculate the forward values in the recursion step, using some technique to prevent underflow errors.
_backward_recursion – Calculate the backward values in the recursion step, using some technique to prevent underflow errors.
- __init__(markov_model, sequence)
Initialize to calculate forward and backward probabilities.
- Arguments:
markov_model – The current Markov model we are working with.
sequence – A training sequence containing a set of emissions.
- forward_algorithm()
Calculate sequence probability using the forward algorithm.
This implements the forward algorithm, as described on pp. 57-58 of Durbin et al.
- Returns:
A dictionary containing the forward variables. This has keys of the form (state letter, position in the training sequence), and values containing the calculated forward variable.
The calculated probability of the sequence is also returned.
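The recursion can be pictured with the minimal sketch below. It uses plain dictionaries for the transition and emission probabilities rather than the Bio.HMM MarkovModel class, so every name in it (forward_sketch, transitions, emissions, initial) is illustrative and not part of this module's API; the real implementation additionally guards against underflow, which this sketch does not:

    def forward_sketch(states, transitions, emissions, initial, seq):
        """Unscaled forward algorithm: f[k, i] = e_k(x_i) * sum_j f[j, i-1] * a_jk."""
        f = {}
        # Initialisation: position 0 uses the initial state distribution.
        for k in states:
            f[(k, 0)] = initial[k] * emissions[(k, seq[0])]
        # Recursion over the remaining positions of the emission sequence.
        for i in range(1, len(seq)):
            for k in states:
                f[(k, i)] = emissions[(k, seq[i])] * sum(
                    f[(j, i - 1)] * transitions[(j, k)] for j in states
                )
        # Termination: summing the final column gives the sequence probability.
        seq_prob = sum(f[(k, len(seq) - 1)] for k in states)
        return f, seq_prob

    # Toy two-state example; the numbers are made up for illustration.
    states = ("A", "B")
    transitions = {("A", "A"): 0.7, ("A", "B"): 0.3, ("B", "A"): 0.4, ("B", "B"): 0.6}
    emissions = {("A", "x"): 0.9, ("A", "y"): 0.1, ("B", "x"): 0.2, ("B", "y"): 0.8}
    initial = {"A": 0.5, "B": 0.5}
    forward_vars, seq_prob = forward_sketch(states, transitions, emissions, initial, "xxy")

The keys of forward_vars in this sketch follow the same (state letter, position in the training sequence) convention as the dictionary documented above.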
- backward_algorithm()
Calculate sequence probability using the backward algorithm.
This implements the backward algorithm, as described on pp. 58-59 of Durbin et al.
- Returns:
A dictionary containing the backward variables. This has keys of the form (state letter, position in the training sequence), and values containing the calculated backward variable.
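The backward recursion can be sketched in the same illustrative style, again with plain dictionaries and without the underflow protection that the concrete classes below provide:

    def backward_sketch(states, transitions, emissions, seq):
        """Unscaled backward algorithm: b[k, i] = sum_j a_kj * e_j(x_{i+1}) * b[j, i+1]."""
        b = {}
        last = len(seq) - 1
        # Initialisation: the backward variable is 1 at the final position.
        for k in states:
            b[(k, last)] = 1.0
        # Recursion, working backwards from the end of the sequence.
        for i in range(last - 1, -1, -1):
            for k in states:
                b[(k, i)] = sum(
                    transitions[(k, j)] * emissions[(j, seq[i + 1])] * b[(j, i + 1)]
                    for j in states
                )
        return b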
- class Bio.HMM.DynamicProgramming.ScaledDPAlgorithms(markov_model, sequence)
Bases:
Bio.HMM.DynamicProgramming.AbstractDPAlgorithms
Implement forward and backward algorithms using a rescaling approach.
This scales the forward (f) and backward (b) variables so that they remain within a manageable numerical interval during the calculations, as sketched after this class entry. The approach is described in Durbin et al. on p. 78.
This approach is a little more straightforward than log transformation, but may still give underflow errors for some types of models. In those cases, the LogDPAlgorithms class should be used.
- __init__(markov_model, sequence)
Initialize the scaled approach to calculating probabilities.
- Arguments:
markov_model – The current Markov model we are working with.
sequence – A TrainingSequence object that must have a set of emissions to work with.
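The rescaling idea can be illustrated with the sketch below, which reuses the plain-dictionary conventions of the earlier examples and is not a copy of the class's private _forward_recursion hook. Each column of forward variables is divided by a per-position scaling factor (here the column sum), keeping the values close to 1; the log of the sequence probability is then recovered from the logs of the scaling factors:

    import math

    def scaled_forward_sketch(states, transitions, emissions, initial, seq):
        """Forward algorithm with per-position rescaling to avoid underflow."""
        f = {}
        scaling_factors = []
        for i, symbol in enumerate(seq):
            column = {}
            for k in states:
                if i == 0:
                    column[k] = initial[k] * emissions[(k, symbol)]
                else:
                    column[k] = emissions[(k, symbol)] * sum(
                        f[(j, i - 1)] * transitions[(j, k)] for j in states
                    )
            # Rescale the column so that its entries sum to 1.
            s_i = sum(column.values())
            scaling_factors.append(s_i)
            for k in states:
                f[(k, i)] = column[k] / s_i
        # log P(sequence) is the sum of the logs of the scaling factors.
        log_seq_prob = sum(math.log(s) for s in scaling_factors)
        return f, log_seq_prob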
- class Bio.HMM.DynamicProgramming.LogDPAlgorithms(markov_model, sequence)
Bases:
Bio.HMM.DynamicProgramming.AbstractDPAlgorithms
Implement forward and backward algorithms using a log approach.
This uses the approach of calculating sums of log probabilities, with a lookup table for common values (sketched after this class entry).
Note: this class is not implemented yet.
- __init__(markov_model, sequence)
Initialize the class.
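Although the class is marked as not yet implemented, the basic operation it refers to (adding two probabilities that are stored as log values) is standard and can be sketched as follows. The names are illustrative; a table-based implementation would replace the direct math.log1p/math.exp call with a lookup table of log(1 + exp(d)) values for common d:

    import math

    def log_add_sketch(log_x, log_y):
        """Return log(exp(log_x) + exp(log_y)) without leaving log space."""
        # Order the arguments so that log_x is the larger value.
        if log_x < log_y:
            log_x, log_y = log_y, log_x
        # log(x + y) = log(x) + log(1 + exp(log(y) - log(x)))
        return log_x + math.log1p(math.exp(log_y - log_x))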