I've been trying to get into hidden Markov models and the Viterbi algorithm recently. I found a library called hmmlearn (http://hmmlearn.readthedocs.io/en/latest/tutorial.html) to help me generate a state sequence for two states (with Gaussian emissions). Then I wanted to re-determine the state sequence using Viterbi. My code works, but predicts approximately 5% of the states wrong (depending on the means and variances of the Gaussian emissions). The hmmlearn library has a .predict method which also uses Viterbi to determine the state sequence.
My problem now is that the Viterbi algorithm by hmmlearn is much better than my hand-written one (error rate is lower than 0.5% compared to my 5%). I couldn't find any major problem in my code, so I'm not sure why this is the case. Below is my code where I first generate the state and observation sequence Z and X, predict Z with hmmlearn and finally predict it with my own code:
# Import libraries
import numpy as np
import scipy.stats as st
from hmmlearn import hmm
# Generate a sequence
model = hmm.GaussianHMM(n_components = 2, covariance_type = "spherical")
model.startprob_ = pi
model.transmat_ = A
model.means_ = obs_means
model.covars_ = obs_covars
X, Z = model.sample(T)
## Predict the states from generated observations with the hmmlearn library
Z_pred = model.predict(X)
# Predict the state sequence with Viterbi by hand
B = np.concatenate((st.norm(mean_1,var_1).pdf(X), st.norm(mean_2,var_2).pdf(X)), axis = 1)
delta = np.zeros(shape = (T, 2))
psi = np.zeros(shape= (T, 2))
### Calculate starting values
for s in np.arange(2):
    delta[0, s] = np.log(pi[s]) + np.log(B[0, s])
psi = np.zeros((T, 2))
### Take everything in log space since values get very low as t -> T
for t in range(1, T):
    for s_post in range(0, 2):
        delta[t, s_post] = np.max([delta[t - 1, :] + np.log(A[:, s_post])], axis = 1) + np.log(B[t, s_post])
        psi[t, s_post] = np.argmax([delta[t - 1, :] + np.log(A[:, s_post])], axis = 1)
### Backtrack
states = np.zeros(T, dtype=np.int32)
states[T-1] = np.argmax(delta[T-1])
for t in range(T-2, -1, -1):
    states[t] = psi[t+1, states[t+1]]
I'm not sure if I have a big error in my code or if hmmlearn just uses a more refined Viterbi algorithm. Looking at the wrongly predicted states, I noticed that the influence of the emission probability B seems to be too strong: it causes the states to switch too often even when the transition probability to the other state is very low.
I'm rather new to python so please excuse my ugly coding. Thanks in advance for any tips you might have!
Edit: As you can see in the code, I foolishly used the variances instead of the standard deviations to determine the emission probabilities. After fixing this, I get the same result as hmmlearn's Viterbi implementation.
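For completeness, the fix amounts to passing the standard deviation rather than the variance to scipy.stats.norm. A minimal sketch, assuming var_1 and var_2 hold the two emission variances as in the code above:

# scipy.stats.norm expects the standard deviation (scale), not the variance,
# so take the square root of the variances when building the emission matrix B.
B = np.concatenate((st.norm(mean_1, np.sqrt(var_1)).pdf(X),
                    st.norm(mean_2, np.sqrt(var_2)).pdf(X)), axis=1)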
I'm looking into GP regression, but I'm getting some behaviour that I do not understand.
Basically, I wanted to show convergence of a GP on the oscillatory Genz function (essentially a periodic wave), which led me to the convergence plot shown (sorry for the missing labels; x axis: number of samples, y axis: relative error measured at 2000 points).
This is OK, but I was curious why it takes so long before the error starts to drop. Plotting the resulting GP fit gives a rather busy picture (the GP fit is orange, the true function is blue). What I don't understand is what happens before the GP starts to capture the true function. I assumed it had something to do with the kernel. The plot uses an RBF kernel with length_scale = 1 (I also tried both higher and lower values, but got the same results).
I kind of expected it to show smoother behaviour even if it couldn't capture the true model.
So, to my question: why do I see this "spiky" behaviour? And can I do something to change it (kernel-wise or otherwise)?
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

kernel = RBF(length_scale = 1, length_scale_bounds = (1e-2, 1e2))
gp = GaussianProcessRegressor(kernel=kernel)
gp.fit(X, y)
def genz(x, method = 'default'):
    d = x.shape[1]
    a = 10/d
    w = 1/2
    num_points = x.shape[0]
    funcval = np.empty([1, num_points])
    for i in range(num_points):
        funcval[0, i] = np.cos(2 * np.pi * w + np.sum(a * x[i, :]))
    return funcval
It seems like the optimized length scale is very small compared to the domain of the function. I also found this library awkward to dig into; changing some hyperparameters and the number of optimization restarts didn't help much in my case either. It might help to switch your kernel to a Matérn kernel and adjust its smoothness parameter, but probably not by much. If you really want full control over the model, I'd recommend gpytorch (a torch-style implementation) or the GPML MATLAB toolbox.
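A minimal sketch of what that might look like in scikit-learn, assuming the same X and y as above; the Matérn smoothness nu and the number of restarts are illustrative values, not tuned recommendations:

from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Matern kernel instead of RBF; n_restarts_optimizer re-runs the length-scale
# optimization from several random starting points to avoid a poor local optimum.
kernel = Matern(length_scale=1.0, length_scale_bounds=(1e-2, 1e2), nu=1.5)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=10)
gp.fit(X, y)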
This seems like it would be simple, but I can't get things to work. I have two 100-dimensional vector spaces, with several vectors in each space that are matched. I want to find the transformation matrix (W) such that:
a_vector[0] (in vector space A) x W = b_vector[0] (in vector space B), or an approximation of it.
A paper mentions the formula for this: it is just a linear map, with no bias term and no activation function of the kind we typically see.
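The formula itself isn't reproduced here; presumably it is the plain least-squares objective for a linear map (this is an assumption based on the "no bias, no activation" remark):

W* = \arg\min_{W} \sum_{i} \lVert a_i W - b_i \rVert^{2}

where a_i and b_i are the matched vectors in spaces A and B.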
I've tried using sklearn's LinearRegression without much success.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
regression_model = LinearRegression(fit_intercept=True)
regression_model.fit(X_train, y_train)
regression_model.score(X_test, y_test)
> -1451478.4589335269 (!!???)
y_predict = regression_model.predict(X_test)
regression_model_mse = mean_squared_error(y_predict, y_test)
# regression_model_mse: 524580.06
I tried TensorFlow as well without much success. I don't care about the tool - TensorFlow, sklearn, whatever - I'm just looking for help with the solution.
Thanks.
EDIT
So I hand-rolled the code below, maximizing cosine similarity (which represents how close the predicted points are to the real points; 1.00 = perfect match), but it is VERY SLOW.
import sys
import numpy as np
import scipy.spatial.distance
from random import shuffle

shape = (100, 100)
W1 = np.random.randn(*shape).astype(np.float64) / np.sqrt(sum(shape))
avgs = []
for epoch in range(1000):
    shuffle(endevec)                 # endevec: list of matched (source_vec, target_vec) pairs
    distance = [0]
    for i, x in enumerate(endevec):
        pred1 = x[0].dot(W1)         # predicted target vector
        cosine = 1 - scipy.spatial.distance.cosine(pred1, x[1])
        distance.append(cosine)
        diff = pred1 - x[0]
        gradient = W1.T.dot(diff) / W1.shape[0]
        W1 += -gradient * .0001
    avgs.append(np.mean(distance))
    sys.stdout.write('\r')
    sys.stdout.write(str(avgs[-1]))  # print the running average cosine similarity
    sys.stdout.flush()
EDIT 2
Jeanne Dark below had a great answer for finding the transformation matrix using:
M=np.linalg.lstsq(source_mtrx[:n],target_mtrx[:n])[0]
On my dataset of matched vectors, the cosine similarities between the predicted vectors (using the transformation matrix found with this method) and the real vectors had these statistics:
minmax=(-0.09405095875263214, 0.9940633773803711)
mean=0.972490919224675 (1.0 being a perfect match)
variance=0.0011325349465895844
skewness=-18.317443753033665
kurtosis=516.5701661370497
There was a tiny number of really big outliers, which also showed up in the plot of the cosine similarities (not reproduced here).
I was having exactly the same problem yesterday. I ended up using numpy.linalg.lstsq and I think it works.
# find transformation matrix M so that: source_matrix ∙ M = target_matrix,
# based on the top n most frequent terms in the target corpus
n = 500  # the choice of n depends on the size of your vocabulary
M = np.linalg.lstsq(source_mtrx[:n], target_mtrx[:n])[0]
print M.shape  # returns (100, 100)
# apply this transformation to the source matrix:
new_mtrx = np.array([np.dot(i, M) for i in source_mtrx])
Also check out the paper Lexical Comparison Between Wikipedia and Twitter Corpora by Using Word Embeddings. It is based on the paper that you mentioned and follows the same method, but it explains the implementation in more detail. For example, they suggest that to find the transformation matrix M we only use the vectors of the top n most frequent terms; then, after applying the transformation to the source matrix, we calculate the similarity for the remaining terms.
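A rough sketch of that last evaluation step, under the same assumptions as the snippet above (source_mtrx and target_mtrx are row-aligned, with the n most frequent terms first); scipy's cosine distance is one way to score each held-out pair:

import numpy as np
from scipy.spatial.distance import cosine

# Score the terms that were NOT used to fit M: apply the transformation and
# compare each predicted vector with its real counterpart (similarity = 1 - distance).
predicted = np.array([np.dot(v, M) for v in source_mtrx[n:]])
similarities = [1 - cosine(p, t) for p, t in zip(predicted, target_mtrx[n:])]
mean_similarity = np.mean(similarities)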
Please let me know if you find another solution for calculating M based on SGD.
After implementing a least-squares estimation with gradient descent for a simple linear regression problem, I am now trying to do the same with maximum likelihood. I used the log-likelihood equation for this model from Wikipedia; the maximum of that function has to be found.
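The equation itself did not make it into the post; judging from the code below it is the log-likelihood of a linear model with Gaussian noise, which for n = 100 samples (so n/2 = 50) reads roughly:

\ln L(\theta_0, \theta_1, \sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\bigl(y_i - (\theta_0 + \theta_1 x_i)\bigr)^2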
import numpy as np
import tensorflow as tf

train_X = np.random.rand(100, 1)  # all values in [0, 1)
train_Y = train_X
X = tf.placeholder("float", None)
Y = tf.placeholder("float", None)
theta_0 = tf.Variable(np.random.randn())
theta_1 = tf.Variable(np.random.randn())
var = tf.Variable(0.5)
hypothesis = tf.add(theta_0, tf.mul(X, theta_1))
lhf = 1 * (50 * np.log(2*np.pi) + 50 * tf.log(var) + (1/(2*var)) * tf.reduce_sum(tf.pow(hypothesis - Y, 2)))
op = tf.train.GradientDescentOptimizer(0.01).minimize(lhf)
This code works, but I still have some questions about it:
If I change the lhf function from 1 * to -1 * and minimize -lhf (according to the equation), it does not work. But why?
The value for lhf goes up and down during optimization. Shouldn't it only change in one direction?
The value for lhf sometimes is a NaN during optimization. How can I avoid that?
In the equation, σ² is the variance of the error (right?). My values are perfectly on a line. Why do I get a value of var above 100?
The symptoms in your question indicate a common problem: the learning rate or step size might be too high.
The zig-zag behaviour, where the function being optimized goes up and down, is typical when the learning rate is too high, especially when you also get NaNs.
The simplest solution is to lower the learning rate by dividing your current learning rate by 10 until the learning curve is smooth and there are no NaNs or up-and-down behaviour.
As you are using TensorFlow, you can also try AdamOptimizer, as it adjusts the learning rate dynamically as you train.
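A minimal sketch of both suggestions applied to the lhf defined above; the learning rates shown are just starting points to try, not tuned values:

# Option 1: plain gradient descent with a much smaller step size.
op = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(lhf)

# Option 2: Adam, which adapts the effective step size per parameter during training.
op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(lhf)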
I am new to Apache Spark and trying to use the machine learning library to predict some data. My dataset right now is only about 350 points. Here are 7 of those points:
"365","4",41401.387,5330569
"364","3",51517.886,5946290
"363","2",55059.838,6097388
"362","1",43780.977,5304694
"361","7",46447.196,5471836
"360","6",50656.121,5849862
"359","5",44494.476,5460289
Here's my code:
from pyspark.mllib.regression import LabeledPoint, LinearRegressionWithSGD

def parsePoint(line):
    split = map(sanitize, line.split(','))
    rev = split.pop(-2)  # second-to-last column is used as the label
    return LabeledPoint(rev, split)

def sanitize(value):
    return float(value.strip('"'))

parsedData = textFile.map(parsePoint)  # textFile: RDD of CSV lines, e.g. from sc.textFile
model = LinearRegressionWithSGD.train(parsedData, iterations=10)
print model.predict(parsedData.first().features)
The prediction is something totally crazy, like -6.92840330273e+136. If I don't set iterations in train(), then I get nan as a result. What am I doing wrong? Is it my data set (the size of it, maybe?) or my configuration?
The problem is that LinearRegressionWithSGD uses stochastic gradient descent (SGD) to optimize the weight vector of your linear model. SGD is really sensitive to the provided stepSize which is used to update the intermediate solution.
What SGD does is calculate the gradient g of the cost function given a sample of the input points and the current weights w. In order to update the weights w, you move a certain distance in the opposite direction of g. That distance is your step size s.
w(i+1) = w(i) - s * g
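In code, a single update of this kind looks roughly like the following generic numpy sketch (not MLlib's actual implementation):

import numpy as np

def sgd_step(w, X_batch, y_batch, step_size):
    """One SGD step for least-squares linear regression on a sampled batch."""
    residual = X_batch.dot(w) - y_batch          # prediction error on the sample
    g = X_batch.T.dot(residual) / len(y_batch)   # gradient of the squared-error cost
    return w - step_size * g                     # move against the gradient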
Since you're not providing an explicit step size value, MLlib assumes stepSize = 1, which does not seem to work for your use case. I'd recommend trying different step sizes, usually lower values, to see how LinearRegressionWithSGD behaves:
LinearRegressionWithSGD.train(parsedData, iterations=10, stepSize=0.001)
I have been trying to fit a linear model to a set of stress/strain data by minimizing chi-squared. Unfortunately, the code below does not correctly minimize the chisqfunc function: it finds the minimum at the initial conditions, x0, which is not correct. I have looked through the scipy.optimize documentation and tested minimizing other functions, which worked correctly. Could you please suggest how to fix the code below, or suggest another method I can use to fit a linear model to data by minimizing chi-squared?
import numpy
import scipy.optimize as opt
filename = 'data.csv'
data = numpy.loadtxt(open(filename,"r"),delimiter=",")
stress = data[:,0]
strain = data[:,1]
err_stress = data[:,2]
def chisqfunc((a, b)):
    model = a + b*strain
    chisq = numpy.sum(((stress - model)/err_stress)**2)
    return chisq
x0 = numpy.array([0,0])
result = opt.minimize(chisqfunc, x0)
print result
Thank you for reading my question and any help would be greatly appreciated.
Cheers, Will
EDIT: Data set I am currently using: Link to data
The problem is that your initial guess is very far from the actual solution. If you add a print statement inside chisqfunc() like print (a,b), and rerun your code, you'll get something like:
(0, 0)
(1.4901161193847656e-08, 0.0)
(0.0, 1.4901161193847656e-08)
This means that minimize evaluates the function only at these points.
If you now evaluate chisqfunc() at these three pairs of values, you'll see that the results match EXACTLY, for example:
print chisqfunc((0,0))==chisqfunc((1.4901161193847656e-08,0))
True
This happens because of floating-point rounding: when evaluating stress - model, the variable stress is many orders of magnitude larger than model, and the result is truncated.
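A quick way to see the effect in isolation (a generic illustration, not tied to the data file):

import numpy as np

big = np.float64(1e9)      # stands in for a large stress value
tiny = np.float64(1e-8)    # stands in for the tiny model term / finite-difference step
print(big + tiny == big)   # True: the perturbation is completely lost in float64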
One could then try to brute-force it by increasing the floating-point precision, writing data = data.astype(np.float128) just after loading the data with loadtxt. minimize still fails, with result.success = False, but with a helpful message:
Desired error not necessarily achieved due to precision loss.
One possibility is then to provide a better initial guess, so that in the subtraction stress - model the model part is of the same order of magnitude; the other is to rescale the data, so that the solution will be closer to your initial guess (0, 0).
It is MUCH better to simply rescale the data, for example making it nondimensional with respect to a characteristic stress value (like the yielding/cracking stress of this material).
This is an example of the fitting, using as a stress scale the maximum measured stress. There are very few changes from your code:
import numpy
import scipy.optimize as opt
import matplotlib.pyplot as plt
filename = 'data.csv'
data = numpy.loadtxt(open(filename,"r"),delimiter=",")
stress = data[:,0]
strain = data[:,1]
err_stress = data[:,2]
smax = stress.max()
stress = stress/smax
#I am assuming the errors err_stress are in the same units of stress.
err_stress = err_stress/smax
def chisqfunc((a, b)):
    model = a + b*strain
    chisq = numpy.sum(((stress - model)/err_stress)**2)
    return chisq
x0 = numpy.array([0,0])
result = opt.minimize(chisqfunc, x0)
print result
assert result.success==True
a,b=result.x*smax
plt.plot(strain, stress*smax)
plt.plot(strain, a + b*strain)
Your linear model is quite good, i.e. your material shows very linear behaviour over this range of deformation (what material is it, anyway?).