I'm trying to figure out the proper format of a Python list to pass as input to the svm_problem function. I got the following program from the web (Stack Overflow).
I have the following:
from svm import *
x=[ [1,0,1],[-1,0,-1],[1,0,0]]
#x=[ [1,0,1],[-1,0,-1]]
prob = svm_problem( [1,-1],x )
param = svm_parameter(kernel_type = LINEAR, C = 10)
m = svm_model(prob, param)
print m.predict([ 1,1, 1])
It raises an assertion error: assert len(x) == len(y) failed.
But if x = [[1,0,1],[-1,0,-1]], the program works perfectly. Am I not allowed to give training data with more than 2 samples?
Also, I don't understand which part of x = [[1,0,1],[-1,0,-1]] is a label and which is the data.
Any help is highly appreciated.
svm_problem() takes two parameters: the first parameter is a vector of labels, and the second is a matrix of features. You get this assertion error because you are only specifying 2 labels, [1, -1], as the first parameter in your call to svm_problem.
Example:
y = [1,-1,1,1]
x = [[1,0,1], [-1,0,-1], [1,2,3], [4,5,6]]
prob = svm_problem(y, x)
If you give 3 examples, you need to give the classes for all three inputs as well, so you need to do
prob = svm_problem( [1,1,-1],x )
or something similar.
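Putting it together, a minimal corrected sketch of the original program (assuming the old libsvm Python wrapper the question uses, where the label vector comes first):
from svm import *

x = [[1, 0, 1], [-1, 0, -1], [1, 0, 0]]
y = [1, -1, 1]  # one label per training vector, so len(y) == len(x)

prob = svm_problem(y, x)
param = svm_parameter(kernel_type=LINEAR, C=10)
m = svm_model(prob, param)
print m.predict([1, 1, 1])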
I apologize for the poor question title, but I'm not sure quite how to phrase it. Here's the problem I'm trying to solve: I have two NNs working off of the same input dataset in my code. One of them is a traditional network, while the other is used to limit the acceptable range of the first. This works through a tf.where() statement, which is fine in most cases, such as this toy example:
pcts = tf.constant([0.04,0.06,0.06,0.06,0.06,0.06,0.06,0.04,0.04,0.04])
legal_actions = tf.where(pcts >= 0.05, tf.ones_like(pcts), tf.zeros_like(pcts))
Which gives the correct result: legal_actions = [0,1,1,1,1,1,1,0,0,0]
I can then multiply this by the output of my first network to limit its Q values to only those of the legal actions. In a case like the above this works great.
However, it is also possible that my original vector looks something like this, with low values in the middle of the high values: pcts= [0.04,0.06,0.06,0.04,0.04,0.06,0.06,0.04,0.04,0.04]
Using the same code as above my legal_actions comes out as this: legal_actions = [0,1,1,0,0,1,1,0,0,0]
Based on the code I have, this is correct; however, I'd like to include any zeros in the middle as part of my legal_actions. In other words, I'd like this second example to come out the same as the first. Working in basic TF this is easy to do in several different ways, such as in this reproducible example (it's also easy to do with sparse tensors):
import tensorflow as tf
pcts= tf.placeholder(tf.float32, shape=(10,))
legal_actions = tf.where(pcts>=0.05, tf.ones_like(pcts), tf.zeros_like(pcts))
mask = tf.where(tf.greater(legal_actions,0))
legals = tf.cast(tf.range(tf.reduce_min(mask),tf.reduce_max(mask)+1),tf.int64)
oh = tf.one_hot(legals,10)
oh = tf.reduce_sum(oh,0)
with tf.Session() as sess:
    print(sess.run(oh, feed_dict={pcts: [0.04,0.06,0.06,0.04,0.04,0.06,0.06,0.04,0.04,0.04]}))
The problem I'm running into is when I try to apply this to my actual code, which reads in batches from a file. I can't figure out a way to fill in the "gaps" in my tensor without the range function, and I can't figure out how to make the range function work with batches (as near as I can tell, it will only make one range at a time, not one per batch). Any suggestions on how to make what I have work, or how to solve the problem a completely different way, would be appreciated.
Try this code:
import tensorflow as tf

pcts = tf.random.uniform((2, 3, 4))
a = pcts >= 0.5
shape = tf.shape(pcts)[-1]
a = tf.reshape(a, (-1, shape))
a = tf.cast(a, dtype=tf.float32)

def rng(t):
    # cumulative max from the left and from the right; their elementwise
    # minimum is 1 exactly between the first and the last legal action
    left = tf.scan(lambda acc, x: tf.maximum(acc, x), t)
    right = tf.scan(lambda acc, x: tf.maximum(acc, x), t, reverse=True)
    return tf.minimum(left, right)

a = tf.map_fn(lambda x: rng(x), a)
a = tf.reshape(a, tf.shape(pcts))
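As a quick sanity check, here is a minimal sketch (assuming TF 2.x eager execution) running the same gap-filling trick on the vector from the question:
import tensorflow as tf

pcts = tf.constant([[0.04, 0.06, 0.06, 0.04, 0.04, 0.06, 0.06, 0.04, 0.04, 0.04]])
a = tf.cast(pcts >= 0.05, tf.float32)

def rng(t):
    left = tf.scan(tf.maximum, t)
    right = tf.scan(tf.maximum, t, reverse=True)
    return tf.minimum(left, right)

print(tf.map_fn(rng, a).numpy())  # [[0. 1. 1. 1. 1. 1. 1. 0. 0. 0.]]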
I'm trying to estimate impulse response functions for a -1 standard-deviation shock to a 3-dimensional VAR using statsmodels.tsa, but I'm currently having issues with setting the shock magnitude.
This gives me the IRFs for a 1 s.d. shock, the default:
import numpy as np
import statsmodels.tsa as sm
model = sm.vector_ar.var_model.VAR(endog = data)
fitted = model.fit()
shock= -1*fitted.sigma_u
irf = sm.vector_ar.irf.IRAnalysis(model = fitted)
The function IRAnalysis takes an argument P, a triangular matrix that sets the shocks (I found this by looking at the source code). However, inputting P as shown below doesn't seem to do anything:
irf = sm.vector_ar.irf.IRAnalysis(model=fitted, P=-np.linalg.cholesky(fitted.sigma_u))
I would really appreciate some help.
Thanks in advance.
I have had the same question and finally found something that works on my end.
Instead of using IRAnalysis explicitly, I found that transforming the VAR model into its MA representation was the best way to adjust the size of the shock.
from statsmodels.tsa.vector_ar.irf import IRAnalysis
J = fitted.ma_rep(T)      # MA representation coefficients for horizons 0..T
J = shock * np.array(J)   # scale by the desired shock
This will give you the IRF output for T periods.
I also wanted the standard-error bands on my plots, so I did something similar with that particular function as well:
G, H = fitted.irf_errband_mc(orth=False, repl=1000, steps=T, signif=0.05, seed=None, burn=100, cum=False)
Hope this helps.
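For reference, here is a minimal sketch of the MA-representation approach with an explicit -1 standard-deviation (non-orthogonalized) shock. It assumes `fitted` is the VARResults object from the question; the horizon T and the per-variable scaling convention are illustrative assumptions, not the only choice:
import numpy as np

T = 20
ma = np.array(fitted.ma_rep(T))            # shape (T+1, neqs, neqs)
shock = -np.sqrt(np.diag(fitted.sigma_u))  # a -1 s.d. shock for each variable
irfs = ma * shock                          # scales column j by the shock to variable j
# irfs[h, i, j] is the response of variable i at horizon h to the shock in variable j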
I am using sklearn.svm.SVR for a regression task in which I want to use my own customized kernel. Here are some dataset samples and the code:
index density speed label
0 14 58.844020 77.179139
1 29 67.624946 78.367394
2 44 77.679100 79.143744
3 59 79.361877 70.048869
4 74 72.529289 74.499239
.... and so on
from sklearn import svm
import pandas as pd
import numpy as np
density = np.random.randint(0,100, size=(3000, 1))
speed = np.random.randint(20,80, size=(3000, 1)) + np.random.random(size=(3000, 1))
label = np.random.randint(20,80, size=(3000, 1)) + np.random.random(size=(3000, 1))
d = np.hstack((density, speed, label))
data = pd.DataFrame(d, columns=['density', 'speed', 'label'])
data.density = data.density.astype(dtype=np.int32)
def my_kernel(X, Y):
    return np.dot(X, X.T)
svr = svm.SVR(kernel=my_kernel)
x = data[['density', 'speed']].iloc[:2000]
y = data['label'].iloc[:2000]
x_t = data[['density', 'speed']].iloc[2000:3000]
y_t = data['label'].iloc[2000:3000]
svr.fit(x,y)
y_preds = svr.predict(x_t)
The problem happens in the last line, svr.predict, which says:
X.shape[1] = 1000 should be equal to 2000, the number of samples at training time
I searched the web for a way to deal with the problem, but many similar questions (like {1}, {2}, {3}) were left unanswered.
Actually, I had used SVM methods with rbf, sigmoid, etc. before and the code worked just fine, but this was my first time using a customized kernel and I suspected it must be the reason this error happened.
So after a little research and reading the documentation, I found out that when using precomputed kernels, the matrix passed to SVR.predict() must have shape [n_samples_test, n_samples_train].
I wonder how to modify x_t in order to get predictions so that everything works with no problem, just like when we don't use customized kernels.
If possible, please explain why the input to svm.predict for a precomputed kernel differs from that of the other kernels.
I really hope the related unanswered questions can be answered as well.
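For comparison, here is a sketch of the precomputed-kernel route mentioned above: you pass Gram matrices directly, and the test-time matrix must have shape (n_samples_test, n_samples_train). It reuses x, y, and x_t from the question's code:
X_train = x.to_numpy()
X_test = x_t.to_numpy()

svr_pre = svm.SVR(kernel='precomputed')
svr_pre.fit(np.dot(X_train, X_train.T), y)                 # train Gram matrix: (2000, 2000)
y_preds_pre = svr_pre.predict(np.dot(X_test, X_train.T))   # test Gram matrix: (1000, 2000)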
The problem is in your kernel function: it doesn't do the job.
As the documentation https://scikit-learn.org/stable/modules/svm.html#using-python-functions-as-kernels says, "Your kernel must take as arguments two matrices of shape (n_samples_1, n_features), (n_samples_2, n_features) and return a kernel matrix of shape (n_samples_1, n_samples_2)." The sample kernel on the same page satisfies this criterion:
def my_kernel(X, Y):
    return np.dot(X, Y.T)
In your function, the second argument of dot is X.T, so the output has shape (n_samples_1, n_samples_1), which is not what is expected.
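Applying that fix to the question's code (a sketch; everything else stays as posted):
def my_kernel(X, Y):
    return np.dot(X, Y.T)   # Gram matrix of shape (n_samples_1, n_samples_2)

svr = svm.SVR(kernel=my_kernel)
svr.fit(x, y)
y_preds = svr.predict(x_t)  # x_t now only needs the same n_features as x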
"The shape does not match" means the test data and train data do not have compatible shapes; always think in terms of numpy arrays or matrices. Any arithmetic operation needs compatible shapes, which is why we check array.shape.
You can modify shapes to [n_samples_test, n_samples_train], but it's not the best idea.
array.shape, reshape, and resize are what's used for that.
Why does the following Python code block work? I defined the function layer_sizes(), but I have not defined the function layer_sizes_test_case().
def layer_sizes( X , Y ):
n_x = X.shape[0]
n_h = 4
n_y = Y.shape[0]
return n_x , n_h , n_y
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
This code works because:
Firstly, I assume you came across this code in week 3 of Andrew Ng's deep learning course (planar data classification).
Check the import statement:
from testCases_v2 import *
This line of code imports the function in question:
def layer_sizes_test_case():
    np.random.seed(1)
    X_assess = np.random.randn(5, 3)
    Y_assess = np.random.randn(2, 3)
    return X_assess, Y_assess
Click the GitHub link below for the functions' .py file:
GitHub link (this repository doesn't belong to me; posted here for educational purposes)
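Putting it together, a minimal sketch of the full flow (assuming the course's testCases_v2.py is on your path and layer_sizes is defined as in the question):
import numpy as np
from testCases_v2 import layer_sizes_test_case

X_assess, Y_assess = layer_sizes_test_case()   # X: (5, 3), Y: (2, 3)
n_x, n_h, n_y = layer_sizes(X_assess, Y_assess)
print(n_x, n_h, n_y)                           # 5 4 2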
First of all, return needs to be inside the function layer_sizes. So, indent your code properly, like this:
def layer_sizes(X, Y):
    n_x = X.shape[0]
    n_h = 4
    n_y = Y.shape[0]
    return n_x, n_h, n_y
Anyway, I don't see any way your code could work; it had a syntax error that I had to let you know about. If you actually meant the return to be indented inside the function and just messed up the formatting, then I still don't see how your code could have worked. You must have imported a module that has the function layer_sizes_test_case(), or maybe you defined that function somewhere above the block of code you posted and forgot about it later. If you can provide more information, I can edit my answer to give you more support.
I am trying to do a mean operation given the actual lengths of sequences (masking zero vectors).
My inputs sequence_outputs have shape (batch_size, max_len, dimensions).
I have a tensor that stores the actual length of each sequence in the batch. I used the function from https://danijar.com/variable-sequence-lengths-in-tensorflow/:
def length(sequence):
    used = tf.sign(tf.reduce_max(tf.abs(sequence), reduction_indices=2))
    length = tf.reduce_sum(used, reduction_indices=1)
    length = tf.cast(length, tf.int64)
    return length
I do this:
lengths = length(sequence_outputs)
lengths = tf.cast(lengths, tf.float32)
lengths = tf.expand_dims(lengths, 1)
sequence_outputs = tf.reduce_sum(sequence_outputs, 1) / lengths
The graph compiles, but I am getting NaN loss values. Furthermore, my lengths become negative when debugging with eval().
This seems like a simple problem, but I've been stuck on it for some time and would appreciate some help!
Thanks!
I see no issue. Your code is slightly over-complicated. The following code
import numpy as np
import tensorflow as tf

# creating data
B = 15
MAX_LEN = 4
data = np.zeros([B, MAX_LEN], dtype=np.float32)
for b in range(B):
    current_len = np.random.randint(2, MAX_LEN)
    current_vector = np.concatenate([np.random.randn(current_len), np.zeros(MAX_LEN - current_len)], axis=-1)
    print("{}\t\t{}".format(current_vector, current_vector.shape))
    data[b, ...] = current_vector
data_op = tf.convert_to_tensor(data)

def tf_length(x):
    assert len(x.get_shape().as_list()) == 2
    # count non-zeros as float32 so the division below stays in float32
    length = tf.count_nonzero(x, axis=1, keepdims=True, dtype=tf.float32)
    return length

# keepdims=True so both operands have shape (B, 1)
x = tf.reduce_sum(data_op, axis=1, keepdims=True) / tf_length(data_op)

# test gradients
grads = tf.gradients(tf.reduce_mean(x), [data_op])
with tf.Session() as sess:
    print(sess.run(grads))
runs perfectly fine here without any NaNs. Are you sure you are really using this code? If I had to guess, I would bet you forgot the tf.abs somewhere in your sequence-length computation.
Be aware: your length function, as well as tf_length in this post, assumes non-zero values within the sequence! Calculating the sequence length should be the task of the data producer, with the lengths then fed into the computation graph. Everything else I consider a hacky solution.
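For completeness, a minimal sketch of that suggestion: feed the true lengths in from the data pipeline instead of inferring them (assuming TF 1.x graph mode; the placeholder names and the feature size 128 are illustrative):
import tensorflow as tf

sequence_outputs = tf.placeholder(tf.float32, shape=(None, None, 128))
seq_len = tf.placeholder(tf.int32, shape=(None,))  # true lengths, supplied by the data producer

# zero out the padded steps, then divide by the true lengths
mask = tf.sequence_mask(seq_len, maxlen=tf.shape(sequence_outputs)[1], dtype=tf.float32)
masked_sum = tf.reduce_sum(sequence_outputs * mask[:, :, None], axis=1)
mean = masked_sum / tf.cast(seq_len, tf.float32)[:, None]  # (batch_size, dimensions)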