Right now I'm working with control.matlab.tf2ss and I would like to access the arrays of my state-space model.
Here is my code:
Gs = tf([P.l], [P.Jzz, 0, 0])
Cs = tf([P.Kp, P.Kd], 1)
Gcl = feedback(series(Cs, Gs), 1)
po = pole(Gcl)
num, den = tfdata(Gs)
sys = tf2ss(Gs)
print sys
Result:
A = [[ 0. 0.]
[ 1. 0.]]
B = [[-10.58350385]
[ 0. ]]
C = [[ 0. -1.]]
D = [[ 0.]]
How can I access the arrays A, B, C, D?
For arrays of state-space models with variable numbers of states, use the syntax:
[a,b,c,d] = ssdata(sys,'cell')
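If you are using the Python control package (as the import in the question suggests), here is a minimal sketch of reading the matrices directly; the transfer function below is only a stand-in, since the P.* values are not shown:
import control.matlab as cm

Gs = cm.tf([1.0], [2.0, 0.0, 0.0])   # stand-in for the Gs built in the question
sys = cm.tf2ss(Gs)                   # sys is a StateSpace object

# The state-space matrices are available as attributes of that object:
A, B, C, D = sys.A, sys.B, sys.C, sys.D
print(A)
print(B)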
I am currently trying to create a sparse matrix that will look like this.
[[ 50. -25. 0. 0.]
[-25. 50. -25. 0.]
[ 0. -25. 50. -25.]
[ 0. 0. -25. 50.]]
But when I run it, I keep getting the ValueError
'data array must have rank 2' for my data array.
I am positive the problem is with my B variable. I have tried several things but nothing is working. Any advice?
def sparse(a,b,N):
    h = (b-a)/(N+1)
    e = np.ones([N,1])/h**2
    B = np.array([e, -2*e, e])
    diags = np.array([-1,0,1])
    A = spdiags(B,diags,N,N).toarray()
    return A

print(sparse(0,1,4))
Just change to this:
import numpy as np
from scipy.sparse import spdiags
def sparse(a, b, N):
    h = (b - a) / (N + 1)
    e = np.ones(N) / h ** 2
    diags = np.array([-1, 0, 1])
    A = spdiags([-1 * e, 2 * e, -1 * e], diags, N, N).toarray()
    return A

print(sparse(0, 1, 4))
Output
[[ 50. -25.   0.   0.]
 [-25.  50. -25.   0.]
 [  0. -25.  50. -25.]
 [  0.   0. -25.  50.]]
The main change is replacing this:
e = np.ones([N,1])/h**2
with this:
e = np.ones(N) / h ** 2
(The three diagonals are also negated relative to your B so that the result matches the matrix you asked for.)
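The 'data array must have rank 2' error comes from the shape of the data passed to spdiags: with e of shape (N, 1), stacking three of them produces a 3-D array, while spdiags expects a 2-D data array. A tiny illustration of the shapes involved (e_col and e_row are just illustrative names):
import numpy as np
e_col = np.ones([4, 1])
print(np.array([e_col, -2 * e_col, e_col]).shape)   # (3, 4, 1) -> rank 3, hence the error
e_row = np.ones(4)
print(np.array([e_row, -2 * e_row, e_row]).shape)   # (3, 4) -> rank 2, what spdiags expects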
Note that toarray transforms the sparse matrix into a dense one, from the documentation:
Return a dense ndarray representation of this matrix.
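As a quick sanity check, assuming the sparse() function from the answer above is defined, the result can be compared against the matrix from the question:
import numpy as np

expected = np.array([[ 50., -25.,   0.,   0.],
                     [-25.,  50., -25.,   0.],
                     [  0., -25.,  50., -25.],
                     [  0.,   0., -25.,  50.]])
print(np.allclose(sparse(0, 1, 4), expected))   # True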
I have written the following code. Something very weird is happening. I have 2 variables and when I print them, I get the values sums[d_index][k]=[0 0] and rewards[k]=[1]. So when I perform sums[d_index][k] = sums[d_index][k]+rewards[k] for k=0, I should expect to get sums[d_index][k]=[1 0]. But for some absurd reason, I get sums[d_index][k]=[0.2 0]. I have no idea how on earth this is even possible. Why is this happening and how can I fix it?
I have marked the problem line with the comment #HERE!!!!
import numpy as np
import math

e = 0.1
np.random.seed(2)
#Initializing the parameters of the bernoulli distributions randomly
p = np.random.rand(1,2)[0]
#>>>>>>>>>>> p = np.array([ 0.26363424, 0.70255294])
suboptimality_gap = np.max(p)-p
print p
powers = [1]
cumulative_regret = np.zeros((len(powers),1,10))
for round_number in range(1):
    #Initializing the arrays to store the estimate and sum of rewards, and count of each action
    estimates = np.zeros((len(powers),2))
    estimates[:,0] = np.random.binomial(1, p[0], 1)
    estimates[:,1] = np.random.binomial(1, p[1], 1)
    counts = np.ones((len(powers),2))
    sums = estimates[:]
    #Updating estimates for action at time t>K=2
    for t in range(1,10):
        rewards = np.array([np.random.binomial(1, p[0], 1),np.random.binomial(1, p[1], 1)])
        for d_index,d in enumerate([1./(t**power) for power in powers]):
            #print (np.asarray([(estimates[d_index][i]+((2*math.log(1/d))/(counts[d_index][i]))**0.5) for i in [0,1]]))
            k = np.argmax(np.asarray([(estimates[d_index][i]+((2*math.log(1/d))/(counts[d_index][i]))**0.5) for i in [0,1]]))
            counts[d_index][k] = counts[d_index][k]+1
            print "rewards=",rewards[k]
            print "sums=",sums[d_index]
            sums[d_index][k] = sums[d_index][k]+rewards[k] #HERE!!!!
            estimates[d_index] = np.true_divide(sums[d_index], counts[d_index])
            cumulative_regret[d_index][round_number][t]=cumulative_regret[d_index][round_number][t-1]+suboptimality_gap[k]
#print counts
Output:
[ 0.4359949 0.02592623]
rewards= 0
sums= [ 0. 0.]
rewards= 0
sums= [ 0. 0.]
rewards= 0
sums= [ 0. 0.]
rewards= 0
sums= [ 0. 0.]
rewards= 0
sums= [ 0. 0.]
rewards= 0
sums= [ 0. 0.]
rewards= 1
sums= [ 0. 0.]
rewards= 1
sums= [ 0.2 0. ]
rewards= 0
sums= [ 0.2 0. ]
I apologize that my code is not well organized, but that is because I have been trying to debug this problem for the last hour.
As mentioned in the comments on your question, sums = estimates[:] doesn't create a new copy of your array: for NumPy arrays, basic slicing returns a view, so sums and estimates share the same data. The later line estimates[d_index] = np.true_divide(sums[d_index], counts[d_index]) therefore overwrites sums as well, which is where the unexpected 0.2 comes from: the 1 you just added gets divided by the corresponding count on the very next line. To get your desired results you can use:
sums = estimates.copy()
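A tiny demonstration of the difference between a view and a copy, independent of the rest of the code:
import numpy as np

a = np.zeros(2)
b = a[:]           # slicing a NumPy array returns a view that shares a's memory
b[0] = 1.0
print(a)           # a is now [1. 0.] -- it changed too

c = a.copy()       # an independent copy
c[1] = 5.0
print(a)           # still [1. 0.] -- a is unchanged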
'car3.csv' file download link
import csv

num = open('car3.csv')
nums = csv.reader(num)
nums_list = []
for i in nums:
    nums_list.append(i)

import numpy as np
nums_arr = np.array(nums_list, dtype = np.float32)

print(nums_arr)
print(np.std(nums_arr, axis=0))
The result is this.
[[ 1. 1. 2.]
[ 1. 1. 2.]
[ 1. 1. 2.]
...,
[ 0. 0. 5.]
[ 0. 0. 5.]
[ 0. 0. 5.]]
[ 0.5 0.5 1.11803401]
There are lots of spaces that I didn't expect.
How can I handle these?
That is not a spacing problem. All you need to do is save the output of the standard deviation; then you can access each value like this:
std_arr = np.std(nums_arr, axis=0) # array which holds std of each column
# now, you can access them by indexing:
print(std_arr[0]) # output here is 0.5
print(std_arr[1]) # output here is 0.5
print(std_arr[2]) # output here is 1.118034
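If the spacing of the printed values is the only remaining concern, the stored values can be formatted explicitly; a small sketch reusing std_arr from above:
print(", ".join("%.6f" % v for v in std_arr))   # 0.500000, 0.500000, 1.118034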
I currently have a (1631160, 78) NumPy array as the input to a neural network. I would like to try something with an LSTM, which requires a 3D structure as input data. I'm currently using the following code to generate the needed 3D structure, but it is super slow (ETA > 1 day). Is there a better way to do this with NumPy?
My current code to generate data:
# imports needed by the snippet below
import sys
import time
import datetime
import numpy as np

def transform_for_rnn(input_x, input_y, window_size):
    output_x = None
    start_t = time.time()
    for i in range(len(input_x)):
        if i > 100 and i % 100 == 0:
            sys.stdout.write('\rTransform Data: %d/%d\tETA:%s'%(i, len(input_x), str(datetime.timedelta(seconds=(time.time()-start_t)/i * (len(input_x) - i)))))
            sys.stdout.flush()
        if output_x is None:
            output_x = np.array([input_x[i:i+window_size, :]])
        else:
            tmp = np.array([input_x[i:i+window_size, :]])
            output_x = np.concatenate((output_x, tmp))
    print
    output_y = input_y[window_size:]
    assert len(output_x) == len(output_y)
    return output_x, output_y
Here's an approach using NumPy strides to vectorize the creation of output_x -
nrows = input_x.shape[0] - window_size + 1
p,q = input_x.shape
m,n = input_x.strides
strided = np.lib.stride_tricks.as_strided
out = strided(input_x,shape=(nrows,window_size,q),strides=(m,m,n))
Sample run -
In [83]: input_x
Out[83]:
array([[ 0.73089384, 0.98555845, 0.59818726],
[ 0.08763718, 0.30853945, 0.77390923],
[ 0.88835985, 0.90506367, 0.06204614],
[ 0.21791334, 0.77523643, 0.47313278],
[ 0.93324799, 0.61507976, 0.40587073],
[ 0.49462016, 0.00400835, 0.66401908]])
In [84]: window_size = 4
In [85]: out
Out[85]:
array([[[ 0.73089384, 0.98555845, 0.59818726],
[ 0.08763718, 0.30853945, 0.77390923],
[ 0.88835985, 0.90506367, 0.06204614],
[ 0.21791334, 0.77523643, 0.47313278]],
[[ 0.08763718, 0.30853945, 0.77390923],
[ 0.88835985, 0.90506367, 0.06204614],
[ 0.21791334, 0.77523643, 0.47313278],
[ 0.93324799, 0.61507976, 0.40587073]],
[[ 0.88835985, 0.90506367, 0.06204614],
[ 0.21791334, 0.77523643, 0.47313278],
[ 0.93324799, 0.61507976, 0.40587073],
[ 0.49462016, 0.00400835, 0.66401908]]])
This creates a view into the input array, so memory-wise we are being efficient. In most cases, this should also translate into performance benefits for further operations involving it. Let's verify that it's indeed a view -
In [86]: np.may_share_memory(out,input_x)
Out[86]: True # Doesn't guarantee, but is sufficient in most cases
Another sure-shot way to verify would be to set some values into output and check the input -
In [87]: out[0] = 0
In [88]: input_x
Out[88]:
array([[ 0. , 0. , 0. ],
[ 0. , 0. , 0. ],
[ 0. , 0. , 0. ],
[ 0. , 0. , 0. ],
[ 0.93324799, 0.61507976, 0.40587073],
[ 0.49462016, 0.00400835, 0.66401908]])
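If it helps, the strided view can be wrapped into a replacement for the original function. A minimal sketch: the name transform_for_rnn_fast is made up, and the windows are trimmed so that they line up with output_y = input_y[window_size:] as in the question:
import numpy as np

def transform_for_rnn_fast(input_x, input_y, window_size):
    nrows = input_x.shape[0] - window_size + 1
    q = input_x.shape[1]
    m, n = input_x.strides
    strided = np.lib.stride_tricks.as_strided
    windows = strided(input_x, shape=(nrows, window_size, q), strides=(m, m, n))
    output_y = input_y[window_size:]
    output_x = windows[:len(output_y)]   # one window per label, matching the question's convention
    assert len(output_x) == len(output_y)
    return output_x, output_y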
import scipy.sparse.linalg as scial
import scipy.sparse as scisp
import numpy

def buildB(A,x,col_size_A):
    d = numpy.zeros(col_size_A)
    for index in xrange(col_size_A):
        d[index] = 2*x[index]-1
    tmp = scisp.spdiags(d,0,col_size_A,col_size_A)
    return scisp.bmat([[A],[tmp]])

def buildQ(l,row_size_A):
    q = numpy.zeros(row_size_A)
    for index in xrange(row_size_A):
        q[index] = 2*l[index]
    return scisp.spdiags(q,0,row_size_A,row_size_A)

def buildh(A,x,b,col_size_A):
    p = A.dot(x)
    p = numpy.subtract(p, b)
    quad = numpy.zeros(col_size_A)
    for index in xrange(col_size_A):
        quad[index] = x[index]*x[index]-x[index]
    return numpy.concatenate((p, quad))

def ini():
    A = numpy.array([[1,1],[1,-1]])
    b = [1, 0]
    c = [1, 1]
    col_size_A = 2
    row_size_A = 2
    main(A,b,c,col_size_A,row_size_A)

def main(A,b,c, col_size_A, row_size_A):
    x = numpy.zeros(col_size_A)
    l = numpy.zeros(row_size_A*2)
    eps = 10e-6
    k = 0
    while True:
        B = buildB(A,x,col_size_A)
        Q = buildQ(l[row_size_A/2:row_size_A+1], col_size_A)
        Bt = B.transpose()
        h = buildh(A,x,b,col_size_A)
        g = numpy.add(c,Bt.dot(l))
        F = numpy.concatenate((g, h))
        print "Iteration " + str(k),
        tol = numpy.amax(F)
        print "- Tol "+ str(tol)
        if tol < eps:
            print "Done"
            break
        tF = -numpy.concatenate((c, h))
        FGrad2 = scisp.csc_matrix(scisp.bmat([[Q,Bt],[B, None]]))
        print FGrad2
        print FGrad2.todense()
        print " "
        print tF
        xdelta = scial.spsolve(FGrad2,tF)
        print xdelta
        x = x + xdelta[0:col_size_A]
        l = x[col_size_A:]
        k = k + 1

if __name__ == "__main__":
    ini()
The output is:
(2, 0) 1.0
(3, 0) 1.0
(4, 0) -1.0
(2, 1) 1.0
(3, 1) -1.0
(5, 1) -1.0
(0, 2) 1.0
(1, 2) 1.0
(0, 3) 1.0
(1, 3) -1.0
(0, 4) -1.0
(1, 5) -1.0
[[ 0. 0. 1. 1. -1. 0.]
[ 0. 0. 1. -1. 0. -1.]
[ 1. 1. 0. 0. 0. 0.]
[ 1. -1. 0. 0. 0. 0.]
[-1. 0. 0. 0. 0. 0.]
[ 0. -1. 0. 0. 0. 0.]]
lda must be >= MAX(N,1): lda=2 N=3BLAS error: Parameter number 7 passed to cblas_dtrsv had an invalid value
[-1. -1. 1. -0. -0. -0.]
So FGrad2 seems to be a valid CSC matrix and tF a valid numpy array.
What is wrong with this code? I don't even understand why the error message appears before tF is printed, even though the failing call, spsolve, comes afterwards.
Edit
OK, I fixed that: the initial guess for the parameters was wrong and led to a singular matrix. But even with a valid guess for l, spsolve computes a wrong solution.
As mentioned, I labeled all the output; as you can see, spsolve returns the wrong result:
FGrad2 * xdelta != tF
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import scipy.sparse.linalg as scial
import scipy.sparse as scisp
import numpy

def buildB(A,x,col_size_A):
    d = numpy.zeros(col_size_A)
    for index in xrange(col_size_A):
        d[index] = 2*x[index]-1
    tmp = scisp.spdiags(d,0,col_size_A,col_size_A)
    return scisp.bmat([[A],[tmp]])

def buildQ(l,row_size_A):
    q = numpy.zeros(row_size_A)
    for index in xrange(row_size_A):
        q[index] = 2*l[index]
    return scisp.spdiags(q,0,row_size_A,row_size_A)

def buildh(A,x,b,col_size_A):
    p = A.dot(x)
    p = numpy.subtract(p, b)
    quad = numpy.zeros(col_size_A)
    for index in xrange(col_size_A):
        quad[index] = x[index]*x[index]-x[index]
    return numpy.concatenate((p, quad))

def ini():
    A = numpy.array([[1,1],[1,0]])
    b = [1, 0]
    c = [1, 1]
    col_size_A = 2
    row_size_A = 2
    main(A,b,c,col_size_A,row_size_A)

def main(A,b,c, col_size_A, row_size_A):
    x = numpy.zeros(col_size_A)
    x[0] = 0
    x[1] = 1
    l = numpy.ones(row_size_A*2)
    eps = 10e-6
    k = 0
    while True:
        B = buildB(A,x,col_size_A)
        Q = buildQ(l[row_size_A:], col_size_A)
        Bt = B.transpose()
        h = buildh(A,x,b,col_size_A)
        g = numpy.add(c,Bt.dot(l))
        F = numpy.concatenate((g, h))
        print "Iteration " + str(k),
        tol = numpy.amax(numpy.absolute(F))
        print "- Tol "+ str(tol)
        if tol < eps:
            print "Done"
            print x
            break
        tF = -numpy.concatenate((c, h))
        FGrad2 = scisp.csc_matrix(scisp.bmat([[Q,Bt],[B, None]]))
        print "FGrad2"
        print FGrad2.todense()
        print "tF"
        print tF
        xdelta = scial.spsolve(FGrad2,tF)
        print "spsolution"
        print xdelta
        print ""
        x = x + xdelta[0:col_size_A]
        l = xdelta[col_size_A:]
        k = k + 1

if __name__ == "__main__":
    ini()
Output:
Iteration 0 - Tol 3.0
FGrad2
[[ 2. 0. 1. 1. -1. 0.]
[ 0. 2. 1. 0. 0. 1.]
[ 1. 1. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 0.]
[-1. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0.]]
tF
[-1. -1. -0. -0. -0. -0.]
spsolution
[-1. -1. -0. -0. -0. -0.]
I think this is failing for you because your matrix is singular. E.g. convert to dense and use the regular numpy.linalg.solve:
>>> xdelta = numpy.linalg.solve(FGrad2.todense(), tF)
...
raise LinAlgError('Singular matrix')
numpy.linalg.linalg.LinAlgError: Singular matrix
The error I get is:
File "stack27538259.py", line 62, in main
xdelta = scial.spsolve(FGrad2,tF)
File "/usr/lib/python2.7/dist-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 143, in spsolve
b, flag, options=options)
RuntimeError: superlu failure (singular matrix?) at line 100 in file scipy/sparse/linalg/dsolve/SuperLU/SRC/dsnode_bmod.c
As xnx wrote, FGrad2 is singular.
np.linalg.det(FGrad2.todense()) # 0.0
(scipy version 0.14.0)
after the change I get:
/usr/lib/python2.7/dist-packages/scipy/sparse/linalg/dsolve/linsolve.py:145: MatrixRankWarning: Matrix is exactly singular
and
spsolution
[ nan nan nan nan nan nan]
and an infinite loop unless I add a k counter and break.
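A quick way to confirm the singularity numerically, using the dense FGrad2 printed in the edit above (values copied from that output):
import numpy as np

M = np.array([[ 2.,  0.,  1.,  1., -1.,  0.],
              [ 0.,  2.,  1.,  0.,  0.,  1.],
              [ 1.,  1.,  0.,  0.,  0.,  0.],
              [ 1.,  0.,  0.,  0.,  0.,  0.],
              [-1.,  0.,  0.,  0.,  0.,  0.],
              [ 0.,  1.,  0.,  0.,  0.,  0.]])
print(np.linalg.matrix_rank(M))   # strictly less than 6, so M is singular
print(np.linalg.det(M))           # 0.0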
Documentation for cblas_dtrsv may be found (here).
Accordingly,
the routine solves a triangular system A*x = b,
N is the order of the triangular matrix A,
lda (parameter number 7) is the leading dimension of the array holding A and must be >= MAX(N,1),
but the error message says lda = 2 while N = 3.
Perhaps this helps track down the problem.