I keep getting a ValueError when I try to assign the next step to my array, x[i+1]=... What am I doing wrong here? I allocated arrays with similar code earlier and had no issues. Any help is appreciated.
Source Code:
import numpy as np
import matplotlib.pyplot as plt
#parameters
sigma=10.
rho=28.
beta=8/3.
ti=0.
tf=100
dt=0.01
#pre-allocation
x = np.zeros(tf)
y = np.zeros(tf)
z = np.zeros(tf)
#initial conditions
x[0]=1.
y[0]=1.
z[0]=1.
#functions
fx= lambda x: sigma*(y-x) #y too?
fy= lambda y: x*(rho-z)-y
fz= lambda z: x*y-(beta*z)
#euler-richardson
for i in np.arange(0,tf-1):
    k1_x = fx(x[i])
    k1_y = fy(y[i])
    k1_z = fz(z[i])
    k2_x = fx((x[i]+(0.5*k1_x))*dt) #maybe just dt?
    k2_y = fy((y[i]+(0.5*k1_y))*dt)
    k2_z = fz((z[i]+(0.5*k1_z))*dt)
    x[i+1] = x[i] + k2_x
    y[i+1] = y[i] + k2_y
    z[i+1] = z[i] + k2_z
Error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
TypeError: only size-1 arrays can be converted to Python scalars
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
Input In [10], in <cell line: 2>()
8 k2_y = fy((y[i]+(0.5*k1_y))*dt)
9 k2_z = fz((z[i]+(0.5*k1_z))*dt)
---> 11 x[i+1] = x[i] + k2_x
12 y[i+1] = y[i] + k2_y
13 z[i+1] = z[i] + k2_z
ValueError: setting an array element with a sequence.
Steps needed to prevent this error:
The easiest way to fix this problem is to use a data type that can hold any kind of value. The second way is to make the value you assign match the array's default data type.
You need to create the array with dtype=object: every element of a NumPy array must have the same dtype, whereas a list can mix dtypes. I have run your code and it works fine if you change only these lines:
#pre-allocation
x = np.zeros(tf, dtype=object)
y = np.zeros(tf, dtype=object)
z = np.zeros(tf, dtype=object)
Let's try this in IPython to see what happens:
In [1]: import numpy as np
In [2]: sigma=10.
...: rho=28.
...: beta=8/3.
...: ti=0.
...: tf=100
...: dt=0.01
...: #pre-allocation
...: x = np.zeros(tf)
...: y = np.zeros(tf)
...: z = np.zeros(tf);
In [3]: #initial conditions
...: x[0]=1.
...: y[0]=1.
...: z[0]=1.
In [5]: # functions
...: fx= lambda x: sigma*(y-x)
...: fy= lambda y: x*(rho-z)-y
...: fz= lambda z: x*y-(beta*z);
Personally, I find the usage of x, y and z as both global names and function parameters confusing.
Now we call one of your functions:
In [6]: fx(x[4])
Out[6]:
array([10., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0.])
Your functions only have one parameter each. That means any other names they reference are looked up in the enclosing scope they are called from.
In this case those are the NumPy arrays x, y and z.
Hence the return value is an array.
Effectively, fx(x[4]) is the same as:
In [10]: sigma*(y-x[4])
Out[10]:
array([10., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0.])
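So x[i] + k2_x on the right-hand side is a full 100-element array, and assigning it to the single slot x[i+1] is what raises "setting an array element with a sequence". As a minimal sketch of one way to fix it (my own, not part of the answer above): pass all three state variables to each derivative function explicitly, and multiply the slopes by dt when forming the half step and the update:
import numpy as np

sigma, rho, beta = 10., 28., 8/3.
ti, tf, dt = 0., 100., 0.01

# Derivatives of the Lorenz system; every state variable is an explicit argument.
fx = lambda x, y, z: sigma*(y - x)
fy = lambda x, y, z: x*(rho - z) - y
fz = lambda x, y, z: x*y - beta*z

n = int((tf - ti)/dt)          # number of time steps, not tf itself
x, y, z = np.zeros(n), np.zeros(n), np.zeros(n)
x[0] = y[0] = z[0] = 1.

# Euler-Richardson: evaluate the slopes again at the half step, then take a full step.
for i in range(n - 1):
    k1x, k1y, k1z = fx(x[i], y[i], z[i]), fy(x[i], y[i], z[i]), fz(x[i], y[i], z[i])
    xm, ym, zm = x[i] + 0.5*dt*k1x, y[i] + 0.5*dt*k1y, z[i] + 0.5*dt*k1z
    x[i+1] = x[i] + dt*fx(xm, ym, zm)
    y[i+1] = y[i] + dt*fy(xm, ym, zm)
    z[i+1] = z[i] + dt*fz(xm, ym, zm)
With scalar inputs and scalar returns, each x[i+1] assignment stores a single number and the ValueError goes away.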
I'm having trouble using the jacobian function from autograd. Here are some functions I've defined:
import autograd.numpy as np
from autograd import elementwise_grad as egrad
from autograd import hessian,jacobian
def unicycle_continuous_dynamics(x, u):
    # x = [x position, y position, heading, forward velocity]
    # u = [omega, forward acceleration]
    #m = sym if x.dtype == object else np # Check type for autodiff
    x_pos = x[3]*np.cos(x[2])
    y_pos = x[3]*np.sin(x[2])
    heading_rate = u[0]
    v_dot = u[1]
    x_d = np.array([
        x_pos,
        y_pos,
        heading_rate,
        v_dot
    ])
    return x_d

def discrete_dynamics(x, u):
    dt = 0.05
    #Euler integrator below and return the next state
    x = x + dt*unicycle_continuous_dynamics(x,u)
    x_next = x
    return x_next

def discrete_dynamics_multiple(x,u,x_dim,u_dim):
    x_new = []
    for i,j in zip(range(len(x_dim)),range(len(u_dim))):
        x_new.append(discrete_dynamics(x[i*n_states:(i+1)*n_states],u[j*n_inputs:(j+1)*n_inputs]))
    return np.asarray(x_new).flatten()

def f(x,u,x_dim,u_dim):
    return discrete_dynamics_multiple(x,u,x_dim,u_dim)

def f_x(x,u,x_dim,u_dim):
    return jacobian(f,0)(x,u,x_dim,u_dim)

def f_u(x,u,x_dim,u_dim):
    return jacobian(f,1)(x,u,x_dim,u_dim)
n_states=4
n_inputs=2
x_try = x0 = np.array([0.5, 1.5, 0, 0, #1st row for agent1, 2nd row for agent2, 3rd row for agent3
2.5, 1.5, np.pi ,0 ,
1.5, 1.3, np.pi/2 , 0.1 ])
u_try = np.array([0.5,0.5,0.2,0.2,0.1,0.1])
Now when I call f_x, the following is returned, along with a warning:
f_x(x_try,u_try,(4,4,4),(2,2,2))
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
D:\Softwares\Anaconda\lib\site-packages\autograd\tracer.py:14: UserWarning: Output seems independent of input.
warnings.warn("Output seems independent of input.")
The shape of f_x is correct, but it shouldn't be zeros everywhere. I can't figure out what's wrong with the jacobian function. Any suggestions?
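For what it's worth, here is a minimal sanity check of the call pattern (a sketch of my own, not taken from the post) on a function whose output clearly depends on its first argument:
import autograd.numpy as anp
from autograd import jacobian

def g(x, u):
    # Elementwise function of two arguments; the output depends on x.
    return anp.sin(x)*u

J = jacobian(g, 0)(anp.array([0.1, 0.2]), anp.array([1.0, 2.0]))
# J is a (2, 2) matrix with cos(x)*u on the diagonal, and no
# "Output seems independent of input" warning is raised.
If a stripped-down version like this works but the full pipeline returns zeros, the break is somewhere in how the intermediate arrays are constructed inside the dynamics functions.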
I'm a regular user of R, but in Python I'm stuck.
I have a lot of images saved as NumPy arrays and I need to pad them to 4K resolution. Their widths vary between 1620 and 2800 pixels; the height is constant at 2160.
I need to pad each array/image to 3840*2160, i.e. add a black border on the left and right sides, so that the image itself remains unchanged.
For the padding I tried this, but the code adds black edges on all sides.
arr = np.array([[1,1,1],[1,1,1],[1,1,1],[1,1,1]])
FinalWidth = 20
def pad_with(vector, pad_width, iaxis, kwargs):
    pad_value = kwargs.get('padder', 0)
    vector[:pad_width[0]] = pad_value
arr2 = np.pad(arr,FinalWidth/2,pad_with)
I think you just need hstack, assuming you want half the width to go on either side:
def pad_with(vector, pad_width):
    temp = np.hstack((np.zeros((vector.shape[0], pad_width//2)), vector))
    return np.hstack((temp, np.zeros((vector.shape[0], pad_width//2))))
arr2 = pad_with(arr,FinalWidth)
arr2
>>> array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0.]])
arr2.shape
>>> (4, 23)
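Alternatively, a sketch using np.pad directly (my addition, assuming the goal is a fixed final width of 3840 rather than a fixed amount of padding): give it a separate (before, after) pair per axis, so only the left and right edges grow.
import numpy as np

def pad_to_width(img, final_width=3840):
    # Split the missing columns between the left and right edges; pad with zeros (black).
    # Assumes a 2-D (grayscale) array; a colour image of shape (H, W, 3) would need a
    # third (0, 0) pair in pad_width.
    missing = final_width - img.shape[1]
    left = missing // 2
    right = missing - left
    return np.pad(img, ((0, 0), (left, right)), mode='constant', constant_values=0)

arr2 = pad_to_width(np.ones((2160, 1620)))
arr2.shape
>>> (2160, 3840)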
Given a list of coordinates that represent rectangles in a grid (e.g. the upper-left and lower-right coordinates), what would be the most efficient way to fill a binary NumPy array with ones in the place of those rectangles?
The simple way would be to do a for loop such as
arr = np.zeros((w, h))
for x1, y1, x2, y2 in coordinates:
    arr[x1:x2, y1:y2] = True
where coordinates is something like [(x_11, y_11, x_22, y_22), ..., (x_n1, y_n1, x_n2, y_n2)]
However, I want to avoid the loop and instead exploit NumPy's vectorized internal operations, which is one of the main advantages of using it. I have tried logical_and, but it seems to work only for a single rectangle or condition. How could I do it in a more "numpy" way?
The resulting image would be something like this for 2 rectangles:
Let's say (1, 1) is the upper-left coordinate of the rectangle,
and (5, 4) the lower-right.
Then
arr = np.zeros((10, 10))
arr[1:5, 1:4] = 1
returns
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 1., 1., 0., 0., 0., 0., 0., 0.],
[0., 1., 1., 1., 0., 0., 0., 0., 0., 0.],
[0., 1., 1., 1., 0., 0., 0., 0., 0., 0.],
[0., 1., 1., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
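For several rectangles without looping over them, one possible sketch (my own, not part of the answer above) builds a boolean mask per rectangle with broadcasting and collapses them with any():
import numpy as np

coordinates = np.array([(1, 1, 5, 4), (6, 2, 9, 8)])   # hypothetical (x1, y1, x2, y2) rows
w = h = 10
rows = np.arange(w)[None, :, None]                     # shape (1, w, 1)
cols = np.arange(h)[None, None, :]                     # shape (1, 1, h)
x1, y1, x2, y2 = (coordinates[:, i][:, None, None] for i in range(4))
mask = (rows >= x1) & (rows < x2) & (cols >= y1) & (cols < y2)   # shape (n, w, h)
arr = mask.any(axis=0).astype(float)                   # 1.0 inside any rectangle
That said, for a handful of rectangles the slicing loop in the question is usually at least as fast, because each slice assignment is itself a vectorized operation.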
I am running an optimization problem in torch. My torch installation is GPU compatible but for some odd reason it does not use the GPU at all when running. Everything seems to be done by the CPU and my local RAM.
import numpy as np
import scipy.sparse.csgraph as csg
import torch
from torch.autograd import Variable
import torch.autograd as autograd
import matplotlib.pyplot as plt
%matplotlib inline
def cmdscale(D):
    # Number of points
    n = len(D)
    # Centering matrix
    H = np.eye(n) - np.ones((n, n))/n
    # YY^T
    B = -H.dot(D**2).dot(H)/2
    # Diagonalize
    evals, evecs = np.linalg.eigh(B)
    # Sort by eigenvalue in descending order
    idx = np.argsort(evals)[::-1]
    evals = evals[idx]
    evecs = evecs[:,idx]
    # Compute the coordinates using positive-eigenvalued components only
    w, = np.where(evals > 0)
    L = np.diag(np.sqrt(evals[w]))
    V = evecs[:,w]
    Y = V.dot(L)
    return Y, evals
Y = np.array([[0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 1., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 1., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 1., 0., 0., 1., 0., 0., 0.]])
temp = Y[~np.all(Y == 0, axis=1)]
temp = temp[:,~np.all(Y == 0, axis=1)]
Y = np.asarray(temp, dtype='uint8')
n = np.shape(Y)[0]
k = 2
D = csg.shortest_path(Y, directed=True)
Z = cmdscale(D)[0][:,0:k]
Z = Z - Z.mean(axis=0, keepdims=True)
def distMatrix(m):
    n = m.size(0)
    d = m.size(1)
    x = m.unsqueeze(1).expand(n, n, d)
    y = m.unsqueeze(0).expand(n, n, d)
    return torch.sqrt(torch.pow(x - y, 2).sum(2) + 1e-4)
def loss(tY):
    d = -distMatrix(tZ)+B
    sigmoidD = torch.sigmoid(d)
    reduce = tY*torch.log(sigmoidD)+(1-tY)*torch.log(1-sigmoidD)
    #remove diagonal
    reduce[torch.eye(n).byte().cuda()] = 0
    return -reduce.sum()
tZ = autograd.Variable(torch.cuda.FloatTensor(Z), requires_grad=True)
B = autograd.Variable(torch.cuda.FloatTensor([0]), requires_grad=True)
tY = autograd.Variable(torch.cuda.FloatTensor(Y), requires_grad=False)
losses = []
biases = []
#rocAuc = []
learning_rate = 1e-3
epochs = 10000
percentDone = 0
percent = 5
for i in range(epochs):
    if i % (epochs*percent*0.01) == 0:
        percentDone += percent
        print(str(percentDone) + "%")
    l = loss(tY)
    l.backward(retain_graph=True)
    losses.append(float(l))
    biases.append(B.data)
    tZ.data = tZ.data - learning_rate * tZ.grad.data
    B.data = B.data - learning_rate * B.grad.data
    tZ.grad.data.zero_()
    B.grad.data.zero_()
plt.subplot(122)
plt.plot(losses)
plt.title('Loss')
plt.xlabel('Iteration')
plt.ylabel('loss')
plt.show()
There's an awful lot of code, but it is a working example. How do I make this code run on my GPU? Is it even possible? Any hints or nudges in the right direction would be greatly appreciated.
Maybe you're missing the CUDA toolkit, or it doesn't work properly with your PyTorch installation.
Can you first check whether this function
torch.cuda.is_available()
returns True? If not, you should check that the CUDA toolkit works and that the PyTorch build you are using matches your CUDA installation.
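If it does return True, a small sketch of the usual device-handling pattern (my addition, not part of the answer above) is to pick the device once and create tensors on it explicitly, instead of relying on torch.cuda.FloatTensor:
import torch

# Fall back to the CPU when CUDA is not available, so the script still runs everywhere.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(torch.cuda.is_available(), device)

# Create the parameters directly on the chosen device; autograd works the same way.
tZ = torch.randn(15, 2, device=device, requires_grad=True)
B = torch.zeros(1, device=device, requires_grad=True)

# nvidia-smi or torch.cuda.memory_allocated() should then show activity on the GPU.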