I have this line of code in a MATLAB program:
x(:,i) = gamrnd(a(i),1,dim,1)
I was wondering how I could write the same line in Python. I think the equivalent statement is:
gamma.rvs(a, size=1000)
However, this keeps giving me an IndexError.
Here is my full code for this part:
x = np.array([])
for i in range(N-1):
    # generates dim random variables with a gamma distribution
    x[:, i] = gamma.rvs(a[i], dim-1)
Thanks for the help!
You initialized x = np.array([]), a 1-D empty array, and then tried to assign to x[:, i], which doesn't exist, hence the IndexError. You'll want to append instead:
x = np.array([])
for i in range(N-1):
    x = np.append(x, gamma.rvs(a[i], dim - 1))
See the NumPy documentation for np.append; note that it returns a new array rather than modifying x in place, which is why the result is reassigned to x above.
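As an aside, if you want a closer match to the MATLAB line, which fills column i of a preallocated dim-by-(N-1) matrix, a minimal sketch would be the following. Note that in scipy.stats.gamma.rvs the second positional argument is loc, not the sample count, so the number of samples has to be passed as size=; the values of dim, N and a below are just placeholders for your own.

import numpy as np
from scipy.stats import gamma

dim, N = 1000, 5                 # example values; use your own
a = np.linspace(1.0, 2.0, N)     # example shape parameters

# Preallocate, then fill column i, mirroring x(:,i) = gamrnd(a(i),1,dim,1)
x = np.zeros((dim, N - 1))
for i in range(N - 1):
    x[:, i] = gamma.rvs(a[i], scale=1, size=dim)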
I have an error in my test code. A short description of the test: I import numpy, create a variable for the starting coordinate, and then create a 7x4 array. In a nested for loop over the array I step x by 10 and y by 5 from the starting coordinate, put x and y into a tuple, store the tuple in the array, and finally print the array. When I run the code I get:
ValueError: cannot copy sequence with size 2 to array axis with dimension 4
How do I solve this error?
Here is the code:
import numpy as np

# FOR TEST
pose = (640, 154)
all_poses = np.zeros((1, 7, 4))
for i in range(0, 6):
    for j in range(0, 4):
        y = pose[1] - i * 5
        x = pose[0] - j * 10
        cortege = (x, y)
        all_poses[i, j] = cortege
print(all_poses)
The reason you are getting the error is that cortege has length 2, whereas all_poses[i, j] selects the last axis of the (1, 7, 4) array and therefore has length 4. So when you do all_poses[i, j] = cortege, you are effectively doing something like [0, 0, 0, 0] = [640, 154]. The lengths don't match, and you get the error.
One way to avoid the error is to give all_poses a last axis of length 2 instead of 4, or to do all_poses[i, j] = cortege * 2, which assigns [640, 154, 640, 154] and matches the length. Which of these you should do depends on what you want the code to achieve, which is not clear from your question. Could you explain what exactly you want your code to do?
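If the intent is to store one (x, y) pair at each position of a 7x4 grid (an assumption, since the question doesn't say what the shape is for), a minimal sketch would give the array a last axis of length 2:

import numpy as np

pose = (640, 154)
# One (x, y) pair per cell: 7 rows x 4 columns x 2 coordinates
all_poses = np.zeros((7, 4, 2))
for i in range(7):          # covers all 7 rows
    for j in range(4):
        x = pose[0] - j * 10
        y = pose[1] - i * 5
        all_poses[i, j] = (x, y)  # a length-2 assignment now matches the last axis
print(all_poses)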
I have used interp2 in MATLAB, as in the following code, which is part of @rayryeng's answer to Three dimensional (3D) matrix interpolation in Matlab:
d = size(volume_image)
[X,Y] = meshgrid(1:1/scaleCoeff(2):d(2), 1:1/scaleCoeff(1):d(1));
for ind = z
    % Interpolate each slice via interp2
    M2D(:,:,ind) = interp2(volume_image(:,:,ind), X, Y);
end
Example of Dimensions:
The image size is 512x512 and the number of slices is 133. So:
volume_image (rows, columns, slices in the 3rd dimension): 512x512x133
X: 288x288
Y: 288x288
scaleCoeff(2): 0.5625
scaleCoeff(1): 0.5625
z = 1 up to 133, hence z: 1x133
ind: 1 up to 133
M2D(:,:,ind) is finally 288x288x133
Also, MATLAB's size order is (rows, columns, slices in the 3rd dimension), while the Python/NumPy array here has shape (slices in the 3rd dimension, rows, columns).
However, after converting the MATLAB code to Python, I get an error, ValueError: Invalid length for input z for non rectangular grid:
for ind in range(0, len(z)+1):
    M2D[ind, :, :] = interpolate.interp2d(X, Y, volume_image[ind, :, :])  # ValueError: Invalid length for input z for non rectangular grid
What is wrong? Thank you so much.
In MATLAB, interp2 has as arguments:
result = interp2(input_x, input_y, input_z, output_x, output_y)
You are using only the latter 3 arguments, the first two are assumed to be input_x = 1:size(input_z,2) and input_y = 1:size(input_z,1).
In Python, scipy.interpolate.interp2d is quite different: it takes the first 3 input arguments of the MATLAB function, and returns an object that you can call to get interpolated values:
f = scipy.interpolate.interp2d(input_x, input_y, input_z)
result = f(output_x, output_y)
Following the example from the documentation, I get to something like this:
from scipy import interpolate
x = np.arange(0, volume_image.shape[2])
y = np.arange(0, volume_image.shape[1])
f = interpolate.interp2d(x, y, volume_image[ind, :, :])
xnew = np.arange(0, volume_image.shape[2], 1/scaleCoeff[0])
ynew = np.arange(0, volume_image.shape[1], 1/scaleCoeff[1])
M2D[ind, :, :] = f(xnew, ynew)
[Code not tested, please let me know if there are errors.]
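For reference, an untested sketch of the full loop over slices, assuming volume_image already has shape (slices, rows, columns) and scaleCoeff is as in the question (and noting that interp2d is deprecated in recent SciPy releases):

import numpy as np
from scipy import interpolate

n_slices, n_rows, n_cols = volume_image.shape
x = np.arange(0, n_cols)
y = np.arange(0, n_rows)
xnew = np.arange(0, n_cols, 1 / scaleCoeff[0])
ynew = np.arange(0, n_rows, 1 / scaleCoeff[1])

# Preallocate the output volume: one interpolated slice (about 288x288 here) per input slice
M2D = np.zeros((n_slices, len(ynew), len(xnew)))
for ind in range(n_slices):
    f = interpolate.interp2d(x, y, volume_image[ind, :, :])
    M2D[ind, :, :] = f(xnew, ynew)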
You might be interested in scipy.ndimage.zoom. If you are interpolating from one regular grid to another, it is much faster and easier to use than scipy.interpolate.interp2d.
See this answer for an example:
https://stackoverflow.com/a/16984081/1295595
You'd probably want something like:
import scipy.ndimage as ndimage
M2D = ndimage.zoom(volume_image, (1, scaleCoeff[0], scaleCoeff[1]))
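With scaleCoeff[0] = scaleCoeff[1] = 0.5625 and a volume of shape (133, 512, 512), this should give an output of roughly (133, 288, 288), matching the M2D size from the MATLAB code.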
I want to parallelize a function using numba.vectorize, but my function doesn't take any input. Currently, I pass a dummy array and a dummy input that my function never uses.
Is there a more elegant/fast way (possibly without using numba.vectorize)?
Code example (not my actual code, just a demonstration of how I discard the input):
import numpy as np
from numba import vectorize

@vectorize(["int32(int32)"], nopython=True)
def particle_path(discard_me):
    x = 0
    for _ in range(10):
        x += np.random.uniform(0, 1)
    return np.int32(x)

arr = particle_path(np.empty(1024, dtype=np.int32))
print(arr)
There doesn't seem to be any reason to use vectorize here; you can achieve the goal with jit, although you do have to write the loop over the array elements explicitly, and the array must be instantiated outside the function. If your array will always be 1D, you can use:
import numpy as np
from numba import jit

@jit(nopython=True)
def particle_path(out):
    for i in range(len(out)):
        x = 0
        for _ in range(10):
            x += np.random.uniform(0, 1)
        out[i] = x

arr = np.empty(1024, dtype=np.int32)
particle_path(arr)
You can deal with arrays of any dimensionality in a similar way using the flat attribute (and use .size to get the total number of elements in the array):
@jit(nopython=True)
def particle_path(out):
    for i in range(out.size):
        x = 0
        for _ in range(10):
            x += np.random.uniform(0, 1)
        out.flat[i] = x

arr = np.empty(1024, dtype=np.int32)
particle_path(arr)
Finally, you can create the array inside the function if you need a new array each time you call it (use one of the versions above if you'll be calling the function repeatedly and want to overwrite the same array, saving the cost of re-allocating it over and over again):
@jit(nopython=True)
def particle_path(num):
    out = np.empty(shape=num, dtype=np.int32)
    for i in range(num):
        x = 0
        for _ in range(10):
            x += np.random.uniform(0, 1)
        out[i] = x
    return out

arr = particle_path(1024)
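Since the original motivation was parallelization, it may also be worth noting that jit(parallel=True) combined with prange lets the outer loop run across threads. A minimal sketch, assuming a reasonably recent Numba version (and do check how your version handles np.random state in parallel regions):

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def particle_path(num):
    out = np.empty(num, dtype=np.int32)
    for i in prange(num):  # iterations may be distributed across threads
        x = 0.0
        for _ in range(10):
            x += np.random.uniform(0, 1)
        out[i] = np.int32(x)
    return out

arr = particle_path(1024)
print(arr)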
I'm currently trying to do video stabilization using OpenCV and Python.
I use the following function to calculate rotation:
def accumulate_rotation(src, theta_x, theta_y, theta_z, timestamps, prev, current, f, gyro_delay=None, gyro_drift=None, shutter_duration=None):
    if prev == current:
        return src
    pts = []
    pts_transformed = []
    for x in range(10):
        current_row = []
        current_row_transformed = []
        pixel_x = x * (src.shape[1] / 10)
        for y in range(10):
            pixel_y = y * (src.shape[0] / 10)
            current_row.append([pixel_x, pixel_y])
            if shutter_duration:
                y_timestamp = current + shutter_duration * (pixel_y - src.shape[0] / 2)
            else:
                y_timestamp = current
            transform = getAccumulatedRotation(src.shape[1], src.shape[0], theta_x, theta_y, theta_z, timestamps, prev,
                                               current, f, gyro_delay, gyro_drift)
            output = cv2.perspectiveTransform(np.array([[pixel_x, pixel_y]], dtype="float32"), transform)
            current_row_transformed.append(output)
        pts.append(current_row)
        pts_transformed.append(current_row_transformed)
    o = utilities.meshwarp(src, pts_transformed)
    return o
I get the following error when it gets to output = cv2.perspectiveTransform(np.array([[pixel_x, pixel_y]], dtype="float32"), transform):
cv2.error: /Users/travis/build/skvark/opencv-python/opencv/modules/core/src/matmul.cpp:2271: error: (-215) scn + 1 == m.cols in function perspectiveTransform
Any help or suggestions would really be appreciated.
This implementation really needs to be changed in a future version, or the docs should be more clear.
From the OpenCV docs for perspectiveTransform():
src – input two-channel (...) floating-point array
Slant emphasis added by me.
>>> A = np.array([[0, 0]], dtype=np.float32)
>>> A.shape
(1, 2)
So we see from here that A is just a single-channel matrix, that is, two-dimensional. One row, two cols. You instead need a two-channel image, i.e., a three-dimensional matrix where the length of the third dimension is 2 or 3 depending on if you're sending in 2D or 3D points.
Long story short, you need to add one more set of brackets to make the set of points you're sending in three-dimensional, where the x values are in the first channel, and the y values are in the second channel.
>>> A = np.array([[[0, 0]]], dtype=np.float32)
>>> A.shape
(1, 1, 2)
Also, as suggested in the comments:
If you have an array points of shape (n_points, dimension) (i.e. dimension is 2 or 3), a nice way to re-format it for this use-case is points[np.newaxis]
That's all you need. It's not intuitive, and though it's documented, the docs aren't very explicit on that point. I've answered an identical question before, but for the cv2.transform() function.
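Applied to the line from the question, the fix is just one extra pair of brackets (a sketch of that single call, everything else unchanged):

# Shape (1, 1, 2): a 1x1 two-channel array, which is what perspectiveTransform expects
point = np.array([[[pixel_x, pixel_y]]], dtype="float32")
output = cv2.perspectiveTransform(point, transform)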
Here's the code:
x = range(-6,7)
tmp1 = []
for i in range(len(x)):
    tmp1.append(math.exp(-(i*i)/(2*self.sigma*self.sigma)))
max_tmp1 = max(tmp1)
mod_tmp1 = []
for i in range(len(tmp1)):
    mod_tmp1.append(max_tmp1 - i)
ht1 = np.kron(np.ones((9,1)),tmp1)
sht1 = sum(ht1.flatten(1))
mean = sht1/(13*9)
ht1 = ht1 - mean
ht1 = ht1/sht1
print ht1.shape
h = np.zeros((16,16))
for i in range(0, 9):
    for j in range(0, 13):
        h[i+3, j+1] = ht1[i, j]
for i in range(0, 10):
    ag = 15*i
    np.append(h, scipy.misc.imrotate(h, ag, 'bicubic'))
R = []
print h.shape
print self.img.shape
for i in range(0, 11):
    print 'here'
    R[i] = scipy.signal.convolve2d(self.img, h[i], mode = 'same')
rt = np.zeros(self.img.shape)
x, y = self.img.shape
The error I get states:
ValueError: object of too small depth for desired array
It looks to me as if the problem is that you're setting h up wrongly. I assume you want h[i] to be a 16x16 array suitable for convolving with, but that's not what you've actually made it, for a couple of different reasons.
I suggest you change the loop with the imrotate calls to this:
h = [scipy.misc.imrotate(h, 15*i, 'bicubic') for i in range(10)]
(What your existing code does is: first set up h as a single 16x16 array; then, repeatedly: compute a rotated version, "flatten" both h and that to make 256-element vectors, compute the result of appending them to make a 512-element vector, and throw the result away. numpy.append doesn't operate in place, and defaults to flattening its arguments before it appends. Neither of those is what you want!)
The list comprehension above will give you a 10-element Python list containing rotated versions of your convolution kernel.
... Oh, I see that your loop computing R actually wants 11 kernels, not 10. Make it range(11), then. (Your original code generated rotations of 0, 0, 15, 30, ..., 135 degrees, but I'm guessing 0, 15, 30, ..., 150 degrees is more likely to be what you want.)
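One more detail worth noting (an aside, since the traceback doesn't get that far): R = [] followed by R[i] = ... will raise an IndexError, so the convolution loop can also be written as a comprehension. A sketch, assuming the list-comprehension version of h above with range(11):

# h is now a list of 11 rotated 16x16 kernels; convolve the image with each one
R = [scipy.signal.convolve2d(self.img, kernel, mode='same') for kernel in h]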