Background: I have millions of points in 2D space with (x_position, y_position, value) associated with each point. I am trying to summarize these points by creating an image, where each pixel can contain multiple points. To summarize, each pixel stores the sum of values at that (x_pixel, y_pixel) location in the image.
Question: How can I do this efficiently? Currently, my code does something like this:
image = np.zeros((4096, 4096))
for point in data:
    x_pixel, y_pixel = convertPointPos2PixelPos(point)
    image[x_pixel, y_pixel] += point.getValue()
but the ETA for this code to complete is 450 hours, which is unacceptable. Is there a way to parallelize this? The code writes to the same image[x, y] index multiple times. I found StackOverflow posts that suggest using multiprocessing, but I think the locking needed to prevent race conditions would make it take just as long as it does without parallelizing.
Assuming you want something on a regular grid, you can use simple division to bin your data. Here is an example:
size = (4096, 4096)
data = np.random.rand(100000000, 3)
image = np.zeros(size)
coords = data[:, :2]

# Bounding box of the coordinates (can be skipped if the data range is known)
min = coords.min(0)
max = coords.max(0)

# Linear binning: map each (x, y) pair to an integer pixel index
index = np.floor_divide(coords - min, (max - min) / np.subtract(size, 1),
                        out=np.empty(coords.shape, dtype=int), casting='unsafe')
index is now an array of indices into image where you want to add the corresponding values. You can do an unbuffered add using np.add.at:
np.add.at(image, tuple(index.T), data[:, -1])
If your data range is better defined than just the bounding box of the coordinates, you can save a little time by not computing coords.min(0) and coords.max(0).
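For example, here is a minimal sketch of that shortcut, assuming the coordinates are already known to lie in [0, 1) on both axes (the bounds below are illustrative, not from the original post):

known_min = np.array([0.0, 0.0])   # assumed lower bounds
known_max = np.array([1.0, 1.0])   # assumed upper bounds
index = np.floor_divide(coords - known_min,
                        (known_max - known_min) / np.subtract(size, 1),
                        out=np.empty(coords.shape, dtype=int), casting='unsafe')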
This entire operation takes about 6.4 seconds on my very moderately powered machine for 10M points, including the calls to plt.imshow and plt.colorbar and garbage collection before runs.
Timing was collected using the %%timeit cell magic in IPython.
Either way, you're well under 450 hours. Even if your coordinate transformation is not linear binning, I expect that you can run in reasonable time as long as you vectorize it properly. Also, multiprocessing is not likely to give you a huge boost since it requires copying data around.
I am writing a Python script for some geometrical data manipulation (calculating motion trajectories for a multi-drive industrial machine). Generally, the idea is that there is a given shape (let's say an ellipse, but in the general case it can be any convex shape defined by a series of 2D points), which is rotated, and its uppermost tangent point must be followed. I don't have a problem with the latter part, but I need a little hint with the 2D shape preparation.
Let's say that the ellipse was defined with too few points, for example 25. (As I said, ultimately this can be any shape, for example a rounded hexagon.) To maintain the necessary precision I need far more points (let's say 1000), preferably equally distributed over the whole shape, or with a higher density of points near corners, sharp curves, etc.
I have a few ideas ringing in my head. I guess the DFT (FFT) would be a good starting point for this resampling; while analyzing scipy.signal.resample() I found that there are far more functions in the scipy.signal package that sound promising to me...
What I'm asking for is a suggestion on which way to go and which tool to try for this job, i.e. which may be the most suitable. Maybe there is a tool meant exactly for what I'm looking for, or maybe I'm overthinking this and one of the FFT-based implementations like resample() will work just fine (of course, after some adjustments at the starting and ending point of the shape to make sure it closes without issues)?
scipy.signal sounds promising; however, as far as I understand, it is meant to work with time-series data, not geometrical data. I guess this may cause some problems, as my data isn't a function (in the mathematical sense).
Thanks and best regards!
As far as I understand, what you want is an interpolated version of your original data.
The DFT (or FFT) will not achieve this purpose, since it performs a Fourier transform (which is not what you want).
Theoretically speaking, what you need in order to interpolate your data is to define a function that calculates the result at the new data points.
So, let's say your data contains 5 points, each of which stores a single number (1D, to simplify), and you want a new array with roughly twice as many points, filled with a linear interpolation of your original data.
Using numpy.interp:
import numpy as np

original_data = [2, 0, 3, 5, 1]  # define your data in 1D
new_data_resolution = 0.5        # define new sampling distance (i.e. your x-axis resolution)

interp_data = np.interp(
    x=np.arange(0, 5 - 1 + new_data_resolution, new_data_resolution),  # new sampling points (new axis)
    xp=np.arange(len(original_data)),  # original sampling points
    fp=original_data,
)
# now interp_data contains (5 - 1) / 0.5 + 1 = 9 points
After this, you will have an array of (5 - 1) / new_data_resolution + 1 points (more than 5, since new_data_resolution < 1), whose values are (in this case) a linear interpolation of your original data.
After you have achieved/understood this example, you can dive into the scipy.interpolate module to get a better understanding of the interpolation functions (my example uses a linear function to get the data at the missing points).
Applying this to n-dimensional arrays is straightforward: iterate over each dimension of your data.
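For the closed-shape case in the question, one option from scipy.interpolate is a periodic parametric spline. The sketch below is only a minimal illustration, assuming the shape is given as two arrays x, y of its 2D points (the ellipse here is placeholder data, not the real outline):

import numpy as np
from scipy import interpolate

# Placeholder shape: 25 points on an ellipse (stand-in for the real outline)
theta = np.linspace(0, 2 * np.pi, 25, endpoint=False)
x, y = 2.0 * np.cos(theta), 1.0 * np.sin(theta)

# Fit a periodic parametric spline through the points; per=True closes the curve
tck, u = interpolate.splprep([x, y], s=0, per=True)

# Evaluate 1000 points along the closed curve (equally spaced in the spline
# parameter, which is only approximately equally spaced in arc length)
u_new = np.linspace(0, 1, 1000, endpoint=False)
x_new, y_new = interpolate.splev(u_new, tck)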
Note: Drastic speed improvements since posting, see edits at bottom.
I have some working code, but it over-utilizes loops and I'm pretty sure there should be a faster way of doing it. The size of the output array ends up being pretty large, so when I try to make other arrays the same size as the output, I run out of memory rather quickly.
I am simulating many speakers placed around a sphere all pointing toward the center. I have a simulation of a single speaker and I would like to leverage this single simulation by using the principle of superposition. Basically I want to sum up rotated copies of the single transducer simulation to get my final result.
I have an axisymmetric simulation of acoustic pressure data in cylindrical coordinates ("polar_coord_r", "polar_coord_z"). The pressure field from the simulation is unique at each R and Z value and completely described by a 2D array ("P_real_RZ").
I want to sum together multiple rotated copies of this pressure field onto a 3D Cartesian output array. Each copy is rotated to a different location on the sphere. Currently, I am specifying the rotation with an x, y, z point because it allows me to do vector math (spherical coordinates wouldn't allow me to do this as elegantly). The output array is rather large (770 × 770 × 804).
I have some working code to get the output from a single copy of the speaker ("transducer"). It takes about 12 seconds for each slice, so it would take over two hours to add each new speaker! I want to have a dozen or so copies of the speaker, so this will take way too long.
The code takes a slice with constant X and computes the R and Z positions at each location in that slice. "r_distance" is a 2D array containing the radial distance from a line passing between the origin and a point ("point"). Similarly, "z_distance" is a 2D array containing the distance along that same line.
To get the pressure for the slice, I find the indices of the closest matching "polar_coord_r" and "polar_coord_z" to the computed R distances and Z distances. I use these indices to find what value of pressure (from P_real_RZ) to place at each value in the output.
Some definitions:
xx, yy, and zz are 1D arrays describing the slices through the output volume
XXX, YYY, and ZZZ are 3D arrays produced by numpy.meshgrid
point is a point which defines the direction that the speaker is rotated. Basically it's just a position vector of the speakers center.
P_real_RZ is a 2D array which specifies the real pressure at each unique R and Z value.
polar_coord_r and polar_coord_z are 1D arrays which define the unique values of R and Z on which P_real_RZ is defined.
current_transducer (only one so far represented in this code) holds the pressure values computed for the current point.
output is the result from summing many speakers/transducers together.
Any suggestions to speed up this code are greatly appreciated.
Working loop:
for i, x in enumerate(xx):
    # Creates a unit vector from origin to a point
    vector = normalize(point)

    # Create a slice of the cartesian space with constant X
    xyz_slice = np.array([x*np.ones_like(XXX[i,:,:]), YYY[i,:,:], ZZZ[i,:,:]])

    # Projects the position vector of each point of the slice onto the unit vector.
    projection = np.array(list(map(np.dot, xyz_slice, vector)))

    # Normalizes the projection, which results in the Z distance along the line passing through the point
    #z_distance = np.apply_along_axis(np.linalg.norm, 0, projection) # this is the slow bit
    z_distance = np.linalg.norm(projection, axis=0) # I'm an idiot

    # Uses vector math to determine the distance from the line
    # Each point in the XYZ slice is the sum of the vector along the line and the vector away from the line (radial vector).
    # By extension, the position of the xyz point minus the projection of the point onto the unit vector gives the radial vector
    # Norm the radial vector to get the R value for everywhere in the slice
    #r_distance = np.apply_along_axis(np.linalg.norm, 0, xyz_slice - projection) # this is the slow bit
    r_distance = np.linalg.norm(xyz_slice - projection, axis=0) # I'm an idiot

    # Map the pressure data to each point in the slice using the R and Z distance with the RZ pressure slice.
    # Look for a more efficient way to do this, perhaps; currently takes about 12 seconds per slice
    r_indices = r_map_v(r_distance) # 1.3 seconds by itself
    z_indices = z_map_v(z_distance)
    r_indices[r_indices > 384] = 384 # clamp indices above the max for r_distance
    z_indices[z_indices > 803] = 803 # clamp indices above the max for z_distance
    current_transducer[i,:,:] = P_real_RZ[z_indices, r_indices]

# Sum the mapped pressure data into the output.
output += current_transducer
I have also tried to work with the simulation data in the form of a 3D Cartesian array, that is, the pressure data from the simulation for all x, y, and z values, the same size as the output. I can rotate this 3D array in one direction (not the two rotations needed for speakers arranged on a sphere). This takes up way too much memory and is still painfully slow; I end up getting memory errors with this approach.
Edit: I found a slightly simpler way to do it but it is still slow. I've updated the code above so that there are no longer nested loops.
I ran a line profiler and the slowest lines by far were the two containing np.apply_along_axis(). I'm afraid I might have to rethink how I do this completely.
Final Edit: I initially had a nested loop which I assumed to be the issue. I don't know what made me think I needed to use apply_along_axis with linalg.norm. In any case that was the issue.
I haven't looked for all the ways that you could optimize this code, but this issue jumped out at me: "I ran a line profiler and the slowest lines by far were the two containing np.apply_along_axis()." np.linalg.norm accepts an axis argument. You can replace the line
z_distance = np.apply_along_axis(np.linalg.norm, 0, projection)
with
z_distance = np.linalg.norm(projection, axis=0)
(and likewise for the other use of np.apply_along_axis and np.linalg.norm).
That should improve the performance a bit.
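As a quick illustration (not from the original answer), here is the kind of small check one could run to confirm the two forms agree while the axis version runs much faster:

import numpy as np

a = np.random.rand(3, 100, 100)
slow = np.apply_along_axis(np.linalg.norm, 0, a)   # one Python-level call per (row, col) pair
fast = np.linalg.norm(a, axis=0)                   # single vectorized call
assert np.allclose(slow, fast)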
I'm developing a small piece of software for a college project and I'm having a problem: the code has way too low performance.
It's an image editing program, and the image is a large 3D list (the main list is the whole image, each list inside it is a horizontal line, and each list inside that one is a pixel containing three elements).
I need to make pixel-by-pixel adjustments, like multiplying all of them by a constant, so it would go like:
for y in range(0, len(image)):
    for x in range(0, len(image[0])):
        for c in range(0, 3):
            image[y][x][c] = image[y][x][c] * a
where image is the 3D list,
len(image) is the number of horizontal lines in the image (vertical size),
len(image[0]) is the number of pixels in a horizontal line (horizontal size),
and c is the component of the pixel (going from 0 to 2).
This loop takes several minutes to go through a single 12 MP image, and the number of images I have to process is in the order of hundreds, so this is just impossible.
What can I do to get better performance? Even editing software takes a few seconds, because it can be a pretty large operation, but this code is just too slow.
Thank you!
I also suggest (as in the comments) using NumPy.
Sample code would be something like this:
import numpy as np

im = np.array(image, dtype="float16")

# Define your custom function
def myFunc(x, a):
    x = x * a
    return x

# Vectorise the function
vfunc = np.vectorize(myFunc)

# Apply the function to the array with the parameter a = 5
im = vfunc(im, 5)
I compared timings for a vectorized numpy function and nested loops for an array roughly equivalent in size to a 12 MP image: 4242 x 2828 x 3.
Nested loops took 99 seconds, while numpy took about 6.5 seconds.
For your reference here is a question about numpy functions efficiency: Most efficient way to map function over numpy array
For simple functions like multiplication, using numpy's native operations is the fastest.
# Multiply each element by 5
im = im * 5
This code took only 0.5 sec on my machine.
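If the rest of the project expects plain nested lists, a minimal round-trip sketch (reusing the question's image and a; the dtype is an assumption) would be:

arr = np.asarray(image, dtype=np.float64)  # nested 3D list -> ndarray
arr *= a                                   # one vectorized multiply instead of three loops
image = arr.tolist()                       # back to a nested list if needed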
I have two arrays of size (n, m, m) (n number of images of size (m,m)). I want to perform a cross correlation between each corresponding n of the two arrays.
Example: n=1 -> corr2d([m,m]_1, [m,m]_2)
My current way include a bunch of for loops in python:
for i in range(len(X)):
    X_co = X[i,0,:,:]/(np.max(X[i,0,:,:]))
    X_x = X[i,1,:,:]/(np.max(X[i,1,:,:]))
    autocorr[i,0,:,:] = correlate2d(X_co, X_x, mode='same', boundary='fill', fillvalue=0)
Obviously this is very slow when the input contains many images, and it becomes a substantial part of the total run time if (m, m) << n.
The obvious optimization is to skip the loop and feed everything directly to the compiled correlation function. Currently I'm using scipy's correlate2d.
I've looked around but haven't found any function that allows correlation along some axis or multiple inputs.
Any tips on how to make scipy's correlate2d work or alternatives?
I decided to implement it via the FFT instead.
import numpy as np

def fft_xcorr2D(x):
    # Over axes (-2,-1) (default in the fft2 function)
    ## Pad because of the cyclic (circular) behavior of the FFT
    x = np.fft.fft2(np.pad(x, ([0,0], [0,0], [0,34], [0,34]), mode='constant'))

    # Conjugate for correlation, not convolution (convolution theorem)
    x[:,1,:,:] = np.conj(x[:,1,:,:])

    # Over axes (-2,-1) (default in the ifft2 function)
    ## Multiply elementwise over the 2nd axis (2 image bands for me)
    ### fftshift over rows and columns of the images
    corr = np.fft.fftshift(np.fft.ifft2(np.prod(x, axis=1)), axes=(-2,-1))

    # Return after removing the padding
    return np.abs(corr)[:,3:-2,3:-2]
Call via:
ts=fft_xcorr2D(X)
If anybody wants to use it:
My input is a 4D array: (N, 2, #Rows, #Cols)
E.g. (500, 2, 30, 30): 500 images, 2 bands (polarizations, for example), of 30x30 pixels
If your input is different, adjust the padding to your liking
Check that your input order is the same as mine; otherwise change the axes arguments in the fft2 and ifft2 functions, in np.prod and in fftshift. I use fftshift to put the maximum value in the middle (otherwise it sits in the corners), so be wary of that if that's not what you want.
Why is it the maximum value? Technically it doesn't have to be, but for my purposes it is. fftshift is used to get a correlation that looks the way you're used to; otherwise the quadrants are turned "inside out". If you wonder what I mean, remove the fftshift (just the fftshift call, not its arguments), call the function as before, and plot it.
Afterwards, it should be ready to use.
Possibly x.prod(axis=1) is faster than np.prod(x, axis=1), but this is an old post; it showed no improvement for me when I tried it.
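As an alternative sketch (not part of the original answer), newer SciPy versions (1.1+) let fftconvolve operate along chosen axes, which gives a batched cross-correlation without writing the FFT by hand; the function name batched_xcorr2d and the layout assumptions are mine:

from scipy.signal import fftconvolve

def batched_xcorr2d(a, b):
    # Cross-correlate corresponding images of two (N, rows, cols) stacks.
    # Flipping the second input turns convolution into correlation for real
    # images; per-image normalization (as in the question) is left to the caller.
    # Note: for even-sized images the 'same' cropping can be offset by one
    # pixel relative to correlate2d's 'same' mode.
    return fftconvolve(a, b[..., ::-1, ::-1], mode='same', axes=(-2, -1))

# e.g. with the question's 4D layout (N, 2, rows, cols):
# out = batched_xcorr2d(X[:, 0], X[:, 1])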
I have a big data set representing 1.2M points in a 220-dimensional periodic space (each x ranges over (-pi, pi))... (matrix: 1.2M x 220).
I would like to calculate a histogram of distances between these points, taking periodicity into account. I have written some code in Python, but it still runs quite slowly on my test case (I am not even trying to run it on the whole set...).
Can you maybe take a look and help me with some tweaking?
Any suggestions and comments are much appreciated.
import numpy as np

# 1000x220 test set (-pi, pi)
d = np.random.random((1000, 220))*2*np.pi - np.pi

# calculating theoretical limit on the histogram range, max distance between
# two points can be pi in each dimension
m = np.zeros(np.shape(d)[1]) + np.pi
m_ = np.sqrt(np.sum(m**2))

# hist range is from 0 to mm
mm = np.floor(m_)
bins = int(mm/0.01)
m = np.zeros(bins)   # histogram accumulator (reuses the name m)

# proper calculations
import time
start_time = time.time()
for i in range(np.shape(d)[0]):
    diff = d[:-(i+1),:] - d[i+1:,:]
    diff = np.absolute(diff)
    adiff = diff - np.pi
    diff = np.pi - np.absolute(adiff)   # periodic (wrap-around) distance per dimension
    s = np.sqrt(np.einsum('ij,ij->i', diff, diff))
    m += np.histogram(s, range=(0, mm), bins=bins)[0]
print(time.time() - start_time)
I think you will see the most improvement from breaking the main loop into smaller parts: divide range(...) into a few smaller ranges and use the threading module to have a couple of threads run the loop concurrently.
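A minimal sketch of that idea (mine, not the answerer's), assuming d, mm and bins are defined as in the question's code; heavy NumPy operations release the GIL, so threads can overlap, though the actual speedup will vary:

import threading
import numpy as np

def hist_chunk(d, rows, mm, bins, out, k):
    # Accumulate the distance histogram for one chunk of outer-loop indices
    h = np.zeros(bins)
    for i in rows:
        diff = np.absolute(d[:-(i + 1), :] - d[i + 1:, :])
        diff = np.pi - np.absolute(diff - np.pi)   # periodic wrap-around
        s = np.sqrt(np.einsum('ij,ij->i', diff, diff))
        h += np.histogram(s, range=(0, mm), bins=bins)[0]
    out[k] = h

n_threads = 4
chunks = np.array_split(np.arange(d.shape[0] - 1), n_threads)
results = [None] * n_threads
threads = [threading.Thread(target=hist_chunk, args=(d, c, mm, bins, results, k))
           for k, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()
m = sum(results)   # combine the per-thread histograms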