I'm working on some map projection interface code in pyqtgraph. My image is an Aitoff-Hammer projection, with the same general shape as the following atlas example:
[image: Aitoff-Hammer map projection with grid lines]
The problem is, my image doesn't have those grid lines. I've verified that my code converting pixel values to latitude/longitude coordinates works, but I'm having trouble coming up with an approach for a custom grid. My current thought is to make a new image that contains only grid lines, then add it as a semi-opaque ImageItem. But is it possible to stack ImageItems in pyqtgraph?
As musicamante pointed out, I was able to stack ImageItems. I'll be evaluating the accuracy of the curves, but here is the relevant code:
def draw_grid(self, lon, lat, lon_int, lat_int):
    # Push NaNs (off-map pixels) out of range so they never match a grid line
    lon[np.isnan(lon)] = lon_int + 2
    lat[np.isnan(lat)] = lat_int + 2
    # A pixel lies on a grid line if its coordinate is within 0.5 deg of a
    # multiple of the grid interval (i.e. lon % lon_int <= 0.5, written out)
    lon_cond = lon - (lon_int * (lon // lon_int)) <= 0.5
    lat_cond = lat - (lat_int * (lat // lat_int)) <= 0.5
    inds = np.where(lon_cond | lat_cond)
    grid = np.zeros(self.map_data.shape, dtype=np.uint8)
    grid[inds] = 255
    grid_image = pg.ImageItem(grid, axisOrder='row-major')
    grid_image.setZValue(10)    # draw above the base map ImageItem
    grid_image.setOpacity(0.3)  # semi-transparent overlay
    self.map_plot.addItem(grid_image)
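For reference, here is a minimal standalone sketch of the stacking itself; map_data, lon, and lat stand in for the arrays described above, the 30-degree intervals are placeholders, and a recent pyqtgraph (with pg.exec()) is assumed:

import numpy as np
import pyqtgraph as pg

# Minimal sketch: a base ImageItem with the semi-transparent grid stacked on top
app = pg.mkQApp()
win = pg.GraphicsLayoutWidget()
map_plot = win.addPlot()
base = pg.ImageItem(map_data, axisOrder='row-major')  # the projected map itself
base.setZValue(0)  # below the grid overlay, which uses setZValue(10)
map_plot.addItem(base)
# draw_grid(lon, lat, 30, 30) would then add the grid ImageItem to the same plot
win.show()
pg.exec()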
I am working on a project related to charge distribution on a sphere, and I decided to simulate the problem using vpython and Coulomb's law. I ran into an issue when I created the sphere: I am trying to evenly place about 1000 points (charges) on it, and I can't seem to succeed. I have tried several ways, but I can't make the points sit on the sphere.
I defined an arbitrary value SOYDNR as a number to divide the diameter of the sphere into smaller segments. This would allow me to create smaller rings of charges and fill out the surface of the sphere with them. Then I make a list with 4 values that represent different parts of the radius at which to create the charge rings. At that point I run into a problem: I have tried calculating the ring radius at those specific heights, but I couldn't implement it. This is how it looks visually: ![Sphere with charges on the surface](https://i.stack.imgur.com/3N4x6.png) If anyone has any suggestions, I would be grateful, thanks!
SOYDNR = 10  # can be changed
SOYD = 2*radi/SOYDNR  # strips in the Y direction; initial height + SOYD until = 2*radi
theta = 0
dtheta1 = 2*pi/NCOS
y_list = np.arange(height - radi + SOYD, height, SOYD).tolist()
print(y_list)
for i in y_list:
    while Nr < NCOS and theta < 2*pi:
        # Suspect line: the ring radius should shrink near the poles, but the
        # factor is always radi here, so the points do not land on the sphere
        position = radi*vector(cos(theta), i*height/radi, sin(theta))
        points_on_sphere = points_on_sphere + [sphere(pos=position, radius=radi/50, color=vector(1, 0, 0))]
        Nr = Nr + 1
        theta = theta + dtheta1
    Nr = 0
    theta = 0
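The missing piece in the attempt above is that the ring radius has to shrink toward the poles: on a sphere of radius radi centered at y = height, the horizontal ring at height y has radius sqrt(radi**2 - (y - height)**2). A one-function sketch using the names from the snippet:

import numpy as np

def ring_radius(y, radi, height):
    # Radius of the circle where the horizontal plane at height y cuts the sphere
    return np.sqrt(radi**2 - (y - height)**2)

# each ring point would then be vector(r*cos(theta), y, r*sin(theta))
# with r = ring_radius(y, radi, height)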
I found a way to do it: it creates a bunch of spheres in the region described by an if statement. This is the code I am using for my simulation; it creates the sphere with points on it.
def SOSE(radi, number_of_charges, height):
    # Translucent host sphere centered at (0, height, 0)
    Charged_Sphere = sphere(pos=vector(0, height, 0), radius=radi, color=vector(3.5, 3.5, 3.5), opacity=0.2)
    points_on_sphere = []
    NCOS = number_of_charges
    theta = 0
    dtheta = 2*pi/NCOS
    dr = radi/60
    direcVector = vector(0, height, 0)
    while theta < 2*pi:
        # Random candidates; keep only those inside a thin shell near the surface
        posvec1 = radi*vector(1-radi*random(), 1-radi*random()/radi, 1-radi*random())
        posvec2 = radi*vector(1-radi*random(), -1+radi*random()/radi, 1-radi*random())
        if mag(posvec1) < radi and mag(posvec1) > (radi-dr):
            posvec1 = posvec1 + direcVector  # shift up to the sphere's centre height
            points_on_sphere = points_on_sphere + [sphere(pos=posvec1, radius=radi/60, color=vector(1, 0, 0))]
            theta = theta + dtheta
        if mag(posvec2) < radi and mag(posvec2) > (radi-dr):
            posvec2 = posvec2 + direcVector
            points_on_sphere = points_on_sphere + [sphere(pos=posvec2, radius=radi/60, color=vector(1, 0, 0))]
            theta = theta + dtheta
This code can be edited to add more points. I use two if statements because I want to be able to change the height at which the sphere sits; with just one statement I only see half of the sphere. :)
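As an aside, a common deterministic alternative for spreading N points almost evenly over a sphere is the Fibonacci (golden-angle) lattice; a minimal numpy sketch, independent of the rejection approach above:

import numpy as np

def fibonacci_sphere(n, radius=1.0, center=(0.0, 0.0, 0.0)):
    # n nearly-evenly spaced points via the golden-angle (Fibonacci) lattice
    i = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))  # ~2.39996 rad
    y = 1.0 - 2.0 * (i + 0.5) / n                # heights uniform in [-1, 1]
    r_xz = np.sqrt(1.0 - y * y)                  # ring radius at each height
    theta = golden_angle * i
    pts = np.stack([r_xz * np.cos(theta), y, r_xz * np.sin(theta)], axis=1)
    return radius * pts + np.asarray(center)

# e.g. 1000 charges on the sphere above:
# points = fibonacci_sphere(1000, radius=radi, center=(0, height, 0))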
To start, I'm basically trying to go from this:
To this:
Each coordinate [x, y] corresponds to a given point in the second image after a function is applied to x and y: f(x, y) = the coordinates in the second image for the value at [x, y]. The way I'm handling this at the moment is to build "map" arrays for x and y and then look up each new point in them, so mapArrayX[x] gives the new x value and mapArrayY[y] gives the new y value. The issue is that I have to iterate over the entire image (256,000 points), and that takes roughly 0.4 seconds. Is there a better way to do this?
The second issue is that after transforming the coordinates I get an image with holes in it, which looks like this:
I then make it look like the image above, without the holes, by doing this:
import time
import numpy as np
from scipy import interpolate

dewarpedImage[dewarpedImage == 0] = np.nan
x = np.arange(0, dewarpedImage.shape[1])
y = np.arange(0, dewarpedImage.shape[0])
# mask invalid values
dewarpedImage = np.ma.masked_invalid(dewarpedImage)
xx, yy = np.meshgrid(x, y)
# get only the valid values
x1 = xx[~dewarpedImage.mask]
y1 = yy[~dewarpedImage.mask]
newarr = dewarpedImage[~dewarpedImage.mask]
startTime = time.time()
dewarpedImage = interpolate.griddata((x1, y1), newarr.ravel(),
                                     (xx, yy),
                                     method='linear')
This takes roughly 3 seconds. Is there a faster way to do it? Ideally I need the whole process to go from 3+ seconds to under 1 second.
Here is my conversion function/how I generate my mapping:
import math
import numpy as np
from os import path

RANGE_BIN_SIZE = 0.39

def rangeBinToRange(rangeBin):
    return rangeBin * RANGE_BIN_SIZE

def azToDegree(azBin):
    degree = math.degrees(math.asin((azBin - 127.5) * 0.3771/(0.19812*255)))
    return degree

def makeWarpMap():
    print("making warp maps")
    xMap = np.zeros((1024, 256))
    yMap = np.zeros((1024, 256))
    for az in range(256):
        for rang in range(1024):
            azDegree = azToDegree(az)
            dist = rangeBinToRange(rang)
            # Polar -> cartesian, with x shifted so the origin sits at column 381
            x = round(dist * math.sin(math.radians(azDegree)) + 381)
            y = round(dist * math.cos(math.radians(azDegree)))
            xMap[rang][az] = x
            yMap[rang][az] = y
    np.save("warpmapX", xMap)
    np.save("warpmapY", yMap)

print(azToDegree(0))
if not path.exists("warpmapX.npy") or not path.exists("warpmapY.npy"):
    makeWarpMap()
data = np.load(filename)  # filename is defined elsewhere
xMap = np.load("warpmapX.npy")
yMap = np.load("warpmapY.npy")
dewarpedImage = np.zeros((400, 762))
print(data.shape)
for az in range(256):
    azslice = data[:, az]
    for rang in range(1024):
        intensity = azslice[rang]
        x = xMap[rang][az]
        y = yMap[rang][az]
        dewarpedImage[int(y)][int(x)] = intensity
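For what it's worth, the double loop at the end can be collapsed into a single vectorized scatter using the same precomputed maps (a sketch; it still leaves the holes, but removes the Python-level iteration):

# Vectorized equivalent of the az/rang double loop above
ys = yMap.astype(int).ravel()
xs = xMap.astype(int).ravel()
dewarpedImage = np.zeros((400, 762))
dewarpedImage[ys, xs] = data.ravel()  # where several bins collide, the last write wins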
You have holes in your converted image because your conversion does not span the entire polar image. I would recommend doing the reverse conversion: for each (X, Y) in the cartesian output image, find the corresponding point (x, y) in the polar image and take that value. That way you won't need to deal with holes at all, and it will give you a full image (it also gets rid of the 3-second interpolation step). If you provide your conversion function, I can help you do the reverse conversion.
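To make that concrete, here is a nearest-neighbour sketch of the reverse conversion, inverting the rangeBinToRange/azToDegree mapping from the question (the 400x762 output size and the 381-pixel x offset are taken from the code above):

import numpy as np

RANGE_BIN_SIZE = 0.39
H, W = 400, 762   # cartesian output size, from the question
X_OFFSET = 381    # x shift used in the forward mapping

def dewarp_reverse(data):
    # For every output (cartesian) pixel, find the polar bin it came from
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    dx = xx - X_OFFSET
    dist = np.hypot(dx, yy)                 # range in real units
    angle = np.degrees(np.arctan2(dx, yy))  # azimuth, 0 degrees along +y
    rang_bin = np.round(dist / RANGE_BIN_SIZE).astype(int)
    # Invert azToDegree: azBin = 127.5 + sin(radians(deg)) * (0.19812*255)/0.3771
    az_bin = np.round(127.5 + np.sin(np.radians(angle)) * (0.19812 * 255) / 0.3771).astype(int)
    # Drop pixels that fall outside the polar data
    valid = (rang_bin < data.shape[0]) & (az_bin >= 0) & (az_bin < data.shape[1])
    out = np.zeros((H, W), dtype=data.dtype)
    out[valid] = data[rang_bin[valid], az_bin[valid]]
    return out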
I'm currently attempting to implement this algorithm for volume rendering in Python, and am conceptually confused about their method of generating the LH histogram (see section 3.1, page 4).
I have a 3D stack of DICOM images and have calculated its gradient magnitude and the two corresponding azimuth and elevation angles (which I found out about here), as well as the second derivative.
Now, the algorithm is asking me to iterate through a set of voxels, and "track a path by integrating the gradient field in both directions...using the second order Runge-Kutta method with an integration step of one voxel".
What I don't understand is how to use the 2 angles I calculated to integrate the gradient field in said direction. I understand that you can use trilinear interpolation to get intermediate voxel values, but I don't understand how to get the voxel coordinates I want using the angles I have.
In other words, I start at a given voxel position, and want to take a 1 voxel step in the direction of the 2 angles calculated for that voxel (one in the x-y direction, the other in the z-direction). How would I take this step at these 2 angles and retrieve the new (x, y, z) voxel coordinates?
Apologies in advance, as I have a very basic background in Calc II/III, so vector fields/visualization of 3D spaces is still a little rough for me.
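For reference, my current guess at what one step would look like (which may well be wrong) is to turn the two angles into a unit direction and use the midpoint rule:

import numpy as np

def direction_from_angles(azimuth_deg, elevation_deg):
    # Unit vector from the azimuth (angle in the x-y plane) and elevation (angle out of it)
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def rk2_step(pos, sample_angles, h=1.0):
    # One second-order (midpoint) Runge-Kutta step of h voxels, where
    # sample_angles(pos) -> (azimuth_deg, elevation_deg), e.g. by trilinear interpolation
    d1 = direction_from_angles(*sample_angles(pos))
    mid = pos + 0.5 * h * d1
    d2 = direction_from_angles(*sample_angles(mid))
    return pos + h * d2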
Creating 3D stack of DICOM images:
import os
import dicom  # old pydicom API (pydicom < 1.0)
import numpy as np

def collect_data(data_path):
    print("collecting data")
    files = []  # create an empty list
    for dirName, subdirList, fileList in os.walk(data_path):
        for filename in fileList:
            if ".dcm" in filename:
                files.append(os.path.join(dirName, filename))
    # Get reference file
    ref = dicom.read_file(files[0])
    # Load dimensions based on the number of rows, columns, and slices (along the Z axis)
    pixel_dims = (int(ref.Rows), int(ref.Columns), len(files))
    # Load spacing values (in mm)
    pixel_spacings = (float(ref.PixelSpacing[0]), float(ref.PixelSpacing[1]), float(ref.SliceThickness))
    x = np.arange(0.0, (pixel_dims[0]+1)*pixel_spacings[0], pixel_spacings[0])
    y = np.arange(0.0, (pixel_dims[1]+1)*pixel_spacings[1], pixel_spacings[1])
    z = np.arange(0.0, (pixel_dims[2]+1)*pixel_spacings[2], pixel_spacings[2])
    # Row and column directional cosines
    orientation = ref.ImageOrientationPatient
    # This will become the intensity values
    dcm = np.zeros(pixel_dims, dtype=ref.pixel_array.dtype)
    origins = []
    # loop through all the DICOM files
    for filename in files:
        # read the file
        ds = dicom.read_file(filename)
        # get pixel spacing and origin information
        origins.append(ds.ImagePositionPatient)  # [0,0,0] coordinates in real 3D space (in mm)
        # store the raw image data
        dcm[:, :, files.index(filename)] = ds.pixel_array
    return dcm, origins, pixel_spacings, orientation
Calculating gradient magnitude:
from scipy.ndimage import sobel
import numpy as np

def calculate_gradient_magnitude(dcm):
    print("calculating gradient magnitude")
    gradx = np.zeros(dcm.shape)
    sobel(dcm, 0, gradx)
    grady = np.zeros(dcm.shape)
    sobel(dcm, 1, grady)
    gradz = np.zeros(dcm.shape)
    sobel(dcm, 2, gradz)
    gradient = np.sqrt(gradx**2 + grady**2 + gradz**2)
    azimuthal = np.arctan2(grady, gradx)
    # Elevation is the angle out of the x-y plane; np.arctan is unary,
    # so use arctan2 against the in-plane gradient magnitude
    elevation = np.arctan2(gradz, np.sqrt(gradx**2 + grady**2))
    azimuthal = np.degrees(azimuthal)
    elevation = np.degrees(elevation)
    return gradient, azimuthal, elevation
Converting to the patient coordinate system to get actual voxel positions:
def get_patient_position(dcm, origins, pixel_spacing, orientation):
    """
    Image Space --> Anatomical (Patient) Space is an affine transformation
    using the Image Orientation (Patient), Image Position (Patient), and
    Pixel Spacing properties from the DICOM header
    """
    print("getting patient coordinates")
    world_coordinates = np.empty((dcm.shape[0], dcm.shape[1], dcm.shape[2], 3))
    affine_matrix = np.zeros((4, 4), dtype=np.float32)
    rows = dcm.shape[0]
    cols = dcm.shape[1]
    num_slices = dcm.shape[2]
    image_orientation_x = np.array([orientation[0], orientation[1], orientation[2]]).reshape(3, 1)
    image_orientation_y = np.array([orientation[3], orientation[4], orientation[5]]).reshape(3, 1)
    pixel_spacing_x = pixel_spacing[0]
    # Construct affine matrix
    # Method from:
    # http://nipy.org/nibabel/dicom/dicom_orientation.html
    T_1 = origins[0]
    T_n = origins[num_slices-1]
    affine_matrix[0, 0] = image_orientation_y[0] * pixel_spacing[0]
    affine_matrix[0, 1] = image_orientation_x[0] * pixel_spacing[1]
    affine_matrix[0, 3] = T_1[0]
    affine_matrix[1, 0] = image_orientation_y[1] * pixel_spacing[0]
    affine_matrix[1, 1] = image_orientation_x[1] * pixel_spacing[1]
    affine_matrix[1, 3] = T_1[1]
    affine_matrix[2, 0] = image_orientation_y[2] * pixel_spacing[0]
    affine_matrix[2, 1] = image_orientation_x[2] * pixel_spacing[1]
    affine_matrix[2, 3] = T_1[2]
    affine_matrix[3, 3] = 1
    # Per-slice step along the third axis
    k1 = (T_1[0] - T_n[0]) / (1 - num_slices)
    k2 = (T_1[1] - T_n[1]) / (1 - num_slices)
    k3 = (T_1[2] - T_n[2]) / (1 - num_slices)
    affine_matrix[:3, 2] = np.array([k1, k2, k3])
    for z in range(num_slices):
        for r in range(rows):
            for c in range(cols):
                # The slice index z must enter the product (column 2 of the affine),
                # otherwise every slice gets identical coordinates
                vector = np.array([r, c, z, 1]).reshape((4, 1))
                result = np.matmul(affine_matrix, vector)
                result = np.delete(result, 3, axis=0)
                result = np.transpose(result)
                world_coordinates[r, c, z] = result
        # print "Finished slice ", str(z)
    # np.save('./data/saved/world_coordinates_3d.npy', str(world_coordinates))
    return world_coordinates
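As an aside, the triple voxel loop above is very slow in Python; the same affine can be applied to every index at once with a single matrix product (a sketch meant to replace the loop, using the names defined in the function):

# Vectorized replacement for the triple loop
r, c, z = np.mgrid[0:rows, 0:cols, 0:num_slices]
homog = np.stack([r, c, z, np.ones_like(r)], axis=-1).astype(np.float32)
world_coordinates = (homog @ affine_matrix.T)[..., :3]  # shape (rows, cols, slices, 3)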
Now I'm at the point where I want to write this function:
from scipy.ndimage import gaussian_filter
import numpy as np

def create_lh_histogram(patient_positions, dcm, magnitude, azimuthal, elevation):
    print("constructing LH histogram")
    # Get 2nd derivative (derivative of the smoothed gradient magnitude)
    second_derivative = gaussian_filter(magnitude, sigma=1, order=1)
    # Determine if voxels lie on boundary or not (thresholding)
    # Still have to code out: let's say the thresholded voxels are in
    # a numpy array called voxels
    # Iterate through all thresholded voxels and integrate gradient field in
    # both directions using 2nd-order Runge-Kutta
    vox_it = np.nditer(voxels, flags=['multi_index'])
    while not vox_it.finished:
        # ???
I'm running quite a complex code, so I won't bother with details; I've had it working before, but now I'm getting this error.
Particle is a 3D array filled with 0 or 255. I am using the scipy centre-of-mass function and then trying to turn each value into its closest integer (as I'm dealing with arrays). The error occurs on the last line... can anyone explain why this might be?
The 2nd line fills Particle.
The 3rd line deletes any surrounding particles with a different label (this is inside a for loop over all labels).
Particle = []
Particle = big_labelled_stack[x_start+20:x_stop+20, y_start+20:y_stop+20, z_start+20:z_stop+20]
Particle = np.where(Particle == i, 255, 0)
CoM = scipy.ndimage.measurements.center_of_mass(Particle)
CoM = [ (int(round(x)) for x in CoM ]  # <-- the failing line: the parentheses are unbalanced, which raises a SyntaxError
Thanks in advance. If you need more code just ask, but I don't think it will help you, and it's very messy.
################## MORE CODE
border = 30
[labelled_stack, no_of_label] = label(labelled, structure_array, output_type)
# RE-LABEL particles now no. of seeds has been reduced! LAST LABELLING
# Increase size of stack by padding the borders with zeros, so particles can be
# cut out into cube shapes even if they would otherwise lie outside the border
h, w, l = labelled.shape
big_labelled_stack = np.zeros(shape=(h+60, w+60, l+60), dtype=np.uint32)
# Creates an empty border around labelled_stack full of zeros of size border
if (no_of_label > 0):  # Small sample may return no particles, so this stage is not necessary
    info = np.zeros(shape=(no_of_label, 19))  # Creates array to store coordinates of particles
    for i in np.arange(1, no_of_label, 1):
        coordinates = find_objects(labelled_stack == i)[0]  # Find coordinates of label i
        x_start = int(coordinates[0].start)
        x_stop = int(coordinates[0].stop)
        y_start = int(coordinates[1].start)
        y_stop = int(coordinates[1].stop)
        z_start = int(coordinates[2].start)
        z_stop = int(coordinates[2].stop)
        dx = (x_stop - x_start)
        dy = (y_stop - y_start)
        dz = (z_stop - z_start)
        Particle = np.zeros(shape=(dy, dx, dz), dtype=np.uint16)
        Particle = big_labelled_stack[x_start+30:x_start+dx+30, y_start+30:y_start+dy+30, z_start+30:z_start+dz+30]
        Particle = np.where(Particle == i, 255, 0)
        big_labelled_stack[border:h+border, border:w+border, border:l+border] = labelled_stack
        big_labelled_stack = np.where(big_labelled_stack == i, 255, 0)
        CoM_big_stack = scipy.ndimage.measurements.center_of_mass(big_labelled_stack)
        C = np.asarray(CoM_big_stack) - border
        if dx > dy:
            b = dx
        else:  # Finds the largest of delta_x,y,z and saves it as b, so that 'Cubic_Particle' has size 2bx2bx2b (cubic box)
            b = dy
        if dz > b:
            b = dz
        CoM = scipy.ndimage.measurements.center_of_mass(Particle)
        CoM = [(int(round(x))) for x in CoM]
        Cubic_Particle = np.zeros(shape=(2*b, 2*b, 2*b))
        Cubic_Particle[(b-CoM[0]):(b+dx-CoM[0]), (b-CoM[1]):(b+dy-CoM[1]), (b-CoM[2]):(b+dz-CoM[2])] = Particle
        volume = Cubic_Particle.size  # Gives volume of the box in voxels
        info[i-1, :] = [C[0], C[1], C[2], i, C[0]-b, C[1]-b, C[2]-b, C[0]+b, C[1]+b, C[2]+b, volume, 0, 0, 0, 0, 0, 0, 0, 0]  # Fills an array with label No., size of box, and coords
else:
    print('No particles found, try increasing the sample size')
    info = []
Ok, so I have a stack full of labelled particles, and there are two things I am trying to do. First, find the centre of mass of each particle with respect to the labelled stack, which is what CoM_big_stack (and C) does, storing the coordinates in the info array. Second, create a cubic box around each particle with its centre of mass at the centre (that is the CoM variable): I use scipy's find_objects to locate a particle, use those coordinates to create a non-cubic box around it, and find its centre of mass. I then take the longest dimension of the box, call it b, create a cubic box of size 2b, and fill it with the particle in the right position.
Sorry this code is a mess; I am very new to Python.
I have an image processing problem I'm currently solving in python, using numpy and scipy. Briefly, I have an image that I want to apply many local contractions to. My prototype code is working, and the final images look great. However, processing time has become a serious bottleneck in our application. Can you help me speed up my image processing code?
I've tried to boil down our code to the 'cartoon' version below. Profiling suggests that I'm spending most of my time on interpolation. Are there obvious ways to speed up execution?
import cProfile, pstats
import numpy
from scipy.ndimage import interpolation

def get_centered_subimage(center_point, window_size, image):
    x, y = numpy.round(center_point).astype(int)
    xSl = slice(max(x-window_size-1, 0), x+window_size+2)
    ySl = slice(max(y-window_size-1, 0), y+window_size+2)
    subimage = image[xSl, ySl]
    # Sub-pixel shift so the window is centered exactly on center_point
    interpolation.shift(subimage, shift=(x, y)-center_point, output=subimage)
    return subimage[1:-1, 1:-1]

"""In real life, this is experimental data"""
im = numpy.zeros((1000, 1000), dtype=float)
"""In real life, this mask is a non-zero pattern"""
window_radius = 10
mask = numpy.zeros((2*window_radius+1, 2*window_radius+1), dtype=float)
"""The x, y coordinates in the output image"""
new_grid_x = numpy.linspace(0, im.shape[0]-1, 2*im.shape[0])
new_grid_y = numpy.linspace(0, im.shape[1]-1, 2*im.shape[1])
"""The grid we'll end up interpolating onto"""
grid_step_x = new_grid_x[1] - new_grid_x[0]
grid_step_y = new_grid_y[1] - new_grid_y[0]
subgrid_radius = numpy.floor(
    (-1 + window_radius * 0.5 / grid_step_x,
     -1 + window_radius * 0.5 / grid_step_y)).astype(int)  # int, used in slices below
subgrid = (
    window_radius + 2 * grid_step_x * numpy.arange(
        -subgrid_radius[0], subgrid_radius[0] + 1),
    window_radius + 2 * grid_step_y * numpy.arange(
        -subgrid_radius[1], subgrid_radius[1] + 1))
subgrid_points = ((2*subgrid_radius[0] + 1) *
                  (2*subgrid_radius[1] + 1))
"""The coordinates of the set of spots we want to contract. In real
life, this set is non-random:"""
numpy.random.seed(0)
num_points = 10000
center_points = numpy.random.random(2*num_points).reshape(num_points, 2)
center_points[:, 0] *= im.shape[0]
center_points[:, 1] *= im.shape[1]
"""The output image"""
final_image = numpy.zeros(
    (new_grid_x.shape[0], new_grid_y.shape[0]), dtype=float)  # numpy.float is deprecated; use float

def profile_me():
    for m, cp in enumerate(center_points):
        """Take an image centered on each illumination point"""
        spot_image = get_centered_subimage(
            center_point=cp, window_size=window_radius, image=im)
        if spot_image.shape != (2*window_radius+1, 2*window_radius+1):
            continue  # Skip to the next spot
        """Mask the image"""
        masked_image = mask * spot_image
        """Resample the image"""
        nearest_grid_index = numpy.round(
            (cp - (new_grid_x[0], new_grid_y[0])) /
            (grid_step_x, grid_step_y)).astype(int)  # int, so it can index slices below
        nearest_grid_point = (
            (new_grid_x[0], new_grid_y[0]) +
            (grid_step_x, grid_step_y) * nearest_grid_index)
        new_coordinates = numpy.meshgrid(
            subgrid[0] + 2 * (nearest_grid_point[0] - cp[0]),
            subgrid[1] + 2 * (nearest_grid_point[1] - cp[1]))
        resampled_image = interpolation.map_coordinates(
            masked_image,
            (new_coordinates[0].reshape(subgrid_points),
             new_coordinates[1].reshape(subgrid_points))
        ).reshape(2*subgrid_radius[1]+1,
                  2*subgrid_radius[0]+1).T
        """Add the recentered image back to the scan grid"""
        final_image[
            nearest_grid_index[0]-subgrid_radius[0]:
            nearest_grid_index[0]+subgrid_radius[0]+1,
            nearest_grid_index[1]-subgrid_radius[1]:
            nearest_grid_index[1]+subgrid_radius[1]+1,
        ] += resampled_image

cProfile.run('profile_me()', 'profile_results')
p = pstats.Stats('profile_results')
p.strip_dirs().sort_stats('cumulative').print_stats(10)
Vague explanation of what the code does:
We start with a pixellated 2D image, and a set of arbitrary (x, y) points in our image that don't generally fall on an integer grid. For each (x, y) point, I want to multiply the image by a small mask centered precisely on that point. Next we contract/expand the masked region by a finite amount, before finally adding this processed sub-image to a final image, which may not have the same pixel size as the original image. (Not my finest explanation. Ah well).
I'm pretty sure that, as you said, the bulk of the calculation time happens in interpolation.map_coordinates(…), which gets called once for every iteration on center_points, here 10,000 times. Generally, when working with the numpy/scipy stack, you want the repetitive task over a large array to happen inside native NumPy/SciPy functions -- i.e. in a C loop over homogeneous data -- as opposed to explicitly in Python.
One strategy that might speed up the interpolation, but that will also increase the amount of memory used, is:
First, fetch all the subimages (here named masked_image) into a 3-dimensional array (window_radius x window_radius x center_points.size).
Make a ufunc (read that, it's useful) that wraps the work that has to be done on each subimage, using numpy.frompyfunc; it should return another 3-dimensional array (subgrid_radius[0] x subgrid_radius[1] x center_points.size). In short, this creates a vectorized version of the Python function that can be broadcast element-wise over an array.
Build the final image by summing over the third dimension; a toy sketch of the frompyfunc step follows this list.
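To illustrate the second step, here is a toy sketch of numpy.frompyfunc on an object array of subimages; process_subimage is a stand-in for the real per-subimage work (masking and resampling):

import numpy as np

def process_subimage(sub):
    # Stand-in for the real per-subimage work
    return sub * 2.0

vectorized = np.frompyfunc(process_subimage, 1, 1)  # 1 input, 1 output
subimages = np.empty(4, dtype=object)               # one subimage per slot
for k in range(4):
    subimages[k] = np.full((3, 3), float(k))
processed = vectorized(subimages)   # applies process_subimage element-wise
final = np.sum(processed, axis=0)   # "summing over the third dimension"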
Hope that gets you closer to your goals!