I have a numpy array for an image that I read in from a FITS file. I rotated it by N degrees using scipy.ndimage.interpolation.rotate. Then I want to figure out where some point (x,y) in the original non-rotated frame ends up in the rotated image -- i.e., what are the rotated frame coordinates (x',y')?
This should be a very simple rotation-matrix problem, but when I apply the usual mathematical or programming rotation equations, the new (x', y') do not end up where the original point was. I suspect this has something to do with needing a translation matrix as well, because the scipy rotate function is based on the origin (0, 0) rather than the actual center of the image array.
Can someone please tell me how to get the rotated frame (x',y')? As an example, you could use
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
data_rot = rotate(data_orig,66) # data array
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
P.S. The following two related questions' answers do not help me:
Find new coordinates of a point after rotation
New coordinates after image rotation using scipy.ndimage.rotate
As usual with rotations, one needs to translate to the origin, then rotate, then translate back. Here, we can take the center of the image as origin.
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
def rot(image, xy, angle):
    # rotate about the image centre: shift the point to centre-origin
    # coordinates, rotate it, then shift it into the new image's frame
    im_rot = rotate(image, angle)
    org_center = (np.array(image.shape[:2][::-1]) - 1) / 2.
    rot_center = (np.array(im_rot.shape[:2][::-1]) - 1) / 2.
    org = xy - org_center
    a = np.deg2rad(angle)
    # scipy rotates the displayed image counter-clockwise; because the array's
    # y-axis points down, the point transform uses the clockwise matrix
    new = np.array([org[0]*np.cos(a) + org[1]*np.sin(a),
                    -org[0]*np.sin(a) + org[1]*np.cos(a)])
    return im_rot, new + rot_center
fig,axes = plt.subplots(2,2)
axes[0,0].imshow(data_orig)
axes[0,0].scatter(x0,y0,c="r" )
axes[0,0].set_title("original")
for i, angle in enumerate([66, -32, 90]):
    data_rot, (x1, y1) = rot(data_orig, np.array([x0, y0]), angle)
    axes.flatten()[i+1].imshow(data_rot)
    axes.flatten()[i+1].scatter(x1, y1, c="r")
    axes.flatten()[i+1].set_title("Rotation: {}deg".format(angle))
plt.show()
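As a quick sanity check (my own addition, not part of the original answer), a rotation by 0 degrees should map the point onto itself:

_, (xc, yc) = rot(data_orig, np.array([x0, y0]), 0)
assert np.allclose((xc, yc), (x0, y0))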
I've created some random points and stored them in a list as (x, y) pairs. Then I plotted them and saved the figure as an image.
I'm able to draw a line from one point to another with this code:
cv2.line(img=result, pt1=(x1, y1), pt2=(x2, y2), color=(0, 255, 255), thickness=5)  # pt1/pt2 are placeholder pixel coordinates
I have a problem there. If I use plt.show() for the graph, I know all the points' coordinates from the list. But when I save the graph as an image and display it with the cv2 library, all the point coordinates change.
How can I find these points' coordinates on the image?
For example: on the graph you can see the point (1, 4). If I save the graph as an image, this point ends up at coordinates (104, 305) in the image.
import numpy as np
import random
import matplotlib.pyplot as plt
import cv2
points = np.random.randint(0, 9, size=(18,2))
print(points)
plt.plot(points[:,0], points[:,1], '.',color='k')
plt.savefig("graphic.png",bbox_inches="tight")
result = cv2.imread("graphic.png")
cv2.imshow("Graphic", result)
cv2.waitKey(0)  # keep the window open until a key is pressed
I think you are confusing yourself.
Your x, y coordinates start at the bottom-left corner of the plot, list the x coordinate first, and span a data range only about 9 units wide.
OpenCV stores points relative to the top-left corner, indexes the underlying array with the y coordinate (row) first, and refers to an image hundreds of pixels wide.
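If it helps, here is a minimal sketch (my own, not part of the original post) of mapping a data coordinate such as (1, 4) into the pixel coordinates of the saved image. It assumes you save at the figure's own dpi and without bbox_inches="tight", which crops the figure and shifts everything:

import numpy as np
import matplotlib.pyplot as plt

points = np.random.randint(0, 9, size=(18, 2))
fig, ax = plt.subplots()
ax.plot(points[:, 0], points[:, 1], '.', color='k')
fig.canvas.draw()  # finalise the data->pixel transform

# data coordinates -> display (pixel) coordinates, origin at bottom-left
px, py = ax.transData.transform((1, 4))

# OpenCV images have their origin at the top-left, so flip the y coordinate
height_px = fig.canvas.get_width_height()[1]
cv_point = (int(round(px)), int(round(height_px - py)))
print(cv_point)

fig.savefig("graphic.png", dpi=fig.dpi)  # note: no bbox_inches="tight"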
For a robotics project, I've used ultrasound as vision. From edge-detection algorithms I've generated a binary numpy array. Now I'm not sure what the most cost-efficient way of calculating the distance to the object is. Say I wanted to calculate the shortest distance from a 1 to the top-left corner. Would it be possible to use np.where and dst = numpy.linalg.norm()?
import numpy as np
from scipy import ndimage
from PIL import Image
# "result" is the raw ultrasound image array, defined earlier in my script
Max_filtrated = np.where(result > np.amax(result)*0.8, 0, result)
Band_filtrated = np.where(Max_filtrated > np.amax(Max_filtrated)*0.11, 1, 0)

####### Define connected region and remove noise ########
mask = Band_filtrated > Band_filtrated.mean()
label_im, nb_labels = ndimage.label(mask)
sizes = ndimage.sum(mask, label_im, range(nb_labels + 1))
mean_vals = ndimage.sum(result, label_im, range(1, nb_labels + 1))  # "im" in my original was undefined here; presumably the input array
mask_size = sizes < 500
remove_pixel = mask_size[label_im]
label_im[remove_pixel] = 0
Ferdig = np.where(label_im > np.amax(label_im)*0.1, 1, 0)
#########################################################
Thanks
I tried doing this a different way, using the same image as I trimmed for my other answer. This time I calculate each pixel's squared distance from the origin, then make all black pixels in the input image ineligible to be the nearest by setting their distance to a huge number. Then I find the smallest number remaining in the array.
#!/usr/bin/env python3
import sys
import numpy as np
from PIL import Image
# Open image in greyscale and make into Numpy array
im = Image.open('curve.png').convert('L')
na = np.array(im)
# Make grid where every pixel is the squared distance from origin - no need to sqrt()
# This could be done outside main loop, btw
x,y = np.indices(na.shape)
dist = x*x + y*y
# Make all black pixels ineligible to be nearest
dist[np.where(na<128)] = sys.maxsize
# Find cell with smallest value, i.e. smallest distance
resultY, resultX = np.unravel_index(dist.argmin(), dist.shape)
print(f'Coordinates: [{resultY},{resultX}]')
Sample Output
Coordinates: [159,248]
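Incidentally, the np.where / np.linalg.norm combination you asked about also works; here is a minimal sketch on a made-up binary array:

import numpy as np

arr = np.zeros((10, 10), dtype=int)   # hypothetical binary edge map
arr[3, 4] = arr[7, 2] = 1

ys, xs = np.where(arr == 1)           # row/col coordinates of all ones
pts = np.column_stack((ys, xs))
dists = np.linalg.norm(pts, axis=1)   # Euclidean distance from (0, 0)
print(pts[dists.argmin()], dists.min())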
Keywords: Python, image processing, nearest white pixel, nearest black pixel, nearest foreground pixel, nearest background pixel, Numpy
I trimmed your image as follows - please don't post images with axes and labels if folks need to process them!
I then leverage SciPy's cdist() function. So: first generate a list of all the white pixels in the image, then calculate the distance from the origin at top-left to each pixel in the list, then find the minimum one.
#!/usr/bin/env python3
import numpy as np
from PIL import Image
from scipy.spatial.distance import cdist
# Open image in greyscale and make into Numpy array
im = Image.open('curve.png').convert('L')
na = np.array(im)
# Get coordinates of white pixels
whites = np.where(na>127)
# Get distance from [0,0] to each white pixel
distances = cdist([(0,0)],np.transpose(whites))
# Index of nearest
ind = distances.argmin()
# Distance of nearest
d = distances[0,ind]
# Coords of nearest (whites[0] holds row indices, whites[1] column indices)
x, y = whites[0][ind], whites[1][ind]
print(f'distance [{x},{y}] = {d}')
Sample Output
distance [159,248] = 294.5929394944828
If I draw a red circle radius=294 centred on the origin and a blue circle centred on those x,y coordinates:
Keywords: Python, image processing, nearest white pixel, nearest black pixel, nearest foreground pixel, nearest background pixel, Numpy, cdist()
I have an image that is rotated by 30 degrees.
However, I need to rotate the bounding box too. The coordinates of the bounding box are [xmin, ymin, xmax, ymax] = [101, 27, 270, 388], where (xmin, ymin) is the top-left corner and (xmax, ymax) is the bottom-right corner.
Now I wanted to rotate these corner points by running them through a rotation matrix:
import numpy as np

theta = np.radians(30)
c, s = np.cos(theta), np.sin(theta)
r = np.array(((c, -s), (s, c)))
Using
labels = np.array([[101,270],[27,388]])
print(np.dot(r,labels))
But this throws incorrect values. If I am not mistaken, the linear transformation should be correct; did I overlook something or make a mistake somewhere? Thanks for the help.
Option 1
you can simply use the angle parameter of patches.Rectangle, which rotates the rectangle anti-clockwise about its xy anchor point:
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from PIL import Image
import numpy as np
from skimage import data, io, filters
im = np.array(data.coins(), dtype=np.uint8)
# Create figure and axes
fig,ax = plt.subplots(1)
# Display the image
ax.imshow(im)
# Create a Rectangle patch
rect = patches.Rectangle((50,100),100,80,linewidth=1,edgecolor='r',facecolor='none')
rect_2 = patches.Rectangle((50,100),100,80,linewidth=1,edgecolor='r',facecolor='none', angle=30)
# Add the patch to the Axes
ax.add_patch(rect)
ax.add_patch(rect_2)
plt.show()
Option 2
Or, if you want to do this in a mathematical way, use a rotation matrix. I'll just show you how to calculate the corner points of your rotated box.
Previous corner points
First I set up the unrotated points:
x = [101, 101, 270, 270]
y = [27, 388, 27, 388]
Now we create the rotation matrix
rot_mat = np.array([[np.cos(np.pi/6), -np.sin(np.pi/6)],
                    [np.sin(np.pi/6),  np.cos(np.pi/6)]])
Now we centre x and y by shifting them, so that the centre of the rectangle coincides with the origin:
x_cen = np.array(x) - np.mean(x)
y_cen = np.array(y) - np.mean(y)
Apply the rotation matrix to the centralized arrays and shift back
x_rot = np.dot(rot_mat, np.array((x_cen, y_cen)))[0, :] + np.mean(x)
y_rot = np.dot(rot_mat, np.array((x_cen, y_cen)))[1, :] + np.mean(y)  # shift back with the y mean, not the x mean
Rotated corner points:
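Putting the Option 2 pieces together, a runnable version (my own consolidation, rotating the corners 30 degrees about the box centre):

import numpy as np

x = np.array([101, 101, 270, 270])
y = np.array([27, 388, 27, 388])

theta = np.pi / 6  # 30 degrees
rot_mat = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])

# shift so the box centre sits at the origin, rotate, shift back
x_cen, y_cen = x - x.mean(), y - y.mean()
x_rot, y_rot = rot_mat @ np.array((x_cen, y_cen))
x_rot += x.mean()
y_rot += y.mean()
print(np.column_stack((x_rot, y_rot)))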
I'm currently using skimage.measure.find_contours() to find contours on a surface. Now that I've found the contours I need to able to find the area enclosed within them.
When all of the vertices are within the data set this is fine as a have a fully enclosed polygon.
However, how do I ensure the polygon is fully enclosed if the contour breaches the edge of the surface, either at an edge or at a corner? When this happens, I would like to use the edge of the surface as additional vertices to close off the polygon. For example, in the following image, with contours shown, you can see that the contours end at the edge of the image; how do I close them up? Also, in the case of the brown contour, which is just a single line, I don't think I want an area returned; how would I single out this case?
I know I can check for enclosed contours/polygons by checking if the last vertices of the polygon is the same as the first.
I have code for calculating the area inside a polygon, taken from here
def find_area(array):
    # shoelace formula; the polygon should be closed (first vertex repeated at the end)
    a = 0
    ox, oy = array[0]
    for x, y in array[1:]:
        a += (x*oy - y*ox)
        ox, oy = x, y
    return -a/2
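As a quick check of find_area (my own example), a closed unit square traversed counter-clockwise should give 1.0:

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(find_area(square))  # 1.0; the sign flips if the winding direction is reversed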
I just need help in closing off the polygons, and in checking for the different cases that might occur.
Thanks
Update:
After applying the solution suggested by @soupault I have this code:
import numpy as np
import matplotlib.pyplot as plt
from skimage import measure
# Construct some test data
x, y = np.ogrid[-np.pi:np.pi:100j, -np.pi:np.pi:100j]
r = np.sin(np.exp((np.sin(x)**3 + np.cos(y)**2)))
# Coordinates of point of interest
pt = [(49,75)]
# Apply thresholding to the surface
threshold = 0.8
blobs = r > threshold
# Make a labelled image based on the thresholding regions
blobs_labels = measure.label(blobs, background = 0)
# Show the thresholded regions
# Show the thresholded regions
plt.figure()
plt.imshow(blobs_labels, cmap='nipy_spectral')  # 'spectral' was renamed in newer Matplotlib
# Apply regionprops to characterise each of the regions
props = measure.regionprops(blobs_labels, intensity_image=r)
# Loop through each region in regionprops and check whether the point of
# interest lies in that region. If so, plot the region and print its area.
plt.figure()
plt.imshow(r, cmap='Greys')
plt.plot(pt[0][0], pt[0][1], 'rx')
for prop in props:
    coords = prop.coords
    if np.sum(np.all(coords[:, [1, 0]] == pt[0], axis=1)):
        plt.plot(coords[:, 1], coords[:, 0], 'r.')
        print(prop.area)
This solution assumes that each pixel is 1x1 in size. In my real data this isn't the case, so I have also applied the following function, which linearly interpolates the data onto an integer grid. I believe you could apply a similar approach to make each pixel smaller and increase the resolution of the data.
import numpy as np
from scipy import interpolate
def interpolate_patch(x, y, patch):
    # note: interpolate.interp2d is deprecated (and removed in recent SciPy);
    # scipy.interpolate.RegularGridInterpolator is the modern replacement
    x_interp = np.arange(np.ceil(x[0]), x[-1], 1)
    y_interp = np.arange(np.ceil(y[0]), y[-1], 1)
    f = interpolate.interp2d(x, y, patch, kind='linear')
    patch_interp = f(x_interp, y_interp)
    return x_interp, y_interp, patch_interp
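A hypothetical usage sketch (the grid and patch values here are made up):

import numpy as np
from scipy import interpolate

x = np.linspace(0, 9, 10)        # original sample positions
y = np.linspace(0, 9, 10)
patch = np.random.rand(10, 10)   # surface values, shape (len(y), len(x))
x_i, y_i, patch_i = interpolate_patch(x, y, patch)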
If you need to measure the properties of different regions, it is natural to start with finding the regions (not the contours).
In this case the algorithm is the following:
1. Prepare a labelled image:
1.a Either fill the areas between different contour lines with different colors;
1.b Or apply some image-thresholding function and then run skimage.measure.label (http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.label);
2. Execute regionprops using that labelled image as input (http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops);
3. Iterate over the regions in regionprops and calculate the desired parameters (area, perimeter, etc.).
Once you have identified the regions in your image via regionprops, you can call .coords for each of them to get the pixels enclosed by the contour; a minimal sketch follows.
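A minimal sketch of that pipeline (the thresholded random surface and all names are my own):

import numpy as np
from skimage import measure

surface = np.random.rand(100, 100)            # placeholder surface data
labelled = measure.label(surface > 0.8)       # step 1.b: threshold, then label
for region in measure.regionprops(labelled):  # steps 2 and 3
    print(region.label, region.area, region.perimeter)
    coords = region.coords                    # pixel coordinates of the region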
In case someone needs to close open contours along the image edges (and turn them into a polygon), here is a way:
import shapely.geometry as sgeo
import shapely.ops as sops
def close_contour_with_image_edge(contour, image_shape):
    """
    This function uses shapely because it is the easiest way to do this.
    :param contour: contour generated by skimage.measure.find_contours()
    :param image_shape: tuple (rows, cols), the standard return of numpy's shape
    :return: a shapely Polygon closed along the image edge
    """
    # make the contour a linestring
    contour_line = sgeo.LineString(contour)
    # make the image box a linestring
    box_rows, box_cols = image_shape[0], image_shape[1]
    img_box = sgeo.LineString(coordinates=(
        (0, 0),
        (0, box_cols-1),
        (box_rows-1, box_cols-1),
        (box_rows-1, 0),
        (0, 0)
    ))
    # intersect the box with the non-closed contour and take the shortest
    # box segment that touches both contour ends
    edge_points = img_box.intersection(contour_line)
    edge_parts = sops.split(img_box, edge_points)
    edge_parts = list(part for part in edge_parts.geoms
                      if part.touches(edge_points.geoms[0]) and part.touches(edge_points.geoms[1]))
    edge_parts.sort(reverse=False, key=lambda x: x.length)
    contour_edge = edge_parts[0]
    # weld the edge segment onto the contour
    contour_line = contour_line.union(contour_edge)
    contour_line = sops.linemerge(contour_line)
    contour_polygon = sgeo.Polygon(contour_line.coords)
    return contour_polygon
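A hypothetical usage sketch (the synthetic image is mine; the bright blob touches the top edge, so find_contours leaves that contour open, with its endpoints exactly on the image border):

import numpy as np
from skimage import measure

img = np.zeros((50, 50))
img[:20, 10:30] = 1.0   # blob touching the top edge

contours = measure.find_contours(img, 0.5)
polygon = close_contour_with_image_edge(contours[0], img.shape)
print(polygon.area)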
I have a closed line made of raster cells, of which I know the indexes (the row and column of each cell, stored in a list). The list looks like -
I would like to get the indexes of the cells within this closed line and store them in a separate list. I want to do this in python. Here is an image to be more clear: Raster boundary line
One way to approach this is to implement your own (naive) algorithm, which was my first idea. On the other hand, why reinvent the wheel:
One can easily see that the problem can be interpreted as a black-and-white (raster/pixel) image: the outer and inner areas form the background (black), while the border is a closed (white) loop. (Obviously the colors could also be switched, but I will use white on black for now.) As it happens, there are some fairly sophisticated image-processing libraries for Python, namely skimage, ndimage and mahotas.
I'm no expert, but I think skimage.draw.polygon and skimage.draw.polygon_perimeter are the easiest way to solve your problem.
My experimentation yielded the following:
import matplotlib.pyplot as plt
import numpy as np
from skimage.draw import polygon, polygon_perimeter
from skimage.measure import label, regionprops
# some test data
# I used the format that your input data is in
# These are 4+99*4 points describing the border of a 99*99 square
border_points = (
[[100,100]] +
[[100,100+i] for i in range(1,100)] +
[[100,200]] +
[[100+i,200] for i in range(1,100)] +
[[200,200]] +
[[200,200-i] for i in range(1,100)] +
[[200,100]] +
[[200-i,100] for i in range(1,100)]
)
# convert to numpy arrays which hold the x/y coords for all points
# repeat first point at the end to close polygon.
border_points_x = np.array( [p[0] for p in border_points] + [border_points[0][0]] )
border_points_y = np.array( [p[1] for p in border_points] + [border_points[0][1]] )
# empty (=black) 300x300 black-and-white image
image = np.zeros((300, 300))
# polygon() calculates the indices of a filled polygon
# one would expect this to be inner+border but apparently it is inner+border/2
# probably some kind of "include only the left/top half"
filled_rr, filled_cc = polygon(border_points_y, border_points_x)
# set the image to white at these points
image[filled_rr, filled_cc] = 1
# polygon_perimeter() calculates the indices of a polygon perimiter (i.e. border)
border_rr, border_cc = polygon_perimeter(border_points_y, border_points_x)
# exclude border, by setting it to black
image[border_rr, border_cc] = 0
# label() detects connected patches of the same color and enumerates them
# the resulting image has each of those regions filled with its index
label_img, num_regions = label(image, background=0, return_num=True)
# regionprops() takes a labeled image and computes some measures for each region
regions = regionprops(label_img)
inner_region = regions[0]  # the only region left: the square's interior
print("area", inner_region.area)
# expecting 9801 = 99*99 for the inner area
# this is what you want, the coords of all inner points
inner_region.coords
# plot it
fig, ax = plt.subplots()
ax.imshow(image, cmap=plt.cm.gray)
plt.show()