Find point coordinates on image - python

I've created some random points and stored them in a list as doubles. Then I've plotted them and saved the plot as an image.
I'm able to draw a line from one point to another with this code:
cv2.line(img=result,pt1=,pt2=,color=(0,255,255),thickness=5)
The problem is this: when I display the plot with plt.show(), I have all the points' coordinates in my list. But when I save the plot as an image and display it with the cv2 library, all the points' coordinates change.
How can I find these points' coordinates on the image?
For example: on the plot you can see the point (1,4). If I save the plot as an image, that point ends up at coordinates (104, 305) in the image.
import numpy as np
import random
import matplotlib.pyplot as plt
import cv2
points = np.random.randint(0, 9, size=(18,2))
print(points)
plt.plot(points[:,0], points[:,1], '.',color='k')
plt.savefig("graphic.png",bbox_inches="tight")
result = cv2.imread("graphic.png")
cv2.imshow("Graphic", result)
cv2.waitKey(0)

I think you are confusing yourself.
Your x,y values are data coordinates: they start at the bottom-left corner of the plot, put the x coordinate first, and span a range of only about 9 units.
OpenCV stores pixels relative to the top-left corner, puts the y coordinate (row) first, and refers to an image hundreds of pixels wide.
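If you want to work in image pixels, one option is to ask matplotlib where it drew each point. Here is a minimal sketch, assuming the figure is saved without bbox_inches="tight" so the saved image has the same pixel size as the figure; ax.transData converts data coordinates to display pixels (origin at the bottom-left), and the y value is then flipped for OpenCV's top-left origin:
import numpy as np
import matplotlib.pyplot as plt
import cv2
points = np.random.randint(0, 9, size=(18, 2))
fig, ax = plt.subplots()
ax.plot(points[:, 0], points[:, 1], '.', color='k')
fig.canvas.draw()                        # finalize the layout so the transforms are valid
fig.savefig("graphic.png", dpi=fig.dpi)  # no bbox_inches="tight", so sizes match
pix = ax.transData.transform(points)     # data -> display pixels, origin bottom-left
height = int(round(fig.get_size_inches()[1] * fig.dpi))
pix[:, 1] = height - pix[:, 1]           # flip y for OpenCV's top-left origin
pix = pix.round().astype(int)
result = cv2.imread("graphic.png")
# draw a line between the first two plotted points, now in image coordinates
cv2.line(result, pt1=tuple(map(int, pix[0])), pt2=tuple(map(int, pix[1])), color=(0, 255, 255), thickness=5)
cv2.imshow("Graphic", result)
cv2.waitKey(0)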

Related

Find new Polygon Coordinates when I already have Old Image Size New Image Size and polygon Coordinates

I want to resize a polygon. I already have its old coordinates as [(100,200),(200,300),(300,400),(50,60),(90,100),(400,300)], the old image size is 1980x1080, and the new image size is 640x480. How can I get the new coordinates of the polygon?
Multiply the x-coordinates by 640/1980 and the y-coordinates by 480/1080.
Since you tagged numpy:
import numpy as np
old_polygon = np.array([(100,200),(200,300),(300,400),(50,60),(90,100),(400,300)])
new_polygon = old_polygon * (np.array([640, 480]) / np.array([1980, 1080]))
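If you need integer pixel coordinates afterwards (an assumption; it depends on what consumes the polygon), you can round the result:
new_polygon = new_polygon.round().astype(int)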

How do I get pixel information from a FITs image in Python?

I am trying to find the intensities of given pixels in a FITs image in Python. The image is black and white so I'm only looking for the values of the pixels.
The code I'm using is:
import matplotlib.pyplot as plt
import astropy
from astropy.io import fits
from astropy.utils.data import get_pkg_data_filename
image_file = get_pkg_data_filename('jet.fits')
image_data = fits.getdata(image_file,ext=0)
image = fits.open('jet.fits')
image.info()
image_data[400][500]  # the 400 being the x coordinate of the pixel and the 500 being the y coordinate of the pixel
The last line gives me an output which I assume is the value of the pixel; however, I get a value of around 109 instead of a value around 0, even though the pixel in the image is black or very close to it.
I have tried taking (0,0) as both the upper left corner of the picture and the lower left corner, and neither gives 0.
I have also tried using PIL and skimage to get the value of the pixel, but both result in a "cannot find loader for this fits file" error when I try to open the image.
Any suggestions on how I can get the pixel value?
(0,0) might be the position of the lower left corner, and the x and y axes in image_data are reversed (image_data[y][x]). See: https://docs.astropy.org/en/stable/io/fits/usage/image.html
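A minimal sketch of both fixes, assuming the same 'jet.fits' file from the question (index the array as [row, column], and flip the row if your viewer puts (0,0) at the bottom-left):
from astropy.io import fits
image_data = fits.getdata('jet.fits', ext=0)  # file name taken from the question
x, y = 400, 500
value = image_data[y, x]                       # numpy indexes [row, column], i.e. [y, x]
value_flipped = image_data[image_data.shape[0] - 1 - y, x]  # if (0,0) is the bottom-left corner
print(value, value_flipped)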

Python Basemap Coordinates

I am using Python 3.6 to open a shapefile of the Amazon River on a basemap. However, I am confused about how coordinates work in Python. I looked up the coordinates of the Amazon River and found them to be lon, lat = -55.126648, -2.163106. But to open my map I need the lat/lon values of the corners, which I am not sure how to get.
Here is my code so far:
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
map = Basemap(projection='tmerc',
              lon_0=180,
              lat_0=0,
              resolution='l')
map.drawmapboundary(fill_color='aqua')
map.fillcontinents(color='#ddaa66', lake_color='aqua')
map.drawcoastlines()
map.readshapefile('filename', 'Amazon')
plt.show()
Here is the error message I get when I try to run it:
ValueError: must either specify lat/lon values of corners
(llcrnrlon,llcrnrlat,ucrnrlon,urcrnrlat) in degrees or width and height in meters
When creating your map (map = Basemap(...)), you need to specify those values. They are the lower left corner longitude, lower left corner latitude, upper right corner longitude, and upper right corner latitude. These define the extents of the map. You could just plot the whole earth, then look at the region you want and pick the points off of it for your new corners.
The best method for this type of point plotting is to create your own corners by 'zooming out' from the point. This means you'll need to specify llcrnrlat (lower left corner latitude), etc., as such:
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

my_coords = [38.9719980, -76.9219820]  # [latitude, longitude]
# How much to zoom out from the coordinates (in degrees)
zoom_scale = 1
# Set up the bounding box for the zoom and the bounds of the map
bbox = [my_coords[0]-zoom_scale, my_coords[0]+zoom_scale,
        my_coords[1]-zoom_scale, my_coords[1]+zoom_scale]
plt.figure(figsize=(12, 6))
# Define the projection, scale, the corners of the map, and the resolution.
m = Basemap(projection='merc', llcrnrlat=bbox[0], urcrnrlat=bbox[1],
            llcrnrlon=bbox[2], urcrnrlon=bbox[3], lat_ts=10, resolution='i')
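A possible continuation (a sketch; it assumes you just want to draw the map and mark the point, and the styling choices are arbitrary):
m.drawmapboundary(fill_color='aqua')
m.fillcontinents(color='#ddaa66', lake_color='aqua')
m.drawcoastlines()
# convert lon/lat (degrees) to map projection coordinates, then mark the point
x, y = m(my_coords[1], my_coords[0])
m.plot(x, y, marker='o', color='r')
plt.show()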
If you want to see a full tutorial on plotting lat/lon points from a .csv file, check out my tutorial where I go through the whole process and include the full code:
Geographic Mapping from a CSV File Using Python and Basemap
You end up with a map zoomed in on a box around the given coordinates.

New coordinates after image rotation using scipy.ndimage.rotate [duplicate]

I have a numpy array for an image that I read in from a FITS file. I rotated it by N degrees using scipy.ndimage.interpolation.rotate. Then I want to figure out where some point (x,y) in the original non-rotated frame ends up in the rotated image -- i.e., what are the rotated frame coordinates (x',y')?
This should be a very simple rotation matrix problem, but if I apply the usual mathematical or programming-based rotation equations, the new (x',y') does not end up where it should. I suspect this has something to do with needing a translation matrix as well, because the scipy rotate function is based on the origin (0,0) rather than the actual center of the image array.
Can someone please tell me how to get the rotated frame (x',y')? As an example, you could use
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
data_rot = rotate(data_orig,66) # data array
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
P.S. The following two related questions' answers do not help me:
Find new coordinates of a point after rotation
New coordinates after image rotation using scipy.ndimage.rotate
As usual with rotations, one needs to translate to the origin, then rotate, then translate back. Here, we can take the center of the image as origin.
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
from scipy.ndimage import rotate

data_orig = misc.face()
x0, y0 = 580, 300  # left eye; (xrot, yrot) should point there

def rot(image, xy, angle):
    im_rot = rotate(image, angle)
    org_center = (np.array(image.shape[:2][::-1]) - 1) / 2.
    rot_center = (np.array(im_rot.shape[:2][::-1]) - 1) / 2.
    org = xy - org_center
    a = np.deg2rad(angle)
    new = np.array([org[0]*np.cos(a) + org[1]*np.sin(a),
                    -org[0]*np.sin(a) + org[1]*np.cos(a)])
    return im_rot, new + rot_center

fig, axes = plt.subplots(2, 2)
axes[0,0].imshow(data_orig)
axes[0,0].scatter(x0, y0, c="r")
axes[0,0].set_title("original")

for i, angle in enumerate([66, -32, 90]):
    data_rot, (x1, y1) = rot(data_orig, np.array([x0, y0]), angle)
    axes.flatten()[i+1].imshow(data_rot)
    axes.flatten()[i+1].scatter(x1, y1, c="r")
    axes.flatten()[i+1].set_title("Rotation: {}deg".format(angle))

plt.show()

Matplotlib: imshow() and contourf() coherent representation?

I'm trying to draw 2-d contours of a 2d matrix. I am curious to know whether it is normal that imshow() and contour()/contourf() in the matplotlib package treat the origin of the matrix coordinates differently. It appears that imshow() treats the origin of coordinates flipped with respect to contour().
I illustrate this with a quite simple example:
import numpy as np
from matplotlib import pyplot as plt
a = np.diag([1.0, 2, 3])
plt.imshow(a)
produces a nice picture with colors along the main diagonal (from the upper-left corner to the lower-right corner). But if instead I execute
plt.contour(a)
the figure is a set of contours aligned along the other diagonal (from the lower-left corner to the upper-right corner).
If I flip the array with numpy's flipud() function
plt.contour(np.flipud(a))
then contour() and imshow() coincide.
Maybe it is a stupid question; I apologize for that!
Thanks anyway
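A minimal sketch of what seems to be going on, using the diagonal matrix from the question: imshow() puts row 0 at the top by default, while contour() puts it at the bottom, and passing origin='lower' to imshow() is a common way to make them agree without flipping the data:
import numpy as np
from matplotlib import pyplot as plt

a = np.diag([1.0, 2, 3])

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
axes[0].imshow(a)                  # default: row 0 at the top
axes[0].set_title("imshow (default)")
axes[1].imshow(a, origin='lower')  # row 0 at the bottom, matching contour()
axes[1].set_title("imshow origin='lower'")
axes[2].contour(a)                 # row 0 at the bottom
axes[2].set_title("contour")
plt.show()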
