I'm trying to draw 2-D contours of a 2-D matrix. I am curious whether it is normal that imshow() and contour()/contourf() from matplotlib treat the origin of the matrix coordinates differently. It appears that imshow() places the origin flipped with respect to contour().
I illustrate this with a simple example:
import numpy as np
from matplotlib import pyplot as plt
a = np.diag([1.0, 2, 3])
plt.imshow(a)
produces a nice picture with colors along the main diagonal (from the upper-left corner to the lower-right corner). But if instead I execute
plt.contour(a)
the figure shows a set of contours aligned along the other diagonal (from the lower-left corner to the upper-right corner).
If I flip the array with NumPy's flipud() function
plt.contour(np.flipud(a))
then both contour() and imshow() coincide.
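For what it's worth, the two can also be made to agree without flipping the data, since imshow() accepts an origin keyword (a minimal sketch):
import numpy as np
from matplotlib import pyplot as plt

a = np.diag([1.0, 2, 3])

# imshow() defaults to origin='upper' (row 0 at the top, matrix style),
# while contour() places row 0 at the bottom. Asking imshow() for a
# lower origin makes the two agree without touching the data:
plt.imshow(a, origin='lower')
plt.contour(a)
plt.show()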
Maybe this is a silly question; if so, I apologize!
Thanks anyway
Related
The coordinate axis directions seem to be inverted between array indices and plots. Does anyone have a satisfactory explanation for why matplotlib displays images this way?
Code that demonstrates my question:
import numpy as np
import matplotlib.pyplot as plt
img = np.zeros((10, 10))
img[2, 4] = 1  # row 2, column 4
plt.imshow(img)
plt.scatter(2, 4)
The output image will show the white pixel and the dot at positions mirrored across the image diagonal.
This effect caused a lot of confusion for me (and for others I have talked to), and I was wondering whether there is a sensible explanation for it. To me, it seems that matplotlib interprets the first axis of a numpy array as the y-coordinate and the second as x. Since this does not match the usual mathematical right-handed x-y convention, it is confusing.
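To make the mismatch concrete: with the snippet above, swapping the arguments to scatter() puts the dot exactly on the white pixel (a minimal sketch):
import numpy as np
import matplotlib.pyplot as plt

img = np.zeros((10, 10))
img[2, 4] = 1  # row 2, column 4

plt.imshow(img)
# imshow() maps array indices [row, col] to plot coordinates (x=col, y=row),
# so passing (col, row) = (4, 2) to scatter() lands on the white pixel:
plt.scatter(4, 2)
plt.show()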
I've created some random points and added them to a list. Then I drew the plot and saved it as an image.
I'm able to draw a line from one point to another with this code:
cv2.line(img=result, pt1=..., pt2=..., color=(0, 255, 255), thickness=5)
I have a problem there. If I use plt.show() for the plot, I have all the point coordinates in the list. But when I save the plot as an image and show it with the cv2 library, all the point coordinates change.
How can I find these point coordinates on the image?
For example: on the plot you can see the point (1, 4). If I save the plot as an image, this point ends up at coordinates like (104, 305) on the image.
import numpy as np
import random
import matplotlib.pyplot as plt
import cv2

points = np.random.randint(0, 9, size=(18, 2))
print(points)

plt.plot(points[:, 0], points[:, 1], '.', color='k')
plt.savefig("graphic.png", bbox_inches="tight")

result = cv2.imread("graphic.png")
cv2.imshow("Graphic", result)
cv2.waitKey(0)  # keep the window open until a key is pressed
I think you are confusing yourself.
Your x,y coordinates start at the bottom-left corner of the image, put the x coordinate first, and assume the image is 9 pixels wide.
OpenCV stores points relative to the top-left corner, puts the y coordinate first, and refers to an image hundreds of pixels wide.
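If you need the actual pixel positions of the plotted points in the saved image, matplotlib can compute them through the axes' data-to-display transform. A rough sketch, assuming you save at the figure's own dpi and without bbox_inches="tight" (which would crop the figure and shift the coordinates); the y flip is needed because matplotlib's display origin is the bottom-left corner while OpenCV counts rows from the top:
import numpy as np
import matplotlib.pyplot as plt

points = np.random.randint(0, 9, size=(18, 2))

fig, ax = plt.subplots()
ax.plot(points[:, 0], points[:, 1], '.', color='k')
fig.canvas.draw()  # finalize the layout so the transform is up to date

# Map data coordinates to display (pixel) coordinates:
pixel_coords = ax.transData.transform(points)

# matplotlib's pixel origin is bottom-left; OpenCV's is top-left,
# so flip the y values against the figure height in pixels:
height = fig.canvas.get_width_height()[1]
pixel_coords[:, 1] = height - pixel_coords[:, 1]
print(pixel_coords)

fig.savefig("graphic.png")  # save without bbox_inches="tight"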
I'm working on sunspot detection, and I'm trying to build ground-truth masks using the sunpy.net.hek client to download solar events from the knowledge base.
I followed this tutorial.
My problem is that I'm not able to get the polygon pixel coordinates after the rotation. That is:
ch_boundary = SkyCoord([(float(v[0]), float(v[1])) * u.arcsec for v in p3],
                       obstime=ch_date,
                       frame=frames.Helioprojective)
rotated_ch_boundary = solar_rotate_coordinate(ch_boundary, aia_map.date)
where p3 holds the original coordinates of the event (they have to be rotated because your image may not have the same timestamp as the event on HEK). rotated_ch_boundary is an astropy SkyCoord, but I cannot figure out how to get the pixel coordinates relative to the image from it.
Then the tutorial just plots the coordinates using a sunpy Map and matplotlib:
aia_map.plot(axes=ax)
ax.plot_coord(rotated_ch_boundary, color='c')
I cannot do that, because I want to draw the polygon (filled) onto a numpy array and save it.
I also tried to build a custom Sunpy map and use the same function to plot:
from sunpy.net.helioviewer import HelioviewerClient
import sunpy.map

hv = HelioviewerClient()
filepath = hv.download_jp2('2017/07/10 10:00:00', observatory='SDO',
                           instrument='HMI', detector='HMI',
                           measurement='continuum')
hmi = sunpy.map.Map(filepath)

# QUERY AND ROTATION CODE HERE...

hmi.plot(axes=ax)
ax.plot_coord(rotated_ch_boundary, color='c')
but it doesn't even show the polygon on the plot; I don't know whether that's due to the different resolution or something else.
Do you have any idea on how I can plot the polygon on a custom image and save it in order to use it later?
My purpose is to create a black image with a white polygon highlighted. The polygon should be in exactly the same position as the sunspot in the corresponding image, say an SDO HMI intensitygram from the same day, downloaded from Helioviewer.
Solution:
aia_map.world_to_pixel(rotated_ch_boundary)
or
rotated_ch_boundary.to_pixel(aia_map.wcs)
Thanks to fraserwatson for this post
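For the black image with a white filled polygon, one option from here is skimage.draw.polygon; a sketch, assuming rotated_ch_boundary and aia_map from above:
import numpy as np
from skimage.draw import polygon

# x and y pixel coordinates of the polygon vertices:
px, py = rotated_ch_boundary.to_pixel(aia_map.wcs)

# Black image with the same shape as the map, polygon filled in white.
# Note that numpy arrays are indexed [row, col], i.e. [y, x]:
mask = np.zeros(aia_map.data.shape, dtype=np.uint8)
rr, cc = polygon(py, px, shape=mask.shape)
mask[rr, cc] = 255

np.save("ground_truth_mask.npy", mask)  # hypothetical output path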
I have a numpy array for an image that I read in from a FITS file. I rotated it by N degrees using scipy.ndimage.interpolation.rotate. Then I want to figure out where some point (x,y) in the original non-rotated frame ends up in the rotated image -- i.e., what are the rotated frame coordinates (x',y')?
This should be a very simple rotation-matrix problem, but if I apply the usual mathematical or programming rotation equations, the new (x', y') does not end up where the point originally was. I suspect this has something to do with needing a translation matrix as well, because the scipy rotate function is based on the origin (0, 0) rather than the actual center of the image array.
Can someone please tell me how to get the rotated frame (x',y')? As an example, you could use
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
data_rot = rotate(data_orig,66) # data array
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
P.S. The following two related questions' answers do not help me:
Find new coordinates of a point after rotation
New coordinates after image rotation using scipy.ndimage.rotate
As usual with rotations, one needs to translate to the origin, then rotate, then translate back. Here, we can take the center of the image as origin.
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
from scipy.ndimage import rotate

data_orig = misc.face()
x0, y0 = 580, 300  # left eye; (xrot, yrot) should point there

def rot(image, xy, angle):
    im_rot = rotate(image, angle)
    # Centers of the original and the (larger) rotated image:
    org_center = (np.array(image.shape[:2][::-1]) - 1) / 2.
    rot_center = (np.array(im_rot.shape[:2][::-1]) - 1) / 2.
    # Translate to the origin, rotate, then translate back:
    org = xy - org_center
    a = np.deg2rad(angle)
    new = np.array([org[0] * np.cos(a) + org[1] * np.sin(a),
                    -org[0] * np.sin(a) + org[1] * np.cos(a)])
    return im_rot, new + rot_center

fig, axes = plt.subplots(2, 2)
axes[0, 0].imshow(data_orig)
axes[0, 0].scatter(x0, y0, c="r")
axes[0, 0].set_title("original")

for i, angle in enumerate([66, -32, 90]):
    data_rot, (x1, y1) = rot(data_orig, np.array([x0, y0]), angle)
    axes.flatten()[i + 1].imshow(data_rot)
    axes.flatten()[i + 1].scatter(x1, y1, c="r")
    axes.flatten()[i + 1].set_title("Rotation: {}deg".format(angle))

plt.show()
I've decided to use guiqwt as my main plotting library in Python, and it works quite well. However, I'm missing a contour-plot feature, so I had to work out my own contours for my image plots. That was quite easy using scikit-image. Now my plot shows the image with the contours on top. The scale unit in the x and y directions is pixels, since the raw image data is given per pixel, as are the calculated contours.
My problem is converting the pixel scale into, e.g., a mm scale without scaling the image. I want to replace the original scale with one that represents the measured distances. The distances are available in an array.
In my first attempt I tried to change the axis scale division by creating a new one and using QwtPlot::setAxisScaleDiv. But that seems to work like a zoom function, as the image is reduced to the new interval.
Here is my code for a small example:
from guiqwt.plot import ImageDialog
from guiqwt.builder import make
from skimage import measure
import numpy as np

data = np.random.rand(80, 30)
contours = measure.find_contours(data, 0.1)

win = ImageDialog(edit=False, toolbar=True, wintitle="Contrast test",
                  options=dict(show_contrast=True))
img = make.image(data)
plot = win.get_plot()
plot.add_item(img)

for n, contour in enumerate(contours):
    curve = make.curve(contour[:, 1], contour[:, 0], 'k-')
    plot.add_item(curve)

win.show()

scaleEng = plot.axisScaleEngine(2)
scaleDiv = scaleEng.divideScale(20, 30, 5, 5, 0)
plot.setAxisScaleDiv(2, scaleDiv)
plot.replot()
The syntax is very close to qwt, so I think anybody who is familiar with qwt might be able to help me :)
The image zoom should stay unaltered. Only the axis should be recalculated to a mm-scale and afterwards, of course, adapted when the zoom function is used.
I solved the problem with a completely different approach: I used guiqwt's xyimage function. However, I had to scale my contours too; I had missed that before, which is why I posted the question.
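In case it helps anyone, a rough sketch of that approach, with a hypothetical pixel pitch of 0.05 mm (make.xyimage binds the image to explicit x/y coordinate arrays, and the contour coordinates must be scaled by the same factor):
from guiqwt.builder import make
from skimage import measure
import numpy as np

data = np.random.rand(80, 30)
contours = measure.find_contours(data, 0.1)

mm_per_px = 0.05  # hypothetical pixel pitch in mm
x_mm = np.arange(data.shape[1]) * mm_per_px
y_mm = np.arange(data.shape[0]) * mm_per_px

# xyimage takes the physical coordinates instead of pixel indices:
img = make.xyimage(x_mm, y_mm, data)

# The contours are in pixel units too, so scale them the same way:
curves = [make.curve(c[:, 1] * mm_per_px, c[:, 0] * mm_per_px, 'k-')
          for c in contours]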