I am using Python 3.6 to open a shapefile of the Amazon River with Basemap. However, I am confused about how coordinates work in Python. I looked up the coordinates of the Amazon River and found lon, lat = -55.126648, -2.163106. But to create my map I need the lat/lon values of the corners, which I am not sure how to get.
Here is my code so far:
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
map = Basemap(projection='tmerc',
              lon_0=180,
              lat_0=0,
              resolution='l')
map.drawmapboundary(fill_color='aqua')
map.fillcontinents(color='#ddaa66',lake_color='aqua')
map.drawcoastlines()
map.readshapefile('filename','Amazon')
plt.show()
Here is the error message I get when I try to run it:
ValueError: must either specify lat/lon values of corners
(llcrnrlon,llcrnrlat,urcrnrlon,urcrnrlat) in degrees or width and height in meters
When creating your map (map = Basemap(...)), you need to specify those values: llcrnrlon (lower left corner longitude), llcrnrlat (lower left corner latitude), urcrnrlon (upper right corner longitude), and urcrnrlat (upper right corner latitude). These define the extents of the map. You could just plot the whole earth, then look at the region you want and pick the corner points off of it.
The best method for this type of point plotting is to create your own corners by 'zooming out' from the point. This means you'll need to specify llcrnrlat (lower left corner latitude), etc., as such:
my_coords = [38.9719980,-76.9219820]
# How much to zoom from coordinates (in degrees)
zoom_scale = 1
# Setup the bounding box for the zoom and bounds of the map
bbox = [my_coords[0]-zoom_scale, my_coords[0]+zoom_scale,\
        my_coords[1]-zoom_scale, my_coords[1]+zoom_scale]
plt.figure(figsize=(12,6))
# Define the projection, scale, the corners of the map, and the resolution.
m = Basemap(projection='merc', llcrnrlat=bbox[0], urcrnrlat=bbox[1],\
            llcrnrlon=bbox[2], urcrnrlon=bbox[3], lat_ts=10, resolution='i')
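The zoom-out logic above can be factored into a small helper that returns the four corner values Basemap expects (the function name is my own, not part of Basemap):

```python
def corners_from_point(lat, lon, zoom_scale):
    """Return (llcrnrlat, urcrnrlat, llcrnrlon, urcrnrlon) for a map
    centered on (lat, lon), expanded by zoom_scale degrees on each side."""
    return (lat - zoom_scale, lat + zoom_scale,
            lon - zoom_scale, lon + zoom_scale)

# Corners for the Amazon River point from the question, zoomed out 5 degrees
llcrnrlat, urcrnrlat, llcrnrlon, urcrnrlon = corners_from_point(-2.163106,
                                                                -55.126648, 5)
```

These four values can then be passed straight into the Basemap(...) call, which resolves the original ValueError.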
If you want to see a full tutorial on plotting lat/lon points from a .csv file, check out my tutorial where I go through the whole process and include the full code:
Geographic Mapping from a CSV File Using Python and Basemap
You end up with a result that looks like the following:
I'm trying to set the initial camera of a 3D volume plot so that the upper left corner is the origin (x, y, z = 0). I've read the documentation about the camera controls but cannot figure out how to accomplish this.
The initial view I want is something like this:
I tried this and it worked for me.
If you want the front upper left corner as (0, 0, 0):
# `fig` is your existing plotly figure and `name` is its title
camera = dict(
    eye=dict(x=0, y=-0.5, z=-2.5)
)
fig.update_layout(scene_camera=camera, title=name)
fig.show()
What I understand from this: eye is basically the position of the eye (or you), looking at the eyepoint (0, 0, 0), which I believe is the center of the 3D graph (not a data coordinate).
And if you need to flip an axis direction to the opposite, negate that component of the eye position; if a component is zero, use a small negative number instead (in this example I used -0.5, but -0.01 works too).
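That sign-flipping rule can be captured in a small helper (my own sketch, not part of the plotly API) that turns any eye position into the view from the opposite side:

```python
def flipped_eye(eye, eps=-0.5):
    """Negate each eye component to view the scene from the opposite side.
    Zero components become a small negative number (eps) so that the
    corresponding axis still flips."""
    return {k: (-v if v != 0 else eps) for k, v in eye.items()}

# Flip the camera from the answer above to look from the other side
print(flipped_eye(dict(x=0, y=-0.5, z=-2.5)))
# {'x': -0.5, 'y': 0.5, 'z': 2.5}
```

The result is a plain dict, so it can be passed as fig.update_layout(scene_camera=dict(eye=flipped_eye(...))).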
I've created random points and added them to a list. Then I've drawn the graphic and saved it as an image.
I'm able to draw a line from one point to another point with this code :
cv2.line(img=result,pt1=,pt2=,color=(0,255,255),thickness=5)
I have a problem there. If I use plt.show() for the graphic, I have all the points' coordinates in the list. But when I save the graphic as an image and show it with the cv2 library, all the points' coordinates change.
How can I find these points' coordinates on the image?
For example: on this graphic you can see the point (1, 4). If I save the graphic as an image, this point gets the coordinates (104, 305) on the image.
import numpy as np
import random
import matplotlib.pyplot as plt
import cv2
points = np.random.randint(0, 9, size=(18,2))
print(points)
plt.plot(points[:,0], points[:,1], '.',color='k')
plt.savefig("graphic.png",bbox_inches="tight")
result = cv2.imread("graphic.png")
cv2.imshow("Graphic",result)
cv2.waitKey(0)  # required for the window to appear and stay open
I think you are confusing yourself.
Your x, y coordinates start at the bottom-left corner of the plot, put the x coordinate first, and run in data units (the axes span roughly 0-9).
OpenCV stores points relative to the top-left corner, puts the row (y) first, and refers to an image hundreds of pixels wide.
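One way to recover the image-pixel position of a data point is matplotlib's own transform machinery. A sketch of the idea follows; note it assumes you save without bbox_inches="tight", since that option re-crops the figure and shifts the mapping:

```python
import matplotlib
matplotlib.use("Agg")          # render off-screen
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 4, 7], [4, 2, 8], '.', color='k')
fig.canvas.draw()              # finalize the layout so transforms are valid

# transData maps (x, y) in data units to display pixels (origin bottom-left)
x_disp, y_disp = ax.transData.transform((1, 4))

# Images (and OpenCV) put the origin at the TOP-left, so flip the y axis
width_px, height_px = fig.canvas.get_width_height()
col, row = x_disp, height_px - y_disp

# Save at the figure's own dpi so the pixel grid matches the transform
fig.savefig("graphic.png", dpi=fig.dpi)
print(col, row)
```

(col, row) is then the OpenCV-style position of the data point (1, 4) in the saved image.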
I'm working on sunspot detection and I'm trying to build ground truth masks using sunpy.net.hek client to download solar events from the knowledge base.
I followed this tutorial.
My problem is that I'm not able to get the polygon pixel coordinates after the rotation. That is:
ch_boundary = SkyCoord([(float(v[0]), float(v[1])) * u.arcsec for v in p3],
                       obstime=ch_date,
                       frame=frames.Helioprojective)
rotated_ch_boundary = solar_rotate_coordinate(ch_boundary, aia_map.date)
Where p3 holds the original coordinates of the event (they have to be rotated because your picture may not have the same timing as the event on HEK). rotated_ch_boundary is an astropy SkyCoord, but I cannot figure out how to get the pixel coordinates relative to the image from it.
Then in the tutorial it just plots the coordinates using Sunpy Map and matplotlib:
aia_map.plot(axes=ax)
ax.plot_coord(rotated_ch_boundary, color='c')
I cannot do that because I want to print the polygon (filled) on a numpy array and save it.
I also tried to build a custom Sunpy map and use the same function to plot:
from sunpy.net.helioviewer import HelioviewerClient
hv = HelioviewerClient()
filepath = hv.download_jp2('2017/07/10 10:00:00', observatory='SDO',
                           instrument='HMI', detector='HMI',
                           measurement='continuum')
hmi = sunpy.map.Map(filepath)
# QUERY AND ROTATION CODE HERE...
hmi.plot(axes=ax)
ax.plot_coord(rotated_ch_boundary, color='c')
but it doesn't even show the polygon on the plot; I don't know if that's because of the different resolution or something else.
Do you have any idea on how I can plot the polygon on a custom image and save it in order to use it later?
My purpose is to create a black image with a white polygon highlighted. The polygon should be in the exact same position as the sunspot in the corresponding image, let's say an SDO HMI intensitygram of the same day I downloaded from helioviewer.
Solution:
aia_map.world_to_pixel(rotated_ch_boundary)
or
rotated_ch_boundary.to_pixel(aia_map.wcs)
Thanks to fraserwatson for this post
I'm doing some plotting using cartopy and matplotlib, and I was recently using the PlateCarree transformation, but I changed to Mercator because Louisiana was a bit too squished for my liking. Prior to the switch, I had a logo displayed in the bottom left corner, using these two lines of code:
logo = matplotlib.image.imread('/Users/ian/Desktop/M.png')
plt.imshow(logo, extent=(lon - offset - 1 + .25, lon - offset + .75, lat - offset + .25, lat - offset + 1 + .75), zorder=35)
Where the extent of the axis was set using these points
ax.set_extent([lon-offset-1, lon+offset+1, lat-offset, lat+offset])
This is what the plot looked like using PlateCarree:
After switching to Mercator, I've gotten everything to work well except for the logo. I've added the transformation keyword argument to the image plotting line, so now it reads:
plt.imshow(logo, extent =(lon-offset -1 +.25, lon - offset + .75, lat - offset + .25, lat - offset + 1 + .75), zorder=35, transform=ccrs.PlateCarree())
but now the logo has lost its crispness and become skewed with the transformation, and most mysteriously, it has switched corners to the upper left corner of the plot. It now looks like this:
Does anyone know how I can change the projection of my plot without skewing this image? What I really need to do is make sure that the corner of the image is in the corner of the plot in the transformed coordinate system, but leave the rest of the image's placement independent of the coordinate system. I was thinking about possibly putting the image all alone in its own separate subplot, and then trying to place that subplot directly on top of the main one. Seems like a pretty bad solution though. Thanks!
You might get better results if you plot the image logo in axes coordinates rather than data coordinates. You can use the ax.transAxes transform for this, and specify the extent in axes coordinates ([0, 0] in bottom left, [1, 1] in top right).
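A minimal matplotlib-only sketch of that idea follows; a random array stands in for the M.png logo, and a cartopy GeoAxes accepts the same transform keyword:

```python
import matplotlib
matplotlib.use("Agg")           # render off-screen
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.plot(range(10))              # stand-in for the map content

logo = np.random.rand(20, 20)   # stand-in for the imread('M.png') array
# Axes coordinates: (0, 0) bottom-left, (1, 1) top-right, so this extent
# pins the logo near the lower-left corner regardless of the projection.
ax.imshow(logo, extent=(0.02, 0.22, 0.02, 0.22),
          transform=ax.transAxes, zorder=35, aspect='auto')
fig.canvas.draw()
```

Because the extent is expressed in axes fractions rather than data coordinates, the logo stays unskewed and in the same corner no matter which projection the underlying axes use.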
I have a GeoTiff in UTM32 and coordinates of a rectangle also in UTM32.
(This projection may not always be the case, but the projections will always be the same)
I simply need to crop the image using the rectangle.
The rectangle is given by: (xmin, xmax, ymin, ymax)
699934.584491, 700160.946739, 6168703.00544, 6169364.0093
I know how to make a polygon from the points, how to make a shapefile from the polygon, and I know how to create a masked numpy array using the points. However, I don't know how to use the polygon, the shapefile or the mask to actually crop the image.
I already looked at the description at:
https://pcjericks.github.io/py-gdalogr-cookbook/raster_layers.html#clip-a-geotiff-with-shapefile
However, I don't really understand it and it seems overly complicated. (For instance, I don't know what the histogram stretching is supposed to be doing there, other than adding confusion.)
Akin's answer is mostly right, but doesn't provide a complete explanation.
You can crop a gdal file using gdal_translate, which can be used in python via gdal.Translate.
Best option: projwin
The easiest way is with the projwin flag, which takes 4 values:
window = (upper_left_x, upper_left_y, lower_right_x, lower_right_y)
These values are in map coordinates. The bounds of the input file can be obtained via gdalinfo input_raster.tif from the command line.
NOTE: for many coordinate systems, ymax is actually less than ymin, so it's important to use "upper_left" and "lower_right" to identify the coordinates instead of "max" and "min." Akin's answer didn't work for me because of this difference.
The complete solution, then, is:
from osgeo import gdal
upper_left_x = 699934.584491
upper_left_y = 6169364.0093
lower_right_x = 700160.946739
lower_right_y = 6168703.00544
window = (upper_left_x,upper_left_y,lower_right_x,lower_right_y)
gdal.Translate('output_crop_raster.tif', 'input_raster.tif', projWin = window)
Additional option: srcwin
srcwin is another gdal_translate flag similar to projwin, but takes in the pixel and line window via an offset and size, instead of using the map coordinate bounds. You would use it like this:
window = (offset_x, offset_y, size_x, size_y)
gdal.Translate('output_crop_raster.tif', 'input_raster.tif', srcWin = window)
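The relationship between the two windows is just geotransform arithmetic. Here is a hedged sketch (the function name is my own) of converting a projwin into the equivalent srcwin, given a north-up geotransform as returned by ds.GetGeoTransform():

```python
def projwin_to_srcwin(projwin, geotransform):
    """Convert a map-coordinate window (ulx, uly, lrx, lry) into a
    pixel/line window (off_x, off_y, size_x, size_y).

    geotransform = (x_origin, pixel_width, 0, y_origin, 0, pixel_height),
    where pixel_height is typically negative for north-up rasters."""
    ulx, uly, lrx, lry = projwin
    x0, pw, _, y0, _, ph = geotransform
    off_x = int((ulx - x0) / pw)          # columns from the left edge
    off_y = int((uly - y0) / ph)          # rows from the top edge
    size_x = int(round((lrx - ulx) / pw)) # window width in pixels
    size_y = int(round((lry - uly) / ph)) # window height in pixels
    return off_x, off_y, size_x, size_y

# Example: 1 m pixels, origin northwest of the rectangle from the question
gt = (699000.0, 1.0, 0.0, 6170000.0, 0.0, -1.0)
projwin = (699934.584491, 6169364.0093, 700160.946739, 6168703.00544)
print(projwin_to_srcwin(projwin, gt))
```

This makes the upper-left/lower-right ordering concrete: both the x offset and the y offset are measured from the raster's upper-left origin, which is why projWin takes uly before lry even though uly is the larger number.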
Try to use bbox = (xmin,ymin,xmax,ymax)
from osgeo import gdal
bbox = (xmin,ymin,xmax,ymax)
gdal.Translate('output_crop_raster.tif', 'input_raster.tif', projWin = bbox)