What I want to do:
I want to get the position and dimensions of a text instance in matplotlib world units (not screen pixels), with the intention of calculating and preventing text overlaps.
I'm developing on Mac OSX 10.9.3, Python 2.7.5, matplotlib 1.3.1.
What I've tried:
Let t be a text instance.
t.get_window_extent(renderer):
This gets bounding box dimensions in pixels, and I need world coordinates (normalized between -1.0 and 1.0 in my case).
t._get_bbox_patch():
t = ax.text(x, y, text_string, prop_dict, bbox=dict(facecolor='red', alpha=0.5, boxstyle='square'))
print t._get_bbox_patch()
When I execute the above sequence, the output is FancyBboxPatch(0,0;1x1). In the image I produce, the text instance is rendered properly with a red bounding box, so that output leads me to think that the FancyBboxPatch is instantiated but not populated with real dimensions until render time.
So, how can I get the position and dimensions of the text instance's bounding box in the same coord system units that I used for the x and y parameters I passed to ax.text(...)?
This may help a bit.
import matplotlib.pyplot as plt

f = plt.figure()
ax = f.add_subplot(111)
ax.plot([0, 10], [4, 0])
t = ax.text(3.2, 2.1, "testing...")

# the renderer only exists once the figure has been drawn
f.canvas.draw()

# get the inverse of the transformation from data coordinates to pixels
transf = ax.transData.inverted()
bb = t.get_window_extent(renderer=f.canvas.renderer)
bb_datacoords = bb.transformed(transf)
# Bbox('array([[ 3.2       ,  2.1       ],\n       [ 4.21607125,  2.23034396]])')
This should give what you want. If you want the coordinates as axes fractions (0..1, 0..1), use the inverse of ax.transAxes instead (or f.transFigure.inverted() for true figure coordinates).
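As a minimal follow-on to the snippet above (reusing bb from it):
# the same extent, expressed as axes fractions (0..1, 0..1)
bb_axcoords = bb.transformed(ax.transAxes.inverted())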
However, there is a small catch in this solution. An excerpt from the matplotlib documentation:
Any Text instance can report its extent in window coordinates (a negative x coordinate is outside the window), but there is a rub.
The RendererBase instance, which is used to calculate the text size, is not known until the figure is drawn (draw()). After the window is drawn and the text instance knows its renderer, you can call get_window_extent().
So, before the figure is really drawn, there seems to be no way to find out the text size.
BTW, you may have noticed that Bbox instances have an overlaps method that can be used to check whether one Bbox overlaps another (bb1.overlaps(bb2)). This may be useful in some cases, but it does not answer the question of "how much" (a sketch for that follows below).
If you have rotated texts, you will have a hard time telling whether they overlap, but you probably know that already.
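If you do need the "how much", here is a minimal sketch, assuming bb1 and bb2 are Bbox instances in the same coordinate system:
from matplotlib.transforms import Bbox

# area of overlap between two bounding boxes
if bb1.overlaps(bb2):
    inter = Bbox.intersection(bb1, bb2)  # returns None when the boxes are disjoint
    overlap_area = inter.width * inter.height
else:
    overlap_area = 0.0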
A little late, but here is another example showing how to get the bounding box of a text object in data coordinates/units. It also draws the obtained bounding box around the text as a visual check.
import matplotlib.pyplot as plt
# some example plot
plt.plot([1,2,3], [2,3,4])
t = plt.text(1.1, 3.1, "my text", fontsize=18)
# to get the text bounding box
# we need to draw the plot
plt.gcf().canvas.draw()
# get bounding box of the text
# in the units of the data
bbox = t.get_window_extent().inverse_transformed(plt.gca().transData)
# (inverse_transformed was removed in newer matplotlib; use
#  .transformed(plt.gca().transData.inverted()) there instead)
print(bbox)
# prints: Bbox(x0=1.1, y0=3.0702380952380954, x1=1.5296875, y1=3.2130952380952382)
# plot the bounding box around the text
plt.plot([bbox.x0, bbox.x0, bbox.x1, bbox.x1, bbox.x0],
         [bbox.y0, bbox.y1, bbox.y1, bbox.y0, bbox.y0])
plt.show()
I am now facing a new problem using the GetDist library, available on the GetDist home page. Examples are given in the getdist plot gallery.
This is a tool for plotting joint distributions for a set of covariance matrices.
Everything works fine except for one detail that bothers me: if I zoom in very deeply, I notice a slight shift between the filled contours and the contour lines. I illustrate this with the following zoomed figure (the smallest contour corresponds to the 1-sigma uncertainty, the largest to 2-sigma), which shows the ellipses of two covariance matrices.
In this figure, I have zoomed deeply into one subplot. If I zoom back out, I get this kind of image:
The relevant section that generates the triplot is :
# Call triplot
g.triangle_plot([matrix1, matrix2],
                names,
                filled=True,
                legend_labels=[],
                contour_colors=['darkblue', 'red'],
                line_args=[{'lw': 2, 'color': 'darkblue'},
                           {'lw': 2, 'color': 'red'}],
                )
I don't understand why the filled areas (red and dark blue) slightly exceed the corresponding contour lines.
Maybe it is related to my computation of the ellipse limits along the x and y coordinates (done so that the ellipse fully fills the subplot), together with rounding errors. I tried modifying these parameters without success.
I haven't looked at the code, but what I can see from the image is that the border is half inset and half outset. I assume the border has the same transparency as the shape's fill color, producing the effect of a shifted dark border, when in fact it is just the region where the transparent border and the transparent fill overlap.
The following example shows two circles with a fill color of rgba(0,0,0,0.5). The border of circle A is fully opaque, rgb(0,0,0), while on circle B the border color matches the fill color (so 50% opacity, rgba(0,0,0,0.5)).
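The two-circle demo itself isn't reproduced above; the following is a hypothetical matplotlib approximation of the same effect:
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

# A patch edge is centred on the path, so its inner half overlaps the fill.
# Circle A: opaque edge. Circle B: edge matches the 50%-alpha fill, so the
# inner half of the edge looks like a shifted, darker border when zoomed in.
fig, ax = plt.subplots()
ax.add_patch(Circle((0.3, 0.5), 0.2, facecolor=(0, 0, 0, 0.5),
                    edgecolor=(0, 0, 0, 1.0), linewidth=10))
ax.add_patch(Circle((0.7, 0.5), 0.2, facecolor=(0, 0, 0, 0.5),
                    edgecolor=(0, 0, 0, 0.5), linewidth=10))
ax.set_aspect('equal')
plt.show()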
I'm working on sunspot detection and I'm trying to build ground truth masks using sunpy.net.hek client to download solar events from the knowledge base.
I followed this tutorial.
My problem is that I'm not able to get the polygon pixel coordinates after the rotation. That is:
ch_boundary = SkyCoord([(float(v[0]), float(v[1])) * u.arcsec for v in p3],
                       obstime=ch_date,
                       frame=frames.Helioprojective)
rotated_ch_boundary = solar_rotate_coordinate(ch_boundary, aia_map.date)
Here p3 holds the original coordinates of the event (they have to be rotated because the image may not have the same timestamp as the event on HEK). rotated_ch_boundary is an Astropy SkyCoord, but I cannot figure out how to get the polygon's pixel coordinates relative to the image from it.
Then in the tutorial it just plots the coordinates using Sunpy Map and matplotlib:
aia_map.plot(axes=ax)
ax.plot_coord(rotated_ch_boundary, color='c')
I cannot do that, because I want to draw the filled polygon onto a numpy array and save it.
I also tried to build a custom Sunpy map and use the same function to plot:
from sunpy.net.helioviewer import HelioviewerClient
hv = HelioviewerClient()
filepath = hv.download_jp2('2017/07/10 10:00:00', observatory='SDO',
                           instrument='HMI', detector='HMI',
                           measurement='continuum')
hmi = sunpy.map.Map(filepath)
# QUERY AND ROTATION CODE HERE...
hmi.plot(axes=ax)
ax.plot_coord(rotated_ch_boundary, color='c')
but it doesn't even show the polygon on the plot; I don't know whether that is due to the different resolution or something else.
Do you have any idea on how I can plot the polygon on a custom image and save it in order to use it later?
My purpose is to create a black image with the polygon highlighted in white. The polygon should be in exactly the same position as the sunspot in the corresponding image, say an SDO HMI intensitygram of the same day downloaded from Helioviewer.
Solution:
aia_map.world_to_pixel(rotated_ch_boundary)
or
rotated_ch_boundary.to_pixel(aia_map.wcs)
Thanks to fraserwatson for this post
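To get from those pixel coordinates to the black-and-white ground-truth mask described above, here is a minimal sketch (assuming aia_map and rotated_ch_boundary exist as in the question; the output file name is made up):
import numpy as np
from matplotlib.path import Path

# convert the rotated boundary to pixel coordinates of the map
px_x, px_y = rotated_ch_boundary.to_pixel(aia_map.wcs)
verts = np.column_stack((px_x, px_y))

# test every pixel of the image against the polygon
ny, nx = aia_map.data.shape
yy, xx = np.mgrid[0:ny, 0:nx]
points = np.column_stack((xx.ravel(), yy.ravel()))
mask = Path(verts).contains_points(points).reshape(ny, nx)

# white polygon (1) on black (0)
np.save('ground_truth_mask.npy', mask.astype(np.uint8))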
I am using Python 3.6 to open a shapefile of the Amazon River in Basemap. However, I am confused about how coordinates work in Python. I looked up the coordinates of the Amazon River and found lon, lat = -55.126648, -2.163106. But to open my map I need the lat/lon values of the corners, which I am not sure how to get.
Here is my code so far:
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

map = Basemap(projection='tmerc',
              lon_0=180,
              lat_0=0,
              resolution='l')
map.drawmapboundary(fill_color='aqua')
map.fillcontinents(color='#ddaa66', lake_color='aqua')
map.drawcoastlines()
map.readshapefile('filename', 'Amazon')
plt.show()
Here is the error message I get when I try to run it:
ValueError: must either specify lat/lon values of corners
(llcrnrlon,llcrnrlat,urcrnrlon,urcrnrlat) in degrees or width and height in meters
When creating your map (map = Basemap(...)), you need to specify those values: lower left corner longitude, lower left corner latitude, upper right corner longitude, and upper right corner latitude. These define the extent of the map. You could first plot the whole earth, look at the region you want, and read the corner values for it off the figure, as in the sketch below.
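A quick way to do that eyeballing, sketched under the assumption that a global cylindrical view is acceptable:
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

# plot the whole earth with a lat/lon grid, then read approximate
# corner values for the region of interest off the figure
m = Basemap(projection='cyl', resolution='l')  # global extent by default
m.drawcoastlines()
m.drawparallels([-90, -60, -30, 0, 30, 60, 90], labels=[1, 0, 0, 0])
m.drawmeridians([-180, -120, -60, 0, 60, 120, 180], labels=[0, 0, 0, 1])
plt.show()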
The best method for this type of point plotting is to create your own corners by 'zooming out' from the point. This means you'll need to specify llcrnrlat (lower left corner latitude) and the other three corner values, as such:
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

my_coords = [38.9719980, -76.9219820]

# How much to zoom out from the coordinates (in degrees)
zoom_scale = 1

# Set up the bounding box for the zoom and the bounds of the map
bbox = [my_coords[0] - zoom_scale, my_coords[0] + zoom_scale,
        my_coords[1] - zoom_scale, my_coords[1] + zoom_scale]

plt.figure(figsize=(12, 6))

# Define the projection, scale, corners of the map, and resolution
m = Basemap(projection='merc', llcrnrlat=bbox[0], urcrnrlat=bbox[1],
            llcrnrlon=bbox[2], urcrnrlon=bbox[3], lat_ts=10, resolution='i')
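A hypothetical continuation that draws the point itself on the zoomed map:
# Basemap instances are callable: they convert lon/lat to map x/y
x, y = m(my_coords[1], my_coords[0])
m.drawcoastlines()
m.fillcontinents(color='#ddaa66', lake_color='aqua')
m.plot(x, y, 'r*', markersize=12)
plt.show()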
If you want to see a full tutorial on plotting lat/lon points from a .csv file, check out my tutorial where I go through the whole process and include the full code:
Geographic Mapping from a CSV File Using Python and Basemap
I have a GeoTiff in UTM32 and coordinates of a rectangle also in UTM32.
(The projection may not always be UTM32, but the image and the rectangle will always be in the same projection.)
I simply need to crop the image using the rectangle.
The rectangle is given by: (xmin, xmax, ymin, ymax)
699934.584491, 700160.946739, 6168703.00544, 6169364.0093
I know how to make a polygon from the points, how to make a shapefile from the polygon, and I know how to create a masked numpy array using the points. However, I don't know how to use the polygon, the shapefile or the mask to actually crop the image.
I already looked at the description at:
https://pcjericks.github.io/py-gdalogr-cookbook/raster_layers.html#clip-a-geotiff-with-shapefile
However, I don't really understand it, and it seems overly complicated (for instance, I don't know what the histogram stretching is supposed to be doing there, other than adding confusion).
Akin's answer is mostly right, but doesn't provide a complete explanation.
You can crop a gdal file using gdal_translate, which can be used in python via gdal.Translate.
Best option: projwin
The easiest way is with the projwin flag, which takes 4 values:
window = (upper_left_x, upper_left_y, lower_right_x, lower_right_y)
These values are in map coordinates. The bounds of the input file can be obtained via gdalinfo input_raster.tif from the command line.
NOTE: for a north-up raster the y coordinate decreases as you move down the image, so the window's first y value (the upper left) is the larger one. It is therefore safer to think in terms of "upper left" and "lower right" than "max" and "min" when identifying the coordinates. Akin's answer didn't work for me because of this difference.
The complete solution, then, is:
from osgeo import gdal
upper_left_x = 699934.584491
upper_left_y = 6169364.0093
lower_right_x = 700160.946739
lower_right_y = 6168703.00544
window = (upper_left_x, upper_left_y, lower_right_x, lower_right_y)
gdal.Translate('output_crop_raster.tif', 'input_raster.tif', projWin=window)
Additional option: srcwin
srcwin is another gdal_translate flag, similar to projwin, but it takes the pixel and line window as an offset plus a size instead of map-coordinate bounds (a sketch for deriving these values follows the example below). You would use it like this:
window = (offset_x, offset_y, size_x, size_y)
gdal.Translate('output_crop_raster.tif', 'input_raster.tif', srcWin = window)
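If you only have map coordinates, here is a sketch of deriving the srcWin values from the raster's geotransform (assuming a north-up image and the same file names as above):
from osgeo import gdal

ds = gdal.Open('input_raster.tif')
# geotransform: (origin_x, pixel_width, 0, origin_y, 0, pixel_height);
# pixel_height is negative for north-up rasters
gt = ds.GetGeoTransform()

offset_x = int((upper_left_x - gt[0]) / gt[1])
offset_y = int((upper_left_y - gt[3]) / gt[5])
size_x = int((lower_right_x - upper_left_x) / gt[1])
size_y = int((lower_right_y - upper_left_y) / gt[5])

gdal.Translate('output_crop_raster.tif', ds,
               srcWin=(offset_x, offset_y, size_x, size_y))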
Try to use bbox = (xmin,ymin,xmax,ymax)
from osgeo import gdal
bbox = (xmin,ymin,xmax,ymax)
gdal.Translate('output_crop_raster.tif', 'input_raster.tif', projWin = bbox)
How can I draw shapes in matplotlib using point/inch dimensions?
I've gone through the patch/transform documentation, so I understand how to work in pixel, data, axes, or figure coordinates, but I cannot figure out how to dimension a rectangle in points or inches.
Ideally I would like to position a rectangle in data coordinates but set its size in points, much like how line markers work.
Here is an example of the plot I am trying to create. I currently position the black and red boxes in (data, axes) coordinates. This works when the graph is a known size, but fails when it gets rescaled, because the boxes shrink even though the text size stays constant.
Ended up figuring it out with help from this question: How do I offset lines in matplotlib by X points
There is no built-in way to specify patch dimensions in points, so you have to manually calculate a ratio of axes or data coordinates to inches/points. This ratio will of course vary with the figure/axes size.
This is accomplished by running the point (1, 1) through the axes transform and seeing where it ends up in pixel coordinates. Pixels can then be converted to inches or points via the figure dpi.
# map the axes-fraction corners (0, 0) and (1, 1) to pixel coordinates
t = axes.transAxes.transform([(0, 0), (1, 1)])
# axes units per point: dpi/72 pixels per point, divided by pixels per axes unit
t = axes.get_figure().get_dpi() / (t[1, 1] - t[0, 1]) / 72

# Height = 18 points
height = 18 * t
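Putting the ratio to use, here is a minimal self-contained sketch (the plotted line is made up) that anchors a rectangle in data coordinates while keeping its height fixed at 18 points:
import matplotlib.pyplot as plt
import matplotlib.transforms as mtransforms
from matplotlib.patches import Rectangle

fig, ax = plt.subplots()
ax.plot([0, 10], [0, 5])
fig.canvas.draw()  # make sure the transforms are up to date

# axes units per point, derived as above
px = ax.transAxes.transform([(0, 0), (1, 1)])
axes_per_point = fig.get_dpi() / (px[1, 1] - px[0, 1]) / 72
height = 18 * axes_per_point  # 18 points, in axes coordinates

# blended transform: x in data coordinates, y in axes coordinates
trans = mtransforms.blended_transform_factory(ax.transData, ax.transAxes)
ax.add_patch(Rectangle((2.0, 0.5), 1.0, height, transform=trans,
                       facecolor='red', edgecolor='black'))
plt.show()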