I am having an issue matching up the color table/brightness on CMI01 through CMI06 when creating GOES-16 imagery with MetPy. I've tried stock color tables with various vmin/vmax values to try to get a match. I've also tried custom-made color tables, and even tried feeding things like min_reflectance_factor and max_reflectance_factor in as the vmin/vmax values.
Maybe I'm making this more difficult than it is? Is there something I'm missing? Below are excerpts of the code that produces my current output:
grayscale = {"colors": [(0,0,0),(0,0,0),(255,255,255),(255,255,255)], "position": [0, 0.0909, 0.74242, 1]}
CMI_C02 = {"name": "C02", "commonName": "Visible Red Band", "grayscale": True, "baseDir": "visRed", "colorMap": grayscale}
dat = data.metpy.parse_cf('CMI_'+singleChannel['name'])
proj = dat.metpy.cartopy_crs
maxConcat = "max_reflectance_factor_"+singleChannel['name']
vmax = data[maxConcat]
sat = ax.pcolormesh(x, y, dat,
                    cmap=make_cmap(singleChannel['colorMap']['colors'],
                                   position=singleChannel['colorMap']['position'],
                                   bit=True),
                    transform=proj, vmin=0, vmax=vmax)
make_cmap is a handy-dandy helper I found that builds custom color tables. This code runs inside a multiprocessing worker, so singleChannel here is actually CMI_C02.
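For context, a make_cmap helper along these lines typically just wraps matplotlib's LinearSegmentedColormap. This is not my exact helper, just a rough sketch assuming bit=True means the colors are given as 0-255 channel values:
import numpy as np
from matplotlib.colors import LinearSegmentedColormap

def make_cmap(colors, position=None, bit=False):
    """Rough sketch of a make_cmap-style helper (hypothetical, not the original)."""
    colors = np.array(colors, dtype=float)
    if bit:
        colors = colors / 255.0          # assume 0-255 channel values
    if position is None:
        position = np.linspace(0, 1, len(colors))
    cdict = {'red': [], 'green': [], 'blue': []}
    for pos, (r, g, b) in zip(position, colors):
        cdict['red'].append((pos, r, r))
        cdict['green'].append((pos, g, g))
        cdict['blue'].append((pos, b, b))
    return LinearSegmentedColormap('custom_cmap', cdict, 256)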
For reference, the first image is from College of DuPage and the second is my output...
Any help/guidance would be greatly appreciated!
Your problem is, I believe, that a non-linear transformation is being applied to the data by College of DuPage, in this case a square root (sqrt). This enhancement has been applied to GOES imagery in the past, as mentioned in the GOES ABI documentation, and I think that's what CoD is doing here.
Here's a script to compare with and without sqrt:
import cartopy.feature as cfeature
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
import metpy
import numpy as np
from siphon.catalog import TDSCatalog
# Try to find the most recent image from around 18Z
cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/satellite/goes16'
                 '/GOES16/CONUS/Channel02/current/catalog.xml')
best_time = datetime.utcnow().replace(hour=18, minute=0, second=0, microsecond=0)
if best_time > datetime.utcnow():
best_time -= timedelta(days=1)
ds = cat.datasets.filter_time_nearest(best_time)
# Open with xarray and pull apart with some help using MetPy
data = ds.remote_access(use_xarray=True)
img_data = data.metpy.parse_cf('Sectorized_CMI')
x = img_data.metpy.x
y = img_data.metpy.y
# Create a two panel figure: one with no enhancement, one using sqrt()
fig = plt.figure(figsize=(10, 15))
for panel, func in enumerate([None, np.sqrt]):
    if func is not None:
        plot_data = func(img_data)
        title = 'Sqrt Enhancement'
    else:
        plot_data = img_data
        title = 'No Enhancement'

    ax = fig.add_subplot(2, 1, panel + 1, projection=img_data.metpy.cartopy_crs)
    ax.imshow(plot_data, extent=(x[0], x[-1], y[-1], y[0]),
              cmap='Greys_r', origin='upper')
    ax.add_feature(cfeature.COASTLINE, edgecolor='cyan')
    ax.add_feature(cfeature.BORDERS, edgecolor='cyan')
    ax.add_feature(cfeature.STATES, edgecolor='cyan')
    ax.set_title(title)
Which results in:
The lower image, with the sqrt transformation applied, seems to match the CoD image quite well.
After polling some meteorologists, I ended up making a color table in between the two images, as the general consensus was that my version was too dark and the standard was too light.
I still pass vmin and vmax to pcolormesh(), and I simplified my grayscale object to just two colors, using a slightly darker gray than the standard.
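For illustration, a minimal sketch of that simplified setup (the exact gray endpoint and positions are placeholders, not my production values; make_cmap, data, dat, x, y, proj and singleChannel are as in the question):
# Hypothetical simplified two-color ramp; the endpoint is a placeholder chosen
# to sit "in between" the plain and sqrt-enhanced looks.
grayscale = {"colors": [(0, 0, 0), (230, 230, 230)], "position": [0.0, 1.0]}

vmax = data["max_reflectance_factor_" + singleChannel['name']]
sat = ax.pcolormesh(x, y, dat,
                    cmap=make_cmap(grayscale['colors'],
                                   position=grayscale['position'], bit=True),
                    transform=proj, vmin=0, vmax=vmax)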
Thanks to all who looked at this.
I followed this excellent guide by Adam Symington and successfully created the following topographic map of Sabah (a state in Malaysia, which is a Southeast Asian nation). The awkward blob of black in the upper left corner is my attempt to plot certain coordinates on the map.
I would like to improve this diagram in the following ways:
EDIT: I have figured item (1) out and posted the solution below. (2) and (3) pending.
1. [SOLVED] The sch dataframe contains the coordinates of all schools in the state. I would like to plot these on the map. I suspect it is currently going wonky because the axes are not "geo-axes" (meaning, not using lat/lon scales) - you can confirm this by setting ax.axis('on'). How do I get around this?
2. I'd like to set the portion outside the actual territory to white. Calling ax.set_facecolor('white') isn't working. I know that the specific thing setting it to grey is the ax.imshow(hillshade, cmap='Greys', alpha=0.3) line (because changing the cmap changes the background); I just don't know how to alter it while keeping the color within the map as grey.
3. If possible, I'd like the outline of the map to be black, but this is just pedantic.
All code to reproduce the diagram above is below. The downloadSrc function gets and saves the dependencies (a 5.7MB binary file containing the topographic data and a 0.05MB csv containing the coordinates of points to plot) in a local folder; you need only run that once.
import rasterio
from rasterio import mask as msk
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
from matplotlib.colors import ListedColormap
import numpy as np
import pandas as pd
import geopandas as gpd
import earthpy.spatial as es
from shapely.geometry import Point
def downloadSrc(dl=1):
    if dl == 1:
        import os
        os.mkdir('sabah')
        import requests
        r = requests.get('https://raw.githubusercontent.com/Thevesh/Display/master/sabah_tiff.npy')
        with open('sabah/sabah_topog.npy', 'wb') as f:
            f.write(r.content)
        df = pd.read_csv('https://raw.githubusercontent.com/Thevesh/Display/master/schools.csv')
        df.to_csv('sabah/sabah_schools.csv')
# Set dl = 0 after first run; the files will be in your current working directory + /sabah
downloadSrc(dl=1)
# Load topography of Sabah, pre-saved from clipped tiff file (as per Adam Symington guide)
value_range = 4049
sabah_topography = np.load('sabah/sabah_topog.npy')
# Load coordinates of schools in Sabah
crs={'init':'epsg:4326'}
sch = pd.read_csv('sabah/sabah_schools.csv',usecols=['lat','lon'])
geometry = [Point(xy) for xy in zip(sch.lon, sch.lat)]
schools = gpd.GeoDataFrame(sch, crs=crs, geometry=geometry)
# Replicated directly from guide, with own modifications only to colours
sabah_colormap = LinearSegmentedColormap.from_list('sabah', ['lightgray', '#e6757b', '#CD212A', '#CD212A'], N=value_range)
background_color = np.array([1,1,1,1])
newcolors = sabah_colormap(np.linspace(0, 1, value_range))
newcolors = np.vstack((newcolors, background_color))
sabah_colormap = ListedColormap(newcolors)
hillshade = es.hillshade(sabah_topography[0], azimuth=180, altitude=1)
# Plot
plt.rcParams["figure.figsize"] = [5,5]
plt.rcParams["figure.autolayout"] = True
fig, ax = plt.subplots()
ax.imshow(sabah_topography[0], cmap=sabah_colormap)
ax.imshow(hillshade, cmap='Greys', alpha=0.3)
schools.plot(color='black', marker='x', markersize=10,ax=ax)
ax.axis('off')
plt.show()
As it turns out, I had given myself the hint for answering point (1), and I also managed to solve (2).
For (1), the points simply needed to be rescaled, and we get this:
I did so by getting the max/min points of the map from the underlying shapefile, and then scaling them based on the max/min points of the axes, as follows:
# Get limit points
l = gpd.read_file('param_geo/sabah.shp')['geometry'].bounds
lat_min,lat_max,lon_min,lon_max = l['miny'].iloc[0], l['maxy'].iloc[0], l['minx'].iloc[0], l['maxx'].iloc[0]
xmin,xmax = ax.get_xlim()
ymin,ymax = ax.get_ylim()
# Load coordinates of schools in Sabah and rescale
crs={'init':'epsg:4326'}
sch = pd.read_csv('sabah/sabah_schools.csv',usecols=['lat','lon'])
sch.lat = ymin + (sch.lat - lat_min)/(lat_max - lat_min) * (ymax - ymin)
sch.lon = xmin + (sch.lon - lon_min)/(lon_max - lon_min) * (xmax - xmin)
For (2), the grey background is coming from the fact that the hillshade array has values outside the map area which are being mapped to grey. To remove the grey, we need to nullify these values.
In this specific case, we can leverage the fact that we know the top-right corner of this map is "outside" the map (every country in the world will have at least one corner for which this is true, because no country is a perfect square):
top_right = hillshade[0,-1]
hillshade[hillshade == top_right] = np.nan
And voila, a beautiful white background:
For (3), I suspect it requires us to rescale the Polygon from the shapefile in a manner similar to how we rescaled the coordinates.
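A rough, untested sketch of that idea, assuming the same shapefile and the limit/axis variables computed above (gpd and np as imported in the question code):
# Hypothetical: rescale the polygon's exterior ring into the image's pixel
# coordinates (the same mapping used for the school points) and draw it in black.
border = gpd.read_file('param_geo/sabah.shp')['geometry'].iloc[0]
for poly in getattr(border, 'geoms', [border]):  # handle MultiPolygon or Polygon
    lon, lat = np.array(poly.exterior.coords).T
    px = xmin + (lon - lon_min) / (lon_max - lon_min) * (xmax - xmin)
    py = ymin + (lat - lat_min) / (lat_max - lat_min) * (ymax - ymin)
    ax.plot(px, py, color='black', linewidth=0.8)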
I am trying desperately to reproject some geostationary data from a GOES-16 netCDF file to a different projection. I can get the background map to reproject, but I can't seem to get the data to follow.
I'm not super versed in this yet, but here is what I have so far:
Reading the data through NetCDF4:
from netCDF4 import Dataset
nc = Dataset('OR_ABI-L1b-RadF-M3C13_G16_s20182831030383_e20182831041161_c20182831041217.nc')
data = nc.variables['Rad'][:]
Here I'm trying to get the geostationary info:
sat_h = nc.variables['goes_imager_projection'].perspective_point_height
X = nc.variables['x'][:] * sat_h
Y = nc.variables['y'][:] * sat_h
# Satellite longitude
sat_lon = nc.variables['goes_imager_projection'].longitude_of_projection_origin
# Satellite sweep
sat_sweep = nc.variables['goes_imager_projection'].sweep_angle_axis
Here I'm taking projection data from the .nc file:
proj_var = nc.variables['goes_imager_projection']
sat_height = proj_var.perspective_point_height
central_lon = proj_var.longitude_of_projection_origin
semi_major = proj_var.semi_major_axis
semi_minor = proj_var.semi_minor_axis
print proj_var
<type 'netCDF4._netCDF4.Variable'>
int32 goes_imager_projection()
long_name: GOES-R ABI fixed grid projection
grid_mapping_name: geostationary
perspective_point_height: 35786023.0
semi_major_axis: 6378137.0
semi_minor_axis: 6356752.31414
inverse_flattening: 298.2572221
latitude_of_projection_origin: 0.0
longitude_of_projection_origin: -75.0
sweep_angle_axis: x
unlimited dimensions:
current shape = ()
filling on, default _FillValue of -2147483647 used
And here is a small snippet of my code that's relevant:
fig = plt.figure(figsize=(30,20))
globe = ccrs.Globe(semimajor_axis=semi_major, semiminor_axis=semi_minor)
proj = ccrs.Geostationary(central_longitude=central_lon,
                          satellite_height=sat_height, globe=globe)
ax = fig.add_subplot(1, 1, 1, projection=proj)
IR_img = ax.imshow(data[:, :], origin='upper',
                   extent=(X.min(), X.max(), Y.min(), Y.max()),
                   cmap=IR_cmap, interpolation='nearest', vmin=162., vmax=330.)
And an image of everyone playing nicely:
Data and map working
When I try to switch to, say, a Plate Carree projection, I use:
proj = ccrs.PlateCarree(central_longitude=central_lon,globe=globe)
And an image of my failure:
Data and map not working
I've tried messing with the extent in the imshow call, and I've tried adding transform=proj to imshow, with no luck; it just hangs and I have to restart the kernel.
Clearly this is a lack of understanding on my part. If anyone can help explain how to change my projection from geostationary, I would greatly appreciate it.
I'm running archaic python2.
Thanks for looking.
EDIT: The problem seems to be resolved thanks to insight from DopplerShift and ajdawson; I guess I was a little impatient/unaware of how long a full-disk transformation would take.
It looks like you need to specify the transform keyword to imshow. This keyword tells cartopy what coordinates your data are in, which in this case should be geostationary.
I don't have your dataset so I cannot test this, but the snippet below illustrates the concept. The projection and the transform are independent so you should define both. The value of the transform argument (crs in the example below) is fixed for the data set, but the projection can be anything you like (including the same as crs).
See this example of reprojecting a geostationary image: https://scitools.org.uk/cartopy/docs/v0.16/gallery/geostationary.html#sphx-glr-gallery-geostationary-py. Also see the guide to projection and transform arguments here: https://scitools.org.uk/cartopy/docs/v0.16/tutorials/understanding_transform.html.
globe = ccrs.Globe(semimajor_axis=semi_major, semiminor_axis=semi_minor)
crs = ccrs.Geostationary(central_longitude=central_lon,
                         satellite_height=sat_height, globe=globe)
proj = ccrs.PlateCarree(central_longitude=central_lon, globe=globe)
ax = fig.add_subplot(1, 1, 1, projection=proj)
IR_img = ax.imshow(data[:, :], origin='upper',
                   extent=(X.min(), X.max(), Y.min(), Y.max()),
                   transform=crs,
                   cmap=IR_cmap,
                   interpolation='nearest', vmin=162., vmax=330.)
I'm using statsmodels to make OLS estimates. The results can be studied in the console using print(results.summary()). I'd like to store the very same table as a .png file. Below is a snippet with a reproducible example.
import pandas as pd
import numpy as np
import matplotlib.dates as mdates
import statsmodels.api as sm
# Dataframe with some random numbers
np.random.seed(123)
rows = 10
df = pd.DataFrame(np.random.randint(90,110,size=(rows, 2)), columns=list('AB'))
datelist = pd.date_range('2017-01-01', periods=rows).tolist()  # pd.datetime is deprecated/removed in newer pandas
df['dates'] = datelist
df = df.set_index(['dates'])
df.index = pd.to_datetime(df.index)
print(df)
# OLS estimates using statsmodels.api
x = df['A']
y = df['B']
model = sm.OLS(y,sm.add_constant(x)).fit()
# Output
print(model.summary())
I've made some naive attempts using suggestions here, but I suspect I'm way off target:
os.chdir('C:/images')
sys.stdout = open("model.png","w")
print(model.summary())
sys.stdout.close()
So far this only raises a very long error message.
Thank you for any suggestions!
This is a pretty unusual task, and your approach is a bit wild: you are trying to combine a string (which has no positions in any metric space) with an image (which is based on absolute positions, at least for pixel-based formats like PNG, JPEG and co.).
No matter what you do, you need some text-rendering engine!
I tried to use Pillow, but the results are ugly, probably because its text rendering is quite limited and post-processing anti-aliasing doesn't save much. But maybe I did something wrong.
from PIL import Image, ImageDraw, ImageFont
image = Image.new('RGB', (800, 400))
draw = ImageDraw.Draw(image)
font = ImageFont.truetype("arial.ttf", 16)
draw.text((0, 0), str(model.summary()), font=font)
image = image.convert('1') # bw
image = image.resize((600, 300), Image.ANTIALIAS)
image.save('output.png')
Since you're using statsmodels, I assume you already have matplotlib, which can be used too. Here is an approach that is quite okay, although not perfect at first (some line shifts; I don't know why; edit: the OP managed to repair these by using a monospace font):
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(12, 7))
#plt.text(0.01, 0.05, str(model.summary()), {'fontsize': 12}) old approach
plt.text(0.01, 0.05, str(model.summary()), {'fontsize': 10}, fontproperties = 'monospace') # approach improved by OP -> monospace!
plt.axis('off')
plt.tight_layout()
plt.savefig('output.png')
Output:
Edit: The OP managed to improve the matplotlib approach by using a monospace font! I incorporated that here, and it's reflected in the output image.
Take this as a demo and research Python's text-rendering options. Maybe the matplotlib approach can be improved further, or you may need something like pycairo. Some SO discussion.
Remark: On my system your code does give those warnings!
Edit: It seems you can ask statsmodels for a LaTeX representation of the summary. I recommend using that: write it to a file and use subprocess to call pdflatex or something similar (here is a similar approach). matplotlib can render LaTeX too (I won't test it, as I'm currently on Windows), but in that case we again need to tune the text-to-figure ratio somehow (compared to a full LaTeX document in, say, A5 format).
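A minimal sketch of that LaTeX route (assumes pdflatex is installed and on the PATH, and that model is the fitted results object from the question; converting the resulting PDF to PNG is left out):
import subprocess

latex_table = model.summary().as_latex()  # statsmodels can emit the summary as LaTeX

with open('model.tex', 'w') as f:
    f.write('\\documentclass{article}\n'
            '\\pagestyle{empty}\n'
            '\\begin{document}\n'
            + latex_table +
            '\n\\end{document}\n')

# Compile to PDF; something like pdftoppm or ImageMagick can then rasterize it to PNG.
subprocess.call(['pdflatex', '-interaction=nonstopmode', 'model.tex'])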
I am trying to create four gabor patches, very similar to those below.
I don't need them to be identical to the pictures below, but similar.
Despite a bit of tinkering, I have been unable to reproduce these images...
I believe they were created in MATLAB originally. I don't have access to the original MATLAB code.
I have the following code in python (2.7.10):
import numpy as np
from scipy.misc import toimage # One can also use matplotlib*
data = gabor_fn(sigma = ???, theta = 0, Lambda = ???, psi = ???, gamma = ???)
toimage(data).show()
*graphing a numpy array with matplotlib
gabor_fn, from here, is defined below:
import numpy  # the helper uses the full module name rather than np

def gabor_fn(sigma, theta, Lambda, psi, gamma):
    sigma_x = sigma
    sigma_y = float(sigma) / gamma

    # Bounding box
    nstds = 3
    xmax = max(abs(nstds * sigma_x * numpy.cos(theta)), abs(nstds * sigma_y * numpy.sin(theta)))
    xmax = numpy.ceil(max(1, xmax))
    ymax = max(abs(nstds * sigma_x * numpy.sin(theta)), abs(nstds * sigma_y * numpy.cos(theta)))
    ymax = numpy.ceil(max(1, ymax))
    xmin = -xmax
    ymin = -ymax
    # Note: the second meshgrid call overrides the first (kept as in the original source)
    (x, y) = numpy.meshgrid(numpy.arange(xmin, xmax + 1), numpy.arange(ymin, ymax + 1))
    (y, x) = numpy.meshgrid(numpy.arange(ymin, ymax + 1), numpy.arange(xmin, xmax + 1))

    # Rotation
    x_theta = x * numpy.cos(theta) + y * numpy.sin(theta)
    y_theta = -x * numpy.sin(theta) + y * numpy.cos(theta)

    # Gaussian envelope modulated by a cosine carrier
    gb = (numpy.exp(-.5 * (x_theta ** 2 / sigma_x ** 2 + y_theta ** 2 / sigma_y ** 2))
          * numpy.cos(2 * numpy.pi / Lambda * x_theta + psi))
    return gb
As you may be able to tell, the only difference (I believe) between the images is contrast, so gabor_fn would likely need to be altered to allow for this (unless I misunderstand one of the params)... I'm just not sure how.
UPDATE:
from math import pi
from matplotlib import pyplot as plt
data = gabor_fn(sigma=5.,theta=pi/2.,Lambda=12.5,psi=90,gamma=1.)
unit = #From left to right, unit was set to 1, 3, 7 and 9.
bound = 0.0009/unit
fig = plt.imshow(data,
                 cmap='gray',
                 interpolation='none',
                 vmin=-bound,
                 vmax=bound)
plt.axis('off')
The problem you are having is a visualization problem (although I also think you are choosing parameters that are too large).
By default, matplotlib and scipy's toimage use bilinear (or trilinear) interpolation, depending on your matplotlib configuration. That's why your image looks so smooth: your pixel values are being interpolated, and you are not displaying the raw kernel you just calculated.
Try using matplotlib with no interpolation:
from matplotlib import pyplot as plt
plt.imshow(data, 'gray', interpolation='none')
plt.show()
For the following parameters:
data = gabor_fn(sigma=5.,theta=pi/2.,Lambda=25.,psi=90,gamma=1.)
You get this output:
If you reduce lamda to 15, you get something like this:
Additionally, the sigma you choose changes the strength of the smoothing. Adding the parameters vmin=-1 and vmax=1 to imshow (similar to what @kazemakase suggested) will give you the desired contrast.
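For example (using the data returned by the gabor_fn call above):
plt.imshow(data, 'gray', interpolation='none', vmin=-1, vmax=1)
plt.axis('off')
plt.show()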
Check this guide for sensible values for (and ways to use) Gabor kernels:
http://scikit-image.org/docs/dev/auto_examples/plot_gabor.html
It seems like toimage scales the input data so that the min/max values are mapped to black/white.
I do not know what amplitudes to reasonably expect from gabor patches, but you should try something like this:
toimage(data, cmin=-1, cmax=1).show()
This tells toimage what range your data is in. You can try to play around with cmin and cmax, but make sure they are symmetric (i.e. cmin=-x, cmax=x) so that a value of 0 maps to grey.
I'm working on a project using NumPy and SciPy, and I need to fill in NaN values. Currently I use scipy.interpolate.Rbf, but it keeps causing Python to crash so severely that try/except won't even save it. After running it a few times, it seems to keep failing in cases where there is data in the middle surrounded by all NaNs, like an island. Is there a better solution that won't keep crashing?
By the way, I need to extrapolate a LOT of data, sometimes as much as half the image (70x70, greyscale), but it doesn't need to be perfect. It's part of an image-stitching program, so as long as the result is similar to the actual data, it will work. I've tried nearest neighbor to fill in the NaNs, but the results are too different.
EDIT:
This is the image it always seems to fail on. Isolating this image allowed it to be processed ONCE before crashing.
I'm using at least NumPy 1.8.0 and SciPy 0.13.2.
You can use SciPy's LinearNDInterpolator. If all your images are the same size, the grid coordinates can be pre-computed and re-used (see the sketch after the example below).
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
x = np.linspace(0, 1, 500)
y = x[:, None]
image = x + y
# Destroy some values
mask = np.random.random(image.shape) > 0.7
image[mask] = np.nan
valid_mask = ~np.isnan(image)
coords = np.array(np.nonzero(valid_mask)).T
values = image[valid_mask]
it = interpolate.LinearNDInterpolator(coords, values, fill_value=0)
filled = it(list(np.ndindex(image.shape))).reshape(image.shape)
f, (ax0, ax1) = plt.subplots(1, 2)
ax0.imshow(image, cmap='gray', interpolation='nearest')
ax0.set_title('Input image')
ax1.imshow(filled, cmap='gray', interpolation='nearest')
ax1.set_title('Interpolated data')
plt.show()
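To illustrate the re-use idea mentioned above, a small sketch (the helper name is hypothetical; it assumes a stack of images that all share image.shape):
# Pre-compute the query grid once; it is the same for every image of this shape.
query_coords = list(np.ndindex(image.shape))

def fill_nans(img, query_coords=query_coords):
    # Fit an interpolator on the valid pixels and evaluate it on the full grid.
    valid = ~np.isnan(img)
    it = interpolate.LinearNDInterpolator(np.array(np.nonzero(valid)).T,
                                          img[valid], fill_value=0)
    return it(query_coords).reshape(img.shape)

# e.g. filled_images = [fill_nans(im) for im in images]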
This proved sufficient for my needs. It is actually quite fast and produces reasonable results:
import numpy as np
import scipy.signal

ipn_kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])  # kernel for inpaint_nans

def inpaint_nans(im):
    """Iteratively replace NaNs with the mean of their valid 8-neighbours."""
    nans = np.isnan(im)
    while np.sum(nans) > 0:
        im[nans] = 0
        # Count valid neighbours and sum neighbour values via convolution
        vNeighbors = scipy.signal.convolve2d(~nans, ipn_kernel, mode='same', boundary='symm')
        im2 = scipy.signal.convolve2d(im, ipn_kernel, mode='same', boundary='symm')
        im2[vNeighbors > 0] = im2[vNeighbors > 0] / vNeighbors[vNeighbors > 0]
        im2[vNeighbors == 0] = np.nan
        # Keep the original values where data was already valid
        im2[~nans] = im[~nans]
        im = im2
        nans = np.isnan(im)
    return im