I am trying desperately to project some geostationary data from a GOES-16 netCDF file onto a different projection. I can get the background map to reproject, but I can't seem to get the data to follow.
I'm not super versed in this yet, but here is what I have thus far:
Reading the data through NetCDF4:
from netCDF4 import Dataset
nc = Dataset('OR_ABI-L1b-RadF-M3C13_G16_s20182831030383_e20182831041161_c20182831041217.nc')
data = nc.variables['Rad'][:]
Here I'm trying to get the geostationary info:
sat_h = nc.variables['goes_imager_projection'].perspective_point_height
X = nc.variables['x'][:] * sat_h
Y = nc.variables['y'][:] * sat_h
# Satellite longitude
sat_lon = nc.variables['goes_imager_projection'].longitude_of_projection_origin
# Satellite sweep
sat_sweep = nc.variables['goes_imager_projection'].sweep_angle_axis
Here I'm taking projection data from the .nc file:
proj_var = nc.variables['goes_imager_projection']
sat_height = proj_var.perspective_point_height
central_lon = proj_var.longitude_of_projection_origin
semi_major = proj_var.semi_major_axis
semi_minor = proj_var.semi_minor_axis
print proj_var
<type 'netCDF4._netCDF4.Variable'>
int32 goes_imager_projection()
long_name: GOES-R ABI fixed grid projection
grid_mapping_name: geostationary
perspective_point_height: 35786023.0
semi_major_axis: 6378137.0
semi_minor_axis: 6356752.31414
inverse_flattening: 298.2572221
latitude_of_projection_origin: 0.0
longitude_of_projection_origin: -75.0
sweep_angle_axis: x
unlimited dimensions:
current shape = ()
filling on, default _FillValue of -2147483647 used
And here is a small snippet of my code that's relevant:
fig = plt.figure(figsize=(30,20))
globe = ccrs.Globe(semimajor_axis=semi_major, semiminor_axis=semi_minor)
proj = ccrs.Geostationary(central_longitude=central_lon,
                          satellite_height=sat_height, globe=globe)
ax = fig.add_subplot(1, 1, 1, projection=proj)
IR_img = ax.imshow(data[:,:], origin='upper', extent=(X.min(), X.max(), Y.min(), Y.max()),
                   cmap=IR_cmap, interpolation='nearest', vmin=162., vmax=330.)
And an image of everyone playing nicely:
Data and map working
When I try and get say a Plate Carree projection I try:
proj = ccrs.PlateCarree(central_longitude=central_lon,globe=globe)
And an image of my failure:
Data and map not working
I've tried messing with the extent in the imshow call, and I've tried adding transform=proj to imshow, but no luck; it just gets hung up and I have to restart the kernel.
Clearly it is a lack of understanding on my part. If anyone can quickly and easily help/explain how to change my projection from geostationary, I would greatly appreciate it.
I'm running archaic python2.
Thanks for looking.
EDIT: The problem seems to be resolved thanks to insight from DopplerShift and ajdawson; I guess I was a little impatient/ignorant of how long a full-disk transformation would take.
It looks like you need to specify the transform keyword to imshow. This keyword tells cartopy what coordinates your data are in, which in this case should be geostationary.
I don't have your dataset so I cannot test this, but the snippet below illustrates the concept. The projection and the transform are independent so you should define both. The value of the transform argument (crs in the example below) is fixed for the data set, but the projection can be anything you like (including the same as crs).
See this example of reprojecting a geostationary image: https://scitools.org.uk/cartopy/docs/v0.16/gallery/geostationary.html#sphx-glr-gallery-geostationary-py. Also see the guide to projection and transform arguments here: https://scitools.org.uk/cartopy/docs/v0.16/tutorials/understanding_transform.html.
globe = ccrs.Globe(semimajor_axis=semi_major, semiminor_axis=semi_minor)
crs = ccrs.Geostationary(central_longitude=central_lon,
                         satellite_height=sat_height, globe=globe)
proj = ccrs.PlateCarree(central_longitude=central_lon, globe=globe)
ax = fig.add_subplot(1, 1, 1, projection=proj)
IR_img = ax.imshow(data[:,:], origin='upper',
                   extent=(X.min(), X.max(), Y.min(), Y.max()),
                   transform=crs,
                   cmap=IR_cmap,
                   interpolation='nearest', vmin=162., vmax=330.)
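As a follow-up sketch (not part of the original answer): adding coastlines makes it easy to confirm that the reprojected data and the map line up, and set_extent lets you zoom to a sub-region if the full disk is slow to render. The lon/lat box below is a placeholder; adjust it to your area of interest.
import cartopy.crs as ccrs
import matplotlib.pyplot as plt

# Placeholder lon/lat box -- adjust to your region of interest.
ax.set_extent([-110, -60, 10, 50], crs=ccrs.PlateCarree())

# Coastlines make it obvious whether the data and map are aligned.
ax.coastlines(resolution='50m', color='cyan')
plt.show()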
I have a series of RGB files in PNG format, as well as the corresponding depth files in TXT format, which can be loaded with np.loadtxt. How could I merge these two files into a point cloud using open3d?
I followed the procedure from obtain point cloud from depth numpy array using open3d - python, but the result is not recognizable to a human.
The examples are shown here:
the source png:
the pcd result:
You can get the source files from this link [Google Drive] to reproduce my result.
By the way, the depth and RGB are not registered.
Thanks.
I had to play a bit with the settings and data, and mainly used the answer from your SO link.
import cv2
import numpy as np
import open3d as o3d
color = o3d.io.read_image("a542c.png")
depth = np.loadtxt("a542d.txt")
vertices = []
for x in range(depth.shape[0]):
    for y in range(depth.shape[1]):
        vertices.append((float(x), float(y), depth[x][y]))
pcd = o3d.geometry.PointCloud()
point_cloud = np.asarray(np.array(vertices))
pcd.points = o3d.utility.Vector3dVector(point_cloud)
pcd.estimate_normals()
pcd = pcd.normalize_normals()
o3d.visualization.draw_geometries([pcd])
However, if you keep the code as provided, the whole scene looks very weird and unfamiliar. That is because your depth file contains values between 0 and almost 2.5 m.
I introduced a cut-off at 500 or 1000 mm and removed all 0s, as suggested in the other answer. Additionally, I flipped the x-axis (float(-x) instead of float(x)) to resemble your photo.
# ...
vertices = []
for x in range(depth.shape[0]):
    for y in range(depth.shape[1]):
        if 0 < depth[x][y] < 500:
            vertices.append((float(-x), float(y), depth[x][y]))
For a good perspective I had to rotate the image manually. open3d probably provides methods to do this automatically (I briefly tried pcd.transform() from your SO link above; it may help if needed).
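For reference, here is a minimal, untested sketch of rotating the cloud programmatically with a 4x4 homogeneous transform instead of dragging it in the viewer. It continues from the pcd built above, and the rotation angle is arbitrary.
import numpy as np

# Rotate the cloud about the x-axis; tweak the angle until the view matches the photo.
angle = np.pi
T = np.eye(4)
T[:3, :3] = [[1, 0, 0],
             [0, np.cos(angle), -np.sin(angle)],
             [0, np.sin(angle), np.cos(angle)]]
pcd.transform(T)
o3d.visualization.draw_geometries([pcd])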
Results
500 mm cut-off and 1000 mm cut-off.
I used laspy instead of open3d because I wanted to give some color to your image:
import imageio
import numpy as np
import matplotlib.pyplot as plt
# first reading the image for RGB values
image = imageio.imread(".../a542c.png")
# loading the depth file
depth = np.loadtxt("/home/shaig93/Documents/internship_FWF/a542d.txt")
# creating fake x, y coordinates with meshgrid
xv, yv = np.meshgrid(np.arange(400), np.arange(640), indexing='ij')
# save_las is a function based on laspy that was provided to me by my supervisor
save_las("fn.laz", image[:400, :, 0].flatten(), np.c_[yv.flatten(), xv.flatten(), depth.flatten()], cmap = plt.cm.magma_r)
and the result is this. As you can see, the objects are visible from the front.
However, from the side they are not easy to distinguish.
This makes me think that your depth file is not that good.
Another idea would be to get rid of the 0 values in your depth file, so that you get a point cloud without the wall-like structure in front. But that still does not solve the depth issue, of course.
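A minimal sketch of that idea (untested, reusing image, depth, xv, and yv from the snippets above): mask out zero depths before building the arrays that get passed on.
import numpy as np

# Keep only pixels with a valid (non-zero) depth reading.
valid = depth.flatten() > 0
points = np.c_[yv.flatten(), xv.flatten(), depth.flatten()][valid]
colors = image[:400, :, 0].flatten()[valid]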
PS: I know this is not a proper answer, but I hope it helps in identifying the problem.
I am having an issue matching up the color table/brightness on CMI01 through CMI06 when creating GOES-16 imagery with MetPy. I've tried using stock color tables and random vmin/vmax values to try to get a match. I've also tried using custom-made color tables, and even tried using things like min_reflectance_factor and max_reflectance_factor as the vmin/vmax values.
Maybe I'm making this way more difficult than it is? Is there something I'm missing? Below are excerpts of code helping to create the current image output that I have:
grayscale = {"colors": [(0,0,0),(0,0,0),(255,255,255),(255,255,255)], "position": [0, 0.0909, 0.74242, 1]}
CMI_C02 = {"name": "C02", "commonName": "Visible Red Band", "grayscale": True, "baseDir": "visRed", "colorMap": grayscale}
dat = data.metpy.parse_cf('CMI_'+singleChannel['name'])
proj = dat.metpy.cartopy_crs
maxConcat = "max_reflectance_factor_"+singleChannel['name']
vmax = data[maxConcat]
sat = ax.pcolormesh(x, y, dat, cmap=make_cmap(singleChannel['colorMap']['colors'], position=singleChannel['colorMap']['position'], bit=True), transform=proj, vmin=0, vmax=vmax)
make_cmap is a handy dandy method I found that helps to create custom color tables. This code is part of a multiprocessing process, so singleChannel is actually CMI_C02.
For reference, the first image is from College of DuPage and the second is my output...
Any help/guidance would be greatly appreciated!
So your problem is, I believe, that there's a non-linear transformation being applied to the data by College of DuPage, in this case a square root (sqrt). This has been applied to GOES imagery in the past, as mentioned in the GOES ABI documentation, and I think that's what CoD is doing.
Here's a script to compare with and without sqrt:
import cartopy.feature as cfeature
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
import metpy
import numpy as np
from siphon.catalog import TDSCatalog
# Trying to find the most recent image from around ~18Z
cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/satellite/goes16'
                 '/GOES16/CONUS/Channel02/current/catalog.xml')
best_time = datetime.utcnow().replace(hour=18, minute=0, second=0, microsecond=0)
if best_time > datetime.utcnow():
    best_time -= timedelta(days=1)
ds = cat.datasets.filter_time_nearest(best_time)
# Open with xarray and pull apart with some help using MetPy
data = ds.remote_access(use_xarray=True)
img_data = data.metpy.parse_cf('Sectorized_CMI')
x = img_data.metpy.x
y = img_data.metpy.y
# Create a two panel figure: one with no enhancement, one using sqrt()
fig = plt.figure(figsize=(10, 15))
for panel, func in enumerate([None, np.sqrt]):
    if func is not None:
        plot_data = func(img_data)
        title = 'Sqrt Enhancement'
    else:
        plot_data = img_data
        title = 'No Enhancement'
    ax = fig.add_subplot(2, 1, panel + 1, projection=img_data.metpy.cartopy_crs)
    ax.imshow(plot_data, extent=(x[0], x[-1], y[-1], y[0]),
              cmap='Greys_r', origin='upper')
    ax.add_feature(cfeature.COASTLINE, edgecolor='cyan')
    ax.add_feature(cfeature.BORDERS, edgecolor='cyan')
    ax.add_feature(cfeature.STATES, edgecolor='cyan')
    ax.set_title(title)
Which results in:
The lower image, with the sqrt transformation applied, seems to match the CoD image pretty well.
After polling some meteorologists, I ended up making a color table that was in between the two images, as the general consensus was that my version was too dark and the standard was too light.
I still used vmax and vmin for pcolormesh() and simplified my grayscale object to just two colors with a slightly darker gray than the standard.
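Roughly, the final setup looked something like the sketch below; the exact dark-gray value is a placeholder, and make_cmap, x, y, dat, proj, and vmax are the names from the question.
# Illustrative two-stop grayscale ramp: slightly darker gray up to white.
grayscale = {"colors": [(45, 45, 45), (255, 255, 255)], "position": [0, 1]}

sat = ax.pcolormesh(x, y, dat,
                    cmap=make_cmap(grayscale['colors'],
                                   position=grayscale['position'], bit=True),
                    transform=proj, vmin=0, vmax=vmax)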
Thanks to all who looked at this.
I am trying to create four gabor patches, very similar to those below.
I don't need them to be identical to the pictures below, but similar.
Despite a bit of tinkering, I have been unable to reproduce these images...
I believe they were created in MATLAB originally. I don't have access to the original MATLAB code.
I have the following code in python (2.7.10):
import numpy as np
from scipy.misc import toimage # One can also use matplotlib*
data = gabor_fn(sigma = ???, theta = 0, Lambda = ???, psi = ???, gamma = ???)
toimage(data).show()
*graphing a numpy array with matplotlib
gabor_fn, from here, is defined below:
def gabor_fn(sigma, theta, Lambda, psi, gamma):
    sigma_x = sigma
    sigma_y = float(sigma) / gamma
    # Bounding box
    nstds = 3
    xmax = max(abs(nstds * sigma_x * np.cos(theta)), abs(nstds * sigma_y * np.sin(theta)))
    xmax = np.ceil(max(1, xmax))
    ymax = max(abs(nstds * sigma_x * np.sin(theta)), abs(nstds * sigma_y * np.cos(theta)))
    ymax = np.ceil(max(1, ymax))
    xmin = -xmax
    ymin = -ymax
    (x, y) = np.meshgrid(np.arange(xmin, xmax + 1), np.arange(ymin, ymax + 1))
    (y, x) = np.meshgrid(np.arange(ymin, ymax + 1), np.arange(xmin, xmax + 1))
    # Rotation
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    y_theta = -x * np.sin(theta) + y * np.cos(theta)
    gb = np.exp(-.5 * (x_theta**2 / sigma_x**2 + y_theta**2 / sigma_y**2)) * np.cos(2 * np.pi / Lambda * x_theta + psi)
    return gb
As you may be able to tell, the only difference (I believe) between the images is contrast. So gabor_fn would likely need to be altered to allow for this (unless I misunderstand one of the params)... I'm just not sure how.
UPDATE:
from math import pi
from matplotlib import pyplot as plt
data = gabor_fn(sigma=5.,theta=pi/2.,Lambda=12.5,psi=90,gamma=1.)
unit = 1.  # From left to right, unit was set to 1, 3, 7 and 9.
bound = 0.0009 / unit
fig = plt.imshow(data,
                 cmap='gray',
                 interpolation='none',
                 vmin=-bound,
                 vmax=bound)
plt.axis('off')
The problem you are having is a visualization problem (although I think you are choosing too-large parameters).
By default, matplotlib and scipy's toimage use bilinear (or trilinear) interpolation, depending on your matplotlib configuration. That's why your image looks so smooth: your pixel values are being interpolated, and you are not displaying the raw kernel you have just calculated.
Try using matplotlib with no interpolation:
from matplotlib import pyplot as plt
plt.imshow(data, 'gray', interpolation='none')
plt.show()
For the following parameters:
data = gabor_fn(sigma=5.,theta=pi/2.,Lambda=25.,psi=90,gamma=1.)
You get this output:
If you reduce Lambda to 15, you get something like this:
Additionally, the sigma you choose changes the strength of the smoothing; adding the parameters vmin=-1 and vmax=1 to imshow (similar to what @kazemakase suggested) will give you the desired contrast.
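For example, a minimal variation of the snippet above with fixed display limits:
plt.imshow(data, 'gray', interpolation='none', vmin=-1, vmax=1)
plt.show()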
Check this guide for sensible values for (and ways to use) Gabor kernels:
http://scikit-image.org/docs/dev/auto_examples/plot_gabor.html
It seems like toimage scales the input data so that the min/max values are mapped to black/white.
I do not know what amplitudes to reasonably expect from gabor patches, but you should try something like this:
toimage(data, cmin=-1, cmax=1).show()
This tells toimage what range your data is in. You can try to play around with cmin and cmax, but make sure they are symmetric (i.e. cmin=-x, cmax=x) so that a value of 0 maps to grey.
I'd like to access the polarization from a FITS file and plot it using healpy. I see that HEALPix has a built-in option called POLARIZATION, but I don't see the same option for healpy.
How can I plot this data?
Thanks!
Edit:
@AndreaZonca
Here is the code I am currently using to plot temperature:
t = "%s Field - WMAP at W Band (90 GHz) Foreground Reduced iquma"%(name)
map = hp.read_map(band_iquma)
hp.cartview(map, title=t, coord=['G', 'C'], max=.25, min=-.25, unit='mk',
            lonra=range[0], latra=range[1])
hp.graticule()
plt.show()
The website where I got this file says that polarization data is included in the FITS file.
And this website says that HEALPix can plot polarization if given the keyword POLARIZATION and a number 1, 2, or 3.
http://healpix.jpl.nasa.gov/html/idlnode10.htm
The keyword you are looking for is field in read_map; if you set it to (0, 1, 2) you will read both temperature and polarization.
Then you can plot Q by slicing the map with [1] and U with [2].
See:
http://healpy.readthedocs.org/en/latest/generated/healpy.fitsfunc.read_map.html
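A minimal sketch along those lines (untested), reusing band_iquma and t from your snippet and assuming I, Q, and U really are the first three columns of the file:
import healpy as hp
import matplotlib.pyplot as plt

# Read temperature (I) and polarization (Q, U) in one call.
I, Q, U = hp.read_map(band_iquma, field=(0, 1, 2))

# Plot the Q Stokes parameter; swap in U for the other component.
hp.cartview(Q, title=t + ' (Q)', coord=['G', 'C'], unit='mK')
hp.graticule()
plt.show()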
I've been searching the documentation like crazy and cannot find an answer for this one.
I'm generating FITS images in python and need to assign WCS coordinates to the image. I know there are lots of ways to do this by matching up point sources with a known catalog, but in this case I'm generating a dust map, so point source matching won't work (as far as I can tell).
So the image is a 2D Numpy array of shape (240, 240). It's written like so (the x and y coordinate assignments are a little weird, but it works somehow):
H, xedges, yedges = np.histogram2d(glat, glon, bins=[ybins, xbins], weights=Av)
count, x, y = np.histogram2d(glat, glon, bins=[ybins, xbins])
H/=count
hdu = pyfits.PrimaryHDU(H)
hdu.writeto(filename)
>>> print H.shape
(240,240)
That all works fine on its own. For assigning Galactic coordinates, it seems like all you'd need to do is something like:
glon_coords = np.linspace(np.amin(glon), np.amax(glon), 240)
glat_coords = np.linspace(np.amin(glat), np.amax(glat), 240)
But I don't understand how the FITS image stores these coordinates, so I don't know how to write them. I've tried assigning them in SAO DS9, too, but no luck. I just need a straightforward way of assigning these coordinates to the image.
Thanks for any help you can provide.
I would advise you to start using astropy. For the purposes of your project the astropy.wcs package can help you write a FITS WCS header, and the astropy.io.fits API is basically identical to the pyfits one you are using now. Moreover the help pages are excellent, and all I'm about to do is translate their WCS building page to match your example.
To your question: FITS does not "tag" each pixel with a coordinate. I suppose it is possible to create a pixel lookup table or something like that, but the actual WCS is an algorithmic translation of X,Y pixels to astrometric coordinates (in your case "Galactic"). A nice page is here.
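For intuition (not from the original answer): in the simplest linear, non-rotated case, that algorithmic translation is just an offset plus a scale per axis, driven by the CRVAL, CRPIX, and CDELT header keywords. The function below is illustrative only and ignores the projection's non-linearity away from the reference pixel.
def pixel_to_world(px, py, crval, crpix, cdelt):
    # world = CRVAL + (pixel - CRPIX) * CDELT, applied independently to each axis.
    # Note: FITS CRPIX is 1-based, so adjust if px/py are 0-based array indices.
    lon = crval[0] + (px - crpix[0]) * cdelt[0]
    lat = crval[1] + (py - crpix[1]) * cdelt[1]
    return lon, lat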
The example I would point you to is here:
http://docs.astropy.org/en/latest/wcs/index.html#building-a-wcs-structure-programmatically
And here is my untested pseudocode for your project:
# untested code
from __future__ import division # confidence high
# numpy and astropy
import numpy as np
from astropy.io import fits as pyfits
from astropy import wcs
# your code
H, xedges, yedges = np.histogram2d(glat, glon, bins=[ybins, xbins], weights=Av)
count, x, y = np.histogram2d(glat, glon, bins=[ybins, xbins])
H/=count
# characterize your data in terms of a linear translation from XY pixels to
# Galactic longitude, latitude.
# lambda function: given min, max, n_pixels, return (spacing, middle value).
linwcs = lambda lo, hi, n: ((hi - lo) / n, (hi + lo) / 2)
cdeltaX, crvalX = linwcs(np.amin(glon), np.amax(glon), H.shape[1])
cdeltaY, crvalY = linwcs(np.amin(glat), np.amax(glat), H.shape[0])
# wcs code ripped from
# http://docs.astropy.org/en/latest/wcs/index.html
w = wcs.WCS(naxis=2)
# what is the center pixel of the XY grid.
w.wcs.crpix = [H.shape[1] / 2, H.shape[0] / 2]
# what is the galactic coordinate of that pixel.
w.wcs.crval = [crvalX, crvalY]
# what is the pixel scale in lon, lat.
w.wcs.cdelt = np.array([cdeltaX, cdeltaY])
# you would have to determine if this is in fact a tangential projection.
w.wcs.ctype = ["GLON-TAN", "GLAT-TAN"]
# write the HDU object WITH THE HEADER
header = w.to_header()
hdu = pyfits.PrimaryHDU(H, header=header)
hdu.writeto(filename)
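As a quick sanity check (again untested; filename is the same variable as above), you can read the header back and let astropy rebuild the WCS to confirm the pixel-to-world mapping looks reasonable:
from astropy.io import fits
from astropy import wcs

# Rebuild the WCS from the header that was just written.
w_check = wcs.WCS(fits.getheader(filename))

# Convert a few pixel positions to Galactic lon/lat (origin=0 for 0-based pixel indices).
print(w_check.wcs_pix2world([[0, 0], [120, 120], [239, 239]], 0))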