I am trying to use global wateruse data from the following website
https://zenodo.org/record/897933#.Yj4vbufMKUl.
I managed to open the zipped file, domestic water use.7z, and noticed that both files, cons_dom.nc and withd_dom.nc, are in .nc format.
I am fairly new to these types of files, and I tried converting the data to .tif using the following code. However, I keep getting a wrong result: the data should be a coarse cover of the world, but all I see is a long rectangle in the middle of the screen when I open the .tif in ArcGIS Pro.
Also, I am sorry for not sharing an image, but I do not know how to plot the tifs directly in Python (a minimal plotting sketch follows the code below).
from osgeo import gdal
from osgeo import osr
import glob
import os

NCfilenames = sorted(glob.glob("*.NC"))
for ncfile in NCfilenames:
    basencfilename = os.path.basename(ncfile)
    domestic_name = basencfilename
    print(domestic_name)
    CleanBaseStr = os.path.splitext(basencfilename)[0]
    OutFileName = CleanBaseStr + "_domestic_wateruse.tif"
    print(OutFileName)
    NDVI_ds = gdal.Open(domestic_name, gdal.GA_ReadOnly)
    width = NDVI_ds.RasterXSize
    height = NDVI_ds.RasterYSize
    NDVI_band = NDVI_ds.GetRasterBand(1)
    NDVI_arr = NDVI_band.ReadAsArray()
    gt = NDVI_ds.GetGeoTransform()
    wkt = NDVI_ds.GetProjection()
    driver = gdal.GetDriverByName("GTiff")
    out_ds = driver.Create(OutFileName, NDVI_band.XSize, NDVI_band.YSize, 1, gdal.GDT_Int16)
    # writing output raster
    out_ds.GetRasterBand(1).WriteArray(NDVI_arr)
    out_ds.GetRasterBand(1).SetNoDataValue(-9999)
    # setting extent of output raster
    # top left x, w-e pixel resolution, rotation, top left y, rotation, n-s pixel resolution
    out_ds.SetGeoTransform(gt)
    # setting spatial reference of output raster
    srs = osr.SpatialReference()
    srs.ImportFromWkt(wkt)
    out_ds.SetProjection(srs.ExportToWkt())
    # close the output dataset so it is flushed to disk
    out_ds = None

print('Processing Done')
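(For reference, a minimal sketch of plotting a GeoTIFF band directly in Python, assuming matplotlib is available; the file name is one of the outputs above:)
from osgeo import gdal
import matplotlib.pyplot as plt

# read the first band of the converted file into a numpy array
ds = gdal.Open("cons_dom_domestic_wateruse.tif")
arr = ds.GetRasterBand(1).ReadAsArray()

# show it; the -9999 nodata values may dominate the colour scale
plt.imshow(arr)
plt.colorbar()
plt.show()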
gdalinfo for one of the inputs, cons_dom.nc:
Driver: netCDF/Network Common Data Format
Files: cons_dom.nc
Size is 67420, 480
Metadata:
cons_dom#description= global gridded domestic water consumption results: 67420 grids, 480 Months
cons_dom#units=mm/month
month#units=months since 1971-1
Corner Coordinates:
Upper Left ( 0.0, 0.0)
Lower Left ( 0.0, 480.0)
Upper Right (67420.0, 0.0)
Lower Right (67420.0, 480.0)
Center (33710.0, 240.0)
Band 1 Block=67420x1 Type=Float64, ColorInterp=Undefined
NoData Value=9.96920996838686905e+36
Unit Type: mm/month
Metadata:
description= global gridded domestic water consumption results: 67420 grids, 480 Months
NETCDF_VARNAME=cons_dom
units=mm/month
gdalinfo for the output, cons_dom_domestic_wateruse.tif:
Driver: GTiff/GeoTIFF
Files: cons_dom_domestic_wateruse.tif
Size is 67420, 480
Origin = (0.000000000000000,0.000000000000000)
Pixel Size = (1.000000000000000,1.000000000000000)
Image Structure Metadata:
INTERLEAVE=BAND
Corner Coordinates:
Upper Left ( 0.0000000, 0.0000000)
Lower Left ( 0.000, 480.000)
Upper Right ( 67420.000, 0.000)
Lower Right ( 67420.000, 480.000)
Center ( 33710.000, 240.000)
Band 1 Block=67420x1 Type=Int16, ColorInterp=Gray
NoData Value=-9999
I downloaded and examined your dataset.
This is not spatial information at all; read its description carefully:
cons_dom#description=global gridded domestic water consumption results: 67420 grids, 480 Months
Your raster dataset has 67420x480 values.
This is a temporal dataset: one row per month, not a map.
You can still use GDAL to read it, but all it can do is present you with the 67420x480 values; it is up to you to interpret that data.
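For instance, a minimal sketch of reading it that way: the 480 rows are months and the 67420 columns are grid cells, so a per-cell time series is just a column slice. (Mapping a column index back to a lat/lon cell needs the grid lookup distributed with the dataset; that part is an assumption here.)
from osgeo import gdal

ds = gdal.Open("cons_dom.nc", gdal.GA_ReadOnly)
arr = ds.GetRasterBand(1).ReadAsArray()  # shape (480, 67420): months x grid cells

# monthly series for one grid cell (the column index is illustrative)
cell_series = arr[:, 12345]
print(cell_series.shape)  # (480,), months since 1971-1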
How to convert a NetCDF to TIFF when the coordinates are stored in another NetCDF file (and form an irregular grid, since this covers the Arctic region)?
An example of the NetCDF file can be downloaded here: https://drive.google.com/uc?export=download&id=1i4OGCQhKlZ056H1YHq4hTb0EbEkl-pYd
The NetCDF file with the coordinates can be downloaded here: https://drive.google.com/uc?export=download&id=1WVzZ--NnHSPkJmBqlGwXAN7abXM5_uNh
(Just additional information: the files only provide the following as regards coordinates):
NC_GLOBAL#geospatial_bounds_crs=EPSG:4326
NC_GLOBAL#geospatial_lat_max=90
NC_GLOBAL#geospatial_lat_min=57.8
NC_GLOBAL#geospatial_lon_max=180
NC_GLOBAL#geospatial_lon_min=-180
Corner Coordinates:
Upper Left ( 0.0, 0.0)
Lower Left ( 0.0, 512.0)
Upper Right ( 512.0, 0.0)
Lower Right ( 512.0, 512.0)
Center ( 256.0, 256.0)
I know how to do a conversion with gdal_translate, but the problem is that if I apply it, the generated file will not be georeferenced, since lat/lon are not stored as coordinates but as variables in another netCDF file. So below is my progress so far trying to do this
with GDAL/Python. It results in a rotated image that still does not seem to be georeferenced.
Also: it seems I managed to insert the coordinates, but their names do not change to y and x and stay as c and r, despite my having renamed them (see pics below).
EDIT:
This is what I tried so far; the output is a tiff (wrongly rotated) with no coordinates on the axes:
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import rioxarray as rio
xds = xr.open_dataset(r'path_to_netcdf')
xdc = xr.open_dataset(r"path_to_netcdf_with_coordinates")
# Adds coordinates to x and y
xds.coords["c"] = xdc.mp_lon[1,:]
xds.coords["r"] = xdc.mp_lat[:,1]
xds
# Reorganize the netCDF file into standard names/locations
xds = xds.squeeze().rename_dims({"c": "x", "r": "y"}).transpose('y', 'x')
xds.rio.write_crs('epsg:4326', inplace=True)
#Take the variable that I'm interested in
df = xds['daily_fraction']
#It was giving me error later on, so i needed to set_spatial_dims
df = df.rio.set_spatial_dims(x_dim='x', y_dim='y')
#Save the GeoTIFF file:
df.rio.to_raster(r"C:\PHD\name_of_output.tiff")
Your files do not follow any standard that I know of; each coordinate lives in a separate dataset.
If you are sure that the longitude/latitude is linear - which it might not be, given that your dataset covers the polar regions - you can simply use gdal_translate to convert to TIFF and then gdal_edit.py -a_ulurll ulx uly urx ury llx lly to set a geotransform from your corner coordinates. But this will only work if the longitude/latitude are linear relative to your pixels.
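A rough Python equivalent of that gdal_edit.py call, as a sketch: it builds a (possibly rotated) geotransform from three known corners. The file name is a placeholder, and the corner values below are just the bounding box from the metadata above, which glosses over the irregular-grid caveat.
from osgeo import gdal

ds = gdal.OpenEx("converted.tif", gdal.OF_UPDATE)
nx, ny = ds.RasterXSize, ds.RasterYSize

# three known corners: upper-left, upper-right, lower-left
ulx, uly = -180.0, 90.0
urx, ury = 180.0, 90.0
llx, lly = -180.0, 57.8

# geotransform mapping pixel (col, row) to (x, y), allowing rotation:
#   x = ulx + col * gt[1] + row * gt[2]
#   y = uly + col * gt[4] + row * gt[5]
gt = (ulx, (urx - ulx) / nx, (llx - ulx) / ny,
      uly, (ury - uly) / nx, (lly - uly) / ny)
ds.SetGeoTransform(gt)
ds = None  # close and flush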
I have a satellite image of a terrain, and I have the latitude and longitude of its four corners. How can I add this data to the GeoTIFF file using Python or MATLAB?
You can open the file in update-mode and set the geotransform.
It does require your image to be a regular grid, and the assumption is often that it isn't rotated (which you can tell if you know all four corner coordinates).
The geotransform also requires the x/y resolution, which you can calculate from the image size in pixels and the corner coordinates if you don't already know it.
If this is more or less where you're starting from:
ulx = -180
uly = 90
lrx = 180
lry = -90
fn = "somefile.tif"
You can calculate and set the geotransform with:
from osgeo import gdal
ds = gdal.OpenEx(fn, gdal.OF_UPDATE)
# calculate the resolution
xres = (lrx - ulx) / ds.RasterXSize
yres = (lry - uly) / ds.RasterYSize
# define the geotransform
gt = (ulx, xres, 0, uly, 0, yres)
# set the geotransform in the metadata
ds.SetGeoTransform(gt)
# close file
ds = None
Since you're modifying an existing file, it would probably be best to first do it on a copy and make sure it's doing what you expect.
If the geotransform is missing, your file might also lack other metadata like the projection. If that's the case you can set it similarly with ds.SetProjection(srs_wkt).
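For example, a short sketch of that, reopening the file in update mode; the EPSG code is an assumption:
from osgeo import gdal, osr

ds = gdal.OpenEx(fn, gdal.OF_UPDATE)
srs = osr.SpatialReference()
srs.ImportFromEPSG(4326)  # assumed CRS; use whatever your data is actually in
ds.SetProjection(srs.ExportToWkt())
ds = None  # close and flush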
I have a geotiff file that I'm opening with gdal in Python, and I need to find the single pixel closest to a specified latitude/longitude. I was previously working with an unrelated file type for similar data, so I'm completely new to both gdal and geotiff.
How does one do this? What I have so far is
import numpy as np
from osgeo import gdal
ds = gdal.Open('foo.tiff')
width = ds.RasterXSize
height = ds.RasterYSize
gt = ds.GetGeoTransform()
gp = ds.GetProjection()
data = np.array(ds.ReadAsArray())
print(gt)
print(gp)
which produces (for my files)
(-3272421.457337171, 2539.703, 0.0, 3790842.1060354356, 0.0, -2539.703)
and
PROJCS["unnamed",GEOGCS["Coordinate System imported from GRIB file",DATUM["unnamed",SPHEROID["Sphere",6371200,0]],PRIMEM["Greenwich",0],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]]],PROJECTION["Lambert_Conformal_Conic_2SP"],PARAMETER["latitude_of_origin",25],PARAMETER["central_meridian",265],PARAMETER["standard_parallel_1",25],PARAMETER["standard_parallel_2",25],PARAMETER["false_easting",0],PARAMETER["false_northing",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]
Ideally, there'd be a single simple function call, and it would also return an indication of whether the specified location falls outside the bounds of the raster.
My fallback is to obtain a grid from another source containing the latitudes and longitudes for each pixel and then do a brute force search for the desired location, but I'm hoping there's a more elegant way.
Note: I think what I'm trying to do is equivalent to the command line
gdallocationinfo -wgs84 foo.tif <longitude> <latitude>
which returns results like
Report:
Location: (1475P,1181L)
Band 1:
Value: 66
This suggests to me that the functionality is probably already in the gdal module, if I can just find the right method to call.
You basically need two steps:
Convert the lat/lon point to the raster projection
Convert the mapx/mapy (in raster proj) to pixel coordinates
Given the code you already posted above, defining both projection systems can be done with:
from osgeo import gdal, osr
point_srs = osr.SpatialReference()
point_srs.ImportFromEPSG(4326) # hardcode for lon/lat
# GDAL>=3: make sure it's x/y
# see https://trac.osgeo.org/gdal/wiki/rfc73_proj6_wkt2_srsbarn
point_srs.SetAxisMappingStrategy(osr.OAMS_TRADITIONAL_GIS_ORDER)
file_srs = osr.SpatialReference()
file_srs.ImportFromWkt(gp)
Create the coordinate transformation, and use it to convert the point from lon/lat to mapx/mapy coordinates (whatever projection that is):
ct = osr.CoordinateTransformation(point_srs, file_srs)
point_x = -114.06138 # lon
point_y = 51.03163 # lat
mapx, mapy, z = ct.TransformPoint(point_x, point_y)
To go from map coordinates to pixel coordinates, the geotransform needs to be inverted first; it can then be used to retrieve the pixel coordinates:
gt_inv = gdal.InvGeoTransform(gt)
pixel_x, pixel_y = gdal.ApplyGeoTransform(gt_inv, mapx, mapy)
Rounding those pixel coordinates should allow you to use them for indexing the data array. You might need to clip them if the point you're querying is outside the raster.
# round to pixel
pixel_x = round(pixel_x)
pixel_y = round(pixel_y)
# clip to file extent
pixel_x = max(min(pixel_x, width-1), 0)
pixel_y = max(min(pixel_y, height-1), 0)
pixel_data = data[pixel_y, pixel_x]
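Putting it together, a small helper along these lines could give you the single call you're after, returning the value plus an in-bounds flag (a sketch built on the variables defined above; names are illustrative):
def lookup(lon, lat):
    """Return (value, in_bounds) for the pixel nearest to lon/lat."""
    mapx, mapy, _ = ct.TransformPoint(lon, lat)
    px, py = gdal.ApplyGeoTransform(gt_inv, mapx, mapy)
    px, py = round(px), round(py)
    in_bounds = 0 <= px < width and 0 <= py < height
    # clip so the nearest edge pixel is returned for outside points
    px = max(min(px, width - 1), 0)
    py = max(min(py, height - 1), 0)
    return data[py, px], in_bounds

value, ok = lookup(-114.06138, 51.03163)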
Update:
Any idea how to delete pixels in a specific value range from a raster image using numpy/scipy or GDAL?
Or, better, how to create a new raster with only some classes using raster calculation expressions?
For example, I have a raster image with
5 classes:
1. 0-100
2. 100-200
3. 200-300
4. 300-500
5. 500-1000
and I want to delete the class 1 value range,
or maybe classes 1, 2, 4 and 5.
I began with this script:
import numpy as np
from osgeo import gdal
ds = gdal.Open("raster3.tif")
myarray = np.array(ds.GetRasterBand(1).ReadAsArray())
#print myarray.shape
#print myarray.size
#print myarray
new=np.delete(myarray[::2], 1)
but I can't complete it.
(The attached image shows white for class 5 and black for class 1.)
Rasters are 2-D arrays of values, with each value being stored in a pixel (which stands for picture element). Each pixel must contain some information. It is not possible to delete or remove pixels from the array because rasters are usually encoded as a simple 1-dimensional string of bits. Metadata commonly helps explain where line breaks are and the length of the bitstring, so that the 1-D bitstring can be understood as a 2-D array. If you "remove" a pixel, then you break the raster. The 2-D grid is no longer valid.
Of course, there are many instances where you do want to effectively discard or clean the raster of data. Such an example might be to remove pixels that cover land from a raster of sea-surface temperatures. To accomplish this goal, many geospatial raster formats hold metadata describing what are called NoData values. Pixels containing a NoData value are interpreted as not existing. Recall that in a raster, each pixel must contain some information. The NoData paradigm allows the structure and format of rasters to be met, while also giving a way to mask pixels from being displayed or analyzed. There is still data (bits, 1s and 0s) at the masked pixels, but it only serves to identify the pixel as invalid.
With this in mind, here is an example using gdal which will mask values in the range of 0-100 so they are NoData, and "do not exist". The NoData value will be specified as 0.
from osgeo import gdal
# open dataset to read, and get a numpy array
ds = gdal.Open("raster3.tif", 'r')
myarray = ds.GetRasterBand(1).ReadAsArray()
# modify numpy array to mask values
myarray[myarray <= 100] = 0
# open output dataset, which is a copy of original
driver = gdal.GetDriverByName('GTiff')
ds_out = driver.CreateCopy("raster3_with_nodata.tif", ds)
# write the modified array to the raster
ds_out.GetRasterBand(1).WriteArray(myarray)
# set the NoData metadata flag
ds_out.GetRasterBand(1).SetNoDataValue(0)
# clear the buffer, and ensure file is written
ds_out.FlushCache()
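The same idea extends to several classes at once. For instance, to keep only class 3 (200-300) and mask classes 1, 2, 4 and 5, a combined condition on the array works (a sketch; how you treat the class boundaries is up to you):
import numpy as np

# mask everything outside the class-3 range (200-300)
outside = (myarray < 200) | (myarray >= 300)
myarray[outside] = 0  # 0 is the NoData value used above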
I want to contribute an example for Landsat data.
In this quick guide, you will be able to exclude Landsat cloud pixels.
Landsat offers the Quality Assessment Band (BQA), which contains integer values (classes) flagging features such as clouds, cloud shadow, snow/ice, etc.
We will use the BQA to mask out the cloud pixels in the other bands.
# Import Packages
import rasterio as rio
import earthpy.plot as ep
from matplotlib import pyplot as plt
import rioxarray as rxr
from numpy import ma
# Open the Landsat Band 3
Landsat_Image = rxr.open_rasterio(r"C:\...\LC08_L1TP_223075_20210311_20210317_01_T1_B3.tif")
# Open the Quality Assessment Band
BQA = rxr.open_rasterio(r"C:\...\LC08_L1TP_223075_20210311_20210317_01_T1_BQA.tif").squeeze()
# Create a list with the QA values that represent cloud, cloud_shadow, etc.
Cloud_Values = [6816, 6848, 6896, 7072]
# Mask the data using the pixel QA layer
landsat_masked = Landsat_Image.where(~BQA.isin(Cloud_Values))
landsat_masked
# Build a masked array for plotting
landsat_masked_plot = ma.masked_array(landsat_masked.values,landsat_masked.isnull())
# Plot
ep.plot_rgb(landsat_masked_plot, rgb=[2, 1, 0], title = "Masked Data")
plt.show()
###############################################################################
# Export the masked Landsat Scenes to Directory "Masked_Bands_QA"
out_img = landsat_masked
out_img.shape
out_transform = landsat_masked.rio.transform()
# Get a Band of the same Scene for reference
rastDat = rio.open(r"C:\Dados_Espaciais\NDVI_Usinas\Adeco\Indices\Imagens\LC08_L1TP_223075_20210311_20210317_01_T1\LC08_L1TP_223075_20210311_20210317_01_T1_B3.tif")
#copying metadata from original raster
out_meta = rastDat.meta.copy()
#amending original metadata
out_meta.update({'nodata': 0,
'height' : out_img.shape[1],
'width' : out_img.shape[2],
'transform' : out_transform})
# writing and then re-reading the output data to see if it looks good
for i in range(out_img.shape[0]):
with rio.open(rf"C:\Dados_Espaciais\DSM\Bare_Soil_Landsat\Teste_{i+1}_masked.tif",'w',**out_meta) as dst:
dst.write(out_img[i,:,:],1)
This way you tell the program:
check which areas in BQA have these "Cloud_Values" and exclude those areas from the Landsat image provided.
I hope it works.
I have the results of a simulation on an unstructured 2D mesh. I usually export the results in VTK and visualize them with ParaView. This is what the results look like.
I would like to obtain a raster image from the results (with or without interpolation) to use it as a texture for visualization in a 3D software. From reading around I have gathered that I need to do some kind of resampling in order to convert from the unstructured grid to a 2d regular grid for the raster image.
VTK can export to raster, but it exports only a full scene without any defined boundary so it requires manual tweaking to fit the image.
Ideally I would like to export only the results within the results bounding box and 'map' them to a raster image programmatically with Ruby or Python.
This script uses ParaView and creates an image perfectly centered and scaled so that it can be used as a texture. Notice the 855 value for the vertical size: it seems to be related to the screen resolution and, according to the ParaView mailing list, is needed only on OSX.
It should be run with the ParaView Python interpreter, pvbatch.
import sys, json
#### import the simple module from the paraview
from paraview.simple import *
#### disable automatic camera reset on 'Show'
paraview.simple._DisableFirstRenderCameraReset()
args = json.loads(sys.argv[1])
# create a new 'Legacy VTK Reader'
vtk_file = args["file"]
data = LegacyVTKReader(FileNames=[vtk_file])
# get active view
renderView1 = GetActiveViewOrCreate('RenderView')
# the view size is set explicitly below from the JSON parameters
xc = float(args["center"][0])
yc = float(args["center"][1])
zc = float(args["center"][2])
width = float(args["width"])
height = float(args["height"])
output_file = args["output_file"]
scalar = args["scalar"]
colormap_min = float(args["colormap_min"])
colormap_max = float(args["colormap_max"])
ratio = height / width
magnification = 2
height_p = 855 * magnification
width_p = int(height_p * 1.0 / ratio / magnification)
renderView1.ViewSize = [width_p , height_p]
# show data in view
dataDisplay = Show(data, renderView1)
# trace defaults for the display properties.
dataDisplay.ColorArrayName = ['CELLS', scalar]
# set scalar coloring
ColorBy(dataDisplay, ('CELLS', scalar))
# rescale color and/or opacity maps used to include current data range
dataDisplay.RescaleTransferFunctionToDataRange(True)
# get color transfer function/color map for 'irradiation'
irradiationLUT = GetColorTransferFunction(scalar)
# Rescale transfer function
irradiationLUT.RescaleTransferFunction(colormap_min, colormap_max)
irradiationLUT.LockDataRange = 1
irradiationLUT.ColorSpace = 'RGB'
irradiationLUT.NanColor = [0.498039, 0.0, 0.0]
#changing interaction mode based on data extents
renderView1.InteractionMode = '2D'
renderView1.CameraPosition = [xc, yc, 10000.0 + zc]
renderView1.CameraFocalPoint = [xc, yc, zc]
# hide color bar/color legend
dataDisplay.SetScalarBarVisibility(renderView1, False)
# current camera placement for renderView1
renderView1.InteractionMode = '2D'
#renderView1.CameraPosition = [3.641002, 197.944122, 10001.75]
#renderView1.CameraFocalPoint = [3.641002, 197.944122, 1.75]
renderView1.CameraParallelScale = (height / 2.0)
# save screenshot
SaveScreenshot(output_file, magnification=magnification, quality=100, view=renderView1)
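For reference, since the script reads all its parameters from a single JSON command-line argument, an invocation might look like this (the script name and all values are placeholders):
pvbatch render_texture.py '{"file": "results.vtk", "center": [0, 0, 0], "width": 10.0, "height": 10.0, "output_file": "texture.png", "scalar": "irradiation", "colormap_min": 0, "colormap_max": 1000}'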
I have a DIY solution. Usually I do as follows:
Open my mesh as a polygon layer in QGIS and do the following:
calculate mesh centroids in QGIS (Vector/Geometry Tools/Polygon Centroids)
right click on the newly created layer, select Save As, select CSV format and under Layer options/GEOMETRY select xy or xyz
Then, with a simple Python script, I associate the vtk data (e.g. water depth) with the centroids; a sketch of this step follows below (be aware that ParaView numbers the nodes with a -1 offset with respect to QGIS, so node 2 in ParaView is node 3 in QGIS).
Finally, back in QGIS, I interpolate a raster from the vector points, e.g. with the GRASS GIS module v.to.rast.attribute.
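As a sketch of the association step, the meshio library can read the VTK cell data so it can be joined to the exported centroid CSV. The library choice, the field name and the file names are all assumptions:
import csv
import meshio

mesh = meshio.read("results.vtk")
# cell_data maps each field name to a list of arrays, one per cell block
depth = mesh.cell_data["water_depth"][0]  # assumed field name

with open("centroids_with_depth.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["cell_id", "value"])
    for i, val in enumerate(depth):
        # QGIS ids are offset by +1 relative to ParaView/VTK numbering
        writer.writerow([i + 1, val])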