I have a series of RGB files in PNG format, as well as the corresponding depth files in txt format, which can be loaded with np.loadtxt. How could I merge these two files into a point cloud using Open3D?
I followed the procedure from obtain point cloud from depth numpy array using open3d - python, but the result is not readable for a human.
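Roughly, the procedure I followed is sketched below (the intrinsics are only placeholders since I don't know the real camera parameters, the depth scale is a guess, and since my depth and RGB are not registered and may differ in size, this may already be part of the problem):
import numpy as np
import open3d as o3d

color_raw = o3d.io.read_image("a542c.png")
depth_raw = o3d.geometry.Image(np.loadtxt("a542d.txt").astype(np.float32))

# assumed: depth values are in millimetres, so depth_scale=1000 converts to metres
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color_raw, depth_raw, depth_scale=1000.0, depth_trunc=3.0,
    convert_rgb_to_intensity=False)

# placeholder intrinsics (PrimeSense defaults), not the real camera calibration
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
o3d.visualization.draw_geometries([pcd])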
An example is shown here:
the source png:
the pcd result:
You can get the source files from this link (Google Drive) to reproduce my result.
By the way, the depth and RGB are not registered.
Thanks.
I had to play a bit with the settings and data, and mainly used the answer from your SO link:
import numpy as np
import open3d as o3d

# the color image is read but not actually used below
color = o3d.io.read_image("a542c.png")
depth = np.loadtxt("a542d.txt")

# build an (N, 3) array of points: pixel row, pixel column, depth value
vertices = []
for x in range(depth.shape[0]):
    for y in range(depth.shape[1]):
        vertices.append((float(x), float(y), depth[x][y]))

pcd = o3d.geometry.PointCloud()
point_cloud = np.asarray(vertices)
pcd.points = o3d.utility.Vector3dVector(point_cloud)
pcd.estimate_normals()
pcd = pcd.normalize_normals()
o3d.visualization.draw_geometries([pcd])
However, if you keep the code as provided, the whole scene looks very weird and unfamiliar. That is because your depth file contains values between 0 and almost 2.5 m.
I introduced a cut-off at 500 or 1000 mm and removed all 0 values, as suggested in the other answer. Additionally, I flipped the x-axis (float(-x) instead of float(x)) so the cloud resembles your photo.
# ...
vertices = []
for x in range(depth.shape[0]):
    for y in range(depth.shape[1]):
        # keep only points closer than 500 mm and drop the zeros
        if 0 < depth[x][y] < 500:
            vertices.append((float(-x), float(y), depth[x][y]))
For a good perspective I had to rotate the view manually. Open3D probably provides methods to do this automatically (I quickly tried pcd.transform() from your SO link above; it may help if needed).
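For instance, a minimal sketch of such a transform (untested on your data; the flip matrix below is just a guess to mirror the cloud, not something from my attempt above):
import numpy as np

# 4x4 homogeneous transform that flips the y and z axes
flip = np.array([[1, 0, 0, 0],
                 [0, -1, 0, 0],
                 [0, 0, -1, 0],
                 [0, 0, 0, 1]], dtype=float)
pcd.transform(flip)
o3d.visualization.draw_geometries([pcd])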
Results
500 mm cut-off and 1000 mm cut-off (screenshots).
I used laspy instead of open3d because I wanted to give some colors to your image:
import imageio
import numpy as np
import matplotlib.pyplot as plt

# first reading the image for RGB values
image = imageio.imread(".../a542c.png")

# loading the depth file
depth = np.loadtxt("/home/shaig93/Documents/internship_FWF/a542d.txt")

# creating fake x, y coordinates with meshgrid
xv, yv = np.meshgrid(np.arange(400), np.arange(640), indexing='ij')

# save_las is a function based on laspy that was provided to me by my supervisor
save_las("fn.laz", image[:400, :, 0].flatten(),
         np.c_[yv.flatten(), xv.flatten(), depth.flatten()], cmap=plt.cm.magma_r)
and the result is this. As you can see, the objects are visible from the front.
However, from the side they are not easy to distinguish.
This makes me think that your depth file is not that good.
Another idea would be to get rid of the 0 values in your depth file, so that you get a point cloud without the wall-like structure in front. That still does not solve the depth issue, of course.
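A minimal sketch of that masking step, reusing the meshgrid arrays from the snippet above (untested on your data):
# keep only pixels with a non-zero depth reading
mask = depth > 0
points = np.c_[yv[mask], xv[mask], depth[mask]]
intensities = image[:400, :, 0][mask]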
PS: I know this is not a proper answer, but I hope it helps in identifying the problem.
I am trying to use the zonal_stats function from the rasterstats Python package to get raster statistics from a .tif file for each shape in a .shp file. I managed to do it in QGIS without any problems, but I have to do the same for more than 200 files, which would take a lot of time, so I'm trying the Python way. Both files and replication code are in my Google Drive.
My script is:
import rasterio
import geopandas as gpd
import numpy as np
from rasterio.plot import show
from rasterstats import zonal_stats
from rasterio.transform import Affine
# Import .tif file
raster = rasterio.open(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\Arroz_2019-03.tif')
# Read the raster values
array = raster.read(1)
# Get the affine
affine = raster.transform
# Import shape file
shapefile = gpd.read_file(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\Setores_Censit_SP_WGS84.shp')
# Zonal stats
zs_shapefile = zonal_stats(shapefile, array, affine=affine,
                           stats=['min', 'max', 'mean', 'median', 'majority'])
I get the following error:
Input In [1] in <cell line: 22>
    zs_shapefile = zonal_stats(shapefile, array, affine=affine,
File ~\Anaconda3\lib\site-packages\rasterstats\main.py:32 in zonal_stats
    return list(gen_zonal_stats(*args, **kwargs))
File ~\Anaconda3\lib\site-packages\rasterstats\main.py:164 in gen_zonal_stats
    rv_array = rasterize_geom(geom, like=fsrc, all_touched=all_touched)
File ~\Anaconda3\lib\site-packages\rasterstats\utils.py:41 in rasterize_geom
    rv_array = features.rasterize(
File ~\Anaconda3\lib\site-packages\rasterio\env.py:387 in wrapper
    return f(*args, **kwds)
File ~\Anaconda3\lib\site-packages\rasterio\features.py:353 in rasterize
    raise ValueError("width and height must be > 0")

ValueError: width and height must be > 0
I found this question about the same problem, but I couldn't make its solution work: I tried to reverse the sign of the items in the Affine of my raster data, but it did not work:
''' Trying to use the same solution of question: https://stackoverflow.com/questions/62010050/from-zonal-stats-i-get-this-error-valueerror-width-and-height-must-be-0 '''
old_tif = rasterio.open(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\Arroz_2019-03.tif')
print(old_tif.profile) # copy & paste the output and change signs
new_tif_profile = old_tif.profile
# Affine(0.004611149999999995, 0.0, -46.828504575,
#        0.0, 0.006521380000000008, -24.01169169)
new_tif_profile['transform'] = Affine(0.004611149999999995, 0.0, -46.828504575,
                                      0.0, -0.006521380000000008, 24.01169169)
new_tif_array = old_tif.read(1)
new_tif_array = np.fliplr(np.flip(new_tif_array))
with rasterio.open(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\tentativa.tif', "w", **new_tif_profile) as dest:
    dest.write(new_tif_array, indexes=1)
dem = rasterio.open(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\tentativa.tif')
# Read the raster values
array = dem.read(1)
# Get the affine
affine = dem.transform
# Import shape file
shapefile = gpd.read_file(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\Setores_Censit_SP_WGS84.shp')
# Zonal stats
zs_shapefile = zonal_stats(shapefile, array, affine=affine,
                           stats=['min', 'max', 'mean', 'median', 'majority'])
Doing it this way, I don't get the "width and height must be > 0" error! But every stat in zs_shapefile is NoneType, so it doesn't solve my problem.
Does anyone understand why this error happens, and which sign I have to reverse to make it work? Thanks in advance!
I would be careful with overriding the geotransform of your raster like this, unless you are really convinced the original metadata is incorrect. I'm not too familiar with Affine, but it looks like you are now setting the latitude as positive, which places the raster in the northern hemisphere. My guess would be that this lack of intersection between the vector and the raster causes the NoneType results.
I'm also not familiar with raster_stats, but I'm guessing it boils down to GDAL & Numpy at the core of it. So something you could try as a test is to add the all_touched=True keyword:
https://pythonhosted.org/rasterstats/manual.html#rasterization-strategy
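A minimal sketch of that test, reusing shapefile, array and affine from your script (all_touched is a documented zonal_stats keyword):
zs_test = zonal_stats(shapefile, array, affine=affine, all_touched=True,
                      stats=['min', 'max', 'mean', 'median', 'majority'])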
If that works, it might indicate that the rasterization fails because your polygons are so small compared to the pixels that the default rasterization method results in a rasterized polygon of size 0 (in at least one of the dimensions). That is also what the error hints at (my guess).
Keep in mind that all_touched=True changes the stats you get in result, so I would only do it for testing, or if you're comfortable with this difference.
If you really need a valid value for these (too) small polygons, there are a few workarounds you could try. Something I've done is to simply take the centroid of such a polygon and use the value of the pixel that the centroid falls on.
A potential way to identify these polygons would be to use all_touched together with the "count" statistic; every polygon with a count of only 1 might be too small to get rasterized correctly. To really find this out you would probably have to do the rasterization yourself using GDAL, given that raster_stats doesn't seem to allow it.
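A rough sketch of that centroid fallback (untested; I'm assuming the file names from your question, and rasterio's sample() expects (x, y) coordinates in the raster's CRS):
import rasterio
import geopandas as gpd

shapefile = gpd.read_file('Setores_Censit_SP_WGS84.shp')
with rasterio.open('Arroz_2019-03.tif') as src:
    centroids = [(pt.x, pt.y) for pt in shapefile.geometry.centroid]
    # one value per centroid, taken from band 1
    centroid_values = [vals[0] for vals in src.sample(centroids)]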
Note that due to the shape of some of the polygons you use, the centroid might fall outside of the polygon. But given how coarse your raster data is relative to the vector, I don't think it would impact the result all that much.
An alternative is, instead of modifying the vector, to significantly increase the resolution of your raster. You could use gdal_translate to output this to a VRT with some form of resampling, and avoid having to write this data to disk. Once the resolution is high enough that all polygons rasterize to at least a 1x1 array, it should probably work. But since your polygons are tiny compared to the pixels, the factor will be large. You could guess it, or analyze the envelopes of all polygons; for example, take the smallest edge of any envelope as more or less the resolution that is necessary for a correct rasterization.
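A sketch of that idea with gdal.Translate (the factor of 100 is an arbitrary guess; in practice you would derive it from the polygon envelopes):
from osgeo import gdal

src = gdal.Open(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\Arroz_2019-03.tif')
gt = src.GetGeoTransform()

# write a VRT whose pixels are 100x smaller; nearest resampling keeps the original values
gdal.Translate('arroz_fine.vrt', src, format='VRT',
               xRes=gt[1] / 100, yRes=abs(gt[5]) / 100,
               resampleAlg='nearest')

zs_fine = zonal_stats(shapefile, 'arroz_fine.vrt',
                      stats=['min', 'max', 'mean', 'median', 'majority'])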
Edit: to clarify the above a bit further.
The default rasterization strategy of GDAL (all_touched=False) is to consider a pixel "within" the polygon if the centroid of the pixel intersects with the polygon.
Using QGIS you can for example convert the pixels to points, and then do a spatial join with your vector. If you remove polygons that can't be joined (there's a checkbox), you'll get a different vector that most likely should work with raster_stats, given your current raster.
You could perhaps use that in the normal way (all_touched=False), and get the stats for the small polygons using all_touched=True.
In the image below, the green polygons are the ones that intersect with the centroid of a pixel, the red ones don't (and those are probably the ones raster_stats "tries" to rasterize to a size 0 array).
I want to clip one raster based on the extent of another (smaller) raster. First I determine the coordinates of the corners of the smaller raster using
import rasterio as rio
import gdal
from shapely.geometry import Polygon
src = gdal.Open("smaller_file.tif")
ulx, xres, xskew, uly, yskew, yres = src.GetGeoTransform()
lrx = ulx + (src.RasterXSize * xres)
lry = uly + (src.RasterYSize * yres)
geometry = [[ulx,lry], [ulx,uly], [lrx,uly], [lrx,lry]]
This gives me the following output: geometry = [[-174740.0, 592900.0], [-174740.0, 2112760.0], [900180.0, 2112760.0], [900180.0, 592900.0]]. (Note that the CRS is EPSG:32651.)
Now I would like to clip the larger file using rio.mask.mask(). According to the documentation, the shape variable should be GeoJSON-like dict or an object that implements the Python geo interface protocol (such as a Shapely Polygon). Therefore I create a Shapely Polygon out of the variable geometry, using
roi = Polygon(geometry)
Now everything is ready to use the rio.mask.mask() function.
output = rio.mask.mask(rio.open("larger_file.tif"), roi, crop=True)
But this gives me the following error
TypeError: 'Polygon' object is not iterable
What am I doing wrong? Or if someone knows a more elegant way to do it, please let me know.
(Unfortunately I cannot upload the two files since they're too large)
I found your question when I needed to figure out this kind of clipping myself. I got the same error and fixed it the following way:
rasterio.mask expects a list of features, not a single geometry. The masking runs over several features bundled in an iterable (e.g. a list or tuple), so we need to pass our polygon wrapped in a list (or tuple).
The code you posted works after the following change:
roi = [Polygon(geometry)]
All we have to do is to enclose the geometry in a list/tuple and then rasterio.mask works as expected.
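For completeness, here is a minimal end-to-end sketch (the file name is a placeholder, geometry is the corner list built in the question, and rasterio.mask.mask returns the clipped array together with its transform):
import rasterio
from rasterio.mask import mask
from shapely.geometry import Polygon

roi = [Polygon(geometry)]  # note the enclosing list
with rasterio.open("larger_file.tif") as src:
    out_image, out_transform = mask(src, roi, crop=True)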
I am trying to calculate the normals of a point cloud formed by three planes each aligned with an axis.
In MATLAB the function pcnormals gives me a coherent result, while when I try to do the same with estimate_normals in Open3D the result is incorrect.
The code is here:
import numpy as np
from open3d import *

pcd = read_point_cloud(r"D:\Artificial.txt", format='xyz')
estimate_normals(pcd, search_param=KDTreeSearchParamKNN(knn=25))
x = np.concatenate((np.asarray(pcd.points), np.asarray(pcd.normals)), axis=1)
np.savetxt(r"D:\ArtificialN_python.txt", x, delimiter=',')
I have also tried different knn values and search_param settings, but the result is similar.
I attach images of the clouds colored according to the third component of the normals (red = horizontal, green = inclined), computed with MATLAB and Python.
matlab result:
python result:
Does anybody know what this might be due to?
I have a mayavi object with a number of vertices and I would like to set RGB or RGBA values directly for these vertices, rather than restrict myself to a single colormap with scalars. How can this be accomplished?
As far as I am aware, there is no documentation for doing this, but I have found a way to do it with only a minimum amount of hacking around. Here is a minimal example, which might require a little tinkering for different kinds of sources:
from tvtk.api import tvtk
from mayavi import mlab
import numpy as np

nr_points = 100
x, y, z = np.random.random((3, nr_points))            # some data
colors = np.random.randint(256, size=(nr_points, 3))  # some RGB or RGBA colors

pts = mlab.points3d(x, y, z)
sc = tvtk.UnsignedCharArray()
sc.from_array(colors)
pts.mlab_source.dataset.point_data.scalars = sc
pts.mlab_source.dataset.modified()
It also looks like you sometimes have to ensure that the mapper points to the right thing. This is not necessary for the above example, but it may be for other sources:
pts.actor.mapper.input=pts.mlab_source.dataset
At some point the mayavi API should be improved so that there is an exposed way to do this for all the pipeline functions, but that turns out to be a rather complicated and sweeping set of changes which I don't currently have time to finish.
Edit:
User eqzx posted an answer to another question (Specify absolute colour for 3D points in MayaVi) which may be simpler, especially for certain source types that are hard to get working with tvtk.UnsignedCharArray.
His idea is to create a LUT spanning the entire range of 256x256x256 RGB values. Note that this LUT therefore has 16,777,216 entries, which may waste quite a lot of memory if you use it in many VTK objects and are not careful.
import numpy
import mayavi.mlab

# create direct grid as 256**3 x 4 array
def create_8bit_rgb_lut():
    xl = numpy.mgrid[0:256, 0:256, 0:256]
    lut = numpy.vstack((xl[0].reshape(1, 256**3),
                        xl[1].reshape(1, 256**3),
                        xl[2].reshape(1, 256**3),
                        255 * numpy.ones((1, 256**3)))).T
    return lut.astype('int32')

# indexing function into the above grid
def rgb_2_scalar_idx(r, g, b):
    return 256**2 * r + 256 * g + b

# N x 3 colors (here taken from point objects with a .color attribute)
colors = numpy.array([_.color for _ in points])

# N scalars
scalars = numpy.zeros((colors.shape[0],))
for (kp_idx, kp_c) in enumerate(colors):
    scalars[kp_idx] = rgb_2_scalar_idx(kp_c[0], kp_c[1], kp_c[2])

rgb_lut = create_8bit_rgb_lut()

points_mlab = mayavi.mlab.points3d(x, y, z, scalars, mode='point')

# magic to modify the lookup table
points_mlab.module_manager.scalar_lut_manager.lut._vtk_obj.SetTableRange(0, rgb_lut.shape[0])
points_mlab.module_manager.scalar_lut_manager.lut.number_of_colors = rgb_lut.shape[0]
points_mlab.module_manager.scalar_lut_manager.lut.table = rgb_lut
I've been searching the documentation like crazy and cannot find an answer for this one.
I'm generating FITS images in python and need to assign WCS coordinates to the image. I know there are lots of ways to do this by matching up point sources with a known catalog, but in this case I'm generating a dust map, so point source matching won't work (as far as I can tell).
So the image is a 2D Numpy array of shape (240,240). It's written like so (the x and y coordinate assignments are a little weird, it works somehow):
H, xedges, yedges = np.histogram2d(glat, glon, bins=[ybins, xbins], weights=Av)
count, x, y = np.histogram2d(glat, glon, bins=[ybins, xbins])
H /= count
hdu = pyfits.PrimaryHDU(H)
hdu.writeto(filename)
>>> print H.shape
(240,240)
That all works fine on its own. For assigning galactic coordinates, it seems like all you'd need to do is something like:
glon_coords = np.linspace(np.amin(glon), np.amax(glon), 240)
glat_coords = np.linspace(np.amin(glat), np.amax(glat), 240)
But I don't understand how the FITS image stores these coordinates, so I don't know how to write them. I've tried assigning them in SAO DS9, too, but no luck. I just need a straightforward way of assigning these coordinates to the image.
Thanks for any help you can provide.
I would advise you to start using astropy. For the purposes of your project the astropy.wcs package can help you write a FITS WCS header, and the astropy.io.fits API is basically identical to the pyfits one you are using now. Moreover the help pages are excellent, and all I'm about to do is translate their WCS building page to match your example.
To your question: FITS does not "tag" each pixel with a coordinate. I suppose it is possible to create a pixel lookup table or something like that, but the actual WCS is an algorithmic translation of X,Y pixels to astrometric coordinates (in your case "Galactic"). A nice page is here.
The example I would point you to is here:
http://docs.astropy.org/en/latest/wcs/index.html#building-a-wcs-structure-programmatically
And here is my untested pseudocode for your project:
# untested code
from __future__ import division  # confidence high
import numpy as np
# astropy
from astropy.io import fits as pyfits
from astropy import wcs
# your code
H, xedges, yedges = np.histogram2d(glat, glon, bins=[ybins, xbins], weights=Av)
count, x, y = np.histogram2d(glat, glon, bins=[ybins, xbins])
H /= count
# characterize your data in terms of a linear translation from XY pixels to
# Galactic longitude, latitude.
# lambda function: given min, max, n_pixels, return (spacing, middle value).
linwcs = lambda lo, hi, n: ((hi - lo) / n, (hi + lo) / 2)
cdeltaX, crvalX = linwcs(np.amin(glon), np.amax(glon), len(glon))
cdeltaY, crvalY = linwcs(np.amin(glat), np.amax(glat), len(glat))
# wcs code ripped from
# http://docs.astropy.org/en/latest/wcs/index.html
w = wcs.WCS(naxis=2)
# what is the center pixel of the XY grid.
w.wcs.crpix = [len(glon)/2, len(glat)/2]
# what is the galactic coordinate of that pixel.
w.wcs.crval = [crvalX, crvalY]
# what is the pixel scale in lon, lat.
w.wcs.cdelt = np.array([cdeltaX, cdeltaY])
# you would have to determine if this is in fact a tangential projection.
w.wcs.ctype = ["GLON-TAN", "GLAT-TAN"]
# write the HDU object WITH THE HEADER
header = w.to_header()
hdu = pyfits.PrimaryHDU(H, header=header)
hdu.writeto(filename)
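As a quick sanity check (my addition, not from the astropy page), you can convert a pixel back to world coordinates with the same WCS object and verify it lands inside your glon/glat range:
# pixel -> world (0-based pixel convention); expect values near the map centre
print(w.wcs_pix2world([[120, 120]], 0))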