How to split multiple planes using RANSAC in a 3D point cloud? My code can only split one plane at present.
I am currently working on plane segmentation of 3D point cloud data using RANSAC. My code can only split one plane at present. I want to remove a plane after splitting it off and then continue splitting the remaining planes. How can I do this?
# -*- coding: utf-8 -*-
import os
import open3d as o3d
import numpy as np
test_data_dir = 'D:/test data'
point_cloud_file_name = 'Area6_office_5.pcd'
point_cloud_file_path = os.path.join(test_data_dir, point_cloud_file_name)
# Read the point cloud
pcd = o3d.io.read_point_cloud(point_cloud_file_path)
# Plane segmentation
plane_model, inliers = pcd.segment_plane(distance_threshold=0.03,
ransac_n=3,
num_iterations=1000)
# Model parameters
[a, b, c, d] = plane_model
print(f"Plane equation: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0")
# Points on the plane (inliers)
inlier_cloud = pcd.select_by_index(inliers)
inlier_cloud.paint_uniform_color([1.0, 0, 1.0])  # Open3D expects RGB values in [0, 1]
# Points outside the plane (outliers)
outlier_cloud = pcd.select_by_index(inliers, invert=True)
# Visualization
o3d.visualization.draw_geometries([inlier_cloud, outlier_cloud],
zoom=0.8,
front=[-0.4999, -0.1659, -0.8499],
lookat=[2.1813, 2.0619, 2.0999],
up=[0.1204, -0.9852, 0.1215])
Since you already have outlier_cloud, which is the point cloud with the first plane removed, you can simply run the same segmentation on outlier_cloud to get the remaining planes.
For example, try the code below after running your script.
plane_model_2, inliers_2 = outlier_cloud.segment_plane(distance_threshold=0.03,
ransac_n=3,
num_iterations=1000)
# Inliers of the second plane
inlier_cloud_2 = outlier_cloud.select_by_index(inliers_2)
inlier_cloud_2.paint_uniform_color([1, 0, 1])
# Points outside the second plane
outlier_cloud_2 = outlier_cloud.select_by_index(inliers_2, invert=True)
# visualization
o3d.visualization.draw_geometries([inlier_cloud_2, outlier_cloud_2],
zoom=0.8,
front=[-0.4999, -0.1659, -0.8499],
lookat=[2.1813, 2.0619, 2.0999],
up=[0.1204, -0.9852, 0.1215])
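To keep extracting planes, you can wrap this segment-and-remove step in a loop. Below is a minimal sketch (untested on your data); the helper function, its name and the max_planes / min_points stopping rule are my own additions rather than part of Open3D:

def segment_planes(pcd, max_planes=5, min_points=100,
                   distance_threshold=0.03, ransac_n=3, num_iterations=1000):
    planes = []  # list of (plane_model, inlier_cloud) tuples
    rest = pcd
    for _ in range(max_planes):
        if len(rest.points) < min_points:
            break  # too few points left to fit another plane
        plane_model, inliers = rest.segment_plane(
            distance_threshold=distance_threshold,
            ransac_n=ransac_n,
            num_iterations=num_iterations)
        planes.append((plane_model, rest.select_by_index(inliers)))
        # Keep only the points outside this plane for the next round
        rest = rest.select_by_index(inliers, invert=True)
    return planes, rest

Each returned inlier cloud can then be painted a different colour and passed, together with the final rest cloud, to o3d.visualization.draw_geometries.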
I have quite a complex problem and two options to solve it.
For a multiline shapefile (a river) I would like to get cross profiles and extract DEM values along those lines.
Option 1 would be to create orthogonal lines at a defined step:
from osgeo import ogr
import numpy as np

# NOTE: this fragment assumes outShp, layerRef, geomIn, spc and sect_len are defined earlier in the script.
# Define a shp for the output features. Add a new field called 'M100' where the z-value of the line is stored to uniquely identify each profile
layerOut = outShp.CreateLayer('line_utm_neu_perp', layerRef, ogr.wkbLineString)
layerDefn = layerOut.GetLayerDefn() # gets parameters of the current shapefile
layerOut.CreateField(ogr.FieldDefn('M100', ogr.OFTReal))
# Calculate the number of profiles/perpendicular lines to generate
n_prof = int(geomIn.Length()/spc)
# Define rotation vectors
rot_anti = np.array([[0, -1], [1, 0]])
rot_clock = np.array([[0, 1], [-1, 0]])
# Start iterating along the line
for prof in range(1, n_prof):
    # Get the start, mid and end points for this segment
    seg_st = geomIn.GetPoint(prof-1)  # (x, y, z)
    seg_mid = geomIn.GetPoint(prof)
    seg_end = geomIn.GetPoint(prof+1)
    # Get a displacement vector for this segment
    vec = np.array([[seg_end[0] - seg_st[0],], [seg_end[1] - seg_st[1],]])
    # Rotate the vector 90 deg clockwise and 90 deg counter clockwise
    vec_anti = np.dot(rot_anti, vec)
    vec_clock = np.dot(rot_clock, vec)
    # Normalise the perpendicular vectors
    len_anti = ((vec_anti**2).sum())**0.5
    vec_anti = vec_anti/len_anti
    len_clock = ((vec_clock**2).sum())**0.5
    vec_clock = vec_clock/len_clock
    # Scale them up to the profile length
    vec_anti = vec_anti*sect_len
    vec_clock = vec_clock*sect_len
    # Calculate displacements from midpoint
    prof_st = (seg_mid[0] + float(vec_anti[0]), seg_mid[1] + float(vec_anti[1]))
    prof_end = (seg_mid[0] + float(vec_clock[0]), seg_mid[1] + float(vec_clock[1]))
    # Write to output
    geomLine = ogr.Geometry(ogr.wkbLineString)
    geomLine.AddPoint(prof_st[0], prof_st[1])
    geomLine.AddPoint(prof_end[0], prof_end[1])
    featureLine = ogr.Feature(layerDefn)
    featureLine.SetGeometry(geomLine)
    featureLine.SetFID(prof)
    featureLine.SetField('M100', round(seg_mid[2], 1))
    layerOut.CreateFeature(featureLine)
The problem here is that it works on a single line only, not on a multiline geometry.
Option 2 could be to create parallel lines with an offset and extract values at the same distance from the start. But I have only tried it once and it did not work on my objects.
z = shapefile.offset_curve(10.0,'left')
But here I do not know what object to pass in order to make it work. I was also thinking about creating a buffer and extracting the raster values inside it.
I would be grateful for any suggestions.
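A rough sketch of what option 2 might look like (untested; the file name is a placeholder and Shapely >= 2.0 is assumed for offset_curve), iterating over each part of the multiline so that the same per-part loop could also drive the perpendicular-line code from option 1:

import fiona
from shapely.geometry import shape, MultiLineString

with fiona.open('river.shp') as src:  # placeholder file name
    for rec in src:
        geom = shape(rec['geometry'])
        # A MultiLineString is just a collection of LineStrings
        parts = geom.geoms if isinstance(geom, MultiLineString) else [geom]
        for line in parts:
            left = line.offset_curve(10.0)    # positive distance = left side
            right = line.offset_curve(-10.0)  # negative distance = right side
            # (with Shapely < 2.0 use line.parallel_offset(10.0, 'left'))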
I have two .ply files that contain meshes of objects that are similar in shape. They are initially unaligned. I would like to achieve global registration for the two mesh objects. Is there a way I can do this without having to first import the point cloud data, do global registration, and then reconstruct the mesh?
I have tried the steps listed in the open3d documentation (http://www.open3d.org/docs/0.12.0/tutorial/pipelines/global_registration.html) and it works well for the point clouds. However, reconstructing a mesh from the point clouds is challenging, as they are a relatively complex shape, so I would like to avoid that.
Thank you in advance!
The main idea is that you don't need to reconstruct a mesh from the point cloud.
Call the data you have mesh_a, mesh_b, pcl_a and pcl_b.
If pcl_a/b was extracted directly from mesh_a/b, or if pcl_a/b and mesh_a/b share the same transformation matrix, you can simply apply the transformation matrix obtained from the point cloud registration to the mesh.
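A minimal sketch of case one (assuming pcl_b was registered as the source onto pcl_a, and the registration result is called result_ransac as in the example further down):

import copy

mesh_b_aligned = copy.deepcopy(mesh_b)  # keep the original mesh untouched
mesh_b_aligned.transform(result_ransac.transformation)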
If your point cloud data has no relation to your mesh data, you can first sample mesh_a/b into point clouds and register those, or directly use the mesh vertices as point clouds. The rest of the work is the same as in case one.
Here is an example of case two.
import copy
import numpy as np
import open3d as o3d
def draw_registration_result(source, target, transformation):
    source_temp = copy.deepcopy(source)
    target_temp = copy.deepcopy(target)
    source_temp.paint_uniform_color([1, 0.706, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    source_temp.transform(transformation)
    o3d.visualization.draw_geometries([source_temp, target_temp])

def preprocess_point_cloud(pcd, voxel_size):
    print(":: Downsample with a voxel size %.3f." % voxel_size)
    pcd_down = pcd.voxel_down_sample(voxel_size)
    radius_normal = voxel_size * 2
    print(":: Estimate normal with search radius %.3f." % radius_normal)
    pcd_down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))
    radius_feature = voxel_size * 5
    print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
    pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
    return pcd_down, pcd_fpfh

def execute_global_registration(source_down, target_down, source_fpfh,
                                target_fpfh, voxel_size):
    distance_threshold = voxel_size * 1.5
    print(":: RANSAC registration on downsampled point clouds.")
    print("   Since the downsampling voxel size is %.3f," % voxel_size)
    print("   we use a liberal distance threshold %.3f." % distance_threshold)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        source_down, target_down, source_fpfh, target_fpfh, True,
        distance_threshold,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(distance_threshold)
        ], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result

def main():
    voxel_size = 0.01
    print(":: Load two mesh.")
    target_mesh = o3d.io.read_triangle_mesh('bunny.ply')
    source_mesh = copy.deepcopy(target_mesh)
    source_mesh.rotate(source_mesh.get_rotation_matrix_from_xyz((np.pi / 4, 0, np.pi / 4)),
                       center=(0, 0, 0))
    source_mesh.translate((0, 0.05, 0))
    draw_registration_result(target_mesh, source_mesh, np.identity(4))
    print(":: Sample mesh to point cloud")
    target = target_mesh.sample_points_uniformly(1000)
    source = source_mesh.sample_points_uniformly(1000)
    draw_registration_result(source, target, np.identity(4))
    source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
    target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
    result_ransac = execute_global_registration(source_down, target_down,
                                                source_fpfh, target_fpfh,
                                                voxel_size)
    print(result_ransac)
    draw_registration_result(source_down, target_down, result_ransac.transformation)
    draw_registration_result(source_mesh, target_mesh, result_ransac.transformation)

if __name__ == '__main__':
    main()
I need to edit the geometry of intersecting polygons and I don't know how I can save modified geometry to a shapefile. Is it even possible?
from shapely.geometry import Polygon, shape
import matplotlib.pyplot as plt
import fiona
c = fiona.open('polygon23.shp', 'r')
d = fiona.open('polygon23.shp', 'r')
for poly in c.values():
    for poly2 in d.values():
        p_poly = shape(poly['geometry'])
        p_poly2 = shape(poly2['geometry'])
        intersect_polygons = p_poly.intersection(p_poly2)
        if type(intersect_polygons) == Polygon:
            intersect_polygons = p_poly.intersection(p_poly2).exterior.coords
            if p_poly.exterior.xy != p_poly2.exterior.xy:
                y_difference = abs(intersect_polygons[0][1]) - abs(intersect_polygons[2][1])
                coords_polygonB = p_poly2.exterior.coords[:]
                coords_polygonB[0] = (coords_polygonB[0][0], coords_polygonB[0][1] + (y_difference))
                coords_polygonB[1] = (coords_polygonB[1][0], coords_polygonB[1][1] + (y_difference))
                coords_polygonB[2] = (coords_polygonB[2][0], coords_polygonB[2][1] + (y_difference))
                coords_polygonB[3] = (coords_polygonB[3][0], coords_polygonB[3][1] + (y_difference))
                coords_polygonB[4] = (coords_polygonB[4][0], coords_polygonB[4][1] + (y_difference))
                p_poly2 = Polygon(coords_polygonB)
                x, y = p_poly.exterior.xy
                plt.plot(x, y)
                x, y = p_poly2.exterior.xy
                plt.plot(x, y)
                plt.show()
Removing intersections between many polygons is most likely a complex problem in general. Note that I used your method as the solver in my solution.
Answer
The answer to your question is yes. You can rectify the intersections between the polygons in your shp file; however, you need to create new Polygon objects, since you can't just change the exterior coordinates of an existing Polygon.
Store metadata and properties from the original shp file
The solution below writes the resulting polygon set to a new shp file. This requires us to store the metadata from the original shp file and pass it to the new one. We also need to store the properties of each polygon; I store these in a separate list, set_of_properties.
No need for two for loops
You don't need two for loops; just use combinations from the itertools standard library to loop through all possible polygon pairs. I use index combinations so that intersecting polygons can be replaced with new ones in place.
Outer do...while-loop
In unlucky cases, a rectification using your method may actually introduce new intersections. We can catch these and rectify them by running your solver repeatedly until there are no intersections left. This calls for a do...while loop, but there is no do...while loop in Python; it can, however, be emulated with a while loop (see the Solution below for the implementation).
Solution
from itertools import combinations
from shapely.geometry import Polygon, Point, shape, mapping
import matplotlib.pyplot as plt
import fiona
SHOW_NEW_POLYGONS = False
polygons, set_of_properties = [], []
with fiona.open("polygon23.shp", "r") as source:
    for line in source:
        polygons.append(shape(line["geometry"]))
        set_of_properties.append(line["properties"])
    meta = source.meta

# Materialise the index pairs so they can be re-used on every pass of the outer loop
poly_index_combinations = tuple(combinations(tuple(range(len(polygons))), 2))
while True:
    intersection_record = []
    for i_poly_a, i_poly_b in poly_index_combinations:
        poly_a, poly_b = polygons[i_poly_a], polygons[i_poly_b]
        if poly_a.exterior.xy == poly_b.exterior.xy:
            # print(f"The polygons have identical exterior coordinates:\n{poly_a} and {poly_b}\n")
            continue
        intersecting = poly_a.intersection(poly_b)
        if type(intersecting) != Polygon:
            continue
        intersecting_polygons = intersecting.exterior.coords
        if not intersecting_polygons:
            # print(f"No intersections between\n{poly_a} and {poly_b}\n")
            continue
        print("Rectifying intersection")
        y_diff = abs(intersecting_polygons[0][1]) - abs(intersecting_polygons[2][1])
        new_poly_b = Polygon((
            Point(float(poly_b.exterior.coords[0][0]), float(poly_b.exterior.coords[0][1] + y_diff)),
            Point(float(poly_b.exterior.coords[1][0]), float(poly_b.exterior.coords[1][1] + y_diff)),
            Point(float(poly_b.exterior.coords[2][0]), float(poly_b.exterior.coords[2][1] + y_diff)),
            Point(float(poly_b.exterior.coords[3][0]), float(poly_b.exterior.coords[3][1] + y_diff)),
            Point(float(poly_b.exterior.coords[4][0]), float(poly_b.exterior.coords[4][1] + y_diff))
        ))
        if SHOW_NEW_POLYGONS:
            x, y = poly_a.exterior.xy
            plt.plot(x, y)
            x, y = new_poly_b.exterior.xy
            plt.plot(x, y)
            plt.show()
        polygons[i_poly_b] = new_poly_b
        intersection_record.append(True)
    if not intersection_record:
        break

with fiona.open("new.shp", "w", **meta) as sink:
    for poly, properties in zip(polygons, set_of_properties):
        sink.write({
            "geometry": mapping(poly),
            "properties": properties
        })
I'm facing an issue when trying to use the sampleRectangle() function in GEE: it returns 1x1 arrays and I can't seem to find a workaround. Please see below a Python snippet in which I'm using an approach posted by Justin Braaten. I suspect there is something wrong with the geometry object I'm passing to the function, but at the same time I've tried several ways to check how this argument behaves and couldn't spot any major issue.
Can anyone give me a hand trying to understand what is happening?
Thanks!
import json
import ee
import numpy as np
import matplotlib.pyplot as plt
ee.Initialize()
point = ee.Geometry.Point([-55.8571, -9.7864])
box_l8sr = ee.Geometry(point.buffer(50).bounds())
box_l8sr2 = ee.Geometry.Polygon(box_l8sr.coordinates())
# print(box_l8sr2)
# Define an image.
# l8sr_y = ee.Image('LANDSAT/LC08/C01/T1_SR/LC08_038029_20180810')
oli_sr_coll = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
## Function to mask out clouds and cloud-shadows present in Landsat images
def maskL8sr(image):
    ## Bits 3 and 5 are cloud shadow and cloud, respectively.
    cloudShadowBitMask = (1 << 3)
    cloudsBitMask = (1 << 5)
    ## Get the pixel QA band.
    qa = image.select('pixel_qa')
    ## Both flags should be set to zero, indicating clear conditions.
    mask = qa.bitwiseAnd(cloudShadowBitMask).eq(0) \
             .And(qa.bitwiseAnd(cloudsBitMask).eq(0))
    return image.updateMask(mask)
l8sr_y = oli_sr_coll.filterDate('2019-01-01', '2019-12-31').map(maskL8sr).mean()
l8sr_bands = l8sr_y.select(['B2', 'B3', 'B4']).sampleRectangle(box_l8sr2)
print(type(l8sr_bands))
# Get individual band arrays.
band_arr_b4 = l8sr_bands.get('B4')
band_arr_b3 = l8sr_bands.get('B3')
band_arr_b2 = l8sr_bands.get('B2')
# Transfer the arrays from server to client and cast as np array.
np_arr_b4 = np.array(band_arr_b4.getInfo())
np_arr_b3 = np.array(band_arr_b3.getInfo())
np_arr_b2 = np.array(band_arr_b2.getInfo())
print(np_arr_b4.shape)
print(np_arr_b3.shape)
print(np_arr_b2.shape)
# Expand the dimensions of the images so they can be concatenated into 3-D.
np_arr_b4 = np.expand_dims(np_arr_b4, 2)
np_arr_b3 = np.expand_dims(np_arr_b3, 2)
np_arr_b2 = np.expand_dims(np_arr_b2, 2)
# # print(np_arr_b4.shape)
# # print(np_arr_b5.shape)
# # print(np_arr_b6.shape)
# # Stack the individual bands to make a 3-D array.
rgb_img = np.concatenate((np_arr_b2, np_arr_b3, np_arr_b4), 2)
# print(rgb_img.shape)
# # Scale the data to [0, 255] to show as an RGB image.
rgb_img_test = (255*((rgb_img - 100)/3500)).astype('uint8')
# plt.imshow(rgb_img)
plt.show()
# # # create L8OLI plot
# fig, ax = plt.subplots()
# ax.set(title = "Satellite Image")
# ax.set_axis_off()
# plt.plot(42, 42, 'ko')
# img = ax.imshow(rgb_img_test, interpolation='nearest')
I have the same issue. It seems to have something to do with .mean(), or any reduction of image collections for that matter.
One solution is to reproject after the reduction. For example, you could try adding "reproject" at the end:
l8sr_y = oli_sr_coll.filterDate('2019-01-01', '2019-12-31').map(maskL8sr).mean().reproject(crs = ee.Projection('EPSG:4326'), scale=30)
It should work.
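As a quick check (a sketch, not run here), you can re-run the sampling step from your script after the reprojection and look at the array shape:

l8sr_bands = l8sr_y.select(['B2', 'B3', 'B4']).sampleRectangle(box_l8sr2)
np_arr_b4 = np.array(l8sr_bands.get('B4').getInfo())
print(np_arr_b4.shape)  # should now be a few pixels per side instead of (1, 1)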
I'm working on a project at the moment where I am trying to create a Hilbert curve using the Python Imaging Library. I have created a function which generates new coordinates for the curve on each iteration and places them into lists, which I then want to be able to move, rotate and scale. I was wondering if anyone could give me some tips or a way to do this, as I am completely clueless. I am still working on a lot of the code.
#!/usr/bin/python
import Image, ImageDraw
import math
# Set the starting shape
img = Image.new('RGB', (1000, 1000))
draw = ImageDraw.Draw(img)
curve_X = [0, 0, 1, 1]
curve_Y = [0, 1, 1, 0]
combinedCurve = zip(curve_X, curve_Y)
draw.line((combinedCurve), fill=(220, 255, 250))
iterations = 5
# Start the loop
for i in range(0, iterations):
    # Make 4 copies of the curve
    copy1_X = list(curve_X)
    copy1_Y = list(curve_Y)
    copy2_X = list(curve_X)
    copy2_Y = list(curve_Y)
    copy3_X = list(curve_X)
    copy3_Y = list(curve_Y)
    copy4_X = list(curve_X)
    copy4_Y = list(curve_Y)
    # For copy 1, rotate it by 90 degrees clockwise
    # Then move it to the bottom left
    # For copy 2, move it to the top left
    # For copy 3, move it to the top right
    # For copy 4, rotate it by 90 degrees anticlockwise
    # Then move it to the bottom right
    # Finally, combine all the copies into a big list
    combinedCurve_X = copy1_X + copy2_X + copy3_X + copy4_X
    combinedCurve_Y = copy1_Y + copy2_Y + copy3_Y + copy4_Y
    # Make the initial curve equal to the combined one
    curve_X = combinedCurve_X[:]
    curve_Y = combinedCurve_Y[:]
    # Repeat the loop

# Scale it to fit the canvas
curve_X = [x * xSize for x in curve_X]
curve_Y = [y * ySize for y in curve_Y]
# Draw it with something that connects the dots
curveCoordinates = zip(curve_X, curve_Y)
draw.line((curveCoordinates), fill=(255, 255, 255))
img2=img.rotate(180)
img2.show()
Here is a solution working on matrices (which makes sense for this type of calculation; after all, 2D coordinates are just matrices with one column!).
Scaling is pretty easy: you just have to multiply each element of the matrix by the scale factor:
import copy

scaled = copy.deepcopy(original)
for i in range(len(scaled[0])):
    scaled[0][i] = scaled[0][i] * scaleFactor
    scaled[1][i] = scaled[1][i] * scaleFactor
Moving is pretty easy too: all you have to do is add the offset to each element of the matrix. Here's a method using matrix multiplication:
import numpy as np

def mult(matrix1, matrix2):
    # Matrix multiplication
    if len(matrix1[0]) != len(matrix2):
        # Check matrix dimensions
        print('Matrices must be m*n and n*p to multiply!')
    else:
        # Multiply if correct dimensions
        new_matrix = np.zeros((len(matrix1), len(matrix2[0])))
        for i in range(len(matrix1)):
            for j in range(len(matrix2[0])):
                for k in range(len(matrix2)):
                    new_matrix[i][j] += matrix1[i][k] * matrix2[k][j]
        return new_matrix
Then create your translation matrix
import numpy as np
TranMatrix = np.zeros((3,3))
TranMatrix[0][0]=1
TranMatrix[0][2]=Tx
TranMatrix[1][1]=1
TranMatrix[1][2]=Ty
TranMatrix[2][2]=1
translated=mult(TranMatrix, original)
And finally, rotation is a tiny bit trickier (do you know your angle of rotation?):
import numpy as np
from math import cos, sin
RotMatrix = np.zeros((3,3))
RotMatrix[0][0]=cos(Theta)
RotMatrix[0][1]=-1*sin(Theta)
RotMatrix[1][0]=sin(Theta)
RotMatrix[1][1]=cos(Theta)
RotMatrix[2][2]=1
rotated=mult(RotMatrix, original)
Some further reading on what I've done:
http://en.wikipedia.org/wiki/Transformation_matrix#Affine_transformations
http://en.wikipedia.org/wiki/Homogeneous_coordinates
http://www.essentialmath.com/tutorial.htm (concerning all the algebra transformations)
So basically, it should work if you insert those operations inside your code, multiplying your vectors by the rotation and translation matrices.
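For example, here is a small sketch (my own variable names, reusing the starting square from your script) that puts the points into homogeneous coordinates and chains the matrices:

import numpy as np
from math import cos, sin, pi

curve_X = [0, 0, 1, 1]
curve_Y = [0, 1, 1, 0]
# Homogeneous coordinates: one column per point, a row of ones at the bottom
points = np.array([curve_X, curve_Y, [1.0] * len(curve_X)])

theta = pi / 2  # example: 90 degrees anticlockwise
rot = np.array([[cos(theta), -sin(theta), 0.0],
                [sin(theta),  cos(theta), 0.0],
                [0.0,         0.0,        1.0]])
tran = np.array([[1.0, 0.0, 2.0],   # example offsets Tx=2, Ty=3
                 [0.0, 1.0, 3.0],
                 [0.0, 0.0, 1.0]])
scl = np.array([[100.0, 0.0,   0.0],  # example scale factor of 100
                [0.0,   100.0, 0.0],
                [0.0,   0.0,   1.0]])

# Matrices compose right to left: rotate first, then translate, then scale
transformed = scl @ tran @ rot @ points
new_X, new_Y = transformed[0].tolist(), transformed[1].tolist()
print(list(zip(new_X, new_Y)))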
EDIT
I just found this Python library that seems to provide all types of transformations: http://toblerity.org/shapely/index.html
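For instance, a short sketch of that route using shapely.affinity, again on the starting square (purely illustrative):

from shapely.geometry import LineString
from shapely import affinity

curve = LineString([(0, 0), (0, 1), (1, 1), (1, 0)])
moved = affinity.translate(curve, xoff=2.0, yoff=3.0)
rotated = affinity.rotate(curve, 90, origin=(0, 0))  # angle in degrees by default
scaled = affinity.scale(curve, xfact=100, yfact=100, origin=(0, 0))
print(list(rotated.coords))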