Turn samples into Mesh - python

I want to load a mesh file (.obj), use the trimesh.sample.sample_surface_even() function to get some points on the surface, turn the resulting points back into a mesh, and save that as an .obj file.
My problem is that I don't know how to turn the samples back into a mesh that can be saved. Can somebody tell me, step by step, what I should do to achieve that?
Here is my code so far:
import numpy as np
import trimesh
mesh = trimesh.load_mesh('mesh10.obj')
sampledmesh = trimesh.sample.sample_surface_even(mesh, 500)
#? How to turn sampledmesh back into a mesh?
sampledmesh.export('mesh10_export.obj')

You can use the submesh function on the sampled face indices, which is the second element in the returned tuple:
sampledmesh = trimesh.sample.sample_surface_even(mesh,500)
sampled_submesh = mesh.submesh([sampledmesh[1]])[0]
submesh returns an array of meshes, but here we just have one, so we take the first mesh.


Morphology with OpenCV on Xarray data array is shifting image

I am having an issue using the morphology functions in OpenCV on xarray data with python.
I have generated a list of binary xarray DataArrays (I understand that I probably should make these just another dimension of the dataset, but I haven't gotten there yet).
I am trying to run morphological closing on this data. In the code snippet below, I extract the actual data from the datarray as a numpy array, then try to run the closing on that. My understanding of OpenCV is that when it reads an image in, it essentially translates it into a numpy array anyway, so I thought this might work. I should mention, the data is originally a geotiff, and I am reading it in using rioxarray, if that makes a difference.
Essentially, what seems to be happening is that the data is being shifted by the size of the kernel for each iteration. I have confirmed this by reading the same slice of the data in the array before and after running the closing operation. It also throws an error that "Axis Limits cannot be NaN or Inf". The output numpy array is the same size as the original, but parts of it have been cut off in one corner, and in the opposite corner it appears that a No Data value has been added (the value is -1.798e+308).
I am not sure why this is doing this. When I read an image in using imread, the same process seems to work just fine from what I can tell. I also cannot tell if the closing operation is even doing what it is supposed to be doing either. First glance, it only seems like it is shifting it. Code and image below.
import rioxarray as rxr
import xarray as xr
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
kSize = 15 #Kernel size
iters = 2 #number of iterations
#Binary list is a list of several binary images generated using xarray.where function
binaryCopy = binaryList[0].copy() #Copy one datarray from list of datarrays...prob should just put this all into one xr dataset, but oh well
inAttrs = binaryCopy.attrs.copy() #Copy attributes to read back in at the end
inDims = binaryCopy.dims
inCoords = binaryCopy.coords
kern = cv.getStructuringElement(cv.MORPH_RECT,(kSize,kSize)) #Create square kernel
print(binaryCopy.data.shape) #Print shape of data array (appears to just be a numpy array)
#I had tried this v first, but it did not work, so I tried the steps individually...same issue
#closed = cv.morphologyEx(binaryCopy.data, cv.MORPH_CLOSE, kern)
dilated = cv.dilate(binaryCopy.data, kern, iterations=iters) #Third positional arg of cv.dilate is dst, so pass iterations by keyword
closed = cv.erode(dilated, kern, iterations=iters)
newBinaryArray = xr.DataArray(closed,
                              coords=inCoords,
                              dims=inDims,
                              attrs=inAttrs)
fig, ax = plt.subplots(nrows=1, ncols=2, sharey=True)
#Plot the original first
binaryList[0].plot(ax=ax[0])
#Plot the closed data
newBinaryArray.plot(ax=ax[1])
plt.rcParams['figure.figsize'] = (15,8)
Before (left) and after (right) I run morphological closing. Notice the blue bar on the bottom and left of the image to right. This appears to be a no data value (-1.798e+308)
OK, so it looks like my issue was the anchor. If you set the argument anchor=(0,0), that seems to prevent the shift in the data. (I thought this should be (-1,-1), since other places on the internet seem to indicate that puts the anchor in the middle, but (0,0) appears to work better for me.)
Also, when I added borderType=cv.BORDER_REPLICATE (see here) as an argument to the morphologyEx function (I just did it with that one call instead of separate dilation and erosion), it stopped the extra strip that was inserted when the data shifted from being filled with an enormous "No Data" value; it uses the data value at the border instead.

How do I draw an image from an array in pyglet

I am working on a voxel engine based on raycasting, but I need a way to display my frame, which is a nested list (100x100x3). The only idea I had was to create an image in pyglet using pyglet.image.create(width, height) and then modify its data using image.set_data("RGB", width*3, data). My problem is how to organize data created like this: numpy.zeros([100,100,3]) so it can be used in set_data(). I have tried data.tobytes() and ctypes, but I always get a glitchy image with random particles.
See numpy.zeros. numpy.zeros([100,100,3]) generates an array of floats. You should generate an array of uint8:
numpy.zeros([100,100,3])
numpy.zeros([100,100,3], dtype = numpy.uint8)

Voxelization of STL-file; writing into np.array

I would like to voxelise a .stl file and write it into an np.array. The resolution of the voxels should be adjustable.
Here is my code for this:
import open3d as o3d

component_path = r"C:\Users\User\documents\components\Test_1.stl"
mesh = o3d.io.read_triangle_mesh(component_path)
voxel_grid = o3d.geometry.VoxelGrid.create_from_triangle_mesh(mesh, voxel_size = 3)
ply_path = "voxel.ply"
o3d.io.write_voxel_grid(ply_path, voxel_grid, True,True,True)
pcd = o3d.io.read_point_cloud(ply_path)
list_path = "list.xyz"
o3d.io.write_point_cloud(list_path, pcd)
Then I read the coordinate points from the list, write them into a 3D array and plot them. When plotting, the border is not displayed for certain voxel sizes, as can be seen in the image (although it is present in the original). Is there a solution so that this doesn't happen, no matter what the voxel size is?
voxelized picture with missing border
In addition, the voxel size changes the maximum dimension, so the component originally has three times the length it is shown with here. How can this be adjusted? (If I just multiply by a factor, the voxels stay small but the distances are pulled apart.)
Is there perhaps a more reasonable way to write a voxelisation of a .stl file and put the centers of voxels into an np.array?
If anyone ever has the same problem and is looking for a solution:
This project worked for me: GitHub: stl-to-voxel
The model is then also filled. If the maximum dimension is known, you can determine the exact voxel size via the resolution.
Here is some code:
import stl_reader
import stltovoxel
import numpy as np
import copy
import os
import sys

input = r"C:\Users\user\Example.stl"
output = r"C:\Users\user\Test.xyz"
resolution = 50  # Resolution: into how many layers the model should be divided

stltovoxel.doExport(input, output, resolution)
Afterwards, you can read the coordinates from the list, write them into an array and process them further (quite normally).
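Reading the exported list back into an np.array is then a one-liner with numpy.loadtxt (this assumes one whitespace-separated x y z triple per line; pass delimiter=',' instead if your export is comma-separated):

```python
import numpy as np

# Tiny stand-in .xyz file so the snippet is self-contained
with open('Test.xyz', 'w') as f:
    f.write('0 0 0\n1 0 0\n0 1 0\n')

points = np.loadtxt('Test.xyz')  # shape (N, 3), one row per voxel centre
```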

How to extract a profile of value from a raster along a given line?

How to extract a profile of values from a raster along a given shapefile line in Python?
I am struggling to find a method to extract a profile of values (e.g. a topographic profile) from a raster (GeoTIFF). The rasterio library has a method to clip/extract values from a raster based on a polygon, but I cannot find an equivalent method for a line shapefile.
There is a basic method with scipy, but it does not inherently preserve geographic information the way a method based on a higher-level library like rasterio could.
In other words, I am looking for an equivalent in Python of what the tool Terrain Profile in QGIS offers.
Thanks
This is a bit different than extracting for a polygon, as you want to sample every pixel touched by the line, in the order they are touched (the polygon approaches don't care about pixel order).
It looks like it would be possible to adapt this approach to use rasterio instead. Given a line read from a shapefile using geopandas or fiona as a shapely object, you use the endpoints to derive a new equidistant projection that you use as dst_crs in a WarpedVRT and read pixel values from that. It looks like you would need to calculate the length of your line in terms of the number of pixels you want sampled, this is the width parameter of the WarpedVRT.
This approach may need to be adapted further if your line is not an approximately straight line between the endpoints.
If you just want the raw pixel values under the line, you should be able to use a mask in rasterio, or rasterize directly, for each line. You may want to pass all_touched=True in the case of lines.
I had a similar problem and found a solution which works for me. The solution uses shapely to sample points on a line/lines and then accesses respective values from the GeoTiff, therefore the extracted profile follows the direction of the line. Here is the method that I ended up with:
def extract_along_line(xarr, line, n_samples=256):
    profile = []
    for i in range(n_samples):
        # get next point on the line
        point = line.interpolate(i / n_samples - 1., normalized=True)
        # access the nearest pixel in the xarray
        value = xarr.sel(x=point.x, y=point.y, method="nearest").data
        profile.append(value)
    return profile
Here is a working example with data from the copernicus-dem database and the line is the diagonal of the received tile:
import rioxarray
import shapely.geometry
import matplotlib.pyplot as plt

sample_tif = ('https://elevationeuwest.blob.core.windows.net/copernicus-dem/'
              'COP30_hh/Copernicus_DSM_COG_10_N35_00_E138_00_DEM.tif')

# Load xarray
tile = rioxarray.open_rasterio(sample_tif).squeeze()

# Create a line (here it is the diagonal of the tile)
line = shapely.geometry.MultiLineString([[
    [tile.x[-1], tile.y[-1]],
    [tile.x[0], tile.y[0]]]])

# Use the method from above to extract the profile
profile = extract_along_line(tile, line)

plt.plot(profile)
plt.show()

Consecutive update figure matplotlib

I am plotting pairs of images from two different folders; for each pair I plot one after the other using plt.imshow() and save the figure after each loop.
To speed this up I want to use .set_data so that I only update the figure instead of redrawing everything. But since I have to call it twice, it seems only the last call is used and no update is done for the first one. I am doing something like this:
import numpy as np
import matplotlib.pyplot as plt

data = np.arange(9).reshape(3,3)*10
im1 = plt.imshow([data]*4) # simulating an rgb image that I read from file
mask = np.ma.array(np.arange(9).reshape(3,3)*10, mask=np.eye(3)) # 1D map
im2 = plt.imshow(mask, vmin=2, vmax=7)
for i in range(10):
    data = np.arange(9).reshape(3,3)*10 + np.random.randint(0, 100, size=(3,3))
    im1.set_data([data]*4) # simulating another rgb image read from file
    mask = np.ma.array(np.arange(9).reshape(3,3)*10 + np.random.randint(0, 100, size=(3,3)), mask=np.eye(3)) # 1D map
    im2.set_data(mask)
    plt.savefig("{}.png".format(i))
With this code, only the mask is updated, not the background image. Is there a way to make matplotlib apply both updates of the data?
