The idea behind this is to create a detection area for a security camera. Currently, I know how to find and use the modelview matrix data, as shown below in the function matrixTransformation. The value of the matrix should then be calculated for each increment of the security camera's rotation in the initialization function.
I would like to know how to find the coordinates of the edges of each security camera (a cylinder shape) using the matrix. I am using Pygame 1.9.2, Python 3.5 and PyOpenGL 3.1.0.
Picture of coordinates on the security camera which need to be calculated
def matrixTransformation(x, y, z):
    matrix = (GLfloat * 16)()
    glGetFloatv(GL_MODELVIEW_MATRIX, matrix)
    xp = matrix[0] * x + matrix[4] * y + matrix[8] * z + matrix[12]
    yp = matrix[1] * x + matrix[5] * y + matrix[9] * z + matrix[13]
    zp = matrix[2] * x + matrix[6] * y + matrix[10] * z + matrix[14]
    wp = matrix[3] * x + matrix[7] * y + matrix[11] * z + matrix[15]
    xp /= wp
    yp /= wp
    zp /= wp
    return xp, yp, zp
def init():
    securityCameraRotation = 380
    glEnable(GL_DEPTH_TEST)
    multipleRotations = 0
    result = []
    glPushMatrix()
    glTranslatef(-4, 1.5, 5.5)
    glRotate(315, 1, 1, 1)
    while True:
        if securityCameraRotation >= 380:
            clockwise = True
            multipleRotations += 1
        elif securityCameraRotation <= 310:
            clockwise = False
        glRotate(securityCameraRotation, 0, 1, 0)
        # append the transformed coordinates to result
        if clockwise == True:
            securityCameraRotation -= 0.2
        elif clockwise == False:
            securityCameraRotation += 0.2
        if multipleRotations > 1:
            # end the loop when one complete sweep between 310 and 380 has occurred
            break
    glPopMatrix()
    return result
def securityCamera(radius, height, num_slices, frontCircleColour, backCircleColour, tubeColour):
    r = radius
    h = height
    n = float(num_slices)
    circle_pts = []
    for i in range(int(n) + 1):
        angle = 2 * math.pi * (i / n)
        x = r * math.cos(angle)
        y = r * math.sin(angle)
        pt = (x, y)
        circle_pts.append(pt)
    glBegin(GL_TRIANGLE_FAN)  # drawing the back circle
    glColor(backCircleColour)
    glVertex(0, 0, h/2.0)
    for (x, y) in circle_pts:
        z = h/2.0
        glVertex(x, y, z)
    glEnd()
    glBegin(GL_TRIANGLE_FAN)  # drawing the front circle
    glColor(frontCircleColour)
    glVertex(0, 0, h/2.0)
    for (x, y) in circle_pts:
        z = -h/2.0
        glVertex(x, y, z)
    glEnd()
    glBegin(GL_TRIANGLE_STRIP)  # draw the tube
    glColor(tubeColour)
    for (x, y) in circle_pts:
        z = h/2.0
        glVertex(x, y, z)
        glVertex(x, y, -z)
    glEnd()
In OpenGL, there are a number of transformations that occur. First, we treat the object as if it is in model space, where the object is centered at the origin and we draw the mesh (in this case, the cylinder). Then we apply a model matrix transform (where we translate/rotate/scale our cylinder) and a view matrix transform (where we shift our scene relative to the imaginary camera). Finally, we apply the projection matrix, which adds the "3D perspective" to our scene, built with gluPerspective or some more modern means. All of these matrix multiplications basically put the coordinates of your 3D models in the right place on your 2D screen (sort of; more detailed info here).
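As a rough numpy sketch of that chain (the identity matrices here are just placeholders for whatever model, view and projection matrices you actually build):
import numpy as np

M = np.eye(4)  # model matrix: translate/rotate/scale the cylinder
V = np.eye(4)  # view matrix: shift the scene relative to the imaginary camera
P = np.eye(4)  # projection matrix: e.g. built from gluPerspective parameters

p_model = np.array([0.0, 0.0, 1.0, 1.0])  # a homogeneous model-space point
p_clip = P @ V @ M @ p_model              # apply the transforms in order
p_ndc = p_clip[:3] / p_clip[3]            # perspective divide -> normalized device coordinates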
In terms of the model space, the yellow points you highlighted in your picture are actually just (0, 0, -h/2.0) and (0, 0, h/2.0). This is fine if you are just drawing your yellow points with glBegin(GL_POINTS) in your securityCamera function. However, you are probably more interested in calculating where these yellow points are located in world space (that is, after multiplication by the modelview matrix).
One simple way to get these world space coordinates is to multiply the yellow points' model space coordinates by the modelview matrix. Use your matrixTransformation function on (0, 0, -h/2.0) and (0, 0, h/2.0) and that should work!
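For example, a minimal sketch (assuming it is called while the camera's translate/rotate transforms are still on the modelview stack, and that h is the cylinder height passed to securityCamera):
# world-space positions of the two end-face centres of the cylinder
back_point  = matrixTransformation(0, 0,  h / 2.0)
front_point = matrixTransformation(0, 0, -h / 2.0)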
Alternatively, as I hinted at in the comments, matrices like your modelview matrix actually contain useful information that results from the accumulated multiplication of translation, rotation, and scaling matrices. I pointed to this picture:
Each of these column axes actually corresponds to a row of your numpy array (which is interesting, since numpy is row-major while OpenGL is column-major). You can get the axes describing how your model is oriented in world space with the following snippet:
import numpy as np

mv_matrix = glGetFloatv(GL_MODELVIEW_MATRIX)  # returned by PyOpenGL as a 4x4 numpy array
left, up, forward = [v / np.linalg.norm(v) for v in mv_matrix[:3, :3]]  # normalized axis rows
position = mv_matrix[3, :3]  # translation row, left unnormalized so it keeps the actual position
Note that I cut off the last column of the array and normalized each of the axis rows (the position row is left unnormalized so it keeps the actual translation). If you take the forward array you get from there, you get the world-space direction in which that particular camera is pointing, while the position array gives you the world-space position of the center (the model-space equivalent of (0, 0, 0)) of the security camera. Multiply the normalized forward array by h/2.0 and add that to position and you should get the world-space position of the front of your security camera. This is not too useful for rendering to the screen, but it can be used for "behind the scenes" math for intersecting objects with the security camera.
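For instance, a minimal sketch (reusing h from securityCamera and the arrays from the snippet above):
# world-space point at the front of the camera, along its viewing direction
front_world = position + forward * (h / 2.0)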
As a side note, I realized I made a sign error in this line for the cylinder drawing code:
glBegin(GL_TRIANGLE_FAN) #drawing the front circle
glColor(frontCircleColour)
glVertex(0, 0, -h/2.0)#this was + in the original code!!!
Let me know if this helps you make sense of my earlier comments!
I am using DeepSORT to gather the x and y coordinates of an object. I am using a homography to get a zoomed-in bird's-eye view of a specific portion of the video. I know the real-world distances of the area I zoom into, and I want to know the real-world position and speed of the object as it moves through the identified area.
Here is my current code for the homography:
import cv2
import numpy as np

# points for tracking window (corner pixel coordinates)
pt_A = [x_0, y_0]
pt_B = [x_1, y_1]
pt_C = [x_2, y_2]
pt_D = [x_3, y_3]
# euclidean distances between each point
width_AD = np.sqrt(((pt_A[0] - pt_D[0]) ** 2) + ((pt_A[1] - pt_D[1]) ** 2))
width_BC = np.sqrt(((pt_B[0] - pt_C[0]) ** 2) + ((pt_B[1] - pt_C[1]) ** 2))
max_width = max(int(width_AD), int(width_BC))
height_AB = np.sqrt(((pt_A[0] - pt_B[0]) ** 2) + ((pt_A[1] - pt_B[1]) ** 2))
height_CD = np.sqrt(((pt_C[0] - pt_D[0]) ** 2) + ((pt_C[1] - pt_D[1]) ** 2))
max_height = max(int(height_AB), int(height_CD))
input_pts = np.float32([pt_A, pt_B, pt_C, pt_D])
output_pts = np.float32([[0, 0], [0, max_height - 1], [max_width - 1, max_height - 1], [max_width - 1, 0]])
# Compute the perspective transform h_transform
h_transform = cv2.getPerspectiveTransform(input_pts, output_pts)
This h_transform warps the video into the orientation I would like when used with warpPerspective. I now want to know how I can apply the transformation to the tracked points of objects moving through the area, and how to measure the position and speed of the objects in m/s using the known length and width of the area I zoom into.
This is one thing I tried for converting the points of each object into the new plane; would this be correct?
# Finally, any 2D point in rectangle A can be found in rectangle B using this operation:
point_in_A = np.array([x, y, 1.0])   # homogeneous pixel coordinate in rectangle A
temp = h_transform @ point_in_A      # temp will be (x', y', scale)
# using the temp values below:
xy_in_B = (temp[0] / temp[2], temp[1] / temp[2])
Once this is correct, how do I convert these pixel coordinates into real-world coordinates? I believe the above transformation would place the coordinates onto a plane whose corners are the corners of the selected area, with the origin in the bottom-left corner.
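For what it's worth, here is a minimal sketch of one way to do this (my own assumption, not an established recipe: real_width_m, real_height_m, fps and to_real_world are hypothetical names, and cv2.perspectiveTransform is used to apply the homography to individual tracked points):
import cv2
import numpy as np

# hypothetical known real-world size of the selected area, in metres
real_width_m, real_height_m = 10.0, 5.0

def to_real_world(px, py):
    """Map a tracked pixel (px, py) from the original frame into metres in the bird's-eye plane."""
    pts = np.float32([[[px, py]]])                       # shape (1, 1, 2), as cv2 expects
    warped = cv2.perspectiveTransform(pts, h_transform)[0, 0]
    # scale the warped pixel coordinates to metres using the known area dimensions
    x_m = warped[0] * real_width_m / max_width
    y_m = warped[1] * real_height_m / max_height
    return x_m, y_m

# speed: distance between positions in consecutive frames, divided by the frame interval
(x1, y1) = to_real_world(430, 310)   # example tracked point, frame N
(x2, y2) = to_real_world(450, 318)   # example tracked point, frame N+1
fps = 30.0                           # assumed video frame rate
speed_m_s = np.hypot(x2 - x1, y2 - y1) * fps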
I have confused myself trying to wrap my head around this so I apologize if this is not a viable question or is very confusing!
I want to implement Paint.NET's polar inversion effect in Python.
If you don't know Paint.NET's polar inversion effect, basically, it transforms this (I created the image using Python):
To this:
After a bit of Google searching I found this:
protected override void InverseTransform(ref WarpEffectBase.TransformData data)
{
    double x = data.X;
    double y = data.Y;
    double invertDistance = DoubleUtil.Lerp(1.0, base.DefaultRadius2 / (x * x + y * y), this.amount);
    data.X = x * invertDistance;
    data.Y = y * invertDistance;
}
Source
After a bit more Google searching I found this:
float Lerp(float firstFloat, float secondFloat, float by)
{
    return firstFloat * (1 - by) + secondFloat * by;
}
Source
So putting the pieces together, this is the transformation that needs to be applied to every pixel, implemented in Python:
def lerp(x, y, by):
    return x * (1 - by) + y * by

def transform_xy(x, y, width, height):
    cx = width / 2
    cy = height / 2
    return x - cx, cy - y

def base_radius_squared(width, height):
    radius = min(width, height) / 2
    return radius ** 2

def polar_inversion(x, y, radius, strength):
    invertDistance = lerp(1, radius / (x**2 + y**2), strength)
    return x * invertDistance, y * invertDistance
Strength is a float between -4 and 4 (inclusive). x and y are not the raw pixel coordinates of the image: the origin is not at the upper-left corner, and the y axis does not point downwards.
Instead, x and y are relative to the center of transformation: the origin is at the center of transformation (which defaults to the center of the image), and the y axis points upwards. I just want to clarify the coordinate system here.
So how do I apply the transformation to every pixel of an image as efficiently as possible, without using a for loop to iterate over every pixel? How do I apply the transformation in a vectorized way?
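For reference, here is a minimal vectorized sketch of my own (an assumption, not code from the original effect): since InverseTransform presumably maps each output pixel back to the source location it should sample, you can build a grid of centered output coordinates, apply the inversion to the whole grid at once, and then sample the source image with scipy.ndimage.map_coordinates:
import numpy as np
from scipy.ndimage import map_coordinates

def polar_inversion_image(img, strength=1.0):
    """Apply the polar inversion to a 2D (grayscale) array without Python-level pixel loops."""
    height, width = img.shape
    cx, cy = width / 2, height / 2
    radius2 = (min(width, height) / 2) ** 2

    # centered coordinates for every output pixel (y axis pointing up)
    ys, xs = np.mgrid[0:height, 0:width]
    x = xs - cx
    y = cy - ys

    # vectorized inverse transform: where does each output pixel sample from?
    r2 = np.maximum(x**2 + y**2, 1e-9)                    # avoid division by zero at the center
    invert = (1 - strength) + strength * (radius2 / r2)   # lerp(1, radius2/r2, strength)
    src_x, src_y = x * invert, y * invert

    # convert back to array (row, column) indices and sample the source image
    rows = cy - src_y
    cols = src_x + cx
    return map_coordinates(img, [rows, cols], order=1, mode='constant', cval=0)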
I'm working with 3D images and have to rotate them according to Euler angles (phi, psi, theta) in the 'zxz' convention (these Euler angles are part of a dataset, so I have to use that convention). I found the function scipy.ndimage.rotate, which seems useful in that regard.
arrayR = scipy.ndimage.rotate(array , phi, axes=(0,1), reshape=False)
arrayR = scipy.ndimage.rotate(arrayR, psi, axes=(1,2), reshape=False)
arrayR = scipy.ndimage.rotate(arrayR, the, axes=(0,1), reshape=False)
Sadly, this does not do what is intended. Here is why:
Definition:
In the z-x-z convention, the x-y-z frame is rotated three times: first
about the z-axis by an angle phi; then about the new x-axis by an
angle psi; then about the newest z-axis by an angle theta.
However, with the above code the rotations are always performed with respect to the original axes, which is why the obtained rotations are not correct. Does anyone have a suggestion for obtaining the correct rotations, as explained in the definition?
In other words, in the present 'zxz' convention the rotations are intrinsic (rotations about the axes of the rotating coordinate system XYZ, attached to the moving body, which changes its orientation after each elemental rotation). If I use the above code, the rotations are extrinsic (rotations about the axes xyz of the original coordinate system, which is assumed to remain motionless). I need a way of doing intrinsic rotations in Python.
I found a satisfying solution by following this link: https://nbviewer.jupyter.org/gist/lhk/f05ee20b5a826e4c8b9bb3e528348688
This method uses np.meshgrid and scipy.ndimage.map_coordinates. The linked notebook uses a third-party library for generating the rotation matrix, whereas I use scipy.spatial.transform.Rotation. This class allows both intrinsic and extrinsic rotations to be defined: see the description of scipy.spatial.transform.Rotation.from_euler.
Here is my function:
import numpy as np
from scipy.spatial.transform import Rotation as R
from scipy.ndimage import map_coordinates

# Rotates a 3D image around the image center
# INPUTS
#   array: 3D numpy array
#   orient: list of Euler angles (phi, psi, the)
# OUTPUT
#   arrayR: rotated 3D numpy array
# by E. Moebel, 2020
def rotate_array(array, orient):
    phi = orient[0]
    psi = orient[1]
    the = orient[2]

    # create meshgrid
    dim = array.shape
    ax = np.arange(dim[0])
    ay = np.arange(dim[1])
    az = np.arange(dim[2])
    coords = np.meshgrid(ax, ay, az)

    # stack the meshgrid to position vectors, center them around 0 by subtracting dim/2
    xyz = np.vstack([coords[0].reshape(-1) - float(dim[0]) / 2,  # x coordinate, centered
                     coords[1].reshape(-1) - float(dim[1]) / 2,  # y coordinate, centered
                     coords[2].reshape(-1) - float(dim[2]) / 2]) # z coordinate, centered

    # create transformation matrix
    r = R.from_euler('zxz', [phi, psi, the], degrees=True)
    mat = r.as_matrix()

    # apply transformation
    transformed_xyz = np.dot(mat, xyz)

    # extract coordinates
    x = transformed_xyz[0, :] + float(dim[0]) / 2
    y = transformed_xyz[1, :] + float(dim[1]) / 2
    z = transformed_xyz[2, :] + float(dim[2]) / 2

    x = x.reshape((dim[1], dim[0], dim[2]))
    y = y.reshape((dim[1], dim[0], dim[2]))
    z = z.reshape((dim[1], dim[0], dim[2]))  # reason for the strange ordering: see next line

    # the coordinate system seems to be strange, it has to be ordered like this
    new_xyz = [y, x, z]

    # sample
    arrayR = map_coordinates(array, new_xyz, order=1)
    return arrayR
Note:
You can also use this function for intrinsic rotations; simply adapt the first argument of 'from_euler' to your Euler convention. In that case, you obtain results equivalent to those in my first post (using scipy.ndimage.rotate). However, I noticed that the present code is about 3x faster (0.01 s for a 40^3 volume) than scipy.ndimage.rotate (0.03 s for a 40^3 volume).
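For reference (my own note, based on SciPy's documented convention that uppercase axis letters denote intrinsic rotations and lowercase letters denote extrinsic ones), the adaptation looks like this:
from scipy.spatial.transform import Rotation as R

# phi, psi, the are the same Euler angles as above (in degrees)
r_extrinsic = R.from_euler('zxz', [phi, psi, the], degrees=True)  # rotations about the fixed axes
r_intrinsic = R.from_euler('ZXZ', [phi, psi, the], degrees=True)  # rotations about the rotating axes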
Hope this will help someone!
There seems to be a bit of confusion about the "axes" parameter in your first post. To do a rotation about the x axis, the plane of rotation would be the yz plane, which means your "axes" parameter should be set to (1,2). Also, the first and the third rotations are presumably about the x and z axes, but both of your rotations are in the xy plane. Could these possibly be the reasons behind the discrepancies in your answers? I am not convinced by your explanation about the new and original axes. The independent calls to the "rotate" function do not have access to the old data in any form or shape; they only see the new axes and the rotated array.
I checked the code at https://nbviewer.jupyter.org/gist/lhk/f05ee20b5a826e4c8b9bb3e528348688
There is a minor bug. The tested image is square, but with a rectangular image it runs into problems. Below are corrected versions for 2D and 3D rotations (note that the Euler angle sequence used in my example is 'ZYZ'; you should define this before using it):
import numpy as np
from scipy.spatial.transform import Rotation as R
from scipy.ndimage import map_coordinates

def rotate_array_2D(array, orient):
    # create a transformation matrix
    angle = orient / 180. * np.pi
    c = np.cos(angle)
    s = np.sin(angle)
    mat = np.array([[c, s], [-s, c]])

    # create meshgrid
    dim = array.shape
    ax = np.arange(dim[0])
    ay = np.arange(dim[1])
    coords = np.meshgrid(ax, ay)

    # stack the meshgrid to position vectors, center them around 0 by subtracting dim/2
    xy = np.vstack([coords[0].reshape(-1) - float(dim[0]) / 2,  # x coordinate, centered
                    coords[1].reshape(-1) - float(dim[1]) / 2]) # y coordinate, centered

    # apply transformation
    transformed_xy = np.dot(mat, xy)

    # extract coordinates
    x = transformed_xy[0, :] + float(dim[0]) / 2
    y = transformed_xy[1, :] + float(dim[1]) / 2
    x = x.reshape((dim[1], dim[0]))
    y = y.reshape((dim[1], dim[0]))
    new_xy = [x, y]

    # sample
    arrayR = map_coordinates(array, new_xy, order=1).T
    return arrayR
def rotate_array_3D(array, orient):
    rot = orient[0]
    tilt = orient[1]
    phi = orient[2]

    # create meshgrid
    dim = array.shape
    ax = np.arange(dim[0])
    ay = np.arange(dim[1])
    az = np.arange(dim[2])
    coords = np.meshgrid(ax, ay, az)

    # stack the meshgrid to position vectors, center them around 0 by subtracting dim/2
    xyz = np.vstack([coords[0].reshape(-1) - float(dim[0]) / 2,  # x coordinate, centered
                     coords[1].reshape(-1) - float(dim[1]) / 2,  # y coordinate, centered
                     coords[2].reshape(-1) - float(dim[2]) / 2]) # z coordinate, centered

    # create transformation matrix
    r = R.from_euler('ZYZ', [rot, tilt, phi], degrees=True)
    mat = r.as_matrix()

    # apply transformation
    transformed_xyz = np.dot(mat, xyz)

    # extract coordinates
    x = transformed_xyz[0, :] + float(dim[0]) / 2
    y = transformed_xyz[1, :] + float(dim[1]) / 2
    z = transformed_xyz[2, :] + float(dim[2]) / 2
    x = x.reshape((dim[1], dim[0], dim[2]))
    y = y.reshape((dim[1], dim[0], dim[2]))
    z = z.reshape((dim[1], dim[0], dim[2]))  # I tested the rotation in 2D and this strange ordering can be explained
    new_xyz = [x, y, z]
    arrayR = map_coordinates(array, new_xyz, order=1).T
    return arrayR
Ray Casting Algorithm
MandelBulb Ray Casting Algorithm Python Example
So, if I understand correctly, the ray casting algorithm requires that an observer be located external to the 3D fractal, at which point vectors are drawn from the observer toward a point on the plane that is normal to the vector and intersects the origin.
It would seem to me that this would either severely limit the rendered view of the fractal or require stereoscopic 3D reconstruction of the fractal using multiple observer positions (which seems difficult to me). Additionally, no information can be gathered regarding the internal structure of the fractal.
Other Algorithms
Alternatively, Direct Volume Rendering seems intuitive enough; however, it is computationally expensive and potentially inefficient in and of itself. Indirect Volume Rendering using an algorithm such as marching cubes might also involve a bit of a learning curve, it seems.
Somewhere in the PDF of the 2nd link, it talks about cut-plane views in order to see slices of the fractal.
Question:
Why not use cut planes as a rendering method?
1) Using a modified ray tracing algorithm, say we put the observer at point Q at the origin (0, 0, 0).
2) Let us then emit rays from the origin toward the incident plane, spanned by y & z point combinations, that is slicing the fractal.
3) Calculate the distance to the fractal surface using the algorithm in the 1st link. If the x component of the computed distance is within a certain tolerance dx of the slicing plane, then the y and z coordinates, along with the x value of the slicing plane, are stored as the x, y, z coordinates. These coordinates now represent the surface at that specific slice in x.
4) Let us say that the slicing plane has one degree of freedom in the x direction. By moving the plane along its degree of freedom, we can obtain yet another set of x, y, z coordinates for a given slice.
5) The final result is a calculable surface generated by the point cloud created in the previous steps.
6) Additionally, the degree of freedom of the slicing plane can be altered to create another point cloud, which can then be verified against the previous one as a means of post-processing.
Please see the image below as a visual aid (the sphere represents the MandelBulb).
Below is my Python code so far, adapted from the first link. I successfully generate the plane of points and am able to get the directions from the origin to the points on the plane. There must be something fundamentally flawed in the distance estimator function, because that's where everything breaks down and I get NaNs for the total distances:
import numpy as np

def get_plane_points(x, y_res=500, z_res=500, y_min=-10, y_max=10, z_min=-10, z_max=10):
    y = np.linspace(y_min, y_max, y_res)
    z = np.linspace(z_min, z_max, z_res)
    x, y, z = np.meshgrid(x, y, z)
    x, y, z = x.reshape(-1), y.reshape(-1), z.reshape(-1)
    P = np.vstack((x, y, z)).T
    return P

def get_directions(P):
    v = np.array(P - 0)
    v = v / np.linalg.norm(v, axis=1)[:, np.newaxis]
    return v

# @jit  # presumably numba's JIT decorator, left disabled here
def DistanceEstimator(positions, plane_loc, iterations, degree):
    m = positions.shape[0]
    x, y, z = np.zeros(m), np.zeros(m), np.zeros(m)
    x0, y0, z0 = positions[:, 0], positions[:, 1], positions[:, 2]
    dr = np.zeros(m) + 1
    r = np.zeros(m)
    theta = np.zeros(m)
    phi = np.zeros(m)
    zr = np.zeros(m)
    for _ in range(iterations):
        r = np.sqrt(x * x + y * y + z * z)
        dx = .01
        x_loc = plane_loc
        idx = (x < x_loc + dx) & (x > x_loc - dx)
        dr[idx] = np.power(r[idx], degree - 1) * degree * dr[idx] + 1.0
        theta[idx] = np.arctan2(np.sqrt(x[idx] * x[idx] + y[idx] * y[idx]), z[idx])
        phi[idx] = np.arctan2(y[idx], x[idx])
        zr[idx] = r[idx] ** degree
        theta[idx] = theta[idx] * degree
        phi[idx] = phi[idx] * degree
        x[idx] = zr[idx] * np.sin(theta[idx]) * np.cos(phi[idx]) + x0[idx]
        y[idx] = zr[idx] * np.sin(theta[idx]) * np.sin(phi[idx]) + y0[idx]
        z[idx] = zr[idx] * np.cos(theta[idx]) + z0[idx]
    return 0.5 * np.log(r) * r / dr

def trace(directions, plane_location, max_steps=50, iterations=50, degree=8):
    total_distance = np.zeros(directions.shape[0])
    keep_iterations = np.ones_like(total_distance)
    steps = np.zeros_like(total_distance)
    for _ in range(max_steps):
        positions = total_distance[:, np.newaxis] * directions
        distance = DistanceEstimator(positions, plane_location, iterations, degree)
        total_distance += distance * keep_iterations
        steps += keep_iterations
    # return 1 - (steps / max_steps) ** power
    return total_distance

def run():
    plane_location = 2
    plane_points = get_plane_points(x=plane_location)
    directions = get_directions(plane_points)
    distance = trace(directions, plane_location)
    return distance
I am eager to hear thoughts on this and what limitations/issues I may encounter. Thanks in advance for the help!
If I am not mistaken, it is not impossible for this algorithm to work. There is inherent potential for problems with any assumptions made about the internal structure of the MandelBulb and about which positions an observer is allowed to occupy. That is, if the observer is known to initially be in a zone of convergence, then the ray tracing algorithm will return nothing meaningful, since the furthest distance that could be measured is 0. This is because the current ray tracing algorithm terminates upon first contact with the surface. It is likely this could be altered, however.
Rather than slicing the fractal with plane P, it might make more sense to prevent the termination of the ray upon first contact and instead terminate based on a distance that's known to exist past the surface of the MandelBulb.
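As a very rough sketch of that idea (my own assumption, not code from either link; estimator is a hypothetical per-point scalar distance function, unlike the vectorized DistanceEstimator above): instead of stopping the march at the first surface hit, record each crossing and keep marching by a fixed epsilon until a bound known to lie past the bulb is reached:
def trace_through(origin, direction, estimator, max_distance=4.0, eps=1e-3, step_floor=1e-4):
    """March a single ray and record every surface crossing instead of stopping at the first one."""
    # origin and direction are 3-component numpy arrays; max_distance is assumed to lie past the bulb
    hits = []
    t = 0.0
    while t < max_distance:
        p = origin + t * direction
        d = estimator(p)                # distance estimate at the current sample point
        if d < eps:
            hits.append(t)              # record the crossing ...
            t += eps                    # ... and push past the surface instead of stopping
        else:
            t += max(d, step_floor)     # normal sphere-tracing step
    return hits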
Here's a rough explanation of what I do in VTK:
Create a surface (a minimal surface; what it is exactly is not too relevant, but the geometry is important: the gyroid has two labyrinths that are completely shut off from each other).
Use vtkClipClosedSurface to shut off one of the labyrinths so that I get an object with no open surfaces anymore. A regular surface looks like this; with a closed surface it looks like this.
Here's my problem: For more complicated versions of my structure, I get this:
Can you see how on the top left it works fine and near the bottom right it stops creating surfaces? Sometimes I also get really weird triangles in that last part.
To my understanding, vtkClipClosedSurface knows from the surface normals where to close a surface and where not to. The thing is: the normals of my structure are fine and they all point in the right direction. If you take a closer look at the structure, you will notice that the lower part is basically an inversion of the top part that changes gradually, all in one surface.
I tried to modify my structure before cutting with many things, like vtkSmoothPolyDataFilter, vtkCleanPolyData or vtkPolyDataNormals. I even tried extracting the boundary surfaces with vtkFeatureEdges, which led to an even worse result. Even vtkFillHolesFilter didn't yield any acceptable results. My surface seems flawless and easy enough to create a boundary from.
I have no idea what else to try. This happens for other structures, too. Fixing it with a CAD tool is out of question, because it is supposed to work out of box. Please help me!
Here's another example of a geometry that doesn't close the surface properly. This time I used vtkFillHolesFilter, which results in surfaces on the inside of the structure, while they should only occupy the boundary of the object.
In case you need a more detailed rundown of my pipeline, here goes:
create surface using mayavi.mlab.contour3d
get the PolyData by extracting the actor.mapper.input
convert format from tvtk to regular vtk
vtkClipClosedSurface with a plane collection that cuts away part of the structure (errors occur when the plane collection is the same as the structure boundary)
visualize it
Edit: Okay, this did not receive enough attention, so I constructed a minimal, complete and verifiable working example that reproduces the behaviour:
import numpy as np
import vtk # VTK version 7.0
from mayavi import mlab # mayavi version 4.4.4
from mayavi.api import Engine, OffScreenEngine
from tvtk.api import tvtk
def schwarz_D(x, y, z, linear_term=0):
    """This is the function for the Schwarz Diamond level surface."""
    return (np.sin(x) * np.sin(y) * np.sin(z) + np.sin(x) * np.cos(y) * np.cos(z) +
            np.cos(x) * np.sin(y) * np.cos(z) + np.cos(x) * np.cos(y) * np.sin(z)) - linear_term * z

def plane_collection(xn, x, yn, y, zn, z):
    """Defines the 6 planes for cutting rectangular objects to the right size."""
    plane1 = vtk.vtkPlane()
    plane1.SetOrigin(x, 0, 0)
    plane1.SetNormal(-1, 0, 0)
    plane2 = vtk.vtkPlane()
    plane2.SetOrigin(0, y, 0)
    plane2.SetNormal(0, -1, 0)
    plane3 = vtk.vtkPlane()
    plane3.SetOrigin(0, 0, z)
    plane3.SetNormal(0, 0, -1)
    plane4 = vtk.vtkPlane()
    plane4.SetOrigin(xn, 0, 0)
    plane4.SetNormal(1, 0, 0)
    plane5 = vtk.vtkPlane()
    plane5.SetOrigin(0, yn, 0)
    plane5.SetNormal(0, 1, 0)
    plane6 = vtk.vtkPlane()
    plane6.SetOrigin(0, 0, zn)
    plane6.SetNormal(0, 0, 1)

    plane_list = [plane4, plane1, plane5, plane2, plane6, plane3]
    planes = vtk.vtkPlaneCollection()
    for item in plane_list:
        planes.AddItem(item)
    return planes
[nx, ny, nz] = [2, 2, 8] # amount of unit cells
cell_size = 1
gradient_value = 0.04 # only values below 0.1 produce the desired geometry; this term is essential
x, y, z = np.mgrid[-cell_size*(nx + 1)/2:cell_size*(nx + 1)/2:100j,
-cell_size*(ny + 1)/2:cell_size*(ny + 1)/2:100j,
-cell_size*(nz + 1)/2:cell_size*(nz + 1)/2:100*2j] * np.pi / (cell_size/2)
# engine = Engine()
engine = OffScreenEngine() # do not start mayavi GUI
engine.start()
fig = mlab.figure(figure=None, engine=engine)
contour3d = mlab.contour3d(x, y, z, schwarz_D(x, y, z, gradient_value), figure=fig)
scene = engine.scenes[0]
actor = contour3d.actor.actors[0]
iso_surface = scene.children[0].children[0].children[0]
iso_surface.contour.minimum_contour = 0
iso_surface.contour.number_of_contours = 1
iso_surface.compute_normals = False
iso_surface.contour.auto_update_range = False
mlab.draw(fig)
# mlab.show() # enable if you want to see the mayavi GUI
polydata = tvtk.to_vtk(actor.mapper.input) # convert tvtkPolyData to vtkPolyData
# Move object to the coordinate center to make clipping easier later on.
center_coords = np.array(polydata.GetCenter())
center = vtk.vtkTransform()
center.Translate(-center_coords[0], -center_coords[1], -center_coords[2])
centerFilter = vtk.vtkTransformPolyDataFilter()
centerFilter.SetTransform(center)
centerFilter.SetInputData(polydata)
centerFilter.Update()
# Reverse normals in order to receive a closed surface after clipping
reverse = vtk.vtkReverseSense()
reverse.SetInputConnection(centerFilter.GetOutputPort())
reverse.ReverseNormalsOn()
reverse.ReverseCellsOn()
reverse.Update()
bounds = np.asarray(reverse.GetOutput().GetBounds())
clip = vtk.vtkClipClosedSurface()
clip.SetInputConnection(reverse.GetOutputPort())
clip.SetTolerance(10e-3)
# clip.TriangulationErrorDisplayOn() # enable to see errors for not watertight surfaces
clip.SetClippingPlanes(plane_collection(bounds[0] + cell_size/2, bounds[1] - cell_size/2,
bounds[2] + cell_size/2, bounds[3] - cell_size/2,
bounds[4] + cell_size/2, bounds[5] - cell_size/2))
clip.Update()
# Render the result
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(clip.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)
renderer = vtk.vtkRenderer()
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
renderWindowInteractor = vtk.vtkRenderWindowInteractor()
renderWindowInteractor.SetRenderWindow(renderWindow)
renderer.AddActor(actor)
renderWindow.Render()
renderWindowInteractor.Start()
This really is as short as it gets; I stripped as much as I could. The problem still persists and I can't figure out a solution.
Try using pymeshfix. I had a very similar problem with some low-res mandelbulbs I was generating.
You may also want to check out pyvista, it's a nice Python wrapper for VTK.
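As a minimal sketch of how pymeshfix is typically used (my own assumption about how it would slot into this pipeline; verts and faces are hypothetical arrays extracted from the clipped vtkPolyData):
import pymeshfix

# verts: (n, 3) float array of points, faces: (m, 3) int array of triangle indices
fixer = pymeshfix.MeshFix(verts, faces)
fixer.repair()                                   # fills holes and removes degenerate/self-intersecting triangles
repaired_verts, repaired_faces = fixer.v, fixer.f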
Great problem and thanks for the example.
I was able to get this to work in pyvista with some modifications:
import numpy as np
import pyvista as pv
def schwarz_D(x, y, z, linear_term=0):
    """This is the function for the Schwarz Diamond level surface."""
    return (np.sin(x) * np.sin(y) * np.sin(z) + np.sin(x) * np.cos(y) * np.cos(z) +
            np.cos(x) * np.sin(y) * np.cos(z) + np.cos(x) * np.cos(y) * np.sin(z)) - linear_term * z
# Create the grid
[nx, ny, nz] = [2, 2, 8] # amount of unit cells
cell_size = 1
gradient_value = 0.04 # only values below 0.1 produce the desired geometry; this term is essential
x, y, z = np.mgrid[-cell_size*(nx + 1)/2:cell_size*(nx + 1)/2:100j,
-cell_size*(ny + 1)/2:cell_size*(ny + 1)/2:100j,
-cell_size*(nz + 1)/2:cell_size*(nz + 1)/2:100*2j] * np.pi / (cell_size/2)
# make a grid and exclude cells below 0.1
grid = pv.StructuredGrid(x, y, z)
grid['scalars'] = schwarz_D(x, y, z, gradient_value).ravel(order='F')
contour = grid.clip_scalar(value=0.1)
# make a bunch of clips
# bounds = contour.bounds
# contour.clip(origin=(bounds[0] + cell_size/2, 0, 0), normal='-x', inplace=True)
# contour.clip(origin=(0, bounds[1] - cell_size/2, 0), normal='-y', inplace=True)
# contour.clip(origin=(0, 0, bounds[2] + cell_size/2), normal='-z', inplace=True)
# contour.clip(origin=(bounds[3] - cell_size/2, 0, 0), normal='x', inplace=True)
# contour.clip(origin=(0, bounds[4] + cell_size/2, 0), normal='y', inplace=True)
# contour.clip(origin=(0, 0, bounds[5] - cell_size/2), normal='z', inplace=True)
contour.plot(smooth_shading=True, color='w')
I'm not sure why you're using clipping planes; I think you would be better off simply limiting the x, y, and z ranges that go into creating the grid. That way, you wouldn't have to clip the final mesh.
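For illustration, a sketch of that idea, reusing the names from the snippet above (the margin value is an assumption; it would need to match, in the grid's own units, whatever the clipping planes were removing):
# hypothetical margin to trim from each axis before building the grid
margin = cell_size / 2

x, y, z = np.mgrid[-(cell_size*(nx + 1)/2 - margin):(cell_size*(nx + 1)/2 - margin):100j,
                   -(cell_size*(ny + 1)/2 - margin):(cell_size*(ny + 1)/2 - margin):100j,
                   -(cell_size*(nz + 1)/2 - margin):(cell_size*(nz + 1)/2 - margin):200j] * np.pi / (cell_size/2)

grid = pv.StructuredGrid(x, y, z)
grid['scalars'] = schwarz_D(x, y, z, gradient_value).ravel(order='F')
contour = grid.clip_scalar(value=0.1)
contour.plot(smooth_shading=True, color='w')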