Python Occlusion Culling

Quick summary:
I am looking for a Pythonic implementation of occlusion culling (given a camera location and the known polygon coordinates of a mesh) that does not use ray casting, using Python tools such as Blender, NumPy, or OpenCV.
Background:
I have a dataset of simplified geometry coordinates represented as vectors, and I would like to compute occlusion culling in an optimal manner so as to turn that coordinate data into an SVG image. I am not so much concerned with the SVG implementation as with the best mathematical solution for computing occlusion culling. The typical way render engines compute this is by casting rays from the camera to each mesh vertex and testing whether they collide. Since I am looking to make simple SVG line images, I would like to avoid ray casting.
Searching through the Hidden-surface determination wiki, there are a few methods described.
Some of them are Portal Rendering, Potentially Visible Sets, and Z-culling. Given the few Pythonic implementations of these algorithms, I am wondering which implementation would be the most computationally efficient and yield the best-looking results. If you have better methods or combinations, feel free to suggest them. Occlusion culling has a lot of use cases, but the two main ones I am concerned with are as follows.
Internal Mesh Occlusion
Pictured: 1. 3D render in Blender, 2. all visible mesh edges, 3. mesh edges culled with a ray-casting plugin.
For the coordinate data I have, most of the geometry uses simple shapes. As such, back-face culling as described here is a quick and easy solution for rendering basic shapes. Problems arise, however, when there is geometry that is not back-facing but sits inside visible geometry. As you can see in the ray-casting example above, when the camera is facing the flat surface, the other polygon faces of the mesh are not visible. When using back-face culling, pictured below,
you can see that the polygon faces pointing away from the camera get culled, but the faces pointed at the camera remain. Occlusion culling for mesh polygons given a camera location is important for rendering small objects with some complexity.
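For reference, a minimal back-face test of the kind mentioned above might look like the following sketch (function and variable names are placeholders; each face is assumed to be three numpy vertices and camera_pos the camera location):

import numpy as np

def is_front_facing(v0, v1, v2, camera_pos):
    # face normal from the cross product of two edges (assumes consistent winding)
    n = np.cross(v1 - v0, v2 - v1)
    # vector from the face centroid towards the camera
    to_camera = camera_pos - (v0 + v1 + v2) / 3.0
    # front-facing (kept) when the normal points towards the camera
    return np.dot(n, to_camera) > 0.0

As the images show, this alone keeps interior faces that happen to point at the camera, which is exactly the case occlusion culling still needs to handle.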
Multiple Object Occlusion
Pictured: 1. 3D render in Blender, 2. all visible mesh edges, 3. mesh edges culled with a ray-casting plugin.
The more obvious example of occlusion culling is the case of multiple objects. While I could simply draw the SVG layers from back to front, this adds unneeded layers. Culling polygons in a way that solves internal mesh culling would ideally also allow for layered object culling.
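For what it's worth, the back-to-front layering mentioned above is just a depth sort of the polygons (a painter's-algorithm ordering); a rough sketch, assuming faces is a list of numpy arrays of per-face vertices and camera_pos a numpy vector (placeholder names):

import numpy as np

def paint_order(faces, camera_pos):
    # sort faces far-to-near by centroid distance so nearer faces are drawn last (on top)
    def depth(face):
        centroid = face.mean(axis=0)
        return np.linalg.norm(centroid - camera_pos)
    return sorted(faces, key=depth, reverse=True)

This keeps every polygon, though, which is the layering overhead described above; culling would remove the hidden ones instead.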
Summary
I am looking for solutions that compute occlusion culling given a list of vertices (structured to form polygons) and calculate which ones are visible to the camera, without casting rays to each vertex.

Related

How can I coarsen an overly fine mesh in Python without losing geometric details?

I have the following fine mesh as a trimesh.base.Trimesh mesh for a quarter-box geometry. This mesh is good at capturing geometric details such as the tight fillet radii, but it is unnecessarily fine all over, which makes it computationally expensive. For example, the flat planar regions do not need to be so fine.
I want to coarsen this mesh to remove the computational burden, but I want to keep fine details, such as tight radii. So far, I have tried the following:
myMesh = myMesh.simplify_quadratic_decimation(2000)
This uses a quadratic decimation algorithm to simplify the mesh so that it only has 2000 faces, as opposed to the 50,000+ it had before (https://trimsh.org/trimesh.base.html). Here is the mesh it produces:
It works, and the radii regions still have a locally finer mesh, but the mesh becomes messy. What I would like is a mesh like the following, where the mesh is neat, does not have too many elements, and accurately captures the geometric details:
I've not been able to find any resources online, and this is not my field of expertise. Does anyone know how I can achieve the above neat mesh?
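For reference, a minimal self-contained version of the decimation call above might look like this (the file name is hypothetical, and the method name follows the trimesh version used in the question):

import trimesh

# load the fine mesh (path is a placeholder)
mesh = trimesh.load("quarter_box.stl")
# reduce the mesh to roughly 2000 faces using quadratic decimation
simplified = mesh.simplify_quadratic_decimation(2000)
print(len(mesh.faces), "->", len(simplified.faces))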
Best regards

Removing points that are occluded after perspective projection

I have a point cloud (.ply) and a projection matrix.
I've rendered the view from the first camera using the projection matrix and got this result (Python & OpenCV):
This is the original view:
Question: How can I render only the points that are seen from the particular viewpoint of the camera, in order not to see the occluded points?
I thought about converting it to a mesh with some surface reconstruction algorithm and working with the mesh, for example by generating an occlusion map. Any ideas?
Implicit Surface Octrees (https://www.cse.iitb.ac.in/~rhushabh/publications/icvgip10/icvgip10.pdf) can be used to reconstruct the surface and visualize point clouds. Recent advances in real-time point cloud rendering have been achieved with this method. An overview of developments in this area can be found in this article - https://trepo.tuni.fi/bitstream/handle/10024/117953/KiviPetrus.pdf?sequence=2&isAllowed=y. In it, you can also find other approaches to solving this problem.
After building the octree, you get the ability to drop non-rendered points and render the surface with texturing and shading.
An experimental method for drawing only the points. Here I assume that you want to draw the frame once, so this method works in O(N) asymptotically and O(P * N) in the worst case, where P is the number of pixels on the screen (when the points are too far/close, depending on the implementation, and the rendering queue goes from far to near). To optimize and obtain stable asymptotics for some input data, it may be useful to sort the points by distance from the camera.
1. Convert the coordinates of the points to 2D screen space.
2. Create a Z-buffer.
3. For each point:
   - if the value already in the Z-buffer is closer to the viewer than this point, skip it (continue);
   - otherwise draw a dot on the screen;
   - instead of marking one pixel in the Z-buffer, draw a circle in it (possibly with a radial gradient) with a radius depending on the distance (something like distance * eps, where eps could be the angle in radians between two projected points on the screen).
4. Profit!
Fast and easy, but I've never done that, so I don't know how well it works.
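A minimal numpy sketch of the Z-buffer idea described above, assuming the points are already projected to pixel coordinates points_2d with depths z (all names are placeholders):

import numpy as np

def visible_point_mask(points_2d, z, width, height):
    # Z-buffer initialised to "infinitely far away"
    zbuf = np.full((height, width), np.inf)
    order = np.argsort(z)              # near-to-far so the closest point wins each pixel
    visible = np.zeros(len(z), dtype=bool)
    for i in order:
        x, y = int(points_2d[i, 0]), int(points_2d[i, 1])
        if 0 <= x < width and 0 <= y < height and z[i] < zbuf[y, x]:
            zbuf[y, x] = z[i]          # a small disc could be splatted here instead of one pixel
            visible[i] = True
    return visible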
Translated by Google Translate

How to get a list of the visible vertices and segments of a mesh

I work on pose estimation of 3D objects. I am using a CAD model of the object to generate all the possible hypotheses of its pose.
I am using PyOpenGL to render the view of the object from a specific POV. Can anyone explain how to get a list of all the visible edges?
So I use face culling to eliminate the occluded faces, but I don't know how to pass the visible edges (indices and segments) to other Python functions.
If there are any other approaches (not using OpenGL), I would really appreciate it.
So I want to get the drawn edges in the rendered image:
I don't really want the image to be displayed.
In summary, I have a CAD model, and I want a function that can return the visible segments out of a specific POV.
Thanks
Face culling
This works only for a single convex mesh with a strict winding rule and no holes!
The idea is that the sign of the dot product of two vectors tells you whether the vectors are opposite or not. So if we have a normal pointing outwards and a view direction, their dot product should be negative for faces turned towards the camera/viewer.
As you do not want to render, just select the visible planar faces/edges, you can do this entirely on the CPU side. What you need is to have your mesh in the form of planar faces (it does not matter whether they are triangles, quads, or whatever), so let's assume triangles (for more points you just add them to _face, but for the computation still use only v0, v1, v2)... Each face should have its vertices and normal.
struct _face
    {
    double v0[3],v1[3],v2[3],n[3];  // triangle vertices and face normal
    };
List<_face> mesh;
Now the vertices v0, v1, v2 you already have. All of them should be ordered according to a strict winding rule. That means that if you look at any face from outside, its points should form only a CW (clockwise) loop (or only a CCW (counter-clockwise) loop). To compute the normal you simply exploit the cross product, which returns a vector perpendicular to both operands:
n = cross(v1-v0,v2-v1) // cross product
n = n / |n| // optional normalize to unit vector
If you need the vector math, see
Understanding 4x4 homogenous transform matrices
At the bottom of that answer is how to compute this... You will also need the whole answer for the camera direction, so read it...
Now, if your mesh has a strict winding rule, then all the computed normals point out of the mesh (or inwards, depending on your coordinate system, CW/CCW, and the order of operands in the cross product). Let's assume they all point out (if not, just negate the normal).
In case you do not have a strict winding rule, compute the average point of your mesh (sum all vertices and divide by their count); this will be the center c of your object. Now just compute
dot(n,(v0+v1+v2)/3 - c)
and if it is not positive, negate n. This will repair your normals (you can also reverse v0, v1, v2 to repair the mesh).
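As a small sketch of that repair step in Python/numpy (the names are placeholders):

import numpy as np

def repair_normal(n, v0, v1, v2, c):
    # flip the normal if it points towards the mesh center c instead of away from it
    if np.dot(n, (v0 + v1 + v2) / 3.0 - c) <= 0.0:
        n = -n
    return n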
Now the camera and the mesh usually each have their own 4x4 transform matrix. One transforms from mesh LCS (local coordinate system) to GCS ("world" global coordinate system) and the other from GCS to camera LCS (screen). We do not need projections for this as we are not rendering... So what we need to do for each face is:
1. convert n to GCS
2. compute dot(n, camera_view_direction)
   where camera_view_direction is a GCS vector pointing in the view direction. You can take it directly from the camera matrix. It is usually the Z axis vector (in an OpenGL perspective view it is -Z). Beware: the camera matrix used for rendering is an inverse matrix, so if that is the case, either compute the inverse first or transpose it, as we do not need the offset anyway...
3. decide if the face is visible from the sign of #2
Again all the math is explained in the link above...
In case you do not have a mesh matrix (the mesh does not change position or orientation), you can assume its matrix is the identity, which means GCS = mesh LCS, so no transformations are needed.
In some cases there is no camera and only a mesh matrix (I suspect this is your case); then it is similar, you just ignore the camera transforms and use (0,0,-1) or (0,0,+1) as the view direction.
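Put into the asker's Python context, a rough sketch of the whole test plus edge extraction could look like this (all names are placeholders; triangles are assumed to already be in GCS with a strict winding rule):

import numpy as np

def visible_edges(triangles, view_dir):
    # triangles: (N, 3, 3) array of vertices in GCS; view_dir: camera view direction in GCS
    edges = set()
    for v0, v1, v2 in triangles:
        n = np.cross(v1 - v0, v2 - v1)           # outward face normal (strict winding assumed)
        if np.dot(n, view_dir) < 0.0:            # negative dot product = facing the camera
            for a, b in ((v0, v1), (v1, v2), (v2, v0)):
                # store edges as sorted vertex tuples so shared edges are not duplicated
                edges.add(tuple(sorted((tuple(a), tuple(b)))))
    return edges

This only removes back faces, of course; edges of front faces hidden behind other geometry still need one of the occlusion methods discussed in the main question.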
Also see this:
Understanding lighting in OpenGL
It should shine some light on the normals topic.

How to flatten 3D object surface into 2D array?

I've got 3D objects which are represented as numpy arrays.
How can I unfold the "surface" of such objects to get a 2D map of values (I don't care about inner values)?
It's similar to unwrapping a globe's surface, but the shape varies from case to case.
This is a vertices problem. Each triangle on the model is a flat surface that can be mapped to a 2D plane. So the most naive solution without any assumed structure would be to:
for triangle in mesh:
    # project the triangle onto the plane defined by its normal to avoid stretching
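A small sketch of that projection step, assuming the triangle vertices are numpy arrays (function and variable names are placeholders):

import numpy as np

def triangle_to_2d(v0, v1, v2):
    # build an orthonormal basis (u, w) lying in the plane of the triangle
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    u = (v1 - v0) / np.linalg.norm(v1 - v0)
    w = np.cross(n, u)
    # express each vertex in that basis; lengths and angles are preserved, so no stretching
    return [np.array([np.dot(p - v0, u), np.dot(p - v0, w)]) for p in (v0, v1, v2)]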
This solution is not ideal, as it places all of the UVs on top of each other. The next step would be to spread out the triangles to fill a certain space. This is the layout stage that defines how the vertices are laid out in 2D space.
Usually it is ideal to fit the UVs within a unit square. This allows for easy UV mapping from a single image.
Option 2:
You surround the object with a known 2D-mapped shape and project each triangle onto that shape based on its normal. This provides a mechanism for unwrapping the UVs in a structured manner. An example would be to project onto a cube.
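For instance, a crude cube projection can pick the cube face from the dominant axis of the triangle normal and drop that axis; a sketch under that assumption (names are placeholders):

import numpy as np

def cube_project(point, normal):
    # the dominant axis of the normal chooses the cube face; the other two axes become UV
    axis = int(np.argmax(np.abs(normal)))
    u_axis, v_axis = [i for i in range(3) if i != axis]
    return point[u_axis], point[v_axis]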
Option 3:
Consult academic papers and open-source libraries/tools like Blender:
https://wiki.blender.org/index.php/Doc:2.4/Manual/Textures/Mapping/UV/Unwrapping
Blender uses methods as described above to unwrap arbitrary geometry. There are other methods to accomplish this, as described on the Blender unwrap page. The nice thing about Blender is that you can consult the source code for the implementation of the UV unwrap methods.
Hope this is helpful.

How to determine and extract surface points of 3D object?

I have a 3D object which is not hollow, so there are many 3D points. How would you determine which of the points of such an object (especially one with a very curvaceous surface) are on the surface? I understand how to extract them, but I need either a function somewhat like libraryUNK.surfacePoint..., which I don't know,
or, better, an understanding of what is considered to be a surface point, which I don't know either and couldn't (yet) develop any proper definition of for myself.
I know I can do triangulation to get the surface. But I don't get what to do next, as I will now be left with a set of triangles, some of which are on the surface and some of which are not... but again I have no definition of how to decide what is on the surface and what is not...
It seems to me that you want to compute the convex hull of your 3D point cloud.
It's not an easy problem, but there are plenty of solutions (and algorithms) to do that. One of the efficient ones is Quickhull. There's a ConvexHull function in scipy.spatial.
Here are the details with an example (2D, but it works in any dimension):
http://scipy.github.io/devdocs/generated/scipy.spatial.ConvexHull.html
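A short example with scipy, where points is an (N, 3) numpy array standing in for your cloud (placeholder data):

import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(1000, 3)        # placeholder point cloud
hull = ConvexHull(points)
surface_points = points[hull.vertices]  # hull.vertices are the indices of points on the hull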
This function uses the Qhull library:
http://www.qhull.org/
There are plenty of resources on the Qhull page. There is also a Wikipedia page (again, this is not the only method to compute convex hulls; you may want to try others):
https://en.wikipedia.org/wiki/Quickhull
Edit: after re-reading the question, it seems your volume may not be convex. Unfortunately, if it isn't, there's no way to tell whether a point is inside the volume or on the surface without additional information about the volume or the point cloud.
