I've got 3D objects which are represented as numpy arrays.
How can I unfold the "surface" of such objects to get a 2D map of values (I don't care about inner values)?
It's similar to unwrapping a globe's surface, but the shape varies from case to case.
This is a vertices problem: each triangle on the model is a flat surface that can be mapped to a 2D plane. So the most naive solution, without any assumed structure, would be:
for triangle in mesh:
    # project onto the plane defined by the triangle's normal to avoid stretching
This solution is not ideal, as it places all of the UVs on top of each other. The next step would be to spread out the triangles to fill a certain space. This is the layout stage, which defines how the vertices are laid out in 2D space.
Usually it is ideal to fit the UVs within a unit square. This allows for easy UV mapping from a single image.
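As a rough illustration of the naive per-triangle step, here is a minimal numpy sketch, assuming the mesh is given as an (n, 3, 3) array of triangle vertices; the triangle_to_uv helper is purely illustrative, not a library call.

```python
import numpy as np

def triangle_to_uv(tri):
    """Project one triangle (3x3 array of xyz vertices) onto its own plane.

    A local 2D basis is built from the triangle's edges, so the triangle
    keeps its shape (no stretching). Illustrative helper, not a library call.
    """
    e1 = tri[1] - tri[0]
    e2 = tri[2] - tri[0]
    n = np.cross(e1, e2)                 # triangle normal
    u = e1 / np.linalg.norm(e1)          # first in-plane axis
    v = np.cross(n, u)
    v = v / np.linalg.norm(v)            # second in-plane axis, orthogonal to u
    local = tri - tri[0]                 # vertices relative to the first vertex
    return np.stack([local @ u, local @ v], axis=1)   # (3, 2) UV coordinates

# mesh: (num_triangles, 3, 3) array of triangle vertices
# uvs = np.array([triangle_to_uv(t) for t in mesh])
# Every triangle's UVs now start at (0, 0), i.e. they sit on top of each
# other; a separate layout/packing stage has to spread them out and scale
# them into the unit square.
```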
Option 2:
You surround the object with a shape that already has a known 2D mapping and project each triangle onto that shape based on its normal. This provides a mechanism for unwrapping the UVs in a structured manner. An example would be to project onto a surrounding cube.
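A minimal sketch of that idea, projecting each triangle onto the face of a surrounding axis-aligned cube chosen by the dominant component of its normal; the function name and array layout are assumptions for illustration:

```python
import numpy as np

def cube_project(tri):
    """Project a triangle onto the face of a surrounding cube.

    The face is chosen from the dominant axis of the triangle's normal;
    dropping that axis projects the vertices onto the face's plane.
    """
    normal = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    axis = int(np.argmax(np.abs(normal)))       # 0 = x, 1 = y, 2 = z face
    keep = [i for i in range(3) if i != axis]   # the two remaining coordinates
    return tri[:, keep]                         # (3, 2) UVs on the chosen face
```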
Option 3:
Consult academic papers and open-source libraries/tools like Blender:
https://wiki.blender.org/index.php/Doc:2.4/Manual/Textures/Mapping/UV/Unwrapping
Blender uses methods like those described above to unwrap arbitrary geometry. There are other methods to accomplish this, as described on the Blender unwrap page. The nice thing about Blender is that you can consult the source code for the implementation of the UV unwrap methods.
Hope this is helpful.
Quick summary:
I am looking for a pythonic implementation of occlusion culling (given a camera location and the known polygon coordinates of a mesh) that does not use ray casting, in Blender, numpy, or OpenCV.
Background:
I have a dataset of simplified geometry coordinates represented as vectors, and I would like to compute occlusion culling in an optimal manner so as to turn that coordinate data into an SVG image. I am not so much concerned with the SVG implementation as with the best mathematical solution to computing occlusion culling. The typical way render engines compute this is by casting rays from the camera to each mesh vertex and testing whether they collide or not. Since I am looking to make simple SVG line images, I would like to avoid ray casting.
Searching through the Hidden-surface determination wiki, there are a few methods described, among them Portal Rendering, Potentially Visible Sets, and Z-culling. Given the few pythonic implementations of these algorithms, I am wondering which would be the most computationally efficient and yield the best-looking results. If you have better methods or combinations, feel free to suggest them. Occlusion culling has a lot of use cases, but the two main ones I am concerned with are as follows.
Internal Mesh Occlusion
Pictured: 1. 3D render in Blender, 2. all visible mesh edges, 3. mesh edges culled with a ray casting plugin.
Most of the geometry in my coordinate data uses simple shapes. As such, back-face culling as described here is a quick and easy solution for rendering basic shapes. Problems arise, however, when there is geometry that is not back-facing but is inside visible geometry. As you can see in the ray casting example above, when the camera is facing the flat surface, the other polygon faces of the mesh are not visible. When using back-face culling, pictured below, you can see that the polygon faces pointing away from the camera get culled, but the faces pointing at the camera remain. Occlusion culling for mesh polygons given a camera location is important for rendering small objects with some complexity.
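For reference, back-face culling itself is a single dot-product test per polygon. Here is a small numpy sketch, assuming each face is a (k, 3) array of vertices with consistent counter-clockwise winding and camera is the camera position; as the images show, this alone does not handle faces occluded by other geometry:

```python
import numpy as np

def front_facing(face_vertices, camera_position):
    """Return True if a polygon faces the camera.

    face_vertices: (k, 3) array of the polygon's vertices, assumed to be
    wound counter-clockwise when seen from outside the mesh.
    """
    v0, v1, v2 = face_vertices[:3]
    normal = np.cross(v1 - v0, v2 - v0)      # face normal from two edges
    to_camera = camera_position - v0         # direction from the face to the camera
    return np.dot(normal, to_camera) > 0     # positive dot product => front-facing

# visible = [face for face in faces if front_facing(face, camera)]
```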
Multiple Object Occlusion
Pictured: 1. 3D render in Blender, 2. all visible mesh edges, 3. mesh edges culled with a ray casting plugin.
The more obvious example of occlusion culling is the case of multiple objects. While I could simply draw the SVG layers from back to front, this adds unneeded layers. Ideally, culling polygons in a way that solves internal mesh culling would also allow for layered object culling.
Summary
I am looking for solutions to compute occlusion culling given a list of vertices (structured to form polygons), i.e. to calculate which polygons are visible to the camera without casting rays to each vertex.
I have a .obj and .ply file of a 3D model.
What I want to do is read this 3D model file and check whether each point in a list of 3D coordinates lies inside or outside the 3D model.
For example, if the 3D model is a sphere with radius 1, (0,0,0) would be inside (True) and (2,0,0) would be outside (False). Of course the 3D model I'm using is not as simple as a sphere.
I would like to add some of the methods I considered using.
Since I am using Python, I thought of using PyMesh, as its intersection feature looked promising. However, the coordinates I have are not mesh files but just vectors, so it didn't seem to be the appropriate function to use.
I also found this method using ray casting. However, how to do this with PyMesh, or any other Python tool, is something I need advice on.
Cast a ray from the 3D point along the X axis and check how many intersections with the outer object you find.
Depending on whether the number of intersections along each axis is even or odd, you can tell whether your point is outside or inside.
You may want to repeat on the Y and Z axes to improve the result (sometimes your ray is coincident with planar faces and the intersection count is not reliable).
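A hedged sketch of this parity test using trimesh (any library that can intersect a ray with a mesh would do; using trimesh here is my assumption, not part of the original answer):

```python
import numpy as np
import trimesh

def point_inside(mesh, point):
    """Parity test: cast a ray from the point along +X and count how many
    times it crosses the mesh surface. An odd count means the point is inside."""
    origins = np.array([point], dtype=float)
    directions = np.array([[1.0, 0.0, 0.0]])    # ray along the X axis
    locations, index_ray, index_tri = mesh.ray.intersects_location(
        origins, directions)
    return len(locations) % 2 == 1

mesh = trimesh.load('model.obj')                # or the .ply file
print(point_inside(mesh, [0.0, 0.0, 0.0]))
# As noted above, repeat with rays along Y and Z and take a majority vote:
# a ray that grazes an edge or lies in a face plane can give a wrong count.
```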
Converting my comment into an answer for future readers.
You can use a convex hull library to check whether a point is inside the hull. Most libraries use a signed distance function to determine whether the point is inside. trimesh is one of the libraries that implements this feature.
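A minimal sketch with trimesh, assuming the model loads as a single mesh. The convex-hull test follows the answer above; the watertight containment test is an extra option for non-convex models:

```python
import numpy as np
import trimesh

mesh = trimesh.load('model.obj')               # or the .ply file

points = np.array([[0.0, 0.0, 0.0],
                   [2.0, 0.0, 0.0]])

# Containment test against the convex hull, as described above.
inside_hull = mesh.convex_hull.contains(points)

# If the model is watertight but not convex, testing against the mesh itself
# avoids false positives for points sitting in concave regions.
if mesh.is_watertight:
    inside_mesh = mesh.contains(points)

print(inside_hull)    # e.g. [ True  False] for a unit sphere at the origin
```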
I am comparing the performance of feature detection algorithms like Harris, ORB, and MSER (OpenCV Python). I have two images of the same object taken from different viewpoints. Since I am just a beginner in this area, I am having trouble understanding what "pre-image" means in this context. How do I get the "pre-image"?
Detecting regions covariant with a class of transformations has now reached some maturity in the computer vision literature. The requirement for these regions is that they should correspond to the same pre-image for different viewpoints, i.e., their shape is not fixed but automatically adapts, based on the underlying image intensities, so that they are the projection of the same 3D surface patch.
This seems to be a quote from the "A comparison of affine region detectors" paper.
You can interpret the term "pre-image" to mean the "3D surface patch" (that corresponds to a region in 2D). In other words, it's a part of the "scene"/"3D environment" that's being photographed.
The confusion arises because authors are using the term to refer to the mathematical object, rather than anything to do with photographs/images.
To elaborate more -- consider the process of taking a photograph of a 3D scene as a mathematical function. This function has several inputs: the 3D scene itself, the viewpoint, illumination, and so on. The output is a 2D array of pixel intensities. Given a 2D array of pixel intensities, and focusing on a particular 2D region, the corresponding 3D surface patch forms the "pre-image" of the function.
I have a 3D object which is not hollow, so there are many 3D points. How would you determine which points of such an object (especially one with a very curvaceous surface) are on the surface? I understand how to extract them, but I need either a function somewhat like libraryUNK.surfacePoint..., which I don't know of,
or, better, an understanding of what is considered to be a surface point, which I don't know either and for which I couldn't (yet) develop any proper definition myself.
I know I can do triangulation to get the surface. But I don't get what to do next, as I will be left with a set of triangles, some of which are on the surface and some of which are not... but again, I have no definition of how to decide what is on the surface and what is not.
It seems to me that you want to compute the convex hull of your 3D point cloud.
It's not an easy problem, but there are plenty of solutions (and algorithms) for it. One of the efficient ones is the ConvexHull function in scipy.spatial.
Here are the details with an example (2D, but it works in any dimension):
http://scipy.github.io/devdocs/generated/scipy.spatial.ConvexHull.html
This function uses the Qhull library:
http://www.qhull.org/
There are plenty of resources on the Qhull page. There's also a Wikipedia page on the Quickhull algorithm (again, this is not the only method to compute convex hulls; you may want to try others):
https://en.wikipedia.org/wiki/Quickhull
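A minimal sketch of extracting the surface points with scipy.spatial.ConvexHull, assuming the point cloud is an (n, 3) numpy array; the random data is only a placeholder:

```python
import numpy as np
from scipy.spatial import ConvexHull

# points: (n, 3) array of the 3D point cloud (random data as a placeholder)
points = np.random.rand(1000, 3)

hull = ConvexHull(points)

# Indices of the input points that form the hull, i.e. the "surface" points.
# This is only valid if the object is convex (see the edit below).
surface_indices = hull.vertices
surface_points = points[surface_indices]

print(surface_points.shape)
```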
Edit: after re-reading the question, it seems your volume may not be convex. Unfortunately, if it isn't, there's no way to tell whether a point is inside the volume or on the surface without additional information about the volume or the point cloud.
Please allow me to start the question with the simplest task: if I have four points which are the vertices of a rectangle, stored in a 4x2 matrix, how can I turn this into a rectangular window? (Please do not use any special command specific to drawing rectangles; the rectangle is raised just to represent a general class of regular geometric objects.)
To make things more complicated, suppose I have an nx2 matrix; how can I connect all of the n points so that they become a polygon? Note the object is not necessarily convex. I think the main difficulty is: how can R know which point should be connected with which?
The reason I am asking is that I was doing some image processing on a fish, and I managed to get the body line of the fish by finding the contour with OpenCV in Python and outputting it as an nx2 csv file. When I read the csv file into R and tried to use SpatialPolygons in the sp package to turn this into a polygon, some very unexpected behavior happened: there seems to be a break somewhere in the middle such that the polygon got cut in half, i.e. the boundary of the polygon was not connected. Is there any way I can fix this problem?
Thank you.
Edit: Someone kindly pointed out that this is possibly a duplicate of another question: drawing polygons in R. However, the solution to that question relies on the shape being convex, which makes it sensible to order the vertices by angle; here the shape is not necessarily convex, so that approach will not work.
Do you want it to be a spatstat study region (of class owin), since you have the spatstat tag on the question? In that case you can just use owin(poly=x), where x is your nx2 matrix (after loading the spatstat library, of course). The rows of this matrix should contain the vertices of the polygon in the order in which you want them connected (that's how R knows which point to connect with which). See help(owin) for more details.