OpenGL - Tessellation of some intersecting and some non-intersecting polygons - Python

I have a list of several polygons.
Some are completely separate and do not intersect any other polygon, and some fully intersect with or are enclosed within other polygons:
I want to tessellate them into a set of triangles so I can draw them.
I think I have code that works - I'm doing something similar to this:
tess = gluNewTess()
gluTessBeginPolygon(tess, None)
for polygon in polygons:
    gluTessBeginContour(tess)
    for point in polygon:
        gluTessVertex(tess, point, point)
    gluTessEndContour(tess)  # close each contour before starting the next
gluTessEndPolygon(tess)
gluDeleteTess(tess)
I'm wondering whether that is the expected way to go.
The main reason I'm asking is that, as you can see, the entire code is wrapped in a single gluTessBeginPolygon/gluTessEndPolygon pair, even though it is actually a set of different polygons, which seems a bit odd (although it seems to work...).

This is actually the way to go (see also this example).
The "polygon" defined is actually a multi-polygon here, so all that matter are the contours, which you are defining properly.
OpenGL then checks internally whether some contours are self-contained or otherwise complicated and generates the proper polygon/set of polygons --- you can have a look at the code in PolygonTesselator or at this page for a walkthrough.
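For completeness, here is a minimal sketch of how the generated triangles can be collected through the GLU callbacks instead of being drawn immediately. The contour data and the use of GLU_TESS_EDGE_FLAG to force plain triangles are assumptions for illustration, not part of the original code:
from OpenGL.GLU import (gluNewTess, gluTessCallback, gluTessBeginPolygon,
                        gluTessBeginContour, gluTessVertex, gluTessEndContour,
                        gluTessEndPolygon, gluDeleteTess,
                        GLU_TESS_VERTEX, GLU_TESS_EDGE_FLAG, GLU_TESS_COMBINE)

# two example contours: an outer square and a smaller square enclosed within it
polygons = [
    [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (4.0, 4.0, 0.0), (0.0, 4.0, 0.0)],
    [(1.0, 1.0, 0.0), (3.0, 1.0, 0.0), (3.0, 3.0, 0.0), (1.0, 3.0, 0.0)],
]

triangle_vertices = []

tess = gluNewTess()
# registering an edge-flag callback makes GLU emit independent triangles only
gluTessCallback(tess, GLU_TESS_EDGE_FLAG, lambda flag: None)
gluTessCallback(tess, GLU_TESS_VERTEX, triangle_vertices.append)
# combine is called when contours intersect and a new vertex has to be created
gluTessCallback(tess, GLU_TESS_COMBINE, lambda coords, verts, weights: tuple(coords))

gluTessBeginPolygon(tess, None)
for polygon in polygons:
    gluTessBeginContour(tess)
    for point in polygon:
        gluTessVertex(tess, point, point)
    gluTessEndContour(tess)
gluTessEndPolygon(tess)
gluDeleteTess(tess)

# every consecutive triple in triangle_vertices is now one triangle to draw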

Related

Algorithm for scaling irregular polygons

A syntax called 'gcode' is used to tell CNC engraving machines how to move.
An example of gcode is as follows:
G00 Z1 F800 (lift z axis by 1mm)
G00 X49.546785 Y-11.48703 F800 (Go to these coordinates at 800mm/m)
G01 Z-0.35 F100 (Penetrate. Lower the tool into the work by 0.35mm)
G03 X49.126859 Y-11.464812 I-0.385599 J-3.308393 F80 (cut an anticlockwise arc at 80mm/m)
(arc ends at X,Y, arc centre is x+i, y+j)
etc.
As you can see, we can describe the movement of the tool in straight lines (G0, G1) and in arcs (G2, G3) using coordinates in the x, y and z planes.
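As a side note on the arc format, here is a small sketch of how the G03 line above decodes into an arc centre and radius (the numbers are simply the values from the example; the variable names are made up for illustration):
import math

start = (49.546785, -11.48703)     # X/Y from the preceding G00 move
end = (49.126859, -11.464812)      # X/Y on the G03 line - where the arc ends
i_off, j_off = -0.385599, -3.308393

centre = (start[0] + i_off, start[1] + j_off)   # arc centre = start + (I, J)
radius = math.hypot(i_off, j_off)               # distance from start to centre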
Using this mechanism we can draw (engrave) paths; often they are closed paths, as below:
In this image we see a closed path (letter a) in black. The green outline is the same path but 'scaled upwards' and the red path is the same path but scaled downwards.
In Inkscape we can do this using the 'dynamic offset' tool.
I am looking for an algorithm I can apply to gcode (as described above) to scale paths as described.
My first thought is literally to just scale every single line and arc:
Say we are scaling by 'n'%
Essentially we would make every line n% longer, and every arc n% bigger.
But what would the resulting path centre on?
Does anyone know the name of this algorithm, or have any links or examples of how to achieve this in, say, SVG or any other coordinate-based system (preferably in Python if possible)?
Addendum :
The process of scaling polygons inwards and outwards largely has two distinct names: 'Dilation' and 'offsetting'.
See here for a near-answer to this question.
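For what it's worth, polygon offsetting of this kind is available in Python through Shapely's buffer() method (Shapely is not mentioned in the thread, so treat it as one possible tool rather than the intended one); a positive distance grows the outline and a negative one shrinks it, matching the green and red paths described above:
from shapely.geometry import Polygon

# the closed path as a list of (x, y) points, e.g. extracted from the gcode
path = Polygon([(0, 0), (10, 0), (10, 6), (0, 6)])

outer = path.buffer(1.0, join_style=2)    # "scaled upwards" (outward offset, mitred corners)
inner = path.buffer(-1.0, join_style=2)   # "scaled downwards" (inward offset)

print(list(outer.exterior.coords))
print(list(inner.exterior.coords))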
As given in the comments, Dilation, Erosion, Opening and Closing are standard morphology operations. In fact, the graphic at Wikipedia gives details that are quite similar to what you have.
The difference is that the inside of the object is included in the dilation and erosion. Just alter the structuring element size and you can subtract the images to get the traces that you want.
Erosion and dilation are the simpler forms of morphology, so look at those first to understand the algorithms. They are implemented in OpenCV, which has Python bindings; however, they are fairly simple to code yourself.
It may be possible to use XOR to get the outline without image subtraction, but simply the perimeter of the opening and closing is the outline I think you are looking for. Dilation and erosion will give slightly different paths. You could also use Voronoi partitioning as a crudest form. The main difference is in how corners and other junctions are handled.
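A minimal sketch of that route with OpenCV's Python bindings, assuming the engraved shape has already been rasterised into a binary image (the circle below is just a stand-in for the real path):
import cv2
import numpy as np

mask = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(mask, (100, 100), 60, 255, -1)          # stand-in for the rasterised shape

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
dilated = cv2.dilate(mask, kernel)                 # grown shape (outward trace)
eroded = cv2.erode(mask, kernel)                   # shrunk shape (inward trace)

outer_trace = cv2.subtract(dilated, mask)          # band between dilation and original
inner_trace = cv2.subtract(mask, eroded)           # band between original and erosion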

Modify polygons so that they don't overlap and area stays the same

I have a set of polygons and they can overlap with each other, like this:
I want to modify them in such a way that they don't overlap and the resulting surface area stays the same. Something like this:
It is okay if the shape or the position changes. The main thing is that they should not overlap with each other and the area should not change much (I know the area changed a little in the second image, but I drew it manually, so let's just assume that the areas did not change).
I am trying to do this programmatically with the help of Python. Basically, I have stored the polygons in a PostGIS database, and with the help of a script I want to retrieve and modify them.
I am very new to GIS and thus this seems like a difficult task.
What is the correct way of doing it? Is there an algorithm that solves this kind of problem?
Take a look at ST_Buffer and try passing a negative float as the second argument (the number of degrees to reduce the radius by):
SELECT ST_Buffer(the_geom, -0.01) AS geom
Be careful with negative buffers, as you could run into issues if the buffer size exceeds the radius; see here.
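Since the polygons live in PostGIS and are handled from a Python script, one way to run that buffer from Python is with psycopg2 (a sketch; the connection string and the table/column names other than the_geom are made up for illustration):
import psycopg2

conn = psycopg2.connect("dbname=gisdb user=gis")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT id, ST_AsText(ST_Buffer(the_geom, -0.01)) AS geom
        FROM my_polygons;
    """)
    shrunk = cur.fetchall()   # list of (id, WKT) rows with the reduced geometries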
Here is what I did:
I iterated over all the polygons and found the overlapping ones. Then I moved each overlapping polygon in different trial directions and picked the best direction as the one that gave the minimum resulting overlap area. Finally, I simply kept moving the polygon in that best direction until there was no overlapping area left.
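A rough sketch of that search, written here with Shapely (an assumption - the answer does not say which library was used): for a pair of overlapping polygons, the second one is nudged in whichever compass direction shrinks the overlap fastest, until the intersection disappears:
from shapely.geometry import Polygon
from shapely.affinity import translate

DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def separate(fixed, moving, step=0.5, max_iters=1000):
    """Move `moving` away from `fixed` until they no longer overlap."""
    for _ in range(max_iters):
        if not fixed.intersects(moving):
            break
        # pick the direction whose trial move leaves the least overlapping area
        best = min(DIRECTIONS,
                   key=lambda d: fixed.intersection(
                       translate(moving, d[0] * step, d[1] * step)).area)
        moving = translate(moving, best[0] * step, best[1] * step)
    return moving

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
b = Polygon([(3, 3), (7, 3), (7, 7), (3, 7)])
b = separate(a, b)
print(a.intersects(b))   # False once the polygons have been pushed apart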

Pipeline to create Voronoi Meshes

I would like to implement a Maya plugin (this question is independent of Maya) to create 3D Voronoi patterns, something like
I just know that I have to start from point sampling (I implemented the adaptive Poisson sampling algorithm described in this paper).
I thought that, from those points, I should create the 3D wireframe of the mesh by applying Voronoi, but the result was something different from what I expected.
Here are a few examples of what I get when handling the result from scipy.spatial.Voronoi like this (as suggested here):
from scipy.spatial import Voronoi

vor = Voronoi(points)
for vpair in vor.ridge_vertices:
    # skip ridges that have a vertex at infinity (marked by index -1)
    if all(x >= 0 for x in vpair):
        for i in range(len(vpair) - 1):
            v0 = vor.vertices[vpair[i]]
            v1 = vor.vertices[vpair[i + 1]]
            create_line(v0.tolist(), v1.tolist())  # create_line draws one segment in the scene
The grey vertices are the sampled points (the original shape was a simple sphere):
Here is a more complex shape (an arm)
Am I missing something? Can anyone suggest the proper pipeline and algorithms I have to implement to create such patterns?
I have seen your question since you posted it but didn't have a real answer for you; however, as you still haven't received any response, I'll at least write down some ideas. Unfortunately it's still not a full solution to your problem.
It seems to me that you're mixing a few separate problems in this question, so it would help to break it down into a few pieces:
Voronoi diagram:
The diagram is by definition infinite, so when you draw it directly you should expect a mess similar to what you got in your second image, so this seems fine. I don't know how SciPy does it, but the implementation I've used flagged some edge ends as 'infinite' and provided the edge's direction, so I could clip it at some distance myself. You'll need to check the exact data you get from SciPy.
In the 3D world you’ll almost always want to remove such infinite areas to get any meaningful rendering, or at least remove the area that contains your camera.
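For the record, in SciPy's output the 'infinite' flag mentioned above is simply the vertex index -1: any entry of vor.ridge_vertices that contains -1 is a ridge running off to infinity, so a bounded wireframe can be built from the finite ridges only (a small sketch reusing vor from the question's code):
finite_ridges = [r for r in vor.ridge_vertices if -1 not in r]    # keep these for drawing
infinite_ridges = [r for r in vor.ridge_vertices if -1 in r]      # clip or discard these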
Points generation:
The Poisson disc is fine for some sample data or for early R&D, but it's also the most boring one :). You'll need more ways to generate input points.
I tried to imagine the input needed for your ball-like example and I came up with something like this:
Create two spheres of points, with the same centre but different radii.
When you create a Voronoi diagram out of it and remove the infinite areas, you should end up with something like a football.
If you create both spheres randomly you'll get very irregular boundaries of the 'ball', but if you scale the points of one sphere to use them for the second one, you should get a regular, ball-like mesh. You can also use similar points but add some random offset to control the level of surface irregularity.
Take your computed diagram and, for each edge, create a few points along that edge - this will give you small areas building up the edges of the bigger areas. Play with random offsets again. Try to ignore edges that don't touch any infinite region to get a result similar to your image.
Get the points from both stages and compute the diagram once more (a rough sketch of these stages follows below).
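A sketch of that two-stage point generation with NumPy and scipy.spatial.Voronoi (the radii, point counts and the offsets along the edges are assumptions chosen for illustration):
import numpy as np
from scipy.spatial import Voronoi

def sphere_points(n, radius):
    """n roughly uniform points on a sphere of the given radius."""
    v = np.random.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return v * radius

base = sphere_points(200, 1.0)
shell = base * 1.1                      # same directions, larger radius -> regular "ball"
points = np.vstack([base, shell])

vor = Voronoi(points)

# stage 2: seed a few extra points along every finite ridge, then re-run Voronoi
extra = []
for ridge in vor.ridge_vertices:
    if -1 in ridge:                     # skip ridges that run off to infinity
        continue
    for a, b in zip(ridge, ridge[1:]):
        v0, v1 = vor.vertices[a], vor.vertices[b]
        for t in (0.25, 0.5, 0.75):
            extra.append(v0 + t * (v1 - v0))

vor2 = Voronoi(np.vstack([points, np.array(extra)]))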
Mesh generation:
Up to now it didn’t look like your target images. In fact it may be really hard to do it with production quality (for a Maya plugin) but I see some tricks that may help.
What I would try first would be to get all my edges and extrude some circle along them. You may modulate circle size to make it slightly bigger at the ends. Then do Boolean ‘OR’ between all those meshes and some Mesh Smooth at the end.
This way may give you similar results but you’ll need to be careful at mesh intersections, they can get ugly and need some special treatment.

Spatstat: Given a list of 2-d points, how to connect them into a polygon, and further make it the study region?

Please allow me to start the question with the simplest task: if I have four points which are the vertices of a rectangle, stored in a 4x2 matrix, how can I turn this into a rectangular window? (Please do not use any special command specific to drawing rectangles, as the rectangle is used here just to represent a general class of regular geometrical objects.)
To make things more complicated, suppose I have an nx2 matrix; how can I connect all of the n points so that they become a polygon? Note the object is not necessarily convex. I think the main difficulty is: how can R know which point should be connected with which?
The reason I am asking is that I was doing some image processing on a fish, and I managed to get the body line of the fish by finding the contour with OpenCV in Python and outputting it as an nx2 CSV file. When I read the CSV file into R and tried to use SpatialPolygons in the sp package to turn this into a polygon, some very unexpected behavior happened: there seems to be a break somewhere in the middle where the polygon got cut in half, i.e. the boundary of the polygon was not connected. Is there any way I can fix this problem?
Thank you.
Edit: Someone kindly pointed out that this is possibly a duplicate of another question: drawing polygons in R. However, the solution to that question relies on the shape being convex, so that it makes sense to order the vertices by angle; here the shape is not necessarily convex, so that approach will not work.
Do you want it to be a spatstat study region (of class owin) since you have the spatstat tag on there? In that case you can just use owin(poly=x) where x is your nx2 matrix (after loading the spatstat library of course). The rows in this matrix should contain the vertices of the polygon in the order that you want them connected (that's how R knows which point to connect with which). See help(owin) for more details.

Check if pixels lie within ellipse area in Python

I'm trying to create a true/false mask based on a test of whether pixels are inside an ellipse area or not.
Straightforward ways were proposed here - Counting points inside an ellipse
However, I'm wondering if there's a more elegant solution to this, something like matplotlib.nxutils. If I want to use nxutils, I have to convert my ellipse to a polygon somehow, which creates more complications and introduces additional error into the calculations.
Is there any module capable of running this kind of check with ellipses?
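For reference, here is a minimal NumPy sketch of the straightforward approach - evaluating the ellipse equation for every pixel - with an optional rotation angle (the function name and its parameters are made up for illustration, not taken from an existing module):
import numpy as np

def ellipse_mask(shape, cx, cy, a, b, theta=0.0):
    """Boolean mask of pixels inside an ellipse centred at (cx, cy)
    with semi-axes a, b and rotation theta (in radians)."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    x, y = xx - cx, yy - cy
    ct, st = np.cos(theta), np.sin(theta)
    # rotate pixel coordinates into the ellipse's own frame
    u = x * ct + y * st
    v = -x * st + y * ct
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

mask = ellipse_mask((480, 640), cx=320, cy=240, a=100, b=50, theta=np.pi / 6)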
