Modify polygons so that they don't overlap and area stays the same - python

I have a set of polygons and they can overlap with each other, like this:
I want to modify them in such a way that they don't overlap and the resulting surface area stays the same. Something like this:
It is okay if the shape or the position changes. The main thing is that they should not overlap with each other and the area should not change much (I know the area changed a little in the second image, but I drew it manually, so let's just assume that the areas did not change).
I am trying to do it programmatically with the help of Python. Basically I stored polygons in a PostGIS database and with the help of a script I want to retrieve them and modify them.
I am very new to GIS and thus this seems like a difficult task.
What is the correct way of doing it? Is there an algorithm that solves this kind of problem?

Take a look at ST_Buffer and try passing a signed float as the second argument (the distance to shrink the radius by, in degrees for lat/long data):
SELECT ST_Buffer(the_geom, -0.01) AS geom
Be careful with negative buffers as you could run into issues if the buffer size exceeds the radius, see here.

Here is what I did:
I iterated over all the polygons and found the overlapping ones. Then I moved each overlapping polygon a small step in different candidate directions and picked the best direction as the one giving the minimum resulting overlapping area. I then kept moving the polygon in that best direction until there was no overlapping area left.
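For illustration, a rough Shapely sketch of that idea (the step size, candidate directions and helper name are my assumptions, not the original code; translating a polygon keeps its area unchanged):

from shapely.affinity import translate

def separate(poly, others, step=1.0, max_iter=1000):
    # candidate directions: the four axis directions and the four diagonals
    directions = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

    def overlap(p):
        return sum(p.intersection(o).area for o in others)

    for _ in range(max_iter):
        if overlap(poly) == 0:
            break
        # pick the direction whose trial move reduces the overlapping area the most
        best = min(directions, key=lambda d: overlap(translate(poly, d[0] * step, d[1] * step)))
        poly = translate(poly, best[0] * step, best[1] * step)
    return poly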


How to get the largest rectangle inside a contour?

I'd like to ask if there's a better or faster alternative way to get the largest rectangle inside an almost rectangular contour.
The rectangle should be aligned to both x and y axis and should be completely inside the rectangular contour. That means it would not contain any external white pixels, yet occupy the largest area in the contour.
Test image is here:
I've tried these two, but I'm looking to see if there's a faster and neater way to go about this.
I also tried going through the points of a contour and getting the minimum and maximum points like in here but of course, it just shows similar results to what cv2.boundingRect already does.
Maybe this is a bit of lateral thinking, but looking at your examples and spec, why not fill in the white pixels contiguous with the outside bounding box instead? (Like a 'paint pot' fill in a Paint-type application.)
E.g. (red pixels being the ones you would turn black from white):
You could probably even limit the process to the outer N pixels.
============================
So how might one implement this? It is essentially a version of the "flood fill" algorithm used in pixel graphics programs, except that you start not from a single seed pixel but from every point on the edge of the outside bounding rectangle. You start filling in and build a stack of points you need to come back to, because you can't necessarily follow every area at once and may need to go back on yourself.
You can look that algorithm up, but a 'pure' version will be very stack-heavy if you push every point you can't follow right now, particularly starting with the whole boundary of the shape.
I haven't implemented it this way, but my first thought would be to scan from a boundary inwards, taking a whole line of pixels at a time: mark all the 'white' pixels in that line with a new third colour, then on the next row fill all the white pixels touching the previously marked pixels, and so on. (It doesn't matter whether you mark the changed pixels with a third colour, a mask, an alpha channel or whatever, but you must be able to tell newly filled-in pixels from the old black ones.)
As you go, you need to check for any 'stranded' areas where you need to work backwards to fill in white areas that are not directly connected to the outside:
Start filling from the edge...
Watch out for stranded areas - if you find one, scan backwards to fill it before going back to where you were, to carry on (you may need to recurse if your stranded area turns back on itself again, though in your particular application this shouldn't be a huge issue, unlike in some graphics applications).
And continue, not forgetting to fill in from the other edges if required (see note below) until you come to a row with no further pixels to fill and no more back-filling to do. Then restart at the far side of the image as you need to start a backward pass from the far side to catch anything else on that side.
For a practical implementation there is some thinking to do. Your examples will have a lot of filling at the edge but not much by way of complex internal shapes to follow, which keeps things simple. But you need to work from all 4 sides to do it efficiently - perhaps working inwards as a series of concentric rectangles rather than one side at a time. That means more complexity in working through the design, but it is massively more efficient in this example. A basic sketch of the fill step is below.
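A compact Python sketch of the basic idea (this uses a BFS queue seeded with every white border pixel rather than the row-by-row scan described above, so the third-colour bookkeeping isn't needed; img is assumed to be a 2-D uint8 array with white = 255 and black = 0):

from collections import deque

def fill_outside(img, fill_value=0):
    h, w = img.shape
    out = img.copy()
    # seed the queue with all white pixels on the image border
    queue = deque((y, x) for y in range(h) for x in range(w)
                  if (y in (0, h - 1) or x in (0, w - 1)) and out[y, x] == 255)
    for y, x in queue:
        out[y, x] = fill_value
    # flood inwards: any white pixel 4-connected to an already filled pixel gets filled
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and out[ny, nx] == 255:
                out[ny, nx] = fill_value
                queue.append((ny, nx))
    return out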
Food for thought anyhow.

how to count point which based on projection of a set of coordinates

I have some data consisting of a set of coordinates, like {(x=1, y=2), (x=3, y=4), ...}. I projected these coordinates onto a picture. Now I would like to count these points in different positions. My idea is below:
First, divide the picture into several parts using a 10-pixel grid.
Second, count the points inside each pixel box.
I am new to this area, and I use Python a lot. I think this may be a computer graphics problem.
I am not asking for a code solution. I just want to know which libraries or approaches are relevant.
Anyone could give me a hint would be highly appreciated. Thanks.
Sure, your approach seems fine. You simply want to count the number of pixels in different image regions that you placed, correct?
I answered a question recently (with Python) that was giving an indication if there was a black pixel inside an image region. It could be easily modified to count pixels instead of simply finding one. Check it out here and modify your question or post a new one if you have code problems working it out.
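If the goal is just counting points per 10x10-pixel cell, a minimal NumPy sketch (the picture size and the points array are assumptions for illustration) could look like this:

import numpy as np

points = np.array([(1, 2), (3, 4), (15, 27)])   # your (x, y) coordinates
width, height = 640, 480                        # assumed picture size
cell = 10                                       # the 10-pixel grid described above
counts, xedges, yedges = np.histogram2d(
    points[:, 0], points[:, 1],
    bins=[width // cell, height // cell],
    range=[[0, width], [0, height]])
# counts[i, j] is the number of points falling in the i-th x-bin and j-th y-bin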

Pipeline to create Voronoi Meshes

I would like to implement a Maya plugin (this question is independent of Maya) to create 3D Voronoi patterns, something like this:
I just know that I have to start from point sampling (I implemented the adaptive poisson sampling algorithm described in this paper).
I thought that, from those points, I should create the 3D wireframe of the mesh by applying Voronoi, but the result was different from what I expected.
Here are a few examples of what I get when handling the result from scipy.spatial.Voronoi like this (as suggested here):
vor = Voronoi(points)
for vpair in vor.ridge_vertices:
    # skip ridges that extend to infinity (they contain a -1 vertex index)
    if all(x >= 0 for x in vpair):
        for i in range(len(vpair) - 1):
            v0 = vor.vertices[vpair[i]]
            v1 = vor.vertices[vpair[i + 1]]
            create_line(v0.tolist(), v1.tolist())
The grey vertices are the sampled points (the original shape was a simple sphere):
Here is a more complex shape (an arm)
Am I missing something? Can anyone suggest the proper pipeline and algorithms I have to implement to create such patterns?
I have seen your question since you posted it but didn't have a real answer for you; however, as I see you still didn't get any response, I'll at least write down some ideas. Unfortunately it's still not a full solution to your problem.
It seems to me you're mixing a few separate problems in this question, so it would help to break it down into a few pieces:
Voronoi diagram:
The diagram is by definition infinite, so when you draw it directly you should expect a similar mess to the one in your second image, so this part seems fine. I don't know how SciPy does it, but the implementation I've used flagged some edge ends as 'infinite' and provided me with the edge direction, so I could clip them at some distance myself. You'll need to check the exact data you get from SciPy.
In the 3D world you’ll almost always want to remove such infinite areas to get any meaningful rendering, or at least remove the area that contains your camera.
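For reference, with scipy.spatial.Voronoi the parts that extend to infinity are marked with a -1 vertex index, so (assuming the vor object from the snippet in the question) dropping them can be as simple as:

# ridges (facets) that reach infinity contain the index -1; keep only finite ones
finite_ridges = [rv for rv in vor.ridge_vertices if -1 not in rv]
# likewise, skip regions that touch infinity when building closed cells
finite_regions = [r for r in vor.regions if r and -1 not in r]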
Points generation:
The Poisson disc is fine as some sample data or for early R&D but it’s also the most boring one :). You’ll need more ways to generate input points.
I tried to imagine the input needed for your ball-like example and I came up with something like this:
Create two spheres of points, with the same center but different radius.
When you create a Voronoi diagram out of it and remove the infinite areas, you should end up with something like a football.
If you create both spheres randomly you'll get very irregular boundaries of the 'ball', but if you scale the points of one sphere to use for the second one, you should get a regular mesh, similar to a ball. You can also use similar points but add some random offset to control the level of surface irregularity.
Take your computed diagram and for each edge create a few points along that edge; this will give you small areas building up the edges of the bigger areas. Play with random offsets again. Try to ignore edges that don't touch any infinite region to get a result similar to your image.
Get the points from both stages and compute the diagram once more.
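A quick sketch of the two-sphere input described in the first step above (the point counts, radii and scale factor are arbitrary assumptions):

import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)

def sphere_points(n, radius):
    # uniform directions from normalized Gaussian samples, scaled to the radius
    v = rng.normal(size=(n, 3))
    return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

outer = sphere_points(300, 1.0)
inner = outer * 0.8            # scaled copy of the same points gives regular, ball-like cells
vor = Voronoi(np.vstack([outer, inner]))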
Mesh generation:
Up to now it didn’t look like your target images. In fact it may be really hard to do it with production quality (for a Maya plugin) but I see some tricks that may help.
What I would try first would be to get all my edges and extrude some circle along them. You may modulate circle size to make it slightly bigger at the ends. Then do Boolean ‘OR’ between all those meshes and some Mesh Smooth at the end.
This way may give you similar results but you’ll need to be careful at mesh intersections, they can get ugly and need some special treatment.

Spatstat: Given a list of 2-d points, how to connect them into a polygon, and further make it the study region?

Please allow me to start the question with the simplest task: if I have four points which are the vertices of a rectangle, stored in a 4x2 matrix, how can I turn this into a rectangular window? (Please do not use any special command specific to drawing rectangles, as the rectangle is used just to represent a general class of regular geometrical objects.)
To make things more complicated, suppose I have a nx2 matrix, how can I connect all of the n points so that it becomes a polygon? Note the object is not necessarily convex. I think the main difficulty is that, how can R know which point should be connected with which?
The reason I am asking is that I was doing some image processing on a fish, and I managed to get the body line of the fish by finding the contour with OpenCV in Python and outputting it as an nx2 csv file. When I read the csv file into R and tried to use SpatialPolygons in the sp package to turn this into a polygon, some very unexpected behavior happened; there seems to be a break somewhere in the middle so that the polygon got cut in half, i.e. the boundary of the polygon was not connected. Is there any way I can fix this problem?
Thank you.
Edit: Someone kindly pointed out that this is possibly a duplicate of another question: drawing polygons in R. However, the solution to that question relies on the shape being convex, and hence it makes sense to order by angles; here the shape is not necessarily convex, so that will not work.
Do you want it to be a spatstat study region (of class owin) since you have the spatstat tag on there? In that case you can just use owin(poly=x) where x is your nx2 matrix (after loading the spatstat library of course). The rows in this matrix should contain the vertices of the polygon in the order that you want them connected (that's how R knows which point to connect with which). See help(owin) for more details.

Image registration using python and cross-correlation

I have two images showing exactly the same content: 2D Gaussian-shaped spots. I call these two 16-bit PNG files "left.png" and "right.png". But as they are obtained through slightly different optical setups, the corresponding spots (physically the same) appear at slightly different positions, meaning the right one is slightly stretched or distorted in a non-linear way. Therefore I would like to get the transformation from left to right.
So for every pixel on the left side with its x- and y-coordinate I want a function giving me the components of the displacement-vector that points to the corresponding pixel on the right side.
In a former approach I tried to get the positions of the corresponding spots to obtain the relative distances deltaX and deltaY. I then fitted these distances to a Taylor expansion of T(x,y) up to second order, giving me the x- and y-components of the displacement vector for every pixel (x,y) on the left, pointing to the corresponding pixel (x',y') on the right.
To get a more general result I would like to use normalized cross-correlation. For this I multiply every pixel value from left with the corresponding pixel value from right and sum over these products. The transformation I am looking for should connect the pixels that will maximize the sum. So when the sum is maximized, I know that I multiplied the corresponding pixels.
I really tried a lot with this, but didn't manage it. My question is whether somebody has an idea or has ever done something similar.
import numpy as np
from PIL import Image
left = np.array(Image.open('left.png'))
right = np.array(Image.open('right.png'))
# for normalization (http://en.wikipedia.org/wiki/Cross-correlation#Normalized_cross-correlation)
left = (left - left.mean()) / left.std()
right = (right - right.mean()) / right.std()
Please let me know if I can make this question clearer. I still have to check out how to post questions using LaTeX.
Thank you very much for input.
[left.png] http://i.stack.imgur.com/oSTER.png
[right.png] http://i.stack.imgur.com/Njahj.png
I'm afraid, in most cases 16-bit images appear just black (at least on systems I use) :( but of course there is data in there.
UPDATE 1
I'll try to clarify my question. I am looking for a vector field of displacement vectors that point from every pixel in left.png to the corresponding pixel in right.png. My problem is that I am not sure about the constraints I have.
d(r) = r' - r
where the vector r (components x and y) points to a pixel in left.png and the vector r' (components x' and y') points to the corresponding pixel in right.png. For every r there is a displacement vector.
What I did earlier was that I manually found the components of the vector field d and fitted them to a second-degree polynomial. So I fitted:
delta-x(x,y) = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2
and
delta-y(x,y) = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2
Does this make sense to you? Is it possible to get all the delta-x(x,y) and delta-y(x,y) with cross-correlation? The cross-correlation should be maximized if the corresponding pixels are linked together through the displacement vectors, right?
UPDATE 2
So the algorithm I was thinking of is as follows:
Deform right.png
Get the value of cross-correlation
Deform right.png further
Get the value of cross-correlation and compare to value before
If it's greater, good deformation, if not, redo deformation and do something else
After maximizing the cross-correlation value, I know what deformation there is :)
About the deformation: could one first do a shift along the x- and y-directions to maximize the cross-correlation, then in a second step stretch or compress x- and y-dependently, and in a third step deform quadratically x- and y-dependently, repeating this procedure iteratively? I really have a problem doing this with integer coordinates. Do you think I would have to interpolate the picture to obtain a continuous distribution? I have to think about this again :( Thanks to everybody for taking part :)
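To make the idea in Update 2 concrete, here is a rough sketch that optimizes only a global x/y shift (the simplest possible 'deformation'); scipy.ndimage.shift interpolates, so non-integer displacements are not a problem, and richer deformation models could be parameterized and optimized the same way:

import numpy as np
from PIL import Image
from scipy.ndimage import shift
from scipy.optimize import minimize

left = np.asarray(Image.open('left.png'), dtype=float)
right = np.asarray(Image.open('right.png'), dtype=float)

def ncc(a, b):
    # normalized cross-correlation of two equally sized images
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return (a * b).mean()

def cost(params):
    dx, dy = params
    warped = shift(right, (dy, dx), order=1)   # linear interpolation, non-integer shifts allowed
    return -ncc(left, warped)                  # minimizing the negative NCC maximizes the NCC

res = minimize(cost, x0=[0.0, 0.0], method='Nelder-Mead')
print('best shift (dx, dy):', res.x)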
OpenCV (and with it the Python OpenCV binding) has a StarDetector class which implements this algorithm.
As an alternative you might have a look at the OpenCV SIFT class, which stands for Scale Invariant Feature Transform.
Update
Regarding your comment, I understand that the "right" transformation will maximize the cross-correlation between the images, but I don't understand how you choose the set of transformations over which to maximize. Maybe if you know the coordinates of three matching points (either by some heuristics or by choosing them by hand), and if you expect an affine relation, you could use something like cv2.getAffineTransform to have a good initial transformation for your maximization process. From there you could use small additional transformations to have a set over which to maximize. But this approach seems to me like re-inventing something which SIFT could take care of.
To actually transform your test image you can use cv2.warpAffine, which also can take care of border values (e.g. pad with 0). To calculate the cross-correlation you could use scipy.signal.correlate2d.
Update
Your latest update did indeed clarify some points for me. But I think that a vector field of displacements is not the most natural thing to look for, and this is also where the misunderstanding came from. I was thinking more along the lines of a global transformation T, which applied to any point (x,y) of the left image gives (x',y')=T(x,y) on the right side, but T has the same analytical form for every pixel. For example, this could be a combination of a displacement, rotation, scaling, maybe some perspective transformation. I cannot say whether it is realistic or not to hope to find such a transformation, this depends on your setup, but if the scene is physically the same on both sides I would say it is reasonable to expect some affine transformation. This is why I suggested cv2.getAffineTransform. It is of course trivial to calculate your displacement vector field from such a T, as this is just T(x,y)-(x,y).
The big advantage would be that you have only very few degrees of freedom for your transformation, instead of, I would argue, 2N degrees of freedom in the displacement vector field, where N is the number of bright spots.
If it is indeed an affine transformation, I would suggest some algorithm like this:
identify three bright and well isolated spots on the left
for each of these three spots, define a bounding box so that you can hope to identify the corresponding spot within it in the right image
find the coordinates of the corresponding spots, e.g. with some correlation method as implemented in cv2.matchTemplate or by also just finding the brightest spot within the bounding box.
once you have three matching pairs of coordinates, calculate the affine transformation which transforms one set into the other with cv2.getAffineTransform.
apply this affine transformation to the left image; as a check that you found the right one, you could see whether the overall normalized cross-correlation is above some threshold, or whether it drops significantly if you displace one image with respect to the other.
if you wish and still need it, calculate the displacement vector field trivially from your transformation T.
Update
It seems cv2.getAffineTransform expects an awkward input data type 'float32'. Let's assume the source coordinates are (sxi,syi) and destination (dxi,dyi) with i=0,1,2, then what you need is
src = np.array( ((sx0,sy0),(sx1,sy1),(sx2,sy2)), dtype='float32' )
dst = np.array( ((dx0,dy0),(dx1,dy1),(dx2,dy2)), dtype='float32' )
result = cv2.getAffineTransform(src,dst)
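As a hypothetical continuation of that snippet, the resulting 2x3 matrix can then be applied (and visually checked) with cv2.warpAffine:

import cv2
import numpy as np

left_img = cv2.imread('left.png', cv2.IMREAD_UNCHANGED).astype(np.float32)
h, w = left_img.shape[:2]
# warp the left image with the affine matrix computed above; borders are padded with 0
warped = cv2.warpAffine(left_img, result, (w, h), borderValue=0)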
I don't think a cross correlation is going to help here, as it only gives you a single best shift for the whole image. There are three alternatives I would consider:
Do a cross correlation on sub-clusters of dots. Take, for example, the three dots in the top right and find the optimal x-y shift through cross-correlation. This gives you the rough transform for that corner. Repeat for as many clusters as you can to obtain a reasonable map of your transformations. Fit this with your Taylor expansion and you might get reasonably close. However, for your cross-correlation to work at all, the difference in displacement between spots must be less than the extent of a spot, else you can never get all spots in a cluster to overlap simultaneously with a single displacement. Under these conditions, option 2 might be more suitable.
If the displacements are relatively small (which I think is a condition for option 1), then we might assume that for a given spot in the left image, the closest spot in the right image is the corresponding spot. Thus, for every spot in the left image, we find the nearest spot in the right image and use that as the displacement in that location. From the 40-something well distributed displacement vectors we can obtain a reasonable approximation of the actual displacement by fitting your Taylor expansion.
This is probably the slowest method, but might be the most robust if you have large displacements (and option 2 thus doesn't work): use something like an evolutionary algorithm to find the displacement. Apply a random transformation, compute the remaining error (you might need to define this as sum of the smallest distance between spots in your original and transformed image), and improve your transformation with those results. If your displacements are rather large you might need a very broad search as you'll probably get lots of local minima in your landscape.
I would try option 2 as it seems your displacements might be small enough to easily associate a spot in the left image with a spot in the right image.
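A rough sketch of option 2, assuming the spot centres have already been extracted into Nx2 arrays (left_spots and right_spots are hypothetical names), fitting the same second-order polynomial described in the question:

import numpy as np
from scipy.spatial import cKDTree

def nearest_displacements(left_spots, right_spots):
    # for every spot on the left, take the closest spot on the right as its match
    tree = cKDTree(right_spots)
    _, idx = tree.query(left_spots)
    return right_spots[idx] - left_spots        # displacement vectors d(r)

def fit_quadratic(left_spots, d):
    # least-squares fit of delta-x(x,y) and delta-y(x,y) to second-order polynomials
    x, y = left_spots[:, 0], left_spots[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    coeff_x, *_ = np.linalg.lstsq(A, d[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, d[:, 1], rcond=None)
    return coeff_x, coeff_y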
Update
I assume your optics induce non-linear distortions, and having two separate beam paths (different filters in each?) will make the relationship between the two images even more non-linear. The affine transformation PiQuer suggests might give a reasonable approach but can probably never completely cover the actual distortions.
I think your approach of fitting to a low-order Taylor polynomial is fine. This works for all my applications with similar conditions. The highest orders should probably be something like xy^2 and x^2y; anything higher than that you won't notice.
Alternatively, you might be able to calibrate the distortions for each image first, and then do your experiments. This way you are not dependent on the distribution of your dots, but can use a high-resolution reference image to get the best description of your transformation.
Option 2 above still stands as my suggestion for getting the two images to overlap. This can be fully automated and I'm not sure what you mean when you want a more general result.
Update 2
You comment that you have trouble matching dots in the two images. If this is the case, I think your iterative cross-correlation approach may not be very robust either. You have very small dots, so overlap between them will only occur if the difference between the two images is small.
In principle there is nothing wrong with your proposed solution, but whether it works or not strongly depends on the size of your deformations and the robustness of your optimization algorithm. If you start off with very little overlap, then it may be hard to find a good starting point for your optimization. Yet if you have sufficient overlap to begin with, then you should have been able to find the deformation per dot first, but in a comment you indicate that this doesn't work.
Perhaps you can go for a mixed solution: find the cross correlation of clusters of dots to get a starting point for your optimization, and then tweak the deformation using something like the procedure you describe in your update. Thus:
For an NxN-pixel segment, find the shift between the left and right images
Repeat for, say, 16 of those segments
Compute an approximation of the deformation using those 16 points
Use this as the starting point of your optimization approach
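A minimal sketch of those four steps (the segment size, grid and the left/right arrays are assumptions; left and right are taken to be the normalized image arrays from the question, and the sign convention of the recovered shift should be checked against your data):

import numpy as np
from scipy.signal import fftconvolve

def segment_shift(a, b):
    # cross-correlate two equally sized segments and take the peak position
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = fftconvolve(a, b[::-1, ::-1], mode='same')
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    return peak[0] - centre[0], peak[1] - centre[1]   # (dy, dx)

# collect local shifts on a 4x4 grid of 128x128 segments as a starting point
shifts = {}
n, size = 4, 128
for i in range(n):
    for j in range(n):
        y, x = i * left.shape[0] // n, j * left.shape[1] // n
        shifts[(x, y)] = segment_shift(left[y:y + size, x:x + size],
                                       right[y:y + size, x:x + size])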
You might want to have a look at bunwarpj, which already does what you're trying to do. It's not Python, but I use it in exactly this context. You can export a plain-text spline transformation and use it if you wish to do so.
