Find new positions for overlapping circles - python

I am trying to write code that, for a given list of circles (list1), finds positions for new circles (list2). list1 and list2 have the same length, because for each circle in list1 there must be a circle from list2.
Each pair of circles (let's say circle1 from list1 and circle2 from list2) must be as close together as possible;
circles from list2 must not overlap with circles from list1, while circles of a single list can overlap each other.
list1 is fixed, so now I have to find the right positions for the circles from list2.
I wrote this simple function to recognize if 2 circles overlap:
def overlap(x1, y1, x2, y2, r1, r2):
    distSq = (x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2)
    radSumSq = (r1 + r2) * (r1 + r2)
    if distSq >= radSumSq:
        return False  # no overlap
    else:
        return True  # overlap
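For example, with two circles of radius 0.014:

print(overlap(0.0, 0.0, 0.02, 0.0, 0.014, 0.014))  # True: centers 0.02 apart, radii sum to 0.028
print(overlap(0.0, 0.0, 0.03, 0.0, 0.014, 0.014))  # False: centers 0.03 apart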
and this is list1 (every circle has radius = 0.014), copy and pasteable:
x = [14.11450195,14.14184093,14.15435028,14.16206741,14.16951752,
14.17171097,14.18569565,14.19700241,14.23129082,14.24083233,
14.24290752,14.24968338,14.2518959,14.26536751,14.27209759,
14.27612877,14.2904377,14.29187012,14.29409599,14.29618549,
14.30615044,14.31624985,14.3206892,14.3228569,14.36143875,
14.36351967,14.36470699,14.36697292,14.37235737,14.41422081,
14.42583466,14.43226814,14.43319225,14.4437027,14.4557848,
14.46592999,14.47036076,14.47452068,14.47815609,14.52229309,
14.53059006,14.53404236,14.5411644]
y = [-0.35319126,-0.44222349,-0.44763246,-0.35669261,-0.24366629,
-0.3998799,-0.38940558,-0.57744932,-0.45223859,-0.21021004,
-0.44250247,-0.45866323,-0.47203487,-0.51684451,-0.44884869,
-0.2018993,-0.40296811,-0.23641759,-0.18019417,-0.33391538,
-0.53565156,-0.45215255,-0.40939832,-0.26936951,-0.30894437,
-0.55504167,-0.47177047,-0.45573688,-0.43100587,-0.5805912,
-0.21770373,-0.199422,-0.17372169,-0.38522363,-0.56950212,
-0.56947368,-0.48770753,-0.24940367,-0.31492445,-0.54263926,
-0.53460872,-0.4053807,-0.43733299]
radius = 0.014
Now I am not sure what to do. My first idea is to draw the circles of list2 taking x and y from list1 and doing something like x+c and y+c, where c is a fixed value. Then I can call my overlap function and, if there is an overlap, I can increase the value of c.
This way I have 2 for loops. Now, my questions are:
Is there a way to avoid the for loops?
Is there a smart solution to find a neighbor (a circle from list2) for each circle from list1 (without overlaps with other circles from list2)?
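For concreteness, a brute-force version of that idea could look like the sketch below; the diagonal direction and the step size are arbitrary choices, and it only checks each new circle against its own partner, not against the rest of list1:

def naive_positions(x, y, r, step=0.001):
    # push each list2 circle diagonally away from its list1 partner
    # until the pair no longer overlaps
    x2, y2 = [], []
    for xi, yi in zip(x, y):
        c = 0.0
        while overlap(xi, yi, xi + c, yi + c, r, r):
            c += step
        x2.append(xi + c)
        y2.append(yi + c)
    return x2, y2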

Using numpy arrays, you can avoid for loops.
Setup from your example.
import numpy as np

# Using your x and y
c1 = np.array([x, y]).T

# random set of other centers within the same range as c1
c2 = np.random.random((10, 2))
np.multiply(c2, c1.max(0) - c1.min(0), out=c2)
np.add(c2, c1.min(0), out=c2)

radius = 0.014
r = radius
min_d = (2*r)*(2*r)

plot_circles(c1, c2)  # see function at end
An array of distances from each center in c1 to each center in c2
def dist(c1, c2):
    dx = c1[:,0,None] - c2[:,0]
    dy = c1[:,1,None] - c2[:,1]
    return dx*dx + dy*dy

d = dist(c1, c2)
Or you could use scipy.spatial
from scipy.spatial import distance
d = distance.cdist(c1,c2,'sqeuclidean')
Create a 2d Boolean array for circles that intersect.
intersect = d <= min_d
Find the indices of overlapping circles from the two sets.
a,b = np.where(intersect)
plot_circles(c1[a],c2[b])
Using intersect or a and b to index c1, c2, and d, you should be able to get groups of intersecting circles, then figure out how to move the c2 centers - but I'll leave that for another question/answer. If a list2 circle intersects one list1 circle - find the line between the two centers and move along that line. If a list2 circle intersects more than one list1 circle - find the line between the two closest list1 circles and move the list2 circle along a line perpendicular to that. You didn't mention any constraints on moving the circles, so maybe random movement then finding the intersects again could work, but that might be problematic. In the following image, it may be trivial to figure out how to move most of the red circles, but the group circled in blue might require a different strategy.
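For the single-intersection case, a minimal sketch of moving along the line between the two centers could look like this (it can create new overlaps, so it would have to be iterated):

# push each overlapping c2 center straight away from the c1 center it
# intersects, so that the two circles end up exactly touching
for i, j in zip(a, b):
    delta = c2[j] - c1[i]
    length = np.sqrt((delta * delta).sum())
    if length > 0:
        c2[j] = c1[i] + delta / length * 2 * r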
Here are some examples for getting groups:
>>> for f, g, h in zip(c1[a], c2[b], d[a,b]):
...     print(f, g, h)

>>> c1[intersect.any(1)], c2[intersect.any(0)]

>>> for (f, g) in zip(c2, intersect.T):
...     if g.any():
...         print(f.tolist(), c1[g].tolist())
import matplotlib as mpl
from matplotlib import pyplot as plt

def plot_circles(c1, c2):
    bounds = np.array([c1.min(0), c2.min(0), c1.max(0), c2.max(0)])
    xmin, ymin = bounds.min(0)
    xmax, ymax = bounds.max(0)
    circles1 = [mpl.patches.Circle(xy, radius=r, fill=False, edgecolor='g') for xy in c1]
    circles2 = [mpl.patches.Circle(xy, radius=r, fill=False, edgecolor='r') for xy in c2]
    fig = plt.figure()
    ax = fig.add_subplot(111)
    for c in circles2:
        ax.add_artist(c)
    for c in circles1:
        ax.add_artist(c)
    ax.set_xlim(xmin - r, xmax + r)
    ax.set_ylim(ymin - r, ymax + r)
    plt.show()
    plt.close()

This problem can very well be seen as an optimization problem. To be more precise, a nonlinear optimization problem with constraints.
Since optimization strategies are not always so easy to understand, I will define the problem as simply as possible and also choose an approach that is as general as possible (but less efficient) and does not involve a lot of mathematics. As a spoiler: We are going to formulate the problem and the minimization process in less than 10 lines of code using the scipy library.
However, I will still provide hints on where you can get your hands even dirtier.
Formulating the problem
As a guide for a formulation of an NLP-class problem (Nonlinear Programming), you can go directly to the two requirements in the original post.
Each pair of circles must be as close together as possible -> Hint for a cost-function
Circles must not overlap with other (moved) circles -> Hint for a constraint
Cost function
Let's start with the formulation of the cost function to be minimized.
Since the circles should be moved as little as possible (resulting in the closest possible neighborhood), a quadratic penalty term for the distances between the circles of the two lists can be chosen for the cost function:
import scipy.spatial.distance as sd

def cost_function(new_positions, old_positions):
    new_positions = np.reshape(new_positions, (-1, 2))
    # the diagonal of the cdist matrix holds the squared distance between
    # each new position and its corresponding original position
    return np.trace(sd.cdist(new_positions, old_positions, metric='sqeuclidean'))
Why quadratic? Partly because of differentiability and for stochastic reasons (think of the circles as normally distributed measurement errors -> least squares is then a maximum likelihood estimator). By exploiting the structure of the cost function, the efficiency of the optimization can be increased (elimination of sqrt). By the way, this problem is related to nonlinear regression, where (nonlinear) least squares are also used.
Now that we have a cost function at hand, we also have a good way to evaluate our optimization. To be able to compare solutions of different optimization strategies, we simply pass the newly calculated positions to the cost function.
Let's give it a try: For example, let us use the calculated positions from the Voronoi approach (by Paul Brodersen).
print(cost_function(new_positions, old_positions))
# prints 0.007999244511697411
That's a pretty good value if you ask me. Considering that the cost function spits out zero when there is no displacement at all, this cost is already pretty close to zero. We can now try to outperform this value by using classical optimization!
Non-linear constraint
We know that circles must not overlap with other circles in the new set. If we translate this into a constraint, we find that the lower bound for the distance is 2 times the radius and the upper bound is simply infinity.
import scipy.spatial.distance as sd
from scipy.optimize import NonlinearConstraint

def cons_f(x):
    x = np.reshape(x, (-1, 2))
    return sd.pdist(x)

nonlinear_constraint = NonlinearConstraint(cons_f, 2*radius, np.inf, jac='2-point')
Here we make life easy by approximating the Jacobian matrix via finite differences (see parameter jac='2-point'). At this point it should be said that we can increase the efficiency here by formulating the first- and second-order derivatives ourselves instead of using approximations. But this is left to the interested reader. (It is not that hard, because we use quite simple mathematical expressions for the distance calculation here.)
One additional note: You can also set a boundary constraint for the positions themselves not to exceed a specified region. This can then be used as another parameter. (See scipy.optimize.Bounds)
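For illustration, such a box constraint could look like the following sketch; the numeric bounds are made up, and positions is the (n, 2) array of original centers used in the complete code below:

import numpy as np
from scipy.optimize import Bounds

# hypothetical region: x in [14.0, 14.6], y in [-0.6, -0.1]
# the bound arrays must match the flattened (x0, y0, x1, y1, ...) layout
n = len(positions)
lower = np.tile([14.0, -0.6], n)
upper = np.tile([14.6, -0.1], n)
bounds = Bounds(lower, upper)
# res = minimize(..., bounds=bounds, constraints=[nonlinear_constraint])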
Minimizing the cost function under constraints
Now we have both ingredients, the cost function and the constraint, in place. Now let's minimize the whole thing!
from scipy.optimize import minimize

res = minimize(lambda x: cost_function(x, positions), positions.flatten(), method='SLSQP',
               jac="2-point", constraints=[nonlinear_constraint])
As you can see, we approximate the first derivatives here as well. You can also go deeper here and set up the derivatives yourself (analytically).
Also note that we must always pass the parameters (an n x 2 array specifying the positions of the new layout for n circles) as a flat vector. For this reason, reshaping appears several times in the code.
Evaluation, summary and visualization
Let's see how the optimization result performs in our cost function:
new_positions = np.reshape(res.x, (-1,2))
print(cost_function(new_positions, old_positions))
# prints 0.0010314079483565686
Starting from the Voronoi approach, we actually reduced the cost by another 87%! Thanks to the power of modern optimization strategies, we can solve a lot of problems in no time.
Of course, it would be interesting to see how the shifted circles look now:
(Image: circles after optimization)
Performance: 77.1 ms ± 1.17 ms
The entire code:
import numpy as np
import scipy.spatial.distance as sd
from scipy.optimize import minimize, NonlinearConstraint

# x, y, and radius as given by the original post
positions = np.array([x, y]).T

def cost_function(new_positions, old_positions):
    new_positions = np.reshape(new_positions, (-1, 2))
    return np.trace(sd.cdist(new_positions, old_positions, metric='sqeuclidean'))

def cons_f(x):
    x = np.reshape(x, (-1, 2))
    return sd.pdist(x)

nonlinear_constraint = NonlinearConstraint(cons_f, 2*radius, np.inf, jac='2-point')
res = minimize(lambda x: cost_function(x, positions), positions.flatten(), method='SLSQP',
               jac="2-point", constraints=[nonlinear_constraint])

One solution could be to follow the gradient of the unwanted spacing between each circle, though maybe there is a better way. This approach has a few parameters to tune and takes some time to run.
import matplotlib.pyplot as plt
from scipy.optimize import minimize as mini
import numpy as np
from scipy.optimize import approx_fprime
x = np.array([14.11450195,14.14184093,14.15435028,14.16206741,14.16951752,
14.17171097,14.18569565,14.19700241,14.23129082,14.24083233,
14.24290752,14.24968338,14.2518959,14.26536751,14.27209759,
14.27612877,14.2904377,14.29187012,14.29409599,14.29618549,
14.30615044,14.31624985,14.3206892,14.3228569,14.36143875,
14.36351967,14.36470699,14.36697292,14.37235737,14.41422081,
14.42583466,14.43226814,14.43319225,14.4437027,14.4557848,
14.46592999,14.47036076,14.47452068,14.47815609,14.52229309,
14.53059006,14.53404236,14.5411644])
y = np.array([-0.35319126,-0.44222349,-0.44763246,-0.35669261,-0.24366629,
-0.3998799,-0.38940558,-0.57744932,-0.45223859,-0.21021004,
-0.44250247,-0.45866323,-0.47203487,-0.51684451,-0.44884869,
-0.2018993,-0.40296811,-0.23641759,-0.18019417,-0.33391538,
-0.53565156,-0.45215255,-0.40939832,-0.26936951,-0.30894437,
-0.55504167,-0.47177047,-0.45573688,-0.43100587,-0.5805912,
-0.21770373,-0.199422,-0.17372169,-0.38522363,-0.56950212,
-0.56947368,-0.48770753,-0.24940367,-0.31492445,-0.54263926,
-0.53460872,-0.4053807,-0.43733299])
radius = 0.014
x0, y0 = (x, y)
def plot_circles(x, y, name='initial'):
    fig, ax = plt.subplots()
    for ii in range(x.size):
        ax.add_patch(plt.Circle((x[ii], y[ii]), radius, color='b', fill=False))
    ax.set_xlim(x.min() - radius, x.max() + radius)
    ax.set_ylim(y.min() - radius, y.max() + radius)
    fig.savefig(name)
    plt.clf()

def spacing(s):
    x, y = np.split(s, 2)
    dX, dY = [np.subtract(*np.meshgrid(xy, xy, indexing='ij')).T
              for xy in [x, y]]
    dXY2 = dX**2 + dY**2
    return np.minimum(dXY2[np.triu_indices(x.size, 1)] - (2 * radius) ** 2, 0).sum()

plot_circles(x, y)

def spacingJ(s):
    return approx_fprime(s, spacing, 1e-8)

s = np.append(x, y)
for ii in range(50):
    j = spacingJ(s)
    if j.sum() == 0: break
    s += .01 * j
    x_new, y_new = np.split(s, 2)
    plot_circles(x_new, y_new, 'new%i' % ii)

plot_circles(x_new, y_new, 'new%i' % ii)
https://giphy.com/gifs/x0lWDLZBz5O3gWTbLa

This answer implements a variation of Lloyd's algorithm. The basic idea is to compute the Voronoi diagram for your points / circles. This assigns each point a cell, which is a region that includes the point and whose center is maximally far away from all other points.
In the original algorithm, we would move each point towards the center of its Voronoi cell. Over time, this results in an even spread of points, as illustrated here.
In this variant, we only move points that overlap another point.
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi
from scipy.spatial.distance import cdist

def remove_overlaps(positions, radii, tolerance=1e-6):
    """Use a variation of Lloyd's algorithm to move circles apart from each other until none overlap.

    Parameters
    ----------
    positions : array
        The (x, y) coordinates of the circle origins.
    radii : array
        The radii for each circle.
    tolerance : float
        If all circles overlap less than this threshold, the computation stops.
        Higher values lead to faster convergence.

    Returns
    -------
    new_positions : array
        The (x, y) coordinates of the circle origins.

    See also
    --------
    https://en.wikipedia.org/wiki/Lloyd%27s_algorithm
    """
    positions = np.array(positions)
    radii = np.array(radii)
    minimum_distances = radii[np.newaxis, :] + radii[:, np.newaxis]
    minimum_distances[np.diag_indices_from(minimum_distances)] = 0  # ignore distances to self

    # Initialize the first loop.
    distances = cdist(positions, positions)
    displacements = np.max(np.clip(minimum_distances - distances, 0, None), axis=-1)

    while np.any(displacements > tolerance):
        centroids = _get_voronoi_centroids(positions)

        # Compute the direction from each point towards its corresponding Voronoi centroid.
        deltas = centroids - positions
        magnitudes = np.linalg.norm(deltas, axis=-1)
        directions = deltas / magnitudes[:, np.newaxis]

        # Mask NaNs that arise if the magnitude is zero, i.e. the point already is the center of the Voronoi cell.
        directions[np.isnan(directions)] = 0

        # Step into the direction of the centroid.
        # Clipping prevents overshooting of the centroid when stepping into its direction.
        # We step by half the displacement as the other overlapping point will be moved in approximately the opposite direction.
        positions = positions + np.clip(0.5 * displacements, None, magnitudes)[:, np.newaxis] * directions

        # Initialize the next loop.
        distances = cdist(positions, positions)
        displacements = np.max(np.clip(minimum_distances - distances, 0, None), axis=-1)

    return positions

def _get_voronoi_centroids(positions):
    """Construct a Voronoi diagram from the given positions and determine the center of each cell."""
    voronoi = Voronoi(positions)
    centroids = np.zeros_like(positions)
    for ii, idx in enumerate(voronoi.point_region):
        # i.e. ignore points at infinity; TODO: compute correctly clipped regions
        region = [jj for jj in voronoi.regions[idx] if jj != -1]
        centroids[ii] = np.mean(voronoi.vertices[region], axis=0)
    return centroids

if __name__ == '__main__':

    x = np.array([14.11450195,14.14184093,14.15435028,14.16206741,14.16951752,
                  14.17171097,14.18569565,14.19700241,14.23129082,14.24083233,
                  14.24290752,14.24968338,14.2518959,14.26536751,14.27209759,
                  14.27612877,14.2904377,14.29187012,14.29409599,14.29618549,
                  14.30615044,14.31624985,14.3206892,14.3228569,14.36143875,
                  14.36351967,14.36470699,14.36697292,14.37235737,14.41422081,
                  14.42583466,14.43226814,14.43319225,14.4437027,14.4557848,
                  14.46592999,14.47036076,14.47452068,14.47815609,14.52229309,
                  14.53059006,14.53404236,14.5411644])
    y = np.array([-0.35319126,-0.44222349,-0.44763246,-0.35669261,-0.24366629,
                  -0.3998799,-0.38940558,-0.57744932,-0.45223859,-0.21021004,
                  -0.44250247,-0.45866323,-0.47203487,-0.51684451,-0.44884869,
                  -0.2018993,-0.40296811,-0.23641759,-0.18019417,-0.33391538,
                  -0.53565156,-0.45215255,-0.40939832,-0.26936951,-0.30894437,
                  -0.55504167,-0.47177047,-0.45573688,-0.43100587,-0.5805912,
                  -0.21770373,-0.199422,-0.17372169,-0.38522363,-0.56950212,
                  -0.56947368,-0.48770753,-0.24940367,-0.31492445,-0.54263926,
                  -0.53460872,-0.4053807,-0.43733299])
    radius = 0.014

    positions = np.c_[x, y]
    radii = np.full(len(positions), radius)

    fig, axes = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(14, 7))
    for position, radius in zip(positions, radii):
        axes[0].add_patch(plt.Circle(position, radius, fill=False))
    axes[0].set_xlim(x.min() - radius, x.max() + radius)
    axes[0].set_ylim(y.min() - radius, y.max() + radius)
    axes[0].set_aspect('equal')

    new_positions = remove_overlaps(positions, radii)

    for position, radius in zip(new_positions, radii):
        axes[1].add_patch(plt.Circle(position, radius, fill=False))
    for ax in axes.ravel():
        ax.set_aspect('equal')
    plt.show()

Related

Resample points in a Geodataframe so they stay equally spaced but inside area

I have a geopandas dataframe that contains a complex area and some Point objects representing latitude and longitude inside that same area, both read from .kml and .xlsx files, not defined by me. The thing is that those points are really close to each other, and when plotting the whole map they overlap, making it really difficult to spot each one individually, especially when using geoplot.kdeplot().
So what I would like to do is to find some way to equally space them, respecting the boundaries of my area. For the sake of simplicity, consider the Polygon object as the area and the Point objects as the ones to resample:
import random
import numpy as np
import matplotlib.pyplot as plt
from shapely.geometry import Point, Polygon

y = np.array([[100,100], [200,100], [200,200], [100,200], [50.0,150]])
p = Polygon(y)
points = []
for i in range(100):
    pt = Point(random.randint(50,199), random.randint(100,199))
    if p.contains(pt):
        points.append(pt)
xs = [point.x for point in points]
ys = [point.y for point in points]
fig = plt.figure()
ax1 = fig.add_subplot(111)
x, y = p.exterior.xy
plt.plot(x, y)
ax1.scatter(xs, ys)
plt.show()
That gives something like this:
Any ideas on how to do it without crossing the area? Thank you in advance!
EDIT:
Important to mention that the resample should not be arbitrary, meaning that a point in coordinates (100,100), for example, should be somewhere near its original location.
This kind of problem can be handled in various ways, such as a force-based method or a geometrical method; here, the geometrical one is used.
I have considered the points to be circles, so an arbitrary diameter can be specified for them, and also an area size for plotting with matplotlib. So we have the following code for the beginning:
import numpy as np
from scipy.spatial import cKDTree
from shapely.geometry.polygon import Polygon
import matplotlib.pyplot as plt
np.random.seed(85)
# Points' coordinates and the specified diameter
coords = np.array([[3, 4], [7, 8], [3, 3], [1, 8], [5, 4], [3, 5], [7, 7]], dtype=np.float64)
points_dia = 1.1 # can be chosen arbitrary based on the problem
# Polygon creation
polygon_verts = np.array([[0, 0], [8.1, 0], [8.1, 8.1], [0, 8.1]])
p = Polygon(polygon_verts)
# Plotting
x, y = coords.T
colors = np.random.rand(coords.shape[0])
area = 700 # can be chosen arbitrary based on the problem
min_ = coords.min() - 2*points_dia
max_ = coords.max() + 2*points_dia
plt.xlim(min_, max_)
plt.ylim(min_, max_)
xc, yc = p.exterior.xy
plt.plot(xc, yc)
plt.scatter(x, y, s=area, c=colors, alpha=0.5)
plt.show()
I separate the total answer into two steps:
dispersing (moving away) the suspected points from the average position coordinates
curing the point-polygon overlaps
First step
For the first part, based on the number of points and neighbors, we can choose between Scipy and other libraries, e.g. Scikit-learn (my previous explanations on this topic, 1 (4th-5th paragraphs) and 2, can be helpful for such selections). Based on the (not large) sizes that the OP mentioned in the comments, I will suggest using scipy.spatial.cKDTree, which has the best performance in this regard based on my experience, to query the points. Using cKDTree.query_pairs, we specify groups of points that are within the distance range of the diameter value from each other. Then, by looping on the groups and averaging each group's points' coordinates, we can again use cKDTree.query to find the nearest point (k=1) of that group to the averaged coordinate. Now it is easy to calculate the distances of the other points of that group to the nearest one and move them outward by a specified distance (here, just as much as the overlaps):
nears = cKDTree(coords).query_pairs(points_dia, output_type='ndarray')
near_ids = np.unique(nears)
col1_ids = np.unique(nears[:, 0])
for i in range(col1_ids.size):
    pts_ids = np.unique(nears[nears[:, 0] == i].ravel())
    center_coord = np.average(coords[pts_ids], axis=0)
    nearest_center_id = pts_ids[cKDTree(coords[pts_ids]).query(center_coord, k=1)[1]]
    furthest_center_ids = pts_ids[pts_ids != nearest_center_id]
    vecs = coords[furthest_center_ids] - coords[nearest_center_id]
    dists = np.linalg.norm(vecs, axis=1) - points_dia
    # note the [:, None] reshapes so the (m,) norms and distances broadcast
    # against the (m, 2) vectors
    coords[furthest_center_ids] = coords[furthest_center_ids] \
        + vecs / np.linalg.norm(vecs, axis=1)[:, None] * np.abs(dists)[:, None]
Second step
For this part, we can loop on the coordinates modified in the previous step, find the two nearest vertices (k=2) of the polygon for each point, and project the point onto the edge between them to find the closest coordinate on that line. So, again as in step 1, we calculate overlaps and move the points to place them inside the polygon. The points that were single (not in groups) will be moved shorter distances. These distances can be specified as desired, and I have set them to a default value:
for i, j in enumerate(coords):
    for k in range(polygon_verts.shape[0]-1):
        nearest_poly_ids = cKDTree(polygon_verts).query(j, k=2)[1]
        vec_line = polygon_verts[nearest_poly_ids][1] - polygon_verts[nearest_poly_ids][0]
        vec_proj = j - polygon_verts[nearest_poly_ids][0]
        line_pnt = polygon_verts[nearest_poly_ids][0] + np.dot(vec_line, vec_proj) / np.dot(vec_line, vec_line) * vec_line
        vec = j - line_pnt
        dist = np.linalg.norm(vec)
        if dist < points_dia / 2:
            if i in near_ids:
                coords[i] = coords[i] + vec / dist * 2 * points_dia
            else:
                coords[i] = coords[i] + vec / dist
This code could perhaps be optimized for better performance, but that is not of importance for this question. I have checked it on my small example; it still needs to be checked on larger cases. Under all conditions, I think this will be one of the best ways to achieve the goal (perhaps with some changes based on the need) and can serve as a template to be modified for any other needs or shortcomings.
So, this is not particularly fast due to the loop checking for "point in polygon", but as long as you don't intend to have a very fine grid-spacing, something like this would do the job:
from scipy.interpolate import NearestNDInterpolator

# define a regular grid
gx, gy = np.meshgrid(np.linspace(50, 200, 100),
                     np.linspace(100, 200, 100))

# get an interpolation function
# (just used "xs" as values in here...)
i = NearestNDInterpolator(np.column_stack((xs, ys)), xs)

# crop your grid to the shape
newp = []
for x, y in zip(gx.flat, gy.flat):
    pt = Point(x, y)
    if p.contains(pt):
        newp.append(pt.xy)
newp = np.array(newp).squeeze()

# evaluate values on the grid
vals = i(*newp.T)
ax1.scatter(*newp.T, c=vals, zorder=0)

Create a cycle out of scattered points

I know this sounds trivial, but my head is refusing to give an algorithm for this.
I have a bunch of points scattered on a 2-D plane and want to store them in a list such that they create a ring. The points do not lie on a cycle.
Start from the first point in the list (red in this pic) and sequentially add the rest based on their distance.
Since I cannot answer my own question, I will post a possible answer here.
This is an approach that seems to do the job.
V.pos holds the positions of the nodes, and distance() is just a function that returns the Euclidean distance between two points. A faster approach would also delete next_node after appending it to the ring, so that you don't have to go through the already connected points.
ring = [nodes[0]]
while len(ring) < len(nodes):
    minl = 99999
    for i in range(len(nodes)):
        dist = distance(V.pos[ring[-1]], V.pos[nodes[i]])
        if dist < minl and nodes[i] not in ring:
            minl = dist
            next_node = nodes[i]
    ring.append(next_node)
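For completeness, distance() here could be as simple as this sketch:

import math

def distance(p, q):
    # plain Euclidean distance between two (x, y) points
    return math.hypot(p[0] - q[0], p[1] - q[1])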
Here's an idea that will give okay-ish results if your point cloud is already ring-shaped like your example:
Determine a centre point; this can either be the centre of gravity of all points or the centre of the bounding box.
Represent all points in radial coordinates (radius, angle) with reference to the centre
Sort by angle
This will, of course, produce jagged stars for random clouds, but it is not clear what exactly a "ring" is. You could probably use this as a first draft and start swapping nodes if that gives you a shorter overall distance. Maybe this simple code is all you need short of implementing the minimum distance over all nodes of a graph.
Anyway, here goes:
import math

points = [(0, 4), (2, 2), ...]  # original points in Cartesian coords
radial = []                     # list of tuples (index, angle)

# find centre point (centre of gravity)
x0, y0 = 0, 0
for x, y in points:
    x0 += x
    y0 += y
x0 = 1.0 * x0 / len(points)
y0 = 1.0 * y0 / len(points)

# calculate angles
for i, p in enumerate(points):
    x, y = p
    phi = math.atan2(y - y0, x - x0)
    radial += [(i, phi)]

# sort by angle
radial.sort(key=lambda t: t[1])

# extract indices
ring = [a[0] for a in radial]

How to calculate centroid of x,y coordinates in python

I have got a lot of x,y coordinates which I have clustered based on the distance between them.
Now I would like to calculate a centroid measure for each cluster of x,y coordinates.
Is there a way to do this?
My coordinates are in the format:
coordinates_cluster = [[x1,x2,x3,...],[y1,y2,y3,...]]
Each cluster has a minimum length of three points, and all points can have both negative and positive x and y values.
I hope that someone can help me.
Best,
Martin
(I am using python 2.7 with canopy 1.1.1 (32 bit) on a Windows 7 system.)
The accepted answer given here does IMHO not apply to typical real life use cases where you want to calculate the centroid of a shape defined by a set of (x,y) vertices (aka polygon). So please excuse me answering a question that was asked almost 8 years ago, but it still came out on top in my SO search, so it might come up for others as well. I'm not saying the accepted answer is wrong in the specific case of the question, but I think, most people who find this thread actually look for centroid according to a different definition.
Centroid is not defined as arithmetic Mean of Vertices
...which is contrary to common opinion.
We have to acknowledge that usually by centroid, we think of “the arithmetic mean position of all the points in the figure. Informally, it is the point at which a cutout of the shape could be perfectly balanced on the tip of a pin” (quoting Wikipedia that’s quoting actual literature here). Note here that it is ALL the points IN the figure and not just the mean of the coordinates of the vertices.
And this is exactly, where you will go wrong if you accept most of SO answers, that imply that the centroid is the arithmetic mean of x and y coordinates of vertices and apply this to real life data that you might have collected by performing an experiment.
The density of points describing your shape might vary along the line of your shape. This is only one of many possible limitations of said method. The simple mean of coordinates then surely is not what you want. I’ll illustrate this with an example.
Example
Here we see a polygon that is made up of 8 vertices. Our intuition rightly tells us, that we could balance this shape on the tip of a pin at (x,y)=(0,0), making the centroid (0,0). But in the area around (-1,1) the density of points/vertices that we were given to describe this polygon is higher than in other areas along the line. Now if we calculate the centroid by taking the mean of the vertices, the result will be pulled towards the high density area.
The point “centroid poly“ corresponds to the true centroid. This point was calculated by implementing the algorithm described here: https://en.wikipedia.org/wiki/Centroid#Of_a_polygon (only difference: it returns the absolute value of the area)
It applies to figures described by x and y coordinates of N vertices like X = x_0, x_1, …, x_(N-1), same for Y. This figure can be any polygon as long as it is non-self-intersecting and the vertices are given in order of occurrence.
This can be used to calculate e.g. the “real” centroid of a matplotlib contour line.
Code
Here is the code for the example above and the implementation of said algorithm:
import matplotlib.pyplot as plt

def centroid_poly(X, Y):
    """https://en.wikipedia.org/wiki/Centroid#Of_a_polygon"""
    N = len(X)
    # minimal sanity check
    if not (N == len(Y)): raise ValueError('X and Y must be same length.')
    elif N < 3: raise ValueError('At least 3 vertices must be passed.')
    sum_A, sum_Cx, sum_Cy = 0, 0, 0
    last_iteration = N - 1
    # from 0 to N-1
    for i in range(N):
        if i != last_iteration:
            shoelace = X[i]*Y[i+1] - X[i+1]*Y[i]
            sum_A += shoelace
            sum_Cx += (X[i] + X[i+1]) * shoelace
            sum_Cy += (Y[i] + Y[i+1]) * shoelace
        else:
            # N-1 case (last iteration): substitute i+1 -> 0
            shoelace = X[i]*Y[0] - X[0]*Y[i]
            sum_A += shoelace
            sum_Cx += (X[i] + X[0]) * shoelace
            sum_Cy += (Y[i] + Y[0]) * shoelace
    A = 0.5 * sum_A
    factor = 1 / (6*A)
    Cx = factor * sum_Cx
    Cy = factor * sum_Cy
    # returning abs of A is the only difference to
    # the algo from above link
    return Cx, Cy, abs(A)

# ********** example ***********
X = [-1, -0.8, -0.6, 1, 2, 1, -1, -2]
Y = [ 1, 1, 1, 1, 0.5, -1, -1, -0.5]
Cx, Cy, A = centroid_poly(X, Y)

# calculating centroid as shown by the accepted answer
Cx_accepted = sum(X)/len(X)
Cy_accepted = sum(Y)/len(Y)

fig, ax = plt.subplots()
ax.scatter(X, Y, label='vertices')
ax.scatter(Cx_accepted, Cy_accepted, label="mean of vertices")
ax.scatter(Cx, Cy, label='centroid poly')
# just so the line plot connects xy_(N-1) and xy_0
X.append(X[0]), Y.append(Y[0])
ax.plot(X, Y, label='polygon')
ax.legend(bbox_to_anchor=(1, 1))
ax.grid(), ax.set_aspect('equal')
I realized that it was not that hard, but here is the code for calculating centroids of x,y coordinates:
>>> c = [[1,4,-5],[3,-2,9]] # of the form [[x1,x2,x3],[y1,y2,y3]]
>>> centroide = (sum(c[0])/len(c[0]),sum(c[1])/len(c[1]))
>>> centroide
(0, 3)
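With numpy, the same arithmetic-mean centroid is a one-liner; note that numpy uses true division, so the y coordinate comes out as 10/3 rather than the truncated 3 above:
>>> import numpy as np
>>> np.mean(c, axis=1)
array([0.        , 3.33333333])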
If you are interested in calculating centroid as defined in geometry or signal processing [1, 2] :
import numpy as np

# a line from 0,0 to 1,1
x = np.linspace(0, 1, 100)
y = np.linspace(0, 1, 100)
cx = np.dot(x, y) / np.sum(y)
# cx is 0.67003367003367

Generate random points on a surface of the cylinder

I want to generate random points on the surface of a cylinder such that the distance between the points falls in the range of 230 to 250. I used the following code to generate random points on the surface of a cylinder:
import random, math

H = 300
R = 20
s = random.random()
#theta = random.random()*2*math.pi
for i in range(0, 300):
    theta = random.random()*2*math.pi
    z = random.random()*H
    r = math.sqrt(s)*R
    x = r*math.cos(theta)
    y = r*math.sin(theta)
    z = z
    print('C', x, y, z)
How can I generate random points such that they fall within the range (on the surface of the cylinder)?
This is not a complete solution, but an insight that should help. If you "unroll" the surface of the cylinder into a rectangle of width w=2*pi*r and height h, the task of finding the distance between points is simplified. You have not explained how to measure "distance along the surface" between points on the top of the cylinder and the side - this is a slightly tricky bit of geometry.
As for computing the distance along the surface when we have created an artificial "seam", just use both (x1-x2) and (w - x1 + x2) - whichever gives the shorter distance is the one you want.
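In code, that seam-aware distance on the unrolled rectangle could look like this sketch (w = 2*pi*r as above):

import math

def unrolled_distance(p1, p2, w):
    # points are (x, z) pairs on the unrolled rectangle; x wraps at the seam
    dx = abs(p1[0] - p2[0])
    dx = min(dx, w - dx)  # take the shorter way around the circumference
    dz = p1[1] - p2[1]
    return math.hypot(dx, dz)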
I do think that @VincentNivoliers' suggestion to use Poisson disk sampling is very good, but with the constraints of h=300 and r=20 you will get terrible results no matter what.
The basic way of creating a set of random points with constraints on the distances between them is to have a function that modulates the probability of points being placed at a certain location. This function starts out being a constant, and whenever a point is placed, the forbidden areas surrounding the point are set to zero. That is difficult to do with continuous variables, but reasonably easy if you discretize your problem.
The other thing to be careful about is the being-on-a-cylinder part. It may be easier to think of it as random points on a rectangular area that repeats periodically. This can be handled in two different ways:
the simplest is to take into consideration not only the rectangular tile where you are placing the points, but also its neighbouring ones. Whenever you place a point in your main tile, you also place one in the neighboring ones and compute their effect on the probability function inside your tile.
A more sophisticated approach computes the probability function as the convolution of a kernel that encodes the forbidden areas with a sum of delta functions corresponding to the points already placed. If this is computed using FFTs, the periodicity is a natural byproduct; see the sketch below.
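A minimal sketch of that FFT variant; the grid size, kernel radius, and example point below are illustrative choices:

import numpy as np

h, w = 300, 126              # discretized cylinder surface (height x circumference)
occupied = np.zeros((h, w))  # sum of delta functions for already placed points
occupied[150, 60] = 1        # one hypothetical placed point
yy, xx = np.ogrid[:h, :w]
kernel = ((yy - h//2)**2 + (xx - w//2)**2 < 50**2).astype(float)  # forbidden disk around a point
# circular convolution via FFT: periodicity along both axes comes for free
forbidden = np.real(np.fft.ifft2(np.fft.fft2(occupied) * np.fft.fft2(np.fft.ifftshift(kernel))))
allowed = forbidden < 0.5    # grid cells still available for new points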
The first approach can be coded as follows:
from __future__ import division
import numpy as np

r, h = 20, 300
w = 2*np.pi*r
int_w = int(np.rint(w))
mult = 10
pdf = np.ones((h*mult, int_w*mult), bool)
points = []
min_d, max_d = 230, 250
available_locs = pdf.sum()
while available_locs:
    new_idx = np.random.randint(available_locs)
    new_idx = np.nonzero(pdf.ravel())[0][new_idx]
    new_point = np.array(np.unravel_index(new_idx, pdf.shape))
    points += [new_point]
    min_mask = np.ones_like(pdf)
    if max_d is not None:
        max_mask = np.zeros_like(pdf)
    else:
        max_mask = True
    for p in [new_point - [0, int_w*mult], new_point + [0, int_w*mult],
              new_point]:
        rows = ((np.arange(pdf.shape[0]) - p[0]) / mult)**2
        cols = ((np.arange(pdf.shape[1]) - p[1]) * 2*np.pi*r/int_w/mult)**2
        dist2 = rows[:, None] + cols[None, :]
        min_mask &= dist2 > min_d*min_d
        if max_d is not None:
            max_mask |= dist2 < max_d*max_d
    pdf &= min_mask & max_mask
    available_locs = pdf.sum()
points = np.array(points) / [mult, mult*int_w/(2*np.pi*r)]
If you run it with your values, the output is usually just one or two points, as the large minimum distance forbids all others. But if you run it with more reasonable values, e.g.
min_d, max_d = 50, 200
Here's how the probability function looks after placing each of the first 5 points:
Note that the points are returned as pairs of coordinates, the first being the height, the second the distance along the cylinder's circumference.

calculate turning points / pivot points in trajectory (path)

I'm trying to come up with an algorithm that will determine turning points in a trajectory of x/y coordinates. The following figure illustrates what I mean: green indicates the starting point and red the final point of the trajectory (the entire trajectory consists of ~1500 points):
In the following figure, I added by hand the possible (global) turning points that an algorithm could return:
Obviously, the true turning point is always debatable and will depend on the angle that one specifies that has to lie between points. Furthermore a turning point can be defined on a global scale (what I tried to do with the black circles), but could also be defined on a high-resolution local scale. I'm interested in the global (overall) direction changes, but I'd love to see a discussion on the different approaches that one would use to tease apart global vs local solutions.
What I've tried so far:
calculate distance between subsequent points
calculate angle between subsequent points
look how distance / angle changes between subsequent points
Unfortunately this doesn't give me any robust results. I probably have to calculate the curvature along multiple points, but that's just an idea.
I'd really appreciate any algorithms / ideas that might help me here. The code can be in any programming language, matlab or python are preferred.
EDIT here's the raw data (in case somebody want's to play with it):
mat file
text file (x coordinate first, y coordinate in second line)
You could use the Ramer-Douglas-Peucker (RDP) algorithm to simplify the path. Then you could compute the change in directions along each segment of the simplified path. The points corresponding to the greatest change in direction could be called the turning points:
A Python implementation of the RDP algorithm can be found on github.
import matplotlib.pyplot as plt
import numpy as np
import os
import rdp

def angle(dir):
    """
    Returns the angles between vectors.

    Parameters:
    dir is a 2D-array of shape (N,M) representing N vectors in M-dimensional space.

    The return value is a 1D-array of values of shape (N-1,), with each value
    between 0 and pi.

    0 implies the vectors point in the same direction
    pi/2 implies the vectors are orthogonal
    pi implies the vectors point in opposite directions
    """
    dir2 = dir[1:]
    dir1 = dir[:-1]
    return np.arccos((dir1*dir2).sum(axis=1)/(
        np.sqrt((dir1**2).sum(axis=1)*(dir2**2).sum(axis=1))))

tolerance = 70
min_angle = np.pi*0.22
filename = os.path.expanduser('~/tmp/bla.data')
points = np.genfromtxt(filename).T
print(len(points))
x, y = points.T

# Use the Ramer-Douglas-Peucker algorithm to simplify the path
# http://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm
# Python implementation: https://github.com/sebleier/RDP/
simplified = np.array(rdp.rdp(points.tolist(), tolerance))
print(len(simplified))
sx, sy = simplified.T

# compute the direction vectors on the simplified curve
directions = np.diff(simplified, axis=0)
theta = angle(directions)

# Select the index of the points with the greatest theta.
# Large theta is associated with greatest change in direction.
idx = np.where(theta > min_angle)[0] + 1

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y, 'b-', label='original path')
ax.plot(sx, sy, 'g--', label='simplified path')
ax.plot(sx[idx], sy[idx], 'ro', markersize=10, label='turning points')
ax.invert_yaxis()
plt.legend(loc='best')
plt.show()
Two parameters were used above:
The RDP algorithm takes one parameter, the tolerance, which represents the maximum distance the simplified path can stray from the original path. The larger the tolerance, the cruder the simplified path.
The other parameter is the min_angle, which defines what is considered a turning point. (I'm taking a turning point to be any point on the original path whose angle between the entering and exiting vectors on the simplified path is greater than min_angle.)
I will be giving numpy/scipy code below, as I have almost no Matlab experience.
If your curve is smooth enough, you could identify your turning points as those of highest curvature. Taking the point index number as the curve parameter, and a central differences scheme, you can compute the curvature with the following code
import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage
def first_derivative(x):
    return x[2:] - x[0:-2]

def second_derivative(x):
    return x[2:] - 2 * x[1:-1] + x[:-2]

def curvature(x, y):
    x_1 = first_derivative(x)
    x_2 = second_derivative(x)
    y_1 = first_derivative(y)
    y_2 = second_derivative(y)
    return np.abs(x_1 * y_2 - y_1 * x_2) / np.sqrt((x_1**2 + y_1**2)**3)
You will probably want to smooth your curve out first, then calculate the curvature, then identify the highest curvature points. The following function does just that:
def plot_turning_points(x, y, turning_points=10, smoothing_radius=3,
                        cluster_radius=10):
    if smoothing_radius:
        weights = np.ones(2 * smoothing_radius + 1)
        new_x = scipy.ndimage.convolve1d(x, weights, mode='constant', cval=0.0)
        new_x = new_x[smoothing_radius:-smoothing_radius] / np.sum(weights)
        new_y = scipy.ndimage.convolve1d(y, weights, mode='constant', cval=0.0)
        new_y = new_y[smoothing_radius:-smoothing_radius] / np.sum(weights)
    else:
        new_x, new_y = x, y
    k = curvature(new_x, new_y)
    turn_point_idx = np.argsort(k)[::-1]
    t_points = []
    while len(t_points) < turning_points and len(turn_point_idx) > 0:
        t_points += [turn_point_idx[0]]
        idx = np.abs(turn_point_idx - turn_point_idx[0]) > cluster_radius
        turn_point_idx = turn_point_idx[idx]
    t_points = np.array(t_points)
    t_points += smoothing_radius + 1
    plt.plot(x, y, 'k-')
    plt.plot(new_x, new_y, 'r-')
    plt.plot(x[t_points], y[t_points], 'o')
    plt.show()
Some explaining is in order:
turning_points is the number of points you want to identify
smoothing_radius is the radius of a smoothing convolution to be applied to your data before computing the curvature
cluster_radius is the distance around a point of high curvature selected as a turning point within which no other point should be considered as a candidate.
You may have to play around with the parameters a little, but I got something like this:
>>> x, y = np.genfromtxt('bla.data')
>>> plot_turning_points(x, y, turning_points=20, smoothing_radius=15,
... cluster_radius=75)
Probably not good enough for a fully automated detection, but it's pretty close to what you wanted.
A very interesting question. Here is my solution, which allows for variable resolution, although fine-tuning it may not be simple, as it's mostly intended to narrow down the candidate set.
Every k points, calculate the convex hull and store it as a set. Go through the at most k points and remove any points that are not in the convex hull, in such a way that the points don't lose their original order.
The purpose here is that the convex hull will act as a filter, removing all of "unimportant points" leaving only the extreme points. Of course, if the k-value is too high, you'll end up with something too close to the actual convex hull, instead of what you actually want.
This should start with a small k, at least 4, then increase it until you get what you seek. You should also probably only include the middle point for every 3 points where the angle is below a certain amount, d. This would ensure that all of the turns are at least d degrees (not implemented in the code below). However, this should probably be done incrementally to avoid loss of information, the same as increasing the k-value. Another possible improvement would be to actually re-run with the points that were removed, and only remove points that were not in both convex hulls, though this requires a higher minimum k-value of at least 8.
The following code seems to work fairly well, but could still use improvements for efficiency and noise removal. It's also rather inelegant in determining when it should stop, thus the code really only works (as it stands) from around k=4 to k=14.
def convex_filter(points, k):
    new_points = []
    for pts in (points[i:i + k] for i in range(0, len(points), k)):
        hull = set(convex_hull(pts))  # convex_hull() is assumed to return the hull's vertices
        for point in pts:
            if point in hull:
                new_points.append(point)
    return new_points

# How the points are obtained is a minor point, but they need to be in the right order.
x_coords = [float(x) for x in x.split()]
y_coords = [float(y) for y in y.split()]
points = list(zip(x_coords, y_coords))

k = 10
prev_length = 0
new_points = points

# Filter using the convex hull until no more points are removed
while len(new_points) != prev_length:
    prev_length = len(new_points)
    new_points = convex_filter(new_points, k)
Here is a screen shot of the above code with k=14. The 61 red dots are the ones that remain after the filter.
The approach you took sounds promising but your data is heavily oversampled. You could filter the x and y coordinates first, for example with a wide Gaussian and then downsample.
In MATLAB, you could use x = conv(x, normpdf(-10 : 10, 0, 5)) and then x = x(1 : 5 : end). You will have to tweak those numbers depending on the intrinsic persistence of the objects you are tracking and the average distance between points.
Then, you will be able to detect changes in direction very reliably, using the same approach you tried before, based on the scalar product, I imagine.
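A rough numpy/scipy equivalent of that filter-then-downsample step, with the same illustrative numbers as the MATLAB snippet:

import numpy as np
from scipy.stats import norm
from scipy.ndimage import convolve1d

# smooth x and y with a wide Gaussian, then keep every 5th sample
kernel = norm.pdf(np.arange(-10, 11), loc=0, scale=5)
kernel /= kernel.sum()  # normalize so the smoothing preserves scale
x_filtered = convolve1d(x, kernel)[::5]
y_filtered = convolve1d(y, kernel)[::5]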
Another idea is to examine the left and the right surroundings at every point. This may be done by creating a linear regression of the N points before and after each point. If the intersecting angle between the fitted lines is below some threshold, then you have a corner.
This may be done efficiently by keeping a queue of the points currently in the linear regression and replacing old points with new points, similar to a running average.
You finally have to merge adjacent corners into a single corner, e.g. choosing the point with the strongest corner property; a sketch follows.
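A minimal sketch of that idea using numpy.polyfit on sliding windows; the window size N and the use of slopes (which assumes no vertical segments) are simplifying choices:

import numpy as np

def corner_angles(x, y, N=20):
    # For each interior point, fit a line to the N points before and the
    # N points after it; the angle between the two fitted directions is a
    # corner score (large angle -> likely corner).
    angles = np.full(len(x), np.nan)
    for i in range(N, len(x) - N):
        slope_left = np.polyfit(x[i-N:i+1], y[i-N:i+1], 1)[0]
        slope_right = np.polyfit(x[i:i+N+1], y[i:i+N+1], 1)[0]
        angles[i] = abs(np.arctan(slope_left) - np.arctan(slope_right))
    return angles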
