Python performance problem: difference between two polygons

I am currently using Python 3.7 and I want to find the difference between a lot of polygons. By that I mean that if I have a polygon A and a polygon B, I want to do the mathematical "A not B" operation. There are two possible outcomes of this operation, as seen in the following illustration:
So two polygons that I subtract ("cut") from each other either give me a new polygon or an empty result. All other cases can be ignored. The shape of the resulting polygon does not need to be exact in case 1, so it is acceptable if it changes a bit.
For case 2 I need to know if the polygon is empty.
Furthermore, polygons A and B do not have any "holes" in them, so they can be described by their outer boundary alone.
I already built a prototype that uses the difference operation of shapely to do this. I "cut" as little as possible (once per pair of polygons).
My code is a bit complex but it basically breaks down to this simple function:
from shapely.geometry import Polygon

def cut_hole(A: Polygon, B: Polygon) -> Polygon:
    """
    Cuts a "hole" into shapely polygon A.
    :return: The polygon resulting from the operation A - B. Might be empty!
    """
    outer = A  # not in my code, just to point out what I mean
    inner = B
    return outer.difference(inner)
Now my problem is that this is very slow! I work with roughly 15,000 operations per batch (30,000 polygons) and it takes about 10 to 15 min to calculate them all. I would really like to get down to under 5 min.
Please keep in mind that this does not account for all the other operations; 15 min is just for the difference operation. I can match every polygon A to its polygon B in under 1 min. I just need a quick way to get the resulting polygon from those pairs.
I did this test on a "good" computer (Intel Core i7, 16 GB RAM). Neither the CPU nor the RAM was at its limit.
So the big question is: how can I speed this up?
Is there a way to translate the polygons into a form that is easier to handle?
Or is there a "better" way to get the difference of two polygons?
Is there an alternative library that might be better? Or can I get shapely to use other hardware? If so what kind of hardware might that be?
Finally, my next step would be to try and parallelize the "cutting". Is there a built-in way to do this quickly and efficiently? I did not find one in shapely.
Also I would be very grateful for tips on analyzing possible bottlenecks.
Addendum:
Some of the polygons seem to be rather complex. By that I mean that the more complex polygons contain about 15,000 points on average, while the simple ones contain fewer than 100 points. However, usually (as in 99 %) polygon A and polygon B are not both complex at the same time.
Here is an example of a complex polygon in WKT

Taking your points in order:
I highly doubt there is another, better-suited format/library for manipulating polygons in Python than shapely; it is the reference package. You can try to simplify your geometries, but some rapid tests showed it is a slow operation as well (p being the polygon you copy-pasted above):
p2 = p.buffer(-10) # creating a 2nd polygon
%timeit p.simplify(1) # 58.4 ms, from 15000 to 8000 points
%timeit p.difference(p2) # 53.2 ms
%timeit p.difference(p2.simplify(1)) # 127ms
%timeit p.simplify(1).difference(p2) # 114ms
Shapely uses GEOS under the hood. Maybe you can try to dig in that direction for lower-level solutions.
There is no parallelism in shapely. However, as you seem to have your 'A' and 'B' polygons already matched, you can parallelize the shapely operation through a thread pool or process pool (see the multiprocessing package). If they are not matched, you can check quickly through intersects (much faster than intersection or difference). If some of your polygons do not intersect, that will be a huge speedup.
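A minimal sketch of that idea, assuming the pre-matched pairs are available as a list of (A, B) tuples; the polygons below are toy stand-ins for your real data, and the worker reuses the cut_hole name but takes a pair so it can be mapped over a pool:

from multiprocessing import Pool
from shapely.geometry import Polygon

def cut_hole(pair):
    A, B = pair
    # Skip the expensive difference entirely when the polygons don't touch
    if not A.intersects(B):
        return A
    return A.difference(B)

if __name__ == "__main__":
    # Toy stand-ins for the real, pre-matched (A, B) pairs
    A = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
    B = Polygon([(1, 1), (2, 1), (2, 2), (1, 2)])
    pairs = [(A, B)] * 1000

    with Pool() as pool:  # one worker process per CPU core by default
        results = pool.map(cut_hole, pairs, chunksize=100)
    print(len(results), results[0].is_empty)

A process pool avoids the GIL, and shapely geometries can be pickled, so this should scale roughly with the number of cores.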
Considering the size of your data (5 GB is a lot of geometries...), I don't think you can save that much time other than with parallelization, as one difference takes ~70 ms, which gives ~1050 s ≈ 17 min for 15,000 operations.

Related

How do I find the 100 most different points within a pool of 10,000 points?

I have a set of 10,000 points, each made up of 70 boolean dimensions. From this set of 10,000, I would like to select 100 points which are representative of the whole set of 10,000. In other words, I would like to pick the 100 points which are most different from one another.
Is there some established way of doing this? The first thing that comes to my mind is a greedy algorithm, which begins by selecting one point at random, then the next point is selected as the most distant one from the first point, and then the second point is selected as having the longest average distance from the first two, etc. This solution doesn't need to be perfect, just roughly correct. Preferably, this solution of 100 points can also be found within ~10 minutes but finishing within 24 hours is also fine.
I don't care about distance, in particular, that's just something that comes to mind as a way to capture "differentness."
If it matters, every point has 10 values of TRUE and 60 values of FALSE.
Some already-built Python package to do this would be ideal, but I am also happy to just write the code myself if somebody could point me to a Wikipedia article.
Thanks
Your use of "representative" is not standard terminology, but I read your question as you wish to find 100 items that cover a wide gamut of different examples from your dataset. So if 5000 of your 10000 items were near identical, you would prefer to see only one or two items from that large sub-group. Under the usual definition, a representative sample of 100 would have ~50 items from that group.
One approach that might match your stated goal is to identify diverse subsets or groups within your data, and then pick an example from each group.
You can establish group identities for a fixed number of groups - with different membership size allowed for each group - within a dataset using a clustering algorithm. A good option for you might be k-means clustering with k=100. This will find 100 groups within your data and assign all 10,000 items to one of those 100 groups, based on a simple distance metric. You can then either take the central point from each group or a random sample from each group to find your set of 100.
The k-means algorithm is based around minimising a cost function which is the average distance of each group member from the centre of its group. Both the group centres and the membership are allowed to change, updated in an alternating fashion, until the cost cannot be reduced any further.
Typically you start by assigning each item randomly to a group. Then you calculate the centre of each group, re-assign items to groups based on the closest centre, recalculate the centres, and so on. Eventually this should converge. Multiple runs might be required to find a good set of centres (it can get stuck in a local optimum).
There are several implementations of this algorithm in Python. You could start with the scikit-learn implementation.
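A minimal sketch of that pipeline with scikit-learn, on randomly generated stand-in data (10,000 points, 70 boolean dimensions, exactly 10 True values each), picking the member closest to each of the 100 cluster centres:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in data: 10,000 points, 70 boolean dimensions, exactly 10 True each
X = np.zeros((10_000, 70))
for row in X:
    row[rng.choice(70, size=10, replace=False)] = 1.0

km = KMeans(n_clusters=100, n_init=10, random_state=0).fit(X)

# For each of the 100 clusters, take the member closest to its centre
representatives = []
for c in range(100):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
    representatives.append(members[np.argmin(dists)])

print(len(representatives), "representative points selected")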
According to an IBM support page (from a comment by sascha), k-means may not work well with binary data, and other clustering algorithms may work better. You could also try to convert your records to a space where Euclidean distance is more useful and continue to use k-means clustering. An algorithm that may do that for you is principal component analysis (PCA), which is also implemented in scikit-learn.
The graph partitioning tool METIS claims to be able to partition graphs with millions of vertices in 256 parts within seconds.
You could treat your 10,000 points as vertices of an undirected graph. A fully connected graph with 50 million edges would probably be too big. Therefore, you could restrict the edges to "similarity links" between points whose Hamming distance is below a certain threshold.
In general, Hamming distances for 70-bit words have values between 0 and 70. In your case, the upper limit is 20, as there are 10 true coordinates and 60 false coordinates per point. The maximum distance occurs if all true coordinates are located differently in the two points.
Creation of the graph is a costly operation of O(n^2). But it might be possible to get it done within your envisaged time frame.
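A rough sketch of building such a thresholded similarity graph with scipy, on random stand-in data scaled down to 1,000 points so the O(n^2) step stays quick; the threshold of 10 is an arbitrary illustrative choice:

import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
# Stand-in data: 1,000 points, 70 boolean dimensions, exactly 10 True each
X = np.zeros((1_000, 70), dtype=np.uint8)
for row in X:
    row[rng.choice(70, size=10, replace=False)] = 1

# pdist's "hamming" metric returns the fraction of differing coordinates,
# so multiply by 70 to get the 0..20 range described above
D = squareform(pdist(X, metric="hamming")) * 70

threshold = 10
edges = np.argwhere(np.triu(D < threshold, k=1))
print(len(edges), "similarity edges below Hamming distance", threshold)

The resulting edge list could then be handed to a partitioner such as METIS (for example through its Python bindings).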

mlpy - Dynamic Time Warping depends on x?

I am trying to get the distance between these two arrays shown below by DTW.
I am using the Python mlpy package that offers
dist, cost, path = mlpy.dtw_std(y1, y2, dist_only=False)
I understand that DTW takes care of the "shifting". In addition, as can be seen above, mlpy.dtw_std() only takes in two 1-D arrays. So I expect that no matter how I shift my curves left/right, the dist returned by the function should never change.
However after shifting my green curve a bit to the right, the dist returned by mlpy.dtw_std() changes!
Before shifting: Python mlpy.dtw_std reports dist = 14.014
After shifting: Python mlpy.dtw_std reports dist = 38.078
Obviously, since the curves are still those two curves, I don't expect the distances to be different!
Why is that? Where did I go wrong?
Let me reiterate what I have understood; please correct me if I am going wrong anywhere. I observe that in both your plots, the blue 1-D series remains identical, while the green one is getting stretched. How you are doing that, you have explained in your comment of Sep 19 '13 at 9:36. Your premise is that because (1) DTW 'takes care' of time shift and (2) all you are doing is stretching one time series length-wise without affecting its y-values, (inference:) you expect the distance to remain the same.
There is a little missing link between [(1), (2)] and [(inference)]: the individual distance values corresponding to the mappings WILL change as you change the signals themselves, and this results in a different overall distance. Plot the warping paths and the cost grid to see it for yourself.
Let's take an oversimplified case...
Let
a=range(0,101,5) = [0,5,10,15...95, 100]
and b=range(0,101,5) = [0,5,10,15...95, 100].
Now, intuitively speaking, you and I would expect a one-to-one correspondence between the two signals (for the DTW mapping), and the distance for all of the mappings to be 0, since the signals look identical.
Now if we make, b=range(0,101,4) = [0,4,8,12...96,100],
DTW mapping between a and b still would start with a's 0 getting mapped to b's 0, and end at a's 100 getting mapped to b's 100 (boundary constraints). Also, because DTW 'takes care' of time shift, I would also expect 20's, 40's, 60's and 80's of the two signals to be mapped with one another. (I haven't tried DTWing these two myself, saying it from intuition, so please check. There is little possibility of non-intuitive warpings taking place as well, depending on step patterns allowed / global constraints, but let's go with intuitive warpings for the moment for the ease of understanding / sake of simplicity).
For the remaining data points, clearly, distances corresponding to mapping are now non-zero, therefore the overall distance too is non-zero. Our distance/overall cost value has changed from zero to something that is non-zero.
Now, this was the case when our signals were overly simplistic, linearly increasing. Imagine the variability that comes into the picture when you have real-life, non-monotonic signals and need to find the time warping between them. :)
(PS: Please don't forget to upvote answer :D). Thanks.
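To see this numerically without mlpy, here is a plain-numpy sketch of classic DTW (absolute-difference local cost, standard step pattern); mlpy's dtw_std may use a different step pattern or normalisation, so only the qualitative behaviour is meant to match:

import numpy as np

def dtw_distance(x, y):
    # Classic O(len(x)*len(y)) DTW with |x_i - y_j| as the local cost
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.arange(0, 101, 5)            # [0, 5, ..., 100], 21 samples
b_same = np.arange(0, 101, 5)       # identical series
b_stretched = np.arange(0, 101, 4)  # [0, 4, ..., 100], 26 samples

print(dtw_distance(a, b_same))       # 0.0
print(dtw_distance(a, b_stretched))  # > 0: the mappings now carry non-zero cost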
Obviously, the curves are not identical, and therefore the distance cannot be 0 (otherwise it would not be a distance, by definition).
What IS "relatively large"? The distance probably is not infinite, is it?
With 140 points in time, each contributing a small delta, this still adds up to a non-zero number.
The distance "New York" to "Beijing" is roughly 11018 km. Or 1101800000 mm.
The distance to Alpha Centauri is small, just 4.34 light years. That is the nearest other stellar system to us...
Compare with the distance to a non-similar series; that distance should be much larger.

large scale clustering library possibly with python bindings

I've been trying to cluster a larger dataset consisting of 50,000 measurement vectors with dimension 7. I'm trying to generate about 30 to 300 clusters for further processing.
I've been trying the following clustering implementations with no luck:
Pycluster.kcluster (gives only 1-2 non-empty clusters on my dataset)
scipy.cluster.hierarchy.fclusterdata (runs too long)
scipy.cluster.vq.kmeans (runs out of memory)
sklearn.cluster.hierarchical.Ward (runs too long)
Are there any other implementations which I might miss?
50000 instances and 7 dimensions isn't really big, and should not kill an implementation.
Although it doesn't have Python bindings, give ELKI a try. The benchmark set they use on their homepage is 110,250 instances in 8 dimensions, and they apparently run k-means on it in 60 seconds and the much more advanced OPTICS in 350 seconds.
Avoid hierarchical clustering. It's really only for small data sets. The way it is commonly implemented, via matrix operations, is O(n^3), which is really bad for large data sets. So I'm not surprised these two timed out for you.
DBSCAN and OPTICS when implemented with index support are O(n log n). When implemented naively, they are in O(n^2). K-means is really fast, but often the results are not satisfactory (because it always splits in the middle). It should run in O(n * k * iter) which usually converges in not too many iterations (iter<<100). But it will only work with Euclidean distance, and just doesn't work well with some data (high-dimensional, discrete, binary, clusters with different sizes, ...)
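For reference, a quick DBSCAN sketch with scikit-learn at your data size; eps and min_samples are placeholder values that would need tuning, and on uniformly random data most points will come out as noise:

import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.rand(50_000, 7)
# scikit-learn uses a tree index for the neighbourhood queries in low
# dimensions, which keeps this well below the naive O(n^2) behaviour
labels = DBSCAN(eps=0.1, min_samples=10).fit_predict(X)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters, "clusters found,", np.sum(labels == -1), "noise points")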
Since you're already trying scikit-learn: sklearn.cluster.KMeans should scale better than Ward and supports parallel fitting on multicore machines. MiniBatchKMeans is better still, but won't do random restarts for you.
>>> import numpy as np
>>> from sklearn.cluster import MiniBatchKMeans
>>> X = np.random.randn(50000, 7)
>>> %timeit MiniBatchKMeans(30).fit(X)
1 loops, best of 3: 114 ms per loop
My package milk handles this problem easily:
import milk
import numpy as np
data = np.random.rand(50000,7)
%timeit milk.kmeans(data, 300)
1 loops, best of 3: 14.3 s per loop
I wonder whether you meant to write 500,000 data points, because 50k points is not that much. If so, milk takes a while longer (~700 sec), but still handles it well as it does not allocate any memory other than your data and the centroids.
The real answer for actually large scale situations is to use something like FAISS, Facebook Research's library for efficient similarity search and clustering of dense vectors.
See
https://github.com/facebookresearch/faiss/wiki/Faiss-building-blocks:-clustering,-PCA,-quantization
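A minimal sketch along the lines of that wiki page (faiss.Kmeans trains on float32 data; the sizes mirror the 50,000 x 7 / 300-cluster case above):

import numpy as np
import faiss  # e.g. pip install faiss-cpu

d, k = 7, 300
x = np.random.rand(50_000, d).astype("float32")  # FAISS expects float32

kmeans = faiss.Kmeans(d, k, niter=20, verbose=True)
kmeans.train(x)

# Trained centroids, plus the nearest centroid for every input vector
centroids = kmeans.centroids
_, labels = kmeans.index.search(x, 1)
print(centroids.shape, labels.shape)  # (300, 7) (50000, 1)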
OpenCV has a k-means implementation, Kmeans2.
Expected running time is on the order of O(n**4) - for an order-of-magnitude approximation, see how long it takes to cluster 1000 points, then multiply that by seven million (50**4 rounded up).
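For reference, current OpenCV Python bindings expose this as cv2.kmeans; a rough sketch with illustrative termination criteria and attempt count:

import numpy as np
import cv2

data = np.random.rand(50_000, 7).astype(np.float32)  # cv2.kmeans wants float32

# Stop after 20 iterations or when the centres move less than 1.0
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
compactness, labels, centers = cv2.kmeans(
    data, 30, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)
print(centers.shape)  # (30, 7)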

Hierarchical clustering of 1 million objects

Can anyone point me to a hierarchical clustering tool (preferable in python) that can cluster ~1 Million objects? I have tried hcluster and also Orange.
hcluster had trouble with 18k objects. Orange was able to cluster 18k objects in seconds, but failed with 100k objects (saturated memory and eventually crashed).
I am running on a 64bit Xeon CPU (2.53GHz) and 8GB of RAM + 3GB swap on Ubuntu 11.10.
The problem probably is that they try to compute the full 2D distance matrix (for 1 million objects that is on the order of terabytes in double precision, and already tens of gigabytes at 100k objects), and then their algorithm runs in O(n^3) time anyway.
You should seriously consider using a different clustering algorithm. Hierarchical clustering is slow and the results are not at all convincing usually. In particular for millions of objects, where you can't just look at the dendrogram to choose the appropriate cut.
If you really want to continue hierarchical clustering, I believe that ELKI (Java though) has an O(n^2) implementation of SLINK, which at 1 million objects should be approximately 1 million times as fast as the naive O(n^3) approach. I don't know if they already have CLINK, too. And I'm not sure whether there actually is any sub-O(n^3) algorithm for variants other than single-link and complete-link.
Consider using other algorithms. k-means for example scales very well with the number of objects (it's just not very good usually either, unless your data is very clean and regular). DBSCAN and OPTICS are quite good in my opinion, once you have a feel for the parameters. If your data set is low dimensional, they can be accelerated quite well with an appropriate index structure. They should then run in O(n log n), if you have an index with O(log n) query time. Which can make a huge difference for large data sets. I've personally used OPTICS on a 110k images data set without problems, so I can imagine it scales up well to 1 million on your system.
To beat O(n^2), you'll first have to reduce your 1M points (documents) to e.g. 1000 piles of 1000 points each, or 100 piles of 10k each, or ...
Two possible approaches:
1) build a hierarchical tree from say 15k points, then add the rest one by one: time ~ 1M * treedepth
2) first build 100 or 1000 flat clusters, then build your hierarchical tree of the 100 or 1000 cluster centres (see the sketch below).
How well either of these might work depends critically on the size and shape of your target tree -- how many levels, how many leaves? What software are you using, and how many hours / days do you have to do the clustering?
For the flat-cluster approach, k-d trees work fine for points in 2d, 3d, 20d, even 128d -- not your case. I know hardly anything about clustering text; locality-sensitive hashing?
Take a look at scikit-learn clustering -- it has several methods, including DBSCAN.
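A rough sketch of the second approach above (flat clusters first, then a hierarchical tree of the centres) with scikit-learn and scipy, on random stand-in data scaled down to 100,000 points:

import numpy as np
from sklearn.cluster import MiniBatchKMeans
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
X = rng.random((100_000, 20)).astype(np.float32)  # stand-in for the real points

# Stage 1: flat clustering down to 1,000 centres (roughly linear in n)
flat = MiniBatchKMeans(n_clusters=1_000, batch_size=10_000, random_state=0).fit(X)

# Stage 2: ordinary hierarchical clustering on the 1,000 centres only
Z = linkage(flat.cluster_centers_, method="ward")
# Z encodes the full merge tree; cut it with scipy.cluster.hierarchy.fcluster
# or plot it with dendrogram() as needed
print(Z.shape)  # (999, 4)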
Added: see also
google-all-pairs-similarity-search
"Algorithms for finding all similar pairs of vectors in sparse vector data", Beyardo et el. 2007
SO hierarchical-clusterization-heuristics

how to generate all possible combinations of a 14x10 matrix containing only 1's and 0's

I'm working on a problem and one solution would require an input of every 14x10 matrix that is possible to be made up of 1's and 0's... how can I generate these so that I can input every possible 14x10 matrix into another function? Thank you!
Added March 21: It looks like I didn't word my post appropriately. Sorry. What I'm trying to do is optimize the output of 10 different production units (given different speeds and amounts of downtime) for several scenarios. My goal is to place blocks of downtime so as to minimize the differences in production on a day-to-day basis. The amount and frequency of downtime each unit is allowed is given. I am currently trying to evaluate a three-week cycle, meaning every three weeks each production unit is taken down for a given number of hours. I was asking the computer to determine the order in which the units would be taken down, based on the constraint that the lines come down only once every 3 weeks and that the difference in daily production is the smallest possible. My first approach was to use Excel (as I tried to describe above) and it didn't work (no surprise there), where 1 = running, 0 = off, and these are summed to calculate production. The calculated production is subtracted from a set maximum daily production. Then these differences were compared going from Mon-Tues, Tues-Wed, etc. for a three-week time frame and minimized using Solver. My next approach was to write a Matlab code where the input was a tolerance (the allowed day-to-day variation). Is there a program that already does this, or an easier approach? It seems simple enough, but I'm still thinking through the different ways to go about this. Any insight would be much appreciated.
The actual implementation depends heavily on how you want to represent matrices… But assuming the matrix can be represented by a 14 * 10 = 140 element list:
from itertools import product

for matrix in product([0, 1], repeat=140):
    pass  # ... do stuff with the matrix ...
Of course, as other posters have noted, this probably isn't what you want to do… But if it really is what you want to do, that's the best code (given your requirements) to do it.
Generating every possible matrix of 1's and 0's for 14*10 would produce 2**140 matrices. I don't believe you would have enough lifetime for this; I don't know if the sun would still shine before you finish. That is why it is practically impossible to generate all those matrices. You must look for some other solution; this looks like brute force.
This is absolutely impossible! The number of possible matrices is 2^140, which is around 1.4e42. However, consider the following...
If you were to generate two 14-by-10 matrices at random, the odds that they would be the same are 1 in 1.4e42.
If you were to generate 1 billion unique 14-by-10 matrices, then the odds that the next one you generate would be the same as one of those would still be exceedingly slim: 1 in 1.4e33.
The default random number stream in MATLAB uses a Mersenne twister algorithm that has a period of 2^19937-1. Therefore, the random number generator shouldn't start repeating itself any time this eon.
Your approach should be thus:
Find a computer no one ever wants to use again.
Give it as much storage space as possible to save your results.
Install MATLAB on it and fire it up.
Start computing matrices at random like so:
while true
    newMatrix = randi([0 1],14,10);
    %# Process the matrix and output your results to disk
end
Walk away
Since there are so many combinations, you don't have to compare newMatrix with any of the previous matrices since the length of time before a repeat is likely to occur is astronomically large. Your processing is more likely to stop due to other reasons first, such as (in order of likely occurrence):
You run out of disk space to store your results.
There's a power outage.
Your computer suffers a fatal hardware failure.
You pass away.
The Earth passes away.
The Universe dies a slow heat death.
NOTE: Although I injected some humor into the above answer, I think I have illustrated one useful alternative. If you simply want to sample a small subset of the possible combinations (where even 1 billion could be considered "small" due to the sheer number of combinations) then you don't have to go through the extra time- and memory-consuming steps of saving all of the matrices you've already processed and comparing new ones to it to make sure you aren't repeating matrices. Since the odds of repeating a combination are so low, you could safely do this:
for iLoop = 1:whateverBigNumberYouWant
    newMatrix = randi([0 1],14,10);  %# Generate a new matrix
    %# Process the matrix and save your results
end
Are you sure you want every possible 14x10 matrix? There are 140 elements in each matrix, and each element can be on or off. Therefore there are 2^140 possible matrices. I suggest you reconsider what you really want.
Edit: I noticed you mentioned in a comment that you are trying to minimize something. There is an entire mathematical field called optimization devoted to doing this type of thing. The reason this field exists is because quite often it is not possible to exhaustively examine every solution in anything resembling a reasonable amount of time.
Trying this:
import numpy
for i in xrange(int(1e9)): a = numpy.random.random_integers(0,1,(14,10))
(which is much, much, much smaller than what you require) should be enough to convince you that this is not feasible. It also shows you how to generate one, or a few, such random matrices; even up to a million is pretty fast.
EDIT: changed to xrange to "improve speed and memory requirements" :)
You don't have to build them all in memory; you can iterate over this generator:
def everyPossibleMatrix(x, y):
    N = x * y
    for i in range(2**N):
        b = "{:0{}b}".format(i, N)  # i written as an N-digit binary string
        yield '\n'.join(b[j*x:(j+1)*x] for j in range(y))
Depending on what you want to accomplish with the generated matrices, you might be better off generating a random sample and running a number of simulations. Something like:
import numpy

matrix_samples = []
# generate 10 matrices
for i in range(10):
    sample = numpy.random.binomial(1, .5, 14 * 10)
    sample.shape = (14, 10)
    matrix_samples.append(sample)
You could do this a number of times to see how results vary across simulations. Of course, you could also modify the code to ensure that there are no repeats in a sample set, again depending on what you're trying to accomplish.
Are you saying that you have a table with 140 cells and each value can be 1 or 0 and you'd like to generate every possible output? If so, you would have 2^140 possible combinations...which is quite a large number.
Instead of just suggesting that this is infeasible, I would suggest considering a scheme that samples the important subset of all possible combinations instead of applying a brute-force approach. As one of the replies suggested, you are doing minimization. There are numerical techniques for this, such as simulated annealing and Monte Carlo sampling, as well as traditional minimization algorithms. You might want to look into whether one is appropriate in your case.
I was actually much more pessimistic to begin with, but consider:
from math import log, e

def timeInYears(totalOpsNeeded=2**140, currentOpsPerSecond=10**9, doublingPeriodInYears=1.5):
    secondsPerYear = 365.25 * 24 * 60 * 60
    doublingPeriodInSeconds = doublingPeriodInYears * secondsPerYear
    k = log(2, e) / doublingPeriodInSeconds  # time-proportionality constant
    timeInSeconds = log(1 + k * totalOpsNeeded / currentOpsPerSecond, e) / k
    return timeInSeconds / secondsPerYear
If we assume that computer processing power continues to double every 18 months, and you can currently do a billion combinations per second (optimistic, but for the sake of argument), and you start today, your calculation will be complete on or about April 29th, 2137.
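Calling the function defined above with its defaults reproduces that ballpark:

print(timeInYears())  # roughly 126 (years), i.e. a completion date in the 2130s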
Here is an efficient way to get started in MATLAB:
First generate all 1024 possible rows of length 10 containing only zeros and ones:
dec2bin(0:2^10-1)
Now you have all possible rows, and you can sample from them as you wish. For example by calling the following line a few times:
randperm(1024,14)
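For completeness, a rough Python equivalent of the same idea (all 1,024 possible rows of length 10, then a random selection of 14 of them):

import itertools
import random

# All 1024 possible rows of length 10 containing only zeros and ones
rows = list(itertools.product([0, 1], repeat=10))

# One random 14x10 matrix built from 14 distinct rows, like randperm(1024,14);
# use random.choices instead if repeated rows should be allowed
matrix = random.sample(rows, 14)
for row in matrix:
    print(row)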
