I need some help optimizing a piece of Python code

I'm working on a KNN classifier in Python, but I have a problem.
The following piece of code takes 7.5-9.0 s to complete, and I'll have to run it 60,000 times.
for fold in folds:
    for dot2 in fold:
        """
        distances[x][0] = class of dot2
        distances[x][1] = distance between dot1 and dot2
        """
        distances.append([dot2[0], calc_distance(dot1[1:], dot2[1:], method)])
The "folds" variable is a list with 10 folds that summed contain 60.000 inputs of images in the .csv format. The first value of each dot is the class it belongs to. All the values are in integer.
Is there a way to make this line run any faster ?
Here is the calc_distance function:
import math

def calc_distance(dot1, dot2, distance):
    if distance == "manhanttan":
        total = 0
        # for each coord, take the absolute difference
        for x in range(0, len(dot1)):
            total = total + abs(dot1[x] - dot2[x])
        return total
    elif distance == "euclidiana":
        total = 0
        for x in range(0, len(dot1)):
            total = total + (dot1[x] - dot2[x])**2
        return math.sqrt(total)
    elif distance == "supremum":
        total = 0
        for x in range(0, len(dot1)):
            if abs(dot1[x] - dot2[x]) > total:
                total = abs(dot1[x] - dot2[x])
        return total
    elif distance == "cosseno":
        dist = 0
        p1_p2_mul = 0
        p1_sum = 0
        p2_sum = 0
        for x in range(0, len(dot1)):
            p1_p2_mul = p1_p2_mul + dot1[x]*dot2[x]
            p1_sum = p1_sum + dot1[x]**2
            p2_sum = p2_sum + dot2[x]**2
        p1_sum = math.sqrt(p1_sum)
        p2_sum = math.sqrt(p2_sum)
        quociente = p1_sum*p2_sum
        dist = p1_p2_mul/quociente
        return dist
EDIT:
Found a way to make it faster at least for the "manhanttan" method. Instead of:
if distance == "manhanttan":
total = 0
#for each coord, take the absolute difference
for x in range(0, len(dot1)):
total = total + abs(dot1[x] - dot2[x])
return total
I put:
if distance == "manhanttan":
totalp1 = 0
totalp2 = 0
#for each coord, take the absolute difference
for x in range(0, len(dot1)):
totalp1 += dot1[x]
totalp2 += dot2[x]
return abs(totalp1-totalp2)
The abs() call is very heavy

There are many guides to "profiling python"; you should search for some, read them, and walk through the profiling process to ensure you know what parts of your work are taking the most time.
But if this is really the core of your work, it's a fair bet that calc_distance is where the majority of the running time is being spent.
Optimizing that deeply will probably require using NumPy accelerated math or a similar, lower-level approach.
As a quick and dirty approach requiring less invasive profiling and rewriting, try installing the PyPy implementation of Python and running under it. I have seen easy 2x or more accelerations compared to the standard (CPython) implementation.
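To give a rough idea of what the NumPy route can look like, here is a minimal sketch. It assumes the 60,000 rows have been stacked into a single integer array up front (dot1 and folds are the names from the question; everything else is illustrative, not the asker's code):

import numpy as np

# Stack all rows once, outside the classification loop.
# Column 0 is the class label, the remaining columns are the pixel values.
data = np.vstack([np.asarray(fold, dtype=np.int64) for fold in folds])
labels, feats = data[:, 0], data[:, 1:]

query = np.asarray(dot1[1:], dtype=np.int64)

# All 60,000 distances in one vectorized pass instead of a Python-level loop per row.
manhattan = np.abs(feats - query).sum(axis=1)
euclidean = np.sqrt(((feats - query) ** 2).sum(axis=1))
supremum = np.abs(feats - query).max(axis=1)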

I'm confused. Did you try the profiler?
python -m cProfile myscript.py
It will show you where the bulk of the time is being consumed and give you hard data to work with, e.g. refactor to reduce the number of calls, restructure the input data, substitute this function for that, etc.
https://docs.python.org/3/library/profile.html
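You can also drive the profiler from inside Python and sort the report; a minimal sketch, where knn_run is a placeholder for your own entry point:

import cProfile
import pstats

# Profile one call and dump the raw stats to a file ("knn_run" is a placeholder).
cProfile.run("knn_run()", "knn.prof")

# Print the ten functions with the largest cumulative time.
pstats.Stats("knn.prof").sort_stats("cumulative").print_stats(10)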

In the first place, you should avoid a single calc_distance function that re-selects the metric by comparing strings on every call: define one function per distance and call the right one directly. As Lee Daniel Crocker suggested, don't use the slicing; just start your loop ranges at 1.
For the cosine distance, I would recommend normalizing all the dot vectors once and for all. That way the distance computation reduces to a dot product.
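A minimal sketch of both ideas, with made-up data just for illustration:

import numpy as np

# Illustrative data: six stored vectors and one query vector.
feats = np.random.randint(0, 256, size=(6, 784)).astype(np.float64)
query = np.random.randint(0, 256, size=784).astype(np.float64)

def manhattan(a, b):
    return np.abs(a - b).sum()

def euclidean(a, b):
    d = a - b
    return np.sqrt(d @ d)

# Normalize every stored vector once; cosine similarity then reduces to a dot product.
unit_feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
unit_query = query / np.linalg.norm(query)
cosine_sim = unit_feats @ unit_query  # one similarity per stored vector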
These micro-optimizations can give you some speedup. But a bigger gain should be possible by switching to a better algorithm: the kNN classifier calls for a k-d tree, which will let you quickly discard a significant fraction of the points from consideration.
This is harder to implement (you'll have to adapt it slightly for the different distances; the cosine distance will make it tricky).

Related

faster way to erode/dilate images

I'm making a script that does some mathematical morphology on images (mainly GIS rasters). So far I've implemented erosion and dilation, with opening/closing by reconstruction still on the TODO list, but that's not the subject here.
My implementation is very simple, with nested loops; I tried it on a 10900x10900 raster and, unsurprisingly, it took an absurdly long time to finish.
Before I continue with other operations, I'd like to know if there's a faster way to do this.
My implementation:
def erode(image, S):
    (m, n) = image.shape
    buffer = np.full((m, n), 0).astype(np.float64)
    for i in range(S, m - S):
        for j in range(S, n - S):
            buffer[i, j] = np.min(image[i - S: i + S + 1, j - S: j + S + 1])  # dilation is just np.max()
    return buffer
I've heard about vectorization, but I'm not sure I understand it very well. Any advice or pointers are appreciated. Also, I am aware that OpenCV has these morphological operations, but I want to implement my own to learn about them.
The question here is: do you want a more efficient implementation because you want to learn about NumPy, or do you want a more efficient algorithm?
I think there are two obvious things that could be improved in your approach. One is that you want to avoid looping at the Python level, because that is slow. The other is that you are taking the minimum of overlapping parts of the array, so you can make it more efficient by reusing the effort you already spent finding the previous window's minimum.
I will illustrate that with 1d implementations of erosion.
Baseline for comparison
Here is basically your implementation, just a 1D version:
def erode(image, S):
    n = image.shape[0]
    buffer = np.full(n, 0).astype(np.float64)
    for i in range(S, n - S):
        buffer[i] = np.min(image[i - S: i + S + 1])  # dilation is just np.max()
    return buffer
You can make this faster using stride_tricks / sliding_window_view, i.e. by avoiding the Python loop and doing the work at the NumPy level.
Faster Implementation
np.lib.stride_tricks.sliding_window_view(arr,2*S+1).min(1)
Notice that it's not doing quite the same thing, since it only starts producing values once there are 2S+1 values to take the minimum of. But for this illustration I will ignore this problem.
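If you do want the outputs to line up, a minimal sketch is to pad the windowed result with S zeros on each side, matching the untouched border of the loop version above:

import numpy as np

def erode_1d(image, S):
    # Vectorized interior minima, then zero padding so the result keeps the original length.
    interior = np.lib.stride_tricks.sliding_window_view(image, 2 * S + 1).min(axis=1)
    return np.pad(interior.astype(np.float64), S, constant_values=0.0)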
Faster Algorithm
A completely different approach is not to compute the min from scratch for every window, but to keep the window's values ordered, adding one value and removing one value as the window slides one position to the right.
Here is a rough implementation of that (SortedDict comes from the sortedcontainers package):
from sortedcontainers import SortedDict

def smart_erode(arr, m):
    n = arr.shape[0]
    sd = SortedDict()
    for new in arr[:m]:
        if new in sd:
            sd[new] += 1
        else:
            sd[new] = 1
    for to_remove, new in zip(arr[:-m+1], arr[m:]):
        yield sd.keys()[0]
        if new in sd:
            sd[new] += 1
        else:
            sd[new] = 1
        if sd[to_remove] > 1:
            sd[to_remove] -= 1
        else:
            sd.pop(to_remove)
    yield sd.keys()[0]
Notice that an ordered set wouldn't work, and an ordered list would need a way to remove just one element with a given value, since the array can contain repeated values. So I am using a sorted dict that stores, for each value, how many copies are currently in the window.
A Rough Benchmark
I want to illustrate how the 3 implementations compare for different window sizes. So I am testing them with an array of 10^5 random integers for different window sizes ranging from 10^3 to 10^4.
import numpy as np
import pandas as pd

# (Run in an IPython/Jupyter session; %timeit is an IPython magic.)
arr = np.random.randint(0, 10**5, 10**5)

sliding_window_times = []
op_times = []
better_alg_times = []

for m in np.linspace(0, 10**4, 11)[1:].astype('int'):
    x = %timeit -o -n 1 -r 1 np.lib.stride_tricks.sliding_window_view(arr, 2*m+1).min(1)
    sliding_window_times.append(x.best)
    x = %timeit -o -n 1 -r 1 erode(arr, m)
    op_times.append(x.best)
    x = %timeit -o -n 1 -r 1 tuple(smart_erode(arr, 2*m+1))
    better_alg_times.append(x.best)
    print("")

pd.DataFrame({'Baseline Comparison': op_times,
              'Faster Implementation': sliding_window_times,
              'Faster Algorithm': better_alg_times,
              },
             index=np.linspace(0, 10**4, 11)[1:].astype('int')
             ).plot.bar()
Notice that for very small window sizes the raw speed of the NumPy implementation wins out, but very quickly the work we save by not recomputing the min from scratch becomes more important.

How can I check for a converging value more efficiently in Python

I'm trying to estimate the value of pi to 3 decimal places, but it feels extremely slow; for example, it takes almost 20 minutes to loop 10,000 times. I'm assuming that's because it keeps checking every single value individually, so I was wondering if there's any way to loop faster, since I need to find a better average.
import random

def est(b, a, d, c):
    total = 0
    count = 0
    dp = False
    while not dp:
        count += 1
        x, y = random.uniform(a, b), random.uniform(c, d)
        if x**2 + y**2 < 1:
            total += 1
        areaest = ((abs(b-a) * abs(d-c)) / count) * total
        round = float("{:.3f}".format(areaest))
        if round == 3.142:
            dp = True
    return count
There isn't an O() improvement to be had. Monte Carlo methods rely on trying things many, many times.
There are lots of low-level ways to cut cycles, though, which you pick up by experience. For example, move invariant computations out of loops, multiply once instead of raising to the power 2, use float literals where appropriate instead of integer literals that you know will have to be converted to float every time ... Here's a version with a bunch of those:
def est(b, a, d, c):
    from random import uniform
    box_area = float(abs(b-a) * abs(d-c))
    total = count = 0
    while True:
        count += 1
        x, y = uniform(a, b), uniform(c, d)
        if x*x + y*y < 1.0:
            total += 1
        areaest = box_area / count * total
        if abs(areaest - 3.142) < 0.0005:
            break
    return count
BTW, a meta-comment: why are you checking for convergence to 3.142 at all? In a Monte Carlo application, you typically don't know the result you're looking for in advance. If you did, why bother running Monte Carlo?
More typical: you pick a fixed number of iterations to run in advance, then average over many runs each using that fixed number of iterations. Timing is then at least roughly predictable.
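A rough sketch of that fixed-iteration setup (the function name and sample counts are just illustrative):

from random import uniform

def estimate_pi(n_samples=100_000):
    # Throw darts at the unit square; 4 * (fraction inside the quarter circle) estimates pi.
    hits = sum(uniform(0.0, 1.0)**2 + uniform(0.0, 1.0)**2 < 1.0 for _ in range(n_samples))
    return 4.0 * hits / n_samples

# Average several independent runs for a steadier estimate with predictable timing.
runs = [estimate_pi() for _ in range(10)]
print(sum(runs) / len(runs))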

Efficient Particle-Pair Interactions Calculation

I have an N-body simulation that generates a list of particle positions, for multiple timesteps in the simulation. For a given frame, I want to generate a list of the pairs of particles' indices (i, j) such that dist(p[i], p[j]) < masking_radius. Essentially I'm creating a list of "interaction" pairs, where the pairs are within a certain distance of each other. My current implementation looks something like this:
interaction_pairs = []
# going through each unique pair (order doesn't matter)
for i in range(num_particles):
    for j in range(i + 1, num_particles):
        if dist(p[i], p[j]) < masking_radius:
            interaction_pairs.append((i, j))
Because of the large number of particles, this process takes a long time (>1 hr per test), and it is severely limiting to what I need to do with the data. I was wondering if there was any more efficient way to structure the data such that calculating these pairs would be more efficient instead of comparing every possible combination of particles. I was looking into KDTrees, but I couldn't figure out a way to utilize them to compute this more efficiently. Any help is appreciated, thank you!
Since you are using Python, scikit-learn has several nearest-neighbour implementations:
http://scikit-learn.org/stable/modules/neighbors.html
Both KDTree and BallTree are provided.
With a KDTree, the main idea is to push all of your particles into the tree and then, for each particle, ask the query "give me all particles within range X". A KDTree usually does this faster than a brute-force search.
You can read more, for example, here: https://www.cs.cmu.edu/~ckingsf/bioinfo-lectures/kdtrees.pdf
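For example, a minimal sketch with scikit-learn's KDTree (positions stands in for one frame's (N, 3) coordinate array; the radius is illustrative):

import numpy as np
from sklearn.neighbors import KDTree

positions = np.random.rand(1000, 3)   # stand-in for one frame's particle coordinates
masking_radius = 0.05

tree = KDTree(positions)
neighbors = tree.query_radius(positions, r=masking_radius)

# Keep each unordered pair once (i < j); this also drops the trivial i == i matches.
interaction_pairs = [(i, j) for i, js in enumerate(neighbors) for j in js if i < j]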
If you are working in 2D or 3D space, another option is to cut the space into a coarse grid (with cell size equal to the masking radius) and assign each particle to one grid cell. Then you can find candidate interaction partners just by checking the neighbouring cells (you still have to do a distance check, but on far fewer particle pairs), as in the sketch below.
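Here is a rough sketch of that grid bucketing in plain Python, assuming points is a list of (x, y, z) tuples and r is the masking radius (names are illustrative):

from collections import defaultdict
from itertools import product

def grid_pairs(points, r):
    # Bucket each point index into a cell of side r.
    cells = defaultdict(list)
    for idx, (x, y, z) in enumerate(points):
        cells[(int(x // r), int(y // r), int(z // r))].append(idx)
    pairs = []
    for (cx, cy, cz), members in cells.items():
        # Candidates live in this cell or one of its 26 neighbours.
        cand = []
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            cand.extend(cells.get((cx + dx, cy + dy, cz + dz), []))
        for i in members:
            for j in cand:
                # j > i keeps each pair once; the distance check uses squared distances.
                if j > i and sum((points[i][k] - points[j][k]) ** 2 for k in range(3)) < r * r:
                    pairs.append((i, j))
    return pairs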
Here's a fairly simple technique using plain Python that can reduce the number of comparisons required.
We first sort the points along either the X, Y, or Z axis (selected by axis in the code below). Let's say we choose the X axis. Then we loop over point pairs like your code does, but when we find a pair whose distance is greater than the masking_radius we test whether the difference in their X coordinates is also greater than the masking_radius. If it is, then we can bail out of the inner j loop because all points with a greater j have a greater X coordinate.
My dist2 function calculates the squared distance. This is faster than calculating the actual distance because computing the square root is relatively slow.
I've also included code that behaves similar to your code, i.e., it tests every pair of points, for speed comparison purposes; it also serves to check that the fast code is correct. ;)
from random import seed, uniform
from operator import itemgetter

seed(42)

# Make some fake data
def make_point(hi=10.0):
    return [uniform(-hi, hi) for _ in range(3)]

psize = 1000
points = [make_point() for _ in range(psize)]
masking_radius = 4.0
masking_radius2 = masking_radius ** 2

def dist2(p, q):
    return (p[0] - q[0])**2 + (p[1] - q[1])**2 + (p[2] - q[2])**2

pair_count = 0
test_count = 0

do_fast = 1
if do_fast:
    # Sort the points on one axis
    axis = 0
    points.sort(key=itemgetter(axis))
    # Fast
    for i, p in enumerate(points):
        left, right = i - 1, i + 1
        for j in range(i + 1, psize):
            test_count += 1
            q = points[j]
            if dist2(p, q) < masking_radius2:
                #interaction_pairs.append((i, j))
                pair_count += 1
            elif q[axis] - p[axis] >= masking_radius:
                break
        if i % 100 == 0:
            print('\r {:3} '.format(i), flush=True, end='')
    total_pairs = psize * (psize - 1) // 2
    print('\r {} / {} tests'.format(test_count, total_pairs))
else:
    # Slow
    for i, p in enumerate(points):
        for j in range(i+1, psize):
            q = points[j]
            if dist2(p, q) < masking_radius2:
                #interaction_pairs.append((i, j))
                pair_count += 1
        if i % 100 == 0:
            print('\r {:3} '.format(i), flush=True, end='')

print('\n', pair_count, 'pairs')
output with do_fast = 1
181937 / 499500 tests
13295 pairs
output with do_fast = 0
13295 pairs
Of course, if most of the point pairs are within masking_radius of each other, there won't be much benefit in using this technique. And sorting the points adds a little bit of time, but Python's TimSort is rather efficient, especially if the data is already partially sorted, so if the masking_radius is sufficiently small you should see a noticeable improvement in the speed.

How can I improve this code with nested loops?

I have a function that counts the number of collisions between pairs of points in each frame.
I have no idea how to improve this very slow code.
# data example
# [[89, 814, -77.1699249744415, 373.870468139648, 0.0], [71, 814, -119.887828826904, 340.433287620544, 0.0]...]
def is_collide(data, req_dist):
    # req_dist - distance below which a collision is counted
    temp = data
    temp.sort(key=Measurements.sort_by_frame)
    max_frame = data[-1][1]
    min_frame = data[0][1]
    collissions = 0
    # max_frame - min_frame is approximately 60000
    # the slowest part
    for i in range(min_frame, max_frame):
        frames = [line for line in temp if line[1] == i]
        temp = [line for line in temp if line[1] != i]
        l = len(frames)
        for j in range(0, l, 1):
            for k in range(j+1, l, 1):
                dist = ((frames[j][2] - frames[k][2])**2 + (frames[j][3] - frames[k][3])**2)**0.5
                if dist < req_dist:
                    collissions += 1
    return collissions
Computing the distance between every pair of points is an O(n**2) operation, which becomes very expensive even for moderately sized n.
I would suggest stepping back and seeing if there is a better data structure for this:
Quad-trees: check the Wikipedia article on quadtrees; these can be used for collision detection.
https://en.wikipedia.org/wiki/Quadtree
In Jon Bentley's book "Programming Pearls", Section 2, Column 5 is very relevant to this. He describes all the optimizations needed to compute something similar in an N-body problem. I strongly suggest reading that for ideas.
Having said that, I think there are some places where you could make fairly simple improvements and get a modest speed-up.
1) The exponentiation used for the square root (**0.5) is an expensive operation.
2) You use **2 to compute a square, when it's probably faster to just multiply the value by itself.
You could store each difference in a temporary and multiply it by itself, but even better: you don't need the square root at all. As long as all distances are computed the same way (without the **0.5), you can compare them against the squared threshold; relative order is preserved without the sqrt. I answered a similar question here:
Fastest way to calculate Euclidean distance in c
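Applied to the inner loops above, the change looks roughly like this (a sketch of just the hot part, reusing the names from the question, not a drop-in rewrite of the whole function):

req_dist2 = req_dist * req_dist   # square the threshold once, outside the loops

for j in range(l):
    for k in range(j + 1, l):
        dx = frames[j][2] - frames[k][2]
        dy = frames[j][3] - frames[k][3]
        # no **0.5: squared distances order the same way as distances
        if dx * dx + dy * dy < req_dist2:
            collissions += 1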
Hope this helps!

python: Generating integer partitions

I need to generate all the partitions of a given integer.
I found this algorithm by Jerome Kelleher, which is stated to be the most efficient one:
def accelAsc(n):
    a = [0 for i in range(n + 1)]
    k = 1
    a[0] = 0
    y = n - 1
    while k != 0:
        x = a[k - 1] + 1
        k -= 1
        while 2*x <= y:
            a[k] = x
            y -= x
            k += 1
        l = k + 1
        while x <= y:
            a[k] = x
            a[l] = y
            yield a[:k + 2]
            x += 1
            y -= 1
        a[k] = x + y
        y = x + y - 1
        yield a[:k + 1]
reference: http://homepages.ed.ac.uk/jkellehe/partitions.php
By the way, it is not all that efficient. For an input like 40 it freezes nearly my whole system for a few seconds before giving its output.
If it were a recursive algorithm, I would try to decorate it with a caching function or something to improve its efficiency, but as it stands I can't figure out what to do.
Do you have any suggestions for speeding up this algorithm? Or can you suggest another one, or a different approach for writing one from scratch?
To generate compositions directly you can use the following algorithm:
def ruleGen(n, m, sigma):
    """
    Generates all interpart restricted compositions of n with first part
    >= m using restriction function sigma. See Kelleher 2006, 'Encoding
    partitions as ascending compositions' chapters 3 and 4 for details.
    """
    a = [0 for i in range(n + 1)]
    k = 1
    a[0] = m - 1
    a[1] = n - m + 1
    while k != 0:
        x = a[k - 1] + 1
        y = a[k] - 1
        k -= 1
        while sigma(x) <= y:
            a[k] = x
            x = sigma(x)
            y -= x
            k += 1
        a[k] = x + y
        yield a[:k + 1]
This algorithm is very general, and can generate partitions and compositions of many different types. For your case, use
ruleGen(n, 1, lambda x: 1)
to generate all unrestricted compositions. The third argument is known as the restriction function, and describes the type of composition/partition that you require. The method is efficient, as the amount of effort required to generate each composition is constant when you average over all the compositions generated. If you would like to make it slightly faster in Python, it's easy to replace the calls to sigma(x) with the constant 1.
It's worth noting here as well that for any constant amortised time algorithm, what you actually do with the generated objects will almost certainly dominate the cost of generating them. For example, if you store all the partitions in a list, then the time spent managing the memory for this big list will be far greater than the time spent generating the partitions.
Say, for some reason, you want to take the product of each partition. If you take a naive approach to this, then the processing involved is linear in the number of parts, whereas the cost of generation is constant. It's quite difficult to think of an application of a combinatorial generation algorithm in which the processing doesn't dominate the cost of generation. So, in practice, there'll be no measurable difference between using the simpler and more general ruleGen with sigma(x) = x and the specialised accelAsc.
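As a quick sanity check of that equivalence (a small n just for illustration):

# Partitions of 5, generated as ascending compositions: both generators agree.
assert list(ruleGen(5, 1, lambda x: x)) == list(accelAsc(5)) == [
    [1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 3], [1, 2, 2], [1, 4], [2, 3], [5]]

# With sigma(x) = 1 you get all 2**(5 - 1) == 16 compositions of 5 instead.
assert len(list(ruleGen(5, 1, lambda x: 1))) == 16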
If you are going to use this function repeatedly for the same inputs, it could still be worth caching the return values (if you are going to use it across separate runs, you could store the results in a file).
If you can't find a significantly faster algorithm, then it should be possible to speed this up by an order of magnitude or two by moving the code into a C extension (this is probably easiest using cython), or alternatively by using PyPy instead of CPython (PyPy has its downsides - it does not yet support Python 3, or some commonly-used libraries like numpy and scipy).
The reason for this is, since python is dynamically typed, the interpreter is probably spending most of its time checking the types of the variables - for all the interpreter knows, one of the operations could turn x into a string, in which case expressions like x + y would suddenly have very different meanings. Cython gets around this problem by allowing you to statically declare the variables as integers, while PyPy has a just-in-time compiler which minimises redundant type checks.
Testing with n=75 I get:
PyPy 1.8:
w:\>c:\pypy-1.8\pypy.exe pstst.py
1.04800009727 secs.
CPython 2.6:
w:\>python pstst.py
5.86199998856 secs.
Cython + mingw + gcc 4.6.2:
w:\pstst> python -c "import pstst;pstst.run()"
4.06399989128
I saw no difference with Psyco(?)
The run function:
def run():
    import time
    start = time.time()
    for p in accelAsc(75):
        pass
    print time.time() - start, 'secs.'
If I change the definition of accelAsc for Cython to start with:
def accelAsc(int n):
    cdef int x, y, k
    # no more changes..
I get the Cython time down to 2.27 secs.
I'd say that your performance issue is somewhere else.
I didn't compare it with other approaches, but it does seem efficient to me:
import time
start = time.time()
partitions = list(accelAsc(40))
print('time: {:.5f} sec'.format(time.time() - start))
print('length:', len(partitions))
Gave:
time: 0.03636 sec
length: 37338
