Efficient computation of minimum of Haversine distances - python

I have a dataframe with >2.7MM coordinates, and a separate list of ~2,000 coordinates. I'm trying to return the minimum distance between the coordinates in each individual row compared to every coordinate in the list. The following code works on a small scale (dataframe with 200 rows), but when calculating over 2.7MM rows, it seemingly runs forever.
from haversine import haversine
df
Latitude Longitude
39.989 -89.980
39.923 -89.901
39.990 -89.987
39.884 -89.943
39.030 -89.931
end_coords_list = [(41.342,-90.423),(40.349,-91.394),(38.928,-89.323)]
for row in df.itertuples():
def min_distance(row):
beg_coord = (row.Latitude, row.Longitude)
return min(haversine(beg_coord, end_coord) for end_coord in end_coords_list)
df['Min_Distance'] = df.apply(min_distance, axis=1)
I know the issue lies in the sheer number of calculations that are happening (5.7MM * 2,000 = ~11.4BN), and the fact that running this many loops is incredibly inefficient.
Based on my research, it seems like a vectorized NumPy function might be a better approach, but I'm new to Python and NumPy so I'm not quite sure how to implement this in this particular situation.
Ideal Output:
df
Latitude Longitude Min_Distance
39.989 -89.980 3.7
39.923 -89.901 4.1
39.990 -89.987 4.2
39.884 -89.943 5.9
39.030 -89.931 3.1
Thanks in advance!

The haversine func in essence is :
# convert all latitudes/longitudes from decimal degrees to radians
lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
# calculate haversine
lat = lat2 - lat1
lng = lng2 - lng1
d = sin(lat * 0.5) ** 2 + cos(lat1) * cos(lat2) * sin(lng * 0.5) ** 2
h = 2 * AVG_EARTH_RADIUS * asin(sqrt(d))
Here's a vectorized method leveraging the powerful NumPy broadcasting and NumPy ufuncs to replace those math-module funcs so that we would operate on entire arrays in one go -
# Get array data; convert to radians to simulate 'map(radians,...)' part
coords_arr = np.deg2rad(coords_list)
a = np.deg2rad(df.values)
# Get the differentiations
lat = coords_arr[:,0] - a[:,0,None]
lng = coords_arr[:,1] - a[:,1,None]
# Compute the "cos(lat1) * cos(lat2) * sin(lng * 0.5) ** 2" part.
# Add into "sin(lat * 0.5) ** 2" part.
add0 = np.cos(a[:,0,None])*np.cos(coords_arr[:,0])* np.sin(lng * 0.5) ** 2
d = np.sin(lat * 0.5) ** 2 + add0
# Get h and assign into dataframe
h = 2 * AVG_EARTH_RADIUS * np.arcsin(np.sqrt(d))
df['Min_Distance'] = h.min(1)
For further performance boost, we can make use of numexpr module to replace the transcendental funcs.
Runtime test and verification
Approaches -
def loopy_app(df, coords_list):
for row in df.itertuples():
df['Min_Distance1'] = df.apply(min_distance, axis=1)
def vectorized_app(df, coords_list):
coords_arr = np.deg2rad(coords_list)
a = np.deg2rad(df.values)
lat = coords_arr[:,0] - a[:,0,None]
lng = coords_arr[:,1] - a[:,1,None]
add0 = np.cos(a[:,0,None])*np.cos(coords_arr[:,0])* np.sin(lng * 0.5) ** 2
d = np.sin(lat * 0.5) ** 2 + add0
h = 2 * AVG_EARTH_RADIUS * np.arcsin(np.sqrt(d))
df['Min_Distance2'] = h.min(1)
Verification -
In [158]: df
Out[158]:
Latitude Longitude
0 39.989 -89.980
1 39.923 -89.901
2 39.990 -89.987
3 39.884 -89.943
4 39.030 -89.931
In [159]: loopy_app(df, coords_list)
In [160]: vectorized_app(df, coords_list)
In [161]: df
Out[161]:
Latitude Longitude Min_Distance1 Min_Distance2
0 39.989 -89.980 126.637607 126.637607
1 39.923 -89.901 121.266241 121.266241
2 39.990 -89.987 126.037388 126.037388
3 39.884 -89.943 118.901195 118.901195
4 39.030 -89.931 53.765506 53.765506
Timings -
In [163]: df
Out[163]:
Latitude Longitude
0 39.989 -89.980
1 39.923 -89.901
2 39.990 -89.987
3 39.884 -89.943
4 39.030 -89.931
In [164]: %timeit loopy_app(df, coords_list)
100 loops, best of 3: 2.41 ms per loop
In [165]: %timeit vectorized_app(df, coords_list)
10000 loops, best of 3: 96.8 µs per loop

Related

Distance Matrix Haversine

I am working on a data frame that looks like this :
lat lon
id_zone
0 40.0795 4.338600
1 45.9990 4.829600
2 45.2729 2.882000
3 45.7336 4.850478
4 45.6981 5.043200
I'm trying to make a Haverisne distance matrix. Basically for each zone, I would like to calculate the distance between it and all the others in the dataframe. So there should be only 0s on the diagonal. Here is the Haversine function that I use but I can't make my matrix.
def haversine(x):
x.lon, x.lat, x.lon2, x.lat2 = map(radians, [x.lon, x.lat, x.lon2, x.lat2])
# formule de Haversine
dlon = x.lon2 - x.lon
dlat = x.lat2 - x.lat
a = sin(dlat / 2) ** 2 + cos(x.lat) * cos(x.lat2) * sin(dlon / 2) ** 2
c = 2 * atan2(sqrt(a), sqrt(1 - a))
km = 6367 * c
return km
You can use the solution to this answer Pandas - Creating Difference Matrix from Data Frame
Or in your specific case, where you have a DataFrame like this example:
lat lon
id_zone
0 40.0795 4.338600
1 45.9990 4.829600
2 45.2729 2.882000
3 45.7336 4.850478
4 45.6981 5.043200
And your function is defined as:
def haversine(first, second):
# convert decimal degrees to radians
lat, lon, lat2, lon2 = map(np.radians, [first[0], first[1], second[0], second[1]])
# haversine formula
dlon = lon2 - lon
dlat = lat2 - lat
a = np.sin(dlat/2)**2 + np.cos(lat) * np.cos(lat2) * np.sin(dlon/2)**2
c = 2 * np.arcsin(np.sqrt(a))
r = 6371 # Radius of earth in kilometers. Use 3956 for miles
return c * r
Where you pass the lat and lon of the first location and the second location.
You can then create a distance matrix using Numpy and then replace the zeros with the distance results from the haversine function:
# create a matrix for the distances between each pair of zones
distances = np.zeros((len(df), len(df)))
for i in range(len(df)):
for j in range(len(df)):
distances[i, j] = haversine(df.iloc[i], df.iloc[j])
pd.DataFrame(distances, index=df.index, columns=df.index)
Your output should be similar to this:
id_zone 0 1 2 3 4
id_zone
0 0.000000 659.422944 589.599339 630.083979 627.383858
1 659.422944 0.000000 171.597296 29.555376 37.325316
2 589.599339 171.597296 0.000000 161.731366 174.983855
3 630.083979 29.555376 161.731366 0.000000 15.474533
4 627.383858 37.325316 174.983855 15.474533 0.000000

Optimizing/Parallel Computing a simple but big loop based on Pandas

I have this simple loop which processes a big dataset.
for i in range (len(nbi.LONG_017)):
StoredOCC = []
for j in range (len(Final.X)):
r = haversine(nbi.LONG_017[i], nbi.LAT_016[i], Final.X[j], Final.Y[j])
if (r < 0.03048):
SWw = Final.CUM_OCC[j]
StoredOCC.append(SWw)
if len(StoredOCC) != 0:
nbi.loc[i, 'ADTT_02_2019'] = np.max(StoredOCC)
len(nbi.LONG_017) is 3000 and len(Final.X) is 6 millions data points.
I was wondering if there is an efficient way to implement this code or use parallel computing to make it faster?
I used the code provided here :
Haversine Formula in Python (Bearing and Distance between two GPS points) for my function haversine:
def haversine(lon1, lat1, lon2, lat2):
"""
Calculate the great circle distance between two points
on the earth (specified in decimal degrees)
"""
# convert decimal degrees to radians
lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
c = 2 * asin(sqrt(a))
r = 6372.8 # Radius of earth in kilometers. Use 3956 for miles
return r * c
Before talking about parallelization, you can work on optimizing your loop. The first way would be to iterate over the data instead of incremental values over the length and then access the data each time:
#toy sample
np.random.seed(1)
size_nbi = 20
size_Final = 100
nbi = pd.DataFrame({'LONG_017':np.random.random(size=size_nbi)/100+73,
'LAT_016':np.random.random(size=size_nbi)/100+73,})
Final = pd.DataFrame({'X':np.random.random(size=size_Final)/100+73,
'Y':np.random.random(size=size_Final)/100+73,
'CUM_OCC':np.random.randint(size_Final,size=size_Final)})
Using your method, you get for these size of dataframes about 75ms:
%%timeit
for i in range (len(nbi.LONG_017)):
StoredOCC = []
for j in range (len(Final.X)):
r = haversine(nbi.LONG_017[i], nbi.LAT_016[i], Final.X[j], Final.Y[j])
if (r < 0.03048):
SWw = Final.CUM_OCC[j]
StoredOCC.append(SWw)
if len(StoredOCC) != 0:
nbi.loc[i, 'ADTT_02_2019'] = np.max(StoredOCC)
# 75.6 ms ± 4.05 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Now if you change slightly the loops, iterate over the data themselves and use a list for the result then put the result as a column outside the loops, you can go down to 5ms so 15 times faster, by just optimizing it (and you might find even better optimization):
%%timeit
res_ADIT = []
for lon1, lat1 in zip (nbi.LONG_017.to_numpy(),
nbi.LAT_016.to_numpy()):
StoredOCC = []
for lon2, lat2, SWw in zip(Final.X.to_numpy(),
Final.Y.to_numpy(),
Final.CUM_OCC.to_numpy()):
r = haversine(lon1, lat1, lon2, lat2)
if (r < 0.03048):
StoredOCC.append(SWw)
if len(StoredOCC) != 0:
res_ADIT.append(np.max(StoredOCC))
else:
res_ADIT.append(np.nan)
nbi['ADIT_v2'] = res_ADIT
# 5.23 ms ± 305 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Now, you can even go further and use numpy and vectorization of your second loop, and also pass directly the radians values instead of having to map them at each loop:
# empty list for result
res_ADIT = []
# work with array in radians
arr_lon2 = np.radians(Final.X.to_numpy())
arr_lat2 = np.radians(Final.Y.to_numpy())
arr_OCC = Final.CUM_OCC.to_numpy()
for lon1, lat1 in zip (np.radians(nbi.LONG_017), #pass directly the radians
np.radians(nbi.LAT_016)):
# do all the substraction in a vectorize way
arr_dlon = arr_lon2 - lon1
arr_dlat = arr_lat2 - lat1
# same here using numpy functions
arr_dist = np.sin(arr_dlat/2)**2 + np.cos(lat1) * np.cos(arr_lat2) * np.sin(arr_dlon/2)**2
arr_dist = 2 * np.arcsin(np.sqrt(arr_dist))
arr_dist *= 6372.8
# extract the values of CUM_OCC that meet the criteria
r = arr_OCC[arr_dist<0.03048]
# check that at least one element
if r.size>0:
res_ADIT.append(max(r))
else :
res_ADIT.append(np.nan)
nbi['AUDIT_np'] = res_ADIT
If you do a timeit of this, you go down to 819 µs ± 15.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) so about 90 time faster than you original solution for the small size of dataframes and all the results are the same
print(nbi)
LONG_017 LAT_016 ADTT_02_2019 ADIT_v2 AUDIT_np
0 73.004170 73.008007 NaN NaN NaN
1 73.007203 73.009683 30.0 30.0 30.0
2 73.000001 73.003134 14.0 14.0 14.0
3 73.003023 73.006923 82.0 82.0 82.0
4 73.001468 73.008764 NaN NaN NaN
5 73.000923 73.008946 NaN NaN NaN
6 73.001863 73.000850 NaN NaN NaN
7 73.003456 73.000391 NaN NaN NaN
8 73.003968 73.001698 21.0 21.0 21.0
9 73.005388 73.008781 NaN NaN NaN
10 73.004192 73.000983 93.0 93.0 93.0
11 73.006852 73.004211 NaN NaN NaN
You can play a bit with the code and increase the size of each toy dataframe, and you'll see how by increasing the size especially of the data Final, the gain becomes interesting for example with size_nbi = 20 and size_Final = 1000, you get a gain of 400 time faster with the vectorized solution. and so on. For your full data size (3K *6M), you still need a bit of time and on my computer it would take about 25 minutes I estimate while your solution is in hundred of hours. Now if this is not enough, you can think about parallelization or also using numba
This method using Balltree will take about a minute on my machine (depending on the radius, which the smaller the faster), with the sizes you mentions (7000 and 6M)
import numpy as np
import sklearn
import pandas as pd
I generate data, thx Ben i used your code
#toy sample
np.random.seed(1)
size_nbi = 7000
size_Final = 6000000
nbi = pd.DataFrame({'LONG_017':np.random.random(size=size_nbi)/10+73,
'LAT_016':np.random.random(size=size_nbi)/10+73,})
Final = pd.DataFrame({'X':np.random.random(size=size_Final)/10+73,
'Y':np.random.random(size=size_Final)/10+73,
'CUM_OCC':np.random.randint(size_Final,size=size_Final)})
nbi_gps = nbi[["LAT_016", "LONG_017"]].values
final_gps = Final[["Y", "X"]].values
Create a Balltree
%%time
from sklearn.neighbors import BallTree
import numpy as np
nbi = np.radians(nbi_gps)
final = np.radians(final_gps)
tree = BallTree(final, leaf_size=12, metric='haversine')
Which took Wall time: 23.8 s
Query with
%%time
radius = 0.0000003
StoredOCC_indici = tree.query_radius(nbi, r=radius, return_distance=False, count_only=False)
(under a second)
And get the maximum of the things you are interested in
StoredOCC = [ np.max( Final.CUM_OCC[i] ) for i in StoredOCC_indici]
Which automatically make Nans for empty lists, which is nice. This took Wall time: 3.64 s
For this radius the whole calculation on 6 million took under a minute on my machine.

Calculate distance between consecutive GPS points and reduce GPS density based on this distance

I have a pandas dataframe that represents the GPS trajectory of a vehicle
d1 = {'id': [1, 2, 3, 4, 5, 6, 7, 8, 9], 'longitude': [4.929783, 4.932333, 4.933950, 4.933900, 4.928467, 4.924583, 4.922133, 4.921400, 4.920967], 'latitude': [52.372250, 52.370884, 52.371101, 52.372234, 52.375282, 52.375950, 52.376301, 52.376232, 52.374481]}
df1 = pd.DataFrame(data=d1)
id longitude latitude
1 4.929783 52.372250
2 4.932333 52.370884
3 4.933950 52.371101
4 4.933900 52.372234
5 4.928467 52.375282
6 4.924583 52.375950
7 4.922133 52.376301
8 4.921400 52.376232
9 4.920967 52.374481
I already calculated the (haversine) distance in meters between consecutive GPS points as follows:
import numpy as np
def haversine(lat1, lon1, lat2, lon2, earth_radius=6371):
lat1, lon1, lat2, lon2 = np.radians([lat1, lon1, lat2, lon2])
a = np.sin((lat2-lat1)/2.0)**2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2-lon1)/2.0)**2
km = earth_radius * 2 * np.arcsin(np.sqrt(a))
m = km * 1000
return m
df1['distance'] = haversine(df1['latitude'], df1['longitude'],
df1['latitude'].shift(), df1['longitude'].shift())
id longitude latitude distance
1 4.929783 52.372250 NaN
2 4.932333 52.370884 230.305288
3 4.933950 52.371101 112.398101
4 4.933900 52.372234 126.029572
5 4.928467 52.375282 500.896578
6 4.924583 52.375950 273.918990
7 4.922133 52.376301 170.828592
8 4.921400 52.376232 50.345227
9 4.920967 52.374481 196.908503
Now I would like to create a function that
removes the second, i.e. the following point if the distance between consecutive GPS points is less than 150 meters.
always keep the last (and the first) GPS point, regardless of the distance between the previous kept feature
Meaning this should be the output:
id longitude latitude distance
1 4.929783 52.372250 NaN
2 4.932333 52.370884 230.305288
5 4.928467 52.375282 500.896578
6 4.924583 52.375950 273.918990
7 4.922133 52.376301 170.828592
9 4.920967 52.374481 196.908503
What is the best way to achieve this in python?
NOTE: This doesn't account for maximum distance... that would require some look ahead and optimization.
I would iterate through and pass back just the index values of the rows you'd like to keep. Use those index values in a loc call.
Distance
Use whatever metric you want. I used OP's haversine distance.
def haversine(lat1, lon1, lat2, lon2, earth_radius=6371):
lat1, lon1, lat2, lon2 = np.radians([lat1, lon1, lat2, lon2])
a = np.sin((lat2-lat1)/2.0)**2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2-lon1)/2.0)**2
km = earth_radius * 2 * np.arcsin(np.sqrt(a))
m = km * 1000
return m
def dis(t0, t1):
lat0 = t0.latitude
lon0 = t0.longitude
lat1 = t1.latitude
lon1 = t1.longitude
return haversine(lat0, lon0, lat1, lon1)
The Loop
def f(d, threshold=50):
itups = d.itertuples()
last = next(itups)
indices = [last.Index]
distances = [0]
for tup in itups:
distance = dis(tup, last)
if distance > threshold:
indices.append(tup.Index)
distances.append(distance)
last = tup
return indices, distances
The Results
idx, distances = f(df1, 150)
df1.loc[idx].assign(distance=distances)
id longitude latitude distance
0 1 4.929783 52.372250 0.000000
1 2 4.932333 52.370884 230.305288
3 4 4.933900 52.372234 183.986479
4 5 4.928467 52.375282 500.896578
5 6 4.924583 52.375950 273.918990
6 7 4.922133 52.376301 170.828592
8 9 4.920967 52.374481 217.302775

Apply column operations to get a new column in pandas

i have the data like this
ID 8-Jan 15-Jan 22-Jan 29-Jan 5-Feb 12-Feb LowerBound UpperBound
001 618 720 645 573 503 447 - -
002 62 80 67 94 81 65 - -
003 32 10 23 26 26 31 - -
004 22 13 1 28 19 25 - -
005 9 7 9 6 8 4 - -
I want to create two columns with lower bounds and upper bounds for each product using 95% confidence intervals. I know manual way of writing a function which loops through each product ID
import numpy as np
import scipy as sp
import scipy.stats
# Method copied from http://stackoverflow.com/questions/15033511/compute-a-confidence-interval-from-sample-data
def mean_confidence_interval(data, confidence=0.95):
a = 1.0*np.array(data)
n = len(a)
m, se = np.mean(a), scipy.stats.sem(a)
h = se * sp.stats.t._ppf((1+confidence)/2., n-1)
return m-h, m+h
Is there an efficient way in Pandas or (one liner kind of thing) ?
Of course, you want df.apply. Note you need to modify mean_confidence_interval to return pd.Series([m-h, m+h]).
df[['LowerBound','UpperBound']] = df.apply(mean_confidence_interval, axis=1)
Standard error of the mean is pretty straightforward to calculate so you can easily vectorize this:
import scipy.stats as ss
df.mean(axis=1) + ss.t.ppf(0.975, df.shape[1]-1) * df.std(axis=1)/np.sqrt(df.shape[1])
will give you the upper bound. Use - ss.t.ppf for the lower bound.
Also, pandas seems to have a sem method. If you have a large dataset, I don't suggest using apply over rows. It is pretty slow. Here are some timings:
df = pd.DataFrame(np.random.randn(100, 10))
%timeit df.apply(mean_confidence_interval, axis=1)
100 loops, best of 3: 18.2 ms per loop
%%timeit
dist = ss.t.ppf(0.975, df.shape[1]-1) * df.sem(axis=1)
mean = df.mean(axis=1)
mean - dist, mean + dist
1000 loops, best of 3: 598 µs per loop
Since you already created a function for calculating the confidence interval, simply apply it to each row of your data:
def mean_confidence_interval(data):
confidence = 0.95
m = data.mean()
se = scipy.stats.sem(data)
h = se * sp.stats.t._ppf((1 + confidence) / 2, data.shape[0] - 1)
return pd.Series((m - h, m + h))
interval = df.apply(mean_confidence_interval, axis=1)
interval.columns = ("LowerBound", "UpperBound")
pd.concat([df, interval],axis=1)

Vectorised Haversine formula with a pandas dataframe

I know that to find the distance between two latitude, longitude points I need to use the haversine function:
def haversine(lon1, lat1, lon2, lat2):
lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
c = 2 * asin(sqrt(a))
km = 6367 * c
return km
I have a DataFrame where one column is latitude and another column is longitude. I want to find out how far these points are from a set point, -56.7213600, 37.2175900. How do I take the values from the DataFrame and put them into the function?
example DataFrame:
SEAZ LAT LON
1 296.40, 58.7312210, 28.3774110
2 274.72, 56.8148320, 31.2923240
3 192.25, 52.0649880, 35.8018640
4 34.34, 68.8188750, 67.1933670
5 271.05, 56.6699880, 31.6880620
6 131.88, 48.5546220, 49.7827730
7 350.71, 64.7742720, 31.3953780
8 214.44, 53.5192920, 33.8458560
9 1.46, 67.9433740, 38.4842520
10 273.55, 53.3437310, 4.4716664
I can't confirm if the calculations are correct but the following worked:
In [11]:
from numpy import cos, sin, arcsin, sqrt
from math import radians
def haversine(row):
lon1 = -56.7213600
lat1 = 37.2175900
lon2 = row['LON']
lat2 = row['LAT']
lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
c = 2 * arcsin(sqrt(a))
km = 6367 * c
return km
df['distance'] = df.apply(lambda row: haversine(row), axis=1)
df
Out[11]:
SEAZ LAT LON distance
index
1 296.40 58.731221 28.377411 6275.791920
2 274.72 56.814832 31.292324 6509.727368
3 192.25 52.064988 35.801864 6990.144378
4 34.34 68.818875 67.193367 7357.221846
5 271.05 56.669988 31.688062 6538.047542
6 131.88 48.554622 49.782773 8036.968198
7 350.71 64.774272 31.395378 6229.733699
8 214.44 53.519292 33.845856 6801.670843
9 1.46 67.943374 38.484252 6418.754323
10 273.55 53.343731 4.471666 4935.394528
The following code is actually slower on such a small dataframe but I applied it to a 100,000 row df:
In [35]:
%%timeit
df['LAT_rad'], df['LON_rad'] = np.radians(df['LAT']), np.radians(df['LON'])
df['dLON'] = df['LON_rad'] - math.radians(-56.7213600)
df['dLAT'] = df['LAT_rad'] - math.radians(37.2175900)
df['distance'] = 6367 * 2 * np.arcsin(np.sqrt(np.sin(df['dLAT']/2)**2 + math.cos(math.radians(37.2175900)) * np.cos(df['LAT_rad']) * np.sin(df['dLON']/2)**2))
1 loops, best of 3: 17.2 ms per loop
Compared to the apply function which took 4.3s so nearly 250 times quicker, something to note in the future
If we compress all the above in to a one-liner:
In [39]:
%timeit df['distance'] = 6367 * 2 * np.arcsin(np.sqrt(np.sin((np.radians(df['LAT']) - math.radians(37.2175900))/2)**2 + math.cos(math.radians(37.2175900)) * np.cos(np.radians(df['LAT'])) * np.sin((np.radians(df['LON']) - math.radians(-56.7213600))/2)**2))
100 loops, best of 3: 12.6 ms per loop
We observe further speed ups now a factor of ~341 times quicker.

Categories