Elegant way to get range of values from two columns using pandas - python

I have a dataframe as shown below (run the full code below to reproduce it)
import pandas as pd
df1 = pd.DataFrame({'person_id': [11,21,31,41,51],
'date_birth': ['05/29/1967', '01/21/1957', '7/27/1959','01/01/1961','12/31/1961']})
df1 = df1.melt('person_id', value_name='date_birth')
df1['birth_dates'] = pd.to_datetime(df1['date_birth'])
df_ranges = df1.assign(until_prev_year_days=(df1['birth_dates'].dt.dayofyear - 1),
until_next_year_days=((df1['birth_dates'] + pd.offsets.YearEnd(0)) - df1['birth_dates']).dt.days)
f = {'until_prev_year_days': 'min', 'until_next_year_days': 'min'}
min_days = df_ranges.groupby('person_id',as_index=False).agg(f)
min_days.columns = ['person_id','no_days_to_prev_year','no_days_to_next_year']
df_offset = pd.merge(df_ranges[['person_id','birth_dates']], min_days, on='person_id',how='inner')
See below for what I tried to get the range:
df_offset['range_to_shift'] = "[" + (-1 * df_offset['no_days_to_prev_year']).map(str) + "," + df_offset['no_days_to_next_year'].map(str) + "]"
Though my approach works, I would like to know whether there is a better and more elegant way to do the same.
Please note that the values from no_days_to_prev_year have to be prefixed with a minus sign.
I expect my output to be as shown below.

Use DataFrame.mul along with DataFrame.to_numpy:
cols = ['no_days_to_prev_year', 'no_days_to_next_year']
df_offset['range_to_shift'] = df_offset[cols].mul([-1, 1]).to_numpy().tolist()
Result:
# print(df_offset)
person_id birth_dates no_days_to_prev_year no_days_to_next_year range_to_shift
0 11 1967-05-29 148 216 [-148, 216]
1 21 1957-01-21 20 344 [-20, 344]
2 31 1959-07-27 207 157 [-207, 157]
3 41 1961-01-01 0 364 [0, 364]
4 51 1961-12-31 364 0 [-364, 0]
timeit performance results:
df_offset.shape
(50000, 5)
%%timeit -n100
cols = ['no_days_to_prev_year', 'no_days_to_next_year']
df_offset['range_to_shift'] = df_offset[cols].mul([-1, 1]).to_numpy().tolist()
15.5 ms ± 464 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

IIUC, you can use zip to create your list of ranges:
df = pd.DataFrame({'person_id': [11,21,31,41,51],
'date_birth': ['05/29/1967', '01/21/1957', '7/27/1959','01/01/1961','12/31/1961']})
df['date_birth'] = pd.to_datetime(df['date_birth'],format="%m/%d/%Y")
df["day_to_prev"] = df['date_birth'].dt.dayofyear - 1
df["day_to_next"] = (pd.offsets.YearEnd(0) + df['date_birth'] - df["date_birth"]).dt.days
df["range_to_shift"] = [[-x, y] for x,y in zip(df["day_to_prev"],df["day_to_next"])]
print (df)
person_id date_birth day_to_prev day_to_next range_to_shift
0 11 1967-05-29 148 216 [-148, 216]
1 21 1957-01-21 20 344 [-20, 344]
2 31 1959-07-27 207 157 [-207, 157]
3 41 1961-01-01 0 364 [0, 364]
4 51 1961-12-31 364 0 [-364, 0]

Related

More effective way of calculating/adding new columns using Pandas for large dataset

I have the following data set:
CustomerID Date Amount Department \
0 395134 2019-01-01 199 Home
1 395134 2019-01-01 279 Home
2 1356012 2019-01-07 279 Home
3 1921374 2019-01-08 269 Home
4 395134 2019-01-01 279 Home
... ... ... ... ...
18926474 1667426 2021-06-30 349 Womenswear
18926475 1667426 2021-06-30 299 Womenswear
18926476 583105 2021-06-30 349 Womenswear
18926477 538137 2021-06-30 279 Womenswear
18926478 825382 2021-06-30 2499 Home
DaysSincePurchase
0 986 days
1 986 days
2 980 days
3 979 days
4 986 days
... ...
18926474 75 days
18926475 75 days
18926476 75 days
18926477 75 days
18926478 75 days
I want to do some feature engineering and add a few columns after aggregating (using groupby) by CustomerID. The Date column is unimportant and can easily be dropped. I want a data set where every row is one unique CustomerID (which are just integers 1, 2, ..., the first column) and where the other columns are:
Total amount of purchasing
Days since the last purchase
Number of total departments
This is what I've done, and it works. However, when I time it, it takes about 1.5 hours. Is there another, more efficient way of doing this?
customer_group = joinedData.groupby(['CustomerID'])
n = originalData['CustomerID'].nunique()
# First arrange the data in a matrix.
matrix = np.zeros((n,5)) # Pre-allocate matrix
for i in range(0,n):
    matrix[i,0] = i+1
    matrix[i,1] = sum(customer_group.get_group(i+1)['Amount'])
    matrix[i,2] = min(customer_group.get_group(i+1)['DaysSincePurchase']).days
    matrix[i,3] = customer_group.get_group(i+1)['Department'].nunique()
# The above loop takes 6300 sec approx
# convert matrix to dataframe and name columns
newData = pd.DataFrame(matrix)
newData = newData.rename(columns = {0:"CustomerID"})
newData = newData.rename(columns = {1:"TotalDemand"})
newData = newData.rename(columns = {2:"DaysSinceLastPurchase"})
newData = newData.rename(columns = {3:"nrDepartments"})
Use agg:
>>> df.groupby('CustomerID').agg(TotalDemand=('Amount', sum),
DaysSinceLastPurchase=('DaysSincePurchase', min),
nrDepartments=('Department', 'nunique'))
I ran this function over a dataframe of 20,000,000 records. It took a few seconds to execute:
>>> %timeit df.groupby('CustomerID').agg(...)
14.7 s ± 225 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Generated data:
N = 20000000
df = pd.DataFrame(
{'CustomerID': np.random.randint(1000, 10000, N),
'Date': np.random.choice(pd.date_range('2020-01-01', '2020-12-31'), N),
'Amount': np.random.randint(100, 1000, N),
'Department': np.random.choice(['Home', 'Sport', 'Food', 'Womenswear',
'Menswear', 'Furniture'], N)})
df['DaysSincePurchase'] = pd.Timestamp.today().normalize() - df['Date']
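If you also want CustomerID back as an ordinary column (as in the matrix-based version), resetting the index after the aggregation is enough. A minimal sketch, using string aliases for the reducers and the generated df above:
newData = (df.groupby('CustomerID')
             .agg(TotalDemand=('Amount', 'sum'),
                  DaysSinceLastPurchase=('DaysSincePurchase', 'min'),
                  nrDepartments=('Department', 'nunique'))
             .reset_index())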

pandas: how to get the percentage for each row

When I use pandas value_count method, I get the data below:
new_df['mark'].value_counts()
1 1349110
2 1606640
3 175629
4 790062
5 330978
How can I get the percentage for each row like this?
1 1349110 31.7%
2 1606640 37.8%
3 175629 4.1%
4 790062 18.6%
5 330978 7.8%
I need to divide each row by the sum of these data.
np.random.seed([3,1415])
s = pd.Series(np.random.choice(list('ABCDEFGHIJ'), 1000, p=np.arange(1, 11) / 55.))
s.value_counts()
I 176
J 167
H 136
F 128
G 111
E 85
D 83
C 52
B 38
A 24
dtype: int64
As percent
s.value_counts(normalize=True)
I 0.176
J 0.167
H 0.136
F 0.128
G 0.111
E 0.085
D 0.083
C 0.052
B 0.038
A 0.024
dtype: float64
counts = s.value_counts()
percent = counts / counts.sum()
fmt = '{:.1%}'.format
pd.DataFrame({'counts': counts, 'per': percent.map(fmt)})
counts per
I 176 17.6%
J 167 16.7%
H 136 13.6%
F 128 12.8%
G 111 11.1%
E 85 8.5%
D 83 8.3%
C 52 5.2%
B 38 3.8%
A 24 2.4%
I think you need:
#if output is Series, convert it to DataFrame
df = df.rename('a').to_frame()
df['per'] = (df.a * 100 / df.a.sum()).round(1).astype(str) + '%'
print (df)
a per
1 1349110 31.7%
2 1606640 37.8%
3 175629 4.1%
4 790062 18.6%
5 330978 7.8%
Timings:
It seems it is faster to use sum (calling value_counts once) than to call value_counts twice:
In [184]: %timeit (jez(s))
10 loops, best of 3: 38.9 ms per loop
In [185]: %timeit (pir(s))
10 loops, best of 3: 76 ms per loop
Code for timings:
np.random.seed([3,1415])
s = pd.Series(np.random.choice(list('ABCDEFGHIJ'), 1000, p=np.arange(1, 11) / 55.))
s = pd.concat([s]*1000)#.reset_index(drop=True)
def jez(s):
    df = s.value_counts()
    df = df.rename('a').to_frame()
    df['per'] = (df.a * 100 / df.a.sum()).round(1).astype(str) + '%'
    return df

def pir(s):
    return pd.DataFrame({'a': s.value_counts(),
                         'per': s.value_counts(normalize=True).mul(100).round(1).astype(str) + '%'})
print (jez(s))
print (pir(s))
Here's a more Pythonic snippet than what is proposed above, I think:
def aspercent(column, decimals=2):
    assert decimals >= 0
    return (round(column * 100, decimals).astype(str) + "%")
aspercent(df['mark'].value_counts(normalize=True),decimals=1)
This will output the percentages:
2    37.8%
1    31.7%
4    18.6%
5     7.8%
3     4.1%
This also lets you adjust the number of decimals via the decimals argument.
Create two series, first one with absolute values and a second one with percentages, and concatenate them:
import pandas as pd
d = {'mark': ['pos', 'pos', 'pos', 'pos', 'pos',
'neg', 'neg', 'neg', 'neg',
'neutral', 'neutral' ]}
df = pd.DataFrame(d)
absolute = df['mark'].value_counts(normalize=False)
absolute.name = 'value'
percentage = df['mark'].value_counts(normalize=True)
percentage.name = '%'
percentage = (percentage*100).round(2)
pd.concat([absolute, percentage], axis=1)
Output:
value %
pos 5 45.45
neg 4 36.36
neutral 2 18.18

How to speed up pandas groupby - apply function to be comparable to R's data.table

I have data like this
location sales store
0 68 583 17
1 28 857 2
2 55 190 59
3 98 517 64
4 94 892 79
...
For each unique pair (location, store), there are 1 or more sales. I want to add a column, pcnt_sales that shows what percent of the total sales for that (location, store) pair was made up by the sale in the given row.
location sales store pcnt_sales
0 68 583 17 0.254363
1 28 857 2 0.346543
2 55 190 59 1.000000
3 98 517 64 0.272105
4 94 892 79 1.000000
...
This works, but is slow
import pandas as pd
import numpy as np
df = pd.DataFrame({'location':np.random.randint(0, 100, 10000), 'store':np.random.randint(0, 100, 10000), 'sales': np.random.randint(0, 1000, 10000)})
import timeit
start_time = timeit.default_timer()
df['pcnt_sales'] = df.groupby(['location', 'store'])['sales'].apply(lambda x: x/x.sum())
print(timeit.default_timer() - start_time) # 1.46 seconds
By comparison, R's data.table does this super fast
library(data.table)
dt <- data.table(location=sample(100, size=10000, replace=TRUE), store=sample(100, size=10000, replace=TRUE), sales=sample(1000, size=10000, replace=TRUE))
ptm <- proc.time()
dt[, pcnt_sales:=sales/sum(sales), by=c("location", "store")]
proc.time() - ptm # 0.007 seconds
How do I do this efficiently in Pandas (especially considering my real dataset has millions of rows)?
For performance you want to avoid apply. You could use transform to get the result of the groupby expanded to the original index instead, at which point a division would work at vectorized speed:
>>> %timeit df['pcnt_sales'] = df.groupby(['location', 'store'])['sales'].apply(lambda x: x/x.sum())
1 loop, best of 3: 2.27 s per loop
>>> %timeit df['pcnt_sales2'] = (df["sales"] /
df.groupby(['location', 'store'])['sales'].transform(sum))
100 loops, best of 3: 6.25 ms per loop
>>> df["pcnt_sales"].equals(df["pcnt_sales2"])
True
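The same division can also be written with the string alias 'sum', which recent pandas versions prefer over passing the Python builtin to transform; an equivalent sketch:
df['pcnt_sales2'] = df['sales'] / df.groupby(['location', 'store'])['sales'].transform('sum')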

Python and Pandas - Moving Average Crossover

There is a Pandas DataFrame object with some stock data. The SMAs are moving averages calculated from the previous 45/15 days.
Date Price SMA_45 SMA_15
20150127 102.75 113 106
20150128 103.05 100 106
20150129 105.10 112 105
20150130 105.35 111 105
20150202 107.15 111 105
20150203 111.95 110 105
20150204 111.90 110 106
I want to find all dates when SMA_15 and SMA_45 intersect.
Can it be done efficiently using Pandas or Numpy? How?
EDIT:
What I mean by 'intersection': the data row where
the long SMA(45) value was bigger than the short SMA(15) value for longer than the short SMA period (15) and then became smaller, or
the long SMA(45) value was smaller than the short SMA(15) value for longer than the short SMA period (15) and then became bigger.
I'm taking a crossover to mean when the SMA lines -- as functions of time --
intersect, as depicted on this investopedia
page.
Since the SMAs represent continuous functions, there is a crossing when,
for a given row, (SMA_15 is less than SMA_45) and (the previous SMA_15 is
greater than the previous SMA_45) -- or vice versa.
In code, that could be expressed as
previous_15 = df['SMA_15'].shift(1)
previous_45 = df['SMA_45'].shift(1)
crossing = (((df['SMA_15'] <= df['SMA_45']) & (previous_15 >= previous_45))
| ((df['SMA_15'] >= df['SMA_45']) & (previous_15 <= previous_45)))
Your data above already contains crossings (SMA_45 drops below SMA_15 on 20150128 and rises back above it on 20150129), so running
import pandas as pd
df = pd.read_table('data', sep='\s+')
previous_15 = df['SMA_15'].shift(1)
previous_45 = df['SMA_45'].shift(1)
crossing = (((df['SMA_15'] <= df['SMA_45']) & (previous_15 >= previous_45))
| ((df['SMA_15'] >= df['SMA_45']) & (previous_15 <= previous_45)))
crossing_dates = df.loc[crossing, 'Date']
print(crossing_dates)
yields
1 20150128
2 20150129
Name: Date, dtype: int64
The following method gives similar results but takes less time than the previous one:
df['position'] = df['SMA_15'] > df['SMA_45']
df['pre_position'] = df['position'].shift(1)
df.dropna(inplace=True) # dropping the NaN values
df['crossover'] = np.where(df['position'] == df['pre_position'], False, True)
Time taken for this approach: 2.7 ms ± 310 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Time taken for previous approach: 3.46 ms ± 307 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
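To recover the actual dates from this approach, the crossover column can be used as a boolean mask, just as crossing was used above (a small sketch reusing the df built above):
crossing_dates = df.loc[df['crossover'], 'Date']
print(crossing_dates)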
As an alternative to unutbu's answer, something like the following can also be done to find the indices where SMA_15 crosses SMA_45.
diff = df['SMA_15'] < df['SMA_45']
diff_forward = diff.shift(1)
crossing = np.where(abs(diff - diff_forward) == 1)[0]
print(crossing)
>>> [1,2]
print(df.iloc[crossing])
>>>
Date Price SMA_15 SMA_45
1 20150128 103.05 100 106
2 20150129 105.10 112 105

Speed up numpy.where for extracting integer segments?

I'm trying to work out how to speed up a Python function which uses numpy. The output I have received from lineprofiler is below, and this shows that the vast majority of the time is spent on the line ind_y, ind_x = np.where(seg_image == i).
seg_image is an integer array which is the result of segmenting an image, thus finding the pixels where seg_image == i extracts a specific segmented object. I am looping through lots of these objects (in the code below I'm just looping through 5 for testing, but I'll actually be looping through over 20,000), and it takes a long time to run!
Is there any way in which the np.where call can be speeded up? Or, alternatively, that the penultimate line (which also takes a good proportion of the time) can be speeded up?
The ideal solution would be to run the code on the whole array at once, rather than looping, but I don't think this is possible as there are side-effects to some of the functions I need to run (for example, dilating a segmented object can make it 'collide' with the next region and thus give incorrect results later on).
Does anyone have any ideas?
Line # Hits Time Per Hit % Time Line Contents
==============================================================
5 def correct_hot(hot_image, seg_image):
6 1 239810 239810.0 2.3 new_hot = hot_image.copy()
7 1 572966 572966.0 5.5 sign = np.zeros_like(hot_image) + 1
8 1 67565 67565.0 0.6 sign[:,:] = 1
9 1 1257867 1257867.0 12.1 sign[hot_image > 0] = -1
10
11 1 150 150.0 0.0 s_elem = np.ones((3, 3))
12
13 #for i in xrange(1,seg_image.max()+1):
14 6 57 9.5 0.0 for i in range(1,6):
15 5 6092775 1218555.0 58.5 ind_y, ind_x = np.where(seg_image == i)
16
17 # Get the average HOT value of the object (really simple!)
18 5 2408 481.6 0.0 obj_avg = hot_image[ind_y, ind_x].mean()
19
20 5 333 66.6 0.0 miny = np.min(ind_y)
21
22 5 162 32.4 0.0 minx = np.min(ind_x)
23
24
25 5 369 73.8 0.0 new_ind_x = ind_x - minx + 3
26 5 113 22.6 0.0 new_ind_y = ind_y - miny + 3
27
28 5 211 42.2 0.0 maxy = np.max(new_ind_y)
29 5 143 28.6 0.0 maxx = np.max(new_ind_x)
30
31 # 7 is + 1 to deal with the zero-based indexing, + 2 * 3 to deal with the 3 cell padding above
32 5 217 43.4 0.0 obj = np.zeros( (maxy+7, maxx+7) )
33
34 5 158 31.6 0.0 obj[new_ind_y, new_ind_x] = 1
35
36 5 2482 496.4 0.0 dilated = ndimage.binary_dilation(obj, s_elem)
37 5 1370 274.0 0.0 border = mahotas.borders(dilated)
38
39 5 122 24.4 0.0 border = np.logical_and(border, dilated)
40
41 5 355 71.0 0.0 border_ind_y, border_ind_x = np.where(border == 1)
42 5 136 27.2 0.0 border_ind_y = border_ind_y + miny - 3
43 5 123 24.6 0.0 border_ind_x = border_ind_x + minx - 3
44
45 5 645 129.0 0.0 border_avg = hot_image[border_ind_y, border_ind_x].mean()
46
47 5 2167729 433545.8 20.8 new_hot[seg_image == i] = (new_hot[ind_y, ind_x] + (sign[ind_y, ind_x] * np.abs(obj_avg - border_avg)))
48 5 10179 2035.8 0.1 print obj_avg, border_avg
49
50 1 4 4.0 0.0 return new_hot
EDIT I have left my original answer at the bottom for the record, but I have actually looked into your code in more detail over lunch, and I think that using np.where is a big mistake:
In [63]: a = np.random.randint(100, size=(1000, 1000))
In [64]: %timeit a == 42
1000 loops, best of 3: 950 us per loop
In [65]: %timeit np.where(a == 42)
100 loops, best of 3: 7.55 ms per loop
You could get a boolean array (that you can use for indexing) in 1/8 of the time you need to get the actual coordinates of the points!!!
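To make this concrete, indexing with the boolean mask selects exactly the same pixels as indexing with the np.where coordinates, so the mask can stand in wherever only membership matters; a self-contained sketch with made-up stand-in arrays:
import numpy as np

a = np.random.randint(100, size=(1000, 1000))   # stand-in for seg_image
vals = np.random.rand(1000, 1000)               # stand-in for hot_image

mask = a == 42                                  # cheap boolean mask
ind_y, ind_x = np.where(a == 42)                # slower coordinate extraction

# both forms of indexing pick out the same pixels, in the same order
assert np.allclose(vals[mask], vals[ind_y, ind_x])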
There is of course the cropping of the features that you do, but ndimage has a find_objects function that returns enclosing slices, and appears to be very fast:
In [66]: %timeit ndimage.find_objects(a)
100 loops, best of 3: 11.5 ms per loop
This returns a list of tuples of slices enclosing all of your objects, in 50% more time than it takes to find the indices of one single object.
It may not work out of the box as I cannot test it right now, but I would restructure your code into something like the following:
def correct_hot_bis(hot_image, seg_image):
    # Need this to not index out of bounds when computing border_avg
    hot_image_padded = np.pad(hot_image, 3, mode='constant',
                              constant_values=0)
    new_hot = hot_image.copy()
    sign = np.ones_like(hot_image, dtype=np.int8)
    sign[hot_image > 0] = -1
    s_elem = np.ones((3, 3))
    for j, slice_ in enumerate(ndimage.find_objects(seg_image)):
        hot_image_view = hot_image[slice_]
        seg_image_view = seg_image[slice_]
        new_shape = tuple(dim+6 for dim in hot_image_view.shape)
        new_slice = tuple(slice(dim.start,
                                dim.stop+6,
                                None) for dim in slice_)
        indices = seg_image_view == j+1
        obj_avg = hot_image_view[indices].mean()
        obj = np.zeros(new_shape)
        obj[3:-3, 3:-3][indices] = True
        dilated = ndimage.binary_dilation(obj, s_elem)
        border = mahotas.borders(dilated)
        border &= dilated
        border_avg = hot_image_padded[new_slice][border == 1].mean()
        new_hot[slice_][indices] += (sign[slice_][indices] *
                                     np.abs(obj_avg - border_avg))
    return new_hot
You would still need to figure out the collisions, but you could get about a 2x speed-up by computing all the indices simultaneously using a np.unique based approach:
a = np.random.randint(100, size=(1000, 1000))
def get_pos(arr):
    pos = []
    for j in range(100):
        pos.append(np.where(arr == j))
    return pos

def get_pos_bis(arr):
    unq, flat_idx = np.unique(arr, return_inverse=True)
    pos = np.argsort(flat_idx)
    counts = np.bincount(flat_idx)
    cum_counts = np.cumsum(counts)
    multi_dim_idx = np.unravel_index(pos, arr.shape)
    return list(zip(*(np.split(coords, cum_counts) for coords in multi_dim_idx)))
In [33]: %timeit get_pos(a)
1 loops, best of 3: 766 ms per loop
In [34]: %timeit get_pos_bis(a)
1 loops, best of 3: 388 ms per loop
Note that the pixels for each object are returned in a different order, so you can't simply compare the returns of both functions to assess equality. But they should both return the same.
One thing you could do to save a little bit of time is to cache the result of seg_image == i so that you don't need to compute it twice. You're computing it on lines 15 & 47; you could add seg_mask = seg_image == i and then reuse that result, as sketched below (it might also be good to separate out that piece for profiling purposes).
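A sketch of that change inside the existing loop body (only the two affected statements are shown; i, sign, obj_avg and border_avg are the same variables as in the profiled function):
seg_mask = seg_image == i                        # computed once per object
ind_y, ind_x = np.where(seg_mask)                # line 15: reuse the mask
# ... rest of the loop body unchanged ...
new_hot[seg_mask] = (new_hot[ind_y, ind_x] +     # line 47: no second comparison
                     (sign[ind_y, ind_x] * np.abs(obj_avg - border_avg)))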
While there are some other minor things you could do to eke out a little more performance, the root issue is that you're using an O(M * N) algorithm, where M is the number of segments and N is the size of your image. It's not obvious to me from your code whether there is a faster algorithm to accomplish the same thing, but that's the first place I'd try and look for a speedup.
