selecting indexes with multiple years of observations - python
I wish to select only the rows whose first-level index label has observations across multiple years. For example, suppose:
import numpy as np
import pandas as pd

mlIndx = pd.MultiIndex.from_tuples([('x', 0), ('x', 1), ('z', 0), ('y', 1), ('t', 0), ('t', 1)])
df = pd.DataFrame(np.random.randint(0, 100, (6, 2)), columns=['a', 'b'], index=mlIndx)
In [18]: df
Out[18]:
       a   b
x 0    6   1
  1   63  88
z 0   69  54
y 1   27  27
t 0   98  12
  1   69  31
My desired output is
Out[19]:
       a   b
x 0    6   1
  1   63  88
t 0   98  12
  1   69  31
My current solution is blunt, so something that scales up more easily would be great. You can assume a sorted index.
df.reset_index(level=0, inplace=True)
df[df.level_0.duplicated() | df.level_0.duplicated(keep='last')]
Out[30]:
  level_0   a   b
0       x   6   1
1       x  63  88
0       t  98  12
1       t  69  31
You can do this with groupby (on the first level of the index) + transform, and then use boolean indexing to keep only the rows whose first-level label appears more than once:
df[df.groupby(level=0).a.transform('size').gt(1)]
       a   b
x 0   67  83
  1    2  34
t 0   18  87
  1   63  20
Details
Output of the groupby -
df.groupby(level=0).a.transform('size')
x  0    2
   1    2
z  0    1
y  1    1
t  0    2
   1    2
Name: a, dtype: int64
Filtering from here is straightforward: just keep the rows with size > 1.
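The same mask also generalizes if you need a minimum number of years rather than simply more than one; a minimal sketch, where min_years is an illustrative threshold rather than something from the question:
min_years = 2  # hypothetical threshold; 2 reproduces the "more than one year" case
df[df.groupby(level=0)['a'].transform('size') >= min_years]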
Use the groupby filter
You can pass a function that returns a boolean to groupby.filter; groups for which the function returns False are dropped:
df.groupby(level=0).filter(lambda x: len(x) > 1)
       a   b
x 0    7  33
  1   31  43
t 0   71  18
  1   68  72
I've spent my fair share of time focused on speed. Not every solution needs to be the fastest, but since the subject has come up, I'll offer what I think should be a fast solution. It is my intent to keep future readers informed.
Results of Time Test
res.plot(loglog=True)
res.div(res.min(1), 0).T
Each entry below is a method's time divided by the fastest method's time at that size, so 1.000000 marks the fastest method in each column.
                         10         30         100         300         1000         3000
cs                 4.425970   4.643234    5.422120    3.768960     3.912819     3.937120
wen                2.617455   4.288538    6.694974   18.489803    57.416648   148.860403
jp                 6.644870  21.444406   67.315362  208.024627   569.421257  1525.943062
pir                6.043569  10.358355   26.099766   63.531397   165.032540   404.254033
pir_pd_factorize   1.153351   1.132094    1.141539    1.191434     1.000000     1.000000
pir_np_unique      1.058743   1.000000    1.000000    1.000000     1.021489     1.188738
pir_best_of        1.000000   1.006871    1.030610    1.086425     1.068483     1.025837
Simulation Details
from timeit import timeit

import numpy as np
import pandas as pd

def pir_pd_factorize(df):
    # Encode the first index level as integer codes, count each code,
    # and keep rows whose code occurs more than once.
    f, u = pd.factorize(df.index.get_level_values(0))
    m = np.bincount(f)[f] > 1
    return df[m]

def pir_np_unique(df):
    # Same idea, but the codes come from np.unique's inverse array.
    u, f = np.unique(df.index.get_level_values(0), return_inverse=True)
    m = np.bincount(f)[f] > 1
    return df[m]

def pir_best_of(df):
    # factorize wins on large frames, np.unique on small ones.
    if len(df) > 1000:
        return pir_pd_factorize(df)
    else:
        return pir_np_unique(df)

def cs(df):
    return df[df.groupby(level=0).a.transform('size').gt(1)]

def pir(df):
    return df.groupby(level=0).filter(lambda x: len(x) > 1)

def wen(df):
    s = df.a.count(level=0)
    return df.loc[s[s > 1].index.tolist()]

def jp(df):
    return df.loc[[i for i in df.index.get_level_values(0).unique() if len(df.loc[i]) > 1]]

res = pd.DataFrame(
    index=[10, 30, 100, 300, 1000, 3000],
    columns='cs wen jp pir pir_pd_factorize pir_np_unique pir_best_of'.split(),
    dtype=float
)

np.random.seed([3, 1415])
for i in res.index:
    d = pd.DataFrame(
        dict(a=range(i)),
        pd.MultiIndex.from_arrays([
            np.random.randint(i // 4 * 3, size=i),
            range(i)
        ])
    )
    for j in res.columns:
        stmt = f'{j}(d)'
        setp = f'from __main__ import d, {j}'
        res.at[i, j] = timeit(stmt, setp, number=100)
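The common trick in pir_pd_factorize and pir_np_unique is np.bincount(codes)[codes]: turn the first index level into integer codes, count how often each code appears, then broadcast the counts back onto the rows so each row knows the size of its own group. A tiny standalone illustration (the codes below are made up to mirror the question's x, x, z, y, t, t labels):
import numpy as np

codes = np.array([0, 0, 1, 2, 3, 3])        # stands in for x, x, z, y, t, t
counts_per_row = np.bincount(codes)[codes]  # array([2, 2, 1, 1, 2, 2])
mask = counts_per_row > 1                   # True wherever the label repeats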
Just a new way
s=df.a.count(level=0)
df.loc[s[s>1].index.tolist()]
Out[12]:
       a   b
x 0    1  31
  1   70  29
t 0   42  26
  1   96  29
And if you want to keep using duplicated:
s=df.index.get_level_values(level=0)
df.loc[s[s.duplicated()].tolist()]
Out[18]:
       a   b
x 0    1  31
  1   70  29
t 0   42  26
  1   96  29
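A closely related sketch, assuming a pandas version where Index.duplicated accepts keep=False (0.17+): flagging every member of a repeated first-level label yields a boolean mask directly, so there is no .loc lookup with a list of labels.
# keep=False marks all occurrences of a duplicated label, not just the later ones
mask = df.index.get_level_values(0).duplicated(keep=False)
df[mask]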
I'm not convinced groupby is necessary:
df = df.sort_index()
df.loc[[i for i in df.index.get_level_values(0).unique() if len(df.loc[i]) > 1]]
#        a   b
# x 0   16   3
#   1   97  36
# t 0    9  18
#   1   37  30
Some benchmarking:
df = pd.concat([df]*10000).sort_index()

def cs(df):
    return df[df.groupby(level=0).a.transform('size').gt(1)]

def pir(df):
    return df.groupby(level=0).filter(lambda x: len(x) > 1)

def wen(df):
    s = df.a.count(level=0)
    return df.loc[s[s > 1].index.tolist()]

def jp(df):
    return df.loc[[i for i in df.index.get_level_values(0).unique() if len(df.loc[i]) > 1]]

%timeit cs(df)   # 19.5 ms
%timeit pir(df)  # 33.8 ms
%timeit wen(df)  # 17.0 ms
%timeit jp(df)   # 22.3 ms
Related
Optimising itertools combination with grouped DataFrame and post filter
I have a DataFrame as

   Locality  money
          1      3
          1      4
          1     10
          1     12
          1     15
          2     16
          2     18

I have to do a combination with replacement of the money column, with a groupby view on Locality and a filter on the money difference. The target must be like

   Locality  money1  money2
          1       3       3
          1       3       4
          1       4       4
          1      10      10
          1      10      12
          1      10      15
          1      12      12
          1      12      15
          1      15      15
          2      16      16
          2      16      18
          2      18      18

Note that the combination is applied to values in the same Locality, keeping only pairs whose difference is less than 6. My current code is

from itertools import combinations_with_replacement
import numpy as np
import pandas as pd

def generate_graph(input_series, out_cols):
    return pd.DataFrame(list(combinations_with_replacement(input_series, r=2)), columns=out_cols)

df = (
    df.groupby(['Locality'])['money'].apply(
        lambda x: generate_graph(x, out_cols=['money1', 'money2'])
    ).reset_index().drop(columns=['level_1'], errors='ignore')
)

# Ensure the distance between money values is within the permissible limit
df = df.loc[(df['money2'] - df['money1'] < 6)]

The issue is that my DataFrame has 100000 rows, and my code takes almost 33 seconds to process it. I need to optimize the time taken, probably using numpy. I am looking to optimize the groupby and the post-filter, which take extra space and time. For sample data, you can use this code to generate the DataFrame.

# Generate dummy data
t1 = list(range(0, 100000))
b = np.random.randint(100, 10000, 100000)
a = (b / 100).astype(int)
df = pd.DataFrame({'Locality': a, 'money': t1})
df = df.sort_values(by=['Locality', 'money'])
To gain both a running-time speedup and reduced space consumption: instead of post-filtering, apply an extended function (say combine_values) that builds the DataFrame from a generator expression yielding already-filtered (by the condition) combinations. (factor below is a default argument corresponding to the mentioned permissible limit.)

In [48]: def combine_values(values, out_cols, factor=6):
    ...:     return pd.DataFrame(((m1, m2) for m1, m2 in combinations_with_replacement(values, r=2)
    ...:                          if m2 - m1 < factor), columns=out_cols)
    ...:

In [49]: df_result = (
    ...:     df.groupby(['Locality'])['money'].apply(
    ...:         lambda x: combine_values(x, out_cols=['money1', 'money2'])
    ...:     ).reset_index().drop(columns=['level_1'], errors='ignore')
    ...: )

Execution time performance:

In [50]: %time df.groupby(['Locality'])['money'].apply(lambda x: combine_values(x, out_cols=['money1', 'money2'])).reset_index().drop(columns=['level_1'], errors='ignore')
CPU times: user 2.42 s, sys: 1.64 ms, total: 2.42 s
Wall time: 2.42 s
Out[50]:
        Locality  money1  money2
0              1      34      34
1              1     106     106
2              1     123     123
3              1     483     483
4              1     822     822
...          ...     ...     ...
105143        99   99732   99732
105144        99   99872   99872
105145        99   99889   99889
105146        99   99913   99913
105147        99   99981   99981

[105148 rows x 3 columns]
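Since the question asks specifically about a numpy-based speedup, here is a hedged sketch of a per-group vectorized variant built on np.triu_indices. It assumes money is sorted within each Locality (true for the sample generator, so pair differences are non-negative), and combine_numpy is a name introduced here for illustration, not part of the answer above.

import numpy as np
import pandas as pd

def combine_numpy(df, factor=6):
    parts = []
    for loc, grp in df.groupby('Locality'):
        vals = grp['money'].to_numpy()
        i, j = np.triu_indices(len(vals))   # all pairs with replacement, i <= j
        keep = vals[j] - vals[i] < factor   # apply the distance filter in bulk
        parts.append(pd.DataFrame({'Locality': loc,
                                   'money1': vals[i][keep],
                                   'money2': vals[j][keep]}))
    return pd.concat(parts, ignore_index=True)

Note that this still materializes all pairs of a group before filtering, so it trades per-group memory for speed; for very large localities the generator-based approach above may be the safer choice.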
How do I normalise a Pandas data column with multiple conditionals?
I am trying to create a new pandas column which is normalised data from another column. I created three separate series and then merged them into one. While this approach has provided me with the desired result, I was wondering whether there's a better way to do this.

x = df["Data Col"].copy()

# If the value is between 70 and 30, find the difference from the previous value:
# positive difference = 1, negative difference = -1
btw = pd.Series(np.where(x.between(30, 70, inclusive=False), x.diff(), 0))
btw[btw < 0] = -1
btw[btw > 0] = 1

# All values above 70 are -1
up = pd.Series(np.where(x.gt(70), -1, 0))

# All values below 30 are 1
dw = pd.Series(np.where(x.lt(30), 1, 0))

combined = up + dw + btw
df["Normalised Col"] = np.array(combined)

I tried to use functions and loops directly on the pandas column, but I couldn't figure out how to get the .diff().
Use numpy.select and chain the masks with & for bitwise AND and | for bitwise OR:

np.random.seed(2019)
df = pd.DataFrame({'Data Col': np.random.randint(10, 100, size=10)})
#print (df)

d = df["Data Col"].diff()

m1 = df["Data Col"].between(30, 70, inclusive=False)
m2 = d < 0
m3 = d > 0
m4 = df["Data Col"].gt(70)
m5 = df["Data Col"].lt(30)

df["Normalised Col1"] = np.select([(m1 & m2) | m4, (m1 & m3) | m5], [-1, 1], default=0)
print (df)

   Data Col  Normalised Col1
0        82               -1
1        41               -1
2        47                1
3        98               -1
4        72               -1
5        34               -1
6        39                1
7        25                1
8        22                1
9        26                1
Tensor indexing with matrix
I have a matrix dummies of shape (3 x 15) with sequences of tokens as rows:

[[ 1 66 67 68  0  0  0  0  0  0  0  0  0  0  0]
 [ 1 66 67 66 68 66 67 66  0  0  0  0  0  0  0]
 [ 1 66 67 68 18 19 20 21 22 23 24 25 26 17  0]]

There is also a tensor probs of shape (3 x 15 x n_tokens) with token probabilities. From probs I need to select only the probabilities of the tokens in dummies. I think it may be possible to use the matrix as indices for the tensor, but I haven't found out how to do that.
You can do that like this:

import tensorflow as tf

dummies = ...
probs = ...

# Build one (row, column, token) index triple per position, then gather those entries.
s = tf.shape(dummies)
i = tf.range(s[0])
j = tf.range(s[1])
ii, jj = tf.meshgrid(i, j, indexing='ij')
idx = tf.stack([ii, jj, dummies], axis=-1)
result = tf.gather_nd(probs, idx)
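As a possible simplification, assuming a TensorFlow version that supports the batch_dims argument of tf.gather (1.14+ / 2.x), the same per-position lookup can be written without building the index tensor explicitly:

# Treat the first two axes as batch dimensions and pick one token probability per position.
result = tf.gather(probs, dummies, axis=2, batch_dims=2)  # shape (3, 15)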
New column based on multiple conditions
     a    b
0  100   90
1   30  117
2   90   99
3  200   94

I want to create a new df["c"] with the following conditions:

If a > 50 and b is inside (a ± 0.5a), then c = a
If a > 50 and b is outside (a ± 0.5a), then c = b
If a <= 50, then c = a

The output should be:

     a    b    c
0  100   90  100
1   30  117   30
2   90   99   90
3  200   94   94

I've tried:

df['c'] = np.where(df.eval("0.5 * a <= b <= 1.5 * a"), df.a, df.b)

But I don't know how to include the last condition (if a <= 50, then c = a) in this expression.
You're almost there; you just need to add an or clause inside your eval string.

np.where(df.eval("(0.5 * a <= b <= 1.5 * a) or (a <= 50)"), df.a, df.b)
#                                            ~~~~~~~~~~~~~

array([100,  30,  90,  94])
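If you would rather avoid the eval string, a hedged equivalent sketch with plain boolean Series, which also assigns the result back to df['c'] as in the question:

import numpy as np

# b inside a ± 0.5a, i.e. between 0.5*a and 1.5*a
in_band = df["b"].between(0.5 * df["a"], 1.5 * df["a"])

# a <= 50 forces c = a; otherwise use a when b is in band, else b
df["c"] = np.where(df["a"].le(50) | in_band, df["a"], df["b"])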
Binning values into groups with a minimum size using pandas
I'm trying to bin a sample of observations into n discrete groups, then combine these groups until each subgroup has a minimum of 6 members. So far, I've generated bins and grouped my DataFrame into them:

# df is a DataFrame containing 135 measurements
bins = np.linspace(df.heights.min(), df.heights.max(), 21)
grp = df.groupby(np.digitize(df.heights, bins))

grp.size()

1      4
2      1
3      2
4      3
5      2
6      8
7      7
8      6
9     19
10    12
11    13
12    12
13     7
14    12
15    12
16     2
17     3
18     6
19     3
21     1

So I can see that I need to combine groups 1 - 3, 3 - 5, and 16 - 21, while leaving the others intact, but I don't know how to do this programmatically.
You can do this:

df = pd.DataFrame(np.random.random_integers(1, 200, 135), columns=['heights'])

bins = np.linspace(df.heights.min(), df.heights.max(), 21)
grp = df.groupby(np.digitize(df.heights, bins))
sizes = grp.size()

def f(vals, max):
    sum = 0
    group = 1
    for v in vals:
        sum += v
        if sum <= max:
            yield group
        else:
            group += 1
            sum = v
            yield group

# I've changed 6 to 30 for the example because I don't have your original dataset
grp.size().groupby([g for g in f(sizes, 30)])

And if you do

print grp.size().groupby([g for g in f(sizes, 30)]).cumsum()

you will see that the cumulative sums are grouped as expected.

Also, if you want to group the original values you can do something like:

dat = np.random.random_integers(0, 200, 135)
dat = np.array([78,116,146,111,147,78,14,91,196,92,163,144,107,182,58,89,77,134,
                83,126,94,70,121,175,174,88,90,42,93,131,91,175,135,8,142,166,
                1,112,25,34,119,13,95,182,178,200,97,8,60,189,49,94,191,81,
                56,131,30,107,16,48,58,65,78,8,0,11,45,179,151,130,35,64,
                143,33,49,25,139,20,53,55,20,3,63,119,153,14,81,93,62,162,
                46,29,84,4,186,66,90,174,55,48,172,83,173,167,66,4,197,175,
                184,20,23,161,70,153,173,127,51,186,114,27,177,96,93,105,169,158,
                83,155,161,29,197,143,122,72,60])
df = pd.DataFrame({'heights': dat})
bins = np.digitize(dat, np.linspace(0, 200, 21))
grp = df.heights.groupby(bins)

m = 15  # you should put 6 here, the minimum
s = 0
c = 1

def f(x):
    global c, s
    res = pd.Series([c]*x.size, index=x.index)
    s += x.size
    if s > m:
        s = 0
        c += 1
    return res

g = grp.apply(f)
print df.groupby(g).size()

# Another way of doing the same, just a matter of taste
m = 15  # you should put 6 here, the minimum
s = 0
c = 1

def f2(x):
    global c, s
    res = [c]*x.size  # here is the main difference with f
    s += x.size
    if s > m:
        s = 0
        c += 1
    return res

g = grp.transform(f2)  # call it this way
print df.groupby(g).size()
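A hedged alternative that avoids the global-variable bookkeeping: build an explicit mapping from original bin id to merged group id, closing a group once its running size reaches the minimum, then relabel the rows with it. merge_small_bins is a helper name invented for this sketch, it reuses df and bins from the question, and, like the answer above, it can leave the final merged group short.

import numpy as np
import pandas as pd

def merge_small_bins(sizes, min_size=6):
    # sizes: Series of counts indexed by original bin id, in bin order
    mapping, group, running = {}, 0, 0
    for bin_id, n in sizes.items():
        mapping[bin_id] = group
        running += n
        if running >= min_size:   # close the current merged group
            group += 1
            running = 0
    return mapping

bin_ids = np.digitize(df.heights, bins)
sizes = pd.Series(bin_ids).value_counts().sort_index()
merged = pd.Series(bin_ids, index=df.index).map(merge_small_bins(sizes))
print(df.groupby(merged.values).size())  # every group, except possibly the last, has >= 6 rows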