Pandas: Concise way of applying different functions across a multiindex column - python

I have a multi-index dataframe. I want to create a new column whose value is a function of other columns. The problem is that the function is different for a small number of levels.
In order to do this, I am having to manually define the calculation for every leaf level in the hierarchical dataset. This is undesirable because most of the levels use the same calculation.
Here is an example of what I am doing and how I currently have done it. NB: The data and functions are contrived for simplicity - the actual use case is far more unwieldy.
import pandas as pd
from io import StringIO
testdata = """
level1,level2,value1,value2
root1,child1,10,20
root1,child2,30,40
root1,child3,50,60
root1,child4,70,80
root1,child5,90,100
"""
df = pd.read_csv(StringIO(testdata), index_col=[0,1], header=[0])
print('Starting Point:')
print(df)
df = df.unstack('level2')
print('Unstacked Version allowing me to define a different function for each level.')
print(df)
# This is the bit I'd like to make simpler. Imagine there were 20 of these child levels and only
# the last 2 were special cases.
df[('derived', 'child1')] = df[('value1', 'child1')] + df[('value2', 'child1')]
df[('derived', 'child2')] = df[('value1', 'child2')] + df[('value2', 'child2')]
df[('derived', 'child3')] = df[('value1', 'child3')] + df[('value2', 'child3')]
df[('derived', 'child4')] = 0.0
df[('derived', 'child5')] = df[('value1', 'child5')] * df[('value2', 'child5')]
print('Desired outcome:')
df = df.stack()
print(df)
Output:
Starting Point:
value1 value2
level1 level2
root1 child1 10 20
child2 30 40
child3 50 60
child4 70 80
child5 90 100
Unstacked Version allowing me to define a different function for each level.
value1 value2
level2 child1 child2 child3 child4 child5 child1 child2 child3 child4 child5
level1
root1 10 30 50 70 90 20 40 60 80 100
Desired outcome:
value1 value2 derived
level1 level2
root1 child1 10 20 30.0
child2 30 40 70.0
child3 50 60 110.0
child4 70 80 0.0
child5 90 100 9000.0

We can use the original df without stacking:
import numpy as np
import pandas as pd
from io import StringIO
testdata = """
level1,level2,value1,value2
root1,child1,10,20
root1,child2,30,40
root1,child3,50,60
root1,child4,70,80
root1,child5,90,100
"""
df = pd.read_csv(StringIO(testdata), index_col=[0,1], header=[0])
level2 = df.index.get_level_values('level2')
cond = [level2 == 'child5', level2 == 'child4']
result = [df.prod(axis=1), 0]
derived = np.select(cond, result, default = df.sum(axis=1))
df.assign(derived = derived)
value1 value2 derived
level1 level2
root1 child1 10 20 30
child2 30 40 70
child3 50 60 110
child4 70 80 0
child5 90 100 9000
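If more special cases show up later, the same np.select pattern generalizes by building the condition and choice lists from a mapping. A minimal sketch, assuming the df, level2 and imports from the snippet above (the special_cases mapping is purely illustrative):
# Each special child level maps to its own calculation; everything else
# falls through to np.select's default (the row-wise sum).
special_cases = {
    'child4': lambda d: 0.0,             # constant zero
    'child5': lambda d: d.prod(axis=1),  # product of the value columns
}
conds = [level2 == name for name in special_cases]
choices = [func(df) for func in special_cases.values()]
df.assign(derived=np.select(conds, choices, default=df.sum(axis=1)))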

Since "only the last 2 were special cases" you can reset the index, perform vectorized computations on slices and recover the index back:
df = df.reset_index()
df.loc[df.index[:-2], 'derived'] = df['value1'] + df['value2']
df.loc[df.index[-2], 'derived'] = 0
df.loc[df.index[-1], 'derived'] = df.loc[df.index[-1], 'value1'] * df.loc[df.index[-1], 'value2']
df.set_index(['level1', 'level2'], inplace=True)
print(df)
value1 value2 derived
level1 level2
root1 child1 10 20 30.0
child2 30 40 70.0
child3 50 60 110.0
child4 70 80 0.0
child5 90 100 9000.0

Using custom functions and a lambda:
def func1(cols):
    return cols["value1"] + cols["value2"]

def func2(cols):
    return 0.0

def func3(cols):
    return cols["value1"] * cols["value2"]

df["derived"] = df.apply(lambda cols: func1(cols) if cols.name[1] not in ("child4", "child5")
                         else (func2(cols) if cols.name[1] == "child4" else func3(cols)),
                         axis=1)
print(df)
You can also choose to simplify the lambda function using a pre-defined dictionary:
funcs = {"child1": func1, "child2": func1, "child3": func1, "child4": func2, "child5": func3}
df["derived"] = df.apply(lambda cols: funcs[cols.name[1]](cols), axis=1)
print(df)
value1 value2 derived
level1 level2
root1 child1 10 20 30.0
child2 30 40 70.0
child3 50 60 110.0
child4 70 80 0.0
child5 90 100 9000.0
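Since only the last two of the (potentially many) child levels are special cases, the dictionary does not need to list every child. A minimal sketch of the same dispatch using dict.get with the common calculation as the default (special_funcs is just an illustrative name):
# Only the special cases are listed; everything else falls back to func1.
special_funcs = {"child4": func2, "child5": func3}
df["derived"] = df.apply(lambda cols: special_funcs.get(cols.name[1], func1)(cols), axis=1)
print(df)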

Related

Optimising finding row pairs in a dataframe

I have a dataframe with rows that describe a movement of value between nodes in a system. This dataframe looks like this:
index from_node to_node value invoice_number
0 A E 10 a
1 B F 20 a
2 C G 40 c
3 D H 60 d
4 E I 35 c
5 X D 43 d
6 Y F 50 d
7 E H 70 a
8 B A 55 b
9 X B 33 a
I am looking to find "swaps" in the invoice history. A swap is defined where a node both receives and sends a value to a different node within the same invoice number. In the above dataset there are two swaps in invoice "a", and one swap in invoice "d" ("sent to" and "received from" could be the same node in the same row):
index node sent_to sent_value received_from received_value invoice_number
0 B F 20 X 33 a
1 E H 70 A 10 a
2 D H 60 X 43 d
I have solved this problem by iterating over all of the unique invoice numbers in the dataset and then iterating over each row within that invoice number to find pairs:
import pandas as pd
df = pd.DataFrame({
'from_node':['A','B','C','D','E','X','Y','E','B','X'],
'to_node':['E','F','G','H','I','D','F','H','A','B'],
'value':[10,20,40,60,35,43,50,70,55,33],
'invoice_number':['a','a','c','d','c','d','d','a','b','a'],
})
invoices = set(df.invoice_number)
list_df_swap = []
for invoice in invoices:
    df_inv = df[df.invoice_number.isin([invoice])]
    for r in df_inv.itertuples():
        df_is_swap = df_inv[df_inv.to_node.isin([r.from_node])]
        if len(df_is_swap.index) == 1:
            swap = {'node': r.from_node,
                    'sent_to': r.to_node,
                    'sent_value': r.value,
                    'received_from': df_is_swap.iloc[0]['from_node'],
                    'received_value': df_is_swap.iloc[0]['value'],
                    'invoice_number': r.invoice_number
                    }
            list_df_swap.append(pd.DataFrame(swap, index=[0]))
df_swap = pd.concat(list_df_swap, ignore_index=True)
The total dataset consists of several hundred million rows, and this approach is not very efficient. Is there a way to solve this problem using some kind of vectorised solution, or another method that would speed up the execution time?
Calculate all possible swaps, regardless of the invoice number:
swaps = df.merge(df, left_on='from_node', right_on='to_node')
Then select those that have the same invoice number:
columns = ['from_node_x', 'to_node_x', 'value_x', 'from_node_y', 'value_y',
'invoice_number_x']
swaps[swaps.invoice_number_x == swaps.invoice_number_y][columns]
# from_node_x to_node_x value_x from_node_y value_y invoice_number_x
#1 B F 20 X 33 a
#3 D H 60 X 43 d
#5 E H 70 A 10 a
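To match the column layout from the question, the merge result can be renamed and re-indexed; a small sketch reusing the swaps frame and columns list from above:
# Rename the merged columns to the names used in the desired output.
out = (swaps[swaps.invoice_number_x == swaps.invoice_number_y][columns]
       .rename(columns={'from_node_x': 'node', 'to_node_x': 'sent_to',
                        'value_x': 'sent_value', 'from_node_y': 'received_from',
                        'value_y': 'received_value', 'invoice_number_x': 'invoice_number'})
       .reset_index(drop=True))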

Fill with values from the nearest neighbor by comparing another column in Pandas

I have a dataframe like this:
azimuth id
15 100
15 1
15 100
150 2
150 100
240 3
240 100
240 100
350 100
What I need is to replace the 100 values with the id from the row whose azimuth is the closest:
Desired output:
azimuth id
15 1
15 1
15 1
150 2
150 2
240 3
240 3
240 3
350 1
350 is near 15 because azimuth is circular (an angle representation); the difference is only 25.
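For reference, this wrap-around distance can be computed with a modulo trick (the same formula the pairwise answer further down uses); a small sketch:
# Circular (wrap-around) difference between two azimuths.
def ang_diff(a, b):
    return abs(((a - b) + 180) % 360 - 180)

ang_diff(350, 15)  # 25, even though abs(350 - 15) == 335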
What I have:
def mysubstitution(x):
    for i in x.index[x['id'] == 100]:
        i = int(i)
        diff = (x['azimuth'] - x.loc[i, 'azimuth']).abs()
        for ind in diff.index:
            if diff[ind] > 180:
                diff[ind] = 360 - diff[ind]
            else:
                pass
        exclude = [y for y in x.index if y not in x.index[x['id'] == 100]]
        closer_idx = diff[exclude]
        closer_df = pd.DataFrame(closer_idx)
        sorted_df = closer_df.sort_values('azimuth', ascending=True)
        try:
            a = sorted_df.index[0]
            x.loc[i, 'id'] = x.loc[a, 'id']
        except Exception as a:
            print(a)
    return x
Which works ok most of the time, but I guess there is some simpler solution.
Thanks in advance.
I tried to implement the functionality in two steps. First, I built a grouped dataframe that holds, for each azimuth, its id value (for values other than 100).
Then, using this grouped dataframe, I implemented the replaceAzimuth function, which takes each row of the dataframe and first checks whether that azimuth already exists in the grouped dataframe. If so, it uses that id directly. Otherwise, it takes the id from the closest azimuth in the grouped dataframe.
Here is the implementation:
df = pd.DataFrame([[15,100],[15,1],[15,100],[150,2],[150,100],[240,3],[240,100],[240,100],[350,100]],columns=['azimuth','id'])
df_non100 = df[df['id'] != 100]
df_grouped = df_non100.groupby(['azimuth'])['id'].min().reset_index()
def replaceAzimuth(df_grouped, id_val):
    real_id = df_grouped[df_grouped['azimuth'] == id_val['azimuth']]['id']
    if real_id.size == 0:
        df_diff = df_grouped
        df_diff['azimuth'] = df_diff['azimuth'].apply(
            lambda x: min(abs(id_val['azimuth'] - x), (360 - id_val['azimuth'] + x)))
        id_val['id'] = df_grouped.iloc[df_diff['azimuth'].idxmin()]['id']
    else:
        id_val['id'] = real_id
    return id_val
df = df.apply(lambda x: replaceAzimuth(df_grouped,x), axis = 1)
df
For me, the code seems to give the output you have shown, but I am not sure whether it will work in all cases!
First set all ids to nan if they are 100.
df.id = np.where(df.id==100, np.nan, df.id)
Then calculate the angle diff pairwise and find the closest ID to fill the nans.
df.id = df.id.combine_first(
pd.DataFrame(np.abs(((df.azimuth.values[:,None]-df.azimuth.values) +180) % 360 - 180))
.pipe(np.argsort)
.applymap(lambda x: df.id.iloc[x])
.apply(lambda x: x.dropna().iloc[0], axis=1)
)
df
azimuth id
0 15 1.0
1 15 1.0
2 15 1.0
3 150 2.0
4 150 2.0
5 240 3.0
6 240 3.0
7 240 3.0
8 350 1.0
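The chained expression is fairly dense, so here is the same pipeline broken into named steps (the intermediate variable names are just illustrative; assumes numpy is imported as np):
# Pairwise wrap-around angular distance between every pair of rows.
diff = np.abs(((df.azimuth.values[:, None] - df.azimuth.values) + 180) % 360 - 180)
# For each row, the indices of all rows sorted by angular distance.
order = np.argsort(diff, axis=1)
# Map those positions to ids; rows whose id was 100 are NaN at this point.
nearest_ids = pd.DataFrame(order).applymap(lambda x: df.id.iloc[x])
# The first non-NaN id per row, i.e. the id of the closest "real" observation.
fill = nearest_ids.apply(lambda x: x.dropna().iloc[0], axis=1)
df.id = df.id.combine_first(fill)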

selecting indexes with multiple years of observations

I wish to select only the rows that have observations across multiple years. For example, suppose
mlIndx = pd.MultiIndex.from_tuples([('x', 0,),('x',1),('z', 0), ('y', 1),('t', 0),('t', 1)])
df = pd.DataFrame(np.random.randint(0,100,(6,2)), columns = ['a','b'], index=mlIndx)
In [18]: df
Out[18]:
a b
x 0 6 1
1 63 88
z 0 69 54
y 1 27 27
t 0 98 12
1 69 31
My desired output is
Out[19]:
a b
x 0 6 1
1 63 88
t 0 98 12
1 69 31
My current solution is blunt, so something that can scale up more easily would be great. You can assume a sorted index.
df.reset_index(level=0, inplace=True)
df[df.level_0.duplicated() | df.level_0.duplicated(keep='last')]
Out[30]:
level_0 a b
0 x 6 1
1 x 63 88
0 t 98 12
1 t 69 31
You can figure this out with groupby (on the first level of the index) + transform, and then use boolean indexing to filter out those rows:
df[df.groupby(level=0).a.transform('size').gt(1)]
a b
x 0 67 83
1 2 34
t 0 18 87
1 63 20
Details
Output of the groupby -
df.groupby(level=0).a.transform('size')
x 0 2
1 2
z 0 1
y 1 1
t 0 2
1 2
Name: a, dtype: int64
Filtering from here is straightforward, just find those rows with size > 1.
Use the groupby filter
You can pass a function that returns a boolean to it:
df.groupby(level=0).filter(lambda x: len(x) > 1)
a b
x 0 7 33
1 31 43
t 0 71 18
1 68 72
I've spent my fair share of time focused on speed. Not all solutions need to be the fastest, but since the subject has come up, I'll offer what I think should be a fast solution. It is my intent to keep future readers informed.
Results of Time Test (each value is the elapsed time relative to the fastest method for that data size)
res.plot(loglog=True)
res.div(res.min(1), 0).T
10 30 100 300 1000 3000
cs 4.425970 4.643234 5.422120 3.768960 3.912819 3.937120
wen 2.617455 4.288538 6.694974 18.489803 57.416648 148.860403
jp 6.644870 21.444406 67.315362 208.024627 569.421257 1525.943062
pir 6.043569 10.358355 26.099766 63.531397 165.032540 404.254033
pir_pd_factorize 1.153351 1.132094 1.141539 1.191434 1.000000 1.000000
pir_np_unique 1.058743 1.000000 1.000000 1.000000 1.021489 1.188738
pir_best_of 1.000000 1.006871 1.030610 1.086425 1.068483 1.025837
Simulation Details
import numpy as np
import pandas as pd
from timeit import timeit

def pir_pd_factorize(df):
    f, u = pd.factorize(df.index.get_level_values(0))
    m = np.bincount(f)[f] > 1
    return df[m]

def pir_np_unique(df):
    u, f = np.unique(df.index.get_level_values(0), return_inverse=True)
    m = np.bincount(f)[f] > 1
    return df[m]

def pir_best_of(df):
    if len(df) > 1000:
        return pir_pd_factorize(df)
    else:
        return pir_np_unique(df)

def cs(df):
    return df[df.groupby(level=0).a.transform('size').gt(1)]

def pir(df):
    return df.groupby(level=0).filter(lambda x: len(x) > 1)

def wen(df):
    s = df.a.count(level=0)
    return df.loc[s[s > 1].index.tolist()]

def jp(df):
    return df.loc[[i for i in df.index.get_level_values(0).unique() if len(df.loc[i]) > 1]]
res = pd.DataFrame(
index=[10, 30, 100, 300, 1000, 3000],
columns='cs wen jp pir pir_pd_factorize pir_np_unique pir_best_of'.split(),
dtype=float
)
np.random.seed([3, 1415])
for i in res.index:
    d = pd.DataFrame(
        dict(a=range(i)),
        pd.MultiIndex.from_arrays([
            np.random.randint(i // 4 * 3, size=i),
            range(i)
        ])
    )
    for j in res.columns:
        stmt = f'{j}(d)'
        setp = f'from __main__ import d, {j}'
        res.at[i, j] = timeit(stmt, setp, number=100)
Just a new way
s=df.a.count(level=0)
df.loc[s[s>1].index.tolist()]
Out[12]:
a b
x 0 1 31
1 70 29
t 0 42 26
1 96 29
And if you want to keep using duplicated:
s=df.index.get_level_values(level=0)
df.loc[s[s.duplicated()].tolist()]
Out[18]:
a b
x 0 1 31
1 70 29
t 0 42 26
1 96 29
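Note: Series.count(level=0), used above, has been deprecated and removed in newer pandas versions; an equivalent spelling via groupby, as a sketch:
# Same per-level counts as df.a.count(level=0), written for newer pandas.
s = df.a.groupby(level=0).count()
df.loc[s[s > 1].index.tolist()]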
I'm not convinced groupby is necessary:
df = df.sort_index()
df.loc[[i for i in df.index.get_level_values(0).unique() if len(df.loc[i]) > 1]]
# a b
# x 0 16 3
# 1 97 36
# t 0 9 18
# 1 37 30
Some benchmarking:
df = pd.concat([df]*10000).sort_index()
def cs(df):
    return df[df.groupby(level=0).a.transform('size').gt(1)]

def pir(df):
    return df.groupby(level=0).filter(lambda x: len(x) > 1)

def wen(df):
    s = df.a.count(level=0)
    return df.loc[s[s > 1].index.tolist()]

def jp(df):
    return df.loc[[i for i in df.index.get_level_values(0).unique() if len(df.loc[i]) > 1]]
%timeit cs(df) # 19.5ms
%timeit pir(df) # 33.8ms
%timeit wen(df) # 17.0ms
%timeit jp(df) # 22.3ms

Pandas track consecutive near numbers via compare-cumsum-groupby pattern

I am trying to extend my current pattern to accommodate an extra condition of +/- a percentage of the last value, rather than a strict match against the previous value.
import numpy as np
import pandas as pd
data = np.array([[2,30],[2,900],[2,30],[2,30],[2,30],[2,1560],[2,30],
[2,300],[2,30],[2,450]])
df = pd.DataFrame(data)
df.columns = ['id','interval']
UPDATE 2 (id fix): Updated Data 2 with more data:
data2 = np.array([[2,30],[2,900],[2,30],[2,29],[2,31],[2,30],[2,29],[2,31],[2,1560],[2,30],[2,300],[2,30],[2,450], [3,40],[3,900],[3,40],[3,39],[3,41], [3,40],[3,39],[3,41] ,[3,1560],[3,40],[3,300],[3,40],[3,450]])
df2 = pd.DataFrame(data2)
df2.columns = ['id','interval']
for i, g in df.groupby([(df.interval != df.interval.shift()).cumsum()]):
    if len(g.interval.tolist()) >= 3:
        print(g.interval.tolist())
results in [30, 30, 30]
However, I really want to catch near-number conditions, say when a number is within +/-10% of the previous number.
So, looking at df2, I would like to pick up the series [30, 29, 31]:
for i, g in df2.groupby([(df2.interval != <???+- 10% magic ???>).cumsum()]):
    if len(g.interval.tolist()) >= 3:
        print(g.interval.tolist())
UPDATE: Here is the end of line processing code where I store the gathered lists into a dictionary with the ID as the key
leak_intervals = {}
final_leak_intervals = {}
serials = []
for i, g in df.groupby([(df.interval != df.interval.shift()).cumsum()]):
    if len(g.interval.tolist()) >= 3:
        print(g.interval.tolist())
        serial = g.id.values[0]
        if serial not in serials:
            serials.append(serial)
        if serial not in leak_intervals:
            leak_intervals[serial] = g.interval.tolist()
        else:
            leak_intervals[serial] = leak_intervals[serial] + (g.interval.tolist())
UPDATE:
In [116]: df2.groupby(df2.interval.pct_change().abs().gt(0.1).cumsum()) \
.filter(lambda x: len(x) >= 3)
Out[116]:
id interval
2 2 30
3 2 29
4 2 31
5 2 30
6 2 29
7 2 31
15 3 40
16 3 39
17 3 41
18 3 40
19 3 39
20 3 41
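To unpack that grouper: pct_change().abs().gt(0.1) flags every row whose interval jumps by more than 10% relative to the previous row, and cumsum() turns those flags into group labels, so consecutive "near" numbers share a label and runs can then be filtered by length. A small sketch of the intermediate labels:
# Group labels increment at every >10% jump; runs of near numbers share a label.
labels = df2.interval.pct_change().abs().gt(0.1).cumsum()
df2.assign(label=labels)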

Write 2d dictionary into a dataframe or tab-delimited file using python

I have a 2-d dictionary in the following format:
myDict = {('a','b'):10, ('a','c'):20, ('a','d'):30, ('b','c'):40, ('b','d'):50,('c','d'):60}
How can I write this into a tab-delimited file so that the file contains the following? When filling, a tuple (x, y) fills two locations: (x, y) and (y, x). (x, x) is always 0.
The output would be :
a b c d
a 0 10 20 30
b 10 0 40 50
c 20 40 0 60
d 30 50 60 0
PS: If the dictionary can somehow be converted into a dataframe (using pandas), then it can easily be written to a file with a pandas function.
You can do this with the lesser-known align method and a little unstack magic:
In [122]: s = Series(myDict, index=MultiIndex.from_tuples(myDict))
In [123]: df = s.unstack()
In [124]: lhs, rhs = df.align(df.T)
In [125]: res = lhs.add(rhs, fill_value=0).fillna(0)
In [126]: res
Out[126]:
a b c d
a 0 10 20 30
b 10 0 40 50
c 20 40 0 60
d 30 50 60 0
Finally, to write this to a CSV file, use the to_csv method:
In [128]: res.to_csv('res.csv', sep='\t')
In [129]: !cat res.csv
a b c d
a 0.0 10.0 20.0 30.0
b 10.0 0.0 40.0 50.0
c 20.0 40.0 0.0 60.0
d 30.0 50.0 60.0 0.0
If you want to keep things as integers, cast using DataFrame.astype(), like so:
In [137]: res.astype(int).to_csv('res.csv', sep='\t')
In [138]: !cat res.csv
a b c d
a 0 10 20 30
b 10 0 40 50
c 20 40 0 60
d 30 50 60 0
(It was cast to float because of the intermediate step of filling in nan values where indices from one frame were missing from the other)
@Dan Allan's answer using combine_first is nice:
In [130]: df.combine_first(df.T).fillna(0)
Out[130]:
a b c d
a 0 10 20 30
b 10 0 40 50
c 20 40 0 60
d 30 50 60 0
Timings:
In [134]: timeit df.combine_first(df.T).fillna(0)
100 loops, best of 3: 2.01 ms per loop
In [135]: timeit lhs, rhs = df.align(df.T); res = lhs.add(rhs, fill_value=0).fillna(0)
1000 loops, best of 3: 1.27 ms per loop
Those timings are probably a bit polluted by construction costs, so what do things look like with some really huge frames?
In [143]: df = DataFrame({i: randn(int(1e7)) for i in range(1, 11)})
In [144]: df2 = DataFrame({i: randn(int(1e7)) for i in range(10)})
In [145]: timeit lhs, rhs = df.align(df2); res = lhs.add(rhs, fill_value=0).fillna(0)
1 loops, best of 3: 4.41 s per loop
In [146]: timeit df.combine_first(df2).fillna(0)
1 loops, best of 3: 2.95 s per loop
DataFrame.combine_first() is faster for larger frames.
In [49]: data = list(map(list, zip(*myDict.keys()))) + [list(myDict.values())]
In [50]: df = DataFrame(list(zip(*data))).set_index([0, 1])[2].unstack()
In [52]: df.combine_first(df.T).fillna(0)
Out[52]:
a b c d
a 0 10 20 30
b 10 0 40 50
c 20 40 0 60
d 30 50 60 0
For posterity: If you are just tuning in, check out Phillip Cloud's answer above for a neater way to construct df.
Not as elegant as I'd like (and not using pandas) but until you find something better:
adj = dict()
for ((u, v), w) in myDict.items():
    if u not in adj: adj[u] = dict()
    if v not in adj: adj[v] = dict()
    adj[u][v] = adj[v][u] = w
keys = adj.keys()
print('\t' + '\t'.join(keys))
for u in keys:
    def f(v):
        try:
            return str(adj[u][v])
        except KeyError:
            return "0"
    print(u + '\t' + '\t'.join(f(v) for v in keys))
or equivalently (if you don't want to construct the adjacency matrix):
k = dict()
for ((u, v), w) in myDict.items():
    k[u] = k[v] = True
keys = k.keys()
print('\t' + '\t'.join(keys))
for u in keys:
    def f(v):
        if (u, v) in myDict:
            return str(myDict[(u, v)])
        elif (v, u) in myDict:
            return str(myDict[(v, u)])
        else:
            return "0"
    print(u + '\t' + '\t'.join(f(v) for v in keys))
Got it working using pandas package.
# Find all column names
z = []
[z.extend(x) for x in myDict.keys()]
colnames = sorted(set(z))
# Create an empty DataFrame using pandas
myDF = DataFrame(index=colnames, columns=colnames)
myDF = myDF.fillna(0)  # Initialize with zeros
# Fill each item one by one (symmetrically); .loc avoids chained assignment
for val in myDict:
    myDF.loc[val[1], val[0]] = myDict[val]
    myDF.loc[val[0], val[1]] = myDict[val]
# Write to a file
outfilename = "matrixCooccurence.txt"
myDF.to_csv(outfilename, sep="\t", index=True, header=True, index_label="features")
