MultiIndex Slice | slice with another index across a subset of levels - python

I am wondering what the best way is to slice a multi-index using another index, where the other index covers a subset of the main multi-index's levels.
import numpy as np
import pandas as pd

np.random.seed(1)
dict_data_russian = {'alpha': [1, 2, 3, 4, 5, 6, 7, 8, 9],
                     'beta': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i'],
                     'gamma': ['r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z'],
                     'value_r': np.random.rand(9)}
dict_data_doll = {'beta': ['d', 'e', 'f'],
                  'gamma': ['u', 'v', 'w'],
                  'dont_care': list('PQR')}
df_russian = pd.DataFrame(data=dict_data_russian)
df_russian.set_index(['alpha', 'beta', 'gamma'], inplace=True)
df_doll = pd.DataFrame(data=dict_data_doll)
df_doll.set_index(['beta', 'gamma'], inplace=True)

print(df_russian)
print(df_doll.head())
Which yields:
                  value_r
alpha beta gamma
1     a    r       0.4170
2     b    s       0.7203
3     c    t       0.0001
4     d    u       0.3023
5     e    v       0.1468
6     f    w       0.0923
7     g    x       0.1863
8     h    y       0.3456
9     i    z       0.3968

           dont_care
beta gamma
d    u             P
e    v             Q
f    w             R
How best to use the index of df_doll to slice df_russian on levels beta and gamma, in order to get the following output?
                  value_r
alpha beta gamma
4     d    u       0.3023
5     e    v       0.1468
6     f    w       0.0923

You can do
In [1131]: df_russian[df_russian.reset_index(0).index.isin(df_doll.index)]
Out[1131]:
                   value_r
alpha beta gamma
4     d    u      0.302333
5     e    v      0.146756
6     f    w      0.092339
This uses a boolean key derived by resetting the outer level of the main index and checking if the remaining levels are in the index of df_doll for each row.
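To see what that boolean key looks like, here is a minimal sketch (assuming df_russian and df_doll are built as above); the mask is simply aligned with df_russian's rows:
mask = df_russian.reset_index(0).index.isin(df_doll.index)
# mask is array([False, False, False, True, True, True, False, False, False])
df_russian[mask]  # same result as above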

You could strip off the index, join the frames, then add back the index:
result = df_doll.reset_index().merge(df_russian.reset_index(), on=['beta', 'gamma'], how='inner')
result.set_index(['alpha', 'beta', 'gamma'], inplace=True)
result = result.drop(columns='dont_care')

Related

Get the middle value from a column according to a criterion

I have a dataframe with 3 columns. I need to get the values from columns A and B at the middle row where C = 1. If the number of rows with C = 1 is even, I want the first of the two middle rows.
For example, this is a case with an odd number of rows where C = 1:
A B C
w y 0
c v 0
t o 1
e p 1
t b 1
u e 0
q p 0
The row in the middle when C = 1 is
A B C
e p 1
Therefore, it should return
df_return
A B C
e p 1
When we have an even number of rows where C = 1, for example:
df
A B C
w y 0
c v 0
t o 1
e p 1
t b 1
u e 1
r e 1
u f 1
q p 0
The ones in the middle when C = 1 are
A B C
t b 1
u e 1
However, I want only 1 of them, and it should be the first one. So
df_return
A B C
t b 1
How can I do it?
One thing you should know is that A and B are ordered.
Focus on the relevant part, discarding rows holding zeros:
df = df[df.C == 1]
Now it's simple: just find the midpoint based on the length (or .shape) of the filtered frame.
if len(df) > 0:
    mid = (len(df) - 1) // 2
    df_return = df.iloc[[mid]]
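Putting it together with the odd-count example from the question (a minimal sketch; the sample values are simply those shown above):
import pandas as pd

df = pd.DataFrame({'A': list('wctetuq'),
                   'B': list('yvopbep'),
                   'C': [0, 0, 1, 1, 1, 0, 0]})
df = df[df.C == 1]          # keep only the C == 1 rows
if len(df) > 0:
    mid = (len(df) - 1) // 2  # first of the two middle rows when even
    df_return = df.iloc[[mid]]
print(df_return)
#    A  B  C
# 3  e  p  1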

Optimising finding row pairs in a dataframe

I have a dataframe with rows that describe a movement of value between nodes in a system. This dataframe looks like this:
index from_node to_node value invoice_number
0 A E 10 a
1 B F 20 a
2 C G 40 c
3 D H 60 d
4 E I 35 c
5 X D 43 d
6 Y F 50 d
7 E H 70 a
8 B A 55 b
9 X B 33 a
I am looking to find "swaps" in the invoice history. A swap is defined as a node that both receives value from and sends value to another node within the same invoice number ("sent to" and "received from" could be the same node in the same row). In the above dataset there are two swaps in invoice "a" and one swap in invoice "d":
index node sent_to sent_value received_from received_value invoice_number
0 B F 20 X 33 a
1 E H 70 A 10 a
2 D H 60 X 43 d
I have solved this problem by iterating over all of the unique invoice numbers in the dataset and then iterating over each row within that invoice number to find pairs:
import pandas as pd

df = pd.DataFrame({
    'from_node': ['A', 'B', 'C', 'D', 'E', 'X', 'Y', 'E', 'B', 'X'],
    'to_node': ['E', 'F', 'G', 'H', 'I', 'D', 'F', 'H', 'A', 'B'],
    'value': [10, 20, 40, 60, 35, 43, 50, 70, 55, 33],
    'invoice_number': ['a', 'a', 'c', 'd', 'c', 'd', 'd', 'a', 'b', 'a'],
})

invoices = set(df.invoice_number)
list_df_swap = []
for invoice in invoices:
    df_inv = df[df.invoice_number.isin([invoice])]
    for r in df_inv.itertuples():
        df_is_swap = df_inv[df_inv.to_node.isin([r.from_node])]
        if len(df_is_swap.index) == 1:
            swap = {'node': r.from_node,
                    'sent_to': r.to_node,
                    'sent_value': r.value,
                    'received_from': df_is_swap.iloc[0]['from_node'],
                    'received_value': df_is_swap.iloc[0]['value'],
                    'invoice_number': r.invoice_number}
            list_df_swap.append(pd.DataFrame(swap, index=[0]))

df_swap = pd.concat(list_df_swap, ignore_index=True)
The total dataset consists of several hundred million rows, and this approach is not very efficient. Is there a way to solve this problem using some kind of vectorised solution, or another method that would speed up the execution time?
Calculate all possible swaps, regardless of the invoice number:
swaps = df.merge(df, left_on='from_node', right_on='to_node')
Then select those that have the same invoice number:
columns = ['from_node_x', 'to_node_x', 'value_x', 'from_node_y', 'value_y',
           'invoice_number_x']
swaps[swaps.invoice_number_x == swaps.invoice_number_y][columns]
# from_node_x to_node_x value_x from_node_y value_y invoice_number_x
#1 B F 20 X 33 a
#3 D H 60 X 43 d
#5 E H 70 A 10 a
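If you want the column names from the question's expected output, a possible follow-up (a sketch, not part of the original answer) is to rename and reorder after filtering:
same_inv = swaps[swaps.invoice_number_x == swaps.invoice_number_y]
df_swap = same_inv.rename(columns={'from_node_x': 'node', 'to_node_x': 'sent_to',
                                   'value_x': 'sent_value', 'from_node_y': 'received_from',
                                   'value_y': 'received_value', 'invoice_number_x': 'invoice_number'})
df_swap = df_swap[['node', 'sent_to', 'sent_value', 'received_from',
                   'received_value', 'invoice_number']].reset_index(drop=True)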

How to create a data frame from a pandas series containing lists of different length

I've got a pandas series with the below structure:
> 0 [{k1:a,k2:b,k3:c},{k1:d,k2:e,k3:f}]
> 1 [{k1:g,k2:h,k3:i},{k1:j,k2:k,k3:l},{k1:ł,k2:m,k3:n}]
> 2 [{k1:o,k2:p,k3:r}]
> 3 [{k1:s,k2:t,k3:w},{k1:q,k2:z,k3:w},{k1:x,k2:y,k3:z},{k1:v,k2:f,k3:g}]
As you can see, this series contains elements that are lists of different length. The elements in each list are dictionaries. I would like to create a data frame that looks like this:
> k1 k2 k3
> 0 a b c
> 1 d e f
> 2 g h i
> 3 j k l
> 4 ł m n
> 5 o p r
> 6 s t w
> 7 q z w
> 8 x y z
> 9 f v g
I have tried the below code:
for index_val, series_val in series.iteritems():
    for dict in series_val:
        for key, value in dict.items():
            actions['key'] = value
However, PyCharm hangs and produces nothing. Is there another method to do this?
Use concat with apply and pd.DataFrame, i.e.
x = pd.Series([[{'k1':'a','k2':'b','k3':'c'}, {'k1':'d','k2':'e','k3':'f'}],
               [{'k1':'g','k2':'h','k3':'i'}, {'k1':'j','k2':'k','k3':'l'}, {'k1':'ł','k2':'m','k3':'n'}],
               [{'k1':'o','k2':'p','k3':'r'}],
               [{'k1':'s','k2':'t','k3':'w'}, {'k1':'q','k2':'z','k3':'w'}, {'k1':'x','k2':'y','k3':'z'}, {'k1':'v','k2':'f','k3':'g'}]])

df = pd.concat(x.apply(pd.DataFrame).tolist(), ignore_index=True)
Output:
k1 k2 k3
0 a b c
1 d e f
2 g h i
3 j k l
4 ł m n
5 o p r
6 s t w
7 q z w
8 x y z
9 v f g
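An equivalent approach (just a sketch, assuming every element of the series really is a flat list of dicts) is to flatten into a single list of dicts first and build the frame in one step:
df = pd.DataFrame([d for sublist in x for d in sublist])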

Iterating through multiple dataframes pandas

I have two dataframes:
1) Contains a list of suppliers and their Lat,Long coordinates
sup_essential = pd.DataFrame({'supplier': ['A','B','C'],
                              'coords': [(51.1235,-0.3453),(52.1245,-0.3423),(53.1235,-1.4553)]})
2) A list of stores and their lat, long coordinates
stores_essential = pd.DataFrame({'storekey': [1,2,3],
                                 'coords': [(54.1235,-0.6553),(49.1245,-1.3423),(50.1235,-1.8553)]})
I want to create an output table that has: store, store_coordinates, supplier, supplier_coordinates, distance for every combination of store and supplier.
I currently have:
test = []
for row in sup_essential.iterrows():
    for row in stores_essential.iterrows():
        r = sup_essential['supplier'], stores_essential['storeKey']
        test.append(r)
But this just gives me repeats of all the values
Source DFs
In [105]: sup
Out[105]:
coords supplier
0 (51.1235, -0.3453) A
1 (52.1245, -0.3423) B
2 (53.1235, -1.4553) C
In [106]: stores
Out[106]:
coords storekey
0 (54.1235, -0.6553) 1
1 (49.1245, -1.3423) 2
2 (50.1235, -1.8553) 3
Solution:
import numpy as np
import pandas as pd
from sklearn.neighbors import DistanceMetric

dist = DistanceMetric.get_metric('haversine')

# cross join via a dummy key, then pairwise haversine distances in km
m = pd.merge(sup.assign(x=0), stores.assign(x=0), on='x', suffixes=['1','2']).drop(columns='x')
d1 = sup[['coords']].assign(lat=sup.coords.str[0], lon=sup.coords.str[1]).drop(columns='coords')
d2 = stores[['coords']].assign(lat=stores.coords.str[0], lon=stores.coords.str[1]).drop(columns='coords')
m['dist_km'] = np.ravel(dist.pairwise(np.radians(d1), np.radians(d2)) * 6367)
Result:
In [135]: m
Out[135]:
coords1 supplier coords2 storekey dist_km
0 (51.1235, -0.3453) A (54.1235, -0.6553) 1 334.029670
1 (51.1235, -0.3453) A (49.1245, -1.3423) 2 233.213416
2 (51.1235, -0.3453) A (50.1235, -1.8553) 3 153.880680
3 (52.1245, -0.3423) B (54.1235, -0.6553) 1 223.116901
4 (52.1245, -0.3423) B (49.1245, -1.3423) 2 340.738587
5 (52.1245, -0.3423) B (50.1235, -1.8553) 3 246.116984
6 (53.1235, -1.4553) C (54.1235, -0.6553) 1 122.997130
7 (53.1235, -1.4553) C (49.1245, -1.3423) 2 444.459052
8 (53.1235, -1.4553) C (50.1235, -1.8553) 3 334.514028
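On newer pandas (1.2 and later), the dummy-key trick for the cartesian product can likely be replaced by a cross merge; a minimal sketch:
m = pd.merge(sup, stores, how='cross', suffixes=['1', '2'])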

Optimizing pandas filter inside apply function

I have a list of pairs--stored in a DataFrame--each pair having an 'a' column and a 'b' column. For each pair I want to return the 'b's that have the same 'a'. For example, given the following set of pairs:
a b
0 c d
1 e f
2 c g
3 e h
4 i j
5 e k
I would like to end up with:
a b equivalents
0 c d [g]
1 e f [h, k]
2 c g [d]
3 e h [f, k]
4 i j []
5 e k [f, h]
I can do this with the following:
def equivalents(x):
    l = pairs[pairs["a"] == x["a"]]["b"].tolist()
    return l[1:] if l else l

pairs["equivalents"] = pairs.apply(equivalents, axis=1)
But it is painfully slow on larger sets (e.g. 1 million plus pairs). Any suggestions how I could do this faster?
I think this ought to be a bit faster. First, just add them up.
df['equiv'] = df.groupby('a')['b'].transform(sum)
a b equiv
0 c d dg
1 e f fhk
2 c g dg
3 e h fhk
4 i j j
5 e k fhk
Now convert to a list and remove whichever letter is already in column 'b'.
df.apply(lambda x: [y for y in list(x.equiv) if y != x.b], axis=1)
0 [g]
1 [h, k]
2 [d]
3 [f, k]
4 []
5 [f, h]
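Note that splitting the concatenated string back apart relies on the 'b' values being single characters. A variant that avoids this (a sketch, not from the original answer) builds the per-group lists once and then filters out each row's own value:
equiv_map = df.groupby('a')['b'].agg(list)
df['equivalents'] = df.apply(lambda x: [y for y in equiv_map[x.a] if y != x.b], axis=1)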
