VLookup (then replace) in Pandas with Dictionary? - python

I want to replace values in a Pandas DataFrame using a dictionary.
DataFrame = games-u-q-s.csv:
blue1 blue2 blue3 blue4 blue5 red1 red2 red3 red4 red5 winner
8 432 96 11 112 104 498 122 238 412 0
119 39 76 10 35 54 25 120 157 92 0
57 63 29 61 36 90 19 412 92 22 0
Columns 1-10 contain a champId, with the winner column as the label.
Dictionary = champNum.csv
champId champNum
266 1
103 2
84 3
12 4
32 5
34 6
1 7
. .
. .
143 138
I want to convert each champId into its champNum, save the result as dataset_feature_champion_number.csv, and get output like this:
blue1 blue2 blue3 blue4 blue5 red1 red2 red3 red4 red5 winner
125 11 59 70 124 36 129 20 135 111 0
23 40 77 53 95 67 73 37 132 91 0
69 13 116 81 22 68 127 111 91 8 0
This is the code:
import csv
import os
import numpy as np
import pandas as pd

def createDictionary(csvfile):
    with open(csvfile, mode='r') as data:
        reader = csv.reader(data)
        dict = {int(rows[0]): int(rows[1]) for rows in reader}
        return dict

def convertDataframeToChampNum(csvfile, dictionary):
    df = pd.read_csv(csvfile)
    temp1 = df.iloc[:, 1:11]
    temp2 = df['winner']
    temp3 = temp1.applymap(dictionary.get)
    champNum = temp3.join(temp2)
    return champNum

def saveAsCSV(dataframe):
    dataframe.to_csv("dataset_feature_champion_number.csv")

def main():
    diction = createDictionary("champNum.csv")
    dataset = convertDataframeToChampNum("games-u-q-s.csv", diction)
    saveAsCSV(dataset)

if __name__ == '__main__':
    main()
And I got this error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-19-f86679fc49f9> in <module>()
27
28 if __name__ =='__main__':
---> 29 main()
<ipython-input-19-f86679fc49f9> in main()
22
23 def main():
---> 24 diction = createDictionary("champNum.csv")
25 dataset = convertDataframeToChampNum("games-u-q-s.csv",diction)
26 saveAsCSV(dataset)
<ipython-input-19-f86679fc49f9> in createDictionary(csvfile)
7 with open(csvfile, mode='r') as data:
8 reader = csv.reader(data)
----> 9 dict = {int(rows[0]):int(rows[1]) for rows in reader}
10 return dict
11
<ipython-input-19-f86679fc49f9> in <dictcomp>(.0)
7 with open(csvfile, mode='r') as data:
8 reader = csv.reader(data)
----> 9 dict = {int(rows[0]):int(rows[1]) for rows in reader}
10 return dict
11
ValueError: invalid literal for int() with base 10: 'champNum'

I think you're looking for pandas.DataFrame.transform:
>>> a = pd.DataFrame([[1,2,3,4,5],[6,7,8,9,10]])
>>> a
   0  1  2  3   4
0  1  2  3  4   5
1  6  7  8  9  10
>>> a.transform(lambda x: -x)
   0  1  2  3   4
0 -1 -2 -3 -4  -5
1 -6 -7 -8 -9 -10
or, applied to your problem
df = pd.DataFrame({'blue1': [8, 119, 57],
                   'blue2': [432, 39, 63],
                   'blue3': [96, 76, 29],
                   'blue4': [11, 10, 61],
                   'blue5': [112, 35, 36],
                   'red1': [104, 54, 90],
                   'red2': [498, 25, 19],
                   'red3': [122, 120, 412],
                   'red4': [238, 157, 92],
                   'red5': [412, 92, 22],
                   'winner': [0, 0, 0]})
transform_dict = {266: 1, 103: 2, ...}
df.transform(lambda x: transform_dict[x] if x in transform_dict else None)
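For what it's worth, the ValueError in the question comes from the header row of champNum.csv being passed to int(). Here is a minimal sketch of one fix, building the mapping with pandas and applying it element-wise with DataFrame.replace (the file names and the iloc[:, 1:11] column slice are taken from the question; this is an alternative, not the transform approach above):
import pandas as pd

# read_csv consumes the header row, which is what tripped up int() in the
# original csv.reader loop
mapping = pd.read_csv("champNum.csv").set_index("champId")["champNum"].to_dict()

df = pd.read_csv("games-u-q-s.csv")
champ_cols = df.columns[1:11]                     # mirrors df.iloc[:, 1:11] from the question
df[champ_cols] = df[champ_cols].replace(mapping)  # element-wise champId -> champNum
df.to_csv("dataset_feature_champion_number.csv", index=False)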

Related

Pandas - Replace values based on index condition to different values

If I create a dataframe like so:
import random
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 2)), columns=list('AB'))
replace_1 = [i + random.randint(0, 50) for i in range(16)]
How would I change the entries in column A to the values of the replace_1 list for rows 0-15, for example? In other words, how do I replace specific cell values from a list of values based purely on index?
Here is one way to do it:
# update column A in place from the list; the values align on index labels 0-15
df['A'].update(replace_1)
result
A B
0 7 17
1 26 70
2 13 81
3 48 64
4 45 74
... ... ...
95 74 3
96 18 94
97 81 4
98 37 11
99 65 29
replace_1
[7, 26, 13, 48, 45, 51, 35, 53, 20, 11, 38, 16, 36, 14, 63, 24]
Starting DF
A B
0 75 17
1 84 70
2 57 81
3 88 64
4 78 74
... ... ...
95 74 3
96 18 94
97 81 4
98 37 11
99 65 29
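Another way to do the same thing (a side note, not from the answer above) is plain label-based assignment, assuming the default RangeIndex from the question:
# .loc slicing is label-based and inclusive, so 0:15 covers exactly the 16 rows
df.loc[0:15, 'A'] = replace_1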

Count how many times a pair of values in one pandas dataframe appears in another

I have a pandas dataframe df1 that looks like this:
import pandas as pd
d = {'node1': [47, 24, 19, 77, 24, 19, 77, 24, 56, 92, 32, 77], 'node2': [24, 19, 77, 24, 19, 77, 24, 19, 92, 32, 77, 24], 'user': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'C']}
df1 = pd.DataFrame(data=d)
df1
node1 node2 user
47 24 A
24 19 A
19 77 A
77 24 A
24 19 A
19 77 B
77 24 B
24 19 B
56 92 C
92 32 C
32 77 C
77 24 C
And a second pandas dataframe df2 that looks like this:
d2 = {'way_id': [4, 3, 1, 8, 5, 2, 7, 9, 6, 10], 'source': [24, 19, 84, 47, 19, 16, 77, 56, 32, 92], 'target': [19, 43, 67, 24, 77, 29, 24, 92, 77, 32]}
df2 = pd.DataFrame(data=d2)
df2
way_id source target
4 24 19
3 19 43
1 84 67
8 47 24
5 19 77
2 16 29
7 77 24
9 56 92
6 32 77
10 92 32
In a new dataframe I would like to count how often each (node1, node2) pair per row in df1 occurs among the (source, target) rows of df2. The order is relevant, and the corresponding user should also be added to a new column. The desired output therefore looks like this:
way_id source target count user
4 24 19 2 A
3 19 43 0 A
1 84 67 0 A
8 47 24 1 A
5 19 77 1 A
2 16 29 0 A
7 77 24 1 A
9 56 92 0 A
6 32 77 0 A
10 92 32 0 A
4 24 19 1 B
3 19 43 0 B
1 84 67 0 B
8 47 24 0 B
5 19 77 1 B
2 16 29 0 B
7 77 24 1 B
9 56 92 0 B
6 32 77 0 B
10 92 32 0 B
4 24 19 0 C
3 19 43 0 C
1 84 67 0 C
8 47 24 0 C
5 19 77 0 C
2 16 29 0 C
7 77 24 1 C
9 56 92 1 C
6 32 77 1 C
10 92 32 1 C
Since you don't care about the source/target order, you need to duplicate the data with the columns swapped, then merge:
(pd.concat([df1.rename(columns={'node1': 'source', 'node2': 'target'}),
            df1.rename(columns={'node2': 'source', 'node1': 'target'})])
 .merge(df2, on=['source', 'target'], how='outer')
 .groupby(['source', 'target', 'user'], as_index=False)['way_id'].count()
)
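If the (source, target) order does matter, as the desired output above suggests, here is a minimal alternative sketch (not part of the answer above; df1 and df2 are as defined in the question, and how='cross' assumes pandas >= 1.2):
import pandas as pd

# count ordered (node1, node2) pairs per user in df1
pair_counts = (df1.groupby(['user', 'node1', 'node2'])
                  .size()
                  .reset_index(name='count')
                  .rename(columns={'node1': 'source', 'node2': 'target'}))

# give every user a full copy of df2, then attach the counts (0 where missing)
users = df1[['user']].drop_duplicates()
out = (df2.merge(users, how='cross')
          .merge(pair_counts, on=['user', 'source', 'target'], how='left')
          .fillna({'count': 0})
          .astype({'count': int}))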

Group by/ Pivot

dummy_df = pd.DataFrame({
    'accnt': [101, 102, 103, 104, 101, 102, 103, 104, 101, 102, 103, 104, 101, 102, 103, 104, 101, 102, 103, 104],
    'value': [10, 20, 30, 40, 5, 2, 6, 48, 22, 23, 24, 25, 18, 25, 26, 14, 78, 72, 54, 6],
    'category': [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 1, 3, 2, 3]
})
dummy_df
accnt value category
101 10 1
102 20 1
103 30 1
104 40 1
101 5 2
102 2 2
103 6 2
104 48 2
101 22 1
102 23 1
103 24 2
104 25 2
101 18 3
102 25 3
103 26 3
104 14 3
101 78 1
102 72 3
103 54 2
104 6 3
I want to get a dataframe like below:
accnt sum_val_c1 count_c1 sum_val_c2 count_c2 sum_val_c3 count_c3
101 110 3 5 1 18 1
102 43 2 2 1 97 2
103 30 1 84 3 26 1
104 40 1 73 2 20 2
That is, counting the occurrences of each category into count_c#, summing the values of that category into sum_val_c#, and grouping by accnt. I have tried using pivot() and groupby(), but I know I'm missing something.
Use groupby, agg, and unstack:
u = dummy_df.groupby(['accnt', 'category'])['value'].agg(['sum', 'count']).unstack(1)
u.columns = u.columns.map('{0[0]}_c{0[1]}'.format)
u
sum_c1 sum_c2 sum_c3 count_c1 count_c2 count_c3
accnt
101 110 5 18 3 1 1
102 43 2 97 2 1 2
103 30 84 26 1 3 1
104 40 73 20 1 2 2
Similarly, with pivot_table,
u = dummy_df.pivot_table(index=['accnt'],
                         columns='category',
                         values='value',
                         aggfunc=['sum', 'count'])
u.columns = u.columns.map('{0[0]}_c{0[1]}'.format)
u
sum_c1 sum_c2 sum_c3 count_c1 count_c2 count_c3
accnt
101 110 5 18 3 1 1
102 43 2 97 2 1 2
103 30 84 26 1 3 1
104 40 73 20 1 2 2
Pandas has a method to do that.
pivot2 = dummy_df.pivot_table(values='value', index='accnt', columns='category', aggfunc=['count', 'sum'])
That returns a dataframe like this:
count sum
category 1 2 3 1 2 3
accnt
101 3 1 1 110 5 18
102 2 1 2 43 2 97
103 1 3 1 30 84 26
104 1 2 2 40 73 20
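If you want the flat column names from the question, a small follow-up sketch (not part of the answers above) flattens the MultiIndex columns of pivot2:
# turn ('count', 1), ('sum', 1), ... into count_c1, sum_c1, ... and bring
# accnt back as an ordinary column
pivot2.columns = [f"{agg}_c{cat}" for agg, cat in pivot2.columns]
pivot2 = pivot2.reset_index()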

Append columns from a DataFrame to a list

Is it possible to append columns from a dataframe into an empty list?
Example of a random df is produced:
df = pd.DataFrame(np.random.randint(0,100,size=(10, 4)), columns=list('ABCD'))
The output is:
A B C D
0 25 27 34 77
1 85 62 39 49
2 90 51 2 97
3 39 19 86 59
4 33 79 64 73
5 36 66 29 78
6 22 27 84 41
7 0 26 22 22
8 44 57 29 37
9 0 31 96 90
If I had an empty list (or lists), could you append the columns row by row? So A, C to one list and B, D to another. An example output would be:
empty_list = [[],[]]
empty_list[0] = [[25, 34],
                 [85, 39],
                 [90, 2],
                 [39, 86],
                 [33, 64],
                 [36, 29],
                 [22, 84],
                 [0, 22],
                 [44, 29],
                 [0, 96]]
Or would you have to convert each column to a list with df['A'].tolist() and then append row by row?
Try this
d=df[['A','C']]
d.values.tolist()
Output
[[0, 93], [58, 14], [79, 18], [40, 26], [91, 14], [25, 18], [22, 25], [35, 99], [12, 82], [48, 72]]
So the solution would be :
empty_list = [[],[]]
empty_list[0]=df[['A','C']].values.tolist()
empty_list[1]=df[['B','D']].values.tolist()
My df was :
df = pd.DataFrame(np.random.randint(0,100,size=(10, 4)), columns=list('ABCD'))
df
A B C D
0 0 60 93 94
1 58 52 14 33
2 79 84 18 1
3 40 21 26 32
4 91 19 14 8
5 25 34 18 68
6 22 37 25 10
7 35 58 99 80
8 12 38 82 8
9 48 56 72 66
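As a side note (not from the answer above), recent pandas recommends to_numpy() over .values; the same result with that idiom:
# build both lists of [row-pair] values in one go
empty_list = [
    df[['A', 'C']].to_numpy().tolist(),
    df[['B', 'D']].to_numpy().tolist(),
]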

How to shuffle groups of rows of a Pandas dataframe?

Let's assume I have a dataframe df:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0, 100, size=(12, 4)))
print(df)
0 1 2 3
0 71 64 84 20
1 48 60 83 61
2 48 78 71 46
3 65 88 66 77
4 71 22 42 58
5 66 76 64 80
6 67 28 74 87
7 32 90 55 78
8 80 42 52 14
9 54 76 73 17
10 32 89 42 36
11 85 78 61 12
How do I shuffle the rows of df three-by-three, i.e., how do I randomly shuffle the first three rows (0, 1, 2) with either the second (3, 4, 5), third (6, 7, 8) or fourth (9, 10, 11) group? This could be a possible outcome:
print(df)
0 1 2 3
3 65 88 66 77
4 71 22 42 58
5 66 76 64 80
9 54 76 73 17
10 32 89 42 36
11 85 78 61 12
6 67 28 74 87
7 32 90 55 78
8 80 42 52 14
0 71 64 84 20
1 48 60 83 61
2 48 78 71 46
Thus, the new order has the second group of 3 rows from the original dataframe, then the last one, then the third one, and finally the first group.
You can reshape into a 3D array, splitting the first axis into two with the latter of length 3 to match the group length, and then use np.random.shuffle for a groupwise in-place shuffle along the first axis. That axis has one entry per group, so shuffling it rearranges whole groups and achieves the desired result, like so -
np.random.shuffle(df.values.reshape(-1,3,df.shape[1]))
Explanation
To give it a bit of explanation, let's use np.random.permutation to generate those random indices along the first axis and then index into the 3D array version.
1] Input df :
In [199]: df
Out[199]:
0 1 2 3
0 71 64 84 20
1 48 60 83 61
2 48 78 71 46
3 65 88 66 77
4 71 22 42 58
5 66 76 64 80
6 67 28 74 87
7 32 90 55 78
8 80 42 52 14
9 54 76 73 17
10 32 89 42 36
11 85 78 61 12
2] Get 3D array version :
In [200]: arr_3D = df.values.reshape(-1,3,df.shape[1])
In [201]: arr_3D
Out[201]:
array([[[71, 64, 84, 20],
        [48, 60, 83, 61],
        [48, 78, 71, 46]],

       [[65, 88, 66, 77],
        [71, 22, 42, 58],
        [66, 76, 64, 80]],

       [[67, 28, 74, 87],
        [32, 90, 55, 78],
        [80, 42, 52, 14]],

       [[54, 76, 73, 17],
        [32, 89, 42, 36],
        [85, 78, 61, 12]]])
3] Get shuffling indices and index into the first axis of 3D version :
In [202]: shuffle_idx = np.random.permutation(arr_3D.shape[0])
In [203]: shuffle_idx
Out[203]: array([0, 3, 1, 2])
In [204]: arr_3D[shuffle_idx]
Out[204]:
array([[[71, 64, 84, 20],
        [48, 60, 83, 61],
        [48, 78, 71, 46]],

       [[54, 76, 73, 17],
        [32, 89, 42, 36],
        [85, 78, 61, 12]],

       [[65, 88, 66, 77],
        [71, 22, 42, 58],
        [66, 76, 64, 80]],

       [[67, 28, 74, 87],
        [32, 90, 55, 78],
        [80, 42, 52, 14]]])
Then, we assign these values back to the input dataframe.
With np.random.shuffle, we do everything in-place and hide away the work needed to explicitly generate shuffling indices and assign back.
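For reference, a short sketch of that step (not spelled out in the answer above), building a new frame from the shuffled 3D array rather than modifying df in place; arr_3D and shuffle_idx are the variables from the session:
# flatten the shuffled groups back to 2D and reuse the original column labels
df_shuffled = pd.DataFrame(arr_3D[shuffle_idx].reshape(-1, df.shape[1]),
                           columns=df.columns)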
Sample run -
In [181]: df = pd.DataFrame(np.random.randint(11,99,(12,4)))
In [182]: df
Out[182]:
0 1 2 3
0 82 49 80 20
1 19 97 74 81
2 62 20 97 19
3 36 31 14 41
4 27 86 28 58
5 38 68 24 83
6 85 11 25 88
7 21 31 53 19
8 38 45 14 72
9 74 63 40 94
10 69 85 53 81
11 97 96 28 29
In [183]: np.random.shuffle(df.values.reshape(-1,3,df.shape[1]))
In [184]: df
Out[184]:
0 1 2 3
0 85 11 25 88
1 21 31 53 19
2 38 45 14 72
3 82 49 80 20
4 19 97 74 81
5 62 20 97 19
6 36 31 14 41
7 27 86 28 58
8 38 68 24 83
9 74 63 40 94
10 69 85 53 81
11 97 96 28 29
Similar solution to @Divakar's, probably simpler as I directly shuffle the index of the dataframe:
import numpy as np
import pandas as pd
df = pd.DataFrame([np.arange(0, 12)]*4).T
len_group = 3
index_list = np.array(df.index)
np.random.shuffle(np.reshape(index_list, (-1, len_group)))
shuffled_df = df.loc[index_list, :]
Sample output:
shuffled_df
Out[82]:
0 1 2 3
9 9 9 9 9
10 10 10 10 10
11 11 11 11 11
3 3 3 3 3
4 4 4 4 4
5 5 5 5 5
0 0 0 0 0
1 1 1 1 1
2 2 2 2 2
6 6 6 6 6
7 7 7 7 7
8 8 8 8 8
This is doing the same as the other two answers, but using integer division to create a group column.
nrows_df = len(df)
nrows_group = 3
shuffled = (
    df
    .assign(group_var=df.index // nrows_group)
    .set_index("group_var")
    .loc[np.random.permutation(nrows_df // nrows_group)]
)
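For completeness, a minimal sketch of the same idea with plain boolean indexing (not from any of the answers above): build a group key by integer division, shuffle the group ids, and concatenate the groups in that order.
import numpy as np
import pandas as pd

group_size = 3
group_key = np.arange(len(df)) // group_size           # 0,0,0,1,1,1,...
order = np.random.permutation(len(df) // group_size)   # random order of the group ids
result = pd.concat([df[group_key == g] for g in order])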
