Joining 2 dataframes based on a column [duplicate] - python

This question already has answers here:
Pandas Merging 101
(8 answers)
Closed 11 months ago.
The following is the structure of one of my dataframes:
strike coi chgcoi
120 200 20
125 210 15
130 230 12
135 240 9
and the other one is:
strike poi chgpoi
125 210 15
130 230 12
135 240 9
140 225 12
What I want is:
strike coi chgcoi strike poi chgpoi
120 200 20 120 0 0
125 210 15 125 210 15
130 230 12 130 230 12
135 240 9 135 240 9
140 0 0 140 225 12

First, create the two dataframes with pandas:
df1 = pd.DataFrame({'column_1': [val_1, val_2, ..., val_n], 'column_2': [val_1, val_2, ..., val_n]})
df2 = pd.DataFrame({'column_1': [val_1, val_2, ..., val_n], 'column_2': [val_1, val_2, ..., val_n]})
Then use an outer join:
df1.merge(df2, on='common_column_name', how='outer')
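Putting the pieces together for the frames in the question: an outer merge alone leaves NaN where a strike exists in only one frame, so a fillna step (my addition, not stated in the answer above) is needed to reproduce the zeros in the expected output. A minimal runnable sketch:
import pandas as pd

df1 = pd.DataFrame({'strike': [120, 125, 130, 135],
                    'coi': [200, 210, 230, 240],
                    'chgcoi': [20, 15, 12, 9]})
df2 = pd.DataFrame({'strike': [125, 130, 135, 140],
                    'poi': [210, 230, 240, 225],
                    'chgpoi': [15, 12, 9, 12]})

# outer join keeps every strike from both frames; gaps become NaN
merged = df1.merge(df2, on='strike', how='outer')
merged = merged.fillna(0)   # replace the gaps with 0, as in the expected output
print(merged)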

db1
strike coi chgcoi
0 120 200 20
1 125 210 15
2 130 230 12
3 135 240 9
db2
strike poi chgpoi
0 125 210 15
1 130 230 12
2 135 240 9
3 140 225 12
merge = db1.merge(db2, how="outer", on='strike')
merge
strike coi chgcoi poi chgpoi
0 120 200.0 20.0 NaN NaN
1 125 210.0 15.0 210.0 15.0
2 130 230.0 12.0 230.0 12.0
3 135 240.0 9.0 240.0 9.0
4 140 NaN NaN 225.0 12.0
merge.fillna(0)
strike coi chgcoi poi chgpoi
0 120 200.0 20.0 0.0 0.0
1 125 210.0 15.0 210.0 15.0
2 130 230.0 12.0 230.0 12.0
3 135 240.0 9.0 240.0 9.0
4 140 0.0 0.0 225.0 12.0
This is your expected result, the only difference being that 'strike' is not repeated.
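One remaining cosmetic difference: the outer merge upcasts the value columns to float so they can hold NaN, which is why the output shows 200.0 rather than 200. If integer output matters, a possible follow-up (my addition) is:
result = merge.fillna(0).astype(int)
print(result)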

Related

Python/Pandas/Excel Creating a 2D array from 3 columns

I actually have two questions. I have a dataframe like the one below. First, I need to split the 'Year/Period' column into year and period, the way a fixed-width delimiter works in Excel. Based on the documentation, pandas str.split() can't do this, since it needs a delimiting character.
Initial df:
Year/Period PurchDoc
0 FY19P01 162
1 FY19P02 148
2 FY19P03 133
3 FY19P04 157
4 FY19P05 152
5 FY19P06 176
6 FY19P07 123
7 FY19P08 143
8 FY19P09 161
9 FY19P10 177
10 FY19P11 152
11 FY19P12 175
12 FY20P01 203
13 FY20P02 157
14 FY20P03 206
15 FY20P04 247
16 FY20P05 182
17 FY20P06 141
18 FY20P07 205
19 FY20P08 194
Expected result:
Year Period PurchDoc
0 FY19 P01 162
1 FY19 P02 148
2 FY19 P03 133
3 FY19 P04 157
4 FY19 P05 152
5 FY19 P06 176
6 FY19 P07 123
7 FY19 P08 143
8 FY19 P09 161
9 FY19 P10 177
10 FY19 P11 152
11 FY19 P12 175
12 FY20 P01 203
13 FY20 P02 157
14 FY20 P03 206
15 FY20 P04 247
16 FY20 P05 182
17 FY20 P06 141
18 FY20 P07 205
19 FY20 P08 194
Second, I need to transpose the period and PurchDoc columns so it looks like this (well, as ints and no NaNs, but I can fix that):
Unnamed: 0 P01 P02 P03 P04 P05 P06 P07 P08 P09 P10 P11 P12
0 FY19 162 148 133 157 152.0 176.0 123.0 143.0 161.0 177.0 152.0 175.0
1 FY20 203 157 206 247 182.0 141.0 205.0 194.0 113.0 44.0 26.0 17.0
2 FY21 41 53 42 40 52.0 54.0 57.0 46.0 90.0 103.0 63.0 86.0
3 FY22 114 96 87 92 NaN NaN NaN NaN NaN NaN NaN NaN
Couldn't find anything remotely useful googling unfortunately, so I don't have any failed code to show.
# split the fixed-width 'Year/Period' values at position 4
df['Year'] = df['Year/Period'].str.slice(stop=4)
df['Period'] = df['Year/Period'].str.slice(start=4)
df.drop('Year/Period', axis=1, inplace=True)
# reshape to one row per year and one column per period
df = df.pivot(values='PurchDoc', index='Year', columns='Period')
print(df)
output:
Period P01 P02 P03 P04 P05 P06 P07 P08 P09 P10 P11 P12
Year
FY19 162.0 148.0 133.0 157.0 152.0 176.0 123.0 143.0 161.0 177.0 152.0 175.0
FY20 203.0 157.0 206.0 247.0 182.0 141.0 205.0 194.0 NaN NaN NaN NaN
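The slicing above works precisely because the prefix is fixed-width. As an alternative (my suggestion, assuming every value follows the FYxxPxx pattern), str.extract pulls out both parts in a single call:
df[['Year', 'Period']] = df['Year/Period'].str.extract(r'(FY\d{2})(P\d{2})')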
df[["Year", "Period"]] = df.apply(lambda x: (x["Year/Period"][:4], x["Year/Period"][4:]), result_type="expand", axis=1)
Then:
pd.pivot_table(df, columns="Period", index="Year", values="PurchDoc", aggfunc="sum")
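To also cover the "as ints and no NaNs" part of the question, one option (my addition) is to fill the gaps and downcast after pivoting:
result = (pd.pivot_table(df, columns="Period", index="Year",
                         values="PurchDoc", aggfunc="sum")
          .fillna(0)
          .astype(int))
print(result)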

Calculate mean of data rows in dataframe with date-headers, dictated by a 'datetime'-column

I have a dataframe with client IDs and their expenses for 2014-2018. What I want is the mean of the expenses per ID, where only the years before the date in the 'Date' column count towards the mean (so 'Date' dictates which year columns are taken into account).
Example: for index 0 (ID: 12), the date states '2016-03-08', then the mean should be taken from the columns 'y_2014' and 'y_2015', so then for this index, the mean is 111.0.
If the date is too early (e.g. somewhere in 2014 or earlier in this case), then NaN should be returned (see index 6 and 9).
Initial dataframe:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID
0 100.0 122.0 324 632 NaN 2016-03-08 12
1 120.0 159.0 54 452 541.0 2015-04-09 96
2 NaN 164.0 687 165 245.0 2016-02-15 20
3 180.0 421.0 512 184 953.0 2018-05-01 73
4 110.0 654.0 913 173 103.0 2017-08-04 84
5 130.0 NaN 754 124 207.0 2016-07-03 26
6 170.0 256.0 843 97 806.0 2013-02-04 87
7 140.0 754.0 95 101 541.0 2016-06-08 64
8 80.0 985.0 184 84 90.0 2019-03-05 11
9 96.0 65.0 127 130 421.0 2014-05-14 34
Desired output:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID mean
0 100.0 122.0 324 632 NaN 2016-03-08 12 111.0
1 120.0 159.0 54 452 541.0 2015-04-09 96 120.0
2 NaN 164.0 687 165 245.0 2016-02-15 20 164.0
3 180.0 421.0 512 184 953.0 2018-05-01 73 324.25
4 110.0 654.0 913 173 103.0 2017-08-04 84 559.0
5 130.0 NaN 754 124 207.0 2016-07-03 26 130.0
6 170.0 256.0 843 97 806.0 2013-02-04 87 NaN
7 140.0 754.0 95 101 541.0 2016-06-08 64 447
8 80.0 985.0 184 84 90.0 2019-03-05 11 284.6
9 96.0 65.0 127 130 421.0 2014-05-14 34 NaN
Tried code: I'm still working on it; I don't really know where to start with this, so I have only set up the dataframe so far. Probably something with the 'datetime' package has to be done to get the desired dataframe?
import pandas as pd
import numpy as np
import datetime
df = pd.DataFrame({"ID": [12,96,20,73,84,26,87,64,11,34],
"y_2014": [100,120,np.nan,180,110,130,170,140,80,96],
"y_2015": [122,159,164,421,654,np.nan,256,754,985,65],
"y_2016": [324,54,687,512,913,754,843,95,184,127],
"y_2017": [632,452,165,184,173,124,97,101,84,130],
"y_2018": [np.nan,541,245,953,103,207,806,541,90,421],
"Date": ['2016-03-08', '2015-04-09', '2016-02-15', '2018-05-01', '2017-08-04',
'2016-07-03', '2013-02-04', '2016-06-08', '2019-03-05', '2014-05-14']})
print(df)
Due to your naming convention, you need to extract the years from the column names for comparison purposes. Then you can mask the data and take the mean:
# the years encoded in the column names
data = df.filter(like='y_')
data_years = data.columns.str.extract(r'(\d+)')[0].astype(int).values
# the year of each row's Date
years = pd.to_datetime(df.Date).dt.year.values
# keep a value only where its column year is strictly before the row's year
df['mean'] = data.where(data_years < years[:, None]).mean(axis=1)
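The key step is the broadcast comparison: data_years has shape (5,) and years[:, None] has shape (10, 1), so the < comparison produces a (10, 5) boolean mask with one cell per (row, year column). To see this (illustrative only):
mask = data_years < years[:, None]
print(mask.shape)   # (10, 5)
print(mask[0])      # row 0 has Date 2016-03-08: [ True  True False False False]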
Output:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID mean
0 100.0 122.0 324 632 NaN 2016-03-08 12 111.00
1 120.0 159.0 54 452 541.0 2015-04-09 96 120.00
2 NaN 164.0 687 165 245.0 2016-02-15 20 164.00
3 180.0 421.0 512 184 953.0 2018-05-01 73 324.25
4 110.0 654.0 913 173 103.0 2017-08-04 84 559.00
5 130.0 NaN 754 124 207.0 2016-07-03 26 130.00
6 170.0 256.0 843 97 806.0 2013-02-04 87 NaN
7 140.0 754.0 95 101 541.0 2016-06-08 64 447.00
8 80.0 985.0 184 84 90.0 2019-03-05 11 284.60
9 96.0 65.0 127 130 421.0 2014-05-14 34 NaN
One more answer:
import pandas as pd
import numpy as np

df = pd.DataFrame({"ID": [12, 96, 20, 73, 84, 26, 87, 64, 11, 34],
                   "y_2014": [100, 120, np.nan, 180, 110, 130, 170, 140, 80, 96],
                   "y_2015": [122, 159, 164, 421, 654, np.nan, 256, 754, 985, 65],
                   "y_2016": [324, 54, 687, 512, 913, 754, 843, 95, 184, 127],
                   "y_2017": [632, 452, 165, 184, 173, 124, 97, 101, 84, 130],
                   "y_2018": [np.nan, 541, 245, 953, 103, 207, 806, 541, 90, 421],
                   "Date": ['2016-03-08', '2015-04-09', '2016-02-15', '2018-05-01', '2017-08-04',
                            '2016-07-03', '2013-02-04', '2016-06-08', '2019-03-05', '2014-05-14']})
# Subset of the original df used to calculate the mean
subset = df.loc[:, ['y_2014', 'y_2015', 'y_2016', 'y_2017', 'y_2018']]
# An expense value only counts towards the mean once that year has passed,
# so 'y_2014' is relabelled '2015-01-01' (and so on) for comparison with the 'Date' column
subset.columns = ['2015-01-01', '2016-01-01', '2017-01-01', '2018-01-01', '2019-01-01']

# ISO date strings compare correctly as plain strings; broadcasting gives a (10, 5) mask
s = subset.columns.values < df.Date.values[:, None]
t = s.astype(float)
t[t == 0] = np.nan          # excluded cells become NaN so mean() skips them
df['mean'] = (subset * t).mean(1)

print(df)
# Additionally: gives the sum of expenses before the date in the 'Date' column
df['sum'] = (subset * t).sum(1)

print(df)
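Note that the string comparison in s only works because ISO dates (YYYY-MM-DD) sort lexicographically. If the dates came in another format, a safer variant (my sketch, not part of the answer) compares real datetimes:
cutoffs = pd.to_datetime(subset.columns).values   # the relabelled column dates
dates = pd.to_datetime(df['Date']).values
mask = cutoffs < dates[:, None]                   # (10, 5) boolean array
df['mean'] = subset.where(mask).mean(1)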

Pandas rolling function with overlap

I would like to apply a function to one pandas dataframe column that does the following task:
I have a cycle counter that starts from a value but sometimes restarts.
I would like the counter to keep increasing across those restarts instead.
The function I use at the moment is the following:
Code
import pandas as pd
d = {'Cycle':[100,100,100,100,101,101,101,102,102,102,102,102,102,103,103,103,100,100,100,100,101,101,101,101]}
df = pd.DataFrame(data=d)
df.loc[:,'counter'] = df['Cycle'].to_numpy()
df.loc[:,'counter'] = df['counter'].rolling(2).apply(lambda x: x[0] if (x[0] == x[1]) else x[0]+1, raw=True)
print(df)
Output
Cycle counter
0 100 NaN
1 100 100.0
2 100 100.0
3 100 100.0
4 101 101.0
5 101 101.0
6 101 101.0
7 102 102.0
8 102 102.0
9 102 102.0
10 102 102.0
11 102 102.0
12 102 102.0
13 103 103.0
14 103 103.0
15 103 103.0
16 100 104.0
17 100 100.0
18 100 100.0
19 100 100.0
20 101 101.0
21 101 101.0
22 101 101.0
23 101 101.0
My goal is to get a dataframe similar to this one:
Cycle counter
0 100 NaN
1 100 100.0
2 100 100.0
3 100 100.0
4 101 101.0
5 101 101.0
6 101 101.0
7 102 102.0
8 102 102.0
9 102 102.0
10 102 102.0
11 102 102.0
12 102 102.0
13 103 103.0
14 103 103.0
15 103 103.0
16 100 104.0
17 100 104.0
18 100 104.0
19 100 104.0
20 101 105.0
21 101 105.0
22 101 105.0
23 101 105.0
How do I use the rolling function with one overlap?
Do you have any recommendation to reach my goal?
Best regards,
Matteo
Another approach would be to identify the points in the Cycle column where the value changes, using .diff(). Then increment from the original initial cycle value at those points, and merge back into the original dataframe, forward-filling the new values.
# rows where Cycle changes value, keeping their original index
df2 = df[df['Cycle'].diff().apply(lambda x: x != 0)].reset_index()
# number the change points consecutively, starting from the first cycle value
df2['Target Count'] = df2.reset_index().apply(lambda x: df.iloc[0, 0] + x['level_0'], axis=1)
# merge back and forward-fill the counter between change points
df = df.merge(df2.drop('Cycle', axis=1), right_on='index',
              left_index=True, how='left').ffill().set_index('index', drop=True)
df.index.name = None
df
Cycle Target Count
0 100 100.0
1 100 100.0
2 100 100.0
3 100 100.0
4 101 101.0
5 101 101.0
6 101 101.0
7 102 102.0
8 102 102.0
9 102 102.0
10 102 102.0
11 102 102.0
12 102 102.0
13 103 103.0
14 103 103.0
15 103 103.0
16 100 104.0
17 100 104.0
18 100 104.0
19 100 104.0
20 101 105.0
21 101 105.0
22 101 105.0
23 101 105.0
We can use shift and ne (same as !=) to check where the Cycle column changes.
Then we use cumsum to make a counter which changes each time Cycle changes.
We add the first value of Cycle, minus 1, to the counter so that it starts at 100:
groups = df['Cycle'].ne(df['Cycle'].shift()).cumsum()
df['counter'] = groups + df['Cycle'].iat[0] - 1
Cycle counter
0 100 100
1 100 100
2 100 100
3 100 100
4 101 101
5 101 101
6 101 101
7 102 102
8 102 102
9 102 102
10 102 102
11 102 102
12 102 102
13 103 103
14 103 103
15 103 103
16 100 104
17 100 104
18 100 104
19 100 104
20 101 105
21 101 105
22 101 105
23 101 105
Details: groups gives us a counter starting at 1:
print(groups)
0 1
1 1
2 1
3 1
4 2
5 2
6 2
7 3
8 3
9 3
10 3
11 3
12 3
13 4
14 4
15 4
16 5
17 5
18 5
19 5
20 6
21 6
22 6
23 6
Name: Cycle, dtype: int64
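For reuse, the same idea fits in a small helper (a sketch of the cumsum approach above; the function name is mine):
import pandas as pd

def renumber_cycles(cycle: pd.Series) -> pd.Series:
    # each run of equal values gets a group number 1, 2, 3, ...
    groups = cycle.ne(cycle.shift()).cumsum()
    # shift so the counter starts at the first observed cycle value
    return groups + cycle.iat[0] - 1

df['counter'] = renumber_cycles(df['Cycle'])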

Remove multiple rows with same index values in python dataframe

I have a dataset with start station IDs, end station IDs, and the travel duration for bikes in a city.
The data dates back to 2017, and certain stations no longer exist.
I have the list of those station IDs. How can I remove rows from the dataframe which either start or end at those stations?
For example, if I want to remove StartStation Id = 135, which appears at index 4 and 5, what should I do? This extends over a million rows, where 135 can appear anywhere.
Bike Id StartStation Id EndStation Id Duration
0 395 573 137.0 660.0
1 12931 399 507.0 420.0
2 7120 399 507.0 420.0
3 1198 599 616.0 300.0
4 10739 135 486.0 1260.0
5 10949 135 486.0 1260.0
6 8831 193 411.0 540.0
7 8778 266 770.0 600.0
8 700 137 294.0 540.0
9 5017 456 39.0 3000.0
10 4359 444 445.0 240.0
11 2801 288 288.0 5340.0
12 9525 265 592.0 300.0
Calling your list of IDs to remove removed_ids:
# keep only rows whose start and end stations are both outside removed_ids
df = df.loc[~df['StartStation Id'].isin(removed_ids)
            & ~df['EndStation Id'].isin(removed_ids)]
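A self-contained demonstration on a slice of the data above, with a made-up removed_ids list:
import pandas as pd

df = pd.DataFrame({'Bike Id': [395, 12931, 1198, 10739, 10949],
                   'StartStation Id': [573, 399, 599, 135, 135],
                   'EndStation Id': [137.0, 507.0, 616.0, 486.0, 486.0],
                   'Duration': [660.0, 420.0, 300.0, 1260.0, 1260.0]})
removed_ids = [135]

df = df.loc[~df['StartStation Id'].isin(removed_ids)
            & ~df['EndStation Id'].isin(removed_ids)]
print(df)   # the two rows starting at station 135 are gone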

iterating over loc on dataframes

I'm trying to extract data from a list of dataframes, selecting row ranges with loc. The dataframes don't all contain the same rows, so I have a list of possible index ranges that I would like loc to loop over: from the code sample below I might want CIN to LAN, but in another dataframe the CIN row doesn't exist, so there I would want DET to LAN or HOU to LAN.
So I was thinking of putting them in a list and iterating over the list, i.e.
for df in dfs:
    ranges = [[df.loc["CIN":"LAN"]], [df.loc["DET":"LAN"]]]
    extracted_ranges = (i for i in ranges)
I'm not sure how you would iterate over a list and feed it into loc, or perhaps .query().
df1 stint g ab r h X2b X3b hr rbi sb cs bb \
year team
2007 CIN 6 379 745 101 203 35 2 36 125.0 10.0 1.0 105
DET 5 301 1062 162 283 54 4 37 144.0 24.0 7.0 97
HOU 4 311 926 109 218 47 6 14 77.0 10.0 4.0 60
LAN 11 413 1021 153 293 61 3 36 154.0 7.0 5.0 114
NYN 13 622 1854 240 509 101 3 61 243.0 22.0 4.0 174
SFN 5 482 1305 198 337 67 6 40 171.0 26.0 7.0 235
TEX 2 198 729 115 200 40 4 28 115.0 21.0 4.0 73
TOR 4 459 1408 187 378 96 2 58 223.0 4.0 2.0 190
df2 so ibb hbp sh sf gidp
year team
2008 DET 176.0 3.0 10.0 4.0 8.0 28.0
HOU 212.0 3.0 9.0 16.0 6.0 17.0
LAN 141.0 8.0 9.0 3.0 8.0 29.0
NYN 310.0 24.0 23.0 18.0 15.0 48.0
SFN 188.0 51.0 8.0 16.0 6.0 41.0
TEX 140.0 4.0 5.0 2.0 8.0 16.0
TOR 265.0 16.0 12.0 4.0 16.0 38.0
Here is a solution:
import pandas as pd

# Prepare a list of ranges
ranges = [('CIN', 'LAN'), ('DET', 'LAN')]

# Declare an empty list of data frames and a list with the existing data frames
df_ranges = []
df_list = [df1, df2]

# Loop over multi-indices
for i, idx_range in enumerate(ranges):
    df = df_list[i]
    row1, row2 = idx_range
    df_ranges.append(df.loc[(slice(None), slice(row1, row2)), :])

# Print the extracted data
print('Extracted data:\n')
print(df_ranges)
Output:
[ stint g ab r h X2b X3b hr rbi sb cs bb
year team
2007 CIN 6 379 745 101 203 35 2 36 125 10 1 105
DET 5 301 1062 162 283 54 4 37 144 24 7 97
HOU 4 311 926 109 218 47 6 14 77 10 4 60
LAN 11 413 1021 153 293 61 3 36 154 7 5 114
so ibb hbp sh sf gidp
year team
2008 DET 176 3 10 4 8 28
HOU 212 3 9 16 6 17
LAN 141 8 9 3 8 29]
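As a readability note (my suggestion, not from the answer), pd.IndexSlice expresses the same selection without nested slice() calls; like the version above, it requires the sliced MultiIndex level to be sorted:
idx = pd.IndexSlice
df_ranges.append(df.loc[idx[:, row1:row2], :])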
