I have a dataframe like the following, with a multi-index of integers that represents months and days of the year, along with maximum and minimum temperature recordings from those days.
df
Min Temp Max Temp
Date Date
1 1 -88 139
2 -115 150
3 -110 139
4 -81 156
5 -80 172
... ... ...
12 2 -94 156
3 -97 172
4 -120 156
5 -124 144
6 -161 130
7 -167 135
8 -141 167
9 -135 178
10 -106 194
11 -106 161
12 -94 144
13 -92 133
14 -149 117
15 -158 117
16 -119 122
17 -111 160
18 -142 133
19 -185 130
20 -190 161
21 -167 161
22 -98 150
23 -162 139
24 -90 183
25 -125 183
26 -119 144
27 -76 130
28 -81 134
29 -117 113
30 -127 106
31 -111 122
How can I convert this multi-index to a single index that is of type datetime? Something like this conversion is what I am looking for:
1 1 ---> January 1
1 2 ---> January 2
...
12 31 ---> December 31
Using the top of your dataframe as an example:
>>> df
Min Temp Max Temp
Date Date
1 1 -88 139
2 -115 150
3 -110 139
4 -81 156
5 -80 172
Use pd.to_datetime on the individual levels of your MultiIndex, then strftime with your desired format:
df.index = pd.to_datetime(df.index.get_level_values(0).astype(str) + '-' +
df.index.get_level_values(1).astype(str),
format='%m-%d').strftime('%B %d')
>>> df
Min Temp Max Temp
January 01 -88 139
January 02 -115 150
January 03 -110 139
January 04 -81 156
January 05 -80 172
However, because this is a formatted string, the index will no longer be of datetime type. A datetime needs a year, so if you want a true DatetimeIndex, omit the strftime and pandas will fall back on the default year 1900:
df.index = pd.to_datetime(df.index.get_level_values(0).astype(str) + '-' +
df.index.get_level_values(1).astype(str),
format='%m-%d')
>>> df
Min Temp Max Temp
1900-01-01 -88 139
1900-01-02 -115 150
1900-01-03 -110 139
1900-01-04 -81 156
1900-01-05 -80 172
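As a side note, pd.to_datetime can also assemble dates from year/month/day columns, which avoids the string concatenation. A sketch under the same assumptions (the two index levels are month and day; the year 1900 is the same placeholder as the default above):
import pandas as pd

parts = df.index.to_frame(index=False)
parts.columns = ['month', 'day']
parts['year'] = 1900  # placeholder year, matching the default above
df.index = pd.to_datetime(parts[['year', 'month', 'day']])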
Let's take this sample dataframe:
import pandas as pd
import numpy as np
arrays = [[1, 1, 1, 1, 2, 2, 2, 2], [28, 29, 30, 31 , 1, 2, 3, 4]]
index = pd.MultiIndex.from_arrays(arrays, names=('Month', 'Day'))
df = pd.DataFrame(np.random.randn(8,2), index=index)
Yields:
                  0         1
Month Day
1     28  -0.295065 -0.843433
      29   0.367759  0.837147
      30   0.051956  0.430499
      31   1.917990  1.066545
2     1    1.345338 -0.600304
      2   -0.475890  0.763301
      3    0.560985  1.747668
      4    0.377741 -0.310094
Simply use reset_index(), combine the columns and convert to datetime:
new = df.reset_index()
new['Date'] = pd.to_datetime(new['Month'].astype(str) + '/' + new['Day'].astype(str), format='%m/%d')
Yields:
Month Day 0 1 Date
0 1 28 -0.295065 -0.843433 1900-01-28
1 1 29 0.367759 0.837147 1900-01-29
2 1 30 0.051956 0.430499 1900-01-30
3 1 31 1.917990 1.066545 1900-01-31
4 2 1 1.345338 -0.600304 1900-02-01
5 2 2 -0.475890 0.763301 1900-02-02
6 2 3 0.560985 1.747668 1900-02-03
7 2 4 0.377741 -0.310094 1900-02-04
Finally, use set_index() and drop() columns:
new = new.set_index('Date').drop(['Month','Day'], axis=1)
Yields:
                   0         1
Date
1900-01-28 -0.295065 -0.843433
1900-01-29  0.367759  0.837147
1900-01-30  0.051956  0.430499
1900-01-31  1.917990  1.066545
1900-02-01  1.345338 -0.600304
1900-02-02 -0.475890  0.763301
1900-02-03  0.560985  1.747668
1900-02-04  0.377741 -0.310094
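For what it's worth, the three steps can also be written as one method chain; this sketch should be equivalent to the above:
new = (df.reset_index()
         .assign(Date=lambda d: pd.to_datetime(
             d['Month'].astype(str) + '/' + d['Day'].astype(str),
             format='%m/%d'))
         .set_index('Date')
         .drop(['Month', 'Day'], axis=1))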
I am new to Python, and was trying to run a basic web scraper. My code looks like this
import requests
import pandas as pd
x = requests.get('https://www.baseball-reference.com/players/p/penaje02.shtml')
dfs = pd.read_html(x.content)
print(dfs)
df = pd.DataFrame(dfs)
When printing dfs, it looks like this. I only want the second table.
[ Year Age Tm Lg G PA AB \
0 2018 20 HOU-min A- 36 156 136
1 2019 21 HOU-min A,A+ 109 473 409
2 2021 23 HOU-min AAA,Rk 37 160 145
3 2022 24 HOU AL 136 558 521
4 1 Yr 1 Yr 1 Yr 1 Yr 136 558 521
5 162 Game Avg. 162 Game Avg. 162 Game Avg. 162 Game Avg. 162 665 621
R H 2B ... OPS OPS+ TB GDP HBP SH SF IBB Pos \
0 22 34 5 ... 0.649 NaN 42 0 1 0 1 0 NaN
1 72 124 21 ... 0.825 NaN 180 4 11 0 6 0 NaN
2 25 43 5 ... 0.942 NaN 84 0 7 0 0 0 NaN
3 72 132 20 ... 0.715 101.0 222 5 6 1 5 0 *6/H
4 72 132 20 ... 0.715 101.0 222 5 6 1 5 0 NaN
5 86 157 24 ... 0.715 101.0 264 6 7 1 6 0 NaN
Awards
0 TRC · NYPL
1 DAV,FAY · MIDW,CARL
2 SKT,AST · AAAW,FCL
3 GG
4 NaN
5 NaN
[6 rows x 30 columns]]
However, I end up with the error "Must pass 2-d input. shape=(1, 6, 30)" after my last line. I have tried using df = dfs[1], but got the error "list index out of range". Is there any way I can turn dfs from a list into a dataframe?
What do you mean you only want the second table? There's only one table; it's 6 rows and 30 columns. The backslashes show up when whatever you're printing to isn't wide enough to contain the dataframe without line wrapping.
The pd.read_html() function returns a List[DataFrame] so you first need to grab your dataframe from the list, and then you can subset it to get the columns you care about:
df = dfs[0]
columns = ['R', 'H', '2B', '3B', 'HR', 'RBI', 'SB', 'CS', 'BB', 'SO', 'BA', 'OBP', 'SLG', 'OPS', 'OPS+', 'TB', 'GDP', 'HBP', 'SH', 'SF', 'IBB', 'Pos']
print(df[columns])
Output:
R H 2B 3B HR RBI SB CS BB SO BA OBP SLG OPS OPS+ TB GDP HBP SH SF IBB Pos
0 22 34 5 0 1 10 3 0 18 19 0.250 0.340 0.309 0.649 NaN 42 0 1 0 1 0 NaN
1 72 124 21 7 7 54 20 10 47 90 0.303 0.385 0.440 0.825 NaN 180 4 11 0 6 0 NaN
2 25 43 5 3 10 21 6 1 8 41 0.297 0.363 0.579 0.942 NaN 84 0 7 0 0 0 NaN
3 72 132 20 2 22 63 11 2 22 135 0.253 0.289 0.426 0.715 101.0 222 5 6 1 5 0 *6/H
4 72 132 20 2 22 63 11 2 22 135 0.253 0.289 0.426 0.715 101.0 222 5 6 1 5 0 NaN
5 86 157 24 2 26 75 13 2 26 161 0.253 0.289 0.426 0.715 101.0 264 6 7 1 6 0 NaN
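As an aside, if a page really did contain several tables, pd.read_html() can filter them with its match parameter, which keeps only tables whose text matches a regex (the pattern 'Year' below is just an illustrative choice for this page):
import requests
import pandas as pd

x = requests.get('https://www.baseball-reference.com/players/p/penaje02.shtml')
dfs = pd.read_html(x.content, match='Year')  # still returns a list
df = dfs[0]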
How can I merge and sum the columns with the same name?
So the output should be one column named Canada as a result of summing the four columns named Canada.
Country/Region Brazil Canada Canada Canada Canada
Week 1 0 3 0 0 0
Week 2 0 17 0 0 0
Week 3 0 21 0 0 0
Week 4 0 21 0 0 0
Week 5 0 23 0 0 0
Week 6 0 80 0 5 0
Week 7 0 194 0 20 0
Week 8 12 702 3 199 20
Week 9 182 2679 16 2395 260
Week 10 737 8711 80 17928 892
Week 11 1674 25497 153 48195 1597
Week 12 2923 46392 175 85563 2003
Week 13 4516 76095 182 122431 2180
Week 14 6002 105386 183 163539 2431
Week 15 6751 127713 189 210409 2995
Week 16 7081 147716 189 258188 3845
From its current state, this should give the outcome you're looking for:
df = df.set_index('Country/Region') # optional
df.groupby(df.columns, axis=1).sum() # Stolen from Scott Boston as it's a superior method.
Output:
Brazil Canada
Country/Region
Week 1 0 3
Week 2 0 17
Week 3 0 21
Week 4 0 21
Week 5 0 23
Week 6 0 85
Week 7 0 214
Week 8 12 924
Week 9 182 5350
Week 10 737 27611
Week 11 1674 75442
Week 12 2923 134133
Week 13 4516 200888
Week 14 6002 271539
Week 15 6751 341306
Week 16 7081 409938
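Note that grouping along columns with axis=1 is deprecated in newer pandas releases. A sketch of an equivalent that avoids it, grouping the transposed frame on its index (the duplicated column names) and transposing back:
df = df.set_index('Country/Region')  # optional, as above
out = df.T.groupby(level=0).sum().T
print(out)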
I found your dataset interesting; here's how I would clean it up from step 1:
import numpy as np
import pandas as pd

df = pd.read_csv('file.csv')
df = df.set_index(['Province/State', 'Country/Region', 'Lat', 'Long']).stack().reset_index()
df.columns = ['Province/State', 'Country/Region', 'Lat', 'Long', 'date', 'value']
df['date'] = pd.to_datetime(df['date'])
df = df.set_index('date')
df = df.pivot_table(index=df.index, columns='Country/Region', values='value', aggfunc=np.sum)
print(df)
Output:
Country/Region Afghanistan Albania Algeria Andorra Angola ... West Bank and Gaza Western Sahara Yemen Zambia Zimbabwe
date ...
2020-01-22 0 0 0 0 0 ... 0 0 0 0 0
2020-01-23 0 0 0 0 0 ... 0 0 0 0 0
2020-01-24 0 0 0 0 0 ... 0 0 0 0 0
2020-01-25 0 0 0 0 0 ... 0 0 0 0 0
2020-01-26 0 0 0 0 0 ... 0 0 0 0 0
... ... ... ... ... ... ... ... ... ... ... ...
2020-07-30 36542 5197 29831 922 1109 ... 11548 10 1726 5555 3092
2020-07-31 36675 5276 30394 925 1148 ... 11837 10 1728 5963 3169
2020-08-01 36710 5396 30950 925 1164 ... 12160 10 1730 6228 3659
2020-08-02 36710 5519 31465 925 1199 ... 12297 10 1734 6347 3921
2020-08-03 36747 5620 31972 937 1280 ... 12541 10 1734 6580 4075
If you now want to do weekly aggregations, it's as simple as:
print(df.resample('w').sum())
Output:
Country/Region Afghanistan Albania Algeria Andorra Angola ... West Bank and Gaza Western Sahara Yemen Zambia Zimbabwe
date ...
2020-01-26 0 0 0 0 0 ... 0 0 0 0 0
2020-02-02 0 0 0 0 0 ... 0 0 0 0 0
2020-02-09 0 0 0 0 0 ... 0 0 0 0 0
2020-02-16 0 0 0 0 0 ... 0 0 0 0 0
2020-02-23 0 0 0 0 0 ... 0 0 0 0 0
2020-03-01 7 0 6 0 0 ... 0 0 0 0 0
2020-03-08 10 0 85 7 0 ... 43 0 0 0 0
2020-03-15 57 160 195 7 0 ... 209 0 0 0 0
2020-03-22 175 464 705 409 5 ... 309 0 0 11 7
2020-03-29 632 1142 2537 1618 29 ... 559 0 0 113 31
2020-04-05 1783 2000 6875 2970 62 ... 1178 4 0 262 59
2020-04-12 3401 2864 11629 4057 128 ... 1847 30 3 279 84
2020-04-19 5838 3603 16062 4764 143 ... 2081 42 7 356 154
2020-04-26 8918 4606 21211 5087 174 ... 2353 42 7 541 200
2020-05-03 15149 5391 27943 5214 208 ... 2432 42 41 738 244
2020-05-10 25286 5871 36315 5265 274 ... 2607 42 203 1260 241
2020-05-17 39634 6321 45122 5317 327 ... 2632 42 632 3894 274
2020-05-24 61342 6798 54185 5332 402 ... 2869 45 1321 5991 354
2020-05-31 91885 7517 62849 5344 536 ... 3073 63 1932 7125 894
2020-06-07 126442 8378 68842 5868 609 ... 3221 63 3060 7623 1694
2020-06-14 159822 9689 74147 5967 827 ... 3396 63 4236 8836 2335
2020-06-21 191378 12463 79737 5981 1142 ... 4466 63 6322 9905 3089
2020-06-28 210487 15349 87615 5985 1522 ... 10242 70 7360 10512 3813
2020-07-05 224560 18707 102918 5985 2186 ... 21897 70 8450 11322 4426
2020-07-12 237087 22399 124588 5985 2940 ... 36949 70 9489 13002 6200
2020-07-19 245264 26845 149611 6098 4279 ... 52323 70 10855 16350 9058
2020-07-26 250970 31255 178605 6237 5919 ... 68154 70 11571 26749 14933
2020-08-02 255739 36370 208457 6429 7648 ... 80685 70 12023 38896 22241
2020-08-09 36747 5620 31972 937 1280 ... 12541 10 1734 6580 4075
Try:
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.randint(0, 100, (20, 5)), columns=[*'ZAABC'])
df.groupby(df.columns, axis=1, sort=False).sum()
Output:
Z A B C
0 44 111 67 67
1 9 104 36 87
2 70 176 12 58
3 65 126 46 88
4 81 62 77 72
5 9 100 69 79
6 47 146 99 88
7 49 48 19 14
8 39 97 9 57
9 32 105 23 35
10 75 83 34 0
11 0 89 5 38
12 17 83 42 58
13 31 66 41 57
14 35 57 82 91
15 0 113 53 12
16 42 159 68 6
17 68 50 76 52
18 78 35 99 58
19 23 92 85 48
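Here sort=False keeps the columns in their first-seen order, which is why Z stays ahead of the combined A column instead of being sorted alphabetically.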
You can try a transpose and groupby, e.g. something similar to the below.
df_T = df.transpose()
df_T.groupby(df_T.index).sum().transpose()['Canada']
Here's a way to do it:
df.columns = [(col + str(i)) if col.startswith('Canada') else col for i, col in enumerate(df.columns)]
df = df.assign(Canada=df.filter(like='Canada').sum(axis=1)).drop(columns=[x for x in df.columns if x.startswith('Canada') and x != 'Canada'])
First we rename the columns starting with Canada by appending their integer position, which ensures they are no longer duplicates.
Then we use sum() to add across columns like Canada, put the result in a new column named Canada, and drop the columns that were originally named Canada.
Full test code is:
import pandas as pd
df = pd.DataFrame(
columns=[x.strip() for x in 'Brazil Canada Canada Canada Canada'.split()],
index=['Week ' + str(i) for i in range(1, 17)],
data=[[i] * 5 for i in range(1, 17)])
df.columns.names = ['Country/Region']
print(df)
df.columns = [(col + str(i)) if col.startswith('Canada') else col for i, col in enumerate(df.columns)]
df = df.assign(Canada=df.filter(like='Canada').sum(axis=1)).drop(columns=[x for x in df.columns if x.startswith('Canada') and x != 'Canada'])
print(df)
Output:
Country/Region Brazil Canada Canada Canada Canada
Week 1 1 1 1 1 1
Week 2 2 2 2 2 2
Week 3 3 3 3 3 3
Week 4 4 4 4 4 4
Week 5 5 5 5 5 5
Week 6 6 6 6 6 6
Week 7 7 7 7 7 7
Week 8 8 8 8 8 8
Week 9 9 9 9 9 9
Week 10 10 10 10 10 10
Week 11 11 11 11 11 11
Week 12 12 12 12 12 12
Week 13 13 13 13 13 13
Week 14 14 14 14 14 14
Week 15 15 15 15 15 15
Week 16 16 16 16 16 16
Brazil Canada
Week 1 1 4
Week 2 2 8
Week 3 3 12
Week 4 4 16
Week 5 5 20
Week 6 6 24
Week 7 7 28
Week 8 8 32
Week 9 9 36
Week 10 10 40
Week 11 11 44
Week 12 12 48
Week 13 13 52
Week 14 14 56
Week 15 15 60
Week 16 16 64
I want to add ten more rows to each column of the dataset provided below, with random integer values ranging from:
20-27 for temperature
40-55 for humidity
150-170 for moisture
Dataset:
Temperature Humidity Moisture
0 22 46 0
1 36 41.4 170
2 18 69.3 120
3 21 39.3 200
4 39 70 150
5 22 78 220
6 27 65 180
7 32 75 250
I have tried:
import numpy as np
import pandas as pd

data1 = np.random.randint(20, 27, size=10)
df = pd.DataFrame(data1, columns=['Temperature'])
print(df)
This method replaces all the existing rows and gives out only the random values. What I need is to keep the existing rows and append the random values to them.
Use np.random.randint for each new column and concat the result onto the original; note that randint's upper bound is exclusive, which is why the bounds below are 28, 56 and 171:
df1 = pd.DataFrame({'Temperature':np.random.randint(20,28,size=10),
'Humidity':np.random.randint(40,56,size=10),
'Moisture':np.random.randint(150,171,size=10)})
df = pd.concat([df, df1], ignore_index=True)
print (df)
Temperature Humidity Moisture
0 22 46.0 0
1 36 41.4 170
2 18 69.3 120
3 21 39.3 200
4 39 70.0 150
5 22 78.0 220
6 27 65.0 180
7 32 75.0 250
8 20 52.0 158
9 21 45.0 156
10 23 49.0 151
11 24 51.0 167
12 22 45.0 157
13 21 43.0 163
14 26 55.0 162
15 25 40.0 164
16 24 40.0 155
17 20 48.0 150
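If reproducibility matters, the same rows can be drawn from numpy's newer Generator API; a sketch mirroring the ranges above (the seed 0 is arbitrary):
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)  # arbitrary seed, for repeatable draws
df1 = pd.DataFrame({'Temperature': rng.integers(20, 28, size=10),
                    'Humidity': rng.integers(40, 56, size=10),
                    'Moisture': rng.integers(150, 171, size=10)})
df = pd.concat([df, df1], ignore_index=True)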
I am trying to append every column from one row onto another row. I want to do this for every row, but some rows will not have any values. Take a look at my code; it will be more clear:
Here is my data
date day_of_week day_of_month day_of_year month_of_year
5/1/2017 0 1 121 5
5/2/2017 1 2 122 5
5/3/2017 2 3 123 5
5/4/2017 3 4 124 5
5/8/2017 0 8 128 5
5/9/2017 1 9 129 5
5/10/2017 2 10 130 5
5/11/2017 3 11 131 5
5/12/2017 4 12 132 5
5/15/2017 0 15 135 5
5/16/2017 1 16 136 5
5/17/2017 2 17 137 5
5/18/2017 3 18 138 5
5/19/2017 4 19 139 5
5/23/2017 1 23 143 5
5/24/2017 2 24 144 5
5/25/2017 3 25 145 5
5/26/2017 4 26 146 5
Here is my current code:
def GetNextDayMarketData(row, dataframe):
    if row['next_calendarday'] is pd.NaT:
        return
    key = row['next_calendarday'].strftime("%Y-%m-%d")
    nextrow = dataframe.loc[key]
    for index, val in nextrow.items():
        if index != "next_calendarday":
            dataframe.loc[row.name, index + '_nextday'] = val

s = df_md['date'].shift(-1)
df_md['next_calendarday'] = s.mask(s.dt.dayofweek.diff().lt(0))
df_md.set_index('date', inplace=True)
df_md.apply(lambda row: GetNextDayMarketData(row, df_md), axis=1)
This works, but it's so slow it might as well not work. Here is what the result should look like; you can see that the values from the next row have been added to the previous row. The kicker is that it's the next calendar date and not just the next row in the sequence. If a row does not have an entry for the next calendar date, it will simply be blank.
Here is the expected result in CSV:
date day_of_week day_of_month day_of_year month_of_year next_workingday day_of_week_nextday day_of_month_nextday day_of_year_nextday month_of_year_nextday
5/1/2017 0 1 121 5 5/2/2017 1 2 122 5
5/2/2017 1 2 122 5 5/3/2017 2 3 123 5
5/3/2017 2 3 123 5 5/4/2017 3 4 124 5
5/4/2017 3 4 124 5
5/8/2017 0 8 128 5 5/9/2017 1 9 129 5
5/9/2017 1 9 129 5 5/10/2017 2 10 130 5
5/10/2017 2 10 130 5 5/11/2017 3 11 131 5
5/11/2017 3 11 131 5 5/12/2017 4 12 132 5
5/12/2017 4 12 132 5
5/15/2017 0 15 135 5 5/16/2017 1 16 136 5
5/16/2017 1 16 136 5 5/17/2017 2 17 137 5
5/17/2017 2 17 137 5 5/18/2017 3 18 138 5
5/18/2017 3 18 138 5 5/19/2017 4 19 139 5
5/19/2017 4 19 139 5
5/23/2017 1 23 143 5 5/24/2017 2 24 144 5
5/24/2017 2 24 144 5 5/25/2017 3 25 145 5
5/25/2017 3 25 145 5 5/26/2017 4 26 146 5
5/26/2017 4 26 146 5
5/30/2017 1 30 150 5
Use DataFrame.join, then drop the helper column next_calendarday_nextday:
df = df.set_index('date')
df = (df.join(df, on='next_calendarday', rsuffix='_nextday')
.drop('next_calendarday_nextday', axis=1))
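Put together with the setup from the question, a minimal end-to-end sketch (assuming date starts out as a parseable column, as in the question) would be:
import pandas as pd

df['date'] = pd.to_datetime(df['date'])
s = df['date'].shift(-1)
# next calendar date, blanked where the weekday sequence resets
df['next_calendarday'] = s.mask(s.dt.dayofweek.diff().lt(0))

df = df.set_index('date')
df = (df.join(df, on='next_calendarday', rsuffix='_nextday')
        .drop('next_calendarday_nextday', axis=1))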
I have the following data:
Date Value
0 1/3/2014 778
1 1/4/2014 4554
2 1/5/2014 23
3 1/6/2014 767
4 1/7/2014 878
5 1/8/2014 678
6 1/9/2014 64
7 1/10/2014 344
8 1/11/2014 6576
9 1/12/2014 879
10 1/13/2014 5688
11 1/14/2014 688
12 1/15/2014 8799
13 1/16/2014 7899
14 1/17/2014 76
15 1/18/2014 868
16 1/19/2014 7976
17 1/20/2014 8679
18 1/21/2014 6976
19 1/22/2014 68
20 1/23/2014 754
21 1/24/2014 878
22 1/25/2014 9796
23 1/26/2014 57
24 1/27/2014 868
25 1/28/2014 868
26 1/29/2014 8778
27 1/30/2014 887
28 1/31/2014 765
29 2/1/2014 57
I would like to divide the data into groups of 15 consecutive days and find the average of the values. I have a naive way:
i = 15
j = 0
while i <= 30:
    X = data[j:i].mean()
    j = i
    i = i + 15
    print(X)
Is there a better way, say using groupby in pandas?
Try this:
df['Date'] = pd.to_datetime(df['Date'])
print(df.set_index('Date').groupby(pd.Grouper(freq='15D')).mean())
Output:
Value
Date
2014-01-03 2579.400000
2014-01-18 3218.333333
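Equivalently, once Date is the index, resample can express the same 15-day bins; this sketch should print the same two rows:
df['Date'] = pd.to_datetime(df['Date'])
print(df.set_index('Date').resample('15D').mean())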