Query a CSV using pandas - Python

I have a csv file like this:
year month Company A Company B Company C
1990 Jan 10 15 20
1990 Feb 11 14 21
1990 mar 13 8 23
1990 april 12 22 19
1990 may 15 12 18
1990 june 18 13 13
1990 june 12 14 15
1990 july 12 14 16
1991 Jan 11 16 13
1991 Feb 14 17 11
1991 mar 23 13 12
1991 april 23 21 10
1991 may 22 22 9
1991 june 24 20 32
1991 june 12 14 15
1991 july 21 14 16
1992 Jan 10 13 26
1992 Feb 9 11 19
1992 mar 23 12 18
1992 april 12 10 21
1992 may 17 9 10
1992 june 15 42 9
1992 june 16 9 26
1992 july 15 26 19
1993 Jan 18 19 20
1993 Feb 19 18 21
1993 mar 20 21 23
1993 april 21 10 19
1993 may 13 9 14
1993 june 14 23 23
1993 june 15 21 23
1993 july 16 10 22
I want to find out, for each company, the month and year in which it had the highest number of sales. For example, for company A in 1990 the highest sale was 18. I want to do this using pandas, but I don't understand how to proceed, so pointers are needed, please.
PS: here is what I have done so far.
import pandas as pd

df = pd.read_csv('SAMPLE.csv')
num_of_rows = len(df.index)
years_list = []
months_list = []
company_list = df.columns[2:]
for i in range(0, num_of_rows):
    years_list.append(df[df.columns[0]][i])
    months_list.append(df[df.columns[1]][i])
years_list = list(set(years_list))
months_list = list(set(months_list))
for each in years_list:
    for c in company_list:
        print(df[(df.year == each)][c].max())
I am getting the largest number for each year and company, but I don't know how to also get the corresponding month and year.

Use a combination of idxmax() and loc to filter the dataframe:
In [36]:
import pandas as pd
import io
temp = """year month Company_A Company_B Company_C
1990 Jan 10 15 20
1990 Feb 11 14 21
1990 mar 13 8 23
1990 april 12 22 19
1990 may 15 12 18
1990 june 18 13 13
1990 june 12 14 15
1990 july 12 14 16
1991 Jan 11 16 13
1991 Feb 14 17 11
1991 mar 23 13 12
1991 april 23 21 10
1991 may 22 22 9
1991 june 24 20 32
1991 june 12 14 15
1991 july 21 14 16
1992 Jan 10 13 26
1992 Feb 9 11 19
1992 mar 23 12 18
1992 april 12 10 21
1992 may 17 9 10
1992 june 15 42 9
1992 june 16 9 26
1992 july 15 26 19
1993 Jan 18 19 20
1993 Feb 19 18 21
1993 mar 20 21 23
1993 april 21 10 19
1993 may 13 9 14
1993 june 14 23 23
1993 june 15 21 23
1993 july 16 10 22"""
df = pd.read_csv(io.StringIO(temp), sep=r'\s+')
# the call to .unique() is because the same row for A and C appears twice
df.loc[df[['Company_A', 'Company_B', 'Company_C']].idxmax().unique()]
Out[36]:
year month Company_A Company_B Company_C
13 1991 june 24 20 32
21 1992 june 15 42 9
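
The one-liner above returns each company's single best row across the whole file. Since the question also asks per year (company A's 1990 peak of 18), here is a minimal per-year sketch along the same lines, assuming the df built above:

# for each company, pick the row holding that year's maximum via groupby + idxmax
for company in ['Company_A', 'Company_B', 'Company_C']:
    best = df.loc[df.groupby('year')[company].idxmax(), ['year', 'month', company]]
    print(company)
    print(best)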

Related

Python Pandas multiindex

I'm trying to create a table like in this example:
Example_picture
My code:
import pandas as pd

data = list(range(39))  # mockup for 39 values
columns = pd.MultiIndex.from_product(
    [['1', '2', '6'], [str(year) for year in range(2007, 2020)]],
    names=['Factor', 'Year'])
df = pd.DataFrame(data, index=['World'], columns=columns)
print(df)
But I get this error:
Shape of passed values is (39, 1), indices imply (1, 39)
What did I do wrong?
You need to wrap the data in a list to force the DataFrame constructor to interpret it as a single row (a bare 1-D list is treated as one column):
data = list(range(39))
columns = pd.MultiIndex.from_product(
    [['1', '2', '6'], [str(year) for year in range(2007, 2020)]],
    names=['Factor', 'Year'])
df = pd.DataFrame([data], index=['World'], columns=columns)
output:
Factor 1 2 6
Year 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019
World 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38
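
Equivalently, a sketch (my addition, not part of the original answer) that makes the row shape explicit with a 1x39 numpy array:

import numpy as np
import pandas as pd

data = list(range(39))
columns = pd.MultiIndex.from_product(
    [['1', '2', '6'], [str(year) for year in range(2007, 2020)]],
    names=['Factor', 'Year'])
# reshape(1, -1) turns the flat list into a single row of 39 values
df = pd.DataFrame(np.array(data).reshape(1, -1), index=['World'], columns=columns)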

Pandas cumulative sum without changing week order number

I have a dataframe which looks like the following:
df:
RY Week no Value
2020 14 3.95321
2020 15 3.56425
2020 16 0.07042
2020 17 6.45417
2020 18 0.00029
2020 19 0.27737
2020 20 4.12644
2020 21 0.32753
2020 22 0.47239
2020 23 0.28756
2020 24 1.83029
2020 25 0.75385
2020 26 2.08981
2020 27 2.05611
2020 28 1.00614
2020 29 0.02105
2020 30 0.58101
2020 31 3.49083
2020 32 8.29013
2020 33 8.99825
2020 34 2.66293
2020 35 0.16448
2020 36 2.26301
2020 37 1.09302
2020 38 1.66566
2020 39 1.47233
2020 40 6.42708
2020 41 2.67947
2020 42 6.79551
2020 43 4.45881
2020 44 1.87972
2020 45 0.76284
2020 46 1.8671
2020 47 2.07159
2020 48 2.87303
2020 49 7.66944
2020 50 1.20421
2020 51 9.04416
2020 52 2.2625
2020 1 1.17026
2020 2 14.22263
2020 3 1.36464
2020 4 2.64862
2020 5 8.69916
2020 6 4.51259
2020 7 2.83411
2020 8 3.64183
2020 9 4.77292
2020 10 1.64729
2020 11 1.6878
2020 12 2.24874
2020 13 0.32712
I created the Week no column from a date. In my scenario the regulatory year starts on 1st April and ends on 31st March of the next year, which is why Week no starts at 14 and ends at 13. Now I want to create another column that contains the cumulative sum of the Value column. I tried cumsum() with the following code:
df['Cummulative Value'] = df.groupby('RY')['Value'].apply(lambda x:x.cumsum())
The problem with the above code is that it starts calculating the cumulative sum from week no 1, not from week no 14 onwards. Is there any way to calculate the cumulative sum without disturbing the week number order?
EDIT: You can sort the values by RY and Week no before GroupBy.cumsum, and finally sort the index to restore the original order:
#create default index for correct working
df = df.reset_index(drop=True)
df['Cummulative Value'] = df.sort_values(['RY','Week no']).groupby('RY')['Value'].cumsum().sort_index()
print (df)
RY Week no Value Cummulative Value
0 2020 14 3.95321 53.73092
1 2020 15 3.56425 57.29517
2 2020 16 0.07042 57.36559
3 2020 17 6.45417 63.81976
4 2020 18 0.00029 63.82005
5 2020 19 0.27737 64.09742
6 2020 20 4.12644 68.22386
7 2020 21 0.32753 68.55139
8 2020 22 0.47239 69.02378
9 2020 23 0.28756 69.31134
10 2020 24 1.83029 71.14163
11 2020 25 0.75385 71.89548
12 2020 26 2.08981 73.98529
13 2020 27 2.05611 76.04140
14 2020 28 1.00614 77.04754
15 2020 29 0.02105 77.06859
16 2020 30 0.58101 77.64960
17 2020 31 3.49083 81.14043
18 2020 32 8.29013 89.43056
19 2020 33 8.99825 98.42881
20 2020 34 2.66293 101.09174
21 2020 35 0.16448 101.25622
22 2020 36 2.26301 103.51923
23 2020 37 1.09302 104.61225
24 2020 38 1.66566 106.27791
25 2020 39 1.47233 107.75024
26 2020 40 6.42708 114.17732
27 2020 41 2.67947 116.85679
28 2020 42 6.79551 123.65230
29 2020 43 4.45881 128.11111
30 2020 44 1.87972 129.99083
31 2020 45 0.76284 130.75367
32 2020 46 1.86710 132.62077
33 2020 47 2.07159 134.69236
34 2020 48 2.87303 137.56539
35 2020 49 7.66944 145.23483
36 2020 50 1.20421 146.43904
37 2020 51 9.04416 155.48320
38 2020 52 2.26250 157.74570
39 2020 1 1.17026 1.17026
40 2020 2 14.22263 15.39289
41 2020 3 1.36464 16.75753
42 2020 4 2.64862 19.40615
43 2020 5 8.69916 28.10531
44 2020 6 4.51259 32.61790
45 2020 7 2.83411 35.45201
46 2020 8 3.64183 39.09384
47 2020 9 4.77292 43.86676
48 2020 10 1.64729 45.51405
49 2020 11 1.68780 47.20185
50 2020 12 2.24874 49.45059
51 2020 13 0.32712 49.77771
EDIT:
After some discussion, the solution can be simplified to a plain GroupBy.cumsum, since cumsum accumulates in the existing row order and the rows are already in regulatory-year order:
df['Cummulative Value'] = df.groupby('RY')['Value'].cumsum()
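
As a quick sanity check (a minimal sketch with made-up values, not the question's data), cumsum accumulates in row order, so the regulatory-year ordering is preserved:

import pandas as pd

tiny = pd.DataFrame({'RY': [2020] * 4,
                     'Week no': [14, 15, 1, 2],
                     'Value': [1.0, 2.0, 3.0, 4.0]})
tiny['Cummulative Value'] = tiny.groupby('RY')['Value'].cumsum()
print(tiny)
#      RY  Week no  Value  Cummulative Value
# 0  2020       14    1.0                1.0
# 1  2020       15    2.0                3.0
# 2  2020        1    3.0                6.0
# 3  2020        2    4.0               10.0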

Fill column with value from previous year from the same month

How can I use the value from the same month in the previous year to fill values in the following table for 2020:
Category Month Year Value
A 1 2019 15
B 2 2019 20
A 2 2019 90
A 3 2019 50
B 4 2019 40
A 5 2019 20
A 6 2019 15
A 7 2019 17
A 8 2019 18
A 9 2019 12
A 10 2019 11
A 11 2019 19
A 12 2019 15
A 1 2020 18
A 2 2020 53
A 3 2020 80
The final desired result is the following:
Category Month Year Value
A 1 2019 15
B 2 2019 20
A 2 2019 90
A 3 2019 50
B 4 2019 40
A 4 2019 40
A 5 2019 20
A 6 2019 15
A 7 2019 17
A 8 2019 18
A 9 2019 12
A 10 2019 11
A 11 2019 19
A 12 2019 15
A 1 2020 18
A 2 2020 53
A 3 2020 80
B 4 2020 40
A 4 2020 40
A 5 2020 20
A 6 2020 15
A 7 2020 17
A 8 2020 18
A 9 2020 12
A 10 2020 11
A 11 2020 19
A 12 2020 15
I tried using pandas groupby, but I'm not sure if that is the right approach.
IIUC, you can pivot, forward-fill within each Category, and then stack back:
s = (df.pivot_table(index=['Category', 'Year'], columns='Month', values='Value')
       .groupby(level=0)
       .ffill()
       .stack()
       .reset_index())
Category Year level_2 0
0 A 2019 1 15.0
1 A 2019 2 90.0
2 A 2019 3 50.0
3 A 2019 5 20.0
4 A 2019 6 15.0
5 A 2019 7 17.0
6 A 2019 8 18.0
7 A 2019 9 12.0
8 A 2019 10 11.0
9 A 2019 11 19.0
10 A 2019 12 15.0
11 A 2020 1 18.0
12 A 2020 2 53.0
13 A 2020 3 80.0
14 A 2020 5 20.0
15 A 2020 6 15.0
16 A 2020 7 17.0
17 A 2020 8 18.0
18 A 2020 9 12.0
19 A 2020 10 11.0
20 A 2020 11 19.0
21 A 2020 12 15.0
22 B 2019 2 20.0
23 B 2019 4 40.0
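
The stacked result comes back with the generic labels level_2 and 0, as shown above; an optional tidy-up (my addition, not in the original answer) restores readable names:

s = s.rename(columns={'level_2': 'Month', 0: 'Value'})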
You can accomplish this with a combination of loc, concat, and drop_duplicates.
The idea here is to concatenate the dataframe with a copy of the 2019 data where year is changed to 2020, and then only keeping the first value for Category, Month, Year.
df2 = df.loc[df['Year'] == 2019, :].copy()  # .copy() avoids SettingWithCopyWarning
df2['Year'] = 2020
pd.concat([df, df2]).drop_duplicates(subset=['Category', 'Month', 'Year'], keep='first')
Output
Category Month Year Value
0 A 1 2019 15
1 B 2 2019 20
2 A 2 2019 90
3 A 3 2019 50
4 B 4 2019 40
5 A 5 2019 20
6 A 6 2019 15
7 A 7 2019 17
8 A 8 2019 18
9 A 9 2019 12
10 A 10 2019 11
11 A 11 2019 19
12 A 12 2019 15
13 A 1 2020 18
14 A 2 2020 53
15 A 3 2020 80
1 B 2 2020 20
4 B 4 2020 40
5 A 5 2020 20
6 A 6 2020 15
7 A 7 2020 17
8 A 8 2020 18
9 A 9 2020 12
10 A 10 2020 11
11 A 11 2020 19
12 A 12 2020 15
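
One follow-up detail (my addition): concat keeps the original row labels, which is why index values repeat in the output above. If you want a clean 0..n-1 index, append reset_index(drop=True):

result = (pd.concat([df, df2])
            .drop_duplicates(subset=['Category', 'Month', 'Year'], keep='first')
            .reset_index(drop=True))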

Create a new column in pandas dataframe based on multiple conditions

I have a dataframe like the one below, and I have to create a new column year_val that takes its values from col2016 through col2019 based on the Years column: year_val should hold the value of the col#### column whose suffix equals Years for that row.
import pandas as pd

sampleDF = pd.DataFrame({'Years': [2016, 2016, 2017, 2017, 2018, 2018, 2019, 2019],
                         'col2016': [1, 2, 3, 4, 5, 6, 7, 8],
                         'col2017': [9, 10, 11, 12, 13, 14, 15, 16],
                         'col2018': [17, 18, 19, 20, 21, 22, 23, 24],
                         'col2019': [25, 26, 27, 28, 29, 30, 31, 32]})
sampleDF['year_val'] = ?????
Use DataFrame.lookup, building the column labels by casting the Years column to string and prepending 'col':
sampleDF['year_val'] = sampleDF.lookup(sampleDF.index, 'col' + sampleDF['Years'].astype(str))
print (sampleDF)
Years col2016 col2017 col2018 col2019 year_val
0 2016 1 9 17 25 1
1 2016 2 10 18 26 2
2 2017 3 11 19 27 11
3 2017 4 12 20 28 12
4 2018 5 13 21 29 21
5 2018 6 14 22 30 22
6 2019 7 15 23 31 31
7 2019 8 16 24 32 32
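
Note that DataFrame.lookup was deprecated in pandas 1.2 and removed in 2.0. A sketch of an equivalent approach using plain numpy indexing (assuming every 'col' + Years label exists as a column):

import numpy as np

cols = 'col' + sampleDF['Years'].astype(str)
col_pos = sampleDF.columns.get_indexer(cols)  # column position for each row's year
sampleDF['year_val'] = sampleDF.to_numpy()[np.arange(len(sampleDF)), col_pos]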
EDIT: If you check the definition of the lookup function:
result = [df.get_value(row, col) for row, col in zip(row_labels, col_labels)]
you can rewrite it with a try-except statement and Series.at, which handles Years values with no matching column and also avoids:
FutureWarning: get_value is deprecated and will be removed in a future release. Please use .at[] or .iat[] accessors instead
sampleDF = pd.DataFrame({'Years': [2015, 2016, 2017, 2017, 2018, 2018, 2019, 2019],
                         'col2016': [1, 2, 3, 4, 5, 6, 7, 8],
                         'col2017': [9, 10, 11, 12, 13, 14, 15, 16],
                         'col2018': [17, 18, 19, 20, 21, 22, 23, 24],
                         'col2019': [25, 26, 27, 28, 29, 30, 31, 32]})
print(sampleDF)
Years col2016 col2017 col2018 col2019
0 2015 1 9 17 25
1 2016 2 10 18 26
2 2017 3 11 19 27
3 2017 4 12 20 28
4 2018 5 13 21 29
5 2018 6 14 22 30
6 2019 7 15 23 31
7 2019 8 16 24 32
import numpy as np

out = []
for row, col in zip(sampleDF.index, 'col' + sampleDF['Years'].astype(str)):
    try:
        out.append(sampleDF.at[row, col])
    except KeyError:
        out.append(np.nan)
sampleDF['year_val'] = out
print(sampleDF)
Years col2016 col2017 col2018 col2019 year_val
0 2015 1 9 17 25 NaN
1 2016 2 10 18 26 2.0
2 2017 3 11 19 27 11.0
3 2017 4 12 20 28 12.0
4 2018 5 13 21 29 21.0
5 2018 6 14 22 30 22.0
6 2019 7 15 23 31 31.0
7 2019 8 16 24 32 32.0

How to normalize the following dates inside a pandas dataframe?

I have the following dates dataframe:
dates
0 2012 10 4
1
2 2012 01 19
3 20 6 11
4 20 10 7
5 19 11 12
6
7 2013 03 19
8 2016 2 5
9 2011 2 19
10
11 2011 05 23
12 2012 04 5
How can I normalize the dates column into:
dates
0 2012 10 04
1
2 2012 01 19
3 2020 06 11
4 2020 10 07
5 2019 11 12
6
7 2013 03 19
8 2016 02 05
9 2011 02 19
10
11 2011 05 23
12 2012 04 05
I tried regex and splitting and tweaking each column separately, but I am overcomplicating the task. Is it possible to normalize this into the latter dataframe? The rule is to left-pad the month and day with a 0 when they have a single digit, and to prepend 20 to the year when it is incomplete; the target format is yyyymmdd.
Solution:
x = (df.loc[df.dates.str.contains(r'\d+\s*\d+\s*\d+'), 'dates']
       .str.split(expand=True)
       .rename(columns={0: 'year', 1: 'month', 2: 'day'})
       .astype(int))
x.loc[x.year <= 50, 'year'] += 2000
df['new'] = pd.to_datetime(x, errors='coerce').dt.strftime('%Y%m%d')
Result:
In [148]: df
Out[148]:
dates new
0 2012 10 4 20121004
1 NaN
2 2012 01 19 20120119
3 20 6 11 20200611
4 20 10 7 20201007
5 19 11 12 20191112
6 NaN
7 2013 03 19 20130319
8 2016 2 5 20160205
9 2011 2 19 20110219
10 NaN
11 2011 05 23 20110523
12 2012 04 5 20120405
Explanation:
In [149]: df.loc[df.dates.str.contains(r'\d+\s*\d+\s*\d+'), 'dates']
Out[149]:
0 2012 10 4
2 2012 01 19
3 20 6 11
4 20 10 7
5 19 11 12
7 2013 03 19
8 2016 2 5
9 2011 2 19
11 2011 05 23
12 2012 04 5
Name: dates, dtype: object
In [152]: (df.loc[df.dates.str.contains(r'\d+\s*\d+\s*\d+'), 'dates']
...: .str.split(expand=True)
...: .rename(columns={0:'year',1:'month',2:'day'})
...: .astype(int))
Out[152]:
year month day
0 2012 10 4
2 2012 1 19
3 20 6 11
4 20 10 7
5 19 11 12
7 2013 3 19
8 2016 2 5
9 2011 2 19
11 2011 5 23
12 2012 4 5
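
To finish the walkthrough (a sketch continuing from the x frame built above): two-digit years are promoted to 20xx, then pd.to_datetime consumes the year/month/day columns directly, and index alignment leaves the empty rows as NaN:

x.loc[x.year <= 50, 'year'] += 2000                 # 20 -> 2020, 19 -> 2019
df['new'] = pd.to_datetime(x, errors='coerce').dt.strftime('%Y%m%d')
# rows 1, 6 and 10 were never in x, so 'new' stays NaN there after alignment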
