Round before converting to string in pandas - python

I have a problem with rounding. This seems so common, but I can't find the answer by googling, so I decided to ask it here.
Here's my data
day reg log ad trans paid
1111 20171005 172 65 39.0 14.0 3.0
1112 20171006 211 90 46.0 17.0 4.0
1113 20171007 155 70 50.0 17.0 1.0
1114 20171008 174 71 42.0 18.0 0.0
1115 20171009 209 63 43.0 21.0 2.0
Here's what I did; I still want the % sign appended to the number:
table['% log'] = (table.log / table.reg * 100).astype(str) + '%'
table['% ad'] = (table.ad / table.reg * 100).astype(str) + '%'
table['% trans'] = (table.trans / table.reg* 100).astype(str) + '%'
table['% paid'] = (table.paid / table.reg * 100).astype(str) + '%'
Here's what I get; it just needs a final touch of rounding:
day reg log ad trans paid % log % ad % trans % paid
1111 20171005 172 65 39.0 14.0 3.0 37.7906976744% 22.6744186047% 8.13953488372% 1.74418604651%
1112 20171006 211 90 46.0 17.0 4.0 42.654028436% 21.8009478673% 8.05687203791% 1.89573459716%
1113 20171007 155 70 50.0 17.0 1.0 45.1612903226% 32.2580645161% 10.9677419355% 0.645161290323%
1114 20171008 174 71 42.0 18.0 0.0 40.8045977011% 24.1379310345% 10.3448275862% 0.0%
1115 20171009 209 63 43.0 21.0 2.0 30.1435406699% 20.5741626794% 10.04784689% 0.956937799043%
What I want is for the percentages not to be so long - just rounded to two decimal places.

You need round:
table['% log'] = (table.log / table.reg * 100).round(2).astype(str) + '%'
A better solution is to select all the columns as a subset and join the output to the original df:
cols = ['log','ad','trans','paid']
table = (table.join(table[cols].div(table.reg, axis=0)
                               .mul(100)
                               .round(2)
                               .astype(str)
                               .add('%')
                               .add_prefix('% ')))
print(table)
day reg log ad trans paid % log % ad % trans % paid
1111 20171005 172 65 39.0 14.0 3.0 37.79% 22.67% 8.14% 1.74%
1112 20171006 211 90 46.0 17.0 4.0 42.65% 21.8% 8.06% 1.9%
1113 20171007 155 70 50.0 17.0 1.0 45.16% 32.26% 10.97% 0.65%
1114 20171008 174 71 42.0 18.0 0.0 40.8% 24.14% 10.34% 0.0%
1115 20171009 209 63 43.0 21.0 2.0 30.14% 20.57% 10.05% 0.96%
Also, if you need nicer output, pad to exactly 2 decimal places by formatting instead of rounding:
table = (table.join(table[cols].div(table.reg, axis=0)
                               .mul(100)
                               .applymap("{0:.2f}".format)
                               .add('%')
                               .add_prefix('% ')))
print(table)
day reg log ad trans paid % log % ad % trans % paid
1111 20171005 172 65 39.0 14.0 3.0 37.79% 22.67% 8.14% 1.74%
1112 20171006 211 90 46.0 17.0 4.0 42.65% 21.80% 8.06% 1.90%
1113 20171007 155 70 50.0 17.0 1.0 45.16% 32.26% 10.97% 0.65%
1114 20171008 174 71 42.0 18.0 0.0 40.80% 24.14% 10.34% 0.00%
1115 20171009 209 63 43.0 21.0 2.0 30.14% 20.57% 10.05% 0.96%
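As a side note (not part of the original answer): if the percent formatting is only needed for display, e.g. in a Jupyter notebook, pandas' Styler can render the percentages without converting the underlying columns to strings. A minimal sketch, reusing the cols list from above:
pct = table[cols].div(table.reg, axis=0).mul(100).add_prefix('% ')
# Format only the percentage columns for display; the data stays numeric.
styled = table.join(pct).style.format('{:.2f}%', subset=list(pct.columns))
styled  # renders with two decimals in a notebook
Keeping the columns numeric has the advantage that they can still be sorted and aggregated afterwards.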

Related

Can't get count with 2 arguments in python pandas dataframe?

I have a dataframe called new_df, which basically collates the following data and prints this:
Pass Profit Trades MA2
0 69 10526.0 14 119
1 47 10420.0 13 97
2 68 10406.0 14 118
3 50 10376.0 13 100
4 285 10352.0 16 335
... ... ... ... ...
21643 117 -10376.0 14 167
21644 116 -10376.0 14 166
21645 115 -10376.0 14 165
21646 114 -10376.0 14 164
21647 113 -10376.0 14 163
[21648 rows x 4 columns]
and then I can see that 69 shows up 48 times in the Pass column, etc.
#counts the number of times each pass number is listed in pass column
new_df['Pass'].value_counts()
69 48
219 48
184 48
185 48
186 48
..
59 48
16 48
20 48
70 48
113 48
Name: Pass, Length: 451, dtype: int64
Right now I am trying to create a new df called sorted_df. The columns I can't get working are below:
Total Pass - Counts the number of times a unique number in Pass column also has the profit column above 110000
Pass % - Total Pass / Total Weeks
Total Fail - Counts the number of times a unique number in Pass column also has the profit column below 100000
Fail % - Total Fail / Total Weeks
sorted_df = pd.DataFrame(columns=['Pass','Total Profit','Total Weeks','Average per week',
                                  'Total Pass','Pass %','Total Fail','Fail %','MA2'])
# group the original df by Pass and get the first MA2 value of each group
pass_to_ma2 = new_df.groupby('Pass')['MA2'].first()
total_pass = 0
total_fail = 0
for value in new_df['Pass'].unique():
    mask = new_df['Pass'] == value
    pass_value = new_df[mask]
    total_profit = pass_value['Profit'].sum()
    total_weeks = pass_value.shape[0]
    average_per_week = total_profit / total_weeks
    total_pass = pass_value[pass_value['Profit'] > 110000].shape[0]
    pass_percentage = total_pass / total_weeks * 100 if total_weeks > 0 else 0
    total_fail = pass_value[pass_value['Profit'] < 100000].shape[0]
    fail_percentage = total_fail / total_weeks * 100 if total_weeks > 0 else 0
    sorted_df = sorted_df.append({'Pass': value, 'Total Profit': total_profit,
                                  'Total Weeks': total_weeks, 'Average per week': average_per_week,
                                  'In Profit': in_profit, 'Profit %': profit_percentage,
                                  'Total Pass': total_pass, 'Pass %': pass_percentage,
                                  'Total Fail': total_fail, 'Fail %': fail_percentage},
                                 ignore_index=True)
# Add the MA2 value to the sorted_df DataFrame
sorted_df["MA2"] = sorted_df["Pass"].map(pass_to_ma2)
Pass Total Profit Total Weeks Average per week Total Pass Pass % \
0 69.0 505248.0 48.0 10526.0 0.0 0.0
1 47.0 500160.0 48.0 10420.0 0.0 0.0
2 68.0 499488.0 48.0 10406.0 0.0 0.0
3 50.0 498048.0 48.0 10376.0 0.0 0.0
4 285.0 496896.0 48.0 10352.0 0.0 0.0
.. ... ... ... ... ... ...
446 117.0 -498048.0 48.0 -10376.0 0.0 0.0
447 116.0 -498048.0 48.0 -10376.0 0.0 0.0
448 115.0 -498048.0 48.0 -10376.0 0.0 0.0
449 114.0 -498048.0 48.0 -10376.0 0.0 0.0
450 113.0 -498048.0 48.0 -10376.0 0.0 0.0
Total Fail Fail % MA2 In Profit Profit %
0 48.0 100.0 119 0.0 0.0
1 48.0 100.0 97 0.0 0.0
2 48.0 100.0 118 0.0 0.0
3 48.0 100.0 100 0.0 0.0
4 48.0 100.0 335 0.0 0.0
.. ... ... ... ... ...
446 48.0 100.0 167 0.0 0.0
447 48.0 100.0 166 0.0 0.0
448 48.0 100.0 165 0.0 0.0
449 48.0 100.0 164 0.0 0.0
450 48.0 100.0 163 0.0 0.0
[451 rows x 11 columns]
What am I doing wrong?
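For what it's worth, one possible vectorized sketch of the same per-group counting, without the explicit loop and the deprecated append (the 110000/100000 thresholds are taken from the question as-is):
import pandas as pd

# Aggregate per unique Pass value in one pass instead of looping.
g = new_df.groupby('Pass')
sorted_df = pd.DataFrame({
    'Total Profit': g['Profit'].sum(),
    'Total Weeks': g['Profit'].size(),
    'Average per week': g['Profit'].mean(),
    # Count rows per group whose Profit clears each threshold.
    'Total Pass': new_df['Profit'].gt(110000).groupby(new_df['Pass']).sum(),
    'Total Fail': new_df['Profit'].lt(100000).groupby(new_df['Pass']).sum(),
    'MA2': g['MA2'].first(),
})
sorted_df['Pass %'] = sorted_df['Total Pass'] / sorted_df['Total Weeks'] * 100
sorted_df['Fail %'] = sorted_df['Total Fail'] / sorted_df['Total Weeks'] * 100
sorted_df = sorted_df.reset_index()
Note that in the sample data the per-row Profit values are on the order of 10,000, so a per-row comparison against 110000/100000 can never pass, which would explain the all-zero Total Pass column; if the thresholds were meant to apply per Pass value rather than per row, compare Total Profit instead.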

How can I iterate over a pandas dataframe so I can divide specific values based on a condition?

I have a dataframe like below:
0 1 2 ... 62 63 64
795 89.0 92.0 89.0 ... 74.0 64.0 4.0
575 80.0 75.0 78.0 ... 70.0 68.0 3.0
1119 2694.0 2437.0 2227.0 ... 4004.0 4010.0 6.0
777 90.0 88.0 88.0 ... 71.0 67.0 4.0
506 82.0 73.0 77.0 ... 69.0 64.0 2.0
... ... ... ... ... ... ... ...
65 84.0 77.0 78.0 ... 78.0 80.0 0.0
1368 4021.0 3999.0 4064.0 ... 1.0 4094.0 8.0
1036 80.0 80.0 79.0 ... 73.0 66.0 5.0
1391 3894.0 3915.0 3973.0 ... 4.0 4090.0 8.0
345 81.0 74.0 75.0 ... 80.0 75.0 1.0
I want to divide all elements over 1000 in this dataframe by 100. So 4021.0 becomes 40.21, et cetera.
I've tried something like below:
for cols in df:
    for rows in df[cols]:
        print(df[cols][rows])
I get index-out-of-bounds errors. I'm just not sure how to iterate properly in the way I'm looking for.
Loops are slow here, so it is better to use vectorized solutions - select values greater than 1000 and divide:
df[df.gt(1000)] = df.div(100)
Or using DataFrame.mask:
df = df.mask(df.gt(1000), df.div(100))
print (df)
0 1 2 62 63 64
795 89.00 92.00 89.00 74.00 64.00 4.0
575 80.00 75.00 78.00 70.00 68.00 3.0
1119 26.94 24.37 22.27 40.04 40.10 6.0
777 90.00 88.00 88.00 71.00 67.00 4.0
506 82.00 73.00 77.00 69.00 64.00 2.0
65 84.00 77.00 78.00 78.00 80.00 0.0
1368 40.21 39.99 40.64 1.00 40.94 8.0
1036 80.00 80.00 79.00 73.00 66.00 5.0
1391 38.94 39.15 39.73 4.00 40.90 8.0
345 81.00 74.00 75.00 80.00 75.00 1.0
You can use the applymap function with your own custom function:
def mapper_function(x):
    if x >= 1000:
        x = x / 100
    return x

df = df.applymap(mapper_function)
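Another vectorized variant worth knowing (just a sketch, not from the answers above) is numpy.where, which applies the conditional divide in one shot and avoids applymap's per-element Python calls:
import numpy as np
import pandas as pd

# Divide only the entries above 1000; leave everything else untouched.
df = pd.DataFrame(np.where(df.gt(1000), df.div(100), df),
                  index=df.index, columns=df.columns)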

Calculate mean of data rows in dataframe with date-headers, dictated by a 'datetime'-column

I have a dataframe with ID's of clients and their expenses for 2014-2018. What I want is to have the mean of the expenses per ID but only the years before a certain date can be taken into account when calculating the mean value (so column 'Date' dictates which columns can be taken into account for the mean).
Example: for index 0 (ID: 12), the date states '2016-03-08', then the mean should be taken from the columns 'y_2014' and 'y_2015', so then for this index, the mean is 111.0.
If the date is too early (e.g. somewhere in 2014 or earlier in this case), then NaN should be returned (see index 6 and 9).
Initial dataframe:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID
0 100.0 122.0 324 632 NaN 2016-03-08 12
1 120.0 159.0 54 452 541.0 2015-04-09 96
2 NaN 164.0 687 165 245.0 2016-02-15 20
3 180.0 421.0 512 184 953.0 2018-05-01 73
4 110.0 654.0 913 173 103.0 2017-08-04 84
5 130.0 NaN 754 124 207.0 2016-07-03 26
6 170.0 256.0 843 97 806.0 2013-02-04 87
7 140.0 754.0 95 101 541.0 2016-06-08 64
8 80.0 985.0 184 84 90.0 2019-03-05 11
9 96.0 65.0 127 130 421.0 2014-05-14 34
Desired output:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID mean
0 100.0 122.0 324 632 NaN 2016-03-08 12 111.0
1 120.0 159.0 54 452 541.0 2015-04-09 96 120.0
2 NaN 164.0 687 165 245.0 2016-02-15 20 164.0
3 180.0 421.0 512 184 953.0 2018-05-01 73 324.25
4 110.0 654.0 913 173 103.0 2017-08-04 84 559.0
5 130.0 NaN 754 124 207.0 2016-07-03 26 130.0
6 170.0 256.0 843 97 806.0 2013-02-04 87 NaN
7 140.0 754.0 95 101 541.0 2016-06-08 64 447
8 80.0 985.0 184 84 90.0 2019-03-05 11 284.6
9 96.0 65.0 127 130 421.0 2014-05-14 34 NaN
Tried code: I'm still working on it, as I don't really know how to start with this; I have only set up the dataframe so far. Probably something with the 'datetime' package has to be done to get the desired dataframe?
import pandas as pd
import numpy as np
import datetime
df = pd.DataFrame({"ID": [12,96,20,73,84,26,87,64,11,34],
"y_2014": [100,120,np.nan,180,110,130,170,140,80,96],
"y_2015": [122,159,164,421,654,np.nan,256,754,985,65],
"y_2016": [324,54,687,512,913,754,843,95,184,127],
"y_2017": [632,452,165,184,173,124,97,101,84,130],
"y_2018": [np.nan,541,245,953,103,207,806,541,90,421],
"Date": ['2016-03-08', '2015-04-09', '2016-02-15', '2018-05-01', '2017-08-04',
'2016-07-03', '2013-02-04', '2016-06-08', '2019-03-05', '2014-05-14']})
print(df)
Due to your naming convention, one needs to extract the years from the column names for comparison purposes. Then you can mask the data and take the mean:
# the years from columns
data = df.filter(like='y_')
data_years = data.columns.str.extract(r'(\d+)')[0].astype(int)
# the years from Date
years = pd.to_datetime(df.Date).dt.year.values
df['mean'] = data.where(data_years<years[:,None]).mean(1)
Output:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID mean
0 100.0 122.0 324 632 NaN 2016-03-08 12 111.00
1 120.0 159.0 54 452 541.0 2015-04-09 96 120.00
2 NaN 164.0 687 165 245.0 2016-02-15 20 164.00
3 180.0 421.0 512 184 953.0 2018-05-01 73 324.25
4 110.0 654.0 913 173 103.0 2017-08-04 84 559.00
5 130.0 NaN 754 124 207.0 2016-07-03 26 130.00
6 170.0 256.0 843 97 806.0 2013-02-04 87 NaN
7 140.0 754.0 95 101 541.0 2016-06-08 64 447.00
8 80.0 985.0 184 84 90.0 2019-03-05 11 284.60
9 96.0 65.0 127 130 421.0 2014-05-14 34 NaN
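The key step above is the broadcast comparison data_years < years[:, None]: reshaping years into a column vector makes NumPy compare each row's cutoff year against every column's year at once, producing one boolean per cell. A minimal sketch of the shapes involved:
import numpy as np

data_years = np.array([2014, 2015, 2016, 2017, 2018])  # shape (5,)
years = np.array([2016, 2015])                         # shape (2,)

# years[:, None] has shape (2, 1); broadcasting against (5,)
# yields a (2, 5) mask - one row per record, one column per year.
print(data_years < years[:, None])
# [[ True  True False False False]
#  [ True False False False False]]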
One more answer:
import pandas as pd
import numpy as np

df = pd.DataFrame({"ID": [12,96,20,73,84,26,87,64,11,34],
"y_2014": [100,120,np.nan,180,110,130,170,140,80,96],
"y_2015": [122,159,164,421,654,np.nan,256,754,985,65],
"y_2016": [324,54,687,512,913,754,843,95,184,127],
"y_2017": [632,452,165,184,173,124,97,101,84,130],
"y_2018": [np.nan,541,245,953,103,207,806,541,90,421],
"Date": ['2016-03-08', '2015-04-09', '2016-02-15', '2018-05-01', '2017-08-04',
'2016-07-03', '2013-02-04', '2016-06-08', '2019-03-05', '2014-05-14']})
# Subset from the original df to calculate the mean
subset = df.loc[:, ['y_2014', 'y_2015', 'y_2016', 'y_2017', 'y_2018']]
# An expense value is only available for the mean once that year has passed,
# so 2015-01-01 is chosen for the 'y_2014' column etc. to compare against the 'Date' column
subset.columns = ['2015-01-01', '2016-01-01', '2017-01-01', '2018-01-01', '2019-01-01']

s = subset.columns.values < df.Date.values[:, None]
t = s.astype(float)
t[t == 0] = np.nan
df['mean'] = (subset * t).mean(1)

print(df)

# Additionally: gives the sum of expenses before the date in the 'Date' column
df['sum'] = (subset * t).sum(1)

print(df)

iterating over loc on dataframes

I'm trying to extract row ranges from a list of dataframes. Each dataframe might not have the same data, so I have a list of possible index ranges that I would like loc to loop over; i.e., from the code sample below, I might want CIN to LAN, but on another dataframe the CIN row doesn't exist, so I would want DET to LAN or HOU to LAN.
So I was thinking of putting them in a list and iterating over the list, i.e.
for df in dfs:
    ranges = [[df.loc["CIN":"LAN"]], [df.loc["DET":"LAN"]]]
    extracted_ranges = (i for i in ranges)
I'm not sure how you would iterate over a list and feed it into loc, or perhaps .query().
df1
 stint g ab r h X2b X3b hr rbi sb cs bb
year team
2007 CIN 6 379 745 101 203 35 2 36 125.0 10.0 1.0 105
DET 5 301 1062 162 283 54 4 37 144.0 24.0 7.0 97
HOU 4 311 926 109 218 47 6 14 77.0 10.0 4.0 60
LAN 11 413 1021 153 293 61 3 36 154.0 7.0 5.0 114
NYN 13 622 1854 240 509 101 3 61 243.0 22.0 4.0 174
SFN 5 482 1305 198 337 67 6 40 171.0 26.0 7.0 235
TEX 2 198 729 115 200 40 4 28 115.0 21.0 4.0 73
TOR 4 459 1408 187 378 96 2 58 223.0 4.0 2.0 190
df2
 so ibb hbp sh sf gidp
year team
2008 DET 176.0 3.0 10.0 4.0 8.0 28.0
HOU 212.0 3.0 9.0 16.0 6.0 17.0
LAN 141.0 8.0 9.0 3.0 8.0 29.0
NYN 310.0 24.0 23.0 18.0 15.0 48.0
SFN 188.0 51.0 8.0 16.0 6.0 41.0
TEX 140.0 4.0 5.0 2.0 8.0 16.0
TOR 265.0 16.0 12.0 4.0 16.0 38.0
Here is a solution:
import pandas as pd

# Prepare a list of ranges
ranges = [('CIN','LAN'), ('DET','LAN')]

# Declare an empty list of data frames and a list with the existing data frames
df_ranges = []
df_list = [df1, df2]

# Loop over multi-indices
for i, idx_range in enumerate(ranges):
    df = df_list[i]
    row1, row2 = idx_range
    df_ranges.append(df.loc[(slice(None), slice(row1, row2)), :])

# Print the extracted data
print('Extracted data:\n')
print(df_ranges)
Output:
[ stint g ab r h X2b X3b hr rbi sb cs bb
year team
2007 CIN 6 379 745 101 203 35 2 36 125 10 1 105
DET 5 301 1062 162 283 54 4 37 144 24 7 97
HOU 4 311 926 109 218 47 6 14 77 10 4 60
LAN 11 413 1021 153 293 61 3 36 154 7 5 114
so ibb hbp sh sf gidp
year team
2008 DET 176 3 10 4 8 28
HOU 212 3 9 16 6 17
LAN 141 8 9 3 8 29]
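A related idiom, not used in the answer above: with a sorted MultiIndex, pd.IndexSlice expresses the same row-range selection a bit more readably. A sketch, assuming each dataframe has the (year, team) MultiIndex shown:
import pandas as pd

idx = pd.IndexSlice
# Slicing on the second index level requires a lexsorted MultiIndex,
# hence the sort_index() call.
df_ranges = [df.sort_index().loc[idx[:, row1:row2], :]
             for df, (row1, row2) in zip(df_list, ranges)]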

How to retrieve one column from csv file using python?

I'm trying to retrieve the age column from one of the csv files; here is what I have coded so far.
import pandas as pd

# pd.DataFrame.from_csv is deprecated; read_csv with the first column as the index is equivalent
df = pd.read_csv('train.csv', index_col=0)
result = df[(df.Sex == 'female') & (df.Pclass == 3)]
print(result.Age)
# finding the average age of all people who survived
print(len(result))
sum = len(result)
I printed out the age because I wanted to see the list of all ages belonging to rows where the Sex column has the value "female" and the Pclass column has the value 3.
The printed result for some reason shows a number next to each age; I just want it to print the list of ages, that's all.
PassengerId
3 26.0
9 27.0
11 4.0
15 14.0
19 31.0
20 NaN
23 15.0
25 8.0
26 38.0
29 NaN
33 NaN
39 18.0
40 14.0
41 40.0
45 19.0
48 NaN
50 18.0
69 17.0
72 16.0
80 30.0
83 NaN
86 33.0
101 28.0
107 21.0
110 NaN
112 14.5
114 20.0
115 17.0
120 2.0
129 NaN
...
658 32.0
678 18.0
679 43.0
681 NaN
692 4.0
698 NaN
703 18.0
728 NaN
730 25.0
737 48.0
768 30.5
778 5.0
781 13.0
787 18.0
793 NaN
798 31.0
800 30.0
808 18.0
814 6.0
817 23.0
824 27.0
831 15.0
853 9.0
856 18.0
859 24.0
864 NaN
876 15.0
883 22.0
886 39.0
889 NaN
Name: Age, dtype: float64
This is what my program prints. I just want the list of ages in the right-hand column, not the PassengerId column on the left.
Thank you
result.Age is a pandas Series object, and so when you print it, column headers, indices, and data types are shown as well. This is a good thing, because it makes the printed representation of the object much more useful.
If you want to control exactly how the data is displayed, you will need to do some string formatting. Something like this should do what you're asking for:
print('\n'.join(str(x) for x in result.Age))
If you want access to the raw data underlying that column for some reason (usually you can work with the Series just as well), without indices or headers, you can get a numpy array with
result.Age.values
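If the goal is just a cleaner printout, Series.to_string can suppress the index, and tolist() gives a plain Python list - a small sketch:
# Print the ages without the PassengerId index or the dtype footer.
print(result.Age.to_string(index=False))

# Or materialize the column as a plain Python list.
print(result.Age.tolist())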
