I am trying to import all 9 columns of the popular MPG dataset from UCI from a URL. The problem is that instead of the string values showing, Carname (the ninth column) is populated by NaN.
What is going wrong and how can one fix this? The link to the repository shows that the original dataset has 9 columns, so this should work.
From the URL we find that the data looks like
18.0 8 307.0 130.0 3504. 12.0 70 1 "chevrolet chevelle malibu"
15.0 8 350.0 165.0 3693. 11.5 70 1 "buick skylark 320"
with unique string values in the Carname column, but when we import it as
import pandas as pd

# Import raw dataset from URL
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower',
                'Weight', 'Acceleration', 'Model Year', 'Origin', 'Carname']
data = pd.read_csv(url, names=column_names,
                   na_values='?', comment='\t',
                   sep=' ', skipinitialspace=True)
data.head(3)
yielding (with NaN values in Carname):
MPG Cylinders Displacement Horsepower Weight Acceleration Model Year Origin Carname
0 18.0 8 307.0 130.0 3504.0 12.0 70 1 NaN
1 15.0 8 350.0 165.0 3693.0 11.5 70 1 NaN
It’s literally in your read_csv call: comment='\t'. In this file, the only tabs appear right before the Carname field, so that setting explicitly tells the parser to treat everything from the tab onward as a comment and discard it, which drops the whole column.
You can remove the comment parameter and use the more generic separator r'\s+' instead to split on any whitespace (one or more spaces, a tab, etc.):
>>> pd.read_csv(url, names=column_names, na_values='?', sep=r'\s+')
MPG Cylinders Displacement Horsepower Weight Acceleration Model Year Origin Carname
0 18.0 8 307.0 130.0 3504.0 12.0 70 1 chevrolet chevelle malibu
1 15.0 8 350.0 165.0 3693.0 11.5 70 1 buick skylark 320
2 18.0 8 318.0 150.0 3436.0 11.0 70 1 plymouth satellite
3 16.0 8 304.0 150.0 3433.0 12.0 70 1 amc rebel sst
4 17.0 8 302.0 140.0 3449.0 10.5 70 1 ford torino
.. ... ... ... ... ... ... ... ... ...
393 27.0 4 140.0 86.0 2790.0 15.6 82 1 ford mustang gl
394 44.0 4 97.0 52.0 2130.0 24.6 82 2 vw pickup
395 32.0 4 135.0 84.0 2295.0 11.6 82 1 dodge rampage
396 28.0 4 120.0 79.0 2625.0 18.6 82 1 ford ranger
397 31.0 4 119.0 82.0 2720.0 19.4 82 1 chevy s-10
[398 rows x 9 columns]
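For completeness, a minimal sketch of the corrected import (same URL and column names as in the question; the raw string r'\s+' additionally avoids Python's invalid-escape-sequence warning on newer versions):

import pandas as pd

url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower',
                'Weight', 'Acceleration', 'Model Year', 'Origin', 'Carname']

# No comment='\t' here, so the tab before Carname is treated as ordinary whitespace
data = pd.read_csv(url, names=column_names, na_values='?', sep=r'\s+')
print(data['Carname'].head(3))  # should print the car names rather than NaN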
I have a dataframe with IDs of clients and their expenses for 2014-2018. What I want is the mean of the expenses per ID, but only the years before a certain date may be taken into account when calculating the mean (so the 'Date' column dictates which columns count towards the mean).
Example: for index 0 (ID 12), the date is '2016-03-08', so the mean should be taken over the columns 'y_2014' and 'y_2015'; for this index the mean is 111.0.
If the date is too early (e.g. somewhere in 2014 or earlier in this case), then NaN should be returned (see indices 6 and 9).
Initial dataframe:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID
0 100.0 122.0 324 632 NaN 2016-03-08 12
1 120.0 159.0 54 452 541.0 2015-04-09 96
2 NaN 164.0 687 165 245.0 2016-02-15 20
3 180.0 421.0 512 184 953.0 2018-05-01 73
4 110.0 654.0 913 173 103.0 2017-08-04 84
5 130.0 NaN 754 124 207.0 2016-07-03 26
6 170.0 256.0 843 97 806.0 2013-02-04 87
7 140.0 754.0 95 101 541.0 2016-06-08 64
8 80.0 985.0 184 84 90.0 2019-03-05 11
9 96.0 65.0 127 130 421.0 2014-05-14 34
Desired output:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID mean
0 100.0 122.0 324 632 NaN 2016-03-08 12 111.0
1 120.0 159.0 54 452 541.0 2015-04-09 96 120.0
2 NaN 164.0 687 165 245.0 2016-02-15 20 164.0
3 180.0 421.0 512 184 953.0 2018-05-01 73 324.25
4 110.0 654.0 913 173 103.0 2017-08-04 84 559.0
5 130.0 NaN 754 124 207.0 2016-07-03 26 130.0
6 170.0 256.0 843 97 806.0 2013-02-04 87 NaN
7 140.0 754.0 95 101 541.0 2016-06-08 64 447
8 80.0 985.0 184 84 90.0 2019-03-05 11 284.6
9 96.0 65.0 127 130 421.0 2014-05-14 34 NaN
Tried code: I'm still working on it, as I don't really know how to start with this. I've only built the dataframe so far; probably something with the 'datetime' package has to be done to get the desired dataframe?
import pandas as pd
import numpy as np
import datetime

df = pd.DataFrame({"ID": [12,96,20,73,84,26,87,64,11,34],
                   "y_2014": [100,120,np.nan,180,110,130,170,140,80,96],
                   "y_2015": [122,159,164,421,654,np.nan,256,754,985,65],
                   "y_2016": [324,54,687,512,913,754,843,95,184,127],
                   "y_2017": [632,452,165,184,173,124,97,101,84,130],
                   "y_2018": [np.nan,541,245,953,103,207,806,541,90,421],
                   "Date": ['2016-03-08', '2015-04-09', '2016-02-15', '2018-05-01', '2017-08-04',
                            '2016-07-03', '2013-02-04', '2016-06-08', '2019-03-05', '2014-05-14']})
print(df)
Given your naming convention, one needs to extract the years from the column names for comparison. Then you can mask the data and take the mean:
# the years from the column names
data = df.filter(like='y_')
data_years = data.columns.str.extract(r'(\d+)')[0].astype(int)

# the years from Date
years = pd.to_datetime(df.Date).dt.year.values

# keep only cells whose year is strictly before the row's Date year, then average
df['mean'] = data.where(data_years.to_numpy() < years[:, None]).mean(axis=1)
Output:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID mean
0 100.0 122.0 324 632 NaN 2016-03-08 12 111.00
1 120.0 159.0 54 452 541.0 2015-04-09 96 120.00
2 NaN 164.0 687 165 245.0 2016-02-15 20 164.00
3 180.0 421.0 512 184 953.0 2018-05-01 73 324.25
4 110.0 654.0 913 173 103.0 2017-08-04 84 559.00
5 130.0 NaN 754 124 207.0 2016-07-03 26 130.00
6 170.0 256.0 843 97 806.0 2013-02-04 87 NaN
7 140.0 754.0 95 101 541.0 2016-06-08 64 447.00
8 80.0 985.0 184 84 90.0 2019-03-05 11 284.60
9 96.0 65.0 127 130 421.0 2014-05-14 34 NaN
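To see what the mask actually selects, here is an illustrative sketch (reusing the data, data_years and years variables from the answer above): for row 0 the Date year is 2016, so only y_2014 and y_2015 satisfy data_years < 2016 and survive the where.

# Inspect the boolean mask for the first row
mask = data_years.to_numpy() < years[:, None]
print(pd.DataFrame(mask, columns=data.columns).iloc[0])
# y_2014     True
# y_2015     True
# y_2016    False
# y_2017    False
# y_2018    False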
One more answer:
import pandas as pd
import numpy as np

df = pd.DataFrame({"ID": [12,96,20,73,84,26,87,64,11,34],
                   "y_2014": [100,120,np.nan,180,110,130,170,140,80,96],
                   "y_2015": [122,159,164,421,654,np.nan,256,754,985,65],
                   "y_2016": [324,54,687,512,913,754,843,95,184,127],
                   "y_2017": [632,452,165,184,173,124,97,101,84,130],
                   "y_2018": [np.nan,541,245,953,103,207,806,541,90,421],
                   "Date": ['2016-03-08', '2015-04-09', '2016-02-15', '2018-05-01', '2017-08-04',
                            '2016-07-03', '2013-02-04', '2016-06-08', '2019-03-05', '2014-05-14']})

# Subset of the original df used to calculate the mean
subset = df.loc[:, ['y_2014', 'y_2015', 'y_2016', 'y_2017', 'y_2018']]

# An expense value only counts towards the mean once that year has passed,
# so 'y_2014' is relabelled '2015-01-01' etc. for comparison with the 'Date' column
subset.columns = ['2015-01-01', '2016-01-01', '2017-01-01', '2018-01-01', '2019-01-01']

# Boolean mask of which year columns lie before each row's Date,
# converted to 1.0/NaN so that multiplying drops the excluded cells
s = subset.columns.values < df.Date.values[:, None]
t = s.astype(float)
t[t == 0] = np.nan

df['mean'] = (subset * t).mean(axis=1)
print(df)

# Additionally: the sum of the expenses before the date in the 'Date' column
df['sum'] = (subset * t).sum(axis=1)
print(df)
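One caveat with the sum variant: DataFrame.sum treats an all-NaN row as 0 by default, so rows excluded entirely by the mask (indices 6 and 9) get a sum of 0.0 rather than NaN. If they should stay NaN, min_count helps (a small tweak under that assumption):

# Require at least one valid value, otherwise return NaN instead of 0.0
df['sum'] = (subset * t).sum(axis=1, min_count=1)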
I have a dataframe with IDs of clients and their expenses for 2014-2018. What I want is the mean of the expenses for 2014-2018 for each ID in the dataframe.
There is however one condition: if one of the cells in a row (2014-2018) is empty, NaN should be returned. So I only want the mean to be calculated when all 5 cells in the columns 2014-2018 hold a numeric value.
Initial dataframe:
2014 2015 2016 2017 2018 ID
100 122.0 324 632 NaN 12.0
120 159.0 54 452 541.0 96.0
NaN 164.0 687 165 245.0 20.0
180 421.0 512 184 953.0 73.0
110 654.0 913 173 103.0 84.0
130 NaN 754 124 207.0 26.0
170 256.0 843 97 806.0 87.0
140 754.0 95 101 541.0 64.0
80 985.0 184 84 90.0 11.0
96 65.0 127 130 421.0 34.0
Desired output
2014 2015 2016 2017 2018 ID mean
100 122.0 324 632 NaN 12.0 NaN
120 159.0 54 452 541.0 96.0 265.20
NaN 164.0 687 165 245.0 20.0 NaN
180 421.0 512 184 953.0 73.0 450.00
110 654.0 913 173 103.0 84.0 390.60
130 NaN 754 124 207.0 26.0 NaN
170 256.0 843 97 806.0 87.0 434.40
140 754.0 95 101 541.0 64.0 326.20
80 985.0 184 84 90.0 11.0 284.60
96 65.0 127 130 421.0 34.0 167.80
Tried code: this however only gives me the mean, ignoring the NaN condition. Is there some brief lambda function that can add the condition to the code?
import pandas as pd
import numpy as np

data = pd.DataFrame({"ID": [12,96,20,73,84,26,87,64,11,34],
                     "2014": [100,120,np.nan,180,110,130,170,140,80,96],
                     "2015": [122,159,164,421,654,np.nan,256,754,985,65],
                     "2016": [324,54,687,512,913,754,843,95,184,127],
                     "2017": [632,452,165,184,173,124,97,101,84,130],
                     "2018": [np.nan,541,245,953,103,207,806,541,90,421]})
print(data)

# If a cell in these columns is empty (NaN), then NaN should appear in the new
# 'mean' column; I only want the mean when all 5 cells in the row have a numeric value.
fiveyear = ["2014", "2015", "2016", "2017", "2018"]
data.loc[:, 'mean'] = data[fiveyear].mean(axis=1)
print(data)
Use dropna to remove rows before calculating the mean. Because pandas aligns on the index when assigning the result back, and those rows were removed, the dropped rows end up as NaN:
df['mean'] = df[fiveyear].dropna(how='any').mean(1)
It's also possible to mask the result to only those rows that were all non-null:
df['mean'] = df[fiveyear].mean(1).mask(df[fiveyear].isnull().any(1))
A bit more of a hack, but because you know you need all 5 values you could also use sum, which supports the min_count argument, so anything with fewer than 5 values becomes NaN:
df['mean'] = df[fiveyear].sum(1, min_count=len(fiveyear))/len(fiveyear)
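A quick sanity check, as a hypothetical sketch reusing the question's frame (and its numpy import) with df = data, that all three approaches agree:

df = data
m1 = df[fiveyear].dropna(how='any').mean(axis=1).reindex(df.index)
m2 = df[fiveyear].mean(axis=1).mask(df[fiveyear].isnull().any(axis=1))
m3 = df[fiveyear].sum(axis=1, min_count=len(fiveyear)) / len(fiveyear)
assert m1.equals(m2)                        # identical values and NaN placement
assert np.allclose(m2, m3, equal_nan=True)  # sum/5 agrees up to float rounding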
This is the same as @ALollz's answer but with a flexible way to detect the year columns regardless of how many there are in the df:
# get the year columns in a list
yearsCols = [c for c in df if c != 'ID']
# calculate the mean
df['mean'] = df[yearsCols].dropna(how='any').mean(axis=1)
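If the year columns follow a strict four-digit naming pattern, DataFrame.filter is another way to pick them up (a sketch under that assumption):

# Assumes the year columns are named exactly four digits, e.g. '2014'
yearsCols = df.filter(regex=r'^\d{4}$').columns.tolist()
df['mean'] = df[yearsCols].dropna(how='any').mean(axis=1)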
I'm trying to create a data frame where I add duplicates as variants in a column. To further illustrate my question:
I have a pandas dataframe like this:
Case ButtonAsInteger
0 1 130
1 1 133
2 1 42
3 2 165
4 2 158
5 2 157
6 3 158
7 3 159
8 3 157
9 4 130
10 4 133
11 4 43
... ... ...
I have converted it into this form:
grouped = activity2.groupby(['Case'])
values = grouped['ButtonAsInteger'].agg('sum')
id_df = grouped['ButtonAsInteger'].apply(lambda x: pd.Series(x.values)).unstack(level=-1)
0 1 2 3 4 5 6 7 8 9
Case
1 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN
2 165.0 158.0 157.0 141.0 142.0 142.0 142.0 142.0 142.0 147.0
3 158.0 159.0 157.0 147.0 166.0 170.0 169.0 130.0 133.0 133.0
4 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN
And now I want to find duplicates and mark each duplicate as a variant. So in this example, Case 1 and 4 should get variant 1. Like this:
Variants 0 1 2 3 4 5 6 7 8 9
Case
1 1 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN
2 2 165.0 158.0 157.0 141.0 142.0 142.0 142.0 142.0 142.0 147.0
3 3 158.0 159.0 157.0 147.0 166.0 170.0 169.0 130.0 133.0 133.0
4 1 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN
I have already tried the method from https://stackoverflow.com/a/44999009, but it doesn't work on my data frame, and unfortunately I don't know why.
It would probably be possible to apply a double for loop: for each row, check whether a duplicate exists elsewhere in the record. Whether that is efficient on a large dataset, I don't know.
I have also included my grouping procedure above, because perhaps duplicates can already be handled at that point?
This groups by all columns and returns the group index (+ 1 because zero-based indexing is the default). I think this should be what you want.
id_df['Variant'] = id_df.groupby(
    id_df.columns.values.tolist()).grouper.group_info[0] + 1
The resulting data frame, given input data like the above:
0 1 2 Variant
Case
1 130 133 42 1
2 165 158 157 3
3 158 159 157 2
4 130 133 42 1
There could be a syntactically nicer way to access the group index, but I didn't find one.
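As an aside, newer pandas versions expose this group index directly through ngroup(), which avoids reaching into grouper internals; a sketch (dropna=False, available since pandas 1.1, keeps rows whose key columns contain NaN, as in Cases 1 and 4 of the wider frame):

# Number identical rows via the public ngroup() API (0-based, hence the + 1)
id_df['Variant'] = id_df.groupby(id_df.columns.tolist(), dropna=False).ngroup() + 1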
I'm trying to extract row ranges from a list of dataframes. Each dataframe might not have the same data; therefore I have a list of possible index ranges that I would like loc to loop over, i.e. from the code sample below I might want CIN to LAN, but on another dataframe the CIN row doesn't exist, so I would want DET to LAN or HOU to LAN.
So I was thinking of putting them in a list and iterating over the list, i.e.
for df in dfs:
    ranges = [[df.loc["CIN":"LAN"]], [df.loc["DET":"LAN"]]]
    extracted_ranges = (i for i in ranges)
I'm not sure how you would iterate over a list and feed it into loc, or perhaps .query().
df1:
stint g ab r h X2b X3b hr rbi sb cs bb
year team
2007 CIN 6 379 745 101 203 35 2 36 125.0 10.0 1.0 105
DET 5 301 1062 162 283 54 4 37 144.0 24.0 7.0 97
HOU 4 311 926 109 218 47 6 14 77.0 10.0 4.0 60
LAN 11 413 1021 153 293 61 3 36 154.0 7.0 5.0 114
NYN 13 622 1854 240 509 101 3 61 243.0 22.0 4.0 174
SFN 5 482 1305 198 337 67 6 40 171.0 26.0 7.0 235
TEX 2 198 729 115 200 40 4 28 115.0 21.0 4.0 73
TOR 4 459 1408 187 378 96 2 58 223.0 4.0 2.0 190
df2:
so ibb hbp sh sf gidp
year team
2008 DET 176.0 3.0 10.0 4.0 8.0 28.0
HOU 212.0 3.0 9.0 16.0 6.0 17.0
LAN 141.0 8.0 9.0 3.0 8.0 29.0
NYN 310.0 24.0 23.0 18.0 15.0 48.0
SFN 188.0 51.0 8.0 16.0 6.0 41.0
TEX 140.0 4.0 5.0 2.0 8.0 16.0
TOR 265.0 16.0 12.0 4.0 16.0 38.0
Here is a solution:
import pandas as pd

# Prepare a list of ranges
ranges = [('CIN','LAN'), ('DET','LAN')]

# Declare an empty list of data frames and a list with the existing data frames
df_ranges = []
df_list = [df1, df2]

# Loop over the multi-indexed frames, pairing each with its range
for i, idx_range in enumerate(ranges):
    df = df_list[i]
    row1, row2 = idx_range
    df_ranges.append(df.loc[(slice(None), slice(row1, row2)), :])

# Print the extracted data
print('Extracted data:\n')
print(df_ranges)
Output:
[ stint g ab r h X2b X3b hr rbi sb cs bb
year team
2007 CIN 6 379 745 101 203 35 2 36 125 10 1 105
DET 5 301 1062 162 283 54 4 37 144 24 7 97
HOU 4 311 926 109 218 47 6 14 77 10 4 60
LAN 11 413 1021 153 293 61 3 36 154 7 5 114
so ibb hbp sh sf gidp
year team
2008 DET 176 3 10 4 8 28
HOU 212 3 9 16 6 17
LAN 141 8 9 3 8 29]
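If instead each dataframe should be probed with several candidate ranges, falling back when the first start label (e.g. 'CIN') is absent, as the question describes, one hedged sketch (assuming the 'team' index level is lexsorted; first_available_range is a hypothetical helper):

def first_available_range(df, candidates):
    # Return the slice for the first candidate whose start label exists
    teams = df.index.get_level_values('team')
    for start, stop in candidates:
        if start in teams:
            return df.loc[(slice(None), slice(start, stop)), :]
    return None  # no candidate start label found

extracted = [first_available_range(df, ranges) for df in df_list]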