find duplicates and mark as variant - python

I'm trying to create a data frame where I mark duplicates as variants in a column. To further illustrate my question:
I have a pandas dataframe like this:
Case ButtonAsInteger
0 1 130
1 1 133
2 1 42
3 2 165
4 2 158
5 2 157
6 3 158
7 3 159
8 3 157
9 4 130
10 4 133
11 4 43
... ... ...
I have converted it into this form:
grouped = activity2.groupby(['Case'])
values = grouped['ButtonAsInteger'].agg('sum')
id_df = grouped['ButtonAsInteger'].apply(lambda x: pd.Series(x.values)).unstack(level=-1)
0 1 2 3 4 5 6 7 8 9
Case
1 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN
2 165.0 158.0 157.0 141.0 142.0 142.0 142.0 142.0 142.0 147.0
3 158.0 159.0 157.0 147.0 166.0 170.0 169.0 130.0 133.0 133.0
4 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN
And now I want to find duplicates and mark each duplicate as a variant. So in this example, Case 1 and 4 should get variant 1. Like this:
Variants 0 1 2 3 4 5 6 7 8 9
Case
1 1 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN
2 2 165.0 158.0 157.0 141.0 142.0 142.0 142.0 142.0 142.0 147.0
3 3 158.0 159.0 157.0 147.0 166.0 170.0 169.0 130.0 133.0 133.0
4 1 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN
I have already tried the method from https://stackoverflow.com/a/44999009, but it doesn't work on my data frame, and unfortunately I don't know why.
It would probably be possible to use a double for loop, checking each row against every other row for duplicates, but I doubt that would be efficient on a large data set.
I have also included my grouping procedure above, because perhaps duplicates can already be handled at that point?

This groups by all columns and returns the group index (+ 1, because the group index is zero-based). I think this should be what you want.
id_df['Variant'] = id_df.groupby(
    id_df.columns.values.tolist()).grouper.group_info[0] + 1
The resulting data frame, given input data like the above:
0 1 2 Variant
Case
1 130 133 42 1
2 165 158 157 3
3 158 159 157 2
4 130 133 42 1
There could be a syntactically nicer way to access the group index, but I didn't find one.
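On recent pandas versions, .grouper is a private attribute and has been deprecated; GroupBy.ngroup() exposes the same zero-based group index directly. A minimal sketch of the same idea (dropna=False is an assumption so that rows containing NaN, like Cases 1 and 4 here, are still grouped rather than dropped):

import pandas as pd

# identical rows get the same group number; ngroup() is zero-based, hence the + 1
id_df['Variant'] = id_df.groupby(id_df.columns.tolist(), dropna=False).ngroup() + 1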

Related

NaN in single column while importing data from URL

I am trying to import all 9 columns of the popular MPG dataset from UCI via a URL. The problem is that, instead of showing the string values, Carname (the ninth column) is populated by NaN.
What is going wrong and how can one fix this? The link to the repository shows that the original dataset has 9 columns, so this should work.
Fetching the URL, we find that the data looks like
18.0 8 307.0 130.0 3504. 12.0 70 1 "chevrolet chevelle malibu"
15.0 8 350.0 165.0 3693. 11.5 70 1 "buick skylark 320"
with distinct string values in the Carname field, but when we import it as
import pandas as pd

# Import raw dataset from URL
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower',
                'Weight', 'Acceleration', 'Model Year', 'Origin', 'Carname']
data = pd.read_csv(url, names=column_names,
                   na_values='?', comment='\t',
                   sep=' ', skipinitialspace=True)
data.head(3)
yielding (with NaN values on Carname)
MPG Cylinders Displacement Horsepower Weight Acceleration Model Year Origin Carname
0 18.0 8 307.0 130.0 3504.0 12.0 70 1 NaN
1 15.0 8 350.0 165.0 3693.0 11.5 70 1 NaN
It's literally in your read_csv call: comment='\t'. The only tabs are before the Carname field, which means the way you read the file explicitly ignores that column.
You can remove the comment parameter and use the more generic separator \s+ instead to split on any whitespace (one or more spaces, a tab, etc.):
>>> pd.read_csv(url, names=column_names, na_values='?', sep='\s+')
MPG Cylinders Displacement Horsepower Weight Acceleration Model Year Origin Carname
0 18.0 8 307.0 130.0 3504.0 12.0 70 1 chevrolet chevelle malibu
1 15.0 8 350.0 165.0 3693.0 11.5 70 1 buick skylark 320
2 18.0 8 318.0 150.0 3436.0 11.0 70 1 plymouth satellite
3 16.0 8 304.0 150.0 3433.0 12.0 70 1 amc rebel sst
4 17.0 8 302.0 140.0 3449.0 10.5 70 1 ford torino
.. ... ... ... ... ... ... ... ... ...
393 27.0 4 140.0 86.0 2790.0 15.6 82 1 ford mustang gl
394 44.0 4 97.0 52.0 2130.0 24.6 82 2 vw pickup
395 32.0 4 135.0 84.0 2295.0 11.6 82 1 dodge rampage
396 28.0 4 120.0 79.0 2625.0 18.6 82 1 ford ranger
397 31.0 4 119.0 82.0 2720.0 19.4 82 1 chevy s-10
[398 rows x 9 columns]
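For completeness, a self-contained version of the fix; the raw string r'\s+' is a minor addition to avoid Python's invalid-escape-sequence warning, otherwise the call is the same as above:

import pandas as pd

url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower',
                'Weight', 'Acceleration', 'Model Year', 'Origin', 'Carname']

# without comment='\t', the tab before the quoted Carname field is just
# whitespace separating the last column instead of a comment marker
data = pd.read_csv(url, names=column_names, na_values='?', sep=r'\s+')
print(data.head(3))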

Subtracting a fixed date from a whole pandas data frame - python

I have data
customer_id purchase_amount date_of_purchase
0 760 25.0 06-11-2009
1 860 50.0 09-28-2012
2 1200 100.0 10-25-2005
3 1420 50.0 09-07-2009
4 1940 70.0 01-25-2013
5 1960 40.0 10-29-2013
6 2620 30.0 09-03-2006
7 3050 50.0 12-04-2007
8 3120 150.0 08-11-2006
9 3260 45.0 10-20-2010
10 3510 35.0 04-05-2013
11 3970 30.0 07-06-2007
12 4000 20.0 11-25-2005
13 4180 20.0 09-22-2010
14 4390 30.0 04-15-2011
15 4750 60.0 02-12-2013
16 4840 30.0 10-14-2005
17 4910 15.0 12-13-2006
18 4950 50.0 05-19-2010
19 4970 30.0 01-12-2006
20 5250 50.0 12-20-2005
Now I want to subtract 01-01-2016 from each row of date_of_purchase. I tried the following, expecting to end up with a new column days_since holding the number of days:
NOW = pd.to_datetime('01/01/2016').strftime('%m-%d-%Y')
gb = customer_purchases_df.groupby('customer_id')
df2 = gb.agg({'date_of_purchase': lambda x: (NOW - x.max()).days})
Any suggestion on how I can achieve this? Thanks in advance.
You can parse the column with pd.to_datetime and use rsub, i.e. reverse subtraction (2016-01-01 minus each date):
pd.to_datetime(df['date_of_purchase']).rsub(pd.to_datetime('2016-01-01')).dt.days
0 2395
1 1190
2 3720
3 2307
4 1071
5 794
6 3407
7 2950
8 3430
9 1899
10 1001
11 3101
12 3689
13 1927
14 1722
15 1053
16 3731
17 3306
18 2053
19 3641
20 3664
Name: date_of_purchase, dtype: int64
I'm assuming the 'date_of_purchase' column already has the datetime dtype.
>>> df
customer_id purchase_amount date_of_purchase
0 760 25.0 2009-06-11
1 860 50.0 2012-09-28
2 1200 100.0 2005-10-25
>>> df['days_since'] = df['date_of_purchase'].sub(pd.to_datetime('01/01/2016')).dt.days.abs()
>>> df
customer_id purchase_amount date_of_purchase days_since
0 760 25.0 2009-06-11 2395
1 860 50.0 2012-09-28 1190
2 1200 100.0 2005-10-25 3720
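If the column is still a string, a minimal end-to-end sketch (assuming the raw dates are in MM-DD-YYYY format, as in the sample data):

import pandas as pd

# parse the string dates, then count the days from each purchase up to 2016-01-01
df['date_of_purchase'] = pd.to_datetime(df['date_of_purchase'], format='%m-%d-%Y')
df['days_since'] = (pd.Timestamp('2016-01-01') - df['date_of_purchase']).dt.days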

Calculate mean of data rows in dataframe with date-headers, dictated by a 'datetime'-column

I have a dataframe with IDs of clients and their expenses for 2014-2018. I want the mean of the expenses per ID, but only the years before a certain date may be taken into account (so the 'Date' column dictates which year columns contribute to the mean).
Example: for index 0 (ID: 12), the date states '2016-03-08', then the mean should be taken from the columns 'y_2014' and 'y_2015', so then for this index, the mean is 111.0.
If the date is too early (e.g. somewhere in 2014 or earlier in this case), then NaN should be returned (see index 6 and 9).
Initial dataframe:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID
0 100.0 122.0 324 632 NaN 2016-03-08 12
1 120.0 159.0 54 452 541.0 2015-04-09 96
2 NaN 164.0 687 165 245.0 2016-02-15 20
3 180.0 421.0 512 184 953.0 2018-05-01 73
4 110.0 654.0 913 173 103.0 2017-08-04 84
5 130.0 NaN 754 124 207.0 2016-07-03 26
6 170.0 256.0 843 97 806.0 2013-02-04 87
7 140.0 754.0 95 101 541.0 2016-06-08 64
8 80.0 985.0 184 84 90.0 2019-03-05 11
9 96.0 65.0 127 130 421.0 2014-05-14 34
Desired output:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID mean
0 100.0 122.0 324 632 NaN 2016-03-08 12 111.0
1 120.0 159.0 54 452 541.0 2015-04-09 96 120.0
2 NaN 164.0 687 165 245.0 2016-02-15 20 164.0
3 180.0 421.0 512 184 953.0 2018-05-01 73 324.25
4 110.0 654.0 913 173 103.0 2017-08-04 84 559.0
5 130.0 NaN 754 124 207.0 2016-07-03 26 130.0
6 170.0 256.0 843 97 806.0 2013-02-04 87 NaN
7 140.0 754.0 95 101 541.0 2016-06-08 64 447
8 80.0 985.0 184 84 90.0 2019-03-05 11 284.6
9 96.0 65.0 127 130 421.0 2014-05-14 34 NaN
Tried code: I'm still working on it; I don't really know how to start, so I have only built the dataframe so far. Probably something with the datetime package has to be done to get the desired dataframe?
import pandas as pd
import numpy as np
import datetime

df = pd.DataFrame({"ID": [12, 96, 20, 73, 84, 26, 87, 64, 11, 34],
                   "y_2014": [100, 120, np.nan, 180, 110, 130, 170, 140, 80, 96],
                   "y_2015": [122, 159, 164, 421, 654, np.nan, 256, 754, 985, 65],
                   "y_2016": [324, 54, 687, 512, 913, 754, 843, 95, 184, 127],
                   "y_2017": [632, 452, 165, 184, 173, 124, 97, 101, 84, 130],
                   "y_2018": [np.nan, 541, 245, 953, 103, 207, 806, 541, 90, 421],
                   "Date": ['2016-03-08', '2015-04-09', '2016-02-15', '2018-05-01', '2017-08-04',
                            '2016-07-03', '2013-02-04', '2016-06-08', '2019-03-05', '2014-05-14']})
print(df)
Due to your naming convention, one needs to extract the years from the column names for comparison purposes. Then you can mask the data and take the mean:
# the years from the column names
data = df.filter(like='y_')
data_years = data.columns.str.extract(r'(\d+)')[0].astype(int)

# the years from Date
years = pd.to_datetime(df.Date).dt.year.values

df['mean'] = data.where(data_years < years[:, None]).mean(1)
Output:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID mean
0 100.0 122.0 324 632 NaN 2016-03-08 12 111.00
1 120.0 159.0 54 452 541.0 2015-04-09 96 120.00
2 NaN 164.0 687 165 245.0 2016-02-15 20 164.00
3 180.0 421.0 512 184 953.0 2018-05-01 73 324.25
4 110.0 654.0 913 173 103.0 2017-08-04 84 559.00
5 130.0 NaN 754 124 207.0 2016-07-03 26 130.00
6 170.0 256.0 843 97 806.0 2013-02-04 87 NaN
7 140.0 754.0 95 101 541.0 2016-06-08 64 447.00
8 80.0 985.0 184 84 90.0 2019-03-05 11 284.60
9 96.0 65.0 127 130 421.0 2014-05-14 34 NaN
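As a note on how the masking works: data_years has length 5 and years[:, None] has shape (10, 1), so the comparison broadcasts to a (10, 5) boolean mask; .where() turns the False cells into NaN, which .mean(1) then skips. A small sketch, assuming the df built above:

# True exactly where the expense year is strictly before the row's purchase year
mask = data_years.values < years[:, None]
print(mask.shape)  # (10, 5)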
One more answer:
import pandas as pd
import numpy as np

df = pd.DataFrame({"ID": [12, 96, 20, 73, 84, 26, 87, 64, 11, 34],
                   "y_2014": [100, 120, np.nan, 180, 110, 130, 170, 140, 80, 96],
                   "y_2015": [122, 159, 164, 421, 654, np.nan, 256, 754, 985, 65],
                   "y_2016": [324, 54, 687, 512, 913, 754, 843, 95, 184, 127],
                   "y_2017": [632, 452, 165, 184, 173, 124, 97, 101, 84, 130],
                   "y_2018": [np.nan, 541, 245, 953, 103, 207, 806, 541, 90, 421],
                   "Date": ['2016-03-08', '2015-04-09', '2016-02-15', '2018-05-01', '2017-08-04',
                            '2016-07-03', '2013-02-04', '2016-06-08', '2019-03-05', '2014-05-14']})

# Subset of the original df used to calculate the mean
subset = df.loc[:, ['y_2014', 'y_2015', 'y_2016', 'y_2017', 'y_2018']]

# An expense value only becomes available once that year has passed, so
# 2015-01-01 is chosen as the label for the 'y_2014' column (etc.) to
# compare against the 'Date' column
subset.columns = ['2015-01-01', '2016-01-01', '2017-01-01', '2018-01-01', '2019-01-01']

s = subset.columns[0:].values < df.Date.values[:, None]
t = s.astype(float)
t[t == 0] = np.nan
df['mean'] = (subset.iloc[:, 0:] * t).mean(1)

print(df)

# Additionally: the sum of expenses before the date in the 'Date' column
df['sum'] = (subset.iloc[:, 0:] * t).sum(1)

print(df)
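This works because both the renamed column labels and the 'Date' column are ISO-formatted strings ('YYYY-MM-DD'), which compare lexicographically in the same order as the dates they represent, e.g.:

# string comparison agrees with chronological order for ISO dates
assert '2015-01-01' < '2016-03-08' < '2017-01-01'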

Difference between dates in Pandas dataframe

This is related to this question, but now I need to find the difference between dates stored as 'YYYY-MM-DD'. Essentially we need the difference between values in the count column, normalized by the number of days between each row.
My dataframe is:
date,site,country_code,kind,ID,rank,votes,sessions,avg_score,count
2017-03-20,website1,US,0,84,226,0.0,15.0,3.370812,53.0
2017-03-21,website1,US,0,84,214,0.0,15.0,3.370812,53.0
2017-03-22,website1,US,0,84,226,0.0,16.0,3.370812,53.0
2017-03-23,website1,US,0,84,234,0.0,16.0,3.369048,54.0
2017-03-24,website1,US,0,84,226,0.0,16.0,3.369048,54.0
2017-03-25,website1,US,0,84,212,0.0,16.0,3.369048,54.0
2017-03-27,website1,US,0,84,228,0.0,16.0,3.369048,58.0
2017-02-15,website2,AU,1,91,144,4.0,148.0,4.727272,521.0
2017-02-16,website2,AU,1,91,144,3.0,147.0,4.727272,524.0
2017-02-20,website2,AU,1,91,100,4.0,148.0,4.727272,531.0
2017-02-21,website2,AU,1,91,118,6.0,149.0,4.727272,533.0
2017-02-22,website2,AU,1,91,114,4.0,151.0,4.727272,534.0
And I'd like to find the difference between each date after grouping by date+site+country+kind+ID tuples.
date,site,country_code,kind,ID,rank,votes,sessions,avg_score,count,day_diff
2017-03-20,website1,US,0,84,226,0.0,15.0,3.370812,0,0
2017-03-21,website1,US,0,84,214,0.0,15.0,3.370812,0,1
2017-03-22,website1,US,0,84,226,0.0,16.0,3.370812,0,1
2017-03-23,website1,US,0,84,234,0.0,16.0,3.369048,0,1
2017-03-24,website1,US,0,84,226,0.0,16.0,3.369048,0,1
2017-03-25,website1,US,0,84,212,0.0,16.0,3.369048,0,1
2017-03-27,website1,US,0,84,228,0.0,16.0,3.369048,4,2
2017-02-15,website2,AU,1,91,144,4.0,148.0,4.727272,0,0
2017-02-16,website2,AU,1,91,144,3.0,147.0,4.727272,3,1
2017-02-20,website2,AU,1,91,100,4.0,148.0,4.727272,7,4
2017-02-21,website2,AU,1,91,118,6.0,149.0,4.727272,3,1
2017-02-22,website2,AU,1,91,114,4.0,151.0,4.727272,1,1
One option would be to convert the date column to a pandas datetime one using pd.to_datetime() and use the diff function, but that results in values of "x days", of type timedelta64. I'd like to use this difference to find the daily average count, so if this can be accomplished in even a single/less painful step, that would work well.
You can use the .dt.days accessor:
In [72]: df['date'] = pd.to_datetime(df['date'])

In [73]: df['day_diff'] = df.groupby(['site', 'country_code', 'kind', 'ID'])['date'] \
                            .diff().dt.days.fillna(0)
In [74]: df
Out[74]:
date site country_code kind ID rank votes sessions avg_score count day_diff
0 2017-03-20 website1 US 0 84 226 0.0 15.0 3.370812 53.0 0.0
1 2017-03-21 website1 US 0 84 214 0.0 15.0 3.370812 53.0 1.0
2 2017-03-22 website1 US 0 84 226 0.0 16.0 3.370812 53.0 1.0
3 2017-03-23 website1 US 0 84 234 0.0 16.0 3.369048 54.0 1.0
4 2017-03-24 website1 US 0 84 226 0.0 16.0 3.369048 54.0 1.0
5 2017-03-25 website1 US 0 84 212 0.0 16.0 3.369048 54.0 1.0
6 2017-03-27 website1 US 0 84 228 0.0 16.0 3.369048 58.0 2.0
7 2017-02-15 website2 AU 1 91 144 4.0 148.0 4.727272 521.0 0.0
8 2017-02-16 website2 AU 1 91 144 3.0 147.0 4.727272 524.0 1.0
9 2017-02-20 website2 AU 1 91 100 4.0 148.0 4.727272 531.0 4.0
10 2017-02-21 website2 AU 1 91 118 6.0 149.0 4.727272 533.0 1.0
11 2017-02-22 website2 AU 1 91 114 4.0 151.0 4.727272 534.0 1.0
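To get the daily average count the question ultimately asks for, one could divide the per-group change in count by day_diff. A sketch (count_diff and daily_avg are hypothetical column names; the first row of each group, where day_diff is 0, is set to 0 to avoid dividing by zero):

import numpy as np

df['count_diff'] = df.groupby(['site', 'country_code', 'kind', 'ID'])['count'].diff()
df['daily_avg'] = np.where(df['day_diff'] > 0,
                           df['count_diff'] / df['day_diff'], 0)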

How to retrieve one column from csv file using python?

I'm trying to retrieve the age column from one of the CSV files; here is what I've coded so far.
df = pd.DataFrame.from_csv('train.csv')
result = df[(df.Sex=='female') & (df.Pclass==3)]
print(result.Age)
# finding the average age of all people who survived
print(len(result))
sum = len(result)
I printed out the ages because I wanted to see all the ages belonging to rows where the Sex column has the value "female" and the Pclass column has the value 3.
For some reason the print result shows a row number next to each age; I just want it to print the list of ages, that's all.
PassengerId
3 26.0
9 27.0
11 4.0
15 14.0
19 31.0
20 NaN
23 15.0
25 8.0
26 38.0
29 NaN
33 NaN
39 18.0
40 14.0
41 40.0
45 19.0
48 NaN
50 18.0
69 17.0
72 16.0
80 30.0
83 NaN
86 33.0
101 28.0
107 21.0
110 NaN
112 14.5
114 20.0
115 17.0
120 2.0
129 NaN
...
658 32.0
678 18.0
679 43.0
681 NaN
692 4.0
698 NaN
703 18.0
728 NaN
730 25.0
737 48.0
768 30.5
778 5.0
781 13.0
787 18.0
793 NaN
798 31.0
800 30.0
808 18.0
814 6.0
817 23.0
824 27.0
831 15.0
853 9.0
856 18.0
859 24.0
864 NaN
876 15.0
883 22.0
886 39.0
889 NaN
Name: Age, dtype: float64
This is what my program prints. I just want the list of ages in the right-hand column, not the PassengerId column on the left.
Thank you
result.Age is a pandas Series object, and so when you print it, column headers, indices, and data types are shown as well. This is a good thing, because it makes the printed representation of the object much more useful.
If you want to control exactly how the data is displayed, you will need to do some string formatting. Something like this should do what you're asking for:
print('\n'.join(str(x) for x in result.Age))
If you want access to the raw data underlying that column for some reason (usually you can work with the Series just as well), without indices or headers, you can get a numpy array with
result.Age.values
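As a side note, pd.DataFrame.from_csv was deprecated and has since been removed from pandas; a sketch of the equivalent on current versions (assuming the first CSV column is the PassengerId index, as your output suggests):

import pandas as pd

df = pd.read_csv('train.csv', index_col=0)  # replaces the removed DataFrame.from_csv
result = df[(df.Sex == 'female') & (df.Pclass == 3)]
print(result['Age'].tolist())  # a plain Python list: values only, no index or header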
