I have a dataset like this:
state  sex  birth  player
QLD    m    1993   Dave
QLD    m    1992   Rob
Now I would like to create an additional column, ID, which is equal to the row number plus one.
df = df.assign(ID=range(len(df)))
But unfortunately the first ID is zero. How can I make the first ID begin at 1, and so on?
I want this output:
state  sex  birth  player  ID
QLD    m    1993   Dave    1
QLD    m    1992   Rob     2
but I got this:
state  sex  birth  player  ID
QLD    m    1993   Dave    0
QLD    m    1992   Rob     1
How can I add an additional column in pandas that starts at one and gives every row a unique number: 2 for the second row, 3 for the third, and so on?
You can try this:
import pandas as pd

# build a 0-based range and shift it by one so the first ID is 1
df['ID'] = pd.Series(range(len(df))) + 1
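As a side note, the intermediate Series isn't strictly needed; a minimal equivalent, assuming you only want 1-based row numbers:

# build the 1-based range directly
df['ID'] = range(1, len(df) + 1)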
I have the following table:
Father  Son    Year
James   Harry  1999
James   Alfi   2001
Corey   Kyle   2003
I would like to add a fourth column so that the table looks like the one below. It's supposed to show which child of each father was born first, second, third, and so on. How can I do that?
Father  Son    Year  Child
James   Harry  1999  1
James   Alfi   2001  2
Corey   Kyle   2003  1
Here is one way to do it, using cumcount:
# group by Father and take a cumulative count, offset by 1
df['Child'] = df.groupby(['Father'])['Son'].cumcount() + 1
df
Father Son Year Child
0 James Harry 1999 1
1 James Alfi 2001 2
2 Corey Kyle 2003 1
It assumes that df is sorted by Father and Year. If not, then:
df['Child'] = df.sort_values(['Father', 'Year']).groupby(['Father'])['Son'].cumcount() + 1
df
Note that cumcount keeps the index of the rows it was computed on, so the result aligns back to df on assignment even after sorting.
Here is an idea for solving this using the groupby and cumsum functions.
This assumes that the rows are ordered so that a younger sibling is always below their elder brother, and that all children of the same father sit in a contiguous block of rows.
Assume we have the following setup:
import pandas as pd

df = pd.DataFrame({'Father': ['James', 'James', 'Corey'],
                   'Son': ['Harry', 'Alfi', 'Kyle'],
                   'Year': [1999, 2001, 2003]})
Then here is the trick: we group the siblings with the same father into a groupby object, then compute the cumulative sum of a column of ones to assign a sequential number to each row.
df['temp_column'] = 1
df['Child'] = df.groupby('Father')['temp_column'].cumsum()
df = df.drop(columns='temp_column')
The result would look like this:
Father Son Year Child
0 James Harry 1999 1
1 James Alfi 2001 2
2 Corey Kyle 2003 1
Now, to make the solution more general, consider reordering the rows to satisfy the preconditions before applying the solution, and then, if necessary, restoring the dataframe to its original order.
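A minimal sketch of that, assuming the frame above (sort_index restores the original order because the index survives sort_values):

# sort so each father's children appear in birth order
df = df.sort_values(['Father', 'Year'])
df['temp_column'] = 1
df['Child'] = df.groupby('Father')['temp_column'].cumsum()

# drop the helper column and restore the original row order
df = df.drop(columns='temp_column').sort_index()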
I have a dataframe that looks like the one below.
id name tag location date
1 John 34 FL 01/12/1990
1 Peter 32 NC 01/12/1990
1 Dave 66 SC 11/25/1990
1 Mary 12 CA 03/09/1990
1 Sue 29 NY 07/10/1990
1 Eve 89 MA 06/12/1990
: : : : :
n John 34 FL 01/12/2000
n Peter 32 NC 01/12/2000
n Dave 66 SC 11/25/1999
n Mary 12 CA 03/09/1999
n Sue 29 NY 07/10/1998
n Eve 89 MA 06/12/1997
I need to find the location information based on the id column, but with one condition: I only need the earliest date. For example, the earliest date for the id=1 group is 01/12/1990, which means the locations are FL and NC. I then want to apply this to every id group and get the top 3 locations. I have written code to do this:
# Get the earliest date based on the id group
df_ear = df.loc[df.groupby('id')['date'].idxmin()]

# Count the occurrences of the location
df_ear['location'].value_counts()
The code works fine, but it cannot return more than one location per group (because of my first line of code) when several rows share the same earliest date. For example, the id=1 group only returns FL instead of FL and NC. How can I fix my code to handle the case where more than one row has the earliest date?
Thanks!
Use GroupBy.transform to broadcast the minimal date per group into a Series the same length as the dataframe, so you can compare it to the date column with boolean indexing:
# parse the strings into real datetimes so min() compares chronologically
df['date'] = pd.to_datetime(df['date'])

# keep every row whose date equals its group's minimum
df_ear = df[df.groupby('id')['date'].transform('min').eq(df['date'])]
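From there, the value_counts call from the question still applies; a minimal follow-up to get the top 3 locations:

# count location occurrences among the earliest-date rows and keep the 3 most common
df_ear['location'].value_counts().head(3)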
I am trying to do some equivalent of COUNTIF in pandas. I am trying to get my head around doing it with groupby, but I am struggling because my logical grouping condition is dynamic.
Say I have a list of customers, and the day on which they visited. I want to identify new customers based on 2 logical conditions
They must be the same customer (same Guest ID)
They must have been there on the previous day
If both conditions are met, they are a returning customer. If not, they are new (hence the newby = 1 - ... below to identify new customers).
I managed to do this with a for loop, but obviously performance is terrible and this goes pretty much against the logic of Pandas.
How can I wrap the following code into something smarter than a loop?
import numpy as np

for i in range(0, len(df)):
    df.loc[i, "newby"] = 1 - np.sum((df["Day"] == df.iloc[i]["Day"] - 1) & (df["Guest ID"] == df.iloc[i]["Guest ID"]))
This post does not help, as the condition there is static. I would like to avoid introducing dummy columns (such as by transposing the df), because I will have many categories (many customer names) and would like to build more complex logical statements. I do not want to run the risk of ending up with many auxiliary columns.
I have the following input
df
Day Guest ID
0 3230 Tom
1 3230 Peter
2 3231 Tom
3 3232 Peter
4 3232 Peter
and expect this output
df
Day Guest ID newby
0 3230 Tom 1
1 3230 Peter 1
2 3231 Tom 0
3 3232 Peter 1
4 3232 Peter 1
Note that elements 3 and 4 are not necessarily duplicates - given there might be additional, varying columns (such as their order).
Do:
# ensure the df is sorted by date
df = df.sort_values('Day')
# group by customer and find the diff within each group
df['newby'] = (df.groupby('Guest ID')['Day'].transform('diff').fillna(2) > 1).astype(int)
print(df)
Output
Day Guest ID newby
0 3230 Tom 1
1 3230 Peter 1
2 3231 Tom 0
3 3232 Peter 1
UPDATE
If multiple visits are allowed per day, you could do:
# only keep unique visits per day
uniques = df.drop_duplicates()
# ensure the df is sorted by date
uniques = uniques.sort_values('Day')
# group by customer and find the diff within each group
uniques['newby'] = (uniques.groupby('Guest ID')['Day'].transform('diff').fillna(2) > 1).astype(int)
# merge the uniques visits back into the original df
res = df.merge(uniques, on=['Day', 'Guest ID'])
print(res)
Output
Day Guest ID newby
0 3230 Tom 1
1 3230 Peter 1
2 3231 Tom 0
3 3232 Peter 1
4 3232 Peter 1
As an alternative, without sorting or merging, you could do:
# every (day + 1, guest) pair for which the guest visited on day; membership means "was there the previous day"
lookup = {(day + 1, guest) for day, guest in df[['Day', 'Guest ID']].value_counts().to_dict()}

# a row is a new visit when its (Day, Guest ID) pair is absent from the lookup
df['newby'] = (~pd.MultiIndex.from_arrays([df['Day'], df['Guest ID']]).isin(lookup)).astype(int)
print(df)
Output
Day Guest ID newby
0 3230 Tom 1
1 3230 Peter 1
2 3231 Tom 0
3 3232 Peter 1
4 3232 Peter 1
I am relatively new to pandas / python.
I have a list of names and dates. I want to group the entries by Name and count the number of entries for 'after 2016' and 'before 2016'. The count should be added to a new column.
My input:
Name Date
Marc 2006
Carl 2003
Carl 2002
Carl 1990
Marc 1999
Max 2016
Max 2014
Marc 2006
Carl 2003
Carl 2002
Carl 2019
Marc 1999
Max 2016
Max 2014
And the output should look like this:
      Before 2016  Count
Marc  1            4
Marc  0            0
Carl  1            5
Carl  0            1
Max   1            2
Max   0            2
So the output should have two entries for each Name: one with a count of entries before 2016 and one with a count after. Additionally, a column which simply states 1 for before 2016 and 0 for after.
As mentioned before, I am quite a beginner. I was able to count the entries with the condition on the year:
df.groupby('Name')['Date'].apply(lambda x: (x < 2016).sum()).reset_index(name='count')
But honestly, I am not quite sure what to do next. Maybe somebody could point me in the right direction.
You can pass apply a function which returns a 2x2 dataframe, something like this:
def counting(x):
    bef = (x < 2016).sum()
    aft = (x > 2016).sum()
    return pd.DataFrame([[1, bef], [0, aft]], index=[x.name, x.name],
                        columns=["before 2016", "Count"])

ddf = df.groupby('Name')['Date'].apply(counting).reset_index(level=0, drop=True)
ddf is:
before 2016 Count
Carl 1 5
Carl 0 1
Marc 1 4
Marc 0 0
Max 1 2
Max 0 0
You can group by an external series having the same length as the dataframe:
# boolean mask cast to 0/1: 1 means the year is before 2016
s = df['Date'].lt(2016).astype('int')
s.name = 'Before 2016'

# group by the Name column and the external series at the same time
df.groupby(['Name', s]).count()
Result:
                  Date
Name Before 2016
Carl 0               1
     1               5
Marc 1               4
Max  0               2
     1               2
lt stands for "less than". Other comparison functions are le (less than or equal), gt (greater than), ge (greater than or equal), and eq (equal).
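For illustration, the method forms are interchangeable with the plain operators:

df['Date'].lt(2016)   # same as df['Date'] < 2016
df['Date'].ge(2016)   # same as df['Date'] >= 2016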
From what I understand, you need to populate both the 1 and 0 rows for each name. Try pivot_table combined with unstack:
(df.assign(Before=df['Date'].lt(2016).view('i1'))
   .pivot_table('Date', 'Name', 'Before', aggfunc='count', fill_value=0)
   .unstack()
   .sort_index(level=1)
   .reset_index(0, name='Count'))
Before Count
Name
Carl 0 1
Carl 1 5
Marc 0 0
Marc 1 4
Max 0 2
Max 1 2
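As an aside, view('i1') reinterprets the boolean mask as 8-bit integers; astype(int) is the more conventional equivalent and gives the same 0/1 column:

# more common spelling of the boolean-to-integer cast
df['Date'].lt(2016).astype(int)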
Just got back into coding, but came across this issue.
How do I get a single string into a dataframe, where every five lines of the string become the columns of one row?
The string looks like this:
"Jane Doe
Male-52
City- NYC
$36,000
total salary
Amy sam
Female-65
City- NYC
$38,000
total salary
.....
.....
and so on
"
How do I get it into a data frame like this:
Name Sex age City Total Salary
Jane Doe Male 52 NYC 36,000
Amy Sam Female 65 NYC 38,000
......
My code is:
elements = driver.find_elements_by_xpath("""//*[@id="file"]""")
data = "".join([element.text for element in elements])
import re
import pandas
s = """Jane Doe
Male-52
City- NYC
$36,000
total salary
Amy sam
Female-65
City- NYC
$38,000
total salary"""
# capture name, sex, age, city and salary from each five-line record
df = pandas.DataFrame(re.findall(r"(\w+ \w+)\n(\w+)-(\d+)\nCity- (\w+)\n\$(.*)", s),
                      columns=["name", "sex", "age", "city", "salary"])
print(df)
This is one way to solve it.
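If numeric salaries are needed later, a small optional follow-up (assuming the comma format shown in the sample):

# turn captured strings like "36,000" into integers
df["salary"] = df["salary"].str.replace(",", "", regex=False).astype(int)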
This should work for any number of columns; you would just have to pass appropriate column names to the dataframe afterwards. You will also have to clean up the columns and delete unnecessary ones after the reshaping is done; a sketch of that cleanup follows the output below.
Edited to include the entire code and output
import pandas as pd
mystr = """Jane Doe
Male-52
City- NYC
$36,000
total salary
Amy sam
Female-65
City- NYC
$38,000
total salary"""
num_columns = 5

# one Series element per line of the raw string
df = pd.Series(mystr.split("\n"), name="data")

# reshape the flat series into rows of num_columns values each
pd.DataFrame(df.values.reshape((int(df.shape[0] / num_columns), num_columns)))
Output:
          0          1          2        3             4
0  Jane Doe    Male-52  City- NYC  $36,000  total salary
1   Amy sam  Female-65  City- NYC  $38,000  total salary
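A hedged sketch of the cleanup step mentioned above; the column names and the split/strip rules are assumptions based on the sample data:

# name the reshaped columns (assumed layout: name, sex-age, city, salary, note)
out = pd.DataFrame(df.values.reshape(-1, num_columns),
                   columns=["name", "sex_age", "city", "salary", "note"])

# split strings like "Male-52" into separate sex and age columns
out[["sex", "age"]] = out["sex_age"].str.split("-", expand=True)

# strip the "City- " prefix and the leading "$"
out["city"] = out["city"].str.replace("City- ", "", regex=False)
out["salary"] = out["salary"].str.lstrip("$")

# drop the helper columns
out = out.drop(columns=["sex_age", "note"])
print(out)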