For example, I have a dataframe like this:
data = {'id': [1, 1, 1, 2, 2],
        'value': ['red', 'red and blue', 'yellow', 'oak', 'oak wood']}
df = pd.DataFrame(data, columns=['id', 'value'])
I want:
id value count
1 red 2
1 blue 1
1 yellow 1
2 oak 2
2 wood 1
Many thanks!
Solution for pandas 0.25+: use DataFrame.explode on the lists created by Series.str.split, then count with GroupBy.size:
df1 = (df.assign(value = df['value'].str.split())
         .explode('value')
         .groupby(['id','value'], sort=False)
         .size()
         .reset_index(name='count'))
print (df1)
id value count
0 1 red 2
1 1 and 1
2 1 blue 1
3 1 yellow 1
4 2 oak 2
5 2 wood 1
For lower pandas versions, use DataFrame.set_index with Series.str.split and expand=True to get a DataFrame, reshape it with DataFrame.stack, create columns from the MultiIndex Series, and use the same solution as above:
df1 = (df.set_index('id')['value']
         .str.split(expand=True)
         .stack()
         .reset_index(name='value')
         .groupby(['id','value'], sort=False)
         .size()
         .reset_index(name='count'))
print (df1)
id value count
0 1 red 2
1 1 and 1
2 1 blue 1
3 1 yellow 1
4 2 oak 2
5 2 wood 1
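An alternative sketch, if you prefer counting with the standard library: build a Counter of the words per id and flatten it back into columns. This is not the method above, just an equivalent spelling, and the row order may differ.
from collections import Counter

# Count every word per id, then turn the (id, word) MultiIndex into columns.
df1 = (df.groupby('id')['value']
         .apply(lambda s: pd.Series(Counter(' '.join(s).split())))
         .rename_axis(['id', 'value'])
         .reset_index(name='count'))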
I am sorry for being a noob, but I can't find a solution to my problem despite hours of searching.
import pandas as pd
df1 = pd.read_excel('df1.xlsx')
df1.set_index('time')
print(df1)
df2 = pd.read_excel('df2.xlsx')
df2.set_index('time')
print(df2)
new_df = pd.merge(df1, df2, how='outer')
print(new_df)
df1
time bought
0 1 0
1 2 0
2 3 0
3 4 0
4 5 1
df2
time bought
0 3 0
1 4 0
2 5 0
3 6 0
4 7 0
new_df
time bought
0 1 0
1 2 0
2 3 0
3 4 0
4 5 1
5 5 0
6 6 0
7 7 0
What I want is:
1. updating df1 (existing data) with df2 (new data feed); when it comes to the bought value, df1's data should come first
2. new_df should have all unique time values from df1 and df2, without duplicates
I tried every method I found, but none produced my desired outcome, or they created unnecessary duplicates as above (two rows with a time value of 5).
The merge method created _x/_y suffixes or duplicates, and join() didn't work either.
What I desire should look like:
new_df
time bought
0 1 0
1 2 0
2 3 0
3 4 0
4 5 1
5 6 0
6 7 0
Thank you in advance
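For anyone reproducing this without the Excel files, the two frames can be built directly from the printouts above (a minimal sketch; the read_excel calls in the question are replaced by literal data):
import pandas as pd

df1 = pd.DataFrame({'time': [1, 2, 3, 4, 5], 'bought': [0, 0, 0, 0, 1]})
df2 = pd.DataFrame({'time': [3, 4, 5, 6, 7], 'bought': [0, 0, 0, 0, 0]})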
If you perform the merge as you have done, all you need to do is remove the duplicate rows, keeping the row that came from df1 (the existing data).
drop_duplicates() takes the kwarg subset, which accepts a list of columns to check, and keep, which sets which row to keep when there are duplicates.
In this case we only need to check for duplicates in the time column, and we keep the first row.
import pandas as pd

df1 = pd.read_excel('df1.xlsx')
df2 = pd.read_excel('df2.xlsx')

# Outer merge keeps every time value from both frames; df1's rows come first.
new_df = pd.merge(df1, df2, how='outer')
# For duplicated times, keep the first occurrence, i.e. df1's row.
new_df = new_df.drop_duplicates(subset=['time'], keep='first')
print(new_df)
Output:
time bought
0 1 0
1 2 0
2 3 0
3 4 0
4 5 1
5 6 0
6 7 0
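The same result can also be reached without merge: concatenate the frames and drop duplicate times, keeping df1's row. A sketch equivalent to the code above, assuming time uniquely identifies a row within each frame:
# df1's rows come first in the concat, so keep='first' prefers the existing data.
new_df = (pd.concat([df1, df2])
            .drop_duplicates(subset=['time'], keep='first')
            .sort_values('time')
            .reset_index(drop=True))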
I have a dataframe:
df =
   col1  Num
0     1    4
1     1    4
2     2    5
3     2    1
4     2    1
5     3    2
I want to add all the numbers and show the total.
So I will get:
col1  Sum
   1    8
   2    7
   3    2
Try this:
df.groupby('col1').sum()
If you want the new column to have the name 'Sum' as in your example, you could do the following:
df1 = df.groupby('col1').sum()
df1.columns = ['Sum']
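The two steps can also be combined into one chain that keeps col1 as a regular column; an equivalent sketch, not a different method:
# as_index=False keeps col1 as a column; rename supplies the 'Sum' header.
df1 = (df.groupby('col1', as_index=False)['Num']
         .sum()
         .rename(columns={'Num': 'Sum'}))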
I have a dataframe with this format:
ID measurement_1 measurement_2
0 3 NaN
1 NaN 5
2 NaN 7
3 NaN NaN
I want to combine to:
ID measurement measurement_type
0 3 1
1 5 2
2 7 2
For each row there will be a value in either the measurement_1 or the measurement_2 column, not in both; the other column will be NaN.
In some rows both columns will be NaN.
I want to add a column for the measurement type (depending on which column holds the value), move the actual value out of the two columns into a single one, and remove the rows that have NaN in both columns.
Is there an easy way of doing this?
Thanks!
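For reference, a minimal frame to test the answers below, reconstructed from the question's table (the measurement columns are float because of the NaNs):
import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': [0, 1, 2, 3],
                   'measurement_1': [3, np.nan, np.nan, np.nan],
                   'measurement_2': [np.nan, 5, 7, np.nan]})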
Use DataFrame.stack to reshape the dataframe, then reset_index, and use DataFrame.assign to build the measurement_type column by applying Series.str.split + Series.str[-1] to level_1:
df1 = (
    df.set_index('ID').stack().reset_index(name='measurement')
      .assign(measurement_type=lambda x: x.pop('level_1').str.split('_').str[-1])
)
Result:
print(df1)
ID measurement measurement_type
0 0 3.0 1
1 1 5.0 2
2 2 7.0 2
Maybe combine_first could help?
import numpy as np

df["measurement"] = df["measurement_1"].combine_first(df["measurement_2"])
df["measurement_type"] = np.where(df["measurement_1"].notnull(), 1, 2)
# Drop the original columns and the rows where both measurements were NaN.
df = (df.drop(columns=["measurement_1", "measurement_2"])
        .dropna(subset=["measurement"]))
ID measurement measurement_type
0 0 3 1
1 1 5 2
2 2 7 2
Set a threshold with dropna(thresh=2) to drop any row with fewer than two non-NaN values (i.e. rows where both measurement columns are missing). Use df.assign to fillna() measurement_1 from measurement_2, and apply np.where on measurement_2 for the type:
import numpy as np

df = (df.dropna(thresh=2)
        .assign(measurement=lambda x: x.measurement_1.fillna(x.measurement_2),
                measurement_type=lambda x: np.where(x.measurement_2.isna(), 1, 2))
        .drop(columns=['measurement_1', 'measurement_2']))
ID measurement measurement_type
0 0 3 1
1 1 5 2
2 2 7 2
You could use pandas melt:
(
    df.melt("ID", var_name="measurement_type", value_name="measurement")
      .dropna()
      .assign(measurement_type=lambda x: x.measurement_type.str[-1])
      .iloc[:, [0, -1, 1]]
      .astype("int8")
)
or wide_to_long:
(
    pd.wide_to_long(df, stubnames="measurement", i="ID",
                    j="measurement_type", sep="_")
      .dropna()
      .reset_index()
      .astype("int8")
      .iloc[:, [0, -1, 1]]
)
ID measurement measurement_type
0 0 3 1
1 1 5 2
2 2 7 2
I have 2 dataframes: one contains some general information about football players, and the second contains other information, like matches won, for each player. They both have an "id" column. However, they are not the same length.
What I want to do is create a new dataframe with 2 columns: "x" from the first dataframe and "y" from the second, ONLY where the "id" column holds the same value in both dataframes. That way I can match the "x" and "y" columns that belong to the same person.
I tried to do it using the concat function:
pd.concat([firstdataframe['x'], seconddataframe['y']], axis=1, keys=['x', 'y'])
But I couldn't work out how to apply the condition that "id" must be equal in both dataframes.
It seems you need merge with the default inner join; this works cleanly when the values in the id columns are unique:
df = pd.merge(df1[['id','x']], df2[['id','y']], on='id')
Sample:
df1 = pd.DataFrame({'id':[1,2,3],'x':[4,3,8]})
print (df1)
id x
0 1 4
1 2 3
2 3 8
df2 = pd.DataFrame({'id':[1,2],'y':[7,0]})
print (df2)
id y
0 1 7
1 2 0
df = pd.merge(df1[['id','x']], df2[['id','y']], on='id')
print (df)
id x y
0 1 4 7
1 2 3 0
A solution with concat is possible, but a bit more complicated, because it needs an inner join on the indexes:
df = (pd.concat([df1.set_index('id')['x'],
                 df2.set_index('id')['y']], axis=1, join='inner')
        .reset_index())
print (df)
id x y
0 1 4 7
1 2 3 0
EDIT:
If the ids are not unique, duplicates create all combinations and the output dataframe is expanded:
df1 = pd.DataFrame({'id':[1,2,3],'x':[4,3,8]})
print (df1)
id x
0 1 4
1 2 3
2 3 8
df2 = pd.DataFrame({'id':[1,2,1,1],'y':[7,0,4,2]})
print (df2)
id y
0 1 7
1 2 0
2 1 4
3 1 2
df = pd.merge(df1[['id','x']], df2[['id','y']], on='id')
print (df)
id x y
0 1 4 7
1 1 4 4
2 1 4 2
3 2 3 0
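If that expansion is not wanted, one option is to deduplicate the id column before merging; a sketch, assuming you want to keep each id's first y:
# Keep only the first row per id in df2, so every id matches at most once.
df = pd.merge(df1[['id', 'x']],
              df2.drop_duplicates('id')[['id', 'y']],
              on='id')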
Is there a shorter way of dropping a column MultiIndex level (in my case, basic_amt) other than transposing it twice?
In [704]: test
Out[704]:
basic_amt
Faculty NSW QLD VIC All
All 1 1 2 4
Full Time 0 1 0 1
Part Time 1 0 2 3
In [705]: test.reset_index(level=0, drop=True)
Out[705]:
basic_amt
Faculty NSW QLD VIC All
0 1 1 2 4
1 0 1 0 1
2 1 0 2 3
In [711]: test.transpose().reset_index(level=0, drop=True).transpose()
Out[711]:
Faculty NSW QLD VIC All
All 1 1 2 4
Full Time 0 1 0 1
Part Time 1 0 2 3
Another solution is to use MultiIndex.droplevel with rename_axis (new in pandas 0.18.0):
import pandas as pd

cols = pd.MultiIndex.from_arrays([['basic_amt']*4,
                                  ['NSW', 'QLD', 'VIC', 'All']],
                                 names=[None, 'Faculty'])
idx = pd.Index(['All', 'Full Time', 'Part Time'])
df = pd.DataFrame([(1, 1, 2, 4),
                   (0, 1, 0, 1),
                   (1, 0, 2, 3)], index=idx, columns=cols)
print (df)
basic_amt
Faculty NSW QLD VIC All
All 1 1 2 4
Full Time 0 1 0 1
Part Time 1 0 2 3
df.columns = df.columns.droplevel(0)
# pandas 0.18.0 and higher
df = df.rename_axis(None, axis=1)
# pandas below 0.18.0
# df.columns.name = None
print (df)
NSW QLD VIC All
All 1 1 2 4
Full Time 0 1 0 1
Part Time 1 0 2 3
print (df.columns)
Index(['NSW', 'QLD', 'VIC', 'All'], dtype='object')
If you need both column names, use a list comprehension:
df.columns = ['_'.join(col) for col in df.columns]
print (df)
basic_amt_NSW basic_amt_QLD basic_amt_VIC basic_amt_All
All 1 1 2 4
Full Time 0 1 0 1
Part Time 1 0 2 3
print (df.columns)
Index(['basic_amt_NSW', 'basic_amt_QLD', 'basic_amt_VIC', 'basic_amt_All'], dtype='object')
Zip levels together
Here is an alternative solution which zips the levels together and joins them with an underscore.
It is derived from the answer above; it is what I wanted to do when I found this answer, so I thought I would share it even though it does not answer the exact question above.
["_".join(pair) for pair in df.columns]
gives
['basic_amt_NSW', 'basic_amt_QLD', 'basic_amt_VIC', 'basic_amt_All']
Just set this as the columns:
df.columns = ["_".join(pair) for pair in df.columns]
basic_amt_NSW basic_amt_QLD basic_amt_VIC basic_amt_All
Faculty
All 1 1 2 4
Full Time 0 1 0 1
Part Time 1 0 2 3
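The same flattening can be written with Index.map, which avoids the explicit list comprehension; an equivalent sketch, assuming all levels are strings:
df.columns = df.columns.map('_'.join)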
How about simply reassigning df.columns:
levels = df.columns.levels
labels = df.columns.labels  # renamed to .codes in pandas >= 0.24
df.columns = levels[1][labels[1]]
For example:
import pandas as pd

columns = pd.MultiIndex.from_arrays([['basic_amt']*4,
                                     ['NSW', 'QLD', 'VIC', 'All']])
index = pd.Index(['All', 'Full Time', 'Part Time'], name='Faculty')
df = pd.DataFrame([(1, 1, 2, 4),
                   (0, 1, 0, 1),
                   (1, 0, 2, 3)])
df.columns = columns
df.index = index
Before:
print(df)
basic_amt
NSW QLD VIC All
Faculty
All 1 1 2 4
Full Time 0 1 0 1
Part Time 1 0 2 3
After:
levels = df.columns.levels
labels = df.columns.labels
df.columns = levels[1][labels[1]]
print(df)
NSW QLD VIC All
Faculty
All 1 1 2 4
Full Time 0 1 0 1
Part Time 1 0 2 3
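As a side note, labels was renamed to codes in pandas 0.24, and the indexing above can be skipped entirely with Index.get_level_values; an equivalent sketch:
# Select the second level of the column MultiIndex directly.
df.columns = df.columns.get_level_values(1)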