I have two data frames which look like:
DF1:
x_id y_id
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
DF2:
x_id y_id
1 1
2 1
3 1
4 2
5 2
6 2
1 3
3 3
: :
: :
3 y(i)
So, I want to merge/insert y_id from DF2 into y_id in DF1 in each iteration of the loop.
What I have so far:
count = df2['y_id'].unique()
for i in count:
    new_df = df1.merge(df2[df2['y_id'] == i], how='inner', left_on='x_id', right_on='x_id')
While this creates a new dataframe for each iteration of the loop, I think there should be a better way of doing this.
I want my final data frame to look like:
DF3:
x_id y_id
1 3
2 1
3 y(i)
4 2
5 2
6 2
Essentially what I want to do is group DF2 by y_id and merge the groups in sorted order. We can see in DF2 that the values 1 and 3 have y_id = 1, and further down the column they have y_id = 3. Since 3 > 1, I would like to use the later value (i.e. the greatest, or the most recent if we were working with dates, etc.).
What I want to do is similar to an UPDATE statement in SQL, where we set the column to the most recent y_id value for each row.
Hope I have explained sufficiently, any questions just ask.
Thanks
You can drop_duplicates before the merge:
df1 = df1.drop('y_id', axis=1).merge(df2.drop_duplicates('x_id', keep='last'), on='x_id')
df1
Out[469]:
x_id y_id
0 1 3
1 2 1
2 3 3
3 4 2
4 5 2
5 6 2
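If you want something closer to the SQL UPDATE feel from the question, a map-based sketch does the same thing without rebuilding df1 through a merge. It makes the same assumption as the answer above: the most recent y_id is the last occurrence of each x_id in df2.
lookup = df2.drop_duplicates('x_id', keep='last').set_index('x_id')['y_id']
df1['y_id'] = df1['x_id'].map(lookup)  # overwrite in place, UPDATE-style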
Related
I have a df that looks like this:
df
time score
83623 4
83624 3
83625 3
83629 2
83633 1
I want to expand df.time so that it increments by 1 between the existing values, with the df.score value duplicated for each added row. See the example below:
time score
83623 4
83624 3
83625 3
83626 3
83627 3
83628 3
83629 2
83630 2
83631 2
83632 2
83633 1
From your sample, I assume df.time is an integer column. You can try this:
df_final = df.set_index('time').reindex(range(df.time.min(), df.time.max() + 1),
                                        method='pad').reset_index()
Out[89]:
time score
0 83623 4
1 83624 3
2 83625 3
3 83626 3
4 83627 3
5 83628 3
6 83629 2
7 83630 2
8 83631 2
9 83632 2
10 83633 1
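The same pad logic can be spelled out with an explicit ffill, which some readers find easier to scan; again this assumes df.time holds integers (note score comes back as float here, because the intermediate reindex introduces NaNs before ffill runs):
full_range = range(df['time'].min(), df['time'].max() + 1)
df_final = (df.set_index('time')
              .reindex(full_range)  # missing time steps appear as NaN rows
              .ffill()              # carry the last score forward
              .reset_index())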
I have a dataframe df with the shape (4573,64) that I'm trying to pivot. The last column is an 'id' with two possible string values 'old' and 'new'. I would like to set the first 63 columns as index and then have the 'id' column across the top with values being the count of 'old' or 'new' for each index row.
I've created a list named cols out of the column labels that I want as the index.
I tried the following:
df.pivot(index=cols, columns='id')['id']
This gives an error: 'all arrays must be same length'.
I also tried the following to see if I could get a sum, but no luck either:
pd.pivot_table(df, index=cols, values=['id'], aggfunc=np.sum)
Any ideas greatly appreciated.
I found a thread online about a possible bug in pandas 0.23.0 where pandas.pivot_table() will not accept a multiindex as long as it contains NaNs (link to github in comments). My workaround was to do
df.fillna('empty', inplace=True)
and then the solution below,
df1 = pd.pivot_table(df, index=cols, columns='id', aggfunc='size', fill_value=0)
as proposed by jezrael, works as intended, hence the accepted answer.
I believe you need to convert the column names to a list and then aggregate size with unstack:
df = pd.DataFrame({'B': [4,4,4,5,5,4],
                   'C': [1,1,9,4,2,3],
                   'D': [1,1,5,7,1,0],
                   'E': [0,0,6,9,2,4],
                   'id': list('aaabbb')})
print (df)
B C D E id
0 4 1 1 0 a
1 4 1 1 0 a
2 4 9 5 6 a
3 5 4 7 9 b
4 5 2 1 2 b
5 4 3 0 4 b
cols = df.columns.tolist()
df1 = df.groupby(cols)['id'].size().unstack(fill_value=0)
print (df1)
id a b
B C D E
4 1 1 0 2 0
3 0 4 0 1
9 5 6 1 0
5 2 1 2 0 1
4 7 9 0 1
Solution with pivot_table (note that here the index columns must exclude 'id', since 'id' supplies the columns):
df1 = pd.pivot_table(df, index=[c for c in cols if c != 'id'], columns='id',
                     aggfunc='size', fill_value=0)
print (df1)
id a b
B C D E
4 1 1 0 2 0
3 0 4 0 1
9 5 6 1 0
5 2 1 2 0 1
4 7 9 0 1
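For completeness, pd.crosstab builds the same count table directly from the columns; a small sketch on the sample df above, taking everything except 'id' as the index:
index_cols = [df[c] for c in df.columns if c != 'id']
df1 = pd.crosstab(index=index_cols, columns=df['id'])  # counts, 0-filled by default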
I have a dataframe with 2 columns, and I want to create a 3rd column based on a comparison between the 2 columns.
So the logic is:
column 1 val = 3, column 2 val = 4, so the new column value is nothing
column 1 val = 3, column 2 val = 2, so the new column is 3
It's a very similar problem to one previously asked, but the answer there, using np.where(), isn't working for me.
Here's what I tried:
FinalDF['c'] = np.where(FinalDF['a']>FinalDF['b'],[FinalDF['a'],""])
and after that failed I tried to see if maybe it doesn't like the [x,y] I gave it, so I tried:
FinalDF['c'] = np.where(FinalDF['a']>FinalDF['b'],[1,0])
the result is always:
ValueError: either both or neither of x and y should be given
Edit: I also removed the [x,y], to see what happens, since the documentation says it is optional. But I still get an error:
ValueError: Length of values does not match length of index
Which is odd, because they are sitting in the same dataframe, although one column does have some NaN values.
I don't think I can use np.select because I only have one condition here. I've linked to the previous questions so readers can reference them in the future.
Thanks for any help.
I think that this should work:
FinalDF['c'] = np.where(FinalDF['a'] > FinalDF['b'], FinalDF['a'], "")
Example:
FinalDF = pd.DataFrame({'a': [4,2,4,5,5,4],
                        'b': [4,3,2,2,2,4]})
print(FinalDF)
a b
0 4 4
1 2 3
2 4 2
3 5 2
4 5 2
5 4 4
Output:
a b c
0 4 4
1 2 3
2 4 2 4
3 5 2 5
4 5 2 5
5 4 4
Or, if you want the value from column b when it is greater than column a, use this:
FinalDF['c'] = np.where(FinalDF['a'] < FinalDF['b'], FinalDF['b'], "")
Output:
a b c
0 4 4
1 2 3 3
2 4 2
3 5 2
4 5 2
5 4 4
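One caveat with the "" filler: mixing numbers and empty strings forces column c to object dtype. If you'd rather keep it numeric, a sketch with Series.where on the same FinalDF leaves NaN where the condition fails:
# NaN instead of "" where a <= b, so 'c' stays a float column
FinalDF['c'] = FinalDF['a'].where(FinalDF['a'] > FinalDF['b'])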
I've looked into pandas join, merge, concat with different param values (how to join, indexing, axis=1, etc) but nothing solves it!
I have two dataframes:
x = pd.DataFrame(np.random.randn(4,4))
y = pd.DataFrame(np.random.randn(4,4),columns=list(range(2,6)))
x
Out[67]:
0 1 2 3
0 -0.036327 -0.594224 0.469633 -0.649221
1 1.891510 0.164184 -0.010760 -0.848515
2 -0.383299 1.416787 0.719434 0.025509
3 0.097420 -0.868072 -0.591106 -0.672628
y
Out[68]:
2 3 4 5
0 -0.328402 -0.001436 -1.339613 -0.721508
1 0.408685 1.986148 0.176883 0.146694
2 -0.638341 0.018629 -0.319985 -1.832628
3 0.125003 1.134909 0.500017 0.319324
I'd like to combine them into one dataframe where the values from y in columns 2 and 3 overwrite those of x, and then columns 4 and 5 are appended on the end:
new
Out[100]:
0 1 2 3 4 5
0 -0.036327 -0.594224 -0.328402 -0.001436 -1.339613 -0.721508
1 1.891510 0.164184 0.408685 1.986148 0.176883 0.146694
2 -0.383299 1.416787 -0.638341 0.018629 -0.319985 -1.832628
3 0.097420 -0.868072 0.125003 1.134909 0.500017 0.319324
You can try combine_first:
df = y.combine_first(x)
You can use update and then combine_first:
x.update(y)
x.combine_first(y)
Out[1417]:
0 1 2 3 4 5
0 -1.075266 1.044069 -0.423888 0.247130 0.008867 2.058995
1 0.122782 -0.444159 1.528181 0.595939 0.155170 1.693578
2 -0.825819 0.395140 -0.171900 -0.161182 -2.016067 0.223774
3 -0.009081 -0.148430 -0.028605 0.092074 1.355105 -0.003027
Or, using pd.concat with the column intersection:
pd.concat([x.drop(x.columns.intersection(y.columns), axis=1), y], axis=1)
Out[1432]:
0 1 2 3 4 5
0 -1.075266 1.044069 -0.423888 0.247130 0.008867 2.058995
1 0.122782 -0.444159 1.528181 0.595939 0.155170 1.693578
2 -0.825819 0.395140 -0.171900 -0.161182 -2.016067 0.223774
3 -0.009081 -0.148430 -0.028605 0.092074 1.355105 -0.003027
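A quick sanity check on the precedence, since it is easy to get backwards: combine_first keeps the caller's values wherever they are not NaN and only falls back to the argument. Because y here has no NaNs, its columns 2 and 3 win over x's, which is exactly the overwrite wanted. A minimal sketch:
import numpy as np
import pandas as pd

x = pd.DataFrame(np.random.randn(4, 4))
y = pd.DataFrame(np.random.randn(4, 4), columns=list(range(2, 6)))

new = y.combine_first(x)
assert new[[2, 3, 4, 5]].equals(y)    # y overwrites the overlap and appends 4, 5
assert new[[0, 1]].equals(x[[0, 1]])  # x keeps its non-overlapping columns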
I want to apply an operation on multiple groups of a data frame and then fill all values of that group with the result. Let's take mean and np.cumsum as an example, with the following dataframe:
df = pd.DataFrame({"a": [1,3,2,4], "b": [1,1,2,2]})
which looks like this
a b
0 1 1
1 3 1
2 2 2
3 4 2
Now I want to group the dataframe by b, then take the mean of a in each group, then apply np.cumsum to the means, and then replace all values of a by the (group dependent) result.
For the first three steps, I would start like this
df.groupby("b").mean().apply(np.cumsum)
which gives
a
b
1 2
2 5
But what I want to get is
a b
0 2 1
1 2 1
2 5 2
3 5 2
Any ideas how this can be solved in a nice way?
You can use map with the aggregated Series:
df1 = df.groupby("b").mean().cumsum()
print (df1)
a
b
1 2
2 5
df['a'] = df['b'].map(df1['a'])
print (df)
a b
0 2 1
1 2 1
2 5 2
3 5 2
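If you don't need the intermediate df1, the whole pipeline collapses into a single line, mapping b straight through the cumulative group means:
df['a'] = df['b'].map(df.groupby('b')['a'].mean().cumsum())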