Extend and fill a Pandas DataFrame to match another - python

I have two Pandas DataFrames A and B.
They share an identical index (weekly dates) up to a point: frame A's series ends at the beginning of the year, while frame B continues for a number of further observations. I need to give frame A the same index as frame B and fill each new row of every column with that column's last observed value.
Thank you in advance.
Tikhon
EDIT: thank you for the advice on the question. What I need is for dfA_before to look at dfB and become dfA_after:
print(dfA_before)
    a    b
0  10  100
1  20  200
2  30  300
print(dfB)
    a    b
0  11  111
1  22  222
2  33  333
3  44  444
4  55  555
print(dfA_after)
    a    b
0  10  100
1  20  200
2  30  300
3  30  300
4  30  300

This should work:
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'a': [10, 20, 30], 'b': [100, 200, 300]})
df2 = pd.DataFrame({'a': [11, 22, 33, 44, 55], 'b': [111, 222, 333, 444, 555]})

# Each method below is an alternative; run only one of them.
# DataFrame.append was removed in pandas 2.0, so pd.concat is used instead.

# method 1: tile the last row of df1 and concatenate
last = df1.iloc[-1].to_numpy()
df3 = pd.DataFrame(np.tile(last, (len(df2) - len(df1), 1)),
                   columns=df1.columns)
df4 = pd.concat([df1, df3], ignore_index=True)

# method 2: grow df1 row by row, repeating its last row
for _ in range(len(df2) - len(df1)):
    df1.loc[len(df1)] = df1.loc[len(df1) - 1]

# method 3: same idea, concatenating the last row each time
for _ in range(df2.shape[0] - df1.shape[0]):
    df1 = pd.concat([df1, df1.iloc[[-1]]], ignore_index=True)
# result
    a    b
0  10  100
1  20  200
2  30  300
3  30  300
4  30  300

Probably very inefficient - I am a beginner:
dfA_New = dfB.copy()
dfA_New.loc[:] = 0                 # placeholder values; overwritten below
dfA_New.loc[:] = dfA.loc[:]        # assignment aligns on the index; rows beyond dfA become NaN
dfA_New.ffill(inplace=True)        # fillna(method='ffill') is deprecated; ffill() is the modern spelling
dfA = dfA_New
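The same effect is likely available in one line with reindex plus forward-fill (a sketch, assuming dfA's index is a prefix of dfB's index):
dfA = dfA.reindex(dfB.index).ffill()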

Related

How to stack two columns of a pandas dataframe in python

I want to stack two columns on top of each other.
I have Left and Right values in one column each and want to combine them into a single one. How do I do this in Python?
I'm working with pandas DataFrames.
Basically from this
   Left  Right
0    20     25
1    15     18
2    10     35
3     0      5
To this:
   New Name
0        20
1        15
2        10
3         0
4        25
5        18
6        35
7         5
It doesn't matter how they are combined as I will plot it anyway, and the new column name also doesn't matter because I can rename it.
You can create a list of the columns and call squeeze on each so the data doesn't try to align on column names, then call concat on this list. Passing ignore_index=True creates a new index; otherwise the original row labels would repeat:
cols = [df[col].squeeze() for col in df]
pd.concat(cols, ignore_index=True)
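For the example frame this produces:
0    20
1    15
2    10
3     0
4    25
5    18
6    35
7     5
dtype: int64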
Many options: stack, melt, concat, ...
Here's one (note that the positional axis argument to drop was removed in pandas 2.0, so the columns keyword is used):
>>> df.melt(value_name='New Name').drop(columns='variable')
  New Name
0       20
1       15
2       10
3        0
4       25
5       18
6       35
7        5
You can also use np.ravel:
import numpy as np
# transpose first so ravel reads the values column by column
out = pd.DataFrame(np.ravel(df.values.T), columns=['New name'])
print(out)
# Output
  New name
0       20
1       15
2       10
3        0
4       25
5       18
6       35
7        5
Update
If you have only two columns:
out = pd.concat([df['Left'], df['Right']], ignore_index=True).to_frame('New name')
print(out)
# Output
  New name
0       20
1       15
2       10
3        0
4       25
5       18
6       35
7        5
Solution with unstack (on a DataFrame it returns a Series, flattened column by column: all Left values first, then all Right):
df2 = df.unstack()
# replace the (column, row) MultiIndex with a fresh RangeIndex
df2.index = np.arange(len(df2))
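If a DataFrame with a named column is wanted rather than a Series, one more small step should do it (the column name here is just the one from the question):
df2 = df2.to_frame('New Name')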
A solution that selects the columns and ravels them:
# Your data
import numpy as np
import pandas as pd
df = pd.DataFrame({"Left": [20, 15, 10, 0], "Right": [25, 18, 35, 5]})
# Ravel the selected columns
df2 = pd.DataFrame({"New Name": np.ravel(df[["Left", "Right"]])})
df2
   New Name
0        20
1        25
2        15
3        18
4        10
5        35
6         0
7         5
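Note that np.ravel flattens row by row by default, which is why the values here are interleaved. Passing order='F' (column-major order) should stack all Left values before all Right values instead:
df2 = pd.DataFrame({"New Name": np.ravel(df[["Left", "Right"]], order='F')})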
I ended up using this solution; it seems to work fine:
df1 = dfTest[['Left']].copy()
df2 = dfTest[['Right']].copy()
df2.columns = ['Left']   # rename so the columns align for concat
df3 = pd.concat([df1, df2], ignore_index=True)

Merging dataframes with multiple key columns

I'd like to merge this dataframe:
import pandas as pd
import numpy as np
df1 = pd.DataFrame([[1, 10, 100], [2, 20, np.nan], [3, 30, 300]], columns=["A", "B", "C"])
df1
   A   B      C
0  1  10  100.0
1  2  20    NaN
2  3  30  300.0
with this one:
df2 = pd.DataFrame([[1, 422], [10, 72], [2, 278], [300, 198]], columns=["ID", "Value"])
df2
    ID  Value
0    1    422
1   10     72
2    2    278
3  300    198
to get this output:
df_output = pd.DataFrame([[1, 10, 100, 422], [1, 10, 100, 72], [2, 20, np.nan, 278], [3, 30, 300, 198]], columns=["A", "B", "C", "Value"])
df_output
   A   B      C  Value
0  1  10  100.0    422
1  1  10  100.0     72
2  2  20    NaN    278
3  3  30  300.0    198
The idea is that for df2 the key column is "ID", while for df1 there are 3 possible key columns, ["A", "B", "C"].
Note that the numbers in df2 are chosen like this for simplicity; in practice they can be arbitrary.
How do I perform such a merge? Thanks!
IIUC, you need a double merge/join.
First, melt df1 to get a single column, while keeping the index. Then merge to get the matches. Finally join to the original DataFrame.
s = (df1
     .reset_index().melt(id_vars='index')
     .merge(df2, left_on='value', right_on='ID')
     .set_index('index')['Value']
)
# index
# 0    422
# 1    278
# 0     72
# 2    198
# Name: Value, dtype: int64
df_output = df1.join(s)
output:
   A   B      C  Value
0  1  10  100.0    422
0  1  10  100.0     72
1  2  20    NaN    278
2  3  30  300.0    198
Alternative with stack + map. Here stack yields a MultiIndex of (row, column) labels; droplevel(1) keeps only the row label so that join aligns on the original index, and the result matches the output above:
s = df1.stack().droplevel(1).map(df2.set_index('ID')['Value']).dropna()
df_output = df1.join(s.rename('Value'))

Is there a way to avoid while loops using pandas in order to speed up my code?

I'm writing code to merge several dataframes together using pandas.
Here is my first table:
Index  Values  Intensity
1      11      98
2      12      855
3      13      500
4      24      140
and here is the second one:
Index  Values  Intensity
1      21      1000
2      11      2000
3      24      0.55
4      25      500
With these two df, I concatenate and drop_duplicates the Values column, which gives me the following df:
Index  Values  Intensity_df1  Intensity_df2
1      11      0              0
2      12      0              0
3      13      0              0
4      24      0              0
5      21      0              0
6      25      0              0
I would like to recover the intensity of each value in each DataFrame. For this purpose I'm iterating over every line of every df, which is very inefficient. Here is the code I use:
m = 0
while m < len(num_df):
    n = 0
    while n < len(df3):
        temp_intens_abs = df[m]['Intensity'][df3['Values'][n] == df[m]['Values']]
        if temp_intens_abs.empty:
            merged.at[n, "Intensity_df%s" % (m + 1)] = 0
        else:
            merged.at[n, "Intensity_df%s" % (m + 1)] = pandas.to_numeric(temp_intens_abs, errors='coerce')
        n = n + 1
    m = m + 1
The resulting df3 looks like this at the end:
Index  Values  Intensity_df1  Intensity_df2
1      11      98             2000
2      12      855            0
3      13      500            0
4      24      140            0.55
5      21      0              1000
6      25      0              500
My question is: is there a way to directly recover "present" values in a df by comparing two columns directly using pandas? I've tried several solutions using numpy, but without success. Thanks in advance for your help.
You can try joining these dataframes: df3 = df1.merge(df2, on="Values"). Note that merge defaults to an inner join, which keeps only the values present in both frames.
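To reproduce the exact df3 above (a row for every value seen in either frame, with 0 where it is missing), an outer merge with suffixes plus fillna should work; a sketch on the sample data:
import pandas as pd
df1 = pd.DataFrame({"Values": [11, 12, 13, 24], "Intensity": [98, 855, 500, 140]})
df2 = pd.DataFrame({"Values": [21, 11, 24, 25], "Intensity": [1000, 2000, 0.55, 500]})
# outer join keeps values present in only one frame; fillna(0) restores the zeros
df3 = (df1.merge(df2, on="Values", how="outer", suffixes=("_df1", "_df2"))
          .fillna(0))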

pandas apply User defined function to grouped dataframe on multiple columns

I would like to apply a function f1 by group to a dataframe:
import pandas as pd
import numpy as np
data = np.array([['id1', 'id2', 'u', 'v0', 'v1'],
                 ['A', 'A', 10, 1, 7],
                 ['A', 'A', 10, 2, 8],
                 ['A', 'B', 20, 3, 9],
                 ['B', 'A', 10, 4, 10],
                 ['B', 'B', 30, 5, 11],
                 ['B', 'B', 30, 6, 12]])
z = pd.DataFrame(data=data[1:, :], columns=data[0, :])
def f1(u, v):
    return u * np.cumprod(v)
The result of the function depends on column u and on columns v0 or v1 (there can be thousands of v columns, because I'm running a simulation over many paths).
The result should be like this:
  id1 id2  new_v0  new_v1
0   A   A      10      70
1   A   A      20     560
2   A   B      60     180
3   B   A      40     100
4   B   B     150     330
5   B   B     900    3960
I tried for a start
output = z.groupby(['id1', 'id2']).apply(lambda x: f1(u=x.u, v=x.v0))
but I can't even get a result with just one column.
Thank you very much!
You can filter the column names starting with v into a list and pass it to groupby:
v_cols = z.columns[z.columns.str.startswith('v')].tolist()
z[['u'] + v_cols] = z[['u'] + v_cols].apply(pd.to_numeric)
out = z.assign(**z.groupby(['id1', 'id2'])[v_cols].cumprod()
                 .mul(z['u'], axis=0).add_prefix('new_'))
print(out)
  id1 id2   u  v0  v1  new_v0  new_v1
0   A   A  10   1   7      10      70
1   A   A  10   2   8      20     560
2   A   B  20   3   9      60     180
3   B   A  10   4  10      40     100
4   B   B  30   5  11     150     330
5   B   B  30   6  12     900    3960
The way you create your data frame makes the numeric columns object dtype, so we convert first, then use groupby + cumprod:
z[['u', 'v0', 'v1']] = z[['u', 'v0', 'v1']].apply(pd.to_numeric)
s = z.groupby(['id1', 'id2'])[['v0', 'v1']].cumprod().mul(z['u'], 0)
#z=z.join(s.add_prefix('New_'))
    v0    v1
0   10    70
1   20   560
2   60   180
3   40   100
4  150   330
5  900  3960
If you want to handle more than two v columns, it's better not to reference them explicitly:
(
    z.apply(lambda x: pd.to_numeric(x, errors='ignore'))
     .groupby(['id1', 'id2']).apply(lambda x: x.cumprod().mul(x.u.min()))
)

For loops with multiple result

I have the following dummy data frame:
df = pd.DataFrame([[1, 50, 60], [5, 70, 80], [2, 120, 30], [3, 125, 450], [5, 80, 90], [4, 100, 200], [2, 1000, 2000], [1, 10, 20]], columns=['A', 'B', 'C'])
   A     B     C
0  1    50    60
1  5    70    80
2  2   120    30
3  3   125   450
4  5    80    90
5  4   100   200
6  2  1000  2000
7  1    10    20
I am using a for loop in Python at the moment, and I would like to know whether a for loop can generate multiple results. I would like to break the above data frame apart so that for each value in column A I get a new df, sorted by column B and with column C multiplied by 2:
df1 =
   A   B    C
   1  10   40
   1  50  120
df2 =
   A     B     C
   2   120    60
   2  1000  4000
df3 =
   A    B    C
   3  125  900
df4 =
   A    B    C
   4  100  400
df5 =
   A   B    C
   5  70  160
   5  80  180
I am not sure if this can be done in Python. Normally I use MATLAB, and here I tried the following in my Python script:
def f(df):
    for i in np.unique(df['A'].values):
        df = df.sort_values(['A', 'B'])
        df = df['C'].assign(C=lambda x: x.C * 2)
        print(df)
Of course this is wrong, since it will not generate multiple results such as df1, df2, ..., df5 (it is important that these variable names end in 1, 2, ..., 5 so that they can be traced back to column A of the dataframe). Could anyone help me, please? I understand that this can easily be done without a for loop (vectorization), but I have many unique values in column A, I would like to run a for loop over them, and I would also like to learn more about loops in Python. Many thanks.
Using DataFrame.groupby is faster than iterating over Series.unique.
Optionally you can save the dataframes in a dictionary.
The advantage of using a dictionary rather than a list is that it matches each key with the corresponding value of A:
df2 = df.copy()
df2['C'] = df2['C'] * 2
df2 = df2.sort_values('B')
dfs = {i: group for i, group in df2.groupby('A')}
Access the dictionary based on the value in A:
for key in dfs:
    print(f'dfs[{key}]')
    print(dfs[key])
    print('_' * 20)
dfs[1]
   A   B    C
7  1  10   40
0  1  50  120
____________________
dfs[2]
   A     B     C
2  2   120    60
6  2  1000  4000
____________________
dfs[3]
   A    B    C
3  3  125  900
____________________
dfs[4]
   A    B    C
5  4  100  400
____________________
dfs[5]
   A   B    C
1  5  70  160
4  5  80  180
Sort and multiply before chunking into pieces:
df['C'] = 2 * df['C']
[group for name, group in df.sort_values(by=['A', 'B']).groupby('A')]
Or if you want a dict:
{name: group for name, group in df.sort_values(by=['A','B']).groupby('A')}
I have a similar answer to Ansev's:
df = pd.DataFrame([[1, 50, 60], [5, 70, 80], [2, 120, 30], [3, 125, 450], [5, 80, 90], [4, 100, 200], [2, 1000, 2000], [1, 10, 20]], columns=['A', 'B', 'C'])
A = np.unique(df['A'].values)
df_result = []
for a in A:
    df1 = df.loc[df['A'] == a]
    df1 = df1.sort_values('B')
    df1 = df1.assign(C=lambda x: x.C * 2)
    df_result += [df1]
I am still unable to automate this so that the results are named df_result1, df_result2, ..., df_result5; all I can do is access the result of each loop iteration as df_result[0], df_result[1], ..., df_result[4].
What you want to do is group by column A and then store each resulting dataframe in a dict indexed by the value of A. The code to do that would be:
df_dict = {}
for ix, gp in df.groupby('A'):
    new_df = gp.sort_values('B')
    new_df['C'] = 2 * new_df['C']
    df_dict[ix] = new_df
Then the variable df_dict contains all the resulting dataframes, sorted by column B and with column C multiplied by 2. For example:
print(df_dict[1])
   A   B    C
7  1  10   40
0  1  50  120
