I have 2 pandas DataFrames that are (156915, 22) and (22, 2) in shape. DF1, (156915, 22), has column names that match the rows of DF2's first column. I want to do matrix multiplication where DF1.columns = DF2['col1']. Here's a quick view of what the DataFrames may look like. I would like to return a pandas DataFrame of the same shape as DF1. Thank you in advance!
DF1:
A | B | C
1 | 15 | 8
5 | 3 | 2
DF2:
col1 | col2
A | 5
B | 1
C | 0
One method:
df3 = df2.set_index('col1')
df1[df3.index].apply(lambda x: x * df3['col2'].T, axis=1)
If your columns in DF1 are in the same order as col1 in DF2, then np.dot should work (note that this returns the matrix product, a single column of length 156915, rather than a frame with DF1's shape):
np.dot(DF1, DF2['col2'])
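If the goal is really a result with DF1's shape (each column scaled by its matching weight) rather than the matrix product, a label-aligned multiply avoids the row-wise apply. A minimal sketch, assuming the example frames shown above:

import pandas as pd

# example frames reconstructed from the question
df1 = pd.DataFrame({'A': [1, 5], 'B': [15, 3], 'C': [8, 2]})
df2 = pd.DataFrame({'col1': ['A', 'B', 'C'], 'col2': [5, 1, 0]})

# turn DF2 into a Series keyed by column name, then multiply with
# alignment on the column labels; the result keeps df1's shape
weights = df2.set_index('col1')['col2']
result = df1.mul(weights, axis=1)
print(result)
#     A   B  C
# 0   5  15  0
# 1  25   3  0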
I'm new to pandas and I'm trying to understand if there is a method to find out whether two values from one row in df1 are between two values from one row in df2.
Basically my df1 looks like this:
start | value | end
1 | TEST | 5
2 | TEST | 3
...
and my df2 looks like this:
start | value | end
2 | TEST2 | 10
3 | TEST2 | 4
...
Right now I've got it working with two loops:
for row in df1.iterrows():
    for row2 in df2.iterrows():
        if row2[1]["start"] >= row[1]["start"] and row2[1]["end"] <= row[1]["end"]:
            print(row2)
but this doesn't feel like it's the pandas way to me.
What I'm expecting is that row number 2 from df2 gets printed, because 3 > 1 and 4 < 5, i.e.:
3 | TEST2 | 4
Is there a more pandas-like way of doing this?
You could use a cross merge to get all combinations of df1 and df2 rows, and filter using classical comparisons. Finally, get the indices and slice:
idx = (df1.merge(df2.reset_index(), suffixes=('1', '2'), how='cross')
          .query('(start2 > start1) & (end2 < end1)')
          ['index'].unique()
       )
df2.loc[idx]
NB: I am using unique here to ensure that a row is selected only once, even if there are several matches.
Output:
start value end
1 3 TEST2 4
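Put together as a self-contained sketch (sample frames assumed from the tables in the question; how='cross' requires pandas >= 1.2):

import pandas as pd

df1 = pd.DataFrame({'start': [1, 2], 'value': ['TEST', 'TEST'], 'end': [5, 3]})
df2 = pd.DataFrame({'start': [2, 3], 'value': ['TEST2', 'TEST2'], 'end': [10, 4]})

# the cross merge keeps df2's original row labels in the 'index' column;
# the query keeps only the pairs where the df2 row lies inside the df1 row
idx = (df1.merge(df2.reset_index(), suffixes=('1', '2'), how='cross')
          .query('(start2 > start1) & (end2 < end1)')
          ['index'].unique())

print(df2.loc[idx])
#    start  value  end
# 1      3  TEST2    4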
Table 1
df1 = pd.DataFrame({'df1_id':['1','2','3'],'col1':["a","b","c"],'col2':["d","e","f"]})
Table 2
df2 = pd.DataFrame({'df1_id':['1','2','1','1'],'date':['01-05-2021','03-05-2021','05-05-2021','03-05-2021'],'data':[12,13,16,9],'test':['g','h','j','i'],'test2':['k','l','m','n']})
Result Table
Brief explanation of how the result table needs to be created:
I have two data frames and I want to merge them based on df1_id, but the date column from the second table should be transposed into the resultant table.
The date columns for the result table will be a range between the min date and max date from the second table.
The column values for those dates in the result table will come from the data column of the second table.
Also, the test column in the result table should only take its value from the latest date in the second table.
I hope this is clear. Any suggestion or help regarding this will be greatly appreciated.
I have tried using pivot on the second table and then merging the pivoted second table with df1, but it's not working. I do not know how to get only one row for the latest value of test.
Note: I am trying to solve this problem using vectorization and do not want to serially parse through each row.
You need to pivot your df2 into two separate tables, since we need both the data and test values, and then merge both resulting pivot tables with df1:
df1 = pd.DataFrame({'df1_id':['1','2','3'],'col1':["a","b","c"],'col2':["d","e","f"]})
df2 = pd.DataFrame({'df1_id':['1','2','1','1'],'date':['01-05-2021','03-05-2021','03-05-2021','05-05-2021'],'data':[12,13,9,16],'test':['g','h','i','j']})
# one pivot per value column: the dates become columns
test_piv = df2.pivot(index=['df1_id'], columns=['date'], values=['test'])
data_piv = df2.pivot(index=['df1_id'], columns=['date'], values=['data'])
# forward-fill across the date columns and take the last one,
# i.e. the test value at the latest available date per df1_id
max_test = test_piv['test'].ffill(axis=1).iloc[:, -1].rename('test')
final = df1.merge(data_piv['data'], left_on=df1.df1_id, right_index=True, how='left')
final = final.merge(max_test, left_on=df1.df1_id, right_index=True, how='left')
and hence your resulting final DataFrame is as below:
| | df1_id | col1 | col2 | 01-05-2021 | 03-05-2021 | 05-05-2021 | test |
|---:|---------:|:-------|:-------|-------------:|-------------:|-------------:|:-------|
| 0 | 1 | a | d | 12 | 9 | 16 | j |
| 1 | 2 | b | e | nan | 13 | nan | h |
| 2 | 3 | c | f | nan | nan | nan | nan |
Here is the solution to the question:
I first sort df2 by df1_id and date to ensure that the table entries are in order.
Then I drop duplicates based on df1_id and keep the last row, to ensure I have the latest values for test and test2.
Then I pivot df2 to get each date as a column with data as its value.
Then I merge the pivoted table with df2_latest_vals to bring in the latest values of test and test2.
Then I merge with df1 to get the resultant table.
df1 = pd.DataFrame({'df1_id':['1','2','3'],'col1':["a","b","c"],'col2':["d","e","f"]})
df2 = pd.DataFrame({'df1_id':['1','2','1','1'],'date':['01-05-2021','03-05-2021','05-05-2021','03-05-2021'],'data':[12,13,16,9],'test':['g','h','j','i'],'test2':['k','l','m','n']})
df2=df2.sort_values(by=['df1_id','date'])
df2_latest_vals = df2.drop_duplicates(subset=['df1_id'],keep='last')
df2_pivoted = df2.pivot_table(index=['df1_id'],columns=['date'],values=['data'])
df2_pivoted = df2_pivoted.droplevel(0,axis=1).reset_index()
df2_pivoted = pd.merge(df2_pivoted,df2_latest_vals,on='df1_id')
df2_pivoted = df2_pivoted.drop(columns=['date','data'])
result = pd.merge(df1,df2_pivoted,on='df1_id',how='left')
result
Note: I have not been able to figure out how to get the entire date range between 01-05-2021 and 05-05-2021 and show the empty values as NaN. If anyone can help, please edit the answer.
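One possible way to fill in the missing dates, sketched below under the assumption that the date strings are day-first: parse them as real datetimes, pivot, and reindex the columns against the full pd.date_range so the days without data show up as NaN.

import pandas as pd

df2 = pd.DataFrame({'df1_id':['1','2','1','1'],
                    'date':['01-05-2021','03-05-2021','05-05-2021','03-05-2021'],
                    'data':[12,13,16,9]})

# parse the day-first date strings so a proper daily range can be built
df2['date'] = pd.to_datetime(df2['date'], dayfirst=True)
piv = df2.pivot_table(index='df1_id', columns='date', values='data')

# reindex the columns against every day between the min and max date;
# days with no data become NaN columns
full_range = pd.date_range(df2['date'].min(), df2['date'].max(), freq='D')
piv = piv.reindex(columns=full_range)

# optionally restore the original dd-mm-yyyy labels
piv.columns = piv.columns.strftime('%d-%m-%Y')
print(piv)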
I have a df like:
| col1 | col2 | col3
0 | Text1 | a,b ,c | klra-tk³,t54 ?
1 | Text2 | NaN | gimbal3, gimbal4
2 | Text3 | a,k,m | NaN
I want to get a single row with all the unique values of each column joined in a single cell, with NaNs ignored, like:
| col1 | col2 | col3
0 | Text1, Text2, Text3 | a,b,c,k,m | klra-tk³,t54,gimbal3, gimbal4
How can I do this with pandas?
Use a custom function with Series.str.split and DataFrame.stack, remove duplicates by Series.drop_duplicates, remove missing values by Series.dropna, then join by ',' and convert the Series to a one-row DataFrame by Series.to_frame and transpose:
f = lambda x: ','.join(x.str.split(',', expand=True).stack().drop_duplicates().dropna())
df = df.apply(f).to_frame().T
print (df)
col1 col2 col3
0 Text1,Text2,Text3 a,b,c,k,m klra-tk,t54,gimbal3,gimbal4
Or use a list comprehension:
f = lambda x: ','.join(x.str.split(',', expand=True).stack().drop_duplicates().dropna())
df = pd.DataFrame([[f(df[x]) for x in df.columns]], columns=df.columns)
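For reference, a minimal frame to test either snippet on (values assumed from the question's table, with the stray spaces normalized):

import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': ['Text1', 'Text2', 'Text3'],
                   'col2': ['a,b,c', np.nan, 'a,k,m'],
                   'col3': ['klra-tk,t54', 'gimbal3,gimbal4', np.nan]})

f = lambda x: ','.join(x.str.split(',', expand=True).stack().drop_duplicates().dropna())
print(df.apply(f).to_frame().T)
#                 col1       col2                         col3
# 0  Text1,Text2,Text3  a,b,c,k,m  klra-tk,t54,gimbal3,gimbal4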
For every row in df_a, I am looking to find rows in df_b where the ids are the same and the df_a row's location falls within the df_b row's start and end locations.
df_a looks like:
|---------------------|------------------|------------------|
| Name | id | location |
|---------------------|------------------|------------------|
| a | 1 | 202013 |
|---------------------|------------------|------------------|
df_b looks like:
|---------------------|------------------|------------------|------------------|
| Name | id | location_start | location_end |
|---------------------|------------------|------------------|------------------|
| x | 1 | 202010 | 2020199 |
|---------------------|------------------|------------------|------------------|
Unfortunately, df_a and df_b are both nearly a million rows. This code takes about 10 hours to run on my local machine. Currently I'm running the following:
for index, row in df_a.iterrows():
    matched = df_b[(df_b['location_start'] < row['location'])
                   & (df_b['location_end'] > row['location'])
                   & (df_b['id'] == row['id'])]
Is there any obvious way to speed this up?
You can do this:
Consider my sample dataframes below:
In [90]: df_a = pd.DataFrame({'Name':['a','b'], 'id':[1,2], 'location':[202013, 102013]})
In [91]: df_b = pd.DataFrame({'Name':['a','b'], 'id':[1,2], 'location_start':[202010, 1020199],'location_end':[2020199, 1020299] })
In [92]: df_a
Out[92]:
Name id location
0 a 1 202013
1 b 2 102013
In [93]: df_b
Out[93]:
Name id location_start location_end
0 a 1 202010 2020199
1 b 2 1020199 1020299
In [95]: d = pd.merge(df_a, df_b, on='id')
In [106]: indexes = d[d['location'].between(d['location_start'], d['location_end'])].index.tolist()
In [107]: df_b.iloc[indexes, :]
Out[107]:
Name id location_start location_end
0 a 1 202010 2020199
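One caveat with the snippet above: the merged frame d gets a fresh RangeIndex, so df_b.iloc[indexes, :] only lines up with df_b because this small example merges one-to-one. A sketch that carries df_b's original index through the merge, and is therefore safe for many-to-many matches:

import pandas as pd

df_a = pd.DataFrame({'Name':['a','b'], 'id':[1,2], 'location':[202013, 102013]})
df_b = pd.DataFrame({'Name':['a','b'], 'id':[1,2], 'location_start':[202010, 1020199], 'location_end':[2020199, 1020299]})

# keep df_b's original row labels in the 'index' column before merging
d = pd.merge(df_a, df_b.reset_index(), on='id')
matched = d[d['location'].between(d['location_start'], d['location_end'])]

# select the matching df_b rows by their original labels
print(df_b.loc[matched['index'].unique()])
#   Name  id  location_start  location_end
# 0    a   1          202010       2020199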
I have a dataframe of the form:
A| B| C | D
a| x| r | 1
a| x| s | 2
a| y| r | 1
b| w| t | 4
b| z| v | 2
I'd like to be able to return something like (showing unique values and frequency)
A | freq of most common value in Column B | maximum of column D based on the most common value in Column B | most common value in Column B
a 2 2 x
b 1 4 w
At the moment I can calculate everything but the third column of the result dataframe quite fast via:
df = (df.groupby('A', sort=False)['B']
        .apply(lambda x: x.value_counts().head(1))
        .reset_index())
but to calculate the column "maximum of column D based on the most common value in Column B" I have written a for-loop, which is slow for a lot of data.
Is there a fast way?
The question is linked to: Count values in dataframe based on entry
Use merge, then get the rows with the maximum D per group via DataFrameGroupBy.idxmax:
df1 = (df.groupby('A', sort=False)['B']
         .apply(lambda x: x.value_counts().head(1))
         .reset_index()
         .rename(columns={'level_1':'E'}))
#print (df1)
df = df1.merge(df, left_on=['A','E'], right_on=['A','B'], suffixes=('','_'))
df = df.loc[df.groupby('A')['D'].idxmax(), ['A','B','D','E']]
print (df)
A B D E
1 a 2 2 x
2 b 1 4 w
Consider doing this in 3 steps:
Find the most common B (as in your code):
df2 = (df.groupby('A', sort=False)['B']).apply(lambda x: x.value_counts().head(1)).reset_index()
Build a DataFrame with the max D for each combination of A and B:
df3 = df.groupby(['A','B']).agg({'D':max}).reset_index()
Merge the 2 DataFrames to find the max Ds matching the A-B pairs selected earlier:
df2.merge(df3, left_on=['A','level_1'], right_on=['A','B'])
The column D in the resulting DataFrame will be what you need:
A level_1 B_x B_y D
0 a x 2 x 2
1 b w 1 w 4
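For completeness, a minimal reconstruction of the question's frame (values assumed from the table above) on which either answer can be run; note that the name of the second index level after reset_index (level_1 above) can vary between pandas versions:

import pandas as pd

df = pd.DataFrame({'A': ['a', 'a', 'a', 'b', 'b'],
                   'B': ['x', 'x', 'y', 'w', 'z'],
                   'C': ['r', 's', 'r', 't', 'v'],
                   'D': [1, 2, 1, 4, 2]})
print(df)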