I am trying to add a suffix to the dataframes referenced by a dictionary.
Here is some sample code:
import pandas as pd
import numpy as np
from collections import OrderedDict
from itertools import chain
# defining stuff
num_periods_1 = 11
num_periods_2 = 4
num_periods_3 = 5
# create sample time series
dates1 = pd.date_range('1/1/2000 00:00:00', periods=num_periods_1, freq='10min')
dates2 = pd.date_range('1/1/2000 01:30:00', periods=num_periods_2, freq='10min')
dates3 = pd.date_range('1/1/2000 02:00:00', periods=num_periods_3, freq='10min')
# column_names = ['WS Avg','WS Max','WS Min','WS Dev','WD Avg']
# column_names = ['A','B','C','D','E']
column_names_1 = ['C', 'B', 'A']
column_names_2 = ['B', 'C', 'D']
column_names_3 = ['E', 'B', 'C']
df1 = pd.DataFrame(np.random.randn(num_periods_1, len(column_names_1)), index=dates1, columns=column_names_1)
df2 = pd.DataFrame(np.random.randn(num_periods_2, len(column_names_2)), index=dates2, columns=column_names_2)
df3 = pd.DataFrame(np.random.randn(num_periods_3, len(column_names_3)), index=dates3, columns=column_names_3)
sep0 = '<~>'
suf1 = '_1'
suf2 = '_2'
suf3 = '_3'
ddict = {'df1': df1, 'df2': df2, 'df3': df3}
frames_to_concat = {'Sheets': ['df1', 'df3']}
Suffs = {'Suffixes': ['Suffix 1', 'Suffix 2', 'Suffix 3']}
Suff = {'Suffix 1': suf1, 'Suffix 2': suf2, 'Suffix 3': suf3}
## apply suffix to each dataframe selected, in order, HERE
# Suffdict = [Suff[x] for x in Suffs['Suffixes']]
# print(Suffdict)
df4 = pd.concat([ddict[x] for x in frames_to_concat['Sheets']],
                axis=1,
                join='outer')
I want to add a suffix to each dataframe so that the columns can be told apart once the dataframes are concatenated. I am having some trouble looking the suffixes up and then applying them to each dataframe. Here I have asked for df1 and df3 to be concatenated, and I would like suffix 1 to be applied to df1 and suffix 2 to be applied to df3.
The pairing only depends on position: if df2 and df3 were selected instead, suffix 1 would be applied to df2 and suffix 2 to df3; the last suffix would simply go unused.
Unless you are on Python 3.6+, you cannot rely on dictionaries preserving insertion order (it became an implementation detail in 3.6 and is only guaranteed by the language from 3.7). Even if you could rely on it, your code would then not behave the same on any lower Python version. If you need order, you should be looking at lists instead.
You can store your dataframes as well as your suffixes in a list, and then use zip to add a suffix to each df in turn.
dfs = [df1, df2, df3]
sufs = [suf1, suf2, suf3]
df_sufs = [x.add_suffix(y) for x, y in zip(dfs, sufs)]
Based on your code/answer, you can load your dataframes and suffixes into lists, call zip, add a suffix to each one, and call pd.concat.
dfs = [ddict[x] for x in frames_to_concat['Sheets']]
sufs = [Suff[x] for x in Suffs['Suffixes']]
df4 = pd.concat([x.add_suffix(sep0 + y)
                 for x, y in zip(dfs, sufs)], axis=1, join='outer')
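Printing the resulting columns is a quick sanity check; with the sample data above it should show something like:
print(df4.columns.tolist())
# ['C<~>_1', 'B<~>_1', 'A<~>_1', 'E<~>_2', 'B<~>_2', 'C<~>_2']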
I ended up just writing a simple counter loop for the problem. Here is my solution:
n = 0
for df in frames_to_concat['Sheets']:
    print(ddict[df])
    ddict[df] = ddict[df].add_suffix(sep0 + Suff[Suffs['Suffixes'][n]])
    n = n + 1
Anyone have a better way to do this?
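For reference, the same loop can be written without the manual counter by zipping the selected sheets with the suffix labels (a sketch using the names defined above):
for sheet, suffix_key in zip(frames_to_concat['Sheets'], Suffs['Suffixes']):
    ddict[sheet] = ddict[sheet].add_suffix(sep0 + Suff[suffix_key])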
Related
I need to import and transform xlsx files. They are written in a wide format, and I need to repeat some of the cell information from each row (ID and Property) and pair it up with each of the activity entries in that row:
[Edit: changed format to represent the more complex requirements]
Source format

ID  Property  Activity1name  Activity1timestamp  Activity2name  Activity2timestamp
1   A         a              1.1.22 00:00        b              2.1.22 10:05
2   B         a              1.1.22 03:00        b              5.1.22 20:16
Target format

ID  Property  Activity  Timestamp
1   A         a         1.1.22 00:00
1   A         b         2.1.22 10:05
2   B         a         1.1.22 03:00
2   B         b         5.1.22 20:16
The following code works fine to transform the data, but the process is really, really slow:
def transform(data_in):
    data = pd.DataFrame(columns=columns)
    # Determine number of processes entered in a single row of the original file
    steps_per_row = int((data_in.shape[1] - (len(columns) - 2)) / len(process_matching) + 1)
    data_in = data_in.to_dict("records")  # Convert to dict for speed optimization
    for row_dict in tqdm(data_in):  # Iterate over each row of the original file
        new_row = {}
        # Set common columns for each process step
        for column in column_matching:
            new_row[column] = row_dict[column_matching[column]]
        for step in range(0, steps_per_row):
            rep = str(step + 1) if step > 0 else ""
            # Iterate for as many times as there are process steps in one row of the original file and
            # set specific columns for each process step, keeping common column values identical for current row
            for column in process_matching:
                new_row[column] = row_dict[process_matching[column] + rep]
            data = data.append(new_row, ignore_index=True)  # append dict of new_row to existing data
    data.index.name = "SortKey"
    data[timestamp].replace(r'.000', '', regex=True, inplace=True)  # Remove trailing zeros from timestamp  # TODO check if works as intended
    data.replace(r'^\s*$', float('NaN'), regex=True, inplace=True)  # Replace cells with only spaces with NaN
    data.dropna(axis=0, how="all", inplace=True)  # Remove empty rows
    data.dropna(axis=1, how="all", inplace=True)  # Remove empty columns
    data.dropna(axis=0, subset=[timestamp], inplace=True)  # Drop rows with empty Timestamp
    data.fillna('', inplace=True)  # Replace NaN values with empty cells
    return data
Obviously, iterating over each row and then even each column is not at all how to use pandas the right way, but I don't see how this kind of transformation can be vectorized.
I have tried parallelization (modin) and played around with using a dict or not, but it didn't help. The rest of the script literally just opens and saves the files, so the problem lies here.
I would be very grateful for any ideas on how to improve the speed!
The df.melt function should be able to do this type of operation much faster.
df = pd.DataFrame({'ID': [1, 2],
                   'Property': ['A', 'B'],
                   'Info1': ['x', 'a'],
                   'Info2': ['y', 'b'],
                   'Info3': ['z', 'c'],
                   })
data = df.melt(id_vars=['ID', 'Property'], value_vars=['Info1', 'Info2', 'Info3'])
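For reference, with the sample frame above, data should come out roughly as:
   ID Property variable value
0   1        A    Info1     x
1   2        B    Info1     a
2   1        A    Info2     y
3   2        B    Info2     b
4   1        A    Info3     z
5   2        B    Info3     c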
** Edit to address modified question **
Combine the df.melt with a df.pivot_table operation.
# create data
df = pd.DataFrame({'ID': [1, 2, 3],
                   'Property': ['A', 'B', 'C'],
                   'Activity1name': ['a', 'a', 'a'],
                   'Activity1timestamp': ['1_1_22', '1_1_23', '1_1_24'],
                   'Activity2name': ['b', 'b', 'b'],
                   'Activity2timestamp': ['2_1_22', '2_1_23', '2_1_24'],
                   })
# melt dataframe
df_melted = df.melt(id_vars=['ID', 'Property'],
                    value_vars=['Activity1name', 'Activity1timestamp',
                                'Activity2name', 'Activity2timestamp'],
                    )
# merge categories, i.e. Activity1name and Activity2name become Activity
df_melted.loc[df_melted['variable'].str.contains('name'), 'variable'] = 'Activity'
df_melted.loc[df_melted['variable'].str.contains('timestamp'), 'variable'] = 'Timestamp'
# add category ids (dataframe may need to be sorted before this operation)
u_category_ids = np.arange(1, len(df_melted.variable.unique()) + 1)
category_ids = np.repeat(u_category_ids, len(df) * 2).astype(str)
df_melted.insert(0, 'unique_id', df_melted['ID'].astype(str) + '_' + category_ids)
# pivot table
table = df_melted.pivot_table(index=['unique_id', 'ID', 'Property'],
                              columns='variable', values='value',
                              aggfunc=lambda x: ' '.join(x))
table = table.reset_index().drop(['unique_id'], axis=1)
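With this sample data, table should end up roughly as (one row per ID/activity pair):
   ID Property Activity Timestamp
0   1        A        a    1_1_22
1   1        A        b    2_1_22
2   2        B        a    1_1_23
3   2        B        b    2_1_23
4   3        C        a    1_1_24
5   3        C        b    2_1_24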
Using pd.melt, as suggested by @Pantelis, I was able to speed this transformation up enormously. Before, a file with ~13k rows took 4-5 hours on a brand-new ThinkPad X1; now it takes less than 2 minutes! That's a speedup by a factor of roughly 150, just wow. :)
Here's my new code, for inspiration / reference if anyone has a similar data structure:
def transform(data_in):
    # Determine number of processes entered in a single row of the original file
    steps_per_row = int((data_in.shape[1] - len(column_matching)) / len(process_matching))
    # Specify columns for pd.melt, transforming wide data format to long format
    id_columns = column_matching.values()
    var_names = {"Erledigungstermin Auftragsschrittbeschreibung": data_in["Auftragsschrittbeschreibung"].replace(" ", np.nan).dropna().values[0]}
    var_columns = ["Erledigungstermin Auftragsschrittbeschreibung"]
    for _ in range(2, steps_per_row + 1):
        try:
            var_names["Erledigungstermin Auftragsschrittbeschreibung" + str(_)] = data_in["Auftragsschrittbeschreibung" + str(_)].replace(" ", np.nan).dropna().values[0]
        except IndexError:
            var_names["Erledigungstermin Auftragsschrittbeschreibung" + str(_)] = data_in.loc[0, "Auftragsschrittbeschreibung" + str(_)]
        var_columns.append("Erledigungstermin Auftragsschrittbeschreibung" + str(_))
    data = pd.melt(data_in, id_vars=id_columns, value_vars=var_columns, var_name="ActivityName", value_name=timestamp)
    data.replace(var_names, inplace=True)  # Replace "Erledigungstermin Auftragsschrittbeschreibung" with ActivityName
    data.sort_values(["Auftrags-\npositionsnummer", timestamp], ascending=True, inplace=True)
    # Improve column names
    data.index.name = "SortKey"
    column_names = {v: k for k, v in column_matching.items()}
    data.rename(mapper=column_names, axis="columns", inplace=True)
    data[timestamp].replace(r'.000', '', regex=True, inplace=True)  # Remove trailing zeros from timestamp
    data.replace(r'^\s*$', float('NaN'), regex=True, inplace=True)  # Replace cells with only spaces with NaN
    data.dropna(axis=0, how="all", inplace=True)  # Remove empty rows
    data.dropna(axis=1, how="all", inplace=True)  # Remove empty columns
    data.dropna(axis=0, subset=[timestamp], inplace=True)  # Drop rows with empty Timestamp
    data.fillna('', inplace=True)  # Replace NaN values with empty cells
    return data
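For context, column_matching, process_matching and timestamp are not shown here; they are defined elsewhere in the script. Purely as a hypothetical illustration of their shape (the real values come from the rest of the configuration):
# Hypothetical placeholders only - the real mappings live elsewhere in the script
column_matching = {"ID": "Auftrags-\npositionsnummer"}           # target column -> source column
process_matching = {"Activity": "Auftragsschrittbeschreibung"}   # target column -> source column prefix
timestamp = "Timestamp"                                          # name of the timestamp column in the output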
I have three dataframes
df1 = pd.DataFrame({'src': ['src1', 'src2', 'src3'],
                    'dst': ['dst1', 'dst2', 'dst3']})
df2 = pd.DataFrame({'src': ['dst1', 'dst1', 'dst3'],
                    'dst': ['dstDst1', 'dstDst2', 'dstDst3']})
df3 = pd.DataFrame({'src': ['dstDst3', 'dstDst3'],
                    'dst': ['dstDstDst1', 'dstDstDst2']})
I want to merge the three dataframes using the following rule:
Keep the initial src field and merge in every dst that can be reached from it through a chain of src->dst relationships leading back to the src in df1. To be more concrete, the result of the merge is:
df4 = pd.DataFrame({'src': ['src1', 'src2', 'src3'],
                    'dst': ['dst1, dstDst1, dstDst2', 'dst2', 'dst3, dstDst3, dstDstDst1, dstDstDst2']})
NOTE: it is guaranteed that df2's src values are subsets of df1's dst values and df3's src values are subsets of df2's dst values.
I came up with the solution below and would like to know if there is a more elegant or idiomatic way of doing this.
df_collection = {}
df_collection[1] = df1
df_collection[2] = df2
df_collection[3] = df3

def merge(df1, df2):
    '''
    for each target in df1, find source in df2
    '''
    df3 = df1.copy(deep=True)
    # forward merge will be n complexity, ...save for later
    for i in range(0, len(df1)):
        for j in range(0, len(df2)):
            if df1.iloc[i]['dst'] == df2.iloc[j]['src']:
                # write back with .loc; chained .iloc[i]['dst'] assignment may not stick
                df3.loc[df3.index[i], 'dst'] = df3.loc[df3.index[i], 'dst'] + ',' + df2.iloc[j]['dst']
    return df3

for i in range(3, 1, -1):
    df_collection[i-1] = merge(df_collection[i-1], df_collection[i])
You can avoid the nested for loops with the following code, which merges the frames from the deepest level back to the front, so each level's reachable destinations are already folded into its dst strings before it is merged into its parent.
Code:
import pandas as pd

# Create the sample dataframes
df1 = pd.DataFrame({'src': ['src1', 'src2', 'src3'], 'dst': ['dst1', 'dst2', 'dst3']})
df2 = pd.DataFrame({'src': ['dst1', 'dst1', 'dst3'], 'dst': ['dstDst1', 'dstDst2', 'dstDst3']})
df3 = pd.DataFrame({'src': ['dstDst3', 'dstDst3'], 'dst': ['dstDstDst1', 'dstDstDst2']})

# Merge the dataframes from back to front
df = pd.DataFrame(columns=['src', 'dst'])
for i, _df in enumerate([df3, df2, df1]):
    df = _df.merge(df, how='left', left_on='dst', right_on='src', suffixes=('', f'_{i}'))
    df = df.groupby(['src', 'dst'], as_index=False)[f'dst_{i}'].agg(lambda s: [e for e in s if pd.notna(e)])
    df['dst'] = df.apply(lambda r: ', '.join([r['dst']] + r[f'dst_{i}']), axis=1)
    df = df[['src', 'dst']]
print(df)
Output:
    src                                    dst
0  src1                 dst1, dstDst1, dstDst2
1  src2                                   dst2
2  src3  dst3, dstDst3, dstDstDst1, dstDstDst2
I have the following dataframe:
df = pd.DataFrame({'Field': 'FAPERF',
                   'Form': 'LIVERID',
                   'Folder': 'ALL',
                   'Logline': '9',
                   'Data': 'Yes',
                   'Data': 'Blank',
                   'Data': 'No',
                   'Logline': '10'})
I need dataframe:
df = pd.DataFrame({'Field': ['FAPERF', 'FAPERF'],
                   'Form': ['LIVERID', 'LIVERID'],
                   'Folder': ['ALL', 'ALL'],
                   'Logline': ['9', '10'],
                   'Data': ['Yes', 'Blank', 'No']})
I tried using the code below but was not able to achieve the desired output.
res3.set_index(res3.groupby(level=0).cumcount(), append=True['Data'].unstack(0)
Can anyone please help me?
I believe your best option is to create multiple dataframes that share the same column names (for example, three dataframes that each have a "Data" column) and then simply concatenate them:
df1 = pd.DataFrame({'Field': ['FAPERF'],
                    'Form': ['LIVERID'],
                    'Folder': ['ALL'],
                    'Logline': ['9'],
                    'Data': ['Yes']})
df2 = pd.DataFrame({'Data': ['No'],
                    'Logline': ['10']})
df3 = pd.DataFrame({'Data': ['Blank']})
frames = [df1, df2, df3]
result = pd.concat(frames)
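With the frames built this way, print(result) should give roughly:
    Field     Form Folder Logline   Data
0  FAPERF  LIVERID    ALL       9    Yes
0     NaN      NaN    NaN      10     No
0     NaN      NaN    NaN     NaN  Blank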
You just need to build lists in which you specify the logline and data type for each row:
import pandas as pd
import numpy as np

list_df = []
data_type_list = ["yes", "no", "Blank"]
logline_type = ["9", "10", "10"]
for x in range(len(data_type_list)):
    new_dict = {'Field': ['FAPERF'], 'Form': ['LIVERID'], 'Folder': ['ALL'],
                "Data": [data_type_list[x]], "Logline": [logline_type[x]]}
    df = pd.DataFrame(new_dict)
    list_df.append(df)
new_df = pd.concat(list_df)
print(new_df)
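print(new_df) should then show roughly:
    Field     Form Folder   Data Logline
0  FAPERF  LIVERID    ALL    yes       9
0  FAPERF  LIVERID    ALL     no      10
0  FAPERF  LIVERID    ALL  Blank      10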
I have a dataframe as below:
df = pd.DataFrame({'$a':[1,2], '$b': [10,20]})
I tried creating a function that allows me to change a column name dynamically, where I just pass the old column name and the new column name to the function, as below:
def rename_column_name(df, old_column, new_column):
    df = df.rename({'{}'.format(old_column): '{}'.format(new_column)}, axis=1)
    return df
This function only works if I rename one column at a time, as below:
new_df = rename_column_name(df, '$a' , 'a')
which gives me this new_df:
new_df = pd.DataFrame({'a':[1,2], '$b': [10,20]})
However, I wanted to create a function that allows me to rename either one column or multiple columns, depending on my preference, like this:
new_df = rename_column_name(df, ['$a','$b'] , ['a','b'])
And get the new_df as below
new_df = pd.DataFrame({'a':[1,2], 'b': [10,20]})
So, how do I make my function more dynamic to allow me the freedom to enter multiple/one column names and rename them?
You don't need a function; you can do this with dict and zip:
In [265]: old_names = df.columns.tolist()
In [266]: new_names = ['a','b']
In [268]: df = df.rename(columns=dict(zip(old_names, new_names)))
In [269]: df
Out[269]:
a b
0 1 10
1 2 20
Function that OP needs:
In [274]: def rename_column_name(df, old_column_list, new_column_list):
...: df = df.rename(columns=dict(zip(old_column_list, new_column_list)))
...: return df
...:
In [275]: rename_column_name(df,old_names,new_names)
Out[275]:
a b
0 1 10
1 2 20
You need to pass a list of columns to this function. It can be multiple columns or a single column. This should do what you were looking for.
def rename_column_name(df, old_column, new_column):
    if not isinstance(old_column, (list, tuple)):
        old_column = [old_column]
    if not isinstance(new_column, (list, tuple)):
        new_column = [new_column]
    df = df.rename({'{}'.format(old): '{}'.format(new) for old, new in zip(old_column, new_column)}, axis=1)
    return df  # dang, I should have used dict(zip(...)) like in the other solution :P
I guess ... although I don't understand how this is easier than just calling
df.rename(columns={'$a': 'a', '$b': 'b'})
You can do that with the zip function, where old_column_names and new_column_names should be lists.
def rename_column_name(df, old_column_names, new_column_names):
    # validate that a new name has been passed for every old name
    if len(old_column_names) == len(new_column_names):
        df = df.rename(columns=dict(zip(old_column_names, new_column_names)))
    return df
To handle both a single-column rename and lists of columns, the function needs a few more conditions, which could look like this:
def rename_column_name(df, old_column_names, new_column_names):
    # validate that a new name has been passed for every old name
    if isinstance(old_column_names, list) and isinstance(new_column_names, list):
        if len(old_column_names) == len(new_column_names):
            df = df.rename(columns=dict(zip(old_column_names, new_column_names)))
    elif isinstance(old_column_names, str) and isinstance(new_column_names, str):
        df = df.rename(columns={'{}'.format(old_column_names): '{}'.format(new_column_names)})
    return df
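A quick check of both call styles with the sample dataframe (a sketch, assuming the function above):
df = pd.DataFrame({'$a': [1, 2], '$b': [10, 20]})
print(rename_column_name(df, '$a', 'a'))                  # single column
print(rename_column_name(df, ['$a', '$b'], ['a', 'b']))   # list of columns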
I want to select rows from a dask dataframe based on a list of indices. How can I do that?
Example:
Let's say I have the following dask dataframe.
import pandas as pd
import dask.dataframe

dict_ = {'A': [1, 2, 3, 4, 5, 6, 7], 'B': [2, 3, 4, 5, 6, 7, 8], 'index': ['x1', 'a2', 'x3', 'c4', 'x5', 'y6', 'x7']}
pdf = pd.DataFrame(dict_)
pdf = pdf.set_index('index')
ddf = dask.dataframe.from_pandas(pdf, npartitions=2)
Furthermore, I have a list of indices, that I am interested in, e.g.
indices_i_want_to_select = ['x1','x3', 'y6']
From this, I would like to generate a dask dataframe containing only the rows specified in indices_i_want_to_select.
Edit: dask now supports loc on lists:
ddf_selected = ddf.loc[indices_i_want_to_select]
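As usual with dask, ddf_selected is still lazy; to materialize it as an in-memory pandas dataframe you would call compute on it:
df_selected = ddf_selected.compute()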
The following should still work, but is not necessary anymore:
import pandas as pd
import dask.dataframe as dd
#generate example dataframe
pdf = pd.DataFrame(dict(A = [1,2,3,4,5], B = [6,7,8,9,0]), index=['i1', 'i2', 'i3', 4, 5])
ddf = dd.from_pandas(pdf, npartitions = 2)
#list of indices I want to select
l = ['i1', 4, 5]
#generate new dask dataframe containing only the specified indices
ddf_selected = ddf.map_partitions(lambda x: x[x.index.isin(l)], meta = ddf.dtypes)
With dask version '1.2.0', the above results in an error due to the mixed index type.
In any case, there is the option to use loc.
import pandas as pd
import dask.dataframe as dd
#generate example dataframe
pdf = pd.DataFrame(dict(A = [1,2,3,4,5], B = [6,7,8,9,0]), index=['i1', 'i2', 'i3', '4', '5'])
ddf = dd.from_pandas(pdf, npartitions = 2,)
# #list of indices I want to select
l = ['i1', '4', '5']
# #generate new dask dataframe containing only the specified indices
# ddf_selected = ddf.map_partitions(lambda x: x[x.index.isin(l)], meta = ddf.dtypes)
ddf_selected = ddf.loc[l]
ddf_selected.head()