Very basic user of Pandas here, but I am coming up against a brick wall.
So I have one dataframe called dg which has a column called 'user_id', and two other columns which aren't needed at the moment. I also have two more dataframes (data_conv and data_retargeting) which include the same column name, plus a column called 'timestamp'; however, there are multiple timestamps for each 'user_id'.
What I need is to create new columns in dg for the minimum and maximum 'timestamp' found.
I am currently able to do this through a very long-winded method with iterrows; however, for a dataframe of ~16,000 rows it took 45 minutes, and I would like to cut that down because I have larger dataframes to run this on.
for index, row in dg.iterrows():
    user_id = row['pdp_id']
    # index of the earliest/latest matching row in each source
    n_audft = data_retargeting[data_retargeting.pdp_id == user_id].index.min()
    n_audlt = data_retargeting[data_retargeting.pdp_id == user_id].index.max()
    n_convft = data_conv[data_conv.pdp_id == user_id].index.min()
    n_convlt = data_conv[data_conv.pdp_id == user_id].index.max()
    dg.loc[index, 'first_retargeting'] = data_retargeting.loc[n_audft, 'raw_time']
    dg.loc[index, 'last_retargeting'] = data_retargeting.loc[n_audlt, 'raw_time']
    dg.loc[index, 'first_conversion'] = data_conv.loc[n_convft, 'raw_time']
    dg.loc[index, 'last_conversion'] = data_conv.loc[n_convlt, 'raw_time']
Without going into specific code: is every user_id in dg found in data_conv and data_retargeting? If so, you can merge them (http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.merge.html) into a new dataframe first, then compute the max/min and extract the desired columns. I suspect that might run a little bit faster.
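For example, here is a rough, untested sketch of that idea, with a slight variation: aggregate each source first, then left-merge, so user_ids missing from a source simply get NaN. It assumes the key column is 'pdp_id' and the timestamp column is 'raw_time', as in the code above, and it takes the min/max of 'raw_time' directly rather than going through the row index:

import pandas as pd

# Aggregate each source once, instead of filtering it per row of dg.
retarget = (data_retargeting.groupby('pdp_id')['raw_time']
            .agg(first_retargeting='min', last_retargeting='max')
            .reset_index())
conv = (data_conv.groupby('pdp_id')['raw_time']
        .agg(first_conversion='min', last_conversion='max')
        .reset_index())

# Left merges keep every row of dg; unmatched user_ids get NaN.
dg = dg.merge(retarget, on='pdp_id', how='left')
dg = dg.merge(conv, on='pdp_id', how='left')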
I am currently working on a project where my goal is to get the game scores for each NCAA men's basketball game. In order to do this, I need to use the Python package sportsreference. I need to use two dataframes: one called df, which has the game date, and one called box_index (shown below), which has the unique link of each game. I need to get the date column replaced by the unique link of each game. These unique links start with the date (formatted exactly as in the date column of df), which makes it easier to do this with regex or .contains(). I keep getting a KeyError: 0. Can someone help me figure out what is wrong with my logic below?
from sportsreference.ncaab.schedule import Schedule

def get_team_schedule(name):
    combined = Schedule(name).dataframe
    box_index = combined["boxscore_index"]
    box = box_index.to_frame()
    #print(box)

    for i in range(len(df)):
        for j in range(len(box)):
            if box.loc[i, "boxscore_index"].contains(df.loc[i, "date"]):
                df.loc[i, "date"] = box.loc[i, "boxscore_index"]

get_team_schedule("Virginia")
It seems like "box" and "df" are pandas DataFrames, and since you are iterating through all the rows, it may be more efficient to use iterrows (instead of searching by index with ".loc"):
for i, row_df in df.iterrows():
    for j, row_box in box.iterrows():
        # plain substring test; Python strings have no .contains() method
        if row_df["date"] in row_box["boxscore_index"]:
            df.at[i, 'date'] = row_box["boxscore_index"]
the ".at" function will overwrite the value at a given cell
Just fyi, iterrows is more efficient than .loc., however itertuples is about 10x faster, and zip about 100xs.
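For illustration, here is a rough sketch of the zip pattern, assuming the df and box frames from the question are in scope and both columns hold strings:

# Pair index labels with column values directly; no row objects are built.
for i, date in zip(df.index, df["date"]):
    for link in box["boxscore_index"]:
        if date in link:
            df.at[i, "date"] = link
            break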
The KeyError: 0 is saying you can't get the row at index 0, because there is no index value of 0 when you use box.loc[i, "boxscore_index"] (the index values are the dates, for example '2020-12-22-14-virginia'). You could use .iloc though, like box.iloc[i]["boxscore_index"]. You'd have to convert all the .loc calls to that.
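For instance, a rough sketch of that conversion (hypothetical; it keeps the nested-loop shape of the original, and the substring test assumes both columns hold strings):

for i in range(len(df)):
    for j in range(len(box)):
        # positional lookups: row 0 means "first row" regardless of the labels
        if df["date"].iloc[i] in box["boxscore_index"].iloc[j]:
            df.iloc[i, df.columns.get_loc("date")] = box["boxscore_index"].iloc[j]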
Like the other post said though, I wouldn't go down that path. I actually wouldn't even use iterrows here. I would put the box_index into a list, then iterate through that, and use pandas to filter your df dataframe. I'm sort of making some assumptions about what df looks like, so if this doesn't work, or it's not what you're looking to do, please share some sample rows of df:
from sportsreference.ncaab.schedule import Schedule

def get_team_schedule(name):
    combined = Schedule(name).dataframe
    box_index_list = list(combined["boxscore_index"])

    for box_index in box_index_list:
        temp_game_data = df[df["date"] == box_index]
        print(box_index)
        print(temp_game_data, '\n')

get_team_schedule("Virginia")
I have 2 pandas dataframes, df_pe and df_merged. Both the dataframes have several rows, as well as several columns. Now, there are some specific things I would like to accomplish using these dataframes:
In df_merged, there is a column named ST, which contains timestamps of various events in a format like 2017-08-27 00:00:00. In df_pe, there are 2 columns, TOn and TOff, which contain the time when an event started and when it ended (e.g. the TOn value for a random row is 2018-08-17 01:20:00, while the TOff value is 2018-08-17 02:30:00).
Secondly, there is a column in df_pe named EC. I have another dataframe called df_uniqueal, which also has a column called EC. What I would like to do is:
a. For all rows in df_merged, whenever the ST value falls within the duration from TOn to TOff in df_pe, create 2 new columns in df_merged: EC and ED. Put the value of EC from df_pe into the new EC column, and put the corresponding value from df_uniqueal into the new ED column (ED is in effect a mapped version of df_pe's EC, looked up in df_uniqueal). If no condition matches, or NaNs (missing values) are left after this procedure, put the string "NF" into df_merged's new ED column and the integer 0 into df_merged's new EC column.
I have explored SO and SE, but have not found anything substantial. Any help in this regard is highly appreciated.
This is my attempt at using for loops in Python to iterate over the dataframes to satisfy the first condition, but it runs forever (never ending), and I don't think this is the best possible way to accomplish this.
for i in range(len(df_merged)):
    for j in range(len(df_pe)):
        if df_pe.TOn[j] < df_merged.ST[i] < df_pe.TOff[j]:
            df_merged.EC[i] = df_pe.EC[j]
            df_merged.ED[i] = df_uniqueal.ED[df_pe.EC[j]]
        else:
            df_merged.EC[i] = 0
            df_merged.ED[i] = "NF"
EDIT
Please refer to the image for the expected output and a baby example of the dataframes.
The relevant columns are in bold (note the column numbers may differ, but the column names are the same in this sample example).
If I have understood the question correctly, hopefully this will get you started.
for i, val in df_merged['ST'].items():
    bool_idx = (df_pe['TOn'] < val) & (val < df_pe['TOff'])
    matches = df_pe.loc[bool_idx, 'EC']
    if matches.empty:
        df_merged.loc[i, 'EC'] = 0
        df_merged.loc[i, 'ED'] = "NF"
    else:
        # take the first matching event's EC, then look up its ED
        value_from_df_pe = matches.iloc[0]
        df_merged.loc[i, 'EC'] = value_from_df_pe
        value_from_df_uniqueal = df_uniqueal.loc[df_uniqueal['EC'] == value_from_df_pe, 'ED'].iloc[0]
        df_merged.loc[i, 'ED'] = value_from_df_uniqueal
Please note I have not tested this code on any data.
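If the row-by-row loop turns out to be too slow on bigger frames, below is an untested sketch of a vectorized alternative built on pd.IntervalIndex. It assumes the (TOn, TOff) windows in df_pe never overlap (get_indexer raises if they do) and that df_uniqueal has 'EC' and 'ED' columns mapping each EC to a single ED:

import numpy as np
import pandas as pd

# One interval per event; closed='neither' matches TOn < ST < TOff strictly.
windows = pd.IntervalIndex.from_arrays(df_pe['TOn'], df_pe['TOff'], closed='neither')
pos = windows.get_indexer(df_merged['ST'])  # -1 where no window contains ST

# EC: the matching row's EC, or 0 where nothing matched.
df_merged['EC'] = np.where(pos >= 0, df_pe['EC'].to_numpy()[pos], 0)

# ED: look EC up in df_uniqueal; anything unmatched becomes "NF".
ec_to_ed = df_uniqueal.set_index('EC')['ED']
df_merged['ED'] = df_merged['EC'].map(ec_to_ed).fillna("NF")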
I am new to Python and I'm trying to reproduce the result of Excel's INDEX/MATCH combination with Python & Pandas, though I'm struggling to get it working.
Basically, I have 2 separate DataFrames:
The first DataFrame ('market') has 7 columns, though I only need 3 of those columns for this exercise ('symbol', 'date', 'close'). This df has 13,948,340 rows.
The second DataFrame ('transactions') has 14 columns, though I only need 2 of those columns ('i_symbol', 'acceptance_date'). This df has 1,428,026 rows.
My logic is: If i_symbol is equal to symbol and acceptance_date is equal to date: print symbol, date & close. This should be easy.
I have achieved it with iterrows(), but because of the size of the dataset it returns a single result every 3 minutes - which means I would have to run the script for 1,190 hours to get the final result.
Based on what I have read online, itertuples should be a faster approach, but I am currently getting an error:
ValueError: too many values to unpack (expected 2)
This is the code I have written (which currently produces the above ValueError):
for i_symbol, acceptance_date in transactions.itertuples(index=False):
    for symbol, date in market.itertuples(index=False):
        if i_symbol == symbol and acceptance_date == date:
            print(market.symbol + market.date + market.close)
2 questions:
Is itertuples() the best/fastest approach? If so, how can I get the above working?
Does anyone know a better way? Would indexing work? Should I use an external db (e.g. mysql) instead?
Thanks, Matt
Regarding question 1: pandas.itertuples() yields one namedtuple for each row. You can either unpack these like standard tuples or access the tuple elements by name:
for t in transactions.itertuples(index=False):
    for m in market.itertuples(index=False):
        if t.i_symbol == m.symbol and t.acceptance_date == m.date:
            # print as separate arguments: 'close' is numeric, so
            # concatenating it with + to strings would raise a TypeError
            print(m.symbol, m.date, m.close)
(I did not test this with data frames of your size but I'm pretty sure it's still painfully slow)
Regarding question 2: You can simply merge both data frames on symbol and date.
Rename your "transactions" DataFrame so that it also has columns named "symbol" and "date":
transactions = transactions[['i_symbol', 'acceptance_date']]
transactions.columns = ['symbol','date']
Then merge both DataFrames on symbol and date:
result = pd.merge(market, transactions, on=['symbol','date'])
The result DataFrame consists of one row for each symbol/date combination which exists in both DataFrames. The operation only takes a few seconds on my machine with DataFrames of your size.
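From there, the three columns the question asks for can be printed directly:

# one row per symbol/date match, with the closing price alongside
print(result[['symbol', 'date', 'close']])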
@Parfait provided the best answer below as a comment. Very clean, worked incredibly fast - thank you.
pd.merge(market[['symbol', 'date', 'close']],
         transactions[['i_symbol', 'acceptance_date']],
         left_on=['symbol', 'date'],
         right_on=['i_symbol', 'acceptance_date'])
No need for looping.
I have two dataframes with different lengths (df, df1). They share one common label, "collo_number". I want to search the second dataframe for every collo_number in the first dataframe. The problem is that the second dataframe contains multiple rows, for different dates, for every collo_number. So I want to sum those rows and add the result in a new column in the first dataframe.
I now use a loop, but it is rather slow and has to perform this operation for all 7 days in a week. Is there a way to get better performance? I tried multiple solutions but keep getting the error that I cannot use the equals sign for two dataframes with different lengths. Help would really be appreciated! Here is an example of what is working, but with rather bad performance:
df5 = [df1.loc[(df1.index == nasa) & (df1.afleverdag == x1) & (df1.ind_init_actie == "N"), "aantal_colli"].sum() for nasa in df.collonr]
Your description is a bit vague (hence my comment). First, what you could do is select the rows of the dataframe that you want to search:
dftmp = df1[(df1.afleverdag==x1) & (df1.ind_init_actie=='N')]
so that you don't do this for every item in the loop.
Second, use .groupby.
newseries = dftmp['aantal_colli'].groupby(dftmp.index).sum()
# .ix has been removed from modern pandas; .reindex is the safe equivalent
# here and leaves NaN for collo numbers that have no rows in df1
newseries = newseries.reindex(df.collonr.unique())
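To attach those sums back to the first dataframe, as the question asks, one option is a map on the key column (the new column name 'sum_aantal_colli' is just a placeholder):

# Map each collo number in df to its summed total; fill non-matches with 0.
df['sum_aantal_colli'] = df['collonr'].map(newseries).fillna(0)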
Say I construct a dataframe with pandas, having multi-indexed columns:
import numpy as np
import pandas as pd

mi = pd.MultiIndex.from_product([['trial_1', 'trial_2', 'trial_3'],
                                 ['motor_neuron', 'afferent_neuron', 'interneuron'],
                                 ['time', 'voltage', 'calcium']])
ind = np.arange(1, 11)
df = pd.DataFrame(np.random.randn(10, 27), index=ind, columns=mi)
Link to image of output dataframe
Say I want only the voltage data from trial 1. I know that the following code fails, because the indices are not sorted lexically:
idx = pd.IndexSlice
df.loc[:,idx['trial_1',:,'voltage']]
As explained in another post, the solution is to sort the dataframe's indices, which works as expected:
dfSorted = df.sortlevel(axis=1)
dfSorted.loc[:,idx['trial_1',:,'voltage']]
I understand why this is necessary. However, say I want to add a new column:
dfSorted.loc[:,('trial_1','interneuron','scaledTime')] = 100 * dfSorted.loc[:,('trial_1','interneuron','time')]
Now dfSorted is not sorted anymore, since the new column was tacked onto the end, rather than snuggled into order. Again, I have to call sortlevel before selecting multiple columns.
I feel this makes for repetitive, bug-prone code, especially when adding lots of columns to the much bigger dataframe in my own project. Is there a (preferably clean-looking) way of inserting new columns in lexical order without having to call sortlevel over and over again?
One approach would be to use filter, which does a text filter on the column names:
In [117]: df['trial_1'].filter(like='voltage')
Out[117]:
motor_neuron afferent_neuron interneuron
voltage voltage voltage
1 -0.548699 0.986121 -1.339783
2 -1.320589 -0.509410 -0.529686
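In the same spirit, a cross-section with .xs can select by level without a prior sort; a small sketch (it may emit a PerformanceWarning on unsorted columns):

# Every 'voltage' column under 'trial_1'; the matched level is dropped.
voltage_trial1 = df['trial_1'].xs('voltage', axis=1, level=1)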