Python pandas: multiply 2 columns of 2 dataframes with different datetime index

I have two dataframes, UsdBrlDSlice and indexesM. The first is on a daily basis and has an index in yyyy-mm-dd format; the second is on a monthly basis and has an index in yyyy-mm format.
Example of UsdBrlDSlice:
USDBRL
date
1994-01-03 331.2200
1994-01-04 336.4900
1994-01-05 341.8300
1994-01-06 347.2350
1994-01-07 352.7300
...
2020-10-05 5.6299
2020-10-06 5.5205
2020-10-07 5.6018
2020-10-08 5.6200
2020-10-09 5.5393
I need to insert a new column in UsdBrlDSlice, multiplying its USDBRL value by a specific column indexesM['c'], matching the correct month across both indexes.
Something like Excel's VLOOKUP followed by a multiplication. Thanks.

I solved it by 1) creating a new y-m column in the first dataframe, and then 2) applying the map() function (this relies on indexesM's index holding the same monthly periods, so the lookup can match):
UsdBrlDSlice['y-m'] = UsdBrlDSlice.index.to_period('M')
UsdBrlDSlice['new col'] = UsdBrlDSlice['USDBRL'] * UsdBrlDSlice['y-m'].map(indexesM['c'])

UsdBrlDSliceTmp = UsdBrlDSlice.copy()
UsdBrlDSliceTmp['date_col'] = UsdBrlDSliceTmp.index.values
indexesMTmp = indexesM.copy()
indexesMTmp['date_col'] = indexesMTmp.index.values
# use (year, month) as the merge key so the same month in different years doesn't collide
UsdBrlDSliceTmp['month'] = UsdBrlDSliceTmp['date_col'].apply(lambda x: (x.year, x.month))
indexesMTmp['month'] = indexesMTmp['date_col'].apply(lambda x: (x.year, x.month))
UsdBrlDSliceTmp = UsdBrlDSliceTmp.merge(indexesMTmp, on='month', how='left')
UsdBrlDSliceTmp['target'] = UsdBrlDSliceTmp['USDBRL'] * UsdBrlDSliceTmp['c']
# the merge resets the index, so assign by position rather than by label
UsdBrlDSlice['new_col'] = UsdBrlDSliceTmp['target'].values

Related

Iterate through df rows faster

I am trying to iterate through the rows of a pandas df to get data from one column of each row and use that data to add new columns. The code is listed below, but it is VERY slow. Is there any way to do what I am trying to do without iterating through the individual rows of the dataframe?
ctqparam = []
wwy = []
ww = []
for index, row in df.iterrows():
    date = str(row['Event_Start_Time'])
    day = int(date[8] + date[9])
    month = int(date[5] + date[6])
    total = 0
    for i in range(0, month-1):
        total += months[i]
    total += day
    out = total // 7
    ww += [out]
    wwy += [str(date[0] + date[1] + date[2] + date[3])]
    val = str(row['TPRev'])
    out = ""
    for letter in val:
        if letter != '.':
            out += letter
    df.replace(to_replace=row['TPRev'], value=str(out), inplace=True)
    val = str(row['Subtest'])
    if val in ctqparam_dict.keys():
        ctqparam += [ctqparam_dict[val]]

# add WWY column, WW column, and correct data format of Test_Tape column
df.insert(0, column='Work_Week_Year', value=wwy)
df.insert(3, column='Work_Week', value=ww)
df.insert(4, column='ctqparam', value=ctqparam)
It's hard to say exactly what you're trying to do. However, if you're looping through rows, chances are there is a better way to do it.
For example, given a csv file that looks like this..
Event_Start_Time,TPRev,Subtest
4/12/19 06:00,"this. string. has dots.. in it.",{'A_Dict':'maybe?'}
6/10/19 04:27,"another stri.ng wi.th d.ots.",{'A_Dict':'aVal'}
You may want to:
Format Event_Start_Time as datetime.
Get the week number from Event_Start_Time.
Remove all the dots (.) from the strings in column TPRev.
Expand a dictionary contained in Subtest to its own column.
Instead of looping through the rows, think in terms of whole columns: apply an operation to a column once and it is carried down every row.
Code:
import pandas as pd
df = pd.read_csv('data.csv')
print(df)
Event_Start_Time TPRev Subtest
0 4/12/19 06:00 this. string. has dots.. in it. {'A_Dict':'maybe?'}
1 6/10/19 04:27 another stri.ng wi.th d.ots. {'A_Dict':'aVal'}
# format 'Event_Start_Time' as datetime
df['Event_Start_Time'] = pd.to_datetime(df['Event_Start_Time'], format='%d/%m/%y %H:%M')
# get the week number from 'Event_Start_Time'
df['Week_Number'] = df['Event_Start_Time'].dt.isocalendar().week
# replace all '.' (periods) in the 'TPRev' column
df['TPRev'] = df['TPRev'].str.replace('.', '', regex=False)
# get a dictionary string out of column 'Subtest' and put into a new column
df = pd.concat([df.drop(['Subtest'], axis=1), df['Subtest'].map(eval).apply(pd.Series)], axis=1)
print(df)
Event_Start_Time TPRev Week_Number A_Dict
0 2019-12-04 06:00:00 this string has dots in it 49 maybe?
1 2019-10-06 04:27:00 another string with dots 40 aVal
print(df.info())
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Event_Start_Time 2 non-null datetime64[ns]
1 TPRev 2 non-null object
2 Week_Number 2 non-null UInt32
3 A_Dict 2 non-null object
dtypes: UInt32(1), datetime64[ns](1), object(2)
So you end up with a dataframe like this...
Event_Start_Time TPRev Week_Number A_Dict
0 2019-12-04 06:00:00 this string has dots in it 49 maybe?
1 2019-10-06 04:27:00 another string with dots 40 aVal
Obviously you'll probably want to do other things. Look at your data. Make a list of what you want to do to each column or what new columns you need. Don't worry about how right now, as chances are it's possible and has been done before - you just need to find the existing method.
For example, you might write down "get the difference in days between the current row and the row beneath it". Then search for how to do that particular formatting or calculation. Break the problem down.
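As a small illustration of that column-wise mindset, here is a minimal sketch of the "difference in days between consecutive rows" idea (the Event_Start_Time column name comes from the example above; the data and the days_since_prev name are made up for demonstration):
import pandas as pd

df = pd.DataFrame({'Event_Start_Time': pd.to_datetime(['2019-12-04 06:00',
                                                       '2019-12-06 04:27',
                                                       '2019-12-09 10:15'])})
# difference in days between each row and the previous one, with no explicit loop
df['days_since_prev'] = df['Event_Start_Time'].diff().dt.days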

Is it possible to transform a CSV file with date as index?

I am currently trying to convert a CSV with python3 to a new format.
My later goal is to add some information to this file with pandas.
Things like "is the date a weekday or a weekend day?".
To achieve this, however, I have to overcome the first hurdle.
I need to transform my CSV file from this:
date,hour,price
2018-10-01,0-1,59.53
2018-10-01,1-2,56.10
2018-10-01,2-3,51.41
2018-10-01,3-4,47.38
2018-10-01,4-5,47.59
2018-10-01,5-6,51.61
2018-10-01,6-7,69.13
2018-10-01,7-8,77.32
2018-10-01,8-9,84.97
2018-10-01,9-10,79.56
2018-10-01,10-11,73.70
2018-10-01,11-12,71.63
2018-10-01,12-13,63.15
2018-10-01,13-14,60.24
2018-10-01,14-15,56.18
2018-10-01,15-16,53.00
2018-10-01,16-17,53.37
2018-10-01,17-18,60.42
2018-10-01,18-19,69.93
2018-10-01,19-20,75.00
2018-10-01,20-21,65.83
2018-10-01,21-22,53.86
2018-10-01,22-23,46.46
2018-10-01,23-24,42.50
2018-10-02,0-1,45.10
2018-10-02,1-2,44.10
2018-10-02,2-3,44.06
2018-10-02,3-4,43.70
2018-10-02,4-5,44.29
2018-10-02,5-6,48.13
2018-10-02,6-7,57.70
2018-10-02,7-8,68.21
2018-10-02,8-9,70.36
2018-10-02,9-10,54.53
2018-10-02,10-11,48.49
2018-10-02,11-12,46.19
2018-10-02,12-13,44.15
2018-10-02,13-14,30.79
2018-10-02,14-15,27.75
2018-10-02,15-16,30.74
2018-10-02,16-17,26.77
2018-10-02,17-18,38.68
2018-10-02,18-19,48.52
2018-10-02,19-20,49.03
2018-10-02,20-21,45.43
2018-10-02,21-22,32.04
2018-10-02,22-23,26.22
2018-10-02,23-24,1.08
2018-10-03,0-1,2.13
2018-10-03,1-2,0.10
...
to this:
date,0-1,1-2,2-3,3-4,4-5,5-6,6-7,7-8,8-9,...,23-24
2018-10-01,59.53,56.10,51.41,47.38,47.59,51.61,69.13,77.32,84.97,...,42.50
2018-10-02,45.10,44.10,44.06,43.70,44.29,....
2018-10-03,2.13,0.10,....
...
I've tried a lot with pandas DataFrames, but I can't come up with a solution.
import numpy as np
import pandas as pd
df = pd.read_csv('file.csv')
df
date hour price
0 2018-10-01 0-1 59.53
1 2018-10-01 1-2 56.10
2 2018-10-01 2-3 51.41
3 2018-10-01 3-4 47.38
4 2018-10-01 4-5 47.59
5 2018-10-01 5-6 51.61
6 2018-10-01 6-7 69.13
7 2018-10-01 7-8 77.32
8 2018-10-01 8-9 84.97
The DataFrame should look like the target format above, but I can't manage to fill it:
df = pd.DataFrame(df, index=['date'], columns=['date','0-1','1-2','2-3', '3-4', '4-5', '5-6', '6-7', '7-8', '8-9', '9-10', '10-11', '11-12', '12-13', '13-14', '14-15', '15-16', '16-17', '17-18', '18-19', '19-20', '20-21', '21-22', '22-23', '23-24'])
How would you solve this?
You can use pandas.DataFrame.unstack():
# pivot the dataframe with hour to the columns
df1 = df.set_index(['date','hour']).unstack(1)
# drop level-0 on columns
df1.columns = [ c[1] for c in df1.columns ]
# sort the column names by numeric order of hours (the number before '-')
df1 = df1.reindex(columns=sorted(df1.columns, key=lambda x: int(x.split('-')[0]))).reset_index()
If I understand correctly, try using the index_col argument of pd.read_csv(), using integer labelling for the columns in the file:
df = pd.read_csv('file.csv', index_col=0)
See the read_csv docs; don't be put off by the alarming number of keyword arguments - one of them will often do what you need!
You may need to parse the first two columns as a date, then add a column for weekend based on a condition on the result. See the parse_dates and infer_datetime_format keyword arguments.
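For the weekday/weekend question mentioned above, a minimal sketch (the date column name is taken from the sample CSV; is_weekend is just an example name):
import pandas as pd

df = pd.read_csv('file.csv', parse_dates=['date'])
# dayofweek: Monday=0 ... Sunday=6, so >= 5 means Saturday or Sunday
df['is_weekend'] = df['date'].dt.dayofweek >= 5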

New column based off certain input parameter to select what columns to use - Python

I have a pandas dataframe that includes multiple columns of monthly finance data. There is an input period that is specified by the person running the program; it's currently just saved as period, as shown below within the code.
#coded into python
period = ?? (user adds this in from input screen)
I need to create another column of data that uses the input period number to perform a calculation of other columns.
So, in the above table I'd like to create a new column 'calculation' that depends on the period input. For example, a period of 1 would complete calc1 (with the math actually done), period = 2 would use calc2, and period = 3 would use calc3. I only need one column calculated, depending on the period number, but I added three examples in the picture below to show how it would work.
I can do this in SQL using CASE WHEN, using the input period to decide which columns to sum:
select Account #,
'&Period' AS Period,
'&Year' AS YR,
case
When '&Period' = '1' then sum(d_cf+d_1)
when '&Period' = '2' then sum(d_cf+d_1+d_2)
when '&Period' = '3' then sum(d_cf+d_1+d_2+d_3)
I am unsure how to do this easily in python (I'm a newer learner). Yes, I could create a column for every possible period (1-12) with each calculation and then only select the one I need, but I'd like to learn a more efficient way.
Can you help, or point me in a better direction?
You could certainly do something like
df[['d_cf'] + [f'd_{i}' for i in range(1, period+1)]].sum(axis=1)
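For instance, assigned to a new column (the d_cf / d_1 ... column names come from the question; the period value here is just a stand-in for the user input):
period = 3  # hypothetical user input
cols = ['d_cf'] + [f'd_{i}' for i in range(1, period + 1)]
df['calculation'] = df[cols].sum(axis=1)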
You can do this using a simple function in python:
def get_calculation(df, period=None):
    '''
    df = pandas data frame
    period = integer type
    '''
    if period == 1:
        return df.apply(lambda x: x['d_0'] + x['d_1'], axis=1)
    if period == 2:
        return df.apply(lambda x: x['d_0'] + x['d_1'] + x['d_2'], axis=1)
    if period == 3:
        return df.apply(lambda x: x['d_0'] + x['d_1'] + x['d_2'] + x['d_3'], axis=1)

new_df = get_calculation(df, period=1)
Setup:
df = pd.DataFrame({'d_0': list(range(1, 7)),
                   'd_1': list(range(10, 70, 10)),
                   'd_2': list(range(100, 700, 100)),
                   'd_3': list(range(1000, 7000, 1000))})
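A more general sketch that avoids hard-coding each period (assuming the d_0 ... d_N column naming from the setup above):
def get_calculation(df, period):
    cols = [f'd_{i}' for i in range(period + 1)]
    return df[cols].sum(axis=1)

new_df = get_calculation(df, period=1)  # d_0 + d_1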
Setup:
import pandas as pd

ddict = {
    'Year': ['2018', '2018', '2018', '2018', '2018'],
    'Account_Num': ['1111', '1122', '1133', '1144', '1155'],
    'd_cf': ['1', '2', '3', '4', '5'],
}
data = pd.DataFrame(ddict)
Create value calculator:
def get_calcs(period):
    # convert the period to a string
    s = str(period)
    # convert it to an integer and add 1
    n = int(period) + 1
    # repeat each digit of the period (period + 1) times
    return ''.join([i * n for i in s])
Main function copies data frame, iterates through period values, and sets calculated values to the correct spot index-wise for each relevant column:
def process_data(data_frame=data, period_column='d_cf'):
    # Copy the data_frame argument
    df = data_frame.copy(deep=True)
    # Run through each value in our period column
    for i in df[period_column].values.tolist():
        # Create a temporary column
        new_column = 'd_{}'.format(i)
        # Pass the period into our calculator; capture the result
        calculated_value = get_calcs(i)
        # Create a new column based on our period number
        df[new_column] = ''
        # Use indexing to place the calculated value into our desired location
        df.loc[df[period_column] == i, new_column] = calculated_value
    # Return the result
    return df
Start:
Year Account_Num d_cf
0 2018 1111 1
1 2018 1122 2
2 2018 1133 3
3 2018 1144 4
4 2018 1155 5
Result:
process_data(data)
Year Account_Num d_cf d_1 d_2 d_3 d_4 d_5
0 2018 1111 1 11
1 2018 1122 2 222
2 2018 1133 3 3333
3 2018 1144 4 44444
4 2018 1155 5 555555

calculating slope on a rolling basis in pandas df python

I have a dataframe :
CAT ^GSPC
Date
2012-01-06 80.435059 1277.810059
2012-01-09 81.560600 1280.699951
2012-01-10 83.962914 1292.079956
....
2017-09-16 144.56653 2230.567646
and I want to find the slope of the stock vs. the S&P index over the last 63 days for each period. I have tried:
x = 0
temp_dct = {}
for date in df.index:
    x += 1
    max(x, (len(df.index) - 64))
    temp_dct[str(date)] = np.polyfit(df['^GSPC'][0+x:63+x].values,
                                     df['CAT'][0+x:63+x].values,
                                     1)[0]
However, I feel this is very "unpythonic", and I've had trouble integrating rolling/shift functions into this.
My expected output is a column called "Beta" that holds the slope of the S&P (x values) against the stock (y values) for all available dates.
# this will operate on a Series: fit a straight line of the values against the index
def polyf(seri):
    return np.polyfit(seri.index.values, seri.values, 1)[0]

# store the original date index in a column in case you need to reset back to it after fitting
df['Date'] = df.index
# use the S&P values as the index so they become the x values in the fit
df.index = df['^GSPC']
df['slope'] = df['CAT'].rolling(63, min_periods=2).apply(polyf, raw=False)
After running this, there will be a new column storing the fitting result.
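A loop-free alternative sketch, using the fact that an OLS slope equals cov(x, y) / var(x), computed here on the original date-indexed dataframe:
# rolling 63-day beta of CAT against the S&P
df['Beta'] = (df['CAT'].rolling(63).cov(df['^GSPC'])
              / df['^GSPC'].rolling(63).var())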

Python Pandas: Count quarterly occurrence from start and end date range

I have a dataframe of jobs for different people, with start and end times for each job. I'd like to count, every four months, how many jobs each person is responsible for. I figured out a way to do it, but I'm sure it's tremendously inefficient (I'm new to pandas). It takes quite a while to compute when I run the code on my complete dataset (hundreds of persons and jobs).
Here is what I have so far.
#create a data frame
import pandas as pd
import numpy as np

df = pd.DataFrame({'job': pd.Categorical(['job1', 'job2', 'job3', 'job4']),
                   'person': pd.Categorical(['p1', 'p1', 'p2', 'p2']),
                   'start': ['2015-01-01', '2015-06-01', '2015-01-01', '2016-01-01'],
                   'end': ['2015-07-01', '2015-12-31', '2016-03-01', '2016-12-31']})
df['start'] = pd.to_datetime(df['start'])
df['end'] = pd.to_datetime(df['end'])
Which gives me
I then create a new dataset with
bdate = min(df['start'])
edate = max(df['end'])
dates = pd.date_range(bdate, edate, freq='4MS')
people = sorted(set(list(df['person'])))
df2 = pd.DataFrame(np.zeros((len(dates), len(people))), index=dates, columns=people)

for d in pd.date_range(bdate, edate, freq='MS'):
    for p in people:
        contagem = df[(df['person'] == p) &
                      (df['start'] <= d) &
                      (df['end'] >= d)]
        pos = np.argmin(np.abs(dates - d))
        df2.iloc[pos][p] = len(contagem.index)
df2
And I get
I'm sure there must be a better way of doing this without having to loop through all dates and persons. But how?
This answer assumes that each job-person combination is unique. It creates a series for every row, with the value equal to the job and an index that expands the dates. Then it resamples every 4 months (which is not quarterly, but it is what your solution describes) and counts the unique non-NA occurrences.
def make_date_range(x):
    return pd.Series(index=pd.date_range(x.start.values[0], x.end.values[0], freq='M'),
                     data=x.job.values[0])

# Iterate through each job-person combo and make an entry for each month with the job as the value
df1 = df.groupby(['job', 'person']).apply(make_date_range).unstack('person')

# remove outer level from index
df1.index = df1.index.droplevel('job')

# resample every 4 months counting only unique values
df1.resample('4MS').agg(lambda x: len(x[x.notnull()].unique()))
Output
person p1 p2
2015-01-01 1 1
2015-05-01 2 1
2015-09-01 1 1
2016-01-01 0 2
2016-05-01 0 1
2016-09-01 0 1
And here is a long one-line solution that iterates over every row, creates a small dataframe per row, stacks them all together via pd.concat, and then resamples.
pd.concat([pd.DataFrame(index=pd.date_range(tup.start, tup.end, freq='4MS'),
                        data=tup.job,
                        columns=[tup.person]) for tup in df.itertuples()])\
    .resample('4MS').count()
And another one that is faster
df1 = pd.melt(df, id_vars=['job', 'person'], value_name='date').set_index('date')
g = df1.groupby([pd.Grouper(freq='4MS'), 'person'])['job']
g.agg('nunique').unstack('person', fill_value=0)
