I have two dataframes
import numpy as np
import pandas as pd
test1 = pd.date_range(start='1/1/2018', end='1/10/2018')
test1 = pd.DataFrame(test1)
test1.rename(columns = {list(test1)[0]: 'time'}, inplace = True)
test2 = pd.date_range(start='1/5/2018', end='1/20/2018')
test2 = pd.DataFrame(test2)
test2.rename(columns = {list(test2)[0]: 'time'}, inplace = True)
Now in the first dataframe I create a column:
test1['values'] = np.zeros(10)
I want to fill this column so that, next to each date, there is the index of the closest date from the second dataframe. I want it to look like this:
0 2018-01-01 0
1 2018-01-02 0
2 2018-01-03 0
3 2018-01-04 0
4 2018-01-05 0
5 2018-01-06 1
6 2018-01-07 2
7 2018-01-08 3
Of course my real data is not evenly spaced and has minutes and seconds, but the idea is same. I use the following code:
def nearest(items, pivot):
    return min(items, key=lambda x: abs(x - pivot))

for k in range(10):
    a = nearest(test2['time'], test1['time'][k])       ### find nearest timestamp from second dataframe
    b = test2.index[test2['time'] == a].tolist()[0]    ### identify the index of this timestamp
    test1['values'][k] = b                             ### assign this value to the cell
This code is very slow on large datasets. How can I make it more efficient?
P.S. timestamps in my real data are sorted and increasing just like in these artificial examples.
You could do this in one line, using numpy's argmin:
test1['values'] = test1['time'].apply(lambda t: np.argmin(np.absolute(test2['time'] - t)))
Note that applying a lambda function is essentially also a loop. Check if that satisfies your requirements performance-wise.
You might also be able to leverage the fact that your timestamps are sorted and the timedelta between each timestamp is constant (if I got that correctly). Calculate the offset in days and derive the index vector, e.g. as follows:
offset = (test1['time'] - test2['time']).iloc[0].days
if offset < 0:  # test1 time starts before test2 time, prepend zeros:
    offset = abs(offset)
    idx = np.append(np.zeros(offset), np.arange(len(test1['time']) - offset)).astype(int)
else:  # test1 time starts after or with test2 time, use arange right away:
    idx = np.arange(offset, offset + len(test1['time']))
test1['values'] = idx
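Since your real data is not evenly spaced, another option worth testing is pd.merge_asof with direction='nearest'. This is not part of the answer above, just a sketch under the assumption that both frames are sorted by time (as stated in the question); the column name nearest_idx is made up for illustration:
# Sketch: attach the index of the nearest test2 timestamp to each test1 timestamp.
test2_idx = test2.reset_index().rename(columns={'index': 'nearest_idx'})
merged = pd.merge_asof(test1[['time']], test2_idx, on='time', direction='nearest')
test1['values'] = merged['nearest_idx'].values
This avoids the Python-level loop entirely and should scale well on large, sorted data.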
I am dealing with a pandas dataframe where the index is a DateTime object and the columns represent minute-by-minute returns on several stocks from the SP500 index, together with a column of returns from the index. It's fairly long (100 stocks, 1510 trading days, minute-by-minute data each day) and looks like this (only three stocks for the sake of example):
DateTime            SPY     AAPL    AMZN       T
2014-01-02 9:30   0.032   -0.01    0.164   0.007
2014-01-02 9:31  -0.012    0.02    0.001  -0.004
2014-01-02 9:32  -0.015    0.031   0.004  -0.001
I am trying to compute the betas of each stock for each different day and for each 30-minute window. The beta of a stock in this case is defined as the covariance between its returns and the SPY returns divided by the variance of SPY in the same period. My desired output is a 3-dimensional numpy array beta_HF where beta_HF[s, i, j], for instance, means the beta of stock s at day i at window j. At this moment, I am computing the betas in the following way (let returns be the full dataframe):
import datetime as dt
import numpy as np
import pandas as pd

trading_days = pd.unique(returns.index.date)
window = "30min"
moments = pd.date_range(start="9:30", end="16:00", freq=window).time
def dispersion(trading_days, moments, df, verbose=True):
    index = 'SPY'
    beta_HF = np.zeros((df.shape[1] - 1, len(trading_days), len(moments) - 1))
    for i, day in enumerate(trading_days):
        daily_data = df[df.index.date == day]
        start_time = dt.time(9, 30)
        for j, end_time in enumerate(moments[1:]):
            moment_data = daily_data.between_time(start_time, end_time)
            covariances = np.array([moment_data[index].cov(moment_data[symbol]) for symbol in df])
            beta_HF[:, i, j] = covariances[1:] / covariances[0]
        if verbose == True:
            if np.remainder(i, 100) == 0:
                print("Current Trading Day: {}".format(day))
    return beta_HF
The dispersion() function generates the correct output. However, I understand that I am looping over long iterables and this is not very efficient. I seek a more efficient way to "slice" the dataframe at each 30-minute window for each day in the sample and compute the covariances. Effectively, for each slice, I need to compute 101 numbers (100 covariances + 1 variance). On my local machine (a 2013 Retina i5 Macbook Pro) it's taking around 8 minutes to compute everything. I tested it on a research server of my university and the computing time was basically the same, which probably implies that computing power is not the bottleneck, but rather that my code is inefficient in this part. I would appreciate any ideas on how to make this faster.
One might point out that parallelization is the way to go here since the elements in beta_HF never interact with each other. So this seems to be easy to parallelize. However, I have never implemented anything with parallelization so I am very new to these concepts. Any ideas on how to make the code run faster? Thanks a lot!
You can use pandas Grouper in order to group your data by frequency. The only drawbacks are that you cannot have overlapping windows and that it will iterate over times that do not exist.
The first issue basically means that the window will slide from 9:30-9:59 to 10:00-10:29 instead of 9:30-10:00 to 10:00-10:30.
The second issue comes into play during holidays and at night, when no trading takes place. Hence, if you have a large period without trading, you might want to split the DataFrame and combine the parts afterwards.
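As a tiny illustration of the boundary behaviour (just a sketch, separate from the workflow below): with a 30-minute Grouper, 10:00 opens a new bucket rather than closing the 9:30 one.
import pandas as pd

idx = pd.date_range("2014-01-02 09:30", "2014-01-02 10:30", freq="min")
s = pd.Series(range(len(idx)), index=idx)
print(s.groupby(pd.Grouper(freq="30min")).size())
# 2014-01-02 09:30:00    30   (covers 09:30 .. 09:59)
# 2014-01-02 10:00:00    30   (covers 10:00 .. 10:29)
# 2014-01-02 10:30:00     1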
Create example data
import pandas as pd
import numpy as np

time = pd.date_range(start="2014-01-02 09:30",
                     end="2014-01-02 16:00", freq="min")
time = time.append(pd.date_range(start="2014-01-03 09:30",
                                 end="2014-01-03 16:00", freq="min"))
df = pd.DataFrame(data=np.random.rand(time.shape[0], 4) - 0.5,
                  index=time, columns=['SPY', 'AAPL', 'AMZN', 'T'])
define the range you want to use
freq = '30min'
obs_per_day = len(pd.date_range(start = "9:30", end = "16:00", freq = "30min"))
trading_days = len(pd.unique(df.index.date))
make a function to calculate the beta values
def beta(df):
    if df.empty:                          # return NaN when no trading takes place
        return np.nan
    mat = df.to_numpy()                   # numpy is faster than pandas
    m = mat.mean(axis=0)
    mat = mat - m[np.newaxis, :]          # demean
    dof = mat.shape[0] - 1                # degrees of freedom
    if dof != 0:                          # check if your data has more than one observation
        mat = mat.T.dot(mat[:, 0]) / dof  # covariance with first column
        return mat[1:] / mat[0]           # beta
    else:
        return np.zeros(mat.shape[1] - 1) # return zeros for too-short data, e.g. 16:00
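To convince yourself that this matches the covariance/variance route from the question, here is a quick sanity check on the example data above (just a sketch):
sample = df.iloc[:30]
manual = np.array([sample['SPY'].cov(sample[c]) for c in df.columns[1:]]) / sample['SPY'].var()
print(np.allclose(beta(sample), manual))   # expected: True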
and in the end use pd.groupby().apply()
res = df.groupby(pd.Grouper(freq=freq)).apply(beta)
res = np.array( [k for k in res.values if ~np.isnan(k).any()] ) # remove NaN
res = res.reshape([trading_days, obs_per_day, df.shape[1]-1])
Note that the result is in a slightly different shape than yours.
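If you prefer the same [stock, day, window] axis order as your beta_HF, a transpose of the reshaped result should get you there (a sketch; it only reorders axes):
beta_HF = res.transpose(2, 0, 1)   # (stocks, trading_days, windows per day)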
The results also differ a bit because of the different window sliding. To check whether the results are the same, simply try something like this
trading_days = pd.unique(df.index.date)
# Your result
moments1 = pd.date_range(start = "9:30", end = "10:00", freq = "30min").time
beta(df[df.index.date == trading_days[0]].between_time(moments1[0], moments1[1]))
# mine
moments2 = pd.date_range(start = "9:30", end = "10:00", freq = "29min").time
beta(df[df.index.date == trading_days[0]].between_time(moments2[0], moments2[1]))
I am quite new to python so please bear with me.
I am trying to pick one of the values printed, find it in the CSV file, and print the values within + or - 1 of it.
Here is the code to pick the values.
import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
df = pd.read_csv(r"/Users/aaronhuang/Desktop/ffp/exfileCLEAN2.csv", skiprows=[1])
magnitudes = df['Magnitude '].values
times = df['Time '].values
zscores = np.abs(stats.zscore(magnitudes, ddof=1))
outlier_indicies = np.argwhere(zscores > 3).flatten()
numbers = times[outlier_indicies]
print(numbers)
The values printed are below.
2455338.895 2455350.644 2455391.557 2455404.776 2455413.734 2455451.661
2455473.49 2455477.521 2455507.505 2455702.662 2455734.597 2455765.765
2455776.575 2455826.593 2455842.512 2455866.508 2455996.796 2456017.767
2456047.694 2456058.732 2456062.722 2456071.924 2456082.802 2456116.494
2456116.535 2456116.576 2456116.624 2456116.673 2456116.714 2456116.799
2456123.527 2456164.507 2456166.634 2456391.703 2456455.535 2456455.6
2456501.763 2456511.616 2456519.731 2456525.49 2456547.588 2456570.526
2456595.515 2456776.661 2456853.543 2456920.511 2456953.496 2457234.643
2457250.68 2457252.672 2457278.526 2457451.89 2457485.722 2457497.93
2457500.674 2457566.874 2457567.877 2457644.495 2457661.553 2457675.513
An example of the csv file is below.
       Time  Magnitude  Magnitude error
2455260.853     19.472            0.150
2455260.900     19.445            0.126
2455261.792     19.484            0.168
2455262.830     19.157            0.261
2455264.814     19.376            0.150
        ...        ...              ...
2457686.478     19.063            0.176
2457689.480     19.178            0.128
2457690.475     19.386            0.171
2457690.480     19.092            0.112
2457691.476     19.191            0.122
For example, if I pick the first value, 2455338.895, I would like to print all the values within + or - 1 of it (in the Time column) and later graph them.
Some help would be greatly appreciated.
Thank you in advance.
I think this is what you are looking for (assuming you want a single-number query, as you described in the question):
numbers = times[outlier_indicies]
print(df[(df['Time ']<numbers[0]+1) & (df['Time ']>numbers[0]-1)]['Time '])
Looping over numbers to get all the intervals is straightforward, if that is what you are interested in.
EDIT:
The for loop looks like this:
print(pd.concat([df[(df['Time ']<i+1) & (df['Time ']>i-1)]['Time '] for i in numbers]))
The non-loop version if there are no overlapping intervals in (numbers[i]-1,numbers[i]+1):
intervals = pd.DataFrame(data={'start': numbers-1,'finish': numbers+1})
starts = pd.DataFrame(data={'start': 1}, index=intervals.start.tolist())
finishs = pd.DataFrame(data={'finish': -1},index=intervals.finish.tolist())
transitions = pd.merge(starts, finishs, how='outer', left_index=True, right_index=True).fillna(0)
transitions['transition'] = (transitions.pop('finish')+transitions.pop('start')).cumsum()
B = pd.DataFrame(index=numbers)
pd.merge(transitions, B, how='outer', left_index=True, right_index=True).fillna(method='ffill').loc[B.index].astype(bool)
print(transitions[transitions.transition == 1].index)
In case of overlap, you can merge consecutive overlapping intervals in the intervals dataframe with the help of the following column and then run the code above (it may need a couple more lines to complete):
intervals['overlapping'] = (intervals.finish - intervals.start.shift(-1))>0
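For example, a minimal sketch of that merging step (not part of the original answer; it assumes numbers, and therefore intervals, is sorted in ascending order):
# Collapse overlapping (start, finish) intervals into disjoint ones.
merged = []
for start, finish in zip(intervals.start, intervals.finish):
    if merged and start <= merged[-1][1]:           # overlaps the previous interval
        merged[-1][1] = max(merged[-1][1], finish)
    else:
        merged.append([start, finish])
intervals = pd.DataFrame(merged, columns=['start', 'finish'])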
You can simply iterate over the numbers:
threshold = 1
first = numbers[0]
result = []
for num in numbers:
    if abs(first - num) < threshold:
        result.append(num)
print(result)
I have several large data sets (~3000 rows, 100 columns) that I need to process with pandas. Each row represents a point on a map and there's a bunch of data associated with that point. I am doing spatial calculations (may introduce a few more variables in the future), so for each row I am only using the data from 1-4 columns. The issue is that I have to compare each row to every other row - essentially, I am trying to figure out spatial relationships between every point. At this stage in the project, I am doing calculations to determine how many points are inside a given radius for each point in the table. I have to do this 5 or 6 times (i.e. running the distance calculation function for multiple radius sizes.) This means that I end up with ~10-50 million calculations. It is slow. Very slow (like 9+ hours of computing time.)
After I run all these calculations, I need to append them as new columns in the original (very large) dataframe. Currently, I have been passing the entire dataframe to my function, which might further slow things down.
I know that many people are running this size of calculation on super computers or dedicated multicore units, but I would like to do what I can to optimize my code to run as efficiently as possible, regardless of the hardware.
I am currently using a double for loop with .iterrows(). I have stripped away as many of the unnecessary steps as possible. I may be able to pare down the dataframe into a subset, then pass that to the function and append the calculations to the original in another step, if that would help speed things up. I have also considered using .apply() to eliminate the outside loop (e.g. .apply() the inner loop to all rows in the dataframe...?).
Below, I have shown the functions that I am using. This is probably the simplest application that I have for this project... there are others that do more calculations/return pairs or groups of points based on certain spatial criteria, but this is the best example to show the basic idea of what I am doing.
import math
import pandas as pd

# specify file to be read into pandas
df = pd.read_csv('input_file.csv', low_memory=False)

# function to return distance between two points w/ (x,y) coordinates
def xy_distance_calc(x1, x2, y1, y2):
    return math.sqrt((x1 - x2)**2 + (y1 - y2)**2)

# function to calculate number of points inside a given radius for each point
def spacing_calc(data, rad_crit, col_x, col_y):
    count_list = list()
    df_list = pd.DataFrame()
    for index, row in data.iterrows():
        x_row_current = row[col_x]
        y_row_current = row[col_y]
        count = 0
        # dist_list = list()
        for index1, row1 in data.iterrows():
            x1 = row1[col_x]
            y1 = row1[col_y]
            dist = xy_distance_calc(x_row_current, x1, y_row_current, y1)
            if dist < rad_crit:
                count += 1
            else:
                continue
        count_list.append(count)
    df_list = pd.DataFrame(data=count_list, columns=[str(rad_crit) + ' radius'])
    return df_list
# call the function for each radius in question, append new data
df_2640 = spacing_calc(df, 2640.0, 'MID_X', 'MID_Y')
df = df.join(df_2640)
df_1320 = spacing_calc(df, 1320.0, 'MID_X', 'MID_Y')
df = df.join(df_1320)
df_1155 = spacing_calc(df, 1155.0, 'MID_X', 'MID_Y')
df = df.join(df_1155)
df_990 = spacing_calc(df, 990.0, 'MID_X', 'MID_Y')
df = df.join(df_990)
df_660 = spacing_calc(df, 660.0, 'MID_X', 'MID_Y')
df = df.join(df_660)
df_330 = spacing_calc(df, 330.0, 'MID_X', 'MID_Y')
df = df.join(df_330)
df.to_csv('spacing_calc_all.csv', index=None)
No errors, everything works, I just don't think it's as efficient as it could be.
Your problem is that you loop too many times. At the very least, you should calculate a distance matrix and then count how many points fall within a radius from that matrix. However, the fastest solution is to use numpy's vectorized functions, which are highly optimized C code.
As with most learning experiences, it's better to start with a small problem:
>>> import numpy as np
>>> import pandas as pd
>>> from scipy.spatial import distance_matrix
# Create a dataframe with columns two MID_X and MID_Y assigned at random
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.uniform(1, 10, size=(5, 2)), columns=['MID_X', 'MID_Y'])
>>> df.index.name = 'PointID'
>>> df
            MID_X     MID_Y
PointID
0        4.370861  9.556429
1        7.587945  6.387926
2        2.404168  2.403951
3        1.522753  8.795585
4        6.410035  7.372653
# Calculate the distance matrix
>>> cols = ['MID_X', 'MID_Y']
>>> d = distance_matrix(df[cols].values, df[cols].values)
>>> d
array([[0.        , 4.51542241, 7.41793942, 2.94798323, 2.98782637],
       [4.51542241, 0.        , 6.53786001, 6.52559479, 1.53530446],
       [7.41793942, 6.53786001, 0.        , 6.4521226 , 6.38239593],
       [2.94798323, 6.52559479, 6.4521226 , 0.        , 5.09021286],
       [2.98782637, 1.53530446, 6.38239593, 5.09021286, 0.        ]])
# The radii for which you want to measure. They need to be raised
# up 2 extra dimensions to prepare for array broadcasting later
>>> radii = np.array([3,6,9])[:, None, None]
>>> radii
array([[[3]],

       [[6]],

       [[9]]])
# Count how many points fall within a certain radius from another
# point using numpy's array broadcasting. `d < radii` will return
# an array of `True/False` and we can count the number of `True`
# by `sum` over the last axis.
#
# The distance between a point to itself is 0 and we don't want
# to count that hence the -1.
>>> count = (d < radii).sum(axis=-1) - 1
>>> count
array([[2, 1, 0, 1, 2],
       [3, 2, 0, 2, 3],
       [4, 4, 4, 4, 4]])
# Putting everything together for export
>>> result = pd.DataFrame(count, index=radii.flatten()).stack().to_frame('Count')
>>> result.index.names = ['Radius', 'PointID']
>>> result
                Count
Radius PointID
3      0            2
       1            1
       2            0
       3            1
       4            2
6      0            3
       1            2
       2            0
       3            2
       4            3
9      0            4
       1            4
       2            4
       3            4
       4            4
The final result means that within a radius of 3, point #0 has 2 neighbours, point #1 has 1 neighbour, point #2 has 0 neighbours, and so on. Reshape and format the frame to your liking.
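For instance, one way to get back to the one-column-per-radius layout from the question (a sketch, not part of the original answer):
# Pivot the radii into columns and join them back onto the point table.
wide = result['Count'].unstack('Radius')
wide.columns = ['{} radius'.format(r) for r in wide.columns]
df = df.join(wide)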
You shouldn't have a problem scaling this to thousands of points.
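If the point count ever grows to where the full n-by-n distance matrix no longer fits in memory, a k-d tree is a common fallback. This is a sketch of an alternative route to the same counts, not part of the answer above; note that query_ball_point uses <= r rather than the strict < of the original code:
from scipy.spatial import cKDTree

tree = cKDTree(df[['MID_X', 'MID_Y']].values)
for rad in (3, 6, 9):
    # indices within `rad` of each point; subtract 1 so a point does not count itself
    counts = [len(tree.query_ball_point(p, rad)) - 1 for p in df[['MID_X', 'MID_Y']].values]
    df['{} radius'.format(rad)] = counts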
I am trying to sum (and plot) a total from functions which change states at different times using Python's Pandas.DataFrame. For example:
Suppose we have 3 people whose states can be a) holding nothing, b) holding a 5 pound weight, and c) holding a 10 pound weight. Over time, these people pick weights up and put them down. I want to plot the total amount of weight being held. So, given the data encoded below:
My brute force attempt:
import pandas as ps
import math
import numpy as np

person1=[3,0,10,10,10,10,10]
person2=[4,0,20,20,25,25,40]
person3=[5,0,5,5,15,15,40]

allPeopleDf=ps.DataFrame(np.array(zip(person1,person2,person3)).T)
allPeopleDf.columns=['count','start1', 'end1', 'start2', 'end2', 'start3','end3']

allPeopleDfNoCount=allPeopleDf[['start1', 'end1', 'start2', 'end2', 'start3','end3']]
uniqueTimes=sorted(ps.unique(allPeopleDfNoCount.values.ravel()))
possibleStates=[-1,0,1,2] #extra state 0 for initialization
stateData={}
comboStates={}

#initialize dict to add up all of the stateData
for time in uniqueTimes:
    comboStates[time]=0.0

allPeopleDf['track']=-1
allPeopleDf['status']=-1
numberState=len(possibleStates)
starti=-1
endi=0
startState=0
for i in range(3):
    starti=starti+2
    print starti
    endi=endi+2
    for time in uniqueTimes:
        def helper(row):
            start=row[starti]
            end=row[endi]
            track=row[7]
            if start <= time and time < end:
                return possibleStates[i+1]
            else:
                return possibleStates[0]
        def trackHelp(row):
            status=row[8]
            track=row[7]
            if track<=status:
                return status
            else:
                return track
        def Multiplier(row):
            x=row[8]
            if x==0:
                return 0.0*row[0]
            if x==1:
                return 5.0*row[0]
            if x==2:
                return 10.0*row[0]
            if x==-1: #numeric place holder for non-contributing
                return 0.0*row[0]
        allPeopleDf['status']=allPeopleDf.apply(helper,axis=1)
        allPeopleDf['track']=allPeopleDf.apply(trackHelp,axis=1)
        stateData[time]=allPeopleDf.apply(Multiplier,axis=1).sum()
    for k,v in stateData.iteritems():
        comboStates[k]=comboStates.get(k,0)+v

print allPeopleDf
print stateData
print comboStates
Plots of weight being held over time might look like the following:
And the sum of the intensities over time might look like the black line in the following:
with the black line defined with the Cartesian points: (0,0 lbs),(5,0 lbs),(5,5 lbs),(15,5 lbs),(15,10 lbs),(20,10 lbs),(20,15 lbs),(25,15 lbs),(25,20 lbs),(40,20 lbs). However, I'm flexible and don't necessarily need to define the combined intensity line as a set of Cartesian points. The unique times can be found with:
print sorted(set(uniqueTimes).intersection(allNoCountT[1].values.ravel()))
, but I can't come up with a slick way of getting the corresponding intensity values.
I started out with a very ugly function to break apart each "person's" graph so that all people had start and stop times (albeit many stop and start times without state change) at the same time, and then I could add up all the "chunks" of time. This was cumbersome; there has to be a slick pandas way of handling this. If anyone can offer a suggestion or point me to another SO question that I might have missed, I'd appreciate the help!
In case my simplified example isn't clear, another might be plotting the intensity of sound coming from a piano: there are many notes being played for different durations with different intensities. I would like the sum of intensity coming from the piano over time. While my example is simplistic, I need a solution that is more on the scale of a piano song: thousands of discrete intensity levels per key, and many keys contributing over the course of a song.
Edit--Implementation of mgab's provided solution:
import pandas as ps
import math
import numpy as np
person1=['person1',3,0.0,10.0,10.0,10.0,10.0,10.0]
person2=['person2',4,0,20,20,25,25,40]
person3=['person3',5,0,5,5,15,15,40]
allPeopleDf=ps.DataFrame(np.array(zip(person1,person2,person3)).T)
allPeopleDf.columns=['id','intensity','start1', 'end1', 'start2', 'end2', 'start3','end3']
allPeopleDf=ps.melt(allPeopleDf,id_vars=['intensity','id'])
allPeopleDf.columns=['intensity','id','timeid','time']
df=ps.DataFrame(allPeopleDf).drop('timeid',1)
df[df.id=='person1'].drop('id',1) #easier to visualize one id for check
df['increment']=df.groupby('id')['intensity'].transform( lambda x: x.sub(x.shift(), fill_value= 0 ))
TypeError: unsupported operand type(s) for -: 'str' and 'int'
End Edit
Going for the piano keys example, let's assume you have three keys, with 30 levels of intensity.
I would try to keep the data in this format:
import pandas as pd
df = pd.DataFrame([[10,'A',5],
                   [10,'B',7],
                   [13,'C',10],
                   [15,'A',15],
                   [20,'A',7],
                   [23,'C',0]], columns=["time", "key", "intensity"])

   time key  intensity
0    10   A          5
1    10   B          7
2    13   C         10
3    15   A         15
4    20   A          7
5    23   C          0
where you record every change in intensity of any of the keys. From here you can already get the Cartesian coordinates for each individual key as (time,intensity) pairs
df[df.key=="A"].drop('key',1)
time intensity
0 10 5
3 15 15
4 20 7
Then, you can easily create a new column increment that will indicate the change in intensity that occurred for that key at that time point (intensity indicates just the new value of intensity)
df["increment"]=df.groupby("key")["intensity"].transform(
lambda x: x.sub(x.shift(), fill_value= 0 ))
df
time key intensity increment
0 10 A 5 5
1 10 B 7 7
2 13 C 10 10
3 15 A 15 10
4 20 A 7 -8
5 23 C 0 -10
And then, using this new column, you can generate the (time, total_intensity) pairs to use as Cartesian coordinates
df.groupby("time").sum()["increment"].cumsum()
time
10 12
13 22
15 32
20 24
23 14
dtype: int64
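If you also want the plot of the summed intensity over time (not covered in the original answer; just a sketch), these pairs translate directly into a step plot:
import matplotlib.pyplot as plt

total = df.groupby("time")["increment"].sum().cumsum()
plt.step(total.index, total.values, where='post')   # hold each level until the next change point
plt.xlabel("time")
plt.ylabel("total intensity")
plt.show()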
EDIT: applying the specific data presented in question
Assuming the data comes as a list of values, starting with the element id (person/piano key), then a factor multiplying the measured weight/intensities for this element, and then pairs of time values indicating the start and end of a series of known states (weight being carried/intensity being emitted). Not sure if I got the data format right. From your question:
data1=['person1',3,0.0,10.0,10.0,10.0,10.0,10.0]
data2=['person2',4,0,20,20,25,25,40]
data3=['person3',5,0,5,5,15,15,40]
And if we know the weight/intensity of each one of the states, we can define:
known_states = [5, 10, 15]
DF_columns = ["time", "id", "intensity"]
Then, the easiest way I came up to load the data includes this function:
import pandas as pd

def read_data(data, states, columns):
    id = data[0]
    factor = data[1]
    reshaped_data = []
    for i in xrange(len(states)):
        j = 2 + 2*i
        if not data[j] == data[j+1]:
            reshaped_data.append([data[j], id, factor*states[i]])
            reshaped_data.append([data[j+1], id, -1*factor*states[i]])
    return pd.DataFrame(reshaped_data, columns=columns)
Notice that the if not data[j] == data[j+1]: avoids loading data to the dataframe when start and end times for a given state are equal (seems uninformative, and wouldn't appear in your plots anyway). But take it out if you still want these entries.
Then, you load the data:
df = read_data(data1, known_states, DF_columns)
df = df.append(read_data(data2, known_states, DF_columns), ignore_index=True)
df = df.append(read_data(data3, known_states, DF_columns), ignore_index=True)
# and so on...
And then you're right back at the beginning of this answer (substituting 'key' with 'id', of course).
Appears to be what .sum() is for:
In [10]:
allPeopleDf.sum()
Out[10]:
aStart 0
aEnd 35
bStart 35
bEnd 50
cStart 50
cEnd 90
dtype: int32