I have two dataframes:
import numpy as np
import pandas as pd
test1 = pd.date_range(start='1/1/2018', end='1/10/2018')
test1 = pd.DataFrame(test1)
test1.rename(columns = {list(test1)[0]: 'time'}, inplace = True)
test2 = pd.date_range(start='1/5/2018', end='1/20/2018')
test2 = pd.DataFrame(test2)
test2.rename(columns = {list(test2)[0]: 'time'}, inplace = True)
Now in the first dataframe I create a column:
test1['values'] = np.zeros(10)
I want to fill this column so that next to each date there is the index of the closest date from the second dataframe. I want it to look like this:
0 2018-01-01 0
1 2018-01-02 0
2 2018-01-03 0
3 2018-01-04 0
4 2018-01-05 0
5 2018-01-06 1
6 2018-01-07 2
7 2018-01-08 3
Of course my real data is not evenly spaced and has minutes and seconds, but the idea is the same. I use the following code:
def nearest(items, pivot):
    return min(items, key=lambda x: abs(x - pivot))

for k in range(10):
    a = nearest(test2['time'], test1['time'][k])     ### find the nearest timestamp in the second dataframe
    b = test2.index[test2['time'] == a].tolist()[0]  ### identify the index of this timestamp
    test1.loc[k, 'values'] = b                       ### assign this index to the cell
This code is very slow on large datasets; how can I make it more efficient?
P.S. The timestamps in my real data are sorted and increasing, just like in these artificial examples.
You could do this in one line, using numpy's argmin:
test1['values'] = test1['time'].apply(lambda t: np.argmin(np.absolute(test2['time'] - t)))
Note that applying a lambda function is essentially also a loop. Check if that satisfies your requirements performance-wise.
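If the apply turns out to be too slow, the same argmin idea can be vectorized with broadcasting. This is only a sketch: the intermediate difference matrix has shape len(test1) x len(test2), so it trades memory for speed, and the positions it returns match test2's labels only because test2 has a default integer index.
import numpy as np

# Broadcast every test1 timestamp against every test2 timestamp, then take
# the position of the smallest absolute difference in each row.
diffs = np.abs(test1['time'].values[:, None] - test2['time'].values[None, :])
test1['values'] = diffs.argmin(axis=1)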
You might also be able to leverage the fact that your timestamps are sorted and the timedelta between each timestamp is constant (if I got that correctly). Calculate the offset in days and derive the index vector, e.g. as follows:
offset = (test1['time'] - test2['time']).iloc[0].days
if offset < 0:  # test1 time starts before test2 time, prepend zeros:
    offset = abs(offset)
    idx = np.append(np.zeros(offset), np.arange(len(test1['time']) - offset)).astype(int)
else:  # test1 time starts after or with test2 time, use arange right away:
    idx = np.arange(offset, offset + len(test1['time']))
test1['values'] = idx
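Since the real data is sorted (per the P.S.) but not evenly spaced, another option worth considering is pandas' merge_asof with direction='nearest'. A sketch under that assumption, where 'idx' is just a helper column carrying test2's index along:
# Carry test2's index as an ordinary column, then match each test1 timestamp
# to the nearest test2 timestamp; both frames must be sorted by 'time'.
test2_idx = test2.reset_index().rename(columns={'index': 'idx'})
test1['values'] = pd.merge_asof(test1, test2_idx, on='time',
                                direction='nearest')['idx'].values
merge_asof does a single ordered pass over both frames, so it should scale much better than comparing every pair of timestamps.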
Related
I have a data frame that looks like this:
group date value
g_1 1/2/2019 11:03:00 3
g_1 1/2/2019 11:04:00 5
g_1 1/2/2019 10:03:32 100
g_2 4/3/2019 09:11:09 46
I want to calculate the time difference between occurrences (in seconds) per group.
Example output:
groups_time_diff = {'g_1': [23, 5666, 7878], 'g_2': [0.2, 56, 2343], ...}
This is my code:
groups_time_diff = defaultdict(list)
for group in tqdm(groups):
    group_df = unit_df[unit_df['group'] == group]
    dates = list(group_df['time'])
    while len(dates) != 0:
        min_date = min(dates)
        dates.remove(min_date)
        if len(dates) > 0:
            second_min_date = min(dates)
            date_diff = second_min_date - min_date
            groups_time_diff[group].append(date_diff.seconds)
This takes forever to run and I am looking for a more time efficient way to get the desired output.
Any ideas?
Try this:
sorted_group_df = group_df.sort_values(by='time', ascending=True)
dates = sorted_group_df['time']
one = dates[1:].reset_index(drop=True)   # all timestamps except the first
two = dates[:-1].reset_index(drop=True)  # all timestamps except the last
date_difference = one - two
date_difference_in_seconds = date_difference.dt.seconds
First sort your dates, then subtract the shifted series from the original:
dates = dates.sort_values().reset_index(drop=True)
(dates - dates.shift()).dt.seconds
You are calling the min function twice in each iteration, which is not efficient.
Hope this helps.
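For reference, a sketch that avoids the per-group Python loop entirely by sorting once and letting groupby().diff() compute the consecutive differences. It assumes the unit_df, 'group' and 'time' names from the question; 'diff_s' is a hypothetical helper column, and total_seconds() (unlike .seconds) also counts full days.
import pandas as pd

# Sort by group and time, take consecutive differences within each group,
# then collect the per-group results into a dict of lists of seconds.
unit_df = unit_df.sort_values(['group', 'time'])
unit_df['diff_s'] = unit_df.groupby('group')['time'].diff().dt.total_seconds()
groups_time_diff = (unit_df.dropna(subset=['diff_s'])
                           .groupby('group')['diff_s']
                           .apply(list)
                           .to_dict())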
I already asked a similar question but was able to piece some more of it together but need more help. Determining how one date/time range overlaps with the second date/time range?
I want to be able to check when two date ranges with start date/time and end date/time overlap. My type2 has about 50 rows while type1 has over 500. I want to take the start and end of each type2 row and see whether it falls within a type1 range. Here is a snip of the data; the dates do change down the list from 2019-04-01 to the following days.
type1 type1_start type1_end
a 2019-04-01T00:43:18.046Z 2019-04-01T00:51:35.013Z
b 2019-04-01T02:16:46.490Z 2019-04-01T02:23:23.887Z
c 2019-04-01T03:49:31.981Z 2019-04-01T03:55:16.153Z
d 2019-04-01T05:21:22.131Z 2019-04-01T05:28:05.469Z
type2 type2_start type2_end
1 2019-04-01T00:35:12.061Z 2019-04-01T00:37:00.783Z
2 2019-04-02T00:37:15.077Z 2019-04-02T00:39:01.393Z
3 2019-04-03T00:39:18.268Z 2019-04-03T00:41:01.844Z
4 2019-04-04T00:41:21.576Z 2019-04-04T00:43:02.071Z
I have been googling the best way to do this and have read through Determine Whether Two Date Ranges Overlap and understand how it should be done, but I don't know enough about how to call the variables and make them work.
Here is what I have, but I am stuck and have no clue where to go from here:
import pandas as pd
from pandas import Timestamp
import numpy as np
from collections import namedtuple
colnames = ['type1', 'type1_start', 'type1_end', 'type2', 'type2_start', 'type2_end']
data = pd.read_csv('test.csv', names=colnames, parse_dates=['type1_start', 'type1_end','type2_start', 'type2_end'])
A_start = data['type1_start']
A_end = data['type1_end']
B_start = data['type2_start']
B_end = data['type2_end']
t1 = data['type1']
t2 = data['type2']
r1 = (B_start, B_end)
r2 = (A_start, A_end)
def doesOverlap(r1, r2):
    if B_start > A_start:
        swap(r1, r2)
    if A_start > B_end:
        return false
    return true
It would be nice to have a csv with a true/false overlap result. I was also able to make my data run using Efficiently find overlap of date-time ranges from 2 dataframes, but the results aren't correct: I added a couple of rows that I know should overlap to the data, and they weren't flagged. I need each type2 start/end to be checked against each type1 range.
Any help would be greatly appreciated.
Here is one way to do it:
import pandas as pd
def overlaps(row):
    if ((row['type1_start'] < row['type2_start'] and row['type2_start'] < row['type1_end'])
            or (row['type1_start'] < row['type2_end'] and row['type2_end'] < row['type1_end'])):
        return True
    else:
        return False

colnames = ['type1', 'type1_start', 'type1_end', 'type2', 'type2_start', 'type2_end']
df = pd.read_csv('test.csv', names=colnames, parse_dates=[
    'type1_start', 'type1_end', 'type2_start', 'type2_end'])
df['overlap'] = df.apply(overlaps, axis=1)
print('\n', df)
gives:
type1 type1_start type1_end type2 type2_start type2_end overlap
0 type1 type1_start type1_end type2 type2_start type2_end False
1 a 2019-03-01T00:43:18.046Z 2019-04-02T00:51:35.013Z 1 2019-04-01T00:35:12.061Z 2019-04-01T00:37:00.783Z True
2 b 2019-04-01T02:16:46.490Z 2019-04-01T02:23:23.887Z 2 2019-04-02T00:37:15.077Z 2019-04-02T00:39:01.393Z False
3 c 2019-04-01T03:49:31.981Z 2019-04-01T03:55:16.153Z 3 2019-04-03T00:39:18.268Z 2019-04-03T00:41:01.844Z False
4 d 2019-04-01T05:21:22.131Z 2019-04-01T05:28:05.469Z 4 2019-04-04T00:41:21.576Z 2019-04-04T00:43:02.071Z False
Below df1 contains type1 records and df2 contains type2 records:
df_new = df1.assign(key=1)\
            .merge(df2.assign(key=1), on='key')\
            .assign(has_overlap=lambda x: ~((x.type2_start > x.type1_end) | (x.type2_end < x.type1_start)))
REF: Performant cartesian product (CROSS JOIN) with pandas
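To make the cross-join answer concrete, here is a minimal sketch on the sample data. It assumes the two record types have already been loaded into separate frames df1 (type1, type1_start, type1_end) and df2 (type2, type2_start, type2_end) with the date columns parsed.
import pandas as pd

# Cartesian product via a constant join key: every type2 row gets paired with
# every type1 row, which is what "check each type2 against each type1" needs.
pairs = (df1.assign(key=1)
            .merge(df2.assign(key=1), on='key')
            .drop(columns='key'))

# Two ranges overlap unless one of them ends before the other one starts.
pairs['has_overlap'] = ~((pairs['type2_start'] > pairs['type1_end']) |
                         (pairs['type2_end'] < pairs['type1_start']))

# Optionally reduce to one row per type2 record and write the result to csv.
any_overlap = pairs.groupby('type2')['has_overlap'].any().reset_index()
any_overlap.to_csv('type2_overlaps.csv', index=False)
With roughly 50 x 500 rows the cartesian product is only about 25,000 pairs, so this stays cheap.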
I have time series data in pandas, and I would like to group by a certain time window in each year and calculate its min and max.
For example:
times = pd.date_range(start = '1/1/2011', end = '1/1/2016', freq = 'D')
df = pd.DataFrame(np.random.rand(len(times)), index=times, columns=["value"])
How do I group by a time window, e.g. 'Jan-10':'Mar-21', for each year and calculate the min and max of the value column?
You can use the resample method.
df.resample('5d').agg(['min','max'])
I'm not sure if there's a direct way to do it without first creating a flag for the days required. The following function creates that flag:
# Function for flagging the days required
def flag(x):
    if x.month == 1 and x.day >= 10: return True
    elif x.month in [2, 3, 4]: return True
    elif x.month == 5 and x.day <= 21: return True
    else: return False
Since you need this for each year, it would be a good idea to have the year as a column.
Then the min and max for each year for given periods can be obtained with the code below:
times = pd.date_range(start = '1/1/2011', end = '1/1/2016', freq = 'D')
df = pd.DataFrame(np.random.rand(len(times)), index=times, columns=["value"])
df['Year'] = df.index.year
pd.pivot_table(df[list(pd.Series(df.index).apply(flag))], values=['value'], index = ['Year'], aggfunc=[min,max])
The output will be a pivot table indexed by Year with the min and max of value as columns (sample output image omitted).
Hope that answers your question... :)
You can define the bin edges, then throw out the bins you don't need (every other one) with .loc[::2, :]. Here I'll define two functions just to check we're getting the date ranges we want within groups (note: since the left edges are open, we need to subtract 1 day):
import pandas as pd
edges = pd.to_datetime([x for year in df.index.year.unique()
                        for x in [f'{year}-02-09', f'{year}-03-21']])

def min_idx(x):
    return x.index.min()

def max_idx(x):
    return x.index.max()
df.groupby(pd.cut(df.index, bins=edges)).agg([min_idx, max_idx, min, max]).loc[::2, :]
Output:
value
min_idx max_idx min max
(2011-02-09, 2011-03-21] 2011-02-10 2011-03-21 0.009343 0.990564
(2012-02-09, 2012-03-21] 2012-02-10 2012-03-21 0.026369 0.978470
(2013-02-09, 2013-03-21] 2013-02-10 2013-03-21 0.039491 0.946481
(2014-02-09, 2014-03-21] 2014-02-10 2014-03-21 0.029161 0.967490
(2015-02-09, 2015-03-21] 2015-02-10 2015-03-21 0.006877 0.969296
(2016-02-09, 2016-03-21] NaT NaT NaN NaN
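For the literal 'Jan-10':'Mar-21' window from the question, a boolean mask on month/day plus a groupby on the year is another compact option. A sketch, reusing the sample data from the question:
import numpy as np
import pandas as pd

# Same sample data as in the question.
times = pd.date_range(start='1/1/2011', end='1/1/2016', freq='D')
df = pd.DataFrame(np.random.rand(len(times)), index=times, columns=["value"])

# Keep Jan 10 .. Mar 21 of every year, then aggregate per year.
m, d = df.index.month, df.index.day
in_window = ((m == 1) & (d >= 10)) | (m == 2) | ((m == 3) & (d <= 21))
sub = df[in_window]
result = sub.groupby(sub.index.year)['value'].agg(['min', 'max'])
print(result)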
I have a Dask DataFrame for which I would like to compute the skewness of a list of columns; if the skewness exceeds a certain threshold, I correct it using a log transformation. I am wondering whether there is a more efficient way to make correct_skewness() work on multiple columns in parallel by removing the for loop in the correct_skewness() function below:
import dask
import dask.array as da
from scipy import stats
# Create a dataframe
df = dask.datasets.timeseries()
df.head()
id name x y
timestamp
2000-01-01 00:00:00 1032 Oliver 0.018604 0.089191
2000-01-01 00:00:01 1032 Norbert 0.666689 -0.979374
2000-01-01 00:00:02 991 Victor 0.027691 -0.474660
2000-01-01 00:00:03 979 Kevin 0.320067 0.656949
2000-01-01 00:00:04 1087 Zelda -0.462076 0.513409
def correct_skewness(columns=None, max_skewness=2):
    if columns is None:
        raise ValueError(
            "columns argument is None. Please set columns argument to a list of columns"
        )
    for col in columns:
        skewness = stats.skew(df[col])
        max_val = df[col].max().compute()
        min_val = df[col].min().compute()
        if abs(skewness) > max_skewness and (max_val > 1 or min_val < 0):
            delta = 1.0
            if min_val < 0:
                delta = max(1, -min_val + 1)
            df[col] = da.log(delta + df[col])
    return df
df = correct_skewness(columns=['x', 'y'])
There are a couple things you can do to improve parallelism in this example:
You can use dask.array.stats.skew rather than scipy.stats.skew. You will have to import dask.array.stats explicitly
You can compute the min/max of all columns in one computation
mins = [df[col].min() for col in cols]
maxes = [df[col].max() for col in cols]
skews = [da.stats.skew(df[col]) for col in cols]
mins, maxes, skews = dask.compute(mins, maxes, skews)
Then you could do your if-logic and apply da.log as appropriate. This still requires two passes over your data, but that should be a nice improvement over what you have now.
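A sketch of how the function might look after folding in both suggestions. It mirrors the question's logic, assumes da.stats.skew accepts the dask series directly as in the snippet above, and passes df explicitly instead of relying on a global:
import dask
import dask.array as da
import dask.array.stats  # makes da.stats available

def correct_skewness(df, columns, max_skewness=2):
    # Build all the lazy reductions first ...
    mins = [df[col].min() for col in columns]
    maxes = [df[col].max() for col in columns]
    skews = [da.stats.skew(df[col]) for col in columns]
    # ... then evaluate everything in a single pass over the data.
    mins, maxes, skews = dask.compute(mins, maxes, skews)

    for col, min_val, max_val, skewness in zip(columns, mins, maxes, skews):
        if abs(skewness) > max_skewness and (max_val > 1 or min_val < 0):
            delta = max(1, -min_val + 1) if min_val < 0 else 1.0
            df[col] = da.log(delta + df[col])
    return df

df = correct_skewness(df, columns=['x', 'y'])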
I have a function that outputs a dataframe generated from a RINEX (GPS) file. At present, the dataframe is written out to separate satellite (1-32) files. I'd like to access the first column (either while it's still a dataframe or in these new files) in order to convert the date to a timestamp in seconds, like below:
Epochs Epochs
2014-04-27 00:00:00 -> 00000
2014-04-27 00:00:30 -> 00030
2014-04-27 00:01:00 -> 00060
This requires stripping the date away, then converting hh:mm:ss to seconds. I've hit a wall trying to figure out how best to access this first column (Epochs) and then make the conversion on the entire column. The code I have been working on is:
def read_data(self, RINEXfile):
    obs_data_chunks = []
    while True:
        obss, _, _, epochs, _ = self.read_data_chunk(RINEXfile)
        if obss.shape[0] == 0:
            break
        obs_data_chunks.append(pd.Panel(
            np.rollaxis(obss, 1, 0),
            items=['G%02d' % d for d in range(1, 33)],
            major_axis=epochs,
            minor_axis=self.obs_types
        ).dropna(axis=0, how='all').dropna(axis=2, how='all'))
    obs_data_chunks_dataframe = obs_data_chunks[0]
    for sv in range(32):
        sat = obs_data_chunks_dataframe[sv, :]
        print "sat_columns: {0}".format(sat.columns[0])  # list header of first column: L1
        sat.to_csv(('SV_{0}').format(sv + 1), index_label="Epochs", sep='\t')
Do I perform this conversion within the dataframe, i.e. on "sat", or on the files after using to_csv? I'm a bit lost here. Same question for formatting the columns. See the not-so-nicely formatted columns below:
Epochs L1 L2 P1 P2 C1 S1 S2
2014-04-27 00:00:00 669486.833 530073.33 24568752.516 24568762.572 24568751.442 43.0 38.0
2014-04-27 00:00:30 786184.519 621006.551 24590960.634 24590970.218 24590958.374 43.0 38.0
2014-04-27 00:01:00 902916.181 711966.252 24613174.234 24613180.219 24613173.065 42.0 38.0
2014-04-27 00:01:30 1019689.006 802958.016 24635396.428 24635402.41 24635395.627 42.0 37.0
2014-04-27 00:02:00 1136478.43 893962.705 24657620.079 24657627.11 24657621.828 42.0 37.0
UPDATE:
Regarding how I've hit a wall trying to figure out how best to access this first column (Epochs): the "sat" dataframe originally had no "Epochs" in its header. It simply had the signals:
L1 L2 P1 P2 C1 S1 S2
The index (date & time) was missing from the header. In order to overcome this in my csv output files, I "forced" the name with:
sat.to_csv(('SV_{0}').format(sv+1), index_label="Epochs", sep='\t')
I would expect that, before generating the csv files, I should (but don't know how) be able to access this index (date & time) column and convert all the dates/times in one sweep, so that the timestamps are output.
UPDATE:
The epochs are generated in the dataframe in another function as so:
epochs = np.zeros(CHUNK_SIZE, dtype='datetime64[us]')
UPDATE:
def read_data_chunk(self, RINEXfile, CHUNK_SIZE=10000):
    obss = np.empty((CHUNK_SIZE, TOTAL_SATS, len(self.obs_types)), dtype=np.float64) * np.NaN
    llis = np.zeros((CHUNK_SIZE, TOTAL_SATS, len(self.obs_types)), dtype=np.uint8)
    signal_strengths = np.zeros((CHUNK_SIZE, TOTAL_SATS, len(self.obs_types)), dtype=np.uint8)
    epochs = np.zeros(CHUNK_SIZE, dtype='datetime64[us]')
    flags = np.zeros(CHUNK_SIZE, dtype=np.uint8)
    i = 0
    while True:
        hdr = self.read_epoch_header(RINEXfile)
        #print hdr
        if hdr is None:
            break
        epoch, flags[i], sats = hdr
        epochs[i] = np.datetime64(epoch)
        sat_map = np.ones(len(sats)) * -1
        for n, sat in enumerate(sats):
            if sat[0] == 'G':
                sat_map[n] = int(sat[1:]) - 1
        obss[i], llis[i], signal_strengths[i] = self.read_obs(RINEXfile, len(sats), sat_map)
        i += 1
        if i >= CHUNK_SIZE:
            break
    return obss[:i], llis[:i], signal_strengths[:i], epochs[:i], flags[:i]
UPDATE:
My apologies if my description was somewhat vague. I'm actually modifying code that was already developed, and I'm not a software developer, so it's a steep learning curve for me too. Let me explain further: the "Epochs" are read in another function:
def read_epoch_header(self, RINEXfile):
    epoch_hdr = RINEXfile.readline()
    if epoch_hdr == '':
        return None
    year = int(epoch_hdr[1:3])
    if year >= 80:
        year += 1900
    else:
        year += 2000
    month = int(epoch_hdr[4:6])
    day = int(epoch_hdr[7:9])
    hour = int(epoch_hdr[10:12])
    minute = int(epoch_hdr[13:15])
    second = int(epoch_hdr[15:18])
    microsecond = int(epoch_hdr[19:25])  # Discard the least significant digits (use microseconds only).
    epoch = datetime.datetime(year, month, day, hour, minute, second, microsecond)
    flag = int(epoch_hdr[28])
    if flag != 0:
        raise ValueError("Don't know how to handle epoch flag %d in epoch header:\n%s", (flag, epoch_hdr))
    n_sats = int(epoch_hdr[29:32])
    sats = []
    for i in range(0, n_sats):
        if ((i % 12) == 0) and (i > 0):
            epoch_hdr = RINEXfile.readline()
        sats.append(epoch_hdr[(32 + (i % 12) * 3):(35 + (i % 12) * 3)])
    return epoch, flag, sats
In the above read_data function, these are appended into a dataframe. I basically want to have this dataframe separated along its satellite axis, so that each satellite file has the epochs in the first column, followed by the 7 signals. The last bit of code in the read_data function (below) shows this:
for sv in range(32):
    sat = obs_data_chunks_dataframe[sv, :]
    print "sat_columns: {0}".format(sat.columns[0])  # list header of first column: L1
    sat.to_csv(('SV_{0}').format(sv + 1), index_label="Epochs", sep='\t')
The problem here is (1) I want to have the first column as timestamps (so, strip the date, convert so midnight = 00000s and 23:59:59 = 86399s) not as they are now, and (2) ensure the columns are aligned, so I can eventually manipulate these further using a different class to perform other calculations i.e. L1 minus L2 plotted against time, etc.
It will be much quicker to do this while it's still a df. If the dtype is datetime64, just convert to int64 (nanoseconds) and then integer-divide by 10**9 to get seconds:
In [241]:
df['Epochs'].astype(np.int64) // 10**9
Out[241]:
0 1398556800
1 1398556830
2 1398556860
3 1398556890
4 1398556920
Name: Epochs, dtype: int64
If it's a string then convert using to_datetime and then perform the above:
df['Epochs'] = pd.to_datetime(df['Epochs']).astype(np.int64) // 10**9
see related
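If the goal is specifically seconds since midnight (00000 ... 86399) rather than Unix-epoch seconds, the same idea works after normalizing away the date part. A sketch, assuming the Epochs column holds datetime64 values as in the example above:
epochs = pd.to_datetime(df['Epochs'])
# Subtract each timestamp's own midnight, leaving only the time of day.
seconds_of_day = (epochs - epochs.dt.normalize()).dt.total_seconds().astype(int)
df['Epochs'] = seconds_of_day.map('{:05d}'.format)  # zero-pad: 00000 ... 86399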
I resolved part of this myself in the end: in the read_epoch_header function, I simply manipulated a variable that converted just hh:mm:ss to seconds, and used this as the epoch. Doesn't look that elegant but it works. Just need to format the header so that it aligns with the columns (and they are aligned too). Cheers, pymat
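The in-parser conversion described here might look roughly like the following inside read_epoch_header, reusing the hour/minute/second fields it already extracts (a sketch, not the poster's actual code):
# Seconds since midnight, built from the fields parsed above.
seconds_of_day = hour * 3600 + minute * 60 + second
# Return or store seconds_of_day instead of the full datetime when only the
# time of day is needed for the per-satellite output files.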