Find peaks and bottoms of graph and label them - python

df1
Date Topic Return
1/1/2010 A,B -0.308648967
1/2/2010 C,D -0.465862046
1/3/2010 E 0.374052392
1/4/2010 F 0.520312204
1/5/2010 G 0.503889198
1/6/2010 H -1.730646788
1/7/2010 L,M,N 1.756295613
1/8/2010 K -0.598990239
......
1/30/2010 z 2.124355
plot = df1.plot(x='Date', y='Return')
How can I find the highest peaks and deepest troughs in this graph and label these special points with their corresponding Topics?

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Build some example data
data = {"Date":["date{i}".format(i=i) for i in range(10)], "Topic":["topic{i}".format(i=i) for i in range(10)], "Return":[1,2,3,2,1,2,4,7,1,3]}
df = pd.DataFrame.from_dict(data)
dates = np.array(df["Date"].tolist())
returns = np.array(df["Return"].tolist())
# Find the local minima and maxima (sign changes of the first difference)
minimas = (np.diff(np.sign(np.diff(returns))) > 0).nonzero()[0] + 1
maximas = (np.diff(np.sign(np.diff(returns))) < 0).nonzero()[0] + 1
# Plot the entire data first
plt.plot(dates, returns)
# Then mark the maximas and the minimas
for minima in minimas:
    plt.plot(df.iloc[minima]["Date"], df.iloc[minima]["Return"], marker="o", label=str(df.iloc[minima]["Topic"]))
for maxima in maximas:
    plt.plot(df.iloc[maxima]["Date"], df.iloc[maxima]["Return"], marker="o", label=str(df.iloc[maxima]["Topic"]))
plt.legend()
plt.show()
Example dataframe:
Date Topic Return
0 date0 topic0 1
1 date1 topic1 2
2 date2 topic2 3
3 date3 topic3 2
4 date4 topic4 1
5 date5 topic5 2
6 date6 topic6 4
7 date7 topic7 7
8 date8 topic8 1
9 date9 topic9 3
The plot it produces (figure omitted) shows the full series with each local minimum and maximum marked and labeled by its Topic.

This depends a little bit on your definitions of "peak" and "trough". Oftentimes, a person might care about smoothed peaks and troughs to identify broad trends, especially in the presence of noise. In the event that you want every fine-grained dip or rise in the data though (and if your rows are sorted), you can cheat a little bit with vectorized routines from numpy.
import numpy as np
d = np.diff(df['Return'])                          # pairwise differences of consecutive returns
i = np.argwhere((d[:-1] * d[1:]) <= 0).flatten()   # positions where the differences change sign
special_points = df['Topic'][i + 1]                # shift by 1 to undo the length reduction from diff
The first line, with np.diff(), compares each return value to the next one; in particular, it subtracts them. Depending a little on your definition of a local peak/trough, you only have the feature you're looking for where these pairwise differences alternate in sign. Consider the following peak.
[1, 5, 1]
If you compute the pairwise differences, you get a slightly shorter vector
[4, -4]
Note that these alternate in sign. Hence, if you multiply them you get -16, which must be negative. This is the exact insight that our code uses to identify the peaks and troughs. The dimension reduction offsets things a little bit, so we shift the indices we find by 1 (in the df['Topic'][i+1] block).
Caveats: Note that we have <= instead of strict inequality. This is in case we have a wider peak than normal. Consider [1, 2, 2, 2, 2, 2, 1]. Arguably, the string of 2's represents a peak and would need to be captured. If that isn't desirable, make the inequality strict.
Additionally, if you're interested in wider peaks like that, this algorithm still isn't correct. It's plenty fast, but in general it only computes a superset of the peaks/troughs. Consider the following
[1, 2, 2, 3, 2, 1]
Arguably, the number 3 is the only peak in that dataset (depends a bit on your definitions of course), but our algorithm will also pick up the first and second instances of the number 2 due to their being on a shelf (being identical to a neighbor).
Extras: The scipy.signal module has a variety of peak-finding algorithms which may be better suited depending on any extra requirements you have on your peaks. Modifying this solution is unlikely to be as fast or clean as using an appropriate built-in signal processor. A call to scipy.signal.find_peaks() can basically replicate everything we've done here, and it has more options if you need them. Other algorithms like scipy.signal.find_peaks_cwt() might be more appropriate if you need any kind of smoothing or more complicated operations.
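For illustration, here is a minimal sketch (my own, not part of the answer above) applying scipy.signal.find_peaks to the example returns from earlier; troughs are found by running the same search on the negated series:
import numpy as np
from scipy.signal import find_peaks

returns = np.array([1, 2, 3, 2, 1, 2, 4, 7, 1, 3])

peak_idx, _ = find_peaks(returns)     # indices of local maxima, e.g. [2, 7]
trough_idx, _ = find_peaks(-returns)  # indices of local minima, e.g. [4, 8]

print(peak_idx, trough_idx)
These indices can then be passed to df.iloc to pull out the matching Topic labels for annotation.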

Related

Is there a simple way to select a loop or cycle of data in pandas/Python?

I would like to select a cycle of data in Python (in this case, pandas, but it's a flexible and recurring problem), such that the selected data circles back around to the start of the data for n of the beginning rows. I keep running into this problem, and while I have a variety of working solutions for it, I'm interested in whether there's some built-in method for this I don't know about?
Here's an example pandas DataFrame:
import pandas as pd
import numpy as np
df = pd.DataFrame([[0,1],[2,3],[5,6],[7,8]],columns = ['x','y'])
This DataFrame would, of course, look like this:
x y
0 0 1
1 2 3
2 5 6
3 7 8
A good-enough solution I've found is to create an index array:
n = 1 #how far through the start of the dataframe to select
select = np.concatenate((range(len(df)),range(n)))
The output of select would be:
array([0, 1, 2, 3, 0])
If I do df.x[select], I get what I'm looking for:
0 0
1 2
2 5
3 7
0 0
Name: x, dtype: int64
Is there some functionality in NumPy, pandas, or any other module of Python that would allow this kind of looping selection of data? Something like df.x[0:+1] that would work in a way analogous to df.x[0:-1] instead of how it currently works? Or a NumPy method that works the way I'm making np.arange() work by combining it with np.concatenate()? As I mentioned, I keep running into this problem, and for a language as convenient as Python has been, it sure seems like a feature that would or should exist. Am I missing or forgetting something?
Edit to clarify request:
A suggested method by avloss was to use np.tile, which is the right degree of simple and generalizable I'm looking for, only it's excessive for the applications I'm using. These are cases where you have to close the loop of cyclic data either for plotting, or for calculating (eg if you're matching slopes of the beginning and end of a closed curve using a finite difference method). In these cases you only need the first n data points in a series (where n is usually somewhere between 1 and 3) to be repeated for closing the calculation or plotting the cycle.
This is not exactly what you're asking for, but np.tile comes pretty close:
https://numpy.org/doc/stable/reference/generated/numpy.tile.html
Or, if you want to do this through indices, you might use mod division
np.arange(5) % 4  # -> array([0, 1, 2, 3, 0])
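Putting the mod-division idea together with the example DataFrame, a small sketch (assuming the default RangeIndex):
import numpy as np
import pandas as pd

df = pd.DataFrame([[0, 1], [2, 3], [5, 6], [7, 8]], columns=['x', 'y'])

n = 1  # how many leading rows to repeat at the end
select = np.arange(len(df) + n) % len(df)  # array([0, 1, 2, 3, 0])
closed = df.iloc[select]                   # rows 0..3 followed by row 0 again
print(closed.x.tolist())                   # [0, 2, 5, 7, 0]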

Intelligent averaging of time series data with python

I have the following (time-series) data:
t = [5.13, 5.27, 5.40, 5.46, 190.99, 191.13, 191.267, 368.70, 368.83, 368.90, 368.93]
y = [17.17, 17.18, 17.014, 17.104, 16.981, 16.96, 16.85, 17.27, 17.66, 17.76, 18.01]
so, groups of data in short (time) intervals then separated cleanly by a long time gap.
I'm looking for a simple method that will intelligently average these together; sort of a 'Bayesian blocks' but for non-histogram data.
One could do a simple moving average, or numpy convolution, but I'm looking for something a bit smarter that will generalize to larger, similar, but not identical datasets.
It's easy with Pandas. First, construct a DataFrame:
df = pd.DataFrame({'t':t,'y':y})
Then label the groups according to a time threshold:
groups = (df.t.diff() > 10).cumsum()
That gives you [0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 2], because cumsum() on a boolean array increments wherever the input is true.
Finally, use groupby():
df.groupby(groups).mean()
It gives you:
t y
t
0 5.315 17.117000
1 191.129 16.930333
2 368.840 17.675000
If you need plain NumPy arrays at the end, just tack on .t.values and .y.values.
If you don't know a priori what time threshold to use, I'm sure you can come up with some heuristic, perhaps involving simple statistics on df.t and df.t.diff().
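For example, one possible heuristic (an assumption on my part, not something dictated by the data) is to treat any gap much larger than the typical sampling interval as a break, say five times the median gap:
import pandas as pd

t = [5.13, 5.27, 5.40, 5.46, 190.99, 191.13, 191.267, 368.70, 368.83, 368.90, 368.93]
y = [17.17, 17.18, 17.014, 17.104, 16.981, 16.96, 16.85, 17.27, 17.66, 17.76, 18.01]
df = pd.DataFrame({'t': t, 'y': y})

gaps = df.t.diff()
threshold = 5 * gaps.median()         # hypothetical choice: 5x the typical gap
groups = (gaps > threshold).cumsum()
print(df.groupby(groups).mean())      # same three-group result as above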

New Dataframe column as a generic function of other rows (pandas)

What is the fastest (and most efficient) way to create a new column in a DataFrame that is a function of other rows in pandas?
Consider the following example:
import pandas as pd
d = {
'id': [1, 2, 3, 4, 5, 6],
'word': ['cat', 'hat', 'hag', 'hog', 'dog', 'elephant']
}
pandas_df = pd.DataFrame(d)
Which yields:
id word
0 1 cat
1 2 hat
2 3 hag
3 4 hog
4 5 dog
5 6 elephant
Suppose I want to create a new column bar containing a value that is based on the output of using a function foo to compare the word in the current row to the other rows in the dataframe.
def foo(word1, word2):
    # do some calculation
    return foobar  # in this example, the return type is numeric

threshold = some_threshold

for index, _id, word in pandas_df.itertuples():
    value = sum(
        pandas_df[pandas_df['word'] != word].apply(
            lambda x: foo(x['word'], word),
            axis=1
        ) < threshold
    )
    pandas_df.loc[index, 'bar'] = value
This does produce the correct output, but it uses itertuples() and apply(), which are not performant for large DataFrames.
Is there a way to vectorize (is that the correct term?) this approach? Or is there another better (faster) way to do this?
Notes / Updates:
In the original post, I used edit distance/Levenshtein distance as the foo function. I have changed the question in an attempt to be more generic. The idea is that the function to be applied compares the current row's value against all other rows and returns some aggregate value.
If foo was nltk.metrics.distance.edit_distance and the threshold was set to 2 (as in the original post), this produces the output below:
id word bar
0 1 cat 1.0
1 2 hat 2.0
2 3 hag 2.0
3 4 hog 2.0
4 5 dog 1.0
5 6 elephant 0.0
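For concreteness, a rough sketch of that setup (assuming nltk is installed) which should reproduce the table above with the same kind of loop as in the question:
import pandas as pd
from nltk.metrics.distance import edit_distance as foo

threshold = 2
pandas_df = pd.DataFrame({
    'id': [1, 2, 3, 4, 5, 6],
    'word': ['cat', 'hat', 'hag', 'hog', 'dog', 'elephant'],
})

for row in pandas_df.itertuples():
    others = pandas_df.loc[pandas_df['word'] != row.word, 'word']
    # count how many other words are within edit distance < threshold
    pandas_df.loc[row.Index, 'bar'] = (others.map(lambda w: foo(row.word, w)) < threshold).sum()

print(pandas_df)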
I have the same question for spark dataframes as well. I thought it made sense to split these into two posts so they are not too broad. However, I have generally found that solutions to similar pandas problems can sometimes be modified to work for spark.
Inspired by this answer to my spark version of this question, I tried to use a cartesian product in pandas. My speed tests indicate that this is slightly faster (though I suspect that may vary with the size of the data). Unfortunately, I still can't get around calling apply().
Example code:
import numpy as np
from nltk.metrics.distance import edit_distance as edit_dist

pandas_df2 = pd.DataFrame(d)
i, j = np.where(np.ones((len(pandas_df2), len(pandas_df2))))
cart = pandas_df2.iloc[i].reset_index(drop=True).join(
    pandas_df2.iloc[j].reset_index(drop=True), rsuffix='_r'
)
cart['dist'] = cart.apply(lambda x: edit_dist(x['word'], x['word_r']), axis=1)
pandas_df2 = (
    cart[cart['dist'] < 2].groupby(['id', 'word']).count()['dist'] - 1
).reset_index()
Let's try to analyze the problem for a second:
If you have N rows, then you have N*N "pairs" to consider in your similarity function. In the general case, there is no escape from evaluating all of them (sounds very rational, but I can't prove it). Hence, you have at least O(n^2) time complexity.
What you can try, however, is to play with the constant factors of that time complexity.
The possible options I found are:
1. Parallelization:
Since you have a large DataFrame, parallelizing the processing is the obvious first choice. It won't change the time complexity, but it gives you an (almost) linear speedup: with 16 workers you can expect close to a 16x improvement.
For example, we can partition the rows of the df into disjoint parts, and process each part individually, then combine the results.
A very basic parallel code might look like this:
from multiprocessing import cpu_count, Pool

def work(part):
    """
    Args:
        part (DataFrame): a part (collection of rows) of the whole DataFrame.
    Returns:
        DataFrame: the same part, with the desired property calculated and added as a new column.
    """
    # Note that we use the original df (pandas_df) as a global variable.
    # Changes made to it inside this function will not propagate back (a side effect of using multiprocessing).
    for index, _id, word in part.itertuples():  # iterate over this part's tuples
        value = sum(
            pandas_df[pandas_df['word'] != word].apply(  # calculate the desired function using the whole original df
                lambda x: foo(x['word'], word),
                axis=1
            ) < threshold
        )
        part.loc[index, 'bar'] = value
    return part

# New code starts here ...
cores = cpu_count()                            # number of CPU cores on your system
data_split = np.array_split(pandas_df, cores)  # split the DataFrame into parts
pool = Pool(cores)                             # create a pool of worker processes
new_parts = pool.map(work, data_split)         # apply `work` to each part; gives a list of the new parts
pool.close()                                   # close the pool
pool.join()
new_df = pd.concat(new_parts)                  # concatenate the new parts
Note: I've tried to keep the code as close to OP's code as possible. This is just a basic demonstration code and a lot of better alternatives exist.
2. "Low level" optimizations:
Another solution is to try to optimize the similarity function computation and iterating/mapping. I don't think this will gain you much speedup compared to the previous option or the next one.
3. Function-dependent pruning:
The last thing you can try is similarity-function-dependent improvements. This doesn't work in the general case, but it works very well if you can analyze the similarity function. For example:
Assuming you are using Levenshtein distance (LD), you can observe that the distance between any two strings is >= the difference between their lengths. i.e. LD(s1,s2) >= abs(len(s1)-len(s2)) .
You can use this observation to prune the pairs that need to be evaluated. So for each string of length l1, compare it only with strings of length l2 where abs(l1 - l2) <= limit (limit is the maximum accepted dissimilarity, 2 in your provided example).
Another observation is that LD(s1,s2) = LD(s2,s1). That cuts the number of pairs by a factor of 2.
This solution may actually get you down to O(n) time complexity (depends highly on the data).
Why? you may ask.
That's because if we had 10^9 rows, but on average only 10^3 rows with a "close" length to each row, then we would need to evaluate the function for about 10^9 * 10^3 / 2 pairs instead of 10^9 * 10^9 pairs. But that (again) depends on the data. This approach is useless if (in this example) all of your strings have length 3.
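A rough sketch of the length-based pruning described in point 3 (illustrative only; the nltk import and the small word list are carried over from the example above):
import pandas as pd
from nltk.metrics.distance import edit_distance

limit = 2  # maximum accepted dissimilarity, as in the example
df = pd.DataFrame({'word': ['cat', 'hat', 'hag', 'hog', 'dog', 'elephant']})
df['len'] = df['word'].str.len()

counts = []
for w, l in zip(df['word'], df['len']):
    # only evaluate the distance against words whose length differs by at most `limit`
    candidates = df[((df['len'] - l).abs() <= limit) & (df['word'] != w)]
    counts.append((candidates['word'].map(lambda c: edit_distance(w, c)) < limit).sum())
df['bar'] = counts  # [1, 2, 2, 2, 1, 0], matching the expected output above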
Thoughts about preprocessing (groupby)
Because you are looking for an edit distance of less than 2, you can first group by the length of the strings. If the difference in length between two groups is 2 or more, you do not need to compare them. (This part is quite similar to section 3 of Qusai Alothman's answer.)
Thus, first thing is to group by the length of the string.
df["length"] = df.word.str.len()
df.groupby("length")[["id", "word"]]
Then, you only need to compute the edit distance between groups whose lengths differ by less than 2, i.e. between consecutive length groups. This does not directly relate to your question, but I hope it is helpful.
Potential vectorization (after groupby)
After that, you may also try to vectorize the computation by splitting each string into characters. Note that if the cost of splitting outweighs the vectorized benefit it brings, you should not do this. Alternatively, when you are creating the data frame, just create one with characters rather than words.
We will use the answer in Pandas split dataframe column for every character to split a string into a list of characters.
# assuming we have already grouped the df
df_len_3 = pd.DataFrame({"word": ['cat', 'hat', 'hag', 'hog', 'dog']})
# turn it into chars
splitted = df_len_3.word.apply(lambda x: pd.Series(list(x)))
0 1 2
0 c a t
1 h a t
2 h a g
3 h o g
4 d o g
splitted.loc[0] == splitted # compare one word to all words
0 1 2
0 True True True -> comparing to itself is always all true.
1 False True True
2 False True False
3 False False False
4 False False False
splitted.apply(lambda x: (x == splitted).sum(axis=1).ge(len(x)-1), axis=1).sum(axis=1) - 1
0 1
1 2
2 2
3 2
4 1
dtype: int64
Explanation of splitted.apply(lambda x: (x == splitted).sum(axis=1).ge(len(x)-1), axis=1).sum(axis=1) - 1
For each row, lambda x: (x == splitted) compares each row to the whole df just like splitted.loc[0] == splitted above. It will generate a true/false table.
Then, we sum up the table horizontally with a .sum(axis=1) following (x == splitted).
Then, we want to find out which words are similar, so we apply ge to check whether the number of True values meets a threshold. Here we only allow a difference of 1, so it is set to len(x)-1.
Finally, we have to subtract 1 from the whole array because each word is compared with itself in the operation, and we want to exclude that self-comparison.
Note, this vectorization part only works for within-group similarity checking. You still need to check groups with different length with the edit distance approach, I suppose.

Find the worst element making the correlation worse in pandas DataFrame

I'd like to find the worst record, the one that makes the correlation worse, in a pandas DataFrame, so that I can remove anomalous records.
When I have the following DataFrame:
df = pd.DataFrame({'a':[1,2,3], 'b':[1,2,30]})
The correlation improves when the third row is removed.
print(df.corr())           # -> correlation is 0.88
print(df.loc[0:1].corr())  # -> correlation is 1.00
In this case, my question is how to identify the third row as a candidate anomaly that makes the correlation worse.
My idea is to run a linear regression and calculate the error of each element (row). But I don't know a simple way to try that idea, and I also believe there is a simpler and more straightforward way.
Update
Of course, you could keep removing elements until the correlation is 1, but I'd like to find just one (or several) anomalous row(s). Intuitively, I hope to get a non-trivial set of records that achieves a better correlation.
First, you could brute-force it to get an exact solution:
import pandas as pd
import numpy as np
from itertools import combinations, chain

df = pd.DataFrame(list(zip(np.random.randn(10), np.random.randn(10))))
# set the maximal number of rows you are willing to remove
remove_up_to_n = 3
# all combinations of indices to keep
to_keep = map(list, chain.from_iterable(
    combinations(df.index, df.shape[0] - i) for i in range(1, remove_up_to_n + 1)))
# find the index set with the highest remaining correlation
highest_correlation_index = max(to_keep, key=lambda ks: df.loc[ks].corr().iloc[0, 1])
df_remaining = df.loc[highest_correlation_index]
This can be costly. You can get a greedy approximation by adding a column with something like each row's contribution to the correlation.
df['CorComp'] = (df.iloc[:, 0].mean() - df.iloc[:, 0]) * (df.iloc[:, 1].mean() - df.iloc[:, 1])
df = df.sort_values('CorComp')
Now you can remove rows starting from the top, which may raise your correlation.
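A different, self-contained greedy sketch (my own variation; instead of the CorComp proxy it recomputes the correlation after dropping each candidate row and keeps the single best removal per step):
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame({'a': np.random.randn(10), 'b': np.random.randn(10)})

def greedy_remove(frame, n_remove=1):
    """Repeatedly drop the single row whose removal improves the correlation the most."""
    frame = frame.copy()
    for _ in range(n_remove):
        trials = {idx: frame.drop(idx).corr().iloc[0, 1] for idx in frame.index}
        best = max(trials, key=trials.get)
        if trials[best] <= frame.corr().iloc[0, 1]:
            break  # no single removal helps any more
        frame = frame.drop(best)
    return frame

print(greedy_remove(df, n_remove=3).corr().iloc[0, 1])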
Your question is about outlier detection. There are many ways to perform this detection, but a simple one is to exclude values whose deviation from the mean exceeds some multiple of the standard deviation of the series.
# Keep only values that deviate from the mean by at most 1.1 standard deviations of the series.
df[np.abs(df.b - df.b.mean()) <= (1.1 * df.b.std())]
# result
a b
0 1 1
1 2 2

Using scipy pdist on pandas data frames (and) lists

I've run into an odd problem yet again.
Suppose I have the following dummy data frame (by way of demonstrating my problem):
import numpy as np
import pandas as pd
import string
# Test data frame
N = 3
col_ids = string.ascii_uppercase[:N]
df = pd.DataFrame(
    np.random.randn(5, 3 * N),
    columns=['{}_{}'.format(letter, coord) for letter in col_ids for coord in list('xyz')])
df
This produces:
A_x A_y A_z B_x B_y B_z C_x C_y C_z
0 -1.339040 0.185817 0.083120 0.498545 -0.569518 0.580264 0.453234 1.336992 -0.346724
1 -0.938575 0.367866 1.084475 1.497117 0.349927 -0.726140 -0.870142 -0.371153 -0.881763
2 -0.346819 -1.689058 -0.475032 -0.625383 -0.890025 0.929955 0.683413 0.819212 0.102625
3 0.359540 -0.125700 -0.900680 -0.403000 2.655242 -0.607996 1.117012 -0.905600 0.671239
4 1.624630 -1.036742 0.538341 -0.682000 0.542178 -0.001380 -1.126426 0.756532 -0.701805
Now I would like to use scipy.spatial.distance.pdist on this pandas data frame. This turns out to be a rather non-trivial process. What pdist does is to compute the distance between m points using Euclidean distance (2-norm) as the distance metric between the points. The points are arranged as m n-dimensional row vectors in the matrix X (source).
So, there are a couple of things that one has to do to create a function that operates on a pandas data frame, such that the pdist function can be used. You will note that pdist is convenient when the number of points gets very large. I've tried making my own, which works for a one-row data-frame, but I cannot get it to work, ideally, on the whole data frame at once.
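For reference, a minimal sketch of what pdist returns for three 3-D points; the condensed output lists the pairwise distances AB, AC, BC in that order:
import numpy as np
from scipy.spatial.distance import pdist

pts = np.array([[0.0, 0.0, 0.0],   # point A
                [1.0, 0.0, 0.0],   # point B
                [0.0, 2.0, 0.0]])  # point C
print(pdist(pts))  # approximately [1.0, 2.0, 2.236], i.e. |AB|, |AC|, |BC|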
Here's my attempt:
from scipy.spatial.distance import pdist, squareform
import numpy as np
import pandas as pd
import string
def Euclidean_distance(df):
    EcDist = pd.DataFrame(index=df.index)  # results container
    arr = df.values  # store the data frame values in a numpy array
    tag_list = [num for elem in arr for num in elem]  # flatten the array into a single list
    tag_list_3D = list(zip(*[iter(tag_list)] * 3))  # split into length-3 sub-lists that pdist() can work with
    EcDist = pdist(tag_list_3D)  # the distance between m points using Euclidean distance (2-norm)
    return EcDist
First I begin by creating a results container in pandas form, to store the result in. Secondly, I save the pandas data frame as a numpy array, in order to get it into list form in the next step. It has to be in list form because the pdist function only operates on list-like input. When the data frame is saved into an array, it is stored as a list within a list; this has to be flattened, and the result is saved in the 'tag_list' variable. Thirdly, the tag_list is further reduced into sub-lists of length three, so that the x, y and z coordinates can be obtained for each point, which can then be used to find the Euclidean distance between all of these points (in this example there are three points, A, B and C, each being three dimensional).
As said, the function works if the data frame is a single row, but when using the function in the given example it calculates the Euclidean distance for 5x3 points, which yields a total of 105 distances. What I want it to do is to calculate the distances per row (so pdist should only work on a 1x3 vector at a time). Such that my final results, for this example, would look something like this:
dist_1 dist_2 dist_3
0 0.807271 0.142495 1.759969
1 0.180112 0.641855 0.257957
2 0.196950 1.334812 0.638719
3 0.145780 0.384268 0.577387
4 0.044030 0.735428 0.549897
(these are just dummy numbers to show the desired shape)
Hence how do I get my function to apply to the data frame in a row-wise fashion?
Or better yet, how can I get it to perform the function on the entire data frame at once, and then store the result in a new data frame?
Any help would be very appreciated. Thanks.
If I understand correctly, you have "groups" of points. In your example each group has three points, which you call A, B and C. A is represented by three columns A_x, A_y, A_z, and likewise for B and C.
What I suggest is that you restructure your "wide-form" data into a "long" form in which each row contains only one point. Each row then will have only three columns for the coordinates, and then you will add an additional column to represent which group a point is in. Here's an example:
>>> d = pandas.DataFrame(np.random.randn(12, 3), columns=["X", "Y", "Z"])
>>> d["Group"] = np.repeat([1, 2, 3, 4], 3)
>>> d
X Y Z Group
0 -0.280505 0.888417 -0.936790 1
1 0.823741 -0.428267 1.483763 1
2 -0.465326 0.005103 -1.107431 1
3 -1.009077 -1.618600 -0.443975 2
4 0.535634 0.562617 1.165269 2
5 1.544621 -0.858873 -0.349492 2
6 0.839795 0.720828 -0.973234 3
7 -2.273654 0.125304 0.469443 3
8 -0.179703 0.962098 -0.179542 3
9 -0.390777 -0.715896 -0.897837 4
10 -0.030338 0.746647 0.250173 4
11 -1.886581 0.643817 -2.658379 4
The three points with Group==1 correspond to A, B and C in your first row; the three points with Group==2 correspond to A, B, and C in your second row; etc.
With this structure, computing the pairwise distances by group using pdist becomes straightforward:
>>> d.groupby('Group')[["X", "Y", "Z"]].apply(lambda g: pandas.Series(distance.pdist(g), index=["D1", "D2", "D3"]))
D1 D2 D3
Group
1 2.968517 0.918435 2.926395
2 3.119856 2.665986 2.309370
3 3.482747 1.314357 2.346495
4 1.893904 2.680627 3.451939
It is possible to do a similar thing with your existing setup, but it will be more awkward. The problem with the way you set it up is that you have encoded critical information in a difficult-to-extract way. The information about which columns are X coordinates and which are Y or Z coordinates, as well as the information about which columns refer to point A versus B or C, in your setup, is encoded in the textual names of the columns. You as a human can see which columns are X values just by looking at them, but specifying that programmatically requires parsing the string names of the columns.
You can see this in how you made the column names with your '{}_{}'.format(letter, coord) business. This means that in order to get to use pdist on your data, you will have to do the reverse operation of parsing the column names as strings in order to decide which columns to compare. Needless to say, this will be awkward. On the other hand, if you put the data into "long" form, there is no such difficulty: the X coordinates of all points line up in one column, and likewise for Y and Z, and the information about which points are to be compared is also contained in one column (the "Group" column).
When you want to do large-scale operations on subsets of data, it's usually better to split out things into separate rows. This allows you to leverage the power of groupby, and is also usually what is expected by scipy tools.
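To make the reshaping concrete, here is a sketch (my own, assuming a reasonably recent pandas) that parses the wide column names back apart, pivots the data into the long form described above, and then applies pdist per original row:
import numpy as np
import pandas as pd
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
wide = pd.DataFrame(
    rng.standard_normal((5, 9)),
    columns=['{}_{}'.format(p, c) for p in 'ABC' for c in 'xyz'])

# Parse the column names apart and pivot into one row per point.
long = wide.stack().reset_index()
long.columns = ['row', 'point_coord', 'value']
long[['point', 'coord']] = long['point_coord'].str.split('_', expand=True)
long = long.set_index(['row', 'point', 'coord'])['value'].unstack('coord').reset_index()

# The original row index plays the role of the "Group" column.
dists = long.groupby('row')[['x', 'y', 'z']].apply(
    lambda g: pd.Series(pdist(g), index=['dist_1', 'dist_2', 'dist_3']))
print(dists)  # distances AB, AC, BC for each original row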
