calculate the mad() of every two values of a dataframe - python

I have a dataframe that looks like:
index               data
11727.213152  -62.260842
12144.825397  -26.384420
12566.138322  -47.091084
12981.362812  -74.528391
I would like to calculate the mad() (mean absolute deviation) value of every two items of the data column. How can I do that?
Is there a way to group the data column in groups of two (or more)?
Or should I simply iterate through the df and calculate the mad of two consecutive values?
Thanks!

I think you need groupby with a helper array created by floor division (//):
import numpy as np

s = df.groupby(np.arange(len(df)) // 2)['data'].mad()
print(s)
0    17.938211
1    13.718653
Name: data, dtype: float64
Detail:
print(np.arange(len(df)) // 2)
[0 0 1 1]
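Note: mad() was deprecated in pandas 1.5 and removed in 2.0. On newer versions the same grouping still works if you spell out the mean absolute deviation with apply; a minimal sketch that reproduces the output above:
import numpy as np
import pandas as pd

df = pd.DataFrame({'data': [-62.260842, -26.384420, -47.091084, -74.528391]})

# mean absolute deviation of each pair of rows, computed by hand
s = df.groupby(np.arange(len(df)) // 2)['data'] \
      .apply(lambda g: (g - g.mean()).abs().mean())
print(s)
0    17.938211
1    13.718653
Name: data, dtype: float64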

Related

How to retrieve cells from a dataframe based on condition from another dataframe

We have two dataframes; the first one contains some float values (which represent average speeds).
           0          1          2
1  15.610826  19.182879   6.678087
2  13.740250  15.666897  17.640749
3   2.379010   2.889702   2.955097
4  20.540628   9.661226   9.479921
And another dataframe with geographical coordinates, where the average speed takes place.
   0                                 1                                  2
1  [52.2399255, 21.0654495]          [52.23893150000001, 21.06087]      [52.23800850000001, 21.056779]
2  [52.2449705, 21.0755175]          [52.2452905, 21.075118000000003]   [52.245557500000004, 21.0748175]
3  [52.2401885, 21.012981500000002]  [52.239134, 21.009432]             [52.238420500000004, 21.007080000000002]
4  [52.221506500000004, 20.9665085]  [52.222458, 20.968952]             [52.224409, 20.969248999999998]
Now I want to create a list with the coordinates where the average speed is above 18, which in this case would be:
list_above_18 = [[52.23893150000001, 21.06087], [52.221506500000004, 20.9665085]]
How can I select values from a dataframe based on values in another dataframe?
You can use enumerate to zip the dataframes and work on the elements separately. See below (A and B are your dataframes, in the same order you provided them):
list_above_18 = []
p = list(enumerate(zip(A.values, B.values)))
for i in p:
    for k in range(3):
        if i[1][0][k] > 18:                   # i[1][0] is the speed row from A
            list_above_18.append(i[1][1][k])  # i[1][1] is the coordinate row from B
Output:
>>> print(list_above_18)
[[52.23893150000001, 21.06087], [52.221506500000004, 20.9665085]]
Assuming the shape of the average speed dataset stays the same as the coordinates dataset, you can try the below:
coord_df[data_df.iloc[:,:] > 18].T.stack().values
Here,
coord_df = the DataFrame with coordinate values
data_df = the DataFrame with average speed values
This returns a numpy array with just the coordinate values for which the average speed is greater than 18.
How this works:
data_df.iloc[:,:] > 18
creates a boolean mask in which every value of 18 or below is marked False and the rest True.
coord_df[data_df.iloc[:,:] > 18]
passes the mask to the target (coordinate) dataframe, resulting in a dataframe that shows coordinate values only for the cells where the mask is True, i.e. where the average speed was above 18.
.T.stack().values
retrieves only the non-null values from the resulting dataframe and returns a numpy array.
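For reference, a minimal self-contained sketch of this approach with toy values (assuming, as above, that both frames share the same shape; note that the .T means the results come out in column-major order):
import pandas as pd

data_df = pd.DataFrame([[15.6, 19.2], [20.5, 9.5]])          # average speeds
coord_df = pd.DataFrame([[[52.24, 21.07], [52.24, 21.06]],
                         [[52.22, 20.97], [52.23, 20.98]]])  # [lat, lon] per cell

result = coord_df[data_df > 18].T.stack().values
print(result)
[list([52.22, 20.97]) list([52.24, 21.06])]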
References:
Get non-null elements in a pandas DataFrame --- to get only the non-null values from a dataframe (.T.stack().values)
Let the first df be df1 and the second df be df2:
output_array = df2[df1 > 18].values.flatten()  # df1 > 18 creates the mask
output_array = [val for val in output_array if isinstance(val, list)]  # drop the NaNs; np.isnan doesn't work on lists
Sample input: df1 (speeds) and df2 (coordinates); sample output:
output_array
[[15.1, 20.5], [91.5, 95.8]]
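A toy reconstruction of the idea (df1 and df2 here are made-up frames, chosen so that the result matches the sample output above):
import pandas as pd

df1 = pd.DataFrame([[19.0, 20.0], [1.0, 2.0]])        # speeds
df2 = pd.DataFrame([[[15.1, 20.5], [91.5, 95.8]],
                    [[40.0, 41.0], [50.0, 51.0]]])    # coordinates

output_array = df2[df1 > 18].values.flatten()
output_array = [val for val in output_array if isinstance(val, list)]
print(output_array)
[[15.1, 20.5], [91.5, 95.8]]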

Return value from row based on column value pandas

I have a data frame with a column of lists of strings, and I want to find the value of one column in the row where another column matches a given value,
i.e.
       samples  subject  trial_num
0  ['aa','bb']        1          1
1  ['bb','cc']        1          2
I have ['bb','cc'] and I want to get the value from the trial_num column in the row where this list equals the samples column, in this case 2.
Given that the search column (samples) contains a list, things are a tiny bit more complicated.
In this case, the apply() function can be used to test the values and return a boolean mask, which can then be applied to the DataFrame to obtain the required value.
Example code:
df.loc[df['samples'].apply(lambda x: x == ['bb', 'cc']), 'trial_num']
Output:
1    2
Name: trial_num, dtype: int64
To only return the required value (2), simply append .iloc[0] to the end of the statement, as:
df.loc[df['samples'].apply(lambda x: x == ['bb', 'cc']), 'trial_num'].iloc[0]
>>> 2
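For completeness, a self-contained sketch reconstructing the frame from the question:
import pandas as pd

df = pd.DataFrame({'samples': [['aa', 'bb'], ['bb', 'cc']],
                   'subject': [1, 1],
                   'trial_num': [1, 2]})

# boolean mask from apply, then pull out the matching trial_num
match = df['samples'].apply(lambda x: x == ['bb', 'cc'])
print(df.loc[match, 'trial_num'].iloc[0])
2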

python - elegant way to pick the highest weighted value in set

I have a list such that
l = ['xyz','abc','mnq','qpr']
These values are weighted such that xyz > abc > mnq > qpr.
I have a pandas dataframe with a column that has sets of values.
COL_NAME
0 set(['xyz', 'abc'])
1 set(['xyz'])
2 set(['mnq','qpr'])
Now I want to pick the highest value in each set, so that after applying the custom function I am left with:
COL_NAME
0 set(['xyz'])
1 set(['xyz'])
2 set(['mnq'])
Is there an elegant way to do this process without resorting to a dictionary of weights?
You can use pd.Categorical with the parameter ordered=True and set categories=l[::-1] to get the order you'd like.
def max_cat(x):
    return set([pd.Categorical(x, l[::-1], True).max()])

df.COL_NAME.apply(max_cat)
0    {xyz}
1    {xyz}
2    {mnq}
Name: COL_NAME, dtype: object
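A runnable sketch of the same idea; list(x) is used because newer pandas versions are stricter about passing sets to array constructors:
import pandas as pd

l = ['xyz', 'abc', 'mnq', 'qpr']  # highest weight first
df = pd.DataFrame({'COL_NAME': [{'xyz', 'abc'}, {'xyz'}, {'mnq', 'qpr'}]})

def max_cat(x):
    # reversed categories make the heaviest value the "largest"
    cat = pd.Categorical(list(x), categories=l[::-1], ordered=True)
    return {cat.max()}

print(df['COL_NAME'].apply(max_cat))
0    {xyz}
1    {xyz}
2    {mnq}
Name: COL_NAME, dtype: object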

how do you pass multiple variables to pandas dataframe to use them with .map to create a new column

To pass multiple variables to a normal python function you can just write something like:
def a_function(date, string, float):
    # do something...
    # convert string to int
    # date = date + (float * int) days
    return date
When using Pandas DataFrames I know you can create a new column based on the contents of one like so:
df['new_col'] = df['column_A'].map(a_function)
# This might return the year from a date column
# return date.year
What I'm wondering is in the same way you can pass multiple pieces of data to a single function (as seen in the first example above), can you use multiple columns in the creation of a new pandas DataFrame column?
For example combining three separate parts of a date Y - M - D into one field.
df['whole_date'] = df['Year','Month','Day'].map(a_function)
I get a key error with the following test.
def combine(one, two, three):
    return one + two + three

df = pd.DataFrame({'a': [1,2,3], 'b': [2,3,4], 'c': [4,5,6]})
df['d'] = df['a','b','c'].map(combine)
Is there a way of creating a new column in a pandas DataFrame using .map or something else which takes as input three columns and returns a single column?
-> Example input: 1, 2, 3
-> Example output: 1*2*3
Likewise is there also a way of having a function take in one argument, a date and return three new pandas DataFrame columns; one for the year, month and day?
Is there a way of creating a new column in a pandas dataframe using .map or something else which takes as input three columns and returns a single column? For example, input would be 1, 2, 3 and output would be 1*2*3.
To do that, you can use apply with axis=1. However, instead of being called with three separate arguments (one for each column) your specified function will then be called with a single argument for each row, and that argument will be a Series containing the data for that row. You can either account for this in your function:
def combine(row):
    return row['a'] + row['b'] + row['c']

>>> df.apply(combine, axis=1)
0     7
1    10
2    13
Or you can pass a lambda which unpacks the Series into separate arguments:
def combine(one, two, three):
    return one + two + three

>>> df.apply(lambda x: combine(*x), axis=1)
0     7
1    10
2    13
If you want to pass only specific columns, you need to select them by indexing on the DataFrame with a list:
>>> df[['a', 'b', 'c']].apply(lambda x: combine(*x), axis=1)
0     7
1    10
2    13
Note the double brackets. (This doesn't really have anything to do with apply; indexing with a list is the normal way to access multiple columns from a DataFrame.)
However, it's important to note that in many cases you don't need to use apply, because you can just use vectorized operations on the columns themselves. The combine function above can simply be called with the DataFrame columns themselves as the arguments:
>>> combine(df.a, df.b, df.c)
0     7
1    10
2    13
This is typically much more efficient when the "combining" operation is vectorizable.
Likewise is there also a way of having a function take in one argument, a date and return three new pandas dataframe columns; one for the year, month and day?
As above, there are two basic ways to do this: a general but non-vectorized way using apply, and a faster vectorized way. Suppose you have a DataFrame like this:
>>> df = pandas.DataFrame({'date': pandas.date_range('2015/05/01', '2015/05/03')})
>>> df
        date
0 2015-05-01
1 2015-05-02
2 2015-05-03
You can define a function that returns a Series for each value, and then apply it to the column:
def dateComponents(date):
    return pandas.Series([date.year, date.month, date.day],
                         index=["Year", "Month", "Day"])

>>> df.date.apply(dateComponents)
   Year  Month  Day
0  2015      5    1
1  2015      5    2
2  2015      5    3
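Since pandas 0.15 the .dt accessor also exposes the date components in vectorized form, which avoids the per-row apply; a sketch:
>>> pandas.DataFrame({'Year': df.date.dt.year,
...                   'Month': df.date.dt.month,
...                   'Day': df.date.dt.day})
   Year  Month  Day
0  2015      5    1
1  2015      5    2
2  2015      5    3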
Before the .dt accessor existed, apply was the only option here, since there was no other vectorized way to access the individual date components. In other situations, however, you can use vectorized operations:
>>> df = pandas.DataFrame({'a': ["Hello", "There", "Pal"]})
>>> df
       a
0  Hello
1  There
2    Pal
>>> pandas.DataFrame({'FirstChar': df.a.str[0], 'Length': df.a.str.len()})
  FirstChar  Length
0         H       5
1         T       5
2         P       3
Here again the operation is vectorized by operating directly on the values instead of applying a function elementwise. In this case, we have two vectorized operations (getting first character and getting the string length), and then we wrap the results in another call to DataFrame to create separate columns for each of the two kinds of results.
I normally use apply for this kind of thing; it's basically the DataFrame version of map (the axis parameter lets you decide whether to apply your function to rows or columns):
df.apply(lambda row: row.a * row.b * row.c, axis=1)
or
df.apply(np.prod, axis=1)
0     8
1    30
2    72

Adding calculated column(s) to a dataframe in pandas

I have an OHLC price data set, that I have parsed from CSV into a Pandas dataframe and resampled to 15 min bars:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 500047 entries, 1998-05-04 04:45:00 to 2012-08-07 00:15:00
Freq: 15T
Data columns:
Close 363152 non-null values
High 363152 non-null values
Low 363152 non-null values
Open 363152 non-null values
dtypes: float64(4)
I would like to add various calculated columns, starting with simple ones such as the period range (High - Low), and then booleans to indicate the occurrence of price patterns that I will define - e.g. a hammer candle pattern, for which a sample definition is:
def closed_in_top_half_of_range(h, l, c):
    return c > l + (h - l) / 2

def lower_wick(o, l, c):
    return min(o, c) - l

def real_body(o, c):
    return abs(c - o)

def lower_wick_at_least_twice_real_body(o, l, c):
    return lower_wick(o, l, c) >= 2 * real_body(o, c)

def is_hammer(row):
    return lower_wick_at_least_twice_real_body(row["Open"], row["Low"], row["Close"]) \
        and closed_in_top_half_of_range(row["High"], row["Low"], row["Close"])
Basic problem: how do I map the function to the column, specifically where I would like to reference more than one other column or the whole row or whatever?
This post deals with adding two calculated columns off of a single source column, which is close, but not quite it.
And slightly more advanced: for price patterns that are determined with reference to more than a single bar (T), how can I reference different rows (e.g. T-1, T-2 etc.) from within the function definition?
The exact code will vary for each of the columns you want to compute, but it's likely you'll want to use the map and apply functions. In some cases you can just compute using the existing columns directly, since the columns are Pandas Series objects, which also work as Numpy arrays and automatically operate element-wise for the usual mathematical operations.
>>> d
    A   B  C
0  11  13  5
1   6   7  4
2   8   3  6
3   4   8  7
4   0   1  7
>>> (d.A + d.B) / d.C
0    4.800000
1    3.250000
2    1.833333
3    1.714286
4    0.142857
>>> d.A > d.C
0     True
1     True
2     True
3    False
4    False
If you need to use operations like max and min within a row, you can use apply with axis=1 to apply any function you like to each row. Here's an example that computes min(A, B)-C, which seems to be like your "lower wick":
>>> d.apply(lambda row: min([row['A'], row['B']]) - row['C'], axis=1)
0    6
1    2
2   -3
3   -3
4   -7
Hopefully that gives you some idea of how to proceed.
Edit: to compare rows against neighboring rows, the simplest approach is to slice the columns you want to compare, leaving off the beginning/end, and then compare the underlying values. Compare .values rather than the sliced Series themselves: pandas aligns Series comparisons on the index, which would undo the offset (and differently-labeled slices raise an error in current pandas). For instance, this will tell you for which rows the element in column A is less than the next row's element in column C:
d['A'][:-1].values < d['C'][1:].values
and this does it the other way, telling you which rows have A less than the preceding row's C:
d['A'][1:].values < d['C'][:-1].values
Doing ['A'][:-1] slices off the last element of column A, and doing ['C'][1:] slices off the first element of column C, so when you line these two up and compare them, you're comparing each element in A with the C from the following row. An equivalent index-friendly spelling is d['A'] < d['C'].shift(-1).
You could have is_hammer in terms of row["Open"] etc. as follows:
def is_hammer(rOpen, rLow, rClose, rHigh):
    return lower_wick_at_least_twice_real_body(rOpen, rLow, rClose) \
        and closed_in_top_half_of_range(rHigh, rLow, rClose)
Then you can use map (wrapped in list() so it also works on Python 3, where map returns an iterator):
df["isHammer"] = list(map(is_hammer, df["Open"], df["Low"], df["Close"], df["High"]))
For the second part of your question, you can also use shift, for example:
df['t-1'] = df['t'].shift(1)
t-1 would then contain the values from t one row above.
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shift.html
The first four functions you list will work on vectors as well, with the exception that lower_wick needs to be adapted. Something like this,
def lower_wick_vec(o, l, c):
    min_oc = numpy.where(o > c, c, o)
    return min_oc - l
where o, l and c are vectors.
You could do it this way instead, which just takes the df as input and avoids using numpy, although it will be much slower:
def lower_wick_df(df):
    min_oc = df[['Open', 'Close']].min(axis=1)
    return min_oc - df['Low']
The other three will work on columns or vectors just as they are. Then you can finish off with
def is_hammer(df):
    lw = lower_wick_at_least_twice_real_body(df["Open"], df["Low"], df["Close"])
    cl = closed_in_top_half_of_range(df["High"], df["Low"], df["Close"])
    return cl & lw
Bitwise operators can perform set logic on boolean vectors: & for and, | for or, etc. This is enough to completely vectorize the sample calculations you gave and should be relatively fast. You could probably speed this up even more by temporarily working with the numpy arrays underlying the data while performing these calculations.
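Putting it together, usage would then be a single vectorized assignment (a sketch; column names as in the question's OHLC frame):
# one boolean per 15-minute bar, no Python-level loop
df["Hammer"] = is_hammer(df)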
For the second part, I would recommend introducing a column indicating the pattern for each row and writing a family of functions which deal with each pattern. Then groupby the pattern and apply the appropriate function to each group.
