In Python I can easily do this, and the output is the whole list:
import random
list = [random.randrange(150) for i in range(10)]
print(list)
Can I do this in C# without a for loop like this? Because this output prints my list's elements on separate lines.
List<int> list = new List<int> ();
Random rnd = new Random();
for (int i = 0; i < 10; i++){
list.Add(rnd.Next (150));
}
for(int i = 0; i < list.Count; i++){
Console.WriteLine(list[i]);
}
Well, we can do it in one line if you want. This code is also thread-safe, but requires .NET 6.0 or higher due to the use of Random.Shared.
Console.WriteLine(string.Join(",", Enumerable.Range(0, 10).Select(_ => Random.Shared.Next(150))));
This generates an IEnumerable<int> with random integers from 0 to 149 and then writes them to the Console separated by commas.
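For comparison, the Python analogue of this join-then-print step (rather than printing the list object itself) would be something like:

import random
# join the stringified numbers with commas, mirroring string.Join in the C# one-liner
print(",".join(str(random.randrange(150)) for _ in range(10)))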
As far as I know, there is no method in .NET for generating a list of random integers, but why not write your own? For example:
public static class MyEnumerable
{
public static IEnumerable<int> RandomEnumerable(int maxValue, int count, Random random = default)
{
if (count < 0)
{
throw new ArgumentOutOfRangeException(nameof(count));
}
if (maxValue < 0)
{
throw new ArgumentOutOfRangeException(nameof(maxValue));
}
random ??= Random.Shared;
for (int i = 0; i < count; i++)
{
yield return random.Next(maxValue);
}
}
}
Now you can do your task in two lines, like in Python:
var randomList = MyEnumerable.RandomEnumerable(150, 10).ToList();
Console.WriteLine($"[{string.Join(", ", randomList)}]");
I have written a short function to convert an input decimal number to a binary output. However, at a much higher level of the code, the end user should be able to toggle an option for whether they want a 5B or 10B value. For the sake of some other low-level maths, I have to clip the data here.
So I need some help figuring out how to clip the output to a desired length and pad it with the required number of leading zeros.
The incomplete C code:
long dec2bin(int x_dec, int res)  /* res: desired output resolution, not yet used */
{
    long x_bin = 0;
    int x_bin_len;                /* intended for length handling, not yet used */
    int x_rem, i = 1;
    while (x_dec != 0)
    {
        x_rem = x_dec % 2;        /* extract the next binary digit */
        x_dec /= 2;
        x_bin += x_rem * i;       /* store it as a decimal digit of the result */
        i *= 10;
    }
    return x_bin;
}
I had completed a working proof of concept using Python. The end application, however, requires that I write this in C.
The working Python script:
def dec2bin(x_dec,x_res):
x_bin = bin(x_dec)[2:] #Convert to Binary (Remove 0B Prefix)
x_len = len(x_bin)
if x_len < x_res: #If Smaller than desired resolution
x_bin = '0' * (x_res-x_len) + x_bin #Stuff with leading 0s
if x_len > x_res: #If larger than desired resolution
x_bin = x_bin[x_len-x_res:x_len] #Display desired no. LSBs
return x_bin
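For concreteness, here is what the Python version produces for a couple of inputs (values worked out by hand):

print(dec2bin(5, 8))    # '00000101' -> padded with leading zeros
print(dec2bin(300, 5))  # '01100'    -> clipped to the 5 LSBs (300 % 32 == 12)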
I'm sure this has been done before; indeed, my Python script proves it should be relatively straightforward, but I'm not as experienced with C.
Any help is greatly appreciated.
Mark.
As @yano suggested, I think you have to return an ASCII string to the caller rather than a long. Below is the short function I wrote for my own purposes, for any base...
#include <string.h>

char *itoa ( int i, int base, int ndigits ) {
  static char a[999], digits[99] = /* up to base 65 */
    "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz#$*";
  int n = ndigits;
  memset(a, '0', ndigits);        /* pre-stuff the buffer with leading 0s */
  a[ndigits] = '\000';            /* null-terminate at the requested length */
  while ( --n >= 0 ) {            /* fill in digits right-to-left */
    a[n] = digits[i%base];
    if ( (i/=base) < 1 ) break; } /* stop when i is exhausted; extra high digits stay '0' */
  return ( a );                   /* note: static buffer, so calls are not reentrant */
  } /* --- end-of-function itoa() --- */
I am migrating to Cython a set of functions currently implemented in C++ through scipy.weave (now deprecated).
These functions operate on time series points that are 2D lists (e.g. [[17100, 19.2], [17101, 20.7], [17102, 20.3], ...]) both in input and in output. A sample function is subtract, which accepts two time series and calculates a new time series as the date-by-date subtraction of the two inputs.
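To make the data layout concrete, here is a minimal pure-Python sketch of such a function (the body is illustrative only, not the production implementation):

def subtract(ts_a, ts_b):
    # Match dates between the two series and subtract the values date-by-date.
    lookup = dict((date, value) for date, value in ts_b)
    return [[date, value - lookup[date]]
            for date, value in ts_a if date in lookup]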
The structure and the interfaces have to be maintained for backward compatibility, but my profiling trials show that the Cython port is about 30%-40% slower than the original scipy.weave implementation.
I have tried many ways to optimize (inner conversions to NumPy arrays and memoryviews, C pointers, ...), but the conversion time required lengthens the overall execution time. Even defining input and output as C++ vectors, leveraging Cython's implicit conversions, doesn't seem to be effective in matching scipy.weave's speed. I have also used the various hints on boundscheck, wraparound, division, ...
The biggest slowdowns seem to be in functions that use nested loops, and I've seen that a small gain can be had by predefining the list size (cdef list target = [[-1, float('nan')]]*size).
I am aware that Cython can't do much for Python structures, especially lists, but are there any other tricks or techniques with which a speedup can be obtained?
=== EDIT - ADD CODE EXAMPLE ===
A good example of this type of function is the following.
The function takes a 2D list of dates/prices and a 2D list of dates/decimal factors, searches for matching dates between the two lists, and calculates the output from the corresponding price/factor pair by multiplying or dividing (which one is selected by a third input parameter).
My best-performing Cython code:
cimport cython

@cython.cdivision(True)
@cython.boundscheck(False)
@cython.wraparound(False)
cpdef apply_conversion(list original_timeserie, list factor_timeserie, int divide_or_multiply=False):
cdef:
Py_ssize_t i, j = 0, size = len(original_timeserie), size2 = len(factor_timeserie)
long original_date, factor_date
double original_price, factor_price, conv_price
list result = []
for i in range(size):
original_date = original_timeserie[i][0]
for j in range(j, size2):
factor_date = factor_timeserie[j][0]
if original_date == factor_date:
original_price = original_timeserie[i][1]
factor_price = factor_timeserie[j][1]
if divide_or_multiply:
if factor_price != 0:
conv_price = original_price / factor_price
else:
conv_price = float('inf')
else:
conv_price = original_price * factor_price
result.append([original_date, conv_price])
break
return result
The original scipy.weave inline C++ code:
int len = original_timeserie.length();
int len2 = factor_timeserie.length();
PyObject* py_serieconv = PyList_New(len);
PyObject* original_item = NULL;
PyObject* factor_item = NULL;
PyObject* date = NULL;
PyObject* value = NULL;
long original_date = 0;
long factor_date = 0;
double original_price = 0;
double factor_price = 0;
int j = 0;
for(int i=0;i<len;i++) {
original_item = PyList_GetItem(original_timeserie, i);
date = PyList_GetItem(original_item, 0);
original_date = PyInt_AsLong(date);
original_price = PyFloat_AsDouble( PyList_GetItem(original_item, 1) );
factor_item = NULL;
for(;j<len2;) {
factor_item = PyList_GetItem(factor_timeserie, j++);
factor_date = PyInt_AsLong(PyList_GetItem(factor_item, 0));
if (factor_date == original_date) {
factor_price = PyFloat_AsDouble(PyList_GetItem(factor_item, 1));
value = PyFloat_FromDouble(original_price * (divide_or_multiply==0 ? factor_price : 1/factor_price));
PyObject* py_new_item = PyList_New(2);
Py_XINCREF(date);
PyList_SetItem(py_new_item, 0, date);
PyList_SetItem(py_new_item, 1, value);
PyList_SetItem(py_serieconv, i, py_new_item);
break;
}
}
}
return_val = py_serieconv;
Py_XDECREF(py_serieconv);
I'm implementing a simple Xor Reducer, but it is unable to return the appropriate value.
Python Code (Input):
class LazySpecializedFunctionSubclass(LazySpecializedFunction):
subconfig_type = namedtuple('subconfig',['dtype','ndim','shape','size','flags'])
def __init__(self, py_ast = None):
py_ast = py_ast or get_ast(self.kernel)
        super(LazySpecializedFunctionSubclass, self).__init__(py_ast)
# [... other code ...]
def points(self, inpt):
        iter = np.nditer(inpt, flags=['c_index'])
while not iter.finished:
yield iter.index
iter.iternext()
class XorReduction(LazySpecializedFunctionSubclass):
def kernel(self, inpt):
'''
Calculates the cumulative XOR of elements in inpt, equivalent to
Reduce with XOR
'''
result = 0
for point in self.points(inpt): # self.points is defined in LazySpecializedFunctionSubclass
result = point ^ result # notice how 'point' here is the actual element in self.points(inpt), not the index
return result
C Code (Output):
// <file: module.c>
void kernel(long* inpt, long* output) {
long result = 0;
for (int point = 0; point < 2; point ++) {
result = point ^ result; // Notice how it's using the index, point, not inpt[point].
};
* output = result;
};
Any ideas how to fix this?
The problem is that you are using point in different ways: in the XorReduction kernel method you are iterating over the values in the array, but in the generated C code you are iterating over the indices of the array. Your C XOR reduction is therefore done on the indices.
The generated C function should look more like
// <file: module.c>
void kernel(long* inpt, long* output) {
long result = 0;
for (int point = 0; point < 2; point ++) {
result = inpt[point] ^ result; // you did not reference your input in the question
};
* output = result;
};
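As a sanity check, the intended semantics, written in plain Python with functools.reduce (independent of the code-generation machinery), is just:

import numpy as np
from functools import reduce
from operator import xor

inpt = np.array([5, 3, 9], dtype=np.int64)
print(reduce(xor, inpt.tolist(), 0))  # 15, i.e. 5 ^ 3 ^ 9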
It's pretty easy to write a function that computes the maximum drawdown of a time series. It takes a small bit of thinking to write it in O(n) time instead of O(n^2) time. But it's not that bad. This will work:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
def max_dd(ser):
max2here = pd.expanding_max(ser)
dd2here = ser - max2here
return dd2here.min()
Let's set up a brief series to play with:
np.random.seed(0)
n = 100
s = pd.Series(np.random.randn(n).cumsum())
s.plot()
plt.show()
As expected, max_dd(s) winds up showing something right around -17.6. Good, great, grand. Now say I'm interested in computing the rolling drawdown of this Series. I.e. for each step, I want to compute the maximum drawdown from the preceding sub series of a specified length. This is easy to do using pd.rolling_apply. It works like so:
rolling_dd = pd.rolling_apply(s, 10, max_dd, min_periods=0)
df = pd.concat([s, rolling_dd], axis=1)
df.columns = ['s', 'rol_dd_10']
df.plot()
This works perfectly. But it feels very slow. Is there a particularly slick algorithm in pandas or another toolkit to do this fast? I took a shot at writing something bespoke: it keeps track of all sorts of intermediate data (locations of observed maxima, locations of previously found drawdowns) to cut down on lots of redundant calculations. It does save some time, but not a whole lot, and not nearly as much as should be possible.
I think it's because of all the looping overhead in Python/Numpy/Pandas. But I'm not currently fluent enough in Cython to really know how to begin attacking this from that angle. I was hoping someone had tried this before. Or, perhaps, that someone might want to have a look at my "handmade" code and be willing to help me convert it to Cython.
Edit:
For anyone who wants a review of all the functions mentioned here (and some others!) have a look at the iPython notebook at: http://nbviewer.ipython.org/gist/8one6/8506455
It shows how some of the approaches to this problem relate, checks that they give the same results, and shows their runtimes on data of various sizes.
If anyone is interested, the "bespoke" algorithm I alluded to in my post is rolling_dd_custom. I think that could be a very fast solution if implemented in Cython.
Here's a numpy version of the rolling maximum drawdown function. windowed_view wraps a one-line use of numpy.lib.stride_tricks.as_strided to make a memory-efficient 2d windowed view of the 1d array (full code below). Once we have this windowed view, the calculation is basically the same as your max_dd, but written for a numpy array and applied along the second axis (i.e. axis=1).
def rolling_max_dd(x, window_size, min_periods=1):
"""Compute the rolling maximum drawdown of `x`.
`x` must be a 1d numpy array.
`min_periods` should satisfy `1 <= min_periods <= window_size`.
    Returns a 1d array with length `len(x) - min_periods + 1`.
"""
if min_periods < window_size:
pad = np.empty(window_size - min_periods)
pad.fill(x[0])
x = np.concatenate((pad, x))
y = windowed_view(x, window_size)
running_max_y = np.maximum.accumulate(y, axis=1)
dd = y - running_max_y
return dd.min(axis=1)
Here's a complete script that demonstrates the function:
import numpy as np
from numpy.lib.stride_tricks import as_strided
import pandas as pd
import matplotlib.pyplot as plt
def windowed_view(x, window_size):
"""Creat a 2d windowed view of a 1d array.
`x` must be a 1d numpy array.
`numpy.lib.stride_tricks.as_strided` is used to create the view.
The data is not copied.
Example:
>>> x = np.array([1, 2, 3, 4, 5, 6])
>>> windowed_view(x, 3)
array([[1, 2, 3],
[2, 3, 4],
[3, 4, 5],
[4, 5, 6]])
"""
y = as_strided(x, shape=(x.size - window_size + 1, window_size),
strides=(x.strides[0], x.strides[0]))
return y
def rolling_max_dd(x, window_size, min_periods=1):
"""Compute the rolling maximum drawdown of `x`.
`x` must be a 1d numpy array.
`min_periods` should satisfy `1 <= min_periods <= window_size`.
    Returns a 1d array with length `len(x) - min_periods + 1`.
"""
if min_periods < window_size:
pad = np.empty(window_size - min_periods)
pad.fill(x[0])
x = np.concatenate((pad, x))
y = windowed_view(x, window_size)
running_max_y = np.maximum.accumulate(y, axis=1)
dd = y - running_max_y
return dd.min(axis=1)
def max_dd(ser):
max2here = pd.expanding_max(ser)
dd2here = ser - max2here
return dd2here.min()
if __name__ == "__main__":
np.random.seed(0)
n = 100
s = pd.Series(np.random.randn(n).cumsum())
window_length = 10
rolling_dd = pd.rolling_apply(s, window_length, max_dd, min_periods=0)
df = pd.concat([s, rolling_dd], axis=1)
df.columns = ['s', 'rol_dd_%d' % window_length]
df.plot(linewidth=3, alpha=0.4)
my_rmdd = rolling_max_dd(s.values, window_length, min_periods=1)
plt.plot(my_rmdd, 'g.')
plt.show()
The plot shows the curves generated by your code. The green dots are computed by rolling_max_dd.
Timing comparison, with n = 10000 and window_length = 500:
In [2]: %timeit rolling_dd = pd.rolling_apply(s, window_length, max_dd, min_periods=0)
1 loops, best of 3: 247 ms per loop
In [3]: %timeit my_rmdd = rolling_max_dd(s.values, window_length, min_periods=1)
10 loops, best of 3: 38.2 ms per loop
rolling_max_dd is about 6.5 times faster. The speedup is better for smaller window lengths. For example, with window_length = 200, it is almost 13 times faster.
To handle NA's, you could preprocess the Series using the fillna method before passing the array to rolling_max_dd.
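For example, assuming a forward fill suits your data (any other NA policy works the same way):

s_clean = s.fillna(method='ffill')  # replace NaNs before computing the drawdown
rmdd = rolling_max_dd(s_clean.values, window_length, min_periods=1)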
For the sake of posterity and completeness, here's what I wound up with in Cython. Memoryviews materially sped things up. There was a bit of work to do to make sure I'd properly typed everything (sorry, I'm new to C-type languages), but in the end I think it works nicely. For typical use cases, the speedup vs. regular Python was ~100x or ~150x. The function to call is cy_rolling_dd_custom_mv, where the first argument (ser) should be a 1-d numpy array and the second argument (window) should be a positive integer. The function returns a numpy memoryview, which works well enough in most cases; you can explicitly call np.array(result) if you need a proper array of the output:
import numpy as np
cimport numpy as np
cimport cython
DTYPE = np.float64
ctypedef np.float64_t DTYPE_t
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
cpdef tuple cy_dd_custom_mv(double[:] ser):
cdef double running_global_peak = ser[0]
cdef double min_since_global_peak = ser[0]
cdef double running_max_dd = 0
cdef long running_global_peak_id = 0
cdef long running_max_dd_peak_id = 0
cdef long running_max_dd_trough_id = 0
cdef long i
cdef double val
for i in xrange(ser.shape[0]):
val = ser[i]
if val >= running_global_peak:
running_global_peak = val
running_global_peak_id = i
min_since_global_peak = val
if val < min_since_global_peak:
min_since_global_peak = val
if val - running_global_peak <= running_max_dd:
running_max_dd = val - running_global_peak
running_max_dd_peak_id = running_global_peak_id
running_max_dd_trough_id = i
return (running_max_dd, running_max_dd_peak_id, running_max_dd_trough_id, running_global_peak_id)
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
def cy_rolling_dd_custom_mv(double[:] ser, long window):
cdef double[:, :] result
result = np.zeros((ser.shape[0], 4))
cdef double running_global_peak = ser[0]
cdef double min_since_global_peak = ser[0]
cdef double running_max_dd = 0
cdef long running_global_peak_id = 0
cdef long running_max_dd_peak_id = 0
cdef long running_max_dd_trough_id = 0
cdef long i
cdef double val
cdef int prob_1
cdef int prob_2
cdef tuple intermed
cdef long newthing
for i in xrange(ser.shape[0]):
val = ser[i]
if i < window:
if val >= running_global_peak:
running_global_peak = val
running_global_peak_id = i
min_since_global_peak = val
if val < min_since_global_peak:
min_since_global_peak = val
if val - running_global_peak <= running_max_dd:
running_max_dd = val - running_global_peak
running_max_dd_peak_id = running_global_peak_id
running_max_dd_trough_id = i
result[i, 0] = <double>running_max_dd
result[i, 1] = <double>running_max_dd_peak_id
result[i, 2] = <double>running_max_dd_trough_id
result[i, 3] = <double>running_global_peak_id
else:
prob_1 = 1 if result[i-1, 3] <= float(i - window) else 0
prob_2 = 1 if result[i-1, 1] <= float(i - window) else 0
if prob_1 or prob_2:
intermed = cy_dd_custom_mv(ser[i-window+1:i+1])
result[i, 0] = <double>intermed[0]
result[i, 1] = <double>(intermed[1] + i - window + 1)
result[i, 2] = <double>(intermed[2] + i - window + 1)
result[i, 3] = <double>(intermed[3] + i - window + 1)
else:
newthing = <long>(int(result[i-1, 3]))
result[i, 3] = i if ser[i] >= ser[newthing] else result[i-1, 3]
if val - ser[newthing] <= result[i-1, 0]:
result[i, 0] = <double>(val - ser[newthing])
result[i, 1] = <double>result[i-1, 3]
result[i, 2] = <double>i
else:
result[i, 0] = <double>result[i-1, 0]
result[i, 1] = <double>result[i-1, 1]
result[i, 2] = <double>result[i-1, 2]
cdef double[:] finalresult = result[:, 0]
return finalresult
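For anyone who wants to try this quickly, one way to compile and call it is via pyximport, assuming the code above is saved as rolling_dd.pyx (the file name is arbitrary):

import numpy as np
import pyximport
pyximport.install(setup_args={"include_dirs": np.get_include()})

from rolling_dd import cy_rolling_dd_custom_mv  # module name follows the .pyx file name

ser = np.random.randn(10000).cumsum()
result = np.array(cy_rolling_dd_custom_mv(ser, 500))  # materialize the memoryview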
Here is a Numba-accelerated solution:
import pandas as pd
import numpy as np
import numba
from time import time
n = 10000
returns = pd.Series(np.random.normal(1.001, 0.01, n), pd.date_range("2020-01-01", periods=n, freq="1min"))
@numba.njit
def max_drawdown(cum_returns):
max_drawdown = 0.0
current_max_ret = cum_returns[0]
for ret in cum_returns:
if ret > current_max_ret:
current_max_ret = ret
max_drawdown = max(max_drawdown, 1 - ret / current_max_ret)
return max_drawdown
t = time()
rolling_1h_max_dd = returns.cumprod().rolling("1h").apply(max_drawdown, raw=True)
print("Fast:", time() - t);
def max_drawdown_slow(x):
return (1 - x / x.cummax()).max()
t = time()
rolling_1h_max_dd_slow = returns.cumprod().rolling("1h").apply(max_drawdown_slow, raw=False)
print("Slow:", time() - t);
assert rolling_1h_max_dd.equals(rolling_1h_max_dd_slow)
Output:
Fast: 0.05633878707885742
Slow: 4.540301084518433
=> 80x speedup
A simple one-liner:
df['rol_dd_10'] = df['s'].rolling(10).apply(lambda s: ((s - s.cummax()) / s.cummax()).min())
This gives you a rolling window of maximum drawdown, expressed as a fraction of the running peak (multiply by 100 for percent).
If you want the absolute drawdown rather than the relative one:
df['rol_dd_10'] = df['s'].rolling(10).apply(lambda s: (s - s.cummax()).min())
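If you also want values before a full window has accumulated, rolling accepts a min_periods argument, for example:

df['rol_dd_10'] = df['s'].rolling(10, min_periods=1).apply(lambda s: (s - s.cummax()).min())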
# BEGIN: TRADEWAVE MOVING AVERAGE CROSSOVER EXAMPLE
THRESHOLD = 0.005
INTERVAL = 43200
SHORT = 10
LONG = 90
def initialize():
storage.invested = storage.get('invested', False)
def tick():
short_term = data(interval=INTERVAL).btc_usd.ma(SHORT)
long_term = data(interval=INTERVAL).btc_usd.ma(LONG)
diff = 100 * (short_term - long_term) / ((short_term + long_term) / 2)
if diff >= THRESHOLD and not storage.invested:
buy(pairs.btc_usd)
storage.invested = True
elif diff <= -THRESHOLD and storage.invested:
sell(pairs.btc_usd)
storage.invested = False
plot('short_term', short_term)
plot('long_term', long_term)
# END: TRADEWAVE MOVING AVERAGE CROSSOVER EXAMPLE
##############################################################
##############################################################
# BEGIN MAX DRAW DOWN by litepresence
# vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
dd()
ROLLING = 30 # days
def dd():
dd, storage.max_dd = max_dd(0)
bnh_dd, storage.max_bnh_dd = bnh_max_dd(0)
rolling_dd, storage.max_rolling_dd = max_dd(
ROLLING*86400/info.interval)
rolling_bnh_dd, storage.max_rolling_bnh_dd = bnh_max_dd(
ROLLING*86400/info.interval)
plot('dd', dd, secondary=True)
plot('bnh_dd', bnh_dd, secondary=True)
plot('rolling_dd', rolling_dd, secondary=True)
plot('rolling_bnh_dd', rolling_bnh_dd, secondary=True)
plot('zero', 0, secondary=True)
if info.tick == 0:
plot('dd_floor', -200, secondary=True)
def max_dd(rolling):
port_value = float(portfolio.usd+portfolio.btc*data.btc_usd.price)
max_value = 'max_value_' + str(rolling)
values_since_max = 'values_since_max_' + str(rolling)
max_dd = 'max_dd_' + str(rolling)
storage[max_value] = storage.get(max_value, [port_value])
storage[values_since_max] = storage.get(values_since_max, [port_value])
storage[max_dd] = storage.get(max_dd, [0])
storage[max_value].append(port_value)
if port_value > max(storage[max_value]):
storage[values_since_max] = [port_value]
else:
storage[values_since_max].append(port_value)
storage[max_value] = storage[max_value][-rolling:]
storage[values_since_max] = storage[values_since_max][-rolling:]
dd = -100*(max(storage[max_value]) - storage[values_since_max][-1]
)/max(storage[max_value])
storage[max_dd].append(float(dd))
storage[max_dd] = storage[max_dd][-rolling:]
max_dd = min(storage[max_dd])
return (dd, max_dd)
def bnh_max_dd(rolling):
coin = data.btc_usd.price
bnh_max_value = 'bnh_max_value_' + str(rolling)
bnh_values_since_max = 'bnh_values_since_max_' + str(rolling)
bnh_max_dd = 'bnh_max_dd_' + str(rolling)
storage[bnh_max_value] = storage.get(bnh_max_value, [coin])
storage[bnh_values_since_max] = storage.get(bnh_values_since_max, [coin])
storage[bnh_max_dd] = storage.get(bnh_max_dd, [0])
storage[bnh_max_value].append(coin)
if coin > max(storage[bnh_max_value]):
storage[bnh_values_since_max] = [coin]
else:
storage[bnh_values_since_max].append(coin)
storage[bnh_max_value] = storage[bnh_max_value][-rolling:]
storage[bnh_values_since_max] = storage[bnh_values_since_max][-rolling:]
bnh_dd = -100*(max(storage[bnh_max_value]) - storage[bnh_values_since_max][-1]
)/max(storage[bnh_max_value])
storage[bnh_max_dd].append(float(bnh_dd))
storage[bnh_max_dd] = storage[bnh_max_dd][-rolling:]
bnh_max_dd = min(storage[bnh_max_dd])
return (bnh_dd, bnh_max_dd)
def stop():
log('MAX DD......: %.2f pct' % storage.max_dd)
log('R MAX DD....: %.2f pct' % storage.max_rolling_dd)
log('MAX BNH DD..: %.2f pct' % storage.max_bnh_dd)
log('R MAX BNH DD: %.2f pct' % storage.max_rolling_bnh_dd)
[2015-03-04 00:00:00] MAX DD......: -67.94 pct
[2015-03-04 00:00:00] R MAX DD....: -4.93 pct
[2015-03-04 00:00:00] MAX BNH DD..: -82.88 pct
[2015-03-04 00:00:00] R MAX BNH DD: -26.38 pct
Draw Down
Max Drawn Down
Buy and Hold Draw Down
Buy and Hold Max Draw Down
Rolling Draw Down
Rolling Max Drawn Down
Rolling Buy and Hold Draw Down
Rolling Buy and Hold Max Draw Down
No pandas, cython, or numpy dependencies. All calculations via simple lists.
Definitions are reusable for multiple rolling window sizes in the same script. You will have to edit the series input for your platform, as this is designed for Bitcoin trading at tradewave.net.
This is quite a complex problem if you want to solve it in a computationally efficient way for a rolling window, so I have gone ahead and written a solution in C#. I want to share it, as the effort required to replicate this work is quite high.
First, here are the results:
Here we take a simple drawdown implementation and re-calculate over the full window each time:
test1 - simple drawdown test with 30 period rolling window. run 100 times.
total seconds 0.8060461
test2 - simple drawdown test with 60 period rolling window. run 100 times.
total seconds 1.416081
test3 - simple drawdown test with 180 period rolling window. run 100 times.
total seconds 3.6602093
test4 - simple drawdown test with 360 period rolling window. run 100 times.
total seconds 6.696383
test5 - simple drawdown test with 500 period rolling window. run 100 times.
total seconds 8.9815137
Here we compare to the results generated by my efficient rolling-window algorithm, where only the latest observation is added and then it does its magic:
test6 - running drawdown test with 30 period rolling window. run 100 times.
total seconds 0.2940168
test7 - running drawdown test with 60 period rolling window. run 100 times.
total seconds 0.3050175
test8 - running drawdown test with 180 period rolling window. run 100 times.
total seconds 0.3780216
test9 - running drawdown test with 360 period rolling window. run 100 times.
total seconds 0.4560261
test10 - running drawdown test with 500 period rolling window. run 100 times.
total seconds 0.5050288
At a 500 period window, we are achieving about a 20:1 improvement in calculation time.
Here is the code of the simple drawdown class used for the comparisons:
public class SimpleDrawDown
{
public double Peak { get; set; }
public double Trough { get; set; }
public double MaxDrawDown { get; set; }
public SimpleDrawDown()
{
Peak = double.NegativeInfinity;
Trough = double.PositiveInfinity;
MaxDrawDown = 0;
}
public void Calculate(double newValue)
{
if (newValue > Peak)
{
Peak = newValue;
Trough = Peak;
}
else if (newValue < Trough)
{
Trough = newValue;
var tmpDrawDown = Peak - Trough;
if (tmpDrawDown > MaxDrawDown)
MaxDrawDown = tmpDrawDown;
}
}
}
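For readers following the earlier pandas/numpy discussion, the same streaming peak/trough idea transliterated to Python looks like this (a sketch for clarity, not the benchmarked code):

class SimpleDrawDownPy:
    """Streaming max drawdown: feed observations one at a time via calculate()."""
    def __init__(self):
        self.peak = float('-inf')
        self.trough = float('inf')
        self.max_drawdown = 0.0

    def calculate(self, new_value):
        if new_value > self.peak:      # new high-water mark resets the trough
            self.peak = new_value
            self.trough = new_value
        elif new_value < self.trough:  # new low since the peak: candidate drawdown
            self.trough = new_value
            self.max_drawdown = max(self.max_drawdown, self.peak - self.trough)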
And here is the code for the full efficient implementation. Hopefully the code comments make sense.
internal class DrawDown
{
int _n;
int _startIndex, _endIndex, _troughIndex;
public int Count { get; set; }
LinkedList<double> _values;
public double Peak { get; set; }
public double Trough { get; set; }
public bool SkipMoveBackDoubleCalc { get; set; }
public int PeakIndex
{
get
{
return _startIndex;
}
}
public int TroughIndex
{
get
{
return _troughIndex;
}
}
//peak to trough return
public double DrawDownAmount
{
get
{
return Peak - Trough;
}
}
/// <summary>
///
/// </summary>
/// <param name="n">max window for drawdown period</param>
/// <param name="peak">drawdown peak i.e. start value</param>
public DrawDown(int n, double peak)
{
_n = n - 1;
_startIndex = _n;
_endIndex = _n;
_troughIndex = _n;
Count = 1;
_values = new LinkedList<double>();
_values.AddLast(peak);
Peak = peak;
Trough = peak;
}
/// <summary>
/// adds a new observation on the drawdown curve
/// </summary>
/// <param name="newValue"></param>
public void Add(double newValue)
{
//push the start of this drawdown backwards
//_startIndex--;
//the end of the drawdown is the current period end
_endIndex = _n;
//the total periods increases with a new observation
Count++;
//track what all point values are in the drawdown curve
_values.AddLast(newValue);
//update if we have a new trough
if (newValue < Trough)
{
Trough = newValue;
_troughIndex = _endIndex;
}
}
/// <summary>
/// Shift this Drawdown backwards in the observation window
/// </summary>
/// <param name="trackingNewPeak">whether we are already tracking a new peak or not</param>
/// <returns>a new drawdown to track if a new peak becomes active</returns>
public DrawDown MoveBack(bool trackingNewPeak, bool recomputeWindow = true)
{
if (!SkipMoveBackDoubleCalc)
{
_startIndex--;
_endIndex--;
_troughIndex--;
if (recomputeWindow)
return RecomputeDrawdownToWindowSize(trackingNewPeak);
}
else
SkipMoveBackDoubleCalc = false;
return null;
}
private DrawDown RecomputeDrawdownToWindowSize(bool trackingNewPeak)
{
//the start of this drawdown has fallen out of the start of our observation window, so we have to recalculate the peak of the drawdown
if (_startIndex < 0)
{
Peak = double.NegativeInfinity;
_values.RemoveFirst();
Count--;
//there is the possibility now that there is a higher peak, within the current drawdown curve, than our first observation
//when we find it, remove all data points prior to this point
//the new peak must be before the current known trough point
int iObservation = 0, iNewPeak = 0, iNewTrough = _troughIndex, iTmpNewPeak = 0, iTempTrough = 0;
double newDrawDown = 0, tmpPeak = 0, tmpTrough = double.NegativeInfinity;
DrawDown newDrawDownObj = null;
foreach (var pointOnDrawDown in _values)
{
if (iObservation < _troughIndex)
{
if (pointOnDrawDown > Peak)
{
iNewPeak = iObservation;
Peak = pointOnDrawDown;
}
}
else if (iObservation == _troughIndex)
{
newDrawDown = Peak - Trough;
tmpPeak = Peak;
}
else
{
//now continue on through the remaining points, to determine if there is a nested-drawdown, that is now larger than the newDrawDown
//e.g. higher peak beyond _troughIndex, with higher trough than that at _troughIndex, but where new peak minus new trough is > newDrawDown
if (pointOnDrawDown > tmpPeak)
{
tmpPeak = pointOnDrawDown;
tmpTrough = tmpPeak;
iTmpNewPeak = iObservation;
//we need a new drawdown object, as we have a new higher peak
if (!trackingNewPeak)
newDrawDownObj = new DrawDown(_n + 1, tmpPeak);
}
else
{
if (!trackingNewPeak && newDrawDownObj != null)
{
newDrawDownObj.MoveBack(true, false); //recomputeWindow is irrelevant for this as it will never fall before period 0 in this usage scenario
newDrawDownObj.Add(pointOnDrawDown); //keep tracking this new drawdown peak
}
if (pointOnDrawDown < tmpTrough)
{
tmpTrough = pointOnDrawDown;
iTempTrough = iObservation;
var tmpDrawDown = tmpPeak - tmpTrough;
if (tmpDrawDown > newDrawDown)
{
newDrawDown = tmpDrawDown;
iNewPeak = iTmpNewPeak;
iNewTrough = iTempTrough;
Peak = tmpPeak;
Trough = tmpTrough;
}
}
}
}
iObservation++;
}
_startIndex = iNewPeak; //our drawdown now starts from here in our observation window
_troughIndex = iNewTrough;
for (int i = 0; i < _startIndex; i++)
{
_values.RemoveFirst(); //get rid of the data points prior to this new drawdown peak
Count--;
}
return newDrawDownObj;
}
return null;
}
}
public class RunningDrawDown
{
int _n;
List<DrawDown> _drawdownObjs;
DrawDown _currentDrawDown;
DrawDown _maxDrawDownObj;
/// <summary>
/// The Peak of the MaxDrawDown
/// </summary>
public double DrawDownPeak
{
get
{
if (_maxDrawDownObj == null) return double.NegativeInfinity;
return _maxDrawDownObj.Peak;
}
}
/// <summary>
/// The Trough of the Max DrawDown
/// </summary>
public double DrawDownTrough
{
get
{
if (_maxDrawDownObj == null) return double.PositiveInfinity;
return _maxDrawDownObj.Trough;
}
}
/// <summary>
/// The Size of the DrawDown - Peak to Trough
/// </summary>
public double DrawDown
{
get
{
if (_maxDrawDownObj == null) return 0;
return _maxDrawDownObj.DrawDownAmount;
}
}
/// <summary>
/// The Index into the Window that the Peak of the DrawDown is seen
/// </summary>
public int PeakIndex
{
get
{
if (_maxDrawDownObj == null) return 0;
return _maxDrawDownObj.PeakIndex;
}
}
/// <summary>
/// The Index into the Window that the Trough of the DrawDown is seen
/// </summary>
public int TroughIndex
{
get
{
if (_maxDrawDownObj == null) return 0;
return _maxDrawDownObj.TroughIndex;
}
}
/// <summary>
/// Creates a running window for the calculation of MaxDrawDown within the window
/// </summary>
/// <param name="n">the number of periods within the window</param>
public RunningDrawDown(int n)
{
_n = n;
_currentDrawDown = null;
_drawdownObjs = new List<DrawDown>();
}
/// <summary>
/// The new value to add onto the end of the current window (the first value will drop off)
/// </summary>
/// <param name="newValue">the new point on the curve</param>
public void Calculate(double newValue)
{
if (double.IsNaN(newValue)) return;
if (_currentDrawDown == null)
{
var drawDown = new DrawDown(_n, newValue);
_currentDrawDown = drawDown;
_maxDrawDownObj = drawDown;
}
else
{
//shift current drawdown back one. and if the first observation falling outside the window means we encounter a new peak after the current trough, we start tracking a new drawdown
var drawDownFromNewPeak = _currentDrawDown.MoveBack(false);
            //this is a special case, where a new lower peak (now the highest) is created due to the drop-off of the pre-existing highest peak, and we are not yet tracking a new peak
if (drawDownFromNewPeak != null)
{
_drawdownObjs.Add(_currentDrawDown); //record this drawdown into our running drawdowns list)
_currentDrawDown.SkipMoveBackDoubleCalc = true; //MoveBack() is calculated again below in _drawdownObjs collection, so we make sure that is skipped this first time
_currentDrawDown = drawDownFromNewPeak;
_currentDrawDown.MoveBack(true);
}
if (newValue > _currentDrawDown.Peak)
{
//we need a new drawdown object, as we have a new higher peak
var drawDown = new DrawDown(_n, newValue);
//do we have an existing drawdown object, and does it have more than 1 observation
if (_currentDrawDown.Count > 1)
{
_drawdownObjs.Add(_currentDrawDown); //record this drawdown into our running drawdowns list)
_currentDrawDown.SkipMoveBackDoubleCalc = true; //MoveBack() is calculated again below in _drawdownObjs collection, so we make sure that is skipped this first time
}
_currentDrawDown = drawDown;
}
else
{
//add the new observation to the current drawdown
_currentDrawDown.Add(newValue);
}
}
//does our new drawdown surpass any of the previous drawdowns?
        //if so, we can drop the old drawdowns, as for the remainder of their lives in our lookup window they will be smaller than the new one
var newDrawDown = _currentDrawDown.DrawDownAmount;
_maxDrawDownObj = _currentDrawDown;
var maxDrawDown = newDrawDown;
var keepDrawDownsList = new List<DrawDown>();
foreach (var drawDownObj in _drawdownObjs)
{
drawDownObj.MoveBack(true);
if (drawDownObj.DrawDownAmount > newDrawDown)
{
keepDrawDownsList.Add(drawDownObj);
}
//also calculate our max drawdown here
if (drawDownObj.DrawDownAmount > maxDrawDown)
{
maxDrawDown = drawDownObj.DrawDownAmount;
_maxDrawDownObj = drawDownObj;
}
}
_drawdownObjs = keepDrawDownsList;
}
}
Example usage:
RunningDrawDown rd = new RunningDrawDown(500);
foreach (var input in data)
{
rd.Calculate(input);
Console.WriteLine(string.Format("max draw {0:0.00000}, peak {1:0.00000}, trough {2:0.00000}, drawstart {3:0.00000}, drawend {4:0.00000}",
rd.DrawDown, rd.DrawDownPeak, rd.DrawDownTrough, rd.PeakIndex, rd.TroughIndex));
}