https://www.tradingview.com/pine-script-reference/v5/#fun_ta%7Bdot%7Dcross
ta.cross(source1, source2) → series bool
RETURNS
true if two series have crossed each other, otherwise false.
ARGUMENTS
source1 (series int/float) First data series.
source2 (series int/float) Second data series.
Pine Script:
cross_1 = cross(longband[1], RSIndex)
trend := cross(RSIndex, shortband[1]) ? 1 : cross_1 ? -1 : nz(trend[1], 1)
FastAtrRsiTL = trend == 1 ? longband : shortband
My attempt in Python:
cross_1 = cross(longband[1], RSIndex)
if cross(shortband[1], RSIndex):
    trend = 1
elif cross_1:
    trend = -1
else:
    trend = nz(trend[1], 1)  # nz: previous value, or 1 where undefined (as in Pine)
if trend == 1:
    FastAtrRsiTL = longband
else:
    FastAtrRsiTL = shortband
I need a cross function, but I don't know how to implement it.
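For reference, here is a minimal pandas sketch of such a cross function (my own, not TradingView's implementation): it flags the bars where the sign of the difference between the two series changes.
import pandas as pd

def cross(series1, series2):
    # True on bars where series1 and series2 have crossed in either
    # direction since the previous bar, i.e. their difference changed sign
    s1, s2 = pd.Series(series1), pd.Series(series2)
    above = s1 > s2
    return above.ne(above.shift(1)) & above.shift(1).notna()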
Related
I have a dataframe of elements with a start and end datetime. What is the best option to find overlaps between the date ranges? My naive approach right now consists of two nested loops cross-comparing the elements, which is obviously super slow. What would be a better way to achieve that?
counts = {}  # (index, start) -> number of overlapping rows
start = "start_time"
end = "end_time"
for index1, rowLoop1 in df[[start, end]].head(500).iterrows():
    counts[(index1, rowLoop1[start])] = 0
    for index2, rowLoop2 in df[[start, end]].head(500).iterrows():
        if index1 != index2:
            if date_intersection(rowLoop1[start], rowLoop1[end], rowLoop2[start], rowLoop2[end]):
                counts[(index1, rowLoop1[start])] += 1
Code for date_intersection:
def date_intersection(t1start, t1end, t2start, t2end):
    if t1start <= t2start <= t2end <= t1end:
        return True
    elif t1start <= t2start <= t1end:
        return True
    elif t1start <= t2end <= t1end:
        return True
    elif t2start <= t1start <= t1end <= t2end:
        return True
    else:
        return False
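As an aside, for well-formed ranges (start <= end) those four cases collapse to the standard two-comparison overlap test:
def date_intersection(t1start, t1end, t2start, t2end):
    # two ranges overlap iff each one starts before the other ends
    return t1start <= t2end and t2start <= t1end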
Sample data:
id,start_date,end_date
41234132,2021-01-10 10:00:05,2021-01-10 10:30:27
64564512,2021-01-10 10:10:00,2021-01-11 10:28:00
21135765,2021-01-12 12:30:00,2021-01-12 12:38:00
87643252,2021-01-12 12:17:00,2021-01-12 12:42:00
87641234,2021-01-12 12:58:00,2021-01-12 13:17:00
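For anyone reproducing this, the sample loads with parsed dates like so (io.StringIO stands in for the real file):
import io
import pandas as pd

csv = """id,start_date,end_date
41234132,2021-01-10 10:00:05,2021-01-10 10:30:27
64564512,2021-01-10 10:10:00,2021-01-11 10:28:00
21135765,2021-01-12 12:30:00,2021-01-12 12:38:00
87643252,2021-01-12 12:17:00,2021-01-12 12:42:00
87641234,2021-01-12 12:58:00,2021-01-12 13:17:00"""
df = pd.read_csv(io.StringIO(csv), parse_dates=['start_date', 'end_date'])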
You can merge the dataframe with itself to get the Cartesian product and then compare the columns (note that how='cross' requires pandas 1.2 or newer).
df = df.merge(df, how='cross', suffixes=('','_2'))
df['date_intersection'] = (((df['start_date'].le(df['start_date_2']) & df['start_date_2'].le(df['end_date'])) | # start 2 within start/end
(df['start_date'].le(df['end_date_2']) & df['end_date_2'].le(df['end_date'])) | # end 2 within start/end
(df['start_date_2'].le(df['start_date']) & df['start_date'].le(df['end_date_2'])) | # start within start 2/end 2
(df['start_date_2'].le(df['end_date']) & df['end_date'].le(df['end_date_2']))) & # end within start 2/end 2
df['id'].ne(df['id_2'])) # id not compared to itself
and then, to return each id and whether it has any date intersection...
df.groupby('id')['date_intersection'].any()
id
21135765 True
41234132 True
64564512 True
87641234 False
87643252 True
or, if you need the ids that each id intersected with...
df.loc[df['date_intersection'], :].groupby(['id'])['id_2'].agg(list).to_frame('intersected_ids')
intersected_ids
id
21135765 [87643252]
41234132 [64564512]
64564512 [41234132]
87643252 [21135765]
I have a small time series:
ser = pd.Series([2,3,4,5,6,0,8,7,1,3,4,0,6,4,0,2,4,0,4,5,0,1,7,0,1,8,5,3,6])
Let's say we choose a threshold of 5 to enter the market and zero to exit.
I am trying to write a program which will generate an output like this:
So far I have used numba, but I am still working on the logic. Can you please help?
@numba.vectorize
def check_signal(x, t):
    if x >= t:
        y = 2
    if x < t:
        y = 1
    if x == 0:
        y = -1
    else:
        y = y
    return y
Why would you use numba unless you had tens of millions of these samples?
states = ["Entered market", "inside market", "market exit", "outside market"]
state = 2
with open('seriesdata.csv', 'w') as fout:
    print("Time,Percent_change,Signal,Timestamp", file=fout)
    for t, pct in enumerate(ser):
        stamp = ''
        if state == 1 and pct == 0:
            state = 2
            stamp = str(t + 1)
        elif state == 3 and pct >= 5:
            state = 0
            stamp = str(t + 1)
        elif state in (0, 2):
            state += 1
        print(','.join((str(t), str(pct), states[state], stamp)), file=fout)
If you'd rather make a dataframe, just accumulate those values in a list and convert after.
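For instance, reusing ser and the states list from above (df_signals is a made-up name):
import pandas as pd

rows = []
state = 2
for t, pct in enumerate(ser):
    stamp = ''
    if state == 1 and pct == 0:
        state, stamp = 2, str(t + 1)
    elif state == 3 and pct >= 5:
        state, stamp = 0, str(t + 1)
    elif state in (0, 2):
        state += 1
    rows.append((t, pct, states[state], stamp))

df_signals = pd.DataFrame(rows, columns=['Time', 'Percent_change', 'Signal', 'Timestamp'])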
I'm using df.iloc[i] to assign (ori + gap) to each row of the dataframe, but I get a 'No axis named 1 for object type Series' error and I don't understand why.
df1 = pd.read_csv('异常销量监控_0127_to Xiaolei Eagle send.csv', low_memory=False)
df2 = pd.read_csv('test0412.csv', dtype={'Actual': float})
gap = 0
for i in range(len(df2)):
    ym = df2['YM'].iloc[i]
    kcode = df2['REPKCode'].iloc[i]
    fn = df2['FamilyNameE'].iloc[i]
    ori = float(df2['Actual'].iloc[i])
    filt = (df1['YM'] == ym) & (df1['REPKCode'] == kcode) & (df1['FamilyNameE'] == fn)
    gap = df1[filt]['Actual']
    df2['Actual'].iloc[i] = ori + gap
df2.to_csv('after.csv', index=False)
The issue is in the following lines
filt = (df1['YM'] == ym) & (df1['REPKCode'] == kcode) & (df1['FamilyNameE'] == fn)
gap = df1[filt]['Actual']
filt is a boolean Series, so the masking itself is fine, but df1[filt]['Actual'] returns a Series (it can hold zero, one, or several matching rows), not a single number.
Now in the following line
df2['Actual'].iloc[i] = (ori + gap)
you are adding a float to that Series, which again gives a Series, and then assigning a whole Series into one cell. That is what raises the 'No axis named 1 for object type Series' error.
EDIT
Reply to your comment: How can I get the 'Actual' value where 'YM', 'REPKCode' and 'FamilyNameE' match in df1?
Keep the filt mask, but reduce the result to a scalar before adding:
gap = df1.loc[filt, 'Actual'].iloc[0]  # or .sum() if several rows can match
and then df2['Actual'].iloc[i] = ori + gap assigns a plain number.
I think the problem here is the chained indexing in
df2['Actual'].iloc[i] = (ori + gap)
because df2['Actual'] first returns the column (possibly a copy), and .iloc[i] then assigns into that intermediate object.
Prefer a single indexer for the assignment, e.g. df2.loc[i, 'Actual'] = ori + gap.
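A loop-free alternative is to merge the matching df1 values onto df2 and add them in one step. A sketch, assuming 'YM', 'REPKCode' and 'FamilyNameE' together identify at most one row in df1 (merged is a made-up name):
import pandas as pd

keys = ['YM', 'REPKCode', 'FamilyNameE']
# bring the matching df1 value alongside each df2 row
merged = df2.merge(df1[keys + ['Actual']].rename(columns={'Actual': 'gap'}),
                   on=keys, how='left')
merged['Actual'] = merged['Actual'] + merged['gap'].fillna(0)
merged.drop(columns='gap').to_csv('after.csv', index=False)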
Objective: Output buy/sell/neutral/error indicators into a single df column while filtering out "False" values. The indicators are based on the dataframe column below and formulated with boolean statements:
df['sma_10'] = pd.DataFrame(ta.SMA(df['close'], timeperiod=10), dtype=np.float, columns=['close'])
df['buy'] = pd.DataFrame(df['close'] > df['sma_10'], columns=['buy'])
df['buy'] = df['buy'].replace({True: 'BUY'})
df['sell'] = pd.DataFrame(df['close'] < df['sma_10'], columns=['sell'])
df['sell'] = df['sell'].replace({True: 'SELL'})
df['neutral'] = pd.DataFrame(df['close'] == df['sma_10'], columns=['neutral'])
df['neutral'] = df['neutral'].replace({True: 'NEUTRAL'})
df['error'] = pd.DataFrame((df['buy'] == False) & (df['sell'] == False) & (df['neutral'] == False), columns=['Error'])
df['error'] = df['error'].replace({True: 'ERROR'})
Current output of df
buy sell neutral error
False False False ERROR
BUY False False False
False SELL False False
False False NEUTRAL False
Desired output of df
Indicator
ERROR
BUY
SELL
NEUTRAL
Attempts & Methods:
1st Method: Merging all the buy/sell/neutral/error columns and attempting to drop "False" values. Dataframe only iterates once before erroring out.
df['sma_10_indic']=[df['buy'].astype(str)+df['sell'].astype(str)+df['neutral'].astype(str)+df['error'].astype(str)].drop("False")
2nd Method: A subroutine of if & elif's, which also errors out before the first index:
df['buy'] = pd.DataFrame(df['close'] > df['sma_10'])
df['sell'] = pd.DataFrame(df['close'] < df['sma_10'])
df['neutral'] = pd.DataFrame(df['close'] == df['sma_10'])
error = ((buy == False) and (sell == False) and (neutral == False))
if (df['buy'] == "True"):
    df['sma_10_indic'] = pd.DataFrame("BUY", columns=['indicator'])
elif (df['sell'] == "True"):
    df['sma_10_indic'] = pd.DataFrame("SELL", columns=['indicator'])
elif (df['neutral'] == "True"):
    df['sma_10_indic'] = pd.DataFrame("NEUTRAL", columns=['indicator'])
elif (error == True):
    df['sma_10_indic'] = pd.DataFrame("ERROR", columns=['indicator'])
I am unsure of the path ahead; I have been beating my head against the wall for about 14 hours on this one. I have also tried creating a separate dataframe and merging them via concat, with no luck due to the booleans. I am relatively new to Python and pandas dataframes, so please be patient with me. Thank you in advance!
Use numpy.select:
m1 = df['close'] > df['sma_10']
m2 = df['close'] < df['sma_10']
m3 = df['close'] == df['sma_10']
df['Indicator'] = np.select([m1, m2, m3], ['BUY','SELL','NEUTRAL'], 'ERROR')
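The masks are evaluated in order and the first match wins; 'ERROR' is the default for rows where all three are False. A quick illustration with a NaN row, as produced by the SMA warm-up window:
import numpy as np
import pandas as pd

df = pd.DataFrame({'close': [10.0, 9.0, 10.0, np.nan],
                   'sma_10': [9.5, 9.5, 10.0, 9.0]})
m1 = df['close'] > df['sma_10']
m2 = df['close'] < df['sma_10']
m3 = df['close'] == df['sma_10']
df['Indicator'] = np.select([m1, m2, m3], ['BUY', 'SELL', 'NEUTRAL'], 'ERROR')
# NaN fails all three comparisons, so the last row falls through to 'ERROR':
# Indicator == ['BUY', 'SELL', 'NEUTRAL', 'ERROR']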
I'm trading daily on Cryptocurrencies and would like to find which are the most desirable Cryptos for trading.
I have CSV file for every Crypto with the following fields:
Date Sell Buy
43051.23918 1925.16 1929.83
43051.23919 1925.12 1929.79
43051.23922 1925.12 1929.79
43051.23924 1926.16 1930.83
43051.23925 1926.12 1930.79
43051.23926 1926.12 1930.79
43051.23927 1950.96 1987.56
43051.23928 1190.90 1911.56
43051.23929 1926.12 1930.79
I would like to check:
How many quotes will end with profit:
for Buy positions - if one of the following Sells > current Buy.
for Sell positions - if one of the following Buys < current Sell.
How much time it would take for a theoretical position to become profitable.
What the potential profit could be.
I'm using the following code:
# converting from OLE to datetime
OLE_TIME_ZERO = dt.datetime(1899, 12, 30, 0, 0, 0)

def ole(oledt):
    return OLE_TIME_ZERO + dt.timedelta(days=float(oledt))

# variables initialization
buy_time = ole(43031.57567) - ole(43031.57567)
sell_time = ole(43031.57567) - ole(43031.57567)
profit_buy_counter = 0
no_profit_buy_counter = 0
profit_sell_counter = 0
no_profit_sell_counter = 0
max_profit_buy_positions = 0
max_profit_buy_counter = 0
max_profit_sell_positions = 0
max_profit_sell_counter = 0

df = pd.read_csv("C:/P/Crypto/bitcoin_test_normal_276k.csv")

# comparing to max
for index, row in df.iterrows():
    a = index + 1
    df_slice = df[a:]
    if df_slice["Sell"].max() - row["Buy"] > 0:
        max_profit_buy_positions += df_slice["Sell"].max() - row["Buy"]
        max_profit_buy_counter += 1
    for index1, row1 in df_slice.iterrows():
        if row["Buy"] < row1["Sell"]:
            buy_time += ole(row1["Date"]) - ole(row["Date"])
            profit_buy_counter += 1
            break
    else:
        no_profit_buy_counter += 1

# comparing to sell
for index, row in df.iterrows():
    a = index + 1
    df_slice = df[a:]
    if row["Sell"] - df_slice["Buy"].min() > 0:
        max_profit_sell_positions += row["Sell"] - df_slice["Buy"].min()
        max_profit_sell_counter += 1
    for index2, row2 in df_slice.iterrows():
        if row["Sell"] > row2["Buy"]:
            sell_time += ole(row2["Date"]) - ole(row["Date"])
            profit_sell_counter += 1
            break
    else:
        no_profit_sell_counter += 1

num_rows = len(df.index)
buy_avg_time = buy_time / num_rows
sell_avg_time = sell_time / num_rows

if max_profit_buy_counter == 0:
    avg_max_profit_buy = "There are no profitable buy positions"
else:
    avg_max_profit_buy = max_profit_buy_positions / max_profit_buy_counter

if max_profit_sell_counter == 0:
    avg_max_profit_sell = "There are no profitable sell positions"
else:
    avg_max_profit_sell = max_profit_sell_positions / max_profit_sell_counter
The code works fine for 10K-20K lines, but for a larger amount (276K) it takes a long time (more than 10 hours).
What can I do in order to improve it?
Is there any "Pythonic" way to compare each value in a dataframe to all following values?
Note: the dates in the CSV are in OLE format, so I need to convert them to datetime.
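As a side note, the OLE conversion can be vectorized: pandas accepts a custom origin, so the whole column converts in one call (same 1899-12-30 epoch as the ole helper above; Date_dt is a made-up column name):
import pandas as pd

# OLE automation dates count days since 1899-12-30
df['Date_dt'] = pd.to_datetime(df['Date'], unit='D', origin='1899-12-30')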
First, I'd create the cumulative maximum/minimum of Sell and Buy per row, so each row is easy to compare against. pandas has cummax and cummin, but they accumulate in the wrong direction for this, so we reverse the series first:
df['Max Sell'] = df[::-1]['Sell'].cummax()[::-1]
df['Min Buy'] = df[::-1]['Buy'].cummin()[::-1]
Now, we can just compare each row:
df['Buy Profit'] = df['Max Sell'] - df['Buy']
df['Sell Profit'] = df['Sell'] - df['Min Buy']
I'm positive this isn't exactly what you want as I don't perfectly understand what you're trying to do, but hopefully it leads you in the right direction.
After comparing your function and mine, there is a slight difference: your a is offset by one from the index. Removing that offset, you'll see that my method produces the same results as yours, only in vastly less time:
for index, row in df.iterrows():
    a = index
    df_slice = df[a:]
    assert (df_slice["Sell"].max() - row["Buy"]) == df['Max Sell'][a] - df['Buy'][a]
else:
    print("All assertions passed!")
Note this check still takes as long as your original loops. The off-by-one itself can be fixed with shift, but I didn't want to run your function long enough to work out which way to shift it.
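For completeness, here is my guess at the shifted version that would match your a = index + 1 slice: shift the reversed cumulative extrema so each row is compared only against strictly later rows (the 'Later' column names are made up):
# running max/min over strictly later rows (last row becomes NaN)
df['Max Sell Later'] = df['Sell'][::-1].cummax()[::-1].shift(-1)
df['Min Buy Later'] = df['Buy'][::-1].cummin()[::-1].shift(-1)
df['Buy Profit'] = df['Max Sell Later'] - df['Buy']
df['Sell Profit'] = df['Sell'] - df['Min Buy Later']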