Here I have a program that polls a queue for an event and, when one is found, executes an order against a REST API. In addition, when an event is found it prints the current price, which I need to use as my stopLoss. This code runs exactly as I would like it to; however, the moment I try to call the function rates() inside __main__, the program just stops running.
If I remove the reference stopLoss = rates(), the program runs great, just without a stopLoss, but I need the rate - .001 as my stopLoss.
Code as follows:
    import Queue
    import threading
    import time
    import json
    import oandapy
    from execution import Execution
    from settings import STREAM_DOMAIN, API_DOMAIN, ACCESS_TOKEN, ACCOUNT_ID
    from strategy import TestRandomStrategy
    from streaming import StreamingForexPrices

    # Polls API for current price
    def stop():
        while True:
            oanda = oandapy.API(environment="practice", access_token="xxxxxx")
            response = oanda.get_prices(instruments="EUR_USD")
            prices = response.get("prices")
            asking_price = prices[0].get("ask")
            s = asking_price - .001
            return s

    # Checks for events and executes order
    def trade(events, strategy, execution):
        while True:
            try:
                event = events.get(False)
            except Queue.Empty:
                pass
            else:
                if event is not None:
                    if event.type == 'TICK':
                        strategy.calculate_signals(event)
                    elif event.type == 'ORDER':
                        print
                        execution.execute_order(event)

    def rates(events):
        while True:
            try:
                event = events.get(False)
            except Queue.Empty:
                pass
            else:
                if event.type == 'TICK':
                    r = stop()
                    print r

    if __name__ == "__main__":
        heartbeat = 0  # Half a second between polling
        events = Queue.Queue()
        # Trade 1 unit of EUR/USD
        instrument = "EUR_USD"
        units = 1
        stopLoss = rates()  # Problem area!
        prices = StreamingForexPrices(
            STREAM_DOMAIN, ACCESS_TOKEN, ACCOUNT_ID,
            instrument, events
        )
        execution = Execution(API_DOMAIN, ACCESS_TOKEN, ACCOUNT_ID)
        strategy = TestRandomStrategy(instrument, units, events, stopLoss)
        # Threads
        trade_thread = threading.Thread(target=trade, args=(events, strategy, execution))
        price_thread = threading.Thread(target=prices.stream_to_queue, args=[])
        stop_thread = threading.Thread(target=rates, args=(events,))
        # Start all three threads
        trade_thread.start()
        price_thread.start()
        stop_thread.start()
Okay no answers so far, so I'll try.
Your main problem seems to be that you don't know how to interchange data between threads.
First the problem with the price.
The loop here:
    while True:
        oanda = oandapy.API(environment="practice", access_token="xxxxxx")
        response = oanda.get_prices(instruments="EUR_USD")
        prices = response.get("prices")
        asking_price = prices[0].get("ask")
        s = asking_price - .001
        return s
has no effect, because return s will immediately break out of it.
So what you need is a shared variable where you store s. You can protect the access to it by using threading.Lock. The easiest way would be to subclass Thread and make s an instance attribute like this (I named it price):
    class PricePoller(threading.Thread):
        def __init__(self, interval):
            super(PricePoller, self).__init__()
            # private attribute, will be accessed as property via
            # threadsafe getter and setter
            self._price = None
            # lock guarding access to _price
            self._dataLock = threading.Lock()
            # polling interval
            self.interval = interval
            # set this thread as daemon, so it will be killed when
            # the main thread dies
            self.daemon = True
            # create an event that allows us to exit the mainloop
            # and terminate the thread safely
            self._stopEvent = threading.Event()

        def getPrice(self):
            # use _dataLock to get threadsafe access to self._price
            with self._dataLock:
                return self._price

        def setPrice(self, price):
            # use _dataLock to get threadsafe access to self._price
            with self._dataLock:
                self._price = price

        price = property(getPrice, setPrice, None)

        def run(self):
            while not self._stopEvent.isSet():
                oanda = oandapy.API(environment="practice", access_token="xxxxxx")
                response = oanda.get_prices(instruments="EUR_USD")
                prices = response.get("prices")
                asking_price = prices[0].get("ask")
                self.price = asking_price - .001
                time.sleep(self.interval)  # don't spam the server

        def stop(self):
            self._stopEvent.set()
It can then be started with:
poller = PricePoller(heartbeat)
poller.start()
And you can get the price with poller.price wherever you want! You can even pass the poller on to other threads if you like.
BUT! If you try to get the price immediately after poller.start() you will certainly get a None. Why this? poller.start() does not block, therefore while your main thread is going on and tries to get the first price, your poller has not even finished starting!
How to solve this? Introduce another threading.Event and use its function wait to let the main thread wait until the poller thread has set it. I leave the implementation up to you.
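For illustration, here is a minimal sketch of that idea. The network call is replaced by a `fetch` stub (an assumption of this sketch, not part of oandapy): the poller sets a firstPrice event after its first successful fetch, and the main thread waits on that event before reading poller.price.

```python
import threading
import time

class PricePoller(threading.Thread):
    """Sketch only: the oanda call is replaced by a fetch stub."""
    def __init__(self, interval, fetch):
        super(PricePoller, self).__init__()
        self.daemon = True                    # die together with the main thread
        self.interval = interval
        self.fetch = fetch                    # stand-in for oanda.get_prices(...)
        self._price = None
        self._dataLock = threading.Lock()
        self._stopEvent = threading.Event()
        self.firstPrice = threading.Event()   # set once the first price has arrived

    @property
    def price(self):
        with self._dataLock:
            return self._price

    def run(self):
        while not self._stopEvent.is_set():
            asking_price = self.fetch()
            with self._dataLock:
                self._price = asking_price - .001
            self.firstPrice.set()             # unblock waiters after the first fetch
            time.sleep(self.interval)

    def stop(self):
        self._stopEvent.set()

poller = PricePoller(0.01, fetch=lambda: 1.1000)  # stubbed asking price
poller.start()
poller.firstPrice.wait(timeout=5)   # main thread blocks here until a price exists
stopLoss = poller.price
poller.stop()
poller.join(timeout=2)
```

The timeout on wait is optional; without it the main thread would wait indefinitely for the first fetch to succeed.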
I'm just guessing that this is what you want... Looking only at your code, you don't have to put the stop function in a thread at all: you could just replace stopLoss = rates() with stopLoss = stop(), because you're not updating the result of the price polling anywhere! But I think you want to do that at some point, otherwise it wouldn't make sense to put it into a thread.
Now to the queue and your 'event stream'.
This snippet:
    try:
        event = events.get(False)
    except Queue.Empty:
        pass
Can just as well be:
event = events.get()
You're doing nothing in the meantime anyway and it is better to let Queue deal with waiting for an event.
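For illustration, a minimal producer/consumer sketch of the blocking form (the 'TICK' string stands in for the real event objects; the try/except import covers both Python 2 and 3 Queue module names):

```python
import threading
import time
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

events = queue.Queue()

def producer():
    time.sleep(0.1)         # simulate the price stream taking a moment
    events.put('TICK')

t = threading.Thread(target=producer)
t.start()

# events.get() simply blocks until the producer delivers something,
# so no Queue.Empty/pass busy loop is needed
event = events.get()
t.join()
```

The blocking call also avoids burning CPU in a tight polling loop while the queue is empty.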
Then, as far as I can see, you have two threads calling Queue.get, but this function will delete the element from the queue after retrieving it! This means whoever obtains the event first, consumes it and the other thread will never see it. But with the above solution for the poller, I think you can get rid of the stop_thread, which also solves that problem.
Now a note on Threads in general.
A thread has its own 'chain' of calls that starts within its run method (or the method which you supply as target if you don't subclass).
That means whatever function is called by run is executed by this thread, and also all functions that are called in turn by this one (and so on). HOWEVER, it is perfectly possible that two threads execute the same function, at the same time! And there is no way to know which thread executes which part of the code at a certain time, if you do not use means of synchronisation (e.g. Events, Locks or Barriers).
This is no problem if all variables used in a called function are local or were local in the calling function:
    def onlyLocal(x, n):
        if n == 0:
            return x
        return onlyLocal(x*2, n-1)
or are exclusively read:
    def onlyRead(myarray):
        t = time.time()
        return t - sum(myarray)
But as soon as you both read from and write to a variable from multiple threads, you need to secure access to it. This happens, for example, if you pass objects that are known to more than one thread (such as self):
    def setPrice(self, price):
        self._price = price
or if your function uses variables from an outer scope which are accessed by multiple threads:
    def variableFromOutside(y):
        global x
        x += y
        return y
You can never be sure that there isn't a thread(2) changing a variable which you(1) have just read, while you are processing it and before you update it with a then invalid value.
    global x | Thread1  | Thread2
       2     | y = x    | z = x
       2     | y **= 3  | x = z+1
       3     | x = y-4  | return
       4     | return   | ...
This is why you have to secure the access to those variables with locks. With a Lock (l):
    global x | Thread1     | Thread2
       2     | l.acquire() | l.acquire()
       2     | y = x       |      |
       2     | y **= 3     |      |
       2     | x = y-4     |      |
       4     | l.release() |      v
       4     | return      | z = x
       4     | ...         | x = z+1
       5     | ...         | l.release()
Here Thread1 acquires the lock before Thread2. Thread2 therefore has to wait until Thread1 releases the lock again before its call to acquire returns.
acquire and release are automatically called when you use with lock:.
Note also that in this toy example it could have been the case that Thread2 acquires the lock before Thread1, but at least they would still not interfere with each other.
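A small runnable sketch of the with lock: form (the counter and thread counts are arbitrary): four threads increment a shared global, and the lock makes each read-modify-write atomic, so the final count is exact.

```python
import threading

x = 0
lock = threading.Lock()

def add(n):
    global x
    for _ in range(n):
        with lock:      # acquire/release handled automatically, even on exceptions
            x += 1      # read-modify-write is now atomic w.r.t. the other threads

threads = [threading.Thread(target=add, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# x is exactly 4 * 10000 here; without the lock, increments could be lost
```

Without the lock, two threads can read the same value of x and both write back the same incremented result, silently losing one increment.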
This was a brief introduction on a large topic, read a bit about thread parallelisation and play around with it. There is no better means for learning than practice.
I've written this code here in the browser and therefore it is not tested! If someone finds issues, please tell me so in the comments or feel free to change it directly.
Related
I am running a Python script every hour, and I've been using time.sleep(3600) inside a while loop. It seems to work as needed, but I am worried about it blocking new tasks. From my research it seems that it only blocks the current thread, but I want to be 100% sure. While the hourly job shouldn't take more than 15 minutes, if it does, or if it hangs, I don't want it to block the next one that starts. This is how I've done it:
    import threading
    import time

    def long_hourly_job():
        # do some long task
        pass

    if __name__ == "__main__":
        while True:
            thr = threading.Thread(target=long_hourly_job)
            thr.start()
            time.sleep(3600)
Is this sufficient?
Also, the reason I am using time.sleep for this hourly job rather than a cron job is that I want to do everything in code, to make dockerization cleaner.
The code will work (i.e. sleep only blocks the calling thread), but you should be careful of some issues. Some of them have already been stated in the comments, like the possibility of time overlaps between threads.

The main issue is that your code is slowly leaking resources. After creating a thread, the OS keeps some data structures around even after the thread has finished running. This is necessary, for example, to keep the thread's exit status until the thread's creator requires it. The function that clears these structures (conceptually equivalent to closing a file) is called join. A thread that has finished running and has not been joined is termed a 'zombie thread'. The amount of memory required by these structures is very small, and your program would have to run for centuries to exhaust any reasonable amount of RAM. Nevertheless, it is good practice to join all the threads you create.

A simple approach (if you know that 3600 s is more than enough time for the thread to finish) would be:
    if __name__ == "__main__":
        while True:
            thr = threading.Thread(target=long_hourly_job)
            thr.start()
            thr.join(3600)  # wait at most 3600 s for the thread to finish
            if thr.isAlive():  # join does not return useful information
                print("Ooops: the last job did not finish on time")
A better approach if you think that it is possible that sometimes 3600 s is not enough time for the thread to finish:
    if __name__ == "__main__":
        previous = []
        while True:
            thr = threading.Thread(target=long_hourly_job)
            thr.start()
            previous.append(thr)
            time.sleep(3600)
            for i in reversed(range(len(previous))):
                t = previous[i]
                t.join(0)
                if t.isAlive():
                    print("Ooops: thread still running")
                else:
                    print("Thread finished")
                    previous.remove(t)
I know that the print statement makes no sense: use logging instead.
Perhaps a little late. I tested the code from the other answers, but my main process got stuck (perhaps I'm doing something wrong?). I then tried a different approach. It's based on the threading.Timer class, but tries to emulate the QtCore.QTimer() behavior and features:
    import threading
    import time
    import traceback

    class Timer:
        SNOOZE = 0
        ONEOFF = 1

        def __init__(self, timerType=SNOOZE):
            self._timerType = timerType
            self._keep = threading.Event()
            self._timerSnooze = None
            self._timerOneoff = None

        class _SnoozeTimer(threading.Timer):
            # This uses the threading.Timer class, but consumes more CPU?!
            def __init__(self, event, msec, callback, *args):
                threading.Thread.__init__(self)
                self.stopped = event
                self.msec = msec
                self.callback = callback
                self.args = args

            def run(self):
                while not self.stopped.wait(self.msec):
                    self.callback(*self.args)

        def start(self, msec: int, callback, *args, start_now=False) -> bool:
            started = False
            if msec > 0:
                if self._timerType == self.SNOOZE:
                    if self._timerSnooze is None:
                        self._timerSnooze = self._SnoozeTimer(self._keep, msec / 1000, callback, *args)
                        self._timerSnooze.start()
                        if start_now:
                            callback(*args)
                        started = True
                else:
                    if self._timerOneoff is None:
                        self._timerOneoff = threading.Timer(msec / 1000, callback, *args)
                        self._timerOneoff.start()
                        started = True
            return started

        def stop(self):
            if self._timerType == self.SNOOZE:
                self._keep.set()
                self._timerSnooze.join()
            else:
                self._timerOneoff.cancel()
                self._timerOneoff.join()

        def is_alive(self):
            if self._timerType == self.SNOOZE:
                isAlive = self._timerSnooze is not None and self._timerSnooze.is_alive() and not self._keep.is_set()
            else:
                isAlive = self._timerOneoff is not None and self._timerOneoff.is_alive()
            return isAlive

        isAlive = is_alive

    KEEP = True

    def callback():
        global KEEP
        KEEP = False
        print("ENDED", time.strftime("%M:%S"))

    if __name__ == "__main__":
        count = 0
        t = Timer(timerType=Timer.ONEOFF)
        t.start(5000, callback)
        print("START", time.strftime("%M:%S"))
        while KEEP:
            if count % 10000000 == 0:
                print("STILL RUNNING")
            count += 1
Notice the timer runs in a separate thread and invokes a callback function when the time is over (in your case, this callback would be used to check whether the long-running process has finished).
I'm making a GUI application that tracks time spent on each foreground window. I attempted to do this with a loop for every process being monitored as such:
    class processes(object):
        def __init__(self, name, pid):
            self.name = name
            self.pid = pid
            self.time_spent = 0
            self.time_active = 0
            p1 = multiprocessing.Process(target=self.loop, args=())
            p1.start()

        def loop(self):
            t = 0
            start_time = time.time()
            while True:
                # While the process is running, check if the foreground window
                # (window currently being used) is the same as the process
                h_wnd = user32.GetForegroundWindow()
                pid = wintypes.DWORD()
                user32.GetWindowThreadProcessId(h_wnd, ctypes.byref(pid))
                p = psutil.Process(pid.value)
                name = str(p.name())
                name2 = str(self.name)
                if name2 == name:
                    t = time.time() - start_time
                    # Log the total time the user spent using the window
                    self.time_active += t
                    self.time_spent = time.perf_counter()
                time.sleep(2)

        def get_time(self):
            print("{:.2f}".format(self.time_active) + " name: " + self.name)
I select the process I want in the gui and find it by its name in a list. Once found I call the function get_time() that's supposed to print how long the selected process has been in the foreground.
    def display_time(Lb2):
        for s in Lb2.curselection():
            for e in process_list:
                if Lb2.get(s) == e.name:
                    e.get_time()
The problem is time_active is 0 every time I print it.
I've debugged the program and can tell it's somewhat working (not perfectly: it still records time while the program is not in the foreground) and updating the variable inside the loop. However, when it comes to printing it out, the value remains 0. I think I'm having trouble understanding multiprocessing; I'd be grateful if anyone could clear up the confusion.
The simplest solution was offered by @TheLizzard, i.e. just use threading instead of multiprocessing:
import threading
...
#p1 = multiprocessing.Process(target=self.loop, args=())
p1 = threading.Thread(target=self.loop, args=())
But that doesn't explain why creating a process instead did not work. What happened is that your processes.__init__ code first created several attributes such as self.time_active, self.time_spent, etc. This code executes in the main process. But when you execute the following two statements ...
    p1 = multiprocessing.Process(target=self.loop, args=())
    p1.start()
... the object that was created must be serialized/deserialized into the address space in which the new Process instance you just created will run. Consequently, in the loop method, when you execute a statement such as self.time_active += t, you are updating the copy of self.time_active that "lives" in the address space of the sub-process. But the code that prints out the value of self.time_active is executing in the main process's address space and therefore prints only the original value of that attribute.
If you had to use multiprocessing because your loop method was CPU-intensive and you needed the parallelism with other processes, then the solution would be to create self.time_active and self.time_spent in shared memory so that both the main process and the sub-process would be accessing the same, shared attributes:
    class processes(object):
        def __init__(self, name, pid):
            self.name = name
            self.pid = pid
            # Create shared floating point values:
            self.time_spent = multiprocessing.Value('f', 0)
            self.time_active = multiprocessing.Value('f', 0)
            ...

        def loop(self):
            ...
            self.time_active.value += t
            self.time_spent.value = time.perf_counter()
            ...

        def get_time(self):
            print("{:.2f}".format(self.time_active.value) + " name: " + self.name)
I have a Python program that does the following:
1) endlessly wait on com port a command character
2) on character reception, launch a new thread to execute a particular piece of code
What I would need to do if a new command is received is:
1) kill the previous thread
2) launch a new one
I read here and there that doing so is not the right way to proceed.
What would be the best way to do this knowing that I need to do this in the same process so I guess I need to use threads ...
I would suggest two different approaches:
if your processes are both called internally from a function, you could set a timeout on the first function.
if you are running an external script, you might want to kill the process.
Let me try to be more precise in my question by adding an example of my code structure.
Suppose synchronous functionA is still running because waiting internally for a particular event, if command "c" is received, I need to stop functionA and launch functionC.
    def functionA():
        ....
        ....
        call a synchronous serviceA that can take several seconds even more to execute
        ....
        ....

    def functionB():
        ....
        ....
        call a synchronous serviceB that nearly returns immediately
        ....
        ....

    def functionC():
        ....
        ....
        call a synchronous serviceC
        ....
        ....
    # -------------------
    def launch_async_task(function):
        t = threading.Thread(target=function, name="async")
        t.setDaemon(True)
        t.start()

    # ------ main ----------
    while True:
        try:
            car = COM_port.read(1)
            if car == "a":
                launch_async_task(functionA)
            elif car == "b":
                launch_async_task(functionB)
            elif car == "c":
                launch_async_task(functionC)
You may want to run the serial port in a separate thread. When it receives a byte, put that byte in a queue. Have the main program loop and check the queue to decide what to do with it. From the main program you can reap the finished thread with join and start a new one (note that join waits for a thread; it does not kill a running one). You may also want to look into a thread pool to see if it is what you want.
    ser = serial.Serial("COM1", 9600)
    que = queue.Queue()

    def read_serial(com, q):
        val = com.read(1)
        q.put(val)

    ser_th = threading.Thread(target=read_serial, args=(ser, que))
    ser_th.start()

    th = None
    while True:
        if not que.empty():
            val = que.get()
            if val == b"e":
                break  # quit
            elif val == b"a":
                if th is not None:
                    th.join(0)  # reap the previous thread (does not kill it)
                th = threading.Thread(target=functionA)
                th.start()
            elif val == b"b":
                if th is not None:
                    th.join(0)  # reap the previous thread (does not kill it)
                th = threading.Thread(target=functionB)
                th.start()
            elif val == b"c":
                if th is not None:
                    th.join(0)  # reap the previous thread (does not kill it)
                th = threading.Thread(target=functionC)
                th.start()

    try:
        ser.close()
        th.join(0)
    except:
        pass
If you are creating and joining a lot of threads you may want to just have a function that checks what command to run.
    running = True

    def run_options(option):
        global running  # needed so the assignment below reaches module scope
        if option == 0:
            print("Running Option 0")
        elif option == 1:
            print("Running Option 1")
        else:
            running = False

    while running:
        if not que.empty():
            val = que.get()
            run_options(val)
Ok, I finally used a piece of code that uses ctypes lib to provide some kind of killing thread function.
I know this is not a clean way to proceed but in my case, there are no resources shared by the threads so it shouldn't have any impact ...
If it can help, here is the piece of code that can easily be found on the net:
    def terminate_thread(thread):
        """Terminates a python thread from another thread.

        :param thread: a threading.Thread instance
        """
        if not thread.isAlive():
            return
        exc = ctypes.py_object(SystemExit)
        res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
            ctypes.c_long(thread.ident), exc)
        if res == 0:
            raise ValueError("nonexistent thread id")
        elif res > 1:
            # """if it returns a number greater than one, you're in trouble,
            # and you should call it again with exc=NULL to revert the effect"""
            ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
            raise SystemError("PyThreadState_SetAsyncExc failed")
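A runnable sketch of how this helper is used (the worker loop here is a made-up stand-in). One caveat worth stressing: the asynchronous exception is only delivered between Python bytecodes, so a thread blocked inside a long C call will not die until that call returns to the interpreter.

```python
import ctypes
import threading
import time

def terminate_thread(thread):
    """Raise SystemExit asynchronously in another thread (CPython only)."""
    if not thread.is_alive():
        return
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread.ident), ctypes.py_object(SystemExit))
    if res == 0:
        raise ValueError("nonexistent thread id")
    elif res > 1:
        # more than one thread state affected: revert with exc=NULL
        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(thread.ident), None)
        raise SystemError("PyThreadState_SetAsyncExc failed")

def worker():
    while True:              # no cooperative exit: only the async exception stops it
        time.sleep(0.01)

t = threading.Thread(target=worker)
t.start()
time.sleep(0.1)
terminate_thread(t)
t.join(timeout=5)            # SystemExit ends the thread shortly after delivery
```

This is still the "unclean" technique the answer describes; if the thread holds locks or other shared resources when the exception lands, they are not released cleanly.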
I am having issues getting 3 threads to run concurrently. I would like to have the "trade" loop, the "prices" loop and the "stop" loop run at the same time; however, it seems that the "stop" loop hijacks the program and runs while the others wait their turn. How should I set it up so that they all run at the same time?
    import Queue
    import threading
    import time
    import json
    from execution import Execution
    from settings import STREAM_DOMAIN, API_DOMAIN, ACCESS_TOKEN, ACCOUNT_ID
    from strategy import TestRandomStrategy
    from streaming import StreamingForexPrices
    from event import TickEvent
    from rates import stop

    def trade(events, strategy, execution):
        """
        Carries out an infinite while loop that polls the
        events queue and directs each event to either the
        strategy component or the execution handler. The
        loop will then pause for "heartbeat" seconds and
        continue.
        """
        while True:
            try:
                event = events.get(False)
            except Queue.Empty:
                pass
            else:
                if event is not None:
                    if event.type == 'TICK':
                        strategy.calculate_signals(event)
                    elif event.type == 'ORDER':
                        print "Executing order!"
                        execution.execute_order(event)
            time.sleep(heartbeat)

    if __name__ == "__main__":
        heartbeat = 0  # Half a second between polling
        events = Queue.Queue()
        # Trade 1000 unit of EUR/USD
        instrument = "EUR_USD"
        units = 1
        stopLoss = stopper
        # Create the OANDA market price streaming class,
        # making sure to provide authentication commands
        prices = StreamingForexPrices(
            STREAM_DOMAIN, ACCESS_TOKEN, ACCOUNT_ID,
            instrument, events
        )
        # handle stopLoss price
        stopper = stop()
        # Create the execution handler, making sure to
        # provide authentication commands
        execution = Execution(API_DOMAIN, ACCESS_TOKEN, ACCOUNT_ID)
        # Create the strategy/signal generator, passing the
        # instrument, quantity of units and the events queue
        strategy = TestRandomStrategy(instrument, units, events, stopLoss)
        # Create two separate threads: one for the trading loop
        # and another for the market price streaming class
        trade_thread = threading.Thread(target=trade, args=(events, strategy, execution))
        price_thread = threading.Thread(target=prices.stream_to_queue, args=[])
        rate_thread = threading.Thread(target=stop, args=[])
        # Start the threads
        trade_thread.start()
        price_thread.start()
        rate_thread.start()
Just fyi, everything worked great until I tried to add the "rate". The only things I have added are an additional thread, the stopLoss and the rate.py file.
rate.py:
    import oandapy
    import time

    oanda = oandapy.API(environment="practice", access_token="xxxxxxxxx")

    while True:
        response = oanda.get_prices(instruments="EUR_USD")
        prices = response.get("prices")
        asking_price = prices[0].get("ask")
        stop = asking_price - .001
        print stop
        time.sleep(1)
Thanks for the help in advance!
First of all, a remark: don't use sleep if you can avoid it. For example, in the "trade" loop you don't need sleep at all if you make a blocking .get() on your queue.

Then, once rates.py is imported it starts the while loop; you're missing the stop() function (or is your code not complete?).

EDIT: in case you want to add the stop function in rates.py, just put the while-loop code inside a def stop(): block, like this:
    def stop():
        while True:
            response = oanda.get_prices(instruments="EUR_USD")
            prices = response.get("prices")
            asking_price = prices[0].get("ask")
            stop = asking_price - .001
            print stop
            time.sleep(1)
(btw: do you really know what you're doing?)
I have an application that fires up a series of threads. Occasionally, one of these threads dies (usually due to a network problem). How can I properly detect a thread crash and restart just that thread? Here is example code:
    import random
    import threading
    import time

    class MyThread(threading.Thread):
        def __init__(self, pass_value):
            super(MyThread, self).__init__()
            self.running = False
            self.value = pass_value

        def run(self):
            self.running = True
            while self.running:
                time.sleep(0.25)
                rand = random.randint(0, 10)
                print threading.current_thread().name, rand, self.value
                if rand == 4:
                    raise ValueError('Returned 4!')

    if __name__ == '__main__':
        group1 = []
        group2 = []
        for g in range(4):
            group1.append(MyThread(g))
            group2.append(MyThread(g+20))

        for m in group1:
            m.start()

        print "Now start second wave..."

        for p in group2:
            p.start()
In this example, I start 4 threads then I start 4 more threads. Each thread randomly generates an int between 0 and 10. If that int is 4, it raises an exception. Notice that I don't join the threads. I want both group1 and group2 list of threads to be running. I found that if I joined the threads it would wait until the thread terminated. My thread is supposed to be a daemon process, thus should rarely (if ever) hit the ValueError Exception this example code is showing and should be running constantly. By joining it, the next set of threads doesn't begin.
How can I detect that a specific thread died and restart just that one thread?
I have attempted the following loop right after my for p in group2 loop.
    while True:
        # Create a copy of our groups to iterate over,
        # so that we can delete dead threads if needed
        for m in group1[:]:
            if not m.isAlive():
                group1.remove(m)
                group1.append(MyThread(1))
        for m in group2[:]:
            if not m.isAlive():
                group2.remove(m)
                group2.append(MyThread(500))
        time.sleep(5.0)
I took this method from this question.
The problem with this is that isAlive() seems to always return True, so the threads never restart.
Edit
Would it be more appropriate in this situation to use multiprocessing? I found this tutorial. Is it more appropriate to have separate processes if I am going to need to restart the process? It seems that restarting a thread is difficult.
It was mentioned in the comments that I should check is_active() against the thread. I don't see this mentioned in the documentation, but I do see the isAlive that I am currently using. As I mentioned above, though, this returns True, thus I'm never able to see that a thread has died.
I had a similar issue and stumbled across this question. I found that join takes a timeout argument, and that is_alive will return False once the thread is joined. So my audit for each thread is:
    def check_thread_alive(thr):
        thr.join(timeout=0.0)
        return thr.is_alive()
This detects thread death for me.
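For instance (the failing worker below is a made-up stand-in for a thread that dies on a network error): after the worker raises, check_thread_alive reports it as dead.

```python
import threading
import time

def check_thread_alive(thr):
    thr.join(timeout=0.0)    # non-blocking: reaps the thread if it already ended
    return thr.is_alive()

def crasher():
    time.sleep(0.05)
    raise ValueError('simulated network failure')   # the thread dies here

t = threading.Thread(target=crasher)
t.start()
t.join()                              # wait until the crash has happened
alive_after = check_thread_alive(t)   # the thread is dead now
```

The join(timeout=0.0) never blocks the caller, so the check is cheap enough to run in a periodic audit loop.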
You could potentially put a try/except around the place where you expect it to crash (if it can happen anywhere, you can put it around the whole run function) and keep an indicator variable holding the status.
So something like the following:
    class MyThread(threading.Thread):
        def __init__(self, pass_value):
            super(MyThread, self).__init__()
            self.running = False
            self.value = pass_value
            self.RUNNING = 0
            self.FINISHED_OK = 1
            self.STOPPED = 2
            self.CRASHED = 3
            self.status = self.STOPPED

        def run(self):
            self.running = True
            self.status = self.RUNNING
            while self.running:
                time.sleep(0.25)
                rand = random.randint(0, 10)
                print threading.current_thread().name, rand, self.value
                try:
                    if rand == 4:
                        raise ValueError('Returned 4!')
                except:
                    self.status = self.CRASHED
                    self.running = False  # let the crashed thread exit its loop
Then you can use your loop:
    while True:
        # Create a copy of our groups to iterate over,
        # so that we can delete dead threads if needed
        for m in group1[:]:
            if m.status == m.CRASHED:
                value = m.value
                group1.remove(m)
                replacement = MyThread(value)
                replacement.start()  # don't forget to start the new thread
                group1.append(replacement)
        for m in group2[:]:
            if m.status == m.CRASHED:
                value = m.value
                group2.remove(m)
                replacement = MyThread(value)
                replacement.start()  # don't forget to start the new thread
                group2.append(replacement)
        time.sleep(5.0)
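Putting the status flag and the restart loop together, here is a runnable, bounded sketch of the idea (class and timing values are illustrative, not the question's exact code; the crash is simulated randomly, and the supervisor loop is capped so the sketch terminates):

```python
import random
import threading
import time

class SupervisedThread(threading.Thread):
    RUNNING, FINISHED_OK, STOPPED, CRASHED = range(4)

    def __init__(self, value):
        super(SupervisedThread, self).__init__()
        self.value = value
        self.status = self.STOPPED

    def run(self):
        self.status = self.RUNNING
        try:
            time.sleep(0.01)                 # stand-in for real work
            if random.randint(0, 3) == 0:    # crash roughly 25% of the time
                raise ValueError('Returned 4!')
            self.status = self.FINISHED_OK
        except ValueError:
            self.status = self.CRASHED       # record the crash for the supervisor

group = [SupervisedThread(g) for g in range(4)]
for m in group:
    m.start()

restarted = 0
for _ in range(200):                         # bounded so the sketch terminates
    for m in group[:]:
        if m.status == m.CRASHED:
            replacement = SupervisedThread(m.value)  # same value as the dead thread
            group.remove(m)
            group.append(replacement)
            replacement.start()
            restarted += 1
    if all(m.status == m.FINISHED_OK for m in group):
        break
    time.sleep(0.01)

for m in group:
    m.join()
```

The supervisor never touches isAlive at all; it relies purely on the status attribute that the thread itself maintains, which sidesteps the problem in the question.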