Python: How to Implement Threading of my Email Class

I have a pretty large email class that constructs and sends various emails with MIME, attachments, etc.
Everything works fine, but it is called from my main loop, and the sendmail method using smtplib can sometimes block for several seconds because of issues at the other end.
I want to run the code in a new thread so my main loop can keep trucking.
I have tried two approaches without success:
Calling my class using a Thread
Inheriting my V1 class from Thread
Neither stops the blocking. Below is a working skeleton of my class (V1) and what I have tried. Success would be to see "Done" appear immediately after "Starting" instead of waiting for the sendmail sleep to finish.
Hoping there is an easy way to do this...
from time import sleep
from threading import Thread

class EMailv1():
    def __init__(self):
        self._from_address = 'a@b.com'
        self._to_addresslist = []

    def buildmessage(self, **kwargs):
        message = {}
        return message

    @staticmethod
    def sendmail(message):
        sleep(5)
        return

class EMailv2(Thread):
    def __init__(self):
        Thread.__init__(self)
        self._from_address = 'a@b.com'
        self._to_addresslist = []

    def run(self):
        pass

    def buildmessage(self, **kwargs):
        message = {}
        return message

    @staticmethod
    def sendmail(message):
        sleep(3)
        return
if __name__ == "__main__":
    print('Starting V1 class send in current manner, which blocks if smtplib cannot deliver mail')
    email = EMailv1()
    msg = email.buildmessage(a='a', b='b')
    email.sendmail(msg)
    print('v1 Done after sendmail sleep finishes')

    print('Starting threaded call to send V1 class')
    email = EMailv1()
    msg = email.buildmessage(a='a', b='b')
    t = Thread(target=email.sendmail(msg))
    t.start()
    print('Threaded call of V1 Done after sendmail sleep finishes')

    print('Starting V2 class inheriting Thread')
    email = EMailv2()
    msg = email.buildmessage(a='a', b='b')
    email.start()
    email.sendmail(msg)
    print('V2 Done after sendmail sleep finishes')

In your second version, instead of
t = Thread(target=email.sendmail(msg))
you should do
t = Thread(target=email.sendmail, args=[msg])
As written, email.sendmail(msg) is evaluated before the thread is constructed, which is why you wait 5 seconds before proceeding.
Instead, you should pass the thread the target function and its args separately, without calling them.
Here is a fixed and minimized version:
from time import sleep
from threading import Thread

class EMailv1():
    def __init__(self):
        self._from_address = 'a@b.com'
        self._to_addresslist = []

    def buildmessage(self, **kwargs):
        message = {}
        return message

    @staticmethod
    def sendmail(message):
        sleep(5)
        print("Launched thread: I'm done sleeping!")
        return

if __name__ == "__main__":
    print('Main thread: Starting threaded call to send V1 class')
    email = EMailv1()
    msg = email.buildmessage(a='a', b='b')
    t = Thread(target=email.sendmail, args=[msg])
    t.start()
    print("Main thread: I have created the thread, and am done.")
Output:
Main thread: Starting threaded call to send V1 class
Main thread: I have created the thread, and am done.
Launched thread: I'm done sleeping!
With that said, I'd also suggest taking a look at some of the abstractions Python provides for multithreaded work. For instance, ThreadPoolExecutor is usually preferable to working directly with Threads.
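As a minimal sketch, assuming the EMailv1 class from the snippet above (the executor and variable names are illustrative, not part of the original answer):

from concurrent.futures import ThreadPoolExecutor

# One small pool reused by the main loop; submit() returns immediately.
executor = ThreadPoolExecutor(max_workers=2)

email = EMailv1()
msg = email.buildmessage(a='a', b='b')
future = executor.submit(email.sendmail, msg)  # runs sendmail in a worker thread
print("Main thread: sendmail submitted, carrying on.")

# Later, if you want to surface errors, future.result() re-raises any exception
# that sendmail raised in the worker thread.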


What's the proper way to tell a looping thread to stop looping?
I have a fairly simple program that pings a specified host in a separate threading.Thread class. In this class it sleeps 60 seconds, then runs again until the application quits.
I'd like to implement a 'Stop' button in my wx.Frame to ask the looping thread to stop. It doesn't need to end the thread right away; it can just stop looping once it wakes up.
Here is my threading class (note: I haven't implemented looping yet, but it would likely fall under the run method in PingAssets):
class PingAssets(threading.Thread):
    def __init__(self, threadNum, asset, window):
        threading.Thread.__init__(self)
        self.threadNum = threadNum
        self.window = window
        self.asset = asset

    def run(self):
        config = controller.getConfig()
        fmt = config['timefmt']
        start_time = datetime.now().strftime(fmt)
        try:
            if onlinecheck.check_status(self.asset):
                status = "online"
            else:
                status = "offline"
        except socket.gaierror:
            status = "an invalid asset tag."
        msg = "{}: {} is {}. \n".format(start_time, self.asset, status)
        wx.CallAfter(self.window.Logger, msg)
And in my wxPython Frame I have this function called from a Start button:
def CheckAsset(self, asset):
    self.count += 1
    thread = PingAssets(self.count, asset, self)
    self.threads.append(thread)
    thread.start()
Threaded stoppable function
Instead of subclassing threading.Thread, one can modify the function to allow
stopping by a flag.
We need an object, accessible to the running function, on which we set the flag to stop running.
We can use the threading.currentThread() object.
import threading
import time

def doit(arg):
    t = threading.currentThread()
    while getattr(t, "do_run", True):
        print("working on %s" % arg)
        time.sleep(1)
    print("Stopping as you wish.")

def main():
    t = threading.Thread(target=doit, args=("task",))
    t.start()
    time.sleep(5)
    t.do_run = False

if __name__ == "__main__":
    main()
The trick is that a running thread can have additional attributes attached to it. The solution builds on two assumptions:
the thread has a property "do_run" with the default value True
the driving parent process can set the started thread's "do_run" property to False.
Running the code, we get following output:
$ python stopthread.py
working on task
working on task
working on task
working on task
working on task
Stopping as you wish.
Pill to kill - using Event
Another alternative is to use a threading.Event as a function argument. It is
False by default, but an external process can "set" it (to True), and the function can
learn about it using the wait(timeout) function.
We can wait with a zero timeout, but we can also use it as the sleeping timer (as used below).
def doit(stop_event, arg):
    while not stop_event.wait(1):
        print("working on %s" % arg)
    print("Stopping as you wish.")

def main():
    pill2kill = threading.Event()
    t = threading.Thread(target=doit, args=(pill2kill, "task"))
    t.start()
    time.sleep(5)
    pill2kill.set()
    t.join()
Edit: I tried this in Python 3.6. Note that stop_event.wait() with no timeout blocks until the event is set, while stop_event.wait(timeout) returns True once the event is set and False if the timeout elapses, which is what the loop above relies on. Using stop_event.is_set() together with a separate sleep also works.
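A minimal sketch of that is_set() variant, keeping the same doit signature as above:

def doit(stop_event, arg):
    while not stop_event.is_set():   # poll the flag explicitly
        print("working on %s" % arg)
        time.sleep(1)                # sleep separately instead of using wait(1)
    print("Stopping as you wish.")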
Stopping multiple threads with one pill
The advantage of the pill to kill is easier to see when we have to stop multiple threads
at once, as one pill works for all of them.
The doit function does not change at all; only main handles the threads a bit differently.
def main():
    pill2kill = threading.Event()
    tasks = ["task ONE", "task TWO", "task THREE"]

    def thread_gen(pill2kill, tasks):
        for task in tasks:
            t = threading.Thread(target=doit, args=(pill2kill, task))
            yield t

    threads = list(thread_gen(pill2kill, tasks))
    for thread in threads:
        thread.start()
    time.sleep(5)
    pill2kill.set()
    for thread in threads:
        thread.join()
This has been asked before on Stack. See the following links:
Is there any way to kill a Thread in Python?
Stopping a thread after a certain amount of time
Basically you just need to set up the thread with a stop function that sets a sentinel value that the thread will check. In your case, you'll have something in your loop check the sentinel value to see if it's changed, and if it has, the loop can break and the thread can die.
I read the other questions on Stack but I was still a little confused on communicating across classes. Here is how I approached it:
I use a list to hold all my threads in the __init__ method of my wxFrame class: self.threads = []
As recommended in How to stop a looping thread in Python? I use a signal in my thread class which is set to True when initializing the threading class.
class PingAssets(threading.Thread):
    def __init__(self, threadNum, asset, window):
        threading.Thread.__init__(self)
        self.threadNum = threadNum
        self.window = window
        self.asset = asset
        self.signal = True

    def run(self):
        while self.signal:
            do_stuff()
            sleep()
and I can stop these threads by iterating over my threads:
def OnStop(self, e):
    for t in self.threads:
        t.signal = False
I had a different approach. I subclassed the Thread class, and in the constructor I created an Event object. Then I wrote a custom join() method, which first sets this event and then calls the parent's version of itself.
Here is the class I'm using for serial port communication in a wxPython app:
import wx, threading, serial, Events, Queue

class PumpThread(threading.Thread):
    def __init__(self, port, queue, parent):
        super(PumpThread, self).__init__()
        self.port = port
        self.queue = queue
        self.parent = parent
        self.serial = serial.Serial()
        self.serial.port = self.port
        self.serial.timeout = 0.5
        self.serial.baudrate = 9600
        self.serial.parity = 'N'
        self.stopRequest = threading.Event()

    def run(self):
        try:
            self.serial.open()
        except Exception, ex:
            print ("[ERROR]\tUnable to open port {}".format(self.port))
            print ("[ERROR]\t{}\n\n{}".format(ex.message, ex.traceback))
            self.stopRequest.set()
        else:
            print ("[INFO]\tListening port {}".format(self.port))
            self.serial.write("FLOW?\r")
        while not self.stopRequest.isSet():
            msg = ''
            if not self.queue.empty():
                try:
                    command = self.queue.get()
                    self.serial.write(command)
                except Queue.Empty:
                    continue
            while self.serial.inWaiting():
                char = self.serial.read(1)
                if '\r' in char and len(msg) > 1:
                    char = ''
                    #~ print('[DATA]\t{}'.format(msg))
                    event = Events.PumpDataEvent(Events.SERIALRX, wx.ID_ANY, msg)
                    wx.PostEvent(self.parent, event)
                    msg = ''
                    break
                msg += char
        self.serial.close()

    def join(self, timeout=None):
        self.stopRequest.set()
        super(PumpThread, self).join(timeout)

    def SetPort(self, serial):
        self.serial = serial

    def Write(self, msg):
        if self.serial.is_open:
            self.queue.put(msg)
        else:
            print("[ERROR]\tPort {} is not open!".format(self.port))

    def Stop(self):
        if self.isAlive():
            self.join()
The Queue is used for sending messages to the port, and the main loop takes responses back. I didn't use the serial.readline() method because of a different end-of-line character, and I found the usage of the io classes to be too much fuss.
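A hedged usage sketch, assuming a wx.Frame as the parent and Python 2 to match the class above; the port name is made up:

# Inside a wx.Frame subclass (illustrative only; adjust the port for your device)
self.pump_queue = Queue.Queue()
self.pump = PumpThread('/dev/ttyUSB0', self.pump_queue, self)
self.pump.start()             # opens the port and starts the read loop

self.pump.Write("FLOW?\r")    # enqueue a command; the run() loop writes it out

self.pump.Stop()              # sets stopRequest and joins, closing the port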
Depends on what you run in that thread.
If that's your code, then you can implement a stop condition (see other answers).
However, if what you want is to run someone else's code, then you should fork and start a process. Like this:
import multiprocessing
proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()
now, whenever you want to stop that process, send it a SIGTERM like this:
proc.terminate()
proc.join()
And it's not slow: fractions of a second.
Enjoy :)
My solution is:
import threading, time

def a():
    t = threading.currentThread()
    while getattr(t, "do_run", True):
        print('Do something')
        time.sleep(1)

def getThreadByName(name):
    threads = threading.enumerate()  # Threads list
    for thread in threads:
        if thread.name == name:
            return thread

threading.Thread(target=a, name='228').start()  # Init thread
t = getThreadByName('228')  # Get thread by name
time.sleep(5)
t.do_run = False  # Signal to stop thread
t.join()
I find it useful to have a class, derived from threading.Thread, to encapsulate my thread functionality. You simply provide your own main loop in an overridden version of run() in this class. Calling start() arranges for the object’s run() method to be invoked in a separate thread.
Inside the main loop, periodically check whether a threading.Event has been set. Such an event is thread-safe.
Inside this class, you have your own join() method that sets the stop event object before calling the join() method of the base class. It can optionally take a time value to pass to the base class's join() method to ensure your thread is terminated in a short amount of time.
import threading
import time

class MyThread(threading.Thread):
    def __init__(self, sleep_time=0.1):
        self._stop_event = threading.Event()
        self._sleep_time = sleep_time
        """call base class constructor"""
        super().__init__()

    def run(self):
        """main control loop"""
        while not self._stop_event.isSet():
            # do work
            print("hi")
            self._stop_event.wait(self._sleep_time)

    def join(self, timeout=None):
        """set stop event and join within a given time period"""
        self._stop_event.set()
        super().join(timeout)

if __name__ == "__main__":
    t = MyThread()
    t.start()
    time.sleep(5)
    t.join(1)  # wait 1s max
Having a small sleep inside the main loop before checking the threading.Event is less CPU intensive than looping continuously. You can have a default sleep time (e.g. 0.1s), but you can also pass the value in the constructor.
Sometimes you don't have control over the running target. In those cases you can use signal.pthread_kill to send a stop signal.
from signal import pthread_kill, SIGTSTP
from threading import Thread
from itertools import count
from time import sleep
def target():
    for num in count():
        print(num)
        sleep(1)

thread = Thread(target=target)
thread.start()
sleep(5)

pthread_kill(thread.ident, SIGTSTP)
result
0
1
2
3
4
[14]+ Stopped

Execute function periodically or when an event fires

I am working on Python's Flask server-side code where there is a background task which runs periodically and executes a function (note that 'periodically' is not a hard requirement; executing once and then again after x seconds also works). But I also need it to execute the same function immediately when the server receives a request (and then resume the background task).
This kind of reminds me of the SELECT system call in C, where the system waits for a timeout or until a packet arrives.
Here is the minimal version I came up with after looking up a lot of answers.
from flask import Flask, request
import threading, os, time

POOL_TIME = 2

myThread = threading.Thread()

def pollAndExecute(a='|'):
    time.sleep(1)
    print(time.time(), a)
    # time.sleep(1)
    myThread = threading.Timer(POOL_TIME, pollAndExecute)
    myThread.start()

def startWork():
    global myThread
    myThread = threading.Timer(POOL_TIME, pollAndExecute)
    myThread.start()

app = Flask(__name__)

@app.route('/ping', methods=['POST'])
def ping():
    global myThread
    myThread.cancel()
    pollAndExecute("#")
    return "Hello"

if __name__ == '__main__':
    app.secret_key = os.urandom(12)
    startWork()
    app.run(port=5001)
Output:
But the output clearly shows that it is not behaving properly after a request arrives (sent using curl -X POST http://localhost:5001/ping).
Please guide me on how to correct this, or whether there are other ways to do it. Just FYI, in the original code there are various database updates in pollAndExecute() as well, and I need to make sure there are no race conditions between polling and ping. Needless to say, only one copy of the function should execute at a particular time (preferably in a single thread).
Here is the solution I made for your problem. I used a priority queue that takes in data to be run with the printTime function. The background and flask functions are two different threads that push data into the priority queue, which should prioritize the flask call over the background one. Notice how it now waits for the current thread to finish before executing another one.
from flask import Flask, request
import threading, os, time
from threading import Thread, Lock
from queue import PriorityQueue

POOL_TIME = 2
lock = Lock()

def printTime(a='|'):
    time.sleep(1)  # Simulate process taking 1 sec
    print(time.time(), a)

jobs = PriorityQueue()

class Queue(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.daemon = True
        self.start()

    def run(self):
        while True:
            _, data = jobs.get()
            printTime(data)

class backGroundProcess(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.daemon = True
        self.start()

    def run(self):
        while True:
            time.sleep(2)  # Background process enqueues a job every 2 secs
            jobs.put((0, "|"))

class flaskProcess(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.start()

    def run(self):
        jobs.put((1, "#"))

app = Flask(__name__)

@app.route('/ping', methods=['POST'])
def ping():
    flaskThread = flaskProcess()
    return "Hello"

if __name__ == '__main__':
    backGroundProcess()
    Queue()
    app.secret_key = os.urandom(12)
    app.run(port=5001)
The above snippet may be a little verbose because I used classes, but this should get you started.
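Since the question mentions database updates in pollAndExecute(), one hedged refinement is to guard that shared work with a lock, in case anything outside the single queue worker also touches the same state. This is only a sketch; db_lock is an illustrative name, not part of the answer above:

import time
from threading import Lock

db_lock = Lock()  # hypothetical lock guarding the shared database state

def printTime(a='|'):
    with db_lock:          # serialize the critical section so jobs cannot race
        time.sleep(1)      # Simulate process taking 1 sec
        print(time.time(), a)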

Python threads, how do Event and Queue work together?

I was talking with my friend after looking at this example from Beazley's book:
from queue import Queue
from threading import Thread, Event

class ActorExit(Exception):
    pass

class Actor:
    def __init__(self):
        self._mailbox = Queue()

    def send(self, msg):
        self._mailbox.put(msg)

    def recv(self):
        msg = self._mailbox.get()
        if msg is ActorExit:
            raise ActorExit()
        return msg

    def close(self):
        self.send(ActorExit)

    def start(self):
        self._terminated = Event()
        t = Thread(target=self._bootstrap)
        t.daemon = True
        t.start()

    def _bootstrap(self):
        try:
            self.run()
        except ActorExit:
            pass
        finally:
            self._terminated.set()

    def join(self):
        self._terminated.wait()

    def run(self):
        while True:
            msg = self.recv()

class PrintActor(Actor):
    def run(self):
        while True:
            msg = self.recv()
            print('Got:', msg)
My friend argues that sole purpose of Event is to block the main thread until the other thread performs set operation.
Is that true?
How can we watch thread execution?
Python threads, how do Event and Queue work together?
They don't. You can use Events without queues and queues without Events; there's no dependency between them. Your example just happens to use both.
My friend argues that sole purpose of Event is to block the main thread until the other thread performs set operation. Is that true?
Calling .wait() on an Event-object will block any calling thread until the internal flag is .set().
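A minimal illustration of that blocking behaviour (a sketch, not taken from the book or the question):

import threading
import time

evt = threading.Event()

def waiter():
    print("waiting for the flag")
    evt.wait()                 # blocks here until evt.set() is called
    print("flag set, continuing")

t = threading.Thread(target=waiter)
t.start()
time.sleep(1)
evt.set()                      # wakes up every thread blocked in wait()
t.join()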
If you look at the source for Event, you'll find that Events just consist of a Condition variable with a lock and a boolean flag + methods to handle and communicate (to waiting threads) state changes of that flag.
class Event:
    """Class implementing event objects.

    Events manage a flag that can be set to true with the set() method and reset
    to false with the clear() method. The wait() method blocks until the flag is
    true. The flag is initially false.
    """
    def __init__(self):
        self._cond = Condition(Lock())
        self._flag = False
    ...
How can we watch thread execution?
A simple method would be to apply some sort of utility function that prints out what you're interested in, for example:
def print_info(info=""):
    """Print calling function's name and thread with optional info-text."""
    calling_func = sys._getframe(1).f_code.co_name
    thread_name = threading.current_thread().getName()
    print(f"<{thread_name}, {calling_func}> {info}", flush=True)
Another possibility would be to use logging like in this answer.
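A rough sketch of the logging variant (threadName and funcName are standard logging record attributes; the worker name is arbitrary):

import logging
import threading

# The format string pulls the thread name and calling function into every record.
logging.basicConfig(
    level=logging.DEBUG,
    format="<%(threadName)s, %(funcName)s> %(message)s",
)

def worker():
    logging.debug("doing work")

threading.Thread(target=worker, name="Thread-1").start()
logging.debug("main continues")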
Not sure what Beazley wanted to demonstrate with the code you showed, but it seems a little over-engineered to me for this simple task. Involving Events here on top is unnecessary when you already use a queue. You can initiate thread termination by passing a sentinel value.
Here's a simplified version of your example with sentinel ('STOP') and some info-prints with print_info from above:
import sys
import time
import threading
from queue import Queue

class Actor(threading.Thread):
    def __init__(self):
        super().__init__(target=self.run)
        self.queue = Queue()

    def send(self, msg):
        self.queue.put(msg)
        print_info(f"sent: {msg}")  # DEBUG

    def close(self):
        print_info()  # DEBUG
        self.send('STOP')

    def run(self):
        for msg in iter(self.queue.get, 'STOP'):
            pass

class PrintActor(Actor):
    def run(self):
        for msg in iter(self.queue.get, 'STOP'):
            print_info(f"got: {msg}")  # DEBUG

if __name__ == '__main__':
    pa = PrintActor()
    pa.start()
    pa.send("Hello")
    time.sleep(2)
    pa.send("...World!")
    time.sleep(2)
    pa.close()
    pa.join()
Output:
<MainThread, send> sent: Hello
<Thread-1, run> got: Hello
<MainThread, send> sent: ...World!
<Thread-1, run> got: ...World!
<MainThread, close>
<MainThread, send> sent: STOP

Python threading.Thread.join() is blocking

I'm working with asynchronous programming and wrote a small wrapper class for thread-safe execution of co-routines based on some ideas from this thread here: python asyncio, how to create and cancel tasks from another thread. After some debugging, I found that it hangs when calling the Thread class's join() function (I overrode it only for testing). Thinking I made a mistake, I basically copied the code that the OP said he used and tested it to find the same issue.
His mildly altered code:
import threading
import asyncio
from concurrent.futures import Future
import functools

class EventLoopOwner(threading.Thread):
    class __Properties:
        def __init__(self, loop, thread, evt_start):
            self.loop = loop
            self.thread = thread
            self.evt_start = evt_start

    def __init__(self):
        threading.Thread.__init__(self)
        self.__elo = self.__Properties(None, None, threading.Event())

    def run(self):
        self.__elo.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.__elo.loop)
        self.__elo.thread = threading.current_thread()
        self.__elo.loop.call_soon_threadsafe(self.__elo.evt_start.set)
        self.__elo.loop.run_forever()

    def stop(self):
        self.__elo.loop.call_soon_threadsafe(self.__elo.loop.stop)

    def _add_task(self, future, coro):
        task = self.__elo.loop.create_task(coro)
        future.set_result(task)

    def add_task(self, coro):
        self.__elo.evt_start.wait()
        future = Future()
        p = functools.partial(self._add_task, future, coro)
        self.__elo.loop.call_soon_threadsafe(p)
        return future.result()  # block until result is available

    def cancel(self, task):
        self.__elo.loop.call_soon_threadsafe(task.cancel)

async def foo(i):
    return 2 * i

async def main():
    elo = EventLoopOwner()
    elo.start()
    task = elo.add_task(foo(10))
    x = await task
    print(x)
    elo.stop(); print("Stopped")
    elo.join(); print("Joined")  # note: giving it a timeout does not fix it

if __name__ == "__main__":
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    assert isinstance(loop, asyncio.AbstractEventLoop)
    try:
        loop.run_until_complete(main())
    finally:
        loop.close()
About 50% of the time when I run it, it simply stalls and says "Stopped" but not "Joined". I've done some debugging and found that the hang is correlated with the Task itself raising an exception. This doesn't happen every time, but since it occurs when I'm calling threading.Thread.join(), I have to assume it is related to the destruction of the loop. What could possibly be causing this?
The exception is simply: "cannot join current thread" which tells me that the .join() is sometimes being run on the thread from which I called it and sometimes from the ELO thread.
What is happening and how can I fix it?
I'm using Python 3.5.1 for this.
Note: This is not replicated on IDE One: http://ideone.com/0LO2D9

Python multiprocessing with twisted's reactor

I am working on an XML-RPC server which has to perform certain tasks cyclically. I am using Twisted as the core of the XML-RPC service, but I am running into a little problem:
class cemeteryRPC(xmlrpc.XMLRPC):
    def __init__(self, dic):
        xmlrpc.XMLRPC.__init__(self)

    def xmlrpc_foo(self):
        return 1

    def cycle(self):
        print "Hello"
        time.sleep(3)

class cemeteryM(base):
    def __init__(self, dic):  # dic is for cemetery
        multiprocessing.Process.__init__(self)
        self.cemRPC = cemeteryRPC()

    def run(self):
        # Start reactor on a second process
        reactor.listenTCP(c.PORT_XMLRPC, server.Site(self.cemRPC))
        p = multiprocessing.Process(target=reactor.run)
        p.start()
        while not self.exit.is_set():
            self.cemRPC.cycle()
        #p.join()

if __name__ == "__main__":
    import errno
    test = cemeteryM()
    test.start()

    # trying new method
    notintr = False
    while not notintr:
        try:
            test.join()
            notintr = True
        except OSError, ose:
            if ose.errno != errno.EINTR:
                raise ose
        except KeyboardInterrupt:
            notintr = True
How should I go about joining these two processes so that their respective joins don't block?
(I am pretty confused by "join". Why would it block? I have googled but can't find much helpful explanation of the usage of join. Can someone explain this to me?)
Regards
Do you really need to run Twisted in a separate process? That looks pretty unusual to me.
Try to think of Twisted's Reactor as your main loop - and hang everything you need off that - rather than trying to run Twisted as a background task.
The more normal way of performing this sort of operation would be to use Twisted's .callLater or to add a LoopingCall object to the Reactor.
e.g.
from twisted.web import xmlrpc, server
from twisted.internet import task
from twisted.internet import reactor

class Example(xmlrpc.XMLRPC):
    def xmlrpc_add(self, a, b):
        return a + b

    def timer_event(self):
        print "one second"

r = Example()
m = task.LoopingCall(r.timer_event)
m.start(1.0)

reactor.listenTCP(7080, server.Site(r))
reactor.run()
Hey asdvawev - .join() in multiprocessing works just like .join() in threading - it's a blocking call the main thread runs to wait for the worker to shut down. If the worker never shuts down, then .join() will never return. For example:
class myproc(Process):
    def run(self):
        while True:
            time.sleep(1)
Calling run on this means that join() will never, ever return. Typically to prevent this I'll use an Event() object passed into the child process to allow me to signal the child when to exit:
class myproc(Process):
    def __init__(self, event):
        self.event = event
        Process.__init__(self)

    def run(self):
        while not self.event.is_set():
            time.sleep(1)
Alternatively, if your work is encapsulated in a queue - you can simply have the child process work off of the queue until it encounters a sentinel (typically a None entry in the queue) and then shut down.
Both of these suggestions mean that, prior to calling .join(), you can set the event or insert the sentinel, and when join() is called, the process will finish its current task and then exit properly.
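A small sketch of the queue-plus-sentinel variant with multiprocessing, using None as the sentinel; the names are illustrative, not from the answer above:

import multiprocessing

def worker(q):
    # Process items until the None sentinel arrives, then shut down cleanly.
    while True:
        item = q.get()
        if item is None:
            break
        print("processing", item)

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    for item in ("a", "b", "c"):
        q.put(item)
    q.put(None)   # sentinel: tells the worker to exit
    p.join()      # returns promptly because the worker shuts itself down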
