I have a Python script that I want to always run in the background. The application connects to an Oracle database, checks whether there is a message to be displayed to the user, and if there is, uses pynotify to display a notification.
I tried using a Timer object, but it only invokes the method once after the given interval. I want it to invoke the method repeatedly at that interval.
if __name__ == '__main__':
    applicationName = "WEWE"
    # Initialization of the Notification library
    if not pynotify.init(applicationName):
        sys.exit(1)
    t = threading.Timer(5.0, runWorks)
    t.start()
Will doing this work and is there a better way?
if __name__ == '__main__':
    applicationName = "WEEWRS"
    # Initialization of the Notification library
    if not pynotify.init(applicationName):
        sys.exit(1)
    while True:
        t = threading.Timer(5.0, runWorks)
        t.start()
But that gave me another problem.
thread.error: can't start new thread
(r.py:12227): GLib-ERROR **: creating thread 'gdbus': Error creating thread: Resource temporarily unavailable
I solved the problem by no longer creating a new thread on every loop iteration. The error below -
thread.error: can't start new thread
(r.py:12227): GLib-ERROR **: creating thread 'gdbus': Error creating thread: Resource temporarily unavailable
comes when there is a lack of resources. Below is the corrected code.
if __name__ == '__main__':
    applicationName = "DSS POS"
    # Initialization of the Notification library
    if not pynotify.init(applicationName):
        sys.exit(1)
    flagContinous = True
    timeout = 5
    # This loop will continuously keep the application in the background
    while flagContinous:
        time.sleep(timeout)
        runWorks()  # invoked every `timeout` seconds
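As a hedged alternative (not in the original post), a Timer can also re-arm itself after each run, which addresses the original "Timer only fires once" problem directly; runWorks is assumed to be the same function used in the snippets above:
import threading

def run_every(interval, func):
    """Call func() immediately, then schedule the next call `interval` seconds later."""
    func()
    t = threading.Timer(interval, run_every, args=(interval, func))
    t.daemon = True   # don't keep the interpreter alive just for the timer
    t.start()

# usage (runWorks as defined elsewhere in this script):
# run_every(5.0, runWorks)
Only one timer thread exists at any moment, so this also avoids the "can't start new thread" error shown above.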
I also used a lock file so that the script won't run multiple times.
import os
import sys
import time

import pynotify

pid = str(os.getpid())
pidfile = "/tmp/mydaemon.pid"
# If we already have a lock, block the program
if os.path.isfile(pidfile):
    print "%s already exists, exiting" % pidfile
    sys.exit()
else:
    open(pidfile, 'w').write(pid)
    # Do all the work
    applicationName = "DSS POS"
    # Initialization of the Notification library
    if not pynotify.init(applicationName):
        sys.exit(1)
    # Controls for the application
    flagContinous = True
    timeout = 5
    # This loop will continuously keep the application in the background
    while flagContinous:
        time.sleep(timeout)
        runWorks()
    # Release the file
    os.unlink(pidfile)
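A hedged refinement of the lock-file handling (not in the original post): wrapping the work in try/finally makes sure the PID file is removed even if runWorks() raises; otherwise a crash leaves a stale lock that blocks the next start.
import os
import sys
import time

pid = str(os.getpid())
pidfile = "/tmp/mydaemon.pid"

if os.path.isfile(pidfile):
    sys.exit("%s already exists, exiting" % pidfile)

with open(pidfile, 'w') as f:
    f.write(pid)

try:
    while True:
        time.sleep(5)
        runWorks()            # assumed to be defined as in the snippets above
finally:
    os.unlink(pidfile)        # runs even if runWorks() raises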
It is always better to use this simple Daemon script:
http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
It does a very good job!
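If pulling in a third-party package is acceptable, the python-daemon package (PEP 3143) gives roughly the same result as the linked Daemon class. A minimal sketch, assuming `pip install python-daemon` and the runWorks function from the snippets above:
import time
import daemon          # third-party: pip install python-daemon

def main_loop():
    while True:
        time.sleep(5)
        runWorks()     # the notification check from the snippets above

with daemon.DaemonContext():
    main_loop()        # runs detached from the terminal, in the background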
I'm writing a simple time-tracking application in Python 3 and PyQt5. Time is tracked in a separate thread. The function this thread runs doesn't access GUI code. On Windows 10 the application freezes after trying to close it; it's caused by calling thread.join(), and I need to end the process in Task Manager to close it. On Linux Mint it works fine. I'm using threads from the threading library; it doesn't work with QThreads either. If I comment out the thread.join() line it closes without a problem, but the code run by this thread doesn't finish.
The thread is initialized in the __init__() method of the Window class.
self.trackingThread = Thread(target = self.track)
Function that is responsible for tracking time:
def track(self):
    startTime = time()
    lastWindowChangeTime = startTime
    while self.running:
        # check if active window has changed
        if self.active_window_name != get_active_window_name():
            if self.active_window_name in self.applications_time:
                self.applications_time[self.active_window_name] += int(time() - lastWindowChangeTime) // 60  # time in minutes
            else:
                self.applications_time[self.active_window_name] = int(time() - lastWindowChangeTime) // 60  # time in minutes
            lastWindowChangeTime = time()
            self.active_window_name = get_active_window_name()
    totalTime = int(time() - startTime) // 60  # time in minutes
    if date.today() in self.daily_time:
        self.daily_time[date.today()] += totalTime
    else:
        self.daily_time[date.today()] = totalTime
Joining the thread:
def saveAndQuit(self):
    self.running = False
    self.trackingThread.join()  # the line that's causing application freeze
    self.save()
    QApplication.instance().quit()
EDIT:
Example:
https://pastebin.com/vt3BfKJL
relevant code:
def get_active_window_name():
    active_window_name = ''
    if system() == 'Linux':
        active_window_name = check_output(['xdotool', 'getactivewindow', 'getwindowname']).decode('utf-8')
    elif system() == 'Windows':
        window = GetForegroundWindow()
        active_window_name = GetWindowText(window)
    return active_window_name
EDIT2:
After removing those two lines the app closes without any problem. Is there any other way of getting the active window name on Windows besides win32gui?
window = GetForegroundWindow()
active_window_name = GetWindowText(window)
The issue occurs because GetWindowText() is blocking, and so your thread can never join. To understand why, we have to delve into the Win32 documentation:
If the target window is owned by the current process, GetWindowText causes a WM_GETTEXT message to be sent to the specified window or control. If the target window is owned by another process and has a caption, GetWindowText retrieves the window caption text. If the window does not have a caption, the return value is a null string. This behavior is by design. It allows applications to call GetWindowText without becoming unresponsive if the process that owns the target window is not responding. However, if the target window is not responding and it belongs to the calling application, GetWindowText will cause the calling application to become unresponsive.
You are attempting to join the thread from within a function (saveAndQuit) that has been called by the Qt event loop. As such, until this function returns, the Qt event loop will not process any messages. This means the call to GetWindowText in the other thread has sent a message to the Qt event loop which won't be processed until saveAndQuit finishes. However, saveAndQuit is waiting for the thread to finish, and so you have a deadlock!
There are several ways to solve the deadlock, probably the easiest to implement is to recursively call join, with a timeout, from the Qt event loop. It's somewhat "hacky", but other alternatives mean things like changing the way your thread behaves or using QThreads.
As such, I would modify your saveAndQuit as follows:
def saveAndQuit(self):
    self.running = False
    self.trackingThread.join(timeout=0.05)
    # if the thread is still alive, return control to the Qt event loop
    # and rerun this function in 50 milliseconds
    # (QTimer comes from PyQt5.QtCore)
    if self.trackingThread.is_alive():
        QTimer.singleShot(50, self.saveAndQuit)
        return
    # if the thread has ended, then save and quit!
    else:
        self.save()
        QApplication.instance().quit()
I had a similar problem and someone here on SO advised me to use something like this:
class MyThread(QThread):
    def __init__(self):
        super().__init__()
        # initialize your thread, use arguments in the constructor if needed

    def __del__(self):
        self.wait()

    def run(self):
        pass  # Do whatever you need here

def run_qt_app():
    my_thread = MyThread()
    my_thread.start()
    qt_app = QApplication(sys.argv)
    qt_app.aboutToQuit.connect(my_thread.terminate)
    # Setup your window here
    return qt_app.exec_()
Works fine for me: my_thread runs as long as qt_app is up and finishes its work on quit.
edit: typos
I am trying to build an application that will run a bash script every 10 minutes. I am using apscheduler to accomplish this, and when I run my code from the terminal it works like clockwork. However, when I try to run the code from another module it crashes. I suspect that the calling module is waiting for the "schedule" module to finish, and then crashes when that never happens.
Error code
/bin/bash: line 1: 13613 Killed ( python ) < /tmp/vIZsEfp/26
shell returned 137
Function that calls schedule
def shedual_toggled(self, widget):
    prosessSchedular.start_background_checker()
Schedule Program
def schedul_check():
    """set up to call prosess checker every 10 mins"""
    global counter  # assumes a module-level `counter = 0` exists
    print "%s check ran" % (counter)
    counter += 1
    app = prosessCheckerv3.call_bash()  # calls the bash file
    if app == False:
        print "error with bash"
        return False
    else:
        prosessCheckerv3.build_snap_shot(app)

def start_background_checker():
    scheduler = BackgroundScheduler()
    scheduler.add_job(schedul_check, 'interval', minutes=10)
    scheduler.start()
    while True:
        time.sleep(2)

if __name__ == '__main__':
    start_background_checker()
This program simply calls another one every 10 minutes. As a side note, I have been trying to stay as far away from multithreading as possible, but if that is required, so be it.
Well, I managed to figure it out myself. The issue is that GTK+ is not thread safe, so the timed module needs either to be run in another thread, or else you can release/enter the GDK lock before/after calling the module.
I just did it like this.
def shedual_toggeld(self, widget):
    onOffSwitch = widget.get_active()
    """ After main GTK has logically finished all GUI work, run thread on toggle button """
    thread = threading.Thread(target=self.call_schedual, args=(onOffSwitch,))
    thread.daemon = True
    thread.start()

def call_schedual(self, onOffSwitch):
    if onOffSwitch == True:
        self.sch.start_background_checker()
    else:
        self.sch.stop_background_checker()
This article goes through it in more detail. Hopefully someone else will find this useful.
http://blogs.operationaldynamics.com/andrew/software/gnome-desktop/gtk-thread-awareness
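For completeness, a minimal sketch of the pattern the article describes, assuming the newer PyGObject (gi) bindings rather than pygtk: do the timed work in a worker thread and hand any GUI updates back to the GTK main loop with GLib.idle_add, which is safe to call from any thread. The window and label here are just placeholders:
import threading
import time

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import GLib, Gtk

def update_label(label, text):
    label.set_text(text)   # runs in the GTK main loop, so it is thread safe
    return False           # returning False removes the idle callback

def worker(label):
    for i in range(5):
        time.sleep(2)      # stand-in for the 10-minute check
        GLib.idle_add(update_label, label, "check %d ran" % i)

window = Gtk.Window(title="demo")
label = Gtk.Label(label="waiting...")
window.add(label)
window.connect("destroy", Gtk.main_quit)
window.show_all()

t = threading.Thread(target=worker, args=(label,))
t.daemon = True
t.start()
Gtk.main()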
I'm using ftplib for connecting and getting file list from FTP server.
The problem I have is that the connection hangs from time to time and I don't know why. I'm running the Python script as a daemon, using threads.
See what I mean:
def main():
    signal.signal(signal.SIGINT, signal_handler)
    app.db = MySQLWrapper()
    try:
        app.opener = FTP_Opener()
        mainloop = MainLoop()
        while not app.terminate:
            # suspend main thread until the queue terminates
            # this lets us restart the queue automatically in case of unexpected shutdown
            mainloop.join(10)
            while (not app.terminate) and (not mainloop.isAlive()):
                time.sleep(script_timeout)
                print time.ctime(), "main: trying to restart the queue"
                try:
                    mainloop = MainLoop()
                except Exception:
                    time.sleep(60)
    finally:
        app.db.close()
        app.db = None
        app.opener = None
        mainloop = None
        try:
            os.unlink(PIDFILE)
        except:
            pass
        # give other threads time to terminate
        time.sleep(1)
        print time.ctime(), "main: main thread terminated"
MainLoop() has some functions to connect to the FTP server, download specific files, and disconnect from the server.
Here's how I get the file list:
file_list = app.opener.load_list()
And here is what the FTP_Opener.load_list() function looks like:
def load_list(self):
    attempts = 0
    while attempts <= ftp_config.load_max_attempts:
        attempts += 1
        filelist = []
        try:
            self._connect()
            self._chdir()
            # retrieve file list to 'filelist' var
            self.FTP.retrlines('LIST', lambda s: filelist.append(s))
            filelist = self._filter_filelist(self._parse_filelist(filelist))
            return filelist
        except Exception:
            print sys.exc_info()
            self._disconnect()
            sleep(0.1)
    print time.ctime(), "FTP Opener: can't load file list"
    return []
Why does the FTP connection sometimes hang, and how can I monitor this? If it happens, I would like to terminate the thread somehow and start a new one.
Thanks
If you are building for robustness, I would highly recommend that you look into using an event-driven method. One such framework with FTP support is Twisted (API).
The advantage is that you don't block the thread while waiting for I/O, and you can create simple timer functions to monitor your connections if you prefer. It also scales a lot better. It is slightly more complicated to code using event-driven patterns, so if this is just a simple script it may or may not be worth the extra effort, but since you write that you are writing a daemon, it might be worth looking into.
Here is an example of an FTP client: ftpclient.py
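If Twisted feels like overkill, a simpler option (a hedged sketch, not from the original post): ftplib itself accepts a timeout in seconds, so a hung control connection raises socket.timeout instead of blocking the worker thread forever. The host, credentials, and path below are placeholders:
import ftplib
import socket

def load_list_with_timeout(host, user, passwd, path, timeout=30):
    """Return the LIST output, or None if the server hangs or errors out."""
    filelist = []
    try:
        ftp = ftplib.FTP(host, user, passwd, timeout=timeout)
        try:
            ftp.cwd(path)
            ftp.retrlines('LIST', filelist.append)
        finally:
            ftp.close()
    except socket.timeout:
        return None                  # the server stopped responding
    except ftplib.all_errors:
        return None                  # any other FTP/socket error
    return filelist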
I create a Python thread. Once it's kicked off by calling its start() method, I monitor a flag inside the thread; if that flag == True, I know the user no longer wants the thread to keep running, so I'd like to do some housekeeping and terminate the thread.
I couldn't terminate the thread, however. I tried thread.join(), thread.exit(), and thread.quit(); all throw exceptions.
Here is what my thread looks like.
EDIT 1: Please notice the core() function is called within the standard run() function, which I haven't shown here.
EDIT 2: I just tried sys.exit() when the StopFlag is true, and it looks like the thread terminates! Is that safe to go with?
class workingThread(Thread):
    def __init__(self, gui, testCase):
        Thread.__init__(self)
        self.myName = Thread.getName(self)
        self.start()  # start the thread

    def core(self, arg, f):  # Where I check the flag and run the actual code
        # STOP
        if (self.StopFlag == True):
            if self.isAlive():
                self.doHouseCleaning()
                # none of the following works, all throw exceptions
                self.exit()
                self.join()
                self._Thread__stop()
                self._Thread_delete()
                self.quit()
            # Check if it's terminated or not
            if not(self.isAlive()):
                print self.myName + " terminated "
        # PAUSE
        elif (self.StopFlag == False) and not(self.isSet()):
            print self.myName + " paused"
            while not(self.isSet()):
                pass
        # RUN
        elif (self.StopFlag == False) and self.isSet():
            r = f(arg)
Several problems here; there could be others too, but since you're not showing the entire program or the specific exceptions, this is the best I can do:
The task the thread should be performing should be called "run" or passed to the Thread constructor.
A thread doesn't call join() on itself, the parent process that started the thread calls join(), which makes the parent process block until the thread returns.
Usually the parent process should be calling start(), which runs the thread's run() method in the new thread.
The thread is complete once it finishes (returns from) the run() function.
Simple example:
import threading
import time

class MyThread(threading.Thread):
    def __init__(self):
        super(MyThread, self).__init__()
        self.count = 5

    def run(self):
        while self.count:
            print("I'm running for %i more seconds" % self.count)
            time.sleep(1)
            self.count -= 1

t = MyThread()
print("Starting %s" % t)
t.start()
# do whatever you need to do while the other thread is running
t.join()
print("%s finished" % t)
Output:
Starting <MyThread(Thread-1, initial)>
I'm running for 5 more seconds
I'm running for 4 more seconds
I'm running for 3 more seconds
I'm running for 2 more seconds
I'm running for 1 more seconds
<MyThread(Thread-1, stopped 6712)> finished
There's no explicit way to kill a thread, either from a reference to thread instance or from the threading module.
That being said, common use cases for running multiple threads do allow opportunities to prevent them from running indefinitely. If, say, you're making connections to an external resource via urllib2, you could always specify a timeout:
import urllib2
urllib2.urlopen(url[, data][, timeout])
The same is true for sockets:
import socket
socket.setdefaulttimeout(timeout)
Note that calling the join([timeout]) method of a thread with a timeout specified will only block for the timeout (or until the thread terminates). It doesn't kill the thread.
If you want to ensure that the thread will terminate when your program finishes, just make sure to set the daemon attribute of the thread object to True before invoking its start() method.
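A minimal sketch (not from the answer above) combining both points: the thread is marked as a daemon so it cannot block interpreter shutdown, and it watches a threading.Event as a cooperative stop flag (essentially what the StopFlag in the question is trying to be), so it can do its housekeeping and return from its target on request:
import threading
import time

stop_event = threading.Event()

def worker():
    while not stop_event.is_set():
        # ... do one unit of work here ...
        stop_event.wait(1.0)   # doubles as an interruptible sleep
    # housekeeping goes here, then the function returns and the thread ends

t = threading.Thread(target=worker)
t.daemon = True                # won't keep the interpreter alive on exit
t.start()

time.sleep(3)
stop_event.set()               # ask the worker to stop...
t.join()                       # ...and wait for it to finish cleanly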
I've been messing around with a Django project.
What I want to achieve is the Django project starting up in another process while the parent process initiates a load of arbitrary code I have written (the backend of my project). Obviously, the Django and parent processes communicate. I'd like a dictionary to be read and written to by both processes.
I have the following code, based upon examples from here:
#!/usr/bin/env python
from multiprocessing import Process, Manager
import os
import time
from dj import manage

def django(d, l):
    print "starting django"
    d[1] = '1'
    d['2'] = 2
    d[0.25] = None
    l.reverse()
    manage.start()

def stop(d, l):
    print "stopping"
    print d
    print l

if (__name__ == '__main__'):
    os.system('clear')
    print "starting backend..."
    time.sleep(1)
    print "backend start complete."

    manager = Manager()
    d = manager.dict()
    l = manager.list(range(10))

    p = Process(target=django, args=(d, l))
    p.start()

    try:
        p.join()
    except KeyboardInterrupt:
        print "interrupt detected"
        stop(d, l)
When I hit CTRL+C to kill the Django process, I see the Django server shut down and stop() being called. Then what I want to see is the dictionary, d, and the list, l, being printed.
Output is:
starting backend...
backend start complete.
starting django
Validating models...
0 errors found
Django version 1.3, using settings 'dj.settings'
Development server is running at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
^Cinterrupt detected
stopping
<DictProxy object, typeid 'dict' at 0x141ae10; '__str__()' failed>
<ListProxy object, typeid 'list' at 0x1425090; '__str__()' failed>
It can't find the dictionary or list after the CTRL+C event. Has the Manager process been terminated when the SIGINT is issued? If it has, is there any way to stop it from terminating there and have it terminate with the main process instead?
I hope this makes sense.
Any help greatly received.
OK, as far as I can see there is no possibility to simply ignore the exception. When one is raised, you always go straight into an except block if there is one. What I'm proposing here is something that will restart your Django application on each ^C, but note that some back door for exiting should be added.
In theory, you could wrap each line in a try..except block, which would act like a restart of each line and be less visible than a restart of the whole script. If anyone finds a really working solution, I will be the first one to upvote it.
You can move everything inside your if (__name__ == '__main__'): block into a main() function and leave something like this:
def main():
    # all the code...
    pass

if (__name__ == '__main__'):
    while True:
        try:
            main()
        except KeyboardInterrupt:
            pass
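To come back to the actual question about the proxies failing: the Manager runs in its own process, and Ctrl+C in a terminal delivers SIGINT to the whole process group, so the manager can be killed before stop() tries to print d and l. A hedged sketch of one common workaround, not part of the answer above: start the manager explicitly and have its process ignore SIGINT.
import signal
from multiprocessing.managers import SyncManager

def _ignore_sigint():
    # runs inside the manager process right after it starts
    signal.signal(signal.SIGINT, signal.SIG_IGN)

manager = SyncManager()
manager.start(_ignore_sigint)   # the manager process now survives Ctrl+C
d = manager.dict()
l = manager.list(range(10))
# ... create the Process, join it, and print d and l as before ...
manager.shutdown()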