I am new here, but I have recently been messing around with Python and Qt. My situation is that one of the scripts I call runs a lot of OS commands and basically waits for a response. When I call this script it runs fine and acts accordingly, except that in my main program the screen is frozen until I exit the cmd window. I think this is because my script just sits there waiting for a response. Is there any way to make it so that, even while the script is running and waiting on the cmd, the user can still use other parts of the main program?
As mentioned in the comments, you will need to use threading. Threading allows multiple functions to execute at the same time. Check out this link: Python threading.
You'll just have to run your side script on a different thread.
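If it helps, here is a minimal sketch of the idea; run_side_script is just a stand-in name for whatever blocking work your script does:

import threading

def run_side_script():
    # the OS commands that sit and wait for a response go here
    pass

worker = threading.Thread(target=run_side_script, daemon=True)
worker.start()
# The Qt event loop keeps running, so the rest of the UI stays usable.
# Just don't touch Qt widgets from the worker thread - pass results back
# to the main thread with signals or a queue instead.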
Related
I have a Python script I use at work that checks the contents of a webpage and reports any changes to me. It used to run fine, checking the site every 5 minutes, until our company switched some security software. The script will still run but stops after an hour or so. It's not consistent; it will sometimes be a few hours, but about an hour seems to be the average. No errors are raised or reported in the shell. Is there a way to have the script restarted automatically? The code is below; it used to just call the function and then sleep, but I added the for loop and the print line for debugging, to see what time it stops.
import time
import datetime
import txtWip
while True:
    txtWip.main()
    for i in range(1, 300, 100):
        current_time = time.strftime(r"%d.%m.%Y %H:%M:%S", time.localtime())
        print(current_time)
        time.sleep(100)
What you want is for your program to run as a daemon[1]: a background process that is detached from your terminal, so it is not killed by SIGHUP when your session ends. You also want this daemon to restart itself on termination - effectively making it run forever.
Rather than write your own daemon script, it's far easier to do one of the following:
If on Linux, use Upstart.
This is a replacement for the traditional init.d system that supervises processes while your machine is running. It is capable of respawning a process in the event of an unexpected crash - see an example here, and a Python-specific example here. It is the gold standard for such tasks on this platform.
For more recent Ubuntu releases, systemd has taken over from Upstart and should be used instead.
An alternative that uses an external bash script, and doesn't require messing around with Upstart, is also a possibility.
If on Windows, use ReStartMe
If on Mac, install and configure runit appropriately
[1] The following is not cross-platform advice. You may or may not actually need to run this as a true daemon. A simple background job (i.e. invoked by appending an & to the command when you run python <file_name>.py) may be sufficient - this will quit when the terminal you started it in quits, but you can get around that by using the Linux utility screen.
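If a full init-system setup is more than you need, a tiny Python wrapper can also approximate the restart-on-exit behaviour. A rough sketch, assuming your checker lives in a file called checker.py (the filename is just an example):

import subprocess
import sys
import time

while True:
    # relaunch the watched script every time it exits, for whatever reason
    code = subprocess.call([sys.executable, "checker.py"])
    print("checker.py exited with code", code, "- restarting in 5 seconds")
    time.sleep(5)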
I have a python script with GUI (using wxpython). I want to run it continuously on my (k)ubuntu system as a service. In case it exits due to some exception, I need it to restart automatically. I tried upstart but it immediately stops the service as soon as it starts.
Is there a super simple way to do this? (I tried restarting the python script within itself, tried simple shell scripts with infinite loops. But need something robust and reliable.)
Any help is greatly appreciated.
I know you said you tried shell scripts with infinite loops, but did you try using an "outer" Python script that runs perpetually as a service? It could just catch the exceptions and restart the Python GUI script whenever an exception occurs.
Something like:
import myGUI

while True:
    try:
        myGUI.runGUICode()  # make sure the execution stays in this loop
    except Exception:
        pass  # or do some recovery, initialization, and logging here
I would like to attach gdb to a dying process, because the program runs in production and I need to debug it there; if I run the program under gdb it slows down, and the computers are not that great. I tried catching signals in the application and attaching gdb there, but it only works if I send the signals myself. When the program stalls (it is multi-threaded, and the main thread deadlocks or somehow gets stuck, or apparently stuck) and the user forces it to quit in the desktop environment (LXDE), I can't catch any signal. The program is all Python with PySide for the graphical interface. I only care about Linux.
My idea is to create a kernel driver and try to hook process termination or signal delivery there, but since that would be a lot of hassle I would like to ask if there is an existing tool for this kind of thing, or some information that I could make use of. Thanks.
There might be a way to do what you want, but if you can't, perhaps it would be sufficient to freeze the program and inspect its memory image?
Enable core dump file generation before it starts, and then once the process is hosed, terminate it with kill, using a signal that produces a core file (e.g. kill -ABRT; plain SIGTERM or SIGKILL will not dump core). Then use gdb to open the core file and analyze what was happening.
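If you'd rather not depend on running ulimit -c unlimited in the shell beforehand, the program can raise its own core-size limit at startup using the resource module. A small sketch:

import resource

# allow this process to write core files of unlimited size
# (works as long as the hard limit on the system permits it)
resource.setrlimit(resource.RLIMIT_CORE,
                   (resource.RLIM_INFINITY, resource.RLIM_INFINITY))

You can then open the dump with something like gdb $(which python3) core; CPython also ships gdb helpers (the py-bt command) that can show the Python-level stack rather than just the C frames.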
How to start an always on Python Interpreter on a server?
If bash starts multiple Python programs, how can I run them all on just one interpreter?
And how can I start a new interpreter based on the number of bash requests - say, after X requests to Python programs, a new interpreter should start?
EDIT: Not a copy of https://stackoverflow.com/questions/16372590/should-i-run-1000-python-scripts-at-once?rq=1
Requests may come pouring in sequentially
You cannot have new Python programs started through bash run on the same interpreter; each program will always get its own. If you want to limit the number of Python programs running, the best approach would be to have a Python daemon process running on your server and, instead of creating a new program through bash on each request, signal the daemon process to create a thread to handle the task.
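A rough sketch of that arrangement, using a plain local TCP socket as the signalling channel; the port number and the handler body are assumptions, not anything from your setup:

import socket
import threading

def handle_task(conn):
    # do here whatever a separate Python program would otherwise have done
    data = conn.recv(1024)
    conn.sendall(b"done: " + data)
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9000))
server.listen(5)

while True:
    conn, addr = server.accept()
    # one thread per request, all inside a single long-lived interpreter
    threading.Thread(target=handle_task, args=(conn,), daemon=True).start()

Your bash side would then send a small request to 127.0.0.1:9000 (with nc, curl, or a few lines of Python) instead of launching a fresh interpreter each time.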
To run a program forever in python:
while True:
    do_work()
You could look at spawning threads for incoming requests. Look at the threading.Thread class.
from threading import Thread

task = Thread(target=do_work, args=())
task.start()
You probably want to take a look at http://docs.python.org/3/library/threading.html and http://docs.python.org/3/library/multiprocessing.html; threading is more lightweight, but because of the Global Interpreter Lock only one thread executes Python code at a time (so it won't take advantage of multicore/hyperthreaded systems for CPU-bound work), while multiprocessing allows true simultaneous execution but is a bit heavier than threading, and may not be necessary if the threads/processes spend most of their time doing I/O.
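For a feel of the difference, here is a hedged sketch of both approaches side by side using concurrent.futures; do_work is just a placeholder:

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def do_work(n):
    return n * n

if __name__ == "__main__":
    # threads: cheap to start, fine for I/O-bound work, limited by the GIL
    with ThreadPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(do_work, range(8))))

    # processes: true parallelism for CPU-bound work, heavier to start
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(do_work, range(8))))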
I'm developing a script that runs a program with other scripts over and over for testing purposes.
How it currently works is I have one Python script which I launch. That script calls the program and loads the other scripts. It kills the program after 60 seconds to launch the program again with the next script.
For some scripts, 60 seconds is too long, so I was wondering if I can set a FLAG variable (not in the main script) such that when a script finishes it sets FLAG, and the main script can read FLAG and kill the process?
Thanks for the help; my writing may be confusing, so please let me know if you cannot fully understand.
You could use atexit to write a small file (flag.txt) when script1.py exits. mainscript.py could regularly be checking for the existence of flag.txt and when it finds it, will kill program.exe and exit.
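A minimal sketch of both halves; the names flag.txt and script1.py, and the way program.exe is launched, are assumptions based on the description above:

# in script1.py - write the flag file no matter how the script finishes
import atexit

def write_flag():
    with open("flag.txt", "w") as f:
        f.write("done")

atexit.register(write_flag)

# in mainscript.py - poll for the flag instead of always waiting 60 seconds
import os
import subprocess
import time

proc = subprocess.Popen(["program.exe"])  # launching script1.py is omitted here
waited = 0
while not os.path.exists("flag.txt") and waited < 60:
    time.sleep(1)
    waited += 1
if os.path.exists("flag.txt"):
    os.remove("flag.txt")
proc.kill()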
Edit:
I've set persistent environment variables using this, but I only use it for Python-based installation scripts. Usually I'm pretty shy about messing with the registry. (This is for Windows, by the way.)
This seems like a perfect use case for sockets, in particular asyncore.
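If you go that route, a bare-bones sketch with asyncore might look like the following (the port and class name are made up for illustration; note that asyncore is deprecated in recent Python versions, where asyncio is the usual replacement):

import asyncore
import socket

class FlagListener(asyncore.dispatcher):
    """Any connection on this port means 'the current script is finished'."""

    def __init__(self, port=8765):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind(("127.0.0.1", port))
        self.listen(1)
        self.finished = False

    def handle_accept(self):
        pair = self.accept()
        if pair is not None:
            sock, _addr = pair
            sock.close()
            self.finished = True  # the main script can now kill the program

listener = FlagListener()
# poll the event loop briefly on each pass of the main script's own loop
asyncore.loop(timeout=1, count=1)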
You cannot use environment variables in this way. As you have discovered, the variable is not persistent after the application that set it completes.