I have a Python script running on my server like this:
python script.py &
The script works fine, but I'm constantly adding new things to it and re-running it. Some days it runs for days without any problem, but sometimes it stops running (it is not running out of memory). Since I started the script in the background, I have no idea how to check the exception or error that caused it to stop. I'm on an Ubuntu server box running on Amazon. Any advice on how to approach this?
I use something like this. It will dump the exception that caused termination to your syslog, which you can see by examining /var/log/syslog after your script has stopped.
import traceback
import syslog

def syslog_trace(trace):
    """Log a Python stack trace to syslog."""
    for line in trace.split('\n'):
        if line:
            syslog.syslog(line)

def main():
    ...  # Your actual program here

if __name__ == '__main__':
    try:
        main()
    except Exception:
        syslog_trace(traceback.format_exc())
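The same idea can be made process-wide with sys.excepthook, so any uncaught exception gets logged without wrapping main() in a try block. A minimal sketch writing to a plain file instead of syslog (the name crash.log is just an example):

```python
import sys
import traceback

def log_uncaught(exc_type, exc_value, exc_tb):
    # Invoked automatically for any exception that would kill the script
    with open("crash.log", "a") as f:
        f.write("".join(traceback.format_exception(exc_type, exc_value, exc_tb)))

sys.excepthook = log_uncaught

# Demonstrate the handler without actually crashing the interpreter:
try:
    raise ValueError("simulated failure")
except ValueError:
    log_uncaught(*sys.exc_info())
```

After a real crash you can then examine crash.log the same way you would examine /var/log/syslog.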
I'm trying to run one script from another script. I've read What is the best way to call a script from another script?, and I can't seem to get this to work.
My main script (Script A) does a lot of image processing and GUI interactions. However, an error message or other window might randomly appear, interrupting the GUI interactions until the message or window is closed.
I've written a second script (Script B) that I want to run perpetually that closes these windows or error messages when discovered.
I'm trying to call Script B from Script A like this:
import close_windows
close_windows.closeWindows
print("Starting Close Windows....")
And Script B is:
import pyautogui as py
def closeWindows():
image = r'C:\image.jpg'
image2 = r'C:\image2.jpg'
while True:
foundimage = py.locateCenterOnScreen(image)
foundimage2 = py.locateCenterOnScreen(image2)
if foundimage or foundimage2 != None:
py.click(1887, 65)
When I run Script B independently it works; when I try running it via Script A with close_windows.closeWindows, nothing happens.
I've also tried from close_windows import closeWindows and calling closeWindows, but again, nothing happens.
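One thing worth noting (my observation, not from the original posts): even with parentheses added, close_windows.closeWindows() would block Script A forever, because the function contains a while True loop and never returns. A common workaround is to run the watcher in a daemon thread; this sketch uses a stand-in watch function in place of closeWindows, since pyautogui needs a live screen:

```python
import threading
import time

def watch(stop_event, hits):
    # Stand-in for closeWindows(): poll until asked to stop.
    while not stop_event.is_set():
        hits.append("checked screen")  # the pyautogui calls would go here
        time.sleep(0.01)

stop = threading.Event()
results = []
# daemon=True means the watcher won't keep the process alive on exit
t = threading.Thread(target=watch, args=(stop, results), daemon=True)
t.start()

print("Starting Close Windows....")  # Script A carries on with its own work
time.sleep(0.05)

stop.set()  # shut the watcher down cleanly when Script A is done
t.join()
```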
I'm working on a BCP wrapper method in Python, but have run into an issue invoking the command with subprocess.
As far as I can tell, the BCP command doesn't return any value or indication that it has completed outside of what it prints to the terminal window, which causes subprocess.call or subprocess.run to hang while they wait for a return.
subprocess.Popen allows a manual .terminate() method, but I'm having issues getting the table to write afterwards.
The bcp command works from the command line with no issues: it loads data from a source CSV according to a .fmt file and writes an error log file. My script is able to dismount the file from the log path, so I would consider the command itself irrelevant; the question is about the behavior of the subprocess module.
This is what I'm trying at the moment:
process = subprocess.Popen(bcp_command)
try:
    path = Path(log_path)
    sleep_counter = 0
    while not path.is_file() and sleep_counter < 16:
        sleep(1)
        sleep_counter += 1
finally:
    process.terminate()
    self.datacommand = datacommand
My idea was to check that the error log file had been written by the bcp command as a way to tell that the process had finished. With this, my script no longer freezes, and the files appear to be written and dismounted successfully later in the script. The script also finishes in less than the 15 seconds the sleep loop would take to time out.
When the process froze my Spyder shell (and IDLE, so it's not the IDE), I could force-terminate it by closing the console itself, and it would at least write to the server.
However, it seems that by using .terminate() the command isn't actually writing anything to the server.
I also checked whether a dumb 15-second time-out (the BCP takes about 2 seconds with this data) would work, in case it was writing an error log before the load finished.
That still resulted in an empty table on SQL Server.
How can I get subprocess to execute a command without hanging?
Well, it seems to be a more general issue about calling helper functions with Popen, as seen here:
https://github.com/dropbox/pyannotate/issues/67
I was able to fix the hanging issue by changing it to:
subprocess.Popen(bcp_command, close_fds=True)
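Putting that together with a bounded wait: subprocess.Popen.communicate accepts a timeout, which avoids polling for the log file entirely. A sketch, with a trivial Python one-liner standing in for the real bcp_command (note that on Python 3.7+ close_fds=True is already the default):

```python
import subprocess
import sys

# Stand-in for the real bcp command line
bcp_command = [sys.executable, "-c", "print('load complete')"]

process = subprocess.Popen(bcp_command, close_fds=True,
                           stdout=subprocess.PIPE, text=True)
try:
    # Wait up to 15 seconds for the process to finish on its own
    out, _ = process.communicate(timeout=15)
except subprocess.TimeoutExpired:
    process.terminate()  # fall back to the original workaround
    out, _ = process.communicate()
```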
I have a script started with nohup python3 script.py &. It looks something like this:
import datetime
import logging
import time

import thing
import anotherthing

logfile = "logfile {}".format(datetime.datetime.today())

while True:
    try:
        logging.debug("Started loop.")
        do_some_stuff()
        logging.debug("Stuff was done.")
    except Exception:
        logging.exception("message")
    logging.debug("Starting sleep.")
    time.sleep(60)
This works fine; however, it seems to hang on time.sleep() (as in, it just stops doing anything without the process dying) after about two days. According to the logs, all parts of the script execute fine, but it always hangs on the sleep part and doesn't come back. I checked for memory leaks, I/O hangups, and connection timeouts, and none of those seem to be the case.
What could be the cause of that behavior and why?
EDIT: Added logging to pinpoint the cause. The logs always end on DEBUG Starting sleep.
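A way to diagnose a hang like this (a suggestion, not from the original thread): the standard faulthandler module can dump every thread's traceback on demand, which shows exactly which line the process is stuck on. On Unix you can register a signal handler and then poke the live process with kill -USR1 <pid>:

```python
import faulthandler
import signal
import tempfile

# Ask the running process for a traceback from outside with: kill -USR1 <pid>
faulthandler.register(signal.SIGUSR1)

# The same dump can also be produced directly; here it goes to a temp file
# (faulthandler writes to a real file descriptor, not an in-memory buffer)
with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f)
    f.seek(0)
    dump = f.read()
```

If the dump shows the main thread sitting inside time.sleep, the hang really is in the sleep itself; otherwise it points at the true culprit.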
So I have a set of Python scripts. In an attempt to make a simple GUI, I have been combining HTML and CGI. So far so good. However, one of my scripts takes a long time to complete (>2 hours), so obviously, when I run it on my server (localhost on Mac) I get a "gateway timeout" error. I was reading about forking a subprocess and checking whether the process has completed.
This is what I came up with, but it isn't working :(.
import os, sys, time

# upstream stuff happening as part of main script

pid = os.fork()
if pid == 0:
    os.system("external_program")  # this program will make "text.txt" as output
    exit()

# check whether text.txt has been written; if not, print "processing" and keep waiting
while os.stat("text.txt").st_size == 0:
    print "processing"
    sys.stdout.flush()
    time.sleep(300)

# downstream stuff happening
As always, any help is appreciated.
Did you try this one:
import os
processing = len(os.popen('ps aux | grep yourscript.py').readlines()) > 2
It tells you whether your script is still running (a boolean value). The threshold is 2 because the grep process itself, and typically the shell running the pipeline, also contain yourscript.py in their own command lines and therefore show up in the ps output.
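A variant that avoids parsing ps output (my addition, not part of the original answer): if you know the PID, for instance because the script writes it to a pidfile at startup, os.kill(pid, 0) checks for the process without sending any signal:

```python
import os

def is_running(pid):
    """Return True if a process with this PID currently exists (Unix)."""
    try:
        os.kill(pid, 0)  # signal 0 performs an existence/permission check only
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # the process exists but belongs to another user
    return True

print(is_running(os.getpid()))  # the current process certainly exists
```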
How can a log of the crash of a Python script running on Windows be generated? A Python program mysteriously crashes every few hours, and the application window is closed, so there is no sign of the error messages from the crash.
On Linux we can do python script.py >> /logdir/script.py.log 2>&1. What about on Windows?
The script running is basically an infinite loop:
while True:
if ...
...
else:
....
How about

logger = logging.getLogger("myApplication")

while True:
    try:
        if ...
            ...
        else:
            ....
    except Exception:
        logger.exception("???")

and setting up logging to log to a file?
Then, even if there is an exception, the program can keep going. If it truly is a crash that can't be caught as an exception, put logging statements throughout your program so you can see what completed successfully before the crash.
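A minimal file-logging setup for that (the filename myapp.log and the format string are just examples):

```python
import logging

logger = logging.getLogger("myApplication")
logger.setLevel(logging.DEBUG)

handler = logging.FileHandler("myapp.log")
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
logger.addHandler(handler)

try:
    1 / 0  # stand-in for the body of the loop
except Exception:
    # logger.exception records the message plus the full traceback
    logger.exception("???")
```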