I have a Python script that parses some files, but sometimes unknown errors appear and the script fails.
So I tried to write a watchdog program that checks a file containing the server's PID and a timestamp; the main program updates the timestamp every 30 seconds.
import subprocess
import time

def start_server():
    # Raw string so the backslash in the Windows path is not
    # treated as an escape sequence.
    subprocess.Popen(r"C:\Server.py", shell=True)

while True:
    # server.conf holds "<pid> <timestamp>", rewritten by Server.py.
    with open(r'C:\server.conf', 'r') as f:
        text = f.read().split(' ')
    pid = int(text[0])
    lastTime = text[1]
    # No heartbeat for more than 90 seconds: assume Server.py hung.
    if float(time.time()) - float(lastTime) > 90:
        temp = subprocess.Popen("taskkill /F /T /PID %i" % pid,
                                stdout=subprocess.PIPE, shell=True)
        out, err = temp.communicate()
        print ' [INFO] Server.py was killed, and started again.'
        start_server()
    time.sleep(30)
But this doesn't start a new Server.py if the last instance of the program fails.
Any idea how I can make this work?
Thanks!
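For context, a minimal sketch of the heartbeat side that Server.py is assumed to implement; the file name and "<pid> <timestamp>" format come from the code above, everything else is illustrative:

import os
import time

while True:
    # Rewrite "<pid> <timestamp>" so the watchdog can tell this
    # process is still alive; the real parsing work would happen
    # between beats or in another thread.
    with open(r'C:\server.conf', 'w') as f:
        f.write('%d %f' % (os.getpid(), time.time()))
    time.sleep(30)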
I automate a process with Task Scheduler that occasionally leaves EXCEL.EXE processes hung in the background, and these interfere with future runs. I found a way to list them to a file with a .bat script. The scheduled task starts code that calls a .vbs file, which executes a macro in Excel, so Task Scheduler can't be set up to kill the process (PID) if it hangs.
tasklist /V /FO csv /FI "IMAGENAME eq EXCEL.EXE" > C:\[path]\Exceltasks.csv
creates...(example)
"Image Name","PID","Session Name","Session#","Mem Usage","Status","User Name","CPU Time","Window Title"
"EXCEL.EXE","62020","Console","1","622,528 K","Running","[network]\[user]","0:03:31","Work Record.xlsx - Excel"
"EXCEL.EXE","47536","Console","1","78,760 K","Running","[network]\[user]","0:00:00","N/A"
"EXCEL.EXE","61472","Console","1","620,752 K","Running","[network]\[user]","0:03:38","N/A"
"EXCEL.EXE","54156","Console","1","358,648 K","Not Responding","[network]\[user]","0:00:20","HardwareMonitorWindow"
"EXCEL.EXE","54604","Console","1","77,180 K","Running","[network]\[user]","0:00:00","N/A"
"EXCEL.EXE","45948","Console","1","368,400 K","Running","[network]\[user]","0:00:24","Publishing..."
Then I have this Python script that runs through that file and uses taskkill to clear them out.
import csv
import os

FindValue = "EXCEL.EXE"
Substring = "- Excel"

with open("C:/[path]/Exceltasks.csv") as f:
    reader = csv.reader(f)
    for row in reader:
        if row[0] == "INFO: No tasks are running which match the specified criteria.":
            print("No " + FindValue + " processes running")
            break
        else:
            print("Check PID " + row[1])
            if row[0] == FindValue:
                # A real workbook title contains "- Excel"; leave those alone.
                if row[8].find(Substring) != -1:
                    print("Don't kill task " + row[1])
                else:
                    Killstring = "taskkill /F /PID " + row[1]
                    print(Killstring)
                    os.system('cmd /k ' + Killstring)
The first if breaks out if there are no EXCEL.EXE processes.
The second if works, but only on the first of what is often 3+ EXCEL.EXE processes.
How do I get the for loop to not stop after the first os taskkill command?
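A likely culprit, noted in passing: cmd /k runs the command and then keeps the shell open, so os.system does not return until that console window is closed by hand, while cmd /c runs the command and then exits. A one-character sketch of the fix:

# /C lets cmd exit as soon as taskkill finishes, so os.system
# returns and the for loop moves on to the next row.
os.system('cmd /c ' + Killstring)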
Why does subprocess.PIPE prevent a called executable from closing?
I use the following script to call an executable file with a number of inputs:
import subprocess, time
CREATE_NO_WINDOW = 0x08000000
my_proc = subprocess.Popen("myApp.exe " + ' '.join([str(input1), str(input2), str(input3)]),
startupinfo=subprocess.STARTUPINFO(), stdout=subprocess.PIPE,
creationflags = CREATE_NO_WINDOW)
Then I monitor whether the application has finished within a given time (300 seconds); if not, I just kill it. I also read the output of the application to know whether it failed to do the required tasks.
proc_wait_time = 300
start_time = time.time()
sol_status = 'Fail'
while time.time() - start_time < proc_wait_time:
    if my_proc.poll() is None:
        time.sleep(1)
    else:
        # The process finished on its own: read its output.
        try:
            sol_status = my_proc.stdout.read().replace('\r\n \r\n', '')
            break
        except:
            sol_status = 'Fail'
            break
else:
    # The while condition expired without a break: the process timed out.
    try:
        my_proc.kill()
    except:
        pass
    sol_status = 'Frozen'
if sol_status in ['Fail', 'Frozen']:
    print ('Failed running my_proc')
As you can note from the code, I need to wait for myApp.exe to finish; however, sometimes myApp.exe freezes. Since the script above is part of a loop, I need to identify such a situation (by a timer), keep track of it, and kill myApp.exe so that the whole script doesn't get stuck.
Now, the issue is that if I use subprocess.PIPE (which I suppose I have to if I want to read the output of the application), then myApp.exe doesn't close after finishing, and consequently my_proc.poll() is None is always True.
I am using Python 2.7.
A pipe has a limited buffer; if the child writes a large amount of data to subprocess.PIPE and nothing on the Python side reads it, the child blocks on the write and never exits. The easiest way to fix this is to send the data directly to a file:
_stdoutHandler = open('C:/somePath/stdout.log', 'w')
_stderrHandler = open('C:/somePath/stderr.log', 'w')
my_proc = subprocess.Popen(
    "myApp.exe " + ' '.join([str(input1), str(input2), str(input3)]),
    stdout=_stdoutHandler,
    stderr=_stderrHandler,
    startupinfo=subprocess.STARTUPINFO(),
    creationflags=CREATE_NO_WINDOW
)
...
_stdoutHandler.close()
_stderrHandler.close()
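With the output going to files, the poll/kill loop above works unchanged; once the process has exited (or been killed), the result can be read back from the log. A short sketch reusing the path from the snippet above:

# After my_proc.poll() reports an exit code, read the log file
# in place of my_proc.stdout.
with open('C:/somePath/stdout.log') as log:
    sol_status = log.read().replace('\r\n \r\n', '')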
I am currently writing my first Python program (in Python 2.6.6). The program facilitates starting and stopping different applications running on a server, giving the user common commands (like starting and stopping system services on a Linux server).
I start the applications' startup scripts with:
p = subprocess.Popen(startCommand, shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, err = p.communicate()
print(output)
The problem is that the startup script of one application stays in the foreground, so p.communicate() waits forever. I have already tried putting "nohup startCommand &" in front of the startCommand, but that did not work as expected.
As a workaround I now use the following bash script to call the application's start script:
#!/bin/bash
LOGFILE="/opt/scripts/bin/logs/SomeServerApplicationStart.log"
nohup /opt/someDir/startSomeServerApplication.sh >${LOGFILE} 2>&1 &
STARTUPOK=$(tail -1 ${LOGFILE} | grep "Server started in RUNNING mode" | wc -l)
COUNTER=0
while [ $STARTUPOK -ne 1 ] && [ $COUNTER -lt 100 ]; do
    STARTUPOK=$(tail -1 ${LOGFILE} | grep "Server started in RUNNING mode" | wc -l)
    if (( STARTUPOK )); then
        echo "STARTUP OK"
        exit 0
    fi
    sleep 1
    COUNTER=$(( $COUNTER + 1 ))
done
echo "STARTUP FAILED"
The bash script is called from my Python code. This workaround works perfectly, but I would prefer to do it all in Python...
Is subprocess.Popen the wrong way? How could I accomplish my task in Python only?
First, it is easy not to block the Python script in communicate... by not calling communicate! Just read from the command's output (or error output) until you find the correct message, then just forget about the command.
# to avoid waiting for an EOF on a pipe ...
def getlines(fd):
    line = bytearray()
    while True:
        c = fd.read(1)
        if not c:  # read(1) returns '' at EOF, not None
            return
        line += c
        if c == '\n':
            yield str(line)
            del line[:]
p = subprocess.Popen(startCommand, shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)  # send stderr to stdout, same as 2>&1 for bash
for line in getlines(p.stdout):
    if "Server started in RUNNING mode" in line:
        print("STARTUP OK")
        break
else:  # end of input without getting the startup message
    print("STARTUP FAILED")
p.poll()  # get status from child to avoid a zombie
# other error processing
The problem with the above is that the server is still a child of the Python process and could get unwanted signals such as SIGHUP. If you want to make it a daemon, you must first start a subprocess that in turn starts your server. That way, when the first child ends it can be waited on by the caller, and the server will get a PPID of 1 (adopted by the init process). You can use the multiprocessing module to ease that part.
The code could look like this:
import multiprocessing
import subprocess

# to avoid waiting for an EOF on a pipe ...
def getlines(fd):
    line = bytearray()
    while True:
        c = fd.read(1)
        if not c:  # read(1) returns '' at EOF, not None
            return
        line += c
        if c == '\n':
            yield str(line)
            del line[:]

def start_child(cmd):
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                         shell=True)
    for line in getlines(p.stdout):
        print line
        if "Server started in RUNNING mode" in line:
            print "STARTUP OK"
            break
    else:
        print "STARTUP FAILED"

def main():
    # other stuff in program
    p = multiprocessing.Process(target=start_child, args=(server_program,))
    p.start()
    p.join()
    print "DONE"
    # other stuff in program

# protect program startup for multiprocessing module
if __name__ == '__main__':
    main()
One could wonder why the getlines generator is needed when a file object is itself an iterator that returns one line at a time. The problem is that the iterator internally reads ahead and, when the file is not connected to a terminal, blocks until its buffer fills or EOF is reached. As stdout here is connected to a pipe, you would not get anything until the server ends... which is not what is expected.
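An equivalent trick, shown for comparison: in Python 2, readline() does not go through the iterator's read-ahead buffer, so looping with iter() also delivers lines as the child produces them:

# readline() avoids the iterator's read-ahead buffer, so each
# line is delivered as soon as the child flushes it.
for line in iter(p.stdout.readline, ''):
    if "Server started in RUNNING mode" in line:
        print("STARTUP OK")
        break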
I have a Python script 'b.py' which prints out the time every 5 seconds.
import time

while True:
    print "Start : %s" % time.ctime()
    time.sleep(5)
    print "End : %s" % time.ctime()
    time.sleep(5)
And in my a.py, I call b.py by:
def run_b():
    print "Calling run b"
    try:
        cmd = ["./b.py"]
        p = subprocess.Popen(cmd,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
        for line in iter(p.stdout.readline, b''):
            print(">>>" + line.rstrip())
    except OSError as e:
        print >>sys.stderr, "fcs Execution failed:", e
        return None
and later on, I kill 'b.py' by:
PS_PATH = "/usr/bin/ps -efW"
def kill_b(program):
try:
cmd = shlex.split(PS_PATH)
retval = subprocess.check_output(cmd).rstrip()
for line in retval.splitlines():
if program in line:
print "line =" + line
pid = line.split(None)[1]
os.kill(int(pid), signal.SIGKILL)
except OSError as e:
print >>sys.stderr, "kill_all Execution failed:", e
except subprocess.CalledProcessError as e:
print >>sys.stderr, "kill_all Execution failed:", e
run_b()
time.sleep(600)
kill_b("b.py")
I have 2 questions:
1. Why don't I see any output from 'b.py', and when I do 'ps -efW' why don't I see a process named 'b.py'?
2. Why, when I kill a process as above, do I see 'Permission denied'?
I am running the above scripts on Cygwin under Windows.
Thank you.
Why don't I see any output from 'b.py', and when I do 'ps -efW' why don't I see a process named 'b.py'?
Change the run_b() lines to:
p = subprocess.Popen(cmd,
                     stdout=sys.stdout,
                     stderr=sys.stderr)
You will not see a process named "b.py" but something like "python b.py", which is a little different. You should use the pid instead of the name to find it (in your code, "p.pid" holds the pid).
Why, when I kill a process as above, do I see 'Permission denied'?
os.kill is only supported under Windows from Python 2.7 onwards, and it acts a little differently from the POSIX version. However, you can use "p.pid". The best way to kill a process in a cross-platform way is:
if platform.system() == "Windows":
    subprocess.Popen("taskkill /F /T /PID %i" % p.pid, shell=True)
else:
    os.killpg(p.pid, signal.SIGKILL)
killpg also works on OS X and other Unix-like operating systems.
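One caveat: os.killpg expects a process group id, so the call above assumes the child leads its own group. A sketch of how to guarantee that on the POSIX side (cmd stands in for your command line):

import os, signal, subprocess

# Start the child in its own session so that killing its process
# group takes out the child and all of its descendants.
p = subprocess.Popen(cmd, preexec_fn=os.setsid)
os.killpg(os.getpgid(p.pid), signal.SIGKILL)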
I have a process that is started via subprocess.Popen() and is meant to run indefinitely. The problem I was having is that the process seemed to stop running after about 20 seconds. Sure enough, when I check top, it shows that the process has gone to sleep. When I run the command manually this doesn't happen.
Does anyone know how I can stop this from happening?
This is the subprocess call:
# DN is assumed to be a file handle opened elsewhere in the script
# (presumably /dev/null).
aireplay = subprocess.Popen('aireplay-ng -3 -b ' + target.mac + ' ' + interface,
                            shell=True, stdout=subprocess.PIPE, stderr=DN)
time.sleep(5)
starttime = time.time()
ivs = 0
second = False
print 'Sending deauth to generate arps...'
send_deauth(target)
while time.time() - starttime < 1200:
    targets = parsecsvfile('crackattempt')
    print 'Captured ' + str(ivs) + ' ivs.'
    print aireplay.poll()
    if len(targets[0]) > 0:
        target = targets[0][0]
        if ivs > 20000:
            break
        else:
            ivs = int(target.ivs)
    time.sleep(1)
You are piping the output of the subprocess. It will sleep when its output buffer is full; did you remember to read stdout from the subprocess?
You could use the communicate method if you don't mind it blocking, read from the stdout file descriptor yourself, or send stdout to /dev/null since you don't seem to be using it.
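For the last option, a minimal sketch (os.devnull is standard; the command line is the one from the question):

import os
import subprocess

# Discard the child's output so its pipe buffer can never fill
# up and put the process to sleep.
devnull = open(os.devnull, 'w')
aireplay = subprocess.Popen('aireplay-ng -3 -b ' + target.mac + ' ' + interface,
                            shell=True, stdout=devnull, stderr=devnull)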