I'm attempting to write a Python script that will ping (ICMP) an IP address and tell me whether it's alive. I'm doing this because I have an intermittent issue: I want to ping, log the outcome, sleep for a period, and ping again. I tried a while loop, but I'm still getting errors like these:
line 33, in <module>
    systemPing('192.168.1.1')
line 30, in systemPing
    time.sleep(30)
KeyboardInterrupt
I'm using Python 2.6.
My question is really twofold: how do I loop through this systemPing function, and what errors are there in my code? The script seems to work, but I get these errors when I hit Ctrl-C.
from subprocess import Popen, PIPE
import datetime, time, re

logFile = open("textlog.txt", "a")

def getmyTime():
    now = datetime.datetime.now()
    return now.strftime("%Y-%m-%d %H:%M \n")

startTime = "Starting ..." + getmyTime()
logFile.write(startTime)
logFile.write("\n")

def systemPing(x):
    cmd = Popen("ping -n 1 " + x, stdout=PIPE)
    #print getmyTime()
    for line in cmd.stdout:
        if 'timed out' in line:
            loggedTime = "Failure detected - " + getmyTime()
            logFile.write(loggedTime)
        if 'Reply' in line:
            print "Replied..."
    logFile.close()
    print "Sleeping 30mins ... CTRL C to end"
    time.sleep(30) #1800 is 30mins
    systemPing('192.168.1.1')

if __name__ == '__main__':
    systemPing('192.168.1.1')
Any help is always appreciated.
Thank you.
It's not really an error per se; it's just Python's default behavior, upon receipt of a SIGINT (which is what happens when you press Ctrl-C), to raise a KeyboardInterrupt exception.
You'll get the same thing if you send the signal with kill(1), like...
$ kill -INT <pid>
If you want to handle it, then you can change the code to something like...
if __name__ =='__main__':
try:
systemPing('192.168.1.1')
except KeyboardInterrupt:
print 'Finished'
...or whatever you want it to do.
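If you also want to replace the recursive call at the end of systemPing with a loop, here's a minimal sketch under the same assumptions as your script (Windows-style ping -n, Python 2.6); it keeps your names but moves the sleep and the interrupt handling into the main loop:
from subprocess import Popen, PIPE
import datetime, time

def getmyTime():
    return datetime.datetime.now().strftime("%Y-%m-%d %H:%M \n")

def systemPing(x, logFile):
    # ping once and scan the output for a timeout or a reply
    cmd = Popen("ping -n 1 " + x, stdout=PIPE)
    for line in cmd.stdout:
        if 'timed out' in line:
            logFile.write("Failure detected - " + getmyTime())
        if 'Reply' in line:
            print "Replied..."

if __name__ == '__main__':
    logFile = open("textlog.txt", "a")
    logFile.write("Starting ..." + getmyTime() + "\n")
    try:
        while True:
            systemPing('192.168.1.1', logFile)
            logFile.flush()          # so the log survives a Ctrl-C or crash
            print "Sleeping 30mins ... CTRL C to end"
            time.sleep(1800)         # 1800 seconds = 30 minutes
    except KeyboardInterrupt:
        print 'Finished'
    finally:
        logFile.close()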
I want to break out of PumpWaitingMessages every hour and check for any unread mails, and I tried the following code.
However, if I increase the 10 in time.time()-starttime>10, my Outlook hangs and cannot make progress. Sometimes I even get the following error:
This is related to How to continuously monitor a new mail in outlook and unread mails of a specific folder in python
Traceback (most recent call last):
  File "final.py", line 94, in <module>
    outlook_open=processExists('OUTLOOK.EXE')
  File "final.py", line 68, in processExists
    print('process "%s" is running!' % processname)
IOError: [Errno 0] Error
Please check the code and help me to resolve this.
import win32com.client
import ctypes # for the VM_QUIT to stop PumpMessage()
import pythoncom
import re
import time
import os
import subprocess
import pyodbc

class Handler_Class(object):
    def __init__(self):
        # First action to do when using the class in the DispatchWithEvents
        outlook = self.Application.GetNamespace("MAPI")
        inbox = outlook.Folders['mymail#gmail.com'].Folders['Inbox']
        messages = inbox.Items
        print "checking Unread mails"
        # Check for unread emails when starting the event
        for message in messages:
            if message.UnRead:
                print message.Subject.encode("utf-8") # Or whatever code you wish to execute.
                message.UnRead = False

    def OnQuit(self):
        # To stop PumpMessages() when Outlook quits
        # Note: Not sure it works when disconnecting!!
        print "Inside handler onQuit"
        ctypes.windll.user32.PostQuitMessage(0)

    def OnNewMailEx(self, receivedItemsIDs):
        # receivedItemsIDs is a collection of mail IDs separated by a ",".
        # You know, sometimes more than 1 mail is received at the same moment.
        for ID in receivedItemsIDs.split(","):
            mail = self.Session.GetItemFromID(ID)
            subject = mail.Subject
            print subject.encode("utf-8")
            mail.UnRead = False
            try:
                command = re.search(r"%(.*?)%", subject).group(1)
                print command # Or whatever code you wish to execute.
            except:
                pass

# Function to check if outlook is open
def processExists(processname):
    tlcall = 'TASKLIST', '/V', '/FI', 'imagename eq %s' % processname
    # shell=True hides the shell window, stdout to PIPE enables
    # communicate() to get the tasklist command result
    tlproc = subprocess.Popen(tlcall, shell=True, stdout=subprocess.PIPE)
    # trimming it to the actual lines with information
    tlout = tlproc.communicate()[0].strip().split('\r\n')
    # if TASKLIST returns single line without processname: it's not running
    if len(tlout) > 1 and processname in tlout[-1]:
        if "Not Responding" in tlout[2]:
            print('process "%s" is not responding' % processname)
            os.system("taskkill /f /im outlook.exe")
            return False
        print('process "%s" is running!' % processname)
        return True
    else:
        print('process "%s" is NOT running!' % processname)
        return False

# Loop
while True:
    try:
        outlook_open = processExists('OUTLOOK.EXE')
    except:
        outlook_open = False
    # If outlook opened then it will start the DispatchWithEvents
    if outlook_open == True:
        outlook = win32com.client.DispatchWithEvents("Outlook.Application", Handler_Class)
        while True:
            starttime = time.time()
            while (int(time.time() - starttime) < 10):
                pythoncom.PumpWaitingMessages()
            ctypes.windll.user32.PostQuitMessage(0)
            outlook_open = processExists('OUTLOOK.EXE')
            if outlook_open == False:
                break
            #Handler_Class.__init__(outlook)
    # To not check all the time (should increase 10 depending on your needs)
    if outlook_open == False:
        print "outlook not opened"
        os.startfile("outlook")
        time.sleep(10)
So, to check your unread email at the place of #Handler_Class.__init__(outlook), just do:
win32com.client.DispatchWithEvents("Outlook.Application", Handler_Class)
(no real need for the assignment). But I don't see the point in your code, because the rest of the time you are monitoring for incoming email with pythoncom.PumpWaitingMessages(), which does the same thing.
As for Outlook hanging, I'm not sure what the problem is; I ran it myself for a few hours and it worked. To reduce CPU use (this might be your problem, depending on your computer), you can try adding a time.sleep() inside your while loop, such as:
while (int(time.time() - starttime) < 100):
    pythoncom.PumpWaitingMessages()
    time.sleep(0.1)
And while I'm answering: I don't think either of the ctypes.windll.user32.PostQuitMessage(0) calls in your code is necessary any more with pythoncom.PumpWaitingMessages(); they were used to stop pythoncom.PumpMessages(). I saw no difference in behavior with or without them.
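Putting those suggestions together, the hourly check could be sketched roughly like this (just a sketch, reusing the Handler_Class and processExists defined in your script):
import os
import time
import pythoncom
import win32com.client

# Sketch: Handler_Class and processExists are the ones from the question.
# Re-create the event handler every hour (which re-runs the unread-mail check
# in Handler_Class.__init__) and sleep briefly inside the pump loop.
while True:
    if processExists('OUTLOOK.EXE'):
        win32com.client.DispatchWithEvents("Outlook.Application", Handler_Class)
        starttime = time.time()
        while time.time() - starttime < 3600:   # pump events for one hour
            pythoncom.PumpWaitingMessages()
            time.sleep(0.1)                     # keep CPU usage down
    else:
        print "outlook not opened"
        os.startfile("outlook")
        time.sleep(10)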
Let me know
This is my first try with threads in Python.
I wrote the following program as a very simple example. It just gets a list and prints it using some threads. However, whenever there is an error, the program just hangs in Ubuntu, and I can't seem to do anything to get the prompt back, so I have to start another SSH session to get back in.
I also have no idea what the issue with my program is.
Is there some kind of error handling I can put in to ensure it doesn't hang?
Also, any idea why Ctrl-C doesn't work (I don't have a break key)?
from Queue import Queue
from threading import Thread
import HAInstances
import logging

log = logging.getLogger()
logging.basicConfig()

class GetHAInstances:
    def oraHAInstanceData(self):
        log.info('Getting HA instance routing data')
        # HAData = SolrGetHAInstances.TalkToOracle.main()
        HAData = HAInstances.main()
        log.info('Query fetched ' + str(len(HAData)) + ' HA Instances to query')
        # for row in HAData:
        #     print row
        return(HAData)

def do_stuff(q):
    while True:
        print q.get()
        print threading.current_thread().name
        q.task_done()

oraHAInstances = GetHAInstances()
mainHAData = oraHAInstances.oraHAInstanceData()

q = Queue(maxsize=0)
num_threads = 10

for i in range(num_threads):
    worker = Thread(target=do_stuff, args=(q,))
    worker.setDaemon(True)
    worker.start()

for row in mainHAData:
    #print str(row[0]) + ':' + str(row[1]) + ':' + str(row[2]) + ':' + str(row[3])i
    q.put((row[0], row[1], row[2], row[3]))

q.join()
In your thread method, it is recommended to use "try ... except ... finally". This structure guarantees that control returns to the main thread even when errors occur.
def do_stuff(q):
    while True:
        try:
            pass  # do your work here
        except:
            pass  # log the error
        finally:
            q.task_done()
Also, in case you want to kill your program, find the pid of your main thread and use kill <pid> to kill it. In Ubuntu or Mint, use ps -Ao pid,cmd; in the output you can find the pid (first column) by searching for the command (second column) you typed to run your Python script.
Your q is hanging because your worker has errored, so your q.task_done() never got called.
You also need to import threading in order to use print threading.current_thread().name.
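Putting those pieces together, a minimal sketch of the corrected worker might look like this (log and q are the ones from your script; everything else is unchanged):
import threading

def do_stuff(q):
    while True:
        row = q.get()
        try:
            print row
            print threading.current_thread().name
        except Exception, e:
            log.error('worker failed: %s' % e)   # log instead of dying silently
        finally:
            q.task_done()   # always called, so q.join() can never hang on an error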
I have been programming in Python for the Raspberry Pi for several months now, and I am trying to make my scripts "well behaved" and wrap up (close files and make sure no writes to the SD card are being performed) upon receipt of SIGTERM.
Following advice on SO (1, 2), I am able to handle SIGTERM if I kill the process manually (i.e. kill {process number}), but if I send the shutdown command (i.e. shutdown -t 30 now) my handler never gets called.
I also tried registering for all signals and checking which signal is sent on the shutdown event, but I am not getting any.
Here's simple example code:
import time
import signal
import sys

def myHandler(signum, frame):
    print "Signal #, ", signum
    sys.exit()

for i in [x for x in dir(signal) if x.startswith("SIG")]:
    try:
        signum = getattr(signal, i)
        signal.signal(signum, myHandler)
        print "Handler added for {}".format(i)
    except RuntimeError, m:
        print "Skipping %s" % i
    except ValueError:
        break

while True:
    print "goo"
    time.sleep(1)
Any ideas will be greatly appreciated .. =)
This code works for me on the Raspberry Pi; I can see the correct output in the file output.log after the restart:
import logging
import signal
import subprocess
import sys

logging.basicConfig(level=logging.WARNING,
                    filename='output.log',
                    format='%(message)s')

def quit():
    # cleaning code here
    logging.warning('exit')
    sys.exit(0)

def handler(signum=None, frame=None):
    quit()

# Note: SIGKILL cannot be caught, so it is not registered here
for sig in [signal.SIGTERM, signal.SIGHUP, signal.SIGQUIT]:
    signal.signal(sig, handler)

def restart():
    command = '/sbin/shutdown -r now'
    process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
    output = process.communicate()[0]
    logging.warning('%s' % output)

restart()
Maybe your terminal handles the signal before the Python script does, so you can't actually see anything. Try writing the output to a file (with the logging module, or however you like).
I have some Python code that runs on Windows that spawns a subprocess and waits for it to complete. The subprocess isn't well behaved, so the script makes a non-blocking spawn call and watches the process on the side. If some timeout threshold is met, it kills off the process, assuming it has gone off the rails.
In some instances, which are non-reproducible, the spawned subprocess will just disappear and the watcher routine won't pick up on this fact. It'll keep watching until the timeout threshold is passed, try to kill the subprocess and get an error, and then exit.
What might cause the subprocess's disappearance to be undetectable to the watcher process? Why isn't the return code trapped and returned by the call to Popen.poll()?
The code I use to spawn and watch the process follows:
import subprocess
import time

def nonblocking_subprocess_call(cmdline):
    print 'Calling: %s' % (' '.join(cmdline))
    p = subprocess.Popen(cmdline, shell=False,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return p

def monitor_subprocess(handle, timeout=1200):
    start_time = time.time()
    return_code = 0
    while True:
        time.sleep(60)
        return_code = handle.poll()
        if return_code == None:
            # The process is still running.
            if time.time() - start_time > timeout:
                print 'Timeout (%d seconds) exceeded -- killing process %i' % (timeout, handle.pid)
                return_code = handle.terminate()
                # give the kill command a few seconds to work
                time.sleep(5)
                if not return_code:
                    print 'Error: Failed to kill subprocess %i -- return code was: %s' % (handle.pid, str(return_code))
                # Raise an error to indicate that the process was hung up
                # and we had to kill it.
                raise RuntimeError
        else:
            print 'Process exited with return code: %i' % (return_code)
            break
    return return_code
What I'm seeing is that, in cases where the process has disappeared, the call to return_code = handle.poll() on line 15 is returning None instead of a return code. I know the process has gone away completely -- I can see that it is no longer there in Task Manager. And I know the process disappeared long before the timeout value was reached.
Can you give an example of your cmdline variable? And also what kind of subprocess are you spawning?
I ran this on a test script, calling a batch file with the command ping -n 151 127.0.0.1>nul (effectively a 150-second sleep), and it worked fine.
It may be that your subprocess isn't terminating correctly. Also, try changing your sleep command to something like time.sleep(2).
In the past I've found this to work better than a longer sleep (especially if your subprocess is another Python process).
Also, I'm not sure if your script has this, but in the else: statement, you have an extra parenthesis.
else:
    #print 'Process exited with return code: %i' % (return_code))
    # There's an extra closing parenthesis
    print 'Process exited with return code: %i' % (return_code)
    break
And how come you have a global temp_cmdline being called in the join statement:
print 'Calling: %s' % (' '.join(temp_cmdline))
I'm not sure if cmdline is being parsed from a list variable temp_cmdline, or if temp_cmdline is being created from a string split on spaces. Either way, if your cmdline variable is a string, wouldn't it make more sense to just print it?
print 'Calling: %s' % cmdline
The poll method on subprocess objects does not seem to work too well.
I had the same issues while spawning some threads to do some work.
I suggest that you use the multiprocessing module.
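For illustration only, here is a minimal sketch of that suggestion: run the command from a multiprocessing.Process so the parent can enforce a timeout with join(); the command line and timeout below are placeholders.
import multiprocessing
import subprocess

def run_command(cmdline):
    # the child process blocks here until the command finishes
    subprocess.call(cmdline)

if __name__ == '__main__':
    worker = multiprocessing.Process(target=run_command,
                                     args=(['ping', '-n', '4', '127.0.0.1'],))
    worker.start()
    worker.join(timeout=1200)    # wait up to 20 minutes
    if worker.is_alive():
        # note: this kills the worker, not necessarily the command it launched
        worker.terminate()
        worker.join()
        print 'worker timed out'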
Popen.poll doesn't work as expected if stdout is captured by something else; you can check this by taking out this part of the code: ", stdout=subprocess.PIPE".
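A rough sketch of that idea, in case you still need the output: send stdout to a temporary file instead of a pipe, so the child can never block on a full pipe buffer while you poll (the command line is a placeholder).
import subprocess
import tempfile
import time

out = tempfile.TemporaryFile()
p = subprocess.Popen(['ping', '-n', '10', '127.0.0.1'],
                     stdout=out, stderr=subprocess.STDOUT)
while p.poll() is None:      # poll() now reflects the real process state
    time.sleep(1)
out.seek(0)
print 'return code:', p.returncode
print out.read()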
I need to do the following in Python. I want to spawn a process (subprocess module?), and:
if the process ends normally, to continue exactly from the moment it terminates;
if, otherwise, the process "gets stuck" and doesn't terminate within (say) one hour, to kill it and continue (possibly giving it another try, in a loop).
What is the most elegant way to accomplish this?
The subprocess module will be your friend. Start the process to get a Popen object, then pass it to a function like this. Note that this only raises an exception on timeout. If desired, you can catch the exception and call the kill() method on the Popen process (kill is new in Python 2.6, by the way).
import time

def wait_timeout(proc, seconds):
    """Wait for a process to finish, or raise exception after timeout"""
    start = time.time()
    end = start + seconds
    interval = min(seconds / 1000.0, .25)
    while True:
        result = proc.poll()
        if result is not None:
            return result
        if time.time() >= end:
            raise RuntimeError("Process timed out")
        time.sleep(interval)
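Usage might look something like this (the command is just a placeholder); per the note above, you can catch the timeout and kill the process yourself:
import subprocess

proc = subprocess.Popen(['some-long-running-command'])
try:
    result = wait_timeout(proc, 60 * 60)   # give it at most one hour
    print 'process finished with return code', result
except RuntimeError:
    proc.kill()                            # kill() is available since Python 2.6
    print 'process timed out and was killed'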
There are at least 2 ways to do this by using psutil as long as you know the process PID.
Assuming the process is created as such:
import subprocess
subp = subprocess.Popen(['progname'])
...you can get its creation time in a busy loop like this:
import psutil, time

TIMEOUT = 60 * 60  # 1 hour
p = psutil.Process(subp.pid)
while 1:
    if (time.time() - p.create_time()) > TIMEOUT:
        p.kill()
        raise RuntimeError('timeout')
    time.sleep(5)
...or simply, you can do this:
import psutil

p = psutil.Process(subp.pid)
try:
    p.wait(timeout=60*60)
except psutil.TimeoutExpired:
    p.kill()
    raise
Also, while you're at it, you might be interested in the following extra APIs:
>>> p.status()
'running'
>>> p.is_running()
True
>>>
I had a similar question and found this answer. Just for completeness, I want to add one more way to terminate a hanging process after a given amount of time: the Python signal library.
https://docs.python.org/2/library/signal.html
From the documentation:
import signal, os

def handler(signum, frame):
    print 'Signal handler called with signal', signum
    raise IOError("Couldn't open device!")

# Set the signal handler and a 5-second alarm
signal.signal(signal.SIGALRM, handler)
signal.alarm(5)

# This open() may hang indefinitely
fd = os.open('/dev/ttyS0', os.O_RDWR)

signal.alarm(0)  # Disable the alarm
Since you wanted to spawn a new process anyway, this might not be the best solution for your problem, though.
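As a rough sketch of how the same idea could apply here (POSIX only, since SIGALRM is not available on Windows; the command is a placeholder), you could wrap a blocking Popen.wait() in an alarm:
import signal
import subprocess

def alarm_handler(signum, frame):
    raise RuntimeError("process timed out")

signal.signal(signal.SIGALRM, alarm_handler)

proc = subprocess.Popen(['some-long-running-command'])
signal.alarm(60 * 60)        # deliver SIGALRM after one hour
try:
    returncode = proc.wait()
except RuntimeError:
    proc.kill()              # the alarm fired; the child still has to be killed
    raise
finally:
    signal.alarm(0)          # disable any pending alarm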
A nice, passive way is also to use a threading.Timer and set up a callback function.
from threading import Timer

# execute the command
p = subprocess.Popen(command)

# save the proc object - either as an attribute if you put this in a class
# (like this example), or 'p' can be a global
self.p = p

# configure and start the timer
# kill_proc is a callback function, which can also live on the class or be a global
t = Timer(seconds, self.kill_proc)
t.start()

# wait for the test process to return
rcode = p.wait()
t.cancel()
If the process finishes in time, wait() returns, the code continues here, and cancel() stops the timer. If instead the timer runs out and executes kill_proc in a separate thread, wait() will also return here and cancel() will do nothing. From the value of rcode you will know whether we timed out or not. The simplest kill_proc (you can of course do anything extra there):
def kill_proc(self):
    os.kill(self.p.pid, signal.SIGTERM)
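Pieced together outside a class, a minimal self-contained version of the same idea might look like this (the command line is a placeholder):
import os
import signal
import subprocess
from threading import Timer

def kill_proc(proc):
    os.kill(proc.pid, signal.SIGTERM)

p = subprocess.Popen(['some-long-running-command'])
t = Timer(60 * 60, kill_proc, args=(p,))   # fire after one hour
t.start()
rcode = p.wait()     # returns normally, or after kill_proc fires
t.cancel()           # no-op if the timer already fired
if rcode < 0:
    # on POSIX, a negative return code means the process was killed by a signal
    print 'process timed out and was killed'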
Kudos to Peter Shinners for his nice suggestion about the subprocess module. I was using exec() before and had no control over running time and, especially, over terminating it. My simplest template for this kind of task is the following; I am just using the timeout parameter of subprocess.run() to monitor the running time. Of course you can get standard out and error as well if needed:
from subprocess import run, TimeoutExpired, CalledProcessError

for file in fls:
    try:
        run(["python3.7", file], check=True, timeout=7200)  # 2 hours timeout
        print("scraped :)", file)
    except TimeoutExpired:
        message = "Timeout :( !!!"
        print(message, file)
        f.write("{message} {file}\n".format(file=file, message=message))
    except CalledProcessError:
        message = "SOMETHING HAPPENED :( !!!, CHECK"
        print(message, file)
        f.write("{message} {file}\n".format(file=file, message=message))