How to set a timeout in libvirt (using Python)

I would like to set a short connection timeout (only a few seconds) when using libvirt in my Python program, instead of the long default one.
I found the C function virEventAddTimeoutFunc() in the libvirt C API here:
http://libvirt.org/html/libvirt-libvirt.html#virEventAddTimeoutFunc
and eventInvokeTimeoutCallback(timer, callback, opaque) in libvirt.py around line 150, but I do not know how to use them, and I could not find any example on the net.
I tried this, but I get a segmentation fault :-(
import libvirt

def timeout_cb_d():
    print 'Timeout !'

try:
    # try to set the libvirt timeout to 2 seconds:
    t = libvirt.eventInvokeTimeoutCallback(2, timeout_cb_d, "from dom0_class")
except:
    ...
Can anyone give me a working example, please?

We finally found a simple way to proceed, using a Python alarm and a signal handler: http://docs.python.org/library/signal.html#example
Edit:
Here is the idea:
import string, time, sys, signal
import libvirt

class Host:
    # (...)
    def timeout_handler(self, sig_code, frame):
        if 14 == sig_code:
            sig_code = 'SIGALRM'
        print time.strftime('%F %T -'), 'Signal handler called with signal:', sig_code
        raise Exception('Timeout!')

    def libVirtConnect(self):
        try:
            # Enable the timeout with an alarm:
            signal.signal(signal.SIGALRM, self.timeout_handler)
            signal.alarm(self._libvirt_timeout_in_seconds)
            self._virt_conn = libvirt.open('xen+tcp://' + self._ip)
            signal.alarm(0)  # Disable the alarm
        except Exception, e:
            signal.alarm(0)  # Disable the alarm

I'm assuming libvirt communicates over a standard socket. If that's the case, you can set an application-wide timeout using socket.setdefaulttimeout.
That's not to say the libvirt bindings for Python don't call that function themselves, but it's worth a try.
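A minimal sketch of that idea, assuming the bindings create their sockets through Python's socket module (the address is a placeholder):
import socket
import libvirt

socket.setdefaulttimeout(2.0)  # process-wide default, in seconds, for sockets created after this call
conn = libvirt.open('xen+tcp://192.0.2.1')  # placeholder address
If the bindings open their connections from C code instead, the default won't reach them, which is why this is only worth a try.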

I have often used monkeypatching to change a library so that its sockets time out. Usually you just need to find the method which calls select or poll and monkeypatch in a modified version. Sometimes you need to set up a try/except which catches socket.timeout and does something to allow it to percolate up to your code without causing another error on the way. In one case I had to create a valid response object instead of None, for instance.
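As a hypothetical illustration of the pattern (socket.create_connection is the stdlib helper many libraries use to open connections; the 5-second value is arbitrary):
import socket

_orig_create_connection = socket.create_connection

def _create_connection_with_timeout(address, timeout=None, source_address=None):
    # Ignore whatever timeout the library asked for and force a finite one.
    return _orig_create_connection(address, timeout=5.0, source_address=source_address)

socket.create_connection = _create_connection_with_timeout
Afterwards, any call that goes through socket.create_connection can raise socket.timeout, which you then catch in your own code.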

Related

How to bypass a request when it takes too long?

I have a Python library which must be fast enough for an online application. If a particular request (function call) takes too long, I want to just bypass the request and return an empty result.
The function looks like the following:
def fast_function(text):
    result = mylibrary.process(text)
    ...
If mylibrary.process takes longer than a threshold, e.g. 100 milliseconds, I want to bypass the request and proceed to process the next 'text'.
What's the normal way to handle this? Is this a normal scenario? My application can afford to bypass a very small number of requests like this if they take too long.
One way is to use a signal timer. As an example:
import signal

def took_too_long(signum, frame):
    raise TimeoutError

signal.signal(signal.SIGALRM, took_too_long)
signal.setitimer(signal.ITIMER_REAL, 0.1)  # 0.1 seconds

try:
    result = mylibrary.process(text)
    signal.setitimer(signal.ITIMER_REAL, 0)  # success; set to 0 to disable the timer
except TimeoutError:
    # took too long, do something
    pass
You'll have to experiment to see if this does or does not add too much overhead.
You can add a timeout to your function.
One way to implement it is to use a timeout decorator, which will throw an exception if the function runs for longer than the defined timeout. To move on to the next operation, you can catch the exception thrown by the decorator.
Install this one, for example: pip install timeout-decorator
import timeout_decorator

@timeout_decorator.timeout(5)  # timeout of 5 seconds
def fast_function(text):
    result = mylibrary.process(text)
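To bypass and continue, catch the decorator's exception around the call (timeout_decorator raises its own TimeoutError):
try:
    result = fast_function(text)
except timeout_decorator.TimeoutError:
    result = ''  # bypass this request and proceed to the next text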

Killing a cv2 read process after N seconds

I'm desperate.
My code reads every nth frame from videos; sometimes the code just stops for no reason, with no error.
So I decided to somehow raise an error.
The thing is, the code does raise an error, but it ignores it for some reason and just carries on as normal.
I've provided a code block below in which exactly the same method works.
handler:
def handler(signum, frame):
    print("error")  # This is printed
    raise Exception('time out')  # I guess this is getting raised
The code part I want to wrap:
for i in range(0, int(frame_count), nframe):  # basically loads every nth frame from the video
    try:
        frame = video.set(1, i)
        signal.signal(signal.SIGALRM, handler)
        signal.alarm(1)  # At this point, the handler did raise the error, but it did not kill this 'try' block.
        _n, frame = video.read()  # This line sometimes blocks for an infinite amount of time, and I want to wrap it
    except Exception as e:
        print('test')  # Code does not get here, yet the handler does raise an exception
        raise e
# Here I need to return False or raise an error, but the code just does not get here.
An example where exactly the same method will work:
import signal
import time

def handler(signum, frame):
    raise Exception('time out')

def function():
    try:
        signal.signal(signal.SIGALRM, handler)
        signal.alarm(5)  # 5 seconds till raise
        time.sleep(10)  # does not get here; an Exception is raised after 5 seconds
    except Exception as e:
        raise e  # This will indeed work
My guess is that the read() call is blocked somewhere inside C code. The signal handler runs and sets a pending exception in the Python interpreter, but the exception isn't raised until the interpreter regains control. This is a limitation documented in the signal module:
A long-running calculation implemented purely in C (such as regular expression matching on a large body of text) may run uninterrupted for an arbitrary amount of time, regardless of any signals received. The Python signal handlers will be called when the calculation finishes.
One possible workaround is to read frames on a separate process using the multiprocessing module, and return them to the main process using a multiprocessing.Queue (from which you can get with a timeout). However, there will be extra overhead in sending the frames between processes.
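A minimal sketch of that workaround (the file name, frame step, and 5-second timeout are placeholders):
import multiprocessing as mp
import queue

def frame_reader(path, nframe, out_q):
    import cv2  # import in the worker so the capture object lives in this process
    video = cv2.VideoCapture(path)
    frame_count = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
    for i in range(0, frame_count, nframe):
        video.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = video.read()  # if this hangs, only the worker is stuck
        out_q.put(frame if ok else None)
    out_q.put(None)  # sentinel: no more frames

if __name__ == '__main__':
    q = mp.Queue(maxsize=4)
    p = mp.Process(target=frame_reader, args=('video.mp4', 30, q), daemon=True)
    p.start()
    while True:
        try:
            frame = q.get(timeout=5)  # give up if no frame arrives in time
        except queue.Empty:
            p.terminate()  # kill the stuck reader
            break
        if frame is None:
            break
        # ... process the frame ...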
Another approach might be to try and avoid the root of the problem. OpenCV has different video backends (V4L, GStreamer, ffmpeg, ...); one of them might work where another doesn't. Using the second argument to the VideoCapture constructor, you can indicate a preference for which backend to use:
cv.VideoCapture(..., cv.CAP_FFMPEG)
See the documentation for the full list of backends. Depending on your platform and OpenCV build, not all of them will be available.

How to avoid the program freezing when connecting to a server

I have a little script which filters out those domain names which are not registered yet. I use the pywhois module. The problem is that it suddenly freezes and does nothing after several (sometimes hundreds of) requests. I think it is not a ban, because I can run the program again right after the freeze and it works.
I would like to avoid this freezing. My idea is to measure the runtime of the function, and if it crosses some limit (for example 10 seconds), retry the request.
Do you have any advice on how to avoid the freezing? Or a better way to check domains?
Here is the code:
for keyword in keywords:
    try:
        details = pythonwhois.get_whois(keyword + '.com')
    except Exception as e:
        print e
        continue
    if 'status' not in details.keys():
        print 'Free domain!'
        print keyword
This method is prone to break if the underlying library changes; however, you can call internal socket functions to set a timeout for all pythonwhois network calls. For example:
import pythonwhois

TIMEOUT = 5.0  # timeout in seconds
pythonwhois.net.socket.setdefaulttimeout(TIMEOUT)
pythonwhois.get_whois("example.com")
Maybe you could try dnspython. It looks like you just want to check whether a domain name is registered. For example:
import dns.resolver

for keyword in keywords:
    try:
        dns.resolver.query(keyword + '.com')
    except dns.resolver.NXDOMAIN:
        print(keyword + '.com is available!')
DNS resolver has a default timeout of 2 seconds. If you want to change that, you can make a new instance of dns.resolver.Resolver with a different timeout.
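For example (timeout and lifetime are the Resolver attributes that control this; the values here are arbitrary):
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.timeout = 1.0   # seconds to wait for each nameserver
resolver.lifetime = 3.0  # total seconds allowed per query
resolver.query('example.com')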
To make it multithreaded, a thread pool would be the best choice if you can use Python 3:
from multiprocessing.dummy import Pool  # thread-based Pool with the same API as multiprocessing.Pool
import dns.resolver

def check_keyword(keyword):
    try:
        dns.resolver.query(keyword + '.com')
    except dns.resolver.NXDOMAIN:
        # You probably want to change this to a return
        print(keyword + '.com is available!')

if __name__ == '__main__':
    keywords = [...]
    p = Pool(5)
    print(p.map(check_keyword, keywords))

Win32 exception occurred when using pythoncom.CoUninitialize() and wmi

I am trying to create a function in Python 3.4 that will ping a server. Currently it only needs to work on Windows (possibly XP and above).
I have found that I can use the WMI Win32_PingStatus (Reference) class to ping a server and have written the following function:
import sys

if sys.platform.startswith('win32'):
    import wmi
    import pythoncom

def ping(ip_address):
    """Check if we can ping the IP address using the OS ping tool."""
    if sys.platform.startswith('win32'):
        try:
            pythoncom.CoInitialize()
            c = wmi.WMI()
            status_code = c.Win32_PingStatus(address=ip_address)[0].StatusCode
            if status_code == 0:
                successful = True
            else:
                successful = False
            pythoncom.CoUninitialize()
        except wmi.x_wmi:
            # If the WMI ping fails, fall back
            pass
    else:
        raise NotImplementedError
    return successful

if __name__ == "__main__":
    ping("127.0.0.1")
This works as intended: it returns True when you can ping the IP and False when you can't. However, every time I run it I get a series of errors at the end:
Win32 exception occurred releasing IUnknown at 0x03140980
Win32 exception occurred releasing IUnknown at 0x031635d0
Win32 exception occurred releasing IUnknown at 0x03163560
This happens every time I run the script, although the first 2 bytes of each address change from run to run. The last 2 bytes always stay the same.
I have tried commenting out various sections of the code and have found that if I remove pythoncom.CoUninitialize() the exceptions do not occur. I am using pythoncom.CoInitialize() and pythoncom.CoUninitialize() because I want to call the function in a thread, as described here.
I have tried adding print(pythoncom._GetInterfaceCount()) to see how many interfaces are present, and have noticed that each time the function is run the interface count increases by 6; occasionally, but not often, it decreases, yet it never drops back below 10.
Does anyone know why this exception is occurring and what the correct way to deal with it is?
Thank you
I think I have managed to fix the problem. It seems that you should only call pythoncom.CoInitialize() in a separate thread, as pythoncom automatically calls it on the main thread: http://docs.activestate.com/activepython/2.5/pywin32/pythoncom__CoInitializeEx_meth.html
So I just check whether the current thread is the main thread, and if it is, I don't call pythoncom.CoInitialize() and pythoncom.CoUninitialize().
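A minimal sketch of that check (threading.main_thread() requires Python 3.4+; the WMI call itself is elided):
import threading
import pythoncom

def ping(ip_address):
    on_main_thread = threading.current_thread() is threading.main_thread()
    if not on_main_thread:
        pythoncom.CoInitialize()  # COM is already initialized on the main thread
    try:
        ...  # the Win32_PingStatus call from the question goes here
    finally:
        if not on_main_thread:
            pythoncom.CoUninitialize()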

Can I use Python Exceptions to handle the aftermath of SIGHUP signals for a daemon?

I'm writing a simple little file-syncing daemon in Python to implement a synchronised file system between a primary and a secondary machine. Most of the heavy lifting is done by rsync.
On the primary side it periodically calls rsync and sleeps for a few seconds before repeating. On the secondary side it spawns rsyncd with subprocess.Popen() and does a .wait() until it exits. However, I want to trigger a reconfiguration of the daemon with a SIGHUP, and I'm wondering what the best way of handling the clean-up is.
My initial thoughts were to have the signal handler raise an exception that can trigger the clean-up:
def signal_handler(signum, frame):
    raise fsync_config_exception
And:
rsync_args = [rsync_binary, "--daemon", "--no-detach", "--config=%s" % (config.name)]
p = subprocess.Popen(rsync_args)
try:
    p.wait()
    if p.returncode != 0:
        print "failed to spawn rsyncd"
        return False
except fsync_config_exception:
    print "spawn_and_monitor_rsyncd: config exceptions"
except:
    (type, value, tb) = sys.exc_info()
    print "we got %s with %s instead" % (type, value)
However I get:
we got <type 'exceptions.TypeError'> with __init__() takes exactly 2 arguments (1 given) instead
instead of the expected fsync_config_exception. Does anyone have any suggestions for the best approach to this sort of problem? Am I hideously abusing exceptions by trying to raise them from a signal context?
You should raise an fsync_config_exception instance, not the class (instantiate it according to its __init__() signature).
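For example, assuming from the TypeError that its __init__() takes a single message argument:
def signal_handler(signum, frame):
    raise fsync_config_exception('SIGHUP received')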
However, I do not suggest raising asynchronous exceptions in that manner. You'll have a hard time guaranteeing that the exception is only raised when it can be properly handled. It's also not a good pattern to become accustomed to, since you cannot interrupt blocking C extension calls in the Python interpreter context (but that might not be a problem for you?).
Instead, in the signal handler I would (glossing over details, and not knowing if this is correct for your case; a sketch follows below):
- Set state noting that the current iteration has been interrupted; use this to mark whether special clean-up is in order
- In the signal handler, kill the process that you're waiting on
- Add another condition check after the wait to see whether the process was killed by a signal (with an additional check on the state variable noting whether the SIGHUP was received)
- Do what you need to do in response to the SIGHUP
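A rough sketch of that pattern, reusing rsync_args from the question (the reconfiguration body is left elided):
import signal
import subprocess

got_sighup = False
p = None

def sighup_handler(signum, frame):
    global got_sighup
    got_sighup = True      # note that this iteration was interrupted
    if p is not None:
        p.terminate()      # kill the rsyncd we are waiting on

signal.signal(signal.SIGHUP, sighup_handler)

p = subprocess.Popen(rsync_args)
p.wait()
if got_sighup:
    got_sighup = False
    # ... reconfigure and clean up in response to the SIGHUP ...
elif p.returncode != 0:
    print("failed to spawn rsyncd")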
