How to avoid program freezing when connecting to server - python

I have a little script which finds domain names that are not registered yet. I use the pywhois module. The problem is that it suddenly freezes and does nothing after several (sometimes hundreds of) requests. I don't think it is a ban, because I can run the program again right after the freeze and it works.
I would like to avoid this freezing. My idea is to time the function call and, if it crosses some threshold (for example 10 seconds), repeat the request.
Do you have any advice on how to avoid the freezing, or a better way to check domains?
Here is the code:
import pythonwhois

for keyword in keywords:
    try:
        details = pythonwhois.get_whois(keyword + '.com')
    except Exception as e:
        print e
        continue
    if 'status' not in details.keys():
        print 'Free domain!'
        print keyword

This approach may break if the underlying library changes, but you can reach into pythonwhois's internals and set a socket-level timeout for all of its network calls. For example:
TIMEOUT = 5.0 # timeout in seconds
pythonwhois.net.socket.setdefaulttimeout(TIMEOUT)
pythonwhois.get_whois("example.com")
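If you also want to retry a request that hangs (the idea of re-running the call once a threshold is crossed), a minimal sketch might look like this; the retry count and timeout values are arbitrary, keywords is assumed to be defined as in the question, and the timeout is assumed to surface as socket.timeout (the library may wrap it in its own exception):
import socket
import pythonwhois

TIMEOUT = 10.0   # seconds before a hung WHOIS call is abandoned
RETRIES = 3      # how many times to retry a keyword that timed out

pythonwhois.net.socket.setdefaulttimeout(TIMEOUT)

for keyword in keywords:
    for attempt in range(RETRIES):
        try:
            details = pythonwhois.get_whois(keyword + '.com')
        except socket.timeout:
            continue          # the call hung; try the same keyword again
        except Exception as e:
            print e
            break             # some other error; give up on this keyword
        if 'status' not in details:
            print 'Free domain!'
            print keyword
        break                 # got an answer; move on to the next keyword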

Maybe you could try dnspython. It looks like you just want to check if a domain name is registered. For example:
import dns.resolver

for keyword in keywords:
    try:
        dns.resolver.query(keyword + '.com')
    except dns.resolver.NXDOMAIN:
        print(keyword + '.com is available!')
DNS resolver has a default timeout of 2 seconds. If you want to change that, you can make a new instance of dns.resolver.Resolver with a different timeout.
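For example, a sketch of a resolver with shorter limits (timeout and lifetime are standard dnspython Resolver attributes; the values here are arbitrary):
import dns.exception
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.timeout = 1    # seconds to wait for each nameserver
resolver.lifetime = 3   # total seconds to spend on one query

try:
    resolver.query('example.com')
except dns.resolver.NXDOMAIN:
    print('example.com is available!')
except dns.exception.Timeout:
    print('query timed out')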
To check many keywords concurrently, a worker pool is a good fit:
import dns.resolver
from multiprocessing import Pool

def check_keyword(keyword):
    try:
        dns.resolver.query(keyword + '.com')
    except dns.resolver.NXDOMAIN:
        # You probably want to change this to a return
        print(keyword + '.com is available!')

if __name__ == '__main__':
    keywords = [...]
    p = Pool(5)
    print(p.map(check_keyword, keywords))


How to bypass a request when it takes too long?

I have a Python library which must be fast enough for an online application. If a particular request (function call) takes too long, I want to just bypass this request with an empty result returned.
The function looks like the following:
def fast_function(text):
    result = mylibrary.process(text)
    ...
If mylibrary.process takes longer than a threshold, e.g. 100 milliseconds, I want to bypass this request and proceed to the next 'text'.
What's the normal way to handle this? Is this a normal scenario? My application can afford to skip a very small number of requests like this if they take too long.
One way is to use a signal timer. As an example:
import signal

def took_too_long(signum, frame):
    raise TimeoutError

signal.signal(signal.SIGALRM, took_too_long)
signal.setitimer(signal.ITIMER_REAL, 0.1)  # 0.1 seconds

try:
    result = mylibrary.process(text)
    signal.setitimer(signal.ITIMER_REAL, 0)  # success, reset to 0 to disable the timer
except TimeoutError:
    # took too long, do something
    pass
You'll have to experiment to see if this does or does not add too much overhead.
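If you need this in more than one place, the same idea can be wrapped in a small context manager; this is only a sketch (Unix-only, main thread only, and the names are mine rather than part of the answer above):
import signal
from contextlib import contextmanager

@contextmanager
def time_limit(seconds):
    def handler(signum, frame):
        raise TimeoutError
    old_handler = signal.signal(signal.SIGALRM, handler)
    signal.setitimer(signal.ITIMER_REAL, seconds)
    try:
        yield
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)       # disable the timer
        signal.signal(signal.SIGALRM, old_handler)    # restore the previous handler

# usage
try:
    with time_limit(0.1):
        result = mylibrary.process(text)
except TimeoutError:
    result = ''   # took too long, fall back to an empty result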
You can add a timeout to your function.
One way to implement it is with a timeout decorator that throws an exception if the function runs for longer than the defined timeout. To move on to the next operation, catch the exception the decorator raises.
Install this one for example: pip install timeout-decorator
import timeout_decorator

@timeout_decorator.timeout(5)  # timeout of 5 seconds
def fast_function(text):
    result = mylibrary.process(text)
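The caller can then catch the exception and continue; a sketch, assuming the package raises timeout_decorator.TimeoutError (check the version you install) and that texts is your list of inputs:
results = []
for text in texts:
    try:
        results.append(fast_function(text))
    except timeout_decorator.TimeoutError:
        results.append('')   # bypassed: this text took longer than 5 seconds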

python simple threading: program ends without doing anything (maybe)

When I run the following code (using "sudo python servers.py"), the process seems to finish immediately, printing only "test".
Why don't the proxy_server functions run? Or maybe they do and I just don't realize it (because the first line in the proxy function doesn't print anything).
This is a stripped-down version of the code; I didn't want to include unnecessary content, yet it still demonstrates my problem:
import os,sys,thread,socket,select,struct,time

HTTP_PORT = 80
FTP_PORT = 21
FTP_DATA_PORT = 20

IP_IN = '10.0.1.3'
IP_OUT = '10.0.3.3'

sys_http = 'http_proxy'
sys_ftp = 'ftp_proxy'
sys_ftp_data = 'ftp_data_proxy'

def main():
    try:
        thread.start_new_thread(proxy_server, (HTTP_PORT, IP_IN, sys_http, http_handler))
        thread.start_new_thread(proxy_server, (FTP_PORT, IP_IN, sys_ftp, http_handler))
        thread.start_new_thread(proxy_server, (FTP_DATA_PORT, IP_OUT, sys_ftp_data, http_handler))
         print "test"
    except e:
        print 'Error!'
        sys.exit(1)

def proxy_server(host, port, fileName, handler):
    print "Proxy Server Running on ", host, ":", port

def http_handler(src, sock):
    return ''

if __name__ == '__main__':
    main()
What am I missing or doing wrong?
First, you have indentation problems related to using mixed tabs and spaces for indentation. While they didn't cause your code to misbehave in this particular case, they will cause you problems later if you don't stick to consistently using one or the other. They've already broken the displayed indentation in your question; see the print "test" line in main, which looks misaligned.
Second, instead of the low-level thread module, you should be using threading. Your problem is occurring because, as documented in the thread module documentation,
When the main thread exits, it is system defined whether the other threads survive. On SGI IRIX using the native thread implementation, they survive. On most other systems, they are killed without executing try ... finally clauses or executing object destructors.
threading threads let you explicitly define whether other threads should survive the death of the main thread, and default to surviving. In general, threading is much easier to use correctly.
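A minimal sketch of the same main() using threading (the thread arguments come from the question; threading.Thread objects are non-daemon by default, so the process stays alive until they finish):
import threading

def main():
    servers = [
        (HTTP_PORT, IP_IN, sys_http, http_handler),
        (FTP_PORT, IP_IN, sys_ftp, http_handler),
        (FTP_DATA_PORT, IP_OUT, sys_ftp_data, http_handler),
    ]
    threads = []
    for args in servers:
        t = threading.Thread(target=proxy_server, args=args)
        t.start()        # non-daemon by default, so it survives main() returning
        threads.append(t)
    print "test"
    for t in threads:
        t.join()         # optional: block until the servers exit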

Python Memory leak using Yocto

I'm running a Python script on a Raspberry Pi that constantly checks a Yocto button; when it gets pressed, it puts data from a different sensor into a database.
A code snippet of what constantly runs is:
# when all set and done run the program
Active = True
while Active:
    if ResponseType == "b":
        while Active:
            try:
                if GetButtonPressed(ResponseValue):
                    DoAllSensors()
                    time.sleep(5)
                else:
                    time.sleep(0.5)
            except KeyboardInterrupt:
                Active = False
            except Exception, e:
                print str(e)
                print "exception raised, continuing after 10 seconds"
                time.sleep(10)
the GetButtonPressed(ResponseValue) looks like the following:
def GetButtonPressed(number):
    global buttons
    if ModuleCheck():
        if buttons[number - 1].get_calibratedValue() < 300:
            return True
    else:
        print "module not online"
    return False

def ModuleCheck():
    global moduleb
    return moduleb.isOnline()
I'm not quite sure what might be going wrong, but it takes about an hour before the RPi runs out of memory.
The memory usage increases constantly, and the button is only pressed once every 15 minutes or so.
That already tells me that the problem must be in the code displayed above.
The problem is that the yocto_api.YAPI object will continue to accumulate _Event objects in its _DataEvents list (a class-wide attribute) until you call YAPI.HandleEvents. If you're not using the API's callbacks, it's easy to think (I did, for hours) that you don't need to ever call this. The API docs aren't at all clear on the point:
If your program includes significant loops, you may want to include a call to this function to make sure that the library takes care of the information pushed by the modules on the communication channels. This is not strictly necessary, but it may improve the reactivity of the library for the following commands.
I did some playing around with API-level callbacks before I decided to periodically poll the sensors in my own code, and it's possible that some setting got left enabled in them that is causing these events to accumulate. If that's not the case, I can't imagine why they would say calling HandleEvents is "not strictly necessary," unless they make ARM devices with unlimited RAM in Switzerland.
Here's the magic static method that thou shalt call periodically, no matter what. I'm doing so once every five seconds and that is taking care of the problem without loading down the system at all. API code that would accumulate unwanted events still smells to me, but it's time to move on.
# noinspection PyUnresolvedReferences
@staticmethod
def HandleEvents(errmsgRef=None):
    """
    Maintains the device-to-library communication channel.
    If your program includes significant loops, you may want to include
    a call to this function to make sure that the library takes care of
    the information pushed by the modules on the communication channels.
    This is not strictly necessary, but it may improve the reactivity
    of the library for the following commands.
    This function may signal an error in case there is a communication problem
    while contacting a module.
    @param errmsg : a string passed by reference to receive any error message.
    @return YAPI.SUCCESS when the call succeeds.
    On failure, throws an exception or returns a negative error code.
    """
    errBuffer = ctypes.create_string_buffer(YAPI.YOCTO_ERRMSG_LEN)
    # noinspection PyUnresolvedReferences
    res = YAPI._yapiHandleEvents(errBuffer)
    if YAPI.YISERR(res):
        if errmsgRef is not None:
            # noinspection PyAttributeOutsideInit
            errmsgRef.value = YByte2String(errBuffer.value)
        return res
    while len(YAPI._DataEvents) > 0:
        YAPI.yapiLockFunctionCallBack(errmsgRef)
        if not (len(YAPI._DataEvents)):
            YAPI.yapiUnlockFunctionCallBack(errmsgRef)
            break
        ev = YAPI._DataEvents.pop(0)
        YAPI.yapiUnlockFunctionCallBack(errmsgRef)
        ev.invokeData()
    return YAPI.SUCCESS
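For illustration, here is a minimal sketch of how the periodic call could be folded into a polling loop like the one in the question; the five-second interval and the names Active, GetButtonPressed, ResponseValue and DoAllSensors are taken from the question, and the rest is my own scaffolding:
import time
from yocto_api import YAPI

HANDLE_EVERY = 5.0   # seconds between YAPI.HandleEvents() calls
last_handle = time.time()

while Active:
    if GetButtonPressed(ResponseValue):
        DoAllSensors()
        time.sleep(5)
    else:
        time.sleep(0.5)
    if time.time() - last_handle >= HANDLE_EVERY:
        YAPI.HandleEvents()   # drains YAPI._DataEvents so events don't pile up
        last_handle = time.time()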

Win32 exception occurred when using pythoncom.CoUninitialize() and wmi

I am trying to create a function in Python 3.4 that will ping a server. Currently it only needs to work on Windows (possibly XP and above).
I have found that I can use the WMI Win32_PingStatus (Reference) class to ping a server and have written the following function:
import sys

if sys.platform.startswith('win32'):
    import wmi
    import pythoncom

def ping(ip_address):
    """ Check if can ping ip address using OS ping tool"""
    if sys.platform.startswith('win32'):
        try:
            pythoncom.CoInitialize()
            c = wmi.WMI()
            status_code = c.Win32_PingStatus(address=ip_address)[0].StatusCode
            if status_code == 0:
                successful = True
            else:
                successful = False
            pythoncom.CoUninitialize()
        except wmi.x_wmi:
            # If WMI ping fails fallback
            pass
    else:
        raise NotImplementedError
    return successful

if __name__ == "__main__":
    ping("127.0.0.1")
This works as intended, as it returns True when you can ping the IP and False when you can't. However, every time I run it I get a series of errors at the end:
Win32 exception occurred releasing IUnknown at 0x03140980
Win32 exception occurred releasing IUnknown at 0x031635d0
Win32 exception occurred releasing IUnknown at 0x03163560
This happens every time I run the script, although the first two bytes of the address change each time; the last two bytes always stay the same.
I have tried commenting out various sections of the code and have found that if I remove pythoncom.CoUninitialize() the exceptions do not occur. I am using pythoncom.CoInitialize() and pythoncom.CoUninitialize() because I want to call the function in a thread, as described here.
I have tried adding print(pythoncom._GetInterfaceCount()) to see how many interfaces are present and have noticed that each time the function is run the interface count increases by 6, occasionally (but not often) decreasing, although it never drops back below 10.
Does anyone know why this exception is occurring and what the correct way to deal with it is?
Thank you
I think I have managed to fix the problem. It seems that you should only call pythoncom.CoInitialize() in a separate thread, since pythoncom automatically calls it on the main thread: http://docs.activestate.com/activepython/2.5/pywin32/pythoncom__CoInitializeEx_meth.html
So I just check whether the current thread is the main thread, and if it is I don't call pythoncom.CoInitialize() and pythoncom.CoUninitialize().
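A minimal sketch of that check, assuming Python 3.4+ (threading.main_thread() is available there); the error handling from the original function is omitted for brevity:
import sys
import threading

if sys.platform.startswith('win32'):
    import wmi
    import pythoncom

def ping(ip_address):
    """Ping ip_address via WMI, initialising COM only on worker threads."""
    on_main_thread = threading.current_thread() is threading.main_thread()
    if not on_main_thread:
        pythoncom.CoInitialize()   # the main thread is already initialised by pythoncom
    try:
        c = wmi.WMI()
        status_code = c.Win32_PingStatus(address=ip_address)[0].StatusCode
        return status_code == 0
    finally:
        if not on_main_thread:
            pythoncom.CoUninitialize()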

How to set a timeout in libvirt (using Python)

I would like to set a short connection timeout (only a few seconds) when using libvirt in my Python program instead of the long default one.
I found the C function: virEventAddTimeoutFunc() in the C libvirt API here:
http://libvirt.org/html/libvirt-libvirt.html#virEventAddTimeoutFunc
and eventInvokeTimeoutCallback(timer, callback, opaque) in libvirt.py around line 150, but I do not know how to use it. I did not find any example on the net.
I tried this but I get a segmentation fault: :-(
import libvirt

def timeout_cb_d():
    print 'Timeout !'

try:
    # try to set the libvirt timeout to 2 seconds:
    t = libvirt.eventInvokeTimeoutCallback(2, timeout_cb_d, "from dom0_class")
except:
    ...
Can anyone give me a working example, please?
We finally found a simple way to proceed using Python alarm & signal handler: http://docs.python.org/library/signal.html#example
Edit:
Here is the idea:
import string, time, sys, signal
import libvirt

class Host:
    # (...)
    def timeout_handler(self, sig_code, frame):
        if 14 == sig_code: sig_code = 'SIGALRM'
        print time.strftime('%F %T -'), 'Signal handler called with signal:', sig_code
        raise Exception('Timeout!')

    def libVirtConnect(self):
        try:
            # Enable the timeout with an alarm:
            signal.signal(signal.SIGALRM, self.timeout_handler)
            signal.alarm(self._libvirt_timeout_in_seconds)
            self._virt_conn = libvirt.open('xen+tcp://' + self._ip)
            signal.alarm(0)  # Disable the alarm
        except Exception, e:
            signal.alarm(0)  # Disable the alarm
I'm assuming libvirt communicates over a standard socket. If that's the case, you can set an application-wide timeout using socket.setdefaulttimeout.
That's not to say the libvirt bindings for python don't call that function themselves, but it's worth a try.
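A sketch of that idea, assuming the bindings really do go through Python's socket module (which is worth verifying); setdefaulttimeout is process-wide, so it affects every socket created afterwards, and the URI below is just a placeholder:
import socket
import libvirt

socket.setdefaulttimeout(5.0)   # applies to all Python-level sockets created from now on

try:
    conn = libvirt.open('xen+tcp://10.0.0.1')   # hypothetical URI
except (libvirt.libvirtError, socket.timeout) as e:
    print 'connection failed or timed out:', e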
I have often used monkeypatching to change a library so that sockets timeout. Usually you just need to find the method which calls select or poll and monkeypatch in a modified version. Sometimes you need to set up a try-catch which catches socket.timeout and does something to allow it to percolate up to your code without causing another error on the way. In one case I had to create a valid response object instead of None, for instance.
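As a sketch of that pattern (every name here is invented for illustration; the real target would be whichever library method blocks on the socket):
import socket
import somelib   # hypothetical library whose Connection reads block forever

_original_read = somelib.Connection._read_response   # hypothetical method

def _read_with_timeout(self, *args, **kwargs):
    self.sock.settimeout(5.0)            # bound the blocking read
    try:
        return _original_read(self, *args, **kwargs)
    except socket.timeout:
        return ''                        # hand back a harmless empty response

somelib.Connection._read_response = _read_with_timeout   # monkeypatch in place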
