OSError: [Errno 11] Resource temporarily unavailable. What causes this? - python

Background
I have two python processes that need to communicate with each other. The communication is handled by a class named Pipe. I made a separate class for this because most of the information that needs to be communicated comes in the form of dictionaries, so Pipe implements a pretty simple protocol for doing this.
Here is the Pipe constructor:
def __init__(self,sPath):
    """
    create the fifo. if it already exists just associate with it
    """
    self.sPath = sPath
    if not os.path.exists(sPath):
        try:
            os.mkfifo(sPath)
        except:
            raise Exception('cannot mkfifo at path \n {0}'.format(sPath))
    self.iFH = os.open(sPath,os.O_RDWR | os.O_NONBLOCK)
    self.iFHBlocking = os.open(sPath,os.O_RDWR)
So ideally I would just construct a Pipe in each process with the same path and they would be able to talk nicely.
I'm going to skip the details of the protocol because I think they are largely unnecessary here.
All read and write operations make use of the following 'base' functions:
def base_read_blocking(self,iLen):
    self.lock()
    lBytes = os.read(self.iFHBlocking,iLen)
    self.unlock()
    return lBytes

def base_read(self,iLen):
    print('entering base read')
    self.lock()
    lBytes = os.read(self.iFH,iLen)
    self.unlock()
    print('exiting base read')
    return lBytes

def base_write_blocking(self,lBytes):
    self.lock()
    safe_write(self.iFHBlocking,lBytes)
    self.unlock()

def base_write(self,lBytes):
    print('entering base write')
    self.lock()
    safe_write(self.iFH,lBytes)
    self.unlock()
    print('exiting base write')
safe_write was suggested in another post
def safe_write(*args, **kwargs):
    while True:
        try:
            return os.write(*args, **kwargs)
        except OSError as e:
            if e.errno == 35:
                import time
                print(".")
                time.sleep(0.5)
            else:
                raise
Locking and unlocking are handled like this:
def lock(self):
    print('locking...')
    while True:
        try:
            os.mkdir(self.get_lock_dir())
            print('...locked')
            return
        except OSError as e:
            if e.errno != 17:
                raise e

def unlock(self):
    try:
        os.rmdir(self.get_lock_dir())
    except OSError as e:
        if e.errno != 2:
            raise e
    print('unlocked')
The Problem
This sometimes happens:
....in base_read
    lBytes = os.read(self.iFH,iLen)
OSError: [Errno 11] Resource temporarily unavailable
Sometimes it's fine.
The Magical Solution
I seem to have stopped the problem from happening. Please note this is not me answering my own question. My question is explained in the next section.
I changed the read functions to look more like this and it sorted stuff out:
def base_read(self,iLen):
    while not self.ready_for_reading():
        import time
        print('.')
        time.sleep(0.5)
    lBytes = ''.encode('utf-8')
    while len(lBytes)<iLen:
        self.lock()
        try:
            lBytes += os.read(self.iFH,iLen)
        except OSError as e:
            if e.errno == 11:
                import time
                print('.')
                time.sleep(0.5)
        finally:
            self.unlock()
    return lBytes

def ready_for_reading(self):
    lR,lW,lX = select.select([self.iFH,],[],[],self.iTimeout)
    if not lR:
        return False
    lR,lW,lX = select.select([self.iFHBlocking],[],[],self.iTimeout)
    if not lR:
        return False
    return True
The Question
I'm struggling to find out exactly why it is temporarily unavailable. The two processes cannot access the actual named pipe at the same time due to the locking mechanism (unless I am mistaken?) so is this due to something more fundamental to fifos that my program is not taking into account?
All I really want is an explanation... The solution I found works but it looks like magic. Can anyone offer an explanation?
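For reference, the error can be reproduced with a single process and no locking at all, which suggests it is something fundamental to fifos rather than contention; a minimal sketch (assuming Linux, where reading an empty FIFO opened with O_NONBLOCK raises OSError with errno 11, EAGAIN):

import errno
import os

sPath = '/tmp/demo_fifo'  # hypothetical path, for illustration only
if not os.path.exists(sPath):
    os.mkfifo(sPath)
iFH = os.open(sPath, os.O_RDWR | os.O_NONBLOCK)

try:
    os.read(iFH, 64)  # nothing has been written to the pipe yet
except OSError as e:
    # errno 11 (EAGAIN): the pipe is simply empty, not held by another process
    print(e.errno == errno.EAGAIN)  # True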
System
Ubuntu 12.04,
Python 3.2.3

I had a similar problem with Java before. Have a look at the accepted answer: the problem was that I was creating new threads in a loop. I suggest you have a look at the code creating the pipe and make sure you are not creating multiple pipes.

Related

Correctly handle errors using watchdog

I'm trying to understand how to use error handling correctly in Python.
I'm using Watchdog to watch my folders on a network-connected disc. Sometimes the disc disconnects briefly and then connects again, and an error pops up: "Exception in thread Thread-2:"
I have an error handler but I'm not sure I'm doing it correctly.
Should I put another try at the observer.schedule step?
Python 3.6, Windows 10
if __name__ == '__main__':
    path = "P:\\03_auto\\Indata"
    observer = Observer()
    observer.schedule(MyHandler(), path, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
Well, the sys.excepthook approach I mentioned in my comment is not feasible at the moment. There's a more than a decade old bug that makes threads derived from threading.Thread ignore sys.excepthook. There are some workarounds outlined in the message thread of that bug, but I am hesitant to post an answer employing a workaround, especially since this bug finally seems to be getting a fix in Python 3.8.
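For what it's worth, Python 3.8 adds threading.excepthook, which makes the hook-based approach viable; a minimal sketch, assuming Python 3.8+:

import threading

def handle_thread_exception(args):
    # args carries exc_type, exc_value, exc_traceback and the thread object
    print('Unhandled exception in {}: {}'.format(args.thread.name, args.exc_value))

# replaces the default hook used for uncaught exceptions in threading.Thread targets
threading.excepthook = handle_thread_exception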
The other option would be to derive a custom Observer from Watchdog's Observer.
The most basic way I can think of is a wrapper around the parent's run() method:
class CustomObserver(Observer):
    def run(self):
        while self.should_keep_running():
            try:
                # Tweak the super call if you require compatibility with Python 2
                super().run()
            except OSError:
                # You did not mention the exception class in your post.
                # Be specific about what you want to handle here.
                # Give the file system some time to recover:
                time.sleep(.5)
Judging by a quick glance at the source, all Observers seem to inherit their run from EventDispatcher.run(), so you could probably even omit the wrapper and reimplement that method directly:
class CustomObserver(Observer):
    def run(self):
        while self.should_keep_running():
            try:
                self.dispatch_events(self.event_queue, self.timeout)
            except queue.Empty:
                continue
            except OSError:
                time.sleep(.5)
However, I don't have that package installed on my box, so these things are untested; you might have to fiddle around a bit to get this going.
Oh, and make sure to replace the OSError* with whatever exception class is actually raised in your case :)
Edit:
* As per @HenryYik's comment below, a network drive disconnecting appears to raise an OSError (WinError 64: ERROR_NETNAME_DELETED) on Windows systems. I find it likely that UNIX-style OSs raise the same type of exception in that situation, hence I updated the code snippets to use OSError instead of the FileNotFoundError I used originally.
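For illustration, the CustomObserver would then slot into the question's main block in place of the plain Observer (untested, like the snippets above):

if __name__ == '__main__':
    path = "P:\\03_auto\\Indata"
    observer = CustomObserver()
    observer.schedule(MyHandler(), path, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()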
I think super().run() is not a good answer.
I've tried many things based on shmee's answer, but it was not concrete enough for my case.
Below is my answer. I've finished testing it.
from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler

class Watcher:
    def __init__(self):
        self.observer = Observer()
        print('observer init...')

    def run(self):
        global osv_status
        try:
            event_handler = MyHandler(patterns=["*.txt"])
            self.observer.schedule(event_handler, Directory, recursive=True)
            self.observer.start()
            osv_status = 1
        except OSError as ex:
            print("OSError")
            time.sleep(.5)
            if self.observer.is_alive() is True:
                self.observer.stop()
                self.observer.join()
            print("Observer is removed [ ", osv_status, " ]")
            osv_status = 2
        except Exception as ex:
            self.logger.exception("Exception ex :{0} ".format(ex))

class MyHandler(PatternMatchingEventHandler):
    def __init__(self, *args, **kwargs):
        super(MyHandler, self).__init__(*args, **kwargs)
        print("MyHandler Init")

    def on_created(self, event):
        try:
            print('New file is created ', event.src_path)
        except Exception as ex:
            self.logger.exception("Exception ex :{0} ".format(ex))

    def on_modified(self, event):
        print('File is modified ', event.src_path)
        try:
            doing()
        except Exception as ex:
            self.logger.exception("Exception ex :{0} ".format(ex))

if __name__ == "__main__":
    .....
    osv_status = 0  # 0: ready, 1: start, 2: halt
    wch = Watcher()
    wch.run()
    try:
        while(1):
            if is_Connect == False:
                sock_connect()
                print('sock_connect')
            time.sleep(1)
            if os.path.exists(Directory) is False and osv_status == 1:
                wch.observer.stop()
                wch.observer.join()
                print("Observer is removed [ ", osv_status, " ]")
                osv_status = 0
            elif os.path.exists(Directory) is True and (osv_status == 0 or osv_status == 2):
                if wch.observer.is_alive() is False:
                    wch = Watcher()
                    wch.run()
I've also posted this in a related thread here
This is how I solved this problem:
from watchdog import observers
from watchdog.observers.api import DEFAULT_OBSERVER_TIMEOUT, BaseObserver

class MyEmitter(observers.read_directory_changes.WindowsApiEmitter):
    def queue_events(self, timeout):
        try:
            super().queue_events(timeout)
        except OSError as e:
            print(e)
            connected = False
            while not connected:
                try:
                    self.on_thread_start()  # need to re-set the directory handle.
                    connected = True
                    print('reconnected')
                except OSError:
                    print('attempting to reconnect...')
                    time.sleep(10)

observer = BaseObserver(emitter_class=MyEmitter, timeout=DEFAULT_OBSERVER_TIMEOUT)
...
Subclassing WindowsApiEmitter to catch the exception in queue_events. In order to continue after reconnecting, watchdog needs to re-set the directory handle, which we can do with self.on_thread_start().
Then use MyEmitter with BaseObserver, we can now handle losing and regaining connection to shared drives.
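For illustration, a minimal sketch of wiring it up (the handler and path are hypothetical placeholders):

from watchdog.events import FileSystemEventHandler
from watchdog.observers.api import DEFAULT_OBSERVER_TIMEOUT, BaseObserver

# an observer driven by the reconnecting emitter, as above
observer = BaseObserver(emitter_class=MyEmitter, timeout=DEFAULT_OBSERVER_TIMEOUT)
observer.schedule(FileSystemEventHandler(), 'Z:\\shared_folder', recursive=True)
observer.start()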

How to close a python script without just activating "except" in a try/except statement

I have a python script running over some 50,000 items in a database; it takes about 10 seconds per item and I want to stop it.
But if I just press Ctrl+C while the code is in the try/except part, my program enters the except statement, deletes that item, and continues. The only way to close the program is to repeatedly delete items this way until, just by sheer luck, I catch it not in the try statement.
How do I exit the script without just triggering the except statement?
EDIT 1
I have a statement that looks something like this:
for item in items:
    try:
        if is_item_gone(content) == 1:
            data = (item['id'])
            db.update('UPDATE items SET empty = 0 WHERE id = (%s);', data)
        else:
            # do a lot of stuff
    except:
        data = (item['id'])
        db.delete('...')
EDIT 2
The top of my connect to db code looks like:
#!/usr/bin/python
import MySQLdb
import sys

class Database:
    ....
The issue is that you are using a blanket except, which is always a bad idea. If you catch specific exceptions instead, a KeyboardInterrupt will stop your script:
for item in items:
    try:
        if is_item_gone(content) == 1:
            data = (item['id'])
            db.update('UPDATE items SET empty = 0 WHERE id = (%s);', data)
        else:
            # do a lot of stuff
    except MySQLdb.Error as e:
        print(e)
        data = (item['id'])
        db.delete('...')
If you have other exceptions to catch you can use multiple in the except:
except (KeyError, MySQLdb.Error) as e:
At the very least you could catch Exception as e and print the error.
for item in items:
    try:
        if is_item_gone(content) == 1:
            data = (item['id'])
            db.update('UPDATE items SET empty = 0 WHERE id = (%s);', data)
        else:
            # do a lot of stuff
    except Exception as e:
        print(e)
        data = (item['id'])
        db.delete('...')
The moral of the story is: don't use a blanket except; catch what you expect, and logging the errors might also be a good idea. The exception-hierarchy is also worth reading to see exactly what you are catching.
I would rewrite it like this:
try:
    # ...
except KeyboardInterrupt:
    sys.exit(1)
except Exception:  # I would use a more specific exception here if you can.
    data = (item['id'])
    db.delete('...')
Or just
try:
    # ...
except Exception:  # I would use an even more specific exception here if you can.
    data = (item['id'])
    db.delete('...')
Python 2's exception hierarchy can be found here: https://docs.python.org/2/library/exceptions.html#exception-hierarchy
Also, many scripts are written with something like this boilerplate at the bottom:
def _main(args):
    ...

if __name__ == '__main__':
    try:
        sys.exit(_main(sys.argv[1:]))
    except KeyboardInterrupt:
        print >> sys.stderr, '%s: interrupted' % _SCRIPT_NAME
        sys.exit(1)
When you use except: instead of except Exception: in your main body, you are not allowing this top level exception block to catch the KeyboardInterrupt. (You don't have to actually catch it for Ctrl-C to work -- this just makes it prettier when KeyboardInterrupt is raised.)
import signal
import MySQLdb
import sys

## Handle process signals:
def sig_handler(signal, frame):
    global connection
    global cursor
    cursor.close()
    connection.close()
    exit(0)

## Register a signal to the handler:
signal.signal(signal.SIGINT, sig_handler)

class Database:
    # Your class stuff here.
    # Note that in sig_handler() I close the cursor and connection;
    # perhaps your class has a .close call? If so use that.

for item in items:
    try:
        if is_item_gone(content) == 1:
            data = (item['id'])
            db.update('UPDATE items SET empty = 0 WHERE id = (%s);', data)
        else:
            # do a lot of stuff
    except MySQLdb.Error as e:
        data = (item['id'])
        db.delete('...')
This registers a handler that catches, for instance, Ctrl+C at any given time. The handler shuts the connection down accordingly, then exits with exit code 0, which is a good exit code.
This is a more "global" approach to catching keyboard interrupts.

Run a python server in the background

I am using gevent-socketio to push real time data from a python script to a browser. However, I would like it to work interactively from the interpreter so that the server continues to run in the background. Some pseudo code:
# start the server with the application and keep it running in the background
server_inst = MyAppServer()

# send data to the server, which pushes it to a browser (prints "foo" to the browser)
server_inst.send_data('foo')
Having looked at threading, I am still confused about how it could/should be done. Any pointers appreciated.
Is there a reason to have the server and the console program as the same process?
If not, I would suggest the use of two separate processes and named pipes. If this solution is not ideal for whatever reason, can you please give more details?
Anyway, here is some code that might be useful to you for implementing a Pipe class. I'm not sure exactly what form the data that you need to communicate takes, so you could have the Pipe class abstract some simple protocol and/or use pickle.
def __init__(self,sPath):
    """
    create the fifo. if it already exists just associate with it
    """
    self.sPath = sPath
    if not os.path.exists(sPath):
        try:
            os.mkfifo(sPath)
        except:
            raise Exception('cannot mkfifo at path \n {0}'.format(sPath))
    self.iFH = os.open(sPath,os.O_RDWR | os.O_NONBLOCK)
    self.iFHBlocking = os.open(sPath,os.O_RDWR)

def base_read(self,iLen,blocking=False):
    iFile = self.iFHBlocking if blocking else self.iFH
    while not self.ready_for_reading():
        import time
        time.sleep(0.5)
    lBytes = ''.encode('utf-8')
    while len(lBytes)<iLen:
        self.lock()
        try:
            lBytes += os.read(iFile,1)
            self.unlock()
        except OSError as e:
            self.unlock()
            if e.errno == 11:
                import time
                time.sleep(0.5)
            else:
                raise e
    return lBytes

def base_write(self,lBytes,blocking=False):
    iFile = self.iFHBlocking if blocking else self.iFH
    while not self.ready_for_writing():
        import time
        time.sleep(0.5)
    while True:
        self.lock()
        try:
            iBytesWritten = os.write(iFile, lBytes)
            self.unlock()
            break
        except OSError as e:
            self.unlock()
            if e.errno in [35,11]:
                import time
                time.sleep(0.5)
            else:
                raise
    if iBytesWritten < len(lBytes):
        self.base_write(lBytes[iBytesWritten:],blocking)

def get_lock_dir(self):
    return '{0}_lockdir__'.format(self.sPath)

def lock(self):
    while True:
        try:
            os.mkdir(self.get_lock_dir())
            return
        except OSError as e:
            if e.errno != 17:
                raise e

def unlock(self):
    try:
        os.rmdir(self.get_lock_dir())
    except OSError as e:
        if e.errno != 2:
            raise e

def ready_for_reading(self):
    lR,lW,lX = select.select([self.iFH,],[],[],self.iTimeout)
    if not lR:
        return False
    lR,lW,lX = select.select([self.iFHBlocking],[],[],self.iTimeout)
    if not lR:
        return False
    return True

def ready_for_writing(self):
    lR,lW,lX = select.select([],[self.iFH,],[],self.iTimeout)
    if not lW:
        return False
    return True
I have a more complete implementation of the class that I am willing to share, but I think the protocol I abstract with it would probably be useless to you. How you would use this is: create an instance in your server and in your console application using the same path, then use base_read and base_write to do the dirty work. All the other stuff here is to avoid weird race conditions.
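By way of illustration, a hypothetical session (assuming the methods above live on a class named Pipe; note that ready_for_reading() and ready_for_writing() rely on a self.iTimeout attribute that the constructor shown here never sets):

# both processes construct a Pipe with the same (hypothetical) path
pipe = Pipe('/tmp/my_app_fifo')

# in the server process:
pipe.base_write('foo'.encode('utf-8'))

# in the console process:
print(pipe.base_read(3).decode('utf-8'))  # 'foo'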
Hope this helps.

fcntl.flock - how to implement a timeout?

I am using python 2.7
I want to create a wrapper function around fcntl.flock() that will time out after a set interval:
wrapper_function(timeout):
I've tried calling it on another thread and using thread.join(timeout), but it seems that fcntl.flock() continues blocking:
def GetLock(self, timeout):
    """Returns true if lock is acquired, false if lock is already in use"""
    self.__lock_file = open('proc_lock', 'w')

    def GetLockOrTimeOut():
        print 'ProcessLock: Acquiring Lock'
        fcntl.flock(self.__lock_file.fileno(), fcntl.LOCK_EX)
        print 'ProcessLock: Lock Acquired'

    thread = threading.Thread(target=GetLockOrTimeOut)
    thread.start()
    thread.join(timeout)
    if thread.isAlive():
        print 'GetLock timed out'
        return False
    else:
        return True
I've looked into solutions for terminating threads; the most popular seems to be sub-classing threading.Thread and adding a feature to raise an exception in the thread. However, I came across a link that says this method will not work with native calls, and I am pretty sure fcntl.flock() is calling a native function. Suggestions?
Context: I am using a file-lock to create a single instance application but I don't want a second instance of the application to sit around and hang until the first instance terminates.
Timeouts for system calls are done with signals. Most blocking system calls return with EINTR when a signal happens, so you can use alarm to implement timeouts.
Here's a context manager that works with most system calls, causing IOError to be raised from a blocking system call if it takes too long.
import signal, errno
from contextlib import contextmanager
import fcntl

@contextmanager
def timeout(seconds):
    def timeout_handler(signum, frame):
        pass

    original_handler = signal.signal(signal.SIGALRM, timeout_handler)

    try:
        signal.alarm(seconds)
        yield
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, original_handler)

with timeout(1):
    f = open("test.lck", "w")
    try:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    except IOError, e:
        if e.errno != errno.EINTR:
            raise e
        print "Lock timed out"
I'm sure there are several ways, but how about using a non-blocking lock? After some n attempts, give up and exit?
To use non-blocking lock, include the fcntl.LOCK_NB flag, as in:
fcntl.flock(self.__lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
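A minimal sketch of that retry idea (an illustration, under the assumption that a held lock raises IOError/OSError with errno EACCES or EAGAIN):

import errno
import fcntl
import time

def get_lock_with_retries(lock_file, max_attempts=10, delay=0.5):
    for _ in range(max_attempts):
        try:
            fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True  # lock acquired
        except IOError as e:
            # EACCES or EAGAIN means another process holds the lock
            if e.errno not in (errno.EACCES, errno.EAGAIN):
                raise
            time.sleep(delay)
    return False  # gave up after max_attempts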
For Python 3.5+, Glenn Maynard's solution no longer works because of PEP-475. This is a modified version:
import signal, errno
from contextlib import contextmanager
import fcntl

@contextmanager
def timeout(seconds):
    def timeout_handler(signum, frame):
        # Now that flock retries automatically when interrupted, we need
        # an exception to stop it.
        # This exception will propagate on the main thread,
        # so make sure you're calling flock there.
        raise InterruptedError

    original_handler = signal.signal(signal.SIGALRM, timeout_handler)

    try:
        signal.alarm(seconds)
        yield
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, original_handler)

with timeout(1):
    f = open("test.lck", "w")
    try:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    except InterruptedError:
        # Catch the exception raised by the handler.
        # If we weren't raising an exception, flock would automatically retry on signals.
        print("Lock timed out")
I'm a fan of shelling out to flock here, since attempting to do a blocking lock with a timeout requires changes to global state, which makes it harder to reason about your program, especially if threading is involved.
You could fork off a subprocess and implement the alarm as above, or you could just exec http://man7.org/linux/man-pages/man1/flock.1.html
import subprocess

def flock_with_timeout(fd, timeout, shared=True):
    rc = subprocess.call(['flock', '--shared' if shared else '--exclusive', '--timeout', str(timeout), str(fd)])
    if rc != 0:
        raise Exception('Failed to take lock')
If you have a new enough version of flock you can use -E to specify a different exit code for when the command otherwise succeeds but fails to take the lock within the timeout, so you can tell whether the command failed for some other reason instead.
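A sketch of what that might look like (exit code 3 here is an arbitrary choice passed via -E, whose long form is --conflict-exit-code):

import subprocess

def flock_with_timeout(fd, timeout, shared=True):
    rc = subprocess.call(['flock',
                          '--shared' if shared else '--exclusive',
                          '--timeout', str(timeout),
                          '-E', '3',  # exit code 3 will mean "timed out waiting for the lock"
                          str(fd)])
    if rc == 3:
        raise Exception('Timed out waiting for the lock')
    if rc != 0:
        raise Exception('flock failed for some other reason')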
As a complement to @Richard Maw's answer above https://stackoverflow.com/a/32580233/17454091 (I don't have enough reputation to post a comment):
In Python 3.2 and newer, for fds to be available in sub-processes one must also provide the pass_fds argument.
The complete solution ends up as:
import subprocess

def flock_with_timeout(fd, timeout, shared=True):
    rc = subprocess.call(['flock',
                          '--shared' if shared else '--exclusive',
                          '--timeout', str(timeout),
                          str(fd)],
                         pass_fds=[fd])
    if rc != 0:
        raise Exception('Failed to take lock')

In Python try until no error

I have a piece of code in Python that seems to cause an error probabilistically because it is accessing a server and sometimes that server has a 500 internal server error. I want to keep trying until I do not get the error. My solution was:
while True:
    try:
        # code with possible error
    except:
        continue
    else:
        # the rest of the code
        break
This seems like a hack to me. Is there a more Pythonic way to do this?
It won't get much cleaner; this is not a very clean thing to do. At best (which would be more readable anyway, since the condition for the break is up there with the while), you could create a variable result = None and loop while it is None. You should also adjust the variables, and you can replace continue with the semantically perhaps correct pass (you don't care if an error occurs, you just want to ignore it) and drop the break; this also gets the rest of the code, which only executes once, out of the loop. Also note that bare except: clauses are evil, for reasons given in the documentation.
Example incorporating all of the above:
result = None
while result is None:
    try:
        # connect
        result = get_data(...)
    except:
        pass
# other code that uses result but is not involved in getting it
Here is one that hard-fails after 4 attempts, waiting 2 seconds between attempts. Change it as you wish to get what you want from this one:
from time import sleep

for x in range(0, 4):  # try 4 times
    try:
        # msg.send()
        # put your logic here
        str_error = None
    except Exception as e:
        str_error = str(e)  # the "as" name is unbound after the except block in Python 3
    if str_error:
        sleep(2)  # wait for 2 seconds before trying to fetch the data again
    else:
        break
Here is an example with backoff:
from time import sleep

sleep_time = 2
num_retries = 4
for x in range(0, num_retries):
    try:
        # put your logic here
        str_error = None
    except Exception as e:
        str_error = str(e)

    if str_error:
        sleep(sleep_time)  # wait before trying to fetch the data again
        sleep_time *= 2  # Implement your backoff algorithm here i.e. exponential backoff
    else:
        break
Maybe something like this:
connected = False
while not connected:
    try:
        try_connect()
        connected = True
    except ...:
        pass
When retrying due to error, you should always:
implement a retry limit, or you may get stuck in an infinite loop
implement a delay, or you'll hammer resources too hard, such as your CPU or the already distressed remote server
A simple generic way to solve this problem while covering those concerns would be to use the backoff library. A basic example:
import backoff

@backoff.on_exception(
    backoff.expo,
    MyException,
    max_tries=5
)
def make_request(self, data):
    # do the request
This code wraps make_request with a decorator which implements the retry logic. We retry whenever our specific error MyException occurs, with a limit of 5 retries. Exponential backoff is a good idea in this context to help minimize the additional burden our retries place on the remote server.
The iter_except recipe from itertools encapsulates this idea of "calling a function repeatedly until an exception is raised". It is similar to the accepted answer, but the recipe gives an iterator instead.
From the recipes:
def iter_except(func, exception, first=None):
    """ Call a function repeatedly until an exception is raised."""
    try:
        if first is not None:
            yield first()  # For database APIs needing an initial cast to db.first()
        while True:
            yield func()
    except exception:
        pass
You can certainly implement the latter code directly. For convenience, I use a separate library, more_itertools, that implements this recipe for us (optional).
Code
import more_itertools as mit
list(mit.iter_except([0, 1, 2].pop, IndexError))
# [2, 1, 0]
Details
Here the pop method (or given function) is called for every iteration of the list object until an IndexError is raised.
For your case, given some connect_function and expected error, you can make an iterator that calls the function repeatedly until an exception is raised, e.g.
mit.iter_except(connect_function, ConnectionError)
At this point, treat it as any other iterator by looping over it or calling next().
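For instance, a hedged sketch where connect_function, ConnectionError and process stand in for your real callable, exception and consumer:

import more_itertools as mit

# yields each successful result until connect_function raises ConnectionError
for result in mit.iter_except(connect_function, ConnectionError):
    process(result)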
Here's a utility function that I wrote to wrap the retry-until-success into a neater package. It uses the same basic structure, but prevents repetition. It could be modified to catch and rethrow the exception on the final try relatively easily.
from time import sleep

def try_until(func, max_tries, sleep_time):
    for _ in range(0, max_tries):
        try:
            return func()
        except:
            sleep(sleep_time)
    raise WellNamedException()
    # could be 'return sensibleDefaultValue'
It can then be called like this:
result = try_until(my_function, 100, 1000)
If you need to pass arguments to my_function, you can either do this by having try_until forward the arguments, or by wrapping it in a no argument lambda:
result = try_until(lambda : my_function(x,y,z), 100, 1000)
Maybe a decorator-based approach?
You can pass as decorator arguments a list of exceptions on which we want to retry and/or a number of tries.
def retry(exceptions=None, tries=None):
    if exceptions:
        exceptions = tuple(exceptions)

    def wrapper(fun):
        def retry_calls(*args, **kwargs):
            if tries:
                for _ in xrange(tries):
                    try:
                        fun(*args, **kwargs)
                    except exceptions:
                        pass
                    else:
                        break
            else:
                while True:
                    try:
                        fun(*args, **kwargs)
                    except exceptions:
                        pass
                    else:
                        break
        return retry_calls
    return wrapper
from random import randint

@retry([NameError, ValueError])
def foo():
    if randint(0, 1):
        raise NameError('FAIL!')
    print 'Success'

@retry([ValueError], 2)
def bar():
    if randint(0, 1):
        raise ValueError('FAIL!')
    print 'Success'

@retry([ValueError], 2)
def baz():
    while True:
        raise ValueError('FAIL!')

foo()
bar()
baz()
Of course the 'try' part should be moved to another function, because we use it in both loops, but it's just an example ;)
Like most of the others, I'd recommend trying a finite number of times and sleeping between attempts. This way, you don't find yourself in an infinite loop in case something were to actually happen to the remote server.
I'd also recommend continuing only when you get the specific exception you're expecting. This way, you can still handle exceptions you might not expect.
from urllib.error import HTTPError
import traceback
from time import sleep

attempts = 10
while attempts > 0:
    try:
        # code with possible error
    except HTTPError:
        attempts -= 1
        sleep(1)
        continue
    except:
        print(traceback.format_exc())
    # the rest of the code
    break
Also, you don't need an else block. Because of the continue in the except block, you skip the rest of the loop until the try block works, the while condition gets satisfied, or an exception other than HTTPError comes up.
What about the retrying library on PyPI?
I have been using it for a while and it does exactly what I want and more (retry on error, retry when None, retry with timeout). Below is an example from their website:
import random
from retrying import retry

@retry
def do_something_unreliable():
    if random.randint(0, 10) > 1:
        raise IOError("Broken sauce, everything is hosed!!!111one")
    else:
        return "Awesome sauce!"

print do_something_unreliable()
e = ''
while e == '':
    try:
        response = ur.urlopen('https://raw.githubusercontent.com/MrMe42/Joe-Bot-Home-Assistant/mac/Joe.py')
        e = ' '
    except:
        print('Connection refused. Retrying...')
        time.sleep(1)
This should work. It sets e to '' and the while loop checks whether it is still ''. If an error is caught by the try statement, it prints that the connection was refused, waits 1 second, and then starts over. It will keep going until there is no error in the try block, which then sets e to ' ', killing the while loop.
I'm attempting this now; this is what I came up with:
placeholder = 1
while placeholder is not None:
    try:
        # Code
        placeholder = None
    except Exception as e:
        print(str(datetime.time(datetime.now()))[:8] + str(e))  # To log the errors
        placeholder = e
        time.sleep(0.5)
        continue
Here is a short piece of code I use to capture the error as a string. Will retry till it succeeds. This catches all exceptions but you can change this as you wish.
start = 0
str_error = "Not executed yet."
while str_error:
    try:
        # replace the line below with your logic, i.e. time out, max attempts
        start = raw_input("enter a number, 0 for fail, last was {0}: ".format(start))
        new_val = 5/int(start)
        str_error = None
    except Exception as str_error:
        pass
WARNING: This code will be stuck in a forever loop until no exception occurs. This is just a simple example and MIGHT require you to break out of the loop sooner or sleep between retries.
