Pythonic way to handle errors and exceptions

Some time ago I wrote a piece of code, a Flask route to log out users from a web application I was working on, that looked like this:
@app.route('/logout')
@login_required
def logout():
    # let's get the user cookie, and if it exists, delete it
    cookie = request.cookies.get('app_login')
    response = make_response(redirect(url_for('login')))
    if cookie:
        riak_bucket = riak_connect('sessions')
        riak_bucket.get(cookie).delete()
        response.delete_cookie('app_login', None)
        return response
    return response
It did its job and was certainly working, but now I am getting into making the app more robust by adding proper error handling, something I haven't done on a larger scale anywhere in my code before. So I stumbled on this route function and started writing a new version of it, when I realised I don't know how to do it 'the right way'. Here is what I came up with:
@app.route('/logout')
@login_required
def logout():
    # why don't we name variables after what they specifically are?
    login_redirect = make_response(redirect(url_for('login')))
    try:
        cookie = request.cookies.get('app_login')
    except:
        return login_redirect
    # if we are here, the above try/except went well, right?
    try:
        # perhaps sessions_bucket should really be bucket_object?
        # is it valid to chain statements in one try like that, or should they
        # be tried separately, one by one?
        sessions_bucket = riak_connect('sessions')
        sessions_bucket.get(cookie).delete()
        login_redirect.delete_cookie('app_login', None)
    except:
        return login_redirect
    # return the redirect by default, just because it seems more secure
    return login_redirect
It also does its job, but it still doesn't look 'right' to me. So, the questions are, to all of you with more experience in writing really Pythonic code, given that I would love the code to handle all errors nicely, be readable to others, and do its job fast and well (in this particular case but also in the rest of a rather large codebase):
how do you name your variables, extra specific or general: sessions_bucket vs riak_bucket vs bucket_object?
how do you handle errors: with try/except blocks one after another, by nesting one try/except in another, or in some other way?
is it ok to do more than one thing in one try/except, or not?
and perhaps anything else that comes to your mind about the above code examples
Thanks in advance!

I don't know the exact riak Python API, so I don't know which exceptions are thrown. On the other hand, how should the web app behave on the different error conditions? Does the user have to be informed?
Variable names: I prefer generic. If you change the implementation (e.g. the session store), you don't have to change the variable names.
Exceptions: Depends on the desired behavior. If you want to recover from errors, try/except one after another. (Generally, linear code is simpler.) If you don't recover from errors, I find one bigger try clause with several exception clauses very acceptable.
For me it's ok to do several things in one try/except. If there are too many try/except clauses, the code gets less readable.
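For illustration, a single try with several except clauses might look roughly like this; the exception types are placeholders, since I don't know what the riak client actually raises, and log is a module-level logger as in the suggestion further down:
@app.route('/logout')
@login_required
def logout():
    login_redirect = make_response(redirect(url_for('login')))
    cookie = request.cookies.get('app_login')
    try:
        sessions_bucket = riak_connect('sessions')
        sessions_bucket.get(cookie).delete()
        login_redirect.delete_cookie('app_login', None)
    except KeyError:
        # placeholder: the session is already gone, nothing to clean up
        pass
    except IOError:
        # placeholder: the session store is unreachable; log it and fall through
        log.exception("could not delete session during logout")
    return login_redirect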
More things: logging. logging.exception will log the traceback, so you know exactly where the error appeared.
A suggestion:
import logging
log = logging.getLogger(__name__)

@app.route('/logout')
@login_required
def logout():
    login_redirect = make_response(redirect(url_for('login')))
    try:
        sessionid = request.cookies.get('app_login', None)
    except AttributeError:
        sessionid = None
        log.error("Improperly configured")
    if sessionid:
        try:
            session_store = riak_connect('sessions')
            session = session_store.get(sessionid)
            if session:
                session.delete()
            login_redirect.delete_cookie('app_login', None)
        except IOError:  # what errors appear when connect fails?
            log.exception("during logout")
    return login_redirect

Related

Pygame return variable from ThreadPool, local variable referenced before assignment with try, except

I'm trying to receive some data from a thread, but every time it goes through the exception and never gets inside the try; I don't know what is wrong. I did it once before, and I've searched everywhere. If someone could please help.
def receving(name, sock):
    while run:
        try:
            tLock.acquire()
            data = sock.recvfrom(1024)
        except:
            pass
        finally:
            tLock.release()
        return data

host = socket.gethostbyname(socket.gethostname())
server = (host, 5000)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind((host, port))
s.setblocking(0)

pool = ThreadPool(processes=1)
async_result = pool.apply_async(receving, ('arg qualquer', s))
return_val = async_result.get()
print(return_val)

run = True
while run:
    return_val = async_result.get()
    print(return_val)
The error message is this:
return data
UnboundLocalError: local variable 'data' referenced before assignment
I've already tried to initialize the variable before the try:, but the output is the same; it skips the try: just the same. I also tried to make it global, but with no success.
The exception you describe is very straightforward. It's all in the function at the top of your code:
def receving(name, sock):
    while run:
        try:
            tLock.acquire()
            data = sock.recvfrom(1024)
        except:
            pass
        finally:
            tLock.release()
        return data
If the code in the try block causes an exception, the assignment to data won't have run. So when you try to return data later on, the local variable has no value and so it doesn't work.
It's not hard to fix that specific issue. Try putting data = None or something similar in the except clause, instead of just pass. That way, data will be defined (albeit perhaps with a value that's not very useful) regardless of whether there was an exception or not.
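For example, a minimal sketch of that fix, with the rest of the function left as in the question:
def receving(name, sock):
    while run:
        try:
            tLock.acquire()
            data = sock.recvfrom(1024)
        except:
            # give data a fallback value so the return below never sees an undefined name
            data = None
        finally:
            tLock.release()
        return data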
You should however consider tightening up the except clause so that you're not ignoring all exceptions. That's often a bad idea, since it can cause the program to run even with really broken code in it. For instance, you've never defined tLock in the code you've shown, and the try would catch the NameError caused by trying to acquire it (you'd still get an exception though when the finally clause tries to release it, so I'm guessing this isn't a real issue in your code). Normally you should specify the exception types you want to catch. I'm not exactly sure which ones would be normal for your current code, so I'll leave picking them to you.
You might also consider not having an except clause at all, if there's no reasonable result to return. That way, the exception would "bubble out" of your function and it would be the caller's responsibility to deal with it. For some kinds of exceptions (e.g. ones caused by programming bugs, not expected situations), this is usually the best way to go.
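A minimal sketch of that approach, assuming tLock is a threading.Lock; using the lock as a context manager keeps it released even when recvfrom raises:
def receving(name, sock):
    while run:
        with tLock:  # acquired here, released automatically, even on error
            data = sock.recvfrom(1024)
        return data  # any socket error simply propagates to the caller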
There's a lot of other weird stuff in your code though, so I'd expect you'll run into other issues after fixing the first one. For instance, you always return from the first iteration of your while loop (assuming I fixed your messed-up indentation correctly), so there's not really much point in having it at all. If the return data line is actually indented less (i.e. at the same level as while run), then the loop will make the code inside run more than once, but it will never stop running, since nothing inside it will ever change the value of the global run variable.
There may be other issues too, but it's not entirely obvious to me what you're trying to do, so I can't help with those. Multi-threaded code and network programming can be very tough to get right even for experienced programmers, so if you're new it might be a better idea to start with something simpler first.

In Flask, should I manually catch all possible errors in views?

I'm new to Flask. When writing a view, I wonder whether all errors should be caught. If I do so, most of the view code would have to be wrapped in try... except, and I don't think that's graceful.
For example:
@app.route('/')
def index():
    try:
        API.do()
    except:
        abort(503)
Should I code like this? If not, will the service crash (uwsgi + lnmp)?
You only catch what you can handle. The word "handle" means "do something useful with" not merely "print a message and die". The print-and-die is already handled by the exception mechanism and probably does it better than you will.
For example, this is not handling an exception usefully:
denominator = 0
try:
    y = x / denominator
except ZeroDivisionError:
    abort(503)
There is nothing useful you can do, and the abort is redundant because that's what uncaught exceptions will cause to happen anyway. Here is an example of useful handling:
try:
    config_file = open('private_config')
except IOError:
    config_file = open('default_config_that_should_always_be_there')
but note that if the second open fails, there is nothing useful to do so it will travel up the call stack and possibly halt the program. What you should never do is have a bare except: because it hides information about what faulted where. This will result in much head scratching when you get a defect report of "all it said was 503" and you have no idea what went wrong in API.do().
Try / except blocks that can't do any useful handling clutter up the code and visually bury the main flow of execution. Languages without exceptions force you to check every call for an error return if only to generate an error return yourself. Exceptions exist in part to get rid of that code noise.
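If the view really must turn a failure in API.do() into a 503, a sketch that at least keeps the error information might look like this; APIError is a placeholder for whatever specific exception API.do() can actually raise:
import logging

from flask import abort

log = logging.getLogger(__name__)

@app.route('/')
def index():
    try:
        API.do()
    except APIError:  # placeholder: catch only the specific failure you expect
        log.exception("API.do() failed in index()")
        abort(503)
    return 'ok'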

Django: General help in testing

I know this is more of a learning thing than a programming problem, but I still need to ask it. Please don't downvote it; I wouldn't have asked it here if I knew of a more appropriate place. I have a view as follows:
def takedown(request, aid):
    approveobj = get_object_or_404(approve, pk=aid)
    # fetching mapping
    map = mapping.objects.get(appval=approveobj)
    try:
        # deleting option from main database
        map.optval.delete()
        # changing the status of the appval
        map.appval.status = 'Pending'
        map.appval.save()
        # finally deleting the map
        map.delete()
    except:
        print("Error in taking down the entry")
    redirect_url = "/wars/configure/" + str(map.appval.warval.id) + "/"
    return HttpResponseRedirect(redirect_url)
I want to design some tests for the above view. At present I'm checking whether or not it redirects to the appropriate URL. What else can I test? I need to test it thoroughly.
Looking at your view, I can see three other possible tests:
Test that the view returns status code 404 for an aid that does not exist
Check that the map object exists in the database. Fetch the view in your test, then check that the map object has been deleted as you expected.
Test that your view works as expected when there is an exception in the try/except block. It's not clear what you're expecting to go wrong here. Note that because you only print the error, nothing will be displayed to the user, so it's tricky to test this. A rough sketch of the first two tests is shown below.
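A rough sketch using Django's test client; the URL name 'takedown', the import path, and the test-data setup are assumptions that depend on your urls.py and models:
from django.test import TestCase
from django.urls import reverse

from myapp.models import approve, mapping  # adjust to your actual app


class TakedownViewTests(TestCase):
    def setUp(self):
        # Placeholder setup: create an approve object and its mapping;
        # the exact fields depend on your models.
        self.approveobj = approve.objects.create(status='Approved')
        self.map = mapping.objects.create(appval=self.approveobj)

    def test_unknown_aid_returns_404(self):
        response = self.client.get(reverse('takedown', args=[999999]))
        self.assertEqual(response.status_code, 404)

    def test_mapping_deleted_and_status_reset(self):
        response = self.client.get(reverse('takedown', args=[self.approveobj.pk]))
        self.assertEqual(response.status_code, 302)
        self.assertFalse(mapping.objects.filter(pk=self.map.pk).exists())
        self.approveobj.refresh_from_db()
        self.assertEqual(self.approveobj.status, 'Pending')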

Recovering from ECONNRESET in Python/Mechanize

I've got a large bulk-downloading application written in Python/Mechanize, aiming to download something like 20,000 files. Clearly, any downloader that big is occasionally going to run into some ECONNRESET errors. Now, I know how to handle each of these individually, but there are two problems with that:
I'd really rather not wrap every single outbound web call in a try/catch block.
Even if I were to do so, there's trouble with knowing how to handle the errors once the exception has been thrown. If the code is just
data = browser.response().read()
then I know precisely how to deal with it, namely:
data = None
while (data == None):
    try:
        data = browser.response().read()
    except IOError as e:
        if e.args[1].args[0].errno != errno.ECONNRESET:
            raise
        data = None
but if it's just a random instance of
browser.follow_link(link)
then how do I know what Mechanize's internal state looks like if an ECONNRESET is thrown somewhere in here? For example, do I need to call browser.back() before I try the code again? What's the proper way to recover from that kind of error?
EDIT: The solution in the accepted answer certainly works, and in my case it turned out to be not so hard to implement. I'm still academically interested, however, in whether there's an error handling mechanism that could result in quicker error catching.
Perhaps place the try..except block higher up in the chain of command:
import collections

def download_file(url):
    # Bundle together the bunch of browser calls necessary to download one file.
    browser.follow_link(...)
    ...
    response = browser.response()
    data = response.read()

urls = collections.deque(urls)
while urls:
    url = urls.popleft()
    try:
        download_file(url)
    except IOError as err:
        if err.args[1].args[0].errno != errno.ECONNRESET:
            raise
        else:
            # if ECONNRESET error, add the url back to urls to try again later
            urls.append(url)

Django and Sqlite Concurrency issue

I've done a bit of reading about the concurrency issues with SQLite, but I don't see how they'd apply to Django, since it's inherently single-threaded. I'm not using any multiprocess modules either, and I have absolutely no experience with concurrent programming, so if someone can identify WHY the following code is causing an OperationalError: 'database is locked', I'd be grateful.
views.py
def screening(request, ovramt=None):
    errors = []
    if request.method == "POST":
        form = ScreeningForm(request.POST)
        if form.is_valid():
            print "Woo valid!!"
            return HttpResponse()
    else:  # GET
        if ovramt is None:
            o = Ovramt.objects.select_related(depth=1).latest("date_completed")
            print "found?"
            print o.id
        else:
            try:
                o = Ovramt.objects.select_related(depth=1).get(id=ovramt)
            except:
                errors.append("OVRAMT NOT FOUND")
        if o.residents.count() <= 0:
            o.add_active_residents()
        residents = list(o.residents)
models.py
def add_active_residents(self):
    ssa_res = SSA_Resident.objects.select_related(depth=1).filter(ssa=self.ssa, active=True)
    for r in ssa_res:
        self.residents.add(r.resident)  # Fails Here
    self.save()
The add_active_residents method works fine until it is called from the views module. Is there an open connection to the database in the view which prevents writing from the model? Does someone have an explanation for why this code errors?
In the following method:
def add_active_residents(self):
    ssa_res = SSA_Resident.objects.select_related(depth=1).filter(ssa=self.ssa, active=True)
    for r in ssa_res:
        self.residents.add(r.resident)  # Fails Here
    self.save()
Why is there a select_related? You only really need the FKs of the ssa_res items, so why do additional queries for the related objects? A sketch of what I mean follows.
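As a sketch only (the resident_id column name is an assumption based on your field names):
def add_active_residents(self):
    # fetch only the FK values actually needed, without pulling in related objects
    resident_ids = SSA_Resident.objects.filter(
        ssa=self.ssa, active=True
    ).values_list('resident_id', flat=True)
    self.residents.add(*resident_ids)  # a to-many add() also accepts primary keys
    self.save()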
Are you using Python 2.6?
If so, this is (apparently) a known issue that can be mitigated by adding:
DATABASE_OPTIONS = {'timeout': 30}
to your settings.py
See http://code.djangoproject.com/ticket/9409
My understanding is that only write operations will result in a db-locked condition.
http://www.sqlite.org/lockingv3.html
It's hard to say what the problem is without knowing how Django handles SQLite internally.
Speaking from using sqlite with standard cgi, I've noticed that in some cases it can take a 'long' time to release the lock. You may want to increase the timeout value mentioned by Matthew Christensen.
Sounds like you are actually running a multithreaded application, despite what you say. I am a bit clueless about Django, but I would assume that even though it might be single-threaded, whatever debugging server or production server you run your application in won't be 'inherently single-threaded'.
