I want to catch requests.exceptions.ConnectionError, which is being raised by a library that I am importing. I am not importing requests myself. How can I catch this specific exception?
If you need to catch a ConnectionError, you can import the error and catch it like you would any other error:
from requests.exceptions import ConnectionError

try:
    code_that_raises_connection_error()
except ConnectionError:
    handle()
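Since requests.exceptions.RequestException (and therefore ConnectionError) derives from the builtin IOError/OSError, you could also catch it without importing requests at all. A minimal sketch, assuming the library you import uses requests under the hood (note this is broader than catching the specific exception):

try:
    code_that_raises_connection_error()
except OSError:
    # catches requests' ConnectionError via its OSError base class,
    # but also any other OSError - use with care
    handle()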
I am writing a Python program to show whether a website is up and running or not. Here is my code so far:
import urllib.request

weburl = str(input('Enter the URL: '))
#print(urllib.request.urlopen("https://"+ weburl).getcode())
try:
    webcode = urllib.request.urlopen("https://"+ weburl).getcode()
    if webcode == 200:
        print('Website is working')
except:
    print("Website is down or doesn't exist")
However, if the website is down or it doesn't exist, the code raises a URLError in both scenarios. This is the error when the server is down:
Exception has occurred: URLError
<urlopen error [WinError 10061] No connection could be made because the target machine actively refused it>
File "C:\Programming\Python\test.py", line 4, in <module>
print(urllib.request.urlopen("https://"+ weburl).getcode())
and here is the exception when the URL doesn't exist:
Exception has occurred: URLError
<urlopen error [Errno 11001] getaddrinfo failed>
File "C:\Programming\Python\test.py", line 4, in <module>
print(urllib.request.urlopen("https://"+ weburl).getcode())
How can I differentiate between the server being down and the URL never existing in the first place? I have thought about timing the request in the except: branch, since it returns considerably faster when the website doesn't exist at all, but I'm not sure this would work reliably given that people have different internet speeds.
Instead of catching all possible exceptions when calling urlopen, you should catch urllib.error.HTTPError, which can tell you the status code of the response, as follows:
import urllib.request
from urllib.error import HTTPError, URLError

weburl = input('Enter the URL: ')
try:
    urllib.request.urlopen(weburl)
except HTTPError as error:
    if error.code == 404:
        print("The server exists but the endpoint does not!")
    else:
        print("The server exists but there was an Internal Error!")
except URLError as error:
    print("The server does not exist!")
Of course, apart from HTTPError, other exceptions such as ValueError or URLError can be raised, so you may want to catch them as well.
EDIT: I did not explain it well, sorry. The URLError is also raised when a server does not exist, so you should catch it too. I thought you only wanted to check whether a concrete endpoint of an existing server existed, but if you also want to check whether the server itself exists, you should catch the URLError exception as well.
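If you want to tell a nonexistent domain apart from a server that is down, one option is to inspect the reason attribute of the URLError, which holds the underlying socket error. A minimal sketch, assuming Python 3 (the socket.gaierror check is the key idea):

import socket
import urllib.request
from urllib.error import HTTPError, URLError

weburl = input('Enter the URL: ')
try:
    urllib.request.urlopen("https://" + weburl)
    print('Website is working')
except HTTPError as error:
    # the server answered, just not with a success status
    print("Server reachable, but it returned status", error.code)
except URLError as error:
    if isinstance(error.reason, socket.gaierror):
        # DNS lookup failed: the name probably doesn't exist
        print("The URL doesn't seem to exist (DNS lookup failed)")
    else:
        # the name resolved, but connecting failed: server likely down
        print("The website appears to be down")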
I'm using the requests library for Python and I would like the code to continue whenever this ('Connection aborted.', gaierror(-2, 'Name or service not known')) error occurs. For some reason my try/except is being ignored and the error causes the application to exit anyway.
My code:
try:
    self.doSomething()
except requests.exceptions.ConnectionError as e:
    self.logger.error("A connection error occurred.")
    self.logger.error(str(e) + "\n")
except Exception as e:
    self.logger.error("An error occurred.")
    self.logger.error(str(e) + "\n")
If the error is a socket.gaierror, which is a subclass of OSError which is a subclass of Exception, then the except Exception clause should be executed.
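You can verify that hierarchy directly, which is a quick way to convince yourself the handler should fire:

import socket

# socket.gaierror -> OSError -> Exception, so 'except Exception' catches it
print(issubclass(socket.gaierror, OSError))   # True
print(issubclass(OSError, Exception))         # True

If the except clause still appears to be skipped, the exception is most likely being raised outside the try block (before it, or in another thread), so check where the traceback actually points.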
Try/except is a powerful tool in Python; however, if not used carefully, it can easily hide errors. Sometimes the best approach is to remove the try/except and read the full traceback. Another approach that works well is to log the full traceback; unfortunately for Python newcomers, examples of this done wrong are ubiquitous. To get the full traceback, either remove the try/except or use:
import traceback

try:
    risky_call()
except Exception:
    traceback.print_exc()  # print_exc() writes the traceback itself; don't wrap it in print()
Or, if you have a logger configured
import logging

try:
    risky_call()
except Exception:
    logging.exception('risky_call failed')  # logs the message plus the full traceback
I am quite new to dealing with exceptions in Python.
In particular, I would like to handle one exception when:
URLError: <urlopen error [Errno 11001] getaddrinfo failed>
and another one when:
HTTPError: HTTP Error 404: Not Found
If I am right, in both cases it shall be a:
except IOError:
however, I would like to run one piece of code when one error arises and a different one when the other error arises.
How could I differentiate these two exceptions?
Thank you
You can set several exception handlers, one for each type of exception you want to handle, like this:
import urllib2

(...)

try:
    (... your code ...)
except urllib2.HTTPError as e:
    (... handle HTTPError ...)
except urllib2.URLError as e:
    (... handle URLError ...)
Note that this will handle ONLY HTTPError and URLError; any other kind of exception won't be handled. You can add a final except Exception as e: to handle anything else, although this is discouraged, as correctly pointed out in the comments.
Obviously, replace everything that's in parentheses () with your code.
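For reference, in Python 3 urllib2 was split into urllib.request and urllib.error, so the equivalent pattern would look like this (a sketch, with example.com standing in for your URL):

import urllib.request
from urllib.error import HTTPError, URLError

try:
    urllib.request.urlopen('http://example.com/some-endpoint')
except HTTPError as e:
    # the server responded, but with an error status (e.g. 404)
    print('HTTP error:', e.code)
except URLError as e:
    # the connection itself failed (DNS failure, refused connection, ...)
    print('URL error:', e.reason)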
For some reason, when I try to get and process the following URL with python-requests, I receive an error causing my program to fail. Other similar URLs seem to work fine.
import requests
test = requests.get('http://t.co/Ilvvq1cKjK')
print test.url, test.status_code
What could be causing this URL to fail instead of just producing a 404 status code?
The requests library has an exception hierarchy, as listed here.
So wrap your GET request in a try/except block:
import requests

try:
    test = requests.get('http://t.co/Ilvvq1cKjK')
    print test.url, test.status_code
except requests.exceptions.ConnectionError as e:
    print e.request.url, "*connection failed*"
That way you end up with behaviour similar to what you have now (you still get the redirected URL), but you handle failing to connect instead of printing a status code.
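If you'd rather handle every failure mode from requests in one place, RequestException is the base class of the library's exception hierarchy. A minimal sketch in Python 3 syntax, reusing the URL from the question:

import requests

try:
    test = requests.get('http://t.co/Ilvvq1cKjK')
    print(test.url, test.status_code)
except requests.exceptions.RequestException as e:
    # ConnectionError, Timeout, TooManyRedirects, ... all derive from this
    print('request failed:', e)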
Using a Nokia N900, I have a urllib.urlopen statement that I want to be skipped if the server is offline (if it fails to connect, proceed to the next line of code).
How should / could this be done in Python?
According to the urllib documentation, it will raise IOError if the connection can't be made.
try:
    urllib.urlopen(url)
except IOError:
    # exception handling goes here if you want it
    pass
else:
    DoSomethingUseful()
Edit: As unutbu pointed out, urllib2 is more flexible. The Python documentation has a good tutorial on how to use it.
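A urllib2 version of the same pattern might look like this (a sketch; urllib2.URLError covers connection-level failures, and url and DoSomethingUseful are placeholders from the snippet above):

import urllib2

try:
    urllib2.urlopen(url)
except urllib2.URLError:
    pass  # server offline or unreachable - skip and carry on
else:
    DoSomethingUseful()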
try:
    urllib.urlopen("http://fgsfds.fgsfds")
except IOError:
    pass
If you are using Python 3, urllib.request.urlopen has a timeout parameter. You could use it like this:
import urllib.request as request

try:
    response = request.urlopen('http://google.com', timeout=0.001)
    print(response)
except request.URLError as err:
    print('got here')
    # urllib.error.URLError: <urlopen error timed out>
timeout is measured in seconds. The ultra-short value above is just to demonstrate that it works. In real life you'd probably want to set it to a larger value, of course.
urlopen also raises a urllib.error.URLError (which is also accessible as request.URLError) if the URL does not exist or if your network is down.
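Note that depending on where the timeout fires (during the connect versus while reading the response), a raw socket.timeout can surface instead of URLError, so it can be safest to catch both - a sketch, assuming Python 3:

import socket
import urllib.request as request
from urllib.error import URLError

try:
    response = request.urlopen('http://google.com', timeout=0.001)
    print(response.read())
except (URLError, socket.timeout) as err:
    # covers DNS failures, refused connections and timeouts
    print('request failed or timed out:', err)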
For Python 2.6+, equivalent code can be found here.