My question is simple, as the title suggests. Here is my current code:
try:
    response = requests.get(URL2)  # download the data behind the URL
    open(zipname, "wb").write(response.content)  # write the response content into a new file
    # extract zip file to specified location
    with ZipFile(zipname, 'r') as zip_file:
        zip_file.extractall(path=path)
    os.remove(zipname)  # removes the downloaded zip file
    print("it works")
except (requests.exceptions.ConnectionError, FileNotFoundError):
    print("finally the error")
    # retry the try part after some seconds
Now I want it to retry and run the whole block again after some time whenever the exception happens.
First of all (looking at the accepted answer), I wouldn't use recursion where it's not necessary, for a whole bunch of reasons, among them readability, maintainability, and the very name of this platform (Stack Overflow).
Then I would exempt doSomething() from catching the exception and embed the try/except block in a while loop, like so:
def doSomething():
    "do something here"

while True:
    try:
        doSomething()
        print("success")
        break
    except (requests.exceptions.ConnectionError, FileNotFoundError):
        print("error, trying again in 10s")
        time.sleep(10)
This does a better job of separating concerns: doSomething() just has to... do something, while error catching/logging can be handled outside.
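A possible extension of this loop (my addition, not part of the answer above) is to cap the number of attempts and back off between them. A minimal sketch, with `do_something` as a stand-in for the real download work:

```python
import time

def do_something():
    """Stand-in for the real work; replace with the download/extract logic."""
    return "done"

def retry(func, attempts=5, delay=1, exceptions=(Exception,)):
    """Call func until it succeeds, up to `attempts` tries, doubling the delay."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except exceptions:
            if attempt == attempts:
                raise  # out of attempts: let the last error propagate
            time.sleep(delay)
            delay *= 2  # exponential backoff: 1s, 2s, 4s, ...

result = retry(do_something)
```

The `exceptions` parameter lets the caller pass the same tuple used in the question, e.g. `(requests.exceptions.ConnectionError, FileNotFoundError)`.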
You can always make it a recursive function and import the time module to wait a number of seconds.
For instance:
import time

def doSomething():
    try:
        response = requests.get(URL2)  # download the data behind the URL
        open(zipname, "wb").write(response.content)  # write the response content into a new file
        # extract zip file to specified location
        with ZipFile(zipname, 'r') as zip_file:
            zip_file.extractall(path=path)
        os.remove(zipname)  # removes the downloaded zip file
        print("it works")
    except (requests.exceptions.ConnectionError, FileNotFoundError):
        print("finally the error")
        # retry the try part after some seconds
        time.sleep(1000)  # note: sleep() takes seconds, so 1000 waits over 16 minutes
        # Try again
        doSomething()
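One caveat worth noting about this recursive approach (my addition, not part of the answer above): every failed attempt adds a stack frame, so a long enough outage will eventually hit CPython's recursion limit. A sketch that simulates this with a stand-in function:

```python
import sys

def do_something(failures_left):
    """Simulates the recursive retry: each failed attempt adds a stack frame."""
    try:
        if failures_left > 0:
            raise ConnectionError("simulated network error")
        return "ok"
    except ConnectionError:
        return do_something(failures_left - 1)  # retry via recursion

print(sys.getrecursionlimit())  # CPython's default limit is usually 1000
```

With enough consecutive failures, the recursive call raises RecursionError instead of retrying, which is one reason the loop-based answers are preferable.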
If I understand correctly you can do the following:
from time import sleep

no_of_attempts = 5  # set number of attempts

for i in range(no_of_attempts):
    try:
        response = requests.get(URL2)  # download the data behind the URL
        open(zipname, "wb").write(response.content)  # write the response content into a new file
        # extract zip file to specified location
        with ZipFile(zipname, 'r') as zip_file:
            zip_file.extractall(path=path)
        os.remove(zipname)  # removes the downloaded zip file
        print("it works")
        break
    except (requests.exceptions.ConnectionError, FileNotFoundError):
        print("finally the error")
        sleep(3)
        continue
This way you can retry the "try" part as many times as you set no_of_attempts to be.
You could also use while True if you want it to keep trying until it succeeds and then break inside the try, but I would not recommend it.
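The for/else construct can make the "ran out of attempts" case explicit. A hedged sketch, using the built-in ConnectionError as a stand-in for the requests exception and a `download` callable in place of the inline code:

```python
def download_with_retries(download, attempts=5):
    """Run `download` up to `attempts` times; return whether it ever succeeded."""
    for i in range(attempts):
        try:
            download()
            break  # success: leave the loop early
        except (ConnectionError, FileNotFoundError):
            print("attempt %d failed" % (i + 1))
    else:
        # the else clause runs only if the loop never hit `break`
        return False
    return True
```

The else branch is where "all attempts failed" handling (logging, re-raising) naturally belongs.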
Suppose I want to introduce a try-except block while handling a txt file. Which of the two following ways of capturing the possible exception is correct?
try:
    h = open(filename)
except:
    h.close()
    print('Could not read file')
try:
    h = open(filename)
except:
    print('Could not read file')
In other words, should h.close() be called when the exception occurs, or not?
Secondly, suppose that you have the following code
try:
    h = open(filename)
    "code line here1"
    "code line here2"
except:
    h.close()
    print('Could not read file')
If an error occurs in "code line here1", should I call h.close() in the except block?
Is there a difference from the previous code?
You should use with; it will close the file appropriately:
with open(filename) as h:
    #
    # do whatever you need...
    #
    ...
# when you get here, the file will be closed automatically.
You can enclose that in a try/except block if needed. The file will always be properly closed:
try:
    with open(filename) as h:
        #
        # do whatever you need...
        #
        ...
except FileNotFoundError:
    print('file not found')
I'm trying to handle exceptions when reading files, but I have a problem. I'm new to Python, and I don't yet know how to catch an exception and still continue reading text from the remaining files I am accessing. This is my code:
import errno
import sys

class Read:
    # FIXME: make these 2 constants immutable
    ROUTE = "d:\Profiles\user\Desktop\\"
    EXT = ".txt"

    def setFileReaded(self, fileToRead):
        content = ""
        try:
            infile = open(self.ROUTE + fileToRead + self.EXT)
        except FileNotFoundError as error:
            if error.errno == errno.ENOENT:
                print("File not found, please check the name and try again")
            else:
                raise
            sys.exit()
        with infile:
            content = infile.read()
            infile.close()
        return content
And from another module I call it:
read = Read()
print(read.setFileReaded("verbs"))
print(read.setFileReaded("object"))
print(read.setFileReaded("sites"))
print(read.setFileReaded("texts"))
But it only prints this:
turn on
connect
plug
File not found, please check the name and try again
and does not continue with the next files. How can the program keep reading all of the files?
It's a little difficult to understand exactly what you're asking here, but I'll try and provide some pointers.
sys.exit() will terminate the Python script gracefully. In your code, this is called when the FileNotFoundError exception is caught. Nothing further will be run after this, because your script will terminate. So none of the other files will be read.
Another thing to point out is that you close the file after reading it, which is not needed when you open it like this:
with open('myfile.txt') as f:
    content = f.read()
The file will be closed automatically after the with block.
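Putting both points together, here is a sketch (my rewrite, not the original asker's code) of how the reading could continue past missing files instead of exiting; the file names are taken from the question:

```python
def read_file(path):
    """Return the file's contents, or None when it is missing (no sys.exit)."""
    try:
        with open(path) as infile:
            return infile.read()  # the with block closes the file for us
    except FileNotFoundError:
        print("File not found, please check the name and try again")
        return None

for name in ("verbs", "object", "sites", "texts"):
    content = read_file(name + ".txt")  # a missing file no longer stops the loop
```

Because the function returns None rather than calling sys.exit(), the loop reaches every file even when one of them is absent.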
For instance, I have a function like:
def example():
    fp = open('example.txt', 'w+')
    fp.write(str(1/0))
    fp.close()
Then it will throw an exception, because 1/0 raises a ZeroDivisionError. However, now I can neither remove example.txt nor modify it. And I have some important data in this Python session, so I can't simply kill Python and run it again.
How can I get the file closed when the function exits with an exception?
What should we do if we didn't place a try:... except:... block?
with open('example.txt', 'w+') as fp:
    try:
        fp.write(...)
    except ZeroDivisionError as e:
        print('there was an error: {}'.format(e))
Using the with context manager, any file opened by it will be closed automatically once the block is exited.
You can wrap that in a try/except to handle the error and close the file reader before the program ends.
def example():
    fp = open('example.txt', 'w+')
    try:
        fp.write(str(1/0))
    except ZeroDivisionError:
        fp.close()
    fp.close()
Edit: The answer by #IanAuld is better than mine. It would be best to accept that one.
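For completeness, a try/finally variant guarantees the close for any exception type, not just ZeroDivisionError. A small sketch (the `path` parameter is my addition for illustration):

```python
def example(path='example.txt'):
    fp = open(path, 'w+')
    try:
        fp.write(str(1 / 0))  # raises ZeroDivisionError
    finally:
        fp.close()  # runs whether or not the write raised
```

Unlike the except-based version, this does not swallow the error: the exception still propagates to the caller, but the file handle is released first.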
I want to save these email results to my results.txt file in the directory.
def parseAddress():
    try:
        website = urllib2.urlopen(getAddress())
        html = website.read()
        addys = re.findall('''[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?''', html, flags=re.IGNORECASE)
        print addys
    except urllib2.HTTPError, err:
        print "Cannot retrieve URL: HTTP Error Code: ", err.code
    except urllib2.URLError, err:
        print "Cannot retrieve URL: " + err.reason[1]

 # need to write the addys data to results.txt
 with open('results.txt', 'w') as f:
     result_line = f.writelines(addys)
Use return addys at the end of your function. print will only output to your screen.
In order to retrieve addys, you would need to call the function in your with statement or create a variable that contains the result of parseAddress().
You can save the memory that a variable would use by simply calling the function, like so:
with open('results.txt', 'w') as f:
    f.write(parseAddress())
You mistakenly indented the "with" statement one space. This makes it subordinate to an earlier block. I would think any self-respecting Python interpreter would flag this as not matching any earlier indentation, but it seems to be fouling your output.
Also, please consider adding some tracing print statements to see where your code did execute. That output alone can often show you the problem, or lead us to it. You should always provide actual output for us, rather than just a general description.
You need to fix your indentation, which is important in Python as it is the only way to define a block of code.
You also have too many statements in your try block.
def parseAddress():
    website = None
    try:
        website = urllib2.urlopen(getAddress())
    except urllib2.HTTPError, err:
        print "Cannot retrieve URL: HTTP Error Code: ", err.code
    except urllib2.URLError, err:
        print "Cannot retrieve URL: " + err.reason[1]

    if website is not None:
        html = website.read()
        addys = re.findall('''[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?''', html, flags=re.IGNORECASE)
        print addys
        # need to write the addys data to results.txt
        with open('results.txt', 'w') as f:
            result_line = f.writelines(addys)
I want to store some data from the web. I want to do two operations: the first is to open a URL, and the second is to store the data, with both of them in try...except blocks.
I'd like to know whether nesting try...except is good or not, and why.
Solution one:
try:
    # open url
    # store data
except:
    # url doesn't exist
    # cannot store
Solution two:
try:
    # open url
    try:
        # store data
    except:
        # cannot store
except:
    # cannot open url
As naiquevin suggested, it might be useful to catch exactly what you intend to:
try:
    openURL()
except URLError:
    print "cannot open URL"
else:
    try:
        saveData()
    except IOError:
        print "cannot save data"
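To make the try/except/else shape concrete, here is a self-contained Python 3 sketch of the same pattern; `open_url` and `save_data` are hypothetical stand-ins passed as parameters (my names, not from the answer), and OSError stands in for URLError/IOError:

```python
def fetch_and_store(open_url, save_data):
    """try/else: only attempt the save when the open succeeded."""
    try:
        resource = open_url()
    except OSError:
        return "cannot open URL"
    else:
        try:
            save_data(resource)
        except OSError:
            return "cannot save data"
    return "ok"
```

Compared with one flat try block, this keeps the two failure modes distinct without nesting the open inside the store's handler.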