I want to store some data from the web. I want to do two operations: the first is to open a URL, the second is to store the data, with both of them in try...except blocks.
I'd like to know whether nesting try...except blocks is good practice or not, and why.
Solution one:
try:
    # open url
    # store data
except:
    # url doesn't exist
    # cannot store
Solution two:
try:
    # open url
    try:
        # store data
    except:
        # cannot store
except:
    # cannot open url
As naiquevin suggested, it might be useful to catch exactly the exceptions you intend to handle:
try:
    openURL()
except URLError:
    print "cannot open URL"
else:
    try:
        saveData()
    except IOError:
        print "cannot save data"
My question is simple, as the title suggests.
try:
    response = requests.get(URL2)  # download the data behind the URL
    open(zipname, "wb").write(response.content)  # Open the response into a new file
    # extract zip file to specified location
    with ZipFile(zipname, 'r') as zip_file:
        zip_file.extractall(path=path)
    os.remove(zipname)  # removes the downloaded zip file
    print("itworks")
except (requests.exceptions.ConnectionError, FileNotFoundError):
    print("finally the error")
    # retry the try part after some seconds
Now I want it to retry and go over the try part again, after some time, in case the exception happens.
First of all (looking at the accepted answer), I wouldn't use recursion where it's not necessary, for a whole bunch of reasons, among which readability, maintainability, and the very name of this platform.
Then I would exempt doSomething() from catching the exception and embed the try/except block in a while loop, like so:
def doSomething():
    "do something here"

while True:
    try:
        doSomething()
        print("success")
        break
    except (requests.exceptions.ConnectionError, FileNotFoundError):
        print("error, trying again in 10s")
        time.sleep(10)
This does a better job at separating concerns; doSomething() just has to... do something. Error catching/logging can be handled outside.
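Putting that together with the code from the question, a rough sketch might look like this (URL2, zipname, and path are assumed to be defined elsewhere, as in the question):

import os
import time
from zipfile import ZipFile

import requests

def doSomething():
    # download the data behind the URL and write it to a local zip file
    response = requests.get(URL2)
    with open(zipname, "wb") as f:
        f.write(response.content)
    # extract the zip file to the specified location, then remove it
    with ZipFile(zipname, 'r') as zip_file:
        zip_file.extractall(path=path)
    os.remove(zipname)

while True:
    try:
        doSomething()
        print("success")
        break
    except (requests.exceptions.ConnectionError, FileNotFoundError):
        print("error, trying again in 10s")
        time.sleep(10)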
You can always make it a recursive function and import the time module to wait x seconds.
For instance:
import time

def doSomething():
    try:
        response = requests.get(URL2)  # download the data behind the URL
        open(zipname, "wb").write(response.content)  # Open the response into a new file
        # extract zip file to specified location
        with ZipFile(zipname, 'r') as zip_file:
            zip_file.extractall(path=path)
        os.remove(zipname)  # removes the downloaded zip file
        print("itworks")
    except (requests.exceptions.ConnectionError, FileNotFoundError):
        print("finally the error")
        # retry the try part after some seconds
        time.sleep(1000)
        # Try again
        doSomething()
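One caveat with the recursive version (my own observation, not part of the answer): every failed attempt adds a stack frame, so a long outage can eventually hit Python's recursion limit. A sketch with a hypothetical max_retries parameter that bounds the retries:

import os
import time
from zipfile import ZipFile

import requests

def doSomething(max_retries=5):
    try:
        response = requests.get(URL2)  # URL2, zipname and path as in the question
        with open(zipname, "wb") as f:
            f.write(response.content)
        with ZipFile(zipname, 'r') as zip_file:
            zip_file.extractall(path=path)
        os.remove(zipname)
    except (requests.exceptions.ConnectionError, FileNotFoundError):
        if max_retries > 0:
            time.sleep(10)  # wait before the next attempt
            doSomething(max_retries - 1)
        else:
            raise  # out of retries; let the caller see the error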
If I understand correctly, you can do the following:
from time import sleep

no_of_attempts = 5  # set number of attempts

for i in range(no_of_attempts):
    try:
        response = requests.get(URL2)  # download the data behind the URL
        open(zipname, "wb").write(response.content)  # Open the response into a new file
        # extract zip file to specified location
        with ZipFile(zipname, 'r') as zip_file:
            zip_file.extractall(path=path)
        os.remove(zipname)  # removes the downloaded zip file
        print("itworks")
        break
    except (requests.exceptions.ConnectionError, FileNotFoundError):
        print("finally the error")
        sleep(3)
        continue
This way you can retry the "try" part as many times as you set no_of_attempts to be.
You could also use while True if you want it to retry until it succeeds and then break inside the try, but I would not recommend it.
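As a variation (a sketch of my own, not part of either answer), the retry logic can also be factored into a small helper so the download code stays untouched; download_and_extract is a hypothetical function holding the body of the original try block:

import time

def retry(func, exceptions, attempts=5, delay=3):
    # call func() up to `attempts` times, sleeping `delay` seconds after each failure
    for attempt in range(attempts):
        try:
            return func()
        except exceptions:
            if attempt == attempts - 1:
                raise  # out of attempts; re-raise the last error
            time.sleep(delay)

# usage:
# retry(download_and_extract,
#       (requests.exceptions.ConnectionError, FileNotFoundError))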
Suppose I want to introduce a try/except block while handling a txt file. Which of the two following ways of capturing the possible exception is correct?
try:
    h = open(filename)
except:
    h.close()
    print('Could not read file')
try:
    h = open(filename)
except:
    print('Could not read file')
In other words, should h.close() be called when the exception occurs, or not?
Secondly, suppose that you have the following code
try:
    h = open(filename)
    "code line here1"
    "code line here2"
except:
    h.close()
    print('Could not read file')
If an error occurs in "code line here1", should I use h.close() in the except block?
Is there a difference from the previous code?
You should use with; it will close the file appropriately:
with open(filename) as h:
    #
    # do whatever you need...
    #

# when you get here, the file will be closed automatically.
You can enclose that in a try/except block if needed. The file will always be properly closed:
try:
    with open(filename) as h:
        #
        # do whatever you need...
        #
except FileNotFoundError:
    print('file not found')
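To connect this back to the question: with is roughly a try/finally that closes the handle only after open() has succeeded, which is why calling h.close() inside the except block is both unnecessary and unsafe (if open() fails, h was never assigned). A rough sketch of that equivalence:

h = open(filename)      # if this raises, there is nothing to close yet
try:
    # do whatever you need with h...
    pass
finally:
    h.close()           # runs whether or not the body raised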
I want to save these email results to my results.txt file in the directory.
def parseAddress():
    try:
        website = urllib2.urlopen(getAddress())
        html = website.read()
        addys = re.findall('''[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*#(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?''', html, flags=re.IGNORECASE)
        print addys
    except urllib2.HTTPError, err:
        print "Cannot retrieve URL: HTTP Error Code: ", err.code
    except urllib2.URLError, err:
        print "Cannot retrive URL: " + err.reason[1]

 # need to write the addys data to results.txt
 with open('results.txt', 'w') as f:
     result_line = f.writelines(addys)
Use return addys at the end of your function. print will only output to your screen.
In order to retrieve addys, you would need to call the function in your with statement or create a variable that contains the result of parseAddress().
You can save the memory that a variable would use by simply calling the function, like so:
with open('results.txt', 'w') as f:
    f.writelines(parseAddress())
You mistakenly indented the "with" statement one space. This makes it look as though it belongs to an earlier block. I would think any self-respecting Python interpreter would flag this as not matching any earlier indentation level, but it seems to be fouling your output.
Also, please consider adding some tracing print statements to see where your code did execute. That output alone can often show you the problem, or lead us to it. You should always provide actual output for us, rather than just a general description.
You need to fix your indentation, which is important in Python as it is the only way to define a block of code.
You also have too many statements in your try block.
def parseAddress():
    website = None
    try:
        website = urllib2.urlopen(getAddress())
    except urllib2.HTTPError, err:
        print "Cannot retrieve URL: HTTP Error Code: ", err.code
    except urllib2.URLError, err:
        print "Cannot retrive URL: " + err.reason[1]
    if website is not None:
        html = website.read()
        addys = re.findall('''[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*#(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?''', html, flags=re.IGNORECASE)
        print addys
        # need to write the addys data to results.txt
        with open('results.txt', 'w') as f:
            result_line = f.writelines(addys)
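For completeness, a small sketch of my own that combines the two suggestions: the function returns addys and the caller does the writing. getAddress() is the asker's own helper, and EMAIL_PATTERN stands for the regular expression from the question:

import re
import urllib2

def parseAddress():
    try:
        website = urllib2.urlopen(getAddress())
    except urllib2.HTTPError, err:
        print "Cannot retrieve URL: HTTP Error Code: ", err.code
        return []
    except urllib2.URLError, err:
        print "Cannot retrieve URL: " + err.reason[1]
        return []
    return re.findall(EMAIL_PATTERN, website.read(), flags=re.IGNORECASE)

addys = parseAddress()
with open('results.txt', 'w') as f:
    f.writelines(addy + '\n' for addy in addys)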
I have a directory that has a bunch of subdirectories, and each subdirectory has many CSV files, but I am only interested in certain CSV files. So I wrote the following Python method, but I am unable to match the file name. If I use *.csv it will find all the files, but I don't want all the files to be read in:
def gatherStats(template_file, csv_file):
    for lang in getLanguageCodes(csv_file):
        lang_dir = os.path.join(template_file, lang)
        try:
            for file in os.listdir(lang_dir):
                if fnmatch.fnmatch(file, '*-*-template-users-data.csv'):
                    t_file = open(file, 'rb').read()
                    reader = csv.reader()
                    for row in reader:
                        print row
                else:
                    print "didn't find the file"
        except Exception, e:
            logging.exception(e)
What am I doing wrong here? Is it a regular expression issue? Can we use regular expressions with fnmatch?
There are several problems with your code. Fix them first, then we might get to the bottom of what your issue really is.
First of all, don't use built-in names as variables, such as file. Rather replace it with filename.
Then use os.path.join(lang_dir, filename) before opening the file. Meaning:
t_file = open(os.path.join(lang_dir, filename), 'rb').read()
How do you expect reader = csv.reader() to read your file if you don't reference your open file object in this line?
Your try/except block is a bit too wide for my taste. Take your time and narrow down the errors that actually can happen. Then decide which of them you want to ignore and which should crash your program. Take a close look at the exceptions actually thrown in this block. You'll probably find your issue there.
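As one way to narrow it (a sketch of my own, using a hypothetical helper name): the call most likely to fail for environmental reasons is os.listdir, so catching OSError just around it and letting everything else propagate keeps real bugs visible:

import logging
import os

def list_language_dir(lang_dir):
    # return the directory listing, or an empty list if the directory is missing
    try:
        return os.listdir(lang_dir)
    except OSError as e:
        logging.exception(e)
        return []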
With the help provided by another user, I managed to fix the problem. I am putting this answer here for future reference for the community.
def gatherStats(template_file, csv_file):
    for lang in getLanguageCodes(csv_file):
        lang_dir = os.path.join(template_file, lang)
        try:
            for filename in os.listdir(lang_dir):
                path = os.path.join(lang_dir, filename)
                if re.search(r'-.+-template-users-data.csv$', filename):
                    with open(path, 'rb') as template_user_data_file:
                        reader = csv.reader(template_user_data_file)
                        try:
                            for row in reader:
                                print row
                        except csv.Error as e:
                            logging.error(e)
                else:
                    print "didn't find the file"
        except Exception, e:
            logging.exception(e)
def creating_folder_for_csv_files(cwd):
    try:
        os.makedirs(cwd + '\\migration_data\\trade')
    except os.error, e:
        print "Could not create the destination folder for CSV files"
    # end of first try/except block
    try:
        os.makedirs(cwd + '\\migration_data\\voucher')
    except os.error, e:
        print "Could not create the destination folder for CSV files"
In my code, the first try/except block works but the second does not. What's the problem?
The voucher folder might already exist; os.makedirs raises an error if the target directory is already there.
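A minimal sketch of how to tolerate that case (my own addition, in Python 2 style to match the question): check the error number and only complain about genuine failures.

import errno
import os

def make_folder(path):
    try:
        os.makedirs(path)
    except os.error, e:
        # ignore "already exists"; report anything else
        if e.errno != errno.EEXIST:
            print "Could not create the destination folder for CSV files"

On Python 3, os.makedirs(path, exist_ok=True) has the same effect.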