I am trying to write a quick little program that parses a CSV file, pulls out usernames and passwords, and tests each pair by logging in to an FTP server, creating a directory, and deleting it. I have written a try/except so it can output any errors I get, letting me review which FTP logins may need tweaking. The problem is that when I run it, it stalls as soon as one of the logins is broken instead of proceeding to the next row. I have tried using pass and finally after my try, but it won't return to the loop. Any help is appreciated.
import csv
import ftplib
from ftplib import FTP

with open('Filename.csv') as csvfile:
    readCSV = csv.reader(csvfile, delimiter=',')
    lineCount = 0
    rowCount = 0
    for row in readCSV:
        username = row[rowCount]
        passwordFTP = row[rowCount+1]
        print(username)
        ftp = FTP('ftp.org')  # place the FTP address here
        ftp.set_debuglevel(2)  # this is set to highest debug level
        try:
            ftp.login(user='usernameGoesHere', passwd='passwordGoesHere', acct='')
        except ftplib.all_errors as e:
            errorcode_string = str(e).split(None, 1)[0]
            pass
        finally:
            ftp.cwd('/TestDir')
            ftp.mkd('testDir')
            ftp.rmd('testDir')
            ftp.quit()
            pass
There is actually no point in putting that code in finally. The pass statements you have added have no effect here either. On a login error the program logs the error, but because finally always runs it still tries to change into the directory and create a folder on a connection that was never authenticated, which is why it stalls.
Instead, move cwd, mkd and the other calls into the try block. If any of them throws, the except will catch it and the loop can move on to the next row.
There is no need for pass statements anywhere in the try. The only thing that could still belong in a finally block is ftp.quit(), so the connection is cleaned up even if one of the operations fails, but then what should happen when the login itself was not successful?
Do something like:
with FTP('ftp.org') as ftp:
    try:
        ftp.login(user='usernameGoesHere', passwd='passwordGoesHere', acct='')
        ftp.cwd('/TestDir')
        ftp.mkd('testDir')
        ftp.rmd('testDir')
    except ftplib.all_errors as e:
        errorcode_string = str(e).split(None, 1)[0]
Using a context manager (the with statement) also means you don't have to call ftp.quit() manually; the connection is closed when the block exits.
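Putting that together with your CSV loop, a minimal sketch could look like the following. It assumes the username is in the first column and the password in the second, and it simply collects the error text per user so you can review the broken logins afterwards; the host name, file name and column layout are placeholders you would adapt.

import csv
import ftplib
from ftplib import FTP

failed_logins = []  # (username, error) pairs to review later

with open('Filename.csv') as csvfile:
    for row in csv.reader(csvfile, delimiter=','):
        username, password = row[0], row[1]
        try:
            with FTP('ftp.org') as ftp:  # place the FTP address here
                ftp.login(user=username, passwd=password, acct='')
                ftp.cwd('/TestDir')
                ftp.mkd('testDir')
                ftp.rmd('testDir')
        except ftplib.all_errors as e:
            # record the failure and carry on with the next row
            failed_logins.append((username, str(e)))

for username, error in failed_logins:
    print(username, error)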
I am building a Python app using FBS, and part of it relies on an SQLite3 database. I have code that creates this database, inside a try/except block, if it can't be found.
When I run the app after compiling it, it not only cannot find the pre-existing SQLite3 file, it also won't create one. It does not display any error messages.
I have tried creating the file if it doesn't exist using this code:
try:
    self.connection = sqlite3.connect(path)
    self.cursor = self.connection.cursor()
except:
    if not os.path.exists(path):
        # Try and make the .config directory
        try:
            os.makedirs(".config")
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
        # Create the datastore, and close it
        f = open(path, "w+")
        f.close()
        # And try connect to database again
        return self.__connect(path)
    else:
        print(f"No database exists, and could not create one.\nPlease create file in app directory called: {path}\nThen restart application.")
        raise
The code works fine in dev, but as soon as I compile it into a Mac app, it refuses to find or create the database.
Fixed. If anyone has a similar issue, use the built-in appctxt.get_resource(file_path) method instead of a hard-coded path.
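As a rough sketch of what that looks like, assuming a PyQt5-based fbs project (there is also a PySide2 variant) with a database file shipped in the project's resources; the file name database.db is made up for illustration:

import sqlite3

# assumes a PyQt5-based fbs project
from fbs_runtime.application_context.PyQt5 import ApplicationContext

appctxt = ApplicationContext()

# get_resource resolves the path both in dev and inside the frozen app bundle;
# "database.db" is a hypothetical file shipped with the app's resources
db_path = appctxt.get_resource("database.db")

connection = sqlite3.connect(db_path)
cursor = connection.cursor()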
I have a problem: I can't find the reason why a function in my Django views.py sometimes runs twice. When I go to the URL that calls the create_db view, the function reads JSON files from a directory, parses them and writes the data into the database. Most of the time it works perfectly, but sometimes, for no apparent reason, it runs twice and writes the same data into the database. Does anyone know what could cause the code to run twice, and how I can solve the problem?
Here is my create_db function:
def create_db(request):
    response_data = {}
    try:
        start = time.time()
        files = os.listdir()
        print(files)
        for filename in files:
            if filename.endswith('.json'):
                print(filename)
                with open(f'{filename.strip()}', encoding='utf-8') as f:
                    data = json.load(f)
                    for item in data["CVE_Items"]:
                        import_item(item)
        response_data['result'] = 'Success'
        response_data['message'] = 'Baza podatkov je ustvarjena.'  # "The database has been created."
    except KeyError:
        response_data['result'] = 'Error'
        response_data['message'] = 'Prislo je do napake! Podatki niso bili uvozeni!'  # "An error occurred! The data was not imported!"
    return HttpResponse(json.dumps(response_data), content_type='application/json')
The console output that I expect:
['nvdcve-1.0-2002.json', 'nvdcve-1.0-2003.json', 'nvdcve-1.0-2004.json', 'nvdcve-1.0-2005.json', 'nvdcve-1.0-2006.json', 'nvdcve-1.0-2007.json', 'nvdcve-1.0-2008.json', 'nvdcve-1.0-2009.json', 'nvdcve-1.0-2010.json', 'nvdcve-1.0-2011.json', 'nvdcve-1.0-2012.json', 'nvdcve-1.0-2013.json', 'nvdcve-1.0-2014.json', 'nvdcve-1.0-2015.json', 'nvdcve-1.0-2016.json', 'nvdcve-1.0-2017.json']
nvdcve-1.0-2002.json
nvdcve-1.0-2003.json
nvdcve-1.0-2004.json
nvdcve-1.0-2005.json
nvdcve-1.0-2006.json
nvdcve-1.0-2007.json
nvdcve-1.0-2008.json
nvdcve-1.0-2009.json
nvdcve-1.0-2010.json
nvdcve-1.0-2011.json
nvdcve-1.0-2012.json
nvdcve-1.0-2013.json
nvdcve-1.0-2014.json
nvdcve-1.0-2015.json
nvdcve-1.0-2016.json
nvdcve-1.0-2017.json
Console output when the error happens:
['nvdcve-1.0-2002.json', 'nvdcve-1.0-2003.json', 'nvdcve-1.0-2004.json', 'nvdcve-1.0-2005.json', 'nvdcve-1.0-2006.json', 'nvdcve-1.0-2007.json', 'nvdcve-1.0-2008.json', 'nvdcve-1.0-2009.json', 'nvdcve-1.0-2010.json', 'nvdcve-1.0-2011.json', 'nvdcve-1.0-2012.json', 'nvdcve-1.0-2013.json', 'nvdcve-1.0-2014.json', 'nvdcve-1.0-2015.json', 'nvdcve-1.0-2016.json', 'nvdcve-1.0-2017.json']
nvdcve-1.0-2002.json
['nvdcve-1.0-2002.json', 'nvdcve-1.0-2003.json', 'nvdcve-1.0-2004.json', 'nvdcve-1.0-2005.json', 'nvdcve-1.0-2006.json', 'nvdcve-1.0-2007.json', 'nvdcve-1.0-2008.json', 'nvdcve-1.0-2009.json', 'nvdcve-1.0-2010.json', 'nvdcve-1.0-2011.json', 'nvdcve-1.0-2012.json', 'nvdcve-1.0-2013.json', 'nvdcve-1.0-2014.json', 'nvdcve-1.0-2015.json', 'nvdcve-1.0-2016.json', 'nvdcve-1.0-2017.json']
nvdcve-1.0-2002.json
nvdcve-1.0-2003.json
nvdcve-1.0-2003.json
nvdcve-1.0-2004.json
nvdcve-1.0-2004.json
nvdcve-1.0-2005.json
nvdcve-1.0-2005.json
nvdcve-1.0-2006.json
nvdcve-1.0-2006.json
nvdcve-1.0-2007.json
nvdcve-1.0-2007.json
nvdcve-1.0-2008.json
nvdcve-1.0-2008.json
nvdcve-1.0-2009.json
nvdcve-1.0-2009.json
nvdcve-1.0-2010.json
nvdcve-1.0-2010.json
nvdcve-1.0-2011.json
nvdcve-1.0-2011.json
nvdcve-1.0-2012.json
nvdcve-1.0-2012.json
nvdcve-1.0-2013.json
nvdcve-1.0-2013.json
nvdcve-1.0-2014.json
nvdcve-1.0-2014.json
nvdcve-1.0-2015.json
nvdcve-1.0-2015.json
nvdcve-1.0-2016.json
nvdcve-1.0-2016.json
nvdcve-1.0-2017.json
nvdcve-1.0-2017.json
The problem is not in the code you have shown us. Enable logging for the HTTP requests your application receives to make sure the browser really sends only a single request. If you see two requests, check whether they come from the same session (maybe another user is clicking at the same time).
If both requests come from the same user, you may be clicking the button twice; it could even be a hardware problem with the mouse. To prevent this, use JavaScript to disable the button after the first click.
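If you want to see exactly which requests reach Django, a minimal request-logging middleware along these lines could help; the class name, logger name and requests.log file are arbitrary choices, and you would add the class to MIDDLEWARE in settings.py (after SessionMiddleware if you want the session key).

import logging

logger = logging.getLogger("request_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("requests.log"))

class RequestAuditMiddleware:
    """Log method, path and session key for every incoming request."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        session = getattr(request, "session", None)
        logger.info("%s %s session=%s", request.method, request.path,
                    session.session_key if session else "-")
        return self.get_response(request)

If requests.log shows two entries with near-identical timestamps for the same URL, the duplication is happening in the browser or network layer rather than inside your view.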
I am retrieving data files from a FTP server in a loop with the following code:
import gzip
import io
import urllib.request

response = urllib.request.urlopen(url)  # url is set by the surrounding loop
data = response.read()
response.close()

compressed_file = io.BytesIO(data)
gin = gzip.GzipFile(fileobj=compressed_file)
Retrieving and processing the first few works fine, but after a few request I am getting the following error:
530 Maximum number of connections exceeded.
I tried closing the connection (see the code above) and using a sleep() timer, but neither worked. What am I doing wrong here?
Trying to make urllib do FTP properly makes my brain hurt. By default it opens a new connection for each file, apparently without reliably ensuring the connections are closed.
ftplib is more appropriate I think.
Since I happen to be working on the same data you are(were)... Here is a very specific answer decompressing the .gz files and passing them into ish_parser (https://github.com/haydenth/ish_parser).
I think it is also clear enough to serve as a general answer.
import ftplib
import io
import gzip

import ish_parser  # from: https://github.com/haydenth/ish_parser

ftp_host = "ftp.ncdc.noaa.gov"
parser = ish_parser.ish_parser()

# identifies what data to get
USAF_ID = '722950'
WBAN_ID = '23174'
YEARS = range(1975, 1980)

with ftplib.FTP(host=ftp_host) as ftpconn:
    ftpconn.login()

    for year in YEARS:
        ftp_file = "pub/data/noaa/{YEAR}/{USAF}-{WBAN}-{YEAR}.gz".format(USAF=USAF_ID, WBAN=WBAN_ID, YEAR=year)
        print(ftp_file)

        # read the whole file and save it to a BytesIO (stream)
        response = io.BytesIO()
        try:
            ftpconn.retrbinary('RETR ' + ftp_file, response.write)
        except ftplib.error_perm as err:
            if str(err).startswith('550 '):
                print('ERROR:', err)
            else:
                raise

        # decompress and parse each line
        response.seek(0)  # jump back to the beginning of the stream
        with gzip.open(response, mode='rb') as gzstream:
            for line in gzstream:
                parser.loads(line.decode('latin-1'))
This does read the whole file into memory, which could probably be avoided with some clever wrapping and/or a generator, but it works fine for a year's worth of hourly weather observations.
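If holding the whole file in memory ever became a problem, one simple alternative (a rough sketch, not tested against the NOAA server) is to spool the compressed download to a temporary file and stream the decompressed lines from there; process_line below is a stand-in for whatever parsing you do.

import ftplib
import gzip
import tempfile

def process_line(line):
    print(line[:40])  # stand-in for real parsing, e.g. ish_parser

with ftplib.FTP("ftp.ncdc.noaa.gov") as ftpconn:
    ftpconn.login()
    with tempfile.TemporaryFile() as spool:
        # write the compressed bytes to disk instead of keeping them in RAM
        ftpconn.retrbinary('RETR pub/data/noaa/1975/722950-23174-1975.gz', spool.write)
        spool.seek(0)  # rewind before decompressing
        with gzip.open(spool, mode='rb') as gzstream:
            for line in gzstream:
                process_line(line.decode('latin-1'))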
Probably a pretty nasty workaround, but this worked for me, presumably because each subprocess exits after a single download, so any connections urllib leaves open are torn down with the process. I made a script (here called test.py) that does the request (see the code above). The code below is used in the loop I mentioned and calls test.py:
from subprocess import call

with open('log.txt', 'a') as f:
    call(['python', 'test.py', args[0], args[1]], stdout=f)
I have written a loop that uploads some data to a MySQL database. When the Wi-Fi is not working, it is supposed to write the data to a txt file instead. The problem is that it never writes the txt file. I have initialised the database connection (called "database") and the cursor (called "cursor"). The txt file is called "test".
EDIT: by experimenting with plugging the ethernet cable in and out, I realised that when I replug the cable a bunch of data is sent automatically with the same timestamp (maybe saved in RAM or some cache; this also happens, on a smaller scale, whenever I restart the programme). Do you think there might be another way to keep a backup? Maybe by writing everything to a txt file and erasing it after every 1 GB of data (so the SD card won't fill up; it's on a Raspberry Pi 2)?
try:
    try:
        cursor.execute("""INSERT INTO table (column1) VALUES(%s)""", (temperature))
        database.commit
    except:
        text=open("test.txt","a")
        test.write(temperature + "\n")
        test.close()
except:
    print "FAIL"
Since the write() function expects a string (a character buffer object), you might need to convert temperature to a string when passing it to write():
test.write(str(temperature) + "\n")
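As a self-contained sketch of that fallback branch, where the RuntimeError simply simulates the failed INSERT and temperature is a made-up reading:

temperature = 21.5  # hypothetical sensor reading

try:
    raise RuntimeError("simulated database failure")  # stand-in for the INSERT
except Exception:
    with open("test.txt", "a") as backup:
        # str() so write() receives text rather than a float
        backup.write(str(temperature) + "\n")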
This is my approach:
import urllib, time, MySQLdb

time.sleep(1)  # Protects Overlap from execfile

url = "https://www.google.co.uk/"
try:
    checkcon = urllib.urlopen(url)
except Exception as e:
    """
    If the program gets a socket error
    it goes to sleep, after that it restarts
    and closes this Script
    """
    time.sleep(5)  # retry timer
    execfile('/path/to/your/file.py')
    raise IOError(e)

# Now that you know the connection is working
try:
    cursor.execute("""INSERT INTO table (column1) VALUES(%s)""", (temperature,))
    database.commit()
except MySQLdb.IntegrityError as e:
    # Database error: fall back to the text file
    with open('test.txt', 'a') as f:
        f.write(str(temperature) + "\n")
    raise IOError(e)
I want my code to write certain errors to a text file. It copies files over, and I want to record the "un-copied" files in a text file as a log. My script appends the file path to a list every time it hits an error (like so):
errors.append(srcfile)
After my loop, I have the following code, which I thought would write the paths to my text file:
text_file = open("%s_copy_log.txt" % username, "a")
for line in errors:
    text_file.write(line)
text_file.close()
Am I missing something?
This is an example of an XY problem: you want to do something, think of a solution, run into a problem with that solution, and then ask for help with the solution rather than the original goal. You could roll your own logging (as you are trying to do), but using Python's built-in logging module makes more sense. It already does most of what you need; all you have to do is import it, configure it, and use it.
import logging
logging.basicConfig(filename='example.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
example.log:
DEBUG:root:This message should go to the log file
INFO:root:So should this
WARNING:root:And this, too
This also supports things like setting the logging level from the command line, and a bunch of other features; see the logging docs and tutorial for more.
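For instance, here is a rough sketch of the command-line level setting mentioned above (the --log argument name and the log file name are arbitrary choices):

import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("--log", default="WARNING")  # e.g. --log DEBUG
args = parser.parse_args()

# translate the textual level into the numeric constant logging expects
numeric_level = getattr(logging, args.log.upper(), None)
if not isinstance(numeric_level, int):
    raise ValueError("Invalid log level: %s" % args.log)

logging.basicConfig(filename="example.log", level=numeric_level)
logging.info("logging configured at %s", args.log.upper())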
Try changing a to a+, which tells Python to create the file if it doesn't exist.
text_file = open("%s_copy_log.txt" % username, "a+")
Further Reading on Python File IO Types
I'm not sure what your application structure looks like, but if you have a number of users and want each username to have its own log (why?), then perhaps the best way would be something like:
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

admin_handler = logging.FileHandler("app.log")
admin_handler.setLevel(logging.DEBUG)
logger.addHandler(admin_handler)
# this will write ALL events to one location

user_logger = logger.getChild("userlog")

def login(username, password):
    if verify(username, password):  # however you're doing this
        user_logger.addHandler(logging.FileHandler("%s.log" % username))
        user_logger.setLevel(logging.WARNING)  # or maybe logging.INFO?
        user_logger.info("%s logged in" % username)
        # authenticate the user as you currently do
    else:
        logger.warning("%s attempted login with bad password!" % username)
        # prompt the user as you currently do

def logout():
    user_logger.handlers = []  # remove previous user logger
    # de-authenticate as normal

def user_log_something(the_thing):
    if is_critical(the_thing):  # however you decide an event is critical
        user_logger.critical(the_thing)