Python: try data upload to MySQL, except write to txt file

I have written a loop that uploads some data to a MySQL database. When the wifi is not working, it is supposed to write the data to a txt file instead. The problem is that it does not write the txt file. I have initialised the database (called "database") and the cursor (called "cursor"). The txt file is called "test".
EDIT: by experimenting with plugging the ethernet cable in and out, I realised that when I replug the cable, a bunch of data is automatically sent with the same timestamp (maybe saved in RAM or some cache; this happens whenever I restart the programme as well, but on a smaller scale). Do you think there might be another way to get a backup? Maybe by writing everything to a txt file and erasing the txt file after every 1 GB of data (so that the SD card won't get full; it's on a Raspberry Pi 2)?
try:
    try:
        cursor.execute("""INSERT INTO table (column1) VALUES(%s)""", (temperature))
        database.commit
    except:
        text=open("test.txt","a")
        test.write(temperature + "\n")
        test.close()
except:
    print "FAIL"

Since the write() function expects a character buffer object (a string), you need to convert temperature to a string when passing it as an argument to write:
test.write(str(temperature) + "\n")
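Putting the advice together, a minimal sketch of the corrected fallback branch could look like the following. It assumes cursor, database and temperature exist as in the question, and it also fixes two smaller issues along the way: commit is a method call and needs parentheses, and the same variable name must be used for opening, writing and closing the file.

# sketch: assumes cursor, database and temperature exist as in the question
try:
    cursor.execute("""INSERT INTO table (column1) VALUES(%s)""", (temperature,))  # trailing comma makes a one-element tuple
    database.commit()  # commit needs parentheses to actually run
except Exception:
    with open("test.txt", "a") as backup:  # one variable name, closed automatically
        backup.write(str(temperature) + "\n")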

This is my approach:
import urllib, time, MySQLdb

time.sleep(1)  # Protects overlap from execfile

url = "https://www.google.co.uk/"
try:
    checkcon = urllib.urlopen(url)
except Exception as e:
    """
    If the program gets a socket error
    it goes to sleep, after that it restarts
    and closes this script
    """
    time.sleep(5)  # retry timer
    execfile('/path/to/your/file.py')
    raise IOError(e)

# Now that you know the connection is working
try:
    cursor.execute("""INSERT INTO table (column1) VALUES(%s)""", (temperature))
    database.commit()
except MySQLdb.IntegrityError as e:
    # Database error: fall back to the text file
    with open('test.txt', 'a') as f:
        f.write(str(temperature) + "\n")
    raise IOError(e)
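Regarding the EDIT about a backup: one way to avoid the burst of identically-timestamped rows is to buffer failed readings in a local file and replay them yourself once an insert succeeds again. Below is a rough sketch of that idea, not a drop-in solution: it assumes cursor and database exist as above, and the file name backlog.txt and the helpers flush_backlog/store are made up for illustration.

import os

BACKLOG = "backlog.txt"  # hypothetical local buffer file on the SD card

def flush_backlog(cursor, database):
    # replay any buffered readings, then truncate the buffer so the SD card cannot fill up
    if not os.path.exists(BACKLOG):
        return
    with open(BACKLOG) as f:
        buffered = [line.strip() for line in f if line.strip()]
    for value in buffered:
        cursor.execute("""INSERT INTO table (column1) VALUES(%s)""", (value,))
    database.commit()
    open(BACKLOG, "w").close()  # empty the file after a successful replay

def store(temperature, cursor, database):
    try:
        cursor.execute("""INSERT INTO table (column1) VALUES(%s)""", (temperature,))
        database.commit()
        flush_backlog(cursor, database)  # connection works, so drain the backlog as well
    except Exception:
        with open(BACKLOG, "a") as f:  # connection down: buffer the reading locally
            f.write(str(temperature) + "\n")

Because the buffer is emptied after every successful replay it should stay far below the 1 GB limit mentioned in the EDIT, and if the real table has a timestamp column, each buffered line could also record the time of the reading so replayed rows keep their original timestamps.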

Related

Ignore exceptions in python with FTP

I am trying to write a quick little program that will parse through a CSV file, pull usernames and passwords, and test logging in to an FTP server with those usernames and passwords to create a directory and delete it. I have written a try/except so it can output any errors I get, so that I can review which FTP logins may need tweaking. The problem is that when I run it, it stalls if any of the logins are broken instead of proceeding. I have tried using pass and finally after my try, but it won't return to the loop. Any help is appreciated.
import csv
import ftplib
from ftplib import FTP

with open('Filename.csv') as csvfile:
    readCSV = csv.reader(csvfile, delimiter=',')
    lineCount = 0
    rowCount = 0
    for row in readCSV:
        username = row[rowCount]
        passwordFTP = row[rowCount+1]
        print (username)
        ftp = FTP('ftp.org') # place the FTP address here
        ftp.set_debuglevel(2) # this is set to highest debug level
        try:
            ftp.login(user='usernameGoesHere', passwd='passwordGoesHere', acct='')
        except ftplib.all_errors as e:
            errorcode_string = str(e).split(None, 1)[0]
            pass
        finally:
            ftp.cwd('/TestDir')
            ftp.mkd('testDir')
            ftp.rmd('testDir')
            ftp.quit()
            pass
There is actually no point in putting that code in finally. The pass statements you have added have no role in this code. On getting an error on login, the program will log the error and still try to make a folder over that connection.
Instead, move the cwd, mkd and other calls into the try block. If an error is thrown there, the except will catch it and stop the execution for that set of credentials.
There is absolutely no need for pass statements anywhere in the try. The only thing that could still remain in a finally block is ftp.quit(), so it cleans up and closes the connection if one of the operations fails; but on the other hand, what will happen if the login itself is not successful?
Do something like:
with FTP('ftp.org') as ftp:
    try:
        ftp.login(user='usernameGoesHere', passwd='passwordGoesHere', acct='')
        ftp.cwd('/TestDir')
        ftp.mkd('testDir')
        ftp.rmd('testDir')
    except ftplib.all_errors as e:
        errorcode_string = str(e).split(None, 1)[0]
Using a context manager (the with statement) saves you from having to call quit() manually.
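To address the original stalling problem, the per-row work can be wrapped so that a failed login just logs and the loop moves on to the next row. Here is a rough sketch under the assumptions that each CSV row holds a username,password pair and that Python 3.3+ is used (so FTP supports the with statement):

import csv
import ftplib
from ftplib import FTP

with open('Filename.csv') as csvfile:
    for row in csv.reader(csvfile, delimiter=','):
        username, passwordFTP = row[0], row[1]  # assumes user,password columns
        try:
            with FTP('ftp.org') as ftp:  # place the FTP address here
                ftp.login(user=username, passwd=passwordFTP, acct='')
                ftp.cwd('/TestDir')
                ftp.mkd('testDir')
                ftp.rmd('testDir')
        except ftplib.all_errors as e:
            print('login/test failed for', username, '->', e)
            continue  # carry on with the next set of credentials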

Record in a log what Python script did

I have the following line of code in a Python script:
sql_create_table(total_stores, "total_stores")
This is a function I created to upload tables to an Oracle database. I want to do something like this in order to record what tables were not created because the line failed to run:
try:
    sql_create_table(total_stores, "total_stores")
except:
    print in a log.txt "table x could not be created in the database"
Any suggestions?
Thanks in advance!
There is the Python logging module, which has a good tutorial that even includes how to log to a file.
Very basic example:
import logging
logging.basicConfig(filename="program.log", level=logging.INFO)
…
try:
    sql_create_table(total_stores, "total_stores")
except:
    logging.warning("table x could not be created in the database")
You can write the log to a txt file by doing the following:
try:
    sql_create_table(total_stores, "total_stores")
except:
    with open('log.txt', 'a') as log:
        log.write("table x could not be created in the database\n")
Note that by using 'a' we are appending to the txt file and won't be overwriting old logs.
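If several tables are uploaded in the same script, a small variation (sketched below under the assumption that sql_create_table and the table objects exist as in the question) records which table failed and keeps the traceback in the log:

import logging

logging.basicConfig(filename="log.txt", level=logging.INFO)

tables = {"total_stores": total_stores}  # add the other name/table pairs here

for name, data in tables.items():
    try:
        sql_create_table(data, name)
    except Exception:
        # logging.exception records the message and the full traceback
        logging.exception("table %s could not be created in the database", name)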

Python 3 urllib: 530 too many connections, in loop

I am retrieving data files from an FTP server in a loop with the following code:
response = urllib.request.urlopen(url)
data = response.read()
response.close()
compressed_file = io.BytesIO(data)
gin = gzip.GzipFile(fileobj=compressed_file)
Retrieving and processing the first few works fine, but after a few requests I am getting the following error:
530 Maximum number of connections exceeded.
I tried closing the connection (see code above) and using a sleep() timer, but neither worked. What am I doing wrong here?
Trying to make urllib do FTP properly makes my brain hurt. By default, it creates a new connection for each file, apparently without really properly ensuring the connections close.
ftplib is more appropriate I think.
Since I happen to be working on the same data you are (or were)... here is a very specific answer that decompresses the .gz files and passes them into ish_parser (https://github.com/haydenth/ish_parser).
I think it is also clear enough to serve as a general answer.
import ftplib
import io
import gzip

import ish_parser # from: https://github.com/haydenth/ish_parser

ftp_host = "ftp.ncdc.noaa.gov"
parser = ish_parser.ish_parser()

# identifies what data to get
USAF_ID = '722950'
WBAN_ID = '23174'
YEARS = range(1975, 1980)

with ftplib.FTP(host=ftp_host) as ftpconn:
    ftpconn.login()

    for year in YEARS:
        ftp_file = "pub/data/noaa/{YEAR}/{USAF}-{WBAN}-{YEAR}.gz".format(USAF=USAF_ID, WBAN=WBAN_ID, YEAR=year)
        print(ftp_file)

        # read the whole file and save it to a BytesIO (stream)
        response = io.BytesIO()
        try:
            ftpconn.retrbinary('RETR '+ftp_file, response.write)
        except ftplib.error_perm as err:
            if str(err).startswith('550 '):
                print('ERROR:', err)
            else:
                raise

        # decompress and parse each line
        response.seek(0) # jump back to the beginning of the stream
        with gzip.open(response, mode='rb') as gzstream:
            for line in gzstream:
                parser.loads(line.decode('latin-1'))
This does read the whole file into memory, which could probably be avoided with some clever wrappers and/or yield, but it works fine for a year's worth of hourly weather observations.
Probably a pretty nasty workaround, but this worked for me. I made a script (here called test.py) which does the request (see code above). The code below is used in the loop I mentioned and calls test.py:
from subprocess import call

with open('log.txt', 'a') as f:
    call(['python', 'test.py', args[0], args[1]], stdout=f)

MySQL connection/query make file not work

I've got some test code I'm working on. In a separate HTML file, a button onclick event gets the URL of the page and passes it as a variable (jquery_input) to this python script. Python then scrapes the URL and identifies two pieces of data, which it then formats and concatenates together (resulting in the variable lowerCaseJoined). This concatenated variable has a corresponding entry in a MySQL database. With each entry in the db, there is an associated .gif file.
From here, what I'm trying to do is open a connection to the MySQL server and query the concatenated variable against the db to get the associated .gif file.
Once this has been accomplished, I want to print the .gif file as an alert on the webpage.
If I take out the db section of the code (connection, querying), the code runs just fine. Also, I am successfully able to execute the db part of the code independently through the Python shell. However, when the entire code resides in one file, nothing happens when I click the button. I've systematically removed the lines of code related to the db connection, and my code begins stalling out at the first line (db = MySQLdb.connection...). So it looks like as soon as I start trying to connect to the db, the program goes kaput.
Here is the code:
#!/usr/bin/python
from bs4 import BeautifulSoup as Soup
import urllib
import re
import cgi, cgitb
import MySQLdb
cgitb.enable() # for troubleshooting
# the cgi library gets the var from the .html file
form = cgi.FieldStorage()
jquery_input = form.getvalue("stuff_for_python", "nothing sent")
# the next section scrapes the URL,
# finds the call no and location,
# formats them, and concatenates them
content = urllib.urlopen(jquery_input).read()
soup = Soup(content)
extracted = soup.find_all("tr", {"class": "bibItemsEntry"})
cleaned = str(extracted)
start = cleaned.find('browse') +8
end = cleaned.find('</a>', start)
callNo = cleaned[start:end]
noSpacesCallNo = callNo.replace(' ', '')
noSpacesCallNo2 = noSpacesCallNo.replace('.', '')
startLoc = cleaned.find('field 1') + 13
endLoc = cleaned.find('</td>', startLoc)
location = cleaned[startLoc:endLoc]
noSpacesLoc = location.replace(' ', '')
joined = (noSpacesCallNo2+noSpacesLoc)
lowerCaseJoined = joined.lower()
# the next section establishes a connection
# with the mySQL db and queries it
# using the call/loc code (lowerCaseJoined)
db = MySQLdb.connect(host="localhost", user="...", "passwd="...",
                     db="locations")
cur = db.cursor()
queryDb = """
SELECT URL FROM locations WHERE location = %s
"""
cur.execute(queryDb, lowerCaseJoined)
result = cur.fetchall()
cur.close()
db.close()
# the next 2 'print' statements are important for web
print "Content-type: text/html"
print
print result
Any ideas what I'm doing wrong?
I'm new at programming, so I'm sure there's a lot that can be improved upon here. But prior to refining it I just want to get the thing to work!
I figured out the problem. It seems that I had a stray quotation mark before the password portion of the db connection line. Things are all good now.
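For reference, the connect call without the stray quotation mark would look something like this (credentials elided as in the question):

# corrected: no quotation mark before passwd
db = MySQLdb.connect(host="localhost", user="...", passwd="...",
                     db="locations")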

Speeding Up Data Transfer Using Pyserial in Python

I have created a data transfer program using Python and the pyserial module. I am currently using it to communicate a text file over a radio device between a Raspberry Pi and my computer. The problem is that the file I am trying to send, which contains 5000 lines of text and is 93.0 KB in size, takes quite a while to send. To be exact, it takes about a full minute. I need this to be done within seconds. Here is the code; I am sure there are many optimizations that can be made with file reading and such that would increase the transfer speed. My radio device has a data rate of 250 kbps, which is obviously not being reached. Any help would be greatly appreciated.
Code to send (located on the Raspberry Pi):
def s_file():
    print 'start'
    readline = lambda : iter(lambda:ser.read(1),"\n")
    name = "".join(readline())
    print name
    file_loc = directory_name + name
    sleep(1)
    print('Waiting for command from client to send file...')
    while "".join(readline()) != "<<SENDFILE>>":
        pass
    with open(file_loc) as FileObj:
        for lines in FileObj:
            ser.write(lines)
    ser.write("\n<<EOF>>\n")
    print 'done'
Code to receive (on my laptop):
def r_f_bird(self): # send command to bird to start func
    if ser_open == True:
        readline = lambda : iter(lambda:ser.read(1),"\n")
        NAME = self.tb2.get()
        ser.write('/' + NAME)
        print NAME
        sleep(0.5)
        ser.write('\n<<SENDFILE>>\n')
        start = clock()
        with open(str(NAME),"wb") as outfile:
            while True:
                line = "".join(readline())
                if line == "<<EOF>>":
                    break
                print >> outfile, line
        elapsed = clock() - start
        print elapsed
        ser.flush()
    else:
        pass
Perhaps the overhead of ser.read(1) is slowing things down. It seems like you have a \n at the end of each line, so try using pySerial's readline() method rather than rolling your own. Changing line = "".join(readline()) to line = ser.readline() ought to do it. You will also need to change your loop end condition to == "<<EOF>>\n".
You may also need to add a ser.flush() on the writing side.
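Putting that together, a rough sketch of the receiving loop using pySerial's readline() might look like this (Python 2 style to match the original; it assumes ser, NAME and clock are set up as in the question):

start = clock()
with open(str(NAME), "wb") as outfile:
    while True:
        line = ser.readline()  # reads up to and including the "\n"
        if line == "<<EOF>>\n":  # the sentinel now carries its newline
            break
        outfile.write(line)  # line already ends with "\n"
elapsed = clock() - start
print elapsed
ser.flush()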
