def readswitch(x,y,connn,read):
    x='create vlan'
    y='global'
    conn = sqlite3.connect('server.db')
    if conn:
        cur = conn.cursor()
        run= cur.execute("SELECT command FROM switch WHERE function =? or type = ? ORDER BY key ASC",(x,y))
        read = cur.fetchall()
        return run;

for row in read:
    print (readswitch())
I want to search for x and y in my database and have the function return the result of my SQL statement for the command, but I can't seem to run this function; the loop
for row in read:
fails with
NameError: name 'read' is not defined
Can anyone fix this error?
Your code has several problems, including how arguments are passed and variable scope: read is only assigned inside readswitch, so it doesn't exist at module level where the for loop runs. I'm not sure what the code is really trying to do. I suggest rewriting it without a function, as straight sequential execution; once that works, factor it back out into a function.
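For example, a minimal straight-line sketch, assuming the server.db database and a switch table with function, type, key, and command columns as in your query:

import sqlite3

x = 'create vlan'
y = 'global'

conn = sqlite3.connect('server.db')
cur = conn.cursor()
# run the query and fetch every matching row before closing the connection
cur.execute(
    "SELECT command FROM switch WHERE function = ? OR type = ? ORDER BY key ASC",
    (x, y),
)
rows = cur.fetchall()
conn.close()

# each row is a one-element tuple containing the command text
for row in rows:
    print(row[0])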
I'm using AWS RDS, which I'm accessing with pymysql. I have a python lambda function that inserts a row into one of my tables. I then call cursor.commit() on the pymysql cursor object. Later, my lambda invokes a second lambda; this second lambda (using a different db connection) executes a SELECT to look for the newly-added row. Unfortunately, the row is not found immediately. As a debugging step, I added code like this:
lambda_handler.py
...
uuid_values = [uuid_value]  # A single-item list
things = queries.get_things(uuid_values)

# Added for debugging
if not things:
    print('For debugging: things not found.')
    time.sleep(5)
    things = queries.get_things(uuid_values)
    print(f'for debugging: {str(things)}')

return things
queries.py
def get_things(uuid_values):
    # Builds a string of the form 'UUID_TO_BIN(%s),UUID_TO_BIN(%s)' for use in the query below
    format_string = ','.join(['UUID_TO_BIN(%s)'] * len(uuid_values))
    tuple_of_keys = tuple(str(key) for key in uuid_values)
    with db_conn.get_cursor() as cursor:
        # Lightly simplified query
        cursor.execute('''
            SELECT ...
            FROM table1 t1
            JOIN table2 t2 ON t1.id = t2.t1_id
            WHERE
                t1.uuid_value IN ({format_string})
                AND t2.status_id = 1
            '''.format(format_string=format_string),
            tuple_of_keys)
        results = cursor.fetchall()
    db_conn.conn.commit()
    return results
This outputs
'For debugging: things not found.'
'<thing list>'
meaning the row is not found immediately, but is found after a brief delay. I'd rather not leave this delay in when I ship to production. I'm not doing anything with transactions or isolation levels, so it's very strange to me that this second query would not find the newly inserted row. Any idea what might be causing this?
I'm writing a module for a program that needs to listen for new entries in a db, and execute a function on the event of new rows being posted to this table... aka a trigger.
I have written some code, but it does not work. Here's my logic:
connect to the db, query for the newest row, and compare that row with a variable; if they're not equal, run the function and store the newest row in the variable, else close. Run this every 2 seconds to compare the newest row with whatever is stored in the variable/object.
Everything runs fine and pulls the expected results from the db; however, I'm getting a "local variable 'last_sent' referenced before assignment" error.
This confuses me for two reasons:
1. I thought I set last_sent to '' as a global variable/object before the functions are called.
2. For my comparison logic to work, I can't set last_sent within the sendListener() function before the if/else.
Here's the code.
from Logger import Logger
from sendSMS import sendSMS
from Needles import dbUser, dbHost, dbPassword, pull_stmt
import pyodbc
import time

# set last_sent to something
last_sent = ''

def sendListener():
    # connect to db
    cnxn = pyodbc.connect('UID='+dbUser+';PWD='+dbPassword+';DSN='+dbHost)
    cursor = cnxn.cursor()
    # run query to pull newest row
    cursor.execute(pull_stmt)
    results = cursor.fetchone()
    # if query results differ from results stored in last_sent, run function,
    # then set last_sent object to the query results for next comparison.
    if results != last_sent:
        sendSMS()
        last_sent = results
    else:
        cnxn.close()

# a loop to run the check every 2 seconds - to lessen cpu usage
def sleepLoop():
    while 0 == 0:
        sendListener()
        time.sleep(2.0)

sleepLoop()
I'm sure there is a better way to implement this.
Here:
if results != last_sent:
    sendSMS()
    last_sent = results
else:
    cnxn.close()
Python sees that you assign to last_sent inside this function and it isn't marked as global, so it treats it as a local variable. But you read it in results != last_sent before that assignment, so you get the "referenced before assignment" error.
To solve this, mark it as global at the beginning of the function:
def sendListener():
    global last_sent
    ...
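Put together, a sketch of the corrected function, keeping your existing imports and sleepLoop unchanged (this version also closes the connection on every call rather than only in the else branch):

def sendListener():
    global last_sent  # use the module-level variable instead of a new local one
    cnxn = pyodbc.connect('UID='+dbUser+';PWD='+dbPassword+';DSN='+dbHost)
    cursor = cnxn.cursor()
    cursor.execute(pull_stmt)
    results = cursor.fetchone()
    if results != last_sent:
        sendSMS()
        last_sent = results
    cnxn.close()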
This is the reproduced sample:
import mysql.connector

conn = mysql.connector.connect(
    user='root', password='12347',
    host='localhost')

def getCursor():
    return conn.cursor()

def execQuery(cursor=getCursor()):
    cursor.execute("SELECT 2")
    cursor.fetchall()
    cursor.close()

for i in range(4):
    cursor = execQuery()
This code works if I leave out cursor.close(). But what I find weird is that it also works with cursor.close() after a simple change:
def execQuery():
    cursor = getCursor()
    cursor.execute("SELECT 2")
    cursor.fetchall()
    cursor.close()
That is, by moving the cursor creation from the default parameter into the body of the function.
I don't know whether it's best practice to close the cursor; if it isn't, I can skip closing it and keep the first form. If having a default parameter that uses the return value of a function is not best practice, I can go with the second form. But I want to know why they act differently.
It feels like I'm getting the same error as in the following:
cursor.execute("SELECT 2")
cursor.fetchall()
cursor.close()
cursor.execute("SELECT 2")
It's as if every call of execQuery is using the same cursor, so it fails right at the second call.
To talk to a database you need a cursor object; it is what you use to execute queries and fetch results.
In the sample program, the loop for i in range(4) calls execQuery(). Looking at the definition, def execQuery(cursor=getCursor()):, the default value getCursor() is evaluated exactly once, when the function is defined, not on every call. Every call that relies on the default therefore receives the same cursor object.
You close that cursor at the end of the first call and never create a new one, so when the second call tries to execute a query there is no usable cursor left and the program throws an error. Moving cursor = getCursor() into the body of the function creates a fresh cursor on every call, which is why the second form works even with cursor.close().
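You can see this evaluated-once behaviour without a database at all. A small, hypothetical example (make_resource and use are illustration names, not part of your code):

def make_resource():
    print("building resource")
    return object()

def use(resource=make_resource()):  # default is evaluated once, when `def use` runs
    return id(resource)

# "building resource" has already been printed exactly once by this point.
ids = {use() for _ in range(4)}
print(len(ids))  # 1 -- every call received the very same default object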
I'm attempting to run some MySQL queries and output the results in my Python program. I've created this function, which is called with the cursor passed in. However, I'm running into a problem: running the code below always returns None / nothing.
Here is what I have:
def showInformation(cursor):
    number_rows = 'SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = "DB"'
    testing = cursor.execute(number_rows)
    print(testing)
When using the cursor object itself, I do not run into any problems:
for row in cursor:
    print(row)
I guess you need:
print(cursor.fetchone())
because the query returns only a count, so you expect exactly one row.
According to the MySQL documentation, calling execute is not supposed to return anything unless multi=True is specified. To get the results, you either iterate the cursor as you did, or call fetchone to retrieve one row, fetchall to retrieve all rows, or fetchmany to retrieve a batch of rows.
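Putting that together, a possible corrected sketch of showInformation, assuming the same cursor and query as above:

def showInformation(cursor):
    number_rows = 'SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = "DB"'
    cursor.execute(number_rows)   # execute() itself returns None here
    (count,) = cursor.fetchone()  # COUNT(*) yields exactly one row with one column
    print(count)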
I'm writing a Python CGI script that will query a MySQL database. I'm using the MySQLdb module. Since the database will be queried repeatedly, I wrote this function...
def getDatabaseResult(sqlQuery, connectioninfohere):
    # connect to the database
    vDatabase = MySQLdb.connect(connectioninfohere)
    # create a cursor, execute an SQL statement and get the result as a tuple
    cursor = vDatabase.cursor()
    try:
        cursor.execute(sqlQuery)
    except:
        cursor.close()
        return None
    result = cursor.fetchall()
    cursor.close()
    return result
My question is: is this best practice, or should I reuse my cursor within my functions? For example, which is better...
def callsANewCursorAndConnectionEachTime():
    result1 = getDatabaseResult(someQuery1)
    result2 = getDatabaseResult(someQuery2)
    result3 = getDatabaseResult(someQuery3)
    result4 = getDatabaseResult(someQuery4)
or do away with the getDatabaseResult function altogether and do something like...
def reusesTheSameCursor():
    vDatabase = MySQLdb.connect(connectionInfohere)
    cursor = vDatabase.cursor()
    cursor.execute(someQuery1)
    result1 = cursor.fetchall()
    cursor.execute(someQuery2)
    result2 = cursor.fetchall()
    cursor.execute(someQuery3)
    result3 = cursor.fetchall()
    cursor.execute(someQuery4)
    result4 = cursor.fetchall()
The MySQLdb developer recommends building an application-specific API that does the DB access for you, so that you don't have to worry about MySQL query strings in the application code. It also makes the code a bit more extensible (link).
As for cursors, my understanding is that it's best to create one cursor per operation/transaction. So a check value -> update value -> read value type of transaction could use the same cursor, but for the next transaction you would create a new one. This again points in the direction of building an internal API for DB access instead of having a generic executeSql method.
Also remember to close your cursors, and commit changes to the connection after the queries are done.
Your getDatabaseResult function doesn't need to open a new connection for every separate query, though. You can share the connection between queries as long as you handle the cursors responsibly.
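For illustration, a rough sketch of that middle ground, with one shared connection but a fresh cursor per query (connect_info and the SELECT statements are placeholders, not your real queries):

import MySQLdb

def getDatabaseResult(connection, sqlQuery):
    # one cursor per query, closed as soon as the rows are fetched
    cursor = connection.cursor()
    try:
        cursor.execute(sqlQuery)
        return cursor.fetchall()
    finally:
        cursor.close()

def sharesTheConnection(connect_info):
    vDatabase = MySQLdb.connect(**connect_info)  # connect_info: dict of connection parameters
    try:
        result1 = getDatabaseResult(vDatabase, "SELECT 1")
        result2 = getDatabaseResult(vDatabase, "SELECT 2")
        vDatabase.commit()  # commit on the connection once the queries are done
        return result1, result2
    finally:
        vDatabase.close()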