I'm facing an odd problem where result = cursor.fetchone() returns None even though there is data in the DB.
Please let me explain why this is odd. I created a single connection:
connection = psycopg2.connect(
    user=environ["DB_USER"],
    password=environ["DB_PASS"],
    host=environ["DB_HOST"],
    port=environ["DB_PORT"],
    database=environ["DB_NAME"],
)
Then I pass that connection to a function; the function creates a cursor, runs some queries, closes the cursor, and returns the result. At this point the connection works and the cursor works. Then I pass the same connection to another function, and this is where I have the problem:
def check_dependency(connection, uuid):
    cursor = connection.cursor()
    tablename = environ["PIPILE_NAME"]
    sql_str = f"SELECT * FROM {tablename} "
    sql_str += "WHERE uuid = %s "
    sql_str += "AND NOT decrypt_status;"
    cursor.execute(sql_str, (uuid,))
    result = cursor.fetchone()
    cursor.close()
    print(sql_str, uuid, result)  # <-- this output
    return result
I copied the output, ran it directly in PostgreSQL, and it returned the row that I expected, but the function returns None.
I believe the problem might be in the connection or the cursor, but I don't know how to verify that they are OK.
You are unsure whether the connection is still working properly.
To debug this, create a connection and then immediately call the check_dependency function twice.
If the second call fails, we may be able to blame it on the connection;
but if the first call fails, then you'll want to revise the function.
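A minimal harness along those lines might look like this (it reuses the environment variables from the question; "some-uuid" is a placeholder for a uuid you know is in the table):

from os import environ

import psycopg2

connection = psycopg2.connect(
    user=environ["DB_USER"],
    password=environ["DB_PASS"],
    host=environ["DB_HOST"],
    port=environ["DB_PORT"],
    database=environ["DB_NAME"],
)

uuid = "some-uuid"  # placeholder: a uuid known to exist with decrypt_status = false
first = check_dependency(connection, uuid)   # if this fails, suspect the function
second = check_dependency(connection, uuid)  # if only this fails, suspect the connection
print(first, second)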
I am inserting JSON data into a MySQL database. I am parsing the JSON and then inserting it into a MySQL db using the Python connector.
Through trial, I can see the error is associated with this piece of code:
for steps in result['routes'][0]['legs'][0]['steps']:
    query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s')
    if steps['travel_mode'] == "pub_tran":
        travel_mode = steps['travel_mode']
        Orig_lat = steps['var_1']['dep']['lat']
        Orig_lng = steps['var_1']['dep']['lng']
        Dest_lat = steps['var_1']['arr']['lat']
        Dest_lng = steps['var_1']['arr']['lng']
        time_stamp = leg['_sent_time_stamp']
    if steps['travel_mode'] == "a_pied":
        query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s')
        travel_mode = steps['travel_mode']
        Orig_lat = steps['var_2']['lat']
        Orig_lng = steps['var_2']['lng']
        Dest_lat = steps['var_2']['lat']
        Dest_lng = steps['var_2']['lng']
        time_stamp = leg['_sent_time_stamp']
    cursor.execute(query, (travel_mode, Orig_lat, Orig_lng, Dest_lat, Dest_lng, time_stamp))
    leg_no = cursor.fetchone()[0]
    print(leg_no)
I have inserted the higher-level details and am now searching the database to associate this lower-level information with its parent. The only way to find this unique value is to search via the origin and destination coordinates together with the time_stamp. I believe the logic is sound, and by printing the leg_no immediately after this section I can see values which appear, at first inspection, to be correct.
However, when added to the rest of the code, it causes subsequent sections, where more data is inserted using the cursor, to fail with this error -
raise errors.InternalError("Unread result found.")
mysql.connector.errors.InternalError: Unread result found.
The issue seems similar to MySQL Unread Result with Python
Is the query too complex and in need of splitting, or is there another issue?
If the query is indeed too complex, can anyone advise how best to split this?
EDIT As per @Gord's help, I've tried to dump any unread results:
cursor.execute(query, (leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng))
leg_no = cursor.fetchone()[0]
try:
    cursor.fetchall()
except mysql.connector.errors.InterfaceError as ie:
    if ie.msg == 'No result set to fetch from.':
        pass
    else:
        raise
cursor.execute(query, (leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng, time_stamp))
But I still get
raise errors.InternalError("Unread result found.")
mysql.connector.errors.InternalError: Unread result found.
[Finished in 3.3s with exit code 1]
scratches head
EDIT 2 - when I print the ie.msg, I get -
No result set to fetch from
All that was required was for buffered to be set to true!
cursor = cnx.cursor(buffered=True)
The reason is that without a buffered cursor the results are "lazily" loaded, meaning that fetchone actually fetches only one row from the full result set of the query. When you use the same cursor again, it will complain that you still have n-1 results (where n is the size of the result set) waiting to be fetched. However, when you use a buffered cursor the connector fetches ALL rows behind the scenes and you just take one from it, so the MySQL db won't complain.
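As a minimal sketch, applying this to the question's leg_data table (connection parameters are placeholders, and the table is assumed to be non-empty):

import mysql.connector

cnx = mysql.connector.connect(
    host="127.0.0.1", user="root", password="whatever", database="mydb")
crsr = cnx.cursor(buffered=True)  # fetches all rows behind the scenes

crsr.execute("SELECT leg_no FROM leg_data")
print(crsr.fetchone()[0])         # take just the first row

crsr.execute("SELECT leg_no FROM leg_data")  # OK: no unread rows left behind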
I was able to recreate your issue. MySQL Connector/Python apparently doesn't like it if you retrieve multiple rows and don't fetch them all before closing the cursor or using it to retrieve some other stuff. For example
import mysql.connector

cnxn = mysql.connector.connect(
    host='127.0.0.1',
    user='root',
    password='whatever',
    database='mydb')
crsr = cnxn.cursor()
crsr.execute("DROP TABLE IF EXISTS pytest")
crsr.execute("""
    CREATE TABLE pytest (
        id INT(11) NOT NULL AUTO_INCREMENT,
        firstname VARCHAR(20),
        PRIMARY KEY (id)
    )
    """)
crsr.execute("INSERT INTO pytest (firstname) VALUES ('Gord')")
crsr.execute("INSERT INTO pytest (firstname) VALUES ('Anne')")
cnxn.commit()
crsr.execute("SELECT firstname FROM pytest")
fname = crsr.fetchone()[0]
print(fname)
crsr.execute("SELECT firstname FROM pytest")  # InternalError: Unread result found.
If you only expect (or care about) one row then you can put a LIMIT on your query
crsr.execute("SELECT firstname FROM pytest LIMIT 0, 1")
fname = crsr.fetchone()[0]
print(fname)
crsr.execute("SELECT firstname FROM pytest") # OK now
or you can use fetchall() to get rid of any unread results after you have finished working with the rows you retrieved.
crsr.execute("SELECT firstname FROM pytest")
fname = crsr.fetchone()[0]
print(fname)
try:
crsr.fetchall() # fetch (and discard) remaining rows
except mysql.connector.errors.InterfaceError as ie:
if ie.msg == 'No result set to fetch from.':
# no problem, we were just at the end of the result set
pass
else:
raise
crsr.execute("SELECT firstname FROM pytest") # OK now
cursor.reset() is really what you want.
fetchall() is not good because you may end up moving unnecessary data from the database to your client.
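If cursor.reset() behaves as described, a minimal sketch would be (connection parameters are placeholders; reset() is assumed to discard the pending result set without transferring it):

import mysql.connector

cnx = mysql.connector.connect(
    host="127.0.0.1", user="root", password="whatever", database="mydb")
crsr = cnx.cursor()

crsr.execute("SELECT leg_no FROM leg_data")
leg_no = crsr.fetchone()[0]   # read only the row we need
crsr.reset()                  # drop the unread remainder of the result set
crsr.execute("SELECT leg_no FROM leg_data")  # no "Unread result found" now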
The problem is about the buffer: maybe you disconnected from the previous MySQL connection and now it cannot perform the next statement. There are two ways to give a buffer to the cursor. First, enable it only for one particular cursor:
import mysql.connector
cnx = mysql.connector.connect()
# Only this particular cursor will buffer results
cursor = cnx.cursor(buffered=True)
Alternatively, you can enable buffering for every cursor you use:
import mysql.connector

# All cursors created from cnx2 will be buffered by default
cnx2 = mysql.connector.connect(buffered=True)
cursor = cnx2.cursor()
In case you disconnected from MySQL, the latter works for you.
Enjoy coding
If you want to get only one result from a query, and want to reuse the same connection for other queries afterwards, limit your SQL SELECT to one row by appending "LIMIT 1" to the request.
e.g. "SELECT field FROM table WHERE x = 1 LIMIT 1;"
This method is faster than using "buffered=True".
Set the consume_results argument on the connect() method to True.
cnx = mysql.connector.connect(
    host="localhost",
    user="user",
    password="password",
    database="database",
    consume_results=True
)
Now, instead of throwing an exception, it basically does fetchall().
Unfortunately it is still slow if you have a lot of unread rows.
There is also a possibility that your connection to MySQL Workbench is disconnected. Establish the connection again, call cursor.reset(), and then create the tables and load the entries. This solved the problem for me.
Would creating the cursor within the for loop, executing it, and then closing it again inside the loop help? Like:
for steps in result['routes'][0]['legs'][0]['steps']:
    cursor = cnx.cursor()
    ....
    leg_no = cursor.fetchone()[0]
    cursor.close()
    print(leg_no)
I want to insert given values from my Docker app service into the MariaDB service.
The connection has been established: I can execute SELECT * FROM via the MariaDB connection cursor.
First of all I create the connection:
def get_conn() -> mariadb.connection:
    try:
        conn = mariadb.connect(
            user="XXX",
            database="XXX",
            password="XXX",
            host="db",
            port=33030,
        )
    except mariadb.Error as e:
        print(f'Error connecting to MariaDB Platform: {e}')
        sys.exit(1)
    return conn
Then I create a mariadb.connection.cursor object:
def get_cur() -> mariadb.connection.cursor:
    conn = get_conn()
    cur = conn.cursor()
    return cur
Finally I want to insert new values into the table testing:
def write_data():
    cursor = get_cur()
    conn = get_conn()
    cursor.execute('INSERT INTO testing (title) VALUE ("2nd automatic entry");')
    print("Executed Query")
    conn.commit()
    cursor.close()
    conn.close()
    print("Closed Connection")
    return True
To test whether the entries are inserted, I started with one manual entry, then executed the write_data() function, and to finish off I inserted a second manual entry via the console.
Afterwards the table contains only the two manual entries (screenshot omitted). Note that the id is on AUTO_INCREMENT, so the function write_data() was not skipped entirely: the 2nd manual entry got the id 3 and not 2.
You're committing a transaction on a different connection than the one your cursor belongs to.
get_conn() creates a new database connection and returns it.
get_cur() calls get_conn(), which gives it another new connection, retrieves a cursor object belonging to that connection, and returns the cursor.
In your main code, you call get_conn(): that gives you connection A.
Then you obtain a cursor by calling get_cur(): that creates a connection B and returns a cursor belonging to it.
You run execute() on the cursor object (connection B) but commit the connection you got in the first call (connection A).
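A hedged sketch of the fix, reusing the question's get_conn() so the cursor and the commit share a single connection:

def write_data():
    conn = get_conn()        # one connection for everything
    cursor = conn.cursor()   # cursor taken from that same connection
    cursor.execute('INSERT INTO testing (title) VALUE ("2nd automatic entry");')
    print("Executed Query")
    conn.commit()            # commits the connection the cursor belongs to
    cursor.close()
    conn.close()
    print("Closed Connection")
    return True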
PS: This was a really fun problem to debug, thanks :)
It's really easy, in a new table with new code, to unintentionally do an INSERT without a COMMIT. That is especially true using the Python connector, which doesn't use autocommit. A dropped connection with an open transaction rolls back the transaction. And, a rolled-back INSERT does not release the autoincremented ID value for reuse.
This kind of thing happens, and it's no cause for alarm.
A wise database programmer won't rely on a set of autoincrementing IDs with no gaps in it.
The below is my way of handling database connections, but it is clumsier than I would like... So the question is whether there are other, more proper, ways to close the database connection while returning an error message to the client if a DB operation fails.
@app.route('/get-data/', methods=['GET'])
def get_data():
    db_error = False
    try:
        conn = pymysql.connect(db_url, db_username, db_password, db_name)
        cursor = conn.cursor()
        cursor.execute('SELECT a_column FROM a_table WHERE a_condition = 0')
        results = cursor.fetchall()
    except Exception as ex:
        logging.error(f'{ex}')
        db_error = True  # Cannot simply return here; otherwise the DB connection is left open
    finally:
        cursor.close()
        conn.close()
    if db_error:
        return Response('Database error', 500)
    return jsonify(results)  # Let's assume the jsonify() function will not throw an error...
Suppose I use a context manager: does it mean that both conn and cursor will definitely be closed even when an exception is thrown? Or is that implementation-dependent, i.e. some packages, say pymysql, will make sure all cursors and connections are closed regardless of whether errors are thrown, while other packages, say pyodbc, will NOT ensure this? (Here pymysql and pyodbc are just two examples, of course...)
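One driver-agnostic option is contextlib.closing(), which guarantees close() is called on exit whatever the driver's own context-manager semantics are. A sketch, assuming the same db_url, db_username, db_password, db_name and Flask setup as in the question (note the keyword arguments to pymysql.connect):

import logging
from contextlib import closing

import pymysql
from flask import Flask, Response, jsonify

app = Flask(__name__)

@app.route('/get-data/', methods=['GET'])
def get_data():
    try:
        # closing() calls .close() on exit no matter what, so neither the
        # connection nor the cursor can leak, even if execute() raises.
        with closing(pymysql.connect(host=db_url, user=db_username,
                                     password=db_password, database=db_name)) as conn:
            with closing(conn.cursor()) as cursor:
                cursor.execute('SELECT a_column FROM a_table WHERE a_condition = 0')
                results = cursor.fetchall()
    except Exception as ex:
        logging.error(f'{ex}')
        return Response('Database error', 500)
    return jsonify(results)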
Can someone please explain to me why the test().commit() defined below does not work like varcon.commit()? Everything else seems to work fine. (Using a Vagrant VirtualBox of ubuntu-trusty-32.)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import psycopg2

varcon = psycopg2.connect('dbname=tournament')

def test():
    try:
        psycopg2.connect("dbname=tournament")
    except:
        print("Connection to Tournament Database Failed")
    else:
        return psycopg2.connect('dbname=tournament')

def writer():
    #db = psycopg2.connect('dbname=tournament')
    c = varcon.cursor()
    c.execute('select * from players')
    data = c.fetchall()
    c.execute("insert into players (name) values ('Joe Smith')")
    varcon.commit()
    varcon.close
    print(data)

def writer2():
    #db = psycopg2.connect('dbname=tournament')
    c = test().cursor()
    c.execute('select * from players')
    data = c.fetchall()
    c.execute("insert into players (name) values ('Joe Smith')")
    test().commit()
    test().close
    print(data)

writer2()  # this seems not committed, but the database registers the insert (observe the serial promotion)
#writer()  # this works as expected
Maybe this is because the return statement in a function block (def) is not the same as an assignment with =.
The psycopg2 connect function returns a connection object, and this is assigned to conn or varcon:
conn = psycopg2.connect("dbname=test user=postgres password=secret")
http://initd.org/psycopg/docs/module.html
The test() function also returns a psycopg2 connection object, but in writer2 it is never assigned to a variable (a memory location), meaning that there is no reference to it. This also explains why the database connection is established (initialized in the test function) but the commit does not work (the reference is lost).
(http://www.icu-project.org/docs/papers/cpp_report/the_anatomy_of_the_assignment_operator.html)
Maybe try
ami = test()
ami.commit()
to establish a reference.
Every time you call psycopg2.connect() you open a new connection to the database.
So effectively your code executes SQL on one connection, commits a second, and then closes a third. Even within your test() function you are opening two different connections.
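A hedged rewrite of writer2() along those lines: call test() once, keep the reference, and use that one connection for the cursor, the commit, and the close (note that .close without parentheses, as in the original, does nothing):

def writer2():
    conn = test()            # one connection, one reference
    c = conn.cursor()
    c.execute('select * from players')
    data = c.fetchall()
    c.execute("insert into players (name) values ('Joe Smith')")
    conn.commit()            # commits the connection the cursor belongs to
    conn.close()
    print(data)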
I use the following pattern to access PostgreSQL:
conn = psycopg2.connect(DSN)
with conn:
    with conn.cursor() as curs:
        ...
        curs.execute(SQL1)
with conn:
    with conn.cursor() as curs:
        ...
        curs.execute(SQL2)
conn.close()
The with statement ensures that a transaction is opened and properly committed around your SQL. It also automatically rolls the transaction back if the code inside the with block raises an exception.
Reference: http://initd.org/psycopg/docs/usage.html#with-statement
I'm new to MySQL and Python.
I have code to insert data from Python into MySQL:
conn = MySQLdb.connect(host="localhost", user="root", passwd="kokoblack", db="mydb")
for i in range(0, len(allnames)):
    try:
        query = "INSERT INTO resumes (applicant, jobtitle, lastworkdate, lastupdate, url) values ("
        query = query + "'"+allnames[i]+"'," + "'"+alltitles[i]+"'," + "'"+alldates[i]+"'," + "'"+allupdates[i]+"'," + "'"+alllinks[i]+"')"
        x = conn.cursor()
        x.execute(query)
        row = x.fetchall()
    except:
        print "error"
It seems to be working fine, because "error" never appears. Instead, many rows of "1L" appear in my Python shell. However, when I go to MySQL, the resumes table in mydb remains completely empty.
I have no idea what could be wrong. Could it be that I am not connected to MySQL's server properly when I'm viewing the table? Help please.
(I only use import MySQLdb, is that enough?)
Use commit to commit the changes that you have made. MySQLdb has autocommit off by default, which may be confusing at first. You can commit like this:
conn.commit()
or turn autocommit on right after the connection to the DB is created:
conn.autocommit(True)
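For completeness, a hedged sketch of the loop with an explicit commit, switched to parameterized queries so the driver handles the quoting (allnames, alltitles, alldates, allupdates and alllinks are the lists from the question):

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="kokoblack", db="mydb")
cursor = conn.cursor()

query = ("INSERT INTO resumes (applicant, jobtitle, lastworkdate, lastupdate, url) "
         "VALUES (%s, %s, %s, %s, %s)")
# one row per applicant, built from the question's parallel lists
rows = list(zip(allnames, alltitles, alldates, allupdates, alllinks))
cursor.executemany(query, rows)

conn.commit()  # without this, the inserts are rolled back when the connection drops
cursor.close()
conn.close()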