I'm fairly sure my code is logical and makes sense, but when I run it against my test the result comes back looking unusual. I'm trying to find a session from a cookie and then use it to retrieve the user from the sessions table. I'm using the Bottle framework to test my program:
active_session = bottle.request.get_cookie(COOKIE_NAME)
cursor = db.cursor()
if active_session:
    cursor.execute("SELECT usernick FROM sessions WHERE sessionid=?", (active_session,))
    active_user = cursor.fetchone()
    return active_user
else:
    return None
The result is as follows:
self.assertEqual(nick_from_cookie, nick)
AssertionError: ('Bobalooba',) != 'Bobalooba'
I know I'm so close; could someone point me in the right direction?
if active_session:
    cursor.execute("SELECT usernick FROM sessions WHERE sessionid=?", (active_session,))
    active_user = cursor.fetchone()
    return active_user[0] if active_user else active_user
It's because you are fetching the entire row and comparing it to a single string: cursor.fetchone() returns a row tuple, so you need to grab the single field contained in the retrieved row. Try this as the input to your assertion:
return active_user[0]
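A minimal sketch of what is going on, using an in-memory sqlite3 database for illustration (the table contents and session id here are made up):

```python
import sqlite3

# Hypothetical in-memory stand-in for the sessions table
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sessions (sessionid TEXT, usernick TEXT)")
cur.execute("INSERT INTO sessions VALUES ('abc123', 'Bobalooba')")

cur.execute("SELECT usernick FROM sessions WHERE sessionid=?", ("abc123",))
row = cur.fetchone()
print(row)     # ('Bobalooba',) -- a one-element tuple, not a string
print(row[0])  # 'Bobalooba'   -- the field itself
```

Indexing with `[0]` is what turns the one-column row into the bare string the assertion expects.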
I am inserting JSON data into a MySQL database
I am parsing the JSON and then inserting it into a MySQL db using the python connector
Through trial and error, I can see the error is associated with this piece of code:
for steps in result['routes'][0]['legs'][0]['steps']:
    query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s')
    if steps['travel_mode'] == "pub_tran":
        travel_mode = steps['travel_mode']
        Orig_lat = steps['var_1']['dep']['lat']
        Orig_lng = steps['var_1']['dep']['lng']
        Dest_lat = steps['var_1']['arr']['lat']
        Dest_lng = steps['var_1']['arr']['lng']
        time_stamp = leg['_sent_time_stamp']
    if steps['travel_mode'] == "a_pied":
        query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s')
        travel_mode = steps['travel_mode']
        Orig_lat = steps['var_2']['lat']
        Orig_lng = steps['var_2']['lng']
        Dest_lat = steps['var_2']['lat']
        Dest_lng = steps['var_2']['lng']
        time_stamp = leg['_sent_time_stamp']
    cursor.execute(query, (travel_mode, Orig_lat, Orig_lng, Dest_lat, Dest_lng, time_stamp))
    leg_no = cursor.fetchone()[0]
    print(leg_no)
I have inserted the higher-level details and am now searching the database to associate this lower-level information with its parent. The only way to find this unique value is to search via the origin and destination coordinates together with the time_stamp. I believe the logic is sound, and by printing leg_no immediately after this section I can see values which appear, at first inspection, to be correct.
However, when added to the rest of the code, it causes subsequent sections, where more data is inserted using the cursor, to fail with this error:
raise errors.InternalError("Unread result found.")
mysql.connector.errors.InternalError: Unread result found.
The issue seems similar to MySQL Unread Result with Python
Is the query too complex and needs splitting or is there another issue?
If the query is indeed too complex, can anyone advise how best to split this?
EDIT: As per @Gord's help, I've tried to dump any unread results:
cursor.execute(query, (leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng))
leg_no = cursor.fetchone()[0]
try:
    cursor.fetchall()
except mysql.connector.errors.InterfaceError as ie:
    if ie.msg == 'No result set to fetch from.':
        pass
    else:
        raise
cursor.execute(query, (leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng, time_stamp))
But I still get:
raise errors.InternalError("Unread result found.")
mysql.connector.errors.InternalError: Unread result found.
[Finished in 3.3s with exit code 1]
scratches head
EDIT 2: When I print ie.msg, I get:
No result set to fetch from
All that was required was for buffered to be set to True!
cursor = cnx.cursor(buffered=True)
The reason is that without a buffered cursor, the results are loaded "lazily", meaning that fetchone() actually only fetches one row from the full result set of the query. When you use the same cursor again, it will complain that you still have n-1 results (where n is the size of the result set) waiting to be fetched. With a buffered cursor, however, the connector fetches ALL rows behind the scenes and you just take one from the connector, so the MySQL db won't complain.
I was able to recreate your issue. MySQL Connector/Python apparently doesn't like it if you retrieve multiple rows and don't fetch them all before closing the cursor or using it to retrieve some other stuff. For example:
import mysql.connector

cnxn = mysql.connector.connect(
    host='127.0.0.1',
    user='root',
    password='whatever',
    database='mydb')
crsr = cnxn.cursor()
crsr.execute("DROP TABLE IF EXISTS pytest")
crsr.execute("""
    CREATE TABLE pytest (
        id INT(11) NOT NULL AUTO_INCREMENT,
        firstname VARCHAR(20),
        PRIMARY KEY (id)
    )
""")
crsr.execute("INSERT INTO pytest (firstname) VALUES ('Gord')")
crsr.execute("INSERT INTO pytest (firstname) VALUES ('Anne')")
cnxn.commit()
crsr.execute("SELECT firstname FROM pytest")
fname = crsr.fetchone()[0]
print(fname)
crsr.execute("SELECT firstname FROM pytest") # InternalError: Unread result found.
If you only expect (or care about) one row, then you can put a LIMIT on your query:
crsr.execute("SELECT firstname FROM pytest LIMIT 0, 1")
fname = crsr.fetchone()[0]
print(fname)
crsr.execute("SELECT firstname FROM pytest") # OK now
or you can use fetchall() to get rid of any unread results after you have finished working with the rows you retrieved:
crsr.execute("SELECT firstname FROM pytest")
fname = crsr.fetchone()[0]
print(fname)
try:
    crsr.fetchall()  # fetch (and discard) remaining rows
except mysql.connector.errors.InterfaceError as ie:
    if ie.msg == 'No result set to fetch from.':
        # no problem, we were just at the end of the result set
        pass
    else:
        raise
crsr.execute("SELECT firstname FROM pytest") # OK now
cursor.reset() is really what you want here.
fetchall() is not as good because you may end up moving unnecessary data from the database to your client.
The problem is about the buffer: maybe you disconnected from the previous MySQL connection and now it cannot perform the next statement. There are two ways to give a buffer to the cursor. First, only for one particular cursor, using the following command:
import mysql.connector
cnx = mysql.connector.connect()
# Only this particular cursor will buffer results
cursor = cnx.cursor(buffered=True)
Alternatively, you could enable buffering for every cursor you use:
import mysql.connector
# All cursors created from cnx2 will be buffered by default
cnx2 = mysql.connector.connect(buffered=True)
cursor = cnx2.cursor()
In case you have disconnected from MySQL, the latter works for you. Enjoy coding!
If you want to get only one result from a request, and afterwards want to reuse the same connection for other requests, limit your SQL SELECT request to one row by adding "LIMIT 1" at the end of it, e.g. "SELECT field FROM table WHERE x=1 LIMIT 1;".
This method is also faster than using "buffered=True".
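A small illustration of the idea with sqlite3 (which doesn't raise this particular error, but shows that LIMIT 1 leaves no leftover rows before the cursor is reused; the table and names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE people (name TEXT)")
cur.executemany("INSERT INTO people VALUES (?)", [("Gord",), ("Anne",)])

# LIMIT 1 means the result set contains exactly one row...
cur.execute("SELECT name FROM people ORDER BY name LIMIT 1")
name = cur.fetchone()[0]

# ...so nothing is left unread before the next execute()
assert cur.fetchall() == []
cur.execute("SELECT name FROM people")  # safe to reuse the cursor
```

The same pattern applies to the MySQL query in the question: a LIMIT 1 result set is fully consumed by the single fetchone().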
Set the consume_results argument on the connect() method to True.
cnx = mysql.connector.connect(
    host="localhost",
    user="user",
    password="password",
    database="database",
    consume_results=True
)
Now, instead of throwing an exception, it basically does a fetchall(). Unfortunately this still makes it slow if you have a lot of unread rows.
There is also a possibility that your connection to MySQL Workbench has been disconnected. Establish the connection again, call cursor.reset(), and then create your tables and load your entries. This solved the problem for me.
Would setting up the cursor within the for loop, executing it, and then closing it again in the loop help? Like:
for steps in result['routes'][0]['legs'][0]['steps']:
    cursor = cnx.cursor()
    ....
    leg_no = cursor.fetchone()[0]
    cursor.close()
    print(leg_no)
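As a rough sketch of that pattern, with sqlite3 standing in for the MySQL connection (the table contents are made up), opening and closing a fresh cursor per iteration ensures each result set is fully dealt with before the next execute:

```python
import sqlite3

cnx = sqlite3.connect(":memory:")
setup = cnx.cursor()
setup.execute("CREATE TABLE leg_data (leg_no INTEGER, travel_mode TEXT)")
setup.executemany("INSERT INTO leg_data VALUES (?, ?)",
                  [(1, "pub_tran"), (2, "a_pied")])
setup.close()

leg_nos = []
for mode in ("pub_tran", "a_pied"):
    cursor = cnx.cursor()  # fresh cursor for this iteration
    cursor.execute("SELECT leg_no FROM leg_data WHERE travel_mode = ?", (mode,))
    leg_nos.append(cursor.fetchone()[0])
    cursor.close()         # closed before the next iteration
print(leg_nos)  # [1, 2]
```

With MySQL Connector/Python, however, closing an unbuffered cursor that still has unread rows can itself raise the same error, so the buffered/LIMIT/fetchall approaches above are more robust.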
I'm facing an odd problem where result = cursor.fetchone() returns None when there is data in the DB.
Please let me explain why it is odd. I created a single connection:
connection = psycopg2.connect(
    user=environ["DB_USER"],
    password=environ["DB_PASS"],
    host=environ["DB_HOST"],
    port=environ["DB_PORT"],
    database=environ["DB_NAME"],
)
Then I pass that connection to a function; this function creates a cursor, does some queries, closes the cursor, and returns the result. At this point the connection works and the cursor works. Then I pass the same connection to another function, and this is where I have the problem:
def check_dependency(connection, uuid):
    cursor = connection.cursor()
    tablename = environ["PIPILE_NAME"]
    sql_str = f"SELECT * FROM {tablename} "
    sql_str += "WHERE uuid = %s "
    sql_str += "AND NOT decrypt_status;"
    cursor.execute(sql_str, (uuid,))
    result = cursor.fetchone()
    cursor.close()
    print(sql_str, uuid, result)  # <-- this output
    return result
I copied the output, ran it in PostgreSQL directly, and it returned the row I expected, but the function returns None.
I believe the problem might be in the connection or the cursor, but I don't know how to make sure they are OK.
You are unsure whether the connection is still working properly. To debug this, create a connection and then immediately call the check_dependency function twice. If the 2nd call fails, we may be able to blame it on the connection; but if the 1st call fails, then you'll want to revise the function.
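The two calls above can be sketched with sqlite3 standing in for psycopg2 (the table and column names are made up to mirror the question):

```python
import sqlite3

def check_dependency(connection, uid):
    # Same shape as the function in question: cursor, query, fetchone, close
    cursor = connection.cursor()
    cursor.execute(
        "SELECT * FROM pipeline WHERE uuid = ? AND NOT decrypt_status",
        (uid,))
    result = cursor.fetchone()
    cursor.close()
    return result

connection = sqlite3.connect(":memory:")
cur = connection.cursor()
cur.execute("CREATE TABLE pipeline (uuid TEXT, decrypt_status INTEGER)")
cur.execute("INSERT INTO pipeline VALUES ('abc', 0)")
cur.close()

# Call twice on a fresh connection, as suggested above
first = check_dependency(connection, "abc")
second = check_dependency(connection, "abc")
# If only the second call fails, suspect the connection's state between calls;
# if the first fails too, the function (or its query) is the problem.
print(first, second)
```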
I am dealing with a SQL Server database, where I have a table named 'table1' containing one column and one row:
exp_num
0
I am trying to update the exp_num column's value of 0 to +1 and also return both the old and the updated experiment numbers.
For this I am using a DECLARE statement:
DECLARE @UpdateOutput1 table (Oldexp_num int, Newexp_num int);
UPDATE get_exp_num
SET exp_num = exp_num + 1
OUTPUT
    DELETED.exp_num,
    INSERTED.exp_num
    INTO @UpdateOutput1;
SELECT * FROM @UpdateOutput1;
When I'm running this in a SQL editor, I am getting the results:
Oldexp_num Newexp_num
0 1
But if I make this same statement into a query and try to use the pyodbc package, I am getting an error:
import pyodbc
connection = pyodbc.connect()  # I am getting a connection
query = "DECLARE @UpdateOutput1 table (Oldexp_num int, Newexp_num int); UPDATE get_exp_num SET exp_num = exp_num+1 OUTPUT DELETED.exp_num, INSERTED.exp_num INTO @UpdateOutput1; SELECT Newexp_num FROM @UpdateOutput1;"
cursor = connection.cursor()
cursor.execute(query)
cursor.fetchone()
When I'm doing cursor.fetchone(), I am getting the following error:
File "<ipython-input-1398-bdaba305080c>", line 1, in <module>
cursor.fetchone()
ProgrammingError: No results. Previous SQL was not a query.
Is there an error in the pyodbc package, or in my query?
The problem was solved by adding SET NOCOUNT ON; to the beginning of the anonymous code block. That statement suppresses the record-count values generated by DML statements like UPDATE and allows the result set to be retrieved directly.
Whenever the server generates informative messages, this scenario may occur. The thing is that pyodbc is not ready to handle multiple result sets at the same time as it is receiving "messages" from the server. By setting NOCOUNT ON/OFF you may get rid of just one kind of these "messages"; the server could also yield warnings, or some procedure may PRINT something, and those would "break" the SQL, provoking the same error.
So a more general solution would be to iterate over the result sets while also checking whether there are more sets to retrieve and inspecting whether the server has sent any messages in between. For instance:
def process_query(self, query):
    try:
        self.cursor.execute(query)
        rowlist = []
        rows = self.__extract_resultset()
        while rows or self.__has_next():
            if rows:
                rowlist.append(rows)
            rows = self.__extract_resultset()
        self.cursor.commit()
        return rowlist
    except pyodbc.ProgrammingError as e:
        raise CustomException()
    except Exception as e:
        raise CustomException()

def __has_next(self):
    try:
        has_next = self.cursor.nextset()
        if self.cursor.messages:
            print(f'Info Message: {self.cursor.messages}', 'info')
    except pyodbc.ProgrammingError as err:
        has_next = False
        print(f'ProgrammingError: {err}', 'error')
    return has_next

def __extract_resultset(self):
    data = []
    try:
        records = self.cursor.fetchall()
        headers = [x[0] for x in self.cursor.description]
        for record in records:
            data.append(dict(zip(headers, record)))
    except pyodbc.ProgrammingError as err:
        print(f'ProgrammingError: {err}', 'error')
    return data
Also, some exception handling is likely to be mandatory, since both cursor.fetchall() and cursor.nextset() are very prone to fail: we don't know beforehand when a message from the server will appear, and any time one does, the fetch* operations will fail. On the other hand, nextset() will fail (instead of just returning False) when no more result sets are available.
Hope this helps!
In my Flask web app I have a login system. When a user logs in, I want to update the datetime of that user's last login in a table of my database.
I'm using this code:
@app.route('/login', methods=['POST'])
def do_admin_login():
    POST_CODICE_FISCALE = str(request.form['codice_fiscale'])
    POST_PASSWORD = str(request.form['password'])
    pwd_enc = base64.b64encode(POST_PASSWORD)
    Session = sessionmaker(bind=engine)
    s = Session()
    query = s.query(User).filter(User.codice_fiscale.in_([POST_CODICE_FISCALE]), User.password.in_([pwd_enc]))
    result = query.first()
    if result:
        session['logged_in'] = True
        query = s.query(User).filter(User.codice_fiscale.in_([POST_CODICE_FISCALE]).update(User.data_ora_ultimo_accesso=datetime.now()))
        query.first()
        db.session.commit()
    else:
        flash('wrong password!')
    return home()
but I receive the error:
query = s.query(User).filter(User.codice_fiscale.in_([POST_CODICE_FISCALE]).update(User.
data_ora_ultimo_accesso=datetime.now()))
SyntaxError: keyword can't be an expression
What is wrong?
Thanks.
As the error says, a keyword argument in a function call cannot be an expression such as User.data_ora_ultimo_accesso; it must be an identifier. Instead, you should pass Query.update() a dictionary of column/expression pairs:
query = s.query(User).\
    filter(User.codice_fiscale.in_([POST_CODICE_FISCALE])).\
    update({User.data_ora_ultimo_accesso: datetime.now()},
           synchronize_session=False)
Note that since you commit right away, there's no need to synchronize the session, as all state will be expired anyway.
You could also make some changes that would improve your code's readability. For example, instead of
filter(User.codice_fiscale.in_([POST_CODICE_FISCALE]),
       User.password.in_([pwd_enc]))
just
filter(User.codice_fiscale == POST_CODICE_FISCALE,
       User.password == pwd_enc)
There's no point in checking whether a one-item list contains something when you can just test equality.
Finally, you create a new Session class and an instance s of it, but you commit a different session, db.session, which you should probably have been using all along. What this means is that your updates will not take place, because the session s's transaction is never actually committed.
I'm having an issue getting my Python script to update my SQLite db.
The first part seems to work fine:
conn = sqlite3.connect('/Users/test/Desktop/my-accounts.db')
currentAccount = None
for row in conn.execute('SELECT email FROM accounts WHERE active=0'):
    currentAccount = row[0]
    print "Checking out: ", currentAccount
    break
if currentAccount is None:
    print "No available accounts"
Then, in this next part, I want to take the variable currentAccount and update the row in the db where that value is:
else:
    conn.execute('UPDATE accounts SET active=1 WHERE email=?', [currentAccount])
conn.close()
I don't get any errors in the console, but the db does not update. The email column is a VARCHAR and the active column is an INT.
Thanks.
SOLUTION was to add conn.commit() after execute().
Try adding conn.commit() after conn.execute("XXX"); sqlite3 doesn't always auto-commit the execution.
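A small sketch of why commit() matters, using a throwaway on-disk database (the file name and row are made up). Closing a sqlite3 connection without committing rolls the pending change back:

```python
import os
import sqlite3
import tempfile

# A temporary on-disk database so we can reopen it between steps
path = os.path.join(tempfile.mkdtemp(), "accounts.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE accounts (email TEXT, active INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('a@example.com', 0)")
conn.commit()

# UPDATE without commit(): closing the connection rolls the change back
conn.execute("UPDATE accounts SET active=1 WHERE email=?", ("a@example.com",))
conn.close()

conn = sqlite3.connect(path)
before = conn.execute("SELECT active FROM accounts").fetchall()[0][0]

# Same UPDATE followed by commit(): the change persists
conn.execute("UPDATE accounts SET active=1 WHERE email=?", ("a@example.com",))
conn.commit()
conn.close()

after = sqlite3.connect(path).execute("SELECT active FROM accounts").fetchall()[0][0]
print(before, after)  # 0 1
```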