How to pass a MySQL local variable to Python?

I would like to pass a local variable from MySQL to Python using the MySQLdb module.
I know there are a lot of answers about the other way around (from python to mysql), and I am not looking for that.
I know also how to get any data from a table using
SELECT data_name
FROM table_name
The resulting rows can then be fetched to get the result.
However, running a SELECT @my_variable from Python returns an empty result.
LOCK TABLES `usedID` WRITE;
/* ... do some stuff here ... */
SELECT `lastaugID` FROM `usedID` INTO @ID;
UNLOCK TABLES;
SELECT @ID;
Executing the above snippet from mysql returns the expected result:
+-------+
| @ID   |
+-------+
| 58404 |
+-------+
1 row in set (0.00 sec)
But the value passed back to Python is an empty list.
Here is the Python snippet making the SQL query:
def write_sql(self, sqlreq, sqlreqdict):
    if self.sockname:
        con = MySQLdb.Connect( unix_socket=self.sockname, user=self.sqluser,
                               passwd=self.sqlpasswd, db=self.sqldb,
                               cursorclass=MySQLdb.cursors.DictCursor );
    else:
        con = MySQLdb.Connect( host=self.sqlhost, user=self.sqluser,
                               passwd=self.sqlpasswd, db=self.sqldb,
                               cursorclass=MySQLdb.cursors.DictCursor );
    cursor = con.cursor();
    try:
        cursor.execute( sqlreq, sqlreqdict );
    except MySQLdb.IntegrityError, val:
        html.print_errmsg_exit( "Database integrity error: " + str(val) );
    res = cursor.fetchall();
    del cursor;
    con.commit();
    con.close();
    # Re-open the read-only cursors of the DB, to be able to process
    # a request to read the just-created file:
    self.init_db();
    return res
PS: Why not just read from the table directly? I need to lock the table and read the value before it is unlocked. If I read after unlocking, I might get a value that has been changed by another session.
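One thing worth noting is that MySQL user variables such as @ID are session-scoped, so they are only visible on the connection that set them. Below is a minimal sketch of issuing the whole sequence on a single connection and cursor (the connection parameters are placeholders, and this is not the original write_sql helper):

import MySQLdb

con = MySQLdb.Connect(host="localhost", user="user", passwd="secret", db="mydb")
cursor = con.cursor()

# All statements run in the same session, so @ID is still set at the end.
cursor.execute("LOCK TABLES `usedID` WRITE")
cursor.execute("SELECT `lastaugID` FROM `usedID` INTO @ID")
cursor.execute("UNLOCK TABLES")

cursor.execute("SELECT @ID")
print cursor.fetchone()[0]   # e.g. 58404

con.close()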

Related

Is there a proper way to handle cursors returned from a postgresql function in psycopg?

I'm trying to make friends with PostgreSQL (14.0 build 1914 64-bit), psycopg2 (2.9.1 installed using pip) and Python 3.8.10, all on Windows.
I have created a PostgreSQL function in a database that returns a cursor, something like below
CREATE get_rows
...
RETURNS refcursor
...
DECLARE
    res1 refcursor;
BEGIN
    OPEN res1 FOR
        SELECT some_field, and_another_field FROM some_table;
    RETURN res1;
END
The function can be run from the pgAdmin4 Query tool
SELECT get_rows();
and will then return a cursor like "<unnamed portal 1>"
Still within query tool in pgAdmin4 I can issue:
BEGIN;
SELECT get_rows();
FETCH ALL IN "<unnamed portal 2>"; -- Adjust counter number
And this will get me the rows returned by the cursor.
Now I want to replicate this using psycopg instead of pgAdmin4
I have the below code
conn = psycopg2.connect("dbname='" + db_name + "' "\
                        "user='" + db_user + "' " +\
                        "host='" + db_host + "' "+\
                        "password='" + db_passwd + "'")
cursor = conn.cursor()
cursor.callproc('get_rows')
print("cursor.description: ", end = '')
print(cursor.description)
for record in cursor:
    print("record: ", end = '')
    print (record)
The above code only gives the cursor's string name (as returned by the PostgreSQL function 'get_rows') in the single record of the cursor created by psycopg.
How can I get a cursor-class object from psycopg that provides access to the cursor returned by 'get_rows'?
https://www.psycopg.org/docs/cursor.html says cursor.name is read-only and I don't see an obvious way to connect the cursor from 'get_rows' with a psycopg cursor-instance.
The cursor link you show refers to the Python DB-API cursor, not the Postgres one. There is an example of how to do what you want in the Server side cursor section of the docs:
Note It is also possible to use a named cursor to consume a cursor created in some other way than using the DECLARE executed by execute(). For example, you may have a PL/pgSQL function returning a cursor:
CREATE FUNCTION reffunc(refcursor) RETURNS refcursor AS $$
BEGIN
    OPEN $1 FOR SELECT col FROM test;
    RETURN $1;
END;
$$ LANGUAGE plpgsql;
You can read the cursor content by calling the function with a regular, non-named, Psycopg cursor:
cur1 = conn.cursor()
cur1.callproc('reffunc', ['curname'])
and then use a named cursor in the same transaction to “steal the cursor”:
cur2 = conn.cursor('curname')
for record in cur2:  # or cur2.fetchone, fetchmany...
    # do something with record
    pass
UPDATE
Be sure to close the named cursor (cur2) to release the server-side cursor. So:
cur2.close()
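Putting the two snippets together, a minimal end-to-end sketch (assuming the reffunc function above already exists in the database and conn is the psycopg2 connection opened as in the question):

# A regular cursor calls the function, passing the portal name it should use.
cur1 = conn.cursor()
cur1.callproc('reffunc', ['curname'])

# A named cursor with the same name, in the same transaction, reads the rows.
cur2 = conn.cursor('curname')
for record in cur2:
    print(record)

cur2.close()   # releases the server-side cursor
conn.commit()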

psycopg2 Insert into fails

I have some Python code (psycopg2) which should insert data into a database:
def debug(self):
    try:
        self.connection.execute(
            "SELECT test();")
        res = self.connection.fetchall()
        print(res)
    except Exception as e:
        print(e)
        return
The test() function in pgsql is this:
CREATE OR REPLACE FUNCTION test(
) RETURNS setof varchar
AS $Body$
BEGIN
    INSERT INTO Linie(name) VALUES('3');
    RETURN QUERY(SELECT * FROM linie);
END;
$Body$ LANGUAGE plpgsql;
When I change the "name" value and execute the query in pgAdmin, there is a new entry in the database. However, when calling the function from Python it always overrides the value.
The table is defined as follows:
CREATE TABLE Linie(
    name varchar,
    PRIMARY KEY (name)
);
For example, with pgAdmin I can insert 1, 2, 3, 4, 5.
With Python, after running 5 equivalent queries, only 5 is there.
Calling the test function with Node.js works fine.
When calling the function once from Python, then changing the insert value and calling it from Python again, the values are not replaced but inserted.
Also, it does not throw any errors and returns the table as it should (except for the replaced value).
Why is this happening and what can I do about it?
Psycopg2 by default will not commit changes made to the database unless you explicitly call connection.commit() after executing your SQL. You could alter your code like so:
def debug(self):
    try:
        self.connection.execute(
            "SELECT test();")
        res = self.connection.fetchall()
        self.connection.commit()
        print(res)
    except Exception as e:
        print(e)
        return
However, please be careful doing this as I have no information on what exactly self.connection is an instance of, therefore I have assumed it to be of type connection :)
Alternatively, when you setup your connection to the DB, set the property autocommit to True, as documented here. Example:
self.connection = psycopg2.connect(user='foo', password='bar', host='localhost', dbname='mydb')
self.connection.autocommit = True
If you are already using autocommit let me know and I'll have another look at your question.
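As a side note, if self.connection really is a psycopg2 connection object, using it as a context manager is another way to get the commit: the transaction is committed when the block exits without an exception. A sketch under that assumption:

def debug(self):
    # Commits on success, rolls back if the block raises.
    with self.connection:
        with self.connection.cursor() as cursor:
            cursor.execute("SELECT test();")
            return cursor.fetchall()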

python sqlite - deleting selected records

I'm trying to use sqlite3 in python to delete a selection of rows from a table. My attempt fails, but I can't work out why.
The sql query works ok, but I can't implement it within the python code.
I have a set of records that are moved from current_table to archive_table after a period of time.
I'd like to clean up the current_table by removing those rows that are in the archive_table (matched on id).
Intended SQL query:
DELETE FROM current_table WHERE id IN ( SELECT id FROM archive_table);
Attempted python code:
import sqlite3

def clean_up(current_table, archive_table):
    db = sqlite3.connect(sqlite_db)
    cursor = db.cursor()
    sql_query_delete = '''DELETE FROM %s WHERE id IN ( SELECT id FROM %s);''' % (current_table, archive_table)
    try:
        cursor.execute(sql_query_delete)
        db.commit()
        db.close()
    except:
        print("error deleting")
Now working. The database file was locked by another process. Removing the pointless try/except led me to the detailed error message.
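For reference, a sketch of the cleaned-up version without the bare except, using the connection as a context manager so the delete is committed on success and the real error surfaces on failure (here sqlite_db is passed in as a parameter rather than taken from an outer scope):

import sqlite3

def clean_up(sqlite_db, current_table, archive_table):
    # Table names cannot be bound as parameters, so only pass trusted names here.
    with sqlite3.connect(sqlite_db) as db:
        db.execute('''DELETE FROM %s WHERE id IN ( SELECT id FROM %s);'''
                   % (current_table, archive_table))
    db.close()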

using python 2.7 to query sqlite3 database and getting "sqlite3 operational error no such table"

My simple test code is listed below. I created the table already and can query it using the SQLite Manager add-in on Firefox, so I know the table and data exist. When I run the query in Python (and using the Python shell) I get the "no such table" error.
def TroyTest(self, acctno):
    conn = sqlite3.connect('TroyData.db')
    curs = conn.cursor()
    v1 = curs.execute('''
        SELECT acctvalue
        FROM balancedata
        WHERE acctno = ? ''', acctno)
    print v1
    conn.close()
When you pass SQLite a non-existing path, it'll happily open a new database for you, instead of telling you that the file did not exist before. When you do that, it'll be empty and you'll instead get a "No such table" error.
You are using a relative path to the database, meaning it'll try to open the database in the current directory, and that is probably not where you think it is.
The remedy is to use an absolute path instead:
conn = sqlite3.connect('/full/path/to/TroyData.db')
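If the database file is meant to live next to the script, one way to build that absolute path is from __file__; a sketch assuming that layout:

import os
import sqlite3

# Resolve the path relative to this script rather than the current working directory.
db_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'TroyData.db')
conn = sqlite3.connect(db_path)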
You need to loop over the cursor to see results:
curs.execute('''
    SELECT acctvalue
    FROM balancedata
    WHERE acctno = ? ''', (acctno,))  # parameters are passed as a sequence, e.g. a 1-tuple
for row in curs:
    print row[0]
or call fetchone():
print curs.fetchone() # prints whole row tuple
The problem is the SQL statement. You must specify the db name and, after it, the table name...
'''SELECT * FROM db_name.table_name WHERE acctno = ? '''
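Note that in SQLite a qualified db_name.table_name only resolves when that database has been attached to the connection (the default database is called main); a brief sketch with a hypothetical second database file:

import sqlite3

conn = sqlite3.connect('/full/path/to/TroyData.db')   # opened as "main"
conn.execute("ATTACH DATABASE '/full/path/to/other.db' AS otherdb")
curs = conn.execute("SELECT acctvalue FROM main.balancedata WHERE acctno = ?", ('12345',))
print curs.fetchone()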

DB-API with Python

I'm trying to insert some data into a local MySQL database by using MySQL Connector/Python -- apparently the only way to integrate MySQL into Python 3 without breaking out the C Compiler.
I tried all the examples that come with the package; those that execute can enter data just fine. Unfortunately my attempts to write anything into my tables fail.
Here is my code:
import mysql.connector

def main(config):
    db = mysql.connector.Connect(**config)
    cursor = db.cursor()
    stmt_drop = "DROP TABLE IF EXISTS urls"
    cursor.execute(stmt_drop)
    stmt_create = """
        CREATE TABLE urls (
            id TINYINT UNSIGNED NOT NULL AUTO_INCREMENT,
            str VARCHAR(50) DEFAULT '' NOT NULL,
            PRIMARY KEY (id)
        ) CHARACTER SET 'utf8'"""
    cursor.execute(stmt_create)
    cursor.execute("""
        INSERT INTO urls (str)
        VALUES
            ('reptile'),
            ('amphibian'),
            ('fish'),
            ('mammal')
        """)
    print("Number of rows inserted: %d" % cursor.rowcount)
    db.close()

if __name__ == '__main__':
    import config
    config = config.Config.dbinfo().copy()
    main(config)
OUTPUT:
Number of rows inserted: 4
I based my code strictly on what was given to me in the examples and can't, for the life of me, figure out what the problem is. What am I doing wrong here?
Fetching table data with the script works just fine so I am not worried about the configuration files. I'm root on the database so rights shouldn't be a problem either.
You need to add a db.commit() to commit your changes before you call db.close()!
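A sketch of where the commit goes, keeping the rest of main() from the question as-is:

import mysql.connector

def main(config):
    db = mysql.connector.Connect(**config)
    cursor = db.cursor()
    # ... DROP TABLE / CREATE TABLE / INSERT statements exactly as in the question ...
    print("Number of rows inserted: %d" % cursor.rowcount)
    db.commit()   # without this, closing the connection discards the INSERT
    db.close()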
