txpostgres stored procedure returns cursor - python

I'm trying to get a cursor from a stored procedure in txpostgres.
Psycopg2 has named cursors, which work fine for this, but there is no curs = conn.cursor('name') equivalent in txpostgres.
Is there another way to get one?

txpostgres doesn't have a named cursor feature. However, psycopg2's named cursors are just a convenience wrapper for PostgreSQL cursors, which you can DECLARE and FETCH from yourself. I don't have much experience with stored procedures, but here's an example with a simple query:
from twisted.internet import defer

@defer.inlineCallbacks
def transaction(cursor):
    yield cursor.execute('DECLARE mycursor CURSOR FOR SELECT * FROM bigtable')
    yield cursor.execute('FETCH ALL FROM mycursor')
    data = yield cursor.fetchall()
    defer.returnValue(data)

d = conn.runInteraction(transaction)
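If your stored procedure is a PL/pgSQL function that opens and returns a refcursor, the same FETCH pattern should apply. This is a hypothetical sketch (my_proc and the cursor name are assumptions); runInteraction already provides the transaction that refcursors require:

@defer.inlineCallbacks
def fetch_from_procedure(cursor):
    # Assumes a function my_proc(refcursor) that opens the named
    # cursor passed to it and returns it.
    yield cursor.execute("SELECT my_proc('mycursor')")
    yield cursor.execute('FETCH ALL FROM mycursor')
    data = yield cursor.fetchall()
    defer.returnValue(data)

d = conn.runInteraction(fetch_from_procedure)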

Related

Does creating/closing a cursor in "mysql-connector-python" do anything with MySql?

I'm using the mysql-connector-python library for a script I wrote to access a MySQL 8 database:
def get_document_by_id(conn, id):
    cursor = conn.cursor(dictionary=True)
    try:
        cursor.execute("SELECT * FROM documents WHERE id = %s", (id,))
        return cursor.fetchone()
    except Exception as e:
        print(str(e))
        return {}
    finally:
        cursor.close()
Since I need to call this function multiple times during a loop, I was wondering:
Does the act of creating/closing a cursor actually interact with my MySQL database in any way, or is it just a Python-side abstraction for grouping together SQL queries?
Thanks a lot for your help.
Creating a cursor does not itself contact the server; it is a Python-side object tied to the existing connection's session, so creating one per call is cheap. As for closing connections, the documentation states this clearly:
For a connection obtained from a connection pool, close() does not actually close it but returns it to the pool and makes it available for subsequent connection requests.
You can also refer to Connector/Python Connection Pooling for further information.
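As a minimal sketch of that pooling behaviour (the pool name, size, and credentials below are placeholder assumptions):

from mysql.connector import pooling

pool = pooling.MySQLConnectionPool(
    pool_name="docpool", pool_size=5,
    host="localhost", database="mydb", user="user", password="pass")

conn = pool.get_connection()
try:
    print(get_document_by_id(conn, 1))
finally:
    conn.close()  # returned to the pool, not actually closed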

Connect to SQLite Memory Database from separate file (Python)

The application I am building requires a single SQLite in-memory database that separate routines and threads will need to access. I am having difficulty achieving this.
I understand that:
sqlite3.connect('file:my_db?mode=memory&cache=shared', uri=True)
should create a memory database that can be modified and accessed by separate connections.
Here is my test, which returns the error:
"sqlite3.OperationalError: no such table: my_table"
The code below is saved as "test_create.py":
import sqlite3

def create_a_table():
    db = sqlite3.connect('file:my_db?mode=memory&cache=shared', uri=True)
    cursor = db.cursor()
    cursor.execute('''
        CREATE TABLE my_table(id INTEGER PRIMARY KEY, some_data TEXT)
    ''')
    db.commit()
    db.close()
The above code is imported into the code below in a separate file:
import sqlite3
import test_create

test_create.create_a_table()

db = sqlite3.connect('file:my_db')
cursor = db.cursor()
# add a row of data
cursor.execute('''INSERT INTO my_table(some_data) VALUES(?)''', ("a bit of data",))
db.commit()
The above code works fine if written in a single file. Can anyone advise how I can keep the code in separate files, which will hopefully allow me to make multiple separate connections?
Note: I don't want to save the database to disk. Thanks.
Edit: If you want to use threading, ensure you enable the following option:
check_same_thread=False
e.g.
db = sqlite3.connect('file:my_db?mode=memory&cache=shared', check_same_thread=False, uri=True)
You opened a named, in-memory database connection with shared cache. Yes, you can share the cache on that database, but only if you use the exact same name. This means you need to use the full URI scheme!
If you connect with db = sqlite3.connect('file:my_db?mode=memory&cache=shared', uri=True), any additional connection within the process can see the same table, provided the original connection is still open, and you don't mind that the table is 'private', in-memory only and not available to other processes or connections that use a different name. When the last connection to the database closes, the table is gone.
So you also need to keep the connection open in the other module for this to work!
For example, if you change the module to use a global connection object:
db = None
def create_a_table():
global db
if db is None:
db = sqlite3.connect('file:my_db?mode=memory&cache=shared', uri=True)
with db:
cursor = db.cursor()
cursor.execute('''
CREATE TABLE my_table(id INTEGER PRIMARY KEY, some_data TEXT)
''')
and then use that module, the table is there:
>>> import test_create
>>> test_create.create_a_table()
>>> import sqlite3
>>> db = sqlite3.connect('file:my_db?mode=memory&cache=shared', uri=True)
>>> with db:
... cursor = db.cursor()
... cursor.execute('''INSERT INTO my_table(some_data) VALUES(?)''', ("a bit of data",))
...
<sqlite3.Cursor object at 0x100d36650>
>>> list(db.cursor().execute('select * from my_table'))
[(1, 'a bit of data')]
Another way to achieve this is to open a database connection in the main code before calling the function; that creates a first connection to the in-memory database, and opening and closing additional connections won't cause the changes to be lost.
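A minimal sketch of that alternative, assuming the original version of test_create (the one that closes its own connection):

import sqlite3
import test_create

URI = 'file:my_db?mode=memory&cache=shared'
keeper = sqlite3.connect(URI, uri=True)  # holds the shared in-memory db open

test_create.create_a_table()  # its connection closes, but the db survives

db = sqlite3.connect(URI, uri=True)
with db:
    db.execute('INSERT INTO my_table(some_data) VALUES(?)', ("a bit of data",))
db.close()  # safe: keeper still references the database

print(list(keeper.execute('SELECT * FROM my_table')))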
From the documentation:
When an in-memory database is named in this way, it will only share its cache with another connection that uses exactly the same name.
If you didn't mean for the database to be just in-memory, and you wanted the table to be committed to disk (to be there next time you open the connection), drop the mode=memory component.
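For instance, the same URI without the memory mode opens (or creates) an ordinary database file named my_db in the current directory, while still sharing the cache:

db = sqlite3.connect('file:my_db?cache=shared', uri=True)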

In a database program what do these lines of code mean and do?

In a database program, what do these lines of code mean and do?
conn = sqlite3.connect(filename)
c = conn.cursor()
conn.commit()
You could think of conn = sqlite3.connect(filename) as creating a connection, or a reference, to the database specified by filename. Any action you carry out with conn is performed on that database.
c = conn.cursor() creates a cursor object, which allows you to run SQL queries against the database. It is created by a call on the conn variable from earlier, so it is a cursor for that specific database. It is most commonly used through its .execute() method, which executes SQL commands.
conn.commit() 'commits' the changes to the database; that is, when this command is called, any changes made through the cursor are saved to the database.
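A minimal end-to-end sketch tying the three calls together (the filename and table are assumptions for illustration):

import sqlite3

conn = sqlite3.connect('example.db')  # open (or create) the database file
c = conn.cursor()                     # cursor for running SQL
c.execute('CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY, body TEXT)')
c.execute('INSERT INTO notes(body) VALUES(?)', ('hello',))
conn.commit()                         # persist the insert to disk
conn.close()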

Executing MySQL Queries using Python

I'm attempting to run some MySQL queries and output the results in my Python program. I've created this function, which is called with the cursor passed in. However, I am running into a problem where running the code below always returns None / nothing.
Here is what I have:
def showInformation(cursor):
    number_rows = 'SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = "DB"'
    testing = cursor.execute(number_rows)
    print(testing)
When using the cursor object itself, I do not run into any problems:
for row in cursor:
    print(row)
I guess you need:
print(cursor.fetchone())
because you are returning only a count, so you expect a single row.
According to the MySQL documentation, execute() is not supposed to return anything unless multi=True is specified. You can only iterate the cursor as you did, or call fetchone() to retrieve one row, fetchall() to retrieve all rows, or fetchmany() to retrieve a batch.
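Putting that together, a hedged rework of the function above: fetch the row instead of printing execute()'s return value, and pass the schema name as a parameter.

def showInformation(cursor):
    cursor.execute(
        'SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = %s',
        ('DB',))
    (count,) = cursor.fetchone()  # single row: the count
    print(count)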

should I reuse the cursor in the python MySQLdb module

I'm writing a Python CGI script that will query a MySQL database. I'm using the MySQLdb module. Since the database will be queried repeatedly, I wrote this function:
def getDatabaseResult(sqlQuery, connectioninfohere):
    # connect to the database
    vDatabase = MySQLdb.connect(connectioninfohere)
    # create a cursor, execute an SQL statement and get the result as a tuple
    cursor = vDatabase.cursor()
    try:
        cursor.execute(sqlQuery)
    except:
        cursor.close()
        return None
    result = cursor.fetchall()
    cursor.close()
    return result
My question is: is this best practice, or should I reuse my cursor within my functions? For example, which is better:
def callsANewCursorAndConnectionEachTime():
    result1 = getDatabaseResult(someQuery1)
    result2 = getDatabaseResult(someQuery2)
    result3 = getDatabaseResult(someQuery3)
    result4 = getDatabaseResult(someQuery4)
or do away with the getDatabaseResult function altogether and do something like:
def reusesTheSameCursor():
    vDatabase = MySQLdb.connect(connectionInfohere)
    cursor = vDatabase.cursor()
    cursor.execute(someQuery1)
    result1 = cursor.fetchall()
    cursor.execute(someQuery2)
    result2 = cursor.fetchall()
    cursor.execute(someQuery3)
    result3 = cursor.fetchall()
    cursor.execute(someQuery4)
    result4 = cursor.fetchall()
The MySQLdb developer recommends building an application-specific API that does the DB access for you, so that you don't have to worry about MySQL query strings in the application code. It'll make the code a bit more extendable (link).
As for cursors, my understanding is that it is best to create one cursor per operation/transaction. So a check value -> update value -> read value type of transaction could use the same cursor, but for the next transaction you would create a new one. This again points toward building an internal API for DB access instead of having a generic executeSql method.
Also remember to close your cursors, and to commit changes to the connection after the queries are done.
Your getDatabaseResult function doesn't need to connect for every separate query, though. You can share the connection between queries as long as you handle the cursors responsibly.
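A minimal sketch of that middle ground (one shared connection, one cursor per operation; the connection details are placeholder assumptions):

import MySQLdb

def runQuery(connection, sqlQuery):
    # fresh cursor per operation, closed when done
    cursor = connection.cursor()
    try:
        cursor.execute(sqlQuery)
        return cursor.fetchall()
    finally:
        cursor.close()

vDatabase = MySQLdb.connect(host='localhost', user='user',
                            passwd='secret', db='mydb')
try:
    result1 = runQuery(vDatabase, someQuery1)
    result2 = runQuery(vDatabase, someQuery2)
    vDatabase.commit()  # commit once the related queries are done
finally:
    vDatabase.close()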
