REINDEX DATABASE cannot run inside a transaction block - python

I am using an old version of SQLAlchemy (0.8) and I need to execute "REINDEX DATABASE <dbname>" on PostgreSQL 9.4 through the SQLAlchemy API.
Initially I tried with:
conn = pg_db.connect()
conn.execute('REINDEX DATABASE sg2')
conn.close()
but I got the error "REINDEX DATABASE cannot run inside a transaction block".
I read around on the internet and tried other variations:
engine.execute(text("REINDEX DATABASE sg2").execution_options(autocommit=True))
(I also tried with autocommit=False).
and
conn = engine.raw_connection()
cursor = conn.cursor()
cursor.execute('REINDEX DATABASE sg2')
cursor.close()
I always get the same error.
I also tried the following:
conn.execution_options(isolation_level="AUTOCOMMIT").execute(query)
but I got the error:
Invalid value 'AUTOCOMMIT' for isolation_level. Valid isolation levels for postgresql are REPEATABLE READ, READ COMMITTED, READ UNCOMMITTED, SERIALIZABLE
What am I missing here? Thanks for any help.
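One workaround that may apply here, assuming the underlying driver is psycopg2: get the raw DBAPI connection from the engine and switch it to autocommit mode before issuing the statement, so no transaction block is opened. This is a sketch, not verified against SQLAlchemy 0.8 specifically:
raw = engine.raw_connection()               # SQLAlchemy's proxy around the DBAPI connection
dbapi_conn = raw.connection                 # the underlying psycopg2 connection (assumption: psycopg2)
old_autocommit = dbapi_conn.autocommit
try:
    dbapi_conn.autocommit = True            # psycopg2 issues no implicit BEGIN in this mode
    cur = dbapi_conn.cursor()
    cur.execute('REINDEX DATABASE sg2')
    cur.close()
finally:
    dbapi_conn.autocommit = old_autocommit  # restore before the connection returns to the pool
    raw.close()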

Related

List databases using python sqlite3 API

Is there a way to list all attached databases for a sqlite3 Connection? For instance:
con = sqlite3.connect(":memory:")
con.execute("attach database 'a.db' as 'a'")
con.execute("attach database 'b.db' as 'b'")
con.list_databases() # <- doesn't exist
The command in the sqlite3 command shell is .databases. I tried poking at sqlite_master, but of course that's a table, and it exists on each attached DB. I see nothing in the docs either. Is this possible?
Finally found it by digging into the sqlite3 source code:
c.execute("PRAGMA database_list").fetchall()
Forgot about PRAGMAs 🙃
(https://github.com/sqlite/sqlite/blob/master/src/shell.c.in#L8417)
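For completeness, a self-contained sketch of the same approach:
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("attach database 'a.db' as 'a'")
con.execute("attach database 'b.db' as 'b'")

# Each row is (seq, name, file); 'main' is always listed first.
for seq, name, path in con.execute("PRAGMA database_list"):
    print(seq, name, path)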

Pyodbc stored procedure with params not updating table

I am using python 3.9 with a pyodbc connection to call a SQL Server stored procedure with two parameters.
This is the code I am using:
connectionString = buildConnection() # build connection
cursor = connectionString.cursor() # Create cursor
command = """exec [D7Ignite].[Service].[spInsertImgSearchHitResults] #RequestId = ?, #ImageInfo = ?"""
values = (requestid, data)
cursor.execute(command, (values))
cursor.commit()
cursor.close()
requestid is simply an integer, but data is defined as follows (a list containing a JSON object):
[{"ImageSignatureId":"27833", "SimilarityPercentage":"1.0"}]
The stored procedure I am trying to run is supposed to insert data into a table, and it works perfectly fine when executed from Management Studio. When running the code above I notice there are no errors but data is not inserted into the table.
To help me debug, I printed the query preview:
exec [D7Ignite].[Service].[spInsertImgSearchHitResults] @RequestId = 1693, @ImageInfo = [{"ImageSignatureId":"27833", "SimilarityPercentage":"1.0"}]
Pasting this exact line into SQL Server runs the stored procedure with no problem, and data is properly inserted.
I have enabled autocommit = True when setting up the connection, and other CRUD commands work perfectly fine with pyodbc.
Is there anything I'm overlooking? Or is pyodbc simply not processing my query properly? If so, are there any other ways to run Stored Procedures from Python?
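One detail worth checking: in the printed preview the second argument is a bare Python list, while the procedure presumably expects a single NVARCHAR parameter containing JSON. A sketch that serializes the payload before binding and uses the ODBC call escape syntax (that the parameter should arrive as a JSON string is an assumption, not something stated in the question):
import json

# Assumption: @ImageInfo is an NVARCHAR(MAX) parameter holding JSON,
# so the Python list must be serialized to a string before binding.
image_info = json.dumps(data)

cursor = connectionString.cursor()
cursor.execute(
    "{CALL [D7Ignite].[Service].[spInsertImgSearchHitResults] (?, ?)}",
    (requestid, image_info),
)
cursor.commit()   # redundant with autocommit=True, but harmless
cursor.close()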

Running simple query through python: No results

I am trying to learn how to get Microsoft SQL query results using Python and the pyodbc module, and have run into an issue: I am not getting the same results as the same query run in Microsoft SQL Server Management Studio.
I've looked at the pyodbc documentation and set up my connection correctly... at least I'm not getting any connection errors at execution. The only issue seems to be returning the table data.
import pyodbc
import sys
import csv
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=<server>;DATABASE=<db>;UID=<uid>;PWD=<PWD>')
cursor = cnxn.cursor()
cursor.execute("""
SELECT request_id
From audit_request request
where request.reception_datetime between '2019-08-18' and '2019-08-19' """)
rows = cursor.fetchall()
for row in cursor:
print(row.request_id)
When I run the above code I get this in the Python terminal window:
Process returned 0 (0x0) execution time : 0.331 s
Press any key to continue . . .
I tried this same query in SQL Server Management Studio and it returns the results I am looking for. There must be something I'm missing as far as displaying the results using Python.
You're not actually setting your cursor up to be used. You should have something like this before executing:
cursor = cnxn.cursor()
Learn more here: https://github.com/mkleehammer/pyodbc/wiki/Connection#cursor
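A related detail worth ruling out: fetchall() consumes the result set, so the for row in cursor loop that follows it has nothing left to iterate. A sketch that loops over the fetched rows instead, using the same query as in the question:
cursor.execute("""
    SELECT request_id
    From audit_request request
    where request.reception_datetime between '2019-08-18' and '2019-08-19' """)
rows = cursor.fetchall()    # consumes the result set
for row in rows:            # iterate the fetched list, not the drained cursor
    print(row.request_id)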

Write a DataFrame to an SQL database (Oracle)

I need to upload a table I modified to my Oracle database. I exported the table as a pandas DataFrame, modified it, and now want to upload it to the DB.
I am trying to do this using the df.to_sql function as follows:
import sqlalchemy as sa
import pandas as pd
engine = sa.create_engine('oracle://"IP_address_of_server"/"serviceDB"')
df.to_sql("table_name", engine, if_exists='replace', chunksize=None)
I always get this error: DatabaseError: (cx_Oracle.DatabaseError) ORA-12505: TNS:listener does not currently know of SID given in connect descriptor (Background on this error at: http://sqlalche.me/e/4xp6).
I am not an expert at this, so I could not understand what the problem is, especially since the IP address I am giving is the right one.
Could anyone help? Thanks a lot!
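ORA-12505 usually means the listener was given a SID it does not recognize; when the database is registered under a service name instead, the cx_Oracle dialect can be pointed at it via the service_name query parameter. The credentials, host, port and service name below are placeholders, and this URL form assumes a SQLAlchemy version that supports it:
import sqlalchemy as sa

# Placeholders: substitute the real user, password, host, port and service name.
engine = sa.create_engine(
    "oracle+cx_oracle://user:password@IP_address_of_server:1521/?service_name=serviceDB"
)

df.to_sql("table_name", engine, if_exists='replace', chunksize=None)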

python script hangs when calling cursor.fetchall() with large data set

I have a query that returns over 125K rows.
The goal is to write a script that iterates through the rows and, for each, populates a second table with data processed from the result of the query.
To develop the script, I created a duplicate database with a small subset of the data (4126 rows).
On the small database, the following code works:
import os
import sys
import random
import mysql.connector
cnx = mysql.connector.connect(user='dbuser', password='thePassword',
                              host='127.0.0.1',
                              database='db')
cnx_out = mysql.connector.connect(user='dbuser', password='thePassword',
                                  host='127.0.0.1',
                                  database='db')
ins_curs = cnx_out.cursor()
curs = cnx.cursor(dictionary=True)
#curs = cnx.cursor(dictionary=True,buffered=True) #fail
with open('sql\\getRawData.sql') as fh:
    sql = fh.read()
curs.execute(sql, params=None, multi=False)
result = curs.fetchall() #<=== script stops at this point
print len(result) #<=== this line never executes
print curs.column_names
curs.close()
cnx.close()
cnx_out.close()
sys.exit()
The line curs.execute(sql, params=None, multi=False) succeeds on both the large and small databases.
If I use curs.fetchone() in a loop, I can read all records.
If I alter the line:
curs = cnx.cursor(dictionary=True)
to read:
curs = cnx.cursor(dictionary=True,buffered=True)
The script hangs at curs.execute(sql, params=None, multi=False).
I can find no documentation on any limits to fetchall(), nor can I find any way to increase the buffer size, and no way to tell how large a buffer I even need.
There are no exceptions raised.
How can I resolve this?
I was having this same issue, first on a query that returned ~70k rows and then on one that only returned around 2k rows (and for me RAM was also not the limiting factor). I switched from using mysql.connector (i.e. the mysql-connector-python package) to MySQLdb (i.e. the mysql-python package) and then was able to fetchall() on large queries with no problem. Both packages seem to follow the python DB API, so for me MySQLdb was a drop-in replacement for mysql.connector, with no code changes necessary beyond the line that sets up the connection. YMMV if you're leveraging something specific about mysql.connector.
Pragmatically speaking, if you don't have a specific reason to be using mysql.connector the solution to this is just to switch to a package that works better!
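If switching packages is not an option, another workaround consistent with the observation that fetchone() in a loop works is to pull the rows in fixed-size batches with fetchmany(), so nothing ever tries to materialize the full 125K-row result at once (a sketch against the same unbuffered dictionary cursor as in the question):
curs.execute(sql)
total = 0
while True:
    batch = curs.fetchmany(1000)   # tune the batch size as needed
    if not batch:
        break
    for row in batch:
        pass  # process each row and write it to the second table via ins_curs here
    total += len(batch)
print(total)
print(curs.column_names)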
