Does pyodbc treat SQL warnings as errors? - python

I am writing Python code to connect to a MS SQL Server using pyodbc.
So far things have been going smoothly, and I have managed to call several stored procedures on the database.
I have now, however, run into trouble. The stored procedure I am calling outputs SQL warnings regarding null values:
Warning: Null value is eliminated by an aggregate or other SET operation. (8153)
While this is something that could/should be handled on the SQL side, I would like to simply ignore it for now at the Python level.
The code is called in a fairly standard way (I think), like so (not providing a minimal example for now):
conn = None
try:
    # Connection is created in another class, but retrieved here. Works OK.
    conn = db_conn.connect_to_db()
    cur = conn.cursor()
    cur.execute(str_sql)
    # This is a hack, but I think it is unrelated to the issue.
    # Sorry, I haven't found a better way to make pyodbc wait for the SP to finish than this:
    # https://stackoverflow.com/questions/68025109/having-trouble-calling-a-stored-procedure-in-sql-server-from-python-pyodbc
    while cur.nextset():
        time.sleep(1)
    cur.commit()
    cur.close()
    return True
except db.Error as ex:
    log.error(str(ex.args[1]))
    raise ConnectionError(ex.args[1])
The problem is that the ConnectionError is raised on the SQL warning. Can pyodbc be configured to ignore this?
Related posts tell me to turn off ANSI warnings in the stored procedure, but I think that is a workaround.
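For reference, the session-level version of that workaround would look something like this from pyodbc (a sketch only; note that turning ANSI warnings off can also change query semantics, e.g. how NULL comparisons behave):

# Suppress the 'Null value is eliminated...' warning at the source by
# disabling ANSI warnings for this session before running the procedure.
cur = conn.cursor()
cur.execute("SET ANSI_WARNINGS OFF;")
cur.execute(str_sql)  # str_sql as in the snippet above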
Other posts suggest importing the warnings module and catching all warnings; I tried that, but it didn't work. I guess pyodbc sees the warning as an error, so it never reaches Python's warning machinery.
Did I misunderstand something, or is it not possible?
Python version 3.7
Pyodbc version 4.0.32
ODBC Driver 17 for SQL Server
Called from macOS

Okay, so I did somewhat resolve my issue.
The stored procedure actually did produce an error in my call. I found this after testing the call directly on the database.
So, to answer my own question: no, pyodbc doesn't treat warnings as errors.
I did, however, only see the SQL warning in the errors (or at least as far as I could tell). The real error thrown by a THROW 50001, ...... was nowhere to be seen in the pyodbc.Error.
I tried to make a minimal reproducible example, but failed to do so. The following code seems to ignore the thrown error; I assume I made some mistake and this kind of SQL string cannot be used. The expected behavior would be to land in the except part, but instead the correct values are returned by fetchall.
import pyodbc

def test_warnings_after_errors():
    # Connect to your own MS SQL database (conn_str is a placeholder).
    conn = pyodbc.connect(conn_str)
    cur = conn.cursor()
    try:
        cur.execute('''
            SELECT C1,
                   MAX(C2) as MaxC2
            FROM (VALUES (1, 1),
                         (1, 2),
                         (2, 4),
                         (1, NULL),
                         (2, 4)) as V(C1, C2)
            GROUP BY C1
            THROW 51000, 'Will we get this error back?', 1;
            ''')
        result = cur.fetchall()
        print(result)
    except pyodbc.Error as error:
        print(error.args[1])
    print("Executed sql")
If I remove the whole SELECT part, the error is thrown as expected. The code runs as written in Azure Data Studio against the server, and in that case it returns the error (and the null-related warnings before it).
To actually remove my error I had to do a cleanup of data, but that was totally unrelated to the issues posted here.
In my case I can live with the "weird" SQL warning when an error is thrown, but it still puzzles me.
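My best guess at what is happening here (an assumption, not verified against pyodbc internals): SQL Server returns the SELECT's rows as the first result set, and the THROW arrives later in the stream, so pyodbc only raises it once you advance past that first result set. Draining the result sets should make the error surface:

import pyodbc

conn = pyodbc.connect(conn_str)  # conn_str: your own connection string
cur = conn.cursor()
cur.execute(sql_with_throw)      # the SELECT ... THROW batch from above
print(cur.fetchall())            # the first result set comes back fine
try:
    # Advancing past the SELECT should hit the THROW and raise it.
    while cur.nextset():
        pass
except pyodbc.Error as error:
    print("Error surfaced on nextset():", error.args[1])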

Related

Replace mysql.connector function with reading CSV/TSV data

I would like to run a large Python script programmed by someone else (https://github.com/PatentsView/PatentsView-Disambiguation to be precise). At numerous times during its execution, the code connects to a MySQL database in order to read some data. The problem is that I
a) cannot get a MySQL installation on the server I use. Being only a guest at the institution whose computers I use, I cannot really influence IT to change this, and
b) would like to alter the code as little as possible, since I have very little Python experience.
The original code uses a function granted_table() that returns a mysql.connector.connect(...), where host, user, password, etc. are given in the dots. My idea is to swap this function out for one that reads TSV files instead, which I have all stored on my machine. This way I do not have to go over the long script and fiddle around with it (I have almost no experience with Python and might mess something up without realizing it).
I have tried a few things already, and it almost works, but not quite. A reproducible example follows.
First, there is the original function that accesses MySQL, which has to go:
def granted_table(config):
    if config['DISAMBIGUATION']['granted_patent_database'].lower() == 'none':
        logging.info('[granted_table] no db given')
        return None
    else:
        logging.info('[granted_table] trying to connect to %s', config['DISAMBIGUATION']['granted_patent_database'])
        return mysql.connector.connect(host=config['DATABASE']['host'],
                                       user=config['DATABASE']['username'],
                                       password=config['DATABASE']['password'],
                                       database=config['DISAMBIGUATION']['granted_patent_database'])
Second, here is the function that calls the above to read in data (I shortened and simplified it but kept the structure as I understand it; this version only prints whatever data it receives). I do not want to change this if at all possible:
def build_granted(config):
    cnx = granted_table(config)
    cursor = cnx.cursor()
    query = "SELECT uuid, patent_id, assignee_id, rawlocation_id, type, name_first, name_last, organization, sequence FROM rawassignee"
    cursor.execute(query)
    for rec in cursor:
        print(rec)
Third, there is my attempt at a new function granted_table(config) that behaves like the old one but does not access MySQL:
def granted_table(config):
    return placeholder()

class placeholder:
    def cursor(self):
        return conn_to_data()

class conn_to_data:
    def execute(self, query):  # class method
        # define folder etc.
        print(query)
        filename = "foo.tsv"
        path = "/share/bar/"
        file = open(path + filename)
        print(file.read(100))
        return file
Now, if I execute all of the above and then run
config="test"
build_granted(config)
the code breaks with
"Type Error: conn_to_data object is not iterable",
so it does not access the actual data as intended. However, if I change the line cursor.execute(query) to cursor=cursor.execute(query) it works as intended. I could do that if there is really no better solution, but it seems like there must be. I am a python newb, so maybe there even is a simple function/class that is intended for this and one of you can point it out. You might realize that the code currently always opens the same file, without actually interpreting the query. This is something I will still have to fix later, once it even works at all. It seems to me to be a somewhat messy but conceptually simple string-parsing problem, but if some of you have a smart idea, it would also be a huge help.
Thanks in advance!
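The for rec in cursor loop in build_granted requires the fake cursor to be iterable, so one minimal fix is to implement __iter__ on it. A sketch, assuming tab-separated files and keeping the hard-coded path from the question:

import csv

class placeholder:
    def cursor(self):
        return conn_to_data()

class conn_to_data:
    def __init__(self):
        self._rows = iter([])

    def execute(self, query):
        # Still ignores the query and always opens the same file; mapping the
        # query text to the right TSV is the string-parsing step mentioned above.
        print(query)
        with open("/share/bar/foo.tsv", newline="") as f:
            self._rows = iter(list(csv.reader(f, delimiter="\t")))

    def __iter__(self):
        # Lets "for rec in cursor" iterate over the rows, like a real
        # mysql.connector cursor.
        return self._rows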

pyodbc MERGE INTO error: HY000: The driver did not supply an error

I'm trying to execute many (~1000) MERGE INTO statements against Oracle 11.2.0.4.0 (64-bit) using Python 3.9.2 (64-bit) and pyodbc 4.0.30 (64-bit). However, every statement raises the same exception:
HY000: The driver did not supply an error
I've tried everything I can think of to solve this problem, but no luck. I tried changing the code, the encodings/decodings, and the ODBC driver from Oracle home 12.1 (64-bit) to Oracle home 19.1 (64-bit). I also tried using pyodbc 4.0.22, in which case the error just changed into:
<class 'pyodbc.ProgrammingError'> returned a result with an error set
which is no more helpful than the first one. I assume the issue cannot be the MERGE INTO statement itself, because when I run the statements directly in the database shell, they complete without issue.
Below is my code. I should also mention that the commands and parameters are read from stdin before being executed, and the database uses the UTF-8 character set.
cmds = sys.stdin.readlines()
comms = json.loads(cmds[0])
conn = pyodbc.connect(connstring)
conn.setencoding(encoding="utf-8")
cursor = conn.cursor()
cursor.execute("""ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MM-DD"T"HH24:MI:SS.....'""")
for comm in comms:
    params = [(None) if str(x) == 'None' or str(x) == 'NULL' else (x) for x in comm["params"]]
    try:
        cursor.execute(comm["sql"], params)
    except Exception as e:
        print(e)
conn.commit()
conn.close()
Edit: Another thing worth mentioning: this issue began after the Python 2.7 to 3.9.2 update. The code itself didn't require any changes at all in this particular location, though.
I've had my share of HY000 errors in the past. It almost always came down to a syntax error in the SQL query. Double-check all your double and single quotes, and make sure the query works when run independently in an SQL session against your database.
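If the statement itself checks out, and since the problem appeared with the Python 2 to 3 update, it may also be worth pinning down the decoding side explicitly; this is only a guess, not a confirmed fix:

import pyodbc

conn = pyodbc.connect(connstring)  # connstring as in the question
conn.setencoding(encoding="utf-8")
# pyodbc also lets you control how returned data is decoded; these settings
# assume the database character set really is UTF-8.
conn.setdecoding(pyodbc.SQL_CHAR, encoding="utf-8")
conn.setdecoding(pyodbc.SQL_WCHAR, encoding="utf-8")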

MySQL stored procedure sometimes returns 0 rows

EDIT: I've now tried pyodbc as well as pymysql and get the same result (zero rows returned when calling a stored procedure). I forgot to mention that this is on Ubuntu 16.04.2 LTS using the MySQL ODBC 5.3 driver (libmyodbc5w.so).
I'm using pymysql (0.7.11) on Python 3.5.2, executing various stored procedures against a MySQL 5.6.10 database. I'm running into a strange and inconsistent issue where I occasionally get zero rows returned, though I can immediately re-run the exact same code and get the number of rows I expect.
The code is pretty straightforward...
from collections import OrderedDict
import pymysql
from pymysql.cursors import DictCursorMixin, Cursor

class OrderedDictCursor(DictCursorMixin, Cursor):
    dict_type = OrderedDict

try:
    connection = pymysql.connect(
        host=my_server,
        user=my_user,
        password=my_password,
        db=my_database,
        connect_timeout=60,
        cursorclass=pymysql.cursors.DictCursor
    )
    param1 = '2017-08-23 00:00:00'
    param2 = '2017-08-24 00:00:00'
    proc_args = tuple([param1, param2])
    proc = 'my_proc_name'
    cursor = connection.cursor(OrderedDictCursor)
    cursor.callproc(proc, proc_args)
    result = cursor.fetchall()
except Exception as e:
    print('Error: ', e)
finally:
    if not isinstance(connection, str):
        connection.close()
More often than not, it works just fine. But every once in a while, it completes almost instantly but with zero rows in the result set. No error that I can see or anything, just nothing... Run it again, and no problem.
Turns out that the problem had nothing to do with pymysql, odbc, etc., but rather was a problem with the order in which the parameters were passed to the stored procedure.
On my desktop I was using Python 3.6 and things worked just fine. I didn't realize, though, that one of the changes between 3.5.2 and 3.6 affected how items added to a dictionary object via json.loads were ordered.
The parameters being passed came from a dict object originally populated via json.loads... since dicts were unordered pre-3.6, running the code would occasionally mean that my starttime and endtime parameters were passed to the MySQL stored procedure backwards. Hence, zero rows returned.
Once I realized that was the issue, fixing it was just a matter of adding object_pairs_hook=OrderedDict to the json.loads call.
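A minimal sketch of the fix (json_text and the surrounding names are assumed, not from the original code):

import json
from collections import OrderedDict

# object_pairs_hook preserves the key order of the JSON text on Python < 3.6,
# so the start and end datetime parameters reach the stored procedure in the
# intended order.
params = json.loads(json_text, object_pairs_hook=OrderedDict)
proc_args = tuple(params.values())
cursor.callproc('my_proc_name', proc_args)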

Does Psycopg2 allow udf create queries to run on redshift using Python?

I am able to connect to AWS Redshift with psycopg2 using Python; I can query tables and get data back, etc.
However, when I try to run a CREATE FUNCTION (UDF) statement through psycopg2, nothing happens: no error is returned, but nothing gets created.
Here's my code:
def _applyFunctionToDB():
    con = psycopg2.connect(dbname=redhsiftDatabase, host=redshiftHost, port='5439', user=redshiftUser, password=redshiftPwd)
    cur = con.cursor()
    udf = _fileOpenWrite(udfFile)
    size = os.stat(udfFile).st_size
    udfCode = udf.read(size)
    cur.execute(udfCode)
    con.close()
I have run it through the debugger and all the pieces are there, but nothing happens when the "execute" method is invoked on the cursor.
If anyone has any advice and/or ideas on what might be going on here, please advise.
Thanks!
Found the answer just after posting here: Copying data from S3 to AWS redshift using python and psycopg2
I need to invoke a commit.
So, add con.commit() in the above code after execute.
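A sketch of the corrected tail of the function, grounded in the answer above:

    cur.execute(udfCode)
    con.commit()  # DDL such as CREATE FUNCTION is not persisted until commit
    con.close()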

How to receive an out parameter (sys_refcursor) of an Oracle stored procedure in Django

I have created a stored procedure, usuarios_get, and tested it in the Oracle console, where it works fine. This is the code of the stored procedure:
create or replace PROCEDURE USUARIOS_GET(
    text_search in VARCHAR2,
    usuarios_list out sys_refcursor
)
AS
--Variables
BEGIN
    open usuarios_list for select * from USUARIO;
END USUARIOS_GET;
The python code is this:
with connection.cursor() as cursor:
    listado = cursor.var(cx_Oracle.CURSOR)
    l_query = cursor.callproc('usuarios_get', ('', listado))  # this statement produces the error
    l_results = l_query[1]
The error is the following:
NotSupportedError: Variable_TypeByValue(): unhandled data type VariableWrapper
I've also tried another stored procedure with an out parameter of number type, modifying the Python code to listado = cursor.var(cx_Oracle.NUMBER), and I get the same error:
NotSupportedError: Variable_TypeByValue(): unhandled data type VariableWrapper
I work with
python 2.7.12
Django 1.10.4
cx_Oracle 5.2.1
Oracle 12c
Can anyone help me with this?
Thanks
The problem is that Django's wrapper is incomplete, so you need to make sure you have a "raw" cx_Oracle cursor instead. You can get one using the following code:
django_cursor = connection.cursor()
raw_cursor = django_cursor.connection.cursor()
out_arg = raw_cursor.var(int) # or raw_cursor.var(float)
raw_cursor.callproc("<procedure_name>", (in_arg, out_arg))
out_val = out_arg.getvalue()
Then use the "raw" cursor to create the variable and call the stored procedure.
Looking at the definition of the variable wrapper in Django it also looks like you can access the "var" property on the wrapper. You can also pass that directly to the stored procedure instead -- but I don't know if that is a better long-term option or not!
Anthony's solution works for me with Django 2.2 and Oracle 12c. Thanks! Couldn't find this solution anywhere else on the web.
dcursor = connection.cursor()
cursor = dcursor.connection.cursor()
import cx_Oracle
out_arg = cursor.var(cx_Oracle.NUMBER)
ret = cursor.callproc("<procedure_name>", (in_arg, out_arg))
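For the original sys_refcursor out parameter, the same raw-cursor pattern should work with a cursor variable; a sketch, not verified against this exact Django/cx_Oracle combination:

import cx_Oracle

dcursor = connection.cursor()             # Django cursor
raw_cursor = dcursor.connection.cursor()  # underlying cx_Oracle cursor
ref_cursor = raw_cursor.var(cx_Oracle.CURSOR)
raw_cursor.callproc("usuarios_get", ("", ref_cursor))
for row in ref_cursor.getvalue():         # getvalue() returns an open cursor
    print(row)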
