Dropping SQL Procedures/Functions with SQLAlchemy and Python

I'm building the skeleton of a larger application which relies on SQL Server to do much of the heavy lifting and passes data back to Pandas for consumption by the user/insertion into flat files or Excel. Thus far my code is able to insert a stored procedure and function into a database and execute both without issue. However, when I try to run the code a second time in the same database, the drop commands don't seem to work. Here are the various files and the flow of code through them.
First, proc_drop.sql, which is used to store the SQL drop commands.
/*
IF EXISTS(SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'proc_create_tractor_test'))
DROP PROCEDURE [dbo].[proc_create_tractor_test]
GO
IF EXISTS(SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'fn_mr_parse'))
DROP FUNCTION [dbo].[fn_mr_parse]
GO
*/
IF OBJECT_ID('[proc_create_tractor_test]') IS NOT NULL DROP PROCEDURE [proc_create_tractor_test]
IF OBJECT_ID('[fn_mr_parse]') IS NOT NULL DROP FUNCTION [fn_mr_parse]
I realize there are two kinds of drop statements in the file. I have tested a number of different iterations, and none of the drop statements seem to work when executed by Python/SQLAlchemy, but all of them work on their own when executed in SQL Server Management Studio.
Next, the helper.py file, in which I am storing helper functions. My drop SQL originally fed into the "deploy_procedures" function as a file and was executed in the body of that function. I've since isolated the reading/executing of the drop SQL into its own function for testing purposes only.
def clean_databases(engines, procedures):
    for engine in engines:
        for proc in procedures:
            with open(proc, "r") as procfile:
                code = procfile.read()
                print code
                engine.execute(code)

def deploy_procedures(engines, procedures):
    for engine in engines:
        for proc in procedures:
            with open(proc, "r") as procfile:
                code = procfile.read()
                engine.execute(code)
Next, proc_create_tractor_test.sql, which is executed by the code and creates an associated stored procedure in the database. For brevity, I've included only the top portion of that code:
CREATE PROCEDURE [dbo].[proc_create_tractor_test]
    @meta_risks varchar(256)
AS
BEGIN
Finally, the main file that pieces it all together is below. You'll notice that I create SQLAlchemy engines, which are passed to the helper functions after being initialized with connection information. The engines are passed as a list, as are the SQL procedure files I referenced, and the helper functions simply iterate through each engine and procedure, executing one at a time. I'm also using pymssql as the driver to connect, which is working fine.
So it is the "deploy_procedures" function that crashes when trying to create the function or stored procedure the second time the code is run, and as far as I can tell this is because the drop SQL at the top of my explanation never actually runs.
Can anyone shed some light on what the issue is or whether I am missing something totally obvious?
run_tractor.py:
import pandas
import pandas.io.sql as pdsql
import sqlalchemy as sqla
import xlwings
import helper as hlp
# Last import contains user-defined functions
# ---------- SERVER VARIABLES ---------- #
server = 'DEV\MSSQLSERVER2K12'
database = 'injection_test'
username= 'username'
password = 'pwd'
# ---------- CONFIGURING [Only change base path if relevant, not file names] ---------- #
base_path = r'C:\Code\code\Tractor Analysis Basic Code'
procedure_drop = r'' + base_path + '\proc_drop.sql'
procedure_create_curves = r'' + base_path + '\proc_create_tractor_test.sql'
procedure_create_mr_function = r'' + base_path + '\create_mr_parse_function.sql'
procedures = [procedure_create_curves, procedure_create_mr_function]
del_procedures = [procedure_drop]
engine_analysis = sqla.create_engine('mssql+pymssql://{2}:{3}@{0}/{1}'.format(server,database,username,password))
engine_analysis.connect()
engines = [engine_analysis]
hlp.clean_databases(engines, del_procedures)
hlp.deploy_procedures(engines, procedures)
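One possible culprit, offered here only as a sketch rather than a verified fix: when a plain SQL string is passed to engine.execute(), older SQLAlchemy versions decide whether to autocommit by looking at how the statement text begins, and a script that starts with a /* ... */ comment block may not be recognized as a DROP, so the statement runs and is then silently rolled back. Running the script inside an explicit transaction avoids relying on that detection; the function below reuses the clean_databases name from helper.py:

from sqlalchemy import text

def clean_databases(engines, procedures):
    for engine in engines:
        for proc in procedures:
            with open(proc, "r") as procfile:
                code = procfile.read()
            # engine.begin() opens a transaction and commits it on exit,
            # so any DROP statements in the script are not rolled back when
            # the connection is returned to the pool.
            with engine.begin() as conn:
                conn.execute(text(code))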

Related

Print SQL generated by SQLObject

from sqlobject import *

class Data(SQLObject):
    ts = TimeCol()
    val = FloatCol()

Data.select().count()
Fails with:
AttributeError: No connection has been defined for this thread or process
How do I get the SQL which would be generated, without declaring a connection?
It's impossible, for two reasons. First, .count() not only generates a query, it also executes it, so it requires not only a connection but also a database and a populated table. Second, different queries could be generated for different backends (especially in the area of quoting strings), so a connection is required to render a query object to a string.
To generate the query string built by an accumulator function, you need to repeat the code that generates the query. So the full solution for your question is:
#! /usr/bin/env python
from sqlobject import *

__connection__ = "sqlite:/:memory:?debug=1"

class Data(SQLObject):
    ts = TimeCol()
    val = FloatCol()

print(Data.select().queryForSelect().newItems("COUNT(*)"))

SQL code not executing from SQLAlchemy in containerized MSSQL -> comment block preventing execution

Background on the problem: I've got MSSQL running in a Docker container, and I'm executing SQL scripts from a file within the container using Python/SQLAlchemy. I know this works because I've been executing other scripts without any problem. But one particular SQL script is executing without any result, and after careful elimination it seems to be caused by a single block of commented-out code. Again, this doesn't seem to cause an issue elsewhere, and the SQL script also runs faultlessly when executed directly in MSSQL.
The code that calls the SQL script:
with engine.connect() as con:
    with open(file_loc) as file:
        a = file.read()
        a = re.split('\nGO|\nGO\n|\ngo|\ngo\n', a)  # split the query at GO
        for q in a:
            con.execute(sa.text(q))
The code that causes the issue:
-- create new PMT() function
create function [financials].[pmt] (
    @r decimal(38,16)
) returns decimal(24,4)
as
begin
    declare @pmt decimal(24,4);
    ...
    return @pmt;
end;
GO
------ present value <<<<ANY COMMENT HERE PREVENTS THE FUNCTIONS FROM BEING CREATED
create function [financials].[pmt_PV] (
    @pmt decimal(38,16) <<removed trailing comma here
) returns decimal(24,4)
as
begin
    declare @pmt_pv decimal(24,4);
    ...
    return @pmt_pv;
end;
GO
Any comment in the space marked in the middle prevents the functions from being created.
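For reference, a sketch of one way the calling code could be hardened; the run_script helper name is mine, not from the original post. It splits only on lines consisting of GO, skips empty batches, and runs everything in an explicit transaction so that a batch beginning with a comment is not silently rolled back by SQLAlchemy's autocommit detection:

import re
import sqlalchemy as sa

def run_script(engine, file_loc):
    # Hypothetical helper; engine and file_loc follow the question's code.
    with open(file_loc) as file:
        script = file.read()
    # Split only on lines that are exactly GO (any case), rather than on every
    # occurrence of "\nGO", so comments and identifiers containing "go" are safe.
    batches = re.split(r'^\s*GO\s*$', script, flags=re.IGNORECASE | re.MULTILINE)
    with engine.begin() as con:  # explicit transaction, committed on exit
        for q in batches:
            if q.strip():        # skip empty batches between GO lines
                con.execute(sa.text(q))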

What am I doing wrong with this backup using PyODBC and SQL Server Express 2008?

For details: I'm running a very recent version of Python and PyODBC and am trying to remotely initiate a backup on SQL Server. I don't know much about SQL. I've tried googling the issue and have tried a few tricks below, including using .nextset() and trying different encodings just in case. I see a large surge on the local network of the PC hosting the SQL Server, but it very quickly tapers to nothing. I also can't seem to run a stored procedure; it is never found, despite existing and being runnable from my local machine using SSMS, or from SSMS on the remote PC. Also, as far as I know, I've given the remote account accessing the SQL Server complete access.
Here is the relevant function:
def back_up_and_restore(self):
    if self.Arbin not in [3, 4]:
        self.errorString = "Not remote server!"
        return self.errorString
    self.cnxn = pyodbc.connect(self.connectionString(self.db[0]))
    self.cursor = self.cnxn.cursor()
    self.cnxn.autocommit = True
    if self.Arbin == 3:
        folder = "Fred"
    else:
        folder = "George"
    query = r"BACKUP DATABASE [%s] TO DISK = N'\\10.130.130.5\TestLab\DATA\%s\ArbinMasterData.bak' WITH NOFORMAT, INIT, NAME = N'ArbinMasterData-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10" % ("ArbinMasterData", folder)
    #query = "{CALL BackUp%s2}" % folder
    query.encode(encoding='UTF-8')
    print(query)
    self.cursor.execute(query)
    time.sleep(2)
    while self.cursor.nextset():  # drain result sets until the backup finishes
        pass
    self.cnxn.close()
When I run the function it just dies quickly and doesn't say anything. If I try running a stored procedure it says that the stored procedure doesn't exist. Any help would be great.
This code works great if you need an example of PyODBC making a backup. I was unfortunately looking in the George folder instead of the Fred folder.
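For anyone who wants a minimal, standalone version of the same pattern, here is a sketch; the driver name, server address and credentials below are placeholders, not values from the question. The two key points are opening the connection with autocommit, since BACKUP DATABASE cannot run inside a user transaction, and draining the result sets so the call blocks until the backup has finished:

import pyodbc

# Placeholder connection details; adjust driver, server and credentials.
conn_str = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=remote_sql_host;DATABASE=master;UID=user;PWD=pwd")
cnxn = pyodbc.connect(conn_str, autocommit=True)  # BACKUP cannot run inside a transaction
cursor = cnxn.cursor()
cursor.execute(
    "BACKUP DATABASE [ArbinMasterData] "
    r"TO DISK = N'\\10.130.130.5\TestLab\DATA\Fred\ArbinMasterData.bak' "
    "WITH NOFORMAT, INIT, SKIP, NOREWIND, NOUNLOAD, STATS = 10"
)
# STATS = 10 reports progress in additional result sets; looping over them
# blocks until the backup has actually completed.
while cursor.nextset():
    pass
cnxn.close()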

Does psycopg2 allow UDF create queries to run on Redshift using Python?

I am able to connect to AWS Redshift with psycopg2 using Python; I can query tables and get data back, etc.
However, when I try to run a CREATE FUNCTION (UDF) statement through psycopg2, nothing happens: no error is returned, but nothing gets created.
Here's my code:
def _applyFunctionToDB():
    con = psycopg2.connect(dbname=redhsiftDatabase, host=redshiftHost, port='5439', user=redshiftUser, password=redshiftPwd)
    cur = con.cursor()
    udf = _fileOpenWrite(udfFile)
    size = os.stat(udfFile).st_size
    udfCode = udf.read(size)
    cur.execute(udfCode)
    con.close()
I have run it through the debugger and all the pieces are there, but nothing happens when the "execute" method is invoked on the cursor.
If anyone has any advice and/or ideas on what might be going on here, please advise.
Thanks!
Found the answer just after posting, here: Copying data from S3 to AWS redshift using python and psycopg2
I need to invoke a commit.
So, add con.commit() to the code above, after the execute call.
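Putting that together, a minimal sketch of the corrected function; it keeps the placeholder connection variables from the question and simply replaces the _fileOpenWrite helper with a plain open:

import psycopg2

def _applyFunctionToDB():
    con = psycopg2.connect(dbname=redhsiftDatabase, host=redshiftHost,
                           port='5439', user=redshiftUser, password=redshiftPwd)
    cur = con.cursor()
    with open(udfFile) as udf:
        udfCode = udf.read()
    cur.execute(udfCode)
    con.commit()  # without the commit, the CREATE FUNCTION is rolled back when the connection closes
    con.close()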

How to receive an out parameter (sys_refcursor) from an Oracle stored procedure in Django

I have created a stored procedure, usuarios_get; I tested it in the Oracle console and it works fine. This is the code of the stored procedure:
create or replace PROCEDURE USUARIOS_GET(
    text_search in VARCHAR2,
    usuarios_list out sys_refcursor
)
AS
--Variables
BEGIN
    open usuarios_list for select * from USUARIO;
END USUARIOS_GET;
The Python code is this:
with connection.cursor() as cursor:
    listado = cursor.var(cx_Oracle.CURSOR)
    l_query = cursor.callproc('usuarios_get', ('', listado))  # this line produces the error
    l_results = l_query[1]
The error is the following:
NotSupportedError: Variable_TypeByValue(): unhandled data type VariableWrapper
I've also tried with another stored procedure that has an out parameter of number type, modifying the Python code to listado = cursor.var(cx_Oracle.NUMBER), and I get the same error:
NotSupportedError: Variable_TypeByValue(): unhandled data type VariableWrapper
I am working with:
python 2.7.12
Django 1.10.4
cx_Oracle 5.2.1
Oracle 12c
Can anyone help me with this?
Thanks
The problem is that Django's wrapper is incomplete. As such you need to make sure you have a "raw" cx_Oracle cursor instead. You can do that using the following code:
django_cursor = connection.cursor()
raw_cursor = django_cursor.connection.cursor()
out_arg = raw_cursor.var(int) # or raw_cursor.var(float)
raw_cursor.callproc("<procedure_name>", (in_arg, out_arg))
out_val = out_arg.getvalue()
Then use the "raw" cursor to create the variable and call the stored procedure.
Looking at the definition of the variable wrapper in Django it also looks like you can access the "var" property on the wrapper. You can also pass that directly to the stored procedure instead -- but I don't know if that is a better long-term option or not!
Anthony's solution works for me with Django 2.2 and Oracle 12c. Thanks! Couldn't find this solution anywhere else on the web.
dcursor = connection.cursor()
cursor = dcursor.connection.cursor()
import cx_Oracle
out_arg = cursor.var(cx_Oracle.NUMBER)
ret = cursor.callproc("<procedure_name>", (in_arg, out_arg))
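Since the out parameter in this question is a sys_refcursor rather than a number, here is a sketch of the same approach adapted for that case; it follows the cx_Oracle API (cursor.var(cx_Oracle.CURSOR), then getvalue() returns an open cursor) but has not been run against the original schema:

import cx_Oracle
from django.db import connection

django_cursor = connection.cursor()
raw_cursor = django_cursor.connection.cursor()  # underlying cx_Oracle cursor
listado = raw_cursor.var(cx_Oracle.CURSOR)      # holds the sys_refcursor out parameter
raw_cursor.callproc('usuarios_get', ('', listado))
rows = listado.getvalue().fetchall()            # the out parameter is an open cursor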
