Executing multiple SQL statements using execute() - Python

I am trying to test SQL injection against a server of mine.
I am using the command:
cursor.execute("select * from some_table")
to execute SQL commands on my server.
But is there a way to execute multiple commands using the same execute() function?
I tried:
cursor.execute("select * from some_table ; INSERT INTO ...")
The DBMS is MariaDB.

Here is an overview of SQL injection strategies. The one you are trying to do is called stacking queries, and it seems that at least this strategy is prevented by most database APIs.
You mention MariaDB, which is more or less the same as MySQL.
And although Python is not listed explicitly, I would assume that the Python database API prevents query stacking as well.
Update: When you check the API of execute(), you can see there is a parameter multi which defaults to False. As long as you don't set it to True, you should be safe.
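For illustration, here is a minimal sketch of that multi parameter as exposed by mysql-connector-python (assuming that is the driver in use; the connection values are placeholders). With the default multi=False a stacked statement is rejected; only with multi=True does execute() return an iterator over the per-statement results:

import mysql.connector

# placeholder credentials -- adjust for your environment
conn = mysql.connector.connect(user="user", password="pw",
                               host="localhost", database="db")
cursor = conn.cursor()

# Default (multi=False): a stacked statement raises an error.
# cursor.execute("SELECT 1; SELECT 2")

# Only with multi=True are several statements executed in one call:
for result in cursor.execute("SELECT 1; SELECT 2", multi=True):
    if result.with_rows:
        print(result.statement, result.fetchall())

conn.close()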

MySQL (and MariaDB) allow you to run several SQL statements in one go by setting the capability flag CLIENT_MULTI_STATEMENTS (0x10000) when connecting to the database server. Check the documentation of the Python database driver used in your implementation; there should be a way to set this flag, and you need to do so before creating the cursor and executing the SQL statements.
Here is a code example for the mariadb Python driver; other drivers (like pymysql) may work the same way:
import mariadb
from mariadb.constants.CLIENT import MULTI_STATEMENTS

conn_params = {
    "user": "YOUR_USERNAME",
    "password": "YOUR_PASSWORD",
    "host": "NETWORK_DOMAIN_NAME",
    "database": "DB_NAME",
    "client_flag": MULTI_STATEMENTS,
}
db_conn = mariadb.connect(**conn_params)

rawsqls = [
    'SELECT * FROM table2',
    'INSERT INTO table3 ....',
    'SELECT * FROM table4',
]

with db_conn.cursor() as cursor:
    cursor.execute(';'.join(rawsqls))
    rows1 = cursor.fetchall()
    cursor.nextset()
    rows2 = cursor.fetchall()
    cursor.nextset()
    rows3 = cursor.fetchall()
CAUTION
To avoid SQL injection, you should be careful and use the CLIENT_MULTI_STATEMENTS flag ONLY when you are sure that all the inputs to your SQL statements come from a trusted source.
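For reference, a minimal sketch of the same approach with pymysql, assuming that driver is installed and that the placeholder connection values are adjusted; the flag lives in pymysql.constants.CLIENT and the result sets are walked with nextset(), just like above:

import pymysql
from pymysql.constants import CLIENT

# placeholder credentials -- adjust for your environment
conn = pymysql.connect(host="localhost", user="user", password="pw",
                       database="db", client_flag=CLIENT.MULTI_STATEMENTS)

with conn.cursor() as cursor:
    cursor.execute("SELECT 1; SELECT 2")
    first = cursor.fetchall()   # rows of the first SELECT
    cursor.nextset()            # advance to the next statement's result
    second = cursor.fetchall()  # rows of the second SELECT

conn.close()

The same caution about SQL injection applies here as well.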

Related

pyodbc MERGE INTO error: HY000: The driver did not supply an error

I'm trying to execute many (~1000) MERGE INTO statements against an Oracle database 11.2.0.4.0 (64-bit) using Python 3.9.2 (64-bit) and pyodbc 4.0.30 (64-bit). However, all the statements return an exception:
HY000: The driver did not supply an error
I've tried everything I can think of to solve this problem, but no luck. I tried changing the code, the encodings/decodings, and the ODBC driver from Oracle home 12.1 (64-bit) to Oracle home 19.1 (64-bit). I also tried pyodbc 4.0.22, in which case the error just changed into:
<class 'pyodbc.ProgrammingError'> returned a result with an error set
which is not any more helpful than the first one. I assume the issue cannot be the MERGE INTO statement itself, because when I run the statements directly in the database shell, they complete without issue.
Below is my code. I should also mention that the commands and parameters are read from stdin before being executed, and the Oracle database uses the UTF-8 character set.
cmds = sys.stdin.readlines()
comms = json.loads(cmds[0])

conn = pyodbc.connect(connstring)
conn.setencoding(encoding="utf-8")
cursor = conn.cursor()
cursor.execute("""ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MM-DD"T"HH24:MI:SS.....'""")

for comm in comms:
    params = [(None) if str(x) == 'None' or str(x) == 'NULL' else (x) for x in comm["params"]]
    try:
        cursor.execute(comm["sql"], params)
    except Exception as e:
        print(e)

conn.commit()
conn.close()
Edit: Another thing worth mentioning: this issue began after the update from Python 2.7 to 3.9.2. The code itself didn't require any changes at all in this particular location, though.
I've had my share of HY000 errors in the past. It almost always came down to a syntax error in the SQL query. Double-check all your double and single quotes, and make sure the query works when run independently in an SQL session against your database.
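A minimal sketch of that kind of check, reusing the names from the question (comms and connstring) and executing just one statement in isolation while printing exactly what the driver receives, so stray quotes or unexpected parameter types become visible before the full loop runs:

import pyodbc

conn = pyodbc.connect(connstring)   # same connection string as in the question
conn.setencoding(encoding="utf-8")
cursor = conn.cursor()

one_sql = comms[0]["sql"]           # first command read from stdin
one_params = comms[0]["params"]

print(repr(one_sql))                # reveals stray quotes or odd characters
print(repr(one_params))             # reveals unexpected types or encodings

try:
    cursor.execute(one_sql, one_params)
    conn.commit()
    print("single MERGE succeeded")
except pyodbc.Error as e:
    print("single MERGE failed:", e)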

Python call to 'ibm_db.exec_immediate' gives message 'Function (%s) returns too few values'

I am calling a stored procedure with ibm_db like this:
sql = "EXECUTE PROCEDURE db_x:example_procedure(8, 1234567)"
stmt = ibm_db.exec_immediate(conn, sql)
But the exec_immediate line gives the error: Transaction couldn't be completed: [IBM][CLI Driver][IDS/UNIX64] Function (%s) returns too few values. SQLCODE=-685
On the IBM site we have the following:
685 Function <function-name> returns too few values.
The number of returned values from a function is less than the number of values that the caller expects.
I don't know where exactly the error occurs, or why. How can I debug and solve this?
P.S.: I do not have access to the procedure code.
Thanks.
ibm_db uses the DRDA protocol, which is not the best choice for an Informix database. You may try the same with the native Informix Python driver, which is IfxPy.
Here is the Informix native Python driver homepage:
https://openinformix.github.io/IfxPy/
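A minimal sketch of the same call through IfxPy, assuming the driver is installed; the connection string values (server, host, service, credentials) are placeholders:

import IfxPy

# placeholder connection string -- adjust for your environment
ConStr = "SERVER=ids0;DATABASE=db_x;HOST=localhost;SERVICE=9088;UID=user;PWD=pw;PROTOCOL=onsoctcp"
conn = IfxPy.connect(ConStr, "", "")

stmt = IfxPy.exec_immediate(conn, "EXECUTE PROCEDURE example_procedure(8, 1234567)")

# fetch whatever the procedure returns, row by row
row = IfxPy.fetch_tuple(stmt)
while row:
    print(row)
    row = IfxPy.fetch_tuple(stmt)

IfxPy.close(conn)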

How to receive an out parameter (sys_refcursor) of an Oracle stored procedure in Django

I have created a stored procedure, usuarios_get. I tested it in the Oracle console and it works fine. This is the code of the stored procedure:
create or replace PROCEDURE USUARIOS_GET(
    text_search in VARCHAR2,
    usuarios_list out sys_refcursor
)
AS
--Variables
BEGIN
    open usuarios_list for select * from USUARIO;
END USUARIOS_GET;
The Python code is this:
with connection.cursor() as cursor:
    listado = cursor.var(cx_Oracle.CURSOR)
    l_query = cursor.callproc('usuarios_get', ('', listado))  # this line produces the error
    l_results = l_query[1]
The error is the following:
NotSupportedError: Variable_TypeByValue(): unhandled data type VariableWrapper
I've also tried with another stored procedure that has an out parameter of type number, modifying the Python code to listado = cursor.var(cx_Oracle.NUMBER), and I get the same error:
NotSupportedError: Variable_TypeByValue(): unhandled data type VariableWrapper
I work with
python 2.7.12
Django 1.10.4
cx_Oracle 5.2.1
Oracle 12c
Can anyone help me with this?
Thanks
The problem is that Django's wrapper is incomplete. As such you need to make sure you have a "raw" cx_Oracle cursor instead. You can do that using the following code:
django_cursor = connection.cursor()
raw_cursor = django_cursor.connection.cursor()
out_arg = raw_cursor.var(int) # or raw_cursor.var(float)
raw_cursor.callproc("<procedure_name>", (in_arg, out_arg))
out_val = out_arg.getvalue()
Then use the "raw" cursor to create the variable and call the stored procedure.
Looking at the definition of the variable wrapper in Django, it seems you can also access the "var" property on the wrapper and pass that directly to the stored procedure instead -- but I don't know whether that is a better long-term option or not!
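For the sys_refcursor case from the question, a minimal sketch along the same lines (procedure name and arguments taken from the question; this is only one way to consume the ref cursor):

import cx_Oracle
from django.db import connection

django_cursor = connection.cursor()
raw_cursor = django_cursor.connection.cursor()

listado = raw_cursor.var(cx_Oracle.CURSOR)          # out parameter of type sys_refcursor
raw_cursor.callproc("usuarios_get", ("", listado))  # '' is the text_search argument

for row in listado.getvalue():                      # getvalue() returns an open cursor
    print(row)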
Anthony's solution works for me with Django 2.2 and Oracle 12c. Thanks! Couldn't find this solution anywhere else on the web.
import cx_Oracle

dcursor = connection.cursor()
cursor = dcursor.connection.cursor()
out_arg = cursor.var(cx_Oracle.NUMBER)
ret = cursor.callproc("<procedure_name>", (in_arg, out_arg))

Dropping SQL procedures/functions with SQLAlchemy and Python

I'm building the skeleton of a larger application which relies on SQL Server to do much of the heavy lifting and passes data back to Pandas for consumption by the user or insertion into flat files or Excel. Thus far my code is able to create a stored procedure and a function in a database and execute both without issue. However, when I try to run the code a second time in the same database, the drop commands don't seem to work. Here are the various files and the flow of code through them.
First, proc_drop.sql, which is used to store the SQL drop commands.
/*
IF EXISTS(SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'proc_create_tractor_test'))
DROP PROCEDURE [dbo].[proc_create_tractor_test]
GO
IF EXISTS(SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'fn_mr_parse'))
DROP FUNCTION [dbo].[fn_mr_parse]
GO
*/
IF OBJECT_ID('[proc_create_tractor_test]') IS NOT NULL DROP PROCEDURE [proc_create_tractor_test]
IF OBJECT_ID('[fn_mr_parse]') IS NOT NULL DROP PROCEDURE [fn_mr_parse]
I realize there are two kinds of drop statements in the file. I have tested a number of different variations, and none of the drop statements seem to work when executed by Python/SQLAlchemy, but they all work on their own when executed in SQL Server Management Studio.
Next, the helper.py file, in which I store helper functions. My drop SQL originally fed into the "deploy_procedures" function as a file and was executed in the body of that function. I've since isolated the reading/executing of the drop SQL into a separate function, for testing purposes only.
def clean_databases(engines, procedures):
    for engine in engines:
        for proc in procedures:
            with open(proc, "r") as procfile:
                code = procfile.read()
                print code
                engine.execute(code)

def deploy_procedures(engines, procedures):
    for engine in engines:
        for proc in procedures:
            with open(proc, "r") as procfile:
                code = procfile.read()
                engine.execute(code)
Next, proc_create_tractor_test.sql, which is executed by the code and creates an associated stored procedure in the database. For brevity I've added the top portion of that code only:
CREATE PROCEDURE [dbo].[proc_create_tractor_test]
    @meta_risks varchar(256)
AS
BEGIN
Finally, the major file piecing it all together is below. You'll notice that I create SQLAlchemy engines which are passed to the helper functions after being initialized with connection information. Those engines are passed as a list, as are the SQL procedure files I referenced, and the helper functions simply iterate through each engine and procedure, executing one at a time. I'm also using pymssql as the driver to connect, which is working fine.
So it is the "deploy_procedures" function which is crashing when trying to create the function or stored procedure the second time the code is run, and as far as I can tell this is because the drop SQL at the top of my explanation is never run.
Can anyone shed some light on what the issue is or whether I am missing something totally obvious?
run_tractor.py:
import pandas
import pandas.io.sql as pdsql
import sqlalchemy as sqla
import xlwings
import helper as hlp
# Last import contains user-defined functions
# ---------- SERVER VARIABLES ---------- #
server = 'DEV\MSSQLSERVER2K12'
database = 'injection_test'
username= 'username'
password = 'pwd'
# ---------- CONFIGURING [Only change base path if relevant, not file names ---------- #
base_path = r'C:\Code\code\Tractor Analysis Basic Code'
procedure_drop = r'' + base_path + '\proc_drop.sql'
procedure_create_curves = r'' + base_path + '\proc_create_tractor_test.sql'
procedure_create_mr_function = r'' + base_path + '\create_mr_parse_function.sql'
procedures = [procedure_create_curves, procedure_create_mr_function]
del_procedures = [procedure_drop]
engine_analysis = sqla.create_engine('mssql+pymssql://{2}:{3}@{0}/{1}'.format(server, database, username, password))
engine_analysis.connect()
engines = [engine_analysis]
hlp.clean_databases(engines, del_procedures)
hlp.deploy_procedures(engines, procedures)

SQL Server function native parameter bind error

I'm using the following software stack on Ubuntu 10.04 Lucid LTS to
connect to a database:
python 2.6.5 (ubuntu package)
pyodbc git trunk commit eb545758079a743b2e809e2e219c8848bc6256b2
unixodbc 2.2.11 (ubuntu package)
freetds 0.82 (ubuntu package)
Windows with Microsoft SQL Server 2000 (8.0)
I get this error when trying to do native parameter binds in arguments
to a SQL SERVER function:
Traceback (most recent call last):
File "/home/nosklo/devel/testes/sqlfunc.py", line 32, in <module>
cur.execute("SELECT * FROM fn_FuncTest(?)", ('test',))
pyodbc.ProgrammingError: ('42000', '[42000] [FreeTDS][SQL
Server]SqlDumpExceptionHandler: Process 54 generated fatal exception
c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this
process.\r\n (0) (SQLPrepare)')
Here's the reproduction code:
import pyodbc

constring = 'server=myserver;uid=uid;pwd=pwd;database=db;TDS_Version=8.0;driver={FreeTDS}'
con = pyodbc.connect(constring)
print 'VERSION: ', con.getinfo(pyodbc.SQL_DBMS_VER)
cur = con.cursor()

try:
    cur.execute('DROP FUNCTION fn_FuncTest')
    con.commit()
    print "Function dropped"
except pyodbc.Error:
    pass

cur.execute('''
CREATE FUNCTION fn_FuncTest (@testparam varchar(4))
RETURNS @retTest TABLE (param varchar(4))
AS
BEGIN
    INSERT @retTest
    SELECT @testparam
    RETURN
END''')
con.commit()
Now the function is created. If I try to call it using a value directly in the query (no native binding of values), it works fine:
cur.execute("SELECT * FROM fn_FuncTest('test')")
assert cur.fetchone()[0] == 'test'
However, I get the error above when I try to do a native bind (using a parameter placeholder and passing the value separately):
cur.execute("SELECT * FROM fn_FuncTest(?)", ('test',))
Further investigation reveals some weird behaviour I'd like to relate:
- Everything works fine if I change the TDS version to 4.2 (however, the version reported by SQL Server is then wrong: using TDS version 4.2 I get '95.08.0255' instead of the real version '08.00.0760').
- Everything works fine for the other two kinds of functions: functions that return a value and functions that are just a SELECT query (like a view) both work fine. You can even define a new function that returns the result of a query on the other (broken) function, and that way everything works, even when doing native binds on the parameters. For example: CREATE FUNCTION fn_tempFunc(@testparam varchar(4)) RETURNS TABLE AS RETURN (SELECT * FROM fn_FuncTest(@testparam))
- The connection gets very unstable after this error; you can't recover.
- The error happens when trying to bind any type of data.
How can I pursue this further? I'd like to do native binds to function parameters.
Ultimately, this probably isn't the answer you're looking for, but when I had to connect to MSSQL from Perl two or three years ago, ODBC + FreeTDS was initially involved and I didn't get anywhere with it. I don't recall the specific errors, but I was trying to do binding, and that seemed to be the source of some of the trouble.
On the Perl project, I eventually wound up using a driver intended for Sybase (which MSSQL forked off from), so you might want to look into that.
The Python wiki has a page on Sybase and another on SQL Server that you'll probably want to peruse for alternatives:
http://wiki.python.org/moin/Sybase
http://wiki.python.org/moin/SQL%20Server
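As one concrete possibility (an assumption on my part, not something I have tested against SQL Server 2000): pymssql also builds on the Sybase-derived DB-Library/TDS stack and, as far as I know, substitutes parameters on the client side, so the server never sees a prepared bind. A minimal sketch with placeholder credentials:

import pymssql

# placeholder credentials -- adjust for your environment
con = pymssql.connect(server='myserver', user='uid', password='pwd', database='db')
cur = con.cursor()

# pymssql uses the %s paramstyle; the parameter is substituted in the driver
cur.execute("SELECT * FROM fn_FuncTest(%s)", ('test',))
print cur.fetchone()[0]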
