convert parameterized queries from psycopg2 to pyodbc - python

I use psycopg2 to open a SQL file (PostgreSQL), and I pass the value for LIMIT as a parameter in my code, which works fine. Here is some of the code:
...
cur.execute(open("sample.sql", "r").read(), (lim,))
...
My sample.sql creates a table named sample and populates it with some data from another table named test:
CREATE TABLE sample(name varchar(500));
INSERT INTO sample(name) SELECT name FROM test LIMIT %s;
Here, %s takes the value of lim passed from cur.execute.
My question is: how do I translate this into pyodbc code to use with SQL Server?

psycopg2 and pyodbc both implement Python's DB API, and that specification defines several parameter styles that can be used. The developers of psycopg2 and pyodbc simply chose different styles:
psycopg2 uses "format" placeholders: ... WHERE city = %s
pyodbc uses "qmark" placeholders: ... WHERE city = ?
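Beyond the placeholder style, note that SQL Server has no LIMIT clause; the usual equivalent is SELECT TOP (n), and TOP accepts a bound parameter. A rough sketch of the translation (the regex handling is naive and assumes a single trailing LIMIT %s, as in sample.sql above):

```python
import re

def to_tsql_qmark(pg_sql: str) -> str:
    """Naively translate the PostgreSQL query for pyodbc/SQL Server:
    drop the LIMIT %s clause (SQL Server has none), reattach it as
    TOP (?) on the first SELECT, and swap any remaining "format"
    placeholders (%s) for "qmark" ones (?)."""
    sql = re.sub(r"\bLIMIT\s+%s\s*;?", "", pg_sql, flags=re.IGNORECASE)
    sql = re.sub(r"\bSELECT\b", "SELECT TOP (?)", sql, count=1,
                 flags=re.IGNORECASE)
    return sql.replace("%s", "?").rstrip() + ";"

print(to_tsql_qmark("INSERT INTO sample(name) SELECT name FROM test LIMIT %s;"))
# → INSERT INTO sample(name) SELECT TOP (?) name FROM test;
```

With only the LIMIT parameter, the same (lim,) tuple still lines up, so the call shape is unchanged: cur.execute(to_tsql_qmark(open("sample.sql").read()), (lim,)). With several parameters the order would shift, since TOP comes first in the statement.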

Related

Django / pyodbc stored procedure: "not all arguments converted during string formatting" - SQL Server

I am unable to call a stored procedure from Django, although I can call the same stored procedure from a normal Python program.
Sample working Python code:
cursor = connect.cursor()
params=('IBM','2016')
cursor.execute("EXEC SP_Competitor_Extract ?, ? ",params)
This piece of code works fine, but when I try to execute it from Django it does not work:
def index(request):
    cursor = connection.cursor()
    try:
        params = ["IBM", "2015"]
        cursor.execute("{call SP_Competitor_Extract (?,?)}", params)
        while cursor.nextset():
            try:
                results = {'results': cursor.fetchall()}
                break
            except pyodbc.ProgrammingError:
                continue
This gives me the error message: not all arguments converted during string formatting.
Django's internal .cursor() method differs from pyodbc's: it uses "format" style placeholders (%s), and the parameter sequence is passed directly. Try this:
cursor.execute(
    "EXEC SP_Competitor_Extract %s, %s",
    params
)
For more information, the documentation covers it well here: https://docs.djangoproject.com/en/1.11/topics/db/sql/#executing-custom-sql-directly
The other option is to import connection from pyodbc and create a connection manually, as you do in your standalone script, using your Django DATABASES settings variables. Good luck!
It's working now. pyodbc has no callproc, but you can pass the arguments directly; you don't need the placeholders. In my case cursor.execute("{call SP_Competitor_Extract IBM,2016}") worked. I collect all the values as variables, build the SQL string, and then call cursor.execute(SQL).

psycopg2 difference between AsIs and sql module

To choose dynamically a table name in a query I used to use AsIs() from psycopg2.extensions ( http://initd.org/psycopg/docs/extensions.html#psycopg2.extensions.AsIs ), with the following syntax:
cur.execute("SELECT * FROM %s WHERE id = %s;", (AsIs('table_name'), id))
However, the documentation now recommends using the new psycopg2.sql module available in version 2.7 ( http://initd.org/psycopg/docs/sql.html#module-psycopg2.sql ) with the following syntax:
from psycopg2 import sql
cur.execute(
    sql.SQL("SELECT * FROM {} WHERE id = %s;")
       .format(sql.Identifier('table_name')), (id,))
What's the difference between those two options besides the fact that objects exposed by the sql module can be passed directly to execute()?
AsIs is... as it is. It won't perform any escaping of the table name if it contains characters that need quoting. The objects in the sql module, by contrast, know what an identifier is.
More subtly, AsIs is meant for parameter values only: that it currently works for table names is mostly an implementation accident, and the behaviour may change in the future. Query parameters should not be used to represent variable parts of the query, such as table or field names.
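To make the difference concrete, here is a rough reimplementation of the quoting rule that sql.Identifier applies (for illustration only; the real module also handles encodings and query composition):

```python
def quote_identifier(name: str) -> str:
    """Roughly what sql.Identifier does: wrap the name in double quotes
    and double any embedded double quote, so the server always reads it
    as a single identifier."""
    return '"' + name.replace('"', '""') + '"'

def as_is(name: str) -> str:
    """Roughly what AsIs does: interpolate the string completely untouched."""
    return name

print(quote_identifier('my table'))  # → "my table"  (still one identifier)
print(as_is('my table'))             # → my table    (two tokens: broken SQL)
```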

SQLAlchemy/pandas to_sql for SQLServer -- CREATE TABLE in master db

Using MSSQL (version 2012), I am using SQLAlchemy and pandas (on Python 2.7) to insert rows into a SQL Server table.
After trying pymssql and pyodbc with a specific server string, I am trying an odbc name:
import sqlalchemy, pyodbc, pandas as pd
engine = sqlalchemy.create_engine("mssql+pyodbc://mssqlodbc")
sqlstring = "EXEC getfoo"
dbdataframe = pd.read_sql(sqlstring, engine)
This part works great and worked with the other methods (pymssql, etc). However, the pandas to_sql method doesn't work.
finaloutput.to_sql("MyDB.dbo.Loader_foo", engine, if_exists="append", chunksize=10000)
With this statement, I get a consistent error that pandas is trying to do a CREATE TABLE in the SQL Server master db, for which it is not permissioned.
How do I get pandas/SQLAlchemy/pyodbc to point to the correct MSSQL database? The to_sql method seems to ignore whatever I put in the engine connect string (although the read_sql method seems to pick it up just fine).
To have this question marked as answered: the problem is that you specify the schema in the table name itself. If you provide "MyDB.dbo.Loader_foo" as the table name, pandas will interpret the full string as the table name, instead of just "Loader_foo".
The solution is to provide only "Loader_foo" as the table name. If you need to write the table into a specific schema, you can use the schema kwarg (see docs):
finaloutput.to_sql("Loader_foo", engine, if_exists="append")
finaloutput.to_sql("Loader_foo", engine, if_exists="append", schema="something_else_as_dbo")
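The table-name behaviour is easy to see with the stdlib sqlite3 standing in for SQL Server (to_sql accepts a plain SQLite connection, so this sketch needs only pandas installed; SQLite has no schemas, so only the literal-name point is shown):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
df = pd.DataFrame({"name": ["foo", "bar"]})

# The dotted string is taken as one literal table name, not db.schema.table:
df.to_sql("MyDB.dbo.Loader_foo", conn, index=False)
# The plain name, plus the schema kwarg where the backend supports it:
df.to_sql("Loader_foo", conn, index=False, if_exists="append")

tables = sorted(r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"))
print(tables)  # → ['Loader_foo', 'MyDB.dbo.Loader_foo']
```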

Can I write a python/SQL code that is independent of the sql engine (PostGres / Sqlite)

I have Python code in which I make SQL requests to a database. I would like to be able to switch between a PostgreSQL database (using the psycopg2 module) and a SQLite one (using the sqlite3 module) without needing to adapt my code. That is, I would like to keep fixed SQL request strings in my code and switch engines by changing only the definition of the database connector object, using one of these:
my_db = psycopg2.connect(...)
my_db = sqlite3.connect(...)
For the moment, I don't see any possibility, since:
Everyone knows that one should NOT use string concatenation to pass arguments to a SQL request, but rather use placeholders (from the psycopg2 docs: never, NEVER use Python string concatenation ... to pass variables to a SQL query string. Not even at gunpoint.)
The placeholder syntax differs between the two APIs, even for non-named placeholders: psycopg2 uses "%s" and sqlite3 uses "?":
my_cursor.execute("SELECT * FROM table WHERE id= ?", (my_id,)) # for SQLITE3
my_cursor.execute("SELECT * FROM table WHERE id= %s", (my_id,)) # for PSYCOPG2
One could in principle use the database's native placeholder syntax ($1, $2, ... for PostgreSQL prepared statements), but preparing that SQL string would mean precisely the kind of Python string concatenation that is forbidden by point 1.
I'm lacking ideas...
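One pragmatic workaround (a sketch, not a polished library): every DB-API module exposes its placeholder style as a module-level paramstyle attribute, so queries can be written once in qmark style and translated mechanically. This naive version assumes positional placeholders only and no literal ? inside string constants in the query:

```python
def adapt_query(sql_qmark: str, db_module) -> str:
    """Translate a qmark-style query ("... WHERE id = ?") into the
    placeholder style declared by the driver's DB-API paramstyle
    attribute. Naive: assumes no literal '?' inside the query text."""
    style = db_module.paramstyle
    if style == "qmark":                  # sqlite3
        return sql_qmark
    if style in ("format", "pyformat"):   # psycopg2 reports "pyformat"
        return sql_qmark.replace("?", "%s")
    raise NotImplementedError(style)

import sqlite3
q = "SELECT * FROM mytable WHERE id = ?"
print(adapt_query(q, sqlite3))  # → SELECT * FROM mytable WHERE id = ?

class FakePsycopg2:  # stand-in so the demo runs without psycopg2 installed
    paramstyle = "pyformat"

print(adapt_query(q, FakePsycopg2))  # → SELECT * FROM mytable WHERE id = %s
```

The same pattern extends to the "named" and "numeric" styles if needed, at the cost of real SQL tokenisation instead of a plain replace.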

Parameterized queries with psycopg2 / Python DB-API and PostgreSQL

What's the best way to make psycopg2 pass parameterized queries to PostgreSQL? I don't want to write my own escaping mechanisms or adapters, and the psycopg2 source code and examples are difficult to read in a web browser.
If I need to switch to something like PyGreSQL or another python pg adapter, that's fine with me. I just want simple parameterization.
psycopg2 follows the rules for DB-API 2.0 (set down in PEP 249). That means you can call the execute method on your cursor object and use the pyformat binding style, and it will do the escaping for you. For example, the following should be safe (and work):
cursor.execute("SELECT * FROM student WHERE last_name = %(lname)s",
{"lname": "Robert'); DROP TABLE students;--"})
From the psycopg documentation
(http://initd.org/psycopg/docs/usage.html)
Warning Never, never, NEVER use Python string concatenation (+) or string parameters interpolation (%) to pass variables to a SQL query string. Not even at gunpoint.
The correct way to pass variables in a SQL command is using the second argument of the execute() method:
SQL = "INSERT INTO authors (name) VALUES (%s);" # Note: no quotes
data = ("O'Reilly", )
cur.execute(SQL, data) # Note: no % operator
Here are a few examples you might find helpful
cursor.execute('SELECT * from table where id = %(some_id)s', {'some_id': 1234})
Or you can dynamically build your query from a dict of field names and values. The column names must be interpolated into the SQL string (they cannot be passed as parameters), while the values get generated %s placeholders:
query = 'INSERT INTO some_table (%s) VALUES (%s)' % (
    ', '.join(my_dict.keys()),
    ', '.join(['%s'] * len(my_dict)))
cursor.execute(query, list(my_dict.values()))
Note: the fields must be defined in your code, not user input, otherwise you will be susceptible to SQL injection.
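The whole contract can be exercised end to end with the stdlib sqlite3 module standing in for psycopg2 (qmark placeholders instead of %s, but the same DB-API execute(sql, params) shape), showing that the driver neutralises the classic injection string:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE authors (name TEXT)")

# The driver binds the value itself; the apostrophe cannot terminate the literal.
data = ("O'Reilly",)
cur.execute("INSERT INTO authors (name) VALUES (?)", data)

cur.execute("SELECT name FROM authors")
print(cur.fetchall())  # → [("O'Reilly",)]
```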
I love the official docs about this:
https://www.psycopg.org/psycopg3/docs/basic/params.html