Is there any pywin32 odbc connector documentation available?

What is a good pywin32 odbc connector documentation and tutorial on the web?

Alternatives:
mxODBC by egenix.com (if you need ODBC)
pyODBC
sqlalchemy and the DB-API 2.0 modules (not ODBC, but perhaps a better alternative)

The answer is: 'there isn't one'. However, here is an example that shows how to open a connection and issue a query, and how to get column metadata from the result set. The DB API 2.0 specification can be found in PEP 249.
import dbi, odbc  # pywin32's old ODBC modules

SQL2005_CS_TEMPLATE = """\
Driver={SQL Native Client};
Server=%(sql_server)s;
Database=%(sql_db)s;
Trusted_Connection=yes;
"""

CONN_PARAMS = {'sql_server': 'foo',
               'sql_db': 'bar'}

query = "select foo from bar"

db = odbc.odbc(SQL2005_CS_TEMPLATE % CONN_PARAMS)
c = db.cursor()
c.execute(query)
rs = c.fetchall()  # see also fetchone() and fetchmany()

# loop over the results
for r in rs:
    print r

# print the name of column 0 of the result set
print c.description[0][0]

# print the type, length, precision etc. of column 1
print c.description[1][1:5]

db.close()

The only 'documentation' that I found was a unit test that was installed with the pywin32 package. It seems to give an overview of the general functionality. I found it here:
python dir\Lib\site-packages\win32\test\test_odbc.py
I should also point out that I believe it implements the Python Database API Specification v1.0, which is documented here:
http://www.python.org/dev/peps/pep-0248/
Note that there is also v2.0 of this specification (see PEP 249).
On a side note, I've been trying to use pywin32 odbc, but I've had problems with intermittent crashes with the ODBC driver I'm using. I recently moved to pyodbc and my issues were resolved.
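For reference, here is a minimal sketch of the same connection and query done with pyodbc instead (the driver name and the foo/bar placeholders are carried over from the example above, so adjust them for your setup):

import pyodbc

conn = pyodbc.connect('Driver={SQL Native Client};'
                      'Server=foo;Database=bar;'
                      'Trusted_Connection=yes;')
cur = conn.cursor()
cur.execute('select foo from bar')
for row in cur.fetchall():
    print row
conn.close()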

Related

SQLite3 syntax error on tested sql script

I'm using Python 3.6.4 and sqlite3 2.6.0 to query the nearest consecutive dates in my table, in a database file created with SQLite 3.27.2.
I've tried to get the actual sql string with vscode debugger and test it with DB Browser for SQLite. It works as I expect.
Here's the code:
sql = '''WITH
dates(cast_date) AS (
    SELECT DISTINCT play_date
    FROM TimeTable
),
groups AS (
    SELECT
        date(cast_date, '-'||(ROW_NUMBER() OVER (ORDER BY cast_date))||' days') AS grp,
        cast_date
    FROM dates
)
SELECT
    MIN(cast_date) AS date_start,
    MAX(cast_date) AS date_end
FROM groups GROUP BY grp ORDER BY 2 DESC LIMIT 1'''
cursor = conn.cursor()
result = []
try:
    cursor.execute(sql)
    result = cursor.fetchone()
except sqlite3.OperationalError:
    FileLogger.exception('Exception at ' + __file__ + ' ' + __name__)
An exception occurs:
cursor.execute(sql)
sqlite3.OperationalError: near "OVER": syntax error
Window function support was first added to SQLite with release 3.25.0 (2018-09-15), according to the official documentation.
When using Python, you are using the SQLite library that ships with the Python sqlite3 module, not your system's SQLite installation. For Python 2.7, for example, the bundled version is 3.11.0, which is below the required 3.25.0.
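You can confirm which SQLite library your interpreter is actually linked against; both attributes below are part of the standard sqlite3 module:

import sqlite3
print(sqlite3.sqlite_version)  # version of the underlying SQLite C library
print(sqlite3.version)         # version of the Python sqlite3 module itself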
You may try using a newer SQLite3 client library, as suggested by these answers.

String-encoding SQL statement

I have a very strange issue. I have a SQL statement that works locally, but the same statement does not work on a remote ubuntu machine (same mysql version). I think it has to do with how the SQL string is being encoded by the driver. Here is the statement I have:
group_ids = ('43ede7a1e1f048872c025867602dc54d', '43ede7a1e1f048872c025867602dc54d', '7a8ec12901c43606aee041f1e6d5b2d4', '0f57f4ad
cursor = connection.cursor()
cursor.execute('''
    SELECT
        DISTINCT c.group_id
    FROM
        main_cue c
    LEFT OUTER JOIN
        main_passongroup p
    ON
        (c.group_id = p.group_id AND p.user_id = %s)
    WHERE
        c.group_id in %s
    ORDER BY
        p.timestamp ASC, c.id DESC''', (user.pk, group_ids))
results = cursor.fetchall()
print '>>> 1', results
On my local machine it seems to SQL-encode the statement properly, but on the remote server it only works if I hard-code the exact SQL statement.
Is there a better way to encode the SQL statement?
This turned out to be an issue with an outdated version of MySQLdb, which was not properly encoding the string.
I was able to solve it by uninstalling MySQLdb and then reinstalling the newer version (MySQL-python==1.2.5).
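If you'd rather not rely on the driver expanding a tuple for the IN clause, here is a hedged sketch that builds the placeholder list explicitly (table and column names are taken from the question):

placeholders = ', '.join(['%s'] * len(group_ids))
sql = ('SELECT DISTINCT group_id FROM main_cue '
       'WHERE group_id IN (%s)' % placeholders)
cursor.execute(sql, group_ids)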

Can I write Python/SQL code that is independent of the SQL engine (Postgres / SQLite)?

I have Python code that makes SQL requests to a database. I would like to be able to switch between a PostgreSQL database (using the psycopg2 module) and a SQLite one (using the sqlite3 module) without having to adapt my code. That is, I want to keep fixed SQL request strings in my code and switch engines only by changing the definition of the database connector object, using one of these:
my_db = psycopg2.connect(...)
my_db = sqlite3.connect(...)
For the moment, I don't see any possibility, since:
Everyone knows that one should NOT use string concatenation to pass arguments to a SQL request, but rather use placeholders (from the psycopg2 docs: "never, NEVER use Python string concatenation ... to pass variables to a SQL query string. Not even at gunpoint.").
The placeholder syntax differs between the two APIs, even for non-named placeholders: sqlite3 uses "?" while psycopg2 uses "%s":
my_cursor.execute("SELECT * FROM table WHERE id = ?", (my_id,))   # for sqlite3
my_cursor.execute("SELECT * FROM table WHERE id = %s", (my_id,))  # for psycopg2
One could in principle use PostgreSQL's built-in placeholder syntax ("$1", as used by PREPARE), but this would mean preparing the SQL string with Python string concatenation, and so on, which is forbidden by point 1.
I'm lacking ideas...
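One workaround, sketched below, is to write queries in "qmark" style and translate them mechanically for drivers whose paramstyle is "format" or "pyformat"; every DB-API module advertises its style in a paramstyle attribute. The adapt() helper is a hypothetical name, and the sketch assumes "?" never appears inside a string literal in your SQL:

import sqlite3

def adapt(sql, module):
    # rewrite "?" placeholders for %s-style drivers such as psycopg2
    if module.paramstyle in ('format', 'pyformat'):
        return sql.replace('?', '%s')
    return sql  # 'qmark' drivers such as sqlite3 take the query unchanged

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute(adapt('SELECT ?', sqlite3), (42,))
print(cur.fetchone())  # (42,)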

MSSQL in python 2.7

Is there a module available for connection of MSSQL and python 2.7?
I downloaded pymssql but it is for python 2.6. Is there any equivalent module for python 2.7?
I am not aware of one. Can anyone provide links?
Important note: in the meantime a pymssql module has become available. Don't miss the answer further down on this page: https://stackoverflow.com/a/25749269/362951
You can also use pyodbc to connect to MSSQL from Python.
An example from the documentation:
import pyodbc
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=testdb;UID=me;PWD=pass')
cursor = cnxn.cursor()
cursor.execute("select user_id, user_name from users")
rows = cursor.fetchall()
for row in rows:
    print row.user_id, row.user_name
The SQLAlchemy library (mentioned in another answer) uses pyodbc to connect to MSSQL databases (it tries various libraries, but pyodbc is the preferred one). Example code using SQLAlchemy:
from sqlalchemy import create_engine
engine = create_engine("mssql://me:pass@localhost/testdb")
for row in engine.execute("select user_id, user_name from users"):
    print row.user_id, row.user_name
If you're coming across this question through a web search, note that pymssql nowadays does support Python 2.7 (and 3.3) or newer. No need to use ODBC.
From the pymssql requirements:
Python 2.x: 2.6 or newer. Python 3.x: 3.3 or newer.
See http://pymssql.org/.
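A minimal pymssql sketch (the connection parameters are placeholders; the connect() keyword names are pymssql's documented ones):

import pymssql

conn = pymssql.connect(server='localhost', user='me',
                       password='pass', database='testdb')
cursor = conn.cursor()
cursor.execute("select user_id, user_name from users")
for user_id, user_name in cursor:  # pymssql cursors are iterable
    print user_id, user_name
conn.close()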
Install pyodbc using pip as follows: pip install pyodbc
import pyodbc
cnxn = pyodbc.connect("DRIVER={SQL Server};SERVER=SOME-PC;DATABASE=my_db")
cursor = cnxn.cursor()
cursor.execute("insert into test_tb values(6, 'name')")
cursor.execute("select id, name from my_tb")
rows = cursor.fetchall()
for row in rows:
    print row.id, row.name
For details, see
https://github.com/mkleehammer/pyodbc/wiki
You can try out SQLAlchemy:
The SQLAlchemy Object Relational Mapper presents a method of associating user-defined Python classes with database tables, and instances of those classes (objects) with rows in their corresponding tables.
You can refer to the following links:
1. http://www.sqlalchemy.org/docs/
2. http://www.rmunn.com/sqlalchemy-tutorial/tutorial.html
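To make the quoted description concrete, here is a minimal declarative sketch (written against the newer SQLAlchemy 1.4+ API; the User class and users table are illustrative):

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    user_id = Column(Integer, primary_key=True)
    user_name = Column(String(50))

engine = create_engine('sqlite:///:memory:')  # any supported backend works here
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(user_id=1, user_name='me'))
    session.commit()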

Question about PostgreSQL bind variables

I was looking at the question and decided to try using bind variables. I use:
sql = 'insert into abc2 (intfield, textfield) values (%s, %s)'
a = time.time()
for i in range(10000):
    # just a wrapper around cursor.execute
    db.executeUpdateCommand(sql, (i, 'test'))
db.commit()
and
sql = 'insert into abc2 (intfield, textfield) values (%(x)s, %(y)s)'
for i in range(10000):
    db.executeUpdateCommand(sql, {'x': i, 'y': 'test'})
db.commit()
Looking at the times taken by the two versions above, there doesn't seem to be much difference; in fact, the second one takes longer. Can someone correct me if I've made a mistake somewhere? I'm using psycopg2 here.
The queries are equivalent in PostgreSQL.
"Bind" is Oracle lingo: when you use it, the query plan is saved, so the next execution is a little faster. PREPARE does the same thing in Postgres.
http://www.postgresql.org/docs/current/static/sql-prepare.html
psycopg2 supports internal parameter binding via cursor.execute() and cursor.executemany(), but not server-side prepare.
(But don't call it "bind" to Postgres people. Call it "prepare" or they may not know what you mean.)
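You can also issue PREPARE/EXECUTE yourself; they are plain PostgreSQL SQL, so psycopg2 simply passes them through. A hedged sketch, reusing the abc2 table from the question and assuming conn is an open psycopg2 connection:

cur = conn.cursor()
cur.execute("PREPARE ins (int, text) AS "
            "INSERT INTO abc2 (intfield, textfield) VALUES ($1, $2)")
for i in range(10000):
    cur.execute("EXECUTE ins (%s, %s)", (i, 'test'))
conn.commit()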
IMPORTANT UPDATE:
I've looked at the source of all the Python libraries for connecting to PostgreSQL in the FreeBSD ports tree, and I can say that only py-postgresql does real prepared statements! But it is Python 3+ only.
Also, py-pg_queue is a fun little library implementing the official DB protocol (Python 2.4+).
You've missed the answer to that question, which was to use prepared statements as much as possible. "Bound variables" are the better form of this; let's see:
sql_q = 'insert into abc (intfield, textfield) values (?, ?)'      # common form
sql_b = 'insert into abc2 (intfield, textfield) values (:x, :y)'   # needs driver and db support
so your test should be this:
sql = 'insert into abc2 (intfield, textfield) values (:x, :y)'
for i in range(10000):
    cur.execute(sql, {'x': i, 'y': 'test'})
or this:
def _data(n):
    for i in range(n):
        yield (i, 'test')

sql = 'insert into abc2 (intfield, textfield) values (?, ?)'
cur.executemany(sql, _data(10000))
and so on.
UPDATE:
I've just found an interesting recipe for transparently replacing SQL queries with prepared statements while still using %(name)s placeholders.
As far as I know, psycopg2 has never supported server-side parameter binding ("bind variables" in Oracle parlance). Current versions of PostgreSQL do support it at the protocol level using prepared statements, but only a few connector libraries make use of it. The Postgres wiki notes this here. Here are some connectors that you might want to try (I haven't used these myself):
pg8000
python-pgsql
py-postgresql
As long as you're using DB-API calls, you probably ought to consider cursor.executemany() instead of repeatedly calling cursor.execute().
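For instance, a hedged sketch of the executemany() form with psycopg2, reusing the insert from the question (cur is an open cursor):

sql = 'insert into abc2 (intfield, textfield) values (%s, %s)'
cur.executemany(sql, ((i, 'test') for i in range(10000)))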
Also, binding parameters to their query in the server (instead of in the connector) is not always going to be faster in PostgreSQL. Note this FAQ entry.
