pyODBC insert failing silently

I'm using Python 3.4 (ActiveState) and pyodbc 3.0.7 on a Windows 7 box to connect to a SQL Server 2008 R2 database running on Windows NT 6.1.
The problem I'm having is that the code below fails silently. No changes are made to the database.
connection = pyodbc.connect("DRIVER={SQL Server};SERVER=(local);DATABASE=Kerb;UID=sa;PWD=password", autocommit=True)
cursor = connection.cursor()
cursor.execute('''INSERT INTO [My].[Sample] ([Case]) VALUES (1);''')
I've also attempted to force the insert with a commit statement (which, unless I'm mistaken, shouldn't be necessary given autocommit=True); this also fails with no output.
cursor.execute('''INSERT INTO [My].[Sample] ([Case]) VALUES (1);''')
cursor.commit()
So my workaround so far has been to add a sleep, which has solved the problem. But I worry about using this in production, as it doesn't account for network lag, etc.
cursor.execute('''INSERT INTO [My].[Sample] ([Case]) VALUES (1);''')
time.sleep(1)
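One simple check before reaching for sleep: a minimal sketch (reusing the question's connection string) that turns autocommit off, inspects cursor.rowcount, and commits explicitly on the connection, which can show whether the INSERT reaches the server at all:
import pyodbc

connection = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=(local);DATABASE=Kerb;UID=sa;PWD=password",
    autocommit=False)  # manage the transaction by hand while debugging
cursor = connection.cursor()
cursor.execute("INSERT INTO [My].[Sample] ([Case]) VALUES (1);")
print(cursor.rowcount)  # 1 means the INSERT reached the server
connection.commit()     # commit on the connection, per PEP 249
connection.close()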
I believe my question may be related to:
pyODBC and SQL Server 2008 and Python 3
If anyone has any ideas for further debugging, or has documentation regarding this bit of asynchronous behavior, I would love to hear it.
Thanks!

Unfortunately, it appears that pyodbc could not execute these insert statements without the added delay. I have started using pymssql, and the delay is no longer required for a successful commit.
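For reference, a rough pymssql equivalent of the insert might look like this (the credentials are carried over from the question, and 'localhost' in place of '(local)' is an assumption, since pymssql resolves server names differently):
import pymssql

conn = pymssql.connect(server='localhost', user='sa',
                       password='password', database='Kerb')
cursor = conn.cursor()
cursor.execute("INSERT INTO [My].[Sample] ([Case]) VALUES (1);")
conn.commit()  # pymssql does not autocommit by default
conn.close()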

Does BEGIN TRAN and COMMIT TRAN work in a SQL Server stored procedure? Is there a way to separate code within a stored procedure into batches?

I have a username which I must change in numerous (up to ~25) tables. (Yeah, I know.) An atomic transaction seems to be the way to go for this sort of thing. However, I do not know how to do this with pyodbc. I've seen various tutorials on atomic transactions before, but have never used them.
The setup: Windows platform, Python 2.6, pyodbc, Microsoft SQL 2005. I've used pyodbc for single SQL statements, but no compound statements or transactions.
Best practices for SQL seem to suggest that creating a stored procedure is excellent for this. My fears about doing a stored procedure are as follows, in order of increasing importance:
1) I have never written a stored procedure.
2) I heard that pyodbc does not return results from stored procedures as of yet.
3) This is most definitely Not My Database. It's vendor-supplied, vendor-updated, and so forth.
So, what's the best way to go about this?
According to its documentation, pyodbc does support transactions, but only if the ODBC driver supports them. Furthermore, since pyodbc is compliant with PEP 249, data is stored only when a manual commit is done.
This means that you have to explicitly commit() the transaction, or rollback() the entire transaction.
Note that pyodbc also supports an autocommit feature, and in that case you cannot have any transactions.
By default, autocommit is off, but your codebase might have turned it on.
You should check how the connection is created:
cnxn = pyodbc.connect(cstring, autocommit=True)
Alternatively, you can also explicitly turn off autocommit mode with
cnxn.autocommit = False
but this might have quite a big impact on your system.
Note: you can get more information on the autocommit mode of pyodbc on its wiki
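To make the commit/rollback pattern concrete, here is a minimal sketch of the multi-table username change as one transaction (the table and column names are invented for illustration, and cstring, old_name, and new_name are assumed to be defined):
import pyodbc

cnxn = pyodbc.connect(cstring, autocommit=False)
cursor = cnxn.cursor()
try:
    # all statements below run in the same implicit transaction
    cursor.execute("UPDATE users SET username = ? WHERE username = ?",
                   new_name, old_name)
    cursor.execute("UPDATE audit_log SET username = ? WHERE username = ?",
                   new_name, old_name)
    # ... repeat for the remaining ~25 tables ...
    cnxn.commit()    # everything becomes permanent at once
except Exception:
    cnxn.rollback()  # any failure undoes all of the updates
    raise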
I don't think pyodbc has any specific support for transactions. You need to send the SQL commands to begin/commit/rollback transactions yourself.

Python loses connection to MySQL database after about a day

I am developing a web-based application using Python, Flask, MySQL, and uWSGI. However, I am not using SQL Alchemy or any other ORM. I am working with a preexisting database from an old PHP application that wouldn't play well with an ORM anyway, so I'm just using mysql-connector and writing queries by hand.
The application works correctly when I first start it up, but when I come back the next morning I find that it has become broken. I'll get errors like mysql.connector.errors.InterfaceError: 2013: Lost connection to MySQL server during query or the similar mysql.connector.errors.OperationalError: 2055: Lost connection to MySQL server at '10.0.0.25:3306', system error: 32 Broken pipe.
I've been researching it and I think I know what the problem is. I just haven't been able to find a good solution. As best as I can figure, the problem is the fact that I am keeping a global reference to the database connection, and since the Flask application is always running on the server, eventually that connection expires and becomes invalid.
I imagine it would be simple enough to just create a new connection for every query, but that seems like a far from ideal solution. I suppose I could also build some sort of connection caching mechanism that would close the old connection after an hour or so and then reopen it. That's the best option I've been able to come up with, but I still feel like there ought to be a better one.
I've looked around, and most people that have been receiving these errors have huge or corrupted tables, or something to that effect. That is not the case here. The old PHP application still runs fine, the tables all have less than about 50,000 rows, and less than 30 columns, and the Python application runs fine until it has sat for about a day.
So, here's hoping someone has a good solution for keeping a continually open connection to a MySQL database. Or maybe I'm barking up the wrong tree entirely; if so, hopefully someone can set me straight.
I have it working now. Using pooled connections seemed to fix the issue for me.
import mysql.connector

# The first connect() call with a pool_name creates the pool.
mysql.connector.connect(
    host='10.0.0.25',
    user='xxxxxxx',
    passwd='xxxxxxx',
    database='xxxxxxx',
    pool_name='batman',
    pool_size=3
)

def connection():
    """Get a connection and a cursor from the pool"""
    db = mysql.connector.connect(pool_name='batman')
    return (db, db.cursor())
I call connection() before each query function and then close the cursor and connection before returning. Seems to work. Still open to a better solution though.
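For illustration, a hypothetical query function built on that helper (the table and column names are invented):
def get_user(user_id):
    db, cursor = connection()
    try:
        cursor.execute("SELECT name FROM users WHERE id = %s", (user_id,))
        return cursor.fetchone()
    finally:
        cursor.close()
        db.close()  # a pooled connection's close() returns it to the pool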
Edit
I have since found a better solution. (I was still occasionally running into issues with the pooled connections.) There is actually a dedicated library for Flask to handle MySQL connections, and it is almost a drop-in replacement.
From bash: pip install Flask-MySQL
Add MYSQL_DATABASE_HOST, MYSQL_DATABASE_USER, MYSQL_DATABASE_PASSWORD, MYSQL_DATABASE_DB to your Flask config. Then in the main Python file containing your Flask App object:
from flaskext.mysql import MySQL
mysql = MySQL()
mysql.init_app(app)
And to get a cursor on a managed connection: mysql.get_db().cursor()
All other syntax is the same, and I have not had any issues since. Been using this solution for a long time now.
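Pieced together, a minimal version of that setup might look like this (the host and credentials are placeholders):
from flask import Flask
from flaskext.mysql import MySQL

app = Flask(__name__)
app.config['MYSQL_DATABASE_HOST'] = '10.0.0.25'
app.config['MYSQL_DATABASE_USER'] = 'xxxxxxx'
app.config['MYSQL_DATABASE_PASSWORD'] = 'xxxxxxx'
app.config['MYSQL_DATABASE_DB'] = 'xxxxxxx'

mysql = MySQL()
mysql.init_app(app)

@app.route('/ping')
def ping():
    cursor = mysql.get_db().cursor()  # the extension manages the connection
    cursor.execute('SELECT 1')
    return str(cursor.fetchone())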

sqlalchemy + MySQL connection timeouts

I have a daemon that uses sqlalchemy to interact with a MySQL database. Since interaction is infrequent, the connections are prone to timing out. I've tried to fix the problem by setting various flags when creating the database engine, e.g. pool_recycle=3600, but nothing seems to help.
To help me debug the problem, I set the timeout of my local MySQL server to 10 seconds and tried the following program.
import time
import sqlalchemy

engine = sqlalchemy.engine.create_engine("mysql://localhost")

while True:
    connection = engine.connect()
    result = connection.execute("SELECT 1")
    print result.fetchone()
    connection.close()
    time.sleep(15)
Surprisingly, I continue to get exceptions like the following:
sqlalchemy.exc.OperationalError: (OperationalError) (2006, 'MySQL server has gone away')
However, if I remove the call to connection.close(), the problem goes away. What is going on here? Why isn't sqlalchemy trying to establish a new connection each time I call connect()?
I'm using Python 2.7.3 with sqlalchemy 0.9.8 and MySQL 5.5.40.
The documentation mentions:
MySQL features an automatic connection close behavior for connections that have been idle for eight hours or more. To circumvent having this issue, use the pool_recycle option, which controls the maximum age of any connection:
engine = create_engine('mysql+mysqldb://...', pool_recycle=3600)
You can just pass the pool_recycle parameter when create_engine is invoked.
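One detail worth noting: pool_recycle only helps if it is shorter than the server-side timeout, which is why the pool_recycle=3600 from the question had no effect against a 10-second timeout. A sketch of what the test program's engine would need instead:
import sqlalchemy

# recycle pooled connections before the server's 10-second idle timeout
engine = sqlalchemy.engine.create_engine("mysql://localhost", pool_recycle=5)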
I'm not 100% sure whether this will fix your problem, but I ran into a similar issue when dealing with MySQL. It was only fixed when I used pymysql to connect to the database.
You can install like this:
pip install pymysql
which will work on both Linux and Windows (provided pip is installed).
Then give your connection string like this:
import time
import sqlalchemy

engine = sqlalchemy.engine.create_engine("mysql+pymysql://localhost")

while True:
    connection = engine.connect()
    result = connection.execute("SELECT 1")
    print result.fetchone()
    connection.close()
    time.sleep(15)
I get the following output when I run it:
(1,)
(1,)
(1,)
(1,)
On another note, I have found certain queries to break with SQLAlchemy 0.9.8. I had to install version 0.9.3 to keep my applications from breaking.

Make Python access to MS Access faster on Windows

I am on Windows Server 2003 and accessing a locally stored MS Access 2000 MDB from Python 2.5.4 scripts using pyodbc 2.1.5.
The DB access is very slow this way (I am on a fast machine and all other DB operations are normal), and I wonder if there is a better way to access the MDB from Python. Maybe a better ODBC driver?
This is an example script like I use:
import pyodbc

cstring = r'DRIVER={Microsoft Access Driver (*.mdb)};DBQ=t:\data.mdb'
conn = pyodbc.connect(cstring)
cursor = conn.cursor()
sql = "UPDATE ..."
cursor.execute(sql)
conn.commit()
conn.close()
Try setting up your connection once at program startup and then reusing it everywhere, rather than closing it after every execute or commit.
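A sketch of that restructuring, assuming the same connection string as the question (the UPDATE statement stays elided):
import pyodbc

cstring = r'DRIVER={Microsoft Access Driver (*.mdb)};DBQ=t:\data.mdb'
conn = pyodbc.connect(cstring)  # open once at startup

def run_update(sql):
    cursor = conn.cursor()
    cursor.execute(sql)
    conn.commit()  # commit the work, but keep the connection open

# ... call run_update("UPDATE ...") as often as needed ...
conn.close()  # close once at shutdown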
Tony's suggestion makes the most sense to me. However, if it's not enough, you could also try a later version of the driver, such as this one that works with Office 2007 files (as well as older versions, of course). You can download and install it even if you don't have Office.
Once you have it installed, try a connection string like this (the ACE driver's ODBC form; note that an OLE DB string such as Provider=Microsoft.ACE.OLEDB.12.0 will not work with pyodbc):
DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=T:\data.mdb;

SQL queries through PYODBC fail silently on one machine, works on another

I am working on a program to automate parsing data from XML files and storing it into several databases. (Specifically the USGS realtime water quality service, if anyone's interested, at http://waterservices.usgs.gov/rest/WaterML-Interim-REST-Service.html) It's written in Python 2.5.1 using lxml and pyodbc. The databases are in Microsoft Access 2000.
The connection function is as follows:
import pyodbc

def get_AccessConnection(db):
    connString = 'DRIVER={Microsoft Access Driver (*.mdb)};DBQ=' + db
    cnxn = pyodbc.connect(connString, autocommit=False)
    cursor = cnxn.cursor()
    return cnxn, cursor
where db is the filepath to the database.
The program:
a) opens the connection to the database
b) parses 2 to 8 XML files for that database and builds the values from them into a series of records to insert into the database (using a nested dictionary structure, not a user-defined type)
c) loops through the series of records, cursor.execute()-ing an SQL query for each one
d) commits and closes the database connection
If the cursor.execute() call throws an error, it writes the traceback and the query to the log file and moves on.
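In outline, step (c) with that error handling might look something like the following sketch (the record structure and log object are simplified placeholders, not the program's actual code):
import traceback

def run_inserts(cursor, queries, log):
    for sql in queries:
        try:
            cursor.execute(sql)
        except Exception:
            log.write(traceback.format_exc())
            log.write(sql + '\n')  # keep the failing query for inspection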
When my coworker runs it on his machine, for one particular database, specific records will simply not be there, with no errors recorded. When I run the exact same code on the exact same copy of the database over the exact same network path from my machine, all the data that should be there is there.
My coworker and I are both on Windows XP computers with Microsoft Access 2000 and the same versions of Python, lxml, and pyodbc installed. I have no idea how to check whether we have the same version of the Microsoft ODBC drivers. I haven't been able to find any difference between the records that are there and the records that aren't. I'm in the process of testing whether the same problem happens with the other databases, and whether it happens on a third coworker's computer as well.
What I'd really like to know is ANYTHING anyone can think of that would cause this, because it doesn't make sense to me. To summarize: Python code executing SQL queries will silently fail half of them on one computer and work perfectly on another.
Edit:
No more problem. I just had my coworker run it again, and the database was updated completely with no missing records. Still no idea why it failed in the first place, nor whether or not it will happen again, but "problem solved."
I have no idea how to check whether we have the same version of the Microsoft ODBC drivers.
I think you're looking for Control Panel | Administrative Tools | Data Sources (ODBC). Click the "Drivers" tab.
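As an aside, newer versions of pyodbc (4.x and later, so not the 2.x release used here) can also list the installed drivers directly:
import pyodbc
print(pyodbc.drivers())  # names of the ODBC drivers the driver manager sees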
I think either Access 2000 or Office 2000 shipped with a desktop edition of SQL Server called "MSDE". Might be worth installing that for testing. (Or production, for that matter.)
