Why does Windows give an sqlite3.OperationalError and Linux does not?

The problem
I've got a program that uses Storm 0.14, and it gives me this error on Windows:
sqlite3.OperationalError: database table is locked
The thing is, under Linux it works correctly.
I have the impression that it only happens after a certain number of changes have been made; it occurs in code that copies a lot of objects.
Turning on debug mode gives me this on Windows:
83 EXECUTE: 'UPDATE regularorder_product SET discount=? WHERE regularorder_product.order_id = ? AND regularorder_product.product_id = ?', (Decimal("25.00"), 788, 274)
84 DONE
85 EXECUTE: 'UPDATE repeated_orders SET nextDate=? WHERE repeated_orders.id = ?', (datetime.date(2009, 3, 31), 189)
86 ERROR: database table is locked
On Linux:
83 EXECUTE: 'UPDATE regularorder_product SET discount=? WHERE regularorder_product.order_id = ? AND regularorder_product.product_id = ?', (Decimal("25.00"), 789, 274)
84 DONE
85 EXECUTE: 'UPDATE repeated_orders SET nextDate=? WHERE repeated_orders.id = ?', (datetime.date(2009, 3, 31), 189)
86 DONE
System info
Windows
Windows XP SP 3
Python 2.5.4
NTFS partition
Linux
Ubuntu 8.10
Python 2.5.2
ext3 partition
Some code
def createRegularOrderCopy(self):
    newOrder = RegularOrder()
    newOrder.date = self.nextDate
    # the exception is thrown on the next line,
    # while calling self.products.__iter__;
    # this happens when this function is invoked the second time
    for product in self.products:
        newOrder.customer = self.customer
        newOrder.products.add(product)
    return newOrder
orders = getRepeatedOrders(date)
week = timedelta(days=7)
for order in orders:
    newOrder = order.createRegularOrderCopy()
    store.add(newOrder)
    order.nextDate = date + week
The question
Is there anything about sqlite3/Python that differs between Windows and Linux? What could be the reason for this bug, and how can I fix it?
Another observation
When adding a COMMIT at the place where the error happens, this error is thrown instead: sqlite3.OperationalError: cannot commit transaction - SQL statements in progress
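A likely explanation for that second error (my assumption, not confirmed for Storm) is that SQLite refuses to commit while another statement on the same connection still has unfetched rows, which matches the error appearing during iteration over self.products. A minimal sketch with the plain sqlite3 module, using a made-up table:

import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway example database
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])

cur = conn.execute("SELECT x FROM t")
rows = cur.fetchall()  # consume the result set completely...
cur.close()            # ...and close the cursor...
conn.commit()          # ...so no statements are in progress at commit time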
Answers to answers
I'm not using multiple threads or processes, so concurrency shouldn't be a problem, and I have only one Store object.

The "database table is locked" error is often a generic/default error in SQLite, so narrowing down your problem is not obvious.
Are you able to execute any SQL queries? I would start there, and get some basic SELECT statements working. It could just be a permissions issue.

Hard to say without a little more info on the structure of your database access (which is a little obscured by using Storm).
I'd start by reading these documents; they contain very relevant information:
https://storm.canonical.com/Manual#SQLite%20and%20threads
http://sqlite.org/lockingv3.html

Are you running any sort of anti-virus scanners? Anti-virus scanners will frequently lock a file after it has been updated, so that they can inspect it without it being changed. This may explain why you get this error after a lot of changes have been made; the anti-virus scanner has more new data to scan.
If you are running an anti-virus scanner, try turning it off and see if you can reproduce this problem.

It looks to me like Storm is broken, though my first guess was a virus scanner, as Brian suggested.
Have you tried using sqlite3_busy_timeout() to set the timeout very high? That might cause SQLite to wait long enough for the lock holder, whoever that is, to release the lock.
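In Python's sqlite3 module, that busy timeout is exposed as the timeout argument to connect(), which maps to sqlite3_busy_timeout() under the hood. A minimal sketch (the filename is made up):

import sqlite3

# Wait up to 30 seconds for a competing lock to clear before
# raising "database is locked" (the default is 5 seconds).
conn = sqlite3.connect("orders.db", timeout=30.0)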

For now I've solved the problem by replacing the sqlite3 DLL with the newest version. I'm still not sure whether this was a bug in the Windows build of SQLite or whether Python shipped an older version on Windows than on Linux.
Thanks for your help.

Related

Does pyodbc treat SQL warnings as errors?

I am writing Python code to connect to a MS SQL Server using pyodbc.
So far, things have been going smoothly and I have managed to call several stored procedures on the database.
Now, however, I have run into trouble. The stored procedure I am calling emits SQL warnings about null values:
Warning: Null value is eliminated by an aggregate or other SET operation. (8153)
While this is something that could/should be handled on the SQL side, I would like to simply ignore it for now at the Python level.
The code is called in a fairly standard way (I think), like so (not providing minimal code for now):
conn = None
try:
    # Connection is created in another class, but retrieved here. Works ok.
    conn = db_conn.connect_to_db()
    cur = conn.cursor()
    cur.execute(str_sql)
    # This is a hack, but I think it is unrelated to the issue.
    # Sorry, I haven't found a better way to make pyodbc wait for the SP to finish than this:
    # https://stackoverflow.com/questions/68025109/having-trouble-calling-a-stored-procedure-in-sql-server-from-python-pyodbc
    while cur.nextset():
        time.sleep(1)
    cur.commit()
    cur.close()
    return True
except db.Error as ex:
    log.error(str(ex.args[1]))
    raise ConnectionError(ex.args[1])
The problem is that the ConnectionError is raised on the SQL warning. Can pyodbc be configured to ignore it?
Related posts tell me to turn off ANSI warnings in the stored procedure, but I think that is a workaround (a session-level sketch follows below).
Other posts mention importing warnings and catch-all warning filters; I tried that, but it didn't work. I guess pyodbc sees it as an error, so the warning never reaches Python's warning machinery.
Did I misunderstand something, or is it not possible?
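For reference, that workaround applied at the session level might look roughly like this (a sketch only, reusing cur from the snippet above; the procedure name and parameter are hypothetical):

# Suppress the null-elimination warning for this session before
# invoking the stored procedure (the workaround mentioned above).
cur.execute("SET ANSI_WARNINGS OFF;")
cur.execute("EXEC my_stored_procedure ?;", param_value)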
Python version 3.7
Pyodbc version 4.0.32
ODBC Driver 17 for SQL Server
Called from macOS
Okay, so I did somewhat resolve my issue.
The stored procedure actually did produce an error in my call. I found this after testing the call directly on the database.
So, to answer my own question: no, pyodbc doesn't treat warnings as errors.
I did, however, only see the SQL warning in the errors (or at least as far as I could tell). The real error, thrown by a THROW 50001, ......, was nowhere to be seen in the pyodbc.Error.
I tried to make a minimal reproducible example, but failed to do so. The following code seems to ignore the throwing of the error. I assume I made some mistake and this kind of SQL string cannot be used. The expected behavior would be to land in the ERROR part, but instead the correct values are returned by fetchall.
import pyodbc

def test_warnings_after_errors():
    # Connect to your own MS SQL database (connection string is a placeholder)
    conn = pyodbc.connect(connection_string)
    cur = conn.cursor()
    try:
        cur.execute('''
            SELECT C1,
                   MAX(C2) as MaxC2
            FROM (VALUES (1, 1),
                         (1, 2),
                         (2, 4),
                         (1, NULL),
                         (2, 4)) as V(C1, C2)
            GROUP BY C1
            THROW 51000, 'Will we get this error back?', 1;
        ''')
        result = cur.fetchall()
        print(result)
    except pyodbc.Error as error:
        print(error.args[1])
    print("Executed sql")
If I remove the whole SELECT part, the error is thrown as expected. The code runs as written in Azure Data Studio against the server, and in that case it returns the error (and the warnings regarding null before that).
To actually remove my error I had to do a cleanup of data, but that was totally unrelated to the issues posted here.
In my case I can live with the "weird" SQL warning when an error is thrown, but it still puzzles me.
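One plausible explanation (an assumption, not verified here) is that the batch returns the SELECT's result set first, and pyodbc only surfaces the THROW once the cursor advances past it with nextset(). A sketch of draining every result set so late errors appear:

import pyodbc

def drain_results(cur):
    # Fetch every result set in a batch so errors raised by later
    # statements (e.g. a THROW after a SELECT) are surfaced.
    results = []
    while True:
        try:
            results.append(cur.fetchall())
        except pyodbc.ProgrammingError:
            pass  # this statement produced no rows; skip it
        if not cur.nextset():  # raises pyodbc.Error if a later statement threw
            break
    return results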

pyodbc MERGE INTO error: HY000: The driver did not supply an error

I'm trying to execute many (~1000) MERGE INTO statements against an Oracle 11.2.0.4.0 (64-bit) database using Python 3.9.2 (64-bit) and pyodbc 4.0.30 (64-bit). However, all the statements raise an exception:
HY000: The driver did not supply an error
I've tried everything I can think of to solve this problem, but no luck. I tried changing the code, the encodings/decodings, and the ODBC driver from Oracle Home 12.1 (64-bit) to Oracle Home 19.1 (64-bit). I also tried pyodbc 4.0.22, in which case the error changed to:
<class 'pyodbc.ProgrammingError'> returned a result with an error set
which is no more helpful than the first one. I assume the issue cannot be the MERGE INTO statements themselves, because when I run them directly in the database shell, they complete without issue.
Below is my code. I should also mention that the commands and parameters are read from stdin before being executed, and the database uses the UTF-8 character set.
import sys
import json
import pyodbc

cmds = sys.stdin.readlines()
comms = json.loads(cmds[0])
conn = pyodbc.connect(connstring)
conn.setencoding(encoding="utf-8")
cursor = conn.cursor()
cursor.execute("""ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MM-DD"T"HH24:MI:SS.....'""")
for comm in comms:
    params = [(None) if str(x) == 'None' or str(x) == 'NULL' else (x) for x in comm["params"]]
    try:
        cursor.execute(comm["sql"], params)
    except Exception as e:
        print(e)
conn.commit()
conn.close()
Edit: Another thing worth mentioning: this issue began after the update from Python 2.7 to 3.9.2. The code itself didn't require any changes at all in this particular location, though.
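Given that the problem surfaced with the 2.7 to 3.9 migration, one avenue worth sketching (an assumption, not a confirmed fix) is making the connection's decoding explicit as well as its encoding, since pyodbc's Unicode defaults differ between Python 2 and 3. This reuses connstring from the snippet above:

import pyodbc

conn = pyodbc.connect(connstring)

# Python 3 strings are Unicode; be explicit about how pyodbc encodes
# parameters and decodes results instead of relying on defaults.
conn.setencoding(encoding="utf-8")
conn.setdecoding(pyodbc.SQL_CHAR, encoding="utf-8")
conn.setdecoding(pyodbc.SQL_WCHAR, encoding="utf-8")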
I've had my share of HY000 errors in the past. It almost always came down to a syntax error in the SQL query. Double-check all your double and single quotes, and make sure the query works when run independently in an SQL session against your database.

MySQL stored procedure sometimes returns 0 rows

EDIT: I've now tried pyodbc as well as pymysql, and get the same result (zero rows returned when calling a stored procedure). I forgot to mention before that this is on Ubuntu 16.04.2 LTS using the MySQL ODBC 5.3 driver (libmyodbc5w.so).
I'm using pymysql (0.7.11) on Python 3.5.2, executing various stored procedures against a MySQL 5.6.10 database. I'm running into a strange, inconsistent issue where I occasionally get zero results back, though I can immediately re-run the exact same code and get the number of rows I expect.
The code is pretty straightforward...
from collections import OrderedDict
import pymysql
from pymysql.cursors import DictCursorMixin, Cursor

class OrderedDictCursor(DictCursorMixin, Cursor):
    dict_type = OrderedDict

try:
    connection = pymysql.connect(
        host=my_server,
        user=my_user,
        password=my_password,
        db=my_database,
        connect_timeout=60,
        cursorclass=pymysql.cursors.DictCursor
    )
    param1 = '2017-08-23 00:00:00'
    param2 = '2017-08-24 00:00:00'
    proc_args = tuple([param1, param2])
    proc = 'my_proc_name'
    cursor = connection.cursor(OrderedDictCursor)
    cursor.callproc(proc, proc_args)
    result = cursor.fetchall()
except Exception as e:
    print('Error: ', e)
finally:
    if not isinstance(connection, str):
        connection.close()
More often than not, it works just fine. But every once in a while, it completes almost instantly but with zero rows in the result set. No error that I can see, just nothing... Run it again, and no problem.
It turns out that the problem had nothing to do with pymysql, ODBC, etc., but rather with the order in which the parameters were passed to the stored procedure.
On my desktop I was using Python 3.6 and things worked just fine. I didn't realize, though, that one of the changes between 3.5.2 and 3.6 affected how items added to a dictionary object via json.loads were ordered.
The parameters being passed came from a dict originally populated via json.loads... since dicts were unordered pre-3.6, running the code would occasionally pass my starttime and endtime parameters to the MySQL stored procedure backwards. Hence, zero rows returned.
Once I realized that was the issue, fixing it was just a matter of adding object_pairs_hook=OrderedDict to the json.loads call, as sketched below.
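A minimal sketch of that fix (the key names match the parameters described above; the JSON document itself is made up):

import json
from collections import OrderedDict

raw = '{"starttime": "2017-08-23 00:00:00", "endtime": "2017-08-24 00:00:00"}'

# Without the hook, pre-3.6 interpreters may reorder the keys,
# silently swapping the positional arguments passed to callproc().
params = json.loads(raw, object_pairs_hook=OrderedDict)
proc_args = tuple(params.values())  # order now matches the JSON document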

Python equivalent of ignoreboth:erasedups

I'm running IPython (Jupyter) through Anaconda, on a Mac running Sierra, through iTerm, with $SHELL=bash; if I've missed any helpful setup details, just let me know.
I love the $HISTCONTROL aspect of bash, mentioned here. To sum that answer up: when traversing history (i.e., hitting the up arrow), it's helpful to remove duplicate entries so you don't scroll past the same command multiple times, and this is accomplished with $HISTCONTROL=ignoreboth:erasedups.
Is there any equivalent inside the Python interpreter (or IPython, specifically)? I have readline installed and feel like that's a good place to start, but nothing jumped out as obviously solving the problem, and I would have thought this was built in somewhere.
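For the plain Python interpreter (not IPython), a minimal sketch of erasedups-style behavior using the readline module might look like this; it rewrites the in-memory history, keeping only the most recent occurrence of each line:

import readline

def erase_dups():
    seen = set()
    keep = []
    # Walk history newest-to-oldest so the latest occurrence wins
    # (readline history items are 1-indexed).
    for i in range(readline.get_current_history_length(), 0, -1):
        line = readline.get_history_item(i)
        if line is not None and line not in seen:
            seen.add(line)
            keep.append(line)
    readline.clear_history()
    for line in reversed(keep):  # restore original (oldest-first) order
        readline.add_history(line)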
Through some deep diving into IPython, sifting through poorly explained and/or deprecated documentation, I've pieced together a solution that seems to work fine, though I'm sure it's not optimal, for a couple of reasons:
it runs a GROUP BY query on the history database every time I run a line in IPython
it doesn't take care to clean up/coordinate the database tables: I only modify history, and ignore the output_history and sessions tables
I put the following in a file (I named it dedupe_history.py, but name is irrelevant) inside $HOME/.ipython/profile_default/startup:
import IPython
import IPython.core.history as H

## spews a UserWarning about locate_profile() ... seems safe to ignore
HISTORY = H.HistoryAccessor()

def dedupe_history():
    query = ("DELETE FROM history WHERE rowid NOT IN "
             "(SELECT MAX(rowid) FROM history GROUP BY source)")
    db = HISTORY.db
    db.execute(query)
    db.commit()

def set_pre_run_cell_event():
    IPython.get_ipython().events.register("pre_run_cell", dedupe_history)

## dedupe history at start of new session - maybe that's sufficient, YMMV
dedupe_history()

## run dedupe history every time you run a command
set_pre_run_cell_event()

Named Parameters in SQL Queries - cx_Oracle - ORA-01460: unimplemented or unreasonable conversion requested

I have encountered a problem after implementing named parameters in raw SQL queries, as per the Python DB-API.
Earlier, my code was as follows (and it works fine, both on my DEV server and my client's test server):
cursor.execute("SELECT DISTINCT(TAG_STATUS) FROM TAG_HIST WHERE TAG_NBR = '%s' " % (TAG_NBR))
I changed it to the following:
cursor.execute("SELECT DISTINCT(TAG_STATUS) FROM TAG_HIST WHERE TAG_NBR = :TAG_NBR ", {'TAG_NBR': TAG_NBR})
This changed version (with named parameters) works fine on my development server:
Windows XP Oracle XE
SQL*Plus: Release 11.2.0.2.0
cx_Oracle-5.1.2-11g.win32-py2.7
However, when deployed on my client's test server, it does not; execution of all queries fails.
Characteristics of my client's server are as follows:
Windows Server 2003
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit
cx_Oracle-5.1.2-10g.win32-py2.7
The error that I get is as follows:
Traceback (most recent call last):
  File "C:/Program Files/App_Logic/..\apps\views.py", line 400, in regularize_TAG
    T_cursor.execute("SELECT DISTINCT(TAG_STATUS) FROM TAG_HIST WHERE TAG_NBR = :TAG_NBR " ,{'TAG_NBR':TAG_NBR})
DatabaseError: ORA-01460: unimplemented or unreasonable conversion requested
I'd appreciate it if someone could help me through this issue.
This issue presents itself only when the cx_Oracle code is run inside the web app (hosted on Apache). If I run the same code with named parameters from the Python command line, the query runs just fine.
Here is how this got solved.
I tried typecasting the unicode value to str, and the results were positive. This, for example, worked:
T_cursor.execute("SELECT DISTINCT(TAG_STATUS) FROM TAG_HIST WHERE TAG_NBR = :TAG_NBR", {'TAG_NBR': str(TAG_NBR)})
So in effect, the unicode value was getting mangled by being encoded into the potentially non-Unicode database character set.
To solve that, here is another option:
import os
os.environ.update([('NLS_LANG', '.UTF8'), ('ORA_NCHAR_LITERAL_REPLACE', 'TRUE')])
import cx_Oracle
The above guarantees that we are really in UTF-8 mode. The second environment variable is not an absolute necessity, and AFAIK there is no other way to set these variables (except before running the app itself), because NLS_LANG is read by the OCI libraries from the environment.
