I'm trying to run a python script with multiple threads, but I'm getting the following error:
sqlite3.OperationalError: database is locked
I've found out that I need to extend the sqlite3_busy_timeout to make it wait a bit longer before writing to the database.
The code used for this looks like the following:
'db.configure("busyTimeout", 10000)' //This should make it wait for 10 seconds.
What I want to know is: how do I implement this code? Where should I place it, before or after the SQLite command? Also, do I have to write anything before it, like c.execute("code")?
You can set the timeout with the busy_timeout pragma. Here is an example setting the busy timeout to 10 seconds:
import sqlite3
with sqlite3.connect('example.db') as db:
    db.execute('pragma busy_timeout=10000')
    # do more work with db...
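Note that sqlite3.connect() also accepts a timeout argument (in seconds, default 5), which controls how long the connection waits for another thread's lock to clear before raising OperationalError. A minimal sketch of that alternative:

import sqlite3

# Wait up to 10 seconds for a competing lock to clear,
# instead of the 5-second default.
db = sqlite3.connect('example.db', timeout=10)
db.execute('CREATE TABLE IF NOT EXISTS example (id INTEGER)')
db.commit()
db.close()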
I am not proficient in Python, but I have written Python code that executes a SQL Server stored procedure which itself calls multiple stored procedures, so it usually takes 5 minutes or so to run in SSMS.
When I run the Python code, I can see the stored procedure run only about halfway through without error, which makes me think it somehow needs more time to execute when called from Python.
I found other posts where people suggested subprocess, but I don't know how to code this. Below is an example (not mine) of Python code that executes the stored procedure.
mydb_lock = pyodbc.connect('Driver={SQL Server Native Client 11.0};'
                           'Server=localhost;'
                           'Database=InterelRMS;'
                           'Trusted_Connection=yes;'
                           'MARS_Connection=yes;'
                           'user=sa;'
                           'password=Passw0rd;')
mycursor_lock = mydb_lock.cursor()
sql_nodes = "Exec IVRP_Nodes"
mycursor_lock.execute(sql_nodes)
mydb_lock.commit()
How can I edit the above code to use subprocess? Is subprocess the right choice? Is there any other method you can suggest?
Many thanks.
Python 2.7 and 3
SQL Server
UPDATE 04/04/2022:
@AlwaysLearning, I tried
NEWcnxn = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+ password+';Connection Timeout=0')
But there was no change. To check how much of the code executes, I inserted the following two lines of code right after each other in the nested procedure, at the point where I thought the SP stopped.
INSERT INTO CheckTable (OrgID,Stage,Created) VALUES(@OrgID,2.5331,getdate())
INSERT INTO CheckTable (OrgID,Stage,Created) VALUES(@OrgID,2.5332,getdate())
Only the first query completed. I'm using an Azure database, if that helps.
UPDATE 05/04/2022:
I tried what @AlwaysLearning suggested: after my connection, I added NEWconxn.timeout=4000 and it's working now. Many thanks.
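For later readers, a minimal sketch of that fix (the server, database, and credentials below are placeholders; in pyodbc, Connection.timeout is the per-query timeout in seconds, and 0 means wait indefinitely):

import pyodbc

# Placeholder connection string; substitute your own values.
NEWconxn = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};'
                          'SERVER=myserver;DATABASE=mydb;'
                          'UID=myuser;PWD=mypassword')
NEWconxn.timeout = 4000  # allow queries to run up to 4000 seconds

cursor = NEWconxn.cursor()
cursor.execute("EXEC IVRP_Nodes")
NEWconxn.commit()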
In my program there are multiple asynchronous functions updating data in a database.
There can be cases where the functions execute in parallel.
My questions are:
In my case, do I need to create a new connection each time in a function, or will having a single connection throughout the program work fine?
Second, in the case of a single connection, is it necessary to close it at the end?
Also, please recommend the best tool to access the .db file outside the code [just in case], one that won't interrupt the code's connections to the database even if I make some changes manually outside the code.
Note: I'm on Windows.
Thanks!
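One way to structure the single-connection approach (a sketch assuming SQLite, since a .db file is mentioned; the table and columns are hypothetical) is to open the connection once and guarantee it is closed on exit:

import sqlite3
from contextlib import closing

# One connection for the whole program; closing() guarantees close()
# runs on exit, even if an update raises an exception.
with closing(sqlite3.connect('app.db')) as db:
    db.execute('UPDATE settings SET value = ? WHERE key = ?', ('on', 'sync'))
    db.commit()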
I have a Python script which reads files into my tables in MySQL. This program runs automatically every now and then. However, I'm afraid of 2 things:
There might come a time when the program stops running because it can't connect to the MySQL server. There are a lot of processes depending on these tables, so if the tables are not up to date, the rest of my process will also stop working.
A file might sneak into the process which does not have the expected content. After the script finishes running, every value of column X must have 12 rows. If it does not have 12 rows, the files did not have the right content in them.
My question is: Is there something I can do to tackle this before it happens? For example, send an e-mail to myself so I'm notified if the connection fails or if a certain value does not have 12 rows, or run the program on another server?
I'm very eager to know how you guys handle these situations.
I have a very simple connection made like this:
mydb = mysql.connector.connect(
    host='localhost',
    user='root',
    passwd='*****.',
    database='my_database'
)
The event you are talking about is very unlikely to happen; the only situation where I could see this happening is when your database runs out of memory. For that, you can set aside a couple of minutes every 2 to 3 days to check the amount of memory left on your server.
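If you do want the e-mail alert described in the question, one rough sketch is to wrap the connect call and notify yourself on failure (the addresses and the localhost SMTP server are assumptions; the 12-row check could call the same helper after counting rows):

import smtplib
from email.message import EmailMessage
import mysql.connector

def notify(error):
    # Send a plain-text alert; assumes an SMTP server on localhost.
    msg = EmailMessage()
    msg['Subject'] = 'MySQL load job failed'
    msg['From'] = 'loader@example.com'
    msg['To'] = 'me@example.com'
    msg.set_content(str(error))
    with smtplib.SMTP('localhost') as smtp:
        smtp.send_message(msg)

try:
    mydb = mysql.connector.connect(host='localhost', user='root',
                                   passwd='*****', database='my_database')
except mysql.connector.Error as err:
    notify(err)
    raise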
It doesn't have to be exactly a trigger inside the database. I just want to know how I should design this so that when changes are made inside MySQL or SQL Server, some script can be triggered.
One way would be to keep a counter on the last updated row in the database and then keep polling (checking) the database from Python for new records at short intervals.
If the value of the counter has increased, you could use the subprocess module to call another Python script, as sketched below.
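A rough sketch of that polling loop (the table MyTable, its auto-increment id column, and handler.py are all hypothetical):

import time
import subprocess
import mysql.connector

conn = mysql.connector.connect(host='localhost', user='root',
                               passwd='*****', database='my_database')
last_seen = 0  # highest id handled so far

while True:
    cursor = conn.cursor()
    cursor.execute('SELECT MAX(id) FROM MyTable')
    (max_id,) = cursor.fetchone()
    cursor.close()
    if max_id is not None and max_id > last_seen:
        # New rows arrived; hand off to another script.
        subprocess.run(['python', 'handler.py', str(max_id)])
        last_seen = max_id
    time.sleep(5)  # poll at short intervals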
It's possible to execute an external script from a MySQL trigger, but I've never used it and I don't know the implications of something like this.
MySQL provides a way to implement your own functions; it's called User Defined Functions (UDFs). With this you can define your own functions and call them from MySQL events. You need to write your own logic in a C program following the interface provided by MySQL.
Fortunately, someone already wrote a library for calling an external program from MySQL: LIB_MYSQLUDF_SYS. After installing it, the following trigger should work:
CREATE TRIGGER Test_Trigger
AFTER INSERT ON MyTable
FOR EACH ROW
BEGIN
    DECLARE cmd CHAR(255);
    DECLARE result INT(10);
    SET cmd = CONCAT('/YOUR_SCRIPT');
    SET result = sys_exec(cmd);
END;
I'm trying to call a stored procedure in my MSSQL database from a Python script, but it does not run completely when called via Python. This procedure consolidates transaction data into hourly/daily blocks in a single table, which is later grabbed by the Python script. If I run the procedure in SQL Server Management Studio, it completes just fine.
When I run it via my script, it gets cut short about two-thirds of the way through. Currently I have found a workaround by making the program sleep for 10 seconds before moving on to the next SQL statement, but this is not time-efficient and is unreliable, as some procedures may not finish in that time. I'm looking for a more elegant way to implement this.
Current Code:
cursor.execute("execute mySP")
time.sleep(10)
cursor.commit()
The article most closely related to my issue is this one:
make python wait for stored procedure to finish executing
I tried the solution using Tornado and I/O generators, but ran into the same issue listed in the article, which was never resolved. I also tried the accepted solution, having my stored procedure set a RunningStatus field in the database: at the beginning of my SP, Status is updated to 1 in RunningStatus, and when the SP finishes, Status is updated to 0 in RunningStatus. Then I implemented the following Python code:
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
sconn = pyodbc.connect(conn_str)
scursor = sconn.cursor()
cursor.execute("execute mySP")
cursor.commit()
while 1:
    q = scursor.execute("SELECT Status FROM RunningStatus").fetchone()
    if q[0] == 0:
        break
When I implement this, the same problem happens as before, with my stored procedure appearing to finish executing prior to it actually being complete. If I eliminate my cursor.commit(), as follows, I end up with the connection just hanging indefinitely until I kill the Python process.
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
sconn = pyodbc.connect(conn_str)
scursor = sconn.cursor()
cursor.execute("execute mySP")
while 1:
    q = scursor.execute("SELECT Status FROM RunningStatus").fetchone()
    if q[0] == 0:
        break
Any assistance in finding a more efficient and reliable way to implement this, as opposed to time.sleep(10), would be appreciated.
As the OP found out, inconsistent or incomplete processing of stored procedures from an application layer like Python may be due to straying from best practices of T-SQL scripting.
As @AaronBertrand highlights in his Stored Procedures Best Practices Checklist blog post, consider the following, among other items:
Explicitly and liberally use BEGIN ... END blocks;
Use SET NOCOUNT ON to avoid messages being sent to the client for every row-affecting action, which can interrupt the workflow;
Use semicolons for statement terminators.
Example
CREATE PROCEDURE dbo.myStoredProc
AS
BEGIN
    SET NOCOUNT ON;

    SELECT * FROM foo;
    SELECT * FROM bar;
END
GO
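On the Python side, a complementary fix worth noting (assuming a pyodbc cursor, as in the question): pyodbc only runs a batch to completion as the client consumes its result sets, so draining them with nextset() before committing can keep the procedure from being cut short:

cursor.execute("EXEC dbo.myStoredProc")
# Drain every result set the procedure produces; the batch keeps
# executing as the client reads through them.
while cursor.nextset():
    pass
cursor.commit()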