I have a python script that makes about ten INSERTs into a MySQL database. This is its current structure:
conn = MySQLdb.connect(host=DB_HOST,
                       port=DB_PORT,
                       user=DB_USER,
                       passwd=DB_PASSWORD,
                       db=DB_NAME)
cursor = conn.cursor()
cursor.execute("INSERT INTO...")
# do some stuff, then another INSERT
cursor.execute("INSERT INTO...")
# do some other stuff, then another INSERT
cursor.execute("INSERT INTO...")
etc...
conn.commit()
cursor.close()
conn.close()
Is the above the correct way to do multiple inserts, or should I be closing the cursor after each INSERT? Should I be doing a commit after each INSERT?
should I be closing the cursor after each INSERT?
It doesn't much matter. Cursors are reused cleverly.
You can close them to be super careful of your resources.
Do this.
from contextlib import closing

with closing(conn.cursor()) as cursor:
    cursor.execute("INSERT INTO...")
This ensures that the cursor is closed no matter what kind of exception happens.
Should I be doing a commit after each INSERT?
That depends on what your application is expected to do.
If it's an "all or nothing" proposition, then you do one commit. All the inserts are good or none of them are.
If partial results are acceptable, then you can commit after each insert.
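As a runnable illustration of the all-or-nothing pattern, here is a minimal sketch using Python's built-in sqlite3 module instead of MySQLdb, so it needs no server; the commit()/rollback() calls are the same DB-API interface, and the table and values are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")

# All-or-nothing: one commit at the end; any failure rolls everything back.
try:
    cursor = conn.cursor()
    cursor.execute("INSERT INTO items VALUES (1, 'first')")
    cursor.execute("INSERT INTO items VALUES (2, 'second')")
    cursor.execute("INSERT INTO items VALUES (3, 'third')")
    conn.commit()      # all three inserts become visible together
except Exception:
    conn.rollback()    # none of the inserts survive
    raise

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 3
```

For the "partial results are acceptable" case you would simply move conn.commit() inside the loop, after each execute().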
connection.py File
def create_connection():
    connection = psycopg2.connect("dbname=suppliers user=postgres password=postgres")
    return connection

def create_cursor():
    connection = create_connection()
    cursor = connection.cursor()
    return cursor
The example above creates a connection twice when both create_connection() and create_cursor() are called, because create_cursor() calls create_connection() internally.
Query File
def common_query():
    sql_query = "INSERT INTO supply VALUES (1, 'test', 'test')"
    conn = create_connection()
    cursor = create_cursor()
    with conn:
        with cursor as cur:
            cur.execute(sql_query)
    conn.close()
The example above calls both create_connection() and create_cursor(). As you can see, create_connection() establishes one connection, and create_cursor() internally calls create_connection() again, creating a second one.
When the query executes, it neither raises an error nor inserts my data into the database.
Can anyone explain what is happening here?
You create two connections on each call to common_query(). One is closed explicitly; the other is closed at some point after it goes out of scope (Python is a garbage-collected language).
You don't commit on either one, so whatever work you did gets rolled back automatically. This is unrelated to the first point: the same thing would happen if you had created only one connection (and also didn't commit on it).
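For comparison, here is a minimal sketch of the same helpers reworked to create a single connection and actually commit. It uses the stdlib sqlite3 module as a stand-in for psycopg2 so it runs without a server; the supply table is taken from the question, and the CREATE TABLE line is only there to make the sketch self-contained:

```python
import sqlite3

def create_connection():
    # One place that opens the connection; sqlite3 stands in for psycopg2.
    return sqlite3.connect(":memory:")

def common_query():
    conn = create_connection()           # a single connection
    try:
        with conn:                       # commits on success, rolls back on error
            cur = conn.cursor()
            cur.execute("CREATE TABLE supply (id INTEGER, a TEXT, b TEXT)")
            cur.execute("INSERT INTO supply VALUES (1, 'test', 'test')")
        return conn.execute("SELECT COUNT(*) FROM supply").fetchone()[0]
    finally:
        conn.close()

rows = common_query()
print(rows)  # 1
```

In both sqlite3 and psycopg2, `with conn:` manages the transaction (commit on success, rollback on exception), which is exactly the commit that was missing in the original code.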
Every tutorial tells me that a database connection is a precious resource: we must close it after performing some operations and reopen it when we want to do something else. But I can only find one property (open) that indicates the connection status.
Does that mean I need to create a connection object for every query/update/delete?
If I don't want to create a connection for every operation (code like below), how do I safely destroy the connection?
import pymysql

connection = pymysql.connect(host='localhost',
                             user='root',
                             password='password',
                             db='blog',
                             charset='utf8',
                             cursorclass=pymysql.cursors.DictCursor)

with connection.cursor() as cursor:
    # Read all records
    sql = "SELECT * FROM categories"
    cursor.execute(sql)
    result = cursor.fetchall()
    print(result)

connection.close()

# do other things ..............................
# maybe return here and never reach the code below

# the code below raises an error
with connection.cursor() as cursor:
    # Read all records
    sql = "SELECT * FROM categories"
    cursor.execute(sql)
    result = cursor.fetchall()
    print(result)
Normally your program opens a database connection and keeps it open until the program finishes. Creating/opening a connection is an expensive operation, so normally you want to keep it open for the duration of your program. During the creation of a connection the database has to allocate resources for that connection. If you open and close a connection for every operation, this will negatively affect overall database performance, since the database needs exclusive access to those memory structures, and that causes waits.
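A minimal sketch of that pattern, using the stdlib sqlite3 module as a stand-in so it runs without a server (the categories table mirrors the question; the seed row is made up):

```python
import sqlite3

# Open one connection at program start and reuse it for every operation.
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE categories (id INTEGER, name TEXT)")
connection.execute("INSERT INTO categories VALUES (1, 'python')")
connection.commit()

def fetch_categories():
    # Cursors are cheap: open a fresh one per operation on the shared connection.
    cur = connection.cursor()
    try:
        cur.execute("SELECT * FROM categories")
        return cur.fetchall()
    finally:
        cur.close()

rows = fetch_categories()
print(rows)  # [(1, 'python')]

connection.close()  # once, when the program is finished
```

The connection is created once and closed once; only the cursors come and go per operation.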
I'm running this from PyDev in Eclipse...
import pymysql
conn = pymysql.connect(host='localhost', port=3306, user='userid', passwd='password', db='fan')
cur = conn.cursor()
print "writing to db"
cur.execute("INSERT INTO cbs_transactions(leagueID) VALUES ('test val')")
print "wrote to db"
The result is, at the top of the Console it says C:...test.py, and in the Console:
writing to db
wrote to db
So it's not terminating until after the execute command. But when I look in the table in MySQL it's empty. A record did not get inserted.
First off, why isn't it writing the record? Second, how can I see a log or an error message to find out what happened? Usually there's some kind of error in red if the code fails.
Did you commit it? conn.commit()
PyMySQL disables autocommit by default; you can add autocommit=True to connect():
conn = pymysql.connect(
    host='localhost',
    user='user',
    passwd='passwd',
    db='db',
    autocommit=True
)
or call conn.commit() after insert
You can either do
conn.commit() before calling close
or
enable autocommit via conn.autocommit(True) right after creating the connection object.
Both approaches have been suggested by various people on a duplicate of this question, which can be found here: Database does not update automatically with MySQL and Python
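The underlying behaviour, an uncommitted insert being silently discarded when the connection closes, can be reproduced with the stdlib sqlite3 module (no MySQL server needed; the file name and table are made up for the sketch):

```python
import os
import sqlite3
import tempfile

# A file-backed database so we can close and reopen the connection.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()

# Insert WITHOUT committing, then close the connection.
conn.execute("INSERT INTO t VALUES (1)")
conn.close()  # the pending transaction is not committed; the row is lost

conn = sqlite3.connect(path)
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 0 -- the uncommitted insert was discarded
conn.close()
```

This is exactly what the questioner saw: the execute() succeeds, no error is raised, and the table is still empty afterwards.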
The BACKUP statement can't be used in a transaction when it is executed through a pyodbc cursor. It seems that pyodbc executes the query inside a default transaction.
I have also tried using autocommit mode and adding a COMMIT statement before the BACKUP statement. Neither works.
# can't execute the backup statement in a transaction
cur.execute("backup database database_name to disk = 'backup_path'")

# not working either
cur.execute("commit;backup database database_name to disk = 'backup_path'")
Is it possible to execute the backup statement by pyodbc? Thanks in advance!
----- Added additional info -----
The backup operation is encapsulate in a function such as:
def backupdb(con, name, save_path):
    # with autocommit mode, should be pyodbc.connect(con, autocommit=True)
    con = pyodbc.connect(con)
    query = "backup database %s to disk = '%s'" % (name, save_path)
    cur = con.cursor()
    cur.execute(query)
    cur.commit()
    con.close()
If the function is called by following code,
backupdb('DRIVER={SQL Server};SERVER=.\sqlexpress;DATABASE=master;Trusted_Connection=yes',
         'DatabaseName',
         'd:\\DatabaseName.bak')
then the exception will be:
File "C:/Documents and Settings/Administrator/Desktop/bakdb.py", line 14, in <module>'d:\\DatabaseName.bak')
File "C:/Documents and Settings/Administrator/Desktop/bakdb.py", line 7, in backupdb cur.execute(query)
ProgrammingError: ('42000', '[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot perform a backup or restore operation within a transaction. (3021) (SQLExecDirectW); [42000] [Microsoft][ODBC SQL Server Driver][SQL Server]BACKUP DATABASE is terminating abnormally. (3013)')
With the keyword autocommit=True, the function runs silently, but no backup file is generated in the backup folder.
Assuming you are using SQL Server, specify autocommit=True when the connection is built:
>>> import pyodbc
>>> connection = pyodbc.connect(driver='{SQL Server Native Client 11.0}',
...                             server='InstanceName', database='master',
...                             trusted_connection='yes', autocommit=True)
>>> backup = "BACKUP DATABASE [AdventureWorks] TO DISK = N'AdventureWorks.bak'"
>>> cursor = connection.cursor().execute(backup)
>>> connection.close()
This is using pyodbc 3.0.7 with Python 3.3.2. I believe with older versions of pyodbc you needed to use Cursor.nextset() for the backup file to be created. For example:
>>> import pyodbc
>>> connection = pyodbc.connect(driver='{SQL Server Native Client 11.0}',
...                             server='InstanceName', database='master',
...                             trusted_connection='yes', autocommit=True)
>>> backup = "E:\AdventureWorks.bak"
>>> sql = "BACKUP DATABASE [AdventureWorks] TO DISK = N'{0}'".format(backup)
>>> cursor = connection.cursor().execute(sql)
>>> while cursor.nextset():
...     pass
>>> connection.close()
It's worth noting that I didn't have to use Cursor.nextset() for the backup file to be created with the current version of pyodbc and SQL Server 2008 R2.
After hours I found a solution. The operation must be performed on master, other sessions must be terminated, the DB must be set OFFLINE, then RESTORE, and then the DB set ONLINE again.
import pyodbc

def backup_and_restore():
    server = 'localhost,1433'
    database = 'myDB'
    username = 'SA'
    password = 'password'
    cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' + server +
                          ';DATABASE=MASTER;UID=' + username + ';PWD=' + password)
    cnxn.autocommit = True

    def execute(cmd):
        cursor = cnxn.cursor()
        cursor.execute(cmd)
        while cursor.nextset():
            pass
        cursor.close()

    execute("BACKUP DATABASE [myDB] TO DISK = N'/usr/src/app/myDB.bak'")
    # do something .......
    execute("ALTER DATABASE [myDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;")
    execute("ALTER DATABASE [myDB] SET OFFLINE;")
    execute("RESTORE DATABASE [myDB] FROM DISK = N'/usr/src/app/myDB.bak' WITH REPLACE")
    execute("ALTER DATABASE [myDB] SET ONLINE;")
    execute("ALTER DATABASE [myDB] SET MULTI_USER;")
I'm using MySQLdb module for Python to make some simple queries. When I do a certain UPDATE, it hangs for a while and finally gives this error:
operational error (1205 'lock wait timeout exceeded try restarting
transaction')
The code I'm using is the following:
def unselectAll():
    try:
        db = MySQLdb.connect(host='localhost', user='user', passwd='', db='mydatabase')
        cursor = db.cursor()
        cursor.execute('UPDATE MYTABLE SET Selected=0')
    except MySQLdb.Error, e:
        print 'ERROR ' + e.args[0] + ': ' + e.args[1]
If I run that query in the console, it works perfectly. Connecting without the db parameter and using mydatabase.MYTABLE in the query doesn't work either.
Any help?
This could be because the UPDATE isn't getting commited - have you tried autocommit=True for the connection? As in
db = MySQLdb.connect(host='localhost', user='user', passwd='', db='mydatabase', autocommit=True)
or maybe even
db.autocommit(True)
after you've created the connection.
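The 1205 error itself comes from another session holding a lock inside an uncommitted transaction. A rough sketch of the same situation using the stdlib sqlite3 module, since it needs no MySQL server; sqlite reports the conflict as "database is locked" rather than error 1205, and the table here is made up:

```python
import os
import sqlite3
import tempfile

# Two connections to the same file-backed database, with a short lock timeout.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=0.1)
b = sqlite3.connect(path, timeout=0.1)

a.execute("CREATE TABLE mytable (selected INTEGER)")
a.execute("INSERT INTO mytable VALUES (1)")
a.commit()

# Session A updates but never commits, so it keeps holding its write lock.
a.execute("UPDATE mytable SET selected = 0")

# Session B now waits for the lock and times out, like MySQL's error 1205.
try:
    b.execute("UPDATE mytable SET selected = 0")
    outcome = "ok"
except sqlite3.OperationalError as e:
    outcome = str(e)
print(outcome)  # database is locked

a.commit()  # committing (or rolling back) releases the lock for other sessions
```

That is why autocommit helps here: with it enabled, the earlier UPDATE from the console session (or a previous run of the script) is committed immediately instead of sitting in an open transaction holding the lock.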