I have a SQL Server T-SQL query that contains multiple INSERT statements, ranging from pretty basic to somewhat complex.
The query works in SQL Server Management Studio.
When I run the script with the Python pypyodbc package, it runs but does not commit. I have tried with and without the commit() function.
BUT if I specify a SELECT statement at the end, the script commits the inserts.
So it works, but only because I am tacking an irrelevant SELECT statement onto the end of all of my scripts.
Does anyone have any ideas on how I can get these inserts to commit without the SELECT statement at the end? I do not want to split the query up into multiple queries.
Thank you!
def execute_query(self,
                  query,
                  tuple_of_query_parameters,
                  commit=False,
                  return_insert_id=False,
                  return_results=True):
    self.open_cursor()
    try:
        self.connection_cursor.execute(query,
                                       tuple_of_query_parameters)
        result_set = None
        if return_results:
            if return_insert_id:
                result_set = self.connection_cursor.fetchone()[0]
            else:
                result_set = self.connection_cursor.fetchall()
        if commit:
            self.connection_cursor.commit()
    except pypyodbc.Error as e:
        print('Check for "USE" in script!!!')
        raise
    finally:
        self.close_cursor()
    return result_set
Try this:
self.connection_cursor.execute(query,
                               tuple_of_query_parameters)
if commit:
    self.connection_cursor.commit()  # put commit here, immediately after execute
I think that will do the trick.
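For an INSERT-only batch, the call would then look something like this (db and insert_script are placeholder names for your wrapper instance and your multi-INSERT T-SQL string):

db.execute_query(insert_script,             # the multi-INSERT T-SQL batch
                 (param_one, param_two),    # placeholder query parameters
                 commit=True,
                 return_results=False)      # nothing to fetch; commit fires right after execute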
I'm using the mysql-connector-python library for a script I wrote to access a MySQL 8 database:
def get_document_by_id(conn, id):
    cursor = conn.cursor(dictionary=True)
    try:
        cursor.execute("SELECT * FROM documents WHERE id = %s", (id,))
        return cursor.fetchone()
    except Exception as e:
        print(str(e))
        return {}
    finally:
        cursor.close()
Since I need to call this function multiple times during a loop, I was wondering about the following:
Does creating/closing a cursor actually interact with my MySQL database in any way, or is it just an abstraction in Python for grouping SQL queries together?
Thanks a lot for your help.
The documentation says this clearly:
For a connection obtained from a connection pool, close() does not
actually close it but returns it to the pool and makes it available
for subsequent connection requests.
You can also refer to Connector/Python Connection Pooling for further information.
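For context, here is a minimal sketch of what pooled connections look like with Connector/Python (the pool name, pool size, and dbconfig values are placeholders):

from mysql.connector import pooling

# Placeholder connection settings; adjust for your server.
dbconfig = {"host": "localhost", "user": "app", "password": "secret", "database": "mydb"}

pool = pooling.MySQLConnectionPool(pool_name="mypool", pool_size=5, **dbconfig)

conn = pool.get_connection()   # borrow a connection from the pool
try:
    doc = get_document_by_id(conn, 42)
finally:
    conn.close()               # returns the connection to the pool, as the docs describe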
This is super basic, but I cannot seem to get it to work correctly; most of the querying I've done in Python has been with the Django ORM.
This time I'm just looking to do a simple insert of data with Python MySQLdb. I currently have:
phone_number = toasted_tree.xpath('//b/text()')
try:
    # the query to execute.
    connector.execute("""INSERT INTO mydbtable(phone_number) VALUES(%s) """, (phone_number))
    conn.commit()
    print 'success!'
except:
    conn.rollback()
    print 'failure'
conn.close()
The issue is that it keeps hitting the except block. I've triple-checked my connection settings and ran a test query directly against MySQL, like INSERT INTO mydbtable(phone_number) VALUES(1112223333);, and it works fine.
Is my syntax above wrong?
Thank you
We can't tell what the problem is, because your except block is catching and swallowing all exceptions. Don't do that.
Remove the try/except and let Python report what the problem is. Then, if it's something you can deal with, catch that specific exception and add code to do so.
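For example, something along these lines (a sketch that reuses the connector and conn objects from the question) surfaces the real error while still rolling back:

import MySQLdb

try:
    connector.execute("""INSERT INTO mydbtable(phone_number) VALUES(%s) """, (phone_number))
    conn.commit()
    print 'success!'
except MySQLdb.Error as e:
    # Roll back, but report the actual error instead of swallowing it.
    conn.rollback()
    print 'insert failed: %s' % (e,)
    raise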
Context
So I am trying to figure out how to properly override the automatic transaction handling when using SQLite in Python. When I try to run
cursor.execute("BEGIN;")
.....an assortment of insert statements...
cursor.execute("END;")
I get the following error:
OperationalError: cannot commit - no transaction is active
Which I understand is because SQLite in Python automatically opens a transaction on each modifying statement, which in this case is an INSERT.
Question:
I am trying to speed up my insertion by doing one transaction per several thousand records.
How can I overcome the automatic opening of transactions?
As #CL. said you have to set isolation level to None. Code example:
import sqlite3

s = sqlite3.connect("./data.db")
s.isolation_level = None  # no implicit BEGIN, so the explicit begin/commit below work
try:
    c = s.cursor()
    c.execute("begin")
    ...
    c.execute("commit")
except:
    c.execute("rollback")
The documentation says:
You can control which kind of BEGIN statements sqlite3 implicitly executes (or none at all) via the isolation_level parameter to the connect() call, or via the isolation_level property of connections.
If you want autocommit mode, then set isolation_level to None.
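Putting that together for the "one transaction per several thousand records" goal in the question (the table and column names here are made up for illustration):

import sqlite3

conn = sqlite3.connect("./data.db", isolation_level=None)  # autocommit mode: no implicit BEGIN
cur = conn.cursor()

cur.execute("BEGIN")
cur.executemany("INSERT INTO records (value) VALUES (?)",
                [(v,) for v in range(5000)])
cur.execute("COMMIT")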
I am using the Python MySQLdb module. I am querying a table every second. New rows are being added to this table all the time. The code is as follows:
def main():
    connectMySqlDb_tagem()
    while True:
        queryTable()
        time.sleep(1)
    closeDBconnection()
The problem is that the query does not return the latest rows; it always returns the same rows. To solve this, I have to close the MySQL connection and open a new one every time. The working code looks like this:
def main():
    while True:
        connectMySqlDb_tagem()
        queryTable()
        closeDBconnection()
        time.sleep(1)
How can I avoid making a new connection every time in order to get the latest rows?
Pass SQL_NO_CACHE in your SELECT query, or turn the query cache off at the session level:
cursor.execute("SET SESSION query_cache_type = OFF")
See also:
How to turn off MySQL query cache while using SQLAlchemy?
MySQL - force not to use cache for testing speed of query
Hope that helps.
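For the per-query option, the hint goes right after SELECT (the table and column names below are placeholders):

cursor.execute("SELECT SQL_NO_CACHE * FROM my_table WHERE created_at > %s", (last_seen,))
rows = cursor.fetchall()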
Try:
connection.commit()
instead of disconnecting and reconnecting, and see if it solves your problem.
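A minimal sketch of what that looks like in the loop from the question (it assumes the connection object created by connectMySqlDb_tagem() is accessible as connection):

def main():
    connectMySqlDb_tagem()
    while True:
        queryTable()
        connection.commit()  # end the current read snapshot so the next query sees new rows
        time.sleep(1)
    closeDBconnection()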
I'm writing a Python CGI script that will query a MySQL database. I'm using the MySQLdb module. Since the database will be queried repeatedly, I wrote this function:
def getDatabaseResult(sqlQuery, connectioninfohere):
    # connect to the database
    vDatabase = MySQLdb.connect(connectioninfohere)
    # create a cursor, execute an SQL statement and get the result as a tuple
    cursor = vDatabase.cursor()
    try:
        cursor.execute(sqlQuery)
    except:
        cursor.close()
        return None
    result = cursor.fetchall()
    cursor.close()
    return result
My question is: is this best practice, or should I reuse my cursor within my functions? For example, which is better:
def callsANewCursorAndConnectionEachTime():
    result1 = getDatabaseResult(someQuery1)
    result2 = getDatabaseResult(someQuery2)
    result3 = getDatabaseResult(someQuery3)
    result4 = getDatabaseResult(someQuery4)
or do away with the getDatabaseResult function altogether and do something like:
def reusesTheSameCursor():
    vDatabase = MySQLdb.connect(connectionInfohere)
    cursor = vDatabase.cursor()
    cursor.execute(someQuery1)
    result1 = cursor.fetchall()
    cursor.execute(someQuery2)
    result2 = cursor.fetchall()
    cursor.execute(someQuery3)
    result3 = cursor.fetchall()
    cursor.execute(someQuery4)
    result4 = cursor.fetchall()
The MySQLdb developer recommends building an application-specific API that does the DB access for you, so you don't have to worry about the MySQL query strings in the application code. It makes the code a bit more extensible (link).
As for cursors, my understanding is that it is best to create a cursor per operation/transaction. So a check value -> update value -> read value type of transaction could use the same cursor, but for the next one you would create a new one. This again points toward building an internal API for the DB access instead of having a generic executeSql method.
Also remember to close your cursors, and commit changes to the connection after the queries are done.
Your getDatabaseResult function doesn't need to open a new connection for every separate query, though. You can share the connection between queries as long as you handle the cursors responsibly.
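A rough sketch of that "shared connection, cursor per operation" idea (the class, table, and column names are made up for illustration):

import MySQLdb

class PhoneBookDB(object):
    def __init__(self, **connection_info):
        # one connection shared by all operations
        self.connection = MySQLdb.connect(**connection_info)

    def get_phone_numbers(self):
        cursor = self.connection.cursor()
        try:
            cursor.execute("SELECT phone_number FROM mydbtable")
            return cursor.fetchall()
        finally:
            cursor.close()

    def add_phone_number(self, phone_number):
        cursor = self.connection.cursor()
        try:
            cursor.execute("INSERT INTO mydbtable(phone_number) VALUES(%s)",
                           (phone_number,))
            self.connection.commit()
        finally:
            cursor.close()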