Why doesn't my SQL execute? - python

I have a Python script that needs to update information in a database.
So, in my init() method, I open the connection. But when I call
the update method, the script never returns an answer; it seems
to be stuck in an infinite loop.
def update(self, id, newDescription):
    try:
        sql = """UPDATE table SET table.new_string=:1 WHERE table.id=:2"""
        con = self.connection.cursor()
        con.execute(sql, (newDescription, id))
        con.close()
    except Exception, e:
        self.errors += [str(e)]
What I've tried so far:
Changed the query, just to see if the connection was alright. When I did that (I used 'SELECT info FROM table'), the script worked.
I thought my query was wrong, but when I execute it in the SQL Developer
program, it runs fine.
What could be happening?
Thanks

You are forgetting to call commit.

I'm not sure exactly how to do it from a Python script, but I think you need to call commit before closing the connection; otherwise Oracle rolls back your transaction.
Try adding a commit() call before close(). Note that in your code con is actually the cursor; commit() belongs on the connection (self.connection.commit()).
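For instance, here is a minimal sketch of the fixed method, assuming a cx_Oracle-style connection object like the one in the question:

def update(self, id, newDescription):
    try:
        sql = """UPDATE table SET table.new_string=:1 WHERE table.id=:2"""
        cur = self.connection.cursor()
        cur.execute(sql, (newDescription, id))
        self.connection.commit()  # persist the change; otherwise it is rolled back on close
        cur.close()
    except Exception as e:
        self.errors += [str(e)]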

Related

Python script to create a backup file of a SQL Server database (with pyodbc)

I am trying to write a python script to do a backup of a SQL Server database and then restore it into a new database.
The SQL script itself seems to work fine when run in SQL Server:
BACKUP DATABASE TEST_DB
TO DISK = 'D:/test/test_db.BAK';
However, when I try to run it from a python script it fails:
con = pyodbc.connect(driver='{SQL Server}', server=r'xxxxxx',
                     database='TEST_DB', uid='xx', pwd='xxxxxx')
sql_cursor = con.cursor()
query = """BACKUP DATABASE TEST_DB
TO DISK = 'D:/test/test_db.BAK';"""
con.autocommit = True
sql_cursor.execute(query)
con.commit()
First of all, if I don't add the line "con.autocommit = True", it will fail with the message:
Cannot perform a backup or restore operation within a transaction. (3021)
No idea what a transaction is. I read in another post that the line "con.autocommit = True" removes the error, and indeed it does. I have no clue why though.
Finally, when I run the Python script with con.autocommit set to True, no errors are thrown and the BAK file appears briefly in the expected location ('D:/test/test_db.BAK'), but when the script finishes running, the BAK file disappears. Does anyone know why this is happening?
The solution, as described in this GitHub issue, is to call .nextset() repeatedly after executing the BACKUP statement ...
crsr.execute(backup_statement)
while crsr.nextset():
    pass
... to "consume" the progress messages issued by the BACKUP. If those messages are not consumed before the connection is closed then SQL Server figures that something went wrong and cancels the backup.
SSMS can apparently capture those messages directly, but an ODBC connection must issue calls to the ODBC SQLMoreResults function to retrieve each message, and that's what happens when we call the pyodbc .nextset() method.
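Putting the pieces together, a sketch of the full backup call (the server, credentials, and paths are placeholders carried over from the question):

import pyodbc

con = pyodbc.connect(driver='{SQL Server}', server=r'xxxxxx',
                     database='TEST_DB', uid='xx', pwd='xxxxxx')
con.autocommit = True  # BACKUP cannot run inside a transaction

crsr = con.cursor()
crsr.execute("BACKUP DATABASE TEST_DB TO DISK = 'D:/test/test_db.BAK';")

# Consume the progress messages so SQL Server finishes the backup.
while crsr.nextset():
    pass

crsr.close()
con.close()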

Should I call connect() and close() for every Sqlite3 transaction?

I want to write a Python module to abstract away database transactions for my application. My question is whether I need to call connect() and close() for every transaction? In code:
import sqlite3

# Can I put connect() here?
conn = sqlite3.connect('db.py')

def insert(args):
    # Or should I put it here?
    conn = sqlite3.connect('db.py')
    # Perform the transaction.
    c = conn.cursor()
    c.execute(''' insert args ''')
    conn.commit()
    # Do I close the connection here?
    conn.close()

# Or can I close the connection whenever the application restarts (ideally, very rarely)?
conn.close()
I don't have much experience with databases, so I'd appreciate an explanation of why one method is preferred over the other.
You can use the same connection repeatedly. You can also use the connection as a context manager; note that in sqlite3 the with block commits on success (or rolls back on error) rather than closing the connection, and sqlite3 cursors are not context managers, so wrap them in contextlib.closing if you want them closed automatically.
from contextlib import closing
import sqlite3

def insert(conn, args):
    with closing(conn.cursor()) as c:
        c.execute(...)
    conn.commit()

with sqlite3.connect('db.py') as conn:  # commits on success, rolls back on error
    insert(conn, ...)
    insert(conn, ...)
    insert(conn, ...)
There's no reason to close the connection to the database, and re-opening the connection each time can be expensive. (For example, you may need to establish a TCP session to connect to a remote database.)
Using a single connection will be faster, and operationally should be fine.
Use the atexit module if you want to ensure the close eventually happens, even if your program is terminated by an exception. Specifically, import atexit at the start of your program, and call atexit.register(conn.close) right after you connect. Note there are no parentheses after close: you want to register the function to be called at program exit (whether normal or via an exception), not to call it immediately.
Unfortunately, if Python crashes due, e.g., to an error in a C-coded module that Python can't catch, or a kill -9, etc., the registered exit function(s) may end up not being called. Fortunately, in that case it shouldn't hurt anyway (besides being, one hopes, a rare and extreme occurrence).
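A short sketch of that pattern (the table name t is hypothetical):

import atexit
import sqlite3

conn = sqlite3.connect('db.py')   # one long-lived connection
atexit.register(conn.close)       # registered, not called: no () after close

def insert(args):
    c = conn.cursor()
    c.execute('INSERT INTO t VALUES (?)', (args,))  # hypothetical table t
    conn.commit()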

Mysql Database Insert with Python via MySQLdb

This is super basic, but I cannot seem to get it to work correctly; most of the querying I've done in Python has been with the Django ORM.
This time I'm just looking to do a simple insert of data with Python MySQLdb. I currently have:
phone_number = toasted_tree.xpath('//b/text()')
try:
    # the query to execute.
    connector.execute("""INSERT INTO mydbtable(phone_number) VALUES(%s) """, (phone_number))
    conn.commit()
    print 'success!'
except:
    conn.rollback()
    print 'failure'
conn.close()
The issue is, it keeps hitting the except block. I've triple-checked my connection settings to MySQL and ran a test query directly against MySQL, like INSERT INTO mydbtable(phone_number) VALUES(1112223333); and it worked fine.
Is my syntax above wrong?
Thank you
We can't tell what the problem is, because your except block is catching and swallowing all exceptions. Don't do that.
Remove the try/except and let Python report what the problem is. Then, if it's something you can deal with, catch that specific exception and add code to do so.
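For example, a sketch that surfaces the real error while still rolling back. MySQLdb raises subclasses of MySQLdb.Error; also note that xpath() returns a list, so you may need phone_number[0], and the query parameters should be a tuple:

import MySQLdb

try:
    # note the trailing comma: the parameters must be a sequence
    connector.execute("""INSERT INTO mydbtable(phone_number) VALUES(%s)""",
                      (phone_number,))
    conn.commit()
    print 'success!'
except MySQLdb.Error as e:
    conn.rollback()
    print 'failure:', e
conn.close()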

Python SQLite - How to manually BEGIN and END transactions?

Context
So I am trying to figure out how to properly override the auto-transaction when using SQLite in Python. When I try to run
cursor.execute("BEGIN;")
.....an assortment of insert statements...
cursor.execute("END;")
I get the following error:
OperationalError: cannot commit - no transaction is active
I understand this is because SQLite in Python automatically opens a transaction on each modifying statement, which in this case is an INSERT.
Question:
I am trying to speed up my inserts by doing one transaction per several thousand records.
How can I overcome the automatic opening of transactions?
As @CL. said, you have to set the isolation level to None. Code example:
import sqlite3

s = sqlite3.connect("./data.db")
s.isolation_level = None
try:
    c = s.cursor()
    c.execute("begin")
    ...
    c.execute("commit")
except sqlite3.Error:
    c.execute("rollback")
The documentation says:
You can control which kind of BEGIN statements sqlite3 implicitly executes (or none at all) via the isolation_level parameter to the connect() call, or via the isolation_level property of connections.
If you want autocommit mode, then set isolation_level to None.
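As an illustration of the batched-insert pattern the question is after, here is a sketch assuming a hypothetical items table and made-up data:

import sqlite3

conn = sqlite3.connect("./data.db")
conn.isolation_level = None          # autocommit mode; we manage transactions ourselves
c = conn.cursor()

rows = [(i, 'value %d' % i) for i in range(10000)]  # hypothetical data

c.execute("begin")
try:
    c.executemany("INSERT INTO items VALUES (?, ?)", rows)  # hypothetical table
    c.execute("commit")
except sqlite3.Error:
    c.execute("rollback")
    raise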

GAE: Is it necessary to call fetch on a query before getting its cursor?

When the following code is executed:
q = MyKind.all()
taskqueue.add(url="/admin/build", params={'cursor': q.cursor()})
I get:
AssertionError: No cursor available.
Why does this happen? Do I need to fetch something first? (I'd rather not; the code is cleaner just to get the query and pass it on.)
I'm using Python on Google App Engine 1.3.5.
Yes, a cursor is only available if you've fetched something; there's no cursor for the first result in the query.
As a workaround, you could wrap the call to cursor() in a try/except and pass on None to the next task if there isn't a cursor available.
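A sketch of that workaround, assuming cursor() raises the AssertionError shown above whenever nothing has been fetched yet:

q = MyKind.all()
try:
    cursor = q.cursor()
except AssertionError:
    cursor = None  # nothing fetched yet, so no cursor is available
taskqueue.add(url="/admin/build", params={'cursor': cursor})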
