PyMySQL UPDATE query - Python

I've been trying out PyMySQL, and so far everything I did worked (SELECT/INSERT), but when I try to UPDATE it just doesn't work: no errors, nothing, it just doesn't do anything.
import pymysql
connection = pymysql.connect(...)
cursor = connection.cursor()
cursor.execute("UPDATE Users SET IsConnected='1' WHERE Username='test'")
cursor.close()
connection.close()
And yes, I've double-checked that Users, IsConnected and Username are all correct, and that 'test' exists (a SELECT on it works).
So what's my problem here?

When you execute your update, MySQL implicitly starts a transaction. You need to commit this transaction by calling connection.commit() after you execute your update, to keep the transaction from automatically rolling back when you disconnect.
MySQL (at least when using the InnoDB engine for tables) supports transactions, which allow you to run a series of update/insert statements and have them either all commit at once, effectively as a single operation, or roll back so that none are applied. If you do not explicitly commit a transaction, it rolls back automatically when you close your connection to the database.
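Applied to the snippet above, a minimal fix looks like this (the connect arguments stay elided, as in the question):
import pymysql
connection = pymysql.connect(...)
cursor = connection.cursor()
cursor.execute("UPDATE Users SET IsConnected='1' WHERE Username='test'")
connection.commit()  # persist the change; otherwise it rolls back on close
cursor.close()
connection.close()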

In fact, what @JoeDay has described above has little to do with MySQL's default transaction behaviour. MySQL by default operates in auto-commit mode, and normally you don't need any additional twist to persist your changes:
By default, MySQL runs with autocommit mode enabled. This means that as soon as you execute a statement that updates (modifies) a table, MySQL stores the update on disk to make it permanent. The change cannot be rolled back.
PEP 249's (DB API) authors decided to complicate things and break the Zen of Python by making a transaction's start implicit, proposing that auto-commit be disabled by default.
What I suggest is to restore MySQL's default behaviour, and to use transactions explicitly, only when you need them:
import pymysql
connection = pymysql.connect(autocommit=True)
I've also written about it here with a few references.

Related

Does BEGIN TRAN and COMMIT TRAN work in a SQL Server stored procedure? Is there a way to separate code within a stored procedure into batches? [duplicate]

I have a username which I must change in numerous (up to ~25) tables. (Yeah, I know.) An atomic transaction seems to be the way to go for this sort of thing, but I do not know how to do this with pyodbc. I've seen various tutorials on atomic transactions before, but have never used them.
The setup: Windows platform, Python 2.6, pyodbc, Microsoft SQL Server 2005. I've used pyodbc for single SQL statements, but no compound statements or transactions.
Best practices for SQL seem to suggest that a stored procedure is excellent for this. My fears about doing a stored procedure are as follows, in order of increasing importance:
1) I have never written a stored procedure.
2) I heard that pyodbc does not return results from stored procedures as of yet.
3) This is most definitely Not My Database. It's vendor-supplied, vendor-updated, and so forth.
So, what's the best way to go about this?
According to its documentation, pyodbc does support transactions, but only if the ODBC driver supports them. Furthermore, as pyodbc is compliant with PEP 249, data is persisted only when a manual commit is done.
This means that you have to explicitly commit() the transaction, or rollback() the entire transaction.
Note that pyodbc also supports an autocommit feature, in which case you cannot have any transactions.
By default, autocommit is off, but your codebase might have turned it on.
You should check how the connection is being created:
cnxn = pyodbc.connect(cstring, autocommit=True)
Alternatively, you can also explicitly turn off autocommit mode with
cnxn.autocommit = False
but this might have quite a big impact on your system.
Note: you can get more information on the autocommit mode of pyodbc on its wiki
I don't think pyodbc has any specific support for transactions beyond that; you need to send the SQL commands to start/commit/rollback transactions yourself.
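To illustrate the commit/rollback pattern described above, here is a minimal sketch; the connection string, table names, column names and values are all hypothetical:
import pyodbc

cstring = "DSN=mydsn;UID=user;PWD=password"  # hypothetical connection string
new_name, old_name = "new_user", "old_user"  # hypothetical values

cnxn = pyodbc.connect(cstring)  # autocommit is off by default, per PEP 249
cursor = cnxn.cursor()
try:
    # both updates belong to the same implicit transaction
    cursor.execute("UPDATE table_one SET username = ? WHERE username = ?", new_name, old_name)
    cursor.execute("UPDATE table_two SET username = ? WHERE username = ?", new_name, old_name)
    cnxn.commit()    # make all changes permanent as one atomic unit
except pyodbc.Error:
    cnxn.rollback()  # undo everything since the last commit
    raise
finally:
    cnxn.close()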

sqlalchemy disable cache (calling to DB in every call) [duplicate]

I have a caching problem when I use sqlalchemy.
I use sqlalchemy to insert data into a MySQL database. Then I have another application process this data and update it directly.
But sqlalchemy always returns the old data rather than the updated data. I think sqlalchemy is caching my request... so how should I disable it?
The usual cause for people thinking there's a "cache" at play, besides the usual SQLAlchemy identity map which is local to a transaction, is that they are observing the effects of transaction isolation. SQLAlchemy's session works by default in a transactional mode, meaning it waits until session.commit() is called in order to persist data to the database. During this time, other transactions in progress elsewhere will not see this data.
However, due to the isolated nature of transactions, there's an extra twist. Those other in-progress transactions will not only fail to see your transaction's data until it is committed; in some cases they also can't see it until they themselves are committed or rolled back (which is the same effect your close() is having here). A transaction with an average degree of isolation will hold onto the state it has loaded so far and keep giving you that same state, local to the transaction, even though the real data has changed - this is called repeatable reads in transaction isolation parlance.
http://en.wikipedia.org/wiki/Isolation_%28database_systems%29
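A rough sketch of the consequence (the engine URL and the User model are assumptions, not from the question): ending the session's current transaction with commit() or rollback() lets the next query read fresh state:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("mysql://scott:tiger@localhost/test")  # hypothetical URL
Session = sessionmaker(bind=engine)
session = Session()

# first read happens inside transaction A and is held locally by the session
user = session.query(User).filter_by(name='test').one()

# ... another process updates the row and commits ...

session.commit()  # or session.rollback(): ends transaction A, expires loaded state
user = session.query(User).filter_by(name='test').one()  # re-reads current data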
This issue has been really frustrating for me, but I have finally figured it out.
I have a Flask/SQLAlchemy Application running alongside an older PHP site. The PHP site would write to the database and SQLAlchemy would not be aware of any changes.
I tried the sessionmaker setting autoflush=True, unsuccessfully.
I tried db_session.flush(), db_session.expire_all(), and db_session.commit() before querying, and NONE worked. It still showed stale data.
Finally I came across this section of the SQLAlchemy docs: http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#transaction-isolation-level
Setting the isolation_level worked great. Now my Flask app is "talking" to the PHP app. Here's the code:
engine = create_engine(
    "postgresql+pg8000://scott:tiger@localhost/test",
    isolation_level="READ UNCOMMITTED"
)
When the SQLAlchemy engine is started with the "READ UNCOMMITTED" isolation_level, it will perform "dirty reads", meaning it reads uncommitted changes directly from the database.
Hope this helps
Here is a possible solution, courtesy of AaronD in the comments:
from flask.ext.sqlalchemy import SQLAlchemy

class UnlockedAlchemy(SQLAlchemy):
    def apply_driver_hacks(self, app, info, options):
        if "isolation_level" not in options:
            options["isolation_level"] = "READ COMMITTED"
        return super(UnlockedAlchemy, self).apply_driver_hacks(app, info, options)
In addition to zzzeek's excellent answer, I had a similar issue. I solved the problem by using short-lived sessions:
from contextlib import closing

with closing(new_session()) as sess:  # new_session is a session factory
    ...  # do your stuff
I used a fresh session per task, task group or request (in the case of a web app). That solved the "caching" problem for me.
This material was very useful for me:
When do I construct a Session, when do I commit it, and when do I close it
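A minimal sketch of that per-task pattern (the engine URL, the new_session factory and the Task model are illustrative assumptions):
from contextlib import closing
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("mysql://scott:tiger@localhost/test")  # hypothetical URL
Session = sessionmaker(bind=engine)

def new_session():
    return Session()

def handle_task(task_id):
    # one short-lived session per task; close() also ends its transaction
    with closing(new_session()) as sess:
        return sess.query(Task).get(task_id)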
This was happening in my Flask application, and my solution was to expire all objects in the session after every request.
from flask.signals import request_finished

def expire_session(sender, response, **extra):
    app.db.session.expire_all()

request_finished.connect(expire_session, flask_app)
Worked like a charm.
I tried session.commit() and session.flush(), and neither worked for me.
After going through the SQLAlchemy source code, I found a way to disable this caching: setting query_cache_size=0 in create_engine worked.
create_engine(connection_string, convert_unicode=True, echo=True, query_cache_size=0)
First, there is no cache in SQLAlchemy.
Based on your method of fetching data from the DB, you should do some tests after the database has been updated by others, and see whether you can get the new data.
(1) use connection:
connection = engine.connect()
result = connection.execute("select username from users")
for row in result:
    print "username:", row['username']
connection.close()
(2) use Engine ...
(3) use MetaData ...
Please follow the steps in: http://docs.sqlalchemy.org/en/latest/core/connections.html
Another possible reason is that your MySQL DB was not updated permanently. Restart the MySQL service and check.
As far as I know, SQLAlchemy does not cache results, so you should look at the logging output.

Discrepancy between my SQL python data and the DB Browser

I am using sqlite combined with tkinter to write and delete records within my Python program. The deletion works perfectly fine in my program, and when I restart the program, the record no longer exists.
However, I always cross-check using the Linux standard software DB Browser for SQLite and look at my SQL table. Strangely, all the records still exist in the DB Browser. Now I am wondering: why is that? Why is a record gone within my Python sqlite queries but not in the DB Browser? Somehow the records are still there. How can I completely destroy my records?
For deletion I use:
(The user can chose a specific entry using a listbox. Eventually, I "translate" the selected item into its specific ID and trigger the deletion.)
self.c.execute("DELETE FROM financial_table WHERE ID=?",(entry,))
self.conn.commit()
For my query I use:
(I query the data for a specific year and month.)
self.c.execute("SELECT ID, Date, Item, Price FROM financial_table WHERE strftime('%Y-%m', Date) = '{}' ORDER BY Date ".format(date))
single_dates = self.c.fetchall()
Thank you very much for your help.
The solution to my question is: I am stupid!
I was tired yesterday evening and looked at the wrong sql file in a subfolder which had the same name as the one from my Python program. So it is actually working. Please excuse my stupidity.
@Bruceskyaus
Despite my stupidity, I learned from your answer, especially the try ... except block. I am going to implement it. Thanks.
You may have a problem with controlling transactions on your database, but it could also be the connection itself. Make sure you don't have any uncommitted DML statements on a different connection (i.e. an INSERT, UPDATE or DELETE in your DB Browser that wasn't committed); this could cause the conn.commit() to fail. With SQLite, an uncommitted transaction can lock the entire database for a brief period of time.
Try ensuring that there is a new cursor for the delete statement and call conn.close() after the conn.commit(). Before you execute the code, make sure that no other connections are accessing the database - including the DB Browser. Only check in the DB Browser when you have shut down the application (for this test). This eliminates multithreading or locking as a possible cause. See also SQLite - Data Persistence and SQLite - Controlling Transactions
It is also helpful to trap all errors for DML statements using a try...except block. Something like this:
import sqlite3

try:
    self.conn = sqlite3.connect('mydb.db')
    self.c = self.conn.cursor()
    self.c.execute("DELETE FROM financial_table WHERE ID=?", (entry,))
    self.conn.commit()
except sqlite3.Error as e:
    print("An error occurred:", e.args[0])
finally:
    self.conn.close()

Django w/ MySQL non-transactional changed tables couldn't be rolled back

Keep getting this warning using a MySQL database:
Some non-transactional changed tables couldn't be rolled back
I'm not sure what it means or if it is even causing a problem but I was hoping someone would be able to fill me in on what this means.
I am taking a CSV file, reading it line-by-line and creating Django objects using get_or_create. After I get the message, when I try to recreate it, I get further into the CSV file before the warning occurs.
I tried reading about this error online, but I don't really understand what it means. It would be ideal to figure out what's causing it, but if I can't, I am wondering whether I can suppress the warning, since maybe it isn't affecting my database negatively.
This happens when you mix transactional and non-transactional tables. Changes to non-transactional tables are not affected by a ROLLBACK statement.
For some reasons why this may have happened to you, we can turn to the docs:
if you were not deliberately mixing transactional and nontransactional tables within the transaction, the most likely cause for this message is that a table you thought was transactional actually is not. This can happen if you try to create a table using a transactional storage engine that is not supported by your mysqld server (or that was disabled with a startup option). If mysqld does not support a storage engine, it instead creates the table as a MyISAM table, which is nontransactional.
This will affect things negatively if, say, you have an HTTP request that kicks off a transaction, you make some changes, and you need to roll back. The transactional tables will roll back, but the others will not. If a transactional storage engine is a requirement for your software, you should consider migrating all the relevant tables to the InnoDB engine.
For me, this error happened after I imported a table from another Django application. The origin DB had all the table engines set to MyISAM and the destination app had all the engines set to InnoDB. When I imported the existing table, the engine was changed from InnoDB to MyISAM to match the source. I resolved this using MySQL on the command line, like so:
$ mysql -uroot -pPASSWORD
> use MY_DB;
> show table status;
> alter table TABLE_WITH_MYISAM engine=innodb;
> quit;
I had imported 5 tables so I had to do the alter command for each table. The show command above will print out the table names and engine settings for all tables in MY_DB.
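With several tables, a small script can do the same conversion; here is a sketch using PyMySQL (the credentials and database name are placeholders):
import pymysql

conn = pymysql.connect(host='localhost', user='root', password='PASSWORD',
                       database='MY_DB', autocommit=True)
with conn.cursor() as cur:
    # find every table still using the non-transactional MyISAM engine
    cur.execute("SELECT TABLE_NAME FROM information_schema.TABLES "
                "WHERE TABLE_SCHEMA = %s AND ENGINE = 'MyISAM'", ('MY_DB',))
    for (table,) in cur.fetchall():
        # ALTER TABLE cannot take the table name as a bound parameter
        cur.execute("ALTER TABLE `%s` ENGINE=InnoDB" % table)
conn.close()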
I hope this helps solve your issue! Cheers!

sqlite3.OperationalError: database is locked

I'm trying to insert all the values of a list into my sqlite3 database. When I simulate this query using the Python interactive interpreter, I am able to insert a single value into the DB properly. But my code fails when using an iteration:
...
connection = lite.connect(db_name)
cursor = connection.cursor()
for name in match:
    cursor.execute("""INSERT INTO video_dizi(name) VALUES (?)""", (name,))
connection.commit()
...
error: cursor.execute("""INSERT INTO video_dizi(name) VALUES (?)""",(name,))
sqlite3.OperationalError: database is locked
Any way to overcome this problem?
Do you have another connection elsewhere in your code, one that begins a transaction which is still active (not committed) when you try to commit the operation that fails?
This error can also happen if you have opened your site.db or database file in a DB Browser-type application to view it in an interactive database interface. Just close that and it will work fine.
Your database is being used by another process or connection. If you need real concurrency, use a real RDBMS.
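A small sketch combining the advice above (the timeout value is illustrative): commit promptly, close competing connections, and optionally let sqlite3 wait for a lock instead of raising immediately:
import sqlite3

# wait up to 10 seconds for a competing lock instead of failing immediately
connection = sqlite3.connect(db_name, timeout=10)
cursor = connection.cursor()
for name in match:
    cursor.execute("INSERT INTO video_dizi(name) VALUES (?)", (name,))
connection.commit()  # commit once, releasing the write lock quickly
connection.close()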
