I am facing a problem where data retrieved from MySQL using PySQLPool reflects the state of the database at the start of the process: INSERT or UPDATE queries made from Python or from a MySQL client do not show up until I kill and re-run the Python process.
Would appreciate any help regarding this.
Ref: Why are some mysql connections selecting old data from the mysql database after a delete + insert?
MySQL's default isolation level was causing this. Somehow only the Python clients were affected, and I had never stumbled across this issue before. It is a valid problem and has a detailed solution. My question targeted Python and PySQLPool because it did not occur to me that MySQL could be the one causing this. Now my deployment procedure includes altering MySQL's global isolation level to READ COMMITTED:
SET GLOBAL tx_isolation='READ-COMMITTED';
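The same fix can also be applied per connection from the Python side. A minimal sketch, assuming MySQLdb and the question's placeholder credentials (PySQLPool is left out for brevity):

# Set READ COMMITTED for this session so later SELECTs see rows that other
# clients have inserted or updated; credentials and table name are placeholders.
import MySQLdb

conn = MySQLdb.connect("localhost", "testuser", "test123", "TESTDB")
cur = conn.cursor()
cur.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED")

cur.execute("SELECT COUNT(*) FROM some_table")
print(cur.fetchone()[0])
conn.commit()  # end the transaction; the next SELECT starts from a fresh snapshot
conn.close()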
I am building a web application in Flask.
We have opened up the database window of PyCharm and established a data source to a SQL server database.
My question is what does establishing a data source do?
Does it remove the need to connect to a database manually, like, for example:
db = MySQLdb.connect("localhost","testuser","test123","TESTDB" )
If the answer is yes, and it does remove the need to set up
db = MySQLdb.connect("localhost", "testuser", "test123", "TESTDB")
then how can you access the data in the database, and establish a cursor object?
The JetBrains IDEs such as PyCharm or IntelliJ have a database browser, essentially productized as its own IDE called DataGrip, but that's beside the point.
Fact is, no, it doesn't replace the need for code: you could have no code at all and still make a database connection through that window, or work entirely in code and never touch the database window (because you don't need PyCharm to write said code).
So they are separate things, just as "SQL Server" means something completely different from "MySQL" (e.g. you might need a different library).
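To make that concrete, here is a minimal sketch of the code you still need either way (MySQLdb with the question's placeholder credentials; an actual SQL Server database would need a different driver such as pyodbc, but the connect/cursor pattern is the same):

import MySQLdb

# Open the connection yourself; PyCharm's data source window plays no part here.
db = MySQLdb.connect("localhost", "testuser", "test123", "TESTDB")
cursor = db.cursor()
cursor.execute("SELECT VERSION()")
print(cursor.fetchone())
db.close()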
So I have a Google Sheet that maintains a lot of data. I also have a MySQL DB with a huge chunk of data. There is a vital piece of information in the Sheet that is also present in the DB, and both need to be in sync. The information always enters the Sheet first. I had a Python script with MySQL queries to update my database separately.
Now the workflow has changed. Data will enter the Sheet, and whenever that happens the database has to be updated automatically.
After some research, I found that using the onEdit function of Google Apps Script (I learned from here), I could pick up when the file has changed.
The next step is to fetch the data from the relevant cell, which I can do using this.
Now I need to connect to the DB and send some queries. This is where I am stuck.
Approach 1:
Have a Python web app running live and send the data to it via UrlFetchApp. This I have yet to try; a rough sketch of the receiving end is below.
Approach 2:
Connect to MySQL remotely through Apps Script. But after 2-3 hours of reading the docs, I am not sure this is possible.
So this is my scenario. Any viable solution you can think of or a better approach?
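For reference, the receiving end of Approach 1 could look roughly like the sketch below; the Flask route, the shared-secret check, and the table and column names are placeholders I am assuming, not something I have built yet.

from flask import Flask, request, abort
import MySQLdb

app = Flask(__name__)

@app.route("/sheet-update", methods=["POST"])
def sheet_update():
    # UrlFetchApp would POST the edited cell value here; the token is a
    # made-up shared secret so random callers can't write to the DB.
    if request.form.get("token") != "SHARED_SECRET":
        abort(403)
    value = request.form.get("value")
    row_id = request.form.get("row_id")
    conn = MySQLdb.connect("localhost", "testuser", "test123", "TESTDB")
    cur = conn.cursor()
    cur.execute("UPDATE my_table SET my_column = %s WHERE id = %s", (value, row_id))
    conn.commit()
    conn.close()
    return "ok"

if __name__ == "__main__":
    app.run()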
Connect directly to MySQL. You likely missed reading this part: https://developers.google.com/apps-script/guides/jdbc
Using JDBC within Apps Script will work if you have the time to build this yourself.
If you don't want to roll your own solution, check out SeekWell. It allows you to connect to databases and write SQL queries directly in Sheets. You can create a "Run Sheet" that will run multiple queries at once, and schedule those queries to run without you even opening the Sheet.
Disclaimer: I made this.
Keep getting this warning using a MySQL database:
Some non-transactional changed tables couldn't be rolled back
I'm not sure what it means or if it is even causing a problem but I was hoping someone would be able to fill me in on what this means.
I am taking a CSV file, reading it line-by-line and creating Django objects using get_or_create. After I get the message, when I try to recreate it, I get further into the CSV file before the warning occurs.
I tried reading about this error online, but I really don't understand what it means. It would be ideal to figure out what's causing it, but if I can't, I am wondering whether I can suppress the warning, since maybe it isn't affecting my database negatively.
This happens when you mix transactional and non-transactional tables. Changes to non-transactional tables are not affected by a ROLLBACK statement.
For some reasons why this may have happened to you, we can turn to the docs:
if you were not deliberately mixing transactional and nontransactional tables within the transaction, the most likely cause for this message is that a table you thought was transactional actually is not. This can happen if you try to create a table using a transactional storage engine that is not supported by your mysqld server (or that was disabled with a startup option). If mysqld does not support a storage engine, it instead creates the table as a MyISAM table, which is nontransactional.
This will affect things negatively if, say, you have an HTTP request that kicks off a transaction, you make some changes, and you then need to roll back. The transactional tables will roll back but the others will not. If a transactional storage engine is a requirement for your software, you should consider taking steps to migrate all the relevant tables to the InnoDB engine.
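As a hypothetical illustration (MySQLdb assumed; the two table names are placeholders you would have created yourself, one on InnoDB and one on MyISAM):

import MySQLdb

conn = MySQLdb.connect("localhost", "root", "PASSWORD", "MY_DB")
cur = conn.cursor()
cur.execute("INSERT INTO innodb_table (name) VALUES ('a')")   # transactional
cur.execute("INSERT INTO myisam_table (name) VALUES ('a')")   # non-transactional
conn.rollback()  # the innodb_table row is undone; the myisam_table row is not,
                 # which is exactly when MySQL emits this warning
conn.close()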
For me this error happened after I imported a table from another Django application. The origin DB had all the table engines set to MyISAM, and the destination app had all the engines set to InnoDB. When I imported the existing table, the engine was changed from InnoDB to MyISAM to match the source. I resolved this using MySQL on the command line like so:
$ mysql -uroot -pPASSWORD
> use MY_DB;
> show table status;
> alter table TABLE_WITH_MYISAM engine=innodb;
> quit;
I had imported 5 tables so I had to do the alter command for each table. The show command above will print out the table names and engine settings for all tables in MY_DB.
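If there are many tables to check, one way to list the stragglers is to query information_schema from Python instead of eyeballing the SHOW output. A sketch, assuming MySQLdb and the same placeholder database name and credentials:

import MySQLdb

conn = MySQLdb.connect("localhost", "root", "PASSWORD", "MY_DB")
cur = conn.cursor()
cur.execute(
    "SELECT table_name FROM information_schema.tables "
    "WHERE table_schema = %s AND engine = 'MyISAM'",
    ("MY_DB",),
)
for (name,) in cur.fetchall():
    print(name)  # each of these still needs ALTER TABLE ... ENGINE=InnoDB
conn.close()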
I hope this helps solve your issue! Cheers!
I have an authentication script for ejabberd (XMPP server) that is based off of THIS LINK.
I have slightly modified the script so that instead of setting the variable out, it just returns true or false.
I'm using Ubuntu, MySQL, ejabberd, and Python.
I can authenticate all the records that are already in the database. But when I add or remove records (I do this through phpMyAdmin), the script doesn't seem to know that the database has changed (I remove a user in phpMyAdmin and it still authenticates that user). The only time the script recognizes the new records is when I restart or force-reload the ejabberd server. I've already been told it's not a MySQL caching problem. I made sure I turned off external authentication caching for ejabberd.
That's all I can think of right now. I'll add more information if I can think of it. Any help is appreciated. I have no idea what is going on.
Addition: I turned on the MySQL logs, and all the queries are there, so it is not skipping queries.
I managed to fix this problem by changing the database engine back to MyISAM rather than InnoDB. But I would like to know if this can be fixed for InnoDB.
Edit: to fix it in InnoDB, set autocommit to true.
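A minimal sketch of that InnoDB fix, assuming MySQLdb and placeholder credentials and table names (not the actual ejabberd script):

import MySQLdb

conn = MySQLdb.connect("localhost", "ejabberd", "secret", "auth_db")
conn.autocommit(True)  # each query now runs in its own transaction,
                       # so rows added or removed via phpMyAdmin are visible

def user_exists(username):
    cur = conn.cursor()
    cur.execute("SELECT 1 FROM users WHERE username = %s", (username,))
    return cur.fetchone() is not None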
I'm running a Django project on PostgreSQL 8.1.21 (using Django 1.1.1, Python 2.5, psycopg2, and Apache 2 with mod_wsgi 3.2). We've recently encountered this lovely error:
OperationalError: FATAL: connection limit exceeded for non-superusers
I'm not the first person to run up against this. There's a lot of discussion about this error, specifically with psycopg, but much of it centers on older versions of Django and/or offers solutions involving edits to Django's own code. I've yet to find a succinct explanation of how to solve the problem of the Django ORM (or psycopg, whichever is really responsible here) leaving Postgres connections open.
Will simply adding connection.close() at the end of every view solve this problem? Better yet, has anyone conclusively solved this problem and kicked this error's ass?
Edit: we later upped PostgreSQL's limit to 500 connections; this prevented the error from cropping up, but replaced it with excessive memory usage.
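For what it's worth, the connection.close()-in-every-view idea from the question can be sketched as a single piece of old-style middleware instead of touching each view (Django 1.1 era; the class name is made up and it would need to be added to MIDDLEWARE_CLASSES):

from django.db import connection

class CloseConnectionMiddleware(object):
    # Close the ORM's connection once the response is built, instead of
    # adding connection.close() to the end of every view.
    def process_response(self, request, response):
        connection.close()
        return response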
This could be caused by other things. For example, Apache/mod_wsgi may be configured so that, in theory, it can accept more concurrent requests than the database itself can handle at the same time. Have you reviewed your Apache/mod_wsgi configuration and compared its limit on maximum clients to PostgreSQL's, to make sure something like that hasn't happened? Obviously this presumes that you have managed to reach that limit in Apache somehow, and it also depends on how any database connection pooling is set up.