"OperationalError: database is locked" when deploying site to Azure - python

I have built a Django website, and part of it is a Microsoft authentication link.
When I upload the site to Azure cloud and click the "log in" link, I receive the following error:
OperationalError at /login
database is locked
Request Method: GET
Request URL: http://bhkshield.azurewebsites.net/login
Django Version: 2.2.2
Exception Type: OperationalError
Exception Value:
database is locked
Exception Location: /home/site/wwwroot/antenv3.6/lib/python3.6/site-packages/django/db/backends/base/base.py in _commit, line 240
Python Executable: /usr/local/bin/python
Python Version: 3.6.7
Python Path:
['/usr/local/bin',
'/home/site/wwwroot',
'/home/site/wwwroot/antenv3.6/lib/python3.6/site-packages',
'/usr/local/lib/python36.zip',
'/usr/local/lib/python3.6',
'/usr/local/lib/python3.6/lib-dynload',
'/usr/local/lib/python3.6/site-packages']
Server time: Fri, 14 Jun 2019 13:19:22 +0000
I am using sqlite3 (code snippet from settings.py):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
I don't understand why I get this error, because I don't insert or commit anything to the database.
My website consists of only one page with a sign-in link (4 views: home, context initialize, login and callback). That's it.
Just to mention: when I run the site locally, everything works. It stops working only after deployment.
Another weird thing is that I have uploaded something like this before to another site on Azure and the login worked. For some reason it doesn't work now, and I have no idea why...
Has anyone encountered this type of error and can help?
If you need me to provide more files' content, let me know which files and I will.

It seems like a duplicate of this question: OperationalError: database is locked.
From the Django documentation:
https://docs.djangoproject.com/en/dev/ref/databases/#database-is-locked-errors
SQLite is meant to be a lightweight database, and thus can’t support a high level of concurrency. OperationalError: database is locked errors indicate that your application is experiencing more concurrency than sqlite can handle in default configuration. This error means that one thread or process has an exclusive lock on the database connection and another thread timed out waiting for the lock to be released.
Python’s SQLite wrapper has a default timeout value that determines how long the second thread is allowed to wait on the lock before it times out and raises the OperationalError: database is locked error.
If you’re getting this error, you can solve it by:
- Switching to another database backend. At a certain point SQLite becomes too “lite” for real-world applications, and these sorts of concurrency errors indicate you’ve reached that point.
- Rewriting your code to reduce concurrency and ensure that database transactions are short-lived.
- Increasing the default timeout value by setting the timeout database option.
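The third option maps onto the DATABASES setting as an 'OPTIONS' entry; a minimal sketch for the settings.py above (the 20-second value is an arbitrary choice):

```python
# settings.py -- raise SQLite's lock timeout so a second connection waits
# longer before raising "database is locked" (the default is 5 seconds)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        'OPTIONS': {
            'timeout': 20,  # seconds; passed through to sqlite3.connect()
        },
    }
}
```

This only papers over SQLite's concurrency limit; for anything beyond light traffic, switching backends is the more durable fix.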
I have been developing a Django web app as well, and I chose an Azure SQL Server database for the application. Everything has been working OK.

Related

Force django to reopen database connection if lost

In my Python/Django web application, the database sometimes disconnects (problems related to my test environment, which is not very stable...) and my web app gives me this error:
File "/usr/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 222, in create_cursor
    cursor = self.connection.cursor()
django.db.utils.InterfaceError: connection already closed
Now, how can I tell Django to retry opening the connection and continue? It seems that Django remains stuck at this point...
Thanks.
There's no way to tell Django that it should retry on connection error. It's instead designed to simply fail on that one request. From the documentation:
If any database errors have occurred while processing the requests, Django checks whether the connection still works, and closes it if it doesn’t. Thus, database errors affect at most one request; if the connection becomes unusable, the next request gets a fresh connection.
However, this shouldn't be a problem if you follow this advice in the documentation:
If your database terminates idle connections after some time, you should set CONN_MAX_AGE to a lower value, so that Django doesn’t attempt to use a connection that has been terminated by the database server.
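For reference, CONN_MAX_AGE is set per connection in the DATABASES setting; a minimal sketch with placeholder connection values:

```python
# settings.py -- recycle connections before the server can terminate them
# (names and credentials below are placeholders)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'CONN_MAX_AGE': 60,  # seconds; 0 closes the connection after every request
    }
}
```

Setting it below the database server's idle timeout means Django discards a connection before the server can kill it out from under you.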

Google Cloud SQL w/ Django - Extremely Slow Connection

Edit:
After doing some further investigation, the delay seems to be caused more by Django than by the Cloud SQL Proxy.
I added a couple of print statements at the start and end of a view, and they print instantly when the request is made, but it takes a further 60 seconds for the page to load.
I've stripped back the template files to include only the bare bones, removing most scripts and static resources and it's still pretty similar.
Changing my view to return a simple HttpResponse('Done') cuts the time drastically.
Whilst developing locally I am using Django to serve the static files as described in the docs. Again, I don't have this issue with other projects though.
Original Post:
I've recently noticed my Django application is incredibly slow to connect to my Google Cloud SQL database when using the Cloud SQL Proxy in my local development environment.
The initial connection takes 2-3 minutes, then 60 seconds per request thereafter. This applies when performing migrations or running the development server. Eventually the request completes.
I've tried scaling up the database but to no effect (it's relatively small anyway). Database version is MySQL 5.7 with machine type db-n1-standard-1. Previously I've used Django Channels but have since removed all references to this.
The Middleware and settings.py are relatively standard and identical to another Django app that connects in an instant.
The live site also connects very fast without any issues.
Python version is 3.6 w/ Django 2.1.4 and mysqlclient 1.3.14.
My database settings are defined as:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': os.getenv('DB_NAME'),
        'USER': os.getenv('DB_USER'),
        'PASSWORD': os.getenv('DB_PASSWORD'),
        'PORT': '3306',
    }
}

DATABASES['default']['HOST'] = os.getenv('DB_HOST')
if not os.getenv('GAE_INSTANCE'):
    DATABASES['default']['HOST'] = '127.0.0.1'
Using environment variables or not doesn't seem to make a difference.
I'm starting the Cloud SQL Proxy via ./cloud_sql_proxy -instances="my-project:europe-west1:my-project-instance"=tcp:3306.
After invoking the proxy via the command line I see Ready for new connections. Running python manage.py runserver shows New connection for "my-project:europe-west1:my-project-instance" but then takes an age before I see Starting development server at http://127.0.0.1:8000/.
I'm also noticing several errors in Stackdriver:
_mysql_exceptions.OperationalError: (2006, "Lost connection to MySQL server at 'reading initial communication packet', system error: 95")
django.db.utils.OperationalError: (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 95")
AttributeError: 'SessionStore' object has no attribute '_session_cache'
These appear - or don't - from time to time without changing any settings.
I've read they may be an access rights issue but the connection is eventually made, it's just incredibly slow. I'm authorising via the Google Cloud SDK, which seems to work fine.
Eventually I found that the main source of the delay was a recursive function being called in one of my admin forms (which delayed the initial startup) and context processors (which delayed each load). After removing it, the pages loaded without issue. It worked fine when deployed to App Engine or when using a test/local SQLite database, though, which is what made debugging a little harder.

'max_user_connections' in django web application

I have developed a Django web application for a chatbot that acts as a store's customer service.
At first I was using sqlite3 for my databases. It worked fine, but sometimes it caused a "database is locked" error, so I switched to MySQL for the application backend and the other databases.
To be clear, I have 5 databases, all under the user eman_saad:
1. db
2. chating_log
3. female_brain
4. male_brain
5. orderss
But now I have this error:
django.db.utils.OperationalError: (1203, "User eman_saad already has more than 'max_user_connections' active connections")
I've tried increasing the max connections from the terminal after logging in at the mysql> prompt, but the error remains the same.
PS: I am using django10, Python 3.5, a WebFaction shared server, and MySQL.

Getting timeout on django application

I'm currently working on a Django application. I can't add an element to my database from the admin view: I fill in all the information, but when I click the save button the operation doesn't finish and I get a timeout. I use sqlite3 as the database.
My question: does anyone know the origin of this problem? If not, how could I investigate it? When I worked with other languages (Java, C, etc.), I could use a debugger when I had a problem. What options do I have?
This problem can occur for the following reasons:
1. (Less probable) Your computation code is too slow: a rarity, because the timeout is typically set to about a minute and code rarely takes that long to execute.
2. Your app is waiting on some external resource that is not responding: check the Django logs to see whether an external-resource error appears.
3. (Most probable) The database is taking too much time, either because:
- the app can't connect to the database: check the database logs, or try connecting manually with python manage.py dbshell; or
- a query is taking too long to execute: check the database logs for how long the query takes, or connect manually via dbshell and run the same query there.
You can also use tools like django-profiler or the Django Debug Toolbar for debugging, and the native Python debugger for Python code.
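As a lightweight alternative to a full profiler, Django can log every SQL statement together with its execution time by enabling the django.db.backends logger (it only emits while DEBUG = True); a minimal settings sketch:

```python
# settings.py -- print each SQL query and its duration to the console
# (query logging is only active while DEBUG = True)
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}
```

A query that hangs or takes seconds will then show up directly in the runserver output, which narrows the problem down before reaching for pdb or the Debug Toolbar.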

MySQL and python-mysql (mysqldb) crashing under heavy load

I was just putting the finishing touches on a site built using web.py, MySQL and python-mysql (the MySQLdb module), feeling good about having protected against SQL injections and the like, when I leant on the refresh button, sending 50 or so simultaneous requests, and it crashed my server! I reproduced the error and found that I get the following two errors interchangeably; sometimes it's one and sometimes the other:
Error 1:
127.0.0.1:60712 - - [12/Sep/2013 09:54:34] "HTTP/1.1 GET /" - 500 Internal Server Error
Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method Cursor.__del__ of <MySQLdb.cursors.Cursor object at 0x10b287750>> ignored
Traceback (most recent call last):
Error 2:
python(74828,0x10b625000) malloc: *** error for object 0x7fd8991b6e00: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6
Clearly the requests are straining MySQL and causing it to fall over, so my question is: how do I protect against this happening?
My server is set up with Ubuntu 13.04, nginx, MySQL (which I connect to with the MySQLdb python module), web.py and fast-cgi.
When the web.py app starts up it connects to the database as so:
def connect():
    global con
    con = mdb.connect(host=HOST, user=USER, passwd=PASSWORD, db=DATABASE)
    if con is None:
        print 'error connecting to database'
and the con object is assigned to a global variable so various parts of the application can access it
I access the database data like this:
def get_page(name):
    global con
    with con:
        cur = con.cursor()
        cur.execute("SELECT `COLUMN_NAME` FROM `INFORMATION_SCHEMA`.`COLUMNS` WHERE `TABLE_SCHEMA`='jt_website' AND `TABLE_NAME`='pages'")
        table_info = cur.fetchall()
One idea I had was to open and close the database connection before and after each request, but that seems like overkill to me; does anybody have any opinions on this?
What sort of methods do people use to protect their database connections in python and other environments and what sort of best practices should I be following?
I don't use web.py, but its docs and tutorials show a different way to deal with the database: they suggest using a global object (created with web.database) which is effectively a connection proxy in the Flask style.
Try organizing your code as in that example (the link originally given here is now dead) and see if it happens again.
The error you reported looks like a concurrency problem, which is normally handled automatically by the framework.
About the latter question:
What sort of methods do people use to protect their database connections in python and other environments and what sort of best practices should I be following?
It differs depending on the web framework you use. Django, for example, hides everything and it just works.
Flask lets you choose what you want to do. You can use Flask-SQLAlchemy, which uses the very good SQLAlchemy ORM and manages the connection proxy for the web application.
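To make the "one connection per request" idea concrete, here is a minimal sketch using threading.local, with sqlite3 standing in for MySQLdb for illustration (the helper names are invented for the example):

```python
import sqlite3
import threading

# A single module-global connection shared across concurrent requests is what
# produces "Commands out of sync" errors; instead, cache one connection per
# thread so each request (thread) talks to the database independently.
_local = threading.local()

def get_connection(db_path=":memory:"):
    # Lazily open and cache a connection for the current thread.
    if not hasattr(_local, "con"):
        _local.con = sqlite3.connect(db_path)
    return _local.con

def get_page(name):
    # Each caller uses its own thread's connection -- no shared cursor state.
    cur = get_connection().cursor()
    cur.execute("SELECT ?", (name,))
    return cur.fetchone()[0]
```

web.py's own web.database() object manages similar per-request connection handling for you, which is why the framework examples never share a raw connection globally.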
