Multi-master database replication with Django webapp and MySQL - python

I am working on scaling out a webapp and providing some database redundancy for protection against failures and to keep the servers up when updates are needed. The app is still in development, so I have chosen a simple multi-master redundancy with two separate database servers to try and achieve this. Each server will have the Django code and host its own database, and the databases should be as closely mirrored as possible (updated within a few seconds).
I am trying to figure out how to set up the multi-master (master-master) replication between databases with Django and MySQL. There is a lot of documentation about setting it up with MySQL only (using various configurations), but I cannot find any for making this work from the Django side of things.
From what I understand, I need to approach this by adding two database entries in the Django settings (one for each master) and then writing a database router that specifies which database to read from and which to write to. In this scenario, both databases should accept both reads and writes, and writes/updates should be mirrored over to the other database. The logic in the router could simply use a round-robin technique to decide which database to use. From there on, the actual replication should be configured on the MySQL side.
Does this approach sound correct, and does anyone have any experience with getting this to work?

Your idea of the router is good! I would add that you need to automatically detect when a database is slow or down. You can detect that from the response time and from connection/read/write errors. If that happens, exclude the database from your round-robin list for a while, trying to connect to it again every now and then to detect whether it has come back.
In other words, the round-robin list grows and shrinks dynamically depending on the health status of your database machines.
The other important point is that, luckily, you don't need to maintain one round-robin list common to all the web servers. Each web server can store its own copy of the list and its own state of which databases are currently included or excluded. That is because a database server may be reachable from one web server but not from another due to local network problems.
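A router along these lines might be sketched as follows. The database aliases ('master1'/'master2') and the 30-second retry interval are assumptions: you would list the aliases in settings.DATABASES, register the class in settings.DATABASE_ROUTERS, and wire the mark_down hook into your own error handling.

```python
import itertools
import time

class RoundRobinRouter(object):
    """Cycle reads and writes across both masters, skipping any
    database currently marked unhealthy."""

    DATABASES = ['master1', 'master2']  # assumed aliases in settings.DATABASES

    def __init__(self):
        self.pool = itertools.cycle(self.DATABASES)
        self.down = {}           # alias -> time it was marked dead
        self.retry_after = 30    # seconds before retrying a dead database

    def _next_alive(self):
        for _ in range(len(self.DATABASES)):
            alias = next(self.pool)
            marked = self.down.get(alias)
            if marked is None or time.time() - marked > self.retry_after:
                self.down.pop(alias, None)  # retry window passed: re-include it
                return alias
        return None  # every database is currently excluded

    def mark_down(self, alias):
        """Call this from your error handling on connect/read/write
        failures or overly slow responses."""
        self.down[alias] = time.time()

    def db_for_read(self, model=None, **hints):
        return self._next_alive()

    def db_for_write(self, model=None, **hints):
        return self._next_alive()

    def allow_relation(self, obj1=None, obj2=None, **hints):
        return True  # both masters hold the same data
```

The MySQL-side master-master replication is configured entirely separately; the router only decides which connection each ORM call uses.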

Related

Using web2py for a user frontend CRUD

I was asked to port an Access database to MySQL and provide a simple web frontend for the users.
The DB consists of 8-10 tables and stores data about client consulting (client, consultant, topic, hours, ...). I need to provide a web interface for our consultants, where they enter all this information during a session into a predefined mask/form.
My initial thought was to port the Access DB to MySQL, which I have done, and then use the web2py framework to build a user interface with login, data entry, browsing/scrolling through the cases, and pulling reports.
web2py is running with user management, a few sample views & controllers, and the MySQL DB. I added the DB to the DAL in web2py, but then I noticed that web2py requires every table to be defined again in web2py before it can communicate with the SQL server.
While struggling to successfully run the extract_mysql_models.py script to export the structure of the already existing SQL DB for use in web2py, my concerns about web2py are accumulating.
This double/redundant way of talking to my DB strikes me as odd, and web2py does not support Python 3.
Is web2py the correct way to fulfill my task, or is there a better way?
Thank you very much for listening/helping out.
This double/redundant way of talking to my DB strikes me as odd, and web2py does not support Python 3.
Any abstraction you want to use to communicate with your database (whether it be the web2py DAL, the Django ORM, SQLAlchemy, etc.) will have to have some knowledge of the database schema in order to construct queries.
Even if you programmatically generated all the SQL statements yourself without use of an ORM/DAL, your code would still have to have some knowledge of the database structure (i.e., somewhere you have to specify names of tables and fields, etc.).
For existing databases, we aim to automate this process via introspection of the database schema, which is the purpose of the extract_mysql_models.py script. If that script isn't working, you should report an issue on GitHub and/or open a thread on the web2py Google Group.
Also, note that when creating a new database, web2py helps you avoid redundant specification of the schema by handling migrations (including table creation) for you -- so you specify the schema only in web2py, and the DAL will automatically create the tables in the database (of course, this is optional).
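For completeness, once the table definitions exist (whether generated by the script or written by hand), pointing the DAL at the existing database is only a few lines. This is a sketch for a web2py model file; the connection string, table, and field names are placeholders for your own schema, and migrate=False keeps the DAL from trying to create or alter tables that already exist:

```python
# In a web2py model file, e.g. models/db.py (DAL and Field are provided
# by the web2py environment; names below are placeholders).
db = DAL('mysql://user:password@localhost/consulting', migrate=False)

# Each existing table is declared once so the DAL can build queries;
# migrate=False on the table also prevents any schema changes.
db.define_table('case',
    Field('client', 'string'),
    Field('consultant', 'string'),
    Field('topic', 'string'),
    Field('hours', 'double'),
    migrate=False)
```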

Writing a Django backend program that runs indefinitely -- what to keep in mind?

I am trying to write a Django app that queries a remote database for some data, performs some calculations on a portion of this data and stores the results (in the local database using Django models). It also filters another portion and stores the result separately. My front end then queries my Django database for these processed data and displays them to the user.
My questions are:
How do I write an agent program that runs continuously in the background, downloads data from the remote database, does the calculations/filtering, and stores the results in the local Django database? In particular, what are the most important things to keep in mind when writing a program that runs indefinitely?
Is using cron for this purpose a good idea?
The data retrieved from the remote database belong to multiple users, and each user's data must be kept/stored separately in my local database as well. How do I achieve that? Using row-level/class-instance-level permissions, maybe? Remember that the backend agent does the storage, updates, and deletes; the front end only reads data (through HTTP requests).
And finally, I allow creation of new users. If a new user has valid credentials for the remote database, the user should be allowed to use my app. In that case, my backend will download this particular user's data from the remote database, perform the calculations/filtering, and present the results to the user. How can I handle the dynamic creation of objects/database tables for the new users? And how can I differentiate between users' data when retrieving it?
Would very much appreciate answers from experienced programmers with knowledge of Django. Thank you.
For:
1) The standard go-to solution for timed and background tasks is Celery, which has Django integration. There are others, like Huey: https://github.com/coleifer/huey
2) The usual solution is to give each row a user_id column identifying the user the data belongs to. This maps to the User model via the Django ORM's ForeignKey field. Do your users need to query the database directly, or do they have their own database accounts? If not, this solution should be enough. It sounds like your front end has one database connection and all permission logic is handled by the front end, not by the database itself.
3) See 2
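To illustrate the "runs indefinitely" part of question 1: whichever scheduler you pick, the agent's core concerns are to catch and log every failure, back off when the remote side is struggling, and treat each cycle as an independent unit of work. A minimal hand-rolled sketch follows; fetch/process/store are placeholder callables, and in practice Celery beat would replace run_forever for you:

```python
import logging
import time

def sync_remote_data(fetch, process, store):
    """One sync cycle: download remote rows, compute, store locally.
    fetch/process/store stand in for your own callables."""
    rows = fetch()                       # e.g. query the remote database
    results = [process(row) for row in rows]
    store(results)                       # e.g. save via Django models
    return len(results)

def run_forever(cycle, interval=60, max_backoff=600):
    """Keep the agent alive indefinitely: never let one bad cycle
    kill the process, and back off exponentially on repeated failures."""
    delay = interval
    while True:
        try:
            cycle()
            delay = interval             # reset the backoff after success
        except Exception:
            logging.exception("sync cycle failed")
            delay = min(delay * 2, max_backoff)
        time.sleep(delay)
```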

Effectively communicating between two Django applications on two servers (Multitenancy)

I'm hoping to be pointed in the right direction as far as what tools to use while in the process of developing an application that runs on two servers per client.
[Main Server][Client db Server]
Each client has their own server which has a django application managing their respective data, in addition to serving as a simple front end.
The main application server has a more feature-rich front end, using the same models/db schemas. It should have full read/write access to the client's database server.
The final desired effect would be a typical SaaS type deal:
client1.djangoapp.com => Connects to mysql database # client1_IP
client2.djangoapp.com => Connects to mysql database # client2_IP...
Thanks in advance!
You could use different settings files, let's say settings_client_1.py and settings_client_2.py, import common settings from a common settings.py file to keep it DRY. Then add respective database settings.
Do the same with the wsgi files: create one for each settings file, say wsgi_c1.py and wsgi_c2.py.
Then, in your web server direct the requests for client1.djangoapp.com to wsgi_c1.py and client2.djangoapp.com to wsgi_c2.py
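A sketch of that layout (module names, aliases, and credentials here are illustrative, not your actual values):

```python
# settings_client_1.py -- per-client settings; shared config stays DRY
from settings import *          # the common settings.py

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'client1',
        'HOST': 'client1_IP',   # that client's database server
        'USER': 'app',
        'PASSWORD': '...',
    }
}

# wsgi_c1.py -- point client1.djangoapp.com's vhost at the settings above
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings_client_1')
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```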

Amazon EC2 file structure / web app with separate Python backend?

I'm currently running a t2.micro instance on EC2 right now. I have the html/web interface side of it working, along with a MySQL database.
The site allows users to register and stores them in the DB via a PHP script.
I want there to be an actual Python application that queries the MySQL database and returns user data, to then be executed in a Python script.
What I cannot find is whether I should host this Python application as a totally separate instance or whether it can exist on the same instance, in a different directory. I ultimately just need to query the database, which makes me think it must exist on the same instance.
Could someone please provide some guidance?
Let me just be clear: this is not a Python web app. This Python backend is entirely separate except making queries against the database.
Either approach is possible, but there are pros & cons to each.
Running separate Python app on the same server:
Pros:
Setting up local access to the database is fairly simple
Only need to handle backups or making snapshots, etc. for a single instance
Cons:
Harder to scale up individual pieces if you need more memory, processing power, etc. in the future
Running the Python app on a separate server:
Pros:
Separate pieces means you can scale up & down the hardware each piece is running on, according to their individual needs
If you're using all micro instances, you get more resources to work with, without any extra costs (assuming you're still meeting all the other 'free tier eligible' criteria)
Cons:
In general, more pieces == more time spent on configuration, administration tasks, etc.
You have to open up the database to non-local access
Simplest: open up the database to access from anywhere (e.g. all remote IP addresses), and have the Python app log in via the internet
Somewhat safer, more complex: set the Python app server up with an elastic IP, open up the database to access only from that address
Much safer, more complex: set up your own virtual private cloud (VPC), and allow connections to the database only from within the VPC. You'd have to configure public access for each of the servers for whatever public traffic you'll have, presumably ports 80 and/or 443.
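Whichever layout you choose, the backend itself is just plain DB-API code plus a MySQL driver. PyMySQL is assumed below purely as an example, and the `users` table and its columns are guesses at your PHP app's schema; the helper takes any DB-API connection, so the same code runs whether the app lives on the same instance or a separate one:

```python
def fetch_user(conn, user_id, placeholder="%s"):
    """Fetch one row from the (hypothetical) `users` table.
    `placeholder` is "%s" for MySQL drivers, "?" for sqlite3."""
    cur = conn.cursor()
    cur.execute(
        "SELECT id, username, email FROM users WHERE id = " + placeholder,
        (user_id,))
    return cur.fetchone()

# With a MySQL driver such as PyMySQL (pip install pymysql):
#   import pymysql
#   conn = pymysql.connect(host="127.0.0.1",  # or the DB server's address
#                          user="app", password="...", database="webapp")
#   print(fetch_user(conn, 1))
```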

How to separate read db server and write db server on django 0.97?

I am using Django version 0.97 with PostgreSQL 9.0. I have configured hot streaming replication on a master db server and a slave db server. My application has heavy bot-driven writes to the DB, while reads come only from the users. So it would be much more efficient if I sent the users' read access to the slave db and the bots' write access to the master db. Unfortunately only Django 1.2 has multiple-database support, and it's a huge effort to upgrade my application. I got some leads through the following link: http://www.eflorenzano.com/blog/post/easy-multi-database-support-django/ However, this also requires me to change every instance of db access in my application. Is there any simpler way to assign separate db servers for read access and write access by fiddling with the Django core db module?
Your best bet is to upgrade to 1.2 as it will be significantly less work than hacking together features that already exist. If you stick with 0.97 for much longer your life will only be more difficult down the road.
I'm guessing you may have some misconceptions on how using multiple DBs works in Django 1.2. You do not have to "change all instances of db access in [your] application" if you use the Database Routers feature of Django.
With a router, you can specify which database to use for reads and writes. All of your existing django models should work and begin sending requests to the proper database. It's pretty simple to set up a router, just check the docs. All that is required is to create the router class, put it somewhere, then add a line in your settings.
It works really nicely and is not as much work as you may expect. You may have other issues with upgrading that you aren't telling us, but as far as models go you shouldn't have many problems.
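For reference, after the upgrade the read/write split described above is only a few lines. The alias names below ('default' for the master, 'replica' for the hot-standby slave) are assumptions about your settings.DATABASES entries, and the class is registered via the DATABASE_ROUTERS setting:

```python
# A minimal Django (1.2+) database router: reads go to the streaming-
# replication standby, writes go to the master. Alias names are assumed.
class ReadWriteRouter(object):
    def db_for_read(self, model=None, **hints):
        return 'replica'        # the hot-standby slave

    def db_for_write(self, model=None, **hints):
        return 'default'        # the master

    def allow_relation(self, obj1=None, obj2=None, **hints):
        # Both servers hold the same schema/data, so relations are fine.
        return True
```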
