I currently have a tediously long process for creating new instances of a CMS we make.
I plan to script as much of the process as I can, using Python.
The first step is creating a database.
Currently it is a manual process: I create an empty database "MyNewSite", select restore from backup, and restore it from the "master" db file. Before the restore I change the data and log paths accordingly (so they don't overwrite the master's files).
Is there any way to automate this? I'm not really sure where to begin so any help would be appreciated.
The CMS you make should have a deployment script. Your development process should always update upgrade scripts, never touch the database directly, so that database changes are deployed through source code (upgrade scripts) and version control; see Version Control and your Database.
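That said, the restore step itself can be scripted. Here is a minimal sketch, assuming SQL Server (the restore-from-backup workflow with movable data/log paths suggests it) and pyodbc; the driver string, backup path, and logical file names are assumptions, so list yours with RESTORE FILELISTONLY first:

```python
import pyodbc

def create_instance_db(db_name):
    # RESTORE DATABASE cannot run inside a transaction, hence autocommit.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;Trusted_Connection=yes;",
        autocommit=True)
    # 'CMS_Data'/'CMS_Log' are hypothetical logical file names from the
    # master backup; the WITH MOVE clauses relocate the new instance's
    # data and log files so they don't overwrite the master's.
    conn.execute(
        "RESTORE DATABASE [{0}] "
        "FROM DISK = N'C:\\backups\\master.bak' "
        "WITH MOVE N'CMS_Data' TO N'C:\\data\\{0}.mdf', "
        "MOVE N'CMS_Log' TO N'C:\\data\\{0}_log.ldf'".format(db_name))
    conn.close()

create_instance_db("MyNewSite")
```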
I am currently creating an application that uses Python Flask for the back-end API and PostgreSQL as the database, storing my data in JSON format. My plan is to have a front-end in JS that interacts with the API, which will pull relevant information from my database.
How do I package the database into the program so that if a fresh copy is pulled from GitHub, a user has everything needed to host and use the service? I am still a new developer and am having difficulty taking my hobbyist code and presenting it in a clean, organized way.
Thank you in advance for any help.
Though your question leaves quite a few options open, here are two things you could do:
If you assume your users can install a PostgreSQL database themselves: you could dump a database which contains the minimum required to run your application (using pg_dump). When your application starts on your user's server, it should detect that the database it's connecting to is empty, which should trigger an import of your data. The only thing your users then need to do is fill out their database connection details (see the sketch after these two options).
If your users don't know anything about configuring servers: you could create a Docker image containing your Python code and PostgreSQL. This package will contain all the dependencies of your application and runs anywhere. Admittedly, this is a bit more 'advanced' and could lead to other difficulties, both on your side and on your users'.
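For the first option, here is a minimal sketch of the empty-database check, assuming psycopg2 and a plain-SQL dump produced with pg_dump --inserts (so the file contains INSERT statements rather than COPY ... FROM stdin blocks, which can't be replayed through an ordinary cursor):

```python
import psycopg2

def ensure_seeded(conn_params, dump_path="seed.sql"):
    """Import the pg_dump output if the target database is still empty."""
    conn = psycopg2.connect(**conn_params)
    conn.autocommit = True
    with conn.cursor() as cur:
        # A freshly created database has no tables in the public schema.
        cur.execute("SELECT count(*) FROM information_schema.tables "
                    "WHERE table_schema = 'public'")
        if cur.fetchone()[0] == 0:
            with open(dump_path) as f:
                cur.execute(f.read())  # replay the whole dump in one call
    conn.close()
```

Your application would call ensure_seeded() once at start-up, using the connection details the user filled out.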
I have a development Django project using MySQL, and it is deployed at PythonAnywhere. I am able to push my code updates via Git, and the Django migrations take care of the database STRUCTURE, but my question is about data.
During development I may add a new capability that relies on master data which I enter in the DEV database as I develop and test. When deploying I'd like to copy over the master data to the new database rather than re-enter it all.
Is exporting and importing files the best way or is there a more professional way?
I think the simplest way to do this is by using the dumpdata management command.
The output of this command can then be loaded with the loaddata management command.
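For example, from a Django shell or any script running with your project's settings loaded (the app and model names here are made up):

```python
from django.core.management import call_command

# On DEV: export only the master-data models, not the whole database.
# 'catalog.Category' and 'catalog.Tax' are hypothetical model labels.
with open("master_data.json", "w") as f:
    call_command("dumpdata", "catalog.Category", "catalog.Tax",
                 indent=2, stdout=f)

# On the deployed site, after the migrations have run:
call_command("loaddata", "master_data.json")
```

Committing the fixture file to your repository also keeps the master data versioned alongside the code.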
I have built an application in Python that is hosted on Heroku; it basically uses a Python script to store some results in a database (it runs as a scheduled task on a daily basis). I would have done this with Ruby/Rails to avoid this confusion, but the application partner did not support Ruby.
I would like to know if it will be possible to build the front-end with Ruby on Rails and use the same database.
My Rails application will need to make use of MVC and have its own tables in the database, but it will also read some data from the tables that the Python script writes to.
Can I create the Rails app and reference the details of the database that my python application uses?
How could I test this on my local machine?
What would be the best approach to this?
I don't see any problem in doing this, as long as Rails manages the database structure and the Python script populates it with data.
My advice, just to keep things simple, is to define the database schema through migrations in your Rails app and build it as if the Python script didn't exist.
Once you have completed it, simply start the Python script so it can begin populating the database (it may be necessary to rename some tables in the Python script, but no more than that).
If you want to test on your local machine you can do one of these:
run the Python script on your local machine
configure the database.yml in your Rails app to point to the remote DB (this can be difficult if you don't have administrative access to the host server, because of port forwarding etc.); see the sketch below
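For the second option, a minimal database.yml sketch, assuming PostgreSQL (the usual choice on Heroku); the host and credentials are placeholders:

```yaml
# config/database.yml -- point the Rails app at the shared remote DB
development:
  adapter: postgresql
  host: your-db-host.example.com    # placeholder
  port: 5432
  database: shared_db               # the DB the Python script writes to
  username: rails_user
  password: <%= ENV["SHARED_DB_PASSWORD"] %>
```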
The only thing you should keep in mind is concurrent access.
Because you have two applications that both read and write in your DB, it would be better if the Python script did its job in a single, atomic transaction, to avoid your Rails app finding the DB in a half-updated state.
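A minimal sketch of that, assuming PostgreSQL and psycopg2 (the table and the compute step are made up):

```python
import psycopg2

def compute_value():
    return 42  # placeholder for whatever the script actually computes

conn = psycopg2.connect(dbname="shared_db", user="script", host="localhost")
try:
    with conn:  # one transaction: commits on success, rolls back on any error
        with conn.cursor() as cur:
            # Rails either sees yesterday's rows or today's, never a mix.
            cur.execute("DELETE FROM daily_results WHERE run_date = CURRENT_DATE")
            cur.execute("INSERT INTO daily_results (run_date, value) "
                        "VALUES (CURRENT_DATE, %s)", (compute_value(),))
finally:
    conn.close()
```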
You can think of the database as a shared box; it doesn't matter how many applications use it.
I've read many threads about SQLite databases and how updates work with WAL and checkpoints, but despite all the information on the Internet I really do not understand it.
I merge two SQLite databases on a desktop computer with a Python script, but when I put the new file (store.data) back into the folder in Xcode and replace the old one, the app crashes at runtime (error 259). I think this happens because the original -wal/-shm files do not match the new database (which now has new information/columns/rows), but I cannot figure out how to create new -wal/-shm files for it. Or should I update those files at the same time I merge the databases?
EDIT:
it works fine when I execute PRAGMA wal_checkpoint(RESTART) in the SQLite browser app... I replace the old files in my Xcode folder, but it only works if I do a full rebuild, not if I just open the app in the simulator... This won't cause a problem on a real device, where I will not rebuild the app but just launch it, will it? Is there a way to get around it?
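(For what it's worth, the same checkpoint can be run from the Python merge script itself, so the file copied into Xcode is already self-contained; a minimal sketch:)

```python
import sqlite3

# After merging, force a full checkpoint so everything still sitting in the
# -wal file is written back into store.data. TRUNCATE also empties the -wal
# file, so no stale -wal/-shm files need to ship with the database.
conn = sqlite3.connect("store.data")
conn.execute("PRAGMA wal_checkpoint(TRUNCATE);")
conn.close()
```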
I have a Django 1.6 project (stored in a Bitbucket Git repo) that I wish to host on a VPS.
The idea is that when someone purchases a copy of the software I have written, I can type in a few simple commands that will take a designated copy of the code from Git, create a new instance of the project with its own subdomain (e.g. <customer_name>.example.com), and create a new Postgres database (on the same server).
I should hopefully be able to create and remove these 'instances' easily.
What's the best way of doing this?
I've looked into writing scripts using some combination of Supervisor/Gunicorn/Nginx/Fabric etc. Other options could be something more serious like using Docker or Vagrant. I've looked into various PaaS options too.
Thanks in advance.
(EDIT: I have looked at the following services/things: Dokku (can't use Heroku due to data constraints), Vagrant (inc Puppet), Docker, Fabfile, Deis, Cherokee, Flynn (under dev))
If I were doing it (and I did a similar thing with a PHP application I inherited), I'd have a Fabric command that allows me to provision a new instance.
This could be broken up into the requisite steps (check out code, create database, syncdb/migrate, create DNS entry, start web server).
I'd probably do something sane like using the DNS entry as the database name, or at least derive one from the other with a reversible function.
You could then string these together to easily create a new instance.
You will also need a way to tell the newly created instance which database and domain name it should use. You could have the provisioning script write some data to a file in the checked-out repository that is then read by Django in its initialisation phase.
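A minimal sketch of such a provisioning task, written against the Fabric 1.x API that was current for Django 1.6; the repo URL, paths, and naming scheme are assumptions, and DNS plus the Nginx/Gunicorn config are left as a comment:

```python
# fabfile.py
from fabric.api import cd, run, task

REPO = "git@bitbucket.org:you/yourproject.git"  # hypothetical repo URL

@task
def provision(customer):
    """Create a new instance for <customer>.example.com."""
    instance_dir = "/srv/instances/%s" % customer
    db_name = customer.replace("-", "_")  # DNS label doubles as the DB name
    run("git clone %s %s" % (REPO, instance_dir))
    run("sudo -u postgres createdb %s" % db_name)
    with cd(instance_dir):
        # Tell the new instance which database/domain to use; settings.py
        # would read this file during initialisation.
        run("printf 'DB_NAME=%s\\nDOMAIN=%s.example.com\\n' > instance.env"
            % (db_name, customer))
        run("python manage.py syncdb --noinput")  # Django 1.6 predates migrate
    # ...then create the DNS record and an Nginx/Gunicorn vhost, and reload.
```

A matching teardown task (dropdb, remove the directory, delete the DNS record) gives you the easy removal you mentioned.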