Embedded database with Oracle and cx_Oracle from Python

Hello, I am creating a small program in Python with the cx_Oracle module, which allows me to connect to the Oracle database on my computer. However, I would like to send the program to a friend and have him be able to work with the same database as me. So I thought of an embeddable database (a bit like a single file with SQLite), but I did not find such a possibility with Oracle. I would like to know if there is a way to do this with Oracle or if I am forced to connect to a local database.

First of all, you can export and import Oracle databases, which would help you a lot with the initial sharing. However, if you share your database with a friend and both of you work on your own copy, the two databases will gradually diverge, and the differences will only grow over time. So you will need to consider your options carefully:
Using a central server
You could use a central server (which could be a remote server, or even your own computer if you set up port forwarding), make sure that both you and your friend connect to that database, and then both your changes and his will automatically be applied to the same database, with no separate copies.
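For example, a minimal cx_Oracle sketch of the central-server setup; the host, port, service name and credentials below are placeholders you would replace with your own:

    # Both programs point at the same central server instead of a local DB.
    import cx_Oracle

    dsn = cx_Oracle.makedsn("your-server-or-public-ip", 1521,
                            service_name="ORCLPDB1")   # placeholder service
    connection = cx_Oracle.connect(user="app_user", password="app_password",
                                   dsn=dsn)

    cursor = connection.cursor()
    cursor.execute("SELECT sysdate FROM dual")
    print(cursor.fetchone())
    connection.close()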
Versioned dumps
You could use a versioning tool like git to store versions of your database dump/structure/data, and both you and your friend could use it, perhaps keeping the versions in a central repository, so you would not need to send and explain your database changes again and again. This would keep schema and data in sync, though you will have frequent merge conflicts and other merge-related problems.
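As a rough illustration (the table names and connection details are made up), a small script could export each table to a CSV file which you then commit to git; a real setup might use Oracle's exp/expdp utilities or SQL scripts instead:

    # Export each table to a CSV file that can be committed to git.
    import csv
    import cx_Oracle

    connection = cx_Oracle.connect(user="app_user", password="app_password",
                                   dsn="localhost/ORCLPDB1")
    cursor = connection.cursor()

    for table in ("CUSTOMERS", "ORDERS"):          # placeholder table names
        cursor.execute("SELECT * FROM {0}".format(table))
        with open("dump_{0}.csv".format(table.lower()), "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cursor.description])
            writer.writerows(cursor)
    connection.close()

The resulting dump_*.csv files diff reasonably well in git, which is what makes this more merge-friendly than a binary dump file.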
Versioned scripts
You and your friend could write versioned scripts. This applies to structural changes, so your test data and your friend's would diverge, but the structure would not.
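A minimal sketch of what versioned scripts could look like, assuming scripts named 001_create_tables.sql, 002_add_index.sql, ... and a SCHEMA_VERSION table that records what has already been applied (all of these names are assumptions):

    # Apply any not-yet-applied numbered SQL scripts, in order.
    import glob
    import os
    import cx_Oracle

    connection = cx_Oracle.connect(user="app_user", password="app_password",
                                   dsn="localhost/ORCLPDB1")
    cursor = connection.cursor()

    cursor.execute("SELECT MAX(version) FROM schema_version")
    current = cursor.fetchone()[0] or 0

    for path in sorted(glob.glob("migrations/*.sql")):
        version = int(os.path.basename(path).split("_")[0])
        if version <= current:
            continue
        with open(path) as f:
            # naive split on ';'; good enough for simple DDL scripts
            for statement in f.read().split(";"):
                if statement.strip():
                    cursor.execute(statement)
        cursor.execute("INSERT INTO schema_version (version) VALUES (:v)",
                       v=version)
        connection.commit()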
Migration scripts
Some ORMs can generate migration scripts automatically, and you can migrate forwards or backwards by some number of steps. I am not particularly fond of the idea of automatically generated change scripts, but it is certainly a possible solution.

Related

Packaging a database with a full-stack Python application

I am currently creating an application that will use Python Flask for the back-end and API, and PostgreSQL as the database to store my data in JSON format. My plan is to have a front-end in JS that interacts with the API, which will pull the relevant information from my database.
How do I package the database with the program so that if a fresh copy is pulled from GitHub, a user has everything needed to host and use the service? I am still a new developer and am having difficulty taking my hobbyist code and presenting it in a clean, organized way.
Thank you in advance for any help.
Though your question leaves quite a few options open, here are two things you could do:
If you can assume your users are able to install a PostgreSQL database themselves: you could dump a database that contains the minimum required to run your application (using pg_dump). When your application starts on a user's server, it should detect that the database it is connecting to is empty, which should trigger an import of your data. The only thing your users then have to do is fill in their database connection details.
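A hedged sketch of that import-on-first-start idea, assuming psycopg2 and a seed.sql file created earlier with pg_dump (the table, file name and connection URL are placeholders):

    import subprocess
    import psycopg2

    DB_URL = "postgresql://app_user:app_password@localhost/app_db"

    def ensure_seed_data():
        # Load seed.sql the first time the app sees an empty database.
        conn = psycopg2.connect(DB_URL)
        with conn, conn.cursor() as cur:
            cur.execute("SELECT to_regclass('public.my_main_table')")
            initialised = cur.fetchone()[0] is not None
        conn.close()
        if not initialised:
            # seed.sql created earlier with: pg_dump --no-owner app_db > seed.sql
            subprocess.run(["psql", DB_URL, "-f", "seed.sql"], check=True)

    ensure_seed_data()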
If your users don't know anything about configuring servers: you could create a Docker image containing your Python code and PostgreSQL. This package contains all the dependencies of your application and runs anywhere. Admittedly, this is a bit more 'advanced' and could lead to other difficulties, both on your side and on your users' side.

Solution for storing digital asset management metadata and dependencies in Python

I'm planning to write some sort of digital asset management system to help users deal with files in a VCS (SVN, Perforce, ...) easily. The main premise is that all custom file metadata and dependencies are stored alongside the real files in the VCS and not on a separate database server.
But querying the metadata would be super slow if everything had to be loaded from the VCS on demand, so I would like to cache all metadata and dependencies locally and just update them incrementally when needed.
I need to write the whole system in Python, since it has to run in several environments that embed Python.
Theoretically my needs would be fulfilled by an embedded NoSQL graph database with multiprocess access, but sadly I cannot find anything that matches these criteria:
every file can have a different metadata structure, so I can't use schemas, thus no SQL db
I need to store dependencies
ability to search metadata and dependencies
several processes need to be able to read the database at once
serverless solution (only local machine will use it)
Python support
Optionally, a way to inform connected processes about database updates
I would really appreciate it if someone more experienced could point me in the right direction. I'm not looking exclusively for one silver-bullet piece of software that fulfills my needs; it can also be a combination of several solutions. I just don't like reinventing the wheel, so I would rather use a 3rd-party solution than write something on my own.
Thank you
ZODB satisfies most of your criteria:
support for arbitrary Python constructs
usable mostly as if everything were in memory as normal objects
if opened in read-only mode, multiple processes can read at the same time (I think)
if used with ZEO (a server), many clients can access it at the same time, with automatic notification of changes. See the sample application in the ZEO guide
You can load the same database with ZEO or ZODB so you can switch.
There is some tutorial stuff and general info at http://www.zodb.org
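A small sketch of the serverless (local FileStorage) variant; the class and key names are only illustrative, and switching to ZEO later mostly means opening the database through a ZEO client instead of a FileStorage:

    import ZODB, ZODB.FileStorage
    import transaction
    import persistent
    from BTrees.OOBTree import OOBTree

    class FileRecord(persistent.Persistent):
        def __init__(self, metadata, dependencies):
            self.metadata = metadata          # arbitrary Python structures
            self.dependencies = dependencies  # e.g. paths of other files

    storage = ZODB.FileStorage.FileStorage("metadata_cache.fs")
    db = ZODB.DB(storage)
    connection = db.open()
    root = connection.root()

    if "files" not in root:
        root["files"] = OOBTree()

    root["files"]["/project/asset.ma"] = FileRecord(
        metadata={"author": "alice", "version": 3},
        dependencies=["/project/texture.png"],
    )
    transaction.commit()
    db.close()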

Remotely accessing sqlite3 in Django using a python script

I have a Django application that runs on an Apache server and uses an SQLite3 db. I want to access this database remotely, using a Python script that first connects to the machine over ssh and then accesses the database.
After a lot of searching I understand that we cannot access an sqlite db remotely. I don't want to download the db folder using ftp and work on that copy; instead I want to access it remotely.
What could be the other possible ways to do this? I don't want to change the database, but am looking for alternate ways to achieve the connection.
Leaving aside the question of whether it is sensible to run a production Django installation against sqlite (it really isn't), you seem to have forgotten that, well, you are actually running Django. That means that Django can be the main interface to your data; and therefore you should write code in Django that enables this.
Luckily, there exists the Django REST Framework that allows you to simply expose your data via HTTP interfaces like GET and POST. That would be a much better solution than accessing it via ssh.
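For illustration (the model name is a placeholder, not part of your project), exposing a model with Django REST Framework boils down to a serializer, a viewset and a router:

    # serializers.py / views.py / urls.py condensed into one sketch
    from rest_framework import routers, serializers, viewsets
    from myapp.models import Measurement      # placeholder model

    class MeasurementSerializer(serializers.ModelSerializer):
        class Meta:
            model = Measurement
            fields = "__all__"

    class MeasurementViewSet(viewsets.ModelViewSet):
        queryset = Measurement.objects.all()
        serializer_class = MeasurementSerializer

    router = routers.DefaultRouter()
    router.register(r"measurements", MeasurementViewSet)
    # in urls.py: urlpatterns = [url(r"^api/", include(router.urls)), ...]

Your remote script can then simply issue HTTP requests against that endpoint (e.g. with the requests library) instead of touching the sqlite file at all.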
Sqlite needs to access the provided file, so this is more of a filesystem question than a Python one. You have to find a way for sqlite and Python to access the remote directory, be it sftp, sshfs, ftp or whatever; it entirely depends on your remote and local OS. Preferably, mount the remote directory on your local filesystem.
You would not need to make a copy of it, although if the file is large you might want to consider that option too.

Rails app to work with a remote heroku database

I have built an application in Python that is hosted on Heroku; it basically uses a Python script to store some results in a database (it runs as a scheduled task on a daily basis). I would have done this with Ruby/Rails to avoid this confusion, but the application partner did not support Ruby.
I would like to know if it will be possible to build the front-end with Ruby on Rails and use the same database.
My Rails application will need to use MVC and have its own tables in the database, but it will also read some data from the tables that the Python script writes to.
Can I create the Rails app and reference the details of the database that my python application uses?
How could I test this on my local machine?
What would be the best approach to this?
I don't see any problem in doing this, as long as Rails manages the database structure and the Python script only populates it with data.
My advice, just to keep things simple, is to define the database schema through migrations in your Rails app and build it as if the Python script didn't exist.
Once you have completed it, simply start the Python script so it can begin populating the database (it may be necessary to rename some tables in the Python script, but no more than that).
If you want to test this on your local machine you can do one of these:
run the Python script on your local machine
configure the database.yml in your Rails app to point to the remote DB (this can be difficult if you don't have administrative access to the host server, because of port forwarding etc.)
The only thing you should keep in mind is concurrent access.
Because you have two applications that both read and write to your DB, it would be better if the Python script does its job in a single atomic transaction, to avoid your Rails app finding the DB in a half-updated state.
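An illustrative psycopg2 sketch of that single-transaction idea; the SQL, table name and compute_value helper are placeholders for whatever the scheduled script really does:

    import os
    import psycopg2

    def compute_value():
        # placeholder for whatever the scheduled script actually computes
        return 42

    conn = psycopg2.connect(os.environ["DATABASE_URL"])  # Heroku-style config
    try:
        with conn:                  # commits on success, rolls back on error
            with conn.cursor() as cur:
                cur.execute("DELETE FROM daily_results WHERE run_date = CURRENT_DATE")
                cur.execute(
                    "INSERT INTO daily_results (run_date, value) "
                    "VALUES (CURRENT_DATE, %s)",
                    (compute_value(),),
                )
    finally:
        conn.close()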
You can think of the database as a shared box; it doesn't matter how many applications use it.

How do I run a Django 1.6 project with multiple instances running off the same server, using the same db backend?

I have a Django 1.6 project (stored in a Bitbucket Git repo) that I wish to host on a VPS.
The idea is that when someone purchases a copy of the software I have written, I can type in a few simple commands that will take a designated copy of the code from Git, create a new instance of the project with its own subdomain (e.g. <customer_name>.example.com), and create a new Postgres database (on the same server).
I should hopefully be able to create and remove these 'instances' easily.
What's the best way of doing this?
I've looked into writing scripts using some combination of Supervisor/Gunicorn/Nginx/Fabric etc. Other options could be something more involved, like using Docker or Vagrant. I've also looked into various PaaS options.
Thanks in advance.
(EDIT: I have looked at the following services/things: Dokku (can't use Heroku due to data constraints), Vagrant (inc Puppet), Docker, Fabfile, Deis, Cherokee, Flynn (under dev))
If I was doing it (and I did a similar thing with a PHP application I inherited), I'd have a fabric command that allows me to provision a new instance.
This could be broken up into the requisite steps (check-out code, create database, syncdb/migrate, create DNS entry, start web server).
I'd probably do something sane like using the DNS entry as the database name, or at least deriving one from the other with a reversible function.
You could then string these together to easily create a new instance.
You will also need a way to tell the newly created instance which database and domain name it needs to use. You could have the provisioning script write some data to a file in the checked-out repository that is then read by Django during its initialisation phase.
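A rough fabfile sketch (Fabric 1.x style) of such a provisioning command; the host, repository URL, paths and the config file it writes are assumptions, not a finished deployment script:

    from fabric.api import cd, env, run, task

    env.hosts = ["deploy@vps.example.com"]           # placeholder VPS

    @task
    def provision(customer):
        domain = "{0}.example.com".format(customer)
        dbname = domain.replace(".", "_")            # reversible DNS -> db name
        instance_dir = "/srv/instances/{0}".format(customer)

        run("git clone git@bitbucket.org:me/myproject.git {0}".format(instance_dir))
        run("createdb {0}".format(dbname))
        # tell the new instance which database and domain it should use
        run("echo DB_NAME={0} > {1}/instance.conf".format(dbname, instance_dir))
        run("echo DOMAIN={0} >> {1}/instance.conf".format(domain, instance_dir))
        with cd(instance_dir):
            run("python manage.py syncdb --noinput")
        # still to do: DNS entry, nginx/supervisor config, start gunicorn

You would then invoke it as fab provision:customer_name, and a matching destroy task could remove an instance again.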
