Where Does Django Store Its Test Database - python

When using an SQLite database where does Django store the database that it uses when running tests?
Is there a way to define this path?
I would like to be able to manually look at the contents of the test database following each test.
The command python manage.py test --keepdb is supposed to keep the database, which it does, but I cannot seem to find where this database is stored.
The dev database is stored in the root of the project but the test database is not found there.

from the docs:
When using SQLite, the tests will use an in-memory database by default (i.e., the database will be created in memory, bypassing the filesystem entirely!). The TEST dictionary in DATABASES offers a number of settings to configure your test database. For example, if you want to use a different database name, specify NAME in the TEST dictionary for any given database in DATABASES.
for more info see:
https://docs.djangoproject.com/en/2.1/topics/testing/overview/#the-test-database
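For example, here is a sketch of a settings.py DATABASES entry that points the SQLite test database at a real file instead of the default in-memory database, so it survives --keepdb and can be inspected afterwards (the file names are illustrative; BASE_DIR is assumed to be defined earlier in settings.py):
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": os.path.join(BASE_DIR, "db.sqlite3"),
        # The TEST dictionary controls the database used by manage.py test.
        "TEST": {
            "NAME": os.path.join(BASE_DIR, "test_db.sqlite3"),
        },
    },
}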

Related

Use alembic migration or docker volumes to populate docker postgres database?

I believe this question already shows that I am new to docker and alembic. I am building a flask+sqlalchemy app using docker and postgres. So far I am not using alembic, but I am about to plug it in and some questions came up. I will have to create a pg_trgm extension and also populate one of the tables with data I already have. Until now I have only created brand new databases using sqlalchemy for the tests. So here is what I am thinking/doing:
1. To create the extension I could simply add a volume to the postgres docker service like: ./pg_dump.sql:/docker-entrypoint-initdb.d/pg_dump.sql. The extension does not depend on any specific db, so a simple "CREATE EXTENSION IF NOT EXISTS pg_trgm WITH SCHEMA public;" would do it, right?
2. If I use the same strategy to populate the tables, I need a pg_dump.sql that creates the complete db and tables. To accomplish that I first created the brand new database with sqlalchemy, then I used a script to populate the tables with data I have in a json file. I then generated the complete pg_dump.sql, and now I can place this complete .sql file on the docker service volume; when I run my docker-compose, the postgres container will have the database ready to go.
3. Now that I am starting with alembic, I am thinking I could just keep the pg_dump.sql to create the extensions, and have an alembic migration script populate the empty tables (dropping item 2 above).
Which is the better way: 2, 3, or none of them? Thanks.
Create the extension in a /docker-entrypoint-initdb.d script (1). Load the data using your application's migration system (3).
Mechanically, one good reason to do this is that the database init scripts only run the very first time you create a database container on a given storage. If you add a column to a table and need to run migrations, the init-script sequence requires you to completely throw away and recreate the database.
Philosophically, I'd give you the same answer whether you were using Docker or something else. You could imagine running a database on a dedicated server, or using a cloud-hosted database. You'd have to ask your database administrator to install the extension for you, but they'd generally expect to give you credentials to an empty database and have you load the data yourself; or in a cloud setup you could imagine checking a "install this extension" checkbox in their console but there wouldn't be a way to load the data without connecting to the database remotely.
So, a migration system will work anywhere you have access to the database, and will allow incremental changes to the schema. The init script setup is Docker-specific and requires deleting the database to make any change.
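As an illustration of option 3, here is a minimal sketch of an Alembic data migration that seeds a table. The revision ids, table name, and columns are placeholders, not taken from the question:
"""Seed the items table with initial data (illustrative only)."""
from alembic import op
import sqlalchemy as sa

# Revision identifiers; the values here are placeholders.
revision = "0002_seed_items"
down_revision = "0001_create_tables"
branch_labels = None
depends_on = None


def upgrade():
    # Describe the existing table so bulk_insert knows its columns.
    items = sa.table(
        "items",
        sa.column("id", sa.Integer),
        sa.column("name", sa.String),
    )
    op.bulk_insert(items, [
        {"id": 1, "name": "first"},
        {"id": 2, "name": "second"},
    ])


def downgrade():
    op.execute("DELETE FROM items WHERE id IN (1, 2)")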

Create postgres database from pony ORM

Is it possible to create a new database from Pony ORM? We couldn't find it in the Pony ORM docs; it is always assumed that the database already exists or that an SQLite file is used.
We would like to create a testing database and drop it afterwards.
No. Per the "Supported databases" section of the API reference:
https://docs.ponyorm.org/api_reference.html#sqlite
If you look at the .bind() API for the various databases, SQLite is the only one with create_db. This is because in SQLite creating a database is just creating a single file. The other engines need to go through their own program to initialize a database. You will need to create an independent script that creates the database.
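For reference, a minimal sketch of the difference (file name and connection details are placeholders):
from pony.orm import Database

db = Database()

# SQLite: Pony can create the database file itself.
db.bind(provider="sqlite", filename="test.sqlite", create_db=True)

# PostgreSQL: there is no create_db argument; the database must already exist,
# so it has to be created out of band (e.g. with createdb or a separate script)
# before binding:
# db.bind(provider="postgres", user="postgres", password="secret",
#         host="localhost", database="test_db")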
If you have your sqlite database file, you can try using pgloader.

Specify database backend store creation in specific schema

When creating an mlflow tracking server and specifying that a SQL Server database is to be used as a backend store, mlflow creates a bunch of tables within the dbo schema. Does anyone know if it is possible to specify a different schema in which to create these tables?
It is possible to alter mlflow/mlflow/store/sqlalchemy_store.py to change the schema in which the tables are created.
It is very likely that this is the wrong solution for you, since you will go out of sync with the open source project and lose newer features that touch this code, unless you maintain the fork yourself. Could you maybe reply with your use case?
You can use Postgres URI options. Sample URI:
"postgresql://postgres:postgres@localhost:5432/postgres?options=-csearch_path%3Ddbo,mlflow_schema"
In your MLflow code:
mlflow.set_tracking_uri("postgresql://postgres:postgres@localhost:5432/postgres?options=-csearch_path%3Ddbo,mlflow_schema")
Don't forget to create the 'mlflow_schema' schema first.
how-to-specify-schema-in-psycopg2-connection-method
I'm using MSSQLServer as the backend store. I could use a different schema than dbo by specifying the default schema for the SQL Server user being used by MLflow.
In my case, if the MLflow tables (e.g. experiments) already exist in dbo, then those tables will be used. If not, MLflow will create them in the default schema.
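A sketch of that approach, assuming a dedicated SQL Server login for MLflow; the user, schema, and connection details below are placeholders:
import pyodbc

# Connection details are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=mlflow_db;UID=sa;PWD=secret"
)
conn.autocommit = True
cursor = conn.cursor()

# Create the schema if needed and make it the default for the MLflow user,
# so MLflow creates its tables there instead of in dbo.
cursor.execute("IF SCHEMA_ID('mlflow_schema') IS NULL EXEC('CREATE SCHEMA mlflow_schema')")
cursor.execute("ALTER USER mlflow_user WITH DEFAULT_SCHEMA = mlflow_schema")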

Django use multiple databases dynamically without defining in settings

TL;DR: Besides my default Django database, I need data pulled in from two different user-selected databases. I'm not sure how to set up Django to access these besides just running manual queries using connection.cursor().execute("SQL").
Situation:
A process creates a sqlite db. The database is imported into MySQL. I'm writing a Django app that interacts with that MySQL database (call it StreamDB), another MySQL database with additional info the user needs to see (call this SourceDB), and of course the default Django app MySQL DB (call it AppDB).
There will be two versions of SourceDB (prod and test); the imported StreamDB maps to one and only one of these SourceDBs.
I have a table/model in my "AppDB" that identifies these sources (the StreamDB name, which of the two SourceDBs it maps to, and some other data). Here's a sample record:
name: foo
path: /var/www/data/test/foo.sqlite
db_name: foo
source_db_name: bar
date_imported: 2014-05-03 10:20:30
These are managed through the Django admin and added manually (or dynamically via an external script).
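For concreteness, a sketch of what that AppDB model might look like, mirroring the sample record above (the model name and field types are assumptions):
from django.db import models


class Source(models.Model):
    # Fields mirror the sample record above.
    name = models.CharField(max_length=100)
    path = models.CharField(max_length=255)
    db_name = models.CharField(max_length=100)
    source_db_name = models.CharField(max_length=100)
    date_imported = models.DateTimeField()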
Dilemma
Depending on which source is selected, my SQL needs to join tables from those two DBs. Example query, with the dynamic db names in angle brackets:
SELECT a.image_id, a.image_name, b.title, b.begin_time
FROM <selected_streamdb>.image a JOIN <selected_sourcedb>.event b ON b.event_id = a.event_id
WHERE a.image_type = 'png'
Do I fill in <selected_streamdb> and <selected_sourcedb> with variables, perhaps?
Question
Is there any way to use Django's ORM in a situation like this? Can Django pull DATABASES settings from a DB table? Do I create a model that manages this (i.e., the source table above)?
I don't mind managing db permissions on the backend (assume the app database user in the Django settings has access to all of the databases).
Hope all of the above makes sense.
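For reference, a minimal sketch of the raw-cursor approach mentioned in the TL;DR, filling the <selected_streamdb> / <selected_sourcedb> placeholders from the AppDB source table (the Source model is the hypothetical one sketched above; the app name is an assumption):
from django.db import connection

from myapp.models import Source  # hypothetical app/model names


def fetch_images(source_name):
    source = Source.objects.get(name=source_name)
    # Database names cannot be bound as query parameters, so they are
    # interpolated directly; only use values stored in the Source table.
    sql = (
        "SELECT a.image_id, a.image_name, b.title, b.begin_time "
        "FROM {stream}.image a JOIN {source}.event b ON b.event_id = a.event_id "
        "WHERE a.image_type = %s"
    ).format(stream=source.db_name, source=source.source_db_name)
    with connection.cursor() as cursor:
        cursor.execute(sql, ["png"])
        return cursor.fetchall()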

Flask-SQLAlchemy - When are the tables/databases created and destroyed?

I am a little confused with the topic alluded to in the title.
So, when a Flask app is started, does SQLAlchemy look at the SQLALCHEMY_DATABASE_URI for the correct (in my case MySQL) database? Then, does it create the tables if they do not already exist?
What if the database that is configured in the SQLALCHEMY_DATABASE_URI variable in the config.py file does not exist?
What if that database exists, and only a few of the tables exist (There are more tables coded into the SQLAlchemy code than exist in the actual MySQL database)? Does it erase those tables and then create new tables with the current specs?
And what if those tables do all exist? Do they get erased and re-created?
I am trying to understand how the entire process works so that I (1) Don't lose database information when changes are made to the schema, and (2) can write the necessary code to completely manage how and when the SQLAlchemy talks to the actual Database.
Tables are not created automatically; you need to call the SQLAlchemy.create_all() method explicitly to have it create the tables for you:
db = SQLAlchemy(app)
db.create_all()
You can do this with a command-line utility, for example, or, if you deploy to a PaaS such as Google App Engine, in a dedicated admin-only view.
The same applies for database table destruction; use the SQLAlchemy.drop_all() method.
See the Creating and Dropping tables chapter of the documentation, or take a look at the database chapter of the Mega Flask Tutorial.
You can also delegate this task to Flask-Migrate or similar schema versioning tools. These help you record and replay schema creation and migration steps; the database schema of real-life projects is never static, and you will want to be able to move existing data between versions of the schema. Creating the initial schema is then just the first step.
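A minimal sketch of wiring up Flask-Migrate for this purpose (the database URI is a placeholder):
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "mysql+pymysql://user:password@localhost/mydb"  # placeholder

db = SQLAlchemy(app)
migrate = Migrate(app, db)

# With this in place, schema changes are managed from the command line:
#   flask db init                      # once; creates the migrations/ directory
#   flask db migrate -m "add column"   # autogenerates a migration from model changes
#   flask db upgrade                   # applies pending migrations without dropping data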
