I am a newbie to Django. Following this SO question, I was able to create an application that lets a user upload an image to the server:
Need a minimal Django file upload example
However, I now wish to access the database where the information about the uploaded files is stored. From within the virtualenv, in the project directory, when I run .databases at the sqlite prompt, I get the following output:
sqlite> .databases
seq name file
--- --------------- ----------------------------------------------------------
0 main
1 temp
I am not able to locate where my database is stored or how to access it. How do I access and view the database (so as to alter it)?
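The empty file column next to main suggests sqlite3 was started without a filename, so it opened an in-memory database rather than Django's file. The location of the actual database is whatever settings.py says it is; with Django's default project layout it is a db.sqlite3 file next to manage.py. A minimal sketch of the relevant setting (your NAME value may differ):

# settings.py (Django's classic default sqlite configuration)
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),  # the actual file on disk
    }
}

Once you know that path, running sqlite3 /path/to/db.sqlite3 (with the filename as an argument) opens the right database, and .tables and .schema will show what Django created.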
I'm working on a project that has all its data written as Python files (code). Authenticated users can upload data through a web page, which directly adds and changes code on the production server. This causes trouble every time I want to git pull changes from the git repo to the production server, since the code added by users directly on production is untracked.
I wonder if anyone has some other ideas. I know this is ill-designed, but this is what I inherited, and introducing a database would require a lot of effort since all the current code is designed around Python files. I guess it's because the people who wrote this didn't know much about databases, and it works only because there is relatively little data.
The only two solutions I could think of are:
use a database instead of storing all the data as code
git add/commit/push all changes on the production server every time a user uploads data
Details added:
There is a 'data' folder of Python files, each storing the information for one book. For instance, latin_stories.py has a dictionary variable text, an integer variable section, a string variable language, etc. When users upload a CSV file in a certain format, a program automatically adds and changes the Python files in the data folder, directly on the production server.
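For concreteness, one of those data modules might look roughly like this (the contents are invented for illustration; only the variable names come from the description above):

# data/latin_stories.py -- illustrative sketch of one book's data module
language = "latin"          # string variable
section = 3                 # integer variable
text = {                    # dictionary variable mapping section -> passage
    1: "Gallia est omnis divisa in partes tres.",
    2: "Quarum unam incolunt Belgae.",
}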
I believe this question already shows that I am new to Docker and Alembic. I am building a Flask + SQLAlchemy app using Docker and Postgres. So far I am not using Alembic, but I am about to plug it in, and some questions came up. I will have to create a pg_trgm extension and also populate one of the tables with data I already have. Until now I have only created brand new databases using SQLAlchemy, for the tests. So here is what I am thinking/doing:
1. To create the extension I could simply add a volume to the postgres docker service, like ./pg_dump.sql:/docker-entrypoint-initdb.d/pg_dump.sql. The extension does not depend on any specific database, so a simple "CREATE EXTENSION IF NOT EXISTS pg_trgm WITH SCHEMA public;" would do it, right?
2. If I use the same strategy to populate the tables, I need a pg_dump.sql that creates the complete database and tables. To accomplish that, I first created the brand new database with SQLAlchemy, then used a script to populate the tables with data I have in a JSON file. I then generated the complete pg_dump.sql, and now I can place this complete .sql file on the docker service volume; when I run docker-compose, the postgres container will have the database ready to go.
3. Now that I am starting with Alembic, I am thinking I could just keep the pg_dump.sql to create the extension, and have an Alembic migration script populate the empty tables (dropping item 2 above).
Which way is better: 2, 3, or none of them? Thanks!
Create the extension in a /docker-entrypoint-initdb.d script (1). Load the data using your application's migration system (3).
Mechanically, one good reason to do this is that the database init scripts only run the very first time you create a database container on a given storage. If you add a column to a table and need to run migrations, the init-script sequence requires you to completely throw away and recreate the database.
Philosophically, I'd give you the same answer whether you were using Docker or something else. You could imagine running a database on a dedicated server, or using a cloud-hosted database. You'd have to ask your database administrator to install the extension for you, but they'd generally expect to give you credentials to an empty database and have you load the data yourself. In a cloud setup you could imagine checking an "install this extension" checkbox in the console, but there wouldn't be a way to load the data without connecting to the database remotely.
So, a migration system will work anywhere you have access to the database, and will allow incremental changes to the schema. The init script setup is Docker-specific and requires deleting the database to make any change.
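As a concrete illustration of option 3, an Alembic data migration can bulk-insert the seed rows. This is only a sketch: the table, columns, revision ids, and seed.json file are all placeholders, not your actual schema.

"""Populate the items table with seed data (illustrative Alembic revision)."""
import json

import sqlalchemy as sa
from alembic import op

# revision identifiers, used by Alembic.
revision = '0002_seed_items'
down_revision = '0001_create_tables'
branch_labels = None
depends_on = None


def upgrade():
    # Declare just enough of the table for the bulk insert.
    items = sa.table(
        'items',
        sa.column('id', sa.Integer),
        sa.column('name', sa.String),
    )
    with open('seed.json') as f:
        rows = json.load(f)  # expects a list of {'id': ..., 'name': ...} dicts
    op.bulk_insert(items, rows)


def downgrade():
    op.execute('DELETE FROM items')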
I am trying to access excel files through the django-admin model viewing portal.
Each excel file is generated through a separate algorithm and already lives in a directory called excel_stored.
Each excel file is generated with an ID that corresponds to its model's ID in Django, so it would be excel_%ID%.xlsx, e.g. excel_23.xlsx.
I want my Django FileField() to access the relevant excel file so that I can download it from my Django admin portal, just like I can access my other model information (city name, time uploaded, etc.).
Here is some pseudo-code of what I'd want to do:
My models.py would look like this
excel = models.FileField()
The process of saving would look like this
create_excel()
### EXCEL WAS SAVED TO DIR: excel_stored ###
save_excel = Model(excel=os.path.join(BASE_DIR, 'lead/excel_stored/excel_%s.xlsx' % ID))
save_excel.save()
I'd then be able to download it like this: https://i.stack.imgur.com/4HRUU.gif
I know there's a lot I'm missing, but most documentation I find refers to uploading an excel file through forms, not accessing it.
I've been stuck on this for a while, so I'd appreciate some direction! Thank you!
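One approach that can work, sketched here with placeholder names (myapp, Report) and assuming MEDIA_ROOT points at the directory that contains excel_stored/: a FileField stores a path relative to MEDIA_ROOT, so an existing file can be attached without re-uploading it.

import os
from django.conf import settings
from myapp.models import Report  # hypothetical app and model names

def attach_excel(report_id):
    # FileField values are paths relative to MEDIA_ROOT, so if the file
    # already lives under MEDIA_ROOT no copy is needed.
    relative_path = 'excel_stored/excel_%s.xlsx' % report_id
    assert os.path.exists(os.path.join(settings.MEDIA_ROOT, relative_path))
    report = Report.objects.get(pk=report_id)
    report.excel.name = relative_path  # attach without re-uploading
    report.save()

With that in place, the admin change page shows the file as a link, which gives the download behaviour described above.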
I have two issues, both of which are inter-related.
Issue #1
My app uses an online Postgres database to store data. Because the app is Dockerized, migrations that I create no longer appear on my local machine but are instead stored in the Docker container.
None of the questions I've seen so far seem to run into issues with making migrations and adding a unique constraint to one of the fields within the table.
I have written shell code to run a Python script that prints the contents of the migrations file in the command prompt window. I was able to obtain the migrations file that was to be applied, and I added a row to the django_migrations table to record it. I then ran makemigrations and migrate, but it said there were no changes applied (which leads me to believe that the row I added to the database should only have been created automatically by Django after it detected migrations on its own, rather than me specifying the migrations file and asking it to make the changes). The issue is that now, the new migrations still detect the following change:
Migrations for 'mdp':
db4mdp/mdp/migrations/0012_testing.py
- Alter field mdp_name on languages
Despite detecting this apparent 'change', when I try to apply it I get the following error:
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "mdp_mdp_mdp_fullname_281e4228_uniq" already exists
I have already checked on my postgres server using pgAdmin 4 whether the constraint has actually been applied, and it has, with the name next to "relation" as specified above. So why does Django still detect this as a change to be made? The thing is, if I now remove the new migrations file that I created in my Python directory, it will probably run (since the changes have 'apparently' been made in the database), but I won't have the migrations file to keep track of the changes. I don't know if I need to keep the migrations around now that I'm using an online database, though. I will not be rolling back any changes I make, nor will I be making changes too often. This is just a one/two time thing, but I want to resolve the error.
Issue #2
The reason I used 'apparently' in my issue above is that even though the constraints section in my public schema shows that the constraints have been applied, for some reason, when I try to create a new entry in my table with a non-unique string in the field I've defined as unique, it allows its creation anyway.
You never add anything manually to the django_migrations table. Let Django do it. If it is not doing it, no matter what, your code is not production-ready.
I understand that you are doing your development inside Docker. When you do that, you should mount a local directory into the container as a volume. Since you have not mounted one, your migrations will not show up locally.
Refer to Volumes in the Docker documentation; it should resolve your issues.
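As a sketch, the bind mount in docker-compose.yml could look like this (the service name and paths are assumptions, not taken from your setup):

# docker-compose.yml (fragment)
services:
  web:
    build: .
    volumes:
      - ./:/app   # host project dir <-> container dir, so new migration
                  # files created inside the container appear locally too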
For anyone trying to find an alternate solution to this problem other than mounting volumes due to time constraints, this answer might help, but deosha's is still the correct way to go about it. I fixed the problem by deleting all my tables and the rows corresponding to my specific app's migrations (it isn't necessary to delete the auth tables etc., because you won't be deleting the rows corresponding to those in the django_migrations table). Following this, I used the following within the shell script called by my Dockerfile:
python manage.py makemigrations --name testing
python testing_migrations.py
The migration needs to be named for the next step. After this line of code, I ran the Python script testing_migrations, which contains the following code:
import os

# Locate the freshly created migration file inside the container.
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
migrations_file = os.path.join(BASE_DIR, '<path_to_new_migration_file>')

# Print the migration's contents so they can be copied to the local machine.
with open(migrations_file, 'r') as file:
    print(file.read())
Typically the first migration created for your app will be 0001_testing.py (which is why naming it was necessary earlier). Since the contents of this file are visible to the container while it is up and running, you will be able to print them. Then run the migrate command. This creates a row in the django_migrations table that makes it appear to Django that the migration has been applied. However, on your local machine this migrations file doesn't exist. So copy the contents of the file from the print statement above, put them into a .py file with the same name, and save it in that app's migrations folder on your local machine.
You can follow this method for all successive migrations by repeating the process and incrementing the number in the testing_migrations file as required.
Squashing migrations once you're done making the table will help. If you're doing this all in development and have no requirement to roll back the changes to the database schema, then just put this into production after deleting all the migrations files and the rows in your django_migrations table corresponding to your app, as was done initially. Drop your tables, let your first new migrations file re-create them, and then import your data once again.
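For the squashing step itself, Django's built-in command covers it (the app label and migration name here are just examples matching the output above):

python manage.py squashmigrations mdp 0012_testing

This collapses the history up to and including 0012_testing into a single new migration file.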
This is not the recommended method. Use deosha's if you're not on a time crunch.
TL;DR: Besides my default Django database, I need data pulled in from two different user-selected databases. Not sure how to set up Django to access these besides just running manual queries using connection.cursor().execute("SQL").
Situation:
A process creates a SQLite DB. The database is imported into MySQL. I'm writing a Django app that interacts with that MySQL database (call it StreamDB), another MySQL database with additional info the user needs to see (call it SourceDB), and of course the default Django app MySQL DB (call it AppDB).
There will be two versions of SourceDB (prod and test); each imported StreamDB maps to one and only one of these SourceDBs.
I have a table/model in my AppDB that identifies these sources (the StreamDB name, which of the two SourceDBs it maps to, and some other data). Here's a sample record:
name: foo
path: /var/www/data/test/foo.sqlite
db_name: foo
source_db_name: bar
date_imported: 2014-05-03 10:20:30
These are managed through the Django admin and added manually (or dynamically via an external script).
Dilemma
Depending on which source is selected, my SQL needs to join the tables from those two DBs. Example query, with the dynamic DB names in <>:
SELECT a.image_id, a.image_name, b.title, b.begin_time
FROM <selected_streamdb>.image a JOIN <selected_sourcedb>.event b ON b.event_id = a.event_id
WHERE a.image_type = 'png'
Do I fill in <selected_streamdb> and <selected_sourcedb> with variables, perhaps?
Question
Is there any way to use Django's ORM model in a situation like this? Can Django grab the DATABASES settings from a DB table? Do I create a model that manages this (i.e. the sources table above)?
I don't mind managing db permissions on the backend (assume the app database user in the Django settings has access to all of the databases).
Hope all of the above makes sense.
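One pattern that may fit, sketched below with placeholder names: register each user-selected database at runtime by adding an entry to django.db.connections.databases, then route ORM queries with .using(alias). Mutating the connection settings at runtime is a known but not officially documented technique, and the ORM cannot join across two databases, so the cross-database join above would still need a raw query.

from django.db import connections

def register_db(source):
    # 'source' is a record from the AppDB sources table described above.
    alias = 'stream_%s' % source.db_name
    if alias not in connections.databases:
        connections.databases[alias] = {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': source.db_name,
            'USER': 'app_user',      # assumed shared credentials
            'PASSWORD': 'secret',    # placeholder
            'HOST': 'localhost',
            'PORT': '3306',
        }
    return alias

# Usage (Image is a placeholder model mapped to the image table):
# Image.objects.using(register_db(source)).filter(image_type='png')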