I am a little confused with the topic alluded to in the title.
So, when a Flask app is started, does SQLAlchemy look at the SQLALCHEMY_DATABASE_URI to find the correct (in my case, MySQL) database? And does it then create the tables if they do not already exist?
What if the database set in the SQLALCHEMY_DATABASE_URI variable in the config.py file does not exist?
What if that database exists, but only some of the tables do (there are more tables defined in the SQLAlchemy code than exist in the actual MySQL database)? Does it erase those tables and then create new ones to the current specs?
And what if those tables do all exist? Do they get erased and re-created?
I am trying to understand how the entire process works so that I (1) don't lose database information when changes are made to the schema, and (2) can write the necessary code to completely manage how and when SQLAlchemy talks to the actual database.
Tables are not created automatically; you need to call the SQLAlchemy.create_all() method explicitly to have it create tables for you:
db = SQLAlchemy(app)
db.create_all()  # emits CREATE TABLE for every defined model that doesn't exist yet
You can do this with a command-line utility, for example, or, if you deploy to a PaaS such as Google App Engine, via a dedicated admin-only view.
The same applies to dropping tables; use the SQLAlchemy.drop_all() method.
See the Creating and Dropping tables chapter of the documentation, or take a look at the database chapter of the Flask Mega-Tutorial.
You can also delegate this task to Flask-Migrate or similar schema versioning tools. These help you record and edit schema creation and migration steps; the database schema of real-life projects is never static, and you will want to be able to move existing data between versions of the schema. Creating the initial schema is then just the first step.
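For example, a minimal sketch of wiring in Flask-Migrate (the app name, URI, and shell commands below are illustrative placeholders, not from the question):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://user:password@localhost/mydb'

db = SQLAlchemy(app)
migrate = Migrate(app, db)  # registers the `flask db` CLI commands

# Then, from the shell:
#   flask db init      # once, creates the migrations/ directory
#   flask db migrate   # autogenerate a migration from model changes
#   flask db upgrade   # apply pending migrations to the database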
Related
I have a BigQuery table of about 200 rows, and I need to insert, delete, and update values in it through a web interface (the table cannot be migrated to any other relational or non-relational database).
The web application will be deployed on Google Cloud App Engine. The user with admin and owner privileges on BigQuery will be able to create and delete records, and the other users, with view permissions on the BigQuery dataset, will be able to view records only.
I am planning to use Python as the scripting language,
with Django, Flask, or another server framework (not sure which one is better).
The web application should look like a data grid, with create, delete, and view buttons shown or hidden according to the user's role.
I have not done anything like this with Python, BigQuery, and Django. I am already familiar with calling BigQuery from the Python client, but calling it from a web interface, and in a transactional way, is totally new to me.
I am only seeing examples for Django with its built-in models, not with BigQuery.
Can anyone please help me and clarify whether this is possible to implement, and how?
I was able to achieve all of "CRUD" on BigQuery with the help of SQLAlchemy, though I had to make a lot of concessions. When using a SQLAlchemy mapped class I needed a fake primary key, since BigQuery does not use primary keys, and for storing sessions in Django I needed to use file-based sessions. For updates and creates, SQLAlchemy does not allow them without a primary key, so I used the raw SQL part of SQLAlchemy. Thanks to mhawke, who provided the hint for me to carry out this exercise.
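For what it's worth, a rough sketch of that raw-SQL route, assuming the sqlalchemy-bigquery dialect and a made-up dataset and table:

from sqlalchemy import create_engine, text

# 'bigquery://<project>' is the sqlalchemy-bigquery connection URI;
# mydataset.mytable is a hypothetical table.
engine = create_engine('bigquery://my-project')

with engine.begin() as conn:
    conn.execute(
        text('UPDATE mydataset.mytable SET title = :title WHERE event_id = :eid'),
        {'title': 'new title', 'eid': 42},
    )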
No, at most you could achieve the "R" of "CRUD." BigQuery isn't a transactional database; it's for querying vast amounts of data and preparing the results as an immutable view.
It doesn't provide a method to modify the source data directly, and even if it did, you'd need to run the query again. Also important to note is that queries are asynchronous and take much longer to run than in traditional databases.
The only reasonable solution would be to export the table data to GCS and then import it into a normal database for querying. Alternatively, since you can't use another database and said there are only about 200 rows, you could perform your CRUD actions directly on that exported CSV.
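A rough sketch of that export step with the google-cloud-bigquery client (the project, dataset, table, and bucket names are placeholders):

from google.cloud import bigquery

client = bigquery.Client(project='my-project')

# Export the table as CSV to Cloud Storage, then load it elsewhere.
extract_job = client.extract_table(
    'my-project.my_dataset.my_table',
    'gs://my-bucket/my_table.csv',
)
extract_job.result()  # block until the extract job finishes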
I'm currently developing a Python Flask app that will allow users to write out paragraphs and store them in a MySQL database. Are there any Python libraries that will let users have the benefits of version control? Ideally users would be able to track edits so that users can revert to previous versions of the text they've written.
If you're using SQLAlchemy, check out sqlalchemy-continuum.
Features:
Does not store updates which don’t change anything
Supports alembic migrations
Can revert objects data as well as all object relations at given transaction even if the object was deleted
Transactions can be queried afterwards using SQLAlchemy query syntax
Querying for changed records at given transaction
Querying for versions of entity that modified given property
Querying for transactions, at which entities of a given class changed
History models give access to parent objects relations at any given point in time
Or check the Versioning Objects examples in the SQLAlchemy documentation.
I also found the tutorial Database Content Versioning Using SQLAlchemy by googling "sqlalchemy history table"; you may find other solutions the same way.
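To give a feel for it, a minimal SQLAlchemy-Continuum sketch (the Paragraph model is made up to match the question):

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base
from sqlalchemy_continuum import make_versioned

make_versioned(user_cls=None)  # must be called before models are defined

Base = declarative_base()

class Paragraph(Base):
    __tablename__ = 'paragraph'
    __versioned__ = {}  # opts this model in to versioning

    id = sa.Column(sa.Integer, primary_key=True)
    text = sa.Column(sa.UnicodeText)

sa.orm.configure_mappers()  # Continuum builds the history classes here

# After each commit a new version is recorded, e.g.:
#   paragraph.versions[0].text      # the original text
#   paragraph.versions[0].revert()  # restore that version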
I've been working on a Flask app for a while, using SQLAlchemy to access a MySQL database. I've finally started looking into writing tests for this (I'm a strong believer in testing, but am new to Flask, SQLA, and Python for that matter, so I delayed this), and am having a problem getting my structure set up.
My production database isn't using any unusual MySQL features, and in other languages/frameworks I've been able to set up a test framework using an in-memory SQLite database. (For example, I have a Perl app using DBIx::Class to run a SQL Server db but with a test suite built on SQLite.) However, with SQLAlchemy I've needed to declare a few specific MySQL things in my model, and I'm not sure how to get around this. In particular, I use TINYINT and CHAR types for a few columns, and I seem to have to import these from sqlalchemy.dialects.mysql, since these aren't generic types in SQLA. Thus I'll have a class declaration like:
# TINYINT is MySQL-specific, hence the dialect import:
from sqlalchemy.dialects.mysql import TINYINT

class Item(db.Model):
    ...
    size = db.Column(TINYINT, db.ForeignKey('size.size_id'), nullable=False)
So even though, if I were using raw SQL, I could use TINYINT with either SQLite or MySQL and it would work fine, here it's coming from the mysql dialect module.
I don't want to override my entire model class in order to cover seemingly trivial things like this. Is there some other solution? I've read what I could about using different databases for testing and production, but this issue hasn't been mentioned. It would be a lot easier to use an in-memory SQLite db for testing, instead of having to have a MySQL test database available for everything.
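One way to get around this without overriding the model is SQLAlchemy's with_variant(): declare a generic type as the default and emit the MySQL-specific type only on that backend. A sketch, reusing the Item column above:

from sqlalchemy.dialects import mysql

class Item(db.Model):
    ...
    # SmallInteger on SQLite (and anything else), TINYINT on MySQL
    size = db.Column(
        db.SmallInteger().with_variant(mysql.TINYINT(), 'mysql'),
        db.ForeignKey('size.size_id'),
        nullable=False,
    )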
Is there a way to create a table without using Base.metadata.create_all()? I'm looking for something like Mytable.create(), which would create only its corresponding table.
The reason I want to do so is that I'm using Postgres schemas for a multi-tenant web app: I want to create the public tables (User, etc.) separately, and the user-specific tables (each tenant gets its own schema containing, e.g., Blog and Post) when the user signs up. However, all the definitions live in the same file, and it seems that create_all creates every table defined there.
Please read the Creating and Dropping Database Tables section of the documentation.
You can do user_table.create(engine), or if you are using declarative extension: User.__table__.create(engine).
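A short sketch of how that can look for the multi-tenant case described above (the tenant schema name is hypothetical; schema_translate_map points the schema-less models at it):

from sqlalchemy.schema import CreateSchema

# Shared, public table only:
User.__table__.create(engine, checkfirst=True)  # no-op if it already exists

# At signup: create the tenant's schema, then just its tables.
with engine.begin() as conn:
    conn.execute(CreateSchema('tenant_42'))

tenant_engine = engine.execution_options(schema_translate_map={None: 'tenant_42'})
Blog.__table__.create(tenant_engine, checkfirst=True)
Post.__table__.create(tenant_engine, checkfirst=True)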
TL;DR: Besides my default Django database, I need data pulled in from two different user-selected databases. I am not sure how to set up Django to access these besides just running manual queries using connection.cursor().execute("SQL").
Situation:
A process creates a SQLite db. The database is imported into MySQL. I'm writing a Django app that interacts with that MySQL database (call it StreamDB), another MySQL database with additional info the user needs to see (call it SourceDB), and of course the default Django app MySQL DB (call it AppDB).
There will be two versions of SourceDB (prod and test); the imported StreamDB maps to one and only one of these SourceDBs.
I have a table/model in my AppDB that identifies these sources (the StreamDB name, which of the two SourceDBs it maps to, and some other data). Here's a sample record:
name: foo
path: /var/www/data/test/foo.sqlite
db_name: foo
source_db_name: bar
date_imported: 2014-05-03 10:20:30
These are managed through the Django admin and added manually (or dynamically via an external script).
Dilemma
Depending on which source is selected, my SQL needs to join tables from those two DBs. Example query, with dynamic db names in angle brackets:
SELECT a.image_id, a.image_name, b.title, b.begin_time
FROM <selected_streamdb>.image a JOIN <selected_sourcedb>.event b ON b.event_id = a.event_id
WHERE a.image_type = 'png'
Do I fill in <selected_streamdb> and <selected_sourcedb> with variables, perhaps?
Question
Is there any way to use Django's ORM in a situation like this? Can Django grab the DATABASES settings from a DB table? Do I create a model that manages this (i.e., the source table above)?
I don't mind managing db permissions on the backend (assume the app database user in the Django settings has access to all of the databases).
Hope all of the above makes sense.
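Absent a clean ORM answer, a sketch of the manual-query route from the TL;DR (the table and field names follow the example query above; everything else is hypothetical):

from django.db import connection

def fetch_images(source):
    # `source` is a row of the AppDB sources model described above.
    # The db names come from our own admin-managed table, not from
    # user input, but quote them defensively anyway.
    stream_db = connection.ops.quote_name(source.db_name)
    source_db = connection.ops.quote_name(source.source_db_name)
    sql = (
        'SELECT a.image_id, a.image_name, b.title, b.begin_time '
        'FROM {}.image a JOIN {}.event b ON b.event_id = a.event_id '
        'WHERE a.image_type = %s'
    ).format(stream_db, source_db)
    with connection.cursor() as cursor:
        cursor.execute(sql, ['png'])
        return cursor.fetchall()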