I am writing a model for an external Oracle DB, from which I need to pull data into my project. One of the fields in that Oracle DB is an XMLType, and it holds a large amount of data, which needs to be pulled via the .getClobVal() method instead of being queried as-is.
I was wondering if Django supports any sort of custom logic for a model's specific fields. Something along the lines of a getter for each field, which I could override to query that specific field as XMLType.getClobVal() instead of just XMLType. At the same time, I don't want to touch the logic for querying the rest of the model's fields.
Any ideas?
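Roughly, what I am after is something like this: a custom manager that defers the XMLType column and annotates its CLOB value instead. This is only a sketch; payload and the table name are made up, and I am not sure of the exact raw SQL needed to call getClobVal() (it may need XMLSERIALIZE, as used here, or a table alias):

from django.db import models
from django.db.models.expressions import RawSQL

class DocumentManager(models.Manager):
    def get_queryset(self):
        # Leave the raw XMLType column out of the SELECT and expose its
        # CLOB value via raw SQL instead. The exact expression is a guess.
        return (
            super().get_queryset()
            .defer('payload')
            .annotate(
                payload_text=RawSQL(
                    'XMLSERIALIZE(DOCUMENT "PAYLOAD" AS CLOB)',
                    [],
                    output_field=models.TextField(),
                )
            )
        )

class Document(models.Model):
    payload = models.TextField()   # loosely maps the XMLType column
    objects = DocumentManager()

    class Meta:
        managed = False
        db_table = 'EXTERNAL_DOCS'  # made-up external table name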
I am creating a web app with Flask + Flask-WTF that has CRM-like features as a project. My current database (MongoDB) structure is:
Users who can log in,
People who are assigned to users, and
Records that are assigned to people.
People have various fields to be filled out (Name, Phone Number, Email, etc).
I want Users to be able to create custom fields for people. I am trying to plan out how to implement this from a database design perspective. My initial thoughts are to:
For each field created, add a new empty field to each Person assigned to the user.
Use a for-loop to dynamically create the form class by looping through each field-value in my database and excluding non-required ones.
Use a for-loop to dynamically output the web form by looping through each field-value in my database and excluding non-required ones.
Another idea I have is:
For each field created, add the custom field, with a parentRecord equal to the User ID, to a new MongoDB collection.
Use a for-loop to create the form class & web form dynamically, but without needing to exclude non-required fields, since the only fields in the collection would be the custom ones and it wouldn't include any special data points that don't get displayed.
So my questions are:
Will my ideas above work?
Which one is better?
Is there a better way?
I decided to create a customfields MongoDB collection where each document has a parentRecord referencing the User.
I faced a few challenges:
I had to dynamically create a form via Flask-WTF. I ended up using wtforms_dynamic_fields to accomplish this. I then used a for-loop to dynamically generate each form field in Jinja. I simply queried the customfields collection where the parentRecord matched the logged-in User, and then created custom fields based upon the values saved in customfields.
The second issue I faced was getting the data from the submitted form and then building a MongoDB-friendly document to insert when creating a new record. I accomplished this by iterating through request.form.items() and using dict.update() to add all of my required fields to the document.
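For anyone attempting the same thing, here is a stripped-down sketch of both pieces; the collection and key names are made up, and it uses plain WTForms setattr instead of wtforms_dynamic_fields:

from flask import request
from flask_wtf import FlaskForm
from wtforms import StringField
from wtforms.validators import Optional

def build_person_form(custom_fields):
    # Build a form class on the fly, one StringField per saved custom field.
    class PersonForm(FlaskForm):
        pass
    for field in custom_fields:
        # field['name'] is an assumed key holding the label the user chose
        setattr(PersonForm, field['name'],
                StringField(field['name'], validators=[Optional()]))
    return PersonForm()

def save_person(db, user_id):
    # On submit: turn the posted values into a MongoDB-friendly document.
    doc = {'parentRecord': user_id}
    doc.update({k: v for k, v in request.form.items() if k != 'csrf_token'})
    db.people.insert_one(doc)  # "people" is an assumed pymongo collection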
Django stores a history of the modifications for every object, which we can access through the Django admin.
It contains data about when the object was created/modified, the user who performed the action, and the timestamp of the action.
Looking at the database, I can see this data is stored in a default table called django_admin_log.
I am wondering if we can make use of this data in any way through the instance of a model? I am used to manually adding timestamps on every model through an abstract base class, but I am wondering whether that is still useful.
Or does this table only record the modifications taking place in the Django admin panel, which would make the custom timestamps still needed for when model instances are updated outside of it?
The history is only related to actions done in the admin view. To add metadata you can also use model_utils, which offers some other handy functionality as well: https://django-model-utils.readthedocs.io/en/latest/
Let us assume every action were stored in a history table. You would then always have to make a join in the db to get a view where each row also has created and updated information, which is quite some overhead. Therefore, keep it simple and add the timestamps to each model :)
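Those admin entries are exposed through the LogEntry model, so if they are enough for your needs you can query them directly; a small sketch (the helper name is mine):

from django.contrib.admin.models import LogEntry
from django.contrib.contenttypes.models import ContentType

def admin_history(instance):
    # All admin log entries for one object (admin actions only)
    ct = ContentType.objects.get_for_model(instance)
    return LogEntry.objects.filter(
        content_type=ct, object_id=str(instance.pk)
    ).order_by('-action_time')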
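If you add the timestamps, a minimal abstract base class looks like this (Article is just an illustrative model; django-model-utils ships a similar ready-made TimeStampedModel):

from django.db import models

class TimeStampedBase(models.Model):
    # auto_now_add / auto_now are applied on save(), not on queryset.update()
    created = models.DateTimeField(auto_now_add=True)
    modified = models.DateTimeField(auto_now=True)

    class Meta:
        abstract = True

class Article(TimeStampedBase):
    # any model inheriting the base gets both timestamps automatically
    title = models.CharField(max_length=200)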
I am currently using Django framework including its Models mechanism to abstract the database schema declaration and general db access, which is working fine for most scenarios.
However, my application also requires tables to be created and accessed dynamically during runtime, which as far as I can see, is not supported by Django out of the box.
These tables usually have an identical structure and can basically be abstracted by the same Model class, but Django doesn't let you change the underlying db_table of a model at query time, as it is declared on the Model class and not on the Manager.
My solution for this is to do this process whenever I need a new table to be created, populated and accessed:
Create and populate the table using raw sql
Add indexes to the table using raw sql
When I need to access the table (using the Django QuerySet API), I declare a new model type dynamically and return it as the model for the query, using this code:
import logging

from django.apps import apps
from django.db import connection, models

logger = logging.getLogger(__name__)

# inside a helper like this (the function name is illustrative):
def get_dynamic_model(table_name):
    # table_name is the name of the table created via raw SQL
    model_name = '%d_%s' % (connection.tenant.id, table_name)
    try:
        return apps.get_registered_model('myapp', model_name)
    except LookupError:
        pass
    logger.debug("no model exists for model %s, creating one", model_name)

    class Meta:
        db_table = table_name
        managed = False

    attrs = {
        'field1': models.CharField(max_length=200),
        'field2': models.CharField(max_length=200),
        'field3': models.CharField(max_length=200),
        '__module__': 'myapp.models',
        'Meta': Meta,
    }
    return type(str(model_name), (models.Model,), attrs)
Note that I do check whether the model is already registered in Django, and I use the existing model in case it is. The model name is always unique for each table. Since I'm using multiple tenants, the tenant id is also part of the model name to avoid conflicts with similar tables declared on different schemas.
In case it's not clear: the tables created dynamically will and should be persisted permanently for future sessions.
This solution works fine for me so far.
However, the application will need to support a large number of these tables, i.e. 10,000 - 100,000 such tables (and corresponding model classes), with up to a million rows per table.
Assuming the underlying db is fine with this load, my questions are:
Do you see any problem with this solution, with or without regard to the expected scale?
Does anybody have a better solution for this scenario?
Thanks.
There is a wiki page on creating models dynamically, although it has been a while since it was last updated:
DynamicModels Django
There are also a few apps that are designed for this use case, but I don't think any of them is actively maintained:
Django Packages: Dynamic models
I understand that if you are already committed to Django this isn't very helpful, but this is a use case Django isn't really good at. It might be more costly to fight against the abstractions provided by Django's model layer than to just use psycopg2 or whatever other adapter is appropriate for your data.
Depending on what sort of operations you are going to perform on your data, it may also be more reasonable to use a single model with an indexed field that lets you distinguish which table a row would belong to, and then shard the data by that column.
If you still need to do this, the general idea would be (a rough sketch follows the list):
Create a metaclass that extends Django's ModelBase. You would use this metaclass as a factory for your actual models.
Consider stuff mentioned on that wiki page, like circumventing the app_label issue.
Generate and execute the SQL for the creation of the model, as also shown on the wiki page.
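Here is the rough sketch referenced above. It cheats on step 1 by using a plain type() call rather than a full custom metaclass, and the field names and app label are placeholders:

from django.db import connection, models

def create_dynamic_model(table_name, app_label='myapp'):
    # Build the model class at runtime...
    attrs = {
        '__module__': f'{app_label}.models',
        'field1': models.CharField(max_length=200),
        'Meta': type('Meta', (), {'db_table': table_name,
                                  'app_label': app_label}),
    }
    model = type(table_name, (models.Model,), attrs)
    # ...and let the schema editor emit the CREATE TABLE for it.
    with connection.schema_editor() as schema_editor:
        schema_editor.create_model(model)
    return model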
I am beginning to learn Python and Django. I want to know, if I have a simple 'player' class with some properties like name, points, and inventory, how I would make the class also write the values to the database when they are changed. My thinking is that I create Django data models and then call the .save() method within my classes. Is this correct?
You are correct that you call the save() method to save models to your db, but you don't have to define the save method within your model classes if you don't want to. It would be extremely helpful to go through the Django tutorial, which explains it all.
https://docs.djangoproject.com/en/dev/intro/tutorial01/
https://docs.djangoproject.com/en/dev/topics/db/models/
These explain Django models.
Django uses its own ORM (object-relational mapper).
This does exactly what it sounds like: it maps your Django/Python objects (models) to your backend.
It provides a sleek, intuitive, Pythonic, very easy to use interface for creating models (tables in your RDBMS), adding data, and retrieving data.
First you would define your model
from django.db import models

class Player(models.Model):
    points = models.IntegerField()
    name = models.CharField(max_length=255)
Django provides commands for changing this Python object into a table:
python manage.py syncdb
You could also use python manage.py sql <appname> to show the actual SQL that Django generates to turn this object into a table.
Once you have storage for this object, you can create new instances in the same manner you would create Python objects:
new_player = Player(points=100, name='me')
new_player.save()
Calling save() actually writes the object to your backend.
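Retrieving stored objects goes through the model's default manager, for example:

# Look up saved players with the queryset API
top_players = Player.objects.filter(points__gte=100).order_by('-points')
me = Player.objects.get(name='me')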
You're spot on...
Start at https://docs.djangoproject.com/en/dev/intro/tutorial01/
Make sure you have the Python bindings for MySQL and work your way through it... Then if you have specific problems, ask again...
I'm currently working on a model that has already been built, and I need to add some validation management (accessing two fields and checking the data, nothing too dramatic).
I was wondering about the exact difference between models and forms from a validation point of view, and whether I could just write a clean method that raises errors on a model, as I would in a form view.
For my own knowledge, why are those two things separated?
And finally, what would you do? There are already some methods written for the model, and I don't know yet whether I would rewrite it to morph it into a form and simply add the clean() method; I also don't know exactly how they work.
Oh, and everything is in the admin interface; I haven't worked on it much yet, since I started Django not so long ago.
Thanks in advance.
You should use model (field) validation to make sure the data being stored meets your database's requirements. Usually you won't need this, because Django's built-in fields do this for you, so unless you've built a custom field or know what you are doing, you shouldn't change things.
Form validation is where you clean the user's input. You can add a clean method for every form field by adding a clean_FIELD(self) method, e.g.
class ContactForm(forms.Form):
    # Everything as before.
    ...

    def clean_recipients(self):
        data = self.cleaned_data['recipients']
        if "fred@example.com" not in data:
            raise forms.ValidationError("You have forgotten about Fred!")
        # Always return the cleaned data, whether you have changed it or not.
        return data
Before a form's main clean method is run, it checks for a field-level clean method for each of its fields.
Generally, models represent business entities which may be stored in some persistent storage (usually a relational DB). Forms are used to render HTML forms which may retrieve data from users.
Django supports creating forms on the basis of models (using the ModelForm class). Forms may be used to fetch data which should be saved in persistent storage, but that's not the only case: one may use forms just to get data to be searched for in persistent storage or passed to an external service, to feed some application counters, to test web browser engines, to render some text based on data entered by the user (e.g. "Hello USERNAME"), to log a user in, etc.
Calling save() on a model instance should guarantee that the data is saved in persistent storage if and only if it is valid. That provides a consistent mechanism for validating data before it is saved, regardless of whether the business entity is saved after a user clicks a "Save me" button on a web page or by executing the save() method of a model instance in the Django interactive shell.
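To tie this back to the original question: a model can define its own clean() method for cross-field checks, and the admin's ModelForm runs it via full_clean() during validation (plain save() does not call it automatically). A small sketch with made-up field names:

from django.core.exceptions import ValidationError
from django.db import models

class Booking(models.Model):
    start = models.DateField()
    end = models.DateField()

    def clean(self):
        # Cross-field validation; raised errors show up on the admin form
        if self.end < self.start:
            raise ValidationError("End date cannot be before start date.")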