For instance, I have an Author model, which contains two parts: the author's profile and his articles.
class Author:
    # Author profile properties.
    name = ...
    age = ...
    # Author articles properties.
    ...
When designing the model, I have two options:
1) Create a separate Article model and add a list of references to it in the Author model.
2) Define everything in the same model and use a projection query to read just the profile.
So which one is better in terms of read/write/update cost, given that most reads are for the profile? And what if the article properties grow larger and larger in the future?
A single entity, if possible, is the best solution. But what kind of queries do you use? Do you have to query all the articles including their content, or only the article metadata? Because you can put a lot of metadata in a single 1 MB entity.
So the real questions are: what kind of queries do you need, how much metadata do you have, and what do you do with the result set, e.g. showing the user a result page using a cursor?
Maybe you can use the Search API: https://developers.google.com/appengine/docs/python/search/#Putting_Documents_in_an_Index
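If you go with the single-entity option, a projection query that reads only the profile properties could look roughly like the sketch below. This assumes the model is an ndb model with indexed profile properties; the property names come from the question, the rest is illustrative.
from google.appengine.ext import ndb

class Author(ndb.Model):
    # Profile properties (must be indexed to be usable in a projection).
    name = ndb.StringProperty()
    age = ndb.IntegerProperty()
    # Article properties would live here as well in the single-entity variant.

# Read only the profile fields; the article properties are never fetched.
profiles = Author.query().fetch(projection=[Author.name, Author.age])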
Related
I've been trying to build a tutorial system like the ones we usually see on websites, where you click next -> next -> previous etc. to read.
All posts are stored in a table (model) called Post, basically like a pool of post objects.
Post.objects.all() will return all the posts.
Now there's another table (model) called Tutorial that will store the following:
class Tutorial(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    tutorial_heading = models.CharField(max_length=100)
    tutorial_summary = models.CharField(max_length=300)
    series = models.CharField(max_length=40)  # <---- Here [10,11,12]
    ...
Here the entries in this series field are post IDs stored as a string representation of a list.
Example: series will contain "[10,11,12]", where 10, 11 and 12 are post_id values that correspond to their respective entries in the Post table.
So my table entry for the Tutorial model looks like this:
id heading summary series
"5" "Series 3 Tutorial" "lorem on ullt consequat." "[12, 13, 14]"
So I just read the series field, get all the Posts whose IDs are in this list, and then display them using pagination in Django.
Now, I've read in several Stack Overflow posts that having multiple entries in a single field is a bad idea, and that spanning this relationship over multiple tables as a mapping is a better option.
What I want is the ability to insert new posts into this series anywhere I want, maybe at the front or in the middle. This can be easily accomplished by treating this series as a list and inserting as I please. Altering "[14,12,13]" will reorder the posts that are being displayed.
My question is: is this way of storing multiple values in a field okay for my use case, or will it take a performance hit, or is it generally a bad idea? If so, is there a way I can preserve or alter the order while spanning the relationship across another table, or is there an entirely better way to accomplish this in Django or MySQL?
Here the entries in this series field are post IDs stored as a string representation of a list.
(...)
So I just read the series field, get all the Posts whose IDs are in this list, and then display them using pagination in Django.
DON'T DO THIS !!!
You are working with a relational database. There is one proper way to model relationships between entities in a relational database: foreign keys. In your case, depending on whether a post can belong to only a single tutorial (a "one to many" relationship) or to many tutorials at the same time (a "many to many" relationship), you'll want either to add to Post a foreign key on Tutorial, or to use an intermediate "post_tutorials" table with foreign keys on both Post and Tutorial.
Your solution doesn't allow the database to do its job properly. It cannot enforce integrity constraints (what if you delete a post that's referenced by a tutorial?), it cannot optimize read access (with a proper schema the database can retrieve a tutorial and all its posts in a single query), it cannot follow reverse relationships (given a post, access the tutorial(s) it belongs to), etc. And it requires an external program (Python code) to interact with your data, while with proper modeling you just need standard SQL.
Finally - but this is Django-specific - using a proper schema works better with the admin features, and with Django REST framework if you intend to build a REST API.
As for the ordering problem, it's a long-known (and solved) issue: you just need to add an "order" field (a small int should be enough). There are a couple of third-party Django apps that add support for this to both your models and the admin, so it's almost plug and play.
IOW, there is absolutely no good reason to denormalize your schema this way, and only good reasons to use proper relational modeling. FWIW I once had to work on a project based on some obscure (and hopefully long dead) PHP CMS that had the brilliant idea to use your "serialized lists" anti-pattern, and I can tell you it was both a disaster performance-wise and a complete nightmare to maintain. So do yourself and the world a favour: don't try to be creative, follow well-known and established best practices instead, and your life will be much happier. My 2 cents...
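To illustrate, a minimal sketch of the intermediate-table variant with an "order" field could look like this; it assumes a post may belong to several tutorials, and the PostTutorial model and its fields are made up for the example, not taken from the question:
from django.contrib.auth.models import User
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=100)  # illustrative field

class Tutorial(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    tutorial_heading = models.CharField(max_length=100)
    tutorial_summary = models.CharField(max_length=300)
    posts = models.ManyToManyField(Post, through='PostTutorial', related_name='tutorials')

class PostTutorial(models.Model):
    post = models.ForeignKey(Post, on_delete=models.CASCADE)
    tutorial = models.ForeignKey(Tutorial, on_delete=models.CASCADE)
    order = models.PositiveSmallIntegerField()  # position of the post within its tutorial

    class Meta:
        unique_together = ('tutorial', 'order')
        ordering = ('order',)

# All posts of a tutorial, in order, with a single query:
# [pt.post for pt in PostTutorial.objects.filter(tutorial=tutorial).select_related('post')]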
I can think of two approaches:
Approach One: Linked List
One way is to use a linked list, like this:
class Tutorial(models.Model):
    ...
    previous = models.OneToOneField('self', on_delete=models.SET_NULL, null=True, blank=True, related_name="next")
In this approach, each tutorial points to the previous one in the series, and you can walk a whole series like this:
for tutorial in Tutorial.objects.filter(previous__isnull=True):
    # Each tutorial without a "previous" starts a series; follow the chain forward.
    print(tutorial)
    while hasattr(tutorial, 'next'):
        tutorial = tutorial.next
        print(tutorial)
This is a somewhat complicated approach; for example, whenever you want to add a new tutorial in the middle of the linked list, you need to make changes in two places. Like:
post = Tutorial.objects.first()
next_post = post.next if hasattr(post, 'next') else None
new = Tutorial.objects.create(..., previous=post)  # the new tutorial points back at post
if next_post is not None:
    # Re-link the old successor to the newly inserted tutorial.
    next_post.previous = new
    next_post.save()
But there is a huge benefit to this approach: you don't have to create a new table for series. Also, chances are the order of tutorials will not be modified frequently, which means you don't need to go through this hassle very often.
Approach Two: Create a new Model
You can simply create a new model and add an FK to it on Tutorial, like this:
class Series(models.Model):
    name = models.CharField(max_length=255)

class Tutorial(models.Model):
    ...
    series = models.ForeignKey(Series, on_delete=models.SET_NULL, null=True, blank=True, related_name='tutorials')
    order = models.IntegerField(default=0)

    class Meta:
        unique_together = ('series', 'order')  # makes sure a duplicate order within the same series cannot happen
Then you can access the tutorials in a series by:
series = Series.objects.first()
series.tutorials.order_by('order')
The advantage of this approach is that it's much more flexible for accessing tutorials through a series, but an extra table is created for it, and an extra field as well to maintain the order.
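If you also need the "insert anywhere in the series" behaviour from the question with this second approach, one possible way (an illustration, not part of the original answer; the helper name is made up) is to shift the order values of the following tutorials before saving the new one:
from django.db.models import F

def insert_into_series(series, new_tutorial, position):
    # Shift every tutorial at or after the target position one slot up,
    # starting from the end so the unique (series, order) constraint is
    # never violated while the updates run.
    for tutorial in series.tutorials.filter(order__gte=position).order_by('-order'):
        tutorial.order = F('order') + 1
        tutorial.save(update_fields=['order'])
    new_tutorial.series = series
    new_tutorial.order = position
    new_tutorial.save(update_fields=['series', 'order'])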
I want to make a model where I have the option of pulling a group of items and their descriptions from a Postgres database based on tags. What is the most efficient way of doing this, performance-wise, using Django 1.9 and Postgres 9.5, on data that I do not really modify often?
I found multiple ways of doing this:
Toxi solution
Where three tables are made: one storing the items and their descriptions, a second storing the tags, and a third associating tags with items. See here:
What is the most efficient way to store tags in a database?
Using Array fields
Where there is one table with items, descriptions, and an array field that stores tags
Using JSON fields
Where there is one table with items, descriptions, and a JSON field that stores tags. The Django docs mention this uses JSONB: https://docs.djangoproject.com/en/1.11/ref/contrib/postgres/fields/
I am guessing that using JSON fields like the following:
from django.contrib.postgres.fields import JSONField
from django.db import models

class Item(models.Model):
    name = models.CharField(max_length=255)  # CharField needs a max_length; 255 here is an assumption
    description = models.TextField(blank=True, null=True)
    tags = JSONField()
is the most efficient for reading data but I am not really sure.
None of these is the "most efficient"; it really depends on what you're focused on.
The first is the classical relational approach, which provides consistency and normalisation. It also allows you to retrieve complex information (i.e. give me all items with a tag T) pretty easily (but usually at the cost of joins). I think this should be your go-to solution if you're using an RDBMS.
The second and third (they're mostly the same in your case) are easier for simple storing and retrieving (i.e. give me all tags of an item I), but lead to more work to keep your data consistent and to perform complex queries.
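As an illustration of the first option in Django (the model and field names here are assumptions, not from the question), a ManyToManyField gives you the three-table "Toxi" layout for free:
from django.db import models

class Tag(models.Model):
    name = models.CharField(max_length=50, unique=True)

class Item(models.Model):
    name = models.CharField(max_length=255)
    description = models.TextField(blank=True, null=True)
    # Django creates the third (item <-> tag) association table automatically.
    tags = models.ManyToManyField(Tag, related_name='items')

# "Give me all items with tag T":
# Item.objects.filter(tags__name='T')
# "Give me all tags of item I":
# item.tags.all()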
I want to have several "bundles" (Mjbundle), which are essentially bundles of questions (Mjquestion). The Mjquestion has an integer "index" property which needs to be unique, but it should only be unique within the bundle containing it. I'm not sure how to model something like this properly; I try to do it using a structured (repeating) property below, but nothing yet actually constrains the uniqueness of the Mjquestion indexes. What is a better/normal/correct way of doing this?
class Mjquestion(ndb.Model):
    """This is a Mjquestion."""
    index = ndb.IntegerProperty(indexed=True, required=True)
    genre1 = ndb.IntegerProperty(indexed=False, required=True, choices=[1,2,3,4,5,6,7])
    genre2 = ndb.IntegerProperty(indexed=False, required=True, choices=[1,2,3])
    # (will add a bunch of more data properties later)

class Mjbundle(ndb.Model):
    """This is a Mjbundle."""
    mjquestions = ndb.StructuredProperty(Mjquestion, repeated=True)
    time = ndb.DateTimeProperty(auto_now_add=True)
(With the above model, having fetched a certain Mjbundle entity, I am not sure how to quickly fetch a Mjquestion from mjquestions based on the index. The explanation of filtering on structured properties looks like it works at the Mjbundle type level, whereas I already have a Mjbundle entity and am not sure how to query only the questions contained by that entity without looping through them all "manually" in code.)
So I'm open to any suggestion on how to do this better.
I read this informational answer: https://stackoverflow.com/a/3855751/129202 It gives some thoughts about scalability; on a related note, I will be expecting just a couple of bundles, but each bundle will have questions in the thousands.
Maybe I should not use the mjquestions property of Mjbundle at all, but rather focus on parenting: each Mjquestion created would have a certain Mjbundle entity as its parent. I could then "manually" enforce uniqueness at insert time by doing an ancestor query.
When you use a StructuredProperty, all of the entities of that type are stored as part of the containing entity - so when you fetch your bundle, you have already fetched all of the questions. If you stick with this way of storing things, iterating and checking in code is the solution.
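If you switch to the parenting idea from the question instead, a rough sketch of enforcing per-bundle uniqueness at insert time could look like this (this is only an illustration of that idea; the helper name is made up):
from google.appengine.ext import ndb

class Mjquestion(ndb.Model):
    index = ndb.IntegerProperty(required=True)
    # ... other data properties ...

@ndb.transactional
def add_question(bundle_key, index):
    # Inside a transaction only ancestor queries are allowed, which is exactly
    # what we need: check the bundle's existing questions first.
    existing = Mjquestion.query(Mjquestion.index == index,
                                ancestor=bundle_key).get(keys_only=True)
    if existing is not None:
        raise ValueError('index %d is already used in this bundle' % index)
    return Mjquestion(parent=bundle_key, index=index).put()

# Alternatively, using the index as the entity id under the bundle parent makes
# the datastore itself guarantee uniqueness:
# Mjquestion(parent=bundle_key, id=index, index=index).put()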
We want to reduce the number of identical object instances in one Python interpreter.
Example:
class Blog(models.Model):
    author = models.ForeignKey(User, on_delete=models.CASCADE)
If we iterate over a thousand blogs, the same author objects (same id, but different Python objects) get created several times.
Is there a way to make the Django ORM reuse the already created user instances?
Example:
for blog in Blog.objects.all():
    print(blog.author.username)
If author "foo-writer" has 100 blogs, there are 100 author objects in memory. That's what we want to avoid.
I think solutions like memcached/Redis won't help here, since we want to optimize the Python process itself.
I'm not sure if it's database calls or memory usage you're concerned about here.
If the former, then using select_related will help you:
Blog.objects.all().select_related('author')
which will get all the blogs and their associated authors.
If you want to optimize memory, then the best way to do it is to get the relevant author objects manually in one go, store them in a dict, then manually annotate that object on each blog:
blogs = Blog.objects.all()
author_ids = set(b.author_id for b in blogs)
# The FK in the question points to User, so fetch each distinct user once, keyed by pk.
authors = User.objects.in_bulk(list(author_ids))
for blog in blogs:
    blog._author = authors[blog.author_id]
Let's say you have an entity like this:
class Comment(db.Model):
    postid = db.StringProperty()
    comment = db.StringProperty()
for storing comments on a certain post identified by post id.
The comments can hit billions of records. Now if you want to get all comments belonging to a certain post, you can do:
query = Comment.all()
query.filter('postid =', post_id)  # post_id identifies the post
Or instead of doing that you can define Post like:
class Post(db.Model):
    commentids = db.StringListProperty()  # store list of comment ids
This way you can directly get a comment by doing:
comment = Comment.get_by_key_name('commentkey')
In the long run (when comments hit the millions or even billions mark), which one is more efficient? In other words, which one is more appropriate?
If you're planning to have billions of comments consider also using the newest NDB API, which, among other things, supports automatic caching.
Instead of filtering them by postid you should probably use a parent for your Comment entity. Here is an example (using DB, but it's very similar using NDB):
If you have a model like this one:
class Post(db.Model):
    desc = db.StringProperty()

class Comment(db.Model):
    desc = db.TextProperty()
You can create posts and comments like:
post_db = Post(desc='Hello World')
post_db.put()
comment_db = Comment(parent=post_db, desc='Nice post')
comment_db.put()
And finally if you want to get all the comments from a particular post_db entity:
comment_dbs = Comment.all().ancestor(post_db)
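Since the NDB version is very similar, a rough NDB equivalent could look like this (my sketch, not part of the original answer):
from google.appengine.ext import ndb

class Post(ndb.Model):
    desc = ndb.StringProperty()

class Comment(ndb.Model):
    desc = ndb.TextProperty()

post_key = Post(desc='Hello World').put()
Comment(parent=post_key, desc='Nice post').put()

# All comments of that post, via an ancestor query:
comment_dbs = Comment.query(ancestor=post_key).fetch()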
Entities are limited to 1 MB in size. Also, an entity can have at most 5000 index entries, so if your commentids property is indexed then the maximum size would be 5000 entries.
So option two would not be a good fit for millions of comments (I've never seen a site with a million comments per post, but Reddit does get over 5k for popular posts).
Also, you'd probably need a way to list comments progressively (paging, progressive scrolling). In this case option one, via a query, would be better, as you could list comments progressively via cursors and also sort by different properties (time, votes, etc.).
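For example, a cursor-based page of comments could be fetched roughly like this (a sketch assuming the NDB models above plus a created timestamp property; none of this is from the original answer):
from google.appengine.datastore.datastore_query import Cursor

def comment_page(post_key, cursor_token=None, page_size=20):
    # cursor_token is the urlsafe cursor returned with the previous page, if any.
    start_cursor = Cursor(urlsafe=cursor_token) if cursor_token else None
    comments, next_cursor, more = (Comment.query(ancestor=post_key)
                                          .order(-Comment.created)
                                          .fetch_page(page_size, start_cursor=start_cursor))
    next_token = next_cursor.urlsafe() if (more and next_cursor) else None
    return comments, next_token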