Distance-based ordering in Django (Python)

I am using Django and capture each user's location at registration time.
I then show these users on the front page of the app, sorted by distance: the ones closest to the logged-in user come first, and so on.
Currently I order them by distance on the backend using annotate() and the Distance function provided by the Django ORM:
sortedQueryset = self.get_queryset().annotate(
    distance=Distance('coords', user.coords, spheroid=True)
).order_by('distance')
Here 'coords' is the database column storing the point (location), and user.coords is the point (coordinates) of the logged-in user.
Now, to get only the first 100 users (say) from the database, I can do something like this:
sortedQueryset = self.get_queryset().annotate(
    distance=Distance('coords', user.coords, spheroid=True)
).order_by('distance')[:100]
But as I understand it, this still grabs all the rows, orders them by distance, and then takes 100 of them. Say we have a million users in the database; it would always have to fetch all of them, sort them, and only then take 100.
That seems like a lot of overhead (maybe I am wrong, or maybe this is the only way, since the sort depends on the logged-in user: who is closest and who is farthest).
Any suggestions are appreciated. Thanks!

What you have done is actually correct. The slice is not performed in Python but is pushed into the database query itself, so it does not fetch all the results and slice them; instead it runs a LIMIT query against the database. See the documentation:
https://docs.djangoproject.com/en/dev/topics/db/queries/#limiting-querysets
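You can confirm this in Django itself with str(queryset.query), which shows the LIMIT in the generated SQL. Conceptually, the database does something like the following; this is a standalone sqlite3 sketch with a toy table standing in for the users table and a 'dist' column standing in for the annotated distance (not the Django ORM itself):

```python
import sqlite3

# Toy table standing in for the users table; 'dist' stands in for
# the annotated distance column.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (name TEXT, dist REAL)')
conn.executemany('INSERT INTO users VALUES (?, ?)',
                 [('a', 5.0), ('b', 1.0), ('c', 3.0), ('d', 2.0)])

# ORDER BY ... LIMIT runs inside the database engine: only the two
# closest rows ever cross the connection, mirroring queryset[:2].
rows = conn.execute(
    'SELECT name FROM users ORDER BY dist LIMIT 2').fetchall()
print([r[0] for r in rows])  # ['b', 'd']
```

The sort itself still touches every row inside the database, but with a spatial index (e.g. GiST on PostGIS) the engine can often avoid a full scan as well.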

Related

Django ORM: Filter results by values from list, limit answers per value?

I'm using Django 2.0 and have a Content model with a ForeignKey(User, ...). I also have a list of user IDs for which I'd like to fetch that Content, ordered by "newest first", but only up to 25 elements per user. I know I can do this:
Content.objects.filter(user_id__in=[1, 2, 3, ...]).order_by('-id')
...to fetch all the Content objects created by each of these users, plus I'll get it all sorted with newest elements first. But I'd like to fetch up to 25 elements for each of these users (some users might create hundreds of these objects, some might create zero). There's of course the dumb way:
for user in [1, 2, 3, ...]:
    Content.objects.filter(user_id=user).order_by('-id')[:25]
This however hits the database as many times as there are IDs in the list, and that goes quite high (around 100 or so per page view). Is there any way to optimize this case? (I've tried looking at select_related, but that seems to fetch as many related models as possible.)
There are plenty of ways to form a greatest-n-per-group query, but in this case you could form a union of top-n queries of all users:
contents = Content.objects.none().union(
    *[Content.objects.filter(user_id=uid).order_by('-id')[:25]
      for uid in user_ids],
    all=True,
)
Using prefetch_related() you could then produce a queryset that fetches the users and injects an attribute of latest content:
users = User.objects.filter(id__in=user_ids).prefetch_related(
    models.Prefetch(
        'content_set',
        queryset=contents,
        to_attr='latest_content',
    )
)
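The union queryset compiles to a single statement built from per-user top-n subqueries. To illustrate the shape of that SQL, here is a standalone sqlite3 sketch with a toy content table (the user IDs and the per-user limit of 2 are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE content (id INTEGER, user_id INTEGER)')
# Ten rows split between users 1 and 2.
conn.executemany('INSERT INTO content VALUES (?, ?)',
                 [(i, i % 2 + 1) for i in range(1, 11)])

# One UNION ALL of per-user top-n subqueries -- a single round trip,
# like Content.objects.none().union(..., all=True) with [:2] slices.
sql = ' UNION ALL '.join(
    'SELECT * FROM (SELECT id, user_id FROM content '
    'WHERE user_id = ? ORDER BY id DESC LIMIT 2)'
    for _ in (1, 2))
rows = conn.execute(sql, (1, 2)).fetchall()
print(sorted(rows))  # [(7, 2), (8, 1), (9, 2), (10, 1)]
```

Each subquery is wrapped in SELECT * FROM (...) because most engines do not allow ORDER BY/LIMIT directly on the operands of a compound select, which is also why the ORM applies the slice before the union.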
Does it actually hit the database that many times? I have not looked at the raw SQL, but according to the documentation slicing is equivalent to the LIMIT clause, and it also states: "Generally, slicing a QuerySet returns a new QuerySet – it doesn’t evaluate the query".
https://docs.djangoproject.com/en/2.0/topics/db/queries/#limiting-querysets
I would be curious to see the raw SQL if you are looking at it and it does NOT do this, as I use this paradigm.

Keep form result in memory

The image above gives an example of what I hope to achieve with flask.
For now I have a list of tuples such as [(B,Q), (A,B,C), (T,R,E,P), (M,N)].
The list can be any length, as can the tuples. When I submit or pass my form, I receive the data on the server side, all good.
However, I am now asked to remember the state of previously submitted and passed forms in order to go back to them and eventually modify the information.
What would be the best way to remember the state of the forms?
Python dictionary with the key being the form number as displayed at the bottom (1 to 4)
Store the result in an SQL table and query it every time I need to access a form again
Other ideas?
Notes: The raw data should be kept for at most one day; however, the data are to be processed to generate meaningful information to be stored permanently. Hence, if a modification is made to a form, the final database should reflect it.
This will very much depend on how the application is built.
One option is simply to return all the answers posted so far with each request, but that won't work well if you have a lot of data.
You say the data needs to stay accessible for a day, so it seems reasonable to store it in a database. Select queries on an indexed key are cheap in most cases.
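A minimal sketch of the database option, using sqlite3 and a made-up schema keyed by the form number shown at the bottom of the page (INSERT OR REPLACE so a resubmitted form overwrites its previous state):

```python
import json
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE form_state (form_no INTEGER PRIMARY KEY, data TEXT)')

def save_form(form_no, values):
    # Serialize the submitted values; re-saving the same form number
    # replaces the earlier submission.
    conn.execute('INSERT OR REPLACE INTO form_state VALUES (?, ?)',
                 (form_no, json.dumps(values)))

def load_form(form_no):
    row = conn.execute('SELECT data FROM form_state WHERE form_no = ?',
                       (form_no,)).fetchone()
    return json.loads(row[0]) if row else None

save_form(1, ['B', 'Q'])
save_form(1, ['B', 'Q', 'X'])  # user went back and modified form 1
print(load_form(1))  # ['B', 'Q', 'X']
```

A daily cleanup job could then delete rows older than a day after the permanent, processed records have been written.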

MongoEngine: Limiting number of responses from DBRef

I have a document with around 7k DBRefs in one field pointing to other objects. I want to limit the number of objects coming back when I query the DBRef field, but I cannot find an obvious way of doing it.
project = Project.objects.get(id=1)
users = project.users[:10]
On line 2 MongoEngine performs a query to retrieve ALL the users not just the first 10. What can I do to limit the query to only retrieve the first 10?
users = project.users[:10]
This slice is a client-side operation, performed on the users array after MongoDB has already returned all 7k DBRef values.
What can I do to limit the query to only retrieve the first 10?
You need to include a projection operation to just select the first 10 elements in the users array.
Project.objects.find({"id": 1},{"users":{"$slice":10}})
The syntax in MongoEngine:
Project.objects(id=1).fields(slice__users=10)
If I understand you correctly, there is no way to return a portion of one field. You can pick and choose what fields you are returning, but there is no way to specify a portion of one field.

Solr & User data

Let's assume I am developing a service that provides a user with articles. Users can favourite articles and I am using Solr to store these articles for search purposes.
However, when the user adds an article to their favourites list, I would like to be able to figure out which articles the user has added to favourites so that I can highlight the favourite button.
I am thinking of two approaches:
Fetch articles from Solr and then loop through each article to fetch the "favourite-status" of this article for this specific user from MySQL.
Whenever a user favourites an article, add this user's ID to a multi-valued column in Solr and check whether the ID of the current user is in this column or not.
I don't know the capacity of a multi-valued column, and I also don't think the second approach would be good practice (storing user-related data in the index).
What other options do I have, if any? Is approach 2 a correct approach?
I'd go with a modified version of the first one: it keeps user-specific data that isn't used for search out of the index (although if you foresee a case where you want to search for favourited articles, it could be an interesting field to have in the index). For pure display purposes like this, I'd take all the IDs returned from Solr, fetch them in one SQL statement from the database, and set the UI values based on that. It's a fast and easy solution.
If you foresee "search only in my favourited articles" as a use case, I would try to get that information into the index as well (or other filters based on whether a specific user has favourited the article). Even then, I'd avoid indexing anything more than the ID of each user who favourited the article.
Both solutions would work, although the latter requires more code, and the response from Solr could grow large if many users favourite an article, so I'd try to avoid returning a set of user IDs in that case (many favourites for a single article).
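The first approach boils down to one IN query per result page. Here is a sketch using sqlite3 with a hypothetical favourites table; the article IDs stand in for what the Solr response would return:

```python
import sqlite3

# Hypothetical favourites table mapping users to articles.
conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE favourites (user_id INTEGER, article_id INTEGER)')
conn.executemany('INSERT INTO favourites VALUES (?, ?)',
                 [(1, 10), (1, 30), (2, 20)])

solr_article_ids = [10, 20, 30, 40]  # IDs from the Solr result page
user_id = 1

# One IN query for the whole result page instead of a query per article.
placeholders = ','.join('?' for _ in solr_article_ids)
favd = {row[0] for row in conn.execute(
    'SELECT article_id FROM favourites WHERE user_id = ? '
    'AND article_id IN (%s)' % placeholders,
    [user_id] + solr_article_ids)}

# Highlight the favourite button where the article ID is in the set.
for aid in solr_article_ids:
    print(aid, aid in favd)
```

This keeps the per-page database cost constant (one query) regardless of how many articles Solr returns.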

How to get the distinct value of one of my models in Google App Engine

I have a model, below, and I would like to get all the distinct area values. The SQL equivalent is SELECT DISTINCT area FROM tutorials.
class Tutorials(db.Model):
    path = db.StringProperty()
    area = db.StringProperty()
    sub_area = db.StringProperty()
    title = db.StringProperty()
    content = db.BlobProperty()
    rating = db.RatingProperty()
    publishedDate = db.DateTimeProperty()
    published = db.BooleanProperty()
I know that in Python I can do
>>> a = ['google.com', 'livejournal.com', 'livejournal.com', 'google.com', 'stackoverflow.com']
>>> b = set(a)
>>> b
set(['livejournal.com', 'google.com', 'stackoverflow.com'])
But that would require me to move the area items out of the query into another list and then run set() against the list (which sounds very inefficient), and if a distinct item sits at position 1001 in the datastore I wouldn't see it because of the fetch limit of 1000.
I would like to get all the distinct values of area in my datastore to dump it to the screen as links.
Datastore cannot do this for you in a single query. A datastore request always returns a consecutive block of results from an index, and an index always consists of all the entities of a given type, sorted according to whatever orders are specified. There's no way for the query to skip items just because one field has duplicate values.
One option is to restructure your data. For example, introduce a new entity type representing an "area". When adding a Tutorial, create the corresponding "area" if it doesn't already exist, and when deleting a Tutorial, delete the corresponding "area" if no Tutorials remain with the same "area". If each area stored a count of the Tutorials in it, this might not be too onerous (although keeping things consistent with transactions etc. would be quite fiddly). The entity's key could be based on the area string itself, meaning you can always do key lookups rather than queries to get area entities.
Another option is to use a queued task or cron job to periodically create a list of all areas, accumulating it over multiple requests if need be, and put the results either in the datastore or in memcache. That would of course mean the list of areas might be temporarily out of date at times (or if there are constant changes, it might never be entirely in date), which may or may not be acceptable to you.
Finally, if there are likely to be very few areas compared with tutorials, you could do it on the fly by requesting the first Tutorial (sorted by area), then requesting the first Tutorial whose area is greater than the area of the first, and so on. But this requires one request per distinct area, so is unlikely to be fast.
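The last option can be sketched in plain Python. Here a sorted list stands in for the area-ordered index (toy data, not real datastore entities); in the real datastore each loop step would be a query like Tutorials.all().filter('area >', last).order('area').get():

```python
# Simulate the "first entity with area greater than the last seen"
# loop over an index sorted by area.
tutorials = sorted(['django', 'django', 'gae', 'gae', 'gae', 'python'])

distinct_areas = []
last = None
while True:
    # Find the first area strictly greater than the last one we saw;
    # this is what each datastore query would return.
    candidates = [a for a in tutorials if last is None or a > last]
    if not candidates:
        break
    last = candidates[0]
    distinct_areas.append(last)

print(distinct_areas)  # ['django', 'gae', 'python']
```

Each iteration corresponds to one datastore round trip, which is why this only pays off when there are far fewer areas than tutorials.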
The DISTINCT keyword was introduced in release 1.7.4.
This has been asked before, and the conclusion was that using sets is fine.
