I am working to learn Django and have built a test database to work with. I have a table that provides basic vendor invoice information, and I want to simply present a user with the total value of the invoices that have been loaded into the database. I found that the following queryset does sum the column as I'd hoped:
total_outstanding: object = Invoice.objects.aggregate(Sum('invoice_amount'))
but the result is presented on the page in the following unhelpful way:
Total $ Outstanding: {'invoice_amount__sum': Decimal('1965')}
The 1965 is the correct total for the invoices I populated the database with, so the queryset is pulling what I want it to; I just want to present that portion of the result to the user, without the other stuff.
Someone else asked a similar question (basically the same) here: how-to-extract-data-from-django-queryset, but the answer makes no sense to me; it is just:
k = k[0] = {'name': 'John'}
Queryset is list .
Can anyone help me with a plain-English explanation of how I can extract just the numerical result of that query for presentation to a user?
What you get here is a dictionary that maps the name of the aggregate to the corresponding value. You can use subscripting to obtain that value:
object = Invoice.objects.aggregate(
    Sum('invoice_amount')
)['invoice_amount__sum']
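Note that if the table is empty the sum comes back as None, so it can help to guard against that. A small usage sketch (the total= alias is just a nicer name for the key, and falling back to 0 is an assumption about what you want to display):

result = Invoice.objects.aggregate(total=Sum('invoice_amount'))
total_outstanding = result['total'] or 0  # Sum yields None when there are no rows

You can then pass total_outstanding to your template and render it directly.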
Related
I am trying to prevent inserting duplicate documents by the following approach:
Get a list of all documents from the desired endpoint, which will contain all the documents in JSON format. This list is called available_docs.
Use a pre_POST_<endpoint> hook in order to handle the request before inserting into the database. I am not using the on_insert hook since I need to do this before validation.
Since we can access the request object, use request.json to get the JSON-formatted payload.
Check if request.json is already contained in available_docs.
Insert the new document only if it is not a duplicate; abort otherwise.
Using this approach I got the following snippet:
import flask

def check_duplicate(request):
    if request.json not in available_docs:
        print('Not a duplicate')
    else:
        print('Duplicate')
        flask.abort(422, description='Document is a duplicate and already in database.')
The available_docs list looks like this:
available_docs = [{'foo': ObjectId('565e12c58b724d7884cd02bb'), 'bar': [ObjectId('565e12c58b724d7884cd02b9'), ObjectId('565e12c58b724d7884cd02ba')]}]
The payload request.json looks like this:
{'foo': '565e12c58b724d7884cd02bb', 'bar': ['565e12c58b724d7884cd02b9', '565e12c58b724d7884cd02ba']}
As you can see, the only difference between the document that was passed to the API and the document already stored in the DB is the datatype of the IDs. Because of that, the if statement in my snippet above evaluates to True and treats the document to be inserted as not being a duplicate, whereas it definitely is one.
Is there a way to check whether a passed document is already in the database? I am not able to use unique fields, since only the combination of all document fields needs to be unique. There is a unique identifier (which I left out in this example), but it is not suitable for the desired comparison since it is essentially a time stamp.
I think something like casting the given IDs at the keys foo and bar to ObjectIds would do the trick, but I do not know how to do this since I do not know where to get the ObjectId datatype from.
Your approach would be much slower than setting a unique rule for the field.
Since, from your example, you are going to compare objectids, can't you simply use those as the _id field for the collection? In Mongo (and Eve of course) that field is unique by default. Actually, you typically don't even define it. You would not need to do anything at all, as a POST of a document with an already existing id would fail right away.
If you can't go that way (maybe you need to compare a different objectid field and still, for some reason, you can't simply set a unique rule for it), I would look at querying the db for the field value rather than getting all the documents from the db and scanning them sequentially in code. Something like db.find({db_field: new_document_field_value}). If that returns a document, the new document is a duplicate. Make sure db_field is indexed (which usually holds true anyway for fields tagged with the unique rule).
EDIT after the comments. A trivial implementation would probably be something like this:
def pre_POST_callback(resource, request):
    # retrieve mongodb collection using eve connection
    docs = app.data.driver.db['docs']
    if docs.find_one({'foo': <value>}):
        flask.abort(422, description='Document is a duplicate and already in database.')

app = Eve()
app.run()
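As for the part of the question about where ObjectId comes from: it can be imported from the bson package that PyMongo installs. A rough, untested sketch of casting the incoming string ids before the lookup (the hook and collection names mirror the example above and are otherwise assumptions):

from bson import ObjectId
import flask

def pre_POST_docs_callback(resource, request):
    payload = request.json
    docs = app.data.driver.db['docs']
    # cast the payload's string ids to ObjectId so they compare equal
    # to what MongoDB actually stored
    lookup = {
        'foo': ObjectId(payload['foo']),
        'bar': [ObjectId(x) for x in payload['bar']],
    }
    if docs.find_one(lookup):
        flask.abort(422, description='Document is a duplicate and already in database.')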
Here's my approach on preventing duplicate records:
def on_insert_subscription(items):
    c_subscription = app.data.driver.db['subscription']
    user = decode_token()
    if user:
        for item in items:
            if c_subscription.find_one({
                'topic': ObjectId(item['topic']),
                'client': ObjectId(user['user_id'])
            }):
                abort(422, description="Client already subscribed to this topic")
            else:
                item['client'] = ObjectId(user['user_id'])
    else:
        abort(401, description='Please provide proper credentials')
What I'm doing here is creating subscriptions for clients. If a client is already subscribed to a topic I throw 422.
Note: the client ID is decoded from the JWT token.
I have a Post model that has a datetime.date field, posted_day. I need to find how many posts are made by a user on each day. Creating a query for each day seems silly.
I couldn't figure out how to make the aggregation API query using annotate.
Is
Post.objects.filter(author=someuser).annotate(dailycount=Count('posted_day'))
the correct way? Then, how do I use this to get the number of posts for a particular day?
My model is:
class Post(models.Model):
    posted_day = models.DateField(default=date.today)
    author = models.ForeignKey(User, null=True)
You are almost there. You need two additional clauses:
day_counts = Post.objects.filter(author=someuser).values('posted_day').annotate(
    dailycount=Count('posted_day')).order_by()
The values('posted_day') enables the grouping, and the empty order_by() clears any default ordering so it doesn't interfere with the grouping.
The clearest documentation of this seems to be in the Django Aggregation docs section on the Order of annotate() and values() clauses.
values returns a list of dicts like:
[{'posted_day': 'the-first-day', 'dailycount': 2},
 ...,
 {'posted_day': 'the-last-day', 'dailycount': 3}]
so if you wanted the last day the user posted, it would be the last item in the list:
last_day_dict = day_counts[-1]
date = last_day_dict['posted_day']
count = last_day_dict['dailycount']
You can then compare date to today() to see if they match. If they don't, the user didn't post today; if they do, he posted count times.
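If all you actually need is the figure for one specific day, a simpler query avoids the grouping entirely (a sketch; someuser and target_date are placeholders):

import datetime

target_date = datetime.date.today()
posts_on_day = Post.objects.filter(author=someuser, posted_day=target_date).count()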
I use Djapian to search for object by keywords, but I want to be able to filter results. It would be nice to use Django's QuerySet API for this, for example:
if query.strip():
    results = Model.indexer.search(query).prefetch()
else:
    results = Model.objects.all()
results = results.filter(somefield__lt=somevalue)
return results
But Djapian returns a ResultSet of Hit objects, not Model objects. I can of course filter the objects "by hand" in Python, but that is not realistic when filtering all objects (when the query is empty) - I would have to retrieve the whole table from the database.
Am I out of luck with using Djapian for this?
I went through its source and found that Djapian has a filter method that can be applied to its results. I have just tried the below code and it seems to be working.
My indexer is as follows:
class MarketIndexer(djapian.Indexer):
    fields = ['name', 'description', 'tags_string', 'state']
    tags = [('state', 'state'),]
Here is how I filter results (never mind the first line that does stuff for wildcard usage):
objects = model.indexer.search(q_wc).flags(djapian.resultset.xapian.QueryParser.FLAG_WILDCARD).prefetch()
objects = objects.filter(state=1)
When executed, it now brings Markets that have their state equal to "1".
I don't know Djapian, but I am familiar with Xapian. In Xapian you can filter the results with a MatchDecider.
The decision function of the match decider gets called on every document which matches the search criteria so it's not a good idea to do a database query for every document here, but you can of course access the values of the document.
For example at ubuntuusers.de we have a xapian database which contains blog posts, forum posts, planet entries, wiki entries and so on and each document in the xapian database has some additional access information stored as value. After the query, an AuthMatchDecider filters the potential documents and returns the filtered MSet which are then displayed to the user.
If the decision procedure is as simple as somefield < somevalue, you could also simply add the value of somefield to the document's values (using the sortable_serialise function provided by Xapian) and combine an OP_VALUE_RANGE query with the original query using OP_FILTER.
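For illustration, a rough sketch of that OP_FILTER / OP_VALUE_RANGE idea with the raw Xapian Python bindings (the value slot number is an assumption, and somefield must have been stored with sortable_serialise at index time):

import xapian

SOMEFIELD_SLOT = 0  # value slot used for somefield when indexing (assumption)

def restrict_by_somefield(original_query, somevalue):
    # matches documents whose stored somefield lies between 0 and somevalue
    value_range = xapian.Query(
        xapian.Query.OP_VALUE_RANGE,
        SOMEFIELD_SLOT,
        xapian.sortable_serialise(0),
        xapian.sortable_serialise(somevalue),
    )
    # OP_FILTER keeps the original query's ranking but only returns
    # documents that also match the value range
    return xapian.Query(xapian.Query.OP_FILTER, original_query, value_range)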
Basically, I have a table with a bunch of foreign keys and I'm trying to query only the first occurrence of a particular key, by the "created" field. Using the Blog/Entry example: if the Entry model has a foreign key to Blog and a foreign key to User, how can I construct a query that selects all Entries where a particular User wrote the first Entry for each of the various Blogs?
class Blog(models.Model):
    ...

class User(models.Model):
    ...

class Entry(models.Model):
    blog = models.ForeignKey(Blog)
    user = models.ForeignKey(User)
I assume there's some magic I'm missing to select the first entries of a blog, and that I can simply filter further down to a particular user by appending:
query = Entry.objects.magicquery.filter(user=user)
But maybe there's some other more efficient way. Thanks!
query = Entry.objects.filter(user=user).order_by('id')[0]
Basically order by id (lowest to highest), and slice it to get only the first hit from the QuerySet.
I don't have a Django install available right now to test my line, so please check the documentation in case I have a typo above:
order by
limiting querysets
By the way, there is an interesting note in the "limiting querysets" manual section:
To retrieve a single object rather than a list (e.g. SELECT foo FROM bar LIMIT 1), use a simple index instead of a slice. For example, this returns the first Entry in the database, after ordering entries alphabetically by headline:
Entry.objects.order_by('headline')[0]
EDIT: Ok, this is the best I could come up with so far (after your comment and mine). It doesn't return Entry objects, but their ids as entry_id.
query = Entry.objects.values('blog').filter(user=user).annotate(Count('blog')).annotate(entry_id=Min('id'))
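If you then need actual Entry objects rather than ids, one way (untested, an assumption on my part) is to feed those ids back into a second query:

first_entry_ids = query.values_list('entry_id', flat=True)
first_entries = Entry.objects.filter(id__in=first_entry_ids)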
I'll keep looking for a better way.
Ancient question, I realise - @zalew's response is close but will likely result in the error:
ProgrammingError: SELECT DISTINCT ON expressions must match initial ORDER BY expressions
To correct this, try aligning the ordering and distinct parts of the query:
Entry.objects.filter(user=user).distinct("blog").order_by("blog", "created")
As a bonus, in case of multiple entries being created at exactly the same time (unlikely, but you never know!), you could add determinism by including id in the order_by:
Entry.objects.filter(user=user).distinct("blog").order_by("blog", "created", "id")
I can't test it at this particular moment:
Entry.objects.filter(user=user).distinct("blog").order_by("id")
Given a class:
from django.db import models
class Person(models.Model):
name = models.CharField(max_length=20)
Is it possible, and if so how, to have a QuerySet that filters based on dynamic arguments? For example:
# Instead of:
Person.objects.filter(name__startswith='B')
# ... and:
Person.objects.filter(name__endswith='B')
# ... is there some way, given:
filter_by = '{0}__{1}'.format('name', 'startswith')
filter_value = 'B'
# ... that you can run the equivalent of this?
Person.objects.filter(filter_by=filter_value)
# ... which will throw an exception, since `filter_by` is not
# an attribute of `Person`.
Python's argument expansion may be used to solve this problem:
kwargs = {
    '{0}__{1}'.format('name', 'startswith'): 'A',
    '{0}__{1}'.format('name', 'endswith'): 'Z'
}
Person.objects.filter(**kwargs)
This is a very common and useful Python idiom.
A simplified example:
In a Django survey app, I wanted an HTML select list showing registered users. But because we have 5000 registered users, I needed a way to filter that list based on query criteria (such as just people who completed a certain workshop). In order for the survey element to be re-usable, I needed for the person creating the survey question to be able to attach those criteria to that question (don't want to hard-code the query into the app).
The solution I came up with isn't 100% user friendly (requires help from a tech person to create the query) but it does solve the problem. When creating the question, the editor can enter a dictionary into a custom field, e.g.:
{'is_staff':True,'last_name__startswith':'A',}
That string is stored in the database. In the view code, it comes back in as self.question.custom_query. The value of that is a string that looks like a dictionary. We turn it back into a real dictionary with eval() and then stuff it into the queryset with **kwargs:
kwargs = eval(self.question.custom_query)
user_list = User.objects.filter(**kwargs).order_by("last_name")
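Since eval() will happily execute arbitrary code, a slightly safer variant of the same idea (my substitution, not what the original answer used) is ast.literal_eval, which only parses Python literals such as the dictionary stored above:

import ast

kwargs = ast.literal_eval(self.question.custom_query)  # parses literals only, no code execution
user_list = User.objects.filter(**kwargs).order_by("last_name")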
Additionally, to extend the previous answer (which got some requests for further code elements), I am adding some working code that I use with Q. Let's say that in my request it is possible to have, or not have, filters on fields like:
publisher_id
date_from
date_until
Those fields can appear in the query, but they may also be absent.
This is how I build filters from those fields for an aggregated query that cannot be filtered further after the initial queryset execution:
# prepare filters to apply to queryset
filters = {}
if publisher_id:
    filters['publisher_id'] = publisher_id
if date_from:
    filters['metric_date__gte'] = date_from
if date_until:
    filters['metric_date__lte'] = date_until

filter_q = Q(**filters)
queryset = Something.objects.filter(filter_q)...
Hope this helps, since I've spent quite some time digging this up.
Edit:
As an additional benefit, you can use lists too. For the previous example, if instead of publisher_id you have a list called publisher_ids, then you could use this piece of code:
if publisher_ids:
    filters['publisher_id__in'] = publisher_ids
django.db.models.Q is exactly what you want, the Django way.
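For completeness, a small sketch of what that looks like (field names borrowed from the question); Q objects can be built dynamically and combined with | or & before filtering:

from django.db.models import Q

conditions = Q(name__startswith='A') | Q(name__endswith='Z')
people = Person.objects.filter(conditions)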
This looks much more understandable to me:
kwargs = {
    'name__startswith': 'A',
    'name__endswith': 'Z',
    # add more filters here
}
Person.objects.filter(**kwargs)
A really complex search form usually indicates that a simpler model is trying to dig its way out.
How, exactly, do you expect to get the values for the column name and operation?
Where do you get the values of 'name' and 'startswith'?
filter_by = '%s__%s' % ('name', 'startswith')
A "search" form? You're going to -- what? -- pick the name from a list of names? Pick the operation from a list of operations? While open-ended, most people find this confusing and hard-to-use.
How many columns have such filters? 6? 12? 18?
A few? A complex pick-list doesn't make sense. A few fields and a few if-statements make sense.
A large number? Your model doesn't sound right. It sounds like the "field" is actually a key to a row in another table, not a column.
Specific filter buttons. Wait... That's the way the Django admin works. Specific filters are turned into buttons. And the same analysis as above applies. A few filters make sense. A large number of filters usually means a kind of first normal form violation.
A lot of similar fields often means there should have been more rows and fewer fields.