How can I make a query like
select name where id in (select id from ...)
using the Django ORM? I know I could do this with two for loops, one to collect an intermediate result and another to use it, but that seems impractical. It is a simple SQL query, and I expect it should be just as simple in Python.
I have these models:
class Invoice(models.Model):
    factura_id = models.IntegerField(unique=True)
    created_date = models.DateTimeField()
    store_id = models.ForeignKey(Store, blank=False)

class invoicePayments(models.Model):
    invoice = models.ForeignKey(Factura)
    date = models.DateTimeField()  # auto_now=True
    money = models.DecimalField(max_digits=9, decimal_places=0)
I need to get the payments of an invoice, filtered by store_id and payment date.
In MySQL I would write this with a SELECT ... IN (SELECT ...) subquery. It is a simple query, but the only Django ORM equivalent I could come up with uses a for loop, which I don't like:
invoiceXstore = invoice.objects.filter(local=3)
for a in invoiceXstore:
    payments = invoicePayments.objects.filter(invoice=a.id,
        date__range=["2016-05-01", "2016-05-06"])
You can traverse ForeignKey relations using double underscores (__) in the Django ORM. For example, your query could be implemented as:
payments = invoicePayments.objects.filter(invoice__store_id=3,
    date__range=["2016-05-01", "2016-05-06"])
I guess you renamed your classes to English before posting here. In this case, you may need to change the first part to factura__local=3.
As a side note, it is recommended to rename your model class to InvoicePayments (with a capital I) to be more compliant with PEP8.
Your raw MySQL query is a subquery:
select name where id in (select id from ...)
In MySQL this will usually be slower than an INNER JOIN (refer: http://dev.mysql.com/doc/refman/5.7/en/rewriting-subqueries.html), so you can rewrite your raw query as an INNER JOIN, which will look like this:
SELECT ip.* FROM invoicepayments ip INNER JOIN invoice i
    ON ip.invoice_id = i.id
You can then use a WHERE clause to apply the filtering.
The looping approach you have tried does work, but it is not recommended because it executes a large number of queries. Instead, you can do:
InvoicePayments.objects.filter(invoice__local=3,
    date__range=("2016-05-01", "2016-05-06"))
I am not quite sure what 'local' stands for, because your model does not show any field like that. Please update your model with the correct field, or edit the query as appropriate.
To learn about __range, see https://docs.djangoproject.com/en/1.9/ref/models/querysets/#range
Say I have 3 models in Django
class Instrument(models.Model):
    ticker = models.CharField(max_length=30, unique=True, db_index=True)

class Instrument_df(models.Model):
    instrument = models.OneToOneField(
        Instrument,
        on_delete=models.CASCADE,
        primary_key=True,
    )

class Quote(models.Model):
    instrument = models.ForeignKey(Instrument, on_delete=models.CASCADE)
I just want to query all Quotes that correspond to an instrument of 'DF' type. In SQL I would perform the join of Quote and Instrument_df on the id field.
Using Django's ORM I came up with
Quote.objects.filter(instrument__instrument_df__instrument_id__gte=-1)
I think this does the job, but I see two drawbacks:
1) I am joining 3 tables, when in fact table Instrument would not need to be involved.
2) I had to insert the trivial id > -1 condition, that holds always. This looks awfully artificial.
How should this query be written?
Thanks!
Assuming Instrument_df has other fields not shown in the snippet (otherwise this table is useless and could be replaced by a flag on Instrument), a possible solution could be to use either a subquery or two queries:
# with a subquery
dfids = Instrument_df.objects.values_list("instrument", flat=True)
Quote.objects.filter(instrument__in=dfids)
# with two queries (can be faster on MySQL)
dfids = list(Instrument_df.objects.values_list("instrument", flat=True))
Quote.objects.filter(instrument__in=dfids)
Whether this will perform better than your current solution depends on your database vendor and version (MySQL was known for being very bad at handling subqueries; I don't know if that's still the case) and on your actual data.
But I think the best solution here would be a plain raw query. This is a bit less portable and may require more care in case of a schema update (hint: use a custom manager and write this query as a manager method, so you have a single point of truth; you don't want to scatter raw SQL queries across your views).
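A minimal sketch of that manager idea (the table names myapp_quote and myapp_instrument_df are assumptions based on Django's default app_model naming; adjust for your app label):

from django.db import models

class QuoteManager(models.Manager):
    def df_quotes(self):
        # raw query returning only Quotes that have a matching Instrument_df row
        return self.raw(
            "SELECT q.* FROM myapp_quote q "
            "INNER JOIN myapp_instrument_df d "
            "ON q.instrument_id = d.instrument_id"
        )

class Quote(models.Model):
    instrument = models.ForeignKey(Instrument, on_delete=models.CASCADE)
    objects = QuoteManager()

Quote.objects.df_quotes() then returns a RawQuerySet of actual Quote instances, and the raw SQL lives in one place.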
I am trying to use a Django model to query for a record, but then return a concatenated field built from different tables joined by foreign keys.
I can do it in SQL like this:
SELECT
location.location_geoname_id as id,
CONCAT_WS(', ', location.location_name, region.region_name, country.country_name) AS 'text'
FROM
geonames_location as location
JOIN
geonames_region as region
ON
location.region_geoname_id = region.region_geoname_id
JOIN
geonames_country as country
ON
region.country_geoname_id = country.country_geoname_id
WHERE
location.location_name like 'location'
ORDER BY
location.location_name, region.region_name, country.country_name
LIMIT 10;
Is there a cleaner way to do this using Django models? Or do I need to just use SQL for this one?
Thank you
Do you really need the SQL to return the concatenated field? Why not query the models in the usual way (with select_related()) and then concatenate in Python? Or if you're worried about querying more columns than you need, use values_list:
locations = Location.objects.values_list(
    'location_name', 'region__region_name', 'country__country_name')
location_texts = [', '.join(l) for l in locations]
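For completeness, a sketch of the select_related() route mentioned above (the region and country relation names on the models are assumptions based on your SQL):

locations = (Location.objects
             .filter(location_name__icontains='location')
             .select_related('region', 'region__country')
             .order_by('location_name')[:10])
location_texts = [', '.join([loc.location_name,
                             loc.region.region_name,
                             loc.region.country.country_name])
                  for loc in locations]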
You can also write a raw query for this in your code, and concatenate afterwards.
Example:
org = Organization.objects.raw('SELECT organization_id, name FROM organization WHERE is_active=1 ORDER BY name')
Keep one thing in mind: in a raw query you always have to fetch the primary key of the table; it's mandatory. Here organization_id is the primary key of the organization table.
Which one is more useful and simpler (raw query or model query) is up to you.
I would like to be able to full text search across several text fields of one of my SQLAlchemy mapped objects. I would also like my mapped object to support foreign keys and transactions.
I plan to use MySQL to run the full text search. However, I understand that MySQL can only run full text search on a MyISAM table, which does not support transactions and foreign keys.
In order to accomplish my objective I plan to create two tables. My code will look something like this:
class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)

users_myisam = Table('users_myisam', Base.metadata,
    Column('id', Integer),
    Column('name', String(50)),
    Column('description', Text),
    mysql_engine='MyISAM')

conn = Base.metadata.bind.connect()
conn.execute("CREATE FULLTEXT INDEX idx_users_ftxt \
    on users_myisam (name, description)")
Then, to search I will run this:
q = 'monkey'
ft_search = users_myisam.select("MATCH (name,description) AGAINST ('%s')" % q)
result = ft_search.execute()
for row in result: print row
This seems to work, but I have a few questions:
Is my approach of creating two tables to solve my problem reasonable? Is there a standard/better/cleaner way to do this?
Is there a SQLAlchemy way to create the fulltext index, or am I best to just directly execute "CREATE FULLTEXT INDEX ..." as I did above?
Looks like I have a SQL injection problem in my search/match against query. How can I do the select the "SQLAlchemy way" to fix this?
Is there a clean way to join the users_myisam select/match against right back to my user table and return actual User instances, since this is what I really want?
In order to keep my users_myisam table in sync with my mapped object user table, does it make sense for me to use a MapperExtension on my User class, and set the before_insert, before_update, and before_delete methods to update the users_myisam table appropriately, or is there some better way to accomplish this?
Thanks,
Michael
Is my approach of creating two tables to solve my problem reasonable?
Is there a standard/better/cleaner way to do this?
I've not seen this use case attempted before, as developers who value transactions and constraints tend to use Postgresql in the first place. I understand that may not be possible in your specific scenario.
Is there a SQLAlchemy way to create the fulltext index, or am I best
to just directly execute "CREATE FULLTEXT INDEX ..." as I did above?
conn.execute() is fine, though if you want something slightly more integrated you can use the DDL() construct; read through http://docs.sqlalchemy.org/en/rel_0_8/core/schema.html?highlight=ddl#customizing-ddl for details.
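A minimal sketch of the DDL() approach, assuming the users_myisam Table from your snippet:

from sqlalchemy import DDL, event

# emit the FULLTEXT index statement automatically when the table is created
event.listen(
    users_myisam,
    "after_create",
    DDL("CREATE FULLTEXT INDEX idx_users_ftxt "
        "ON users_myisam (name, description)")
)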
Looks like I have a SQL injection problem in my search/match against query. How can I do the
select the "SQLAlchemy way" to fix this?
note: this recipe is only for MATCH against multiple columns simultaneously - if you have just one column, use the match() operator more simply.
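For the single-column case that might look like this (a sketch reusing users_myisam and q from the question; the generic match() operator compiles to MATCH ... AGAINST on MySQL):

one_col = users_myisam.select(users_myisam.c.description.match(q))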
most basically you could use the text() construct:
from sqlalchemy import text, bindparam
users_myisam.select(
    text("MATCH (name,description) AGAINST (:value)",
         bindparams=[bindparam('value', q)])
)
more comprehensively you could define a custom construct:
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ClauseElement
from sqlalchemy import literal
class Match(ClauseElement):
    def __init__(self, columns, value):
        self.columns = columns
        self.value = literal(value)

@compiles(Match)
def _match(element, compiler, **kw):
    return "MATCH (%s) AGAINST (%s)" % (
        ", ".join(compiler.process(c, **kw) for c in element.columns),
        compiler.process(element.value)
    )

my_table.select(Match([my_table.c.a, my_table.c.b], "some value"))
docs:
http://docs.sqlalchemy.org/en/rel_0_8/core/compiler.html
Is there a clean way to join the users_myisam select/match against right back
to my user table and return actual User instances, since this is what I really want?
you should probably create a UserMyISAM class, map it just like User, then use relationship() to link the two classes together, then simple operations like this are possible:
query(User).join(User.search_table).\
    filter(Match([UserSearch.x, UserSearch.y], "some value"))
In order to keep my users_myisam table in sync with my mapped object
user table, does it make sense for me to use a MapperExtension on my
User class, and set the before_insert, before_update, and
before_delete methods to update the users_myisam table appropriately,
or is there some better way to accomplish this?
MapperExtensions are deprecated, so you'd at least use the event API, and in most cases we want to try applying object mutations outside of the flush process. In this case, I'd be using the constructor for User, or alternatively the init event, as well as a basic @validates decorator which will receive values for the target attributes on User and copy those values into User.search_table.
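A rough sketch of that @validates idea (the search_table relationship name is an assumption; adapt it to however you link the two mapped classes):

from sqlalchemy.orm import validates

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)

    @validates('name', 'description')
    def _mirror_to_search(self, key, value):
        # copy each incoming value onto the related MyISAM row,
        # assuming a one-to-one relationship named 'search_table'
        if self.search_table is not None:
            setattr(self.search_table, key, value)
        return value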
Overall, if you've been learning SQLAlchemy from another source (like the O'Reilly book), it's really out of date by many years, and I'd focus on the current online documentation.
I'm trying to use the Django ORM for a task that requires a JOIN in SQL. I
already have a workaround that accomplishes the same task with multiple queries
and some off-DB processing, but I'm not satisfied by the runtime complexity.
First, I'd like to give you a short introduction to the relevant part of my
model. After that, I'll explain the task in English, SQL and (inefficient) Django ORM.
The Model
In my CMS model, posts are multi-language: For each post and each language, there can be one instance of the post's content. Also, when editing posts, I don't UPDATE, but INSERT new versions of them.
So, PostContent is unique on post, language and version. Here's the class:
class PostContent(models.Model):
    """ contains all versions of a post, in all languages. """
    language = models.ForeignKey(Language)
    post = models.ForeignKey(Post)  # the Post object itself only
    version = models.IntegerField(default=0)  # contains slug and id.
    # further metadata and content left out

    class Meta:
        unique_together = (("post", "language", "version"),)
The Task in SQL
And this is the task: I'd like to get a list of the most recent versions of all posts in each language, using the ORM. In SQL, this translates to a JOIN on a subquery that does GROUP BY and MAX to get the maximum of version for each unique pair of post and language. The perfect answer to this question would be a number of ORM calls that produce the following SQL statement:
SELECT
id,
post_id,
version,
v
FROM
cms_postcontent,
(SELECT
post_id as p,
max(version) as v,
language_id as l
FROM
cms_postcontent
GROUP BY
post_id,
language_id
) as maxv
WHERE
post_id=p
AND version=v
AND language_id=l;
Solution in Django
My current solution using the Django ORM does not produce such a JOIN, but two separate SQL
queries, and one of those queries can become very large. I first execute the subquery (the inner SELECT from above):
maxv = PostContent.objects.values('post', 'language').annotate(
    max_version=Max('version'))
Now, instead of joining maxv, I explicitly ask for every single post in maxv, by
filtering PostContent.objects.all() for each tuple of post, language, max_version. The resulting SQL looks like
SELECT * FROM PostContent WHERE
post=P1 and language=L1 and version=V1
OR post=P2 and language=L2 and version=V2
OR ...;
In Django:
from django.db.models import Q
conjunc = [Q(version=pc['max_version']) &
           Q(post=pc['post']) &
           Q(language=pc['language'])
           for pc in maxv]
result = PostContent.objects.filter(
    reduce(lambda disjunc, x: disjunc | x, conjunc))
If maxv is sufficiently small, e.g. when retrieving a single post, this might be
a good solution, but the size of the query and the time to create it grow linearly with
the number of posts. The complexity of parsing the query is also at least linear.
Is there a better way to do this, apart from using raw SQL?
You can join (in the sense of union) querysets with the | operator, as long as the querysets query the same model.
However, it sounds like you want something like PostContent.objects.order_by('version').distinct('language'); as you can't quite do that in 1.3.1, consider using values in combination with distinct() to get the effect you need.
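For reference, a sketch of that order_by()/distinct() idea as it became possible on Django 1.4+ with PostgreSQL (DISTINCT ON is not available on 1.3.1 or on MySQL):

latest = (PostContent.objects
          .order_by('post', 'language', '-version')
          .distinct('post', 'language'))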
In the documentation there is this example:
Book.objects.annotate(num_authors=Count('authors')).filter(num_authors__gt=1)
How can I filter the authors before executing the annotation on them?
For example, I want to count only those authors that have the name "John".
I don't believe you can make this selective count with the Django database-abstraction API without including some SQL. You make additions to a QuerySet's SQL using the extra method.
Assuming that the example in the documentation is part of an app called "inventory", and using syntax that works with PostgreSQL (you didn't specify, and it's what I'm more familiar with), the following should do what you're asking for:
Book.objects.extra(
    select={"john_count":
        """SELECT COUNT(*) FROM "inventory_book_authors"
        INNER JOIN "inventory_author" ON ("inventory_book_authors"."author_id"="inventory_author"."id")
        WHERE "inventory_author"."name"=%s
        AND "inventory_book_authors"."book_id"="inventory_book"."id"
        """},
    select_params=('John',)
)
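Each Book in the resulting queryset then carries the computed attribute, so you can read it like any other field (the title field on Book and the john_count_sql variable holding the subquery string above are assumptions for illustration):

books = Book.objects.extra(
    select={"john_count": john_count_sql},
    select_params=('John',),
)
for book in books:
    print book.title, book.john_count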