I'm working on a Django project and trying to speed up the calls. I noticed that Django automatically does a second query to evaluate any foreign key relationships. For instance, if my models look like:
class Person(models.Model):
    name = models.CharField("blah", max_length=100)

class Address(models.Model):
    person = models.ForeignKey(Person)
Then I make:
p1 = Person.objects.create(name="Bob")
address1 = Address.objects.create(person=p1)
print(p1.id)  # let it be 1, since it is the first entry
Then when I call:
Address.objects.filter(person_id=1)
I get:
Query #1: SELECT address.id, address.person_id FROM address
Query #2: SELECT person.id, person.name FROM person
I want to get rid of the second call, query #2. I have tried using "defer" from the Django documentation, but that did not work (in fact it makes even more calls). "values" is a possibility, but in actual practice there are many more fields I want to pull. The only thing I want is for it not to evaluate the FOREIGN KEY. I would be happy to get the person_id back, or not. Getting rid of it would drastically reduce the runtime, especially when I run a command like Address.objects.all(), because Django evaluates every foreign key.
Having just seen your other question on the same issue, I'm going to guess that you have defined a __unicode__ method that references the ForeignKey field. If you query for some objects in the shell and output them, the __unicode__ method will be called, which requires a query to get the ForeignKey. The solution is to either rewrite that method so it doesn't need that reference, or - as I say in the other question - use select_related().
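For illustration, a minimal sketch of both options, assuming the Person/Address models above (the __unicode__ body here is only a guess at what yours might look like):

class Address(models.Model):
    person = models.ForeignKey(Person)

    def __unicode__(self):
        # return self.person.name              # this is what triggers query #2 per object
        return u"Address for person %d" % self.person_id  # uses the raw id column, no extra query

# Or fetch the related Person rows in the same query with a JOIN:
addresses = Address.objects.select_related('person').filter(person_id=1)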
Next time, please provide full code, including some that actually demonstrates the problem you are having.
I haven't had much luck finding other questions that helped with this, but apologies if I missed something and this is a duplicate.
I'm trying to add to some ManyToMany fields, without having to explicitly type out the names of the fields in the code (because the function I'm working on will be used to add to multiple fields and I'd rather not have to repeat the same code for every field). I'm having a hard time using ._meta to reference the model and field objects correctly so that .add() doesn't throw an "AttributeError: 'ManyToManyField' object has no attribute 'add'".
This is simplified because the full body of code is too long to post it all here, but in models.py, I have models defined similar to this:
class Sandwich(models.Model):
    name = models.CharField(max_length=MAX_CHAR_FIELD)
    # string references, since Veggie and Meat are defined further down
    veggies = models.ManyToManyField('Veggie')
    meats = models.ManyToManyField('Meat')

class Veggie(models.Model):
    name = models.CharField(max_length=MAX_CHAR_FIELD)

class Meat(models.Model):
    name = models.CharField(max_length=MAX_CHAR_FIELD)
Once instances of these are created and saved, I can successfully use .add() like this:
blt = Sandwich(name='blt')
blt.save()
lettuce = Veggie(name='lettuce')
lettuce.save()
tomato = Veggie(name='tomato')
tomato.save()
bacon = Meat(name='bacon')
bacon.save()
blt.veggies.add(lettuce)
blt.veggies.add(tomato)
blt.meats.add(bacon)
But if I try to use ._meta to get blt's fields and add to them that way, I can't. For example, something like this:
field_name='meats'
field = blt._meta.get_field(field_name)
field.add(bacon)
will throw "AttributeError: 'ManyToManyField' object has no attribute 'add'".
So, how can I use ._meta or a similar approach to get and refer to these fields in a way that will let me use .add()? (Bonus round: how and why is "blt.meats" different from "blt._meta.get_field('meats')" anyway?)
Why do you want to do
field = blt._meta.get_field(field_name)
field.add(bacon)
instead of
blt.meats.add(bacon)
in the first place?
If what you want is to access the attribute meats on the blt instance of the Sandwich class because you have the string 'meats' somewhere, then it's plain Python you're after:
field_string = 'meats'
meats_attribute = getattr(blt, field_string, None)
if meats_attribute is not None:
    meats_attribute.add(bacon)
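A hedged sketch of a reusable helper built on that idea, for the "add to several different fields" use case from the question (add_to_m2m is a made-up name):

def add_to_m2m(instance, field_name, obj):
    # Look up the related manager by name (e.g. blt.meats) and add to it.
    manager = getattr(instance, field_name, None)
    if manager is not None:
        manager.add(obj)

add_to_m2m(blt, 'meats', bacon)
add_to_m2m(blt, 'veggies', lettuce)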
But if you're at the point where you're doing that sort of thing, you might want to revise your data modelling.
Bonus round:
Call type() on blt.meats and on blt._meta.get_field(field_name) and see what each returns.
One is a ManyToManyField, the other a RelatedManager. The first is an abstraction that lets you tell Django you have an M2M relation between two models, so it can create a through table for you; the other is an interface for querying those related objects (you can call .filter(), .exclude() on it, like with querysets): https://docs.djangoproject.com/en/4.1/ref/models/relations/#django.db.models.fields.related.RelatedManager
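For example, roughly what you should see if you check (the exact manager class name varies between Django versions):

print(type(blt._meta.get_field('meats')))
# <class 'django.db.models.fields.related.ManyToManyField'>
print(type(blt.meats))
# a dynamically created ManyRelatedManager, i.e. the related-manager interface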
I have a piece of code which fetches a QuerySet from the DB and then appends a new calculated field to every object in the QuerySet. It's not an option to add this field via annotation (because it's legacy code and because this calculation is based on other, already pre-fetched data).
Like this:
from django.db import models

class Human(models.Model):
    name = models.CharField(max_length=100)
    surname = models.CharField(max_length=100)

def calculate_new_field(s):
    return len(s.name) * 42

people = Human.objects.filter(id__in=[1, 2, 3, 4, 5])
for s in people:
    s.new_column = calculate_new_field(s)
# people.somehow_reorder(new_order_by=new_column)
So now all people in the QuerySet have a new column, and I want to order these objects by the new_column field. order_by() obviously will not work, since it operates at the database level. I understand that I can pass them around as a sorted list, but there are a lot of templates and other logic that expect a QuerySet-like interface from this object, with its methods and so on.
So the question is: is there some not-too-dirty way to reorder an existing QuerySet by a dynamically added field, or to create a new QuerySet-like object with this data? I believe I'm not the only one who has faced this problem and that it has already been solved in Django, but I can't find anything (except for adding third-party libs, which is not an option either).
Conceptually, the QuerySet is not a list of results, but the "instructions to get those results". It's lazily evaluated and also cached. The internal attribute of the QuerySet that keeps the cached results is qs._result_cache.
So the for s in people loop forces the evaluation of the query and caches the results.
You could, after that, sort the results by doing:
from operator import attrgetter

people._result_cache.sort(key=attrgetter('new_column'))
But, after evaluating a QuerySet, it makes little sense (in my opinion) to keep the QuerySet interface, as many of the operations will cause a re-evaluation of the query. From this point on you should be dealing with a list of models.
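For example, a minimal sketch of carrying on with a plain sorted list instead:

from operator import attrgetter

ordered_people = sorted(people, key=attrgetter('new_column'))
# or with an explicit key function:
ordered_people = sorted(people, key=lambda s: s.new_column)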
You could try functions.Length:
from django.db.models.functions import Length

qs = Human.objects.filter(id__in=[1, 2, 3, 4, 5])
qs = qs.annotate(reorder=Length('name') * 42).order_by('reorder')
I commonly find myself writing the same criteria in my Django application(s) more than once. I'll usually encapsulate it in a function that returns a Django Q() object, so that I can maintain the criteria in just one place.
I will do something like this in my code:
def CurrentAgentAgreementCriteria(useraccountid):
    '''Returns Q that finds agent agreements that give the useraccountid account current delegated permissions.'''
    AgentAccountMatch = Q(agent__account__id=useraccountid)
    StartBeforeNow = Q(start__lte=timezone.now())
    EndAfterNow = Q(end__gte=timezone.now())
    NoEnd = Q(end=None)
    # Now put the criteria together
    AgentAgreementCriteria = AgentAccountMatch & StartBeforeNow & (NoEnd | EndAfterNow)
    return AgentAgreementCriteria
This makes it so that I don't have to think through the DB model more than once, and I can combine the return values from these functions to build more complex criteria. That works well so far, and has already saved me time when the DB model changes.
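For example, usage looks roughly like this (AgentAgreement and request.user.id are stand-ins for whatever the real model and account id are):

current = AgentAgreement.objects.filter(CurrentAgentAgreementCriteria(request.user.id))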
Something I have realized as I start to combine the criteria from these functions is that a Q() object is inherently tied to the type of object .filter() is being called on. That is what I would expect.
I occasionally find myself wanting to use a Q() object from one of my functions to construct another Q object that is designed to filter a different, but related, model's instances.
Let's use a simple/contrived example to show what I mean. (It's simple enough that normally this would not be worth the overhead, but remember that I'm using a simple example here to illustrate what is more complicated in my app.)
Say I have a function that returns a Q() object that finds all Django users, whose username starts with an 'a':
def UsernameStartsWithAaccount():
    return Q(username__startswith='a')
Say that I have a related model that is a user profile with settings including whether they want emails from us:
class UserProfile(models.Model):
    account = models.OneToOneField(User, unique=True, related_name='azendalesappprofile')
    emailMe = models.BooleanField(default=False)
Say I want to find all UserProfiles which have a username starting with 'a' AND want us to send them some email newsletter. I can easily write a Q() object for the latter:
wantsEmails = Q(emailMe=True)
but I find myself wanting to do something like this for the former:
startsWithA = Q(account=UsernameStartsWithAaccount())
# And then
UserProfile.objects.filter(startsWithA & wantsEmails)
Unfortunately, that doesn't work (it generates invalid PSQL syntax when I tried it).
To put it another way, I'm looking for a syntax along the lines of Q(account=Q(id=9)) that would return the same results as Q(account__id=9).
So, a few questions arise from this:
Is there a syntax with Django Q() objects that allows you to add "context" to them to allow them to cross relational boundaries from the model you are running .filter() on?
If not, is this logically possible? (Since I can write Q(account__id=9) when I want to do something like Q(account=Q(id=9)) it seems like it would).
Maybe someone will suggest something better, but I ended up passing the context manually to such functions. I don't think there is an easy solution, as you might need to traverse a whole chain of related tables to get to your field, like table1__table2__table3__profile__user__username; how would Django guess that? The User table could be linked to table2 too, but you don't need that in this case, so I think you can't avoid setting the path manually.
Also, you can pass a dictionary to Q() and a list or a dictionary to filter(), which is much easier to work with than typing out keyword parameters and applying &.
def UsernameStartsWithAaccount(context=''):
    field = 'username__startswith'
    if context:
        field = context + '__' + field
    return Q(**{field: 'a'})
Then, if you simply need to AND your conditions, you can combine them into a list and pass it to filter:
UserProfile.objects.filter(*[startsWithA, wantsEmails])
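For example, from UserProfile the path to the user model is 'account', so (continuing the question's models):

startsWithA = UsernameStartsWithAaccount(context='account')  # Q(account__username__startswith='a')
wantsEmails = Q(emailMe=True)
UserProfile.objects.filter(*[startsWithA, wantsEmails])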
I have two models which I want to relate: User and Group.
Each user belongs to a group. I've tried to give the group field a default value by using get_or_create():
group = models.ForeignKey(Group.objects.get_or_create(name="Free")[0])
But it raises the following error:
(fields.E300) Field defines a relation with model 'Group', which is either not installed, or is abstract.
What can I do to fix this issue?
Each user must have a non-null group value, so I've read about this get_or_create() method. But I've also seen that it can return more than one object... and I don't want that to happen. I thought about making the name parameter unique, but is there a better solution?
Can you help me, please? I appreciate your help.
A more comprehensive answer can be found here: How to set a Django model field's default value to a function call / callable (e.g., a date relative to the time of model object creation)
You need to specify the related model and set the default.
def default_group():
    return Group.objects.get_or_create(name="Free")[0]

class User(models.Model):
    group = models.ForeignKey('app_name.Group', default=default_group)
Your default value would be evaluated at model definition time, but Django allows you to provide a callable as default, which is called for each instance creation.
To explain the error - code that is not inside a function, such as the line in your question, is executed as soon as your models.py file is loaded by Python. This happens early in the start-up of your Django process, when Django looks for a models.py file in each of the INSTALLED_APPS and imports it. The problem is that you don't know which other models have been imported yet. The error here is because the Group model (from django.contrib.auth.models) has not been imported yet, so it is as if it doesn't exist (yet).
Others have suggested you could put the Group.objects.get_or_create(name="Free")[0] in a function so that it is not executed immediately, Django will instead call the function only when it needs to know the value. At this point all the models in your project, and Django's own models, will have been imported and it will work.
Regarding the second part of your question... yes, any time you use the get or get_or_create methods you need to query on a unique field, otherwise you may get a MultipleObjectsReturned exception.
In fact I think you should not use get_or_create for what you are trying to do here. Instead you should use an initial data fixture:
https://docs.djangoproject.com/en/1.9/howto/initial-data/
...to ensure that the default group already exists (and with a known primary key value) before you run your site.
That way you will know the unique pk of the default Group and you can do a get query:
def default_group():
    return Group.objects.get(pk=1)

class YourModel(models.Model):
    group = models.ForeignKey(Group, default=default_group)
I'd like to set up a ForeignKey field in a django model which points to another table some of the time. But I want it to be okay to insert an id into this field which refers to an entry in the other table which might not be there. So if the row exists in the other table, I'd like to get all the benefits of the ForeignKey relationship. But if not, I'd like this treated as just a number.
Is this possible? Is this what Generic relations are for?
This question was asked a long time ago, but for newcomers there is now a built in way to handle this by setting db_constraint=False on your ForeignKey:
https://docs.djangoproject.com/en/dev/ref/models/fields/#django.db.models.ForeignKey.db_constraint
customer = models.ForeignKey('Customer', db_constraint=False)
or if you want it to be nullable as well as not enforcing referential integrity:
customer = models.ForeignKey('Customer', null=True, blank=True, db_constraint=False)
We use this in cases where we cannot guarantee that the relations will get created in the right order.
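A rough illustration of the behaviour, with made-up Order/Customer models: the database will happily store a dangling id, but accessing the relation when the row is missing still raises DoesNotExist.

order = Order(customer_id=999999)   # no FK constraint at the DB level, so this saves fine
order.save()
try:
    order.customer                  # the ORM still runs a query for the related row
except Customer.DoesNotExist:
    pass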
I'm new to Django, so I don't know if it provides what you want out of the box. I thought of something like this:
from django.db import models

class YourModel(models.Model):
    my_fk = models.PositiveIntegerField(null=True)

    def set_fk_obj(self, obj):
        self.my_fk = obj.id

    def get_fk_obj(self):
        if self.my_fk is None:
            return None
        try:
            return YourFkModel.objects.get(pk=self.my_fk)
        except YourFkModel.DoesNotExist:
            return None
I don't know if you use the contrib admin app. Using PositiveIntegerField instead of ForeignKey, the field would be rendered as a plain text input on the admin site.
This is probably as simple as declaring a ForeignKey and creating the column without actually declaring it as a FOREIGN KEY. That way, you'll get o.obj_id; o.obj will work if the object exists, and (I think) raise an exception if you try to load an object that doesn't actually exist (probably DoesNotExist).
However, I don't think there's any way to make syncdb do this for you. I found syncdb to be limiting to the point of being useless, so I bypass it entirely and create the schema with my own code. You can use syncdb to create the database, then alter the table directly, e.g. ALTER TABLE tablename DROP CONSTRAINT fk_constraint_name.
You also inherently lose ON DELETE CASCADE and all referential integrity checking, of course.
To apply @Glenn Maynard's solution via South, generate an empty South migration:
python manage.py schemamigration myapp name_of_migration --empty
Edit the migration file then run it:
def forwards(self, orm):
    db.delete_foreign_key('table_name', 'field_name')

def backwards(self, orm):
    sql = db.foreign_key_sql('table_name', 'field_name', 'foreign_table_name', 'foreign_field_name')
    db.execute(sql)
Source article
(Note: It might help if you explain why you want this. There might be a better way to approach the underlying problem.)
Is this possible?
Not with ForeignKey alone, because you're overloading the column values with two different meanings, without a reliable way of distinguishing them. (For example, what would happen if a new entry in the target table is created with a primary key matching old entries in the referencing table? What would happen to these old referencing entries when the new target entry is deleted?)
The usual ad hoc solution to this problem is to define a "type" or "tag" column alongside the foreign key, to distinguish the different meanings (but see below).
Is this what Generic relations are for?
Yes, partly.
GenericForeignKey is just a Django convenience helper for the pattern above; it pairs a foreign key with a type tag that identifies which table/model it refers to (using the model's associated ContentType; see contenttypes)
Example:
from django.contrib.contenttypes import generic
from django.db import models

class Foo(models.Model):
    other_type = models.ForeignKey('contenttypes.ContentType', null=True)
    other_id = models.PositiveIntegerField()

    # Optional accessor, not a stored column
    other = generic.GenericForeignKey('other_type', 'other_id')
This will allow you use other like a ForeignKey, to refer to instances of your other model. (In the background, GenericForeignKey gets and sets other_type and other_id for you.)
To represent a number that isn't a reference, you would set other_type to None, and just use other_id directly. In this case, trying to access other will always return None, instead of raising DoesNotExist (or returning an unintended object, due to id collision).
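A rough usage sketch, assuming some other model Bar exists:

bar = Bar.objects.create()
foo = Foo(other=bar)          # GenericForeignKey fills in other_type and other_id
foo.save()

plain = Foo(other_id=12345)   # just a number; other_type stays None, so no relation
plain.save()
assert plain.other is None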
fieldname = models.ForeignKey('TargetModel', null=True, blank=True, db_constraint=False)
Use a declaration like this in your model.