I'm creating a new Django (1.7) model:
class MyModel(models.Model):
    field1 = models.ForeignKey('OtherModel')
    field2 = models.ForeignKey('AnotherModel', null=True)
    field3 = models.PositiveSmallIntegerField(db_index=True, null=True)
    other_field1 = models.FloatField(default=0, db_index=True)

    class Meta:
        unique_together = (('field1', 'field2', 'field3'), )
Ideally, I would have liked it to have the tuple (field1, field2, field3) as primary key, but that's not possible at the moment.
So instead, I have this automatically generated and incremented id column that is required by Django but totally useless for the rest of my code.
The thing is that I'd like to be able to delete and recreate instances of this model very often (almost continuously).
For performance reasons, I'd like to avoid an update_or_create approach, as deleting and recreating is much quicker in my tests (18 op/s with update_or_create versus 72 op/s with the "delete all and create" method, and I expect both numbers to be higher on our production server).
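In case it helps, here is a simplified sketch of what I mean by "delete all and create" (not my actual code; some_parent and incoming_rows are placeholders):

from django.db import transaction

with transaction.atomic():
    # wipe the old rows for this parent, then recreate them in bulk
    MyModel.objects.filter(field1=some_parent).delete()
    MyModel.objects.bulk_create([
        MyModel(field1=some_parent, field2=row.other, field3=row.num)
        for row in incoming_rows
    ])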
But I'm afraid of reaching the auto_increment upper limit too soon (in about a year, it seems).
Other possibilities I've considered:
using a bigint primary key: that would work, but it would probably be less efficient, especially considering that I will never use this primary key (and BigAutoField is not available until Django 1.10)
using a UUID primary key: same story (and UUIDField is not available until Django 1.8)
using a custom CharField primary key that would be something like "{self.field1.pk}/{self.field2.pk}/{self.field3}", but I don't even know if Django can handle a primary key generated from the instance itself (the advantage, though, is that it would ensure uniqueness even with null values on field2 and field3); a rough sketch of what I have in mind is below
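Something like this, perhaps (untested; I'm not sure whether assigning the pk inside save() is even a sane thing to do):

class MyModel(models.Model):
    id = models.CharField(max_length=64, primary_key=True)
    # ... fields as above ...

    def save(self, *args, **kwargs):
        # Build the key from the instance itself; None values end up as "None".
        self.id = "{}/{}/{}".format(self.field1_id, self.field2_id, self.field3)
        super(MyModel, self).save(*args, **kwargs)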
What would you suggest (aside from upgrading to a newer version of Django, I know I'm late...)? Do you think I'm looking for too much optimization?
Stack:
Django 1.7 (can't upgrade for now)
PostgreSQL 9.6
If you are feeling adventurous, you can try turning the id column into a nullable integer column without a default.
Note: This will break parts of the Model API and the admin, and will most likely cause all kinds of unforeseen trouble, but depending on your use case it might just do what you want. I'm not saying this is a good idea, I'm just showing you the option.
That being said, here is the necessary migration:
migrations.RunSQL([
    "ALTER TABLE myapp_mymodel DROP CONSTRAINT myapp_mymodel_pkey",
    "ALTER TABLE myapp_mymodel ALTER COLUMN id DROP DEFAULT",
    "ALTER TABLE myapp_mymodel ALTER COLUMN id DROP NOT NULL",
]),
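For completeness, a full migration file wrapping this could look roughly like the following (the file name and the dependency are placeholders for whatever your latest migration is):

# myapp/migrations/000X_drop_id_primary_key.py  (name and dependency are placeholders)
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '000X_previous_migration'),
    ]

    operations = [
        migrations.RunSQL([
            "ALTER TABLE myapp_mymodel DROP CONSTRAINT myapp_mymodel_pkey",
            "ALTER TABLE myapp_mymodel ALTER COLUMN id DROP DEFAULT",
            "ALTER TABLE myapp_mymodel ALTER COLUMN id DROP NOT NULL",
        ]),
    ]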
You can still create new objects and get existing objects, but you cannot directly save or delete objects:
>>> m = MyModel.objects.create(field1=1, field2=2, field3=3)
>>> m
<MyModel: MyModel object (None)>
>>> m.delete()
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File ".../lib/python3.6/site-packages/django/db/models/base.py", line 886, in delete
    (self._meta.object_name, self._meta.pk.attname)
AssertionError: MyModel object can't be deleted because its id attribute is set to None.
However, you can use .filter(...).delete() and .filter(...).update(...):
>>> MyModel.objects.filter(field1=1, field2=2, field3=3).update(field3=4)
1
>>> MyModel.objects.filter(field1=1, field2=2, field3=4).delete()
(1, {'myapp.MyModel': 1})
The MyModel I used to test this behavior has three PositiveSmallIntegerFields, not ForeignKeys, but that shouldn't make a difference:
class MyModel(models.Model):
    field1 = models.PositiveSmallIntegerField()
    field2 = models.PositiveSmallIntegerField()
    field3 = models.PositiveSmallIntegerField(db_index=True, null=True)
    other_field1 = models.FloatField(default=0, db_index=True)

    class Meta:
        unique_together = (('field1', 'field2', 'field3'), )
Related
Consider the following Django ORM example:
class A(models.Model):
    pass

class B(models.Model):
    a = models.ForeignKey('A', on_delete=models.CASCADE, null=True)
    b_key = models.SomeOtherField(doesnt_really_matter=True)

    class Meta:
        unique_together = (('a', 'b_key'),)
Now let's say I delete an instance of A that's linked to an instance of B. Normally, this is no big deal: Django can delete the A object, setting B.a = NULL, then delete B after the fact. This is usually fine because most databases don't consider NULL values in unique constraints, so even if you have b1 = B(a=a1, b_key='non-unique') and b2 = B(a=a2, b_key='non-unique'), and you delete both a1 and a2, that's not a problem because (NULL, 'non-unique') != (NULL, 'non-unique') because NULL != NULL.
However, that's not the case with SQL Server. SQL Server brilliantly defines NULL == NULL, which breaks this logic. The workaround, if you're writing raw SQL, is to use WHERE key IS NOT NULL when defining the unique constraint, but this is generated for me by Django. While I can manually create the RunSQL migrations that I'd need to drop all the original unique constraints and add the new filtered ones, it's definitely a shortcoming of the ORM or driver (I'm using pyodbc + django-pyodbc-azure). Is there some way to either coax Django into generating filtered unique constraints in the first place, or force it to delete tables in a certain order to circumvent this issue altogether, or some other general fix I could apply to the SQL server?
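To make that concrete, the kind of hand-written migration I'm talking about would look roughly like this (constraint and index names are illustrative; the real ones depend on what Django generated):

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),  # placeholder
    ]

    operations = [
        migrations.RunSQL(
            sql=[
                # Drop the unique constraint Django created for unique_together ...
                "ALTER TABLE myapp_b DROP CONSTRAINT myapp_b_a_id_b_key_uniq",
                # ... and replace it with a filtered unique index that ignores NULL FKs.
                "CREATE UNIQUE INDEX myapp_b_a_id_b_key_uniq "
                "ON myapp_b (a_id, b_key) WHERE a_id IS NOT NULL",
            ],
            reverse_sql=[
                "DROP INDEX myapp_b_a_id_b_key_uniq ON myapp_b",
                "ALTER TABLE myapp_b ADD CONSTRAINT myapp_b_a_id_b_key_uniq "
                "UNIQUE (a_id, b_key)",
            ],
        ),
    ]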
I have 2 models in my Django project.
class ModelA(models.Model):
    id = models.AutoField(primary_key=True)
    field1 = ...
    ~
    fieldN = ...

class ModelB(models.Model):
    id = models.AutoField(primary_key=True)
    a = models.ForeignKey(ModelA, on_delete=models.CASCADE)
    field1 = ...
    ~
    fieldN = ...
Here I have a one-to-many relation A->B. Table A has around 30 different fields and 10,000+ rows, and table B has around 15 fields and 10,000,000+ rows. I need to filter first by several ModelA fields, then for each filtered ModelA row/object get the related ModelB objects and filter them by several fields. After that I need to serialize them to JSON, with all the ModelB objects packed into one field as an array.
Is it possible to do this in around 1-3 seconds? If yes, what is the best approach?
I use PostgreSQL.
EDIT:
Right now I chain .filter() calls on simple ModelA fields and then iterate over the resulting QuerySet, getting the set of ModelB objects for each ModelA instance, but I suspect that this second part slows down the whole process, so I suppose there is a better way to do it.
It may be faster to do a query like this:
model_a_queryset = ModelA.objects.filter(field=whatever)
model_b_queryset = ModelB.objects.filter(a__in=model_a_queryset)
Because Django does lazy queryset evaluation, this will only result in one hit to the database.
As an aside, there is no need to define id = models.AutoField(primary_key=True) on your models. Django adds it by default.
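If the end goal is one JSON object per ModelA row with its matching ModelB rows packed into an array, prefetch_related with a filtered Prefetch is one way to keep it at two queries. A sketch, with placeholder filters and field names:

from django.db.models import Prefetch

model_a_queryset = (
    ModelA.objects
    .filter(field1=some_value)  # your ModelA filters
    .prefetch_related(
        Prefetch('modelb_set', queryset=ModelB.objects.filter(field1=other_value))
    )
)

data = [
    {
        'id': a.id,
        'field1': a.field1,
        'b_items': [
            {'id': b.id, 'field1': b.field1}
            for b in a.modelb_set.all()  # served from the prefetch cache, no extra query
        ],
    }
    for a in model_a_queryset
]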
I need a table without a primary key (in Django it was created automatically). So my question is: Can I create a model without an ID/primary key?
I'm using Django 1.7.
You can create a model without an auto-incrementing key, but you cannot create one without a primary key.
From the Django Documentation:
If Django sees you’ve explicitly set Field.primary_key, it won’t add the automatic id column.
Each model requires exactly one field to have primary_key=True (either explicitly declared or automatically added).
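For example, declaring your own primary key keeps Django from adding the automatic id column (the model and field names here are just an illustration):

class Book(models.Model):
    isbn = models.CharField(max_length=13, primary_key=True)  # no automatic id column
    title = models.CharField(max_length=200)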
No, you can't. Excerpt from the documentation:
Each model requires exactly one field to have primary_key=True (either explicitly declared or automatically added).
I've found the solution.
Since Django needs a primary key (whether composite or a single-field ID), I tried setting primary_key=True on every field in the composite-key combination, and grouped those fields in unique_together in Meta:
class ReportPvUv(models.Model):
    report_id = models.ForeignKey(Reports, primary_key=True)
    rdate = models.DateField(primary_key=True)
    fdate = models.DateTimeField(primary_key=True)
    ga_pv = models.BigIntegerField()
    ga_uv = models.BigIntegerField()
    ur_pv = models.BigIntegerField()
    ur_uv = models.BigIntegerField()
    da_pv = models.BigIntegerField()
    da_uv = models.BigIntegerField()

    class Meta:
        db_table = 'report_pv_uv'
        unique_together = ('report_id', 'rdate', 'fdate')
and when I run makemigrations, there is no ID field in its migration script :D
Thanks everybody!
How to replace default primary key in Django model with custom primary key field?
I have a model with no primary key defined at first, since Django automatically adds an id field as the primary key by default.
# models.py
from django.db import models

class Event(models.Model):
    title = models.CharField(max_length=50, unique=True)
    description = models.CharField(max_length=150)
I added some objects into it from the Django shell.
>>> e = Event(title='meeting', description='Contents about meeting')
>>> e.save()
>>> e = Event(title='party', description='Contents about party')
>>> e.save()
Then I need to add a custom character field as the primary key to this model.
class Event(models.Model):
    event_id = models.CharField(max_length=50, primary_key=True)
    ...
Running makemigrations:
$ python manage.py makemigrations
You are trying to add a non-nullable field 'event_id' to event without a default; we can't do that (the database needs something to populate existing rows).
Please select a fix:
1) Provide a one-off default now (will be set on all existing rows)
2) Quit, and let me add a default in models.py
Select an option: 1
Please enter the default value now, as valid Python
The datetime and django.utils.timezone modules are available, so you can do e.g. timezone.now()
>>> 'meetings'
Migrations for 'blog':
  0002_auto_20141201_0301.py:
    - Remove field id from event
    - Add field event_id to event
But while running migrate it threw an error:
  File ".virtualenvs/env/local/lib/python2.7/site-packages/django/db/backends/sqlite3/base.py", line 485, in execute
    return Database.Cursor.execute(self, query, params)
django.db.utils.IntegrityError: UNIQUE constraint failed: blog_event__new.event_id
In my experience (using Django 1.8.* here), I've seen similar situations when trying to update the PK field for models that already exist, have a Foreign Key relationship to another model, and have associated data in the back-end table.
You didn't specify if this model is being used in a FK relation, but it seems this is the case.
In this case, the error message you're getting is because the data that already exists needs to be made consistent with the changes you're requesting -- i.e. a new field will be the PK. This implies that the current PK must be dropped for Django to 'replace' it. (Django only supports a single PK field per model, as per the docs[1].)
Providing a default value that matches currently existing data in the related table should work.
For example:
class Organization(models.Model):
    # assume the former PK field is no longer here; name is the new PK
    name = models.CharField(max_length=100, primary_key=True)

class Product(models.Model):
    name = models.CharField(max_length=100, primary_key=True)
    organization = models.ForeignKey(Organization)
If you're updating the Organization model and products already exist, then existing product rows must be updated to refer to a valid Organization PK value. During the migration, you'd want to choose one of the existing Organization PKs (e.g. 'R&D') to update the existing products.
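A sketch of how that fix-up could be written as a data migration (the app label, migration name and the 'R&D' organization are placeholders; in practice it has to run between the schema changes):

from django.db import migrations


def point_products_at_default_org(apps, schema_editor):
    # Use the historical models so the migration stays valid as the code evolves.
    Organization = apps.get_model('myapp', 'Organization')
    Product = apps.get_model('myapp', 'Product')
    default_org, _ = Organization.objects.get_or_create(name='R&D')
    Product.objects.update(organization=default_org)


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0002_previous_migration'),  # placeholder
    ]

    operations = [
        migrations.RunPython(point_products_at_default_org),
    ]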
[1] https://docs.djangoproject.com/en/1.8/topics/db/models/#automatic-primary-key-fields
Django had already established an auto-incrementing integer id as the primary key in your backend when you created the previous model.
When you tried to apply the new model, an attempt was made to replace that primary key column, and it failed.
Another reason is that when you added the field, Django expected a unique value to be explicitly provided for each existing row, which it couldn't find.
As the previous answer says, you can re-create the migration and try again. It should work, cheers :-)
The problem is that you made the field unique, then attempted to use the same value for all the rows in the table. I'm not sure if there's a way to programmatically provide the key, but you could do the following:
Delete the migration
Remove the primary_key attribute from the field
Make a new migration
Apply it
Fill in the value for all your rows (see the sketch after this list)
Add the primary_key attribute to the field
Make a new migration
Apply it
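For step 5, one option is a small data migration; a sketch, relying on title being unique in the question's model (the app label and dependency are placeholders):

from django.db import migrations


def fill_event_ids(apps, schema_editor):
    Event = apps.get_model('blog', 'Event')
    for event in Event.objects.all():
        event.event_id = event.title  # must end up unique per row; title is unique here
        event.save()


class Migration(migrations.Migration):

    dependencies = [
        ('blog', '0003_previous_migration'),  # placeholder
    ]

    operations = [
        migrations.RunPython(fill_event_ids),
    ]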
It's bruteforce-ish, but should work well enough.
Best of luck!
I'd like to set up a ForeignKey field in a django model which points to another table some of the time. But I want it to be okay to insert an id into this field which refers to an entry in the other table which might not be there. So if the row exists in the other table, I'd like to get all the benefits of the ForeignKey relationship. But if not, I'd like this treated as just a number.
Is this possible? Is this what Generic relations are for?
This question was asked a long time ago, but for newcomers there is now a built-in way to handle this by setting db_constraint=False on your ForeignKey:
https://docs.djangoproject.com/en/dev/ref/models/fields/#django.db.models.ForeignKey.db_constraint
customer = models.ForeignKey('Customer', db_constraint=False)
or if you want it to be nullable as well as not enforcing referential integrity:
customer = models.ForeignKey('Customer', null=True, blank=True, db_constraint=False)
We use this in cases where we cannot guarantee that the relations will get created in the right order.
EDIT: update link
I'm new to Django, so I don't know if it provides what you want out of the box. I thought of something like this:
from django.db import models
class YourModel(models.Model):
    my_fk = models.PositiveIntegerField(null=True)

    def set_fk_obj(self, obj):
        self.my_fk = obj.id

    def get_fk_obj(self):
        if self.my_fk is None:
            return None
        try:
            return YourFkModel.objects.get(pk=self.my_fk)
        except YourFkModel.DoesNotExist:
            return None
I don't know if you use the contrib admin app. With PositiveIntegerField instead of ForeignKey, the field would be rendered as a plain text input on the admin site.
This is probably as simple as declaring a ForeignKey and creating the column without actually declaring it as a FOREIGN KEY. That way, o.obj_id will work, o.obj will work if the object exists, and--I think--raise an exception (probably DoesNotExist) if you try to load an object that doesn't actually exist.
However, I don't think there's any way to make syncdb do this for you. I found syncdb limiting to the point of being useless, so I bypass it entirely and create the schema with my own code. You can use syncdb to create the database, then alter the table directly, e.g. ALTER TABLE tablename DROP CONSTRAINT fk_constraint_name.
You also inherently lose ON DELETE CASCADE and all referential integrity checking, of course.
To apply the solution by @Glenn Maynard via South, generate an empty South migration:
python manage.py schemamigration myapp name_of_migration --empty
Edit the migration file then run it:
def forwards(self, orm):
    db.delete_foreign_key('table_name', 'field_name')

def backwards(self, orm):
    sql = db.foreign_key_sql('table_name', 'field_name', 'foreign_table_name', 'foreign_field_name')
    db.execute(sql)
Source article
(Note: It might help if you explain why you want this. There might be a better way to approach the underlying problem.)
Is this possible?
Not with ForeignKey alone, because you're overloading the column values with two different meanings, without a reliable way of distinguishing them. (For example, what would happen if a new entry in the target table is created with a primary key matching old entries in the referencing table? What would happen to these old referencing entries when the new target entry is deleted?)
The usual ad hoc solution to this problem is to define a "type" or "tag" column alongside the foreign key, to distinguish the different meanings (but see below).
Is this what Generic relations are for?
Yes, partly.
GenericForeignKey is just a Django convenience helper for the pattern above; it pairs a foreign key with a type tag that identifies which table/model it refers to (using the model's associated ContentType; see contenttypes).
Example:
from django.contrib.contenttypes import generic
from django.db import models

class Foo(models.Model):
    other_type = models.ForeignKey('contenttypes.ContentType', null=True)
    other_id = models.PositiveIntegerField()

    # Optional accessor, not a stored column
    other = generic.GenericForeignKey('other_type', 'other_id')
This will allow you to use other like a ForeignKey, to refer to instances of your other model. (In the background, GenericForeignKey gets and sets other_type and other_id for you.)
To represent a number that isn't a reference, you would set other_type to None, and just use other_id directly. In this case, trying to access other will always return None, instead of raising DoesNotExist (or returning an unintended object, due to id collision).
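A quick usage sketch with the Foo model above (some_obj stands for any instance of another model):

# Referring to a real object: other_type and other_id are filled in for you.
foo = Foo(other=some_obj)
foo.save()
foo.other      # -> some_obj

# Storing a bare number that isn't a reference:
foo = Foo(other_type=None, other_id=12345)
foo.save()
foo.other      # -> None (no DoesNotExist); the raw id is still on foo.other_id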
field_name = models.ForeignKey('OtherModel', null=True, blank=True, db_constraint=False)
Use this in your model.