I want to populate a Django database using manage.py loaddata initial_data.json, where the JSON file contains the specifications of several objects. My problem is that these objects have a 'user' attribute referencing a Django User object, to indicate which user created them. The model for these objects looks like this:
from django.db import models
from django.contrib.auth.models import User
class MySpecialModel(models.Model):
    name = models.CharField(max_length=200)
    user = models.ForeignKey(User)
The objects in my json will look like this:
[{"fields":{
"name": "some name",
"user": XXX
}
...]
The problem is that I don't know what to write in place of the XXX to indicate the user. I have tried usernames, but Django tells me it expects a number where the XXX is. Using a number does not produce any error, but I don't see my database populated. So is there a way to reference a Django object in an initial_data.json file?
You should write the user's primary key (user.pk) there; it is an integer value.
[{"fields":{
"name": "some name",
"user": 1
}
...]
Obviously, the user with that pk must be created before you import any object with a foreign key to it, so it may be easier to first create some (fake or not) users through the admin and export them with: python manage.py dumpdata auth.User --indent 4 > users.json
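Note also that each fixture entry needs a "model" key (and usually a "pk"), which the snippet above omits; a complete initial_data.json entry might look like this, assuming the app is named myapp (a hypothetical name):
[
    {
        "model": "myapp.myspecialmodel",
        "pk": 1,
        "fields": {
            "name": "some name",
            "user": 1
        }
    }
]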
I have a Python project set up with Django 1.8.0 and PostgreSQL. My model looks like this:
class poll_db(models.Model):
    p_pk = models.AutoField(primary_key=True)
    p_name = models.CharField(max_length=256)
    p_desc = models.CharField(max_length=512)
I have a POST URL registered with the router in urls.py:
router.register(r'newpoll', views.createPoll)
I am trying to make a default POST call to the following URL:
http://localhost:8080/newpoll/
And my POST body looks like:
{
    "name": "What's the weekend plan?",
    "desc": "Poll to decide on the weekend plan"
}
The request hits the server and a new entry is created in the DB. But when I look at the created entry, it has empty values except for the p_pk:
 p_pk | p_name | p_desc
------+--------+--------
   14 |        |
which means the values are saved as empty. But when I override the default create method in views.py, I can see the values as part of the request, and adding them to the DB works fine.
All I am trying to do is skip writing a custom method for adding it to the DB and use the default create method.
Any help is much appreciated. Thanks!
My bad. I had read-only fields on the serializer.
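For anyone hitting the same symptom, here is a minimal sketch of how read-only fields cause it, assuming a ModelSerializer named PollSerializer (a hypothetical name):
from rest_framework import serializers
from .models import poll_db

class PollSerializer(serializers.ModelSerializer):
    class Meta:
        model = poll_db
        fields = ('p_pk', 'p_name', 'p_desc')
        # Fields listed here are ignored on input, so create() receives no
        # values for them and the row is saved with empty strings.
        read_only_fields = ('p_name', 'p_desc')  # removing this fixes the empty entries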
I'm writing a simple Flask app, with the sole purpose to learn Python and MongoDB.
I've managed to reach the point where all the collections are defined and CRUD operations work in general. Now, one thing that I really want to understand is how to refresh the collection after updating its structure. For example, say that I have the following model:
user.py
class User(db.Document, UserMixin):
    email = db.StringField(required=True, unique=True)
    password = db.StringField(required=True)
    active = db.BooleanField()
    first_name = db.StringField(max_length=64, required=True)
    last_name = db.StringField(max_length=64, required=True)
    # pass the callable, not its result, so the default is evaluated per document
    registered_at = db.DateTimeField(default=datetime.datetime.utcnow)
    confirmed = db.BooleanField()
    confirmed_at = db.DateTimeField()
    last_login_at = db.DateTimeField()
    current_login_at = db.DateTimeField()
    last_login_ip = db.StringField(max_length=45)
    current_login_ip = db.StringField(max_length=45)
    login_count = db.IntField()
    companies = db.ListField(db.ReferenceField('Company'), default=[])
    roles = db.ListField(db.ReferenceField(Role), default=[])

    meta = {
        'indexes': [
            {'fields': ['email'], 'unique': True}
        ]
    }
Now, I already have entries in my user collection, but I want to change companies to:
company = db.ReferenceField('Company')
How can I refresh the collection's structure, without having to bring the whole database down?
I do have a manage.py script that helps me and also provides a shell:
#!/usr/bin/python
from flask.ext.script import Manager
from flask.ext.script.commands import Shell

from app import factory

app = factory.create_app()
manager = Manager(app)
manager.add_command("shell", Shell(use_ipython=True))
# manager.add_command('run_tests', RunTests())

if __name__ == "__main__":
    manager.run()
and I have tried a couple of commands, based on information I could gather and my basic knowledge:
>>> from app.models import db, User
>>> import mongoengine
>>> mongoengine.Document(User)
field = iter(self._fields_ordered)
AttributeError: 'Document' object has no attribute '_fields_ordered'
>>> mongoengine.Document(User).modify() # well, same result as above
Any pointers on how to achieve this?
Update
I am asking all of this because I have updated my user.py to match my new requirements, but since the collection's structure was not refreshed, any time I interact with the db itself I get the following error:
FieldDoesNotExist: The field 'companies' does not exist on the
document 'User', referer: http://local.faqcolab.com/company
The solution is easier than I expected:
db.getCollection('user').update(
    // query
    {},
    // update
    {
        $rename: {
            'companies': 'company'
        }
    },
    // options
    {
        "multi": true,    // update all documents
        "upsert": false   // insert a new document if no existing document matches the query
    }
);
Explanation for each of the {}:
The first is empty because I want to update all documents in the user collection.
The second contains $rename, the update operator that renames the fields I want.
The last contains additional options for the query to be executed.
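For completeness, the same rename can be issued from Python with pymongo; a minimal sketch, where the connection URI and database name are assumptions:
from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017')  # assumed connection URI
db = client['faqcolab']  # assumed database name

# Rename 'companies' to 'company' in every document of the user collection.
db.user.update_many({}, {'$rename': {'companies': 'company'}})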
I have updated my user.py to match my new requirements, but since the collection's structure was not refreshed, any time I interact with the db itself I get the following error
MongoDB does not have a "table structure" like relational databases do. After a document has been inserted, you can't change its schema by changing the document model.
I don't want to sound like I'm telling you that the answer is to use different tools, but seeing things like db.ListField(db.ReferenceField('Company')) makes me think you'd be much better off with a relational database (Postgres is well supported in the Flask ecosystem).
Mongo works best for storing schema-less documents (you don't know beforehand how your data is structured, or it varies significantly between documents). Unless you have data like that, it's worth looking at other options. Especially since you're just getting started with Python and Flask, there's no point in making things harder than they are.
I am trying to export some data from a MySQL database and import it into a PostgreSQL database. The two models are described below:
class Location(models.Model):
    name = models.CharField(max_length=100)

class Item(models.Model):
    title = models.CharField(max_length=200)
    location = models.ForeignKey(Location)

class Book(Item):
    author = models.CharField(max_length=100)
Notice that the Book model inherits from the Item model. (Also, I do realize that author really should be a separate model - but I need something simple to demonstrate the problem.) I first attempt to export the data from the model using the dumpdata command:
./manage.py dumpdata myapp.location >locations.json
./manage.py dumpdata myapp.item >items.json
./manage.py dumpdata myapp.book >books.json
The JSON in items.json looks something like this:
{
    "fields": {
        "title": "Introduction to Programming",
        "location": 1
    },
    "model": "myapp.item",
    "pk": 1
}
The JSON in books.json looks something like this:
{
    "fields": {
        "author": "Smith, Bob"
    },
    "model": "myapp.book",
    "pk": 1
}
I can import locations.json and items.json without issue but as soon as I attempt to import books.json, I run into the following error:
IntegrityError: Problem installing fixture 'books.json': Could not load
myapp.Book(pk=1): null value in column "location_id" violates not-null
constraint
Edit: the schema for myapp.books (according to PostgreSQL itself) is as follows:
# SELECT * FROM myapp_book;
item_ptr_id | author
-------------+--------
(0 rows)
The books.json file needs to contain all the fields of the parent model; in your case, the 'title' and 'location' fields (with appropriate values, of course) need to be added from the items.json file. loaddata doesn't follow the database structure (with the intermediate table and all), but checks the actual fields of the model.
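For example (a sketch based on the fixtures above), a merged books.json entry would look something like this:
{
    "fields": {
        "title": "Introduction to Programming",
        "location": 1,
        "author": "Smith, Bob"
    },
    "model": "myapp.book",
    "pk": 1
}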
To avoid ending up with double entries, you will also have to remove from items.json all entries that correspond to rows in the myapp_book table of the original MySQL database.
Another solution would be to use http://pypi.python.org/pypi/py-mysql2pgsql (see also this question)
It seems that, for whatever reason, the Book schema in your Postgres DB doesn't match the models: it has a location_id column. You should drop the table and rerun syncdb.
How do you load a Django fixture so that models referenced via natural keys don't conflict with pre-existing records?
I'm trying to load such a fixture, but I'm getting IntegrityErrors from my MySQL backend, complaining about Django trying to insert duplicate records, which doesn't make any sense.
As I understand Django's natural key feature, in order to fully support dumpdata and loaddata usage, you need to define a natural_key method in the model, and a get_by_natural_key method in the model's manager.
So, for example, I have two models:
class PersonManager(models.Manager):
    def get_by_natural_key(self, name):
        return self.get(name=name)

class Person(models.Model):
    objects = PersonManager()
    name = models.CharField(max_length=255, unique=True)

    def natural_key(self):
        return (self.name,)

class BookManager(models.Manager):
    def get_by_natural_key(self, title, *person_key):
        person = Person.objects.get_by_natural_key(*person_key)
        return self.get(title=title, author=person)  # the FK field is named 'author'

class Book(models.Model):
    objects = BookManager()
    author = models.ForeignKey(Person)
    title = models.CharField(max_length=255)

    def natural_key(self):
        return (self.title,) + self.author.natural_key()
    natural_key.dependencies = ['myapp.Person']
My test database already contains a sample Person and Book record, which I used to generate the fixture:
[
    {
        "pk": null,
        "model": "myapp.person",
        "fields": {
            "name": "bob"
        }
    },
    {
        "pk": null,
        "model": "myapp.book",
        "fields": {
            "author": [
                "bob"
            ],
            "title": "bob's book"
        }
    }
]
I want to be able to take this fixture and load it into any instance of my database to recreate the records, regardless of whether or not they already exist in the database.
However, when I run python manage.py loaddata myfixture.json I get the error:
IntegrityError: (1062, "Duplicate entry '1-1' for key 'myapp_person_name_uniq'")
Why is Django attempting to re-create the Person record instead of reusing the one that's already there?
It turns out the solution requires a very minor patch to Django's loaddata command. Since it's unlikely the Django devs would accept such a patch, I've forked it in my package of various Django admin-related enhancements.
The key code change (lines 189-201 of loaddatanaturally.py) simply involves calling get_natural_key() to find any existing pk inside the loop that iterates over the deserialized objects.
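The idea, roughly, is this (a sketch of the approach, not the actual patched command):
from django.core import serializers

def load_fixture_reusing_natural_keys(path):
    # Load a JSON fixture, reusing the pk of any existing row that
    # matches an object's natural key (a sketch, not the real patch).
    with open(path) as stream:
        for deserialized in serializers.deserialize('json', stream):
            obj = deserialized.object
            manager = type(obj)._default_manager
            if hasattr(obj, 'natural_key') and hasattr(manager, 'get_by_natural_key'):
                try:
                    # If a row with this natural key exists, update it in place.
                    obj.pk = manager.get_by_natural_key(*obj.natural_key()).pk
                except type(obj).DoesNotExist:
                    pass
            deserialized.save()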
Actually, loaddata is not supposed to work with existing data in the database; it is normally used for the initial load of data.
Look at this question for another way of doing it: Import data into Django model with existing data?
How do you exclude the primary key from the JSON produced by Django's dumpdata when natural keys are enabled?
I've constructed a record that I'd like to "export" so others can use it as a template, by loading it into a separate database with the same schema, without conflicting with other records in the same model.
As I understand Django's support for natural keys, this seems like what NKs were designed to do. My record has a unique name field, which is also used as the natural key.
So when I run:
from django.core import serializers
from myapp.models import MyModel
obj = MyModel.objects.get(id=123)
serializers.serialize('json', [obj], indent=4, use_natural_keys=True)
I would expect an output something like:
[
    {
        "model": "myapp.mymodel",
        "fields": {
            "name": "foo",
            "create_date": "2011-09-22 12:00:00",
            "create_user": [
                "someusername"
            ]
        }
    }
]
which I could then load into another database, using loaddata, expecting it to be dynamically assigned a new primary key. Note that my "create_user" field is an FK to Django's auth.User model, which supports natural keys, and it is output as its natural key instead of the integer primary key.
However, what's generated is actually:
[
    {
        "pk": 123,
        "model": "myapp.mymodel",
        "fields": {
            "name": "foo",
            "create_date": "2011-09-22 12:00:00",
            "create_user": [
                "someusername"
            ]
        }
    }
]
which will clearly conflict with and overwrite any existing record with primary key 123.
What's the best way to fix this? I don't want to retroactively change all the auto-generated primary key integer fields to whatever the equivalent natural keys are, since that would cause a performance hit as well as be labor intensive.
Edit: This seems to be a bug that was reported...2 years ago...and has largely been ignored...
Updating the answer for anyone coming across this in 2018 and beyond.
There is a way to omit the primary key through the use of natural keys and the unique_together Meta option. Taken from the Django documentation on serialization:
You can use this command to test:
python manage.py dumpdata app.model --pks 1,2,3 --indent 4 --natural-primary --natural-foreign > dumpdata.json
Serialization of natural keys
So how do you get Django to emit a natural key when serializing an object? Firstly, you need to add another method – this time to the model itself:
class Person(models.Model):
    objects = PersonManager()

    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    birthdate = models.DateField()

    def natural_key(self):
        return (self.first_name, self.last_name)

    class Meta:
        unique_together = (('first_name', 'last_name'),)
That method should always return a natural key tuple – in this example, (first name, last name). Then, when you call serializers.serialize(), you provide use_natural_foreign_keys=True or use_natural_primary_keys=True arguments:
serializers.serialize('json', [book1, book2], indent=2, use_natural_foreign_keys=True, use_natural_primary_keys=True)
When use_natural_foreign_keys=True is specified, Django will use the natural_key() method to serialize any foreign key reference to objects of the type that defines the method.
When use_natural_primary_keys=True is specified, Django will not provide the primary key in the serialized data of this object since it can be calculated during deserialization:
{
    "model": "store.person",
    "fields": {
        "first_name": "Douglas",
        "last_name": "Adams",
        "birthdate": "1952-03-11"
    }
}
The problem with JSON is that you can't omit the pk field, since it will be required when loading the fixture data again. If it is missing, loading will fail with:
$ python manage.py loaddata some_data.json
[...]
File ".../django/core/serializers/python.py", line 85, in Deserializer
data = {Model._meta.pk.attname : Model._meta.pk.to_python(d["pk"])}
KeyError: 'pk'
As pointed out in the answer to this question, you can use YAML or XML if you really want to omit the pk attribute, or just replace the primary key value with null.
import re
from django.core import serializers
some_objects = MyClass.objects.all()
s = serializers.serialize('json', some_objects, use_natural_keys=True)
# Replace id values with null - adjust the regex to your needs
s = re.sub('"pk": [0-9]{1,5}', '"pk": null', s)
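A slightly more robust variant of the same idea (a sketch): parse the JSON and null out each pk instead of relying on a regex.
import json
from django.core import serializers

some_objects = MyClass.objects.all()
s = serializers.serialize('json', some_objects, use_natural_keys=True)

# Parse the serialized output and null out every pk explicitly.
data = json.loads(s)
for entry in data:
    entry['pk'] = None
s = json.dumps(data, indent=4)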
Override the Serializer class in a separate module:
from django.core.serializers.json import Serializer as JsonSerializer
from django.utils.encoding import smart_unicode

class Serializer(JsonSerializer):
    def end_object(self, obj):
        self.objects.append({
            "model": smart_unicode(obj._meta),
            "fields": self._current,
            # The original end_object() adds the pk here
        })
        self._current = None
Register it in Django:
serializers.register_serializer("json_no_pk", "path.to.module.with.custom.serializer")
And use it:
serializers.serialize('json_no_pk', [obj], indent=4, use_natural_keys=True)