I've been using django-modeltranslation to translate models in Django for a while. It is straightforward and works really well on the apps I've been developing, where all translated model content is entered through forms by the end user.
e.g. inputs: content, content_en, content_pt, ...
Now I have to build an application where I need to translate 'built-in' model strings generated by Django, such as 'auth.permission.name' or 'contenttypes.contenttype.name', and add them to the django.po translation files.
I came up with a solution that works fine: it uses post_migrate signals to create a file containing dicts of ugettext_lazy calls, so new strings (a new contenttype.name, for example) are added to 'django.po' dynamically and loaded into the database.
Still, having to generate a file of ugettext calls just to register the strings feels awkward, but I haven't found another way of registering them and adding them dynamically to the django.po file, so I need your help.
Here's what I have done:
1. I created an app named 'tools' that is last in INSTALLED_APPS, so its migrations are naturally the last ones to run. This app does not have any models; it just runs migrations, holds the django-modeltranslation translation.py file, and has an application config that connects a post_migrate signal handler.
# translations.py
from modeltranslation.translator import translator, TranslationOptions
from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType


class PermissionTranslationOptions(TranslationOptions):
    fields = ('name',)


class ContentTypeTranslationOptions(TranslationOptions):
    fields = ('name',)


translator.register(Permission, PermissionTranslationOptions)
translator.register(ContentType, ContentTypeTranslationOptions)
2. Running 'manage.py makemigrations' creates migrations in the 'auth' and 'contenttypes' applications adding the extra 'name_*' fields.
3. The app has an application config that connects a post_migrate signal handler:
# __init__.py
default_app_config = 'apps.tools.config.SystemConfig'

# config.py
from django.apps import AppConfig
from django.db.models.signals import post_migrate

from apps.tools.translations.exporter import make_translations
from apps.tools.translations.importer import load_translations


def run_translations(sender, **kwargs):
    # This creates the translations file
    make_translations()
    # This loads the translations into the db
    load_translations()


class SystemConfig(AppConfig):
    name = 'apps.tools'
    verbose_name = 'Tools'

    def ready(self):
        # Call post migration operations
        post_migrate.connect(run_translations, sender=self)
4. make_translations() is called after the migrations run and generates a file with dicts of ugettext_lazy calls.
This is the bit I would like to change. Do I really need to create a file?
# exporter
import os

from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
from django.utils import translation
from django.contrib.contenttypes.management import update_all_contenttypes


# TODO
# It has got to be another way
def make_translations():
    # lets go default
    translation.activate("en")
    update_all_contenttypes()
    try:
        f = open(os.path.join(os.path.realpath(os.path.dirname(__file__)), 'translations.py'), 'w')
        # Write file
        f.write("from django.utils.translation import ugettext_lazy as _\n\n")
        # All Permissions to lazy text
        f.write('permissions = {\n')
        for perm in Permission.objects.all().order_by('id'):
            f.write('    "' + str(perm.id) + '": _("' + perm.name + '"),\n')
        f.write('}\n\n')
        # All Content types to lazy text
        f.write('content_types = {\n')
        for content in ContentType.objects.all().order_by('id'):
            f.write('    "' + str(content.id) + '": _("' + content.name + '"),\n')
        f.write('}\n\n')
        # Closing file
        f.close()
        # Importing file to get it registered with ugettext_lazy
        try:
            from apps.tools.translations import translations
        except:
            print('Could not import file')
            pass
    except:
        print('Could not create file')
        pass
The above results in a file like this:
from django.utils.translation import ugettext_lazy as _

permissions = {
    "1": _("Can add permission"),
    "2": _("Can change permission"),
    "3": _("Can delete permission"),
    "4": _("Can add group"),
    ...
}

content_types = {
    "1": _("group"),
    "2": _("user"),
    "3": _("permission"),
    "4": _("content type"),
    "5": _("session"),
    ...
}
5. Running 'makemessages' then adds these strings to the 'django.po' files. The post_migrate signal does not stop there, though: it also loads the existing compiled translations into the database.
# importer
from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
from django.conf import settings
from django.utils import translation


def load_translations():
    try:
        from apps.tools.translations.translations import permissions, content_types
    except ImportError:
        # File does not exist
        print('Translations could not be loaded')
        return
    # For each language
    for lang in settings.LANGUAGES:
        # Activate language
        translation.activate(lang[0])
        # Loading translated permissions
        all_permissions = Permission.objects.all()
        for permission in all_permissions:
            permission.name = unicode(permissions[str(permission.id)])
            permission.save()
        # Loading translated content_types
        all_contenttypes = ContentType.objects.all()
        for contenttype in all_contenttypes:
            contenttype.name = unicode(content_types[str(contenttype.id)])
            contenttype.save()
How can I replace 'make_translations()' without creating a file and register those strings with ugettext_lazy?
Thanks for your help
I've read your post, and I ran into the same problem with translating permissions. I found a very short way to solve it.
I wouldn't really recommend doing it this way, but I don't regret it either.
The solution is:
Edit the app_labeled_name method (decorated as a property) of the ContentType class in .pyenv/Lib/site-packages/django/contrib/contenttypes/models.py so that it becomes:
@property
def app_labeled_name(self):
    model = self.model_class()
    if not model:
        return self.model
    return '%s | %s' % (apps.get_app_config(model._meta.app_label).verbose_name,
                        model._meta.verbose_name)
The trick is to use apps.get_app_config(model._meta.app_label).verbose_name instead of model._meta.app_label, so it uses whatever verbose_name you set in your app's AppConfig subclass.
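If you would rather not edit the file inside site-packages, the same override can be applied as a monkey patch from one of your own apps. A minimal sketch, assuming a hypothetical app config named ToolsConfig for an app called tools:

# apps.py of one of your own apps (names here are just an example)
from django.apps import AppConfig, apps


def _app_labeled_name(self):
    model = self.model_class()
    if not model:
        return self.model
    return '%s | %s' % (apps.get_app_config(model._meta.app_label).verbose_name,
                        model._meta.verbose_name)


class ToolsConfig(AppConfig):
    name = 'tools'

    def ready(self):
        # Import here: models are not loaded yet when apps.py is first imported.
        from django.contrib.contenttypes.models import ContentType
        # Replace the property at startup instead of editing Django's source.
        ContentType.app_labeled_name = property(_app_labeled_name)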
I'm writing a Django app where users can upload CSV files. Therefore I've created an upload model with three validators:
One that checks the file extension (FileExtensionValidator),
one for the MIME type validation (ValidateFileType),
and a third one for parsing the CSV file and checking for data types, right number of columns and so on (ValidateCsv).
It would be reasonable to run the next validator only if the preceding validation didn't raise a ValidationError.
For instance, a user could upload a .py file. That would raise an error in all three validators, but I want to avoid Django checking the MIME type, or even trying to parse a .py file as CSV, when the file extension was wrong from the start.
So here is my model for the user's upload:
models.py
from django.db import models
from .utils import unique_file_path
from django.core.validators import FileExtensionValidator
from .validators import ValidateFileType, ValidateCsv
class Upload(models.Model):
    date_uploaded = models.DateTimeField(auto_now_add=True)
    file = models.FileField(upload_to=unique_file_path,
                            validators=[FileExtensionValidator(['csv']), ValidateFileType, ValidateCsv],
                            max_length=255)
With this list of validators, all three validations are always performed, and I can see all the error messages in upload_form.errors. For example:
File extension 'py' is not allowed. Allowed extensions are: 'csv'.
File type text/x-python not supported.
Some data is invalid. Please check your CSV file.
forms.py
from django import forms
from .models import Upload
class UploadForm(forms.ModelForm):
    class Meta:
        model = Upload
view.py
from django.shortcuts import render

from .forms import UploadForm


def someView(request):
    upload_form = UploadForm()
    ...
    context = {'upload_form': upload_form}
    return render(request, 'someTemplate.html', context)
Do you have an idea what the best approach is to build such a hierarchical chain of validators? Of course I could write one big all-in-one validator function, but since I use a Django core validator I don't want to rewrite it; I'd rather combine existing Django validators with my own.
You don't need to rewrite the django core validator.
In your validators file:
from django.core.validators import FileExtensionValidator


def ValidateFileType(value):
    ....your code....


def ValidateCsv(value):
    ....your code....


def csv_validator(value):
    '''Your all-in-one function'''
    extension = FileExtensionValidator(['csv'])
    extension(value)  # FileExtensionValidator is a callable class. See the docs for that.
    ValidateFileType(value)
    ValidateCsv(value)
Don't know if that's the best way, but it should do the trick.
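Since each validator raises ValidationError as soon as its check fails, the later calls in csv_validator are never reached for an invalid file. A minimal sketch of wiring the combined validator into the model from the question (same field arguments, just the single validator) could look like this:

# models.py -- use the single combined validator instead of the list of three
from django.db import models

from .utils import unique_file_path
from .validators import csv_validator


class Upload(models.Model):
    date_uploaded = models.DateTimeField(auto_now_add=True)
    file = models.FileField(upload_to=unique_file_path,
                            validators=[csv_validator],
                            max_length=255)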
I have a model class AppModelActions that inherits from CustomModel, which in turn inherits from django.db.models.Model, and I use it in all apps of the project.
I want to get all subclasses of AppModelActions across all apps, from a management command in one of the apps. I can't import each one explicitly because I want this command to work dynamically. I tried to import them programmatically, but it didn't work (I get this error: KeyError: 'AppModelActions').
This is my code:
from django.core.management.base import BaseCommand
from django.db import IntegrityError
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import ugettext_lazy as _
from django.conf import settings
from GeneralApp import models


class Command(BaseCommand):
    help = _("""Run this command to populate an empty database with the initial data required for Attractora to work.
    For the moment, these models are considered by the 'populate_db' command:
    - InventoriManagerApp.InventoryManagerAppActions """)

    def populate_db(self):
        applications = settings.INSTALLED_APPS
        for application in applications:
            try:
                import applications.models
            except:
                pass
        #from InventoryManagerApp.models import InventoryManagerAppActions
        models_to_populate = (vars()['AppModelActions'])
        #models_to_populate = (InventoryManagerAppActions,)
        for model_to_populate in models_to_populate:
            self.populate_model('InventoryManagerApp', model_to_populate)

    def populate_model(self, app_label, model_to_populate):
        model_to_populate_instance = model_to_populate()
        content_type = ContentType.objects.get(app_label=app_label, model=model_to_populate_instance.name)
        if hasattr(model_to_populate_instance, 'populate_data'):
            populate_data = model_to_populate.populate_data
            for record in populate_data:
                for key, value in record.items():
                    setattr(model_to_populate_instance, key, value)
                try:
                    model_to_populate_instance.save()
                except IntegrityError:
                    print("{} already exists.".format(model_to_populate_instance))
                # else:
                #     new_permission = Permission.objects.create(code_name=)

    def handle(self, *args, **kwargs):
        self.populate_db()
You can obtain all subclasses by iteratively updating a set of children:
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    # ...

    def populate_db(self):
        from some_app.models import AppModelActions

        children = set()
        gen = [AppModelActions]
        while gen:
            children.update(gen)
            gen = [sc for c in gen for sc in c.__subclasses__()]
        # do something with children
        # ...
After this, children is a set containing all models that are subclasses of AppModelActions (including AppModelActions itself). In case you do not want to include AppModelActions, swap the two lines in the while loop.
Since you use a management command, Django first loads all the models.py files of the installed apps, so the subclasses are registered before handle(..) is executed and the children set will contain all child models defined in installed apps.
Note that child-models can be abstract, so you might want to perform an extra filtering, such that only non-abstract models are used. For example post-process it with:
non_abstract_children = [c for c in children if not c._meta.abstract]
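As a rough sketch of how this could plug into the populate_db from the question (deriving the app label from each model's _meta instead of hard-coding it; untested, and some_app is a placeholder for wherever the base class lives):

def populate_db(self):
    from some_app.models import AppModelActions  # wherever the base class lives

    children = set()
    gen = [AppModelActions]
    while gen:
        children.update(gen)
        gen = [sc for c in gen for sc in c.__subclasses__()]

    # Only concrete models can be populated
    for model in (c for c in children if not c._meta.abstract):
        self.populate_model(model._meta.app_label, model)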
Something really annoying has been happening to me since I started using Django migrations (not South) with loaddata for fixtures inside them.
Here is a simple way to reproduce my problem:
create a new model Testmodel with 1 field field1 (CharField or whatever)
create an associated migration (let's say 0001) with makemigrations
run the migration
and add some data in the new table
dump the data in a fixture testmodel.json
create a migration that runs call_command('loaddata', 'testmodel.json'): migration 0002 (a sketch of such a migration follows this list)
add a new field to the model: field2
create an associated migration (0003)
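For reference, the 0002 data migration in this setup is the usual RunPython wrapper around loaddata; a minimal sketch (app label and migration names are placeholders, the fixture name is the one from the steps above) looks roughly like this:

# myapp/migrations/0002_load_testmodel_fixture.py (illustrative names)
from django.core.management import call_command
from django.db import migrations


def load_fixture(apps, schema_editor):
    call_command('loaddata', 'testmodel.json')


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(load_fixture),
    ]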
Now, commit that, and put your db in the state just before the changes: ./manage.py migrate myapp zero. You are now in the same state as a teammate who hasn't pulled your changes yet.
If you try to run ./manage.py migrate again you will get a ProgrammingError at migration 0002 saying that "column field2 does not exist".
It seems this is because loaddata looks at your model (which already has field2) instead of just applying the fixture to the db.
This can happen in many situations when working in a team, and it also makes the test runner fail.
Did I get something wrong? Is it a bug? What should be done in those cases?
--
I am using django 1.7
The loaddata command simply calls the serializers. The serializers work on the model state from your models.py file, not from the current migration, but there is a little trick to fool the default serializer.
First, don't invoke the serializer through call_command; use it directly:
from django.core import serializers


def load_fixture(apps, schema_editor):
    fixture_file = '/full/path/to/testmodel.json'
    fixture = open(fixture_file)
    objects = serializers.deserialize('json', fixture, ignorenonexistent=True)
    for obj in objects:
        obj.save()
    fixture.close()
Second, monkey-patch apps registry used by serializers:
from django.core import serializers


def load_fixture(apps, schema_editor):
    original_apps = serializers.python.apps
    serializers.python.apps = apps

    fixture_file = '/full/path/to/testmodel.json'
    fixture = open(fixture_file)
    objects = serializers.deserialize('json', fixture, ignorenonexistent=True)
    for obj in objects:
        obj.save()
    fixture.close()

    serializers.python.apps = original_apps
Now the serializer will use the model state from apps instead of the default one, and the whole migration process will succeed.
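To complete the picture, the patched load_fixture still has to be wired into a migration via RunPython; a minimal sketch, assuming load_fixture is defined in the same migration file and 'myapp'/'0001_initial' are placeholders:

# 0002_load_fixture.py (illustrative name)
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),
    ]

    operations = [
        # load_fixture is the monkey-patched function defined above in this file
        migrations.RunPython(load_fixture),
    ]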
To expand on the answer from GwynBleidD, and to mix in a fix for the fact that Postgres won't reset the primary key sequences when data is loaded this way (https://stackoverflow.com/a/14589706/401636),
I think I now have a failsafe migration for loading fixture data.
utils.py:
import os
from io import StringIO

import django.apps
from django.conf import settings
from django.core import serializers
from django.core.management import call_command
from django.db import connection

os.environ['DJANGO_COLORS'] = 'nocolor'


def reset_sqlsequence(apps=None, schema_editor=None):
    """Suitable for use in migrations.RunPython"""
    commands = StringIO()
    cursor = connection.cursor()

    patched = False
    if apps:
        # Monkey patch django.apps
        original_apps = django.apps.apps
        django.apps.apps = apps
        patched = True
    else:
        # If not in a migration, use the normal apps registry
        apps = django.apps.apps

    for app in apps.get_app_configs():
        # Generate the sequence reset queries
        label = app.label
        if patched and app.models_module is None:
            # Defeat strange test in the management command
            app.models_module = True
        call_command('sqlsequencereset', label, stdout=commands)
        if patched and app.models_module is True:
            app.models_module = None

    if patched:
        # Cleanup monkey patch
        django.apps.apps = original_apps

    sql = commands.getvalue()
    print(sql)
    if sql:
        # avoid DB error if sql is empty
        cursor.execute(commands.getvalue())


class LoadFixtureData(object):
    def __init__(self, *files):
        self.files = files

    def __call__(self, apps=None, schema_editor=None):
        if apps:
            # If in a migration, monkey patch the app registry
            original_apps = serializers.python.apps
            serializers.python.apps = apps

        for fixture_file in self.files:
            with open(fixture_file) as fixture:
                objects = serializers.deserialize('json', fixture)
                for obj in objects:
                    obj.save()

        if apps:
            # Cleanup monkey patch
            serializers.python.apps = original_apps
And now my data migrations look like:
# -*- coding: utf-8 -*-
# Generated by Django 1.11.1 on foo
from __future__ import unicode_literals

import os

from django.conf import settings
from django.db import migrations

from .utils import LoadFixtureData, reset_sqlsequence


class Migration(migrations.Migration):

    dependencies = [
        ('app_name', '0002_auto_foo'),
    ]

    operations = [
        migrations.RunPython(
            code=LoadFixtureData(*[
                os.path.join(settings.BASE_DIR, 'app_name', 'fixtures', fixture) + ".json"
                for fixture in ('fixture_one', 'fixture_two',)
            ]),
            # Reverse will NOT remove the fixture data
            reverse_code=migrations.RunPython.noop,
        ),
        migrations.RunPython(
            code=reset_sqlsequence,
            reverse_code=migrations.RunPython.noop,
        ),
    ]
When you run python manage.py migrate, it tries to load your testmodel.json from the fixtures folder, but your model (after the update) no longer matches the data in testmodel.json. You could try this:
Rename your fixtures directory to _fixtures.
Run python manage.py migrate
Optionally, you can now rename _fixtures back to fixtures and load your data as before with the migrate command, or load it directly with python manage.py loaddata app/_fixtures/testmodel.json
I'm trying to implement a data migration using Django 1.7's native migration system. Here is what I've done.
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations


def create_basic_user_group(apps, schema_editor):
    """Forward data migration that creates the basic_user group
    """
    Group = apps.get_model('auth', 'Group')
    Permission = apps.get_model('auth', 'Permission')

    group = Group(name='basic_user')
    group.save()

    perm_codenames = (
        'add_stuff',
        '...',
    )
    # we prefer looping over all of these in order to be sure to fetch them all
    perms = [Permission.objects.get(codename=codename)
             for codename in perm_codenames]

    group.permissions.add(*perms)
    group.save()


def remove_basic_user_group(apps, schema_editor):
    """Backward data migration that removes the basic_user group
    """
    group = Group.objects.get(name='basic_user')
    group.delete()


class Migration(migrations.Migration):
    """This migration automatically creates the basic_user group.
    """

    dependencies = [
    ]

    operations = [
        migrations.RunPython(create_basic_user_group, remove_basic_user_group),
    ]
But when I try to run the migration, I get a LookupError exception telling me that no app with label 'auth' could be found.
How can I create my groups in a clean way that could also be used in unit tests ?
I've done what you are trying to do. The problems are:
The documentation for 1.7 and 1.8 is quite clear: If you want to access a model from another app, you must list this app as a dependency:
When writing a RunPython function that uses models from apps other than the one in which the migration is located, the migration’s dependencies attribute should include the latest migration of each app that is involved, otherwise you may get an error similar to: LookupError: No installed app with label 'myappname' when you try to retrieve the model in the RunPython function using apps.get_model().
So you should have a dependency on the latest migration in auth.
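For example, the dependencies of the data migration could look like this (app names and migration numbers are placeholders; point 'auth' at whatever its latest migration is in your Django version):

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        # your app's previous migration
        ('myapp', '0001_initial'),
        # the latest 'auth' migration shipped with your Django version
        # (on Django 1.7 that is simply '0001_initial')
        ('auth', '0001_initial'),
    ]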
As you mentioned in a comment, you will run into an issue whereby the permissions you want to use have not been created yet. The problem is that permissions are created by a signal handler attached to the post_migrate signal, so the permissions associated with any new model created in a migration are not available until the migration has finished.
You can fix this by doing this at the start of create_basic_user_group:
from django.contrib.contenttypes.management import update_contenttypes
from django.apps import apps as configured_apps
from django.contrib.auth.management import create_permissions

for app in configured_apps.get_app_configs():
    update_contenttypes(app, interactive=True, verbosity=0)

for app in configured_apps.get_app_configs():
    create_permissions(app, verbosity=0)
This will also create the content types for each model (which are likewise created only after the migration); see below for why you should care about that.
Perhaps you could be more selective than I am in the code above and update just some key apps rather than all apps; I've not tried to be selective. Also, it is possible that both loops could be merged into one. I've not tried it with a single loop.
You get your Permission objects by searching by codename but codename is not guaranteed to be unique. Two apps can have models called Stuff and so you could have an add_stuff permission associated with two different apps. If this happens, your code will fail. What you should do is search by codename and content_type, which are guaranteed to be unique together. A unique content_type is associated with each model in the project: two models with the same name but in different apps will get two different content types.
This means adding a dependency on the contenttypes app, and using the ContentType model: ContentType = apps.get_model("contenttypes", "ContentType").
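A minimal sketch of that lookup inside the RunPython function, reusing the names from the question ('myapp' and the stuff model are placeholders):

def create_basic_user_group(apps, schema_editor):
    Group = apps.get_model('auth', 'Group')
    Permission = apps.get_model('auth', 'Permission')
    ContentType = apps.get_model('contenttypes', 'ContentType')

    group, _created = Group.objects.get_or_create(name='basic_user')

    # Pin the permission to the content type of the model it belongs to,
    # so two apps with an identically named model cannot collide.
    stuff_ct = ContentType.objects.get(app_label='myapp', model='stuff')
    add_stuff = Permission.objects.get(codename='add_stuff', content_type=stuff_ct)

    group.permissions.add(add_stuff)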
As said in https://code.djangoproject.com/ticket/23422, the signal post_migrate should be sent before dealing with Permission objects.
But Django already ships a helper function to send the needed signal: django.core.management.sql.emit_post_migrate_signal
Here, it worked this way:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import models, migrations
from django.core.management.sql import emit_post_migrate_signal

PERMISSIONS_TO_ADD = [
    'view_my_stuff',
    ...
]


def create_group(apps, schema_editor):
    # Works around a Django bug: https://code.djangoproject.com/ticket/23422
    db_alias = schema_editor.connection.alias
    try:
        emit_post_migrate_signal(2, False, 'default', db_alias)
    except TypeError:  # Django < 1.8
        emit_post_migrate_signal([], 2, False, 'default', db_alias)

    Group = apps.get_model('auth', 'Group')
    Permission = apps.get_model('auth', 'Permission')

    group, created = Group.objects.get_or_create(name='MyGroup')
    permissions = [Permission.objects.get(codename=i) for i in PERMISSIONS_TO_ADD]
    group.permissions.add(*permissions)


class Migration(migrations.Migration):

    dependencies = [
        ('auth', '0001_initial'),
        ('myapp', '0002_mymigration'),
    ]

    operations = [
        migrations.RunPython(create_group),
    ]
So, I figured out how to solve this problem and came to the following conclusion: apps.get_model will only fetch models from your own apps. I'm not sure whether this is good practice, but it worked for me.
I just imported the model directly and made the changes.
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import models, migrations
from django.contrib.auth.models import Group


def create_groups(apps, schema_editor):
    g = Group(name='My New Group')
    g.save()


class Migration(migrations.Migration):

    operations = [
        migrations.RunPython(create_groups)
    ]
Then just run ./manage.py migrate to finish.
I hope it helps.
I'm using Django 1.5 and I'm trying to make an application work with any custom user model. I've changed the app to use get_user_model everywhere and the app itself is not showing any problems so far.
The issue is that I want to be able to test the app as well, but I can't find a way to make ForeignKey model fields test correctly with custom user models. When I run the test case attached below, I get this error:
ValueError: Cannot assign "<NewCustomUser: alice@bob.net>": "ModelWithForeign.user" must be a "User" instance.
This is the file I'm using for testing:
from django.conf import settings
from django.contrib.auth import get_user_model
from django.contrib.auth.tests.custom_user import CustomUser, CustomUserManager
from django.db import models
from django.test import TestCase
from django.test.utils import override_settings


class NewCustomUser(CustomUser):
    objects = CustomUserManager()

    class Meta:
        app_label = 'myapp'


class ModelWithForeign(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL)


@override_settings(
    AUTH_USER_MODEL='myapp.NewCustomUser'
)
class MyTest(TestCase):
    user_info = {
        'email': 'alice@bob.net',
        'date_of_birth': '2013-03-12',
        'password': 'password1'
    }

    def test_failing(self):
        u = get_user_model()(**self.user_info)
        m = ModelWithForeign(user=u)
        m.save()
I'm referencing the user model in the ForeignKey argument list as described here, but using get_user_model there doesn't change anything, as the user attribute is evaluated before the setting change takes place. Is there a way to make this ForeignKey play nice with testing when I'm using custom user models?
I asked about this on the Django mailing list as well but it seems that, at least currently, there is no way to change the settings.AUTH_USER_MODEL and have it work nicely with a ForeignKey.
So far, in order to test my app, I've created a runtests.py file from this answer:
import os, sys

from django.conf import settings

if len(sys.argv) >= 2:
    user_model = sys.argv[1]
else:
    user_model = 'auth.User'

settings.configure(
    ...
    AUTH_USER_MODEL=user_model,
    ...
)

...
And added a bash script to actually run the tests using different user models:
for i in "auth.User" "myapp.NewCustomUser"; do
    echo "Running with AUTH_USER_MODEL=$i"
    python runtests.py $i
    if [ $? -ne 0 ]; then
        break
    fi
done
The last bit is to use a function to actually retrieve the right user model info instead of just using a "static" variable:
def get_user_info():
    if settings.AUTH_USER_MODEL == 'auth.User':
        return {default user info}
    if settings.AUTH_USER_MODEL == 'myapp.NewCustomUser':
        return {my custom user info}
    raise NotImplementedError
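The test then builds its user from that helper instead of the class-level user_info dict; a small sketch, assuming get_user_info is importable from wherever the tests live:

def test_failing(self):
    # user data now depends on which AUTH_USER_MODEL the test run was started with
    u = get_user_model()(**get_user_info())
    m = ModelWithForeign(user=u)
    m.save()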
I'm not claiming this to be a correct answer for the problem, but so far... It works.