I'm trying to create a server-side Flask session extension whose sessions expire after a given amount of time. I found the MongoDB shell command below in the documentation.
db.log_events.createIndex( { "createdAt": 1 }, { expireAfterSeconds: 3600 } )
But how can I do it using pymodm?
Take a look at the model definition docs: http://pymodm.readthedocs.io/en/stable/api/index.html?highlight=indexes#defining-models. There is a Meta attribute called "indexes", which is responsible for creating indexes. Here is an example:
import pymodm
import pymongo

class SomeModel(pymodm.MongoModel):
    ...

    class Meta:
        indexes = [pymongo.IndexModel([('field_name', <direction>)])]
From the docs:
indexes: This is a list of IndexModel instances that describe the
indexes that should be created for this model. Indexes are created
when the class definition is evaluated.
IndexModel is explained on this page.
Then add the following Meta class to your MongoModel class:
class Meta:
    indexes = [
        IndexModel([('createdAt', pymongo.ASCENDING)], expireAfterSeconds=3600)
    ]
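Putting it together, a complete model might look like this (a sketch, not tested; the model and field names other than createdAt are illustrative):
import datetime

import pymodm
import pymongo
from pymongo import IndexModel

class SessionModel(pymodm.MongoModel):
    # Illustrative session fields; only createdAt matters for the TTL index.
    session_id = pymodm.fields.CharField(primary_key=True)
    data = pymodm.fields.DictField()
    createdAt = pymodm.fields.DateTimeField(default=datetime.datetime.utcnow)

    class Meta:
        # Equivalent to the shell command:
        # db.log_events.createIndex({"createdAt": 1}, {expireAfterSeconds: 3600})
        indexes = [
            IndexModel([('createdAt', pymongo.ASCENDING)], expireAfterSeconds=3600)
        ]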
I have a web application in Python/Django. I need to import users and display data about them from another database that belongs to another existing application. All I need is for users to be able to log in and for the application to display information about them. What are the possible solutions?
You can configure two entries in DATABASES in settings.py:
DATABASES = {
    'default': {
        ...
    },
    'user_data': {
        ...
    }
}
Then store the User models, authentication and related data in one database and the rest of the information in the other. You can link the information to a specific User with a field that stores the id of that User in the other database.
If you have multiple databases and create a model, you should declare which database it is going to be stored in; if you don't, it goes into the default one (if you have one declared).
class UserModel(models.Model):
    class Meta:
        db_table = 'default'

class UserDataModel(models.Model):
    class Meta:
        db_table = 'user_data'
The answer from @NixonSparrow is wrong.
Meta.db_table only defines the table name inside a database, not the database itself.
To switch databases you can use Manager.using('database_name') on every model; this is well documented here: https://docs.djangoproject.com/en/4.0/topics/db/multi-db/#topics-db-multi-db-routing
In my project I use multiple database routers, as described in that same documentation.
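For illustration, such a router might look like this (a sketch; the app label, router class name and dotted path are placeholders, only the 'other_users_data' alias matches the settings below):
class OtherUsersRouter:
    # Route every model of the hypothetical 'other_users_app' to the second database.
    route_app_labels = {'other_users_app'}

    def db_for_read(self, model, **hints):
        if model._meta.app_label in self.route_app_labels:
            return 'other_users_data'
        return None

    def db_for_write(self, model, **hints):
        if model._meta.app_label in self.route_app_labels:
            return 'other_users_data'
        return None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label in self.route_app_labels:
            return db == 'other_users_data'
        return None

# settings.py
DATABASE_ROUTERS = ['path.to.OtherUsersRouter']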
A router like that avoids having to override every manager call with .using(). But in your case:
DATABASES = {
    'default': {
        ...
    },
    'other_users_data': {
        ...
    }
}
and somewhere in your views:
other_users = otherUserModel.objects.using('other_users_data')
otherUserModel should probably declare in its Meta which table you want to use (db_table = 'other_users_table_name'), and it should probably also set managed = False to hide this model from the migration machinery.
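Putting that together, a sketch of such a model and query (field names and the table name are assumptions; only the 'other_users_data' alias comes from the settings above):
class OtherUserModel(models.Model):
    # Guessed columns; map these to the real columns of the other application's user table.
    username = models.CharField(max_length=150)
    email = models.EmailField()

    class Meta:
        db_table = 'other_users_table_name'  # the existing table in the other database
        managed = False                      # keep Django migrations away from this table

# somewhere in views.py
other_users = OtherUserModel.objects.using('other_users_data').all()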
Given the general structure:
class Article(Page):
    body = RichTextField(...)
    search_fields = Page.search_fields + [index.SearchField('body')]

class ArticleFilter(FilterSet):
    search = SearchFilter()

    class Meta:
        model = Article
        fields = ['slug']

class Query(ObjectType):
    articles = DjangoFilterConnectionField(ArticleNode, filterset_class=ArticleFilter)
I thought I would create a "SearchFilter" to expose the Wagtail search functionality, since I ultimately want to perform full-text search via GraphQL like so:
query {
  articles (search: "some text in a page") {
    edges {
      node {
        slug
      }
    }
  }
}
"search" is not a field on the Django model which is why I created a custom field in the Django FilterSet. My thought was to do something like:
class SearchFilter(CharFilter):
    def filter(self, qs, value):
        search_results = [r.pk for r in qs.search(value)]
        return self.get_method(qs)(pk__in=search_results)
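A slightly more defensive variant of the same idea (a sketch; the empty-value guard is an extra assumption, and qs is assumed to be a Wagtail page queryset that exposes .search()):
from django_filters import CharFilter

class SearchFilter(CharFilter):
    def filter(self, qs, value):
        if not value:
            # No search term supplied; leave the queryset untouched.
            return qs
        # Run Wagtail's full-text search over the already-filtered queryset,
        # then narrow the original queryset to the matching primary keys so
        # the rest of the FilterSet keeps operating on a regular QuerySet.
        search_results = [result.pk for result in qs.search(value)]
        return self.get_method(qs)(pk__in=search_results)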
however, I'm curious whether there's a better, more efficient pattern. At a bare minimum I'd want to ensure the SearchFilter is applied last (so the queryset being searched has already been filtered).
Should the "search" be moved outside of the FilterSet and into the Query/Node/custom connection, and if so, how can I add an additional field to "articles" so it is handled as the final step in resolving articles (i.e. tacked onto the end of the filtered queryset)? If this does belong in a separate Connection, is it possible to combine that connection with the django-filter connection?
I would think this pattern of accessing Wagtail search via graphene already exists, but I've had no luck finding it in the documentation.
I have 2 models in Django, ModelA and ModelB. Here is the code for both models (this is just example code):
class ModelA(models.Model):
    # single insert
    name = models.CharField(max_length=100)

class ModelB(models.Model):
    # multiple inserts
    model_a = models.ForeignKey(ModelA, on_delete=models.CASCADE)
    address = models.CharField(max_length=250)
Now how can I insert data into both of these models using a single database query (i.e. the database should be hit only once) with the Django ORM? More specifically, is it possible to do this via Django REST Framework serializers, since they can handle CRUD operations in an optimized manner?
I know that I can do this via multiple serializers, but that would lead to the database being hit multiple times, or I could also do this via stored procedures in MySQL.
You can make use of the bulk_create and bulk_update methods; you need to build the lists of objects manually.
In your case ModelA is the parent class and ModelB is the child, which means one ModelA object can be related to more than one ModelB object.
Say request.data in your API endpoint looks something like this:
received_data = [
    {
        "name": "model_a_obj_1",
        "children": [
            {
                "address": "some_address"
            },
            {
                "address": "some_other_address"
            }
        ]
    },
    {}, {}, ...
]
Now the first step is to build two lists: a list of children lists and a list of parent objects.
model_a_obj_list = []
model_b_children_list = []

for obj in received_data:
    children_list = []
    for child in obj['children']:
        children_list.append(ModelB(**child))
    model_b_children_list.append(children_list)
    model_a_obj_list.append(ModelA(name=obj['name']))
Now create the parent objects (ModelA) first, then loop over the created ModelA instances to assign their pks to the corresponding ModelB objects:
model_a_objs = ModelA.objects.bulk_create(model_a_obj_list)

model_b_obj_list = []
for i, a_obj in enumerate(model_a_objs):
    model_b_obj_sub_list = model_b_children_list[i]
    for b_obj in model_b_obj_sub_list:
        b_obj.model_a = a_obj
        model_b_obj_list.append(b_obj)
Now we have ModelB objects with their ModelA references set, so just bulk create the ModelB objects:
ModelB.objects.bulk_create(model_b_obj_list)
Remember that bulk_create does not call the model's .save() method behind the scenes, so any post_save logic must be handled while building the object lists for bulk_create.
This solution hits the DB twice, once for each model.
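Putting the pieces together, a consolidated version might look like this (a sketch; the transaction.atomic wrapper is an addition so the two bulk inserts succeed or fail together, and the key names follow the example payload above):
from django.db import transaction

def bulk_insert(received_data):
    model_a_obj_list = []
    model_b_children_list = []
    for obj in received_data:
        children = [ModelB(**child) for child in obj['children']]
        model_b_children_list.append(children)
        model_a_obj_list.append(ModelA(name=obj['name']))

    with transaction.atomic():
        # First hit: insert all the parents at once.
        model_a_objs = ModelA.objects.bulk_create(model_a_obj_list)

        # Attach each created parent to its children, then insert all children at once.
        model_b_obj_list = []
        for a_obj, children in zip(model_a_objs, model_b_children_list):
            for b_obj in children:
                b_obj.model_a = a_obj
                model_b_obj_list.append(b_obj)
        ModelB.objects.bulk_create(model_b_obj_list)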
Hello Awesome People!
I'm using Celery to run background tasks, but I can't access the Model in the Celery task files. I want to convert these model instances to dicts so I can use them in templates, since Django's template lookups work with dicts the same way they do with model instances.
It's quite a simple question. For instance:
class ModelA(models.Model):
    name = models.CharField(max_length=123)
    the_future = models.ForeignKey('Country', on_delete=models.CASCADE)

class Country(models.Model):
    name = models.CharField(max_length=123)
    img = models.FileField(upload_to='bathroom/')
I expect a dict like:
{
    'model_name': 'ModelA',
    'id': 1,
    'name': 'Can you Dab?',
    'the_future': {
        'model_name': 'Country',
        'id': 3,
        'name': 'Despacito',
        'img': '/media/bathroom/teethbrush.jpg',
    },
}
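For illustration, a small recursive helper could produce a dict in that shape (a sketch; the model_name key and the depth limit are my own conventions, not a Django API):
from django.db import models

def instance_to_dict(instance, depth=1):
    # Turn a model instance into a plain dict, following ForeignKeys up to `depth` levels.
    data = {'model_name': instance.__class__.__name__, 'id': instance.pk}
    for field in instance._meta.concrete_fields:
        if field.primary_key:
            continue
        value = getattr(instance, field.name)
        if isinstance(field, models.ForeignKey):
            if depth > 0 and value is not None:
                data[field.name] = instance_to_dict(value, depth - 1)
            else:
                data[field.name] = getattr(instance, field.attname)  # raw id only
        elif isinstance(field, models.FileField):
            data[field.name] = value.url if value else None
        else:
            data[field.name] = value
    return data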
How do you load a Django fixture so that models referenced via natural keys don't conflict with pre-existing records?
I'm trying to load such a fixture, but I'm getting IntegrityErrors from my MySQL backend, complaining about Django trying to insert duplicate records, which doesn't make any sense.
As I understand Django's natural key feature, in order to fully support dumpdata and loaddata usage, you need to define a natural_key method in the model, and a get_by_natural_key method in the model's manager.
So, for example, I have two models:
class PersonManager(models.Manager):
    def get_by_natural_key(self, name):
        return self.get(name=name)

class Person(models.Model):
    objects = PersonManager()

    name = models.CharField(max_length=255, unique=True)

    def natural_key(self):
        return (self.name,)

class BookManager(models.Manager):
    def get_by_natural_key(self, title, *person_key):
        person = Person.objects.get_by_natural_key(*person_key)
        return self.get(title=title, author=person)

class Book(models.Model):
    objects = BookManager()

    author = models.ForeignKey(Person)
    title = models.CharField(max_length=255)

    def natural_key(self):
        return (self.title,) + self.author.natural_key()

    natural_key.dependencies = ['myapp.Person']
My test database already contains a sample Person and Book record, which I used to generate the fixture:
[
    {
        "pk": null,
        "model": "myapp.person",
        "fields": {
            "name": "bob"
        }
    },
    {
        "pk": null,
        "model": "myapp.book",
        "fields": {
            "author": [
                "bob"
            ],
            "title": "bob's book"
        }
    }
]
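For context, a fixture like this (natural keys, pk set to null) is typically produced with dumpdata's natural-key options, e.g. python manage.py dumpdata myapp --natural-foreign --natural-primary --indent 2 > myfixture.json (the app label and file name here are assumptions).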
I want to be able to take this fixture and load it into any instance of my database to recreate the records, regardless of whether or not they already exist in the database.
However, when I run python manage.py loaddata myfixture.json I get the error:
IntegrityError: (1062, "Duplicate entry '1-1' for key 'myapp_person_name_uniq'")
Why is Django attempting to re-create the Person record instead of reusing the one that's already there?
Turns out the solution requires a very minor patch to Django's loaddata command. Since it's unlikely the Django devs would accept such a patch, I've forked it in my package of various Django admin related enhancements.
The key code change (lines 189-201 of loaddatanaturally.py) simply involves calling get_natural_key() to find any existing pk inside the loop that iterates over the deserialized objects.
Actually, loaddata is not supposed to work with existing data in the database; it is normally used for the initial load of models.
Look at this question for another way of doing it: Import data into Django model with existing data?