django-tables2: add dynamic columns to a table class from HStore

My general question is: can I use the data stored in an HStoreField (Django 1.8.9) to generate columns dynamically for an existing Table class in django-tables2? As an example, say I have this model:
from django.contrib.postgres import fields as pgfields
from django.db import models

class GameSession(models.Model):
    user = models.ForeignKey('profile.GamerProfile')
    game = models.ForeignKey('games.Game')
    last_achievement = models.ForeignKey('games.Achievement')
    extra_info = pgfields.HStoreField(null=True, blank=True)
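To make the rest concrete, extra_info can hold arbitrary per-session keys; a small illustrative sketch (the related objects and key names here are invented, and HStore values are stored as strings):

GameSession.objects.create(
    user=some_profile, game=some_game, last_achievement=some_achievement,
    extra_info={'difficulty': 'hard', 'play_time_minutes': '42'},
)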
Now, say I have a table defined as:
import django_tables2 as tables
from django_tables2 import DateTimeColumn
from django_tables2.utils import A

class GameSessionTable(tables.Table):
    class Meta(BaseMetaTable):  # BaseMetaTable is defined elsewhere in the project
        model = GameSession
        fields = []
        orderable = False

    id = tables.LinkColumn(accessor='id', verbose_name='Id', viewname='reporting:session_stats', args=[A('id')], attrs={'a': {'target': '_blank'}})
    started = DateTimeColumn(accessor='startdata.when_started', verbose_name='Started')
    stopped = DateTimeColumn(accessor='stopdata.when_stopped', verbose_name='Stopped')
    game_name = tables.LinkColumn(accessor='game.name', verbose_name='Game name', viewname='reporting:game_stats', args=[A('mainjob.id')], attrs={'a': {'target': '_blank'}})
I want to be able to add columns for each of the keys stored in the extra_info column across all of the GameSessions. I have tried to override the __init__() method of the GameSessionTable class, where I have access to the queryset, build a set of all the keys of my GameSession objects, and then add them to self; however, that doesn't seem to work. Code below:
def __init__(self, data, *args, **kwargs):
    super(GameSessionTable, self).__init__(data, *args, **kwargs)
    if data:
        extra_cols = []
        # just to be sure, check that the model has the extra_info HStore field
        if data.model._meta.get_field('extra_info'):
            extra_cols = list(set([item for q in data if q.extra_info for item in q.extra_info.keys()]))
        for col in extra_cols:
            self.columns.columns[col] = tables.Column(accessor='extra_info.%s' % col, verbose_name=col.replace("_", " ").title())
Just to mention, I have had a look at https://spapas.github.io/2015/10/05/django-dynamic-tables-similar-models/#introduction, but it has not been much help, because the use case there relates to the fields of a model, whereas my situation is slightly different, as you can see above.
Just wanted to check: is this even possible, or do I have to define an entirely different table for this data, or potentially use a different library altogether, like django-reports-builder?

Managed to figure this out to a certain extent. The code I was running above was slightly wrong, so I updated it to run my code before the superclass __init__() gets run, and changed where I was adding the columns.
As a result, my __init__() now looks like this:
def __init__(self, data, *args, **kwargs):
    if data:
        extra_cols = []
        # just to be sure, check that the model has the extra_info HStore field
        if data.model._meta.get_field('extra_info'):
            extra_cols = list(set([item for q in data if q.extra_info for item in q.extra_info.keys()]))
        for col in extra_cols:
            self.base_columns[col] = tables.Column(accessor='extra_info.%s' % col, verbose_name=col.replace("_", " ").title())
    super(GameSessionTable, self).__init__(data, *args, **kwargs)
Note that I replaced self.columns.columns (which were BoundColumn instances) with self.base_columns. This allows the superclass to then consider these as well when initializing the Table class.
Might not be the most elegant solution, but it seems to do the trick for me.
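For completeness, a minimal sketch of the view side; the view name, queryset, and template path here are made up for illustration and are not from the question:

from django.shortcuts import render
from django_tables2 import RequestConfig

def session_report(request):
    # Every key found in extra_info across the queryset becomes a column,
    # because the table adds it to base_columns before calling super().
    table = GameSessionTable(GameSession.objects.all())
    RequestConfig(request).configure(table)
    return render(request, 'reporting/session_report.html', {'table': table})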

Related

Query based on Embedded Documents List fields in mongoengine

I'm running into an issue using mongoengine. A raw query that works in Compass isn't working using __raw__ in mongoengine. I'd like to rewrite it using mongoengine's methods, but I'd also like to understand why it isn't working with __raw__ either.
I'm using an embedded document list field that has inheritance. The query is: "give me all sequences that have a 'TypeA' Assignment".
My schema:
from mongoengine import Document, EmbeddedDocument, EmbeddedDocumentListField, StringField

# (embedded documents listed first so the references in Sequence resolve)
class Sample(EmbeddedDocument):
    name = StringField()

class Assignment(EmbeddedDocument):
    name = StringField()
    meta = {'allow_inheritance': True}

class TypeA(Assignment):
    pass

class TypeB(Assignment):
    other_field = StringField()

class Sequence(Document):
    seq = StringField(required=True)
    samples = EmbeddedDocumentListField(Sample)
    assignments = EmbeddedDocumentListField(Assignment)
Writing {'assignments._cls': 'TypeA'} into Compass returns a list, but with mongoengine I get an empty result:
from mongo_objects import Sequence

def get_samples_assigned_as_class(cls: str):
    query_raw = Sequence.objects(__raw__={'assignments._cls': cls})  # raw query, fails
    # query2 = Sequence.objects(assignments___cls=cls)  # first attempt, failed
    # query3 = Sequence.objects.get().assignments.filter(cls=cls)  # second attempt, also failed; didn't like that it queried everything first
    print(query_raw)  # empty list, iterating does nothing

get_samples_assigned_as_class('TypeA')
"Assignments" is a list because one sequence may have multiples of the same class. An in depth awnser on how to query these lists for categorical information would be ideal, as I'm not sure how to properly go about it. I'm mostly filtering on the inheritence _cls, but eventually I'd like to do nested queries (cls : TypeA, sample : Sample_1)
Thanks

Django-tables2 - column name is changed after refreshing page

I'm facing a very weird behaviour of Django and Django-tables2.
I use a table to help render multiple types of documents.
For simplicity, there are two types of documents - 'faktura' and 'dobropis'.
'dobropis' has to have its first column labeled "Dobropisujeme Vám", while for 'faktura' it is "Názov položky"; apart from that, the columns are equal.
So inside the table's __init__() I check whether the document type is 'faktura' or 'dobropis' and set the label accordingly:
self.base_columns['column'].verbose_name = ...
The weird thing is that it works, but only after the second refresh.
Situation: I open the 'faktura' page and I see 'Názov položky', which is OK. Then I open the 'dobropis' page and I see the 'Názov položky' label again. If I refresh, 'Dobropisujeme Vám' appears, and after each further refresh I see the correct label, 'Dobropisujeme Vám'. But if I then open the 'faktura' page, it also shows 'Dobropisujeme Vám' the first time; after a second refresh it goes back to normal.
class PolozkyTable(tables.Table):
    nazov_polozky = tables.columns.Column(orderable=False)
    pocet = tables.columns.Column(orderable=False)
    jednotka = tables.columns.Column(orderable=False)
    cena = tables.columns.Column(orderable=False, verbose_name=u'Jednotková cena')
    zlava = tables.columns.Column(orderable=False)
    dph = tables.columns.Column(orderable=False)
    celkom = tables.columns.Column(orderable=False)

    class Meta:
        model = Polozka
        fields = ['nazov_polozky', 'pocet', 'jednotka', 'cena', 'zlava', 'dph', 'celkom']
        attrs = {'class': 'table table-striped table-hover'}

    def __init__(self, *args, **kwargs):
        typ = kwargs.pop('typ')
        super(PolozkyTable, self).__init__(*args, **kwargs)
        if typ == 'dobropis':
            self.base_columns['nazov_polozky'].verbose_name = u'Dobropisujeme Vám'
            self.exclude = ['zlava', 'dph']
        else:
            self.base_columns['nazov_polozky'].verbose_name = u'Názov položky'

    def render_cena(self, record):
        return '{} {}'.format(record.cena, record.doklad.get_fakturacna_mena_display().encode('utf-8'))

    def render_celkom(self, record):
        return '{} {}'.format(record.celkom, record.doklad.get_fakturacna_mena_display().encode('utf-8'))
@login_required
def doklad_detail(request):
    doklad = get_object_or_404(Doklad, pk=request.GET.get('id'))
    polozky_table = PolozkyTable(doklad.polozky.all(), typ=doklad.typ)
    return render(request, 'pdf/{}_pdf_template.html'.format(doklad.typ),
                  {'doklad': doklad, 'polozky_table': polozky_table})
There is no cache in this project and I really don't know what it could be.
Do you know?
EDIT:
Moreover, when I open 'faktura' after restarting the server, it shows the capitalized 'Názov Položky' instead of 'Názov položky'. This is another weird thing, because I searched the whole project (case-sensitively) for 'Názov Položky' and there is no such string.
EDIT2:
I've solved this by creating a separate table for 'dobropis', but I'm still curious what caused this problem. My colleague had the same problem on his PC.
You shouldn't be modifying base_columns in the __init__ method -- you are changing the value on the class, so it affects other views as well.
The base_columns have already been copied in the super() call before you alter them, so the changes only show up the next time you access the view.
It looks like you should be modifying self.columns instead. That should only affect the instance. Try setting header instead of verbose_name.
self.columns['nazov_polozky'].header = u'Dobropisujeme Vám'
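To make the shared-state point concrete, a small illustrative sketch reusing the question's class (not code from the project) showing why the label always lags one request behind:

# base_columns is a class attribute, so the assignment made in __init__
# survives beyond the request that made it.
t1 = PolozkyTable([], typ='dobropis')  # super() binds columns from the current
                                       # base_columns, and only afterwards is
                                       # u'Dobropisujeme Vám' written onto the class
t2 = PolozkyTable([], typ='faktura')   # built from the dict t1 already mutated,
                                       # then overwrites it again for the next table
print(PolozkyTable.base_columns['nazov_polozky'].verbose_name)  # whichever branch ran last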

Why is this Django QuerySet returning no results?

I have this class in a project, and I'm trying to get the previous and next elements of the current one.
def get_context(self, request, *args, **kwargs):
    context = super(GuidePage, self).get_context(request, *args, **kwargs)
    context = get_article_context(context)
    all_guides = GuidePage.objects.all().order_by("-date")
    context['all_guides'] = all_guides
    context['next_guide'] = all_guides.filter(date__lt=self.date)
    context['prev_guide'] = all_guides.filter(date__gt=self.date)
    print context['next_guide']
    print context['prev_guide']
    return context
These two lines:
context['prev_guide'] = all_guides.filter(date__lt=self.date)
context['next_guide'] = all_guides.filter(date__gt=self.date)
are returning empty results as printed in the console:
(QuerySet[])
(QuerySet[])
What am I missing?
EDIT:
I changed lt and gt to lte and gte. As I understand it, that will also include results with an equal date.
In this case I got ALL elements. All elements were created on the same day but, of course, at different times, so they should differ by minutes. Is this difference not taken into account when filtering for greater/less than?
If you want to filter not only by date but also by time, you must change the relevant field in your model to a DateTimeField instead of a DateField, like this:
from django.db import models

class MyModel(models.Model):
    date_time = models.DateTimeField()
Now, you can do stuff like all_guides.filter(date_time__lte=self.date_time) or all_guides.filter(date_time__gte=self.date_time).
Be careful with the two underscores (__).
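If the goal is a single adjacent guide rather than a whole queryset, each filtered queryset can also be narrowed down to one row. A sketch, assuming the field has been renamed to date_time as suggested above and that "next" means the next-older entry in the descending listing:

all_guides = GuidePage.objects.exclude(pk=self.pk).order_by("-date_time")
# With descending ordering, the closest older guide is the first match below
# the current date, and the closest newer guide is the last match above it.
context['next_guide'] = all_guides.filter(date_time__lt=self.date_time).first()
context['prev_guide'] = all_guides.filter(date_time__gt=self.date_time).last()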

How to update object with another object in get_or_create?

I have two tables with similar fields and I want to copy objects from one table to the other.
The problem is that an object could be absent from the second table, so I have to use the get_or_create() method:
# these are new products; they are all instances of the NewProduct model,
# which is similar to the Product model
new_products_list = [<NewProduct: EEEF0AP>, <NewProduct: XR3D-F>, <Product: XXID-F>]

# loop over them and check if they are already in the database
for product in new_products_list:
    product, created = Products.objects.get_or_create(article=product.article)
    if created:
        # no problem here because the new object was saved
        pass
    else:
        # here I need code that will update the existing Product instance
        # with values from the NewProduct instance fields
The thing is that I don't want to list all the fields to update manually, like this, because I have about 30 of them:
update_old_product = Product(name=new_product.name, article=new_product.article)
Please advise a more elegant way than the above.
You can loop over the field names and update them on the other Product instance:
for new_product in new_products_list:
    # use different variable names, otherwise you won't be able to access
    # the item from new_products_list here
    product, created = Products.objects.get_or_create(article=new_product.article)
    if not created:
        for field in new_product._meta.get_all_field_names():
            setattr(product, field, getattr(new_product, field))
        product.save()
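For completeness, the same copy-and-save idea can be collapsed with update_or_create (available since Django 1.7). A sketch reusing the names from the question, assuming NewProduct and Product share their field names:

for new_product in new_products_list:
    # Collect every concrete field except the primary key into a defaults dict.
    defaults = {
        f.name: getattr(new_product, f.name)
        for f in new_product._meta.fields
        if f is not new_product._meta.pk
    }
    # Creates the Product if it is missing, otherwise updates it in place.
    Products.objects.update_or_create(article=new_product.article, defaults=defaults)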
You could try something like this.
def copy_fields(frm, to):
    # remember the destination's primary key so only the data fields get overwritten
    id = to.id
    for field in frm.__class__._meta.fields:
        setattr(to, field.attname, field.value_from_object(frm))
    to.id = id
This is similar to Ashwini Chaudhary's answer, although I think it will take care of the error that you mentioned in the comments.
new_products_list = (
    # obj1, obj2, obj3 would be from [<NewProduct: EEEF0AP>, <NewProduct: XR3D-F>, <Product: XXID-F>] in your question
    # NewProduct would just be the model that you import
    # NewProduct._meta.fields would be all the fields
    (obj1, NewProduct, NewProduct._meta.fields,),
    (obj2, NewProduct, NewProduct._meta.fields,),
    (obj3, NewProduct, NewProduct._meta.fields,),
)

for instance, model, fields in new_products_list:
    new_fields = {}
    obj, created = model.objects.get_or_create(pk=instance.article)  # this is pretty much just here to ensure that it exists for the filter later
    for field in fields:
        if field != model._meta.pk:  # do not want to update the pk
            new_fields[field.name] = getattr(instance, field.name)
    model.objects.filter(pk=obj.pk).update(**new_fields)  # you won't have to worry about updating multiple rows in the db because there can only be one instance with this pk
I know this was over a month ago, but I figured I would share my solution even if you have already figured it out

The right way to auto filter SQLAlchemy queries?

I've just introspected a pretty nasty schema from a CRM app with sqlalchemy. All of the tables have a deleted column on them and I wanted to auto filter all those entities and relations flagged as deleted. Here's what I came up with:
class CustomizableQuery(Query):
    """An overridden sqlalchemy.orm.query.Query to filter entities.

    Filters itself by BinaryExpressions found in :attr:`CONDITIONS`.
    """
    CONDITIONS = []

    def __init__(self, mapper, session=None):
        super(CustomizableQuery, self).__init__(mapper, session)
        for cond in self.CONDITIONS:
            self._add_criterion(cond)

    def _add_criterion(self, criterion):
        criterion = self._adapt_clause(criterion, False, True)
        if self._criterion is not None:
            self._criterion = self._criterion & criterion
        else:
            self._criterion = criterion
And it's used like this:
class UndeletedContactQuery(CustomizableQuery):
    CONDITIONS = [contacts.c.deleted != True]

    def by_email(self, email_address):
        return EmailInfo.query.by_module_and_address('Contacts', email_address).contact

    def by_username(self, uname):
        return self.filter_by(twod_username_c=uname).one()

class Contact(object):
    query = session.query_property(UndeletedContactQuery)

Contact.query.by_email('someone@some.com')
EmailInfo is the class that's mapped to the join table between emails and the other Modules that they're related to.
Here's an example of a mapper:
contacts_map = mapper(Contact, join(contacts, contacts_cstm), {
    '_emails': dynamic_loader(EmailInfo,
                              foreign_keys=[email_join.c.bean_id],
                              primaryjoin=contacts.c.id == email_join.c.bean_id,
                              query_class=EmailInfoQuery),
})

class EmailInfoQuery(CustomizableQuery):
    CONDITIONS = [email_join.c.deleted != True]
    # More methods here
This gives me what I want, in that I've filtered out all deleted Contacts. I can also use this as the query_class argument to dynamic_loader in my mappers. However...
Is there a better way to do this? I'm not really happy poking around in the internals of a complicated class like Query.
Has anyone solved this in a different way that they can share?
You can map to a select. Like this:
mapper(EmailInfo, select([email_join], email_join.c.deleted == False))
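A slightly fuller sketch of that idea applied to the question's Contact mapping (names reused from the question, the alias name is invented, the join to contacts_cstm is omitted for brevity, and this follows the same classic select() signature the answer uses):

# Map Contact to a pre-filtered, aliased select so every query issued through
# this mapper (and relationships that join to it) only sees undeleted rows.
undeleted_contacts = select(
    [contacts], contacts.c.deleted != True
).alias('undeleted_contacts')

mapper(Contact, undeleted_contacts)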
I'd consider seeing if it was possible to create views for these tables that filter out the deleted elements, and then you might be able to map directly to that view instead of the underlying table, at least for querying operations. However I've never tried this myself!
