Mutually exclusive many-to-many relationship in Django models - python

I'm trying to create a simple model to keep track of discount coupons in Django 1.10 (with Postgres 9.5 as the underlying database) and I was wondering if there's a way to make sure that a coupon instance (id is perhaps a more accurate term?) doesn't appear in two M2M relationships at the same time.
I'm sure everyone is familiar with how discount coupons work but, just in case, let me explain my use case:
Some coupons would be always applied. For instance: "Free delivery in your first purchase", or "10% off Pepsi for the rest of your life"... things like that.
Some other coupons would be applied through a code (a simple string, really) that the user would have to input somewhere (like "Get a 5% off with the code "5-OFF" "... yeah, I'll probably have to work on the obfuscation of the codes :-D )
The user could say "No, I don't want to apply this coupon to this order, I'll use it later". For instance: if the user can use a one-time 5% off coupon, but wants to keep it for a large purchase. Let's say the customer knows he's going to make a large purchase in the near future, and right now he's doing a small one. He might want to keep the 5% off for the later (bigger) purchase.
To keep track of those things, I have a model like this:
class CouponTracker(models.Model):
    # Coupons that require a code to be activated:
    extra_applied_coupons = models.ManyToManyField('Coupon', related_name='+')
    # Coupons that the user could have applied and specifically
    # said no to (such as a 5% off coupon on a small purchase that
    # the user wants to use later):
    vetoed_coupons = models.ManyToManyField('Coupon', related_name='+')
So, the question is:
How can I enforce (at a database level, through a constraint) that a coupon does not appear at the same time in extra_applied_coupons and vetoed_coupons?
Thank you in advance!

Why don't you combine extra_applied_coupons and vetoed_coupons into a single relationship and add one more field (for example, type) to determine the coupon's group? Then the problem becomes simpler: you just need to ensure uniqueness in one ManyToMany relationship.
class CouponTracker(models.Model):
    coupons = models.ManyToManyField('Coupon', related_name='+')
    type = models.IntegerField(default=0)
type can be 0 for extra_applied_coupons and 1 for vetoed_coupons.
If you want to add more relationship attribute, you can check https://docs.djangoproject.com/en/1.11/topics/db/models/#extra-fields-on-many-to-many-relationships
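If you also want the database to enforce that a coupon appears only once per tracker, a minimal sketch of this idea with an explicit through model might look like the following (the CouponUsage name and field layout are illustrative, not taken from the question):

class CouponTracker(models.Model):
    coupons = models.ManyToManyField('Coupon', through='CouponUsage', related_name='+')

class CouponUsage(models.Model):
    EXTRA_APPLIED = 0
    VETOED = 1

    tracker = models.ForeignKey('CouponTracker', on_delete=models.CASCADE)
    coupon = models.ForeignKey('Coupon', on_delete=models.CASCADE)
    type = models.IntegerField(default=EXTRA_APPLIED)

    class Meta:
        # One row per (tracker, coupon), so a coupon cannot be both
        # "extra applied" and "vetoed" on the same tracker.
        unique_together = ('tracker', 'coupon')

The unique_together constraint is created as a UNIQUE index on the through table, which is what gives you the database-level guarantee the question asks for.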

Since a ManyToMany relation creates a separate table, AFAIK you cannot create a UNIQUE constraint across tables, so there is no direct way to add the constraint at the database level. Check this. Either you enforce it at the application layer, or use some hackish workaround like this.
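If you stay with the two-field model from the question, a minimal application-layer check could look like this (a standalone helper, assuming the relations are already saved; the function name is illustrative):

from django.core.exceptions import ValidationError

def validate_coupon_tracker(tracker):
    # Collect the primary keys on both sides and refuse any overlap.
    extra_ids = set(tracker.extra_applied_coupons.values_list('pk', flat=True))
    vetoed_ids = set(tracker.vetoed_coupons.values_list('pk', flat=True))
    overlap = extra_ids & vetoed_ids
    if overlap:
        raise ValidationError(
            'Coupons %s are both applied and vetoed.' % sorted(overlap))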

Related

Django Is it correct to use uuid for a sales transaction

Is it correct to use a UUID as a sales order transaction identifier? For example, in an e-commerce website, when someone orders one or more products, is it OK to use a UUID as the unique identifier for the order transaction?
Yes. It is universally unique, so you can use it as a unique ID for a transaction, or for anything else you want to be able to identify, possibly across systems, for storage, etc.
You can also consider alternatives - sometimes a plain sequence number works, if all the IDs come from one place and in order, and it's useful to know the order. With UUIDs you don't have that, unless you store the sequence somewhere, of course.
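For illustration, a minimal sketch of an order model keyed by a UUID, plus a timestamp in case you also need insertion order (model and field names are assumptions):

import uuid
from django.db import models

class Order(models.Model):
    # Generated in Python, so the ID is known before the row is saved.
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    # Optional: keeps insertion order available, which a UUID alone does not.
    created = models.DateTimeField(auto_now_add=True)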

Calculate weighted score from Salesforce data in Django

I'm looking to connect my website with Salesforce and have a view that shows a breakdown of a user's activities in Salesforce, then calculates an overall score based on weights assigned to each activity. I'm using Django-Salesforce to initiate the connection and extend the Activity model, but I'm not sure I've set up the Activity or OverallScore classes correctly.
Below is my code for what I already have. Based on other similar questions I've seen, it seems like a custom save method is the suggested solution, but my concern is that my database would quickly become massive, as the connection will refresh every 5 minutes.
The biggest question I have is how to set up the "weighted_score" attribute of the Activity class, as I doubt what I have currently is correct.
class Activity(salesforce.models.Model):
    owner = models.ManyToManyField(Profile)
    name = models.CharField(verbose_name='Name', max_length=264, unique=True)
    weight = models.DecimalField(verbose_name='Weight', decimal_places=2, default=0)
    score = models.IntegerField(verbose_name='Score', default=0)
    weighted_score = weight * score

    def __str__(self):
        return self.name

class OverallScore(models.Model):
    factors = models.ManyToManyField(Activity)
    score = Activity.objects.aggregate(Sum('weighted_score'))

    def __str__(self):
        return "OverallScore"
The ideal end result would be each user logged in gets a "live" look at their activity scores and one overall score which is refreshed every 5 minutes from the Salesforce connection, then at the end of the day I would run a cron job to save the end of day results to the database.
Excuse a late, partial response; it addresses only the parts of the question that are clear.
How to implement the arithmetic on fields in weighted_score depends on whether you prefer an expression on the Django side or on the Salesforce side.
The easiest, but very limited, solution is the @property decorator on a method:
class Activity(salesforce.models.Model):
    ...  # the same fields

    @property
    def weighted_score(self):
        return self.weight * self.score
This can be used in Python code as self.weighted_score, but it cannot be passed in any way to SOQL, and it gives you no more power than writing the longer (self.weight * self.score) in the same place.
Salesforce SOQL does not support arithmetic expressions in the SELECT clause, but you can define a custom "Formula" field in the Salesforce setup of the Activity object and use it as a normal, read-only numeric field in Django. If Activity were on the detail side of a Master-Detail Relationship to another Salesforce object, you could apply a very fast sum, max or average formula on that object.
A ManyToMany field requires you to create the binding object in Salesforce Setup manually and to assign it to the through attribute of the ManyToMany field. An example is on the wiki page Foreign Key Support. As a rule of thumb, your object definition must first exist in Salesforce with useful relationships (Lookup Relationship or Master-Detail Relationship) and a manageable data structure. Then you can run python manage.py inspectdb --database=salesforce ... (optionally followed by the API names of the used tables, separated by spaces). That produces a lot of code, with many unnecessary fields and choices to prune, but it is still easier and more reliably functional than asking someone. Salesforce has no special form of custom ManyToMany relationship, therefore everything is written as ForeignKey in models.py; Master-Detail is only a comment on the ForeignKey. You can finally create a ManyToMany field manually, but it is mainly syntactic sugar to get nice mnemonic names for forward and reverse traversal over the two foreign keys on the "through=" binding object.
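A rough sketch of that last point, under the assumption that the binding ("junction") object already exists in Salesforce Setup and that its real field and API names would come from inspectdb (all names below are illustrative):

from salesforce import models

class ActivityProfile(models.Model):
    # Hypothetical binding object: in Salesforce it is just two lookup
    # fields, one pointing to each side of the relationship.
    activity = models.ForeignKey('Activity', related_name='activity_links')
    profile = models.ForeignKey('Profile', related_name='profile_links')

class Activity(models.Model):
    name = models.CharField(max_length=264)
    # Syntactic sugar over the two foreign keys on the binding object,
    # giving mnemonic forward/reverse traversal (activity.owners, etc.).
    owners = models.ManyToManyField('Profile', through='ActivityProfile')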
(The rest of question was too broad and unclear for me.)

Archiving Django models

I'm creating an online order system for selling items on a regular basis (home delivery of vegetable boxes). I have an 'order' model (simplified) as follows:
class BoxOrder(models.Model):
    customer = models.ForeignKey(Customer)
    frequency = models.IntegerField(choices=((1, "Weekly"), (2, "Fortnightly")))
    item = models.ForeignKey(Item)
    payment_method = models.IntegerField(choices=((1, "Online"), (2, "Free")))
Now my 'customer' has the ability to change the frequency of the order, or the 'item' (say 'carrots') being sold, or even delete the order altogether.
What I'd like to do is create weekly 'backups' of all orders processed that week, so that I can see a historical graph of all the orders ever sold every week. The problem with just archiving the order into another table/database is that if an item is deleted for some reason (say I no longer sell carrots), then that archived BoxOrder would become invalid because of the ForeignKeys.
What would be the best solution for creating an archiving system using Django - so that orders for every week in history are viewable in Django admin, and they are 'static' (i.e. independent of whether any other objects are deleted)?
I've thought about creating a new 'flat' BoxOrderArchive model, then using a cron job to move orders for a given week over, e.g.:
class BoxOrderArchive(models.Model):
    customer_name = models.CharField(max_length=20)
    frequency = models.IntegerField()
    item_name = models.CharField(max_length=100)  # refers to BoxOrder.item.name
    item_price = models.DecimalField(max_digits=10, decimal_places=2)  # refers to BoxOrder.item.price
    payment_method = models.IntegerField()
But I feel like that might be a lot of extra work. Before I go down that route, it would be great to know if anybody has any other solutions?
Thanks
This is a rather broad topic, and I won't specifically answer your question; however, my advice to you is: don't delete or move anything. You can add a boolean field to your Item named is_deleted or is_active (or something similar) and toggle that flag when you delete your item. This way you can:
keep your ForeignKeys,
have a different representation for non-active items
restore an Item that was previously deleted (for instance you may want to sell carrots again after some months - this way your statistics will be consistent across the year)
The same advice holds for the BoxOrder model. Do not move rows to different tables; just add an is_archived field and set it to True.
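A minimal sketch of that flag approach, with an optional manager that hides soft-deleted items by default (model, field and manager names are illustrative):

from django.db import models

class ActiveItemManager(models.Manager):
    def get_queryset(self):
        # Hide soft-deleted items from the default listing.
        return super(ActiveItemManager, self).get_queryset().filter(is_active=True)

class Item(models.Model):
    name = models.CharField(max_length=100)
    price = models.DecimalField(max_digits=10, decimal_places=2)
    is_active = models.BooleanField(default=True)

    objects = models.Manager()    # all items, including "deleted" ones
    active = ActiveItemManager()  # only items still on sale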
So, after looking into this long and hard, I think the best solution for me is to create a 'flat' version of the object, dereferencing any existing objects, and save that in the database.
The reason for this is that my 'BoxOrder' object can change every week (as the customer edits their address, item, cost, etc.). Keeping track of all these changes is just plain difficult.
Plus, I don't need to do anything with the data other than display it to the sites users.
Basically, what I want to do is create a snapshot, and none of the existing tools are really what I want. Having said that, others may have different priorities, so here's a list of useful links:
[1] SO question regarding storing a snapshot/pickling model instances
[2] Django Simple History Docs - stores model state on every create/update/delete
[3] Django Reversion Docs - allows reverting a model instance
For discussion on [2] and [3], see the comments on Serafim's answer

Django design patterns - models with ForeignKey references to multiple classes

I'm working through the design of a Django inventory tracking application, and have hit a snag in the model layout. I have a list of inventoried objects (Assets), which can either exist in a Warehouse or in a Shipment. I want to store different lists of attributes for the two types of locations, e.g.:
For Warehouses, I want to store the address, manager, etc.
For Shipments, I want to store the carrier, tracking number, etc.
Since each Warehouse and Shipment can contain multiple Assets, but each Asset can only be in one place at a time, adding a ForeignKey relationship to the Asset model seems like the way to go. However, since Warehouse and Shipment objects have different data models, I'm not certain how to best do this.
One obvious (and somewhat ugly) solution is to create a Location model which includes all of the Shipment and Warehouse attributes plus an is_warehouse Boolean attribute, but this strikes me as a bit of a kludge. Are there any cleaner approaches to solving this sort of problem? (Or are there any non-Django Python libraries which might be better suited to it?)
what about having a generic foreign key on Assets?
I think it's perfectly reasonable to create a "through" table such as Location, which associates an asset, a content object (foreign key) and a content_type (warehouse or shipment). And you could set a unique constraint on the asset foreign key so that it can only exist in one location at a time.
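A minimal sketch of the generic foreign key idea from the first comment, with the relation placed directly on Asset (model and field names are illustrative):

from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models

class Asset(models.Model):
    name = models.CharField(max_length=100)
    # Points at either a Warehouse or a Shipment instance.
    location_type = models.ForeignKey(ContentType, on_delete=models.PROTECT)
    location_id = models.PositiveIntegerField()
    location = GenericForeignKey('location_type', 'location_id')

Assigning asset.location = some_warehouse (or some_shipment) and saving sets both underlying columns; the unique-per-asset constraint suggested above would instead live on a separate through table that holds the generic relation.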

What is the default order of a list returned from a Django filter call?

Short Question
What is the default order of a list returned from a Django filter call when connected to a PostgreSQL database?
Background
By my own admission, I had made a poor assumption at the application layer: that the order in which a list is returned will be constant, without using 'order_by'. The list of items I was querying is not in alphabetical order or any other deliberate order; I thought it would remain in the same order in which the items were added to the database.
This assumption held true for hundreds of queries, but a failure was reported by my application when the order changed unexpectedly. To my knowledge, none of these records were touched during this time, as I am the only person who maintains the DB. To add to the confusion, when running the Django app on Mac OS X it still worked as expected, but on Win XP the order changed. (Note that the mentioned hundreds of queries were on Win XP.)
Any insight to this would be helpful as I could not find anything in the Django or PostgreSQL documentation that explained the differences in operating systems.
Example Call
required_tests = Card_Test.objects.using(get_database()).filter(name__icontains=key)
EDIT
After speaking with some colleagues of mine today, I had come up with the same answer as Björn Lindqvist.
Looking back, I definitely understand why this is done wrong so often. One of the benefits of using an ORM (Django, SQLAlchemy, or whatever) is that you can write commands without having to know or understand (in detail) the database it's connected to. Admittedly, I happen to have been one of those users. On the flip side, however, without knowing the database in detail, debugging errors like this is quite troublesome and potentially catastrophic.
There is NO DEFAULT ORDER, a point that cannot be emphasized enough, because everyone gets it wrong.
A table in a database is not an ordinary HTML table; it is an unordered set of tuples. This often surprises programmers used only to MySQL, because in that particular database the order of the rows is often predictable, due to it not taking advantage of some advanced optimization techniques. For example, it is not possible to know which rows will be returned, or their order, in any of the following queries:
select * from table limit 10
select * from table limit 10 offset 10
select * from table order by x limit 10
In the last query, the order is only predictable if all values in column x are unique. The RDBMS is free to return rows in any order it pleases, as long as it satisfies the conditions of the select statement.
You may, though, add a default ordering at the Django level, which causes Django to add an ORDER BY clause to every non-ordered query:
class Table(models.Model):
    ...

    class Meta:
        ordering = ['name']
Note that it may be a performance drag, if for some reason you don't need ordered rows.
If you want to have them returned in the order they were inserted:
Add the following to your model:
created = models.DateTimeField(auto_now_add=True, db_index=True)
# last_modified = models.DateTimeField(auto_now=True, db_index=True)

class Meta:
    ordering = ['created']
    # ordering = ['-last_modified']  # sort last modified first
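With that Meta in place, the example query from the question comes back ordered by insertion time; an explicit order_by() on the queryset still overrides the default, e.g.:

required_tests = (Card_Test.objects.using(get_database())
                  .filter(name__icontains=key)
                  .order_by('name'))  # explicit ordering overrides Meta.ordering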
