SQL column with list of foreign keys - python

I am working with Python, PostgreSQL, SQLAlchemy and alembic.
I have to design a database, but I am stuck because my design calls for a column that stores a list of IDs which are essentially foreign keys. I am not sure how to do that, or whether I should be doing it at all.
Example: I have a discount table which contains all the available discount codes. I have a column discount_applies where I want to store a list of all products to which the discount applies (I cannot edit the products table). Basically the column would contain a list of UUIDs of products to which the discount can be applied.

class Product(Base):
    .....

class Discount(Base):
    .....

class ProductDiscount(Base):
    __tablename__ = 'discount_applies'
    # composite primary key: one row per (product, discount) pair
    product_id = Column(String(32), ForeignKey('product.id'), primary_key=True)
    discount_id = Column(String(32), ForeignKey('discount.id'), primary_key=True)  # if the discount primary key is an Integer, change String to Integer
    product = relationship(Product)
    discount = relationship(Discount)
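A hedged usage sketch for the association table above (some_discount_id is a hypothetical placeholder for a real discount id, not something from the question): fetch every product a given discount applies to by querying the link rows and following the relationship.

links = (
    session.query(ProductDiscount)
    .filter(ProductDiscount.discount_id == some_discount_id)  # some_discount_id: hypothetical placeholder
    .all()
)
applicable_products = [link.product for link in links]

Keeping each product/discount pair as its own row is what lets the database join, index and enforce the foreign keys, instead of parsing a list stored in a single column.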

Related

How to annotate an object with the sum of multiplication of two fields from a related object to a related object in Django

Given these three classes:
class User(BaseModel):
    name = models.CharField(..)

class Order(BaseModel):
    user = models.ForeignKey(User, ..., related_name='orders')

class OrderItem(BaseModel):
    order = models.ForeignKey(Order, ..., related_name='items')
    quantity = models.IntegerField(default=1)
    price = models.FloatField()
and this is the base class (it is enough to note that it has the created_at field)
class BaseModel(models.Model):
    created_at = models.DateTimeField(auto_now_add=True)
Now each User will have multiple Orders and each Order has multiple OrderItems.
I want to annotate the User objects with the total price of the last order.
Take this data for example:
The User objects should be annotated with the sum of the last order. That is, for user john with id=1 we should return the sum of the order_items (with ids 3 and 4), since they belong to the order with id=2, which is the latest order.
I hope I have made myself clear. I am new to Django; I have tried going over the docs and many different things, but I keep getting stuck at fetching the last order's items.
Sometimes it's unclear how to write such a query in the Django ORM. In your case I'd write the query in raw SQL, something like:
WITH last_order_for_user AS (
    SELECT user_id, MAX(created_at) AS max_created_at
    FROM orders
    GROUP BY user_id
)   -- latest order timestamp for each user_id
SELECT
    o.user_id, o.id, SUM(item.price)
FROM orders o
JOIN last_order_for_user last_o
    ON o.user_id = last_o.user_id AND o.created_at = last_o.max_created_at
LEFT JOIN orderitems item ON o.id = item.order_id
GROUP BY 1, 2   -- sum of items for each user's last order
And then execute it as a raw SQL query (see django-docs on performing raw SQL queries).
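If you would rather stay in the ORM, here is a hedged sketch of the same idea using Subquery/OuterRef (an assumption on my part, not part of the answer above; it presumes Django 2.0+ and the models and related names shown in the question, and last_order_total is just an illustrative annotation name):

from django.db.models import FloatField, OuterRef, Subquery, Sum

# Per order of the outer user: group items by order, newest order first,
# and keep only the latest order's summed price.
order_totals = (
    OrderItem.objects
    .filter(order__user=OuterRef('pk'))
    .values('order_id', 'order__created_at')
    .annotate(total=Sum('price'))
    .order_by('-order__created_at')
    .values('total')[:1]
)

users = User.objects.annotate(
    last_order_total=Subquery(order_totals, output_field=FloatField())
)

Users with no orders come back with last_order_total set to None.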

Django use values or values_list with non related models

I have these two models:
class GeneCombination(models.Model):
    gene = models.ForeignKey('Gene', db_column='gene', on_delete=models.DO_NOTHING, db_constraint=False)

class Gene(models.Model):
    name = models.CharField(max_length=50, primary_key=True)
    type = models.CharField(max_length=25)
I know it's not the best model schema; it has to be like that due to business rules. I'm fetching values from the GeneCombination model plus some fields from the Gene model. Now, it can happen that a gene is not present in the Gene table, and in that case those rows get filtered out when I use the values or values_list methods, which is what I want to prevent.
This query returns 31 elements: GeneCombination.objects.select_related('gene').values('gene')
But this query returns 27 elements (it filters out the 4 rows whose gene does not exist in the Gene table): GeneCombination.objects.select_related('gene').values('gene__type')
How could I get empty values when the gene doesn't exist, and prevent those rows from being filtered out?
Thanks in advance, and sorry about my English.
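One hedged workaround, offered as a sketch rather than a confirmed fix: values('gene__type') spans the foreign key with an inner join, which is what drops the 4 rows, so you can pull the type through a correlated subquery instead and keep a NULL for combinations with no matching Gene (gene_type is just an illustrative annotation name):

from django.db.models import OuterRef, Subquery

# Correlate on the raw FK value (the 'gene' column holds the Gene.name) instead of joining.
gene_type = Gene.objects.filter(name=OuterRef('gene_id')).values('type')[:1]

rows = (
    GeneCombination.objects
    .annotate(gene_type=Subquery(gene_type))
    .values('gene', 'gene_type')  # keeps all 31 rows; gene_type is None where the Gene row is missing
)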

sqlalchemy: order of query result unexpected

I'm using SQLAlchemy with MySQL and have a table with two foreign keys:
class CampaignCreativeLink(db.Model):
    __tablename__ = 'campaign_creative_links'
    campaign_id = db.Column(db.Integer, db.ForeignKey('campaigns.id'),
                            primary_key=True)
    creative_id = db.Column(db.Integer, db.ForeignKey('creatives.id'),
                            primary_key=True)
Then I use a for loop to insert 3 items into the table like this:
session.add(CampaignCreativeLink(campaign_id=8, creative_id=3))
session.add(CampaignCreativeLink(campaign_id=8, creative_id=2))
session.add(CampaignCreativeLink(campaign_id=8, creative_id=1))
But when I checked the table, the items were stored in reverse order:
8 1
8 2
8 3
And a query returns them in reverse order too. What's the reason for this, and how can I keep the order the same as when they were added?
A table is a set of rows, and rows are therefore not guaranteed to come back in any particular order unless you specify ORDER BY.
In MySQL (InnoDB), the primary key acts as the clustered index. This means the rows are physically stored in the order given by the primary key, in this case (campaign_id, creative_id), regardless of insertion order. That is usually the order rows are returned in when you don't specify an ORDER BY.
If you need your rows returned in a certain order, specify ORDER BY when you query.
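For example, a hedged sketch assuming the model and session above: since the rows were inserted with creative_id descending, an explicit ORDER BY on that column returns them in insertion order.

links = (
    session.query(CampaignCreativeLink)
    .filter_by(campaign_id=8)
    .order_by(CampaignCreativeLink.creative_id.desc())  # explicit ORDER BY instead of relying on storage order
    .all()
)

If insertion order does not happen to follow any existing column, add one that records it (for instance an auto-incrementing id or a created_at timestamp, both hypothetical here) and order by that instead.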

Creating partial unique index with sqlalchemy on Postgres

SQLAlchemy supports creating partial indexes in postgresql.
Is it possible to create a partial unique index through SQLAlchemy?
Imagine a table/model like so:
class ScheduledPayment(Base):
    invoice_id = Column(Integer)
    is_canceled = Column(Boolean, default=False)
I'd like a unique index where there can be only one "active" ScheduledPayment for a given invoice.
I can create this manually in postgres:
CREATE UNIQUE INDEX only_one_active_invoice on scheduled_payment
(invoice_id, is_canceled) where not is_canceled;
I'm wondering how I can add that to my SQLAlchemy model using SQLAlchemy 0.9.
class ScheduledPayment(Base):
    __tablename__ = 'scheduled_payment'

    id = Column(Integer, primary_key=True)
    invoice_id = Column(Integer)
    is_canceled = Column(Boolean, default=False)

    __table_args__ = (
        Index('only_one_active_invoice', invoice_id, is_canceled,
              unique=True,
              postgresql_where=(~is_canceled)),
    )
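If the index is created in an Alembic migration instead of at model definition time, here is a hedged sketch of the equivalent operation (assuming the scheduled_payment table already exists):

import sqlalchemy as sa
from alembic import op

def upgrade():
    # Partial unique index: only rows where is_canceled is false participate.
    op.create_index(
        'only_one_active_invoice',
        'scheduled_payment',
        ['invoice_id', 'is_canceled'],
        unique=True,
        postgresql_where=sa.text('NOT is_canceled'),
    )

def downgrade():
    op.drop_index('only_one_active_invoice', table_name='scheduled_payment')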
In case someone stops by looking to set up a partial unique constraint with a column that can optionally be NULL, here's how:
__table_args__ = (
    db.Index(
        'uk_providers_name_category',
        'name', 'category',
        unique=True,
        postgresql_where=(user_id.is_(None))),
    db.Index(
        'uk_providers_name_category_user_id',
        'name', 'category', 'user_id',
        unique=True,
        postgresql_where=(user_id.isnot(None))),
)
where user_id is a column that can be NULL and I want a unique constraint enforced across all three columns (name, category, user_id) with NULL just being one of the allowed values for user_id.
To add to the answer by sas, postgresql_where does not seem to be able to accept multiple booleans. So in a situation where you have TWO nullable columns (let's assume an additional price column), it is not possible to create four partial indexes covering all combinations of NULL / NOT NULL.
One workaround is to use default values which would never be 'valid' (e.g. -1 for price, or '' for a Text column). These compare as equal, so no more than one row would be allowed to carry these default values.
Obviously, you will also need to insert this default value in all existing rows of data (if applicable).
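A hedged sketch of that workaround (the model, column and index names here are illustrative, not from the original answer): give the nullable columns sentinel defaults and enforce one plain unique index across all of them.

class Provider(db.Model):
    __tablename__ = 'providers'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Text, nullable=False)
    category = db.Column(db.Text, nullable=False)
    # Sentinel defaults stand in for "no value", so a single unique index covers every combination.
    user_id = db.Column(db.Integer, nullable=False, default=-1, server_default='-1')
    price = db.Column(db.Integer, nullable=False, default=-1, server_default='-1')

    __table_args__ = (
        db.Index('uk_providers_name_category_user_price',
                 'name', 'category', 'user_id', 'price', unique=True),
    )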

How to check whether all the values of a database field are the same using Django queries

I have a model like the one below:
class Product(models.Model):
    name = models.CharField(max_length=255)
    price = models.IntegerField()
So suppose we have 4 product records in the database; is there any way to check whether all 4 records have the same price?
I don't want to loop through all the products, because there may be thousands of product records in the database, and doing so would become a performance issue.
So I am looking for something like the following, using the built-in Django ORM:
check_whether_all_the_product_records_has_same_price_value = some django ORM operation......
if check_whether_all_the_product_records_has_same_price_value:
    # If all the Product table records (four) have the same price value,
    # return the starting record
    return check_whether_product_has_same_price_value(0)
So can anyone please let me know how we can do this?
I can propose that you count rows using filter:
# 'price' here is the price of any one product, e.g. Product.objects.first().price
if Product.objects.all().count() == Product.objects.filter(price=price).count():
    pass
or use distinct:
if Product.objects.all().distinct('price').count() == 1:
    pass
Note that this example works correctly on Postgres only.
Also you can use annotate to calculate the count, I think:
from django.db.models import Count

if Product.objects.all().values('price').annotate(Count('price')).count() == 1:
    pass
You can use distinct to find unique prices:
products = Product.objects.filter([some condition])
prices = products.values_list('price', flat=True).distinct()
Then check the length of prices.
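Making that check concrete, a small sketch assuming the Product model above and no extra filter condition:

prices = Product.objects.values_list('price', flat=True).distinct()
if prices.count() == 1:
    # All products share one price; return the starting record as the question asks.
    first_product = Product.objects.first()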
