Flask SQLAlchemy sum function comparison - python

I have a table that holds some data about users. It has two fields, like and smile. I need to get data from the table, grouped by user_id, showing whether each user has any likes or smiles. The query I would write in SQL looks like:
select sum(smile) > 0 as has_smile,
       sum(like) > 0 as has_like,
       user_id
from ratings
group by user_id;
This would provide output like:
+-----------+----------+---------+
| has_smile | has_like | user_id |
+-----------+----------+---------+
|         1 |        0 |       1 |
|         1 |        1 |       2 |
+-----------+----------+---------+
Is there any chance this query can be translated to SQLAlchemy (Flask-SQLAlchemy, to be precise)? I know there is db.func.sum, but I don't know how to add the comparison there, or how to give it a label. What I have for now is:
cls.query.with_entities(cls.user_id).group_by(cls.user_id).\
    add_columns(db.func.sum(cls.smile).label("has_smile"),
                db.func.sum(cls.like).label("has_like")).all()
but that returns the exact number of smiles/likes instead of just 1/0 depending on whether there is a smile/like or not.

Thanks to operator overloading you can do the comparison the way you're used to in Python:
db.func.sum(cls.smile) > 0
which produces an SQL expression object that you can then give a label to:
(db.func.sum(cls.smile) > 0).label('has_smile')
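Putting the pieces together, your original attempt might become something like the following sketch (assuming it runs in the same classmethod context as your snippet, so cls and db are as in the question):

results = (
    cls.query
    .with_entities(
        cls.user_id,
        (db.func.sum(cls.smile) > 0).label("has_smile"),
        (db.func.sum(cls.like) > 0).label("has_like"),
    )
    .group_by(cls.user_id)
    .all()
)
# Each result row is a named tuple: row.user_id, row.has_smile, row.has_like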

Related

Is there any ways to combine two rows of table into one row using Django ORM?

I have a table which has columns named measured_time, data_type and value.
In data_type there are two types, temperature and humidity.
I want to combine two rows of data into one if they have the same measured_time, using the Django ORM.
I am using MariaDB.
Using raw SQL, the following query does what I want:
SELECT T1.measured_time, T1.temperature, T2.humidity
FROM ( SELECT CASE WHEN data_type = 1 then value END as temperature,
CASE WHEN data_type = 2 then value END as humidity ,
measured_time FROM data_table) as T1,
( SELECT CASE WHEN data_type = 1 then value END as temperature ,
CASE WHEN data_type = 2 then value END as humidity ,
measured_time FROM data_table) as T2
WHERE T1.measured_time = T2.measured_time and
T1.temperature IS NOT null and T2.humidity IS NOT null and
DATE(T1.measured_time) = '2019-07-01'
Original Table
| measured_time       | data_type | value |
|---------------------|-----------|-------|
| 2019-07-01-17:27:03 | 1         | 25.24 |
| 2019-07-01-17:27:03 | 2         | 33.22 |
Expected Result
| measured_time       | temperature | humidity |
|---------------------|-------------|----------|
| 2019-07-01-17:27:03 | 25.24       | 33.22    |
I've never used it myself and so can't answer in detail, but you can feed a raw SQL query into Django and get the results back through the ORM. Since you already have the SQL, this may be the easiest way to proceed. The Django documentation covers this under "Performing raw SQL queries".
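For illustration, here is a sketch of one way to feed that exact SQL through Django using a raw database cursor (a raw cursor avoids Model.objects.raw()'s requirement that the primary key appear in the SELECT list; the date is passed as a query parameter):

from django.db import connection

# The query from the question, with the date turned into a parameter.
sql = """
    SELECT T1.measured_time, T1.temperature, T2.humidity
    FROM (SELECT CASE WHEN data_type = 1 THEN value END AS temperature,
                 CASE WHEN data_type = 2 THEN value END AS humidity,
                 measured_time
          FROM data_table) AS T1,
         (SELECT CASE WHEN data_type = 1 THEN value END AS temperature,
                 CASE WHEN data_type = 2 THEN value END AS humidity,
                 measured_time
          FROM data_table) AS T2
    WHERE T1.measured_time = T2.measured_time
      AND T1.temperature IS NOT NULL
      AND T2.humidity IS NOT NULL
      AND DATE(T1.measured_time) = %s
"""

with connection.cursor() as cursor:
    cursor.execute(sql, ['2019-07-01'])
    for measured_time, temperature, humidity in cursor.fetchall():
        print(measured_time, temperature, humidity)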

Get Max Value of Column out of a query list in Python

I am new to Python and am currently trying to create a web form to edit customer data. The user selects a customer and gets all DSL products linked to that customer. What I am now trying to get is the maximum downstream possible for a customer. So when the customer has DSL1, DSL3 and DSL4, then his MaxDownstream is 550. Sorry for my poor English.
Here is the structure of my tables..
Customer_has_product:
Customer_idCustomer | Product_idProduct
---------------------------------------
                  1 | 1
                  1 | 3
                  1 | 4
                  2 | 5
                  3 | 3
Customer:
idCustomer | MaxDownstream
--------------------------
         1 |
         2 |
         3 |
Product:
idProduct | Name | downstream
-----------------------------
        1 | DSL1 | 50
        2 | DSL2 | 100
        3 | DSL3 | 550
        4 | DSL4 | 400
        5 | DSL5 | 1000
And the code I've got so far:
db_session = Session(db_engine)
customer_object = db_session.query(Customer).filter_by(
    idCustomer=productform.Customer.data.idCustomer
).first()

productlist = request.form.getlist("DSLPRODUCTS_PRIVATE")
oldproducts = db_session.query(Customer_has_product.Product_idProduct).filter_by(
    Customer_idCustomer=customer_object.idCustomer)
id_list_delete = list(set([r for r, in oldproducts]) - set(productlist))

for delid in id_list_delete:
    db_session.query(Customer_has_product).filter_by(
        Customer_idCustomer=customer_object.idCustomer,
        Product_idProduct=delid).delete()
    db_session.commit()

for product in productlist:
    if db_session.query(Customer_has_product).filter_by(
            Customer_idCustomer=customer_object.idCustomer,
            Product_idProduct=product
    ).first() is not None:
        continue
    else:
        product_link_to_add = Customer_has_product(
            Customer_idCustomer=productform.Customer.data.idCustomer,
            Product_idProduct=product
        )
        db_session.add(product_link_to_add)
        db_session.commit()
What you want to do is JOIN the tables onto each other. All relational database engines support joins, as does SQLAlchemy.
So how do you do that in SQLAlchemy?
You have two options, really. One is to use the Query builder of SQLAlchemy's ORM; the other is to use SQLAlchemy Core (upon which the ORM is built) directly. I prefer the latter, because it maps more directly to SELECT statements, but I'm going to show both.
Using SQLAlchemy Core
How to do a join in Core is documented in the SQLAlchemy Core tutorial. The first argument is the table to JOIN to, and the second argument is the JOIN condition.
from sqlalchemy import select, func

query = select(
    [
        Customer.idCustomer,
        func.max(Product.downstream),
    ]
).select_from(
    Customer.__table__
    .join(Customer_has_product.__table__,
          Customer_has_product.Customer_idCustomer == Customer.idCustomer)
    .join(Product.__table__,
          Product.idProduct == Customer_has_product.Product_idProduct)
).group_by(
    Customer.idCustomer
)

# Now we can execute the built query on the database.
result = db_session.execute(query).fetchall()
print(result)  # Should now give you the correct result.
Using SQLAlchemy ORM
To simplify this it's best to declare some relationships on your models. join is documented in the ORM tutorial as well; the first argument to join is the model to join onto, and the second argument is again the JOIN condition.
Without the relationships you'll have to do it like this:
result = (db_session
          .query(Customer.idCustomer, func.max(Product.downstream))
          .join(Customer_has_product,
                Customer_has_product.Customer_idCustomer == Customer.idCustomer)
          .join(Product,
                Product.idProduct == Customer_has_product.Product_idProduct)
          .group_by(Customer.idCustomer)
          ).all()
print(result)
This should be enough to get the idea on how to do this.
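For completeness, a sketch of what declaring such a relationship could look like, assuming declarative models that match the tables above (the products relationship name is hypothetical, and the association table from the question is reused as the secondary):

from sqlalchemy import Column, Integer, func
from sqlalchemy.orm import relationship

class Customer(Base):
    __tablename__ = 'Customer'
    idCustomer = Column(Integer, primary_key=True)
    MaxDownstream = Column(Integer)
    # Hypothetical many-to-many relationship via the association table.
    products = relationship('Product',
                            secondary=Customer_has_product.__table__,
                            backref='customers')

# With the relationship declared, the join conditions can be inferred:
result = (db_session
          .query(Customer.idCustomer, func.max(Product.downstream))
          .join(Customer.products)
          .group_by(Customer.idCustomer)
          .all())
print(result)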

Using func rank in SQLAlchemy to rank rows in a table

I have a table defined like so:
Column      | Type    | Modifiers | Storage | Stats target | Description
------------+---------+-----------+---------+--------------+-------------
id          | uuid    | not null  | plain   |              |
user_id     | uuid    |           | plain   |              |
area_id     | integer |           | plain   |              |
vote_amount | integer |           | plain   |              |
I want to be able to generate a rank 'column' when I query this database. This rank column would be ordered by the vote_amount column. I have attempted to create a query to do this, it looks like so:
subq_rank = db.session.query(user_stories).add_columns(
    db.func.rank()
      .over(partition_by=user_stories.user_id,
            order_by=user_stories.vote_amount)
      .label('rank')
).subquery('slr')
data = (db.session.query(user_stories)
        .select_entity_from(subq_rank)
        .filter(user_stories.area_id == id)
        .group_by(-subq_rank.c.rank)
        .limit(50)
        .all())
Hopefully my attempt will give you an idea of what I am trying to achieve.
Thanks.
Well, if you need these columns in every query, it would be better to do it in the DB. I would create a view which contains the rank column, and in the code I would query that view to get the data directly:
CREATE VIEW [ranking_user_stories] AS
SELECT TOP 50 * FROM
(SELECT *, rank() over (partition by user_stories.user_id order by user_stories.vote_amount ASC) AS ranking
FROM user_stories
WHERE user_stories.area_id = id) uS
ORDER BY vote_amount ASC
It's the same logic as your code, but in SQL. If you are using MySQL, just change TOP 50 to LIMIT 50 (and put it at the end of the query). I don't see the point of the final GROUP BY on ranking, but if you need it:
CREATE VIEW [ranking_user_stories] AS
SELECT TOP 50 MAX(id) AS id, user_id, area_id, MAX(vote_amount) AS vote_amount, ranking FROM
(SELECT *, rank() over (partition by user_stories.user_id order by user_stories.vote_amount ASC) AS ranking
FROM user_stories
WHERE user_stories.area_id = id) uS
GROUP BY user_id, area_id, ranking
ORDER BY MAX(vote_amount) ASC
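If you want to keep it in SQLAlchemy instead, here is a minimal sketch of the windowed query itself (note that func.rank() is called as a function, unlike the attribute access in the attempt above; names follow the question's user_stories model):

rank_col = (db.func.rank()
            .over(partition_by=user_stories.user_id,
                  order_by=user_stories.vote_amount)
            .label('rank'))

data = (db.session.query(user_stories, rank_col)
        .filter(user_stories.area_id == id)
        .limit(50)
        .all())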

Having trouble with a PostgreSQL query

I've got the following two tables:
User
userid | email               | phone
-------|---------------------|-------------
     1 | some@email.com      | 555-555-5555
     2 | some@otheremail.com | 555-444-3333
     3 | one@moreemail.com   | 333-444-1111
     4 | last@one.com        | 123-333-2123
UserTag
id | user_id | tag
---|---------|---------
 1 | 1       | tag1
 2 | 1       | tag2
 3 | 1       | cool_tag
 4 | 1       | some_tag
 5 | 2       | new_tag
 6 | 2       | foo
 6 | 4       | tag1
I want to run a query in SQLAlchemy to join those two tables and return all users who do NOT have the tags "tag1" or "tag2". In this case, the query should return the users with userid 2 and 3. Any help would be greatly appreciated.
I need the opposite of this query:
users.join(UserTag, User.userid == UserTag.user_id)
     .filter(
         or_(
             UserTag.tag.like('tag1'),
             UserTag.tag.like('tag2')
         )
     )
I have been going at this for hours but always end up with the wrong users, or sometimes all of them. An SQL query which achieves this would also be helpful; I'll try to convert that to SQLAlchemy.
Not sure how this would look in SQLAlchemy, but hopefully an explanation of why the query is the way it is will help you get there.
This is an outer join: you want all the records from one table (User) even if there are no matching records in the other table (UserTag); with User listed first it is a LEFT JOIN. Beyond that, you want all the records that don't have a match in UserTag for a specific filter.
SELECT user.userid, email, phone
FROM user LEFT JOIN usertag
ON usertag.user_id = user.userid
AND usertag.tag IN ('tag1', 'tag2')
WHERE usertag.user_id IS NULL;
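Not part of the original answer, but as a sketch, the same LEFT JOIN ... IS NULL pattern might look like this in SQLAlchemy (model names taken from the question):

qry = (session.query(User)
       .outerjoin(UserTag,
                  (UserTag.user_id == User.userid) &
                  (UserTag.tag.in_(['tag1', 'tag2'])))
       .filter(UserTag.user_id.is_(None)))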
The SQL would go like this:
select u.* from user u
where u.userid not in (select ut.user_id from usertag ut where ut.tag in ('tag1', 'tag2'));
(A plain inner join filtered with NOT IN on the tag column would drop users with no tags at all, such as userid 3, and duplicate users with several tags, so the exclusion belongs in a subquery.)
I have not used SQLAlchemy, so you will need to convert it to the equivalent SQLAlchemy query.
Hope it helps. Thanks.
Assuming your model defines a relationship as below:
class User(Base):
    ...

class UserTag(Base):
    ...
    user = relationship("User", backref="tags")
the query follows:
qry = session.query(User).filter(~User.tags.any(UserTag.tag.in_(tags))).order_by(User.userid)
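For reference, a small usage sketch (tags is the list of tag names to exclude; ~User.tags.any(...) compiles to a NOT EXISTS subquery, which is why users without any tags at all, such as userid 3, are included):

tags = ['tag1', 'tag2']
qry = (session.query(User)
       .filter(~User.tags.any(UserTag.tag.in_(tags)))
       .order_by(User.userid))
print(qry)        # shows the rendered SELECT ... WHERE NOT (EXISTS ...) statement
print(qry.all())  # expected: the users with userid 2 and 3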

Django lte/gte query on a list

I have the following type of data:
The data is segmented into "frames" and each frame has a start and stop "gpstime". Within each frame are a bunch of points with a "gpstime" value.
There is a frames model that has frame_name, start_gps, stop_gps, ...
Let's say I have a list of gpstime values and want to find the corresponding frame_name for each.
I could just do a loop...
framenames = [frames.objects.filter(start_gps__lte=gpstime[idx],
                                    stop_gps__gte=gpstime[idx]).values_list('frame_name', flat=True)
              for idx in range(len(gpstime))]
This will give me a list of 'frame_name' values, one for each gpstime, which is what I want. However, this is very slow.
What I want to know: is there a better way to perform this lookup that is more efficient than iterating over the list? The list could get fairly large.
Thanks!
EDIT: Frames model
class frames(models.Model):
    frame_id = models.AutoField(primary_key=True)
    frame_name = models.CharField(max_length=20)
    start_gps = models.FloatField()
    stop_gps = models.FloatField()

    def __unicode__(self):
        return "%s" % (self.frame_name)
If I understand correctly, gpstime is a list of the times, and you want to produce a list of framenames with one for each gpstime. Your current way of doing this is indeed very slow because it makes a db query for each timestamp. You need to minimize the number of db hits.
The answer that comes first to my head uses numpy. Note that I'm not making any extra assumptions here. If your gpstime list can be sorted, i.e. the ordering does not matter, then it could be done much faster.
Try something like this:
from numpy import array

# One DB query per column, pulled into flat numpy arrays.
frame_start_times = array(frames.objects.values_list('start_gps', flat=True))
frame_end_times = array(frames.objects.values_list('stop_gps', flat=True))
frame_names = array(frames.objects.values_list('frame_name', flat=True))

frame_names_for_times = []
for time in gpstime:
    # Boolean mask over all frames whose interval contains this time.
    mask = (frame_start_times < time) & (frame_end_times > time)
    frame_names_for_times.append(frame_names[mask])
EDIT:
Since the list is sorted, you can use .searchsorted():
from numpy import array as a
gpstimes = a([151, 152, 153, 190, 649, 652, 920, 996])
starts = a([100, 600, 900, 1000])
ends = a([180, 650, 950, 1000])
names = a(['a', 'b', 'c', 'd'])

for time in gpstimes:
    start_pos = starts.searchsorted(time)
    end_pos = ends.searchsorted(time)
    if start_pos - 1 == end_pos:
        print time, names[end_pos]
    else:
        print str(time) + ' was not within any frame'
The best way to speed things up is to add indexes to those fields:
start_gps = models.FloatField(db_index=True)
stop_gps = models.FloatField(db_index=True)
and then run manage.py syncdb. (Note that syncdb does not alter existing tables, so for an existing table you would need to add the indexes manually or via a migration tool.)
The frames table is very large, but I have another value that lowers the number of frames searched in this case to under 50. There is not really a pattern; each frame starts at the same gpstime the previous one stops.
I don't quite understand how you lowered the number of searched frames to 50, but if you're searching for, say, 10,000 gpstime values in only 50 frames, then it's probably easiest to load those 50 frames into RAM, and do the search in Python, using something similar to foobarbecue's answer.
However, if you're searching for, say, 10 gpstime values in the entire table which has, say, 10,000,000 frames, then you may not want to load all 10,000,000 frames into RAM.
You can get the DB to do something similar by adding the following index...
ALTER TABLE myapp_frames ADD UNIQUE KEY my_key (start_gps, stop_gps, frame_name);
...then using a query like this...
(SELECT frame_name FROM myapp_frames
WHERE 2.5 BETWEEN start_gps AND stop_gps LIMIT 1)
UNION ALL
(SELECT frame_name FROM myapp_frames
WHERE 4.5 BETWEEN start_gps AND stop_gps LIMIT 1)
UNION ALL
(SELECT frame_name FROM myapp_frames
WHERE 7.5 BETWEEN start_gps AND stop_gps LIMIT 1);
...which returns...
+------------+
| frame_name |
+------------+
| Frame 2 |
| Frame 4 |
| Frame 7 |
+------------+
...and for which an EXPLAIN shows...
+----+--------------+--------------+-------+---------------+--------+---------+------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------+--------------+-------+---------------+--------+---------+------+------+--------------------------+
| 1 | PRIMARY | myapp_frames | range | my_key | my_key | 8 | NULL | 3 | Using where; Using index |
| 2 | UNION | myapp_frames | range | my_key | my_key | 8 | NULL | 5 | Using where; Using index |
| 3 | UNION | myapp_frames | range | my_key | my_key | 8 | NULL | 8 | Using where; Using index |
| NULL | UNION RESULT | <union1,2,3> | ALL | NULL | NULL | NULL | NULL | NULL | |
+----+--------------+--------------+-------+---------------+--------+---------+------+------+--------------------------+
...so you can do all the lookups in one query which hits that index, and the index should be cached in RAM.
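As a sketch, that UNION could be assembled from the list of gpstime values with a raw Django cursor (table and column names as in the answer above; the %s placeholder style assumes a MySQL-type backend):

from django.db import connection

gpstimes = [2.5, 4.5, 7.5]
subquery = ("(SELECT frame_name FROM myapp_frames "
            "WHERE %s BETWEEN start_gps AND stop_gps LIMIT 1)")
sql = " UNION ALL ".join([subquery] * len(gpstimes))

with connection.cursor() as cursor:
    cursor.execute(sql, gpstimes)
    framenames = [row[0] for row in cursor.fetchall()]
print(framenames)  # e.g. ['Frame 2', 'Frame 4', 'Frame 7']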
