Can I dynamically change order_by attributes in my query call? - python

I have the following query call:
SearchList = (
    DBSession.query(
        func.count(ExtendedCDR.uniqueid).label("CallCount"),
        func.sum(ExtendedCDR.duration).label("TotalSeconds"),
        ExtendedCDR, ExtensionMap)
    .filter(or_(ExtensionMap.exten == ExtendedCDR.extension,
                ExtensionMap.prev_exten == ExtendedCDR.extension))
    .filter(between(ExtendedCDR.start, datebegin, dateend))
    .filter(ExtendedCDR.extension.in_(SelectedExtension))
    .group_by(ExtendedCDR.extension)
    .order_by(func.count(ExtendedCDR.uniqueid).desc())
    .all()
)
I would like to be able to define the order_by clause before calling .query(). Is this possible?
I tried doing what this Stack Overflow answer suggests for a filter spec, but I had no idea how to build the filter_group syntax.
From that post:
filter_group = list(Column.in_('a','b'),Column.like('%a'))
query = query.filter(and_(*filter_group))

You build a SQL query with the DBSession.query() call, and this query is not executed until you call .all() on it.
You can store the intermediate query object and add more filters or other clauses as needed:
search = DBSession.query(
    func.count(ExtendedCDR.uniqueid).label("CallCount"),
    func.sum(ExtendedCDR.duration).label("TotalSeconds"),
    ExtendedCDR, ExtensionMap)
search = search.filter(or_(
    ExtensionMap.exten == ExtendedCDR.extension,
    ExtensionMap.prev_exten == ExtendedCDR.extension))
search = search.filter(between(ExtendedCDR.start, datebegin, dateend))
search = search.filter(ExtendedCDR.extension.in_(SelectedExtension))
search = search.group_by(ExtendedCDR.extension)
search = search.order_by(func.count(ExtendedCDR.uniqueid).desc())
The value you pass to order_by can be created ahead of time:
search_order = func.count(ExtendedCDR.uniqueid).desc()
then used like:
search = search.order_by(search_order)
Once your query is complete, get the results by calling .all():
SearchList = search.all()
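For example, here is a minimal sketch of picking the ordering dynamically; sort_by is a hypothetical flag (e.g. coming from your UI) and is not part of the original code, while the column expressions are from the question:
# Hypothetical flag deciding the ordering; not in the original code.
sort_by = "duration"

if sort_by == "duration":
    search_order = func.sum(ExtendedCDR.duration).desc()
else:
    search_order = func.count(ExtendedCDR.uniqueid).desc()

search = search.order_by(search_order)
SearchList = search.all()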

Related

Python: How can I append without overriding past append for loop

I am currently trying to append the id of each query result to the output list in my code. I can get it to append one of the ids, but it never gets past the first one. How can I change my code so that any number of loop iterations reach output.append(q.id)?
Here is the code:
@app.route('/new-mealplan', methods=['POST'])
def create_mealplan():
    data = request.get_json()
    recipes = data['recipes']
    output = []
    for recipe in recipes:
        try:
            query = Recipes.query.filter(func.lower(Recipes.recipe_name) == func.lower(recipe)).all()
            # print(recipe)
            if query:
                query = Recipes.query.filter(func.lower(Recipes.recipe_name) == func.lower(recipe)).all()
                for q in query:
                    output.append(q.id)
        finally:
            return jsonify({"data" : output})
To fix this I removed the try and finally blocks, and then returned only after the for loop had completed, as in the sketch below.
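A minimal sketch of the corrected route (assuming the same Flask app, Recipes model, and JSON payload as in the question):
@app.route('/new-mealplan', methods=['POST'])
def create_mealplan():
    data = request.get_json()
    recipes = data['recipes']
    output = []

    for recipe in recipes:
        # One query per recipe; no try/finally cutting the loop short.
        query = Recipes.query.filter(func.lower(Recipes.recipe_name) == func.lower(recipe)).all()
        for q in query:
            output.append(q.id)

    # Return once, after every recipe has been processed.
    return jsonify({"data": output})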

SQLAlchemy search using varying keywords

I'm struggling with a very specific problem here:
I need to do a LIKE search in SQLAlchemy, but the number of keywords varies.
Here's the code for one keyword:
search_query = request.form["searchinput"]
if selectet_wg and not lagernd:
    query = db_session.query(
        Artikel.Artnr,
        Artikel.Benennung,
        Artikel.Bestand,
        Artikel.Vkpreisbr1
    ).filter(
        and_(
            Artikel.Benennung.like("%" + search_query + "%"),
            Artikel.Wg == selectet_wg
        )
    ).order_by(Artikel.Vkpreisbr1.asc())
"searchinput" looks like this : "property1,property2,property3", but also can be just 1,2,5 or more propertys.
I want to split the searchinput at "," (yes i know how to do that :) ) and insert another LIKE search for every property.
So for the above example the search should be looking like this:
search_query = request.form["searchinput"]
if selectet_wg and not lagernd:
    query = db_session.query(
        Artikel.Artnr,
        Artikel.Benennung,
        Artikel.Bestand,
        Artikel.Vkpreisbr1
    ).filter(
        and_(
            Artikel.Benennung.like("%" + search_query + "%"),  # property1
            Artikel.Benennung.like("%" + search_query + "%"),  # property2
            Artikel.Benennung.like("%" + search_query + "%"),  # property3
            Artikel.Wg == selectet_wg
        )
    ).order_by(Artikel.Vkpreisbr1.asc())
I don't think it's a smart idea to write an if statement for every possible number of properties and spell the query out several times...
I'm using the newest version of SQLAlchemy and Python 3.4.
It should be possible to create a list of your LIKE filters and pass them all to and_.
First create the list of LIKE clauses:
search_queries = search_query.split(',')
like_filters = [Artikel.Benennung.like("%" + sq + "%") for sq in search_queries]
Then pass them to and_, unpacking the list:
and_(
    Artikel.Wg == selectet_wg,
    *like_filters
)
*like_filters has to be the last argument to and_, otherwise Python will give you an error.
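Putting it together, a hedged sketch of the full query using the question's names (db_session, Artikel, selectet_wg, search_query):
search_queries = search_query.split(',')
like_filters = [Artikel.Benennung.like("%" + sq + "%") for sq in search_queries]

query = db_session.query(
    Artikel.Artnr,
    Artikel.Benennung,
    Artikel.Bestand,
    Artikel.Vkpreisbr1
).filter(
    # All keywords must match; unpack the LIKE clauses into and_.
    and_(Artikel.Wg == selectet_wg, *like_filters)
).order_by(Artikel.Vkpreisbr1.asc())
If any single keyword should be enough for a match, the same list can be unpacked into or_ instead.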
You can call filter multiple times:
search_query = request.form["searchinput"]
if selectet_wg and not lagernd:
    query = db_session.query(
        Artikel.Artnr,
        Artikel.Benennung,
        Artikel.Bestand,
        Artikel.Vkpreisbr1
    ).filter(Artikel.Wg == selectet_wg)
    for prop in search_query.split(','):
        query = query.filter(Artikel.Benennung.like("%" + prop + "%"))
    query = query.order_by(Artikel.Vkpreisbr1.asc())

sqlalchemy add entity from a subquery

I want to use an outerjoin operation against a subquery and also include values from the subquery in the result.
My code:
q_responses = (
    session.query(Candidate, CandidateProfile)
    .join(CandidateProfile, CandidateProfile.candidate_id == Candidate.id)
)
subq = (
    session.query(AppAction.candidate_id, Activity.archived)
    .join(Activity, and_(AppAction.candidate_id == Activity.candidate_id,
                         Activity.archived == 1))
).subquery("subq")
responses = q_responses.outerjoin(subq, Candidate.id == subq.c.candidate_id).all()
So I get the result in this format
(Candidate, CandidateProfile)
But I also want to include the archived value from the subquery in the result.
After reading many relevant posts on the internet, I have tried
add_entity(subq.c.archived)
with_entities
add_column
select_from
But all those have resulted in some error.
Please help me out.
Please share the error you get when you try add_column. The code below should work just fine (assuming it works without the line that contains add_column):
responses = (
    q_responses
    .add_column(subq.c.archived)  # new
    .outerjoin(subq, Candidate.id == subq.c.candidate_id)
).all()
Alternatively, you could have created the query with this column included from the start:
subq = (
session.query(AppAction.candidate_id, Activity.archived)
.join(Activity, and_(AppAction.candidate_id == Activity.candidate_id,
Activity.archived == 1))
).subquery("subq")
q_responses = (
session.query(Candidate, CandidateProfile, subq.c.archived)
.join(CandidateProfile, CandidateProfile.candidate_id == Candidate.id)
.outerjoin(subq, Candidate.id == subq.c.candidate_id)
).all()
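With either variant, each result row then carries the extra column. A small sketch of consuming it (names as above; archived is None where the outer join found no matching subquery row):
for candidate, profile, archived in q_responses:
    # archived comes from the subq.c.archived column added to the query.
    print(candidate.id, archived)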

Filtering objects in Django based on optional arguments

Many times I find myself writing code similar to:
query = MyModel.objects.all()
if request.GET.get('filter_by_field1'):
    query = query.filter(field1=True)
if request.GET.get('filter_by_field2'):
    query = query.filter(field2=False)
field3_filter = request.GET.get('field3')
if field3_filter is not None:
    query = query.filter(field3=field3_filter)
field4_filter = request.GET.get('field4')
if field4_filter:
    query = query.filter(field4=field4_filter)
# etc...
return query
Is there a better, more generic way of building queries such as the one above?
If the only things that will ever be in request.GET are potential query arguments, you could do this:
query = MyModel.objects.filter(**request.GET)
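If the querystring can contain keys that are not model fields, a common variant is to whitelist the filterable names first. A sketch, where ALLOWED_FILTERS is a hypothetical set and not part of the original answer:
# Hypothetical whitelist of filterable field names.
ALLOWED_FILTERS = {'field1', 'field2', 'field3', 'field4'}

filters = {
    key: value
    for key, value in request.GET.items()
    if key in ALLOWED_FILTERS and value != ''
}
query = MyModel.objects.filter(**filters)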

Applying LIMIT and OFFSET to all queries in SQLAlchemy

I'm designing an API with SQLAlchemy (querying MySQL) and I would like to force all my queries to have page_size (LIMIT) and page_number (OFFSET) parameters.
Is there a clean way of doing this with SQLAlchemy? Perhaps building a factory of some sort to create a custom Query object? Or maybe there is a good way to do this with a mixin class?
I tried the obvious thing and it didn't work because .limit() and .offset() must be called after all filter conditions have been applied:
def q(page=0, page_size=None):
    q = session.query(...)
    if page_size:
        q = q.limit(page_size)
    if page:
        q = q.offset(page * page_size)
    return q
When I try using this, I get the exception:
sqlalchemy.exc.InvalidRequestError: Query.filter() being called on a Query which already has LIMIT or OFFSET applied. To modify the row-limited results of a Query, call from_self() first. Otherwise, call filter() before limit() or offset() are applied.
Try adding a first, required argument, which must be a group of query filters. Thus,
# q({'id': 5}, 2, 50)
def q(filters, page=0, page_size=None):
    query = session.query(...).filter_by(**filters)
    if page_size:
        query = query.limit(page_size)
    if page:
        query = query.offset(page * page_size)
    return query
or,
# q(Model.id == 5, 2, 50)
def q(filter, page=0, page_size=None):
    query = session.query(...).filter(filter)
    if page_size:
        query = query.limit(page_size)
    if page:
        query = query.offset(page * page_size)
    return query
This was not an option at the time of the question, but since version 1.0.0 you can take advantage of Query events to ensure that the limit and offset methods are always called just before your query object is compiled, after any manipulation has been performed by the users of your q function:
from sqlalchemy.event import listen

def q(page=0, page_size=None):
    query = session.query()
    listen(query, 'before_compile', apply_limit(page, page_size), retval=True)
    return query

def apply_limit(page, page_size):
    def wrapped(query):
        if page_size:
            query = query.limit(page_size)
        if page:
            query = query.offset(page * page_size)
        return query
    return wrapped
You can call query.limit(None) (or query.offset(None)) to remove a previously applied limit or offset.
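If the event machinery is more than you need, a simpler sketch is a small helper applied only after all filters have been added; paginate and MyModel below are hypothetical names, not from the original answers:
def paginate(query, page=0, page_size=None):
    # Apply LIMIT/OFFSET as the very last step, so filter() is never
    # called on a query that already has them.
    if page_size:
        query = query.limit(page_size)
    if page:
        query = query.offset(page * page_size)
    return query

# usage: build the query with all its filters first, paginate last
query = session.query(MyModel).filter_by(active=True)
results = paginate(query, page=2, page_size=50).all()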
