Django limit query - python

I am trying to run a Django query that will limit the returned results to 5 items. This is easy, except for the fact that the query will not always return 5 items. In that case, a statement like this (my code) fails:
users = User.objects.filter(username__istartswith = name)[5].only('Person__profile_picture', 'username')
If the query only returns a single result, I get an index out of range error. Is there a way to instruct Django to give me a maximum of 5 results without crashing on me? Is the standard solution to just wrap everything up in a try block? Or do I need to write a raw SQL query to accomplish this?

As in the docs: https://docs.djangoproject.com/en/dev/topics/db/queries/#limiting-querysets
For example, this returns the first 5 objects (LIMIT 5):
Entry.objects.all()[:5]

users = User.objects.filter(username__istartswith=name).values('Person__profile_picture', 'username')[:5]
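Note that a sliced queryset never raises IndexError: the slice compiles to a SQL LIMIT, which simply returns however many rows exist. It is the single-position index [5] in the original code that blows up when fewer than six rows match. A minimal sketch, reusing the name variable from the question:

# LIMIT 5 in SQL; yields between 0 and 5 rows, never an IndexError
users = User.objects.filter(username__istartswith=name)[:5]
for user in users:
    print(user.username)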

Related

Getting error when running a sql select statement in python

I am new to this and trying to learn Python. I wrote a select statement in Python where I used a parameter:
Select """cln.customer_uid = """[(num_cuid_number)])
TypeError: string indices must be integers
Agree with the others, this doesn't really look like Python by itself.
Even without seeing the rest of the code, I'll guess that the value in [(num_cuid_number)] is a string, so you'll want to convert it to an integer for the select statement to process.
num_cuid_number is most likely a string in your code; the string indices are the ones in the square brackets. So first check that variable to see what you actually received there. It should be an integer value.
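A minimal reproduction of that error, assuming num_cuid_number holds a string:

num_cuid_number = "42"  # a string, as suspected above
"""cln.customer_uid = """[(num_cuid_number)]
# TypeError: string indices must be integers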
Let me give you an example of Python code to execute (just for reference: I have used SQLAlchemy with Flask):
@app.route('/get_data/')
def get_data():
    base_sql = """
    SELECT cln.customer_uid='%s' from cln
    """ % (num_cuid_number)
    data = db.session.execute(base_sql).fetchall()
Pretty sure you are trying to create a select statement with a "where" clause here. There are many ways to do this; for example, using raw SQL the query should look similar to this:
query = "SELECT * FROM cln WHERE customer_uid = %s"
parameters = (num_cuid_number,)
Separating the parameters from the query like this protects you from SQL injection. You can then take these two variables and execute them with your db engine, like:
results = db.execute(query, parameters)
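For a self-contained illustration of the same pattern, here is a sketch using Python's built-in sqlite3 driver (note that sqlite3 uses ? as its placeholder where MySQL drivers use %s; the table and values are made up for the example):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE cln (customer_uid INTEGER, name TEXT)')
conn.execute("INSERT INTO cln VALUES (42, 'Alice')")

num_cuid_number = 42
query = 'SELECT * FROM cln WHERE customer_uid = ?'
parameters = (num_cuid_number,)
# the driver binds the parameter itself; no string formatting involved
results = conn.execute(query, parameters).fetchall()
print(results)  # [(42, 'Alice')]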
This will work. However, especially in Python, it is more common to use a package like SQLAlchemy to make queries more "flexible" (in other words, built without manually constructing the query string). You can do the same thing using SQLAlchemy Core:
query = cln.select()
query = query.where(cln.c.customer_uid == num_cuid_number)  # Core table columns live on .c
results = db.execute(query)
Note: I simplified "db" in both examples, you'd actually use a cursor, session, engine or similar to execute your queries, but that wasn't your question.
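For completeness, here is a runnable sketch of the Core version; the cln table definition below is an assumption, since the real schema wasn't shown in the question:

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine('sqlite:///:memory:')
metadata = MetaData()
# hypothetical schema standing in for the real cln table
cln = Table(
    'cln', metadata,
    Column('customer_uid', Integer),
    Column('name', String),
)
metadata.create_all(engine)

num_cuid_number = 42
query = cln.select().where(cln.c.customer_uid == num_cuid_number)
with engine.connect() as conn:
    results = conn.execute(query).fetchall()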

How can I get limited number of rows from mysql using flask SQLAlchemy?

I am unable to fetch a limited number of rows; I always get an error whenever I attempt it.
Please note that I'm also trying to paginate the limited number of rows that are fetched.
The program works fine without the limit: it's able to fetch randomly and paginate, but it gives an error when I try using .limit()
def question(page=1):
    # questions = Question.query.paginate(page, per_page=1)
    quess = Question.query.order_by(func.rand())
    quest = quess.limit(2)
    questions = quest.paginate(page, per_page=1)
This is the error I keep getting...
sqlalchemy.exc.InvalidRequestError
InvalidRequestError: Query.order_by() being called on a Query which already has LIMIT or OFFSET applied. To modify the row-limited results of a Query, call from_self() first. Otherwise, call order_by() before limit() or offset() are applied.
You can't apply order_by() (or a further LIMIT/OFFSET, which is what paginate() adds) to a query that already has LIMIT or OFFSET applied; SQLAlchemy doesn't allow it. from_self() wraps the row-limited query in a subquery, so paginate() can then apply its own LIMIT/OFFSET on the outside. Try calling from_self() after limit() and before paginate():
questions = (Question.query.order_by(func.rand())
             .limit(2)
             .from_self()
             .paginate(page, per_page=1))
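Put back into the view, the whole thing would look something like this (the route and template names are assumptions, and app/Question come from your own setup; note that from_self() was deprecated in SQLAlchemy 1.4, so this matches the older versions that raise the error above):

from flask import render_template
from sqlalchemy.sql.expression import func

@app.route('/questions/', defaults={'page': 1})
@app.route('/questions/<int:page>')
def question(page):
    questions = (Question.query
                 .order_by(func.rand())  # MySQL RAND()
                 .limit(2)
                 .from_self()
                 .paginate(page, per_page=1))
    return render_template('questions.html', questions=questions)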

Fetching queryset data one by one

I am aware that a regular queryset, or the iterator() queryset method, evaluates and returns the entire data set in one shot.
For instance, take this:
my_objects = MyObject.objects.all()
for rows in my_objects:             # Way 1
    ...
for rows in my_objects.iterator():  # Way 2
    ...
Question
In both methods all the rows are fetched in a single go. Is there any way in Django that the queryset rows can be fetched one by one from the database?
Why this weird requirement
At present my query fetches, let's say, n rows, but sometimes I get the Python/Django OperationalError (2006, 'MySQL server has gone away').
So, as a workaround, I am currently using a weird while-loop logic. I was wondering if there is any native or built-in method, or whether my question is even logical in the first place! :)
I think you are looking to limit your query set.
Quote from the Django docs on limiting querysets:
Use a subset of Python’s array-slicing syntax to limit your QuerySet to a certain number of results. This is the equivalent of SQL’s LIMIT and OFFSET clauses.
In other words, if you start with a count, you can then loop over and take slices as you require them:
cnt = MyObject.objects.count()
start_point = 0
inc = 5
while start_point < cnt:
    filtered = MyObject.objects.all()[start_point:start_point + inc]
    start_point += inc
Of course, you may need to add more error handling to this.
Fetching row by row might be worse. You might want to retrieve in batches of 1000 or so instead. I have used this Django snippet (not my work) successfully with very large querysets. It doesn't eat up memory, and I've had no trouble with connections going away.
Here's the snippet:
import gc

def queryset_iterator(queryset, chunksize=1000):
    '''
    Iterate over a Django QuerySet ordered by the primary key.

    This method loads a maximum of chunksize (default: 1000) rows in
    memory at the same time, while Django normally would load all rows
    into memory. Using the iterator() method only causes it to not
    preload all the classes.

    Note that the implementation of the iterator does not support
    ordered querysets.
    '''
    pk = 0
    last_pk = queryset.order_by('-pk')[0].pk
    queryset = queryset.order_by('pk')
    while pk < last_pk:
        for row in queryset.filter(pk__gt=pk)[:chunksize]:
            pk = row.pk
            yield row
        gc.collect()
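Usage is then a drop-in replacement for iterating over the queryset directly, for example (MyObject being the model from the question):

for row in queryset_iterator(MyObject.objects.all(), chunksize=500):
    print(row.pk)  # stand-in for your real per-row processing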
To solve the (2006, 'MySQL server has gone away') problem, your approach is not that logical. If you hit the database for each entry, the number of queries increases, which will itself create problems as usage of your application grows.
I think you should close the MySQL connection after iterating over all the elements of the result; if you then make another query, Django will create a new connection.
from django.db import connection
connection.close()
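For example, at the end of a long-running job (process_row is a stand-in for your own logic, and queryset_iterator is the snippet from the answer above):

from django.db import connection

for row in queryset_iterator(MyObject.objects.all()):
    process_row(row)

# drop the (possibly timed-out) connection; Django transparently
# opens a new one on the next query
connection.close()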
Refer to the Django docs for more details.

Django returning said rows in a queryset

Using a queryset in Django (in a view), I only want to get rows 51-100, i.e. I only want it to return those rows.
Is this possible, and how, within:
objectQuerySet = Recipient.objects.filter(incentiveid=incentive).order_by('fullname')
I don't want to use any paging system; this is just a one-time thing.
Thank you
You can use slicing to execute a LIMIT/OFFSET statement. Note that Python slicing is zero-indexed, so rows 51-100 correspond to [50:100]:
objectQuerySet = Recipient.objects.filter(incentiveid=incentive).order_by('fullname')[50:100]
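If you want to double-check the LIMIT/OFFSET that Django generates, you can print the compiled SQL of the (still unevaluated) queryset:

qs = Recipient.objects.filter(incentiveid=incentive).order_by('fullname')[50:100]
print(qs.query)  # ... ORDER BY fullname ASC LIMIT 50 OFFSET 50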

How do I tell if the returned cursor is the last cursor in App Engine

I apologize if I am missing something really obvious.
I'm making successive calls to App Engine using cursors. How do I tell if I'm on the last cursor? The way I'm doing it now is to save the last cursor and then test whether that cursor equals the currently returned one. This requires an extra call to the datastore, which is probably unnecessary.
Is there a better way to do this?
Thanks!
I don't think there's a way to do this with ext.db in a single datastore call, but with ndb it is possible. Example:
query = Person.query(Person.name == 'Guido')
result, cursor, more = query.fetch_page(10)
If using the returned cursor will result in more records, more will be True. This is done smartly, in a single RPC call.
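A sketch of a full pagination loop built on fetch_page, feeding each returned cursor back in via start_cursor (Person is the model from the example above):

cursor = None
more = True
while more:
    results, cursor, more = Person.query(Person.name == 'Guido').fetch_page(
        10, start_cursor=cursor)
    for person in results:
        print(person.name)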
Since you say 'last cursor' I assume you are using cursors for some kind of pagination, which implies you will be fetching results in batches with a limit.
In this case, you know you are on the last cursor when you get fewer results back than your limit:
limit = 100
results = Entity.all().with_cursor('x').fetch(limit)
if len(results) < limit:
    pass  # then there's no point trying to fetch another batch after this one
If you mean "has this cursor hit the end of the search results", then no, not without picking the cursor up and trying it again. If more entities are added that match the original search criteria, such that they logically land "after" the cursor (e.g., a query that sorts by an ascending timestamp), then reusing that saved cursor will let you retrieve those new entities.
I use the same technique Chris Familoe describes, but set the limit one more than I wish to return. So, in Chris' example, I would fetch 101 entities; 101 returned means I have another page with at least one entity on it.
recs = db_query.fetch(limit + 1, offset)
# if fewer records were returned than requested, we've reached the end
if len(recs) < limit + 1:
    lastpage = True
    entries = recs
else:
    lastpage = False
    entries = recs[:-1]
I know this post is kind of old but I was looking for a solution to the same problem. I found it in this excellent book:
http://shop.oreilly.com/product/0636920017547.do
Here is the tip:
results = query.fetch(RESULTS_FOR_PAGE)
new_cursor = query.cursor()             # cursor positioned after the batch just fetched
query.with_cursor(new_cursor)           # continue the query from that position
has_more_results = query.count(1) == 1  # cheap probe: is at least one more entity there?
