Is there a way to cache the fetch output? - python

I'm working on a closed system running in the cloud.
What I need is a search function that uses user-typed-in regexp to filter the rows in a dataset.
phrase = re.compile(request.get("query"))
data = Entry.all().fetch(50000) #this takes around 10s when there are 6000 records
result = [x for x in data if phrase.search(x.title)]
Now, the database itself won't change too much, and there will be no more than 200-300 searches a day.
Is there a way to somehow cache all the Entries (I expect that there will be no more than 50,000 of them, each no bigger than 500 bytes), so retrieving them won't take >10 seconds? Or perhaps to parallelize it? I don't mind 10 CPU-seconds, but I do mind the 10 seconds that the user has to wait.
To address any answers like "index it and use .filter()" - the query is a regexp, and I don't know of any indexing mechanism that would allow using a regexp.

You can also use cachepy or performance engine (shameless plug) to store the data on app engine's local instances, so you can have faster access to all entities without getting limited by memcache boundaries or datastore latency.
Hint: A local instance gets killed if it surpasses about 185 MB of memory, so you can actually store quite a lot of data in it if you know what you're doing.

Since there is a bounded number of entries, you can memcache all of them and then do the filtering in memory as you've outlined. Note, however, that a single memcache entry cannot exceed 1 MB, while a batched fetch can return up to 32 MB of memcache values at once.
So split the entries into subsets small enough to fit under the 1 MB limit, memcache each subset, and then read them all back in one batched call by precomputing the memcache keys (a sketch follows the link below).
More here:
http://code.google.com/appengine/docs/python/memcache/functions.html
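A minimal sketch of that approach, using the old db API and the Entry model from the question; the chunk size, key names, and the choice to cache only (key, title) pairs are my own assumptions, not anything prescribed by the memcache API:

from google.appengine.api import memcache

CHUNK_SIZE = 1000                # chosen so each cached list stays well under 1 MB
CHUNK_KEY = 'entries-chunk-%d'   # hypothetical key prefix

def cache_entries():
    # Cache only what the search needs: (datastore key, title) pairs.
    pairs = [(str(e.key()), e.title) for e in Entry.all().fetch(50000)]
    chunks = {}
    for idx, start in enumerate(range(0, len(pairs), CHUNK_SIZE)):
        chunks[CHUNK_KEY % idx] = pairs[start:start + CHUNK_SIZE]
    memcache.set_multi(chunks)
    memcache.set('entries-chunk-count', len(chunks))
    return pairs

def load_entries():
    count = memcache.get('entries-chunk-count')
    if count is None:
        return cache_entries()                    # cold cache: refill it
    keys = [CHUNK_KEY % i for i in range(count)]
    chunks = memcache.get_multi(keys)             # one batched fetch
    if len(chunks) != count:
        return cache_entries()                    # a chunk was evicted
    return [pair for key in keys for pair in chunks[key]]

The search then becomes a plain loop over load_entries(), applying phrase.search() to each title.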

Since your data is on the order of 20 MB, you may be able to load it entirely into local instance memory, which will be as fast as you can get. Alternately, you could store it as a data file alongside your app, which will be faster to read than the datastore.
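A rough sketch of the instance-memory variant; the module-level global and lazy loader are assumptions about how you would wire this into your handler, and Entry is the model from the question:

import re

_entries = None     # lives in instance memory, shared by all requests on that instance

def get_entries():
    global _entries
    if _entries is None:
        # Only the first request on a fresh instance pays the slow fetch.
        _entries = Entry.all().fetch(50000)
    return _entries

def search(pattern):
    phrase = re.compile(pattern)
    return [e for e in get_entries() if phrase.search(e.title)]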

Related

Managing Heroku RAM for Unique Application

I have a Flask application that allows users to query a ~small database (2.4M rows) using SQL. It's similar to a HackerRank but more limited in scope. It's deployed on Heroku.
I've noticed during testing that I can predictably hit an R14 error (memory quota exceeded) or R15 (memory quota greatly exceeded) by running large queries. The queries that typically cause this are outside what a normal user might do, such as SELECT * FROM some_huge_table. That said, I am concerned that these errors will become a regular occurrence for even small queries when 5, 10, 100 users are querying at the same time.
I'm looking for some advice on how to manage memory quotas for this type of interactive site. Here's what I've explored so far:
Changing the # of gunicorn workers. This has had some effect but I still hit R14 and R15 errors consistently.
Forced limits on user queries, based on either text or the EXPLAIN output. This does work to reduce memory usage, but I'm afraid it won't scale to even a very modest # of users.
Moving to a higher Heroku tier. The plan I use currently provides ~512MB RAM. The largest plan is around 14GB. Again, this would help but won't even moderately scale, to say nothing of the associated costs.
Reducing the size of the database significantly. I would like to avoid this if possible. Doing the napkin math on the 1.9M-row table shrinking to 10k or 50k rows, the application would have greatly reduced memory needs and would scale better, but would still have some moderate maximum usage.
As you can see, I'm a novice at best when it comes to memory management. I'm looking for some strategies/ideas on how to solve this general problem, and if it's the case that I need to either drastically cut the data size or throw tons of $ at this, that's OK too.
Thanks
Coming from my personal experience, I see two approaches:
1. plan for it
Based on your example, this means calculating the maximum memory a single request could use, multiplying it by the number of gunicorn workers, and using dynos big enough to cover that total.
For some applications this can be a valid approach; I don't think it is for yours.
2. reduce memory usage, solution 1
The fact that too much application memory is used makes me think that your code is likely loading the whole result set into memory (probably even multiple times, in multiple formats) before returning it to the client.
In the end, your application is only getting the data from the database and converting it to some output format (JSON/CSV?).
What you are probably searching for is streaming responses.
Your Flask view will then work on a record-by-record basis: it reads a single record, converts it to your output format, and yields it to the client before moving on to the next one.
Both your database client library and Flask support this (most database drivers expose it as server-side cursors / iterators).
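A minimal sketch of a streaming view, assuming psycopg2/PostgreSQL and CSV output; swap in your own driver's server-side cursor and output format:

import csv
import io

import psycopg2
from flask import Flask, Response, request

app = Flask(__name__)

@app.route('/query')
def run_query():
    sql = request.args['q']                      # see the security sidenote below

    def generate():
        conn = psycopg2.connect('dbname=mydb')   # placeholder DSN
        try:
            # A named cursor is a server-side cursor: rows come back in
            # batches of itersize instead of being loaded all at once.
            cur = conn.cursor(name='streaming_cursor')
            cur.itersize = 1000
            cur.execute(sql)
            for row in cur:
                buf = io.StringIO()
                csv.writer(buf).writerow(row)
                yield buf.getvalue()
        finally:
            conn.close()

    return Response(generate(), mimetype='text/csv')

With this pattern only one batch of rows sits in the worker's memory at any moment, regardless of how large the result set is.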
2. reduce memory usage, solution 2
Other services often go for simple pagination or hard caps on result sets to manage server-side memory.
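A sketch of the capped/paginated variant, reusing app and request from the sketch above; get_connection() and the page parameter are hypothetical:

PAGE_SIZE = 500        # hard cap on rows returned per request

@app.route('/query-paged')
def run_query_paged():
    page = int(request.args.get('page', 0))
    # Wrap the user's query so no single response exceeds PAGE_SIZE rows.
    sql = 'SELECT * FROM ({}) AS sub LIMIT %s OFFSET %s'.format(request.args['q'])
    cur = get_connection().cursor()
    cur.execute(sql, (PAGE_SIZE, page * PAGE_SIZE))
    return {'page': page, 'rows': cur.fetchall()}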
security sidenote
It sounds like users can define arbitrary SQL in their API requests. This is both a security and an availability risk: apart from issuing INSERT, UPDATE, or DELETE statements, a user could craft a query that not only blows up your application memory but also overloads your database.

Importing data into Google App Engine

Recently I had to import 48,000 records into Google App Engine. The stored 'tables' are ndb.Model types. Each of these records is checked against a couple of other 'tables' in the 'database' for integrity purposes and then written (.put()).
To do this, I uploaded a .csv file into Google Cloud Storage and processed it from there in a task queue. This processed about 10 .csv rows per second and failed after 41,000 records with an out-of-memory error. Splitting the .csv file into 2 sets of 24,000 records each fixed this problem.
So, my questions are:
a) is this the best way to do this?
b) is there a faster way (the next upload might be around 400,000 records)? and
c) how do I get over (or stop) the out of memory error?
Many thanks,
David
1) Have you thought about (even temporarily) upgrading your server instances?
https://cloud.google.com/appengine/docs/standard/#instance_classes
2) I don't think a 41,000-row .csv is enough to run out of memory, so you probably need to change your processing:
a) Break up the processing using multiple tasks, rolling your own cursor to process a couple thousand at a time, then spinning up a new task.
b) Experiment with ndb.put_multi()
Sharing some code of your loop and puts might help
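A hedged sketch combining (a) and (b): each task handles one slice of the .csv, batches its writes with ndb.put_multi(), and enqueues a follow-up task for the next slice. make_entity(), validate_row(), and the /import handler are hypothetical names:

from google.appengine.api import taskqueue
from google.appengine.ext import ndb

BATCH = 500          # entities per put_multi() call
SLICE = 5000         # csv rows handled by a single task

def import_slice(rows, offset=0):
    # The task handler re-reads the .csv from Cloud Storage and passes the full row list here.
    chunk = rows[offset:offset + SLICE]
    for start in range(0, len(chunk), BATCH):
        entities = [make_entity(r) for r in chunk[start:start + BATCH]
                    if validate_row(r)]          # your integrity checks
        ndb.put_multi(entities)                  # one RPC per batch of 500
    if offset + SLICE < len(rows):
        taskqueue.add(url='/import', params={'offset': str(offset + SLICE)})

Keeping each task to a few thousand rows also keeps its memory footprint (and the in-context cache mentioned below) small.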
The ndb in-context cache could be contributing to the memory errors. From the docs:
When executing long-running queries in background tasks, it's possible for the in-context cache to consume large amounts of memory. This is because the cache keeps a copy of every entity that is retrieved or stored in the current context. To avoid memory exceptions in long-running tasks, you can disable the cache or set a policy that excludes whichever entities are consuming the most memory.
You can prevent caching on a case-by-case basis by setting a context option in your ndb calls, for example:
foo.put(use_cache=False)
Completely disabling caching might degrade performance if you are often using the same objects for your comparisons. If that's the case, you could flush the cache periodically to stop it getting too big.
if some_condition:
    context = ndb.get_context()
    context.clear_cache()

Any way to 'throttle' Python/Mongo function

I have a function that operates on a single collection, recursively doing two aggregations and then updating documents. All my indexes are correct.
At the moment I cannot refactor this code, and when the function runs it slams Mongo for about 10 minutes to process the data. The runtime seems to grow with collection size, averaging about 60 seconds for every additional 3k documents. This collection can grow to hundreds of thousands of documents. (The documents are small - about 10 keys with very small values each.)
There is no need for the result of this function to be real-time - it is scheduled, so it's perfectly fine for me to throttle it back.
The question is, is there any way to tell mongo to limit the CPU it grants to an operation? Or should I address the throttling in Python using sleep or some other method?
recursively doing two aggregations then updating documents
It looks like you should consider re-modelling your schema. The flexibility of MongoDB's document schema is something you can use to optimise your process. See MongoDB: Data Modeling for more information, examples, and patterns.
The question is, is there any way to tell mongo to limit the CPU it grants to an operation?
MongoDB does not have a feature to limit CPU usage per operation. Such a feature may not make sense in a distributed deployment; for example, a limit of 1 CPU for an operation that spans multiple shards would no longer be simple or meaningful.
Alternatively, depending on your use case, if the function does not have to be real-time, you could utilise secondary read-preference.
Essentially, you direct your scheduled reporting to a secondary member, leaving the primary free to process other data.
Make sure you read up on the pros and cons of secondary reads beforehand, though.
See also MongoDB Replication
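A minimal sketch of routing the scheduled job's reads to a secondary with PyMongo; the connection string, database and collection names, and process() are placeholders:

from pymongo import MongoClient, ReadPreference

client = MongoClient('mongodb://host1,host2,host3/?replicaSet=rs0')   # placeholder URI

# Reads through this handle prefer a secondary; writes still go to the primary.
coll = client.mydb.get_collection(
    'mycollection',
    read_preference=ReadPreference.SECONDARY_PREFERRED,
)

pipeline = [{'$match': {}}]            # stand-in for your two aggregations
for doc in coll.aggregate(pipeline):
    process(doc)                       # your per-document update step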

What is the Google Appengine Ndb GQL query max limit?

I am looking around to find out the maximum number of results I can get from a GQL query on NDB on Google App Engine. I am using an implementation with cursors, but it would be much faster if I could retrieve them all at once.
This depends on lots of things, like the size of the entities and the number of values that need to be looked up in the index, so it's best to benchmark it for your specific application. Also beware that if you find that on a sunny day it takes e.g. 10 seconds to load all your items, some small fraction of your queries will probably run into a timeout due to natural variations in datastore performance, and occasionally your app will hit the timeout all the time when the datastore is having a bad day (it happens).
Basically, you no longer have the old limit of 1,000 entities per query, but consider using a reasonable limit anyway: you can hit the timeout error, and it's better to fetch in batches so users won't wait during load time.
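A sketch of batched retrieval with NDB cursors via fetch_page(); the batch size and MyModel are arbitrary choices, not anything required by the API:

from google.appengine.ext import ndb

def fetch_all_in_batches(query, batch_size=500):
    # Accumulate every result of an NDB query one page at a time.
    results, cursor, more = query.fetch_page(batch_size)
    while more and cursor:
        page, cursor, more = query.fetch_page(batch_size, start_cursor=cursor)
        results.extend(page)
    return results

entities = fetch_all_in_batches(MyModel.query())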

Efficient serialisation of large list into a database?

I have about 29 million individual unicode elements I want to serialise into a database. My current solution uses SQLite, and incrementally sends a list of length 100,000 at a time:
cursor.executemany("insert into array values (?)", [list of length 100,000])
I thought this would be efficient enough, but after going to sleep and waking up (say 6 hours), and seeing it still hadn't finished, I thought otherwise.
Would a MongoDB or memcached solution be more efficient?
(feel free to suggest any more efficient method)
If you decide to go with memcached, please take note of the following:
Memcached has a default maximum item size of 1 MB.
Storing anything larger requires starting memcached with the maximum item size set explicitly.
I really like using Redis for storing data structures. Your requirements are not entirely clear, but I would suggest Redis, since you can store the data in several different structures and work with it flexibly.
Please read the documentation of the different data structures that can be stored on the redis.
http://redis.io/documentation
Try spawning gevent greenlets (e.g. 10), monitor how many records are processed in a given time, and try increasing the number. You should also look into multiprocessing.
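A rough sketch of the multiprocessing variant of that idea; insert_chunk(), get_connection(), and bulk_insert() are hypothetical, and the pattern only pays off if the target store accepts concurrent writers (a single SQLite file largely does not, but MongoDB or Redis would):

from multiprocessing import Pool

CHUNK = 100000

def insert_chunk(chunk):
    conn = get_connection()          # one connection per worker process
    conn.bulk_insert(chunk)          # e.g. executemany / insert_many
    conn.close()
    return len(chunk)

def parallel_insert(values, workers=10):
    chunks = [values[i:i + CHUNK] for i in range(0, len(values), CHUNK)]
    with Pool(workers) as pool:
        return sum(pool.map(insert_chunk, chunks))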
