Sending file from celery task to flask route - python

I have a Flask application where I query and filter large datasets from S3 in a Celery task. I want to serve the filtered data to the user as a CSV of up to 100 MB. My initial thought was to have Celery save the dataset as a CSV to disk and then use send_file in the Flask route, but I am deploying on Heroku, which has an ephemeral file system: if I save the file in the Celery worker, it won't be visible to the web worker. I also played with running the S3 query directly in the Flask route and sending the file without saving it to the server, but the query takes up to 30 seconds, so I want to keep it as a background job in Celery.
My other thought was to upload the filtered data to S3 and delete it after the download. However, this seems inefficient because it means downloading, filtering, and re-uploading.
Is there any way to do this efficiently, or should I move off Heroku to somewhere I have SSD space? Thanks!
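For reference, here is a minimal, hedged sketch of the upload-to-S3 option mentioned above, assuming Celery, boto3, and Flask; the broker URL, bucket name, and run_s3_query helper are made up. The Celery task writes the filtered CSV straight to S3 (no local disk), and the Flask route hands back a time-limited presigned URL instead of streaming 100 MB through the web dyno:
import boto3
from celery import Celery
from flask import Flask, jsonify

celery_app = Celery('tasks', broker='redis://localhost:6379/0')   # hypothetical broker URL
app = Flask(__name__)
s3 = boto3.client('s3')
BUCKET = 'my-results-bucket'                                      # hypothetical bucket name

@celery_app.task
def filter_and_store(query_params, result_key):
    df = run_s3_query(query_params)                    # placeholder for your existing query/filter logic
    body = df.to_csv(index=False).encode('utf-8')
    s3.put_object(Bucket=BUCKET, Key=result_key, Body=body)   # never touches the worker's disk

@app.route('/download/<path:result_key>')
def download(result_key):
    # Return a time-limited link; S3 serves the bytes, not the web dyno
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': BUCKET, 'Key': result_key},
        ExpiresIn=3600,
    )
    return jsonify({'url': url})
Cleanup after the download could then be an S3 lifecycle rule or a small follow-up task, rather than anything on the Heroku dynos.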

Related

copy file from AWS S3 (as static website) to AWS S3 [duplicate]

We need to move our video file storage to AWS S3. The old location is a CDN, so I only have a URL for each file (1000+ files, > 1 TB total). Running an upload tool directly on the storage server is not an option.
I have already created a tool that downloads each file, uploads it to the S3 bucket, and updates the DB record with the new HTTP URL. It works perfectly, except that it takes forever.
Downloading a file takes some time (each file is close to a gigabyte), and uploading it takes longer.
Is it possible to upload the video file directly from the CDN to S3, so I could cut the processing time in half? Something like reading a chunk of the file and putting it to S3 while reading the next chunk.
Currently I use System.Net.WebClient to download the file and AWSSDK to upload.
PS: I have no problem with internet speed; I run the app on a server with a 1 Gbit network connection.
No, there isn't a way to direct S3 to fetch a resource, on your behalf, from a non-S3 URL and save it in a bucket.
The only "fetch"-like operation S3 supports is the PUT/COPY operation, where S3 supports fetching an object from one bucket and storing it in another bucket (or the same bucket), even across regions, even across accounts, as long as you have a user with sufficient permission for the necessary operations on both ends of the transaction. In that one case, S3 handles all the data transfer, internally.
Otherwise, the only way to take a remote object and store it in S3 is to download the resource and then upload it to S3 -- however, there's nothing preventing you from doing both things at the same time.
To do that, you'll need to write some code, using presumably either asynchronous I/O or threads, so that you can simultaneously be receiving a stream of downloaded data and uploading it, probably in symmetric chunks, using S3's Multipart Upload capability, which allows you to write individual chunks (minimum 5MB each) which, with a final request, S3 will validate and consolidate into a single object of up to 5TB. Multipart upload supports parallel upload of chunks, and allows your code to retry any failed chunks without restarting the whole job, since the individual chunks don't have to be uploaded or received by S3 in linear order.
If the origin supports HTTP range requests, you wouldn't necessarily even need to receive a "stream": you could discover the size of the object and then GET chunks by range and multipart-upload them. Do this with threads or async I/O handling multiple ranges in parallel, and you will likely be able to copy an entire object faster than you can download it in a single monolithic download, depending on the factors limiting your download speed.
I've achieved aggregate speeds in the range of 45 to 75 Mbits/sec while uploading multi-gigabyte files into S3 from outside of AWS using this technique.
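The approach is language-agnostic (the asker is using C# with System.Net.WebClient and the AWS SDK); as an illustration only, here is a rough Python sketch of the range-request + multipart idea using requests and boto3. The source URL, bucket, key, and part size are placeholders, and it assumes the origin honours Range headers and returns Content-Length on HEAD:
import math
import requests
import boto3
from concurrent.futures import ThreadPoolExecutor

s3 = boto3.client('s3')
SOURCE_URL = 'https://cdn.example.com/video.mp4'    # hypothetical origin URL
BUCKET, KEY = 'target-bucket', 'videos/video.mp4'   # hypothetical destination
PART_SIZE = 16 * 1024 * 1024                        # 16 MB parts (S3's minimum is 5 MB)

def copy_via_multipart():
    size = int(requests.head(SOURCE_URL, allow_redirects=True).headers['Content-Length'])
    upload_id = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)['UploadId']

    def copy_part(part_number):
        start = (part_number - 1) * PART_SIZE
        end = min(start + PART_SIZE, size) - 1
        chunk = requests.get(SOURCE_URL, headers={'Range': f'bytes={start}-{end}'}).content
        resp = s3.upload_part(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                              PartNumber=part_number, Body=chunk)
        return {'PartNumber': part_number, 'ETag': resp['ETag']}

    part_count = math.ceil(size / PART_SIZE)
    with ThreadPoolExecutor(max_workers=8) as pool:      # fetch and upload ranges in parallel
        parts = list(pool.map(copy_part, range(1, part_count + 1)))

    s3.complete_multipart_upload(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                                 MultipartUpload={'Parts': parts})
Failed parts could be retried individually, which is exactly the resilience the multipart API is designed for.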
I have already answered this in this question; here's the gist:
require 'aws-sdk-s3'   # provides Aws::S3::Object
require 'open-uri'     # provides URI.open

object = Aws::S3::Object.new(bucket_name: 'target-bucket', key: 'target-key')
object.upload_stream do |write_stream|
  IO.copy_stream(URI.open('http://example.com/file.ext'), write_stream)
end
This is not a 'direct' pull by S3, though. But at least it doesn't download each file and then upload it serially; it streams 'through' the client. If you run the above on an EC2 instance in the same region as your bucket, I believe this is as 'direct' as it gets, and as fast as a direct pull would ever be.
If a proxy (Node/Express) is suitable for you, then the portions of code at those two routes could be combined into a GET/POST fetch chain, retrieving the response and then re-posting its body to your destination S3 bucket.
Step one produces response.body.
Step two: set the stream in the second route to the response from the GET operation in step one, and you will upload the stream (arrayBuffer) from the first fetch to the destination bucket.
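For anyone not on Node, the same pass-through idea can be sketched in Python; this is a hedged example using requests with stream=True and boto3's upload_fileobj, with the URL, bucket, and key made up:
import requests
import boto3

s3 = boto3.client('s3')

def stream_to_s3(source_url, bucket, key):
    # Stream the GET response body straight into S3 without buffering the whole file
    with requests.get(source_url, stream=True) as resp:
        resp.raise_for_status()
        resp.raw.decode_content = True            # transparently handle gzip/deflate
        s3.upload_fileobj(resp.raw, bucket, key)  # boto3 reads the stream in chunks

stream_to_s3('https://cdn.example.com/file.ext', 'dest-bucket', 'file.ext')
Only one buffered chunk is held in memory at a time, so this works for files much larger than RAM.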

How can I upload a 1 crore (10 million) row .csv file using Django?

I am getting a memory error while uploading a CSV file of around 650 MB with shape (10882101, 6).
How can I load such a file into Postgres using the Django framework?
You haven't shared many details (error logs, which Python package you are using, etc.).
You might like to read Most efficient way to parse a large .csv in python? and https://medium.com/casual-inference/the-most-time-efficient-ways-to-import-csv-data-in-python-cc159b44063d
Here is how I would do it with the Django framework:
I would use Celery to run the job as a background process, as waiting for the upload to complete before returning a response might cause an HTTP timeout.
Celery quickstart with Django
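To make that concrete, here is a minimal sketch of such a Celery task, assuming pandas and a hypothetical Django model named Measurement (the app, model, and column names are made up and would have to match your CSV headers):
import pandas as pd
from celery import shared_task
from myapp.models import Measurement      # hypothetical app and model

@shared_task
def import_csv(path):
    # Read the 650 MB file in chunks so it never sits in memory all at once
    for chunk in pd.read_csv(path, chunksize=100_000):
        Measurement.objects.bulk_create(
            [Measurement(**row) for row in chunk.to_dict('records')],
            batch_size=10_000,
        )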

What is the best way to allow a user to upload a large csv file (> 5GB) from the browser to a python flask server?

I am working on a single-page web app that will allow users to upload a large CSV file (greater than 5 GB) and then send it to a Python Flask server to be streamed into a database. Large CSVs are hard to read in Python because of memory issues, so I think it's best to send the CSV data to a database like SQLite and then query the database to get the data back. However, I tried reading a 6.9 GB CSV file in chunks using pandas and then using the df.to_sql() method to store it in an SQLite DB, and it took about an hour, which would be inefficient in a Flask server and a terrible user experience. I have been doing some research and it seems I can use worker processes/task queues like Celery or Redis Queue to speed up the process, but I'm not too sure how to go about it or whether I need sockets to do this. I am a junior dev, so any reference to tutorials, examples, or advice would be greatly appreciated.
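One hedged way to wire this up with Flask + Celery (the broker URL, upload directory, and task body below are placeholders): accept the upload, spool it to disk, return 202 immediately, and let a worker do the slow import:
import os
from flask import Flask, request
from celery import Celery

app = Flask(__name__)
celery_app = Celery('tasks', broker='redis://localhost:6379/0')   # hypothetical broker URL
UPLOAD_DIR = '/tmp/uploads'                                       # hypothetical staging directory

@celery_app.task
def load_csv_into_db(path):
    # chunked pandas read + inserts, much like the Django sketch above
    ...

@app.route('/upload', methods=['POST'])
def upload():
    f = request.files['file']                 # werkzeug spools large uploads to a temp file
    path = os.path.join(UPLOAD_DIR, f.filename)
    f.save(path)
    load_csv_into_db.delay(path)              # the hour-long import runs in a worker, not the request
    return {'status': 'queued'}, 202          # respond immediately; poll or notify when done
No sockets are strictly required for this; the client can poll a status endpoint or be notified by email once the task finishes.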

Concurrent file upload/download and running background processes

I want to create a minimal webpage where concurrent users can upload a file, I process the file (which is expected to take some hours), and I email the results back to the user later on.
Since I am hosting this on AWS, I was thinking of invoking some background process once I receive the file, so that even if the user closes the browser window the processing keeps running and I am able to send the results after a few hours, all through some pre-written scripts.
Can you please help me with the logistics of how I should do this?
Here's how it might look (hosting-agnostic):
A user uploads a file to the web server
The file is saved in storage that can be accessed later by the background jobs
Some metadata about the file (its location in the storage, the user's email, etc.) is saved in a DB/message broker
Background jobs watching the DB/message broker pick up the metadata, start handling the file (this is why it needs to be accessible to them, see point 2), and notify the user
More specifically, in the case of Python/Django + AWS you might use the following stack:
Let's assume you're using Python + Django
You can save the uploaded files in a private AWS S3 bucket
The metadata might be saved in the DB, or you could use Celery + AWS SQS, AWS SQS directly, or bring up something like RabbitMQ or Redis (+ pub/sub)
Have Python code handling the job - this depends on what you opt for in point 3. The only requirement is that it can pull data from your S3 bucket. After the job is done, notify the user via AWS SES
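A rough, hedged sketch of that worker side, assuming Celery and boto3; the sender address, bucket/key layout, and the run_long_processing helper are made up:
import boto3
from celery import shared_task

s3 = boto3.client('s3')
ses = boto3.client('ses')

@shared_task
def process_upload(bucket, key, user_email):
    # Pull the uploaded file from the private S3 bucket, do the hours-long work, then notify
    s3.download_file(bucket, key, '/tmp/input.dat')
    result = run_long_processing('/tmp/input.dat')    # placeholder for your own processing
    ses.send_email(
        Source='noreply@example.com',                 # must be a verified SES identity
        Destination={'ToAddresses': [user_email]},
        Message={
            'Subject': {'Data': 'Your file has been processed'},
            'Body': {'Text': {'Data': str(result)}},
        },
    )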
The simplest single-server setup that doesn't require any intermediate components:
A Python script that simply saves the file in a folder and gives it a name like someuser#yahoo.com-f9619ff-8b86-d011-b42d-00cf4fc964ff
A cron job that looks for files in this folder, handles any it finds, and notifies the user (see the sketch after this list). Note that if you need multiple background jobs running in parallel, you'll need to complicate the scheme slightly to avoid race conditions (e.g. rename the file being processed so that only a single job handles it)
In a production app you'll likely need something in between, depending on your needs
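For illustration only, a sketch of that cron job in Python, using the rename trick as a crude lock; the folder path and the handle_file / send_results_email helpers are hypothetical:
import os

INBOX = '/var/uploads/inbox'                  # hypothetical folder the upload script writes to

def process_pending():
    for name in os.listdir(INBOX):
        if name.endswith('.processing'):
            continue                           # already claimed by another job
        src = os.path.join(INBOX, name)
        claimed = src + '.processing'
        try:
            os.rename(src, claimed)            # atomic on the same filesystem, so it acts as a lock
        except OSError:
            continue                           # lost the race to a parallel job
        user_email = name.split('-', 1)[0]     # the address is encoded in the filename, as above
        handle_file(claimed)                   # placeholder for the actual processing
        send_results_email(user_email, claimed)
        os.remove(claimed)

if __name__ == '__main__':
    process_pending()                          # run this script from cron every few minutes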

Should a json file that changes every 5 minutes be hosted by Flask or nginx?

I understand the concept that nginx should host my static files and I should leave Flask to serve the routes that dynamically build content. I don't quite understand where one draws the line between a static and a dynamic file, though.
Specifically, I have some JSON files that are updated every 5 minutes by a background routine that Flask runs via @cron.interval_schedule, which writes the .json to a file on the server.
Should I be building routes in Flask to return this content (simply returning the raw .json file), since the content changes every five minutes, or should I have nginx host the JSON files? Can nginx handle a file that changes every five minutes with its caching logic?
Since generating the file appears to have no relation to the request / response cycle of a Flask app, don't use Flask to serve it. If it does require the Flask app to actively do something to it for every request, then do use Flask to serve it.
