Using AWS Lambda /tmp with Python

for <some-condition>:
    g.to_csv(f'/tmp/{k}.csv')
This example makes use of /tmp/. When /tmp/ is not used in g.to_csv(f'/tmp/{k}.csv'), it raises a "Read-only file system" error, as explained here: https://stackoverflow.com/a/42002539/13016237. So the question is: does AWS Lambda clear /tmp/ on its own, or does it have to be done manually? Is there any workaround for this within the scope of boto3? Thanks!

/tmp, as the name suggests, is only temporary storage. It should not be relied upon for any long-term data storage. Files in /tmp persist for as long as the Lambda execution context is kept alive. That time is not defined and varies.
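If you would rather not depend on that, you can clear /tmp yourself at the start of each invocation. A minimal sketch, assuming your function writes .csv files there (the handler name and glob pattern are illustrative):
import glob
import os

def lambda_handler(event, context):
    # /tmp may be reused when the execution context is warm, so remove
    # any CSV files left over from a previous invocation.
    for path in glob.glob('/tmp/*.csv'):
        os.remove(path)
    # ... write fresh files to /tmp as in the question ...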
To overcome the size limitation (512 MB) and to ensure long-term data storage, two solutions are commonly employed:
Using Amazon EFS with Lambda
Using AWS Lambda with Amazon S3
The use of EFS is easier (but not cheaper), as it presents a regular filesystem to your function which you can read and write directly. You can also reuse the same filesystem across multiple Lambda functions, instances, containers and more.
S3 will be cheaper, but some extra work is required on your part to use it seamlessly from Lambda. Pandas does support S3, but for seamless integration you would have to include s3fs in your deployment package (or a layer) if it is not already present. S3 can also be accessed from different functions, instances and containers.

g.to_csv('s3://my_bucket/my_data.csv') should work if you package s3fs with your Lambda.
Another option is to build the CSV in memory and use boto3 to create an object in S3.
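A minimal sketch of that in-memory option, reusing g from the question; the bucket and key names (my_bucket, my_data.csv) are placeholders:
import io
import boto3

s3 = boto3.client('s3')

# Serialize the DataFrame to a CSV string in memory instead of /tmp.
buffer = io.StringIO()
g.to_csv(buffer, index=False)

# Upload the buffer contents as an S3 object.
s3.put_object(Bucket='my_bucket', Key='my_data.csv', Body=buffer.getvalue())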

Related

How does a Python AWS Lambda function interact with the uploaded file?

I'm trying to do the following:
When I upload a file to my S3 storage, the Lambda picks up this JSON file and converts it into a CSV file.
How can I specify in the Lambda code which file it must pick?
Example of my code running locally:
import pandas as pd
df = pd.read_json('movies.json')
df.to_csv('csv-movies.csv')
In this example, I provide the name of the file... but how can I manage that in a Lambda?
I think I don't understand how Lambda works... could you give me an example?
Lambda spins up execution environments to handle your requests. When it initialises these environments, it'll pull the code you uploaded, and execute it when invoked.
Execution environments have a concept of ephemeral (temporary) storage with a default size of 512 MB.
Lambda doesn't have access to your files in S3 by default. You'd first need to download your file from S3 using something like the AWS SDK for Python. You can store it in the /tmp directory to make use of the ephemeral storage I mentioned earlier.
Once you've downloaded the file using the SDK, you can interact with it as you would if you were running this locally, like in your example.
On the flip side, you'd also need to use the SDK to upload the CSV back to S3 if you want to keep it beyond the lifecycle of that execution environment.
Something else you might want to explore in future is reading that file into memory and doing away with storing it in ephemeral storage altogether.
In order to achieve this you will need to use S3 as the event source for your Lambda. There's a useful tutorial for this provided by AWS themselves, which has some sample Python code to assist you; you can view it here.
To break it down slightly further and answer how you get the name of the file: the Lambda handler will look similar to the following:
def lambda_handler(event, context):
What is important here is the event object. When your event source is the S3 bucket, you will be given the name of the bucket and the S3 key of the object, which is effectively the path to the file in the bucket. With this information you can apply some logic to decide whether you want to download the file from that path. If you do, you can use the S3 get_object() API call as shown in the tutorial.
Once the file is downloaded, it can be used like any other file on your local machine, so you can then proceed to convert the JSON to a CSV. Once it is converted, you will presumably want to put it back in S3; for this you can use the S3 put_object() call and reuse the information in the event object to specify the path.
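Putting those pieces together, here is a minimal sketch of such a handler. It assumes pandas is available in the deployment package or a layer, and the output key csv-movies.csv is just an illustrative placeholder:
import io
import urllib.parse

import boto3
import pandas as pd

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # The S3 event tells you which object triggered the function.
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(record['s3']['object']['key'])

    # Read the JSON object into memory rather than saving it to /tmp first.
    obj = s3.get_object(Bucket=bucket, Key=key)
    df = pd.read_json(io.BytesIO(obj['Body'].read()))

    # Convert to CSV in memory and write it back to the same bucket.
    buffer = io.StringIO()
    df.to_csv(buffer, index=False)
    s3.put_object(Bucket=bucket, Key='csv-movies.csv', Body=buffer.getvalue())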

Bringing machine learning to live production with an AWS Lambda function

I am currently working on implementing Facebook Prophet for a live production environment. I haven't done that before, so I wanted to present my plan here in the hope you can give me some feedback on whether it is an okay solution, or any suggestions you might have.
1. Within Django, I create a .csv export of the relevant data I need for my predictions. These .csv exports are uploaded to an AWS S3 bucket.
2. From there I can access the S3 bucket with an AWS Lambda function, where the "heavy" calculations happen.
3. Once done, I take the forecasts from step 2 and save them again in a forecast.csv export.
4. Now my Django application can access forecast.csv on S3 and get the respective predictions.
I am especially curious whether an AWS Lambda function is the right tool in this case. The exports could probably also be saved in DynamoDB (?), but I am trying to keep my v1 simple, hence .csv. There is still some effort involved in installing the right layers/packages for AWS Lambda, so I want to make sure I am walking in the right direction before diving deep into its documentation.
I am a little concerned about using AWS Lambda for the "heavy" calculations. There are a few reasons:
Package size limit: AWS Lambda has a size limit of 250 MB for the unzipped deployment package. This was the biggest limitation we faced, as you may not be able to include all the libraries like numpy, pandas, matplotlib, etc. within that limit.
Disk size limit: AWS only provides 512 MB of disk space (/tmp) by default for Lambda execution, which can become a problem if you want to save intermediate results to disk.
The cost can skyrocket: If your lambda is going to run for a long time instead of multiple small invocations, you will end up paying a lot of money. In that case, I think you will be better off with something like EC2 and ECS.
You could evaluate linking the S3 bucket to an SQS queue and having a process running on an EC2 machine listen to the queue and perform all the calculations (see the sketch below).
AWS Lambda Limits.
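If you go that route, a minimal sketch of the EC2-side consumer could look like the following; it assumes the bucket sends event notifications to an SQS queue, and the queue URL is a placeholder:
import json
import boto3

sqs = boto3.client('sqs')
s3 = boto3.client('s3')

# Placeholder queue URL; the bucket's event notifications are assumed to target this queue.
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/forecast-jobs'

while True:
    # Long-poll for S3 event notifications.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get('Messages', []):
        body = json.loads(msg['Body'])
        for record in body.get('Records', []):
            bucket = record['s3']['bucket']['name']
            key = record['s3']['object']['key']
            obj = s3.get_object(Bucket=bucket, Key=key)
            # ... run the heavy Prophet forecast on obj['Body'] and write forecast.csv back to S3 ...
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])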

Working around AWS Lambda space limitations

I am doing an example of a simple linear regression in Python and I want to make use of Lambda functions to make it work on AWS so that I can interface it with Alexa. The problem is my Python package is 114 MB. I have tried to separate the package and the code so that I have two Lambda functions, but to no avail. I have tried every possible way on the internet.
Is there any way I could upload the packages to S3 and read them from there, like how we read CSVs from S3 using the boto3 client?
Yes, you can upload the package to S3. There is a limit for that as well; it's currently 250 MB unzipped. https://docs.aws.amazon.com/lambda/latest/dg/limits.html
Here's a simple command to do that.
aws lambda update-function-code --function-name FuncName --zip-file fileb://path/to/zip/file --region us-west-2
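If the zipped package is already in S3, you can also point Lambda at it from Python with boto3 rather than uploading the file again; a minimal sketch (the bucket and key names are placeholders):
import boto3

lambda_client = boto3.client('lambda', region_name='us-west-2')

# Deploy a package that was previously uploaded to S3 (placeholder names).
lambda_client.update_function_code(
    FunctionName='FuncName',
    S3Bucket='my-deployment-bucket',
    S3Key='path/to/package.zip',
)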
There are certain limitations when using AWS Lambda:
1) The total size of your uncompressed code and dependencies should be less than 250 MB.
2) The size of your zipped code and dependencies should be less than 50 MB for direct upload.
3) The total size of all function packages in a region should not exceed 75 GB.
If you are exceeding the limit, try finding smaller libraries with fewer dependencies, or break your functionality into multiple microservices rather than building code which does all the work for you. This way you don't have to include every library in each function. Hope this helps.

parallel copy of buckets/keys with boto3 or the boto API between 2 different accounts/connections

I want to copy keys between buckets in 2 different accounts using the boto3 APIs.
In boto3, I executed the following code and the copy worked:
import boto3

source = boto3.client('s3')
destination = boto3.client('s3')
obj = source.get_object(Bucket='bucket', Key='key')
destination.put_object(Bucket='bucket', Key='key', Body=obj['Body'].read())
Basically, I am fetching the data with GET and writing it with PUT into the other account.
On Similar lines in boto api, I have done the following
from boto.s3.connection import S3Connection
from boto.s3.key import Key

source = S3Connection()
source_bucket = source.get_bucket('bucket')
source_key = Key(source_bucket, key_name)
destination = S3Connection()
destination_bucket = destination.get_bucket('bucket')
dist_key = Key(destination_bucket, source_key.key)
dist_key.set_contents_from_string(source_key.get_contents_as_string())
The above code achieves the purpose of copying any type of data.
But the speed is really very slow: it takes around 15-20 seconds to copy 1 GB of data, and I have to copy 100+ GB.
I tried Python multithreading, wherein each thread performs a copy operation. The performance was worse, as it took 30 seconds to copy 1 GB. I suspect the GIL might be the issue here.
I tried multiprocessing and I get the same result as with a single process, i.e. 15-20 seconds per 1 GB file.
I am using a very high-end server with 48 cores and 128 GB RAM. The network speed in my environment is 10 Gbps.
Most of the search results talk about copying data between buckets in the same account, not across accounts. Can anyone please guide me here? Is my approach wrong? Does anyone have a better solution?
Yes, it is the wrong approach.
You shouldn't download the file. You are using AWS infrastructure, so you should make use of the efficient AWS backend calls to do the work. Your approach is wasting resources.
boto3.client.copy will do the job better than this.
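For illustration, a minimal sketch of such a server-side copy with boto3 (the bucket and key names are placeholders; the credentials used need read access on the source object and write access on the destination bucket):
import boto3

s3 = boto3.client('s3')

copy_source = {'Bucket': 'source-bucket', 'Key': 'path/to/object'}

# copy() performs the transfer inside S3 (using multipart copy for large
# objects), so the data never passes through your machine.
s3.copy(copy_source, 'destination-bucket', 'path/to/object')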
In addition, you didn't describe what you are trying to achieve (e.g. is this some sort of replication requirement?).
With a proper understanding of your own needs, it is possible that you don't even need a server to do the job: S3 bucket event triggers, Lambda, etc. can all execute the copying job without a server.
To copy files between two different AWS accounts, you can check out this link: Copy S3 object between AWS account
Note:
S3 is a huge virtual object store for everyone; that's why the bucket name MUST be unique. This also means the S3 "controller" can do a lot of fancy work, similar to a file server, e.g. replicating, copying and moving files in the backend, without involving network traffic.
As long as you set up the proper IAM permissions/policies for the destination bucket, objects can move across buckets without an additional server.
This is similar to a file server: users can copy files to each other without "download/upload". Instead, one just creates a folder with write permission for all, and copying a file from another user is done entirely within the file server, with the fastest raw disk I/O performance. You don't need a powerful instance or a high-performance network to use the backend S3 copy API.
Your method is similar to attempting to FTP-download a file from a user on the same file server, which creates unwanted network traffic.
You should check out the TransferManager in boto3. It will automatically handle the threading of multipart uploads in an efficient way. See the docs for more detail.
Basically, you just have to use the upload_file method and the TransferManager will take care of the rest.
import boto3
# Get the service client
s3 = boto3.client('s3')
# Upload tmp.txt to bucket-name at key-name
s3.upload_file("tmp.txt", "bucket-name", "key-name")
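If you want to tune how the transfer machinery behaves, upload_file also accepts a TransferConfig; a small sketch with illustrative values:
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client('s3')

# Illustrative values: use multipart transfers above 64 MB with 10 threads.
config = TransferConfig(multipart_threshold=64 * 1024 * 1024, max_concurrency=10)

s3.upload_file("tmp.txt", "bucket-name", "key-name", Config=config)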

How to efficiently copy all files from one directory to another in an amazon S3 bucket with boto?

I need to copy all keys from '/old/dir/' to '/new/dir/' in an Amazon S3 bucket.
I came up with this script (quick hack):
import boto
s3 = boto.connect_s3()
thebucket = s3.get_bucket("bucketname")
keys = thebucket.list('/old/dir')
for k in keys:
    newkeyname = '/new/dir' + k.name.partition('/old/dir')[2]
    print 'new key name:', newkeyname
    thebucket.copy_key(newkeyname, k.bucket.name, k.name)
For now it is working, but it is much slower than what I can do manually in the graphical management console by just copy/pasting with the mouse. Very frustrating, and there are lots of keys to copy...
Do you know of any quicker method? Thanks.
Edit: maybe I can do this with concurrent copy processes. I'm not really familiar with boto's copy key methods or how many concurrent requests I can send to Amazon.
Edit 2: I'm currently learning Python multiprocessing. Let's see if I can send 50 copy operations simultaneously...
Edit 3: I tried 30 concurrent copies using the Python multiprocessing module. Copying was much faster than within the console and less error-prone. There is a new issue with large files (>5 GB): boto raises an exception. I need to debug this before posting the updated script.
Regarding your issue with files over 5 GB: S3 doesn't support uploading or copying objects over 5 GB in a single PUT request, which is what boto tries to do here (see the boto source and the Amazon S3 documentation).
Unfortunately I'm not sure how you can get around this, apart from downloading and re-uploading in a multipart upload. I don't think boto supports a multipart copy operation yet (if such a thing even exists).
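For what it's worth, the newer boto3 library offers a managed copy that transparently switches to multipart copy for large objects, so the 5 GB single-request limit does not apply there. A minimal sketch, mirroring the question's prefixes (bucket name is a placeholder):
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('bucketname')

for obj in bucket.objects.filter(Prefix='/old/dir'):
    new_key = '/new/dir' + obj.key[len('/old/dir'):]
    copy_source = {'Bucket': obj.bucket_name, 'Key': obj.key}
    # The managed copy uses multipart copy under the hood for large objects.
    bucket.copy(copy_source, new_key)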
