I have developed a Lambda function which hits an API URL and gets the data in JSON format. I need to use modules/libraries like requests, which are not available in the AWS online editor when using Python 2.7.
So I need to upload the code as a ZIP file. How can I deploy the Lambda function step by step from a local Windows machine to the AWS console? What are the requirements?
You could use CodeBuild, which will build your code in the AWS Linux environment. Then it won't matter whether your local environment is Windows or Linux.
CodeBuild will put the artifacts directly on S3, and from there you can upload them directly to Lambda.
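If it helps, that last step can also be scripted. Here is a minimal boto3 sketch (the function name, bucket and key are placeholders) that points an existing Lambda function at the ZIP artifact CodeBuild wrote to S3:
import boto3

lambda_client = boto3.client("lambda")

# Update an existing function with the ZIP artifact that CodeBuild uploaded to S3.
# FunctionName, S3Bucket and S3Key are placeholders for your own names.
lambda_client.update_function_code(
    FunctionName="my-api-function",
    S3Bucket="my-build-artifacts",
    S3Key="builds/function.zip",
)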
I would like to use a serverless Lambda that will execute commands from a tool called WSO2 API CTL, just as I would on a Linux CLI. I am not sure how to mimic downloading and calling the commands as if I were on a Linux machine, using either Node.js or Python in the Lambda.
I am okay with creating and setting up the Lambda, and even getting it into the right VPC so that the commands will reach an application on an EC2 instance, but I am stuck on how to actually execute the Linux commands using either Node.js or Python, and which one would be better, if any.
After adding the following I get an error trying to download:
os.system("curl -O https://apim.docs.wso2.com/en/latest/assets/attachments/learn/api-controller/apictl-3.2.1-linux-x64.tar.gz")
Warning: Failed to create the file apictl-3.2.1-linux-x64.tar.gz: Read-only
It looks like there is no specific reason to download apictl during the initialisation of your Lambda. Therefore, I would propose to bundle it with your deployment package.
The advantages of this approach are:
Quicker initialisation
Less code in your Lambda
You could extend your CI/CD pipeline to download the application during build and then add it to your ZIP archive that you deploy.
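As a rough illustration, here is a minimal Python handler sketch that invokes a bundled apictl binary with subprocess. It assumes apictl was added to the deployment package next to the handler with its executable bit preserved in the ZIP; the package is extracted to a read-only path, so anything apictl needs to write (config, output) should be redirected to /tmp, and the "version" subcommand is just a placeholder for whatever command you actually need:
import os
import subprocess

# Assumption: the apictl binary sits next to this handler in the deployment package.
APICTL = os.path.join(os.path.dirname(__file__), "apictl")

def lambda_handler(event, context):
    # /tmp is the only writable path in Lambda, so point HOME (and any output) there.
    env = dict(os.environ, HOME="/tmp")
    result = subprocess.run(
        [APICTL, "version"],  # placeholder: replace with the apictl command you need
        capture_output=True, text=True, env=env, check=False,
    )
    return {"rc": result.returncode, "stdout": result.stdout, "stderr": result.stderr}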
I wonder whether it is possible to create an HTTP-triggered Python function on Azure Functions without doing any local coding? I want to do everything in the Azure cloud. My Python function code is in a GitHub/Azure Repos repository, but I do not have all the extra files of an Azure Functions project (for example, an __init__.py script file that is the HTTP trigger function of the Function App). Is it possible to generate those files from Azure (without generating any Azure Functions related files on my local computer)? I noticed that we cannot do in-portal editing for Python Function Apps.
As far as I know, we can only deploy a Python function from local to the Azure cloud; we cannot create it entirely in the portal as you expected. But I don't think it will be too difficult to deploy the Python function from local to the Azure cloud.
Since you already have the main Python function code ("__init__.py"), you just need to sign in to Azure in VS Code and create a Python function by following this tutorial. Then replace the generated function code with your "__init__.py" code. After that, run the command below in the "TERMINAL" window to generate the "requirements.txt":
pip freeze > requirements.txt
The "requirements.txt" includes all of the modules which imported in your "init.py" and when the function deployed to azure, azure will install these modules by this "requirements.txt". I saw you mentioned you don't have all the extra files of this function project, if these modules are what you care about, the "requirements.txt" will solve your problem.
Then use this command to deploy it to Azure:
func azure functionapp publish hurypyfunapp --build remote
I currently have a Python project which reads data from an Excel file, transforms and formats it, performs intensive calculations on the formatted data, and generates an output. This output is written back to the same Excel file.
The script is run as a PyInstaller EXE, which packages all the required libraries together with the code itself, so users are not required to prepare an environment to run the script.
Both the script EXE and the Excel file sit on the user's machine.
I need some suggestions on how this entire workflow could be achieved using AWS, e.g. which AWS services would be required.
Any inputs would be appreciated.
One option would include using S3 to store the input and output files. You could create a lambda function (or functions) that does the computing work and that writes the update back to S3.
You would need to include the Python dependencies in your deployment zip that you push to AWS Lambda or create a Lambda layer that has the dependencies.
You could build triggers to run on things like S3 events (a file being added to S3 triggers the Lambda), on a schedule (EventBridge rule invokes the Lambda according to a specific schedule), or on demand using an API (such as an API Gateway that users can invoke via a web browser or HTTP request). It just depends on your need.
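As a rough sketch (not a drop-in solution), an S3-triggered handler for this kind of workflow could look like the following; the "output/" prefix and the process_workbook helper are placeholders for your own transformation and calculation logic, and any Excel library you use (pandas, openpyxl, ...) would have to be included in the deployment ZIP or a Lambda layer:
import os
import urllib.parse
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Triggered by an S3 ObjectCreated event; work out which file arrived.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    # Lambda can only write under /tmp.
    local_in = os.path.join("/tmp", os.path.basename(key))
    s3.download_file(bucket, key, local_in)

    local_out = process_workbook(local_in)  # your existing calculation code goes here

    # Write the result back to S3 under an output prefix (placeholder).
    s3.upload_file(local_out, bucket, "output/" + os.path.basename(local_out))
    return {"processed": key}

def process_workbook(path):
    # Placeholder for the pandas/openpyxl logic bundled with the function.
    return path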
I want to execute the AWS CLI's s3 sync from AWS Lambda.
When I use ./aws I get the following error:
('Status : FAIL', 127, 's3: ./aws: No such file or directory\n')
I even tried using the full path but still got an error:
('Status : FAIL', 127, 's3: /Library/Frameworks/Python.framework/Versions/2.7/bin/aws: No such file or directory\n')
I would recommend using the AWS SDK, which is built into AWS Lambda. With it you can access S3 and many other AWS services.
Try to look here for programming in Python
The other SDKs can be found here.
Remember that you can only use the languages supported by Lambda.
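For instance, copying an object from one bucket to another with boto3 looks like this (bucket and key names are placeholders):
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Copy a single object between buckets; all names are placeholders.
    s3.copy_object(
        Bucket="destination-bucket",
        Key="backup/report.csv",
        CopySource={"Bucket": "source-bucket", "Key": "report.csv"},
    )
    return "copied"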
The AWS CLI is not installed on AWS Lambda, so that won't work out of the box.
As the AWS CLI is just a Python package, you could upload it as part of your deployment package if you're using Python as runtime.
You can use the Python boto3 library, which is already included in your Lambda environment, to upload to your S3 bucket. As far as I know it doesn't have a sync command, but it can do the same job as the aws s3 cp command used in the CLI.
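A rough sketch of a sync-like upload with boto3 (directory, bucket and prefix are placeholders) could look like this:
import os
import boto3

s3 = boto3.client("s3")

def upload_dir(local_dir, bucket, prefix=""):
    # Walk a local directory (e.g. /tmp in Lambda) and upload every file,
    # mirroring the directory structure under the given prefix.
    for root, _, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.join(prefix, os.path.relpath(path, local_dir)).replace("\\", "/")
            s3.upload_file(path, bucket, key)

upload_dir("/tmp/output", "my-bucket", "output")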
I've used the AWS CLI in AWS Lambda by adding it as a layer. You can add this layer by codifying the Lambda infrastructure using CDK constructs.
https://pypi.org/project/aws-cdk.lambda-layer-awscli/
In my use case, I wanted to run the aws s3 sync command.
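As a sketch (not a complete stack), the AwsCliLayer construct from that package can be attached to a function roughly like this in CDK v1 Python; the function name, handler and code path are placeholders, and the exact import path depends on your CDK version:
from aws_cdk import core as cdk
from aws_cdk import aws_lambda as _lambda
from aws_cdk.lambda_layer_awscli import AwsCliLayer

class CliLambdaStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        fn = _lambda.Function(
            self, "SyncFn",
            runtime=_lambda.Runtime.PYTHON_3_8,
            handler="handler.main",
            code=_lambda.Code.from_asset("lambda"),  # placeholder code path
        )
        # Attach the AWS CLI layer so the aws binary is available under /opt at runtime.
        fn.add_layers(AwsCliLayer(self, "AwsCliLayer"))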
I am going to create a search API for Android and iOS developers.
Our client has set up a Lambda function in AWS.
Now we need to fetch data using the jwplatform API based on a search keyword passed as a parameter. For this, I have to install the jwplatform module in the Lambda function, or upload a ZIP file of the code with its dependencies. So I want to run the Python script locally first, and once I get the appropriate result I will upload the ZIP to AWS Lambda.
I want to use the videos/list class (jwplatform API) to search the video library using Python, but I don't know much about Python. So I want to know: how do I run the Python script, and where should I put it?
There are a handful of useful Python script examples here: https://github.com/jwplayer/jwplatform-py
I succeeded in installing the jwplatform module locally.
Steps are as follows:
1. Open a command line
2. Type 'python' on the command line to check that Python is installed (then exit the interpreter)
3. Run the command 'pip install jwplatform' from the command line (not inside the Python interpreter)
4. Now you can use the jwplatform API.
The command above installs the jwplatform module locally.
But my other challenge was to install jwplatform in AWS Lambda.
After some research I succeeded in installing the module in AWS Lambda. I bundled the module and my code in a single directory (e.g. by installing it there with pip install jwplatform -t .), created a ZIP of that bundle, and uploaded it to AWS Lambda. This makes the jwplatform module available in AWS Lambda.
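For completeness, a hedged sketch of a handler that calls videos/list with the bundled module might look like this; it assumes the v1 jwplatform Python client (the 2.x client has a different interface), and the environment variable names and parameters are placeholders you should check against the client's documentation:
import os
import jwplatform  # bundled in the deployment ZIP alongside this handler

def lambda_handler(event, context):
    # Credentials are assumed to come from environment variables (placeholders).
    client = jwplatform.Client(os.environ["JW_API_KEY"], os.environ["JW_API_SECRET"])

    keyword = event.get("keyword", "")
    # videos/list call in the v1 client; parameter names may differ in newer versions.
    response = client.videos.list(search=keyword, result_limit=10)
    return response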