Creating a Python server in AWS without breaching the free tier

Just a conceptual question here.
I'm a newbie to AWS. I have a Node app and a Python file that currently runs on a Flask server. The Node app sends data to the Python server and gets data back; this takes approximately 3.2 seconds. I am not sure how to apply this to AWS. I tried SageMaker, but it was really costly for me. Is there any way I can create a Python server with an endpoint in AWS within the free tier?
Thanks
Rushi

You do not need to use SageMaker to deploy your Flask application to AWS. AWS has good documentation on deploying a Flask application to an AWS Elastic Beanstalk environment.
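As a rough sketch, the Elastic Beanstalk Python platform looks by default for a WSGI callable named application in application.py, so a minimal Flask app for it could look like this (the /predict route and the payload handling are just placeholders for your own logic):
# application.py - minimal Flask app for Elastic Beanstalk (illustrative sketch)
from flask import Flask, jsonify, request

# Elastic Beanstalk's Python platform expects a module-level object named "application" by default.
application = Flask(__name__)

@application.route("/predict", methods=["POST"])
def predict():
    data = request.get_json()    # JSON payload sent by the Node app (assumed)
    result = {"echo": data}      # replace with your actual processing
    return jsonify(result)

if __name__ == "__main__":
    application.run(debug=True)  # local testing only; EB serves it through WSGI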
Other than that, you can also deploy the application in two ways:
via EC2
via Lambda
EC2 Instances
You can launch an EC2 instance with a public IP and SSH access enabled from your IP address. Then SSH into the instance and install Python, its libraries, and your application.
Lambda
AWS Lambda is the perfect solution. It scales automatically, depending on the requests your application receives.
Lambda needs your dependencies to be available in the deployment package, so you need to install them with pip's --target parameter, zip the Python code along with the installed packages, and then upload the archive to Lambda.
pip install --target ./package Flask
cd package
zip -r9 function.zip . # Create a ZIP archive of the dependencies.
cd .. && zip -g function.zip lambda_function.py # Add your function code to the archive.
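For reference, a minimal lambda_function.py could look like the sketch below, assuming you call your Python logic from a plain handler behind API Gateway rather than running the full Flask app (the field names are only examples):
# lambda_function.py - minimal handler sketch (names are illustrative)
import json

def lambda_handler(event, context):
    # With an API Gateway proxy integration, the request body arrives as a JSON string.
    body = json.loads(event.get("body") or "{}")
    result = {"echo": body}  # replace with your actual processing
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }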
For more detailed instructions, you can read the documentation:
Lambda

Related

How to code a serverless AWS Lambda function that will download a Linux third-party application using wget and then execute commands from that app?

I would like a serverless Lambda to execute commands from a tool called WSO2 API CTL, just as I would on a Linux CLI. I am not sure how to mimic downloading and calling the commands as if I were on a Linux machine, using either Node.js or Python in the Lambda.
I am okay with creating and setting up the Lambda, and even getting it into the right VPC so that the commands will reach an application on an EC2 instance, but I am stuck on how to actually execute the Linux commands using either Node.js or Python, and which one would be better, if either.
After adding the following I get an error trying to download:
os.system("curl -O https://apim.docs.wso2.com/en/latest/assets/attachments/learn/api-controller/apictl-3.2.1-linux-x64.tar.gz")
Warning: Failed to create the file apictl-3.2.1-linux-x64.tar.gz: Read-only file system
It looks like there is no specific reason to download apictl during the initialisation of your Lambda. Therefore, I would propose bundling it with your deployment package.
The advantages of this approach are:
Quicker initialisation
Less code in your Lambda
You could extend your CI/CD pipeline to download the application during the build and then add it to the ZIP archive that you deploy.
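As a sketch of that idea, assuming the apictl binary ends up in a bin/ folder inside the deployment package (the paths and arguments below are placeholders), the handler can call the bundled tool with subprocess instead of downloading it at runtime:
# lambda_function.py - run a CLI bundled in the deployment package (illustrative sketch)
import os
import subprocess

# The deployment package is extracted read-only under the Lambda task root; only /tmp is writable.
APICTL = os.path.join(os.environ.get("LAMBDA_TASK_ROOT", "."), "bin", "apictl")

def lambda_handler(event, context):
    result = subprocess.run(
        [APICTL, "version"],                 # replace with the apictl command you need
        capture_output=True,
        text=True,
        env={**os.environ, "HOME": "/tmp"},  # give the tool a writable home directory
    )
    return {"rc": result.returncode, "stdout": result.stdout, "stderr": result.stderr}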

Is there a way to run an already-built Python API from Google Cloud?

I built a functioning Python API that runs from my local machine. I'd like to run this API from the Google Cloud SDK, but after looking through the documentation and googling every variation of "run local python API from google cloud SDK", I had no luck finding anything that wouldn't involve restructuring the script heavily. I have a hunch that "google run" or "API endpoint" might be what I'm looking for, but as a complete newbie to everything other than Firestore (which I would rather not convert my entire API into if I don't have to), I want to ask if there's a straightforward way to do this.
tl;dr The API runs successfully when I simply type "python apiscript.py" into my local console; is there a way I can transfer it to Google Cloud without adjusting the script itself too much?
IMO, the easiest solution for a portable app is to use a container, and to host the container in serverless mode you can use Cloud Run.
In the getting started guide you have a Python example. The main task for you is to create a Dockerfile:
FROM python:3.9-slim
ENV PYTHONUNBUFFERED True
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN pip install -r requirements.txt
CMD python apiscript.py
I adapted the script to your description, and I assumed that you have a requirements.txt file for the dependencies.
Now, build your container
gcloud builds submit --tag gcr.io/<PROJECT_ID>/apiscript
Replace PROJECT_ID with your project ID, not the name of the project (even if they are sometimes the same, this is a common mistake for newcomers).
Deploy on Cloud Run
gcloud run deploy --region=us-central1 --image=gcr.io/<PROJECT_ID>/apiscript --allow-unauthenticated --platform=managed apiscript
I assume that your API is served on port 8080; otherwise you need to add a --port parameter to override this.
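If you would rather not hard-code the port, the script can read the PORT environment variable that Cloud Run injects. A sketch assuming a Flask-style apiscript.py (adapt to whatever framework you actually use):
# apiscript.py - listen on the port Cloud Run provides (illustrative sketch)
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 8080))  # Cloud Run sets PORT; fall back to 8080 locally
    app.run(host="0.0.0.0", port=port)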
That should be enough. This is a getting-started example; you can change the region, the security mode (here, no authentication), the name, and the project.
In addition, for this deployment the Compute Engine default service account is used. You can use another service account if you want, but in any case you need to grant the service account that is used permission to access the Firestore database.

Pip install a package from a private GitHub repo in Google Cloud App Engine

I am using Google Cloud App Engine and deploying with gcloud app deploy and a standard app.yaml file. My requirements.txt file has one private package that is fetched from GitHub (git+ssh://git@github.com/...git). This install works locally, but when I run the deploy I get:
Host key verification failed.
fatal: Could not read from remote repository.
This suggests there is no SSH key available when installing. Reading the docs (https://cloud.google.com/appengine/docs/standard/python3/specifying-dependencies), it appears that this just isn't an option?
Dependencies are installed in a Cloud Build environment that does not provide access to SSH keys. Packages hosted on repositories that require SSH-based authentication must be copied into your project directory and uploaded alongside your project's code using the pip package manager.
To me this seems far from optimal: the whole point of factoring code out into a package was to avoid duplication across repos. Now, if I want to use App Engine, you're telling me this is not possible?
Is there really no workaround?
See:
https://cloud.google.com/appengine/docs/standard/python3/specifying-dependencies#private_dependencies
The App Engine service does not (and should not) have access to your private repo.
One alternative (that you don't want) is to give the App Engine build service your SSH key so it can reach the repo.
The other, as documented, is that you must provide the content of your private repo to the service as part of your upload.
I'm going through the same issue: deploying to gcloud a Python project whose requirements.txt contains some private repositories. As @DazWilkin already wrote, there's no way to deploy it the way you normally would.
One option would be to create a Docker image of the whole project and its dependencies, push it to the gcloud Docker registry, and then pull it into the App Engine instance.

How to deploy an AWS Python Lambda project locally?

I have an AWS Python Lambda function which contains a few Python files and also several dependencies.
The app is built using Chalice, so the function will be mapped like any REST function.
Before deploying to the prod environment, I want to test it locally, so I need to package the whole project (Python files and dependencies). I looked around the web for a solution but couldn't find one.
I managed to figure out how to deploy a single Python file, but deploying a whole project did not succeed.
Take a look at Atlassian's LocalStack: https://github.com/atlassian/localstack
It's a fully functional local copy of the AWS cloud stack.
I use Travis: I hooked it to my master branch in git, so that when I push to this branch, Travis tests my Lambda with a script that uses pytest, after installing all its dependencies with pip install. If all the tests pass, it then deploys the Lambda to AWS in my prod env.
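To illustrate the pytest part: a test can import the handler and call it with a fake event, so no AWS resources are needed. The module name, handler name, and event shape below are assumptions about your project:
# test_lambda.py - local unit test for the Lambda code (illustrative sketch)
import json

from app import lambda_handler  # assumed module and handler names

def test_handler_returns_200():
    fake_event = {"body": json.dumps({"ping": "pong"})}
    response = lambda_handler(fake_event, context=None)
    assert response["statusCode"] == 200
    assert json.loads(response["body"])  # body should be valid JSON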

What is the best way to run a Django project on AWS?

How should the project be deployed and run? There are loads of tools in this space. Which should be used, and why?
Supervisor
Gunicorn
Nginx
Fabric
Boto
Pip
Virtualenv
Load balancers
It depends on your configuration. We are using the following stack for our environment on Rackspace, but you can set up the same thing on AWS with EC2 instances.
Ubuntu 11.04
Varnish (in-memory cache) to avoid disk seeks
Nginx to serve static content
Apache to serve dynamic content (mod_wsgi)
Python 2.7.2 with Django
Jenkins for our continuous builds
Git for version control
Fabric for the deployment.
So the way it works is that Jenkins polls the origin repository for Git pushes. Jenkins then pulls the changes down from the origin, builds a Python egg, runs unit tests, uses Fabric to deploy this egg to the necessary environments, and reloads the Apache config to make sure the forked Apache processes pick up the new Python egg.
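A minimal Fabric task for that last step could look like the sketch below (Fabric 1.x style, matching the stack described; the host name, paths, and egg file name are placeholders):
# fabfile.py - push the built egg and reload Apache (illustrative sketch, Fabric 1.x)
from fabric.api import env, put, sudo

env.hosts = ["app1.example.com"]  # placeholder host

def deploy(egg="dist/myapp-1.0-py2.7.egg"):
    put(egg, "/srv/eggs/")                                # upload the egg Jenkins built
    sudo("easy_install /srv/eggs/" + egg.split("/")[-1])  # install it into the environment
    sudo("service apache2 reload")                        # forked mod_wsgi processes pick up the new code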
Hope this helps.
As Michael Klockel already stated, it depends on your configuration. I have:
Ubuntu 10.04 LTS
Nginx
uWSGI
Git for version control
Python virtualenv and pip
You can check the deployment settings here:
Django, Virtualenv, nginx + uwsgi import module wsgi error
and why I use Nginx and uWSGI here:
http://nichol.as/benchmark-of-python-web-servers
I also use Fabric for deployment of the app, and Chef Solo: http://ericholscher.com/blog/2010/nov/8/building-django-app-server-chef/
Johnny Cache for SQL queries, and Raven and Sentry to keep a log of what's going on in the app.
I'd use uWSGI + Nginx from a performance perspective (I think the comparison has already been linked in another answer), and pip and virtualenv for deployment, as this keeps things self-contained and facilitates clean deployment using Fabric or similar. Use Git for version control. Jenkins can handle continuous integration. I'd put the AWS load balancer (ELB) in front of your EC2 instances for balancing; it does the job without you having to fret too much about it. Use django-storages to upload your static files to S3; that saves you the effort of having another server to hand out static files.
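For the django-storages part, a hedged settings sketch using the boto3 backend (the bucket name is a placeholder and credentials are read from the environment):
# settings.py excerpt - serve static files from S3 via django-storages (illustrative sketch)
import os

INSTALLED_APPS += ("storages",)

AWS_STORAGE_BUCKET_NAME = "my-static-bucket"  # placeholder bucket
AWS_ACCESS_KEY_ID = os.environ.get("AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY")

# Route collectstatic output and static URLs to the bucket.
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
STATIC_URL = "https://%s.s3.amazonaws.com/" % AWS_STORAGE_BUCKET_NAME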
However, it depends a little on your admin overhead. If you're looking for something clean and simple for deployment and scaling, I'd scrap the whole AWS EC2 stack, use Heroku as the front end, and S3 for your static files. This saves all the admin time of maintaining the boxes and lets you concentrate on development.
