Pushing to an existing AWS Elastic Beanstalk application from the command line - python

I've used the web dashboard of Elastic Beanstalk to make an application and an environment. I know I can update that using the dashboard and uploading a zip file of my application, but I would rather use the command line to upload my application.
Apparently the correct tool for this is eb, the CLI for Elastic Beanstalk. I've installed this and attempted to use it, following the Amazon "Deploying a Flask Application to AWS Elastic Beanstalk" tutorial. However, this seems to create a completely different application from the one visible on the EB dashboard - changes made to it don't appear on the dashboard, and the application even has a different URL.
How can I use the command line to access an existing application on AWS Elastic Beanstalk?

To begin using git aws.push for your application you will have to initialize your git repository with AWS Beanstalk metadata. I'm assuming you are using git for version control (if you are not, you will have to initialize your project with git init first).
$ cd angrywhopper
$ git init #optional
$ eb init
...
$ git aws.push
Walk through the wizard steps, commit your code, and push the app.
Elastic Beanstalk container can be further customized by either rerunning eb init or with configuration file inside .ebextensions directory.
If eb does not support something you would like to use, have a look at AWS Elastic Beanstalk API Command Line Interface, which is more feature-rich.
More details on the configuration can be found in the following guides:
Customizing and Configuring AWS Elastic Beanstalk Environments
Customizing and Configuring a Python Container
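For example, a minimal configuration file could set an environment variable for your application (the file name and values here are illustrative, not from the original question):

```yaml
# .ebextensions/app.config (illustrative name)
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: DJANGO_SETTINGS_MODULE
    option_value: mysite.settings
```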
Make sure that the service region in the eb wizard is the same as the region you picked in the dashboard dropdown.
NB: I would suggest using a temporary name at first to make sure your app works as expected with the new workflow, and then renaming it to the original by rerunning eb init. Don't forget to terminate the temporary environment as soon as you're done with the migration to avoid unnecessary fees.

Here are the steps to use "git aws.push" with your existing Elastic Beanstalk (EB) application. (These steps are useful for your question specifically, and also if you had set up EB using the command line on another machine and are now setting up the tools on a new one.)
--
Before you start
You should have git installed on your system and your code should have a git repository.
Download the latest "AWS Elastic Beanstalk Command Line Tool" and get it working. You can find a download link here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-reference-branch-environment.html
The git aws.push command won't work yet because your .ebextensions isn't configured. (Basically, the .ebextensions stores your AWS keys and info on the EB instance to deploy to, etc.)
--
Steps
Run the eb --init command. (I do this from the root of my application code directory, and it automatically picks up the name of the application. You may also be able to run the command from another location and specify the name manually later.)
AWS-ElasticBeanstalk-CLI-2.6.0/eb/linux/python2.7/eb (on Linux) or
AWS-ElasticBeanstalk-CLI-2.6.0/eb/windows/eb.exe (on Windows)
Enter your AWS Access Key ID and Secret Access Key.
Select the environment you configured your application with (the choices are AMI Linux 64-bit, Ubuntu 32-bit, etc.). Basically, select the options that you selected while creating your first EB instance.
For Create RDS instance? [y/n]: say n (you already have a DB instance or don't need one).
Choose "Create a default instance profile".
This is the last step under eb --init, and the script will exit.
You can find more information on the above steps here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python.html
--
Environment ready to use
The above steps will result in a .ebextensions directory (in ~/, I guess).
From now on, just git commit the changes in your code and run git aws.push, and the application will be deployed to AWS. It's quite cool once you have it all configured.
Hope this helps. I jotted this down quickly. Let me know if you find the steps confusing, and I'll try to write it better.

1. You created the application in aws.amazon.com -> Elastic Beanstalk and are trying to access it with the eb CLI:
a. When you run eb init on the console, the CLI will prompt you to choose the region.
b. Make sure to choose the same region as the one you chose on the webpage.
(Note: if you don't choose the same region, it is going to take you into creating a whole new application. This was the mistake I made.)
2. You created the application locally with the eb CLI first and are trying to access it on the webpage:
a. $> eb console (from the app root directory, provided you ran $> eb init initially)
b. You can also log in to the website directly; make sure to choose the same region (e.g. US - N. California) where you configured the app locally, and you should see the application you deployed.

Related

Is there a way to run an already-built python API from google cloud?

I built a functioning python API that runs from my local machine. I'd like to run this API from Google Cloud SDK, but after looking through the documentation and googling every variation of "run local python API from google cloud SDK" I had no luck finding anything that wouldn't involve me restructuring the script heavily. I have a hunch that "google run" or "API endpoint" might be what I'm looking for, but as a complete newbie to everything other than Firestore (which I would rather not convert my entire api into if I don't have to), I want to ask if there's a straightforward way to do this.
tl;dr The API runs successfully when I simply type "python apiscript.py" into local console, is there a way I can transfer it to Google Cloud without adjusting the script itself too much?
IMO, the easiest solution for a portable app is to use a container, and to host the container in serverless mode you can use Cloud Run.
In the getting started guide, you have python example. The main task for you is to create a Dockerfile
FROM python:3.9-slim
ENV PYTHONUNBUFFERED True
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN pip install -r requirements.txt
CMD python apiscript.py
I adapted the script to your description, and I assumed that you have a requirements.txt file for the dependencies.
Now, build your container
gcloud builds submit --tag gcr.io/<PROJECT_ID>/apiscript
Replace PROJECT_ID with your project ID, not the name of the project (even if they are sometimes the same, it's a common mistake for newcomers).
Deploy on Cloud Run
gcloud run deploy --region=us-central1 --image=gcr.io/<PROJECT_ID>/apiscript --allow-unauthenticated --platform=managed apiscript
I assume that your API is served on port 8080; otherwise you need to add a --port parameter to override this.
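Note that Cloud Run sends requests to the port given by the PORT environment variable (8080 by default), so apiscript.py has to bind to it. A minimal stdlib sketch of that contract (the handler body is a placeholder, not your actual API):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Placeholder response; your real API logic goes here
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet for this sketch

def make_server(port=None):
    # Cloud Run injects the listening port via the PORT env var
    if port is None:
        port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("0.0.0.0", port), Handler)

# apiscript.py would end with: make_server().serve_forever()
```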
That should be enough.
This is a getting-started example; you can change the region, the security mode (here: unauthenticated), the name, and the project.
In addition, for this deployment the Compute Engine default service account is used. You can use another service account if you want, but in any case you need to grant the service account permission to access the Firestore database.

Where is my python-flask app source stored on ec2 instance deployed with elastic beanstalk?

I just deployed a flask-python app with elastic beanstalk on AWS but cannot locate my app source files like application.py or templates/index.html etc
I've looked at looked at /var/../.. or /opt/../.. etc but nowhere to be found.
Is there an ebs command like $ eb find 'filename.py' etc?
/opt/python – Root of where your application will end up.
/opt/python/current/app – The current application that is hosted in the environment.
/opt/python/on-deck/app – The app is initially put in on-deck and then, after the deployment is complete, moved to current. If you are getting failures in your container_commands, check out the on-deck folder, not the current folder.
/opt/python/current/env – All the env variables that eb will set up for you. If you are trying to reproduce an error, you may first need to source /opt/python/current/env to get things set up as they would be when eb deploy is running.
/opt/python/run/venv – The virtual env used by your application; you will also need to run source /opt/python/run/venv/bin/activate if you are trying to reproduce an error.
As of today, using the default AWS Linux option when creating eb (Python 3.7 running on 64bit Amazon Linux 2/3.1.1), I found the application files in:
/var/app/current
If that doesn't work, you can search for a file that you know to be unique in your app, e.g.:
sudo find / -name hello.html
In my case the above returns
/var/app/current/templates/hello.html

Is there a better way to set a gcloud project in a directory?

I work on multiple App Engine projects in any given week, i.e. assume multiple clients. Earlier I could set application in app.yaml, so whenever I did appcfg.py update.... it would ensure deployment to the right project.
With gcloud app deploy, the application variable in app.yaml throws an error; I had to use gcloud app deploy --project [YOUR_PROJECT_ID]. So what used to be a directory-level setting for a project now lives in our build tooling, and missing that simple detail can push a project's code to the wrong customer.
I.e., if I did gcloud config set project proj1 and then somehow ran gcloud app deploy from proj2, it would deploy to proj1. Production deployments are done after detailed verification in the build tools, so it is less of an issue there because we still use the --project flag.
But it's hard to do the same in the development environment, and dev_appserver.py doesn't have a --project flag.
When starting dev_appserver.py I have to run gcloud config set project <project-id> before I start the server. This matters when I'm using things like Pub/Sub or GCS (with dev topics or dev buckets).
Unfortunately, missing a simple configuration like setting the project ID in a dev environment can result in uploading blobs/messages/etc. into the wrong dev GCS bucket or dev Pub/Sub topic (when not using emulators). And this has happened quite a few times, especially when starting new projects.
I find the above solutions as hackish-workarounds. Is there a good way to ensure that we do not deploy or develop in a wrong project when working from a certain directory?
TL;DR - Not supported based on the current working directory, but there are workarounds.
Available workarounds
gcloud does not directly let you set up a configuration per working directory. Instead, you could use one of these 3 options to achieve something similar:
Specify --project, --region, --zone or the config of interest per command. This is painful but gets the job done.
Specify a different gcloud configuration directory per command (gcloud uses ~/.config/gcloud on *nix by default):
CLOUDSDK_CONFIG=/path/to/config/dir1 gcloud COMMAND
CLOUDSDK_CONFIG=/path/to/config/dir2 gcloud COMMAND
Create multiple configurations and switch between them as needed.
gcloud config configurations activate config-1 && gcloud COMMAND
Shell helpers
As all of the above options are ways to customize on the command line, aliases and/or functions in your favorite shell will also help make things easier.
For example in bash, option 2 can be implemented as follows:
function gcloud_proj1() {
  CLOUDSDK_CONFIG=/path/to/config/dir1 gcloud "$@"
}
function gcloud_proj2() {
  CLOUDSDK_CONFIG=/path/to/config/dir2 gcloud "$@"
}
gcloud_proj1 COMMAND
gcloud_proj2 COMMAND
There's a very nice way I've been using with PyCharm, I suspect you can do so with other IDEs.
You can declare the default env variables for the IDE Terminal, so when you open a new terminal gcloud recognises these env variables and sets the project and account.
No need to switch configurations between projects manually (gcloud config configurations activate ...). Terminals opened in other projects will inherit their own GCP project and config from the env variables.
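For example, gcloud reads its core properties from environment variables, so the IDE terminal environment can carry something like this (the project ID and account are placeholders):

```shell
# Placeholder values; these override the active gcloud configuration
export CLOUDSDK_CORE_PROJECT=my-client-project-id
export CLOUDSDK_CORE_ACCOUNT=dev@example.com
```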
I've had this problem for years and I believe I found a decent compromise.
Create a simple script called contextual-gcloud. Note the \gcloud, fundamental for future aliasing.
🐧$ cat > contextual-gcloud
#!/bin/bash
if [ -d .gcloudconfig/ ]; then
  echo "[$0] .gcloudconfig/ directory detected: using that dir for configs instead of default."
  CLOUDSDK_CONFIG=./.gcloudconfig/ \gcloud "$@"
else
  \gcloud "$@"
fi
Add to your .bashrc and reload / start new bash. This will fix autocompletion.
alias gcloud=contextual-gcloud
That's it! If you have a directory named that way, the script will use it instead, which means you can put your configuration into source control etc. Just remember to git-ignore things like logs and private material (keys, certificates, ...).
Note: auto-completion is fixed by the alias ;)
Code: https://github.com/palladius/sakura/blob/master/bin/contextual-gcloud
These are exactly the reasons why I highly dislike gcloud: making command-line arguments mandatory and dropping configuration-file support is much too error-prone for my taste.
So far I'm still able to use the GAE SDK instead of Google Cloud SDK (see What is the relationship between Google's App Engine SDK and Cloud SDK?), which could be one option - basically keep doing stuff "the old way". Please note that it's no longer the recommended method.
You can find the still compatible GAE SDKs here.
For when the above is no longer an option and I'm forced to switch to the Cloud SDK, my plan is to keep version-controlled cheat-sheet text files in each app directory containing the exact commands for running the devserver, deploying, etc. for that particular project, which I can just copy-paste into the terminal without fear of making mistakes. You carefully set these up once and then you just copy-paste them. As a bonus, you can have different branch versions for different environments (staging/production, for example).
Actually I'm using this approach even for the GAE SDK - to prevent accidental deployment of the app-level config files to the wrong GAE app (such deployments must use cmdline arguments to specify the app in multi-service apps).
Or do the same but with environment config files and wrapper scripts instead of cheat-sheet files, if that's your preference.
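For instance, such a cheat-sheet file might look like this (the project IDs are made up; the flags are standard gcloud options):

```
# cheat-sheet for client-a (hypothetical project IDs)
gcloud app deploy --project client-a-prod app.yaml
gcloud app deploy --project client-a-staging --no-promote app.yaml
```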

Modify deployment process on elasticbeanstalk ami

I've grown tired of trying to get elastic beanstalk to run python 3.5. Instead, I want to create a custom ami which establishes a separate virtualenv for the application (with python 3.5) and knows enough to launch the application using that virtualenv.
The problem is that once I ssh into the ec2 instance in order to create my custom ami, I am left wondering where the scripts are which govern the elastic beanstalk deployment behavior.
For example, when deploying via Travis to Elastic Beanstalk, EB knows enough to look in a specific folder for the file application.py and to execute the file using a specific virtualenv (or maybe even, shudder, the root python installation of the machine). It even knows to execute a pip install -r requirements.txt. Can anyone point me to the script(s) which govern this behavior?
UPDATE
Please see Elastic beanstalk require python 3.5 for those referencing the .ebextensions option. So far, it has not proved able to handle this problem due to the interdependency between the EB image operating system and the python environment used to run the application.
All of the EB files can be found in /opt/elasticbeanstalk - /opt/elasticbeanstalk/hooks is probably most relevant for what you're looking for.
You can use the ebextensions to run scripts you want when starting your ami.

Get version label of python app deployed to elastic beanstalk

I would like to know how I can get the version label attribute of the bundle deployed on Elastic Beanstalk, in order to show it in my application.
As far as I know, bundles deployed with "git aws.push" are uploaded to an S3 bucket. My goal is to retrieve the version number and set it as an environment variable, or get it dynamically, so it can be shown inside my django app.
Thanks
I see two potential ways. Although I must say I don't understand why this isn't easier, given it's a pretty legitimate use case; it looks like it's not supported by AWS as of this writing.
Solution 1: generated version label
Generate the version label on your side (for instance, with the commit hash), then make it part of your code.
For instance in your Makefile:
VERSION=$(shell git rev-parse --short HEAD)

deploy: requirements.txt
	echo $(VERSION) > version.txt
	eb deploy --label $(VERSION)

.PHONY: deploy
Then you can just read this file from the instance. There are some other options, for instance using sed to put it as a variable in one of your files.
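For instance, reading it back on the instance could be as simple as this sketch (the path is wherever your deploy step wrote version.txt):

```python
from pathlib import Path

def read_version(path="version.txt"):
    # version.txt is written at deploy time (echo $(VERSION) > version.txt)
    p = Path(path)
    return p.read_text().strip() if p.exists() else "unknown"
```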
Solution 2: get it from an undocumented file (unstable)
I tried to find this metadata on the EC2 instance, and could find it in a file that is unfortunately owned by root:
[root@... ec2-user]# cat /opt/elasticbeanstalk/deploy/manifest
{"RuntimeSources":{"yourappname":{"app-VERSION":{"s3url":""}}},"DeploymentId":24,"Serial":26}
[ec2-user@... ~]$ ls -la /opt/elasticbeanstalk/deploy/manifest
-rw-rw---- 1 root awseb 98 Mar 22 17:57 /opt/elasticbeanstalk/deploy/manifest
I'm not sure if you could do this, but you could have a post-deployment command that chowns or copies this file to a place where you can read it. I might try that and let you know if it worked.
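If you do manage to read the manifest (e.g. via a hook that copies it somewhere readable), extracting the label is a few lines of stdlib code. A sketch assuming the undocumented JSON structure shown above, which may change between platform versions:

```python
import json

def version_label(manifest_path="/opt/elasticbeanstalk/deploy/manifest"):
    # RuntimeSources maps the app name to {version label: {...}},
    # matching the JSON dumped above
    with open(manifest_path) as f:
        data = json.load(f)
    app_versions = next(iter(data["RuntimeSources"].values()))
    return next(iter(app_versions))
```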
A similar question has been asked here: How can you get the Elastic Beanstalk Application Version in your application? (evidently I found it only after writing the above).
