Azure msdeploy Python App Service

I'm trying to deploy an Azure App Service running Flask on Python 3.4. When I deploy from within Visual Studio 2015 via Web Deploy, everything works nicely. But when I deploy from my CI/CD server (TeamCity 10.0.3 on Windows Server 2012 R2) using an MSBuild step, the deployment succeeds without errors, yet my app is apparently missing some crucial components and just throws HTTP errors on every request (my logging can't capture the actual errors because the app is completely hosed at that point). I deploy numerous C# applications from this TeamCity instance using Web Deploy without issue. My build has the following steps:
Command Line Runner - Copy publish profile (because msdeploy looks for it at ~/__profiles for some unknown reason, and I can't find a flag or configuration setting to change that):
mkdir __profiles
copy *.pubxml __profiles
Command Line Runner - Create venv at top level folder:
c:\python34\python.exe -m venv env
Command Line Runner - Install from requirements.txt:
env\scripts\pip install -r requirements.txt
Powershell Runner - Stop Azure App Service
MSBuild Runner - Deploy (Build file path points to the .pyproj file):
/p:DeployOnBuild=true
/p:PublishProfile="My Publish Profile"
/p:Configuration=Release
/p:AllowUntrustedCertificate=True
/p:UserName=%WebDeployUserName%
/p:Password=%WebDeployPassword%
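Put together, the MSBuild invocation is equivalent to something like the following (the project file name is illustrative):
msbuild MyFlaskApp.pyproj /p:DeployOnBuild=true /p:PublishProfile="My Publish Profile" /p:Configuration=Release /p:AllowUntrustedCertificate=True /p:UserName=%WebDeployUserName% /p:Password=%WebDeployPassword%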
Powershell Runner - Start Azure App Service
Related GitHub Issue


html to pdf on Azure using pdfkit with wkhtmltopdf

I'm attempting to write an Azure Function which converts an HTML input to PDF and either writes it to a blob and/or returns the PDF to the client. I'm using the pdfkit Python library, which requires the wkhtmltopdf executable to be available.
To test this locally on my Windows machine, I installed the Windows version of wkhtmltopdf, and it works completely fine.
When I deployed this function to a Linux App Service on Azure, the function only executed successfully after I ran the following in the Kudu console to install wkhtmltopdf on the App Service:
sudo apt-get install wkhtmltopdf
I'm also aware that I can set this up as a startup script on the App Service itself.
My question is: is there something I can do on my local Windows machine so I can deploy the Azure Function along with the Linux version of wkhtmltopdf directly from VS Code, without having to execute another script on the App Service itself?
Setting the below commands in the App configuration will work.
Thanks to @pamelafox for the comments.
Commands
PRE_BUILD_COMMAND or POST_BUILD_COMMAND (see the example after the build steps below)
The following process is applied for each build.
Run custom command or script if specified by PRE_BUILD_COMMAND or PRE_BUILD_SCRIPT_PATH.
Create python virtual environment if specified by VIRTUALENV_NAME.
Run python -m pip install --cache-dir /usr/local/share/pip-cache --prefer-binary -r requirements.txt if requirements.txt exists in the root of repo or specified by CUSTOM_REQUIREMENTSTXT_PATH.
Run python setup.py install if setup.py exists.
Run python package commands and determine python package wheel.
If manage.py is found in the root of the repo, manage.py collectstatic is run. However, if DISABLE_COLLECTSTATIC is set to true, this step is skipped.
Compress virtual environment folder if specified by compress_virtualenv property key.
Run custom command or script if specified by POST_BUILD_COMMAND or POST_BUILD_SCRIPT_PATH.
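For example, the wkhtmltopdf install from the question could be attempted in a post-build step by setting the app setting with the Azure CLI (a sketch; the app and resource-group names are placeholders, and it assumes the Oryx build container permits apt-get):
az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group> --settings POST_BUILD_COMMAND="apt-get install -y wkhtmltopdf"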
Build Conda environment and Python Jupyter Notebook
The following process is applied for each build.
Run custom command or script if specified by PRE_BUILD_COMMAND or PRE_BUILD_SCRIPT_PATH.
Set up the Conda virtual environment: conda env create --file $envFile.
If requirements.txt exists in the root of the repo or is specified by CUSTOM_REQUIREMENTSTXT_PATH, activate the environment with conda activate $environmentPrefix and run pip install --no-cache-dir -r requirements.txt.
Run custom command or script if specified by POST_BUILD_COMMAND or POST_BUILD_SCRIPT_PATH.
Package manager
The latest version of pip is used to install dependencies.
Run
The following process is used to determine how to start an app.
If user has specified a start script, run it.
Else, find a WSGI module and run with gunicorn.
Look for a directory containing a wsgi.py file and run it with gunicorn (for Django).
Look for the following files in the root of the repo and an app class within them (for Flask and other WSGI frameworks).
application.py
app.py
index.py
server.py
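As a concrete illustration, here is a minimal Flask app.py that this detection logic would pick up (a sketch, not taken from the App Service docs):
from flask import Flask

# the detection step looks for a module-level "app" object in app.py
app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from App Service"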
Gunicorn multiple workers support
To run gunicorn with multiple workers, fully utilize the available cores, and prevent potential timeouts/blocking from sync workers, add the environment variable PYTHON_ENABLE_GUNICORN_MULTIWORKERS=true to the app settings.
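For instance, with the Azure CLI (the app and resource-group names are placeholders):
az webapp config appsettings set --name <app-name> --resource-group <resource-group> --settings PYTHON_ENABLE_GUNICORN_MULTIWORKERS=true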
In Azure Web Apps the version of the Python runtime which runs your app is determined by the value of LinuxFxVersion in your site config. See ../base_images.md for how to modify this.
References taken from
Python runtime on App Service

Upgrading Elastic Beanstalk environment from AWS Linux 1 to AWS Linux 2

I have an Elastic Beanstalk environment running Python 3.6 on AWS Linux 1, and I want to switch it to Python 3.8 on Amazon Linux 2.
I know that I can upgrade environments using the aws CLI update-environment command:
aws elasticbeanstalk update-environment --environment-name <ENV_NAME> --solution-stack-name "64bit Amazon Linux 2 v3.3.7 running Python 3.8"
However, AWS Linux 2 uses different configuration parameters. I can't deploy the AWS Linux 2 config because it's invalid on AWS Linux 1 and I can't upgrade to AWS Linux 2 because my config is invalid.
How do I do the upgrade, and is there a way to do it in-place?
Differences in Configuration
AWS Linux 2 changes a lot about how Elastic Beanstalk works and how it is configured. Regardless of whether you are doing an in-place upgrade or spinning up a new environment, here is a list of things that will be different to run through before making the upgrade. Most of the items here are differences in the Elastic Beanstalk config that lives in .ebextensions.
There are differences in sub-package dependencies between Python 3.6 and 3.8. You should test your requirements file on Python 3.8 and make sure it's compatible, especially if you use a generated requirements.txt.
AWS Linux 2 no longer allows you to write Apache config using a file directive in .ebextensions. These modifications now need to live in .platform/httpd/conf.
The virtual environment is no longer active while running container_commands. Any container commands that use your code need to have source $PYTHONPATH/activate run first, as in the sketch below.
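A sketch (the command name is illustrative and a Django app is assumed):
container_commands:
  01_migrate:
    command: "source $PYTHONPATH/activate && python manage.py migrate --noinput"
    leader_only: true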
Generated files now get wiped on config changes, so commands like Django's collectstatic need to move to platform hooks (example below).
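For instance, a predeploy hook (the file name is illustrative; it follows the AL2 .platform/hooks convention, and the script must be executable):
.platform/hooks/predeploy/01_collectstatic.sh:
#!/bin/bash
# activate the platform's virtualenv and run collectstatic against the staged app
source $PYTHONPATH/activate
cd /var/app/staging
python manage.py collectstatic --noinput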
The Postgres client is no longer available normally through yum. To install it, you need to do:
packages:
  yum:
    amazon-linux-extras: []
commands:
  01_postgres_activate:
    command: sudo amazon-linux-extras enable postgresql10
  02_postgres_install:
    command: sudo yum install -y postgresql-devel
Apache is no longer the default web server (it is now Nginx). To continue using Apache, you need to specify it as an option on your environment, such as:
option_settings:
  aws:elasticbeanstalk:environment:proxy:
    ProxyServer: apache
mod_wsgi has been replaced with Gunicorn. Any mod_wsgi customizations you have will no longer work, and the WSGI path has a different format:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: config.wsgi:application
Static files config has a different format:
option_settings:
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: staticfiles
You will get opted-in to advanced health reporting. Adding an Elastic Beanstalk health check is strongly recommended:
option_settings:
  aws:elasticbeanstalk:application:
    Application Healthcheck URL: /health-check/
The application now runs on port 8000 on the server via Gunicorn, and Apache/Nginx just proxy requests to Gunicorn. This matters if you are doing Apache customizations such as encrypting traffic between the load balancers and application servers.
Apache now runs through systemctl rather than supervisord. If you are trying to restart Apache, the command is now sudo systemctl restart httpd.
The environment variables live in a different place and have a different format, so you need to parse them differently when SSHed into the server. To get access to them, add jq: [] to your yum installs (snippet below). Then either run the following commands, or add them to the server's bashrc (using a files directive in .ebextensions), to load the environment variables and activate the Python virtual environment:
source <(/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""')
source $PYTHONPATH/activate
cd /var/app/current
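The jq dependency mentioned above can be declared in .ebextensions like so:
packages:
  yum:
    jq: []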
Upgrading by Launching a New Environment
This upgrade path requires that you are not using the Elastic Beanstalk data tier (i.e. you launched your RDS instance yourself rather than through Elastic Beanstalk).
Create a code branch with your AWS Linux 2 config
Launch a new Elastic Beanstalk environment on AWS Linux 2.
Copy the environment variables from your previous environment.
Allow access to your database from the new environment (add the new environment's server security group as the source of an ingress rule on the database's security group); see the CLI example after this list.
Set up SSL on the new environment.
Deploy the AWS Linux 2 code branch to the new environment.
Test this new environment, ignoring browser certificate warnings (or set up a temporary DNS entry to test it).
Switch the DNS entry to point to your new environment, or use AWS's CNAME swap feature on the two environments.
After your new environment has been running without problems for sufficient time, terminate your old environment.
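For the database-access step above, the ingress rule can be added with the AWS CLI (the security-group IDs are placeholders; port 5432 assumes Postgres):
aws ec2 authorize-security-group-ingress --group-id <database-sg-id> --protocol tcp --port 5432 --source-group <new-environment-sg-id>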
Upgrading In-Place
There is a way to do the upgrade in-place, though there will be a few minutes where your site says "502 Bad Gateway". In order to do this, you need EB config that is compatible with both the AWS Linux 1 and AWS Linux 2 environments.
For Python, you can do this with a small Flask app and a four-part deploy.
Part 1: deploy placeholder app that is compatible with both platforms
Add flask to your requirements.txt (if it's not already there).
Delete all files in .ebextensions
Make .ebextensions/01.config:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: wsgi_shim.py
Make wsgi_shim.py:
from flask import Flask

application = Flask(__name__)

@application.route("/")
@application.route("/<path:path>/")
def hello_world(path=None):
    return "This site is currently down for maintenance"
[If using load balancer to application server encryption, change the load balancer to send all traffic to the server via HTTP.]
eb deploy
Part 2: upgrade platform to AWS Linux 2
[If you have any static routes configured in Elastic Beanstalk, delete them.]
Upgrade your eb environment
# Get list of solution stacks
aws elasticbeanstalk list-available-solution-stacks --output=json --query 'SolutionStacks' --region us-east-1
# Use one of the above options here
aws elasticbeanstalk update-environment --environment-name <ENV_NAME> --solution-stack-name "64bit Amazon Linux 2 v3.3.7 running Python 3.8"
Part 3: deploy your main application to AWS Linux 2
Replace .ebextensions/01.config with your new AWS Linux 2 config.
Add .platform/httpd/conf.d/ssl_rewrite.conf:
RewriteEngine On
<If "-n '%{HTTP:X-Forwarded-Proto}' && %{HTTP:X-Forwarded-Proto} != 'https'">
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
</If>
eb deploy
Part 4: cleanup
[If using load balancer to application server encryption, change the load balancer back to sending traffic to the server via HTTPS.]
Delete wsgi_shim.py and remove flask from requirements.txt (unless it's a flask project).
eb deploy

Run Selenium tests via Jenkins on Docker Django app

I would like to run Selenium integration tests on a Development server.
Our App is Django app and it is deployed via Jenkins and Docker.
We know how to write and run Selenium tests locally
We know how to run tests with Jenkins and present Cobertura and Junit reports
The problem we have is:
In order to run Selenium tests (as opposed to unit tests), the server needs to be running
So we can't run the tests before we build the Docker image
How do we run tests inside Docker images? (This could potentially be achieved via a script called inside the Dockerfile...)
BUT even more important: how can Jenkins get the reports from inside the Docker containers?
What are the best practices here?
The deployment structure:
Jenkins gets the code from Git
Jenkins builds the Docker image
passes this image to the (private) Docker registry
logs in to the remote server
on the remote server, pulls the image from the registry
runs the image on the remote server
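One common pattern for the reports question (a sketch only; the image name, test command, and paths are illustrative): run the tests in a named container, then copy the report directory out so Jenkins can publish it.
# run the tests in a throwaway container, then extract the reports for Jenkins
docker run --name test_run my_app_image python manage.py test
docker cp test_run:/app/reports ./reports
docker rm test_run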

how to solve "An error occurred (InvalidClientTokenId)" AWS Chalice deployment error

I'm new to application development and decided to use AWS services for this project. However, I am having difficulty deploying Chalice: every time I run "chalice deploy", I get an error.
Here are the steps I followed, along with the commands, on Windows:
upgraded my PowerShell
"virtualenv venv", then ".\venv\Scripts\activate" # install and run a virtual environment
"pip install awscli" # install the AWS command line interface
"aws configure" # configure my AWS_KEY and AWS_SECRET
"pip install chalice" # install Chalice
"chalice new-project" # create a new project
"chalice deploy" # deploy
I get
An error occurred (InvalidClientTokenId) when calling the GetRole
operation: The security token included in the request is invalid.
I'm able to use localhost and run my application, but I'm not able to deploy to the server. I don't know what I'm doing wrong. Someone please help!
Additional info:
My operating system is Windows 10, and I upgraded my PowerShell to 7.
I somehow figured it out! The error occurred because the command chalice deploy was run in the wrong directory. Make sure you are in the directory of your Chalice project before deploying.

How to debug Django Project inside Docker Compose project (Cannot retrieve debug connection: Debug mode is not supported for...)

I've set up a docker-compose project and I am running it from IntelliJ.
docker-compose
django project
database project
nginx project
I managed to run it successfully, and I can already deploy from the IntelliJ IDE.
I configured a run configuration for debugging with docker-compose.
When I click the debug icon to start debugging, I get the following error:
Deploying 'Compose: docker'...
/usr/local/bin/docker-compose -f /home/someone/project/docker-compose.yml up -d django db nginx
pr_django_1 is up-to-date
pr_db_1 is up-to-date
pr_nginx_1 is up-to-date
'Compose: docker' has been deployed successfully.
Cannot retrieve debug connection: Debug mode is not supported for 'Docker-compose: docker'
I can't find what I missed in the configuration. The Django webpage and the whole project render fine; I just can't start debugging.
