I am trying to deploy a Python web app, which is basically an interface for an Excel file. The application is written in Python.
I've built and run the container on my local machine and everything is smooth, but when I try to deploy to Azure Web Apps I hit an issue saying that my file is missing.
The Dockerfile looks like this
FROM python:3.8-slim-buster AS base
WORKDIR /home/mydashboard
RUN python -m pip install --upgrade pip
RUN pip3 install pipenv
COPY Pipfile /home/mydashboard/Pipfile
COPY Pipfile.lock /home/mydashboard/Pipfile.lock
COPY dataset.csv /home/mydashboard/dataset.csv
COPY src/ /home/mydashboard/src
RUN pipenv install --system
WORKDIR /home/mydashboard/src
EXPOSE 8000
CMD gunicorn -b :8000 dashboard_app.main:server
Weirdly enough, when this runs on Azure App Service I receive a message saying that "dataset.csv" does not exist.
I printed the files inside /home/dashboard/ and it turned out to be an empty folder!!
Why does this happen??
It runs perfectly on my personal computer, but it just won't run on Azure.
Based on:
Your error message is "dataset.csv does not exist"
It works on your local machine
It fails in Azure
Probably dataset.csv exists in the COPY source location on your local machine, but is missing from the build context used for the Azure deployment.
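One quick way to narrow this down is to inspect the image that Azure is actually running rather than the one built locally; the image name below is a placeholder, not taken from the question:
# List what actually ended up in the image (image name/tag is a placeholder)
docker run --rm your-registry.azurecr.io/mydashboard:latest ls -la /home/mydashboard
If dataset.csv is not in that listing, the file was never copied into the image that was deployed, which points at the build context used for the Azure build.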
Related
I'm attempting to write an Azure function which converts an html input to pdf and either writes this to a blob and/or returns the pdf to the client. I'm using the pdfkit python library. This requires the wkhtmltopdf executable to be available.
To test this locally on my windows machine, I installed the windows version of wkhtmltopdf and this works completely fine.
When I deployed this function to a Linux App Service on Azure, I could still execute the function successfully, but only after running the sudo command in the Kudu tools to install wkhtmltopdf on the App Service.
sudo apt-get install wkhtmltopdf
I'm also aware that I can put this in a start-up script on the App Service itself.
My question is: is there something I can do on my local Windows machine so that I can deploy the Azure function along with the Linux version of wkhtmltopdf directly from VS Code, without having to execute another script on the App Service itself?
Setting the below commands in the App configuration will work; see the example after the build steps below.
Thanks to @pamelafox for the comments.
Commands
PRE_BUILD_COMMAND or POST_BUILD_COMMAND
The following process is applied for each build.
Run custom command or script if specified by PRE_BUILD_COMMAND or PRE_BUILD_SCRIPT_PATH.
Create python virtual environment if specified by VIRTUALENV_NAME.
Run python -m pip install --cache-dir /usr/local/share/pip-cache --prefer-binary -r requirements.txt if requirements.txt exists in the root of repo or specified by CUSTOM_REQUIREMENTSTXT_PATH.
Run python setup.py install if setup.py exists.
Run python package commands and determine python package wheel.
If manage.py is found in the root of the repo, manage.py collectstatic is run. However, if DISABLE_COLLECTSTATIC is set to true, this step is skipped.
Compress virtual environment folder if specified by compress_virtualenv property key.
Run custom command or script if specified by POST_BUILD_COMMAND or POST_BUILD_SCRIPT_PATH.
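For example, PRE_BUILD_COMMAND and POST_BUILD_COMMAND are ordinary app settings, so they can be set from the Azure CLI; the resource group, app name, and the exact install command below are placeholders/assumptions, not taken from the question:
# Set a post-build command as an app setting (names are placeholders)
az webapp config appsettings set \
  --resource-group my-resource-group \
  --name my-web-app \
  --settings POST_BUILD_COMMAND="apt-get update && apt-get install -y wkhtmltopdf"
For a Function App, az functionapp config appsettings set accepts the same --settings flag.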
Build Conda environment and Python JupyterNotebook
The following process is applied for each build.
Run custom command or script if specified by PRE_BUILD_COMMAND or PRE_BUILD_SCRIPT_PATH.
Set up the Conda virtual environment: conda env create --file $envFile.
If requirements.txt exists in the root of the repo or is specified by CUSTOM_REQUIREMENTSTXT_PATH, activate the environment with conda activate $environmentPrefix and run pip install --no-cache-dir -r requirements.txt.
Run custom command or script if specified by POST_BUILD_COMMAND or POST_BUILD_SCRIPT_PATH.
Package manager
The latest version of pip is used to install dependencies.
Run
The following process is applied to determine how to start an app; a sketch of a custom start command follows this list.
If the user has specified a start script, run it.
Otherwise, find a WSGI module and run it with gunicorn.
Look for and run a directory containing a wsgi.py file (for Django).
Look for the following files in the root of the repo and an app class within them (for Flask and other WSGI frameworks).
application.py
app.py
index.py
server.py
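For reference, a custom start command typically just invokes gunicorn directly; this is only a sketch, and the module/app names are placeholders rather than values from the question:
# Example custom startup command ("your_module:app" is a placeholder WSGI entry point)
gunicorn --bind=0.0.0.0:8000 --timeout 600 your_module:app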
Gunicorn multiple workers support
To enable running gunicorn with a multiple-workers strategy, fully utilize the cores, and prevent potential timeouts/blocking from sync workers, add the environment variable PYTHON_ENABLE_GUNICORN_MULTIWORKERS=true to the app settings.
In Azure Web Apps the version of the Python runtime which runs your app is determined by the value of LinuxFxVersion in your site config. See ../base_images.md for how to modify this.
References taken from
Python runtime on App Service
I have a React/Flask app running within a Docker container. There is no issue with me building the project using docker-compose, and running the app itself in the container. Where I am running into issues is a particular API route that is supposed to fetch user profiles from the DB, encrypt the values in a text file, and return to the frontend for download. The encryption script is written in C, though the API route is written in Python. When I try and encrypt through the app running in Docker, I am given the following error message:
OSError: [Errno 8] Exec format error: './app/Crypto/encrypt.exe'
I know the following command works in the CLI if invoked outside of the Docker container (still invoked at the same directory level as it would be in the app):
./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./profile.txt -o ./profile.encr
I am using the following Python code to invoke the command in the API route which is where it fails:
proc = subprocess.Popen(f"./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./{profile.profile_name}.txt -o ./{profile.profile_name}.encr", shell=True)
The Dockerfile for my backend is pasted below:
FROM python:3
WORKDIR /app
ENV FLASK_APP=main.py
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
I have tried to tackle the issue a few different ways:
By default my Docker container was built with the ARM64 architecture. I read that the OSError was caused by the architecture not being AMD64, so I rebuilt the container for AMD64, and it gave me the same error.
In case this was a permissions error, I ran chmod +rwx on encrypt.exe in the Dockerfile when building the container. I'm pretty sure it has nothing to do with permissions, especially as it still failed.
I added a shebang (#!/bin/bash) to the script as well as to the Dockerfile.
At the end of the day I know I am failing when using subprocess.Popen, so I am positive I must be missing something when invoking the script from Python, or there is a configuration in my Docker container that is preventing this from working. My machine is a MacBook Pro, which the script runs fine on. The script has also been used successfully on a machine running Linux.
Any chances folks have seen similar issues arise with this error? Thanks in advance!
So, thanks to David Maze's comment on this, I followed the lead that the executable I wanted to run needed to be built within the Dockerfile. I destroyed my original container, added a step to run the Makefile that generates the executable, and finally ran the program through the app running in the Docker container. This did the trick! I'm not sure why the executable needed to be compiled within the Docker container, but running make on the Makefile inside the Dockerfile is what solved it.
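For context, "Exec format error" usually means the binary is not a Linux executable for the container's architecture (for example, it was compiled on macOS or for a different CPU), which is why compiling it inside the image works. A minimal sketch of that approach, assuming the Makefile lives in app/Crypto and build-essential is enough to build it (both of these are assumptions, not details from the question):
FROM python:3
WORKDIR /app
ENV FLASK_APP=main.py
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Assumption: build-essential provides the compiler and make needed by the Makefile in app/Crypto
RUN apt-get update && apt-get install -y build-essential \
 && make -C app/Crypto
CMD ["python", "main.py"]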
Trying to host a Python HTTP server, and it works fine.
FROM python:latest
COPY index.html /
CMD python3 -m http.server
But when trying the same with a Python virtualenv, I am facing issues.
FROM python:3
COPY index.html .
RUN pip install virtualenv
RUN virtualenv --python="python3" .virtualenv
RUN .virtualenv/bin/pip install boto3
RUN python3 -m http.server &
CMD ["/bin/bash"]
Please help.
I just want to point out that using virtualenv within a Docker container might be redundant.
With Docker, you are encapsulating one specific application along with its dependencies (libraries, frameworks, boto3 in your case), as opposed to your local machine, where you might have several applications being developed, each with different dependencies.
Thus, I would recommend reconsidering the necessity of virtualenv within Docker.
Second, running the command:
RUN python3 -m http.server &
in the background is also bad practice here. You want to run it with the CMD instruction in the foreground, so it runs as the first process (PID 1). It will then receive all Docker signals and start automatically when the container starts:
CMD ["virtualenv/bin/python3", "-m", "http.server"]
I use PyCharm Professional to develop python.
I am able to connect the PyCharm run/debug GUI to a local Docker image's Python interpreter and run local code using the Docker container's Python environment and libraries, e.g. via the procedure described here: Configuring Remote Interpreter via Docker.
I am also able to SSH into AWS instances with PyCharm and connect to remote python interpreters there, which maps files from my local project into a remote directory and again allows me to run a GUI stepping through remote code as though it was local, eg. via the procedure described here: Configuring Remote Interpreters via SSH.
I have a Docker image on Docker hub that I would like to deploy to an AWS instance, and then connect my local PyCharm GUI to the environment inside the remote container, but I can't see how to do this, can anybody help me?
[EDIT] One proposal that has been made is to put an SSH server inside the remote container and connect my local PyCharm directly to the container via SSH, for example as described here. It's one solution, but it has been extensively criticised elsewhere - is there a more canonical solution?
After doing a bit of research, I came to the conclusion that installing an SSH server inside my container and logging in via the PyCharm SSH remote interpreter was the best thing to do, despite concerns raised elsewhere. I managed it as follows.
The Dockerfile below will create an image with an SSH server inside that you can SSH into. It also has anaconda/python, so it's possible to run a notebook server inside and connect to it in the usual way for Jupyter debugging. Note that it uses a plain-text password (screencast); you should definitely enable key login if you're using this for anything sensitive.
It will take local libraries and install them into your package library inside the container, and optionally you can pull repos from GitHub as well (register for an API key in GitHub if you want to do this, so you don't need to enter a plain-text password). It also requires you to create a plain-text requirements.txt containing all of the other packages you need pip-installed.
Then run the build command to create the image, and the run command to create a container from that image. In the Dockerfile we expose SSH through the container's port 22, so let's hook that up to an unused port on the AWS instance - this is the port we will SSH through. Also add another port pairing if you want to use Jupyter from your local machine at any point:
docker build -t your_image_name .
don't miss the . at the end - it's important!
docker run -d -p 5001:22 -p 8889:8889 --name=your_container_name your_image_name
N.B. you will need to bash into the container (docker exec -it xxxxxxxxxx bash) and turn Jupyter on with jupyter notebook.
Dockerfile:
FROM python:3.6
RUN apt-get update && apt-get install -y openssh-server
# Load an ssh server. Change root username and password. By default in debian, password login is prohibited,
# go into the file that controls this and make a change to allow password login
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN /etc/init.d/ssh restart
# Install git, so we can pull in some repos
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
# Install the requirements and the libraries we need (from a requirements.txt file)
COPY requirements.txt /tmp/
RUN python3 -m pip install -r /tmp/requirements.txt
# These are local libraries, add them (assuming a setup.py)
ADD your_libs_directory /your_libs_directory
RUN python3 -m pip install /your_libs_directory
RUN python3 your_libs_directory/setup.py install
# Adding git repos (optional - assuming a setup.py)
RUN git clone https://git_user_name:git_API_token@github.com/YourGit/git_repo.git
RUN python3 -m pip install /git_repo
RUN python3 git_repo/setup.py install
# Cleanup
RUN apt-get update && apt-get upgrade -y && apt-get autoremove -y
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
I work on my Mac and have a Python Flask application running inside a container. I am using Docker for Mac.
Purpose: I want my app to reload automatically every time I make a change to the code. I want to access and make changes to the code from my IDE (Atom) on the Mac.
My Dockerfile creates a virtualenv(/app/venv) when I build the image.
WORKDIR /app
ADD ./myapp /app
RUN virtualenv venv
RUN . venv/bin/activate && pip install -r requirements.lock
When I run the container, I mount the code volume so that I can access and make changes to code from local IDE.
volumes:
- ./myapp:/app
Problem: The problem with this approach is that the venv folder created during the image build disappears because of the volume mount I made.
What is the best practice around it?
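One common way to avoid the shadowing is to keep the virtualenv outside the directory you bind-mount, so the mount only replaces the source code. A minimal sketch under that assumption, with /opt/venv as an arbitrary location, app.py as a placeholder entry point, and the base image assumed (as in the question) to have virtualenv available:
WORKDIR /app
# Create the venv outside /app so a bind mount of ./myapp:/app cannot hide it
RUN virtualenv /opt/venv
COPY ./myapp/requirements.lock /tmp/requirements.lock
RUN /opt/venv/bin/pip install -r /tmp/requirements.lock
ADD ./myapp /app
# Placeholder entry point; replace with however the app is actually started
CMD ["/opt/venv/bin/python", "app.py"]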