Not able to download pyodbc to Azure App Service - python

I'm using Azure App Service to host my website. The website worked fine on localhost, using an SQLite database with SQLAlchemy. Now I am trying to switch to Azure SQL DB using this: https://gist.github.com/timmyreilly/f4a351eda5dd45aa9d56411d27573d7c
I followed the directions, but I'm getting the error below. I looked it up and found pyodbc - error while running application within a container, but it wasn't able to help: the solution there says to run sudo apt install unixodbc-dev, and Azure CLI doesn't let me use sudo, so I'm not sure how to do this. What should I do?
2019-02-15T00:55:28.174067202Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/sqlalchemy/connectors/pyodbc.py", line 38, in dbapi
2019-02-15T00:55:28.174070902Z return __import__('pyodbc')
2019-02-15T00:55:28.174195702Z ImportError: libodbc.so.2: cannot open shared object file: No such file or directory

According to your error message and @IvanYang's comments, you deployed your Python app to Azure App Service on Linux, which runs your app in a Docker container.
You can therefore follow the official document SSH support for Azure App Service on Linux to connect to your app's container and install the missing package via apt install unixodbc-dev; this gets your app working until the App Service next restarts.
A package installed this way is temporary for the Docker container - see the existing SO thread Install unixodbc-dev for a Flask web app on Azure App Service. The only way to make it permanent is to add the content below to your Dockerfile, or to create the App Service on Linux instance from a Docker image that already has the required packages installed.
# Add unixodbc support
RUN apt-get update \
&& apt-get install -y --no-install-recommends unixodbc-dev
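For reference, here is a minimal custom-image sketch that bakes the ODBC dependency in at build time; the base image, file names, and gunicorn entry point are assumptions, not taken from the question:
# Hypothetical Dockerfile for a Python app using pyodbc
FROM python:3.7-slim
# unixodbc provides libodbc.so.2 at runtime; unixodbc-dev is needed to build pyodbc
RUN apt-get update \
 && apt-get install -y --no-install-recommends unixodbc unixodbc-dev \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["gunicorn", "-b", "0.0.0.0:8000", "app:app"]
Note that talking to Azure SQL also requires a SQL Server ODBC driver (such as Microsoft's msodbcsql17), whose install steps are not shown here.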

Related

html to pdf on Azure using pdfkit with wkhtmltopdf

I'm attempting to write an Azure function which converts an html input to pdf and either writes this to a blob and/or returns the pdf to the client. I'm using the pdfkit python library. This requires the wkhtmltopdf executable to be available.
To test this locally on my windows machine, I installed the windows version of wkhtmltopdf and this works completely fine.
When I deployed this function to a Linux App Service on Azure, I could only execute the function successfully after running the sudo command in the Kudu tools to install wkhtmltopdf on the App Service.
sudo apt-get install wkhtmltopdf
I'm also aware that I can write this start up script on the app service itself.
My question is: is there something I can do on my local Windows machine so that I can deploy the Azure Function along with the Linux version of wkhtmltopdf directly from VS Code, without having to execute another script on the App Service itself?
Setting the commands below in the app configuration will work (a concrete example of applying them follows the build steps below).
Thanks to @pamelafox for the comments.
Commands
PRE_BUILD_COMMAND or POST_BUILD_COMMAND
The following process is applied for each build.
1. Run custom command or script if specified by PRE_BUILD_COMMAND or PRE_BUILD_SCRIPT_PATH.
2. Create a Python virtual environment if specified by VIRTUALENV_NAME.
3. Run python -m pip install --cache-dir /usr/local/share/pip-cache --prefer-binary -r requirements.txt if requirements.txt exists in the root of the repo or is specified by CUSTOM_REQUIREMENTSTXT_PATH.
4. Run python setup.py install if setup.py exists.
5. Run Python package commands and determine the Python package wheel.
6. If manage.py is found in the root of the repo, manage.py collectstatic is run. However, if DISABLE_COLLECTSTATIC is set to true, this step is skipped.
7. Compress the virtual environment folder if specified by the compress_virtualenv property key.
8. Run custom command or script if specified by POST_BUILD_COMMAND or POST_BUILD_SCRIPT_PATH.
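As a concrete example, here is a hedged sketch of wiring up a post-build step through app settings with the Azure CLI; the app name, resource group, and script path are placeholders, not values from the question:
az webapp config appsettings set \
  --name my-app \
  --resource-group my-resource-group \
  --settings POST_BUILD_COMMAND="post_build.sh"
Whatever post_build.sh does (for example, fetching a wkhtmltopdf binary into the app's folder) then runs at the end of every build.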
Build Conda environment and Python Jupyter Notebook
The following process is applied for each build.
1. Run custom command or script if specified by PRE_BUILD_COMMAND or PRE_BUILD_SCRIPT_PATH.
2. Set up the Conda virtual environment: conda env create --file $envFile.
3. If requirements.txt exists in the root of the repo or is specified by CUSTOM_REQUIREMENTSTXT_PATH, activate the environment with conda activate $environmentPrefix and run pip install --no-cache-dir -r requirements.txt.
4. Run custom command or script if specified by POST_BUILD_COMMAND or POST_BUILD_SCRIPT_PATH.
Package manager
The latest version of pip is used to install dependencies.
Run
The following process is used to determine how to start an app.
1. If the user has specified a start script, run it.
2. Otherwise, find a WSGI module and run it with gunicorn (see the sketch after this list):
- Look for and run a directory containing a wsgi.py file (for Django).
- Look for one of the following files in the root of the repo, and an app object within it (for Flask and other WSGI frameworks):
application.py
app.py
index.py
server.py
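For illustration, a minimal Flask app.py (my own sketch, not from the App Service docs) that this detection picks up, because it exposes a module-level app object:
# app.py - minimal WSGI app that the detection above can find
from flask import Flask

app = Flask(__name__)  # the module-level "app" object is what gets detected

@app.route("/")
def index():
    return "Hello from App Service"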
Gunicorn multiple workers support
To run gunicorn with a multiple-workers strategy, fully utilize the cores, and prevent potential timeouts/blocking from sync workers, set the environment variable PYTHON_ENABLE_GUNICORN_MULTIWORKERS=true in the app settings (an example follows).
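For instance, that setting can be applied with the Azure CLI (app and resource group names are placeholders):
az webapp config appsettings set \
  --name my-app \
  --resource-group my-resource-group \
  --settings PYTHON_ENABLE_GUNICORN_MULTIWORKERS=true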
In Azure Web Apps the version of the Python runtime which runs your app is determined by the value of LinuxFxVersion in your site config. See ../base_images.md for how to modify this.
Reference: Python runtime on App Service

Why doesn't docker build copy the files over?

I am trying to deploy a Python web app which is basically an interface for an Excel file.
I've built and run the container on my local machine and everything is smooth, but when I try to deploy on Azure Web Apps I hit an issue which says that my file is missing.
The Dockerfile looks like this
FROM python:3.8-slim-buster AS base
WORKDIR /home/mydashboard
RUN python -m pip install --upgrade pip
RUN pip3 install pipenv
COPY Pipfile /home/mydashboard/Pipfile
COPY Pipfile.lock /home/mydashboard/Pipfile.lock
COPY dataset.csv /home/mydashboard/dataset.csv
COPY src/ /home/mydashboard/src
RUN pipenv install --system
WORKDIR /home/mydashboard/src
EXPOSE 8000
CMD gunicorn -b :8000 dashboard_app.main:server
Weirdly enough, when this runs on the Azure App Service I receive a message saying that "dataset.csv" does not exist.
I printed the files inside /home/dashboard/ and it seems to be an empty folder!
Why does this happen??
It runs perfectly on my personal computer, but it seems it just won't run on Azure.
Based on:
your error message being that dataset.csv does not exist,
it working on your local machine, and
it failing in Azure,
dataset.csv probably exists in the build context on your local machine but is missing from the build context used in Azure (for example, because it isn't committed to the repo the Azure build runs from, or is excluded by a .dockerignore file).
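One way to confirm this (my suggestion, not part of the original answer) is to add a temporary debugging step right after the COPY instructions in the Dockerfile, so the Azure build log shows exactly what landed in the image, and to check whether a .dockerignore file excludes the csv:
# Temporary debugging step: list what the COPY instructions actually brought in.
# Remove it once the problem is found.
RUN ls -la /home/mydashboard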

Developing using PyCharm in Docker Container on AWS Instance

I use PyCharm Professional to develop python.
I am able to connect the PyCharm run/debug GUI to a local Docker image's Python interpreter and run local code using the Docker container's Python environment and libraries, e.g. via the procedure described here: Configuring Remote Interpreter via Docker.
I am also able to SSH into AWS instances with PyCharm and connect to remote Python interpreters there, which maps files from my local project into a remote directory and again allows me to run a GUI stepping through remote code as though it were local, e.g. via the procedure described here: Configuring Remote Interpreters via SSH.
I have a Docker image on Docker hub that I would like to deploy to an AWS instance, and then connect my local PyCharm GUI to the environment inside the remote container, but I can't see how to do this, can anybody help me?
[EDIT] One proposal that has been made is to put an SSH server inside the remote container and connect my local PyCharm directly to the container via SSH, for example as described here. It's one solution, but it has been extensively criticised elsewhere - is there a more canonical solution?
After doing a bit of research, I came to the conclusion that installing an SSH server inside my container and logging in via the PyCharm SSH remote interpreter was the best thing to do, despite concerns raised elsewhere. I managed it as follows.
The Dockerfile below will create an image with an SSH server inside that you can SSH into. It also has anaconda/python, so it's possible to run a notebook server inside and connect to that in the usual way for Jupyter debugging. Note that it's got a plain-text password (screencast); you should definitely enable key login if you're using this for anything sensitive.
It will take local libraries and install them into your package library inside the container, and optionally you can pull repos from GitHub as well (register for a GitHub API token if you want to do this, so you don't need to enter a plain-text password). It also requires you to create a plain-text requirements.txt containing all of the other packages you need pip to install.
Then run the build command to create the image, and the run command to create a container from that image. In the Dockerfile we expose SSH through the container's port 22, so let's hook that up to an unused port on the AWS instance - this is the port we will SSH through. Also add another port pairing if you want to use Jupyter from your local machine at any point:
docker build -t your_image_name .
don't miss the . at the end - it's important!
docker run -d -p 5001:22 -p 8889:8889 --name=your_container_name your_image_name
N.B. you will need to bash into the container (docker exec -it your_container_name bash) and turn Jupyter on with jupyter notebook.
Dockerfile:
FROM python:3.6
RUN apt-get update && apt-get install -y openssh-server
# Load an ssh server. Change root username and password. By default in debian, password login is prohibited,
# go into the file that controls this and make a change to allow password login
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN /etc/init.d/ssh restart
# Install git, so we can pull in some repos
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
# Install the requirements and the libraries we need (from a requirements.txt file)
COPY requirements.txt /tmp/
RUN python3 -m pip install -r /tmp/requirements.txt
# These are local libraries, add them (assuming a setup.py)
ADD your_libs_directory /your_libs_directory
RUN python3 -m pip install /your_libs_directory
RUN python3 your_libs_directory/setup.py install
# Adding git repos (optional - assuming a setup.py)
RUN git clone https://git_user_name:git_API_token@github.com/YourGit/git_repo.git
RUN python3 -m pip install /git_repo
RUN python3 git_repo/setup.py install
# Cleanup
RUN apt-get update && apt-get upgrade -y && apt-get autoremove -y
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
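Once the container is running, you can sanity-check the setup by SSHing in from your local machine; the hostname below is a placeholder, and 5001 is the host port mapped in the docker run command above:
# Log in as root with the password set in the Dockerfile (screencast - change it!)
ssh root@your-instance-public-dns -p 5001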

No module named 'flask' while using Vagrant

I am trying to set up Vagrant on my machine (Ubuntu 15.10 64-bit), and I followed the steps mentioned here: link
I am getting a "no Flask found" error when I run app.py.
Am I missing something here? It's mentioned that all packages from requirements will be installed automatically, but I am not able to make it work.
Steps are as follows:
Getting started
Install Vagrant
Clone this repo as your project name:
git clone git@github.com:paste/fvang.git NEW-PROJECT-NAME
Configure project name and host name in ansible/roles/common/vars/main.yml:
project_name: "fvang"
host_name: "fvang.local"
Modify your local /etc/hosts:
192.168.33.11 fvang.local
Build your Vagrant VM:
vagrant up
Log into the VM via SSH:
vagrant ssh
Start Flask development server:
cd ~/fvang
python app/app.py
I am the author of the FVANG repo, but I don't have the rep to join your chat. I posted a response on the github issue, see here:
https://github.com/paste/fvang/issues/2
I think the Ansible provisioning script failed to complete due to changes in Ansible 2.0. (otherwise Flask would have been installed from requirements.txt). You can check which version of Ansible was installed by running ansible --version. I will be upgrading the scripts to 2.0 shortly.
Edit --
I just updated the repo to work with Ansible 2.0 and simplified a few things. Everything should work as expected now, give it a shot. You'll probably want to just vagrant destroy and vagrant up again.
A Vagrant machine is like a brand-new operating system install: you need to install each and every piece of software you need. Try this:
sudo pip install Flask
After installation, if you need to run the app, you also need to uncomment Vagrant's private-network IP in the Vagrantfile before accessing Vagrant's localhost; it generally turns out to be 192.168.33.10, with the app on port 5000.
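For reference, the line to uncomment looks like this in the Vagrantfile (the IP shown is the common default from Vagrant's template; yours may differ):
# Vagrantfile: give the VM a fixed private-network IP reachable from the host
config.vm.network "private_network", ip: "192.168.33.10"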

Postgres in Azure Flask Web App

I've got Flask up and running as per this tutorial
https://azure.microsoft.com/en-us/documentation/articles/web-sites-python-create-deploy-flask-app/
I want Postgres to be my database system, but I'm in a Web App, so I can't just log into the VM and install it. What can I do here?
Thanks
It seems that we don't have permission to install a PostgreSQL database on an Azure Web App server. You need to install PostgreSQL on an Azure VM instead.
For examples:
A. If you created a Windows Server VM, refer to the link https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-tutorial/ and connect to your VM.
From the page http://www.enterprisedb.com/products-services-training/pgdownload#windows, you can download a PostgreSQL Windows installer and run it on your VM to install PostgreSQL step by step with the default configuration.
The PostgreSQL default port is 5432. Make sure the Windows Server firewall allows access to that port, test the connection from your local host using the VM's DNS name, and then you can continue to develop.
B. If you created a Linux VM such as Ubuntu, refer to the link https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-tutorial/ and connect to your VM.
To install PostgreSQL, you can use the Linux package manager.
Refer to the link: https://wiki.postgresql.org/wiki/Detailed_installation_guides
Ubuntu/Debian:
$ sudo apt-get update
$ sudo apt-get install postgresql
RedHat/CentOS:
Refer to the link: http://wiki.postgresql.org/wiki/YUM_Installation
You can use the SQLAlchemy ORM framework in Flask with PostgreSQL; please refer to http://flask.pocoo.org/docs/0.10/patterns/sqlalchemy/ and http://docs.sqlalchemy.org/en/rel_1_0/dialects/postgresql.html.
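As a quick illustration (hostname, credentials, and database name are placeholders), pointing SQLAlchemy at the PostgreSQL instance on your VM looks like this; it needs the psycopg2 driver installed (pip install psycopg2):
# Minimal SQLAlchemy-to-PostgreSQL connection sketch
from sqlalchemy import create_engine, text

# postgresql+psycopg2://<user>:<password>@<vm-dns-name>:5432/<database>
engine = create_engine("postgresql+psycopg2://myuser:mypassword@myvm.cloudapp.net:5432/mydb")

with engine.connect() as conn:
    print(conn.execute(text("SELECT version()")).scalar())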
