In my Azure Function app, I have some Ubuntu packages, like the Azure CLI and kubectl, that I need to install on the host whenever it starts a new container. I have already tried startup commands and going into Bash directly. The former doesn't work, and the latter tells me permission is denied and the resource is locked. Is there any way to install these packages on startup in Azure Functions?
Installing packages via Bash after deployment is not possible. When you write functions in Python and deploy them to Linux on Azure, the platform installs the packages listed in requirements.txt during the build and bundles them into a single package; when your function runs on Azure, it runs against that bundle. So rather than trying to install packages after deployment, declare the packages you need in requirements.txt before you deploy.
I have a Python Azure Function which executes locally. It is deployed to Azure, where I selected the free app plan. The Python code depends on various modules, such as requests. The modules are not loaded into the app like they are locally on my machine, and the function fails when triggered.
I have tried installing the dependencies using the Kudu console from my site; this hangs with the message cleaning up >> every time.
I have tried installing the dependencies using the SSH terminal from my site; the installations succeed, but I cannot see the modules when I run pip list in Kudu, and the app still fails. I also cannot navigate the directories; ls does nothing.
I tried to install extensions using the portal, but this option is greyed out under Development Tools.
You can find a requirements.txt in your local function folder.
If you want the function on Azure to install requests, your requirements.txt should look like this (Azure installs packages based on this file):
azure-functions
requests
All of these packages are bundled into a single package on Azure, so you cannot list them with pip list. Also keep in mind that Kudu on Linux is limited and you cannot install packages through it.
The problem seems to come from VS Code; you can use the command line to deploy your function app instead.
For example, my function app on Azure is named 423PythonBowman, so this is my command:
func azure functionapp publish 423PythonBowman --build remote
I import requests in the code, and after deploying with this command my function works fine on the portal with no errors.
Have a look at the official doc:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=macos%2Ccsharp%2Cbash#publish
I'm new to using Docker, so I'm either looking for direct help or a link to a relevant guide. I need to train some deep learning models on my school's Linux server, but I can't manually install PyTorch and other Python packages since I don't have root access (sudo). Another student said that he uses Docker and has everything ready to go in his container.
I'm wondering how to wrap up my code and relevant packages into a container that I can push to the Linux server and then run.
To address your specific problem, the easiest way I've found to get code into a container is to use git:
1) Start the container in interactive mode, or SSH to it if it's attached to a network.
2) git clone <your awesome deep learning code>. Keep a requirements.txt file in your git repo, change directories into your local clone of the repo, and run pip install -r requirements.txt.
3) Run whatever script you need to run your code. Note that you can easily put the pip install command in one of your run scripts. (A sketch of this flow follows below.)
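A minimal sketch of that flow, assuming a stock python image and a placeholder repo URL:
docker run -it python:3.10 bash   # start a throwaway container interactively
# inside the container:
git clone https://github.com/you/deep-learning-code.git
cd deep-learning-code
pip install -r requirements.txt
python train.py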
It's important to remember that Docker containers are stateless/ephemeral. You should not expect the container or its contents to persist in some durable fashion. This specific issue is addressed by mapping a directory on the host system to a directory in the container.
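For example (the image name and paths are placeholders), anything written under /workspace inside the container lands in /home/me/experiments on the host:
docker run -it -v /home/me/experiments:/workspace python:3.10 bash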
Side note: I recommend starting with the Docker tutorial first. You can easily skip the installation parts if you are working on a system that already has Docker installed and where you have permission to build, start, and stop containers.
I don't have root access (sudo). Another student said that he uses docker
I would like to point out that Docker requires sudo permissions.
Instead, I think you should look at using something like Google Colab or JupyterLab. This gives you the added benefit of code that is backed up on a remote server.
I would like to deploy a Python Flask application on beanstalk.
The application depends on external packages (e.g. geopy) and internal packages (e.g. adam_geography).
The manual says:
Create a requirements.txt file and place it in the top-level directory
of your source bundle.
This would probably fetch geopy and its dependencies, but it would not fetch adam_geography, which is available from a custom repo inside my VPC.
How do I specify/upload private, internal Python package dependencies in a Beanstalk application?
1) Copy the internal Python package to the server.
2) Use pip's "editable installs" feature to install the private package:
pip install -e path/to/SomeProject
http://pip.readthedocs.org/en/latest/reference/pip_install.html#editable-installs
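As a concrete sketch (the paths and host are illustrative, and it assumes you can reach the instance with scp):
# 1) copy the package to the server
scp -r ./adam_geography user@your-server:/opt/packages/adam_geography
# 2) then, on the server:
pip install -e /opt/packages/adam_geography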
Use ebextensions to specify custom commands to run on all your EC2 instances. These ebextensions can be used to run pip as @shavenwarthog suggested in his answer.
Create a directory called .ebextensions in your app source root directory. Inside this directory create a file with a .config extension say 01-custom-files.config.
This file can contain custom unix commands you want to run on each EC2 instance.
You can run your own scripts here.
You can also use container_commands, which are executed after your app source is unzipped on the EC2 instance.
Read more about commands and container_commands here. You can also find examples at these links, with a sketch config after them:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-commands
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-container_commands
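As a sketch, a .ebextensions/01-custom-files.config along these lines could install the internal package from the custom repo (the index URL is a placeholder for your in-VPC package repo):
container_commands:
  01_install_internal_packages:
    command: "pip install --extra-index-url https://pypi.internal.example/simple adam_geography"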
The Problem
My problem is exactly like How do I install in-house requirements for Python Heroku projects? and How to customize pip's requirements.txt in Heroku on deployment?. Namely, I have a private repo from which I need a Python dependency installed into my Heroku app. The canonical answer, given by Heroku's own Kenneth Reitz, is to put something like
-e git+https://username:password@github.com/kennethreitz/requests.git@v0.10.0#egg=requests
in your requirements.txt file.
My security needs prevent me from storing my password in a repo. (I also do not want to put the dependency inside my app's repo; they're separate pieces of software and need to be in separate repos.) The only place I can give my password (or, preferably, a GitHub OAuth token or deploy key) to Heroku is in an environment variable like
heroku config:add GITHUB_OAUTH_TOKEN=12312312312313
Attempted Solutions
I could use a custom .profile in my app's repo, but then I'd be downloading and installing my dependency each time a process (web, worker, etc.) restarts.
This leaves using a custom buildpack and the Heroku Labs add-on that exposes my heroku config environment to buildpacks at compile time. I tried building one on top of Buildpack Multi. The idea is that Buildpack Multi is the primary buildpack and, using the .buildpacks file in my app's repo, it first runs the normal Heroku Python buildpack, then my custom one.
The trouble is that even after Buildpack Multi successfully runs the Python buildpack, the Python binary and pip package are not visible to my buildpack, so the custom buildpack just fails outright. (In my tests, the GITHUB_OAUTH_TOKEN environment variable was correctly exposed to the buildpacks.)
The only other thing I can think to try is to make my own fork of the Python buildpack that installs my dependency when it installs everything from requirements.txt, or even rewrites requirements.txt directly. Both of these seem like really heavy solutions to what I would think is a very common problem.
Update: Current Workaround
My custom buildpack (linked above) now downloads and saves my closed-source dependency ("foo") into the vendor directory that the geos buildpack uses. I committed foo's own dependencies into my app's requirements.txt. Thus pip installs foo's dependencies through my app's requirements.txt, and the buildpack adds the vendored copy of foo to my app's PYTHONPATH (so foo's setup.py install never runs).
The biggest problem with this approach is that it couples my (admittedly badly written) buildpack to my app. The second problem is that my app's requirements.txt should just list foo as a dependency and leave foo's dependencies for foo to determine. Lastly, there isn't a good way to give myself an error message six months from now, when I've forgotten how all this works, if I forget to set my GITHUB_OAUTH_TOKEN environment variable (or, producing even less useful feedback, if the token expires while the environment variable still exists but is no longer valid).
Cry for Help
What (likely obvious) thing am I missing? How have you solved this problem in your apps? Any suggestions on getting my buildpack to work, or hopefully an even simpler solution?
I created a buildpack to solve this problem using a custom SSH key stored as an environment variable. As the buildpack is technology agnostic, it can be used to download dependencies with any tool, like Composer for PHP, Bundler for Ruby, npm for JavaScript, etc.: https://github.com/simon0191/custom-ssh-key-buildpack
Add the buildpack to your app:
$ heroku buildpacks:add --index 1 https://github.com/simon0191/custom-ssh-key-buildpack
Generate a new SSH key (let's say you name it deploy_key).
Add the public key to your private repository account. For example:
Github: https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/
Bitbucket: https://confluence.atlassian.com/bitbucket/add-an-ssh-key-to-an-account-302811853.html
Encode the private key as a base64 string and add it as the CUSTOM_SSH_KEY environment variable of the Heroku app.
Make a comma-separated list of the hosts for which the SSH key should be used and add it as the CUSTOM_SSH_KEY_HOSTS environment variable of the Heroku app:
# MacOS
$ heroku config:set CUSTOM_SSH_KEY=$(base64 --input ~/.ssh/deploy_key) CUSTOM_SSH_KEY_HOSTS=bitbucket.org,github.com
# Ubuntu
$ heroku config:set CUSTOM_SSH_KEY=$(base64 ~/.ssh/deploy_key) CUSTOM_SSH_KEY_HOSTS=bitbucket.org,github.com
Deploy your app and enjoy :)
I faced the same problem. Like you, I am amazed at how difficult it is to find good documentation on installing a private dependency (whatever the language and service used).
Because this is not a main concern of service providers, I now take a systematic approach, relying as little as possible on idiosyncratic features. I try to find the easiest solution for each of these steps:
Pass the credentials to the build environment using a secure channel. For Python, use an environment variable containing an SSH key as a base64 string. For JS, do the same with the npm token.
Configure the build process to use these credentials. In the best case this involves configuring SSH to use a deploy key; otherwise it can be as basic as cloning the dependency for later use. For your specific case with Python and Heroku, you can use the 'pre_compile' hook.
I detailed the process for my future self here: https://gist.github.com/michelbl/a6163522d95540cf0c8b6667bd35d5f5
I need to give access to a private dependency. This can be needed for continuous integration or deployment.
Here we use Python and GitHub, with the services CircleCI and Heroku; however, the principles apply everywhere.
What is a deploy key?
See https://developer.github.com/v3/guides/managing-deploy-keys/
There are four ways of granting access to a private dependency, but deploy keys are a good compromise in terms of security and ease of use for projects that do not require too many dependencies (if yours does, prefer a machine user). In any case, do not use the username/password of a developer account or an OAuth token, as they do not provide privilege limitation.
Create a deploy key:
ssh-keygen -t rsa -b 4096 -C "myself@my_company.com"
Give the public part to GitHub.
Give the private part to the service needing access. See below.
General strategy
Whatever the service or technology I use, the goal is to access the git repo over SSH using the deploy key.
Obviously, I do not want to put the deploy key in the repo. But most services (CI, deployment) provide a way to set protected environment variables that can be used at build time. The key can be encoded using base64:
cat deploy-key | base64
cat deploy-key.pub | base64
Most services also provide a way to tailor the build procedure. This is needed to configure ssh to use the deploy key.
CircleCI
Set the deploy key using environment variables, encoded with base64.
In config.yml, add a step:
echo $DEPLOY_KEY_PRIVATE | base64 --decode > ~/.ssh/deploy-key
chmod 400 ~/.ssh/deploy-key
echo $DEPLOY_KEY_PUBLIC | base64 --decode > ~/.ssh/deploy-key.pub
ssh-add ~/.ssh/deploy-key
# Run this to check which private key is used. If the checkout key is used,
# github replies "Hi my_org/my_package". If the deploy key is used as wished,
# github replies "Hi my_org/my_dependency".
#ssh -i ~/.ssh/deploy-key -T git@github.com || true
# Now pip connects to git+ssh using the deploy key
export GIT_SSH_COMMAND="ssh -i ~/.ssh/deploy-key"
pip install -r requirements.txt
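For reference, in a CircleCI 2.x config.yml those commands would sit inside a run step, roughly like this (the step name is arbitrary):
- run:
    name: Install private dependency
    command: |
      echo $DEPLOY_KEY_PRIVATE | base64 --decode > ~/.ssh/deploy-key
      chmod 400 ~/.ssh/deploy-key
      export GIT_SSH_COMMAND="ssh -i ~/.ssh/deploy-key"
      pip install -r requirements.txt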
requirements.txt can be something like:
# The purpose of this file is to install the private dependency *before*
# setup.py is run.
# Be sure ssh is configured to use a ssh key with read permission to the repo.
git+ssh://git@github.com/my_org/my_dependency@1.0.10
# Run setup.py. The private dependency is already installed with the good
# version so pip doesn't try to fetch it from PyPI.
--editable .
and setup.py does not care about the dependency being private:
from distutils.core import setup

setup(
    name='my_package',
    version='1.0',
    packages=[
        'my_package',
    ],
    install_requires=[
        # Beware, the following package is a private dependency.
        # Python provides several ways to install private dependencies, none
        # of which are really satisfactory.
        # 1. Use dependency_links / --process-dependency-links. Good luck with
        #    that!
        # 2. Maintain a private package repository. Good luck with that!
        # 3. Install the private dependency separately before setup.py is run.
        #    This is now the preferred way. Be sure that ssh is properly
        #    configured to use an ssh key with read permission to the github
        #    repo of the private dependency, then run:
        #    `pip install -r requirements.txt`
        'my_dependency==1.0.10',
        # ... my normal dependencies
        'unidecode==1.0.22',
        'uwsgi==2.0.15',
        'nose==1.3.7',    # tests
        'flake8==3.5.0',  # style
    ],
)
Heroku
For Python, there is no need to write a custom buildpack. First, set the deploy key using environment variables, encoded with base64.
Then add the hook bin/pre_compile:
# This script configures ssh on Heroku to use the deploy key.
# This is needed to install private dependencies.
#
# Note that this does not work with Heroku review apps. Review apps can
# inherit env variables from their parents, but they only receive the values
# after the build. You would need another way to pass the ssh key to this
# script.
#
# See also
# * https://stackoverflow.com/questions/21297755/heroku-python-dependencies-in-private-repos-without-storing-my-password#
# * https://github.com/bjeanes/ssh-private-key-buildpack
# Ensure we have an ssh folder
if [ ! -d ~/.ssh ]; then
mkdir -p ~/.ssh
chmod 700 ~/.ssh
fi
# Create the key files. DEPLOY_KEY_PRIVATE and DEPLOY_KEY_PUBLIC are the
# base64-encoded config variables set earlier.
cat "$ENV_DIR/DEPLOY_KEY_PRIVATE" | base64 --decode > ~/.ssh/deploy-key
chmod 400 ~/.ssh/deploy-key
cat "$ENV_DIR/DEPLOY_KEY_PUBLIC" | base64 --decode > ~/.ssh/deploy-key.pub
#ssh-add ~/.ssh/deploy-key
# If you want to disable host verification, you could use that.
#ssh -oStrictHostKeyChecking=no -T git@github.com 2>&1
# Run that if you want to check that ssh uses the correct key.
#ssh -i ~/.ssh/deploy-key -T git@github.com || true
# Configure ssh to use the correct deploy key when connecting to github.
# Disables host verification.
echo -e "Host github.com\n"\
" IdentityFile ~/.ssh/deploy-key\n"\
" IdentitiesOnly yes\n"\
" UserKnownHostsFile=/dev/null\n"\
" StrictHostKeyChecking no"\
>> ~/.ssh/config
# Unfortunately this does not seem to work.
#export GIT_SSH_COMMAND="ssh -i ~/.ssh/deploy-key"
# The vanilla python buildpack can now install all the dependencies in
# requirements.txt
Create a private PyPI server
If you create your own PyPI server, you can simply list your packages in your requirements.txt file and store the URL for your server (including username and password) in the config variable PIP_EXTRA_INDEX_URL.
For example:
heroku config:set PIP_EXTRA_INDEX_URL='https://username:password#privateserveraddress.com/simple'
Note that this is the same as using the pip install command line option, --extra-index-url. (See https://pip.pypa.io/en/stable/user_guide/#environment-variables)
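For example, the equivalent one-off command line would be:
pip install --extra-index-url 'https://username:password@privateserveraddress.com/simple' -r requirements.txt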
The primary index URL will still be the default (https://pypi.org/simple). This means that pip will first attempt to resolve package names in your requirements file against the default PyPI server, and then try your private server second.
If you need packages in your private server that have the same name as packages on PyPI, then you need the primary index URL to be your server and the --extra-index-url option to be the default server's URL. You would need to do this if you want to host your own version of an existing package without changing the package name. I haven't tried this, but it currently looks like you would need to create a fork of Heroku's official Python buildpack and make a small change to the bin/steps/pip-install file.
The reason pip has access to PIP_EXTRA_INDEX_URL is this block in that file:
# Set Pip env vars
# This reads certain environment variables set on the Heroku app config
# and makes them accessible to the pip install process.
#
# PIP_EXTRA_INDEX_URL allows for an alternate pypi URL to be used.
if [[ -r "$ENV_DIR/PIP_EXTRA_INDEX_URL" ]]; then
PIP_EXTRA_INDEX_URL="$(cat "$ENV_DIR/PIP_EXTRA_INDEX_URL")"
export PIP_EXTRA_INDEX_URL
mcount "buildvar.PIP_EXTRA_INDEX_URL"
fi
Code like this is necessary to read config variables in buildpacks (see https://devcenter.heroku.com/articles/buildpack-api#buildpack-api), but you should be able to simply duplicate this code block, replacing PIP_EXTRA_INDEX_URL with PIP_INDEX_URL. Then set PIP_INDEX_URL to your private server's URL and PIP_EXTRA_INDEX_URL to the default PyPI URL.
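A sketch of that duplicated block (assuming the surrounding buildpack code stays unchanged):
# PIP_INDEX_URL overrides the primary PyPI index.
if [[ -r "$ENV_DIR/PIP_INDEX_URL" ]]; then
    PIP_INDEX_URL="$(cat "$ENV_DIR/PIP_INDEX_URL")"
    export PIP_INDEX_URL
    mcount "buildvar.PIP_INDEX_URL"
fi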
If you are using another source instead of a private PyPI server, such as GitHub, and simply need a way to avoid hardcoding a username and password in your requirements.txt file, note that you can use environment variables in requirements.txt (see https://pip.pypa.io/en/stable/reference/pip_install/#using-environment-variables). You would just have to export them in bin/steps/pip-install as you would for PIP_INDEX_URL.
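For instance, once GITHUB_OAUTH_TOKEN is exported during the build, a requirements.txt line could look like this (the repo and egg names are placeholders; pip expands the ${VAR} syntax):
git+https://${GITHUB_OAUTH_TOKEN}@github.com/my_org/my_private_dependency.git@v1.0.0#egg=my_private_dependency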
You could use a pre-compile step, as described here, to run something like M4 to do substitutions on your requirements.txt and fill in the password from the environment variable.
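A minimal sketch with M4 itself (the template filename and token placeholder are arbitrary):
# requirements.txt.m4 contains a line such as:
#   git+https://GITHUB_OAUTH_TOKEN@github.com/my_org/my_private_dependency.git#egg=my_private_dependency
m4 -D GITHUB_OAUTH_TOKEN="$GITHUB_OAUTH_TOKEN" requirements.txt.m4 > requirements.txt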