How do I configure pip.conf in AWS Elastic Beanstalk?

I need to deploy a Python application to AWS Elastic Beanstalk, but it requires dependencies from our private PyPI index. How can I configure pip (the way you would with ~/.pip/pip.conf) so that AWS can reach our private index while deploying the application?
My last resort is to rewrite each dependency in requirements.txt as -i URL dependency before deployment, but there must be a cleaner way to achieve this.

In .ebextensions/files.config add something like this:
files:
  "/opt/python/run/venv/pip.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [global]
      find-links = <URL>
      trusted-host = <HOST>
      index-url = <URL>
Or whatever other configuration you'd like to set in your pip.conf. This places pip.conf inside your application's virtual environment, which is activated before pip install -r requirements.txt runs. Hopefully this helps!
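To check that the file actually landed where pip reads it, here is a minimal verification sketch (assuming the EB CLI is installed and your environment runs the Amazon Linux Python platform, which keeps the venv at /opt/python/run/venv):
eb ssh my-env                             # "my-env" is a placeholder environment name
source /opt/python/run/venv/bin/activate
pip config list                           # should echo the values from pip.conf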

Related

Python Lambda missing dependencies when set up through Amplify

I've been trying to configure an Amplify project with a Python-based Lambda backend API.
I have followed the tutorials by creating an API through the AWS CLI and installing all the dependencies through pipenv.
When I cd into the function's directory, my Pipfile looks like this:
name = "pypi"
url = "https://pypi.python.org/simple"
verify_ssl = true
[dev-packages]
[packages]
src = {editable = true, path = "./src"}
flask = "*"
flask-cors = "*"
aws-wsgi = "*"
boto3 = "*"
[requires]
python_version = "3.8"
And when I run amplify push everything works and the Lambda Function gets created successfully.
Also, when I run the deploy pipeline from the Amplify Console, I see in the build logs that my virtual env is created and my dependencies are downloaded.
Something else that was done based on GitHub issues (otherwise the build would definitely fail) was adding the following to amplify.yml:
backend:
  phases:
    build:
      commands:
        - ln -fs /usr/local/bin/pip3.8 /usr/bin/pip3
        - ln -fs /usr/local/bin/python3.8 /usr/bin/python3
        - pip3 install --user pipenv
        - amplifyPush --simple
Unfortunately, the Lambda's logs (both dev and prod) show that it fails to import every dependency that was installed through pipenv. I added the following to index.py:
import os
os.system('pip list')
And saw that NONE of my dependencies were listed, so I was wondering whether the Lambda was running in the virtualenv that was created or just using the default Python.
How can I make sure that my Lambda is running the virtualenv as defined in the Pipfile?
Lambda functions do not run in a virtualenv. Amplify uses pipenv to create a virtualenv and download the dependencies. Then Amplify packages those dependencies, along with the lambda code, into a zip file which it uploads to AWS Lambda.
Your problem is either that the dependencies are not packaged with your function or that they are packaged with a bad directory structure. You can download the function code to see exactly how the packaging went.
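A quick way to inspect the packaging (a sketch; <function-name> and the presigned URL are placeholders) is to pull the deployed zip with the AWS CLI:
aws lambda get-function --function-name <function-name> --query 'Code.Location' --output text
# the command prints a presigned URL; download and list the archive:
curl -o function.zip "<presigned-url>"
unzip -l function.zip                     # dependencies should sit at the zip root, next to index.py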

Pip config settings not working for virtual environment

Studying https://pip.pypa.io/en/stable/topics/configuration/ I understand that I can have multiple pip.conf files (on a UNIX-based system) which are loaded in the described order.
My task is to write a bash script that automatically creates a virtual environment and sets pip configuration only for the virtual environment.
# my_bash_script.sh
...
python -m virtualenv .myvenv
...
touch path/to/.myvenv/pip.conf
# this creates path/to/.myvenv/pip.conf;
# otherwise the following commands would land in the user's pip.conf at ~/.config/pip/pip.conf
path/to/.myvenv/bin/python -m pip config set global.proxy "my-company-proxy.com"
# setting our company proxy here
path/to/.myvenv/bin/python -m pip config set global.trusted-host "pypi.org pypi.python.org files.pythonhosted.org"
# because of SSL issues from behind the company's firewall I need this to make pip work
...
My problem is that I want to set the configuration not under global but under site. If I exchange global.proxy and global.trusted-host for site.proxy and site.trusted-host, pip is no longer able to install packages, whereas everything works fine if I leave it at global. Changing them to install.proxy and install.trusted-host doesn't work either.
The pip.conf file looks like this afterwards:
# /path/to/.myvenv/pip.conf
[global]
proxy = "my-company-proxy.com"
trusted-host = "pypi.org pypi.python.org files.pythonhosted.org"
pip config debug yields the following:
env_var:
env:
global:
  /etc/xdg/pip/pip.conf, exists: False
  /etc/pip.conf, exists: False
site:
  /path/to/.myvenv/pip.conf, exists: True
    global.proxy: my-company-proxy.com
    global.trusted-host: pypi.org pypi.python.org files.pythonhosted.org
user:
  /path/to/myuser/.pip/pip.conf, exists: False
  /path/to/myuser/.config/pip/pip.conf, exists: True
What am I missing here?
Thank you in advance for your help!
The [global] section name in the config file refers to the fact that those settings apply to all pip commands, as opposed to per-command sections like [freeze]. See the configuration section of the pip manual. So you can do something like
[global]
timeout = 60
[freeze]
timeout = 10
The global/site distinction comes from the location of the config file. So your file /path/to/.myvenv/pip.conf is referred to as the site config file through its location. In it, you still need to have
[global]
proxy = "my-company-proxy.com"
trusted-host = "pypi.org pypi.python.org files.pythonhosted.org"

Problem with the command heroku run -a <name of app> pipenv run upgrade

I have written an app with Python Flask and I am following these steps to deploy it:
Deploying to Heroku (takes 7 minutes)
Install heroku (if you don't have it yet)
$ npm i heroku -g
Login to heroku on the command line (if you have not already)
$ heroku login -i
Create an application (if you don't have it already)
$ heroku create <your_application_name>
Environment Variables (takes 2 minutes)
Now navigate to your Heroku dashboard, look for your application settings, and manually add your environment variables to Heroku:
You cannot create a .env file on Heroku; instead you need to manually create all the variables under your project settings.
Open your .env file and copy and paste each variable (FLASK_APP, DB_CONNECTION_STRING, etc.) to Heroku.
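Alternatively (a sketch; the values are placeholders taken from your local .env), the same variables can be set from the terminal with the Heroku CLI:
$ heroku config:set FLASK_APP=src/app.py -a <your_application_name>
$ heroku config:set DB_CONNECTION_STRING=<your_connection_string> -a <your_application_name>
$ heroku config -a <your_application_name>   # list what is currently set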
Deploying your database to Heroku (takes 3 minutes)
Your local MySQL database now has to be uploaded to the cloud. There are plenty of services that provide MySQL database hosting, but we recommend JawsDB because it has a free tier, it's simple, and it's 100% integrated with Heroku.
Go to your Heroku project dashboard and look to add a new Heroku add-on.
Look for JawsDB MySQL and add it to your project (it may ask for a credit card, but you will not be charged as long as you remain within the 5 MB database size, enough for your demo).
Once JawsDB is added to your project, look for the connection string inside your JawsDB dashboard, something like:
mysql://tqqa0ui0cga32nxd:eqi8nchjbpwth82v#c584md9egjnm02sk.5btxwkvyhwsf.us-east-1.rds.amazonaws.com:3306/45fds423rbtbr
Copy the connection string and create a new environment variable in your project settings.
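If you prefer the CLI here as well, a hedged sketch: the JawsDB add-on normally exposes its connection string as the JAWSDB_URL config var, so you can copy it across without opening the dashboard:
$ heroku config:get JAWSDB_URL -a <your_app_name>
$ heroku config:set DB_CONNECTION_STRING=$(heroku config:get JAWSDB_URL -a <your_app_name>) -a <your_app_name>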
Run migrations on Heroku: After your database is connected, you have to create the tables and structure. You can do that by running the pipenv run upgrade command on the production server like this:
$ heroku run -a=<your_app_name> pipenv run upgrade
Note: You have to replace <your_app_name> with your application name, and you also have to be logged into Heroku in your terminal (you can do that by typing heroku login -i).
Push to the Heroku codebase
Commit and push to Heroku: make sure you have added and committed your changes, then push:
$ git push heroku main
That is it!
Now the problem is when I run this command:
heroku run -a=<your_app_name> pipenv run upgrade
And the response is:
bash: pipenv: command not found
This is my Pipfile:
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
flask = "*"
sqlalchemy = "*"
flask-sqlalchemy = "*"
flask-migrate = "*"
flask-swagger = "*"
psycopg2-binary = "*"
python-dotenv = "*"
mysql-connector-python = "*"
flask-cors = "*"
gunicorn = "*"
mysqlclient = "*"
flask-admin = "*"
cloudinary = "*"
flask-login = "*"
pipenv = "*"
[requires]
python_version = "3.8"
[scripts]
start="flask run -p 3000 -h 0.0.0.0"
init="flask db init"
migrate="flask db migrate"
upgrade="flask db upgrade"
deploy="echo 'Please follow this 3 steps to deploy: https://github.com/4GeeksAcademy/flask-rest-hello/blob/master/README.md#deploy-your-website-to-heroku' "
These are the commands I run before deploying:
pipenv install;
mysql -u root -e "CREATE DATABASE example";
pipenv run init;
pipenv run migrate;
pipenv run upgrade;
If I don't run the upgrade on Heroku, this is what the Release Log in Heroku shows:
sqlalchemy.exc.DatabaseError: (mysql.connector.errors.DatabaseError) 2003 (HY000): Can't connect to MySQL server on 'localhost' (111)
(Background on this error at: http://sqlalche.me/e/14/4xp6)
It seems the pipenv tool is missing or not on your PATH. You can install it using:
$ pip install pipenv
If pipenv is already installed, check the PATH variable and see whether you can locate it using $ which pipenv. Have a look at the docs regarding PATH:
https://pipenv-fork.readthedocs.io/en/latest/advanced.html
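To see what the production dyno actually has available (a sketch; <your_app_name> is a placeholder), open a one-off shell on Heroku and check from there:
$ heroku run -a <your_app_name> bash
~ $ which pipenv || pip install --user pipenv   # install only if missing; ~/.local/bin may need adding to PATH
~ $ pipenv run upgrade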

How to upload Python packages to a Sonatype Nexus private repo

I have configured a Nexus OSS 3.14 private Python artifact server on the AWS cloud. I want to maintain all my project-related Python packages on my private repository server.
I downloaded all the Python packages to my local Linux box and I want to upload them to the private Python artifact server.
I tried a curl PUT request, but the upload failed; your help is needed to complete this.
This is the curl PUT request I tried:
curl -v -u admin:admin --upload-file boto3-1.9.76-py2.py3-none-any.whl https://artifact.example.com/repository/ASAP-Python-2.7-Hosted/
When I use that command I get a 404 response.
I think the recommended approach is to use twine; something like this should work:
pip install twine
twine upload --repository-url https://artifact.example.com/repository/ASAP-Python-2.7-Hosted/ boto3-1.9.76-py2.py3-none-any.whl
It should ask for your username and password. To make life a bit easier you can create a $HOME/.pypirc file with the URL, username and password:
[nexus]
repository: https://artifact.example.com/repository/ASAP-Python-2.7-Hosted/
username: admin
password: admin
Then when you call twine, do so like this:
twine upload --repository nexus boto3-1.9.76-py2.py3-none-any.whl
It's not a hard requirement, but if you're on a multi-user system and you've put a password in the file, you should probably do
chmod 600 $HOME/.pypirc
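Once the upload succeeds, a hedged way to verify it (Nexus hosted PyPI repositories normally expose a PEP 503 index under <repo-url>/simple):
pip install --index-url https://artifact.example.com/repository/ASAP-Python-2.7-Hosted/simple boto3
# or keep the public index as primary and the private repo as a fallback:
pip install --extra-index-url https://artifact.example.com/repository/ASAP-Python-2.7-Hosted/simple boto3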
Pip for download (think yarn in the JS world); twine for upload.
Configuration: be careful with trailing slashes!
Download with pip
Run pip config edit [--editor [nano|code|...]] [--global|--user] to edit the config:
[global]
index = https://nexus.your.domain/repository/pypi/pypi
index-url = https://nexus.your.domain/repository/pypi/simple
Or set environment variables, in a Dockerfile for example:
ENV \
  PIP_INDEX=https://nexus.your.domain/repository/pypi/pypi \
  PIP_INDEX_URL=https://nexus.your.domain/repository/pypi/simple
Or use command-line args: pip install --index-url <URL>
Upload with twine
Edit .pypirc:
[distutils]
index-servers =
    pypi

[pypi]
repository: https://nexus.your.domain/repository/pypi-hosted/
username: nexususername
password: nexuspassword
Or use environment variables:
ENV \
  TWINE_REPOSITORY_URL=https://nexus.your.domain/repository/pypi-hosted/ \
  TWINE_USERNAME=nexususername \
  TWINE_PASSWORD=nexuspassword
Or the command line:
twine upload --repository-url https://nexus.your.domain/repository/pypi-hosted/ dist/*
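Putting it together, a minimal end-to-end sketch (assuming a standard pyproject.toml/setup.py project and the placeholder URLs above):
pip install build twine
python -m build                  # writes dist/*.tar.gz and dist/*.whl
twine upload --repository-url https://nexus.your.domain/repository/pypi-hosted/ dist/*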

App Engine Flexible - requirements.txt include GCP repository

I am trying to set up an application running in a python 3 App Engine Flexible environment. I have an app.yaml file:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT application:app
runtime_config:
  python_version: 3
I have a requirements.txt listing some packages my app needs:
Flask==0.12
gunicorn==19.7.1
...
I also have a common-functions package that is located in a GCP Source Repository (git). I don't want to host it publicly on PyPI. Is it possible to still include it as a requirement? Something like:
git+https://source.developers.google.com/p/app/r/common
Using the above asks for a username and password when I try it on my local machine, even though I have a credential helper set up:
git config credential.helper gcloud.sh
You can add the -i http://yourhost.com and --trusted-host yourhost.com flags to your requirements.txt file.
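For illustration, a sketch of what that looks like (yourhost.com is the placeholder above; common==1.0.0 is a hypothetical name/version for the shared package). pip accepts these options on lines of their own at the top of a requirements file:
# requirements.txt
--index-url http://yourhost.com
--trusted-host yourhost.com
Flask==0.12
gunicorn==19.7.1
common==1.0.0   # hypothetical shared-functions package served from your private index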
