Install a dependency's extras if a specific extra of the package is requested - python

My project has Celery as a dependency. It's a hard dependency, i.e. my project can't live without it. However, Celery can use Redis as its backend, which my app doesn't strictly need.
I want my package to be set up so that if a user installs dependencies with poetry install -E redis, it also installs Celery's redis extra (as if it were specified in pyproject.toml as celery = { version="^4.4.0", extras=["redis"] }).
However, if a user runs a plain poetry install (without -E redis), I don't want Celery's Redis dependencies to be installed (as if it were only specified as celery = "^4.4.0").
Is there a way to put this into Poetry config? Or should I track the optional requirements of celery[redis] and manually add them to my pyproject.toml file?
I already checked the Poetry documentation on this matter, but it doesn’t offer a way to specify the same dependency (celery in my case) with different options.

This should work by defining redis as an optional extra, e.g.:
[tool.poetry]
name = "mypackage"
version = "0.1.0"
description = ""
authors = ["finswimmer <finswimmer@example.org>"]

[tool.poetry.dependencies]
python = "^3.6"
celery = "^4.4.7"
redis = { version = "^3.5.3", optional = true }

[tool.poetry.dev-dependencies]

[tool.poetry.extras]
redis = ["redis"]

[build-system]
requires = ["poetry>=1.0"]
build-backend = "poetry.masonry.api"
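With this in place, poetry install -E redis pulls in the redis client, while a plain poetry install leaves it out. If your own code touches Redis-specific functionality, it also helps to guard the import so the no-extra install keeps working. A minimal sketch (the module name and helper are made up for illustration):

# mypackage/backend.py (hypothetical module)
try:
    import redis  # only available when installed with `poetry install -E redis`
except ImportError:
    redis = None

def result_backend_url(host="localhost", port=6379):
    """Return a Celery result-backend URL, or None when the redis extra is not installed."""
    if redis is None:
        return None
    return f"redis://{host}:{port}/0"

Since Celery only needs the redis package to be importable at runtime in order to use the Redis backend, exposing it as an extra of mypackage gives end users essentially what celery[redis] would have pulled in.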

Related

Python Lambda missing dependencies when set up through Amplify

I've been trying to configure an Amplify project with a Python-based Lambda backend API.
I have followed the tutorials by creating an API through the AWS CLI and installing all the dependencies through pipenv.
When I cd into the function's directory, my Pipfile looks like this:
name = "pypi"
url = "https://pypi.python.org/simple"
verify_ssl = true
[dev-packages]
[packages]
src = {editable = true, path = "./src"}
flask = "*"
flask-cors = "*"
aws-wsgi = "*"
boto3 = "*"
[requires]
python_version = "3.8"
And when I run amplify push everything works and the Lambda Function gets created successfully.
Also, when I run the deploy pipeline from the Amplify Console, I see in the build logs that my virtual env is created and my dependencies are downloaded.
Something else I added based on GitHub issues (otherwise the build would definitely fail) was the following in amplify.yml:
backend:
  phases:
    build:
      commands:
        - ln -fs /usr/local/bin/pip3.8 /usr/bin/pip3
        - ln -fs /usr/local/bin/python3.8 /usr/bin/python3
        - pip3 install --user pipenv
        - amplifyPush --simple
Unfortunately, from the Lambda's logs (both dev and prod), I see that it fails to import every dependency that was installed through pipenv. I added the following in index.py:
import os
os.system('pip list')
And I saw that NONE of my dependencies were listed, so I was wondering whether the Lambda was running inside the virtualenv that was created or just using the default Python.
How can I make sure that my Lambda is running the virtualenv as defined in the Pipfile?
Lambda functions do not run in a virtualenv. Amplify uses pipenv to create a virtualenv and download the dependencies. Then Amplify packages those dependencies, along with the lambda code, into a zip file which it uploads to AWS Lambda.
Your problem is either that the dependencies are not packaged with your function or that they are packaged with a bad directory structure. You can download the function code to see exactly how the packaging went.
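To inspect the packaged bundle, you can download it and list its contents; a short boto3 sketch (the function name is a placeholder for whatever Amplify created):

import io
import urllib.request
import zipfile

import boto3

FUNCTION_NAME = "my-amplify-function"  # placeholder: use the name Amplify generated

lambda_client = boto3.client("lambda")
# get_function returns a pre-signed URL to the deployed zip under Code.Location.
location = lambda_client.get_function(FunctionName=FUNCTION_NAME)["Code"]["Location"]

with urllib.request.urlopen(location) as response:
    bundle = zipfile.ZipFile(io.BytesIO(response.read()))

# For a Python Lambda, dependencies need to sit at the top level of the archive,
# next to index.py, so they are importable without a virtualenv.
for name in bundle.namelist():
    print(name)

If flask and the other packages are missing from the listing, the packaging step is the problem rather than the Lambda runtime.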

Pipfile with different indexes for each environment

I've run into a small problem with my Python/Django and pipenv project, specifically with managing package indexes per environment.
On my local machine I cannot install packages from https://pypi.org/simple; I have to use an Artifactory index specific to my company.
Locally I therefore install all my packages from this index, and the Pipfile.lock is generated against it as well.
The problem is that in my dev and production environments the Artifactory index no longer works (those machines are not on the company network), so I have to use PyPI there.
I can't find a way to use only Artifactory locally and only PyPI in the dev/production environments.
This is what my local Pipfile looks like; it does not work on dev/production.
[[source]]
url = "https://artifactory-xxxxx/artifactory/api/pypi/remote-pypi/simple"
verify_ssl = true
name = "artifactory-xxxxxxx"
[packages]
wagtail = ">=3.0.1"
django = "<4.1,>=4.0"
django-tailwind = "*"
wagtailmedia = "*"
wagtailcodeblock = "*"
wagtailfontawesome = "*"
mozilla-django-oidc = "*"
psycopg2-binary = "*"
gunicorn = "*"
[dev-packages]
psycopg2 = {version = "*", index = "artifactory-xxxx"}
[requires]
python_version = "3.9"

Automatically download all dependencies when mirroring a package

In my organization we maintain an internal mirrored Anaconda repository containing packages that our users requested. The purpose is to exclude certain packages that may pose a security risk, and all users in our organization connect to this internal Anaconda repository to download and install packages instead of the official Anaconda repo site. We have a script that runs regularly to update the repository using the conda-mirror command:
conda-mirror --config [config.yml file] --num-threads 1 --platform [platform] --temp-directory [directory] --upstream-channel [channel] --target-directory [directory]
The config.yml file is set up like this:
blacklist:
  - name: '*'
channel_alias: https://repo.continuum.io/pkgs/
channels:
  - https://conda.anaconda.org/conda-forge
  - free
  - main
  - msys2
  - r
repo-build:
  dependencies: true
platforms:
  - noarch
  - win-64
  - linux-64
root-dir: \\root-drive.net\repo
whitelist:
  - name: package1
So the logic of this config file is to blacklist all packages except the ones listed under whitelist. However, the problem I'm having is that if a user requests package x to be added to the repository and I add package x under the whitelist, conda-mirror only downloads package x and not the packages it depends on. I've checked the documentation on conda-mirror and its configuration file and can't find anything about automatically mirroring a package together with all of its dependencies. Is there a way to do this automatically?
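As far as I can tell conda-mirror itself has no such option, but the dependency list for each package is available in the channel's repodata.json, so one workaround is to expand the whitelist yourself before running the mirror script. A rough sketch (channel, platform and starting package are placeholders, and repodata.json for large channels can be several hundred MB):

import json
import urllib.request
from collections import deque

# Placeholders: adjust the channel, platform, and starting package as needed.
CHANNEL = "https://conda.anaconda.org/conda-forge"
PLATFORM = "linux-64"
START = "package1"

with urllib.request.urlopen(f"{CHANNEL}/{PLATFORM}/repodata.json") as resp:
    repodata = json.load(resp)

# Map package name -> dependency names, taking the last build seen for each name.
deps_by_name = {}
for meta in {**repodata.get("packages", {}), **repodata.get("packages.conda", {})}.values():
    deps_by_name[meta["name"]] = [d.split()[0] for d in meta.get("depends", [])]

# Walk the dependency graph breadth-first to collect every transitive dependency.
seen, queue = {START}, deque([START])
while queue:
    for dep in deps_by_name.get(queue.popleft(), []):
        if dep not in seen:
            seen.add(dep)
            queue.append(dep)

# Emit whitelist entries in the config.yml format.
for name in sorted(seen):
    print(f"- name: {name}")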

How do I configure pip.conf in AWS Elastic Beanstalk?

I need to deploy a Python application to AWS Elastic Beanstalk; however, it requires dependencies from our private PyPI index. How can I configure pip (as you would with ~/.pip/pip.conf) so that AWS can connect to our private index while deploying the application?
My last resort is to change each dependency in requirements.txt to -i URL dependency before deployment, but there must be a cleaner way to achieve this.
In .ebextensions/files.config add something like this:
files:
  "/opt/python/run/venv/pip.conf":
    mode: "000755"
    owner: root
    user: root
    content: |
      [global]
      find-links = <URL>
      trusted-host = <HOST>
      index-url = <URL>
Or whatever other configuration you'd like to set in your pip.conf. This will place the pip.conf file in the virtual environment of your application, which is activated before pip install -r requirements.txt is executed. Hopefully this helps!
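A quick way to confirm the configuration was picked up after deployment is to print pip's effective settings from inside the application's environment (for example from an eb ssh session); a tiny sketch:

import subprocess
import sys

# `pip config list` prints every setting pip resolved, so the index-url from the
# pip.conf written by .ebextensions should show up here if it is being read.
subprocess.run([sys.executable, "-m", "pip", "config", "list"], check=True)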

Running multiple uwsgi python versions

I'm trying to deploy django with uwsgi, and I think I lack understanding of how it all works. I have uwsgi running in emperor mode, and I'm trying to get the vassals to run in their own virtualenvs, with a different python version.
The emperor configuration:
[uwsgi]
socket = /run/uwsgi/uwsgi.socket
pidfile = /run/uwsgi/uwsgi.pid
emperor = /etc/uwsgi.d
emperor-tyrant = true
master = true
autoload = true
log-date = true
logto = /var/log/uwsgi/uwsgi-emperor.log
And the vassal:
uid=django
gid=django
virtualenv=/home/django/sites/mysite/venv/bin
chdir=/home/django/sites/mysite/site
module=mysite.uwsgi:application
socket=/tmp/uwsgi_mysite.sock
master=True
I'm seeing the following error in the emperor log:
Traceback (most recent call last):
  File "./mysite/uwsgi.py", line 11, in <module>
    import site
ImportError: No module named site
The virtualenv for my site is created as a python 3.4 pyvenv. The uwsgi is the system uwsgi (python2.6). I was under the impression that the emperor could be any python version, as the vassal would be launched with its own python and environment, launched by the master process. I now think this is wrong.
What I'd like to be doing is running the uwsgi master process with the system python, but the various vassals (applications) with their own python and their own libraries. Is this possible? Or am I going to have to run multiple emperors if I want to run multiple pythons? Kinda defeats the purpose of having virtual environments.
The "elegant" way is building the uWSGI python support as a plugin, and having a plugin for each python version:
(from uWSGI sources)
make PROFILE=nolang
(will build a uWSGI binary without language support)
PYTHON=python2.7 ./uwsgi --build-plugin "plugins/python python27"
will build the python27_plugin.so that you can load in vassals
PYTHON=python3 ./uwsgi --build-plugin "plugins/python python3"
will build the plugin for python3 and so on.
There are various ways to build uWSGI plugins; the one I am reporting is the safest one (it ensures the #ifdefs are honoured).
Having said that, having a uWSGI Emperor for each Python version is viable too. Remember that Emperors are stackable, so you can have a generic Emperor spawning one Emperor (as its vassal) for each Python version.
Pip install uWSGI
One option would be to simply install uWSGI with pip in your virtualenvs and start your services separately:
pip install uwsgi
~/.virtualenvs/venv-name/lib/pythonX.X/site-packages/uwsgi --ini path/to/ini-file
Install uWSGI from source and build python plugins
If you want a system-wide uWSGI build, you can build it from source and install plugins for multiple python versions. You'll need root privileges for this.
First you may want to install multiple system-wide python versions.
Make sure you have the required dependencies installed. For PCRE support, on a Debian-based distribution use:
apt install libpcre3 libpcre3-dev
Download and build the latest uWSGI source into /usr/local/src, replacing X.X.X.X below with the package version (e.g. 2.0.19.1):
wget http://projects.unbit.it/downloads/uwsgi-latest.tar.gz
tar vzxf uwsgi-latest.tar.gz
cd uwsgi-X.X.X.X/
make PROFILE=nolang
Symlink the versioned folder uwsgi-X.X.X.X to give it the generic name, uwsgi:
ln -s /usr/local/src/uwsgi-X.X.X.X /usr/local/src/uwsgi
Create a symlink to the build so it's on your PATH:
ln -s /usr/local/src/uwsgi/uwsgi /usr/local/bin
Build python plugins for the versions you need:
PYTHON=pythonX.X ./uwsgi --build-plugin "plugins/python pythonXX"
For example, for python3.8:
PYTHON=python3.8 ./uwsgi --build-plugin "plugins/python python38"
Create a plugin directory in an appropriate location:
mkdir -p /usr/local/lib/uwsgi/plugins/
Symlink the created plugins to this directory. For example, for python3.8:
ln -s /usr/local/src/uwsgi/python38_plugin.so /usr/local/lib/uwsgi/plugins
Then in your uWSGI configuration (project.ini) files, specify the plugin directory and the plugin:
plugin-dir = /usr/local/lib/uwsgi/plugins
plugin = python38
Make sure to create your virtualenvs with the same python version that you created the plugin with. For example if you created python38_plugin.so with python3.8 and you have plugin = python38 in your project.ini file, then an easy way to create a virtualenv with python3.8 is with:
python3.8 -m virtualenv path/to/project/virtualenv
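If there is any doubt about which interpreter a vassal actually ended up with, a throwaway diagnostic at the top of the WSGI module (mysite/uwsgi.py in the question above) makes it obvious; remove it once things work:

import logging
import sys

# Logs which interpreter and environment this vassal is running under.
# With plugin = python38 and a matching virtualenv, the version should start
# with 3.8 and sys.prefix should point inside the project's virtualenv.
logging.warning("running under python %s, prefix %s", sys.version.split()[0], sys.prefix)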
