Python conda deployment on a server

Let's say I have two projects that I develop on my personal machine. I use conda to manage my Python dependencies, and I created environments to manage these projects. When I'm done with development, I want to export them to a remote machine that will regularly run both projects at the same time. How should I manage this deployment?

After some research, I came up with this:
Clone your environments as described in conda's docs.
Export your environment file to the server along with your project.
Import the environment into the server's conda.
Create a bash script like this:
#!/bin/bash
# Activate the conda environment (newer conda versions use "conda activate" instead of "source activate")
source activate my_environment
python ~/my_project/src/code.py
Set up cron as usual, calling the previous bash script.
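For the export and import steps, the commands look roughly like this (a sketch: "my_environment" is the example name from above, and the cron schedule and script path are hypothetical):
# On the dev machine: write the environment to a file
conda env export --name my_environment > environment.yml
# On the server: recreate the environment from that file
conda env create --file environment.yml
# Cron entry (edit with "crontab -e"): run the wrapper script every day at 02:00
0 2 * * * /home/me/run_my_project.sh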

How to activate a venv in production

I have a backend app based on Node.js. The code is written in JavaScript, except for the 'scripts' folder, which is in Python.
I have some external libraries installed (pandas, matplotlib...) to execute those scripts, and I use a virtual environment (venv) so they work correctly.
However, I always need to run 'source venv/bin/activate' before executing them (when I am working locally).
The problem is in production.
Is there any way to keep the environment permanently activated in production, or is there some extra software for this? I host these files on a VPS at Hostinger, but in production I get errors in the scripts that use the installed libraries.
The missing 'activate' step might be the problem.
To make the comments an answer:
In production, you'll be running your app proper with a service manager such as systemd (that makes sure it stays running). You can direct the service manager to directly use the venv's Python, e.g. /home/app/venv/bin/python myapp.py; you don't need the activate script.
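For instance, a minimal systemd unit sketch (the user, paths, and app name are illustrative, not from the question):
[Unit]
Description=My Python backend (example)
After=network.target

[Service]
User=app
WorkingDirectory=/home/app
ExecStart=/home/app/venv/bin/python myapp.py
Restart=on-failure

[Install]
WantedBy=multi-user.target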
To have the virtualenv automatically activated for ad-hoc use on the production server, you can use a .bashrc file, e.g. /home/app/.bashrc that includes source ~/venv/bin/activate.
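To set that up, a one-liner like this should do (paths are the examples from above):
echo 'source ~/venv/bin/activate' >> /home/app/.bashrc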

What is a good way to setup a sandboxed container environment for development?

I want to develop on Linux in various languages (python, rust, ...) and will be installing packages through their respective tools (and also some infrastructure like redis, postgresql, ...).
I'd like to create a sandbox for each project:
shell access to run dev and system utilities (perf, htop, etc ...)
limit access to a few directories where the source code I'll be editing will live (so that I don't have to configure/run my editor inside each environment and don't lose any files if the container stops)
can only do outbound network requests to known package hosting domains (like pypi.org, github.com, etc ...).
can start servers listening on tcp sockets and access them from within the container without custom configuration
on occasion, allow some port pass-through for localhost only, so the ports can be accessed from another container or from the host system
I'm hoping there are existing tools for this, or some detailed tutorials. Some aspects, like the proper list of allowed domains, can be tedious to establish and maintain. So far my Google searches haven't yielded anything too promising besides starting from scratch with firejail/docker/lxc/...
I do not want to use VMs, so as not to tax system resources too much, as I may have many such environments.
Ideally something like:
dev-env-setup --name myapp --base-container python-dev --shell bash --code ~/coding/myapp
or
dev-env-setup --name myapp --base-container myapp-dev --code ~/coding/myapp --listen-ports 9999,8888,7777 --access-ports 111111,11123
Have you heard of asdf? It can install python, rust, postgres, redis and many other things. What's more, you can have per-project versions using a .tool-versions file. I use asdf a lot and would recommend it for what you describe.
You don't have to do any "dev setup" stuff either. Once you cd to your project directory, you will be using whatever version has been specified in .tool-versions via the asdf local command.
Start here: https://asdf-vm.com/#/core-manage-asdf-vm
If you need help using it, I can give you some good starter tips but even just running asdf on its own will give you all the info you need.
Note that asdf calls the things it installs "plugins". For example, to be able to install different postgres versions, you would do asdf plugin add postgres. Then you could install different versions of postgres with asdf install postgres 12.3 (for example). You can set a per-project version (which is saved in a .tool-versions file) by doing asdf local python 3.8.1. You can also set a default global version for any plugin (for when you are not in a project directory that has a .tool-versions file) by doing asdf global rust 1.43.0.
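Put together, the workflow sketched above looks something like this (the versions are just the examples from the text):
# Install a plugin, then a specific version of the tool
asdf plugin add postgres
asdf install postgres 12.3
# Pin a per-project version (writes to .tool-versions in the current directory)
asdf local python 3.8.1
# Set a global fallback for when no .tool-versions file applies
asdf global rust 1.43.0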

How to set up a local dev environment for Django

Python web development newbie question here. I'm coming from PHP/Laravel and there you have Homestead which is a pre-configured Vagrant box for local development. In a so-called Homestead file, you configure everything such as webserver, database or PHP version. Are there any similar pre-configured dev environments for Django?
I already googled, and there don't seem to be any official or widely-used Vagrant boxes for Django. The official Django tutorial even tells you how to install and set up Apache and your preferred database. This is a lot of work every time you want to create a new Django project, especially if those projects run in different production environments. All the other tutorials I've found just explain how to set up virtual environments with venv or the like. But that doesn't seem sufficient to me. What you obviously want is a dev environment that is as close as possible to your production environment, so you need some kind of virtual machine.
I'm a little bit confused right now. Do you just grab some plain Ubuntu (or other OS) Vagrant box and install everything yourself? Do you not use Vagrant at all, but something else? Did I miss something, and the Python web development workflow is completely different?
The typical local development in Django just uses the builtin web server and an SQLite database. The steps to get that up and running are:
Ensure you have the desired version of Python installed.
Create a virtual env to isolate libraries needed for your project from the rest of the system (this is optional but highly recommended; I'd actually recommend using Poetry).
Install Django, probably via pip.
Run manage.py runserver (and migrate the database and set up a superuser, yada yada).
That's pretty much it and sufficient for local development. Be aware that some differences exist between SQLite and Postgres, MySQL, etc., and if you hit a spot where the difference matters, you'll want to set up your target database as well and develop directly against it. That can happen in a Docker container if that makes sense for you. But there's little reason to put Django itself into a container during development, unless your project is especially complex and requires simulating conditions the builtin server can't.
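A condensed sketch of those steps ("mysite" is an illustrative project name):
python3 -m venv venv
source venv/bin/activate
pip install django
django-admin startproject mysite
cd mysite
python manage.py migrate          # creates the SQLite database
python manage.py createsuperuser
python manage.py runserver        # builtin dev server on http://127.0.0.1:8000/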
Does this help?
$ python3 -m venv my_env # create your virtual environment
$ source my_env/bin/activate # Any package you install will be inside this environment
$ pip install -r requirements.txt # can also install packages individually
$ deactivate # get out of the isolated environment
Here's the doc

Should I activate my Python virtual environment before running my app in upstart?

I am working through the process of installing and configuring the Superset application. (A Flask app that allows real-time slicing and analysis of business data.)
When it comes to the Python virtual environment, I have read a number of articles and how-to guides and understand the concept of how it allows you to install packages into the virtual environment to keep things neatly contained for my application.
Now that I am preparing this application for (internal) production use, do I need to be activating the virtual environment before launching gunicorn in my upstart script? Or is the virtual environment more just for development and installing/updating packages for my application? (In which case I can just launch gunicorn without the extra step of activating the virtualenv.)
You should activate a virtualenv on the production server the same way as you do on the development machine. It allows you to run multiple Python applications on the same machine in a controlled environment. No need to worry that an update of packages in one virtualenv will cause an issue in the other one.
If I may suggest something: I really enjoy using virtualenvwrapper to simplify the use of virtualenvs even more. It allows you to define hooks, e.g. preactivate, postactivate, predeactivate, and postdeactivate, using scripts in $VIRTUAL_ENV/bin/. It's a good place for setting up environment variables that your Python application can utilize.
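For example, a postactivate hook might look like this (the variable name and path are made up for illustration):
# $VIRTUAL_ENV/bin/postactivate -- runs each time this virtualenv is activated
export APP_CONFIG=/home/app/config/production.cfg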
And a good and simple tool for process control is supervisord.
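A minimal supervisord program sketch for the gunicorn setup discussed above (names and paths are illustrative); note that pointing command at the venv's own gunicorn removes the need for a separate activation step:
[program:myapp]
command=/home/app/venv/bin/gunicorn myapp:app
directory=/home/app
autostart=true
autorestart=true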

How can I use multiple Python virtual environments on the same server

How can I deploy and host multiple Python projects with different dependencies on the same server at the same time?
It's not true, of course, that only one virtualenv can be activated at once. Yes, only one can be active in a shell session at a time, but your sites are not deployed via shell sessions. Each WSGI process, for example, will create its own environment: so all you need to do is ensure that each WSGI script activates the correct virtualenv, as is (in the case of mod_wsgi at least) well documented.
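In mod_wsgi terms, that can look something like this Apache config sketch (site names and paths are hypothetical); each daemon process group is bound to its own virtualenv via python-home:
WSGIDaemonProcess site1 python-home=/srv/site1/venv
WSGIScriptAlias /site1 /srv/site1/app.wsgi process-group=site1
WSGIDaemonProcess site2 python-home=/srv/site2/venv
WSGIScriptAlias /site2 /srv/site2/app.wsgi process-group=site2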
Use virtualenv for Python. You can install any other version of Python, and any packages, in it if required.
