I have a number of environment variables on my server. Currently they live in .bashrc, in my virtualenv's postactivate file, and also in my various supervisor config files.
I'm wondering if it is possible to read either one of these files and set these environment variables in Fabric before running commands.
I'm basically calling manage.py commands like this:
/path/to/virtualenv/python /path/to/manage.py command --settings=proj.settings.prod
I learned that I can also put these variables in .bash_profile, but that's just one more place I don't want to have to maintain.
Any tips on how I can do this in Fabric - and possibly even consolidate them into one place?
I don't know if this is the best solution, but what I have done for now is move my export commands for environment variables into .bash_profile.
I removed them from .bashrc & my virtualenv postactivate file and replaced them with:
source /path/to/.bash_profile
Now I have them all in one place and the environment variables are set when I log in, when I activate my virtualenv and when I use fabric.
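If you'd rather not rely on login-shell behaviour, here is a minimal sketch of sourcing the file explicitly from Fabric (assuming Fabric 1.x, where prefix is available; the task name is made up):

from fabric.api import prefix, run, task

@task
def manage(command):
    # every run() inside this block is prefixed with "source ... && "
    with prefix('source /path/to/.bash_profile'):
        run('/path/to/virtualenv/python /path/to/manage.py %s'
            ' --settings=proj.settings.prod' % command)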
I want to be able to set up environment variables in my virtual environment so that they are available in my code when I activate the virtual environment. I make my virtual environments with venv. I'm working on a Windows machine with VS Code.
What I already tried, but didn't work:
Adding the vars to end of the activate.bat file like this:
set CLIENT_SECRET="MYSECRET"
Adding the vars to the end of the Activate.ps1 file like this:
$CLIENT_SECRET="MYSECRET"
Adding the vars to the end of the activate file like this:
export CLIENT_SECRET="MYSECRET"
I found a lot related to my topic, but nothing that works for me. What should I do?
If you want to set up your development environment in VS Code, you can simply add a .env file with all secrets defined in the project root directory. More details in the docs.
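A minimal sketch, using the variable from your question (the VS Code Python extension reads the file pointed to by its python.envFile setting, which defaults to the .env in the workspace root). The .env in the project root:

CLIENT_SECRET=MYSECRET

and, only if the file lives somewhere else, point the extension at it in settings.json:

"python.envFile": "${workspaceFolder}/.env"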
Your first solution
set CLIENT_SECRET=MYSECRET
in activate.bat should work when using Command Prompt as the default shell in the terminal.
You can omit the quotes unless they are part of your environment variable.
You can verify that the environment variable is set with
echo %CLIENT_SECRET%
in the terminal in VS Code.
Go to the virtual environment folder, enter the Scripts folder, and activate from cmd.
Use set in the CMD terminal.
Use $env: in PowerShell.
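A short sketch, assuming a venv named env in the current directory and the CLIENT_SECRET variable from the question. In cmd:

env\Scripts\activate.bat
set CLIENT_SECRET=MYSECRET
echo %CLIENT_SECRET%

In PowerShell:

.\env\Scripts\Activate.ps1
$env:CLIENT_SECRET = "MYSECRET"
echo $env:CLIENT_SECRET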
I recently started a project based on Flask/Python. I have the environment set up now - but I have never created a project step by step in Python before; I had only written little scripts and small apps while learning the language.
I have installed and set up all the dependencies in the same folder, as you can see here:
In 'env' I have created the virtual environment.
'TheSocial' is, and will remain, the application itself.
My questions are:
1) If 'env' is moved, or if it wasn't created inside 'project_py', can my application in 'TheSocial' still run inside the virtualenv or not?
2) Are there any standards you need to respect when creating a project structure?
I have also seen an already answered Stack Overflow question, but I want to find answers to question 1).
These questions are asked in order to understand how real projects are structured and designed from scratch.
1) Activating a virtualenv changes your system's $PATH:
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/MacGPG2/bin:/Users/wgonczaronek/bin:/usr/local/sbin
$ source venv/bin/activate
$ echo $PATH
/Users/wgonczaronek/Projects/django & flask security/CSP/csp-flask/venv/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/MacGPG2/bin:/Users/wgonczaronek/bin:/usr/local/sbin
Inside activate, the following lines are responsible for this:
VIRTUAL_ENV="/Users/wgonczaronek/Projects/django & flask security/CSP/csp-flask/venv"
export VIRTUAL_ENV
_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/bin:$PATH"
This means that when you move your env, these hard-coded paths will no longer be valid.
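The usual fix is not to move a virtualenv but to recreate it at the new location - a minimal sketch, assuming your dependencies are pinned in a requirements.txt:

deactivate                        # if the old env is still active
python -m venv /new/location/env
source /new/location/env/bin/activate
pip install -r requirements.txt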
2) Just be consistent. In my projects I like to "encapsulate" the git repository in a directory of the same name, where I also keep my virtual env and other things (I have a special file FFF.txt, which stands for Frequently Forgotten Features, where I save all project-related info for personal use :D). This allows me to clean up after a project with one command when I want to get rid of it, and I am not bothered by naming conventions for my environment - I do not have to edit .gitignore. Some people prefer virtualenvwrapper or pyenv, which keep all virtual envs outside the project, though. Just experiment and find out what suits you best.
I am working with Django and virtualenvwrapper. My objective is to remove all sensitive information from the settings file, per the 12-factor app suggestions (http://12factor.net), and ultimately deploy to Heroku. To achieve this when testing locally, I have created a .env file with the values of variables like SECRET_KEY. I went to my virtualenv directory and added the following line to the postactivate script:
source .env
Whenever I start my virtualenv for a project (i.e. workon project_name), the environment variables from .env are available if I echo them from the terminal:
$ echo $SECRET_KEY
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
However, when I try to access those variables from Python, they are unavailable:
>>> import os
>>> os.environ.get('SECRET_KEY')
>>>
What is the correct way for python to access the environment variables stored in a .env file?
.env file:
WEB_CONCURRENCY=2
SECRET_KEY='XXXXXXXXXXXX'
DEBUG=True
I think your problem is that you are defining it in your current shell by doing SECRET_KEY=xxxxxxx, but when you open a Python shell it runs in a subprocess, and the variable is lost in that shell. export makes the variable available to subprocesses as well.
You should have:
export SECRET_KEY=xxxxxxxx
In your .env file to make it work.
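So the .env file from the question would become:

export WEB_CONCURRENCY=2
export SECRET_KEY='XXXXXXXXXXXX'
export DEBUG=True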
Edit:
From what I read in your links, that's just a normal Linux shell environment variable. But Django needs SECRET_KEY as a Python constant in the settings. Linux environment variables and Python variables are two different things, so defining an env variable SECRET_KEY does not by itself make Django recognize settings.SECRET_KEY. You should still consider using a separate settings file, which is what is mostly recommended.
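That said, the usual 12-factor bridge is to read the environment variable inside the settings module - a minimal sketch (names taken from the question):

# settings.py
import os

SECRET_KEY = os.environ['SECRET_KEY']           # fail loudly if it is missing
DEBUG = os.environ.get('DEBUG', '') == 'True'   # env vars are always strings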
I'm developing several different Python packages with my team. Say that we have ~/src/pkg1, ~/src/pkg2, and ~/src/pkg3. How do we add these to PYTHONPATH without each of us having to manage dot-files?
We could add, say, ~/src/site/sitecustomize.py, which is added once to PYTHONPATH - but is it "guaranteed" that there won't be a global sitecustomize.py?
virtualenv seems like the wrong solution, because we don't want to have to build/install the packages after each change.
You have a lot of options...
1) Why not dotfiles?
You could centralize the management of dotfiles with a central repository, optionally under version control. I use a Dropbox folder named dotfiles, but many people use GitHub or similar services to manage their dotfiles.
If you do that, you guarantee that everyone on your development team shares the same dotfiles. So you could define a dotfile, say .python_proys, which exports the appropriate PATH and PYTHONPATH, and which by convention every developer sources in their environment.
Suppose pkg1 is only a script, pkg2 is both a script and a module, and pkg3 is only a module. Then .python_proys would look like:
export PATH=$PATH:~/src/pkg1:~/src/pkg2
export PYTHONPATH=$PYTHONPATH:~/src/pkg2:~/src/pkg3
Then every developer has to source this dotfile somewhere, by convention. Each one can do it the way they like: one could source the dotfile manually before using the packages, another could source it from .bashrc or .zshenv or whatever dotfile applies to them.
The idea is to have one centralized point of coordination and only one dotfile to maintain: the .python_proys dotfile.
2) Use symlinks
You could create directories in your home, like ~/dist (for modules) and ~/bin (for scripts), set symbolic links there to the specific packages in ~/src/, and have every developer use this PATH and PYTHONPATH setting:
export PATH=$PATH:~/bin
export PYTHONPATH=$PYTHONPATH:~/dist
So, using the same example as in Why not dotfiles?, where pkg1 is only a script, pkg2 is both a script and a module, and pkg3 is only a module, you could symlink like this:
cd ~/bin
ln -s ../src/pkg1
ln -s ../src/pkg2
cd ~/dist
ln -s ../src/pkg2
ln -s ../src/pkg3
Those commands could be run automatically by a script. You could write a bootstrap script, or simply copy and paste the commands into a shell script; see the sketch below. Either way, maintain it and centralize it the same way as explained before.
This way the dotfiles will not change; only the script defining the symlinks will.
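A minimal sketch of such a bootstrap script (the file name bootstrap.sh is made up; the package layout is the one from the example above):

#!/bin/sh
# bootstrap.sh - recreate the symlink layout described above
mkdir -p ~/bin ~/dist
# scripts go on PATH
ln -sfn ~/src/pkg1 ~/bin/pkg1
ln -sfn ~/src/pkg2 ~/bin/pkg2
# modules go on PYTHONPATH
ln -sfn ~/src/pkg2 ~/dist/pkg2
ln -sfn ~/src/pkg3 ~/dist/pkg3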
I suggest looking into creating a name.pth path configuration file, as outlined in the site module's documentation. These files can hold multiple paths that will be added to sys.path, and they are easy to edit since they're simply text files.
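For example (a sketch; the file name is arbitrary, but the file must live in a site-packages directory and the paths must be absolute - /home/me is a placeholder):

# mypkgs.pth - one directory per line, lines starting with # are skipped
/home/me/src/pkg2
/home/me/src/pkg3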
First, you don't add a Python module to PYTHONPATH - you add the directory containing it.
If you want your whole team to work on some Python package, you can install the package as editable with the -e option in a virtual environment.
This way you can continue development and you don't have to mess with the PYTHONPATH. Keep in mind that the current working directory is included in sys.path when you run the interpreter, so unless you have an external requirement, you don't need a virtual environment - just the source in your working directory.
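For example, each developer could run something like this (a sketch, assuming each package has a setup.py so it is installable):

python -m venv ~/envs/team
source ~/envs/team/bin/activate
pip install -e ~/src/pkg1 -e ~/src/pkg2 -e ~/src/pkg3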
Your workflow would be the following:
Create virtual environment
Create a .pth file to modify your sys.path.
Work as usual.
This would be my preferred option. If you have a standard layout across your projects, you can distribute a customized bootstrap script which will create the environment, and then adjust the PYTHONPATH automatically. Share this bootstrap script across the team, or add it as part of the source repository.
I assume that your other modules are at a predictable path (relative to $0).
We can compute the absolute path of the directory containing $0:
import os, sys
script_dir = os.path.dirname(os.path.realpath(sys.argv[0]))
then arrive at your module path (something being the path of your modules relative to the script) and append it:
sys.path.append(os.path.join(script_dir, something))
I would like to use conda to create different environments, each with a different $PYTHONPATH. Currently, I have to change the environment variables each time in my .bashrc. Is there a simple way of creating multiple Python environments via conda, such that I can seamlessly switch (via source activate) and have the corresponding $PYTHONPATHs update automatically?
You can specify the PYTHONPATH before you execute any script, which is easier than changing your .bashrc.
For example, to put the current working directory on the path before executing any script, you can do this:
PYTHONPATH=`pwd`: python
If you don't want to overwrite the entire path, but just append to it:
PYTHONPATH=`pwd`:$PYTHONPATH python
$PYTHONPATH can be changed when a conda environment is activated or deactivated, the same way it can be done with other environment variables.
The following section in the conda documentation describes how to specify this behaviour: Saved environment variables.
For example, you can add the following line to the activation script
export PYTHONPATH="What_you_want_to_add:$PYTHONPATH"
and restore the previous value in a matching deactivation script, as sketched below.
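A minimal sketch following the pattern from that docs section (the env_vars.sh file names are the conventional ones; the path to add is a placeholder). With the environment active:

mkdir -p $CONDA_PREFIX/etc/conda/activate.d $CONDA_PREFIX/etc/conda/deactivate.d

Then in $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh:

export OLD_PYTHONPATH="$PYTHONPATH"
export PYTHONPATH="/path/to/add:$PYTHONPATH"

and in $CONDA_PREFIX/etc/conda/deactivate.d/env_vars.sh:

export PYTHONPATH="$OLD_PYTHONPATH"
unset OLD_PYTHONPATH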