How to run django server with ACTIVATED virtualenv using batch file (.bat) - python

I found this post useful on how to write a batch file to automate starting the Django web server.
But the problem is that no virtualenv gets activated. How can I activate it before manage.py runserver inside the script?
I would like to run this server with the virtualenv activated via a batch file.

Found my solution by writing this:
@echo off
cmd /k "cd /d C:\Users\[user]\path\to\your\env\scripts & activate & cd /d C:\Users\[user]\path\to\your\env\[projectname] & python manage.py runserver"

Call the activate.bat script in your batch file, before you run manage.py:
CALL \path\to\env\Scripts\activate.bat
python manage.py runserver

Try \path\to\env\Scripts\activate
and have a look at the virtualenv docs.

If your virtualenv was created via virtualenvwrapper:
workon yourenvname & python manage.py runserver

Related

Start a Python environment using a shell script and execute a command

I have an Ubuntu server, and my Python virtualenv is under the /var/www/abc/python folder,
and my code is under the /var/www/abc/code folder.
Now I want to start the virtualenv before I execute my code using a shell script.
Here is the shell file runshell.sh, but it doesn't start the virtual environment.
source /var/www/abc/python/bin/activate
python /var/www/abc/code/app.py
Same as you would do in a shell:
#!/bin/bash
source /var/www/abc/python/bin/activate
python /var/www/abc/code/app.py
Or you can run python directly from the venv:
#!/bin/bash
/var/www/abc/python/bin/python /var/www/abc/code/app.py

manage.py command in crontab not working

I have created an executable .sh script which contains the code to run a Django management command.
cron.sh
#!/bin/sh
. /path/to/env/activate
cd /path/to/project
/path/to/env/bin/python manage.py some_command
I can confirm this script and the manage.py command are working by executing the script directly in a terminal:
$ /path/to/cron.sh
When I run the same thing via crontab it is not working as expected.
What am I doing wrong? I can confirm there is nothing wrong with crontab itself; it executes the cron.sh file, but /path/to/env/bin/python manage.py some_command is not working as expected.
The cron log also shows:
CRON[14768]: (root) CMD /path/to/cron.sh > /dev/null 2>&1
I am using the Bitnami Django AMI (Ubuntu 14.04.5 LTS).
Update
After removing > /dev/null I am now getting this error:
"Cannot locate wrapped file"
It seems that it is a PATH problem. I do not know if Django uses specific paths that must be set, but AFAIK the crontab PATH is really limited for security reasons. Just to check whether that is the problem, you could run the following in a shell terminal:
echo $PATH
You will get a complete PATH for instance:
/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
In your crontab, put it above your code:
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
Tell me if this works. If it does, try to trim the provided PATH, or even better, use absolute paths in your code.
I have to say that I don't know if you can perform a cd in the cron like this. I always used absolute paths or cd /some/dir && /path/to/script args.
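Putting both points together, a minimal crontab sketch could look like the following (the PATH value, the five-minute schedule, and the log file location are illustrative assumptions only; copy your own echo $PATH output and adjust the paths):
# Give cron an explicit PATH (copied from the output of echo $PATH in a normal shell)
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# cd into the project first, then call the script by absolute path,
# logging to a file instead of /dev/null so errors stay visible
*/5 * * * * cd /path/to/project && /path/to/cron.sh >> /var/log/cron_some_command.log 2>&1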
P.S: I cannot make comments yet, for this reason I put it in an answer.
The problem is that you're not using the script that Bitnami uses to load all the environment variables (/opt/bitnami/scripts/setenv.sh).
I would try using this script:
#!/bin/sh
. /opt/bitnami/scripts/setenv.sh
. /path/to/env/activate
cd /path/to/project
/path/to/env/bin/python manage.py some_command

Running Django migrations when deploying to Elastic Beanstalk

I have my Django app set up on Elastic Beanstalk, and I recently made a change to the DB that I would like to have applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to have more control over the migrations. For example, I only want a migration to run when necessary, but from my understanding the container will run the migration on every deploy, assuming the command is still listed in the config file. Also, on occasion, I will be given options during a migration such as:
Any objects related to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
How do I set up the container command to respond to this with a yes during the deployment phase?
This is my current config file
container_commands:
  01_migrate:
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations'
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
I'm not sure there is a specific way to answer yes or no, but you can append --noinput to your container command. Use the --noinput option to suppress all user prompting, such as “Are you sure?” confirmation messages.
try
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput'
OR..
You can ssh into your Elastic Beanstalk instance and run your command manually.
Then you'll have more control over the migrations.
Install awsebcli with pip install awsebcli
Type eb ssh Your EnvironmentName
Navigate to your eb instance app directory with:
sudo -s
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
cd /opt/python/current/app
then run your command.
./manage.py migrate
I hope this helps
Aside from the automatic migration that you can add to the deploy script (which runs every time you update the environment, and may not be desirable if you have long-running migrations or other Django management commands), you can ssh into an EB instance to run migrations manually.
Here is how to manually run migration (and any other Django management commands) while working with Amazon Linux 2 (Python 3.7, 3.8) created by Elastic Beanstalk:
First, from the EB CLI, run eb ssh to connect to an instance.
The virtual environment can be activated with:
source /var/app/venv/*/bin/activate
manage.py can be run with:
python3 /var/app/current/manage.py
Now the only tricky bit is getting Elastic Beanstalk's environment variables. You can access them via /opt/elasticbeanstalk/bin/get-config. I'm not super familiar with bash scripting, but here is a little script that I use to get and set environment variables; maybe someone can improve it to make it less hard-coded:
#! /bin/bash
export DJANGO_SECRET_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k DJANGO_SECRET_KEY)
...
More info regarding the Amazon Linux 2 platform script tools: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/custom-platforms-scripts.html
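As a usage example, here is a minimal sketch of a manual-migration helper built from the same pieces (DATABASE_URL is only an assumed example variable; export whatever settings your app actually reads):
#!/bin/bash
# Activate the EB-managed virtualenv (Amazon Linux 2 path)
source /var/app/venv/*/bin/activate
# Pull the required environment variables out of get-config
export DJANGO_SECRET_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k DJANGO_SECRET_KEY)
export DATABASE_URL=$(/opt/elasticbeanstalk/bin/get-config environment -k DATABASE_URL)  # assumed example variable
# Run the management command against the current deployment
python3 /var/app/current/manage.py migrate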
Make sure that the same settings are used when migrating and running!
Thus I would recommend you change this kind of code in django.config
container_commands:
  01_migrate:
    command: "source /opt/python/run/venv/bin/activate && python manage.py migrate"
    leader_only: true
to:
container_commands:
  01_migrate:
    command: "django-admin migrate"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: fund.productionSettings
as recommended here. This will help you avoid issues with wrong settings used.
More on manage.py vs. django-admin.py.
The django-admin method was not working for me as it was not configured properly. You can also use python manage.py migrate in
.ebextensions/django.config
container_commands:
  01_migrate:
    command: "python manage.py migrate"
    leader_only: true
In reference to Oscar Chen's answer, you can set environment variables using the EB CLI with
eb setenv key1=value1 key2=value2 ...etc
The trick is that the full output of container_commands is in /var/log/cfn-init-cmd.log (Amazon Linux 2 Elastic Beanstalk released November 2020).
To view this you would run:
eb ssh [environment-name]
sudo tail -n 50 -f /var/log/cfn-init-cmd.log
This doesn't seem to be documented anywhere obvious and it's not displayed by eb logs; I found it by hunting around in /var/log.
The Django example management command django-admin.py migrate did not work for me. Instead I had to use something like:
01_migrate:
  command: "$PYTHONPATH/python manage.py migrate"
  leader_only: true
02_collectstatic:
  command: "$PYTHONPATH/python manage.py collectstatic --noinput --verbosity=0 --clear"
To see the values of your environment variables at deploy time, you can create a debug command like:
03_debug:
  command: "env"
You can see most of these environment variables with eb ssh; sudo cat /opt/elasticbeanstalk/deployment/env, but there seem to be some subtle differences at deploy time, hence using env above to be sure.
Here you'll see that $PYTHONPATH is being used in a non-typical way, pointing to the virtualenv's bin directory, not the site-packages directory.
This answer looks like it will work for you if you just want to send "yes" to a few prompts.
You might also consider the --noinput flag so that your config looks like:
container_commands:
  01_migrate:
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations'
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput'
This takes the default setting, which is "no".
It also appears that there's an open issue/fix to solve this problem a better way.

Run manage.py from AWS EB Linux instance

How to run manage.py from AWS EB (Elastic Beanstalk) Linux instance?
If I run it from '/opt/python/current/app', it shows the below exception.
Traceback (most recent call last):
File "./manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
I think it's related to virtualenv. Any hints?
Here is how to run manage.py from an AWS Elastic Beanstalk AMI:
SSH login to Linux (eb ssh)
(optional: you may need to run sudo su - to have proper permissions)
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
cd /opt/python/current/app
python manage.py <commands>
Or, you can run command as like the below:
cd /opt/python/current/app
/opt/python/run/venv/bin/python manage.py <command>
With the new version of the Python platform, the paths seem to have changed.
The app is in /var/app/current
The virtual environment is in /var/app/venv/[KEY]
So the instructions are:
SSH to the machine using eb ssh
Check the path of your environment with ls /var/app/venv/. The only folder should be the [KEY] for the next step
Activate the environment with source /var/app/venv/[KEY]/bin/activate
Execute the command python3 /var/app/current/manage.py <command>
Of course Amazon can change it anytime.
TL;DR
This answer assumes you have installed EB CLI. Follow these steps:
Connect to your running instance using ssh.
eb ssh <environment-name>
Once you are inside your environment, load the environment variables (this is important for database configuration)
. /opt/python/current/env
If you wish you can see the environment variables using printenv.
Activate your virtual environment
source /opt/python/run/venv/bin/activate
Navigate to your project directory (this will depend on your latest deployment, so use the number of your latest deployment instead of XX)
cd /opt/python/bundle/XX/app/
Run the command you wish:
python manage.py <command_name>
Running example
Assuming that your environment name is my-env, your latest deployment number is 13, and you want to run the shell command:
eb ssh my-env # 1
. /opt/python/current/env # 2
source /opt/python/run/venv/bin/activate # 3
cd /opt/python/bundle/13/app/ # 4
python manage.py shell # 5
As of February 2022 the solution is as follows:
$ eb ssh
$ sudo su -
$ export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
$ source /var/app/venv/*/bin/activate
$ python3 /var/app/current/manage.py <command name>
$ export $(cat /opt/elasticbeanstalk/deployment/env | xargs) is needed to import your environment variables if you have a database connection (most likely you will)

Trouble activating virtualenv on server via Fabric

I am trying to run some Django management commands via Fabric on my staging server.
The problem is that Fabric does not seem to be able to activate the virtualenv, and is thus using the system python/libs when executing the commands.
On the server the Django app is run using a virtualenv (no, I don't use virtualenvwrapper yet...)
Using Fabric (1.0.1) a command might look like this when run from my box:
The fabfile method:
def collectstatic():
    require('settings', provided_by=[production, staging])
    with settings(warn_only=True):
        run('source %(env_path)s/bin/activate && python %(repo_path)s/%(project_name)s/configs/%(settings)s/manage.py collectstatic --noinput -v0' % env)
The output:
$ fab staging master collectstatic
[myserver.no] Executing task 'master'
[myserver.no] Executing task 'collectstatic'
[myserver.no] run: source /home/newsapps/sites/mysite/env/bin/activate && python /home/newsapps/sites/mysite/repository/mysite/configs/staging/manage.py collectstatic --noinput -v0
[myserver.no] Login password:
[myserver.no] out: Unknown command: 'collectstatic'
[myserver.no] out: Type 'manage.py help' for usage.
I know of course that the Django command collectstatic does not exist in versions prior to 1.3, which leads me to think that the system python (which has Django 1.2) is being used.
My fabfile/project layout is based on the great fabfile of the Tribapps guys
So I created a Fabric method to test the python version:
def pythonver():
    require('settings', provided_by=[production, staging])
    with settings(warn_only=True):
        run('source %(env_path)s/bin/activate && echo "import sys; print sys.path" | python ' % env)
When run it gives the following output:
$ fab staging master pythonver
[myserver.no] Executing task 'master'
[myserver.no] Executing task 'pythonver'
[myserver.no] run: source /home/newsapps/sites/mysite/env/bin/activate && echo "import sys; print sys.path" | python
[myserver.no] Login password:
[myserver.no] out: ['', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/dist-packages', '/usr/lib/pymodules/python2.6', '/usr/lib/pymodules/python2.6/gtk-2.0',
As you can see, it uses the system python and not my virtualenv located in /home/newsapps/sites/mysite/env
But if I run this command directly on the server
source /home/newsapps/sites/mysite/env/bin/activate && echo "import sys; print sys.path" | python
.. then it outputs the right paths from the virtualenv
What am I doing wrong since the commands are not run with the python from my virtualenv using Fabric?
You should call the python binary from your virtualenv's bin directory; then you can be sure it uses the virtualenv's version of python.
/home/newsapps/sites/mysite/env/bin/python /home/newsapps/sites/mysite/repository/mysite/configs/staging/manage.py collectstatic --noinput -v0
I wouldn't bother with activating the virtualenv, just give the full path to the virtualenv's python interpreter. That will then use the correct PYTHONPATH, etc.
I had the same problem. Couldn't solve it the easy way. So I just used the full path to the python bin file inside the virtualenv. I'm not a pro in Python, but I guess it's the same thing in the end.
It goes something like this in my fab file:
PYTHON = '/home/dudus/.virtualenvs/pai/bin/python'
PIP = '/home/dudus/.virtualenvs/pai/bin/pip'

def update_db():
    with cd(REMOTE_DIR + 'application/'):
        run('%s ./manage.py syncdb --settings="%s"' %
            (PYTHON, SETTINGS))  # syncdb
        run('%s ./manage.py migrate --settings="%s"' %
            (PYTHON, SETTINGS))  # south migrate
This will work perfectly :)
from __future__ import with_statement
from fabric.api import *
from contextlib import contextmanager as _contextmanager

env.hosts = ['servername']
env.user = 'username'
env.directory = '/path/to/virtualenvs/project'
env.activate = 'source /path/to/virtualenvs/project/bin/activate'

@_contextmanager
def virtualenv():
    with cd(env.directory):
        with prefix(env.activate):
            yield

def deploy():
    with virtualenv():
        run('pip freeze')
This approach worked for me; you can apply it too.
from fabric.api import run
# ... other code...
def install_pip_requirements():
    # Run the whole chain inside one login shell so that the activated
    # virtualenv applies to the pip install step as well
    run("/bin/bash -l -c 'source venv/bin/activate "
        "&& pip install -r requirements.txt "
        "&& deactivate'")
Assuming venv is your virtualenv directory, add this method wherever appropriate.
