Run manage.py from AWS EB Linux instance - python

How to run manage.py from AWS EB (Elastic Beanstalk) Linux instance?
If I run it from '/opt/python/current/app', it shows the below exception.
Traceback (most recent call last):
File "./manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
I think it's related with virtualenv. Any hints?

How to run manage.py from AWS Elastic Beanstalk AMI.
SSH login to Linux (eb ssh)
(Optional: you may need to run sudo su - to get the proper permissions.)
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
cd /opt/python/current/app
python manage.py <commands>
Or, you can run the command like this:
cd /opt/python/current/app
/opt/python/run/venv/bin/python manage.py <command>
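Note that this second variant skips the source /opt/python/current/env step, so commands that rely on environment variables (the database settings, for example) may fail; if that matters, the whole sequence can be chained into one line (a sketch, with check standing in for whatever command you need):
source /opt/python/run/venv/bin/activate && source /opt/python/current/env && cd /opt/python/current/app && python manage.py check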

With the newer platform versions (Amazon Linux 2), the paths seem to have changed.
The app is in /var/app/current
The virtual environment is in /var/app/venv/[KEY]
So the instructions are:
SSH to the machine using eb ssh
Check the path of your environment with ls /var/app/venv/. The only folder there is the [KEY] for the next step.
Activate the environment with source /var/app/venv/[KEY]/bin/activate
Execute the command python3 /var/app/current/manage.py <command>
Of course Amazon can change it anytime.
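If you would rather not look up the [KEY] by hand, a wildcard works because there is only one folder under /var/app/venv/ (the February 2022 answer further down uses the same trick); a sketch, with check again as a placeholder command:
source /var/app/venv/*/bin/activate
python3 /var/app/current/manage.py check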

TL;DR
This answer assumes you have installed EB CLI. Follow these steps:
Connect to your running instance using ssh.
eb ssh <environment-name>
Once you are inside your environment, load the environment variables (this is important for database configuration)
. /opt/python/current/env
If you wish you can see the environment variables using printenv.
Activate your virtual environment
source /opt/python/run/venv/bin/activate
Navigate to your project directory (this will depend on your latest deployment, so use the number of your latest deployment instead of XX)
cd /opt/python/bundle/XX/app/
Run the command you wish:
python manage.py <command_name>
Running example
Assuming that your environment name is my-env, your latest deployment number is 13, and you want to run the shell command:
eb ssh my-env # 1
. /opt/python/current/env # 2
source /opt/python/run/venv/bin/activate # 3
cd /opt/python/bundle/13/app/ # 4
python manage.py shell # 5
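If you are not sure which deployment number is the latest, it may help to check where /opt/python/current points, since on this platform it is typically a symlink into /opt/python/bundle/ (a hedged tip, not part of the original answer):
ls /opt/python/bundle/                  # lists the deployed bundle numbers
readlink -f /opt/python/current/app     # usually resolves to /opt/python/bundle/XX/app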

As of February 2022 the solution is as follows:
$ eb ssh
$ sudo su -
$ export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
$ source /var/app/venv/*/bin/activate
$ python3 /var/app/current/manage.py <command name>
The export $(cat /opt/elasticbeanstalk/deployment/env | xargs) step is needed to load your environment variables if you have a database connection (most likely you will).
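If you find yourself typing those lines on every SSH session, they can be collected into a small wrapper script kept on the instance (a sketch; manage_eb.sh is a made-up name, the paths are the current Amazon Linux 2 ones, and it should be run after sudo su - as above):
#!/bin/bash
# manage_eb.sh -- hypothetical helper: run a Django management command with EB's env vars loaded
set -e
export $(cat /opt/elasticbeanstalk/deployment/env | xargs)   # load EB environment variables (simple values only)
source /var/app/venv/*/bin/activate                          # activate the platform virtualenv
python3 /var/app/current/manage.py "$@"                      # pass through any command and its arguments
Then bash manage_eb.sh shell, bash manage_eb.sh migrate, and so on each load the environment and virtualenv first.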

Related

Run bash script from pydroid inbuilt terminal. Install miniconda on Android. Access sdcard from UserLAnd

chd.sh
#! /bin/bash
cd django/hellodjango
exec bash
python manage.py runserver
chd.py
# a=`python chd.py`;cd $a
import os
new_dir = "django/hellodjango"
os.chdir(new_dir)
are the two ways I have tried.
Also, on terminal I have tried,
. chd.sh
./chd.sh
. ./chd.sh
I have also tried assigning it to a variable and then running it in the terminal, but with no success.
I have spent over 4 hours trying multiple methods given on stackoverflow.com, but no success yet.
The only thing that has worked yet is,
alias mycd='cd django/hellodjango'
But I will have to copy-paste it every time.
alias myrun = `cd django/hellodjango && python manage.py runserver`
And,
alias myrun = `cd django/hellodjango; python manage.py runserver`
doesn't work.
This is just a sample; there are many Django commands that I have to use repeatedly. I appreciate it if you have read all this way.
If you know the link where this is discussed, please attach the link, as I was not able to find after hours of search.
Edit:
/storage/emulated/0 $
This is what the prompt appears like.
/storage/emulated/0/django/hellodjango
This is the path.
/storage/emulated/0 $ cd django/hellodjango
/storage/emulated/0/django/hellodjango $ python manage.py runserver
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
July 25, 2020 - 19:08:42
Django version 3.0.7, using settings 'hellodjango.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
Run individually, the commands work fine.
Edit:
/storage/emulated/0 $ chmod u+x chd.sh
/storage/emulated/0 $ chmod u+x rn.sh
/storage/emulated/0 $ ./chd.sh
./chd.sh: cd: line 2: can't cd t: No such file or directory
/storage/emulated/0 $ chmod u+x chd.py
/storage/emulated/0 $ a=`python chd.py`;cd $a
~/data/ru.iiec.pydroid3/app_HOME $
Edit:
/data/user/0/tech.ula/files/support/dbclient: Caution, skipping hostkey check for localhost
subham@localhost's password:
subham@localhost:~$ ls
subham@localhost:~$ cd
subham@localhost:~$ pwd
/home/subham
subham@localhost:~$ pkg install miniconda
-bash: pkg: command not found
subham@localhost:~$ apt install miniconda
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package miniconda
subham@localhost:~$
subham@localhost:~$ cd ..
subham@localhost:/home$ cd ..
subham@localhost:/$ ls
bin dev host-rootfs mnt root srv sys var boot etc lib opt run storage tmp data home media proc sbin support usr
subham@localhost:/$ cd ..
subham@localhost:/$ cd sys
subham@localhost:/sys$ ls
ls: cannot open directory '.': Permission denied
subham@localhost:/sys$ cd..
-bash: cd..: command not found
subham@localhost:/sys$ cd ..
subham@localhost:/$ cd storage
subham@localhost:/storage$ ls
internal
subham@localhost:/storage$ cd internal
subham@localhost:/storage/internal$ ls
subham@localhost:/storage/internal$ ls -l
total 0
subham@localhost:/storage/internal$ cd 0
-bash: cd: 0: No such file or directory
subham@localhost:/storage/internal$
subham@localhost:/$ chmod -R 777 /host-rootfs
chmod: changing permissions of '/host-rootfs': Read-only file system
chmod: cannot read directory '/host-rootfs': Permission denied
subham@localhost:/$
https://github.com/CypherpunkArmory/UserLAnd/issues/46
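A couple of things stand out in the attempts above, stated with reasonable confidence about bash itself: an alias must be defined with no spaces around the = and with quotes rather than backticks (backticks run the command at definition time), a cd inside a script only changes that script's own shell unless the script is sourced, and the exec bash line in chd.sh replaces the script before the runserver line is ever reached. A shell function in ~/.bashrc avoids all of this (a sketch, assuming the /storage/emulated/0/django/hellodjango path shown in the question):
# add to ~/.bashrc, then reload it with: source ~/.bashrc
mycd() { cd /storage/emulated/0/django/hellodjango || return; }                        # jump to the project
myrun() { cd /storage/emulated/0/django/hellodjango && python manage.py runserver; }   # jump and start the dev server
After reloading the file, mycd and myrun are available in every new session without copy-pasting anything.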

Crontab Launches Python Script but SystemD Does Not?

I can log into my AWS EC2 server via ssh and type:
cd /opt/myWebApp
sudo python3 /opt/myWebApp/manage.py myCronJob
...and it runs.
I can also launch the same script via crontab:
0 */6 * * * sudo python3 /opt/myWebApp/manage.py myCronJob --settings=server.settings.production
But when I try to run it in SystemD, with .service file:
ExecStart='/usr/bin/python3.7' /opt/myWebApp/manage.py myCronJob --settings=server.settings.production
...I get:
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
$PYTHONPATH and $VIRTUAL_ENV are empty. /opt/myWebApp/server_venv is empty as well. There's a python at /usr/bin/python3.7, but I'm referencing that in the SystemD .service file as noted above, and it's throwing that error.
What am I missing?
Solved it!
ExecStart='/etc/alternatives/python3' ./manage.py myCronJob --settings=server.settings.production
WorkingDirectory=/opt/myWebapp
User=myUser
The user ('myUser' in the above code) has access to Django.
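Putting those three lines together, a complete unit file might look like the sketch below (the unit file name, Type=oneshot, and the [Install] target are assumptions; the rest comes from the answer above). Note that systemd starts services with an almost empty environment rather than your login shell's, which is why $PYTHONPATH and $VIRTUAL_ENV appeared unset:
# /etc/systemd/system/myCronJob.service (hypothetical name)
[Unit]
Description=Run the myCronJob Django management command

[Service]
Type=oneshot
User=myUser
WorkingDirectory=/opt/myWebApp
ExecStart=/etc/alternatives/python3 ./manage.py myCronJob --settings=server.settings.production

[Install]
WantedBy=multi-user.target
Paired with a .timer unit (or triggered manually with systemctl start myCronJob.service), this reproduces what the crontab entry was doing.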

Run a Python script in Vagrant

I'm learning how to use Vagrant with a Udacity course, and we're asked to run a Python script database_setup.py in our virtual machine.
For this, I created a folder "udacityproject" inside my vagrant folder on my computer. I saved my file database_setup.py there.
Now on Bash, I do
$ vagrant up
$ vagrant ssh
$ cd udacityproject
$ python database_setup.py
The interface returns:
"python: can't open file 'database_setup.py': [Errno 2] No such file
or directory".
It must be a silly mistake, but I cannot see what I am doing wrong... A similar topic was opened here (Run Python script in Vagrant) but the answers are not helping me.
The vagrant folder on your computer, the one that contains the Vagrantfile, is mounted as /vagrant on your VM (it sits under /). It is not your home directory. After vagrant ssh you are logged into the home directory of the user vagrant, which is /home/vagrant/.
$ vagrant ssh
$ pwd
/home/vagrant
The tree looks like that:
/root
/tmp
/usr
/var
/home
|-- vagrant # <-- You are here after logging in
/vagrant
|-- udacityproject
|-- database_setup.py # <-- Your script is here
...
To run your script you must go to /vagrant
$ cd /vagrant
With ls * you can check whether your file exists. Now go to the folder you created and run your script:
$ cd udacityproject
$ python database_setup.py
Or simply do that from beginning:
$ vagrant ssh
$ python /vagrant/udacityproject/database_setup.py

Running Django migrations when deploying to Elastic Beanstalk

I have my Django app set up on Elastic Beanstalk and recently made a change to the DB that I would like to apply to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to get more control over the migration. For example, I only want a migration to run when necessary, but from my understanding the container will run the migration on every deploy as long as the command is listed in the config file. Also, on occasion, I will be given options during a migration, such as:
Any objects related to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
How do I set up the container command to respond to this with a yes during the deployment phase?
This is my current config file
container_commands:
  01_migrate:
    command: 'source /opt/python/run/venv/bin/actiate && python app/manage.py makemigrations'
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
I'm not sure there is a specific way to answer yes or no, but you can append --noinput to your container command. The --noinput option suppresses all user prompting, such as “Are you sure?” confirmation messages.
try
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput'
OR..
You can SSH into your Elastic Beanstalk instance and run your command manually.
Then you'll have more control over the migrations.
Install awsebcli with pip install awsebcli
Type eb ssh YourEnvironmentName
Navigate to your eb instance app directory with:
sudo -s
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
cd /opt/python/current/app
then run your command.
./manage.py migrate
I hope this helps
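If you prefer to keep the migration in the deploy config rather than running it manually, a sketch that combines the suggestions in this thread (--noinput from above, plus leader_only so only one instance runs it, as the answers below also use) would be:
container_commands:
  01_migrate:
    command: "source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput"
    leader_only: true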
Aside from the automatic migration that you can add to your deploy script (which runs every time you update the environment, and may not be desirable if you have long-running migrations or other Django management commands), you can SSH into an EB instance to run a migration manually.
Here is how to manually run migration (and any other Django management commands) while working with Amazon Linux 2 (Python 3.7, 3.8) created by Elastic Beanstalk:
First, from your EB cli: eb ssh to connect an instance.
The virtual environment can be activated by
source /var/app/venv/*/bin/activate
manage.py can be run with
python3 /var/app/current/manage.py
Now the only tricky bit is getting Elastic Beanstalk's environment variables. You can access them through /opt/elasticbeanstalk/bin/get-config. I'm not super familiar with bash scripting, but here is a little script that I use to get and set environment variables; maybe someone can improve it to make it less hard-coded:
#! /bin/bash
export DJANGO_SECRET_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k DJANGO_SECRET_KEY)
...
More info regarding the Amazon Linux 2 platform script tools: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/custom-platforms-scripts.html
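A less hard-coded alternative, if you would rather not list every key, is to export the whole deployment env file the way the February 2022 answer earlier on this page does (still a sketch; values containing spaces would need more careful quoting):
#!/bin/bash
# load every Elastic Beanstalk environment variable for this deployment, then run a command
export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
source /var/app/venv/*/bin/activate
python3 /var/app/current/manage.py migrate --noinput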
Make sure that the same settings are used when migrating and running!
Thus I would recommend you change this kind of code in django.config
container_commands:
  01_migrate:
    command: "source /opt/python/run/venv/bin/activate && python manage.py migrate"
    leader_only: true
to:
container_commands:
  01_migrate:
    command: "django-admin migrate"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: fund.productionSettings
as recommended here. This will help you avoid issues with the wrong settings being used.
More on manage.py vs. django-admin.py.
The django-admin method was not working for me because it was not configured properly. You can also use python manage.py migrate in
.ebextensions/django.config:
container_commands:
  01_migrate:
    command: "python manage.py migrate"
    leader_only: true
In reference to Oscar Chen's answer, you can set environment variables using the EB CLI with
eb setenv key1=value1 key2=value2 ... etc.
The trick is that the full output of container_commands is in /var/log/cfn-init-cmd.log (Amazon Linux 2 Elastic Beanstalk released November 2020).
To view this you would run:
eb ssh [environment-name]
sudo tail -n 50 -f /var/log/cfn-init-cmd.log
This doesn't seem to be documented anywhere obvious and it's not displayed by eb logs; I found it by hunting around in /var/log.
The Django example management command django-admin.py migrate did not work for me. Instead I had to use something like:
01_migrate:
  command: "$PYTHONPATH/python manage.py migrate"
  leader_only: true
02_collectstatic:
  command: "$PYTHONPATH/python manage.py collectstatic --noinput --verbosity=0 --clear"
To see the values of your environment variables at deploy time, you can create a debug command like:
03_debug:
  command: "env"
You can see most of these environment variable with eb ssh; sudo cat /opt/elasticbeanstalk/deployment/env, but there seem to be some subtle differences at deploy time, hence using env above to be sure.
Here you'll see that $PYTHONPATH is being used in a non-typical way: it points to the virtualenv's bin directory, not the site-packages directory.
This answer looks like it will work for you if you just want to send "yes" to a few prompts.
You might also consider the --noinput flag so that your config looks like:
container_commands:
  01_migrate:
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations'
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput'
This takes the default setting, which is "no".
It also appears that there's an open issue/fix to solve this problem a better way.

Trouble activating virtualenv on server via Fabric

I am trying to run some Django management commands via Fabric on my staging server.
The problem is that Fabric does not seem to be able to activate the virtualenv, and so it uses the system python/libs when executing the commands.
On the server the Django app is run using a virtualenv (no, I don't use virtualenvwrapper yet...)
Using Fabric (1.0.1) a command might look like this when run from my box:
The fabfile method:
def collectstatic():
    require('settings', provided_by=[production, staging])
    with settings(warn_only=True):
        run('source %(env_path)s/bin/activate && python %(repo_path)s/%(project_name)s/configs/%(settings)s/manage.py collectstatic --noinput -v0' % env)
The output:
$ fab staging master collectstatic
[myserver.no] Executing task 'master'
[myserver.no] Executing task 'collectstatic'
[myserver.no] run: source /home/newsapps/sites/mysite/env/bin/activate && python /home/newsapps/sites/mysite/repository/mysite/configs/staging/manage.py collectstatic --noinput -v0
[myserver.no] Login password:
[myserver.no] out: Unknown command: 'collectstatic'
[myserver.no] out: Type 'manage.py help' for usage.
I know, of course, that the Django command collectstatic does not exist in versions prior to 1.3, which leads me to think that the system python (which has Django 1.2) is being used.
My fabfile/project layout is based on the great fabfile of the Tribapps guys
So I created a Fabric method to test the Python version:
def pythonver():
    require('settings', provided_by=[production, staging])
    with settings(warn_only=True):
        run('source %(env_path)s/bin/activate && echo "import sys; print sys.path" | python ' % env)
When run it gives the following output:
$ fab staging master pythonver
[myserver.no] Executing task 'master'
[myserver.no] Executing task 'pythonver'
[myserver.no] run: source /home/newsapps/sites/mysite/env/bin/activate && echo "import sys; print sys.path" | python
[myserver.no] Login password:
[myserver.no] out: ['', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/dist-packages', '/usr/lib/pymodules/python2.6', '/usr/lib/pymodules/python2.6/gtk-2.0',
As you can see it uses system python and not my virtualenv located in home/newsapps/sites/mysite/env
But if I run this command directly on the server
source /home/newsapps/sites/mysite/env/bin/activate && echo "import sys; print sys.path" | python
.. then it outputs the right paths from the virtualenv
What am I doing wrong since the commands are not run with the python from my virtualenv using Fabric?
You should call the python version from your virtualenv bin directory, then you will be sure it uses the virtualenv's version of python.
/home/newsapps/sites/mysite/env/bin/python /home/newsapps/sites/mysite/repository/mysite/configs/staging/manage.py collectstatic --noinput -v0
I wouldn't bother with activating the virtualenv, just give the full path to the virtualenv's python interpreter. That will then use the correct PYTHONPATH, etc.
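Applied to the task from the question, that suggestion looks roughly like this (a sketch reusing the question's env keys; the virtualenv's interpreter replaces the source ... && python pair):
def collectstatic():
    require('settings', provided_by=[production, staging])
    with settings(warn_only=True):
        # call the virtualenv's own interpreter instead of activating the env first
        run('%(env_path)s/bin/python %(repo_path)s/%(project_name)s/configs/%(settings)s/manage.py collectstatic --noinput -v0' % env)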
I had the same problem. Couldn't solve it the easy way. So I just used the full path to the python bin file inside the virtualenv. I'm not a pro in Python, but I guess it's the same thing in the end.
It goes something like this in my fab file:
PYTHON = '/home/dudus/.virtualenvs/pai/bin/python'
PIP = '/home/dudus/.virtualenvs/pai/bin/pip'
def update_db():
    with cd(REMOTE_DIR + 'application/'):
        run('%s ./manage.py syncdb --settings="%s"' %
            (PYTHON, SETTINGS))  # syncdb
        run('%s ./manage.py migrate --settings="%s"' %
            (PYTHON, SETTINGS))  # south migrate
This will work perfectly :)
from __future__ import with_statement
from fabric.api import *
from contextlib import contextmanager as _contextmanager

env.hosts = ['servername']
env.user = 'username'
env.directory = '/path/to/virtualenvs/project'
env.activate = 'source /path/to/virtualenvs/project/bin/activate'

@_contextmanager
def virtualenv():
    with cd(env.directory):
        with prefix(env.activate):
            yield

def deploy():
    with virtualenv():
        run('pip freeze')
This approach worked for me, you can apply this too.
from fabric.api import run

# ... other code...

def install_pip_requirements():
    run("/bin/bash -l -c 'source venv/bin/activate' "
        "&& pip install -r requirements.txt "
        "&& /bin/bash -l -c 'deactivate'")
Assuming venv is your virtual env directory, add this method wherever appropriate.
