Django manage.py command having SyntaxError on ElasticBeanstalk - python

I'm using AWS for the first time. After uploading my Django project, I want to back up the database contents to a file so that, if I have to change my models (the project is still in development), I can modify the data and keep some population data.
I thought of Django's dumpdata command, so to run it on EB through the CLI I did the following (and this is where I may be doing something wrong):
- eb ssh
- sudo -s
- cd /opt/python/current/app/
- python manage.py dumpdata --natural-foreign --natural-primary -e contenttypes -e auth.Permission --indent 4 > project_dump.json
From what I understand, the first command just opens an SSH session on the Elastic Beanstalk instance.
The second one gives me root permissions inside the Linux server, to avoid problems creating and opening files, etc.
The third one just changes into the directory where the currently deployed application lives.
And the last one is the command that dumps all the data in a "human friendly" format, without restrictions, so it can be loaded into any other new database.
I should mention that I tried this last command on my local machine and it worked as expected, without any error or warning.
So, the issue I'm facing is that when I execute this last command, I get the following error:
File "manage.py", line 14
) from exc
^
SyntaxError: invalid syntax
I also tried skipping sudo -s and just using the permissions of the user I log in with over SSH, but then I got -bash: project_dump.json: Permission denied, which is why I thought using sudo would help here.
In addition, I followed the well-known tutorial for deploying Django + PostgreSQL on EB, so the user I'm connecting with over SSH is in a group with AdministratorAccess permissions.
Before trying all of this, I also looked for a way of getting this information directly from AWS RDS, but I only found how to restore a backup, without being able to modify the content manually, so that's not what I really need.

As in your local environment, you need to run your manage.py commands inside the correct Python virtualenv and make sure environment variables like RDS_USERNAME and RDS_PASSWORD are set. To do that, you need to:
Activate your virtualenv
Source your environment variables
As described at the end of the tutorial you mentioned, this is how to do it:
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
python manage.py <your_command>
And you have to do that every time you ssh into the machine.
Note: the reason you're getting the permission denied error is that when you redirect the output of dumpdata to project_dump.json, you're trying to write into the app directory itself. Not a good idea. Redirect to ~/project_dump.json instead (your home directory); then sudo won't be needed.
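Putting it together, a complete session might look like this (a sketch based on the steps above; the venv and env file paths are the standard Elastic Beanstalk locations mentioned in the tutorial):
eb ssh
cd /opt/python/current/app/
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
# write the dump to the home directory so sudo isn't needed
python manage.py dumpdata --natural-foreign --natural-primary \
    -e contenttypes -e auth.Permission --indent 4 > ~/project_dump.json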

Related

Missing MapBox Token when using Apache Superset with Docker

I've installed Apache Superset according to the official manual. I can create plots, connect to databases, etc. without any problems; only when I want to plot latitude and longitude data with Mapbox or deck.gl plots do I get this warning and can't see any maps:
NO_TOKEN_WARNING
For information on setting up your basemap, read
Note on Map Tokens
I have a Mapbox API key (let's say XXYYZZ) and followed instructions to create a superset_config.py file in the home folder of the server where Superset is running. This is what I used:
Entries in .bashrc
export SUPERSET_HOME=/home/maximus/envs/superset/lib/python3.6/site-packages/superset
export SUPERSET_CONFIG_PATH=$HOME/.superset/superset_config.py
export PYTHONPATH=/home/maximus/envs/superset/bin/python:/home/maximus/.superset:$PYTHONPATH
Created superset_config.py in .superset
path: ~/.superset/superset_config.py
with the following content:
#---------------------------------------------------------
# Superset specific config
#---------------------------------------------------------
ROW_LIMIT = 50000
MAPBOX_API_KEY = 'XXYYZZ'
As I'm using Docker, I thought maybe I needed to do the same within the main Superset Docker container (superset_app), but it still does not work.
My server runs Ubuntu 18.04 LTS. Does anyone have any ideas on how to solve this problem with Docker, Superset and Mapbox?
I solved the problem by adding my Mapbox token (XXYYZZ) to the Docker environment file used by docker-compose.
This is what I did in detail:
As superset runs on my server I connected via ssh
Stop superset with docker-compose down
cd into the docker folder inside the directory where the docker-compose files are: cd superset/docker
I was running the non-dev version with docker-compose, therefore I opened the .env-non-dev file with nano. If you run the "normal" version just edit the .env file instead.
Comment: I'm not sure if this is the intended way, but apparently you can edit the environment parameters there.
I added my Mapbox Key (MAPBOX_API_KEY = "XXYYZZ")
Finally just start superset again with docker-compose -f docker-compose-non-dev.yml up -d or docker-compose -f docker-compose.yml up -d respectively.
That's all; I can now see the maps when opening the deck.gl sample dashboard.
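For reference, the relevant line in the environment file looks something like this (a sketch; env files read by docker-compose take plain KEY=value pairs):
# docker/.env-non-dev (or docker/.env for the "normal" version)
MAPBOX_API_KEY=XXYYZZ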
The documentation and a YouTube video tutorial seem outdated.
For the most recent release:
clone the superset repo;
add the MAPBOX_API_KEY to the superset/config.py or docker/pythonpath_dev/superset_config.py;
then docker-compose up solved the problem.
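As a rough sketch of those steps (using the official apache/superset repository; the key value is of course your own):
git clone https://github.com/apache/superset.git
cd superset
# make the key visible inside the containers via the dev config file mentioned above
echo "MAPBOX_API_KEY = 'XXYYZZ'" >> docker/pythonpath_dev/superset_config.py
docker-compose up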

How to allow python script to access a root-owned docker container log file?

I wrote a short Python program (one script with ~100 lines) that basically tracks the log of a running Docker container located at:
/var/lib/docker/containers/7d847e/7d847e-json.log, where 7d847e is the container id.
However, the path /var/lib/docker/containers/ is owned by root. I tail the log file on the command line with sudo tail -f ..., but how can I make the Python script access the file?
I can think of the following ways:
change the owner of /var/lib/docker/containers/ to me; (not sure if this breaks other stuff)
save the sudo password somewhere and allow the python script to access it
They both look like bad ideas to me.
What is the best practice?

Starting TRAC server with multiple independent projects

I'm running a TRAC server (the tracd service) with 3 independent projects configured. Each project has its own password file in order to keep user management independent. TRAC is started as a Windows service as described at https://trac.edgewall.org/wiki/0.11/TracStandalone
It seems that starting the TRAC server does not work if the string stored in the 'AppParameters' value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\tracd\Parameters is too long. The maximum length seems to be around 260 characters.
The TRAC server can be started successfully using following 'AppParameters' key:
C:\Python27\Scripts\tracd-script.py -p 80 --auth=',C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth=',C:\Trac\Balances\conf\.htpasswd,mt.com' --auth=',C:\Trac\Weights\conf\.htpasswd,mt.com' C:\Trac\Moisture C:\Trac\Balances C:\Trac\Weights
The TRAC server does not start with following 'AppParameters' key:
C:\Python27\Scripts\tracd-script.py -p 80 --auth='Moisture,C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth='Balances,C:\Trac\Balances\conf\.htpasswd,mt.com' --auth='Weights,C:\Trac\Weights\conf\.htpasswd,mt.com' C:\Trac\Moisture C:\Trac\Balances C:\Trac\Weights
If I add a fourth project it is not possible to start the TRAC server anymore because the string is too long. Is this problem known? Is there a workaround?
You can also shorten your command by using the -e option for specifying the Trac environment parent directory rather than explicitly listing each Environment path.
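Assuming all three environments live directly under C:\Trac, as in the commands above, the shortened AppParameters value might look roughly like this (a sketch using tracd's environment-parent-directory option; only the trailing environment paths are replaced):
C:\Python27\Scripts\tracd-script.py -p 80 --auth='Moisture,C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth='Balances,C:\Trac\Balances\conf\.htpasswd,mt.com' --auth='Weights,C:\Trac\Weights\conf\.htpasswd,mt.com' -e C:\Trac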
A more extensive solution:
You could run the service with nssm.
Install nssm and put it on your path. I installed using chocolatey package manager: choco install -y nssm.
Create a batch file, run_tracd.bat:
C:\Python27-x86\Scripts\tracd.exe -p 8080 env1
Run nssm install tracd (this opens the nssm GUI, where you set the application path to run_tracd.bat).
Run nssm start tracd
You don't have to do it exactly like this. You could avoid the bat file and enter the parameters in the nssm GUI instead. I'm no Windows expert, but I like having the bat file because it's easier to edit. However, there may be security concerns that I'm unaware of, or it may be more robust to put the parameters in the nssm GUI (you don't have to worry about accidental deletion of the bat file); that also works for me.
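If you prefer to stay on the command line, nssm can also be configured non-interactively; a sketch using the paths from the bat file above (not part of the original answer):
nssm install tracd C:\Python27-x86\Scripts\tracd.exe -p 8080 env1
nssm start tracd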

How do you manage development tasks when setting up your development environment using docker containers?

I have been researching Docker and understand almost everything I have read so far. I have built a few images, linked containers together, mounted volumes, and even got a sample Django app running.
The one thing I cannot wrap my head around is setting up a development environment. The whole point of Docker is to be able to take your environment anywhere so that everything you do is portable and consistent. If I am running a Django app in production served by gunicorn, for example, I need to restart the server for my code changes to take effect; this is not ideal when you are working on your project on your local laptop. If I make a change to my models or views, I don't want to have to attach to the container, stop gunicorn, and then restart it every time I make a code change.
I am also not sure how I would run management commands. python manage.py syncdb would require me to get inside the container and run commands. I also use South to manage data and schema migrations (python manage.py migrate). How are others dealing with this issue?
Debugging is another issue. Would I have to somehow get all my logs saved somewhere so I can look at things? I usually just look at the Django development server's output to see errors and prints.
It seems that I would have to make a special dev-environment container that had a bunch of workarounds; that seems like it completely defeats the purpose of the tool in the first place though. Any suggestions?
Update after doing more research:
Thanks for the responses. They set me on the right path.
I ended up discovering fig (http://www.fig.sh/). It lets you orchestrate the linking of containers and mounting of volumes, and you can run one-off commands, e.g. fig run container_name python manage.py syncdb. It seems pretty nice and I have been able to set up my dev environment using it.
I made a diagram of how I set it up using Vagrant (https://www.vagrantup.com/).
I just run
fig up
in the same directory as my fig.yml file and it does everything needed to link the containers and start the server. I just run the development server when working on my Mac so that it restarts when I change Python code.
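For illustration, a minimal fig.yml for a setup like this might look as follows (a sketch; the service names, base image and port are assumptions, not the poster's actual file):
db:
  image: postgres
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
With this, fig up starts both containers, and fig run web python manage.py migrate runs a one-off management command.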
At my current gig we set up a bash script called django_admin. You run it like so:
django_admin <management command>
Example:
django_admin syncdb
The script looks something like this:
docker run -it --rm \
-e PYTHONPATH=/var/local \
-e DJANGO_ENVIRON=LOCAL \
-e LC_ALL=en_US.UTF-8 \
-e LANG=en_US.UTF-8 \
-v /src/www/run:/var/log \
-v /src/www:/var/local \
--link mysql:db \
localhost:5000/www:dev /var/local/config/local/django-admin "$@"
I'm guessing you could also hook something up like this for manage.py.
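As a sketch of that idea (the image, mounts and link are copied from the script above; the manage.py location inside the mounted source tree is an assumption):
# same flags as above, but invoking manage.py from the mounted project instead of django-admin
docker run -it --rm \
    -e PYTHONPATH=/var/local \
    -v /src/www:/var/local \
    --link mysql:db \
    localhost:5000/www:dev python /var/local/manage.py "$@"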
I normally wrap my actual CMD in a script that launches a bash shell. Take a look at the Docker-Jetty container as an example. The final two lines in the script are:
/opt/jetty/bin/jetty.sh restart
bash
This will start Jetty and then open a shell.
Now I can use the following command to enter a shell inside the container and run any commands or look at logs. Once I am done I can use Ctrl-p + Ctrl-q to detach from the container.
docker attach CONTAINER_NAME

How to run Python server without command prompt

I'm developing a Python project with Django. When we run the Python/Django application, we need to open the command prompt and type python manage.py runserver. That's OK for the development server, but for production it looks funny. Is there any way to run the Python/Django project without opening the command prompt?
The deployment section of the documentation details the steps to configure servers to run Django in production.
runserver is to be used strictly for development and should never be used in production.
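As one concrete illustration of what those deployment docs cover (the project name here is hypothetical, not from the original answer), a WSGI server such as gunicorn can serve the project instead of runserver:
# assuming the project package is called mysite and gunicorn is installed
gunicorn mysite.wsgi:application --bind 0.0.0.0:8000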
You run the runserver command only while developing. After you deploy, the client does not need to run the python manage.py runserver command; requesting the URL will execute the required view. So it need not be a concern.
If you are using Linux, I wrote a pretty basic script that I always use when I don't want to type out this command.
Note: you really should only use runserver for development :)
#!/bin/bash
#Of course change "IP-Address" to your current IP-Address and the "Port" to your port.
#ifconfig to get your IP-Address
python manage.py runserver IP-Address:Port
just name it runserver.sh and execute it like this in your terminal:
./runserver.sh
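If the shell refuses to run it with a permission error, the script most likely just needs the execute bit set first:
chmod +x runserver.sh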
