I am learning how to develop a Django application in Docker with this official tutorial: https://docs.docker.com/compose/django/
I have successfully run through the tutorial:
docker-compose run web django-admin.py startproject composeexample . creates the project (building the image on the first run)
docker-compose up runs the application
The question is:
I often use python manage.py shell to run Django in shell mode, but I do not know how to achieve that with Docker.
I use this command (when running with Compose):
docker-compose run <service_name> python manage.py shell
where <service_name> is the name of the Docker service (in docker-compose.yml).
So, in your case the command will be
docker-compose run web python manage.py shell
https://docs.docker.com/compose/reference/run/
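If you're not sure what the service name is, Compose can list the services it knows about (docker-compose config --services is a standard Compose subcommand; the example output below assumes the tutorial's setup):
# List the service names defined in docker-compose.yml
docker-compose config --services
# prints e.g.:
# db
# web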
When running with a plain Dockerfile:
docker exec -it <container_id> python manage.py shell
Run docker exec -it --user desired_user your_container bash. Running this command has a similar effect to running ssh into a remote server: after you run it you will be inside the container's bash terminal, and you will be able to run all of Django's manage.py commands.
Inside your container just run python manage.py shell
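If you don't know the container id or name, docker ps lists the running containers (standard Docker CLI; a quick sketch):
# Show running containers and copy the CONTAINER ID (or NAMES) column
docker ps
# Then open the Django shell inside the one you want
docker exec -it <container_id> python manage.py shell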
You can use docker exec to run commands in the container, like below.
docker exec -it container_id python manage.py shell
If you're using docker-compose, you shouldn't start additional containers when it isn't needed, as each run starts a new container and you'll lose a lot of disk space. You can end up with multiple containers you never actually needed. Basically it's better to:
Start your services once with docker-compose up -d
Execute (instead of running) your commands:
docker-compose exec web ./manage.py shell
or, if you don't want to start all services (because, for example, you want to run only one command in Django), you should pass the --rm flag to docker-compose run, so the container is removed as soon as the command finishes.
docker-compose run --rm web ./manage.py shell
In this case, when you exit the shell, the container created with the run command will be destroyed, so you'll save much space on your disk.
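If you've already accumulated stopped containers from earlier docker-compose run invocations without --rm, you can inspect and clean them up (standard commands; a sketch):
# List all containers, including stopped leftovers from `run`
docker ps -a
# Remove stopped service containers created by Compose
docker-compose rm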
If you're using Docker Compose (with the command docker compose up) to spin up your application, after you run that command you can open an interactive shell in the container with the following command:
docker compose exec <service name of your Django app> python3 <path to your manage.py file, for example, src/manage.py> shell
Keep in mind the above uses Python 3 via the python3 command.
Related
I am using Windows 10 and working on a Django project with Docker.
If I run a Python command from inside the Docker container, it runs perfectly.
E:\path>docker exec -it my_docker_container bash
root@701z00f607ae:/app# python manage.py makemigrations authentication
No changes detected in app 'authentication'
But when I try to run the same command using a .sh file, it gives different output.
root@701z00f607ae:/app# cat ./migration_script.sh
python manage.py makemigrations authentication
root@701z00f607ae:/app# ./migration_script.sh
'. installed app with label 'authentication
Note: Executing ./migration_script.sh works perfectly on a Linux-based system.
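A likely cause (an assumption on my part, suggested by the scrambled error text) is that the .sh file was saved with Windows CRLF line endings: inside the Linux container the app label becomes authentication\r, and the trailing carriage return mangles the printed error message. A sketch of the fix, run inside the container:
# Strip carriage returns from the script (assumes sed exists in the image)
sed -i 's/\r$//' ./migration_script.sh
# or, if dos2unix is installed:
# dos2unix ./migration_script.sh
./migration_script.sh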
My existing shell script is trying to:
Start a Docker container,
Change the permissions of the Python file in it,
Run that while-looped (streaming) Python job in Docker (not waiting for it),
Run a Python job outside (on the local machine) which will feed data to the Docker Python job, which waits for it.
#!/bin/bash
clear
docker run -d -v /home/ubuntu/Downloads/docker_work/test_py_app/app:/workspace/app -p 8881:8888 -p 5002:5002 --gpus all --name pytorch nvcr.io/nvidia/pytorch:server-v1.0 tail -f /dev/null
sudo docker exec -it pytorch chmod 777 /workspace/server/server.py
sudo docker exec -it pytorch python /workspace/server/server.py
python /home/ubuntu/PycharmProjects/test/pipeline/client.py
exit
If the two Python programs are run from two different shells, it works perfectly fine. The problem with the existing script is that the shell gets stuck at the run of the server Python file inside Docker. How can I fire it in an async fashion, so that immediately after triggering the first Python job it goes on and starts the local machine's Python job?
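One common approach (a sketch, not from the original script) is to start the in-container job detached with docker exec -d instead of -it, so the script does not block before launching the local job:
#!/bin/bash
# Launch the streaming server inside the container detached (-d),
# so this script does not wait for it to finish
sudo docker exec -d pytorch python /workspace/server/server.py
# The local client can now start immediately and feed data to the server
python /home/ubuntu/PycharmProjects/test/pipeline/client.py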
I feel there is a subtle difference between the --tty and --interactive switches of the docker run command that I don't grasp:
--interactive, -i: Keep STDIN open even if not attached
--tty, -t: Allocate a pseudo-TTY
So I decided to run some tests.
First I created a basic Python script, which continuously prints a string.
Then I created a basic docker image, which will run this script when a container is started.
my_script.py
import time

while True:
    time.sleep(1)
    print('still running...')
Dockerfile
FROM python:3.8.1-buster
COPY my_script.py /
CMD [ "python3", "/my_script.py"]
Built using command:
docker build --tag pytest .
Test 1
I run docker run --name pytest1 -i pytest, to test the interactive behaviour of the container.
Nothing is printed to the console, but when I press Control+C the python script is interrupted and the container stops running.
This confirms my thinking that stdin was open on the container and my keyboard input entered the container.
Test 2
I run docker run --name pytest1 -t pytest, to test the pseudo-tty behaviour of the container. It repeatedly prints still running... to the console, ánd when I press Control+C the python script is interrupted and the container stops running.
Test 3
I run docker run --name pytest1 -it pytest, to test the combined behaviour. The behaviour is the same as in Test 2.
Questions
What are the nuances I'm missing here?
Why would one use the combined -it switches, as you often see, if there is no benefit to the -t switch?
Does the --tty switch just keep bóth stdin and stdout open?
The -t option is needed if you want to interact with a shell like /bin/sh, for instance. The shell works by controlling a tty; no tty available, no shell.
We use -i in combination with -t to be able to type commands into the shell we opened.
A few tests you can reproduce to understand:
docker run alpine /bin/sh: the container exits; the shell needs to wait for stdin
docker run -i alpine /bin/sh: the container stays up, but the shell won't start; we cannot type commands
docker run -t alpine /bin/sh: the shell starts, but we are stuck; the keys we press are not interpreted
docker run -it alpine /bin/sh: yeah, our shell is working
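One more nuance worth noting (my addition, not part of the answer above): in Test 1 nothing printed because Python block-buffers stdout when it is not attached to a terminal, while allocating a pseudo-TTY with -t makes stdout line-buffered. A sketch that makes Test 1 print even without -t, by disabling Python's buffering via an environment variable:
# Re-run Test 1 with unbuffered Python output (assumes the pytest image
# built from the question's Dockerfile); 'still running...' now appears
docker run --name pytest1b -i -e PYTHONUNBUFFERED=1 pytest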
I have a Django project hosted on an Amazon EC2 Linux instance.
To keep my app running even after the session is closed I use Gunicorn, but I experience some errors and degraded performance.
When I run the command:
python manage.py runserver
from the terminal everything works great, but when the session is closed the app does not work.
How can I run the command "python manage.py runserver" so that it keeps working in the background (until I kill it), even when the session is closed?
I know there is uWSGI, but if possible I prefer to use the native Django command directly.
Thanks in advance
What happens here is that the script is interrupted by SIGHUP signal when your session is closed. To overcome this problem, there is a tool called nohup which doesn't pass the SIGHUP down to the program/script it executes. Use it as follows:
nohup python manage.py runserver &
(note the & at the end; it is needed so that manage.py runs in the background rather than in the foreground).
By default nohup redirects the output to the file nohup.out, so you can use tail -f nohup.out to watch the output/logs of your Django app.
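To stop the backgrounded server later, find its process and kill it (standard tools; pkill -f matches against the full command line):
# Find the background runserver process
ps aux | grep "manage.py runserver"
# Stop it by PID, or match the command line directly
pkill -f "manage.py runserver"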
Note, however, that manage.py runserver is not supposed to be used in production. For production you really should use a proper WSGI server, such as uWSGI or Gunicorn.
You can install and use tmux if you want to run your scripts in the background even after closing SSH and mosh connections:
$ sudo apt-get install tmux
then run it using the command $ tmux; a new shell will be opened, and you can just execute your command:
$ python manage.py runserver 0.0.0.0:8000
0.0.0.0:8000 makes the development server listen on all network interfaces, so it is reachable from outside the machine (your ALLOWED_HOSTS setting still has to allow the hostname you use). Now you can detach your tmux session to keep it running in the background using Ctrl+B and then pressing D.
Now you can exit your terminal, but your command keeps running in tmux. Just learn the basic tmux commands from here.
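To get back into the session later (for example, to stop the server), reattach with standard tmux commands:
# List existing tmux sessions
tmux ls
# Reattach to the most recent one
tmux attach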
For that, you can use screen: just start a new screen session and run
python manage.py runserver
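A minimal screen workflow might look like this (a sketch; detach with Ctrl+A then D):
screen -S django                      # start a named session
python manage.py runserver 0.0.0.0:8000
# press Ctrl+A then D to detach; later, reattach with:
screen -r django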
I have a few .py files that I want to run in a Docker image.
But they need the scrapy-splash Docker image to function well. How will I be able to run those .py files in a Docker container or image while also running scrapy-splash? I am planning to run it on a VPS server.
You have two choices:
Run the Python scripts in the entrypoint, after splash. To run splash in the background and then execute your scripts, you'll need an entrypoint like this:
ENTRYPOINT ["/bin/bash", "-c", "python3 splash <SPLASH OPTIONS> & python3 your_script.py && python3 your_second_script.py"]
This way the container will end after running the scripts.
Run the splash container in detached mode and then execute the scripts with docker exec.
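For that second option, a sketch (assuming the standard scrapinghub/splash image, its default port 8050, and an already running container for your scripts named app; adjust the names to your setup):
# Start splash detached on its default port
docker run -d --name splash -p 8050:8050 scrapinghub/splash
# Run your script inside your own container, with scrapy-splash's
# SPLASH_URL setting pointed at the splash container's address
docker exec app python3 your_script.py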