I have a helper container and an app container.
The helper container handles mounting of code via git to a shared mount with the app container.
I need the helper container to check for a package.json or requirements.txt in the cloned code and, if one exists, to run npm install or pip install -r requirements.txt, storing the dependencies in the shared mount.
The thing is, the npm and/or pip command needs to be run from the app container, to keep the helper container as generic and agnostic as possible.
One solution would be to mount the Docker socket into the helper container and run docker exec <app container> <command>, but what if I have thousands of such apps on a single host?
Will there be issues with hundreds of containers all accessing the Docker socket at the same time? And is there a better way to do this, i.e. to get commands run in another container?
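For context, the socket-based approach would look roughly like this (image and container names are placeholders):
# helper container started with the Docker socket and the shared code volume mounted
docker run -d --name helper \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v shared-code:/code \
  helper-image
# from inside the helper, after cloning into /code:
docker exec app-container sh -c 'cd /code && [ -f package.json ] && npm install'
docker exec app-container sh -c 'cd /code && [ -f requirements.txt ] && pip install -r requirements.txt'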
Well, there is no "container to container" internal communication layer like "ssh". In this regard, the containers are as standalone as two different VMs (aside from the shared network, in general).
You could go the usual way: install openssh-server on the "receiving" container and configure it for key-based authentication only. You do not need to expose the port to the host; just connect to the port over the Docker-internal network. Deploy the SSH private key on the "calling" container and the public key into .ssh/authorized_keys on the "receiving" container at container start time (via a volume mount), so you do not bake the secrets into the image at build time.
Probably also create an ssh alias in .ssh/config and disable strict host key checking, since the containers may be rebuilt. Then do
ssh <alias> your-command
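As a sketch, the .ssh/config on the calling container might look like this (the hostname, user, and key path are assumptions):
Host app
    HostName app-container
    User deploy
    IdentityFile /run/secrets/id_rsa
    # containers may be rebuilt, so skip host key verification
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
After which ssh app npm install would run the command inside the app container.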
Found the better way I was looking for :-).
Using supervisord and running its XML-RPC server lets me run something like:
supervisorctl -s http://127.0.0.1:9002 -utheuser -pthepassword start uwsgi
In the helper container, this connects to the RPC server running on port 9002 on the app container and executes a program block that may look something like:
[program:uwsgi]
directory=/app
command=/usr/sbin/uwsgi --ini /app/app.ini --uid nginx --gid nginx --plugins http,python --limit-as 512
autostart=false
autorestart=unexpected
stdout_logfile=/var/log/uwsgi/stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/uwsgi/stderr.log
stderr_logfile_maxbytes=0
exitcodes=0
environment = HOME="/app", USER="nginx"
This is exactly what I needed!
For anyone who finds this, you'll probably need the supervisord.conf on your app container to look something like:
[supervisord]
nodaemon=true
[supervisorctl]
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[inet_http_server]
port=127.0.0.1:9002
username=user
password=password
[program:uwsgi]
directory=/app
command=/usr/sbin/uwsgi --ini /app/app.ini --uid nginx --gid nginx --plugins http,python --limit-as 512
autostart=false
autorestart=unexpected
stdout_logfile=/var/log/uwsgi/stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/uwsgi/stderr.log
stderr_logfile_maxbytes=0
exitcodes=0
environment = HOME="/app", USER="nginx"
You can set up the inet_http_server to listen on an interface that is reachable from other containers, and link the containers so the app container can be reached at a hostname.
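For example (a sketch; the 0.0.0.0 binding and the app hostname are assumptions), the app container's http server block could listen on all interfaces:
[inet_http_server]
port=0.0.0.0:9002
username=user
password=password
and the helper container, on the same Docker network, could then trigger the program by hostname:
supervisorctl -s http://app:9002 -uuser -ppassword start uwsgi
Binding to 0.0.0.0 exposes the RPC server to anything on that network, so the username/password (or a network restricted to the two containers) does the access control.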
I have a complex API which takes around 7 GB of memory when I deploy it using Uvicorn.
I want to understand how to deploy it so that I can make parallel requests: the deployed API should be capable of processing two or three requests at the same time.
I am using FastAPI with uvicorn and nginx for deployment. Here is my deployment command:
uvicorn --host 0.0.0.0 --port 8888
Can someone provide some clarity on how people achieve that?
You can use gunicorn instead of uvicorn to handle your backend. Gunicorn can run multiple workers to load-balance the arriving requests. This means you will have as many running gunicorn processes as you specify, ready to receive and process requests. According to the docs, gunicorn should only need 4-12 worker processes to handle hundreds or thousands of requests per second. However, the number of workers should be no more than (2 x number_of_cpu_cores) + 1, to avoid out-of-memory errors; see the docs for details.
For example, if you want to use 4 workers for your FastAPI-based backend, you can specify it with the -w flag:
gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker -b "0.0.0.0:8888"
In this case, the script holding the backend is called main and the FastAPI instance is named app.
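As a sketch of the (2 x cores) + 1 rule, the worker count could be computed at start time (assuming the same main:app layout):
gunicorn main:app -w $((2 * $(nproc) + 1)) -k uvicorn.workers.UvicornWorker -b "0.0.0.0:8888"
Keep in mind that each worker loads its own copy of the application, so with a 7 GB app the memory budget, not the CPU formula, will likely cap the worker count.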
I'm working on something like this using Docker and NGINX.
There's a Docker image created by the developer of FastAPI that runs uvicorn/gunicorn for you and can be configured to your needs:
https://hub.docker.com/r/tiangolo/uvicorn-gunicorn-fastapi
It took some time to get the hang of Docker, but I'm really liking it now. You can build an nginx image using the configuration below and then run as many copies of your app as you need, each in its own container, to serve as upstream hosts.
The example below runs a weighted load balancer across two of my app services, with a third as a backup if those two fail.
nginx Dockerfile:
FROM nginx
# Remove the default nginx.conf
RUN rm /etc/nginx/conf.d/default.conf
# Replace with our own nginx.conf
COPY nginx.conf /etc/nginx/conf.d/
nginx.conf:
upstream loadbalancer {
    server 192.168.115.5:8080 weight=5;
    server 192.168.115.5:8081;
    server 192.168.115.5:8082 backup;
}
server {
    listen 80;
    location / {
        proxy_pass http://loadbalancer;
    }
}
app Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
RUN pip install --upgrade pip
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . /app
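To tie it together, a rough sketch of building and running the pieces (image names, build contexts, and host ports are assumptions; the upstream block above expects the app instances on 8080-8082):
# build the app and nginx images from their respective Dockerfiles
docker build -t myapp ./app
docker build -t mynginx ./nginx
# run the app instances on the host ports listed in the upstream block
docker run -d -p 8080:80 myapp
docker run -d -p 8081:80 myapp
docker run -d -p 8082:80 myapp
# run the load balancer in front of them
docker run -d -p 80:80 mynginx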
I am currently running my Django app inside a Docker container with the command below:
docker-compose run app sh -c "python manage.py runserver"
but I am not able to access the app at the localhost URL (I am not using any additional DB server, nginx, or gunicorn, just running the Django development server inside Docker).
Please let me know how to access the app.
docker-compose run is intended to launch a utility container based on a service in your docker-compose.yml as a template. It intentionally does not publish the ports: declared in the Compose file, and you shouldn't need it to run the main service.
docker-compose up should be your go-to call for starting the services. Just docker-compose up on its own will start everything in the docker-compose.yml, concurrently, in the foreground; you can add -d to start the processes in the background, or a specific service name docker-compose up app to only start the app service and its dependencies.
The python command itself should be the main CMD in your image's Dockerfile. You shouldn't need to override it in your docker-compose.yml file or to provide it at the command line.
A typical Compose YAML file might look like:
version: '3.8'
services:
  app:
    build: .           # from the Dockerfile in the current directory
    ports:
      - "5000:5000"    # make localhost:5000 forward to port 5000 in the container
While Compose supports many settings, you do not need to provide most of them. Compose provides reasonable defaults for container_name:, hostname:, image:, and networks:; expose:, entrypoint:, and command: will generally come from your Dockerfile and don't need to be overridden.
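A matching Dockerfile sketch (the base image and port are assumptions) would bake the runserver command into CMD, so Compose never needs to override it; note the 0.0.0.0 bind, which makes the dev server reachable from outside the container:
FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "manage.py", "runserver", "0.0.0.0:5000"]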
Try 0.0.0.0:<PORT_NUMBER> (typically 80 or 8000). If you are still having trouble connecting to the server, use the Docker Machine IP instead of localhost. Enter the following in a terminal and navigate to the URL it prints:
docker-machine ip
I want to use scrapinghub/splash container on Azure App Service (Web App for Containers) on Linux.
But the docker run command issued on deploy randomly changes the container-side binding port (see the log below: port 8961 is assigned automatically, and this number varies with every deploy).
2020-01-21 08:56:47.494 INFO - docker run -d -p 8961:8050 --name b2scraper-splash_3_d89ce1f2 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=8050 -e WEBSITE_SITE_NAME=b2scraper-splash -e WEBSITE_AUTH_ENABLED=False -e PORT=8050 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=b2scraper-splash.azurewebsites.net -e WEBSITE_INSTANCE_ID=5446f93a2cbcb25300f091395c54ce738773ce47489c2818322ffabbc23e3413 scrapinghub/splash:latest python3 /app/bin/splash --proxy-profiles-path /etc/splash/proxy-profiles --js-profiles-path /etc/splash/js-profiles --filters-path /etc/splash/filters --lua-package-path "/etc/splash/lua_modules/?.lua" --disable-private-mode --port 8050
Changing the host port binding is possible using WEBSITES_PORT, but there seems to be no way to change the container side.
Is there a way to fix the container-side port binding, like -p 8050:80 or -p 8050:443, in the docker run command?
For example, using the container on Azure Container Instances this is possible without changing the service port 8050.
--publish in the docker run command creates a firewall rule which maps a container port to a port on the Docker host.
https://docs.docker.com/config/containers/container-networking/
For the command docker run -d -p 8961:8050 imagename, TCP port 8050 in the container is mapped to 8961 on the Docker host. On App Service, this docker run command cannot be changed. The container port (8050 in this case) can be set to a specific value using the WEBSITES_PORT application setting.
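For reference, the setting can be applied with the Azure CLI (resource group and app name are placeholders):
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITES_PORT=8050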
That doesn't work. You get 443 as the port with HTTPS.
Neither EXPOSE XXXX, nor WEBSITES_PORT, nor PORT as configuration parameters work...
You do see "docker run -d -p 8961:8050" in the logs, but it doesn't matter to Azure when it comes to exposing the app...
I am trying to run a spider with Portia in its Docker version, but I don't want to execute the spider with a terminal command like docker exec ... portiacrawl .... Is there any way I can run the spider, which is already created, by making a request to its localhost port and save the output in a specific folder?
Something like:
https://localhost:9001/execute/spider_name/folder_path
An example of my own usage:
First, I run the container and leave it running, because I can't stop it for other reasons:
docker run -i -t -d --rm -v <PROJECTS_FOLDER>:/app/data/projects:rw -p 9001:9001 scrapinghub/portia
Next, I execute portiacrawl:
docker exec <CONTAINER_ID> portiacrawl <PROJECT_NAME_PATH> <SPIDER_NAME> -o /some/path/in/my/pc/<SPIDER_NAME>.json
Now, what I want is to replace the docker exec step with some HTTP request to the localhost server that is already running.
Thanks very much for your time.
Yes, you can, by doing a port mapping. When starting a Docker container, no ports are published publicly or exposed internally unless you tell Docker to do so.
For example:
If you wish to expose a port internally (inside the Docker network itself), you need to add EXPOSE in the Dockerfile.
If you wish to publish a port publicly, so that it can be accessed either through localhost or the public IP, you can use the -p option and pass the ports; in your case it will be like this:
docker run -p 9001:9001 imagename
The command above tells Docker to map port 9001 (on localhost or any other interface) to port 9001 inside the container (change the ports according to your actual setup).
If you wish to expose it on localhost only, change the command to something like this:
docker run -p 127.0.0.1:9001:9001 imagename
For more information, check the Docker documentation on container networking.
According to the updated question, another, and the safest, way to accomplish this would be to implement an API inside the Portia container that can be called over HTTP to perform the needed tasks, instead of using docker exec.
I'm getting a connection refused error after building my Docker image and running docker run -t imageName.
Inside the container, my Python script makes web requests (an external API call) and then communicates over localhost:5000 with a logstash socket.
My Dockerfile is really simple:
FROM ubuntu:14.04
RUN apt-get update -y
RUN apt-get install -y nginx git python-setuptools python-dev
RUN easy_install pip
#Install app dependencies
RUN pip install requests configparser
EXPOSE 80
EXPOSE 5000
#Add project directory
ADD . /usr/local/scripts/
#Set default working directory
WORKDIR /usr/local/scripts
CMD ["python", "logs.py"]
However, I get an [ERROR] Connection refused message when I try to run this. It's not immediately obvious to me what I'm doing wrong here. I believe I'm opening 80 and 5000 to the outside world? Is this incorrect? Thanks.
Regarding EXPOSE:
Each container you run has its own network interface. EXPOSE 5000 tells Docker to map port 5000 of the container's network interface to a random port on your host machine (see it with docker ps), as long as you tell Docker to do so by running docker run with -P.
Regarding logstash.
If your logstash is installed on your host, or in another container, then logstash is not on the "localhost" of your container (remember that each container has its own network interface, and each one has its own localhost). So you need to point to logstash properly.
How?
Method 1:
Don't give the container its own interface, so that it shares the same localhost as your machine:
docker run --net=host ...
Method 2:
If you are using docker-compose, use Docker's network linking, i.e.:
services:
  py_app:
    ...
    links:
      - logstash
  logstash:
    image: .../logstash..
Then point to it as logstash:5000 (Docker will resolve that name to the internal IP of the logstash container).
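A quick way to verify that the name resolves and the port is reachable from inside the app container (a sketch; it assumes nc is available in the image and the service is named py_app as above):
docker-compose exec py_app nc -zv logstash 5000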
Method 3:
If logstash listens on localhost:5000 on your host, you can point to it as 172.17.0.1:5000 from inside your container (172.17.0.1 is the host's fixed IP on the default Docker bridge, though this option is arguably less elegant).
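If you'd rather confirm that gateway address than assume 172.17.0.1, either of these, run on the host, should show it (a sketch; it assumes the default bridge network):
# the host's address on the default Docker bridge (typically 172.17.0.1)
ip -4 addr show docker0
# or ask Docker for the bridge gateway directly
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'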