I have a web project hosted on a server. The frontend is Angular, the backend is Flask, and the database is MongoDB; all of them run as Docker containers linked to each other.
I am doing the following steps:
1. Backend Hosting
The backend is created as a Docker container at 10.31.61.52.
Use "docker build . -t backend" to create the Docker image, and
after it is built use "docker run -itd -p 5000:5000 --link mongodb:mongodb --name skybridge_backend backend" to run the container. The backend is good to go!
2. Frontend Hosting
The frontend is created as a Docker container at 10.31.61.52.
Use "docker build . -t frontend" to create the Docker image, and
after it is built use "docker run -itd -p 80:9000 --link skybridge_backend:skybridge_backend --name skybridge_frontend frontend" to run the container.
Instead of 80 I want to use 8080 as the host port, so that I can access the URL like this: http://10.31.61.52:8080/login
You specify the port redirection from the host machine to the container using docker run -p <host port>:<container port>. Therefore changing the port from 80 to 8080 in the command to run the frontend container like this should do the trick:
docker run -itd -p 8080:9000 --link skybridge_backend:skybridge_backend --name skybridge_frontend frontend
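Once the container has been recreated with the new -p option, you can confirm the mapping with, for example:
docker port skybridge_frontend
which lists each container port and the host port it is published on. (If a container with that name already exists, remove it first with docker rm -f skybridge_frontend.)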
When I run this command: docker run -itd -p 80:9000 --link skybridge_backend:skybridge_backend --name skybridge_frontend frontend
the linking happens, but internally it refers to port 80 in the URL http://10.31.61.52/login and the page displays.
I want to change it to port 8080 so that my URL becomes http://10.31.61.52:8080/login,
but when I try to access that URL the page does not open.
http://10.31.61.52/login -- Login screen comes up
http://10.31.61.52:8080/login -- Login screen does not come up
var arr = [
  {
    "TOTAL_USAGE": 17243,
    "MNTH": "NOV",
    "SERVICE_NAME": "CustomerConsumerEmcsfdc"
  },
  {
    "MNTH": "AUG",
    "TOTAL_USAGE": 1104,
    "SERVICE_NAME": "CustomerConsumerEmcsfdc"
  },
  {
    "MNTH": "OCT",
    "TOTAL_USAGE": 45,
    "SERVICE_NAME": "CustomerConsumerEmcsfdc"
  },
  {
    "MNTH": "JUL",
    "SERVICE_NAME": "CustomerConsumerEmcsfdc",
    "TOTAL_USAGE": 19747
  },
  {
    "TOTAL_USAGE": 1539,
    "SERVICE_NAME": "CustomerConsumerEmcsfdc",
    "MNTH": "SEP"
  }
];
Related
I am currently running my Django app inside a Docker container by using the command below:
docker-compose run app sh -c "python manage.py runserver"
but I am not able to access the app with the localhost URL (I am not using any additional DB server, nginx or gunicorn, just running the Django development server inside Docker).
Please let me know how to access the app.
docker-compose run is intended to launch a utility container based on a service in your docker-compose.yml as a template. It intentionally does not publish the ports: declared in the Compose file, and you shouldn't need it to run the main service.
docker-compose up should be your go-to call for starting the services. Just docker-compose up on its own will start everything in the docker-compose.yml, concurrently, in the foreground; you can add -d to start the processes in the background, or a specific service name docker-compose up app to only start the app service and its dependencies.
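For example, to start just the app service (and its dependencies) in the background:
docker-compose up -d app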
The python command itself should be the main CMD in your image's Dockerfile. You shouldn't need to override it in your docker-compose.yml file or to provide it at the command line.
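For reference, a minimal Dockerfile along those lines might look like the sketch below; the Python version, port, and project layout (manage.py at the repository root) are assumptions:
FROM python:3.10
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
# Bind to 0.0.0.0 so the dev server is reachable through the published port
CMD ["python", "manage.py", "runserver", "0.0.0.0:5000"]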
A typical Compose YAML file might look like:
version: '3.8'
services:
  app:
    build: .  # from the Dockerfile in the current directory
    ports:
      - 5000:5000  # make localhost:5000 forward to port 5000 in the container
While Compose supports many settings, you do not need to provide most of them. Compose provides reasonable defaults for container_name:, hostname:, image:, and networks:; expose:, entrypoint:, and command: will generally come from your Dockerfile and don't need to be overridden.
Try 0.0.0.0:<PORT_NUMBER> (typically 80 or 8000). If you are still having trouble connecting to the server, use the Docker Machine IP instead of localhost. Enter the following in a terminal and navigate to the provided URL:
docker-machine ip
I have two docker containers.
Flask app
MongoDB
The Flask app has a Dockerfile that looks like this:
FROM alpine:latest
RUN apk add --no-cache python3 python3-dev py3-pip \
    && pip3 install --upgrade pip
WORKDIR /app
COPY . /app
RUN pip3 --no-cache-dir install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["app.py"]
This is how I am connecting to my local Mongo (not a container) from Flask:
mongo_uri = "mongodb://host.docker.internal:27017/myDB"
appInstance.config["MONGO_URI"] = mongo_uri
mongo = PyMongo(appInstance)
MongoDB is running in its container at mongodb://0.0.0.0:2717/myDB.
Obviously, when I run the Flask container with the local Mongo URI, mongodb://host.docker.internal:27017/myDB, everything works. But it shouldn't work when I try to connect to the Mongo container in the same way, because the Flask container doesn't know anything about that Mongo container.
My question is: how do I connect this Mongo container with the Flask container so that I can query the Mongo container from the Flask container?
Thanks in advance.
If I were you, I would use docker-compose.
Solution just using docker
You'd have to find out the IP address of your mongo container and put this IP in the flask configuration file. Keep in mind that the IP address of the container can change - for example if you use a newer image.
Find IP address:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
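For example, if that command prints 172.17.0.3 (a hypothetical address; yours will differ), the Flask configuration would point at it directly:
mongo_uri = "mongodb://172.17.0.3:27017/myDB"  # IP taken from docker inspect; changes if the container is recreated
appInstance.config["MONGO_URI"] = mongo_uri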
Solution using docker-compose
In your docker-compose file you'd define two services - one for flask and one for mongo. In the flask configuration file you can then access the mongo container with its service name as both services run in the same network.
docker-compose.yml:
services:
  mongo:
    ...
  flask:
    ...
flask configuration:
mongo_uri = "mongodb://mongo/myDB"
In this example mongo is the name for your mongo service.
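For illustration, a fleshed-out docker-compose.yml along these lines might look like the sketch below; the image tag, published port, and environment variable name are assumptions, not taken from your project:
version: "3.8"
services:
  mongo:
    image: mongo:4.2          # assumed image/tag
  flask:
    build: .                  # builds the Flask Dockerfile from the question
    ports:
      - "5000:5000"
    environment:
      - MONGO_URI=mongodb://mongo:27017/myDB   # "mongo" resolves to the mongo service
    depends_on:
      - mongo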
I want to use the scrapinghub/splash container on Azure App Service (Web App for Containers) on Linux.
But the docker run command on deploy randomly changes the port binding of the container (see the log below; port 8961 is automatically assigned, and this number varies on every deploy):
2020-01-21 08:56:47.494 INFO - docker run -d -p 8961:8050 --name b2scraper-splash_3_d89ce1f2 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=8050 -e WEBSITE_SITE_NAME=b2scraper-splash -e WEBSITE_AUTH_ENABLED=False -e PORT=8050 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=b2scraper-splash.azurewebsites.net -e WEBSITE_INSTANCE_ID=5446f93a2cbcb25300f091395c54ce738773ce47489c2818322ffabbc23e3413 scrapinghub/splash:latest python3 /app/bin/splash --proxy-profiles-path /etc/splash/proxy-profiles --js-profiles-path /etc/splash/js-profiles --filters-path /etc/splash/filters --lua-package-path "/etc/splash/lua_modules/?.lua" --disable-private-mode --port 8050
Changing the host port binding is possible using WEBSITES_PORT, but there seems to be no way to change the container side.
Is there a way to fix the container-side port binding, like -p 8050:80 or -p 8050:443, in the docker run command?
e.g. Using the container on Azure Container Instances is possible without changing the service port 8050.
--publish in the docker run command creates a firewall rule which maps a container port to a port on the Docker host.
https://docs.docker.com/config/containers/container-networking/
For the command: docker run -d -p 8961:8050 imagename, TCP port 8050 in the container is mapped to 8961 on the Docker Host. On App Services, this docker run command cannot be changed. The container port (8050 in this case) can be set to a specific value using WEBSITES_PORT application setting.
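For example, one way to set that application setting is with the Azure CLI (the resource group and app names below are placeholders):
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITES_PORT=8050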
That doesn't work. You get 443 as the port with HTTPS.
Neither EXPOSE XXXX, nor WEBSITES_PORT or PORT as configuration parameters...
You do see "docker run -d -p 8961:8050" in the logs, but it doesn't matter to Azure when it comes to exposing the app...
EDIT
I think what is going on is that localhost inside the Docker process refers to the container's own localhost, not my system's localhost. So how do I ensure that when the application running in the container tries to connect to the container's localhost:9200, it actually connects to my system's localhost:9200?
When I visit localhost:9200, my ES application seems to be running. It looks like this in chrome:
{
"name" : "H1YDvcg",
"cluster_name" : "elasticsearch_jwan",
"cluster_uuid" : "aAorzRYTQPOI0j_OgMGKpA",
"version" : {
"number" : "6.8.1",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "1fad4e1",
"build_date" : "2019-06-18T13:16:52.517138Z",
"build_snapshot" : false,
"lucene_version" : "7.7.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
I am running ES in a terminal window and it works after I run the command elasticsearch.
I am running a docker container with this command:
docker run -e DATALOADER_QUEUE='<some aws SQS queue name>' \
  -e ES_HOST='localhost' \
  -e ES_PORT='9200' \
  -e AWS_ACCESS_KEY_ID='<somekey>' \
  -e AWS_SECRET_ACCESS_KEY='<somekey>' \
  -e AWS_DEFAULT_REGION='us-west-2' \
  <application name>
and I get this error:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /person/_search (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f36e4189c90>
Anyone know what is going on? I don't understand why it cannot connect to ES even though it seems to be running on localhost:9200.
The solution was to use host.docker.internal in the ES host setup.
I just used es_client = Elasticsearch(host=es_host), where es_host = "host.docker.internal", and made sure to use http while local instead of https.
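A minimal sketch of that setup, assuming the elasticsearch-py 6.x/7.x client and that host.docker.internal resolves to the host from inside the container:
from elasticsearch import Elasticsearch

# host.docker.internal resolves to the host machine from inside the container
es_host = "host.docker.internal"
es_client = Elasticsearch([{"host": es_host, "port": 9200}])  # plain http by default

print(es_client.info())  # quick connectivity check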
Elasticsearch is running on your host at port 9200, and the application which wants to access Elasticsearch is running inside a container.
By default the Docker container runs in bridge networking mode, in which the host and container networks are different. Hence localhost inside the container is not the same as localhost on the host.
Here you can do two things:
In your application code, try to access Elasticsearch using <private/public IP>:9200
OR
Run the Docker container in host networking mode, so that localhost inside the container is the same as on the host, because in this mode the container uses the host's network.
docker run -e DATALOADER_QUEUE='<some aws SQS queue name>' \
  -e ES_HOST='localhost' \
  -e ES_PORT='9200' \
  -e AWS_ACCESS_KEY_ID='<somekey>' \
  -e AWS_SECRET_ACCESS_KEY='<somekey>' \
  -e AWS_DEFAULT_REGION='us-west-2' \
  --net=host \
  <application name>
NOTE: the --net=host option tells the Docker container to use host networking mode.
You can bind a host port to a container port using the -p option of the docker command. So in your docker command below, you can add one more option to bind host port 9200 to container port 9200. Notice I have added -p 9200:9200; the same is explained nicely in point 3 of this doc.
docker run -p 9200:9200 -e DATALOADER_QUEUE='<some aws SQS queue name>' \
  -e ES_HOST='localhost' \
  -e ES_PORT='9200' \
  -e AWS_ACCESS_KEY_ID='<somekey>' \
  -e AWS_SECRET_ACCESS_KEY='<somekey>' \
  -e AWS_DEFAULT_REGION='us-west-2' \
  <application name>
I'm new to Docker, Redis and any kind of networking (I know Python at least!). Firstly, I have figured out how to get a Redis Docker image and run it in a Docker container:
docker run --name some-redis -d redis
As I understand it, this Redis instance has port 6379 available for connections from other containers.
docker network inspect bridge
"Containers": {
"2ecceba2756abf20d5396078fd9b2ecf0d60ab04ca6b8df5e1b631b6fb5e9a85": {
"Name": "some-redis",
"EndpointID": "09f0069dae3632a2456cb4d82ad5e7c9782a2b58cb7a4ee655f57b5c410c3e87",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
If I run the following command I can interact with the redis instance and generate key:value pairs:
docker run -it --link some-redis:redis --rm redis redis-cli -h redis -p 6379
set 'a' 'abc'
>OK
get 'a'
>"abc"
quit
I have figured out how to make and run a docker container with the redis library installed that will run a python script as follows:
Here is my Dockerfile:
FROM python:3
ADD redis_test_script.py /
RUN pip install redis
CMD [ "python", "./redis_test_script.py" ]
Here is redis_test_script.py:
import redis
print("hello redis-py")
Build the docker image:
docker build -t python-redis-py .
If I run the following command the script runs in its container:
docker run -it --rm --name pyRed python-redis-py
and returns the expected:
>hello redis-py
It seems like both containers are working OK; the problem is connecting them together. I would like to ultimately use Python to perform operations on the Redis container. If I modify the script as follows and rebuild the image for the Python container, it fails:
import redis
print("hello redis-py")
r = redis.Redis(host="localhost", port=6379, db=0)
r.set('z', 'xyz')
r.get('z')
I get several errors:
...
OSError: [Errno 99] Cannot assign requested address
...
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
.....
It looks like they're not connecting. I tried again using the bridge IP in the Python script:
r = redis.Redis(host="172.17.0.0/16", port=6379, db=0)
and get this error:
redis.exceptions.ConnectionError: Error -2 connecting to 172.17.0.0/16:6379. Name or service not known.
and I tried the Redis container's subnet IP:
r = redis.Redis(host="172.17.0.2/16", port=6379, db=0)
and I get this error:
redis.exceptions.ConnectionError: Error -2 connecting to 172.17.0.2/16:6379. Name or service not known.
It feels like I'm fundamentally misunderstanding something about how to get the containers to talk to each other. I've read quite a lot of documentation and tutorials, but as I say I have no networking experience and have not previously used Docker, so any helpful explanations and/or solutions would be really great.
Many thanks
This is all about Docker networking. The fast solution is to use host network mode for both containers. The drawback is lower isolation, but you will get it working fast:
docker run -d --network=host redis ...
docker run --network=host python-redis-py ...
Then to connect from python to redis just use localhost as a hostname.
A better solution is to use a Docker user-defined bridge network:
# create network
docker network create foo
docker run -d --network=foo --name my-db redis ...
docker run --network=foo python-redis-py ...
Note that in this case you cannot use localhost; instead, use my-db as the hostname. That's why I've used the --name my-db parameter when starting the first container. In user-defined bridge networks, containers reach each other by their names.
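As a minimal sketch of the Python side under that setup (assuming the container is started on the foo network as above, so my-db resolves by name):
import redis

# "my-db" is the --name of the redis container on the shared "foo" network
r = redis.Redis(host="my-db", port=6379, db=0)
r.set("z", "xyz")
print(r.get("z"))  # b'xyz'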
Do:
Explicitly create a Docker network for your application, and run your containers connected to that network. (If you use Docker Compose, this happens for you automatically and you don’t need to do anything.)
docker network create foo
docker run -d --net foo --name some-redis redis
docker run -it --rm --net foo --name pyRed python-redis-py
Use containers’ --name as DNS hostnames: you connect to some-redis:6379 to reach the container. (In Docker Compose the name of the service block works too.)
Make the locations of external services configurable, most likely using an environment variable. In your Python code you can connect to
redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"),
            port=int(os.environ.get("REDIS_PORT", "6379")))
docker run --rm -it \
--name py-red \
--net foo \
-e REDIS_HOST=some-redis \
python-redis-py
Don’t:
docker inspect anything to find the container-private IP addresses. Between containers you can always use hostnames as described above. The container-private IP addresses are unreachable from other hosts, and may even be unreachable from the same host on some platforms.
Use localhost in Docker for anything, except the specific case of connecting from a browser or other process running directly on the host (not in a container) to a port you've published with docker run -p on the same host. (It generally means "this container".)
Hard-code host names in your code like this; it makes it hard to run the service in a different environment. (For databases in particular it’s not uncommon to run them outside of Docker or even in a hosted cloud service.)
Use --link, it’s outdated and unnecessary.