I have an application which is split into multiple Docker containers:
Container 1 contains a MongoDB database
Container 2 contains a python script which performs some operations on a daily basis and stores the results in the MongoDB of Container 1 using pymongo.
Before using Docker, i.e., when the MongoDB and the Python app were on the same machine, I could run mongoexport right after the daily scripts finished to back up my database. However, in the Docker setup, I cannot use mongoexport in Container 2 because MongoDB is not installed there, i.e., the command is unknown.
From my point of view, the only option would be a cron job in Container 1 that runs a backup script at a preset time.
However, I would prefer a solution in which Container 2 triggers the backup since the runtime of the daily scripts can vary considerably.
Thanks in advance for any suggestions!
You can download the MongoDB binaries in Container 2 from here. That way you can get rid of the "command is unknown" error.
You can export a MongoDB collection from the MongoDB instance running in Container 1 using mongoexport, or take a dump using mongodump, from Container 2 by passing the --host and --port options.
Note: mongoexport does not export indexes from the collection.
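As a rough sketch, assuming the two containers share a Docker network (or are linked) and the MongoDB container is reachable from Container 2 under the hostname mongodb on the default port 27017 (both assumptions, adjust to your setup), Container 2 could trigger the backup right after the daily script finishes:
# Dump the whole database (hypothetical database name "mydb") to a dated backup directory:
mongodump --host mongodb --port 27017 --db mydb --out /backup/$(date +%F)
# Or export a single collection to JSON (remember: indexes are not exported):
mongoexport --host mongodb --port 27017 --db mydb --collection results --out /backup/results_$(date +%F).json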
Related
I have created three containers: one for a Python Flask application, a second for a PostgreSQL db, and a third for Angular 8. I'm using Docker Compose to run this. My question: each container has a port, so three ports in total. Is there a way I can use only one port to run this whole application, e.g. with docker run instead of Docker Compose? All I want is a single port where this API can be called from anywhere.
If the only thing you want to be visible to the "outside" is the API, you can use the --link flag when calling docker run. Basically, start up the PG container, then start up the Flask container, linking to PG, then start up the Angular container, linking to Flask. However, the --link flag is a legacy feature, and may disappear sometime in the future.
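A minimal sketch of the --link approach, with hypothetical image and container names (flask-api, angular-app) and assuming the Flask API listens on port 5000:
docker run -d --name pg postgres
docker run -d --name flask --link pg:db -p 5000:5000 flask-api   # only the API port is published
docker run -d --name angular --link flask:api angular-app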
Another option is to create a network with docker network create and make sure your three containers are all using that same network. They should all be able to communicate with each other in this way, and you just need to publish the API port so that other apps can use your API.
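A sketch of the user-defined network approach, again with placeholder names and assuming the API listens on port 5000:
docker network create mynet
docker run -d --name pg --network mynet postgres
docker run -d --name flask --network mynet -p 5000:5000 flask-api   # only this port is published to the host
docker run -d --name angular --network mynet angular-app
# Containers on "mynet" can reach each other by container name, e.g. the Flask app can connect to host "pg".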
I'm not sure what your requirements are, but docker-compose is generally the cleaner way to do it, as it helps you achieve consistency in your automations.
I wrote a Python script that analyzes and filters stock data. I write the stock data into a mongodb. The result is a CSV file with the filtered values in it.
Is it now possible to create a docker container that contains Python & mongodb and copies the CSV from the container to the host?
I tried creating a Dockerfile with Python only. But when it comes to adding the MongoDB service and exporting the file to the host, I am a little out of my depth.
My goal is that at the end I'll have one Docker container that runs the python script & exports the file to the host.
Do you know any best practice? Or a good tutorial that covers my needs?
I would not recommend installing Python and MongoDB in the same Docker container. Usually the DB and the app should run in separate containers, managed with docker-compose. Still, if you want them in the same container, you can do so by taking an Ubuntu image (or anything else you are comfortable with), installing MongoDB and Python on it, and then running your scripts. I found the following git repo that contains one such Dockerfile.
Regarding copying CSVs from the container to the host machine, you can do so by using volumes (if you want to use docker-compose, which I would totally recommend), or you can use the docker cp command to get the data manually from the container to the host.
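For illustration, assuming a container/image named stock-analyzer that writes its output to /app/output/filtered.csv (both names are placeholders):
# Option 1: bind-mount a host directory so the CSV ends up on the host directly
docker run --name stock-analyzer -v "$(pwd)"/output:/app/output stock-analyzer
# Option 2: copy the file out manually after the container has run
docker cp stock-analyzer:/app/output/filtered.csv ./filtered.csv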
I am currently doing some experimentation with AWS Elastic Container Service in the context of creating a data processing pipeline and I had a few questions regarding the specifics of how best to set up the docker container/ecs task definitions.
The general goal of the project is to create a system that allows users to add data files to an S3 bucket to trigger an ECS task using S3 events and Lambda, then return the outputs to another S3 bucket.
So far I've been able to figure out the S3 triggers and the basics of Lambda, but I am a bit more confused on how to properly set up the docker container and task definition so that it automatically processes the data using a set of python scripts. I believe that creating a docker container that runs a shell script that copies the necessary files and calls the python code makes sense, but I was confused on how to run the docker container with a bind mounted volume from an ECS task, and also whether or not this process makes sense. Currently, when I am testing the system on a single EC2, I am running my docker container using:
docker run -v "$(pwd)"/data:/home/ec2-user/docker_test/data docker_test
I'm still relatively new to the AWS tools, so please let me know if I can clarify any of my points/questions and thank you in advance!
I think the Docker container is caching MySQL query results in the container itself. I did one quick round of testing:
Truncated the table
Triggered the job which runs in the Docker container (it hits MySQL to fetch the data)
The MySQL queries still return data from the table.
We are not using application- or DB-level caching, so I believe it is the Docker container that is caching the data.
My application uses Python 3, Flask, and the mysql-connector-python==8.0.5 connector. The MySQL version is 5.6.
If the Docker container is caching the data, how can we avoid/remove the cache? Any help here will be appreciated.
Note: after I restart the container, the same query for which we were getting data before the restart no longer returns any data.
I'm trying to "dockerize" my Java web application and finally run the Docker image on EC2.
My application is a WAR file and connects to a database. There is also a Python script which the application calls via REST. The Python side uses the Tornado web server.
Question 1:
Should I have the following Docker containers?
Container for Application Server (Tomcat 7)
Container for HTTP Server (nginx or httpd)
Container for postgres db
Container for python script (this will have tornado web server and my python script).
Question 2:
What is the best way to build a Dockerfile? I will have to do trial and error to figure out which commands need to go into the Dockerfile for each container. Should I have an Ubuntu VM on which I do the trial and error, and once I nail down which commands I need, put them into the Dockerfile for that container?
That list looks about right.
The advantage of splitting up your stack to separate containers is that you can (in many cases) use off-the-shelf official images, and only have to provide the right configuration to make them work together. In addition, you'd be able to upgrade the components (containers) separately.
Note that combining multiple services in a single container is not forbidden, but in Docker it's generally considered best practice to separate concerns and have a single container be responsible for only a single task/service.
To get all containers started with the right configuration, docker-compose is a good choice; it enables you to create a single file (docker-compose.yml, see https://docs.docker.com/compose/compose-file/) that describes your project: which images to build for each container, how the containers relate to each other, and which configuration to pass to them.
With docker-compose you can then start all containers by simply running
docker-compose up -d
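A minimal sketch of such a file for this stack, written here via a shell here-doc so it can be pasted in one go; the image names, ports, and the ./tornado build directory are placeholders, not details from the question:
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  db:
    image: postgres
  app:
    image: tomcat:7
    depends_on:
      - db
  api:
    build: ./tornado
  web:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - app
EOF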
You can use Docker Machine to create a Docker development environment on Mac or Windows. This is really good for trial and error. There is no need for an Ubuntu VM.
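For example (the VirtualBox driver and the machine name "dev" are arbitrary choices for this sketch):
docker-machine create --driver virtualbox dev
eval "$(docker-machine env dev)"   # point the local docker CLI at the new machine
docker run hello-world             # quick sanity check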
A Docker container does one thing only, so your application would consist of multiple containers, one for each component. You've also clearly identified the different containers for your application. Here is how the workflow might look (a rough command sketch follows the list):
Create a Dockerfile for each container: Tomcat, nginx, Postgres, Tornado
Deploy the application (the WAR file) to Tomcat in the Dockerfile or by mapping volumes
Build an image for each container
Optionally push these images to Docker Hub
If you plan to deploy these containers on multiple hosts, create an overlay network
Use Docker Compose to start these containers together; it would use the network created previously. Alternatively, you can use the --x-networking option for Docker Compose to create the network.
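A rough sketch of those steps on the command line, with placeholder image names (myuser/tomcat-app, myuser/tornado-api) and assuming a Docker Hub account plus a Swarm/key-value-store setup for the overlay network:
# Build an image per component (each directory contains its own Dockerfile)
docker build -t myuser/tomcat-app ./tomcat
docker build -t myuser/tornado-api ./tornado
# Optionally push them to Docker Hub
docker push myuser/tomcat-app
docker push myuser/tornado-api
# For a multi-host deployment, create an overlay network (needs Swarm or an external key-value store)
docker network create -d overlay app_net
# Start all containers together
docker-compose up -d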