I am trying to set up this Bullet Train API server on our machines. I am successfully able to run their Python server using the docker-compose up method.
As per the docs, it needs a database as well. I preferred using the Docker image for the Postgres DB: docker run --name local_postgres -d -P postgres, which returns this:
It doesn't print a success message saying whether the Postgres container is running or not. It just returns a long string, which I assume is an identifier of the Postgres container.
Since I need to connect this Bullet Train API server to this Dockerized database, the question is: how do I find the connection string for this Postgres container?
The trick is to use docker-compose. Put your application image in there as one service and your Postgres image as a second service. Then define a shared network in the stack and attach both services to it. After that, the application can reach the database using the Postgres service's name "local_postgres" as the hostname.
Update as per your comment
Make sure that the Dockerfile that defines the Postgres image contains an EXPOSE instruction.
EXPOSE 5432
If it is missing, add it and rebuild the image.
Start the container with the option below, which publishes the database port on the host.
docker run --name local_postgres -p 5432:5432 -d postgres
Check whether the port is really published by running
docker ps | grep 'local_postgres'
You should see something like this in the output.
PORTS 0.0.0.0:5432->5432/tcp
If you see this output, port 5432 is published on your host, so if your app runs directly on the host, it can reach the database at localhost:5432.
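To show what that connection string can look like from the application side, here is a minimal sketch using psycopg2 (an assumed driver; any Postgres client works the same way). The user, password and database name below are the image defaults or whatever you set via POSTGRES_USER / POSTGRES_PASSWORD / POSTGRES_DB, not values from the question:
import psycopg2  # assumed Postgres driver

# localhost:5432 is the port published above; adjust user/password/dbname to
# match the container's configuration (e.g. -e POSTGRES_PASSWORD=secret).
conn = psycopg2.connect("postgresql://postgres:secret@localhost:5432/postgres")
print(conn.get_dsn_parameters())
conn.close()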
Related
I have created a Docker container running Azure SQL Edge on my Mac M1. I am able to run the container, connect to it with Azure Data Studio, and create/manipulate entries there. However, when I try to use mysql.connector.connect to access the server via a Python script, the connection just does nothing - no connection confirmation, no refusal, just hangs (or terminates at my timeout limit). Below is my code to create the Docker container, and the Python code I am running to try and access it:
sudo docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=bigStrongPwd1!' -p 1433:1433 --name sqledge -d mcr.microsoft.com/azure-sql-edge
import mysql.connector

connection = mysql.connector.connect(host='localhost',
                                     database='dbo.Customers',
                                     port=1433,
                                     user='sa',
                                     password='bigStrongPwd1!',
                                     use_pure=True,
                                     connection_timeout=5)
When creating the Docker container, I have tried using different ports, using the -P flag, specifying the --network, etc. I have also tried checking the Docker container logs, and I cannot see any instance of the connection trying to be made. Also I have tried viewing the container in my browser and get "Safari can't open the page "localhost:1433" because the server unexpectedly dropped the connection. This sometimes occurs when the server is busy. Wait for a few minutes, and then try again."
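One point worth noting here: Azure SQL Edge speaks the SQL Server (TDS) protocol, while mysql.connector speaks the MySQL protocol, which could explain a silent hang. A minimal sketch of the same connection using pymssql instead (an assumed library, not something from the post; pyodbc would work similarly):
import pymssql  # SQL Server / TDS client, assumed to be installed

conn = pymssql.connect(
    server="localhost",
    port=1433,
    user="sa",
    password="bigStrongPwd1!",
    database="master",  # a database name; dbo.Customers in the post is a table
    login_timeout=5,
)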
I have a Python program that stores its output file locally. My code now runs inside a Docker container, and I want to move the generated output file (e.g. output.txt) to my local machine (outside the Docker container).
I know there is a command that transfers files from a container to the local machine:
docker cp <containerId>:/file/path/within/container /host/path/target
# I tried this inside the docker container but it didn't work
os.system("sudo docker cp <containerId>:/file/path/within/container /host/path/target")
But since my program is executing inside Docker, this doesn't work, and I want to push the file to the local machine as the code runs.
If you have any ideas please share them.
There is no way for code inside a container to directly manipulate anything outside the container. The whole purpose of a container is to isolate it from the host system; breaking this barrier would have grave security consequences, and basically render containers pointless.
What you can do is mount a directory from the host inside the container with -v (or run docker cp from outside the container once you are confident the program has created the file; but then, how would you communicate that fact to the outside?)
With
docker run --volume=`pwd`:/app myimage
you can
cp myfile /app
inside the container; this will create ./myfile on the host, from within Docker.
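If the program itself should drop the file there, the same thing can be done from Python. A minimal sketch, assuming the container was started with the --volume option shown above:
import shutil

# /app inside the container is the host's current directory (the bind mount
# from the docker run command above), so the copy lands on the host directly.
shutil.copy("output.txt", "/app/output.txt")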
Treat the container as a Linux system; your question then becomes: how do I transfer files between two hosts?
There are also some ways to do this without rerunning the container:
scp (recommended), combined with other tools if needed, such as expect to handle SSH's fingerprint prompt or to enter the password. Assume 172.17.0.1 is the host's Docker bridge interface, the SSH port is the default 22, and the SSH daemon on the host is listening on that interface (e.g. on 0.0.0.0:22).
os.system("sudo scp /file/path/within/container user@172.17.0.1:/host/path/target")
Other client-server approaches, such as rsync (client and server), or Python's SimpleHTTPServer (server) with curl / Python requests / wget (client), and so on. These are not recommended, though, because both a server and a client need to be deployed.
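For completeness, the scp call above can also be written with subprocess so that a failed copy raises an error instead of passing silently. A minimal sketch, assuming key-based SSH authentication is already set up for that user on the host:
import subprocess

# 172.17.0.1 is the host's Docker bridge address, as assumed above.
subprocess.run(
    ["scp", "/file/path/within/container", "user@172.17.0.1:/host/path/target"],
    check=True,  # raises CalledProcessError if the copy fails
)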
I have some questions about Docker. I have very little knowledge of it, so kindly bear with me.
I have a python script that does something and writes into a PostgreSQL DB. Both are run on Docker. Python uses python:3.8.8-buster and PostgreSQL postgres:13. Using docker-compose up, I am able to instantiate both these services and I see the items inserted in the PostgreSQL table. When I docker-compose down, as usual, the services shut down as expected. Here are the questions I have:
When I run the container of the PostgreSQL service by itself (not with docker-compose up, but with docker run and then docker exec) and log in to the DB using psql, it doesn't use the DB name mentioned in the docker-compose.yml file. It uses localhost, but with the username mentioned in the docker-compose.yml file. It also doesn't ask me for the password, although the password is set in the Dockerfile itself (not in docker-compose.yml; for each of the services, I have a Dockerfile that I build in the docker-compose.yml). Is that expected? If so, why?
After I've logged in, when I run SELECT * FROM DB_NAME; it displays 0 records. So basically it doesn't show the records written to the DB in the previous run. Why is that? How can I see the contents of the DB when it's not up? When the container is running (when I docker-compose up), I know I can see the records from pgAdmin (which, by the way, is also part of my docker-compose.yml file; I have it only to make it easier to check whether the records have been written to the DB).
After my script runs and writes to the DB, it stops. Is there a way to restart it without docker-compose down and then docker-compose up? (In VS Code) when I simply run the script while docker-compose is still up, it says it cannot find the DB (the one mentioned in the docker-compose.yml file), so I have to go back and change the DB host in the script to point to localhost, which circles back to question #1.
I am new to docker, and I am trying my best to wrap my head around all this.
This behavior depends on your specific setup. I will have to see the Dockerfile(s) and docker-compose.yaml in order to give a suitable answer.
This is probably caused by mounting an anonymous volume to your postgres service instead of a named volume. Anonymous volumes are not reused automatically across docker-compose up runs; named volumes are.
Here's a docker-compose.yaml example of how to mount a named volume called database:
version: '3.8'

# Defining the named volume
volumes:
  database:

services:
  database:
    image: 'postgres:latest'
    restart: 'always'
    environment:
      POSTGRES_USER: 'admin'
      POSTGRES_PASSWORD: 'admin'
      POSTGRES_DB: 'app'
    volumes:
      # Mounting the named volume
      - 'database:/var/lib/postgresql/data/'
    ports:
      - '5432:5432'
I assume this depends more on the contents of your script than on the way you configured your docker postgres service. Postgres does not shut down after simply writing data to it. But again, I will have to see the Dockerfile(s) and docker-compose.yaml (and the script) in order to provide a more suitable answer.
If you docker run an image, it always creates a new container, and it never looks at the docker-compose.yml. If you, for example, run
docker run --name postgres postgres
docker exec -it postgres ...
that starts a new container based on the postgres:latest image, with no particular storage or networking setup. That's why you can't use the Compose host name of the container or see any of the data that your Compose setup would normally have.
You can use docker-compose up to start a specific service and its dependencies, though:
docker-compose up -d postgres
Once you do this, you can use ordinary tools like psql to connect to the database through its published ports:
psql -h localhost -p 5433 my_db
You should not normally need debugging tools like docker exec; if you do, there is a Compose variant that knows about the Compose service names:
# For debugging use only -- not the primary way to interact with the DB
docker-compose exec postgres psql my_db
After my script runs, and it writes into the db, it stops. Is there a way to restart it?
Several options:
Make your script not stop, in whatever way. Frequently a Docker container will have something like an HTTP service that can accept requests and act on them.
Re-running docker-compose up -d (without explicitly down first) will restart anything that's stopped or anything whose Compose configuration has changed.
You can run a one-off script directly on the host, with configuration pointing at your database's published ports.
It's relevant here that "in the Compose environment" and "directly on a developer system" are different environments, and you will need a mechanism like environment variables to communicate these. In the Compose environment you will need the database container's name and the default PostgreSQL port 5432; on a developer system you will typically need localhost as the host name and whichever port has been published. You cannot hard-code this configuration in your application.
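As a sketch of that pattern (the environment variable names DB_HOST and DB_PORT are placeholders chosen here, not something defined by Compose):
import os
import psycopg2  # assumed driver; any client library reads the same values

# Inside the Compose network, set these to the database service's name and 5432;
# on a developer machine they default to localhost and the published port 5433.
db_host = os.environ.get("DB_HOST", "localhost")
db_port = int(os.environ.get("DB_PORT", "5433"))

conn = psycopg2.connect(host=db_host, port=db_port, dbname="my_db")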
I have a back-end API server created with Python & Flask. I use MongoDB as my database. I rebuild and rerun docker-compose every time I update my source code. Because of this, I always take a backup of my database before stopping and restarting the Docker containers.
From the beginning, I have been using this command to take a backup into my default folder:
sudo docker-compose exec -T db mongodump --archive --gzip --db SuperAdminDB > backup.gz
This command worked well previously. I then restore the database after restarting docker-compose so that my back end runs with the updated code. I use this command to restore the database:
sudo docker-compose exec -T db mongorestore --archive --gzip < backup.gz
But as of today, when I try to take a backup while the containers are still running (as usual), the server freezes, as shown in the image below.
I am using an Amazon EC2 server with Ubuntu 20.04.
First, stop redirecting the output of the command. If you don't know whether it is working, you should be looking at all available information, which includes the output.
Then verify you can connect to your deployment using mongo shell and run commands.
If that succeeds, look at the server log and verify there is a record of the connection from mongodump.
If that works, try dumping other collections.
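If a shell client is not at hand, the same connectivity check can be done from Python. A minimal sketch with pymongo, assuming the db service publishes MongoDB's default port 27017 to the host (adjust the URI if it does not):
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/", serverSelectionTimeoutMS=5000)
client.admin.command("ping")  # raises ServerSelectionTimeoutError if unreachable
print("MongoDB is reachable")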
After digging for three days for the right reason, I found that the main cause is Apache.
I had recently installed Apache to host my frontend as well. While Apache is running, the server won't let me dump the MongoDB backup. Somehow Apache was conflicting with Docker.
My solution:
1. Stop the Apache service
sudo service apache2 stop
2. Then take the MongoDB backup
sudo docker-compose exec -T db mongodump --archive --gzip --db SuperAdminDB > backup.gz
I have a Docker container that is set up to perform some given actions with Selenium. My goal is to have the Docker container created when a request is received at a certain endpoint created using Flask. The Flask app has been set up with uWSGI and Nginx using this tutorial.
When the endpoint receives a request, it is supposed to run the bash script ./run.sh:
#!/bin/bash
ID=$1
docker run --rm \
    -v "$(pwd)/code:/code" \
    -v /etc/hosts:/etc/hosts \
    selenium \
    python3 \
    /code/main.py "${ID}"
I can successfully make a call to the endpoint using the IP given by DigitalOcean, but when it gets to the point where it needs to run Docker, it says:
docker: command not found
Note: I can go into the virtualenv manually, run python app.py, send a request to the Flask endpoint, and the Docker container is created and everything works great.
You probably need to add a PATH variable to your bash script which includes the location of your docker executable. The user running NGINX likely doesn't have that path set.
PATH=$PATH:/usr/local/bin:/usr/bin
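As a variation on the same idea, the PATH can also be supplied from the Flask side when the script is invoked. A minimal sketch; the endpoint and helper names are made up for illustration:
import os
import subprocess
from flask import Flask

app = Flask(__name__)

@app.route("/run/<job_id>")  # hypothetical endpoint
def run_job(job_id):
    # Append the directories that contain the docker binary, mirroring the
    # PATH line suggested above. job_id is user input -- validate it before
    # passing it to the script (see the caution below).
    env = dict(os.environ, PATH=os.environ.get("PATH", "") + ":/usr/local/bin:/usr/bin")
    result = subprocess.run(["./run.sh", job_id], env=env,
                            capture_output=True, text=True)
    return (result.stdout, 200) if result.returncode == 0 else (result.stderr, 500)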
Also you'll need to ensure that the user running NGINX has permission to use docker, so add them to the docker group.
If this is a public service, then I would think carefully about whether you really want internet users to be launching containers on your server. Does $1 come from user input?