I am trying to migrate a Django project on macOS Monterey 12.3, and I am running into trouble.
It seems like psycopg2 doesn't want to connect to my Docker container at all. Every time it prints this error:
django.db.utils.OperationalError: connection to server at "localhost" (127.0.0.1), port 5433 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
For my docker process, I create it by running
docker run --name local-postgres -p 5433:5433 -e POSTGRES_PASSWORD=test123 -d postgres
I am running Python 3.9.12 in a virtual environment using pipenv, and my arch is arm64, for anyone wondering.
I've tried changing the ports, resetting Docker, completely removing Docker and downloading it again, and reinstalling Django and the venv, and so far nothing has worked. I've also tried setting CONN_MAX_AGE=0 in settings, which did not help.
Please help
Postgres listens on port 5432 inside the container, so you need to map that container port to the host port you want to connect to. It looks like you want to use port 5433 on the host, so you'd do
docker run --name local-postgres -p 5433:5432 -e POSTGRES_PASSWORD=test123 -d postgres
Then you can connect on the host using localhost port 5433.
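With that mapping in place, Django just needs to point at the host port. A minimal sketch of the DATABASES setting, assuming the default postgres user and database and the POSTGRES_PASSWORD value from the docker run command above (adjust to your setup):

```python
# Hypothetical Django settings sketch: the container's port 5432 is published
# on host port 5433, so Django connects to localhost:5433.
# "postgres"/"test123" assume the official image's default user/database and
# the POSTGRES_PASSWORD passed to `docker run`.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "test123",
        "HOST": "localhost",
        "PORT": "5433",
    }
}
```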
Related
I have created a Docker container running Azure SQL Edge on my Mac M1. I am able to run the container, connect to it with Azure Data Studio, and create/manipulate entries there. However, when I try to use mysql.connector.connect to access the server via a Python script, the connection just does nothing - no connection confirmation, no refusal, just hangs (or terminates at my timeout limit). Below is my code to create the Docker container, and the Python code I am running to try and access it:
sudo docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=bigStrongPwd1!' -p 1433:1433 --name sqledge -d mcr.microsoft.com/azure-sql-edge
import mysql.connector

connection = mysql.connector.connect(
    host='localhost',
    database='dbo.Customers',
    port=1433,
    user='sa',
    password='bigStrongPwd1!',
    use_pure=True,
    connection_timeout=5
)
When creating the Docker container, I have tried using different ports, using the -P flag, specifying the --network, etc. I have also tried checking the Docker container logs, and I cannot see any instance of the connection being attempted. I have also tried viewing the container in my browser, and got "Safari can't open the page "localhost:1433" because the server unexpectedly dropped the connection. This sometimes occurs when the server is busy. Wait for a few minutes, and then try again."
I'm new to Docker and Odoo.
I'm trying to run Odoo 14 and PostgreSQL, with some dependencies, in docker-compose.
When I check the logs of the PostgreSQL container, the database is ready to accept connections, but in the logs of the Odoo container I get the issue above. Any help please?
Thank you
I was facing the same issue on my system. I think this occurs because the container hadn't stopped correctly.
What worked for me was to restart Docker.
Try this command:
sudo systemctl restart docker.socket docker.service
If that doesn't work, kill whatever process is still listening on port 5432 (note that kill takes a process ID, not a port number):
sudo kill -9 $(sudo lsof -t -i :5432)
I have a back-end API server created with Python & Flask. I use MongoDB as my database. I rebuild and rerun docker-compose every time I update my source code. Because of this, I always take a backup of my database before stopping and restarting the containers.
From the beginning I am using this command to get a backup in my default folder:
sudo docker-compose exec -T db mongodump --archive --gzip --db SuperAdminDB > backup.gz
This command worked well previously. Then I restore the database after restarting docker-compose to bring my back-end up with the updated code. I use this command to restore the database:
sudo docker-compose exec -T db mongorestore --archive --gzip < backup.gz
But as of today, when I try to take a backup while the container is still running (as usual), the server freezes like the image below.
I am using an Amazon EC2 server with Ubuntu 20.04.
First, stop redirecting the output of the command. If you don't know whether it is working, you should be looking at all available information, which includes the output.
Then verify you can connect to your deployment using the mongo shell and run commands.
If that succeeds, look at the server log and verify there is a record of the connection from mongodump.
If that works, try dumping other collections.
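Before any of those steps, a quick sanity check is whether the MongoDB port is reachable at all. A minimal sketch using only the standard library; the host and port here are assumptions, so adjust them to match the port mapping in your docker-compose file:

```python
# Check whether a TCP connection to the MongoDB port can be opened at all,
# before debugging mongodump itself. localhost:27017 is the assumed mapping.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("localhost", 27017))
```

If this prints False, the freeze is a connectivity problem rather than a mongodump problem.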
After three days of digging for the right reason, I found that the main cause was Apache.
I had recently installed Apache to host my frontend as well. While Apache was running, the server wouldn't let me dump the MongoDB backup. Somehow Apache was conflicting with Docker.
My solution:
1. Stop apache service
sudo service apache2 stop
2. Then take MongoDB backup
sudo docker-compose exec -T db mongodump --archive --gzip --db SuperAdminDB > backup.gz
I am trying to set up this Bullet Train API server on our machines. I am successfully able to run their Python server using the docker-compose up method.
As per the docs, it needs a database as well. I preferred using the Docker image for the Postgres DB: docker run --name local_postgres -d -P postgres, which returns this:
It doesn't print a success message saying whether the Postgres container is running. It just returns a long string, which I assume is an identifier of the Postgres container.
As I need to connect this Bullet Train API server to this Dockerized database -
The question is how to find the connection string for this Postgres Docker image?
The trick is to use docker-compose. Put your application image in there as one service and your postgres image as a second service. Then also define a network in the stack and attach each of your services to it. After that, the application can reach the database via the service name local_postgres as the hostname.
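A minimal docker-compose.yml sketch of that layout; the service names, image tags, and credentials here are illustrative, not taken from the Bullet Train docs:

```yaml
# Illustrative sketch: the app reaches the database at hostname "local_postgres".
version: "3.8"
services:
  app:
    image: my-app:latest           # placeholder for your application image
    environment:
      DATABASE_HOST: local_postgres
      DATABASE_PORT: "5432"
    networks:
      - backend
  local_postgres:
    image: postgres
    environment:
      POSTGRES_PASSWORD: test123   # example password
    networks:
      - backend
networks:
  backend:
```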
Update as per your comment
Make sure that the Dockerfile that defines the postgres container contains an EXPOSE instruction.
EXPOSE 5432
If missing, add it and rebuild the container.
Start the container with the option below, which publishes the database port on localhost.
docker run --name local_postgres -p 5432:5432 -d postgres
Check if the port is really exposed by typing
docker ps | grep 'local_postgres'
You should see something like this in the output.
PORTS 0.0.0.0:5432->5432/tcp
If you see this output, port 5432 is successfully published on your host, so an app running on the host can access the database via localhost:5432.
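To answer the original question directly: once the port is published, the connection string is built from the published host port and the credentials the container was started with. A small sketch; the password here is a placeholder, while the user and database names follow the official postgres image's defaults:

```python
# Build a libpq-style connection URL for the dockerized Postgres.
# "test123" stands in for whatever POSTGRES_PASSWORD was set to at run time;
# user and database both default to "postgres" in the official image.
user, password = "postgres", "test123"
host, port, dbname = "localhost", 5432, "postgres"

dsn = f"postgresql://{user}:{password}@{host}:{port}/{dbname}"
print(dsn)  # postgresql://postgres:test123@localhost:5432/postgres
```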
I'm using pyorient 1.5.4 and the docker for orientdb 2.2.5
If I use the browser to connect to the database, the server is clearly running.
If I connect with pyorient, I get an error.
Here is the code I use to connect to the database:
import pyorient

database = pyorient.OrientDB('127.0.0.1', 2424)
database.db_open(
    'myDB',
    'root',
    'mypassword',
    db_type='graph'
)
I get the following error:
pyorient.exceptions.PyOrientConnectionException: Server seems to have went down
I created the docker container with the following command:
docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -v /home/myuser/Code/database:/orientdb/databases -e ORIENTDB_ROOT_PASSWORD=mypassword orientdb:latest /orientdb/bin/server.sh -Ddistributed=true
The server is running, since connecting via the browser works fine.
The necessary ports seem to be open, so why does pyorient think the database is closed?
I found my problem. I was starting the docker container with:
-Ddistributed=true
Removing that parameter enabled me to connect just fine.
However, I have found that pyorient gets into an infinite loop when trying to parse the packets returned from OrientDB in distributed mode. This is due to a bug in pyorient, which is explained in more detail here:
https://github.com/mogui/pyorient/issues/215#issuecomment-245007336