I have a Docker container on an EC2 instance. If I run the code directly on the EC2 instance everything is fine, but when it runs inside Docker it throws the following error:
(1045, "Access denied for user 'xxxxx'@'xxxx' (using password: YES)")
I know it's a connection error, but the user, password, and IP are the same as on the EC2 instance.
I ran the Docker image with the network in host mode and the problem persists.
I'm using a Python image and pymysql to connect to my DB.
This is exactly my problem: Accessing RDS from within a Docker container not getting through security group?
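For reference, a minimal pymysql connection along the lines the question describes (host, user, password, and database here are placeholders, not the actual values):
import pymysql

# Placeholder credentials - substitute the same values that work
# when the script runs directly on the EC2 instance.
connection = pymysql.connect(
    host="my-db-host.example.com",
    user="myuser",
    password="mypassword",
    database="mydb",
)
with connection.cursor() as cursor:
    cursor.execute("SELECT 1")
    print(cursor.fetchone())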
My solution was to use ECS instead of EC2 to run the Docker image.
I have an application running inside a Docker container created from the official Docker Hub Python image (python:3.6.9-slim-stretch, https://hub.docker.com/_/python).
When I try to connect to an external database:
# mysql --host=myhost --user=myuser --password=mypassword
ERROR 1045 (28000): Access denied for user 'myuser'@'EXTERNAL_HOST_NAME' (using password: YES)
If I try to use another Docker image (e.g. ubuntu:20.04), the same command executed from the same host node works fine:
# mysql --host=myhost --user=myuser --password=mypassword
mysql>
I also tried to run the same command from the host, and it works fine.
Any idea what could be the reason for such strange behavior?
I am able to access GCP Memorystore Redis from Cloud Run through a VPC connector. But how can I do that from my localhost?
You can connect from your local machine with port forwarding, which can be helpful for reaching your Redis instance during development.
Create a Compute Engine instance by running the following command:
gcloud compute instances create NAME --machine-type=f1-micro --zone=ZONE
Open a new terminal on your local machine.
To create an SSH tunnel that port forwards traffic through the Compute Engine VM, run the following command:
gcloud compute ssh COMPUTE_VM_NAME --zone=ZONE -- -N -L 6379:REDIS_INSTANCE_IP_ADDRESS:6379
To test the connection, open a new terminal window and run the following command:
redis-cli ping
If the tunnel is working, Redis replies with PONG. The SSH tunnel remains open as long as you keep the terminal window with the SSH tunnel connection up and running.
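Once the tunnel is up, anything on your machine can treat the instance as local. For example, a minimal check from Python with the redis-py client (any Redis client would work the same way):
import redis

# The SSH tunnel forwards localhost:6379 to the Memorystore instance.
r = redis.Redis(host="localhost", port=6379)
print(r.ping())  # True if the tunnel and the instance are reachable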
I suggest you use the link for setting up a development environment.
If you are using Redis for caching only, or simple pub/sub, I would just spin up a local Redis container for development.
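For example, a one-liner sketch using the official redis image (container name is arbitrary):
docker run --name dev-redis -p 6379:6379 -d redis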
I am trying to set up this Bullet Train API server on our machines. I am successfully able to run their Python server using the docker-compose up method.
As per the docs, it needs a database as well. I preferred using the Docker image for the Postgres DB: docker run --name local_postgres -d -P postgres, which returns this:
It doesn't print a success message saying whether Postgres is running successfully or not. It just returns some kind of long string, which I assume is an identifier of the Postgres container.
As I need to connect this Bullet Train API server to this Dockerized database, the question is: how do I find the connection string for this Postgres Docker image?
The trick is to use docker-compose. Put your application image in there as one service and your postgres image as a second service. Then also include a shared network in the stack and specify it in each of your services. After that, the application can reach the database using the Docker service's name "local_postgres" as the hostname. (The long string you saw is simply the new container's ID; you can confirm the container is running with docker ps.)
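A minimal sketch of such a compose file (the API image name and credentials are placeholders, and the DATABASE_URL variable name is an assumption; check the Bullet Train docs for the exact setting):
version: "3"
services:
  api:
    image: your-api-image            # placeholder for the Bullet Train API image
    environment:
      # Variable name is an assumption; check the Bullet Train docs for the real one
      DATABASE_URL: postgresql://postgres:mysecretpassword@local_postgres:5432/postgres
    depends_on:
      - local_postgres
    networks:
      - backend
  local_postgres:
    image: postgres
    environment:
      POSTGRES_PASSWORD: mysecretpassword   # required by the official postgres image
    networks:
      - backend
networks:
  backend: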
Update as per your comment
Make sure that the Dockerfile that defines the Postgres image contains an EXPOSE instruction:
EXPOSE 5432
If it's missing, add it and rebuild the image. (The official postgres image already exposes 5432.)
Start the container with the option below, which publishes the database port on localhost (note that current versions of the official postgres image also require a POSTGRES_PASSWORD environment variable):
docker run --name local_postgres -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 -d postgres
Check if the port is really exposed by typing
docker ps | grep 'local_postgres'
You should see something like this in the output:
PORTS 0.0.0.0:5432->5432/tcp
If you see this output, port 5432 is successfully published on your host, so an app running on the host can access the database via localhost:5432.
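With the official postgres image, the default user and database are both postgres, and the password is whatever you passed via POSTGRES_PASSWORD, so the connection string would look like this (values here are the assumed defaults):
postgresql://postgres:mysecretpassword@localhost:5432/postgres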
I have a simple Python Flask application in a Docker image, and I created a container with port mapping:
docker run --name MypyFlskApp -p 5010:5000 flask-crud-rest-app
When I try to access the MypyFlskApp container from the host machine (http://127.0.0.1:5010), I get the error "Unable to connect to the remote server", but when I find the container IP (http://172.19.247.234:5000) and try the same, I get a response.
Not sure why the port mapping is not working.
From the host machine, using the VM's IP address instead of 127.0.0.1 (localhost) made my Python Flask app respond. Not sure why it is not accessible via 127.0.0.1 (localhost); this is typical when the Docker daemon runs inside a VM (e.g. Docker Toolbox or docker-machine), since published ports appear on the VM's address rather than on the host's loopback.
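One other common cause of this symptom is worth ruling out: for the port mapping to work at all, Flask has to listen on all interfaces inside the container, not only on 127.0.0.1. A minimal sketch:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    # 0.0.0.0 makes Flask reachable from outside the container;
    # the default 127.0.0.1 is only reachable from inside it.
    app.run(host="0.0.0.0", port=5000)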
I built a Django app in a Docker container and run it on a server with IP 192.168.1.13. In Django's settings.py I configured it to connect to a MySQL server at 192.168.1.6, an external, independent server. But when I run the container, it always says access denied for user xxx@192.168.1.13. Why does Django appear to connect with the Docker host IP and not the defined server IP?
Can anybody help me solve this problem? Many thanks.
There are two servers.
Server A is 192.168.1.13. There is a Django Docker container running on it.
Server B is 192.168.1.6. It is a MySQL server.
I want the Django container to connect to server B, but it reports that it can't connect, naming its own host server's IP.
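For reference, a minimal settings.py DATABASES block along the lines the question describes (database name, user, and password are placeholders; only the HOST matches the question):
# settings.py - placeholders except for the MySQL server address
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mydb",
        "USER": "xxx",
        "PASSWORD": "mypassword",
        "HOST": "192.168.1.6",
        "PORT": "3306",
    }
}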
It looks like you granted your user access only from localhost. Try the following on your MySQL server:
GRANT ALL PRIVILEGES ON *.* TO 'xxx'@'192.168.1.13';
Please note that the IP address after the @ above refers to the address the MySQL client is connecting from (your Django container), not the address of the MySQL server.
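If the account does not exist yet for that host, create it first (required on MySQL 8+, where GRANT no longer creates users implicitly; the password below is a placeholder):
CREATE USER 'xxx'@'192.168.1.13' IDENTIFIED BY 'mypassword';
GRANT ALL PRIVILEGES ON *.* TO 'xxx'@'192.168.1.13';
FLUSH PRIVILEGES;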