Windows Azure VMs: Connection Timeout to Remote MySQL Database - python

I'm having a problem with Windows Azure that I don't see with other providers. I've been making connections from remote VMs to a MySQL database running on a DigitalOcean VM. I've successfully connected from AWS, Rackspace, Google, and every other provider, but for some reason Microsoft Azure VMs don't work.
VM OS: Ubuntu 14.04
I'm trying to connect using PyMySQL and SQLAlchemy.
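For reference, a minimal sketch of the connection code, with placeholder host, credentials, and database name (none of these are the real values):

import pymysql
from sqlalchemy import create_engine

# Direct PyMySQL connection to the DigitalOcean host (placeholders throughout)
conn = pymysql.connect(
    host="droplet.example.com",
    port=3306,
    user="remote_user",
    password="secret",
    database="mydb",
    connect_timeout=10,
)

# Equivalent SQLAlchemy engine using the PyMySQL driver
engine = create_engine("mysql+pymysql://remote_user:secret@droplet.example.com:3306/mydb")

Both time out when run from the Azure VM.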
What I've Tried:
The port is open and listening
The user definitely has permission to upload data into the DB (other remote connections with this user all work fine).
I have even tried "ufw disable" to turn off the firewall on the Windows Azure VM
I've set 3306 as an endpoint on the Azure VM
Despite all my attempts, the connection cannot be established. Is there something I'm missing in the setup?

Azure VMs disable ICMP, and we can use SSH tunnels to allow outside access to internal network resources. I don't have the resources to create a DigitalOcean VM, but I have created 2 Azure VMs in 2 Cloud Services to try to reproduce the issue.
I installed mysql-server on VM.1 and mysql-client on VM.2.
Then I tried to connect to the MySQL server directly from VM.2 and got the message "can't connect to MySQL…".
To work around this issue, I followed this post and created an SSH tunnel on VM.1, which hosts the MySQL server:
Open port 3306 so that a remote client can connect to your MySQL server. Run the following command to open TCP port 3306:
sudo iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT
Now check that port 3306 is listening by running this command:
sudo netstat -anltp | grep :3306
Create an SSH tunnel for port 3307:
sudo ssh -fNg -L 3307:127.0.0.1:3306 azurevmuser@servername
Create an endpoint for port 3307 in the dashboard of the VM in the Azure management portal. For more details, see how to add an endpoint to your virtual machine. Your database host name is now <vm_name>.cloudapp.net:3307
Then connect to the MySQL server from VM.2 using this command:
# mysql -h <vm_1_name>.cloudapp.net -P 3307 -u user -pPassword
and it should work fine. Feel free to let us know if we have misunderstood your issue.
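The same tunnelled endpoint works from Python as well; a minimal PyMySQL sketch, assuming placeholder credentials and a hypothetical database name:

import pymysql

conn = pymysql.connect(
    host="<vm_1_name>.cloudapp.net",  # the cloud service DNS name
    port=3307,                        # the tunnelled endpoint, not 3306
    user="user",
    password="Password",
    database="mydb",                  # hypothetical database name
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())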

Related

PostgreSQL Connection issues - Python

I want to connect Python code to a PostgreSQL database hosted on Google Cloud Platform. The code had been running and working correctly, but I recently changed my server to a VPS provided by a hosting company. Now when I try to connect to the database with python3 I receive the following message:
psycopg2.OperationalError: could not connect to server: Connection timed out
Is the server running on host "35.199.90.49" and accepting
TCP/IP connections on port 5432?
I have authorized the server's IP in Google Cloud Platform, and I tried opening access to all IPv4 addresses too, but nothing works. I changed the /XX CIDR suffix of the IPv4 range for my server. I disabled the server's firewall. Nothing works, yet when I connect with the psql command from the Ubuntu server, the database connection is fine.
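For reference, a minimal psycopg2 sketch of the failing call with an explicit port and timeout; the database name and credentials are placeholders:

import psycopg2

conn = psycopg2.connect(
    host="35.199.90.49",
    port=5432,
    dbname="mydb",        # placeholder database name
    user="postgres",      # placeholder user
    password="secret",
    connect_timeout=10,   # fail fast instead of hanging
)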

Expose API using other server and SSH tunnel

I have been trying to solve this for a couple of days and can't seem to find a way to do it. I have a Raspberry Pi in my local network which is running Jupyter (port 8888) and a Flask API (port 5000). I want to be able to access them remotely using another server. My setup and what I have so far is:
A server in GCP with a static IP (let's say it's gcp.static.ip). I opened ports 7003 and 7004 as UDP.
A Raspberry Pi in my home network with a dynamic IP (it can't have a static IP) and Jupyter and the Flask API on ports 8888 and 5000. I forwarded the ports with:
ssh -NR 7003:localhost:5000 -R 7004:localhost:8888 user@gcp.static.ip
A laptop in a remote network. If I open the following SSH tunnel, I can access the Jupyter server at localhost:7004:
ssh -NL 7004:localhost:7004 user@gcp.static.ip
I can't seem to do the same for the Flask API. If I SSH into the GCP server I can query the API at port 7003. How can I set up the GCP server so that I can query the API at gcp.static.ip:APIPort and access Jupyter at gcp.static.ip:JupyterPort?
Thanks a lot!
UPDATE: I'm able to query the API by forwarding a TCP port. However, I'd still like to know if this is possible without having to create another tunnel on my laptop.
Following this link, I had to change /etc/ssh/sshd_config to set GatewayPorts to clientspecified and open the SSH tunnel with:
ssh -NR 0.0.0.0:7003:localhost:5000 user@gcp.static.ip
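With GatewayPorts set and the reverse tunnel bound to 0.0.0.0, the API is reachable directly from the public address; a quick check from Python (the root path is a hypothetical endpoint):

import requests

# Query the Flask API through the GCP server's public port
resp = requests.get("http://gcp.static.ip:7003/", timeout=10)
print(resp.status_code, resp.text)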

How to connect to GCP Memorystore redis from local?

I am able to access GCP Memorystore Redis from GCP Cloud Run through a VPC connector. But how can I do that from my localhost?
You can connect from a local machine with port forwarding, which can be helpful for connecting to your Redis instance during development.
Create a compute engine instance by running the following command:
gcloud compute instances create NAME --machine-type=f1-micro --zone=ZONE
Open a new terminal on your local machine.
To create an SSH tunnel that port forwards traffic through the Compute Engine VM, run the following command:
gcloud compute ssh COMPUTE_VM_NAME --zone=ZONE -- -N -L 6379:REDIS_INSTANCE_IP_ADDRESS:6379
To test the connection, open a new terminal window and run the following command:
redis-cli ping
The SSH tunnel remains open as long as you keep the terminal window with the SSH tunnel connection up and running.
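Once the tunnel is up, Python connects against localhost the same way redis-cli does; a minimal redis-py sketch:

import redis

# The SSH tunnel forwards localhost:6379 to the Memorystore instance
r = redis.Redis(host="localhost", port=6379)
print(r.ping())  # True if the tunnel and instance are reachable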
I suggest you use the link for setting up a development environment.
If you are using Redis as caching-only, or simple pub/sub, I would just spin up a local redis container for development.

How to connect Jupyter Notebook Remote Server on AWS - Ubuntu

I'm trying to setup a remote Jupyter Notebook server on an AWS Ubuntu machine.
I followed this blog: http://blog.impiyush.me/2015/02/running-ipython-notebook-server-on-aws.html
I'm able to do a wget on the server and get the HTML. However, when I try from my laptop's browser I get a Connection Timed Out message.
I thought it might be a port issue with port 8888 (the port on which my notebook server is configured).
So I ran sudo ufw allow 8888. Running netstat shows that Python is listening on all interfaces on that port:
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 1833/python
Posting here for future reference.
Found the issue. The AWS Security Group settings were not configured to allow incoming connections on either HTTPS (443) or 8888. I added those rules in the AWS console and it started working.
Go to the security group's Inbound settings and change the rules; adding inbound rules for ports 443 and 8888 worked for me.
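If you prefer to script the same change, boto3 can add those inbound rules; a sketch with a placeholder security group ID:

import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS (443) and the notebook port (8888) from anywhere
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 8888, "ToPort": 8888,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)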

psycopg2.connect failing with error 'connection refused' (Django app+db hosted on separate VMs)

I have a Django app where the app and the database reside on two separate VMs. I use Azure. In trying to connect to a PostgreSQL backend hosted on a separate Azure VM, I'm writing: conn = psycopg2.connect("dbname='dbname' host='dnsname'"), where the value of host was taken from the Azure portal.
I have also tried the virtual IP address from the portal as the host. Both approaches fail and I get a connection error: could not connect to server: Connection refused. Is the server running on host "myapp.cloudapp.net" (23.132.341.192) and accepting TCP/IP connections on port 5432? Please note that I changed the IP address when pasting the error here.
The Azure VM hosting the PostgreSQL database has port 5432 added to its Endpoints. Moreover, the app to which this PostgreSQL backend belongs is a Django/Python app. My VMs run Ubuntu 14.04. In setting up PostgreSQL, I ran the following installation commands on both the app and database VMs separately:
sudo apt-get update
sudo apt-get install postgresql postgresql-contrib
sudo apt-get install postgresql-server-dev-all
sudo apt-get install libpq-dev
And even though both the app and database VMs have all of the above installed, the database itself was only created on the latter VM. I didn't create a separate user; instead I did:
sudo su postgres
psql -d postgres -U postgres
And then
CREATE DATABASE dbname;
How do I fix the psycopg2.connect error? Ask for more information if needed.
To separate PostgreSQL onto another Azure VM, we need to configure several settings on both the Azure VM side and the PostgreSQL service:
1. Modify listen_addresses = '*' in /etc/postgresql/9.4/main/postgresql.conf to allow clients other than localhost to reach your PostgreSQL service.
2. Add the following line as the first line of /etc/postgresql/9.4/main/pg_hba.conf. It allows access to all databases for all users with an encrypted (md5) password:
# TYPE DATABASE USER CIDR-ADDRESS METHOD
host all all 0.0.0.0/0 md5
3. Run sudo service postgresql restart to restart the service.
4. Log in to the Azure management portal and add a rule on the Endpoints page of your VM, mapping public port 5432 to private port 5432.
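After these changes the connect call needs an explicit user, password, and port; a minimal sketch, with placeholder credentials matching the setup above:

import psycopg2

conn = psycopg2.connect(
    host="myapp.cloudapp.net",  # the cloud service DNS name from the portal
    port=5432,
    dbname="dbname",
    user="postgres",
    password="secret",          # placeholder; md5 auth requires a password to be set
)
cur = conn.cursor()
cur.execute("SELECT version()")
print(cur.fetchone())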
