I am able to access GCP Memorystore Redis from GCP Cloud Run through a VPC connector. But how can I do that from my localhost?
You can connect from your local machine with port forwarding, which is helpful for reaching your Redis instance during development.
Create a Compute Engine instance by running the following command:
gcloud compute instances create NAME --machine-type=f1-micro --zone=ZONE
Open a new terminal on your local machine.
To create an SSH tunnel that port forwards traffic through the Compute Engine VM, run the following command:
gcloud compute ssh COMPUTE_VM_NAME --zone=ZONE -- -N -L 6379:REDIS_INSTANCE_IP_ADDRESS:6379
To test the connection, open a new terminal window and run the following command:
redis-cli ping
The SSH tunnel remains open as long as you keep the terminal window with the SSH tunnel connection up and running.
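Once the tunnel is up, an application on your laptop can treat localhost:6379 as if it were the Memorystore instance. As a rough stdlib-only sketch (no redis-py required), you can even issue the PING yourself over a raw socket; the host, port, and timeout values below are just the defaults from the steps above:

```python
import socket

def resp_command(*parts):
    """Encode a Redis command in the RESP wire protocol."""
    out = f"*{len(parts)}\r\n"
    for p in parts:
        out += f"${len(p)}\r\n{p}\r\n"
    return out.encode()

def ping(host="localhost", port=6379, timeout=5):
    # Connects to the local end of the SSH tunnel, not the private Redis IP.
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(resp_command("PING"))
        return s.recv(64)  # a healthy instance replies b"+PONG\r\n"

if __name__ == "__main__":
    print(ping())
```

In practice you would use a real client library and just point it at localhost:6379; the raw socket version only shows that nothing about the tunnel is Redis-specific.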
I suggest you use the link for setting up a development environment.
If you are using Redis for caching only, or simple pub/sub, I would just spin up a local Redis container for development.
I have been trying to solve this for a couple of days and can't seem to find a way to do it. I have a Raspberry Pi on my local network running Jupyter (port 8888) and a Flask API (port 5000). I want to be able to access them remotely using another server. My setup and what I have so far:
Server in GCP with a static IP (let's say it's gcp.static.ip). I opened ports 7003 and 7004 as UDP.
Raspberry Pi on my home network with a dynamic IP (I can't have a static IP) running Jupyter and the Flask API on ports 8888 and 5000. I forwarded the ports with:
ssh -NR 7003:localhost:5000 -R 7004:localhost:8888 user@gcp.static.ip
Laptop on a remote network. If I create the following SSH tunnel, I can access the Jupyter server at localhost:7004:
ssh -NL 7004:localhost:7004 user@gcp.static.ip
I can't seem to do the same for the Flask API, although if I SSH into the GCP server I can query the API on port 7003. How can I set up the GCP server so that I can query the API at gcp.static.ip:APIPort and access Jupyter at gcp.static.ip:JupyterPort?
Thanks a lot!
UPDATE: I'm able to query the API by forwarding a TCP port. However, I'd still like to know if this is possible without having to create another tunnel on my laptop.
Following this link: I had to change /etc/ssh/sshd_config to set GatewayPorts to clientspecified, then create the SSH tunnel with:
ssh -NR 0.0.0.0:7003:localhost:5000 user@gcp.static.ip
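With GatewayPorts set to clientspecified and the reverse tunnel bound to 0.0.0.0, any remote client can hit the API directly at gcp.static.ip:7003, with no second tunnel on the laptop. A small stdlib-only sketch; the host name and the /status path are placeholders:

```python
from urllib.request import urlopen
from urllib.parse import urlunsplit

def api_url(host, port, path=""):
    # Build the public URL for the reverse-forwarded Flask API.
    return urlunsplit(("http", f"{host}:{port}", path, "", ""))

if __name__ == "__main__":
    # Works from any machine once the reverse tunnel is up.
    with urlopen(api_url("gcp.static.ip", 7003, "/status"), timeout=5) as resp:
        print(resp.status, resp.read())
```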
I am trying to connect my Python application to Kafka running on AWS EC2. I can reach the EC2 instance from a terminal; I checked with telnet <ec2 ip> 9092. But I am not able to connect from my Python application.
Even though my Python application starts without any error using the EC2 IP address, I am not able to receive any data from my Kafka topic on my local machine.
When I add my public IP address to:
advertised.listeners=PLAINTEXT://<local ip address>:9092
the Debezium connector with Kafka Connect won't start, but without setting advertised.listeners it works.
How do I configure Kafka and Kafka Connect so that I can consume a Kafka topic from the EC2 instance on my local machine?
You need to set advertised.listeners to the EC2 public DNS/IP, restart the broker, then open the VPC / firewall connection on the listening port.
Debezium's rest.advertised.listener property is different from the Kafka broker's setting, and you wouldn't need it set on your local machine.
Python and Kafka Connect should use the same bootstrap.servers and protocol.
You can test your listeners using kafkacat -L -b <bootstrap>:9092
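On the client side, the consumer's bootstrap server has to match whatever advertised.listeners returns, i.e. the EC2 public DNS. A sketch assuming the kafka-python package; the topic name and DNS below are placeholders:

```python
def bootstrap(public_dns, port=9092):
    # Must match advertised.listeners=PLAINTEXT://<public_dns>:9092 on the broker.
    return f"{public_dns}:{port}"

if __name__ == "__main__":
    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "my-topic",
        bootstrap_servers=[bootstrap("ec2-xx-xx.compute.amazonaws.com")],
        auto_offset_reset="earliest",
    )
    for message in consumer:
        print(message.value)
```

If the broker advertises a hostname the client cannot resolve or route to, the initial metadata request succeeds but every subsequent fetch fails, which matches the "connects but receives no data" symptom.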
I'm trying to connect PyCharm to a Docker remote interpreter, but the container is part of a cluster (specifically an AWS ECS cluster).
I cannot access the container directly; I have to go through the bastion machine (the instance running the container does not have a public IP).
I.e., to access the container I need to reach it via the bastion machine.
I had the idea of using SSH tunneling, but I could not figure out how to do so with the PyCharm Docker utility.
Is there any solution that PyCharm suggests for this?
I'm having a unique problem with Windows Azure that I don't see on other providers. I've been running connections from remote VMs to a MySQL database running on a DigitalOcean VM. I've successfully connected with AWS, Rackspace, Google, and all other providers, but for some reason, Microsoft Azure VMs don't seem to work.
VM OS: Ubuntu 14.04
I'm trying to connect using PyMySQL and SQLAlchemy.
What I've Tried:
The port is open and listening
The user definitely has permission to upload data into the DB (other remote connections with this user all work fine).
I have even tried "ufw disable" for the Firewall on the Windows Azure VM
I've set 3306 as an endpoint on the Azure VM
Despite all my attempts, the connection cannot be established. Is there something I'm missing on the setup?
Azure VMs disable ICMP, but we can use SSH tunnels to allow outside access to internal network resources. I don't have the resources to create a DigitalOcean VM, but I have created 2 Azure VMs in 2 cloud services to try to reproduce the issue.
I installed mysql-server in VM.1 and mysql-client in VM.2.
Then I tried to connect to the MySQL server directly from VM.2 and got the message "Can't connect to MySQL…".
To work around this issue, I followed this post and created an SSH tunnel on VM.1, which hosts the MySQL server:
Open port 3306 so that a remote client can connect to your MySQL server. Run the following command to open TCP port 3306:
iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT
Now let’s check if the port 3306 is open by running this command:
sudo netstat -anltp|grep :3306
Create an SSH tunnel for port 3307:
sudo ssh -fNg -L 3307:127.0.0.1:3306 azurevmuser@servername
Create an endpoint for port 3307 in the dashboard of the VM in the Azure management portal. For more details, see how to add an endpoint to your virtual machine. Now your database host name is <vm_name>.cloudapp.net:3307
Then connect MySQL server from VM.2 using command:
# mysql -h <vm_1_name>.cloudapp.net -P 3307 -u user -pPassword
and it should work fine. Feel free to let us know if we have misunderstood your issue.
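In Python terms (the original question used PyMySQL and SQLAlchemy), the only change is pointing the connection at the tunneled endpoint on port 3307 instead of 3306. A sketch; the host and credentials are placeholders:

```python
def tunnel_url(user, password, host, port=3307, db="mydb"):
    # SQLAlchemy URL targeting the SSH-tunneled port, not MySQL's default 3306.
    return f"mysql+pymysql://{user}:{password}@{host}:{port}/{db}"

if __name__ == "__main__":
    from sqlalchemy import create_engine, text  # pip install sqlalchemy pymysql

    engine = create_engine(tunnel_url("azurevmuser", "Password", "vm_name.cloudapp.net"))
    with engine.connect() as conn:
        print(conn.execute(text("SELECT 1")).scalar())
```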
I created an EC2 server on Amazon AWS and installed the HappyBase Python library for working with HBase.
I also created an HBase job cluster in EMR.
Then I tried to run this script on the EC2 server:
import happybase
connection = happybase.Connection('….us-west-2.compute.amazonaws.com')
connection.open()
print(connection.tables())
But I get an error that the server is not found. As the host I used the HBase cluster's public DNS. What do I need to configure to work with the database created in EMR from another EC2 server using Python and HappyBase?
Thanks.
Did you start the Thrift server on your cluster's master node? You can do it with:
$ ssh -i <your-key.pem> hadoop@<master-node-dns>
$ hbase-daemon.sh start thrift
For this to work, HBase must be configured on your cluster. You have to choose the HBase job type if configuring a job from the visual interface.
In the old management console, ensure that the step Start HBase is present under the Steps tab, and the bootstrap action Install HBase is under Bootstrap Actions.
In the new console, in the Cluster Details window, there is an Applications section; check that a row like
Applications: HBase 0.92.0
is present there.
When all is done correctly, SSH to your master instance and check for the hbase-daemon script with:
~$ which hbase-daemon.sh
/home/hadoop/bin/hbase-daemon.sh
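Once Thrift is running on the master, the HappyBase connection should target the master node's DNS and the Thrift port (9090 by default), not the cluster's generic public DNS. A sketch, assuming the happybase package; the DNS name below is a placeholder:

```python
THRIFT_PORT = 9090  # HBase Thrift server default port

def master_endpoint(master_dns, port=THRIFT_PORT):
    # HappyBase speaks Thrift, so point it at the master node's Thrift server.
    return master_dns, port

if __name__ == "__main__":
    import happybase  # pip install happybase

    host, port = master_endpoint("ec2-master.us-west-2.compute.amazonaws.com")
    connection = happybase.Connection(host, port=port)
    connection.open()
    print(connection.tables())
```

You would also need the EC2 security group of the master node to allow inbound traffic on port 9090 from the client instance.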