Unable to connect to AWS ElastiCache from Python client - python

I have an AWS ElastiCache instance of 2 replicated nodes (cluster-mode disabled).
I am able to connect from my Java client using Redisson (a service running in the same cluster). However, when I use the Python redis client, it does not seem to connect, or it connects but doesn't subscribe. I don't see any connection errors, but when I subscribe to a pub/sub topic I don't get any acknowledgment either, not even the first message that returns 1 for a successful subscription. I'm not sure what I'm doing wrong.
It also works if I connect to a local Redis instance. Below is the code:
self.redis_conn = redis.Redis(host=os.environ.get(host), port=6379, password=os.environ.get('REDIS_PASSWORD'))
self.pubsub = self.redis_conn.pubsub()
self.pubsub.subscribe('XYZ_EVENTS')
for new_message in self.pubsub.listen():
    self._logger.info("received: " + str(new_message['data']))

Had to set ssl=True and that did it. The ElastiCache instance has encryption enabled, so this config had to be set to True.
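For reference, a minimal sketch of the working connection with TLS enabled (the REDIS_HOST environment variable name is an assumption; the rest mirrors the code above):

import os
import redis

# ssl=True is required because the ElastiCache instance has in-transit encryption enabled.
redis_conn = redis.Redis(
    host=os.environ.get('REDIS_HOST'),
    port=6379,
    password=os.environ.get('REDIS_PASSWORD'),
    ssl=True,
)

pubsub = redis_conn.pubsub()
pubsub.subscribe('XYZ_EVENTS')
for new_message in pubsub.listen():
    print(new_message)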

Related

Accessing AWS ElastiCache (Redis CLUSTER mode) from different AWS accounts via AWS PrivateLink

I have a business case where I want to access a clustered Redis cache in one account (let's say account A) from another account (account B).
I have used the solution mentioned in the link below, and for the most part it works: Base Solution
The base solution works fine if I access the clustered Redis via redis-py; however, if I try to use it with redis-py-cluster it fails.
I am testing all this in a staging environment where the Redis cluster has only one node, but in the production environment it has two nodes, so the redis-py approach will not work for me.
Below is my sample code
redis = "3.5.3"
redis-py-cluster = "2.1.3"
==============================
from redis import Redis
from rediscluster import RedisCluster
respCluster = 'error'
respRegular = 'error'
host = "vpce-XXX.us-east-1.vpce.amazonaws.com"
port = "6379"
try:
    ru = RedisCluster(startup_nodes=[{"host": host, "port": port}], decode_responses=True, skip_full_coverage_check=True)
    respCluster = ru.get('ABC')
except Exception as e:
    print(e)

try:
    ru = Redis(host=host, port=port, decode_responses=True)
    respRegular = ru.get('ABC')
except Exception as e:
    print(e)

return {"respCluster": respCluster, "respRegular": respRegular}
The above code works perfectly in account A, but in account B the output I got was
{'respCluster': 'error', 'respRegular': '123456789'}
And the error that I am getting is
rediscluster.exceptions.ClusterError: TTL exhausted
In account A we are using AWS ECS + EC2 + docker to run this and
In account B we are running the code in an AWS EKS Kubernetes pod.
What should I do to make the redis-py-cluster work in this case? or is there an alternative to redis-py-cluster in python to access a multinode Redis cluster?
I know this is a highly specific case, any help is appreciated.
EDIT 1: Upon further research, it seems that TTL exhausted is a generic error; in the logs the initial error is
redis.exceptions.ConnectionError:
Error 101 connecting to XX.XXX.XX.XXX:6379. Network is unreachable
Here the XXXX is the IP of the Redis cluster in account A.
This is strange, since redis-py also connects to the same IP and port, so this error should not occur.
So it turns out the issue was due to how redis-py-cluster manages hosts and ports.
When a new redis-py-cluster object is created, it gets a list of host IPs from the Redis server (i.e. the Redis cluster host IPs from account A), after which the client tries to connect to those hosts and ports.
In normal cases this works, because the initial host and the IPs from the response are one and the same (i.e. the host and port added at the time of object creation).
In our case, the host and port used at object creation come from the DNS name of the endpoint service in account B.
This leads to the code trying to access the actual IPs from account A instead of the DNS name from account B.
The issue was resolved using host-port remapping: we mapped the IPs returned by the Redis server in account A to the IP of account B's endpoint service DNS name.
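For illustration, here is a minimal sketch of that remapping using redis-py-cluster's host_port_remap option (the key names and the IP/DNS values below are assumptions based on redis-py-cluster 2.1.x; check the documentation of your version):

from rediscluster import RedisCluster

# Hypothetical mapping: the node IP announced by account A's cluster is
# rewritten to the VPC endpoint DNS name that is reachable from account B.
remap = [{
    "from_host": "10.0.0.10",                            # IP announced by the cluster (account A)
    "from_port": 6379,
    "to_host": "vpce-XXX.us-east-1.vpce.amazonaws.com",  # endpoint service DNS name (account B)
    "to_port": 6379,
}]

ru = RedisCluster(
    startup_nodes=[{"host": "vpce-XXX.us-east-1.vpce.amazonaws.com", "port": "6379"}],
    host_port_remap=remap,
    decode_responses=True,
    skip_full_coverage_check=True,
)
print(ru.get('ABC'))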
Based on your comment:
this was not possible because the VPCs in Account-A and Account-B had the same CIDR range. Peered VPCs can't have the same CIDR range.
I think what you are looking for is impossible. Routing within a VPC always happens first - it happens before any route tables are considered at all. Said another way, if the destination of the packet lies within the sending VPC it will never leave that VPC because AWS will try routing it within its own VPC, even if the IP isn't in use at that time in the VPC.
So, if you are trying to communicate with another VPC which has the same IP range as yours, even if you specifically add a route to egress traffic to a different IP (but in the same range), the rule will be silently ignored and AWS will try to deliver the packet in the originating VPC, which seems like it is not what you are trying to accomplish.

How can I use MQTT long term in IoT Core?

So first of all, what I really want to achieve: I want to know when an IoT device has stopped working (i.e. lost connection, shut down, basically it's no longer talking to IoT Core). I can't seem to find an implementation for this on GCP.
I have a Raspberry Pi as my IoT device, configured in IoT Core. Somewhere I read that, since this is not implemented, a way to solve it is to create a logging sink that activates a Cloud Function whenever there is a CONNECT/DISCONNECT log. This would serve my purpose, and I have implemented this sink and Cloud Function to alert me.
I have been following this guide on connecting to MQTT. However, the way they explain it, whenever the expiration time on the JWT is exceeded, they disconnect the client and create a new one to renew the JWT. This means I would be alerted of a connection/disconnection every time the client needs to be renewed, so I won't be able to differentiate a real issue from renewals of the MQTT client.
In the same guide they mention MQTT long term, or LTS, and they claim that this way you can set up the client once and communicate continuously through it for the supported time, which they say is until 2030. This seems to be what I really want, but I have not been able to connect this way, and they don't explain it other than saying the hostname should be mqtt.2030.ltsapis.goog and to use primary and backup certificates, which are different from the complete root CA used in the first method.
I tried using basically the same process for setting up the client:
client = mqtt.Client(client_id=client_id)
# With Google Cloud IoT Core, the username field is ignored, and the
# password field is used to transmit a JWT to authorize the device.
client.username_pw_set(
    username='unused',
    password=create_jwt(project_id, private_key_file, algorithm))
# Enable SSL/TLS support.
client.tls_set(ca_certs=ca_certs, tls_version=ssl.PROTOCOL_TLSv1_2)
but changing the hostname and passing the primary cert where I would normally pass the complete ca_certs. It won't accept it, and I am not sure how else to do it with primary and backup certificates. I am looking at the documentation for tls_set, but I don't see where these would go or how they differ from the complete CA certs. I haven't seen any other examples outside of this guide.
I am hoping to be able to connect to this MQTT LTS so that I can maintain the connection without having to constantly renew the client.
The long-term MQTT domain lets you use the LTS configuration for a long period of time; it does not make the connection itself long-lived.
As you mention, for your use case the solution would be to activate and use device logs. One of the events is triggered when a device disconnects from IoT Core, and you can use that event to trigger an alert.
Keep in mind that the time limits for the connection are set for security purposes, and the client should renew the connection.
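A rough sketch of that renewal pattern with paho-mqtt, reusing the guide's create_jwt helper; client_id, project_id, private_key_file, algorithm, ca_certs, mqtt_hostname, and jwt_expires_minutes are placeholders, not values from the guide:

import datetime
import ssl
import time
import paho.mqtt.client as mqtt

def connect_client():
    # Build a fresh client with a newly minted JWT (placeholder values).
    client = mqtt.Client(client_id=client_id)
    client.username_pw_set(
        username='unused',
        password=create_jwt(project_id, private_key_file, algorithm))
    client.tls_set(ca_certs=ca_certs, tls_version=ssl.PROTOCOL_TLSv1_2)
    client.connect(mqtt_hostname, 8883)
    client.loop_start()
    return client

jwt_issued = datetime.datetime.utcnow()
client = connect_client()
while True:
    # Renew the JWT (and the connection) shortly before it expires; these
    # planned reconnects can be filtered out of the disconnect alerting.
    if (datetime.datetime.utcnow() - jwt_issued).total_seconds() > 60 * (jwt_expires_minutes - 1):
        client.loop_stop()
        client.disconnect()
        jwt_issued = datetime.datetime.utcnow()
        client = connect_client()
    time.sleep(1)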

Server to Server Websocket communication

Here is the architecture topology:
An IoT device that counts people and saves the data to its cloud platform. The data can be accessed via an API and, more specifically, the platform requires a webserver endpoint where it can push the data every minute or so. This is a ready-made product, so I cannot change the data transfer method.
A webserver on my side that receives and stores the data.
As I am new to WebSockets, I interpret the above configuration as a WebSocket server installed on my webserver that waits for data pushed from the IoT server (the client).
So I deployed a Linux server on DigitalOcean and started a WebSocket server to wait for incoming connections. The code I used for the server is:
import asyncio
import websockets
async def echo(websocket, path):
    async for message in websocket:
        print(message)
start_server = websockets.serve(echo, "MYSERVERIP", 80)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
All I need at this stage is to print all JSON packets that are pushed from the IoT server.
When I try to set the endpoint address in the IoT server, it refuses to accept ws://Myserver:80 and only accepts http://Myserver:80. Obviously I don't have any HTTP server running on my server, so I am guessing the connection is being refused by my server.
Also, the IoT API requires X-Auth-Token authentication. I am using the websockets Python library but I didn't set up authentication on my server; I left it null on both the IoT server API and my server.
If I were to add token authentication, what parameters or arguments would the WebSocket server require? I searched the websockets docs but had no luck.
This is not for production environment!! I am only trying to learn.
Any thoughts are welcome.
So these are the requirements:
An IoT device that counts people and saves the data to its cloud platform. Data can be accessed via an API and, more specifically, it requires a webserver endpoint where it can push the data every minute or so.
A webserver on my side that receives and stores the data.
They need the data refreshed every minute or so. In my humble opinion, WebSockets are necessary only for real-time use cases.
That said, my proposed solution is to use a message broker instead. I think it's easier to handle than WebSockets directly, and you do not have to worry about maintaining a live socket connection all the time (which is not energy-efficient in the IoT world).
In other words, use a pub/sub architecture instead. Your IoT devices publish data to the message broker (a common one is RabbitMQ), and then you build a server that subscribes to the broker, consuming its data and storing it.
Now, every device connects to the cloud only when it has data available, which saves energy. The protocol may be MQTT or HTTP; MQTT is often used in the IoT world.
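As a rough illustration of that pub/sub flow with pika and RabbitMQ (the queue name, broker host, and JSON payload are just placeholders):

import json
import pika

# Publisher side: the device (or its cloud platform) pushes a reading to the broker.
connection = pika.BlockingConnection(pika.ConnectionParameters('broker-host'))
channel = connection.channel()
channel.queue_declare(queue='people_counts')
channel.basic_publish(exchange='',
                      routing_key='people_counts',
                      body=json.dumps({"count": 12, "ts": "2021-01-01T00:00:00Z"}))
connection.close()

Your storage server would then subscribe to the same queue and persist each message as it arrives.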
Related: Pub-sub messaging benefits

How to connect to a Rabbit-MQ server over the network?

I've got 3 clients on 3 different computers.
Client A is running a RabbitMQ server.
Client B is a producer.
Client C is a consumer.
I've gone through the tutorials on RabbitMQ's site (in Python) and I thought that changing them to work over the network instead of localhost would just be a matter of entering the IP in the line:
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
Their guide even stated
If we wanted to connect to a broker on a different machine we'd simply specify its name or IP address here.
So what am I doing wrong and how can I get the clients to talk to the server over the network?
Edit: For clarification - I'm running the server using the rabbitmq-server command.
The clients are connected to the broker using the line stated above.
By default it will try to connect using guest as the user ID and password, and by default guest will not work from a remote machine. You need to either create a new user and use those credentials in your connection,
e.g.
credentials = pika.PlainCredentials('username', 'password')
parameters = pika.ConnectionParameters('serverip', credentials=credentials)
or modify the guest user to allow it to connect from remote machines. The former is probably the better option; directions for the latter can be found here:
http://blog.shippable.com/rabbitmq-on-docker-fix
You can do something like this:
credentials = pika.PlainCredentials('username', 'password')
parameters = pika.URLParameters('amqp://username:password@localhost:5672/%2F')
connection = pika.BlockingConnection(parameters)
If you want to connect to a broker on a different machine, change the "localhost" above to the name or IP address of that machine:
For example, on client B:
parameters = pika.URLParameters('amqp://username:password@(ip of client A):5672/%2F')
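For completeness, a minimal consumer sketch for client C under the same assumptions (the queue name 'hello' and the credentials are placeholders):

import pika

credentials = pika.PlainCredentials('username', 'password')
parameters = pika.ConnectionParameters('serverip', credentials=credentials)  # IP or hostname of client A (the broker)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print("received:", body)

# Consume messages published by client B until interrupted.
channel.basic_consume(queue='hello', on_message_callback=callback, auto_ack=True)
channel.start_consuming()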

Client machine not able to connect to xmlrpc server hosted on EC2 cloud server

I have an Ubuntu instance on an EC2 cloud server, and on that same instance I have created an XML-RPC server using SimpleXMLRPCServer. I'd like to access the server's methods from my local Ubuntu machine, but when I try to do so it raises a "Protocol Error" as below:
"XMLRPC Error : xmlrpclib.ProtocolError: ProtocolError for ec2-70-41-59-2.amazonaws.com:8000/Common: -1 >"
As per http://docs.python.org/library/xmlrpclib.html, a protocol error occurs if the server named by the URI does not exist, yet the server is running in the cloud.
What is this error and how do I fix it? Are any changes required on the Amazon cloud side to give access to a particular host and port? If so, what changes should be applied?
This answer may help someone solve the same problem:
1) Select your (or the default) security group in the EC2 section of the cloud console.
2) Select the "Inbound" tab and create a new rule for "All TCP", giving access to your required port.
To my knowledge, the second step tells the cloud server to open the selected port for inbound access from end users.
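Once the port is open, a minimal sketch of the two sides (Python 3 module names here; the question's Python 2 code would use SimpleXMLRPCServer and xmlrpclib instead, and the method name is a placeholder):

# Server, on the EC2 instance: bind to 0.0.0.0 so it is reachable from outside.
from xmlrpc.server import SimpleXMLRPCServer

def ping():
    return "pong"

server = SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_function(ping)
server.serve_forever()

The client on the local machine then points at the instance's public DNS name and the opened port:

import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://ec2-70-41-59-2.amazonaws.com:8000/")
print(proxy.ping())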
