How to connect to a RabbitMQ server over the network? - python

I've got 3 clients on 3 different computers.
Client A is running a RabbitMQ server.
Client B is a producer.
Client C is a consumer.
I've gone through the tutorials on RabbitMQ's site (in Python), and I thought that making them work over the network instead of localhost would be as simple as entering the server's IP in the line:
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
Their guide even stated:
"If we wanted to connect to a broker on a different machine we'd simply specify its name or IP address here."
So what am I doing wrong and how can I get the clients to talk to the server over the network?
Edit: For clarification - I'm running the server using the rabbitmq-server command.
The clients are connected to the broker using the line stated above.

By default it will try to connect using guest as the user id and password. Also by default, the guest user will not work from a remote machine, so you need to either create a new user and use those credentials in your connection,
e.g.
credentials = pika.PlainCredentials('username', 'password')
parameters = pika.ConnectionParameters('serverip', credentials=credentials)
or modify the guest user to allow it to connect from remote machines. The former is probably the better option; directions for the latter can be found here:
http://blog.shippable.com/rabbitmq-on-docker-fix
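For reference, here is a minimal end-to-end producer sketch for client B, assuming a user has already been created on client A's broker (the IP address, user name, password and queue name below are placeholders):

import pika

# Placeholder credentials; create this user on the broker first
credentials = pika.PlainCredentials('myuser', 'mypassword')
parameters = pika.ConnectionParameters(host='192.168.1.10',  # client A's IP
                                       credentials=credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body='Hello over the network!')
connection.close()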

You can do something like this:
credentials = pika.PlainCredentials('username', 'password')
parameters = pika.URLParameters('amqp://username:password@localhost:5672/%2F')
connection = pika.BlockingConnection(parameters)
If you want to connect to a broker on a different machine, change "localhost" above to the name or IP address of that machine (the %2F is the URL-encoded default vhost "/"):
For example on client B :
parameters = pika.URLParameters('amqp://username:password@(ip of client A):5672/%2F')
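On client C, the consumer side looks much the same; a minimal sketch with pika 1.x (the IP and queue name are placeholders):

import pika

parameters = pika.URLParameters('amqp://username:password@192.168.1.10:5672/%2F')  # client A's IP
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print('Received', body)

channel.basic_consume(queue='hello', on_message_callback=callback, auto_ack=True)
channel.start_consuming()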

Related

How to form an OPCUA connection in python from server IP address, port, security policy and credentials?

I have never used OPC-UA before, but now I am faced with a task where I have to pull data from an OPC-UA machine and push it to a SQL database using Python. I can handle the database part, but how do I connect to the OPC-UA server when I have only the following fields available?
IP address: 192.168.38.94
Port: 8080
Security policy: Basic256
Username: della_client
Password: amorphous##
Some tutorials I saw use a URL directly, but is there any way to form the URL from these parameters, or should I ask the machine owners for something more specific to be able to connect? I just want to be sure of what I need before I approach them.
Relatedly, how can I use the same parameters in the application called UA-Expert to verify the connection as well? Is that possible?
If it is relevant, I am using python 3.10 on Ubuntu 22.04.
You need to know which protocol is used. Then you can create the URL by using the IP address as the domain:
OPC UA binary: opc.tcp://ip:port
HTTPS: https://ip:port
OPC UA WebSockets: opc.wss://ip:port
HTTP: http://ip:port (deprecated in version 1.03)
In your example this could be opc.tcp://192.168.38.94:8080 or https://192.168.38.94:8080.
In most cases the binary protocol is used, but port 8080 is a typical http(s) port.
The credentials and the security policy are needed later in the connection process.
And yes: you can test the URLs with UaExpert. You can find a step-by-step tutorial in the documentation.
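For example, with the python-opcua package a connection attempt could look like the sketch below; the security-string format and the certificate/key file names are assumptions to verify against your version's documentation (Basic256 normally requires a client certificate):

from opcua import Client  # pip install opcua

client = Client('opc.tcp://192.168.38.94:8080')
client.set_user('della_client')
client.set_password('amorphous##')
# Placeholder certificate and key files; Basic256 needs them for signing/encryption
client.set_security_string('Basic256,SignAndEncrypt,client_cert.der,client_key.pem')
client.connect()
try:
    print(client.get_node('ns=0;i=2258').get_value())  # server's CurrentTime node
finally:
    client.disconnect()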

Unable to connect to AWS ElastiCache from python client

I have an AWS ElastiCache instance of 2 replicated nodes (cluster-mode disabled).
I am able to connect through my Java client using Redisson (a service running in the same cluster). However, when I use the Python redis client, it does not seem to connect, or it connects but doesn't subscribe. I don't see any connection errors, but when I subscribe to a pub/sub topic I don't get any acknowledgment either, not even the first message that returns 1 for a successful subscription. Not sure what I'm doing wrong.
Also it works if I'm connecting to a local redis instance. Below is the code:
self.redis_conn = redis.Redis(host=os.environ.get(host), port=6379, password=os.environ.get('REDIS_PASSWORD'))
self.pubsub = self.redis_conn.pubsub()
self.pubsub.subscribe('XYZ_EVENTS')
for new_message in self.pubsub.listen():
    self._logger.info("received: " + str(new_message['data']))
Had to set ssl=True and that did it. The ElastiCache instance has in-transit encryption enabled, so this option had to be set to True.
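In other words, a minimal sketch of the working connection (the environment variable names are placeholders, as in the question):

import os
import redis

redis_conn = redis.Redis(
    host=os.environ.get('REDIS_HOST'),
    port=6379,
    password=os.environ.get('REDIS_PASSWORD'),
    ssl=True,  # required because in-transit encryption is enabled on the ElastiCache instance
)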

Accessing AWS ElastiCache (Redis CLUSTER mode) from different AWS accounts via AWS PrivateLink

I have a business case where I want to access a clustered Redis cache in one account (let's say account A) from another account (account B).
I have used the solution mentioned in the link below and, for the most part, it works: Base Solution
The base solution works fine if I am trying to access the clustered Redis via redis-py however if I try to use it with redis-py-cluster it fails.
I am testing all this in a staging environment where the Redis cluster has only one node, but in the production environment it has two nodes, so the redis-py approach will not work for me.
Below is my sample code
redis = "3.5.3"
redis-py-cluster = "2.1.3"
==============================
from redis import Redis
from rediscluster import RedisCluster

respCluster = 'error'
respRegular = 'error'
host = "vpce-XXX.us-east-1.vpce.amazonaws.com"
port = 6379

try:
    ru = RedisCluster(startup_nodes=[{"host": host, "port": port}], decode_responses=True, skip_full_coverage_check=True)
    respCluster = ru.get('ABC')
except Exception as e:
    print(e)

try:
    ru = Redis(host=host, port=port, decode_responses=True)
    respRegular = ru.get('ABC')
except Exception as e:
    print(e)

print({"respCluster": respCluster, "respRegular": respRegular})
The above code works perfectly in account A but in account B the output that I got was
{'respCluster': 'error', 'respRegular': '123456789'}
And the error that I am getting is
rediscluster.exceptions.ClusterError: TTL exhausted
In account A we are using AWS ECS + EC2 + docker to run this and
In account B we are running the code in an AWS EKS Kubernetes pod.
What should I do to make the redis-py-cluster work in this case? or is there an alternative to redis-py-cluster in python to access a multinode Redis cluster?
I know this is a highly specific case, any help is appreciated.
EDIT 1: Upon further research, it seems that TTL exhausted is a generic error; in the logs the initial error is
redis.exceptions.ConnectionError: Error 101 connecting to XX.XXX.XX.XXX:6379. Network is unreachable
Here XX.XXX.XX.XXX is the IP of the Redis cluster in account A. This is strange, since redis-py also connects to the same IP and port, so this error should not exist.
So it turns out the issue was due to how redis-py-cluster manages hosts and ports.
When a new redis-py-cluster object is created, it gets a list of host IPs from the Redis server (i.e. the Redis cluster host IPs from account A), after which the client tries to connect to those hosts and ports.
In normal cases this works, because the initial host and the IPs in the response are one and the same (i.e. the host and port used at object-creation time).
In our case, the object-creation host and port come from the DNS name of the endpoint service in account B, so the code ends up trying to access the actual IPs from account A instead of the DNS name from account B.
The issue was resolved using host port remapping: we mapped the node IP returned by the Redis server in account A to the DNS name of account B's endpoint service.
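For reference, recent redis-py-cluster releases expose this as the host_port_remap option on RedisCluster; a sketch, where the exact key names and the placeholder IP are assumptions to check against your version's documentation:

from rediscluster import RedisCluster

ru = RedisCluster(
    startup_nodes=[{"host": "vpce-XXX.us-east-1.vpce.amazonaws.com", "port": 6379}],
    # Map the node IP advertised by account A's cluster (placeholder below)
    # back to account B's endpoint-service DNS name
    host_port_remap=[
        {"from_host": "10.0.0.1", "from_port": 6379,
         "to_host": "vpce-XXX.us-east-1.vpce.amazonaws.com", "to_port": 6379},
    ],
    decode_responses=True,
    skip_full_coverage_check=True,
)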
Based on your comment:
this was not possible because the VPCs in Account-A and Account-B had the same CIDR range. Peered VPCs can’t have the same CIDR range.
I think what you are looking for is impossible. Routing within a VPC always happens first, before any route tables are considered at all. Said another way, if the destination of the packet lies within the sending VPC, it will never leave that VPC, because AWS will route it within its own VPC even if the IP isn't in use there at the time.
So, if you are trying to communicate with another VPC that has the same IP range as yours, even if you specifically add a route to egress traffic to a different IP (in the same range), the rule will be silently ignored and AWS will try to deliver the packet in the originating VPC, which is not what you are trying to accomplish.

Connecting to a MongoDB database installed on a different server

We have two servers. I have installed MongoDB on one of them (Ubuntu - Digital Ocean VPS).
When I run a script on the same server to retrieve data using localhost, it works perfectly.
import pymongo

# SERVER = 'mongodb://localhost:27017/myproject'
SERVER = 'mongodb://root:password@x.x.x.x:27017/myproject'  # x.x.x.x is the address of my server
connection = pymongo.MongoClient(SERVER)
db = connection.myproject
print(list(db.coll.find()))
The problem is that I can't connect to this DB from outside. Note that I can ssh into the server and run the script using localhost, but it does not work from outside the server.
Do I need to go through some configuration?
You must allow remote access. Edit the config:
vi /etc/mongod.conf
By default it listens only on the local interface:
bind_ip = 127.0.0.1
You must add the IPs of your other servers. For example, to listen on the local interface and 192.168.0.100:
bind_ip = 127.0.0.1,192.168.0.100
Or comment the line out to listen on all interfaces.
Note: the list is comma-separated, with no spaces. I hope this helps.
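For a quick check once mongod has been reconfigured and restarted, a minimal connection test from the other server might look like this (address and credentials are placeholders):

import pymongo

client = pymongo.MongoClient('mongodb://root:password@192.168.0.100:27017/myproject',
                             serverSelectionTimeoutMS=5000)
client.admin.command('ping')  # raises ServerSelectionTimeoutError if the server is unreachable
print('connected')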
For development purposes you can open an ssh tunnel like
ssh <UBUNTU - Digital Ocean VPS> -L27018:localhost:27017
and then, while the ssh connection remains open, connect to the remote db as
SERVER = 'mongodb://root:password@localhost:27018/myproject'
You can use any free port instead of 27018.
Otherwise you need to reconfigure mongodb to listen on all interfaces: comment out the bindIp line in the mongodb config and restart the server. This will make the DB publicly accessible, so make sure you use strong passwords and don't allow anonymous access.
Finally, if you are using a VPN, you need to uncomment the bindIp line in the mongodb config and add the VPN interface there, e.g.:
bindIp = 127.0.0.1,10.0.1.12
where 10.0.1.12 should be replaced with the VPN interface address of your Ubuntu box. You can find the exact value with ifconfig. Important: there are no spaces around the comma.

Socket: Get user information

How can I get information about a user's PC that is connected to my socket?
A socket is a "virtual" channel established between two electronic devices through a network. The only information available about a remote host is what is published on the network.
The basic information is what the TCP/IP headers provide, namely the remote IP address, the size of the receive buffer, and a bunch of mostly useless flags. For anything else, you will have to query other services.
A reverse DNS lookup will get you a name associated with the IP address. A traceroute will tell you the path to the remote computer (or at least to a machine acting as a gateway/proxy for the remote host). A geolocation request can give you an approximate location of the remote computer. If the remote host is itself a server accessible on the internet through a registered domain name, a WHOIS request can give you the name of the person in charge of the domain. On a LAN (Local Area Network: a home or enterprise network), an ARP or RARP request will get you a MAC address and much more (as much as the network administrator entered when configuring the network), possibly the exact location of the computer.
There is much more information available, but only if it was published. If you know what you are looking for and where to query, you can be very successful. If the remote host is well hidden and uses some simple stealth techniques (an anonymous proxy), you will get nothing relevant.
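For the basics, the peer's address comes straight off the accepted socket, and a reverse DNS lookup is one call on top of that; a sketch (server_socket stands for a listening socket you have already set up):

import socket

conn, addr = server_socket.accept()  # placeholder listening socket
ip, port = addr                      # remote IP address and ephemeral port
print('peer address:', ip, port)

# Reverse DNS lookup; raises socket.herror when no PTR record exists
try:
    hostname, aliases, addresses = socket.gethostbyaddr(ip)
    print('peer hostname:', hostname)
except socket.herror:
    print('no reverse DNS entry for', ip)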
Look here. See the "# Echo server program" section.
conn, addr = s.accept()
print('Connected by', addr)
I am unsure if this is what you are looking for, but I hope it helps.
You could try asking identd about the connection, but a lot of hosts don't run it, or only publish information there that you can't use.
