Facebook Thrift SSH Frame Size Error - python

I am running the Facebook Thrift service on my machine. I used sample code to check that it works:
import asyncio
from fbnet.command_runner.thrift_client import AsyncioThriftClient

# Import FCR Thrift Types
from fbnet.command_runner_asyncio.CommandRunner import ttypes as fcr_ttypes
# Import FCR Service Client
from fbnet.command_runner_asyncio.CommandRunner.Command import Client as FcrClient

import getpass

# Device Information
hostname = 'my_vm'
username = 'root'
password = getpass.getpass('%s Password: ' % username)

# Destination device
device = fcr_ttypes.Device(hostname=hostname, username=username, password=password)

async def run(cmd, device):
    async with AsyncioThriftClient(FcrClient, 'x.x.x.x', 22) as client:
        res = await client.run(cmd, device)
        # type of res is `struct CommandResult`
        print(res.output)

loop = asyncio.get_event_loop()
loop.run_until_complete(run('uname -a', device))
However, I am getting the following error:
Frame size 1397966893 too large for THeaderProtocol
Traceback (most recent call last):
  File "pgm1.py", line 28, in <module>
    loop.run_until_complete(run('uname -a', device))
  File "/usr/local/lib/python3.6/asyncio/base_events.py", line 467, in run_until_complete
    return future.result()
  File "pgm1.py", line 23, in run
    res = await client.run(cmd, device)
thrift.transport.TTransport.TTransportException: Connection closed
Any ideas on how to correct this?

Kenster's comment indicates the real problem here:
0x5353482D is the four characters "SSH-", which happens to be the first data sent by an ssh server when something connects to it.
There are some server implementations that require TFramedProtocol by design. In other words, it is mandatory; the client has to use it, simply because the server expects it that way.
The insight comes quickly once you know that TFramedProtocol adds a 4-byte header carrying the frame size of the data to follow. If one side does not use TFramedProtocol, the other side interprets the first four data bytes it receives as a frame size, hence the error message.
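You can check this arithmetic yourself: interpreting the four ASCII bytes "SSH-" as a big-endian 32-bit integer yields exactly the frame size from the error message.

>>> int.from_bytes(b'SSH-', byteorder='big')
1397966893
>>> hex(1397966893)
'0x5353482d'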
Solution
Add TFramedProtocol on the client side to the Thrift transport/protocol stack.
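For illustration, here is a minimal sketch of a framed client stack using the plain Apache Thrift Python library, where framing is implemented by TFramedTransport; the host, port, and the generated MyService module are placeholder assumptions, not taken from the question:

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from myservice import MyService  # hypothetical generated service module

sock = TSocket.TSocket('x.x.x.x', 9090)        # placeholder host/port
transport = TTransport.TFramedTransport(sock)  # adds the 4-byte frame-size header
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = MyService.Client(protocol)
transport.open()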

Related

BLE device does not make new /dev/input/eventX when it reconnects, using Python Gatt Library

I am new to the python gatt module, and I am having a problem with reconnections.
Basically what I am trying to do is establish a connection with a Bluetooth Low Energy (BLE) device using the python gatt module (https://github.com/getsenic/gatt-python) and then read the input from the /dev/input/eventX path with the evdev module. I also want to automate the reconnection process, so that when the device goes out of range and comes back, it will reconnect and continue working normally.
The problem is that when the device disconnects and eventually reconnects (via a simple routine: catch the disconnect message -> try to reconnect), the connection process does not create a new /dev/input/eventX path if the reconnection took more than 2-3 minutes. This does not happen when the reconnection succeeds within the first 1-2 minutes.
The error I am getting when the 2-3 minutes have passed is:
File "/usr/lib/python3.7/site-packages/dbus/proxies.py", line 145, in
call
File "/usr/lib/python3.7/site-packages/dbus/connection.py", line 651, in call_blocking
dbus.exceptions.DBusException:
org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible
causes include: the remote application did not send a reply, the
message bus security policy blocked the reply, the reply timeout
expired, or the network connection was broken.
The core of the script is the following:
import gatt
import json
import threading
import time
from os import path

def reconnect(mac_address):
    try:
        devices[mac_address].connect()
    except:
        print(f"thread from {mac_address} crashed")

class AnyDevice(gatt.Device):
    shut_down_flag = False

    def connect_succeeded(self):
        super().connect_succeeded()
        print(f"{self.mac_address} Connected")

    def connect_failed(self, error):
        super().connect_failed(error)
        print(f"{self.mac_address} Connection failed.")
        reconnect_thread = threading.Thread(target=reconnect, name=f'reconnect {self.mac_address}', args=(self.mac_address,))
        reconnect_thread.start()

    def disconnect_succeeded(self):
        super().disconnect_succeeded()
        print(f"{self.mac_address} Disconnected")
        if not self.shut_down_flag:
            reconnect_thread = threading.Thread(target=reconnect, name=f'reconnect {self.mac_address}', args=(self.mac_address,))
            reconnect_thread.start()

def gatt_connect_device(mac_address):
    global devices
    devices.update({f'{mac_address}': AnyDevice(mac_address=f'{mac_address}', manager=manager)})
    devices[f'{mac_address}'].connect()

#==== OPEN bd_addresses.txt JSON FILE ====#
if path.exists("bd_addresses.txt"):
    with open("bd_addresses.txt", "r") as mac_addresses_json:
        mac_addresses = json.load(mac_addresses_json)
else:
    print("bd_addresses.txt file NOT FOUND\nPlace it in the same directory as multiple_scanners.py")
#========================================#

devices = {}
manager = gatt.DeviceManager(adapter_name='hci0')

for scanner_number in mac_addresses:
    device_instance_thread = threading.Thread(target=gatt_connect_device, name=f'device instance for {mac_addresses[scanner_number]}', args=(mac_addresses[scanner_number],))
    device_instance_thread.start()
    time.sleep(3)

manager.run()

Cause of a multithreading error in an MQTT application (Python)?

In my code I create threads, which call publish.single multiple times over an MQTT connection. However, this error is raised and I cannot understand or find its origin. The only place it mentions my code is line 75, in send_on_sensor.
Exception in thread Thread-639:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 917, in _bootstrap_inner
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "/Users//PycharmProjects//V3_multiTops/mt_GenPub.py", line 75, in send_on_sensor
publish.single(topic, payload, hostname=hostname)
File "/Users//PycharmProjects//venv/lib/python3.7/site-packages/paho/mqtt/publish.py", line 223, in single
protocol, transport)
File "/Users//PycharmProjects//venv/lib/python3.7/site-packages/paho/mqtt/publish.py", line 159, in multiple
client.connect(hostname, port, keepalive)
File "/Users//PycharmProjects//venv/lib/python3.7/site-packages/paho/mqtt/client.py", line 839, in connect
return self.reconnect()
File "/Users//PycharmProjects//venv/lib/python3.7/site-packages/paho/mqtt/client.py", line 962, in reconnect
sock = socket.create_connection((self._host, self._port), source_address=(self._bind_address, 0))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 727, in create_connection
raise err
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 716, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 61] Connection refused
This is the code part mentioned. The thrown line 75 is the one with time.sleep(delay). This method is called on a new thread whenever a new set of data (a queue of points) is to be sent.
def send_on_sensor(q, topic, delay):
    while not q.empty():
        payload = json.dumps(q.get())
        publish.single(topic, payload, hostname=hostname)
        time.sleep(delay)
I get the feeling I am doing something that is not thread-safe. This issue occurs especially when the delay is a short interval (< 1 sec). From my output I can see that the next set of data (100 points) starts sending in a new thread before the first one has finished sending. I can avoid that, and this error, by increasing the time interval between two sets of data. E.g. if I determine the delay between sets using the relation set_delay = 400 * point_delay, I can safely use a delay of 0.1 secs. However, the same relation won't work for smaller delays, so this solution really does not satisfy me.
What can I do about this issue? I really want to get my delay below 0.1 secs and be able to adjust it.
EDIT
This is the method that creates the threads:
def send_dataset(data, labels, secs=0):
    qs = []
    for i in range(8):
        qs.append(queue.Queue())
    for value in data:
        msg = {
            "key": value,
        }
        # c is set accordingly
        qs[c].put(msg)
    for q in qs:
        topic = sensors[qs.index(q)]
        t = threading.Thread(target=send_on_sensor, args=(q, topic, secs))
        t.start()
        time.sleep(secs)
And this is where I start everything off:
output_interval = 0.01
while True:
    X, y = give_dataset()
    send_dataset(X, y, output_interval)
    time.sleep(output_interval * 2000)
Even though you added extra code, it doesn't reveal much. However, I have experience with a similar thing happening to me. I was building a heavily threaded app with MQTT, and it is quite safe. Not totally, but it is.
The reason you get the error when lowering the delay is that you have ONE client. Publishing a message (I can't be sure because I don't see your code) connects, sends the message, and disconnects! Since you are threading this process, you most probably send one message (still in progress) while you are about to publish a new one in a new thread. However, the first thread finishes and disconnects the client, and the new thread's publish fails because the previous thread disconnected it.
Solution:
1) Don't disconnect the client upon publishing.
2) Risky, and you need more code: for every publish, create a new client, but be sure to handle this correctly. That means: create a client, publish, and disconnect, again and again, but make sure you close the connections correctly and delete the clients so you don't accumulate dead clients.
3) A safer take on 2): write a function that does it all, i.e. creates the client, connects, publishes, and dies at the end. If you thread such a function, I guess you will not have to take care of the problems arising in solution 2.
Update:
In case your problem is something else, I still think it's not because of the threads themselves, but because multiple threads are trying to control something that should be controlled by only one thread, like the client object. A minimal sketch of that single-owner idea follows.
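This sketch assumes paho-mqtt, a placeholder broker_host, and the send_on_sensor signature from the question: one long-lived client is created once, its network loop runs in the background, and sender threads share it through a lock instead of reconnecting per message.

import threading
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker_host", 1883, keepalive=60)  # placeholder broker
client.loop_start()              # background thread services the connection
publish_lock = threading.Lock()  # serialize publishes from sender threads

def send_on_sensor(q, topic, delay):
    while not q.empty():
        payload = q.get()  # assumes queue items are already serialized payloads
        with publish_lock:
            client.publish(topic, payload)
        time.sleep(delay)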
Update: template code
Be aware that this is my old code and I don't use it anymore, because my applications need a particular thread setup and so on, so I rewrite this one for each application individually. But it works like a charm for non-threaded apps, and possibly for threaded ones too. It can publish only with qos=0.
import paho.mqtt.client as mqtt
import json

# Define Variables
MQTT_BROKER = ""
MQTT_PORT = 1883
MQTT_KEEPALIVE_INTERVAL = 5
MQTT_TOPIC = ""

class pub:
    def __init__(self, MQTT_BROKER, MQTT_PORT, MQTT_KEEPALIVE_INTERVAL, MQTT_TOPIC, transport=''):
        self.MQTT_TOPIC = MQTT_TOPIC
        self.MQTT_BROKER = MQTT_BROKER
        self.MQTT_PORT = MQTT_PORT
        self.MQTT_KEEPALIVE_INTERVAL = MQTT_KEEPALIVE_INTERVAL
        # Initiate MQTT Client
        if transport == 'websockets':
            self.mqttc = mqtt.Client(transport='websockets')
        else:
            self.mqttc = mqtt.Client()
        # Register Event Handlers
        self.mqttc.on_publish = self.on_publish
        self.mqttc.on_connect = self.on_connect
        self.connect()

    # Define on_connect event Handler
    def on_connect(self, mosq, obj, rc):
        print("mqtt.thingstud.io")

    # Define on_publish event Handler
    def on_publish(self, client, userdata, mid):
        print("Message Published...")

    def publish(self, MQTT_MSG):
        MQTT_MSG = json.dumps(MQTT_MSG)
        # Publish message to MQTT Topic
        self.mqttc.publish(self.MQTT_TOPIC, MQTT_MSG)

    # Connect to MQTT_Broker
    def connect(self):
        self.mqttc.connect(self.MQTT_BROKER, self.MQTT_PORT, self.MQTT_KEEPALIVE_INTERVAL)

    # Disconnect from MQTT_Broker
    def disconnect(self):
        self.mqttc.disconnect()

p = pub(MQTT_BROKER, MQTT_PORT, MQTT_KEEPALIVE_INTERVAL, MQTT_TOPIC)
p.publish('some messages')
p.publish('more messages')
Note that on object creation it connects automatically, but it does not disconnect. That is something you have to do manually.
I suggest you create as many pub objects as you have sensors and publish with them, as sketched below.
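For example, reusing the pub class above (assuming the sensors list of topic names from the question):

publishers = {topic: pub(MQTT_BROKER, MQTT_PORT, MQTT_KEEPALIVE_INTERVAL, topic)
              for topic in sensors}
publishers[sensors[0]].publish({'key': 'value'})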

Python multiprocessing Manager OSError "Only one usage of each socket address"

I am creating a communication platform in python (3.4.4) and using the multiprocessing.managers.BaseManager class. I have isolated the problem to the code below.
The intention is to have a ROVManager(role='server') instance running in one process on the main computer and providing read/write capabilities to the system dictionary for multiple ROVManager(role='client') instances running on the same computer and a ROV (remotely operated vehicle) connected to the same network. This way, multiple clients/processes can do different tasks like reading sensor values, moving motors, printing, logging etc, all using the same dictionary. start_reader() below is one of those clients.
Code
from multiprocessing.managers import BaseManager
import multiprocessing as mp
import sys

class ROVManager(BaseManager):
    def __init__(self, role, address, port, authkey=b'abc'):
        super(ROVManager, self).__init__(address=(address, port),
                                         authkey=authkey)
        if role == 'server':
            self.system = {'shutdown': False}
            self.register('system', callable=lambda: self.system)
            server = self.get_server()
            server.serve_forever()
        elif role == 'client':
            self.register('system')
            self.connect()

def start_server(server_ip, port_var):
    print('starting server')
    ROVManager(role='server', address=server_ip, port=port_var)

def start_reader(server_ip, port_var):
    print('starting reader')
    mgr = ROVManager(role='client', address=server_ip, port=port_var)
    i = 0
    while not mgr.system().get('shutdown'):
        sys.stdout.write('\rTotal while loops: {}'.format(i))
        i += 1

if __name__ == '__main__':
    server_p = mp.Process(target=start_server, args=('0.0.0.0', 5050))
    reader_p = mp.Process(target=start_reader, args=('127.0.0.1', 5050))
    server_p.start()
    reader_p.start()
    while True:
        # Check system status, restart processes etc here
        pass
Error
This results in the following output and error:
starting server
starting reader
Total while loops: 15151
Process Process-2:
Traceback (most recent call last):
  File "c:\python34\Lib\multiprocessing\process.py", line 254, in _bootstrap
    self.run()
  File "c:\python34\Lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "C:\git\eduROV\error_test.py", line 29, in start_reader
    while not mgr.system().get('shutdown'):
  File "c:\python34\Lib\multiprocessing\managers.py", line 640, in temp
    token, exp = self._create(typeid, *args, **kwds)
  File "c:\python34\Lib\multiprocessing\managers.py", line 532, in _create
    conn = self._Client(self._address, authkey=self._authkey)
  File "c:\python34\Lib\multiprocessing\connection.py", line 496, in Client
    c = SocketClient(address)
  File "c:\python34\Lib\multiprocessing\connection.py", line 629, in SocketClient
    s.connect(address)
OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
My research
Total while loops are usually in the range 15000-16000. From my understanding it seems like a socket is created and terminated each time mgr.system().get('shutdown') is called. Windows then runs out of available sockets. I can't seem to find a way to set socket.SO_REUSEADDR.
Is there a way of solving this, or are Managers not made for this kind of communication? Thanks :)
As the error suggests ("Only one usage of each socket address"), in general you can/should bind only a single process to a socket (unless you design your application accordingly, by passing the SO_REUSEADDR option while creating the socket). These lines
server_p = mp.Process(target=start_server, args=('0.0.0.0', 5050))
reader_p = mp.Process(target=start_reader, args=('127.0.0.1', 5050))
create two processes on the same port 5050, hence the error.
You can refer here to learn how to use SO_REUSEADDR and its implications, but I am quoting the main part, which should get you going:
The second socket calls setsockopt with the optname parameter set to
SO_REUSEADDR and the optval parameter set to a boolean value of TRUE
before calling bind on the same port as the original socket. Once the
second socket has successfully bound, the behavior for all sockets
bound to that port is indeterminate. For example, if all of the
sockets on the same port provide TCP service, any incoming TCP
connection requests over the port cannot be guaranteed to be handled
by the correct socket — the behavior is non-deterministic.
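Purely as an illustration of the quoted mechanism (not a drop-in fix, since BaseManager does not expose this option directly), SO_REUSEADDR is set on a raw socket before bind like this:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('0.0.0.0', 5050))
s.listen(1)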

Why does pyro4 fail to locate nameserver under 127.0.1.1 but succeed with 127.0.0.1?

I am trying to run a pyro4 server with a custom event loop on a Raspberry Pi running Raspbian 8 (jessie). When I create a nameserver using the hostname obtained from socket.gethostname(), specifically 'raspberrypi', my client script cannot find the nameserver. When I use 'localhost' as the hostname, my client script is able to find the nameserver. In /etc/hosts, 'raspberrypi' is bound to 127.0.1.1, while 'localhost' is obviously bound to 127.0.0.1. I had thought that both of these addresses were bound to the loopback interface, so I don't understand why one should work and not the other.
For what it's worth, after some digging in the pyro4 code, it looks like at l.463 of Pyro4.naming.py, the call to proxy.ping() fails with 127.0.1.1 but not with 127.0.0.1, and this is ultimately what triggers the failure with the former address. Not being an expert in Pyro, it isn't clear to me whether this behavior is expected. Any thoughts? I assume this must be a common problem, because most (all?) flavors of Debian include separate lines in /etc/hosts for these two addresses.
I have attached code below that reproduces the problem. This is basically just a slightly modified version of the "eventloop" example that ships with pyro.
server.py:
import socket
import select
import sys
import Pyro4.core
import Pyro4.naming
import MotorControl

Pyro4.config.SERVERTYPE = "thread"
hostname = socket.gethostname()

print("initializing services... servertype=%s" % Pyro4.config.SERVERTYPE)
# start a name server with broadcast server as well
nameserverUri, nameserverDaemon, broadcastServer = Pyro4.naming.startNS(host=hostname)
pyrodaemon = Pyro4.core.Daemon(host=hostname)
motorcontroller = MotorControl.MotorControl()
serveruri = pyrodaemon.register(motorcontroller)
nameserverDaemon.nameserver.register("example.embedded.server", serveruri)

# below is our custom event loop.
while True:
    nameserverSockets = set(nameserverDaemon.sockets)
    pyroSockets = set(pyrodaemon.sockets)
    rs = []
    rs.extend(nameserverSockets)
    rs.extend(pyroSockets)
    rs, _, _ = select.select(rs, [], [], 0.001)
    eventsForNameserver = []
    eventsForDaemon = []
    for s in rs:
        if s in nameserverSockets:
            eventsForNameserver.append(s)
        elif s in pyroSockets:
            eventsForDaemon.append(s)
    if eventsForNameserver:
        nameserverDaemon.events(eventsForNameserver)
    if eventsForDaemon:
        pyrodaemon.events(eventsForDaemon)
    motorcontroller.increment_count()

nameserverDaemon.close()
broadcastServer.close()
pyrodaemon.close()
client.py:
from __future__ import print_function
import Pyro4
proxy=Pyro4.core.Proxy("PYRONAME:example.embedded.server")
print("count = %d" % proxy.get_count())
MotorControl.py
class MotorControl(object):
    def __init__(self):
        self.switches = 0

    def get_count(self):
        return self.switches

    def increment_count(self):
        self.switches = self.switches + 1
error:
Traceback (most recent call last):
File "pyroclient.py", line 5, in <module>
print("count = %d" % proxy.get_count())
File "/usr/local/lib/python2.7/dist-packages/Pyro4/core.py", line 248, in __getattr__
self._pyroGetMetadata()
File "/usr/local/lib/python2.7/dist-packages/Pyro4/core.py", line 548, in _pyroGetMetadata
self.__pyroCreateConnection()
File "/usr/local/lib/python2.7/dist-packages/Pyro4/core.py", line 456, in __pyroCreateConnection
uri = resolve(self._pyroUri, self._pyroHmacKey)
File "/usr/local/lib/python2.7/dist-packages/Pyro4/naming.py", line 548, in resolve
nameserver = locateNS(uri.host, uri.port, hmac_key=hmac_key)
File "/usr/local/lib/python2.7/dist-packages/Pyro4/naming.py", line 528, in locateNS
raise e
Pyro4.errors.NamingError: Failed to locate the nameserver
Pyro's name server lookup relies on two things:
1) broadcast lookup
2) direct lookup by hostname/ip-address
The first is not available when you're using the loopback adapter to bind the name server on (loopback doesn't support broadcast sockets). So we're left with the second one.
The answer to your question is then simple: the direct lookup is done on the value of the NS_HOST config item, which is by default set to 'localhost'. As localhost resolves to 127.0.0.1 it will never connect to 127.0.1.1.
Suggestion: bind the name server on 0.0.0.0 or "" (empty hostname) and it should be able to start a broadcast responder as well. Then your clients won't have any problem locating it.
Alternatively, simply set NS_HOST to 127.0.1.1 (or the hostname of your box) for your clients, and they should be able to locate the name server as well, provided it is bound on 127.0.1.1.
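A sketch of that client-side variant; the address is an example, and NS_HOST can equally be set through the PYRO_NS_HOST environment variable:

import Pyro4

Pyro4.config.NS_HOST = "127.0.1.1"  # or the hostname of your box
proxy = Pyro4.core.Proxy("PYRONAME:example.embedded.server")
print("count = %d" % proxy.get_count())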

How do you check if the client for a MongoDB instance is valid?

In particular, I am currently trying to check if a connection to a client is valid using the following function:
def mongodb_connect(client_uri):
    try:
        return pymongo.MongoClient(client_uri)
    except pymongo.errors.ConnectionFailure:
        print "Failed to connect to server {}".format(client_uri)
I then use this function like this:
def bucket_summary(self):
    client_uri = "some_client_uri"
    client = mongodb_connect(client_uri)
    db = client[tenant_id]
    ttb = db.timebucket.count()  # If I use an invalid URI it hangs here
Is there a way to catch and throw an exception at the last line if an invalid URI is given? I initially thought that's what the ConnectionFailure was for (so this could be caught when connecting) but I was wrong.
If I run the program with an invalid URI, it hangs; issuing a KeyboardInterrupt then yields:
File "reportjob_status.py", line 58, in <module>
tester.summarize_timebuckets()
File "reportjob_status.py", line 43, in summarize_timebuckets
ttb = db.timebucket.count() #error
File "/Library/Python/2.7/site-packages/pymongo/collection.py", line 1023, in count
return self._count(cmd)
File "/Library/Python/2.7/site-packages/pymongo/collection.py", line 985, in _count
with self._socket_for_reads() as (sock_info, slave_ok):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/Library/Python/2.7/site-packages/pymongo/mongo_client.py", line 699, in _socket_for_reads
with self._get_socket(read_preference) as sock_info:
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/Library/Python/2.7/site-packages/pymongo/mongo_client.py", line 663, in _get_socket
server = self._get_topology().select_server(selector)
File "/Library/Python/2.7/site-packages/pymongo/topology.py", line 121, in select_server
address))
File "/Library/Python/2.7/site-packages/pymongo/topology.py", line 106, in select_servers
self._condition.wait(common.MIN_HEARTBEAT_INTERVAL)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 358, in wait
_sleep(delay)
The serverSelectionTimeoutMS keyword parameter of pymongo.mongo_client.MongoClient controls how long the driver will try to connect to a server. The default value is 30s.
Set it to a very low value compatible with your typical connection time¹ to report an error immediately. You need to query the DB after that to trigger a connection attempt:
>>> maxSevSelDelay = 1  # Assume 1ms maximum server selection delay
>>> client = pymongo.MongoClient("someInvalidURIOrNonExistantHost",
...                              serverSelectionTimeoutMS=maxSevSelDelay)
>>> client.server_info()
This will raise pymongo.errors.ServerSelectionTimeoutError.
¹ Apparently setting serverSelectionTimeoutMS to 0 might even work in the particular case your server has very low latency (case of a "local" server with very light load for example)
It is up to you to catch that exception and handle it properly. Something like this:
try:
    client = pymongo.MongoClient("someInvalidURIOrNonExistantHost",
                                 serverSelectionTimeoutMS=maxSevSelDelay)
    client.server_info()  # force connection on a request as the
                          # connect=True parameter of MongoClient seems
                          # to be useless here
except pymongo.errors.ServerSelectionTimeoutError as err:
    # do whatever you need
    print(err)
will display:
No servers found yet
To find out whether the connection is established, you can do this:
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient()
try:
    # The ismaster command is cheap and does not require auth.
    client.admin.command('ismaster')
except ConnectionFailure:
    print("Server not available")
serverSelectionTimeoutMS
This defines how long to block for server selection before throwing an
exception. The default is 30,000 (milliseconds). It MUST be
configurable at the client level. It MUST NOT be configurable at the
level of a database object, collection object, or at the level of an
individual query.
This default value was chosen to be sufficient for a typical server
primary election to complete. As the server improves the speed of
elections, this number may be revised downward.
Users that can tolerate long delays for server selection when the
topology is in flux can set this higher. Users that want to "fail
fast" when the topology is in flux can set this to a small number.
A serverSelectionTimeoutMS of zero MAY have special meaning in some
drivers; zero's meaning is not defined in this spec, but all drivers
SHOULD document the meaning of zero.
https://github.com/mongodb/specifications/blob/master/source/server-selection/server-selection.rst#serverselectiontimeoutms
# pymongo 3.5.1
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

client = MongoClient("mongodb://localhost:27000/", serverSelectionTimeoutMS=10, connectTimeoutMS=20000)
try:
    info = client.server_info()  # Forces a call.
except ServerSelectionTimeoutError:
    print("server is down.")
# If it connects, create a new client with serverSelectionTimeoutMS=30000
serverSelectionTimeoutMS doesn't work for me (Python 2.7.12, MongoDB 3.6.1, pymongo 3.6.0). A. Jesse Jiryu Davis suggested in a GitHub issue that we attempt a socket-level connection first as a litmus test. This does the trick for me.
def throw_if_mongodb_is_unavailable(host, port):
    import socket
    sock = None
    try:
        sock = socket.create_connection(
            (host, port),
            timeout=1)  # one second
    except socket.error as err:
        raise EnvironmentError(
            "Can't connect to MongoDB at {host}:{port} because: {err}"
            .format(**locals()))
    finally:
        if sock is not None:
            sock.close()

# elsewhere...
HOST = 'localhost'
PORT = 27017
throw_if_mongodb_is_unavailable(HOST, PORT)
import pymongo
conn = pymongo.MongoClient(HOST, PORT)
print(conn.admin.command('ismaster'))
# etc.
There are plenty of problems this won't catch, but if the server isn't running or isn't reachable, this'll show you right away.
It can also be checked this way:
from pymongo import MongoClient
from pymongo.errors import OperationFailure

def check_mongo_connection(client_uri):
    connection = MongoClient(client_uri)
    try:
        connection.database_names()
        print('Data Base Connection Established........')
    except OperationFailure as err:
        print(f"Data Base Connection failed. Error: {err}")

check_mongo_connection(client_uri)
For pymongo >= 4.0, the preferred method is to use the ping command instead of the deprecated ismaster:
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient()
try:
    client.admin.command('ping')
except ConnectionFailure:
    print("Server not available")
To handle auth failures, include OperationFailure:
except OperationFailure as err:
    print(f"Database error encountered: {err}")
Source: mongo_client.py
