Why is PyMongo 3 giving ServerSelectionTimeoutError?

I'm using:
Python 3.4.2
PyMongo 3.0.2
mongolab running mongod 2.6.9
uWSGI 2.0.10
CherryPy 3.7.0
nginx 1.6.2
uWSGI start params:
--socket 127.0.0.1:8081 --daemonize --enable-threads --threads 2 --processes 2
I set up my MongoClient ONE time:
self.mongo_client = MongoClient('mongodb://user:pw@host.mongolab.com:port/mydb')
self.db = self.mongo_client['mydb']
I try and save a JSON dict to MongoDB:
result = self.db.jobs.insert_one(job_dict)
It works via a unit test that exercises the same code path to MongoDB. However, when I execute it via CherryPy and uWSGI using an HTTP POST, I get this:
pymongo.errors.ServerSelectionTimeoutError: No servers found yet
Why am I seeing this behavior when run via CherryPy and uWSGI? Is this perhaps the new thread model in PyMongo 3?
Update:
If I run without uWSGI and nginx by using the CherryPy built-in server, the insert_one() works.
Update 1/25 4:53pm EST:
After adding some debug in PyMongo, it appears that topology._update_servers() knows that the server_type = 2 for server 'myserver-a.mongolab.com'. However server_description.known_servers() has the server_type = 0 for server 'myserver.mongolab.com'
This leads to the following stack trace:
result = self.db.jobs.insert_one(job_dict)
File "/usr/local/lib/python3.4/site-packages/pymongo/collection.py", line 466, in insert_one
with self._socket_for_writes() as sock_info:
File "/usr/local/lib/python3.4/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.4/site-packages/pymongo/mongo_client.py", line 663, in _get_socket
server = self._get_topology().select_server(selector)
File "/usr/local/lib/python3.4/site-packages/pymongo/topology.py", line 121, in select_server
address))
File "/usr/local/lib/python3.4/site-packages/pymongo/topology.py", line 97, in select_servers
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: No servers found yet

We're investigating this problem, tracked in PYTHON-961. You may be able to work around the issue by passing connect=False when creating instances of MongoClient. That defers the background connection until the first database operation is attempted, avoiding what I suspect is a race condition between the spin-up of MongoClient's monitor thread and multiprocess forking.
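For example, a minimal sketch of the connect=False workaround (the URI and job_dict are placeholders matching the question):
from pymongo import MongoClient

# connect=False defers the background connection (and the monitor thread)
# until the first actual operation, i.e. after uWSGI has forked its workers.
mongo_client = MongoClient('mongodb://user:pw@host.mongolab.com:port/mydb', connect=False)
db = mongo_client['mydb']
result = db.jobs.insert_one(job_dict)  # the first operation triggers server selection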

As mentioned here: https://stackoverflow.com/a/54314615/8953378
I added ?ssl=true&ssl_cert_reqs=CERT_NONE to my connection string, and it fixed the issue.
so instead of:
connection_string = "mongodb+srv://<USER>:<PASSWORD>@<CLUSTER>/<COLLECTION>"
I wrote:
connection_string = "mongodb+srv://<USER>:<PASSWORD>@<CLUSTER>/<COLLECTION>?ssl=true&ssl_cert_reqs=CERT_NONE"
(Note that if your connection string already contains other parameters, append the new ones with & instead of ?.)
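If I remember correctly, PyMongo 3.x also accepts these TLS options as keyword arguments instead of URI parameters; a sketch with placeholder credentials (ssl_cert_reqs=CERT_NONE disables certificate verification, so the server's certificate is not validated):
import ssl
from pymongo import MongoClient

# Equivalent to appending ?ssl=true&ssl_cert_reqs=CERT_NONE to the URI.
client = MongoClient("mongodb+srv://<USER>:<PASSWORD>@<CLUSTER>/<COLLECTION>",
                     ssl=True, ssl_cert_reqs=ssl.CERT_NONE)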

I am not sure if you are using MongoDB paired with an AWS cloud service. But if you are, I found that you have to specify which IP addresses are allowed to access MongoDB.
So what you need to do is add the IP address of your host server to the allow list.
In MongoDB Atlas, this can be done on the Network Access page.
I know there was already a solution to the same issue, but I didn't find one that helped my situation, so I wanted to post this so others can benefit if they ever face the same problem that I did.

I fixed it for myself by downgrading from pymongo 3.0 to 2.8. No idea what's going on.
flask/bin/pip uninstall pymongo
flask/bin/pip install pymongo==2.8

I had the same problem with PyMongo 3.5.
It turns out that replacing localhost with 127.0.0.1, or with the actual IP address of your MongoDB instance, solves the problem.
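For example, a minimal sketch assuming MongoDB is listening on the default port:
from pymongo import MongoClient

# 'localhost' can resolve to the IPv6 address ::1 on some systems, which the
# mongod instance may not be bound to; the literal IPv4 address avoids that.
client = MongoClient('mongodb://127.0.0.1:27017/')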

I solved this by installing dnspython (pip install dnspython). The issue is the error: "The "dnspython" module must be installed to use mongodb+srv:// URIs".
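A minimal sketch (the cluster address and credentials are placeholders):
# pip install dnspython
from pymongo import MongoClient

# mongodb+srv:// URIs are resolved via DNS SRV records, which requires dnspython.
client = MongoClient("mongodb+srv://<USER>:<PASSWORD>@<CLUSTER>/<DB>")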

Go to your Atlas console > Network Access, then add your client IP address,
e.g. 0.0.0.0/0
(Note: 0.0.0.0/0 lets all client IPs access your database.)

In my case
I was using MongoDB Atlas
and got another IP address after a router reboot,
hence I had to add that IP to the whitelist in the MongoDB Atlas settings via
MongoDB Atlas website -> Network Access -> IP Whitelist -> Add IP Address -> Add Current IP Address
then wait for the IP address's status to change to Active and try to run the app again.
If you are using a repl.it server to host, just add the host IP you used to configure your server; for me it was 0.0.0.0, which is the most common.

I was facing the same exception today. In my case, proxy settings were probably blocking the connection, since I could connect to MongoDB successfully after switching to a different Wi-Fi network. Even though this question is already marked as solved, hopefully this helps narrow down the problem for others.

I've come across the same problem, and finally I found that the client IP was blocked by the firewall of the MongoDB server.

I encountered this too.
It could be because PyMongo 3 isn't fork-safe.
I fixed it by adding the --lazy-apps parameter to uWSGI, which avoids the fork-safety problem; see the sketch below.
See the uWSGI docs on preforking-vs-lazy-apps-vs-lazy.
Note: I'm not certain the two issues are directly related.
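For example, appending the flag to the start parameters from the question (a sketch; the daemonize/logging options are omitted here):
uwsgi --socket 127.0.0.1:8081 --enable-threads --threads 2 --processes 2 --lazy-apps
With --lazy-apps, each worker loads the application after the fork, so MongoClient and its monitor thread are created inside the worker process rather than inherited from the master.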

I simply added my current IP address in the Network Access tab, as it had changed automatically, and deleted the earlier one; there was a slight change in the IP address.

PyMongo 3 will not tell you that your connection failed when you instantiate the client, so you may not actually be connected.
https://api.mongodb.com/python/3.5.1/api/pymongo/mongo_client.html
"it no longer raises ConnectionFailure if they are unavailable ..
You can check if the server is available like this:"
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient()
try:
    # The ismaster command is cheap and does not require auth.
    client.admin.command('ismaster')
except ConnectionFailure:
    print("Server not available")

Maybe you can try adding your server's IP address to the mongod.conf file.
If you use a Linux (Ubuntu) OS, you can try my solution.
Modify the mongod.conf file:
vi /etc/mongod.conf
Add the MongoDB server's IP address after 127.0.0.1 and save:
net:
  port: 27017
  bindIp: 127.0.0.1,<mongodb server ip>
Then, in the terminal:
sudo service mongod restart
Now you can try to connect to MongoDB using pymongo's MongoClient.

That error occurs because there is no MongoDB server running in the background. To run the MongoDB server, open cmd or an Anaconda prompt and type:
"C:\Program Files\MongoDB\Server\3.6\bin\mongod.exe"
then run
import pymongo
myclient = pymongo.MongoClient()
mydb = myclient["mydatabase"]
myclient.list_database_names()

I'm using pymongo 3.2 and I ran into the same error; however, it was a misconfiguration in my case. After enabling authorization, I forgot to update the port in the URL, which ended up in a connection timeout. It is probably worth mentioning that ?authSource may also be required, as the authentication database is typically different from the database storing the application data.
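For example, a sketch of a URI with an explicit port and authentication database (host, port, and credentials are placeholders):
from pymongo import MongoClient

# authSource names the database holding the user's credentials,
# which is often 'admin' rather than the application database.
client = MongoClient('mongodb://user:password@host:27017/mydb?authSource=admin')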

I commented out the bindIp variable in mongod.conf instead of allowing all connections (for which you would have to enter 0.0.0.0). Of course, beware of the consequences.

The developers are investigating this problem, tracked in PYTHON-961. You may be able to work around the issue by running mongod.exe manually and keeping an eye on it: the issue arises when the console freezes, and you can hit Enter if the mongod console gets stuck. This is the simplest workaround for now, until the developers fix this bug.

I ran into the same issue during development. It was due to the fact that mongodb wasn't running on my local machine (sudo systemctl restart mongod to get mongodb running on Ubuntu).

I faced the same error on Windows, and I just started the MongoDB service:
open the Run dialog (Win+R), type services.msc, then press Enter.

In my case I just set my IP allow list to 0.0.0.0 (allow from anywhere), but you can look up your own IP by searching "what is my ip" and copy-paste it under Network Access > Add IP.

I have been struggling with the same problem. Neither reads nor inserts worked at all; both failed with ServerSelectionTimeoutError.
I was using pymongo==3.11.4 on Ubuntu 18.04 LTS.
I tried connect=False, passing extra ?ssl=true&ssl_cert_reqs=CERT_NONE options in my connection string, and the other suggestions listed above. In my case they didn't work.
Finally I simply tried upgrading to pymongo==3.12.1, and the connection started to work without passing connect=False or any of the other extra arguments suggested.
login = '<USERNAME>'
password = '<PASSWORD>'
host = '*.mongodb.net'
db = '<DB>'
uri = f'mongodb+srv://{login}:{password}@{host}/{db}?retryWrites=true&w=majority'
client = MongoClient(uri, authSource='admin')  # connect=False no longer needed
collection = client[db].get_collection('collection_name')
# t = collection.find_one({'hello': '1'})
t = collection.insert_one({'hello': '2'})
print(t)

Make sure you entered the database user's password, not the MongoDB account password. I encountered a similar issue: in my case, I mistakenly entered the MongoDB account password instead of the user password.

I had this issue today. I managed to deal with it by:
installing the dnspython library > going to the MongoDB web page > signing in > Security > Network Access > Add IP Address > adding the IP address my requests come from.
Hope this helps someone.

I had the same issue: the code that was working perfectly fine two minutes before suddenly gave this error. I searched Google for solutions for about 30 minutes and it fixed itself. The problem may have been my home internet connection. Just a guess, but if you haven't made any changes to the code or any config file, it may be best to wait for some time and retry.

I was also facing the same issue. Then, I added
import certifi
Client = MongoClient("mongodb+srv://<username>:<password>@cluster0.ax9ugoz.mongodb.net/?retryWrites=true&w=majority", tlsCAFile=certifi.where())
and it solved my issue.
Certifi provides a collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts.

This has been fixed in PyMongo with this pull request.

This problem was solved when I toggled the MongoDB service, which had previously been stopped, to running in Services.

First set up the MongoDB environment.
Run this on CMD - "C:\Program Files\MongoDB\Server\3.6\bin\mongod.exe"
Open another CMD and run this - "C:\Program Files\MongoDB\Server\3.6\bin\mongo.exe"
And then you can use pymongo [anaconda prompt]
import pymongo
from pymongo import MongoClient
client = MongoClient()
db = client.test_db
collection = db['test_coll']
Refer - https://docs.mongodb.com/tutorials/install-mongodb-on-windows/

Related

How to connect to a Lotus-Notes database with Python?

I have to extract data from a Notes database automatically for a data pipeline validation.
With HCL Notes I was able to connect to the database, so I know the access works.
I have the following information to access the database:
host (I got both hostname and ip address), domino server name, database name (.nsf filename)
I tried the noteslib library in the below way:
import noteslib
db = noteslib.Database('domino server name','db filename.nsf')
I also tried adding the host to the server parameter instead, but it does not work.
I receive this error:
Error connecting to ...Double-check the server and database file names, and make sure you have
read access to the database.
My question is how can I add the host and the domino server name as well (if it is required)?
Notes HCL authenticates me before accessing the database using the domino server name and the .nsf file. I tried adding the password parameter to the end, but also without luck. I am on company VPN, so that also should not be an issue.
In order for noteslib to work you need an installed and configured HCL Notes client on that machine. Only with an installed Notes client are the needed COM registrations and the DLLs to connect to Domino present.
In addition, the Notes client and the Python version you are using need to have the same bitness: if the Notes client is 32-bit, then Python needs to be 32-bit; if the Notes client is 64-bit (only available since 12.0.2), then Python needs to be 64-bit as well.
As soon as this requirement is met, you can simply use your example by adding the password parameter as a third parameter to your command:
db = noteslib.Database('domino server name','db filename.nsf', 'yourIDPassword')
If you still get an error when connecting to the server then you might need to put the server common name and its IP address into your hosts file.
So if your Domino server name is
YourServer/YourOrganization
and the IP address of that server is
192.168.1.20
then you put this into your hosts:
192.168.1.20    yourserver
You can connect using COM on Windows.
I use the pywin32 library: https://pypi.org/project/pywin32/
import win32com.client
import sys
notesServer = "Servername/Domain"
notesFile = "file.nsf"
notesPass = ""
#Connect to notes database on server
notesSession = win32com.client.Dispatch('Lotus.NotesSession')
notesSession.Initialize(notesPass)
notesDatabase = notesSession.GetDatabase(notesServer,notesFile)
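Building on that, a minimal sketch of reading documents once the database handle is open (the view name 'MyView' and the item name 'Subject' are hypothetical, for illustration only):
# Iterate the documents of a view and read one item from each document.
view = notesDatabase.GetView('MyView')      # hypothetical view name
doc = view.GetFirstDocument()
while doc is not None:
    print(doc.GetItemValue('Subject'))      # hypothetical item name
    doc = view.GetNextDocument(doc)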

App Engine, pymongo.errors.ServerSelectionTimeoutError: connection closed,connection closed,connection closed

I'm using Python 3.7 and Flask 1.0.2
I connected my app to MongoDB Atlas, and everything works fine locally.
client = pymongo.MongoClient(connector)
connector is my standard connection string given by Atlas
connector = "mongodb://xxx:<PASSWORD>#xxcluster-shard-00-00-y0phk.gcp.mongodb.net:27017,xxcluster-shard-00-01-y0phk.gcp.mongodb.net:27017,xxxcluster-shard-00-02-y0phk.gcp.mongodb.net:27017/test?ssl=true&replicaSet=xxxCluster-shard-0&authSource=admin&retryWrites=true"
When I deploy my app to the Google App Engine standard Python 3 runtime environment, it does not work. Would anyone have an idea of the problem?
App Engine error:
pymongo.errors.ServerSelectionTimeoutError: connection closed,connection closed,connection closed
The problem was the IP whitelist, so I added access via the VPC peering connection with GCP.
To keep it simple, you can add 0.0.0.0/0 to allow access from anywhere (but be careful).
In my case, I had ssl=False. Hope this helps someone!
The problem is probably the Network Access configuration in your MongoDB whitelist.
After adding your IP the problem should be solved.
Keep in mind that the IP address must be IPv4.
I would suggest setting up a peering connection between your network and the MongoDB cluster's network with the VPC Peering feature of Atlas. Way more secure ;)
Make sure to follow the instructions for a private-only connection; to do so, you need to adjust the URI by adding -pri.
Example:
old_con = "mongodb://xxx:<PASSWORD>#xxcluster-shard-00-00-y0phk.gcp.mongodb.net:27017,xxcluster-shard-00-01-y0phk.gcp.mongodb.net:27017,xxxcluster-shard-00-02-y0phk.gcp.mongodb.net:27017/test?ssl=true&replicaSet=xxxCluster-shard-0&authSource=admin&retryWrites=true"
new_con = "mongodb://xxx:<PASSWORD>#xxcluster-shard-00-00-y0phk-pri.gcp.mongodb.net:27017,xxcluster-shard-00-01-y0phk-pri.gcp.mongodb.net:27017,xxxcluster-shard-00-02-y0phk-pri.gcp.mongodb.net:27017/test?ssl=true&replicaSet=xxxCluster-shard-0&authSource=admin&retryWrites=true"

Remote tcp connection in python with zeromq

I have a python client that needs to talk to a remote server I manage. They communicate using zeromq. When I tested the client/server locally everything worked. But now I have the client and server deployed on the cloud, each using a different provider. My question is, what's the simplest way (that is safe) to make the connection? I'm assuming I can't pass the password over, and even if I could I'm guessing there are safer alternatives.
I know how to set an ssh connection without a password using ssh-keygen. Would that work? Would the client need to make an ssh connection with the server before sending the tcp req? If there's a python library that helps with this it'd be a big help.
Thanks!
Update:
So more than 24 hours passed and no one replied/answered. I think I'm getting closer to solving this, but not quite there yet. I added my client's key to .ssh/authorized_keys on the server, and now I can ssh from the client to the server without a password. Next, I followed this post about "Tunneling PyZMQ Connections with SSH". Here's what I have in my client code:
1 context = zmq.Context()
2 socket = context.socket(zmq.REQ)
3 socket.connect("tcp://localhost:5555")
4 ssh.tunnel_connection(socket, "tcp://locahost:5555", "myuser@remote-server-ip:5555")
5 socket.send_string(some_string)
6 reply = socket.recv()
This doesn't work. I don't really understand lines 3 & 4 and I assume I'm doing something wrong there. Also, my server (hosted on Linode) has a "Default Gateway" IP and a "Public IP"; in the tunnel connection I only specify the public IP, which is also the IP I use to ssh to the machine.
Indeed, the ZMQ way is tunnelling the connection over SSH. Your example is exactly what needs to be done, except that one should use either connect or tunnel_connection, not both.
Also, when specifying the server to connect to, make sure to define the SSH port, not the ZMQ REP socket port. That is, instead of myuser@remote-server-ip:5555 you might try myuser@remote-server-ip or myuser@remote-server-ip:22.
import zmq
import zmq.ssh
context = zmq.Context()
socket = context.socket(zmq.REQ)
zmq.ssh.tunnel_connection(socket, "tcp://localhost:5555", "myuser@remote-server-ip")
socket.send(b"Hello")
reply = socket.recv()
Finally, make sure you've installed either pexpect or paramiko - they will do the tunnelling actually. Note that if you're using Windows, paramiko is the only solution which will work - pexpect openssh tunnelling won't work on Windows.
If you use paramiko instead of pexpect, make sure to set paramiko=True in the tunnel_connection arguments.
I have found ssh in Python to be iffy at best, even with paramiko and fabric libraries, so to debug, you might try setting up a tunnel separately, just to see if that's the issue with the broken connection.
For example:
ssh myuser#remote-server-ip -L 5050:localhost:5555 -N
This says: connect to myuser#remote-server-ip, and whenever I request a connection to localhost:5050 on my machine, forward it across the ssh connection so that the server at remote-server-ip thinks it's receiving a connection from localhost:5555.
-L constructs the tunnel, and -N means don't do anything else on the connection.
With that running in another shell, e.g., a different Terminal window, on your local development machine, try to connect to a zeromq server at localhost:5050, which will actually be the zeromq running on the remote server.
You could use 5555:localhost:5555 in the ssh command above, but I find that can be confusing and often conflicts with a local copy of the same service.
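Continuing that example, the client then talks to the local end of the tunnel (a sketch; port 5050 matches the ssh -L command above):
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
# 5050 is the local end of the ssh tunnel; sshd forwards it to the remote
# server's localhost:5555, where the ZMQ REP socket is listening.
socket.connect("tcp://localhost:5050")
socket.send(b"Hello")
reply = socket.recv()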

Unexpected session close error is thrown when connecting netconf

I am using ncclient to connect to NETCONF. However, whenever I try to connect through Python, an
"ncclient.transport.errors.SessionCloseError: Unexpected session close" error is thrown. The code snippet I am using is given below:
manager.connect('<servername>', 22, username='<username>')
Any help on this is much appreciated. I am able to connect to the remote server using a public key, hence I didn't provide a password in connect.
And in the NETCONF server logs I can see an access-denied error. (I got the same problem even when I tried with a username and password.)
You haven't given a lot of information.
Which version of ncclient are you using?
Which version of Python are you using?
Which NETCONF implementation are you trying to connect to? Is this to an actual switch or router, or something like a Linux server running libnetconf or yuma?
Based on the info here, I could imagine a couple of things being wrong:
paramiko isn't using the right key to establish SSH transport.
You're attempting to establish a NETCONF session with an SSH server rather than a NETCONF server.
In your script, create some logs with something like manager.logging.basicConfig(filename='ncclient.log', level=manager.logging.DEBUG) and then re-run your script - do you get anything more informative?
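As a concrete sketch of that debugging suggestion, with an explicit key file in case paramiko is not picking up the right one (the host details and key path are placeholders):
import logging
from ncclient import manager

# Log the SSH/NETCONF exchange to see why the session is being closed.
logging.basicConfig(filename='ncclient.log', level=logging.DEBUG)

m = manager.connect(host='<servername>', port=22, username='<username>',
                    key_filename='/home/<user>/.ssh/id_rsa',
                    hostkey_verify=False)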
This is an old question, but I hope I can point you in the right direction at least.
It's possible that your machines don't know each other (as when you connect via normal ssh and get the "unknown key, really connect (y/n)?" prompt). In that case, by default the session will not connect. To change this behavior, use the "unknown_host_cb" parameter:
def allowUnknownHosts(host, fingerprint):
    return True

self.manager = manager.connect(host=host, port=port, username=user, password=password, unknown_host_cb=allowUnknownHosts)

Error 2006: "MySQL server has gone away" using Python, Bottle Microframework and Apache

After accessing my web app using:
- Python 2.7
- the Bottle micro framework v. 0.10.6
- Apache 2.2.22
- mod_wsgi
- on Ubuntu Server 12.04 64bit; I'm receiving this error after several hours:
OperationalError: (2006, 'MySQL server has gone away')
I'm using the MySQLdb driver for Python. It usually happens when I don't access the server. I've tried closing all the connections, which I do, using this:
cursor.close()
db.close()
where db is the standard MySQLdb.Connection() call.
The my.cnf file looks something like this:
key_buffer = 16M
max_allowed_packet = 128M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
#max_connections = 100
#table_cache = 64
#thread_concurrency = 10
It is the default configuration file except max_allowed_packet is 128M instead of 16M.
The queries to the database are quite simple, at most they retrieve approximately 100 records.
Can anyone help me fix this? One idea I did have was to use try/except, but I'm not sure whether that would actually work.
Thanks in advance,
Jamie
Update: try/except calls didn't work.
This is MySQL error, not Python's.
The list of possible causes and possible solutions is here: MySQL 5.5 Reference Manual: C.5.2.9. MySQL server has gone away.
Possible causes include:
You tried to run a query after closing the connection to the server. This indicates a logic error in the application that should be corrected.
A client application running on a different host does not have the necessary privileges to connect to the MySQL server from that host.
You have encountered a timeout on the server side and the automatic reconnection in the client is disabled (the reconnect flag in the MYSQL structure is equal to 0).
You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section C.5.2.10, “Packet too large”.
You also get a lost connection if you are sending a packet 16MB or larger if your client is older than 4.0.8 and your server is 4.0.8 and above, or the other way around.
and so on...
In other words, there are plenty of possible causes. Go through that list and check every possible cause.
Make sure you are not trying to commit to a closed MySqldb object
An answer to a (very closely related) question has been posted here: https://stackoverflow.com/a/982873/209532
It relates directly to the MySQLdb driver (MySQL-python, unmaintained, and mysqlclient, its maintained fork), but the approach is the same for other drivers that do not support automatic reconnection.
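As an illustration of that approach, a minimal sketch of reconnecting on error with MySQLdb (connection details are placeholders; a connection pool or a ping-based check would be more robust):
import MySQLdb

def get_connection():
    return MySQLdb.connect("127.0.0.1", "root", "", "db")

db = get_connection()

def run_query(sql):
    global db
    try:
        cursor = db.cursor()
        cursor.execute(sql)
        return cursor.fetchall()
    except MySQLdb.OperationalError:
        # e.g. (2006, 'MySQL server has gone away'): reopen and retry once.
        db = get_connection()
        cursor = db.cursor()
        cursor.execute(sql)
        return cursor.fetchall()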
For me this was fixed using
MySQLdb.connect("127.0.0.1","root","","db" )
instead of
MySQLdb.connect("localhost","root","","db" )
and then
df.to_sql('df',sql_cnxn,flavor='mysql',if_exists='replace', chunksize=100)
