Unstable connection to port-forwarded Elasticsearch from Python

I'm running the quick start Elasticsearch in a k8s cluster in Azure.
I'm port forwarding on 3002 using HTTPS.
In my Python script I'm using elastic_enterprise_search==7.17.0 and the following code to connect and test my connection, expecting a dictionary of results:
import os
from elastic_enterprise_search import AppSearch

search = AppSearch(os.environ["ELASTIC_URL"], http_auth=os.environ["ELASTIC_KEY"], use_ssl=True, verify_certs=False)
search.get_engine(engine_name=folder.lower())
Sometimes it produces a warning but connects and produces a result; I'm OK with this:
InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
warnings.warn(
Other times it just gives an error and does not connect, yet nothing has changed in the code:
ConnectionTimeout: Connection timed out during request
I can't find a pattern as to why it sometimes works and sometimes doesn't.
I do find that first thing in the morning it's fine.
Is there a setting limiting connections?
Is there a way to close active connections?
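As a stopgap I'm considering a retry wrapper like the sketch below; it assumes the timeouts are transient and that re-creating the client drops any stale tunneled connection (the exact exception class to catch is an assumption, since I haven't pinned down the client's exceptions module):
import os
from elastic_enterprise_search import AppSearch

def get_engine_with_retry(engine_name, attempts=3):
    # Build a fresh client on every attempt so a stale port-forwarded
    # connection is never reused from the pool.
    last_err = None
    for _ in range(attempts):
        search = AppSearch(
            os.environ["ELASTIC_URL"],
            http_auth=os.environ["ELASTIC_KEY"],
            use_ssl=True,
            verify_certs=False,
        )
        try:
            return search.get_engine(engine_name=engine_name)
        except Exception as err:  # ideally narrow to the client's ConnectionTimeout
            last_err = err
    raise last_err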

Related

In Python, what is the difference between using ssl.get_server_certificate and using SSLSocket.getpeercert?

I want to use Python to retrieve the remote server certificate (not validate or check it in any way). I have retrieved the server certificate using both methods, `ssl.get_server_certificate` and `SSLSocket.getpeercert`.
The main reason I had to try `SSLSocket.getpeercert` over `ssl.get_server_certificate` was that the timeout value on the TLS handshake was not being honored by `ssl.get_server_certificate`. One of the hosts I was trying to get the server certificate from had some problem that would hang my Python script during the TLS handshake, and only `SSLSocket.getpeercert` would time this out.
I also notice I cannot retrieve the server certificates from very old systems that use TLS 1.0 or even SSL with `SSLSocket.getpeercert`, and there is no place to specify the ssl_version like there is in `ssl.get_server_certificate`.
So I see both methods retrieve the server certificate, and each seems to have different issues. But what are the differences in what each does? When would I use one over the other?
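For reference, here is a minimal sketch of how I call each one against a hypothetical host (note that with verification disabled, getpeercert() returns an empty dict unless binary_form=True is passed, so I request the DER bytes and convert them to PEM):
import socket
import ssl

host, port = "example.com", 443  # hypothetical target

# Approach 1: one call, returns the PEM-encoded certificate without
# validating it; accepts ssl_version, but (before Python 3.10) no timeout.
pem_1 = ssl.get_server_certificate((host, port))

# Approach 2: manual connection, which honors a socket timeout on the
# TLS handshake. Verification is disabled so any certificate is accepted.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with socket.create_connection((host, port), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as ssock:
        # With CERT_NONE, getpeercert() returns {}; ask for raw DER bytes.
        der = ssock.getpeercert(binary_form=True)
pem_2 = ssl.DER_cert_to_PEM_cert(der)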

pyodbc: How to test whether it's possible to establish connection with SQL server without freezing up

I am writing an app with wxPython that incorporates pyodbc to access SQL Server. A user must first establish a VPN connection before they can establish a connection with the SQL server. In cases where a user forgets to establish a VPN connection or is simply not authorized to access a particular server, the app will freeze for up to 60+ seconds before it produces an error message. Often, users will get impatient and force-close the app before the error message pops up.
I wonder if there is a way to test whether it's possible to connect to the server without freezing up. I thought about using a timeout, but it seems that the timeout attribute can be used only after I establish a connection.
A sample connection string I use is below:
connection = pyodbc.connect(r'DRIVER={SQL Server};SERVER=ServerName;database=DatabaseName;Trusted_Connection=True;unicode_results=True')
See https://code.google.com/archive/p/pyodbc/wikis/Connection.wiki under timeout:
Note: This attribute only affects queries. To set the timeout for the actual connection process, use the timeout keyword of the pyodbc.connect function.
So changing your connection call to:
connection = pyodbc.connect(r'DRIVER={SQL Server};SERVER=ServerName;database=DatabaseName;Trusted_Connection=True;unicode_results=True', timeout=3)
should work.
"It took a while before it threw an error message about the server not existing or access being denied."
Your comment conflates two very different kinds of errors:
"Server not existing" is a network error: either the name has no address, or the address is unreachable. No connection can be made.
"Access being denied" is a response from the server. For the server to respond, a connection must exist. This is not to be confused with connection refused (ECONNREFUSED), which means the remote host is not accepting connections on the port.
SQL Server uses TCP/IP. You can use standard network functions to determine if the network hostname of the machine running SQL Server can be found, and if the IP address is reachable. One advantage of using them to "pre-test" the connection is that any error you get will be much more specific than the typical "there was a problem connecting to the server".
Note that not all delay-inducing errors can be avoided. For example, if the DNS server is not responding, the resolver will typically wait 30 seconds before giving up. If an IP address is valid but there's no machine with that address, attempting a connection will take a long time to fail. There's no way for the client to know there's no such machine; it could just be taking a long time to get a response.
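As a sketch, such a pre-test could look like the following (1433 is assumed as the default SQL Server port; named instances may listen on a different, dynamic port):
import socket

def can_reach(host, port=1433, timeout=3.0):
    # Step 1: name resolution; fails fast with a specific error
    # if the hostname has no address.
    try:
        socket.getaddrinfo(host, port)
    except socket.gaierror:
        return False
    # Step 2: TCP connect; distinguishes refused/unreachable/timeout
    # within a bounded wait instead of the driver's long default.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

Calling can_reach('ServerName') before pyodbc.connect lets the app show a targeted message (check your VPN, for instance) instead of freezing.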

Why my tcp client skip over the proxy and connect to server directly

I wrote a reverse proxy according to this https://gist.github.com/voorloopnul/415cb75a3e4f766dc590#file-proxy-py.
I need this to overwrite the authentication information from the client side, like the following:
Client(passA) ---> Proxy(overwrite passA into passB) ---> Server(passB)
Where passB is the correct password and passA is a random number.
The algorithm is SCRAM-SHA-256, a little bit complex, but I managed to do this.
Everything works well when the proxy and the server are not on the same machine.
I have tried to deploy the proxy on both Windows and Linux. The proxy uses an IP address to point to the server.
However, when the proxy uses 'localhost' to point to the server, it is broken: the authentication cannot be passed with one certain client (the one for which I made the proxy). With the other clients, it still works well.
Shouldn't this be encapsulated and transparent to the user?
Why is localhost so special, and how can I fix this?
Update with the latest research:
The authentication fails because the client connects to the server directly, so the password is not modified by my proxy.
Condition 1: Proxy on another machine. The proxy works.
Client(192.168.1.1) ==> Proxy(192.168.1.3:8000) ==> Server(192.168.1.2:6000)
Condition 2: Proxy on the same machine as the server.
The proxy listens on 0.0.0.0:8000 and forwards packets to localhost:6000.
Client(192.168.1.1) ==> Proxy(192.168.1.2:8000) ==> Server(192.168.1.2:6000)
After the first connection, the subsequent connections become
Client(192.168.1.1) =====> Server(192.168.1.2:6000) without the proxy.
That makes the proxy not work anymore.
Why does the client skip the proxy in condition 2?

Scrapy: no route to host and persistent support enabled

If I'm running a crawler with persistent support enabled and I temporarily lose my internet connection, will the crawler retry the URLs that get a "no route to host" error during the temporary internet loss?
Yes.
Scrapy uses an HTTP 1.1 client which has persistent connection support by default, and under the hood (thanks to Twisted) this uses a pool of persistent connections with automatic retry when a connection is lost.
Besides that, when Scrapy gets a connection error for a request (timeout, DNS error, no route, etc.), the RetryMiddleware takes care of retrying the request. See http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#module-scrapy.contrib.downloadermiddleware.retry
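The retry behavior is tunable through the documented retry settings; a minimal sketch (the values shown are examples, not the defaults):
# settings.py
RETRY_ENABLED = True   # on by default
RETRY_TIMES = 5        # retries per failed request (default is 2)
# HTTP status codes to retry; connection-level errors such as
# timeouts, DNS failures and no-route are retried regardless.
RETRY_HTTP_CODES = [500, 502, 503, 504, 408]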

Insert into remote Couchbase server by Python

I use this code to insert data into Couchbase
from couchbase import Couchbase
c = Couchbase.connect(host="remote-server.com", bucket="default")
c.set('first_key', 'first_value')
But I got this error:
couchbase.exceptions.TimeoutError: <Key=u'first_key', RC=0x17[Operation timed out], Operational Error, Results=1, C Source=(src/multiresult.c,148)>
And I tried these steps:
1. I printed c (the Couchbase connection object) out; the object was created, so it connected to the Couchbase server successfully?
2. I tried to telnet to remote-server.com on port 8091; it connected successfully, too.
3. I increased the connection timeout to 30 seconds.
But the problem has not been solved.
To connect to Couchbase you should ensure that your server is configured with the DNS name remote-server.com, not an IP and not localhost. The Couchbase server should also be able to resolve its own IP via this DNS name.
I.e. if you host your server on AWS EC2, Couchbase usually gets an internal IP address like 10.X.X.X, and even if you try to access it from the internet via the public IP with a client library, your request will time out. But you will still be able to access the REST API and the admin console via the public DNS.
You should also check all the ports (not only 8091) needed by Couchbase. See this doc for all the ports that need to be opened.
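To check the other ports from the client machine, a quick sketch (port roles assumed from the Couchbase docs: 8091 REST/admin, 8092 views, 11210 key-value data, which is the one the SDK actually uses for set/get and a common culprit when only 8091 is open):
import socket

for port in (8091, 8092, 11210):
    try:
        with socket.create_connection(("remote-server.com", port), timeout=3):
            print("port %d: reachable" % port)
    except OSError as exc:
        print("port %d: NOT reachable (%s)" % (port, exc))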
