I recently started working with Yaskawa's OPC UA Server provided on its robot's controller.
I'm connecting to the server with Python's opcua library. Everything works well, but when my code crashes or I close the terminal without disconnecting from the server, I cannot connect to it again.
I receive an error from the library, saying:
The server has reached its maximum number of sessions.
And the only way to solve this is to restart the controller by turning it off and on again.
The server's documentation says that the maximum number of sessions is 2.
Is there a way to clear the connection to the server without restarting the machine?
The server keeps track of the client session and doesn't know that your client crashed.
But the client can define a short enough SessionTimeout, after which the server can remove the crashed session.
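For example, with the python-opcua library you can request a shorter session timeout before connecting. This is only a sketch: the controller address is a placeholder and the session_timeout attribute (in milliseconds) may differ between library versions, so check the library you actually use.

from opcua import Client

client = Client("opc.tcp://192.168.255.1:4840")  # placeholder controller address
client.session_timeout = 30000  # request a 30 s session timeout (milliseconds)
try:
    client.connect()
    # ... read/write nodes ...
finally:
    client.disconnect()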
The server may have some custom configuration where you can define the maximum number of sessions it supports. Two sessions is very restrictive, but if the hardware is constrained, that may be the best you can get. See the product documentation for details.
I have a Python web server (Django).
It talks to other services such as Elasticsearch.
I notice that when Elasticsearch goes down, the web server soon (after a few minutes) stops responding to client requests.
I use https://github.com/elastic/elasticsearch-py, which implements timeouts, so I don't think its calls block indefinitely.
My hunch is that requests pile up during the timeout period and the server becomes unavailable, but that's just a guess.
What's the reason for the server not being able to handle requests in such a scenario, and how do I fix it?
I have an nginx - uwsgi - django setup on Unix (Amazon ECS), if that makes a difference.
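As an illustration of the client-side timeout mentioned above (the hostname and values are placeholders, and the parameter names vary between elasticsearch-py versions), a short per-request timeout makes a downed cluster fail fast instead of tying up a worker for the full default timeout:

from elasticsearch import Elasticsearch

# Placeholder host; fail fast rather than waiting out long timeouts and retries.
es = Elasticsearch(["http://elasticsearch:9200"], timeout=5, max_retries=0)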
I'm working on a NetHack clone that is supposed to be played over Telnet, like many NetHack servers. As I've said, this is a clone, so it's being written from scratch in Python.
I've set up my socket server by reusing code from an SMTP server I wrote a while ago, and all of a sudden my attention jumped to this particular line of code:
s.listen(15)
My server was designed to handle 15 simultaneous clients just in case the data exchange with any of them took too long, but ideally listen(1) or listen(2) would be enough. This case is different, though.
As happens with Alt.org when you telnet into their NetHack servers, people connected to my server should be able to play my roguelike remotely through a single Telnet session, so I guess this connection should not be interrupted. Yet, I've read here that
[...] if you are really holding more than 128 queued connect requests you are
a) taking too long to process them or b) need a heavy-weight
distributed server or c) suffering a DDoS attack.
What is the best practice here? Should I keep every connection open until the connected user disconnects, or is there another way? Should I go for listen(128) (or whatever my system's socket.SOMAXCONN is), or is that bad practice?
The number in listen(number) limits the number of pending connect requests.
A connect request is pending from the moment the OS receives the initial SYN until you call the socket's accept method. So the number does not limit the count of open (established) connections; it limits the number of connections in the SYN_RECV state.
It is a bad idea not to answer an incoming connection, because:
The client will keep retransmitting its SYN until an answering SYN is received.
The client cannot distinguish between your server being unavailable and its connection simply waiting in the queue.
A better approach is to accept the connection, send the client a message with the rejection reason, and then close the connection.
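A minimal sketch of that pattern, assuming a hypothetical cap of 15 players and a placeholder port (neither is taken from the original server code):

import socket

MAX_PLAYERS = 15  # illustrative cap

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", 2323))          # placeholder port
srv.listen(socket.SOMAXCONN)  # backlog only covers connections not yet accept()ed

players = []
while True:
    conn, addr = srv.accept()
    if len(players) >= MAX_PLAYERS:
        # Accept, explain, then close, rather than leaving the client stuck in the backlog.
        conn.sendall(b"Server full, please try again later.\r\n")
        conn.close()
    else:
        players.append(conn)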
I have an API written with Python Flask running on Bluemix. Whenever I send it a request and the API takes more than 120 seconds to respond, it times out. Instead of the expected response, it returns the following error: 500 Error: Failed to establish a backside connection.
I need it to be able to process longer requests as well. Is there any way to extend the timeout value or is there a workaround for this issue?
All Bluemix traffic goes through the IBM WebSphere® DataPower® SOA Appliances, which provide reverse proxy, SSL termination, and load balancing functions. For security reasons DataPower closes inactive connections after 2 minutes.
This is not configurable (as it affects all Bluemix users), so the only solution for your scenario is to change your program to make sure the connection is not idle for more than 2 minutes.
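One way to keep the connection busy (a sketch only, not Bluemix-specific guidance; the endpoint name and timings are made up) is to stream partial output from Flask while the work runs, so the proxy never sees two minutes of idle time:

from flask import Flask, Response
import time

app = Flask(__name__)

@app.route("/long-task")  # hypothetical endpoint
def long_task():
    def generate():
        # Stand-in for a long-running job split into steps; yielding a byte
        # periodically keeps data flowing so the connection is never idle for 2 minutes.
        for _ in range(10):
            time.sleep(30)  # placeholder for real work
            yield " "       # keep-alive padding
        yield "done\n"
    return Response(generate(), mimetype="text/plain")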
I'm uploading hundreds of millions of items via a REST API from a cloud server on Heroku to a database on AWS EC2. I'm using Python, and I constantly see the following INFO log message in the logs.
[requests.packages.urllib3.connectionpool] [INFO] Resetting dropped connection: <hostname>
This "resetting of the dropped connection" seems to take many seconds (sometimes 30+ sec) before my code continues to execute again.
Firstly, what exactly is happening here and why?
Secondly, is there a way to stop the connection from dropping so that I can upload data faster?
Thanks for your help.
Andrew.
Requests uses Keep-Alive by default. "Resetting dropped connection", as I understand it, means a connection that should have stayed alive was dropped somehow. Possible reasons are:
The server doesn't support Keep-Alive.
There has been no data transfer on the established connection for a while, so the server drops it.
See https://stackoverflow.com/a/25239947/2142577 for more details.
The problem is really that the server has closed the connection even though the client has requested it be kept alive.
This is not necessarily because the server doesn't support keep-alives; it could be that the server is configured to allow only a certain number of requests per connection. That can be done to help spread requests across different servers, but I think the practice is (or was) also common as a practical defence against poorly written server-side code (e.g. PHP) that doesn't clean up after itself after serving a request, perhaps due to an error condition.
If you think this is the case for you and you'd rather not see these logs (which are emitted at INFO level), you can add the following to quieten that part of the logging:
import logging

# Really don't need to hear about connections being brought up again after the server has closed them
logging.getLogger("requests.packages.urllib3.connectionpool").setLevel(logging.WARNING)
This is common practice for services that expose RESTful APIs to avoid abuse (or DoS).
If you're stressing their API they'll drop your connection.
Try getting your script to sleep a bit every once in a while to avoid the drop.
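A rough sketch of that kind of throttling (the endpoint, payloads, and pause interval are all placeholders):

import time
import requests

session = requests.Session()  # reuse one keep-alive connection pool

items = [{"id": n} for n in range(5000)]  # placeholder payloads
for i, item in enumerate(items):
    session.post("https://api.example.com/items", json=item)  # placeholder endpoint
    if i and i % 1000 == 0:
        time.sleep(1)  # short pause every 1000 requests to ease the load on the API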
I am running a Graphite server to monitor instruments at remote locations. I have a "perpetual" ssh tunnel to the machines from my server (loving autossh) to map their local ports to my server's local port. This works well; data comes through with no hassles. However, we use a flaky satellite connection to the sites, which goes down rather regularly. I am running a "data crawler" on the instrument that runs Python and uses socket to send packets to the Graphite server. The problem is, if the link goes down temporarily (or the server gets rebooted, mostly for testing), I cannot re-establish the connection to the server. I trap the error, run socket.close(), and then re-open, but I just can't re-establish the connection. If I quit the Python program and restart it, the connection comes up just fine. Any ideas how I can "refresh" my socket connection?
It's hard to answer this correctly without a code sample. However, it sounds like you might be trying to reuse a closed socket, which is not possible.
If the socket has been closed (or has experienced an error), you must re-create a new connection using a new socket object. For this to work, the remote server must be able to handle multiple client connections in its accept() loop.
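A minimal reconnect sketch along those lines (the carbon address and metric line are placeholders):

import socket
import time

CARBON_ADDR = ("localhost", 2003)  # assumed Graphite/carbon endpoint reached via the ssh tunnel

def connect():
    # A socket that has been closed (or hit an error) cannot be reused; build a fresh one.
    while True:
        try:
            return socket.create_connection(CARBON_ADDR, timeout=10)
        except OSError:
            time.sleep(5)  # wait for the tunnel/satellite link to come back

sock = connect()
while True:
    try:
        sock.sendall(b"site.instrument.temp 21.3 1700000000\n")  # placeholder metric
    except OSError:
        sock.close()
        sock = connect()  # re-create the connection instead of reusing the old socket
    time.sleep(60)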