Timeout for idle connection - python

I am using the asyncore and asynchat modules to build an SMTP server (I used code from the smtpd lib to build it), but I have a problem with connection timeouts. When I open a telnet connection to the SMTP server and leave it idle, the connection stays established although no data exchange happens. I want to set a timeout, e.g. 30 seconds, so the server closes an idle connection if nothing comes from the client (otherwise there could be an easy DoS vulnerability). I googled for a solution, read source code and documentation, but didn't find anything usable.
Thanks

According to the asyncore documentation, asyncore.loop() has a timeout parameter, which defaults to 30 seconds. So apparently the default should already be 30 seconds; you can play with it to suit your own needs.
The timeout argument sets the timeout parameter for the appropriate select() or poll() call, measured in seconds; the default is 30 seconds.
Ok, the above actually refers to the poll() or select() timeout and not an idle timeout.
As per this page, you can hack asyncore to support timeouts like this:
Add the following block to your own copy of asyncore.poll just after the for fd in e: block...
# handle timeouts
rw = set(r) | set(w)
now = time.time()
for f in (i for i in rw if i in map):
    map[f].lastdata = now
for j in (map[i] for i in map if i not in rw):
    if j.timeout + j.lastdata < now:
        # timeout!
        j.handle_close()
You ARE going to need to initialize .timeout and .lastdata members for every instance, but that shouldn't be so bad (for a socket that doesn't time out, I would actually suggest a 1-hour or 1-day timeout).

Related

strange while loop behavior with time.sleep above 90 s

I'm struggling to understand a "weird" behavior of my simple script. Basically, it works as expected if time.sleep() is set to 60s, but as soon as I put a value above 90 (90 is apparently the limit in my case), the loop doesn't work properly. I discovered this when I was trying to pause the script for 3 mins.
Here's my script
from gpiozero import CPUTemperature
import time
import paho.mqtt.client as mqtt #import the client1
import psutil

broker_address="192.168.1.17"
client = mqtt.Client("P1") #create new instance
client.connect(broker_address) #connect to broker
#time.sleep(60)

while True:
    cpu = CPUTemperature()
    print(cpu.temperature)
    #a=cpu.temperature
    #print(psutil.cpu_percent())
    #print(psutil.virtual_memory()[2])
    #print(a)
    client.publish("test/message",cpu.temperature)
    #client.publish("test/ram", psutil.virtual_memory()[2])
    #client.publish("test/cpu", psutil.cpu_percent())
    time.sleep(91)
In this case, with 91s it just prints the value of cpu.temperature every 91s, whereas with a value like 60s, besides printing, it also publishes the value via mqtt every cycle.
Am I doing something wrong here? Or do I need to change my code for a longer sleep? I'm running this on a Raspberry Pi.
Thanks in advance
EDIT:
I solved it by modifying the script, in particular how mqtt was handling the timing.
here's the new script
mqttc=mqtt.Client("P1")
#mqttc.on_connect = onConnect
#mqttc.on_disconnect = onDisconnect
mqttc.connect("192.168.1.17", port=1883, keepalive=60)
mqttc.loop_start()

while True:
    cpu = CPUTemperature()
    print(cpu.temperature)
    mqttc.publish("test/message",cpu.temperature)
    time.sleep(300)
The MQTT client uses a network thread to handle a number of different aspects of the connection to the broker.
Firstly, it handles sending ping requests to the broker in order to keep the connection alive. The default keepalive period is 60 seconds. The connection will be dropped by the broker if it does not receive any messages in 1.5 times this value, which just happens to be 90 seconds.
Secondly, the thread handles any incoming messages that the client may have subscribed to.
Thirdly, if you try to publish a message that is bigger than the MTU of the network link, calling mqttc.publish() will only send the first packet, and the network loop is needed to send the rest of the payload.
There are 2 ways to run the network tasks.
As you have found, you can start a separate thread with the mqttc.loop_start()
The other option is to call mqttc.loop() within your own while loop, as sketched below.
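For illustration, here is a minimal sketch of that second option (names and broker address taken from the question; not a drop-in replacement):

import paho.mqtt.client as mqtt
from gpiozero import CPUTemperature

mqttc = mqtt.Client("P1")
mqttc.connect("192.168.1.17", port=1883, keepalive=60)

while True:
    cpu = CPUTemperature()
    mqttc.publish("test/message", cpu.temperature)
    # Run the network loop ourselves instead of in a background thread.
    # Calling loop() often enough (well within the keepalive window)
    # lets the client send its ping requests and stay connected.
    for _ in range(300):
        mqttc.loop(timeout=1.0)  # process network events for up to 1s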

MicroPython usockets not timing out

For various reasons, I am trying to have my ESP32 device with MicroPython poll all 256 options of 192.168.1.*:79 to find a 'host' PC. In doing so, the ESP32 attempts to create a socket and connect it to each possible address, i.e.:
while not connected:
    try:
        addr = generate_next_address()
        s = usocket.socket()
        s.connect(addr)
    except OSError:
        s.close()
        continue
    print("Found a connection!")
    connected = True
When attempting to connect to a device that refuses the connect(), it is very quick to throw the exception and move onward. However, the problem is when it starts encountering devices that either don't respond or don't exist: then it waits for a significant time before timing out.
Now, I've tried every variation of using usocket.settimeout(), usocket.setblocking(), uselect.poll(), and time.delay(), but I was unable to get anything to change the timeout period.
By setting blocking to false, the script immediately attempts all 256 addresses and then breaks out of the while loop, disallowing an opportunity to connect properly. Having blocking on completely ignores any timeout setting I attempt, continuing to take 15-20 seconds to timeout, as opposed to 1.
Is there something I'm not understanding about how this works? Is there an obvious solution that I have missed?
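One approach that is sometimes suggested (a sketch, untested on your hardware, and assuming the ESP32 port raises EINPROGRESS for a non-blocking connect) is to combine setblocking(False) with uselect.poll() so that you control the wait yourself:

import usocket
import uselect
import uerrno

def try_connect(addr, timeout_ms=1000):
    # Hypothetical helper: non-blocking connect plus an explicit wait,
    # so unreachable hosts give up after timeout_ms instead of the
    # firmware's built-in TCP timeout.
    s = usocket.socket()
    s.setblocking(False)
    try:
        s.connect(addr)
    except OSError as e:
        if e.args[0] != uerrno.EINPROGRESS:  # immediate refusal, etc.
            s.close()
            return None
    poller = uselect.poll()
    poller.register(s, uselect.POLLOUT)
    # Writable within the window usually means connected; note that an
    # errored socket can also report writable, so a follow-up send may
    # still be needed to confirm the connection.
    if poller.poll(timeout_ms):
        return s
    s.close()  # nothing happened in time: treat it as a timeout
    return None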

How to keep an inactive connection open with PycURL?

Pseudo-code to better explain question:
#!/usr/bin/env python2.7
import pycurl, threading

def threaded_work():
    conn = pycurl.Curl()
    conn.setopt(pycurl.TIMEOUT, 10)
    # Make a request to host #1 just to open the connection to it.
    conn.setopt(pycurl.URL, 'https://host1.example.com/')
    conn.perform_rs()
    while not condition_that_may_take_very_long:
        conn.setopt(pycurl.URL, 'https://host2.example.com/')
        print 'Response from host #2: ' + conn.perform_rs()
    # Now, after what may be a very long time, we must request host #1
    # again with a (hopefully) already established connection.
    conn.setopt(pycurl.URL, 'https://host1.example.com/')
    print 'Response from host #1, hopefully with an already established connection from above: ' + conn.perform_rs()
    conn.close()

for _ in xrange(30):
    # Multiple threads must work with host #1 and host #2 individually.
    threading.Thread(target = threaded_work).start()
I am omitting extra, unnecessary details for brevity so that the main problem stays in focus.
As you can see, I have multiple threads that must work with two different hosts, host #1 and host #2. Mostly, the threads will be working with host #2 until a certain condition is met. That condition may take hours or even longer to be met, and it will be met at different times in different threads. Once the condition (condition_that_may_take_very_long in the example) is met, I would like host #1 to be requested as fast as possible with the connection that I already established at the start of the threaded_work method. Is there an efficient way to accomplish this (I'm open to the suggestion of using two PycURL handles, too)?
Pycurl uses libcurl. libcurl keeps connections alive by default after use, so as long as you keep the handle alive and use that for the subsequent transfer, it will keep the connection alive and ready for reuse.
However, due to modern networks and network equipment (NATs, firewalls, web servers), connections without traffic are often killed off relatively soon, so the chance of an idle connection actually still working after "hours" is very slim; that's a rare occurrence. Typically, libcurl will then discover that the connection has been killed in the meantime and create a new one to use for the next transfer.
Additionally, and in line with what I've described above, since libcurl 7.65.0 it defaults to not reusing connections that are older than 118 seconds. This is changeable with the CURLOPT_MAXAGE_CONN option. The reason is that such old connections barely ever work, so avoiding having to keep them around, detect that they are dead, and reissue the request is an optimization.
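If the idle connection being killed by middleboxes is the main worry, one thing to try (a sketch; the TCP keepalive options need libcurl >= 7.25.0, and MAXAGE_CONN needs libcurl >= 7.65.0 plus a pycurl build that exposes the constant) is:

import pycurl

conn = pycurl.Curl()
# Ask the OS to probe the idle connection so NATs/firewalls are less
# likely to silently drop it.
conn.setopt(pycurl.TCP_KEEPALIVE, 1)
conn.setopt(pycurl.TCP_KEEPIDLE, 60)    # start probing after 60s idle
conn.setopt(pycurl.TCP_KEEPINTVL, 60)   # then probe every 60s
# Allow reuse of connections older than libcurl's 118-second default.
conn.setopt(pycurl.MAXAGE_CONN, 7200)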

Python using try to reduce timeout wait

I am using the Exscript module, which has a call conn.connect('IP address').
It tries to open a telnet session to that IP.
It generates an error after the connection times out.
The timeout exception is set somewhere in the module's code, or it may just be the telnet default (not sure).
This timeout is too long and slows down the script if one device is not reachable. Is there something we can do with try/except here? Like:
Try for 3 secs:
    then process the code
except:
    print "timed out"
We changed the API. Mike Pennington only recently introduced the new connect_timeout parameter for that specific use case.
New solution (current master, latest release on pypi 2.1.451):
conn = Telnet(connect_timeout=3)
We changed the API because you usually don't want to wait for unreachable devices, but you do want to wait for commands to finish (some take a little longer).
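Putting it together, a hedged sketch of that use case (the device address is made up, and the exact exception type may differ between Exscript versions):

from Exscript.protocols import Telnet

# Fail fast on unreachable devices, but still allow slow commands to
# finish: connect_timeout covers the TCP connect, timeout the commands.
conn = Telnet(connect_timeout=3, timeout=30)
try:
    conn.connect('192.0.2.10')  # gives up after ~3 seconds
except Exception:
    print("timed out")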
I think you can use
conn = Telnet(timeout=3)
I don't know whether the timeout is in seconds; if it's in milliseconds, try 3000.

How to timeout early when trying to connect

I have an application that is trying to connect to a rabbitmq-server, but I want my application to timeout within a specified number of seconds if it cannot connect to the server.
My problem is that I can't figure out how to do it.
To clarify, it's when my producer tries to connect that I want to time out earlier, because right now it takes up to 20-30 seconds.
If the library you're using makes use of the socket module (many do), a simple import socket; socket.setdefaulttimeout(SECONDS) will suffice.
[edited to include the correction by Daniel Figueroa]
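For instance, a minimal sketch (5 seconds is an arbitrary value; set it before the client library opens its socket):

import socket

# Apply a 5-second default timeout to every socket created afterwards,
# including sockets opened internally by client libraries.
socket.setdefaulttimeout(5)

# ...now import/connect your RabbitMQ client; its blocking socket
# operations will raise socket.timeout after 5 seconds.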
