Twisted Python pause/postpone reactor

I'm pretty new to Twisted. I have an HTTP client that queries a server with a rate limit; when I hit this limit the server responds with HTTP 204, and in my response handler I'm probably doing something nasty, like this:
def handleResponse(r, ip):
    if r.code == 204:
        print 'Got 204, sleeping'
        time.sleep(120)
        return None
    else:
        jsonmap[ip] = ''
        whenFinished = twisted.internet.defer.Deferred()
        r.deliverBody(PrinterClient(whenFinished, ip))
        return whenFinished
I'm doing this because I want to pause all the tasks.
I have two behaviours in mind: either re-run the tasks that hit a 204 later in the same execution (I don't know if that's possible), or just log the errors and re-run them in another execution of the program. A related problem is that I've set a timeout on the connection in order to cancel the deferred after a pre-defined amount of time if there's no response from the server (see the code below):
timeoutCall = reactor.callLater(60, d.cancel)

def completed(passthrough):
    if timeoutCall.active():
        timeoutCall.cancel()
    return passthrough

d.addCallback(handleResponse, ip)
d.addErrback(handleError, ip)
d.addBoth(completed)
Another problem that I may encounter is that if I'm sleeping I may hit this timeout and all my requests will be cancelled.
I hope I've been precise enough.
Thank you in advance.
Jeppo

Don't use time.sleep(120) (or a blocking sleep of any duration) in Twisted-based code. It violates the basic assumptions that any other Twisted-based code you might be using makes.
Instead, if you want to delay something by N seconds, use reactor.callLater(N, someFunction).
Once you remove the sleep calls from your program, the problem of unrelated timeouts being hit just because you've stopped the reactor from processing events will go away.
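If you want a Deferred hooked to that delay, Twisted also ships a ready-made helper, twisted.internet.task.deferLater. A minimal sketch:

from twisted.internet import reactor, task

# Calls someFunction() after 120 seconds and fires the returned Deferred with
# its result; the reactor keeps servicing every other connection meanwhile.
d = task.deferLater(reactor, 120, someFunction)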

For anyone stumbling across this thread, it's imperative that you never call time.sleep(...); however, it is possible to create a Deferred that does nothing but sleep... which you can use to compose delays into a deferred chain:
from twisted.internet import reactor
from twisted.internet.defer import Deferred

def make_delay_deferred(seconds, result=None):
    d = Deferred()
    reactor.callLater(seconds, d.callback, result)
    return d
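As a hedged sketch of how that helper could slot into the handler from the question: on a 204, return a Deferred that fires two minutes later and re-issues the query. Returning a Deferred from a callback pauses the chain until it fires. (retryRequest here is hypothetical, standing in for whatever function makes the original request.)

def handleResponse(r, ip):
    if r.code == 204:
        print 'Got 204, waiting 120s'
        d = make_delay_deferred(120)
        d.addCallback(lambda _: retryRequest(ip))  # hypothetical re-issue of the request
        return d
    else:
        jsonmap[ip] = ''
        whenFinished = twisted.internet.defer.Deferred()
        r.deliverBody(PrinterClient(whenFinished, ip))
        return whenFinished

Because the reactor keeps running during the wait, the 60-second timeoutCall on other requests still fires on its own schedule instead of all at once after a blocking sleep. You would, however, want to cancel or reschedule this request's own timeoutCall in the 204 branch, since the 120-second wait outlives the 60-second timeout.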


How to bypass a request when it takes too long?

I have a Python library which must be fast enough for an online application. If a particular request (function call) takes too long, I want to just bypass it and return an empty result.
The function looks like the following:
def fast_function(text):
    result = mylibrary.process(text)
    ...
If mylibrary.process takes more than a threshold, e.g. 100 milliseconds, I want to bypass this request and proceed to process the next 'text'.
What's the normal way to handle this? Is this a normal scenario? My application can afford to bypass a very small number of requests like this if they take too long.
One way is to use a signal timer. As an example:
import signal

def took_too_long(signum, frame):  # signal handlers receive (signum, frame)
    raise TimeoutError

signal.signal(signal.SIGALRM, took_too_long)
signal.setitimer(signal.ITIMER_REAL, 0.1)  # fire SIGALRM after 0.1 seconds
try:
    result = mylibrary.process(text)
    signal.setitimer(signal.ITIMER_REAL, 0)  # success, reset to 0 to disable the timer
except TimeoutError:
    pass  # took too long, do something
You'll have to experiment to see if this does or does not add too much overhead.
You can add a timeout to your function.
One way to implement it is with a timeout decorator that throws an exception if the function runs longer than the defined timeout. To move on to the next operation, catch the exception the timeout throws.
Install this one for example: pip install timeout-decorator
import timeout_decorator

@timeout_decorator.timeout(5)  # timeout of 5 seconds
def fast_function(text):
    result = mylibrary.process(text)
    return result
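Skipping a slow call then amounts to catching that exception in the calling loop. A minimal sketch, assuming texts is whatever iterable of inputs you process (the library raises timeout_decorator.TimeoutError by default):

for text in texts:
    try:
        result = fast_function(text)
    except timeout_decorator.TimeoutError:
        result = None  # took too long: bypass this request with an empty result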

Python socket sendall blocks and I'm not sure how to handle bad clients / slow consumers

To simplify things, assume a TCP client-server app where the client sends a request and the server responds. The server uses sendall to respond to each client.
Now assume a bad client that sends requests to the server but doesn't really handle the responses. I.e. the client never calls socket.recv. (It doesn't have to be a bad client btw...it may be a slow consumer on the other end).
What ends up happening is that the server keeps sending responses using sendall until, I'm assuming, a buffer gets full, and then at some point sendall blocks and never returns.
This seems like a common problem to me so what would be the recommended solution?
Is there something like a try-send that would raise or return an EWOULDBLOCK (or similar) if the recipient's buffer is full? I'd like to avoid non-blocking select type calls if possible (happy to go that way if there are no alternatives).
Thank you in advance.
Following rveed's comment, here's a solution that works for my case:
import socket

def send_to_socket(self, sock: socket.socket, message: bytes) -> bool:
    try:
        sock.settimeout(10.0)  # protect against bad clients / slow consumers by timing out instead of blocking
        res = sock.sendall(message)  # sendall returns None on success
        sock.settimeout(None)  # back to blocking (for subsequent recv/send calls on this socket)
        if res is not None:
            return False
        return True
    except socket.timeout:
        # do whatever you need to here
        return False
    except Exception:
        # handle other exceptions here
        return False
If needed, instead of setting the timeout to None afterwards (i.e. back to blocking), you can store the previous timeout value (using gettimeout) and restore to that.
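A minimal sketch of that variation, saving and restoring whatever mode the socket was in (the function name is ours, not from the standard library):

def send_with_timeout(sock: socket.socket, message: bytes, timeout: float = 10.0) -> bool:
    prev = sock.gettimeout()  # remember the current timeout/blocking mode
    sock.settimeout(timeout)
    try:
        sock.sendall(message)
        return True
    except socket.timeout:
        return False  # consumer's buffer stayed full for the whole timeout
    finally:
        sock.settimeout(prev)  # restore the original mode for later calls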

How to continue my program after internet disconnect-reconnects?

I have a program like this:
for i in range(25200):
    time.sleep(1)
    with requests.Session() as s:
        data = {'ContractCode': 'SAFMO98'}
        r = s.post('http://cdn.ime.co.ir/Services/Fut_Live_Loc_Service.asmx/GetContractInfo', json=data).json()
        for key, value in r.items():
            plt.clf()
            last_prices = r[key]['LastTradedPrice']
            z.append(last_prices)
            plt.figure(1)
            plt.plot(z)
Sometimes the server rejects the connection and gives an 'Exceeds request' message, or sometimes I lose my connection, etc.
Then I must re-run my program, and I lose my plotted graph as well as the data from the time the program was disconnected. So what I'd like to do is make my program robust against interruptions/disconnections: it wouldn't stop when it loses the connection or is rejected by the server, and it would resume its work once reconnected.
How is this possible?
EDIT: I edited my code as follows, but I don't know how good this approach is:
try:
    for i in range(25200):
        time.sleep(1)
        with requests.Session() as s:
            data = {'ContractCode': 'SAFMO98'}
            r = s.post('http://cdn.ime.co.ir/Services/Fut_Live_Loc_Service.asmx/GetContractInfo', json=data).json()
            for key, value in r.items():
                plt.clf()
                last_prices = r[key]['LastTradedPrice']
                z.append(last_prices)
                plt.figure(1)
                plt.plot(z)
except:
    pass
You have at least two connection failure events here, and either might result in an inability to connect for undefined amounts of time. A good option here is exponential backoff.
Basically, you attempt an operation, detect failures you know will require retrying, and wait. Each subsequent time the operation fails (in this case, presumably throwing an exception), you wait a multiple of the previous wait time. The idea is that, if you're being rate limited, you'll wait longer and longer until the API you're connecting to stops rejecting your requests. Also, if you've been physically disconnected, you'll attempt fewer connections over time, rather than spamming requests at a dead adapter.
There's a Python library, backoff, that handles most of the work involved in this for you with a decorator.
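A hedged sketch of what that looks like for the polling call in the question (the helper name, max_tries value, and timeout are arbitrary choices, not part of the original code):

import backoff
import requests

@backoff.on_exception(backoff.expo, requests.exceptions.RequestException, max_tries=8)
def get_contract_info(session, data):
    # retried with exponentially growing waits whenever the request raises
    r = session.post('http://cdn.ime.co.ir/Services/Fut_Live_Loc_Service.asmx/GetContractInfo',
                     json=data, timeout=10)
    r.raise_for_status()  # turn server-side rejections into retryable exceptions
    return r.json()

Note the explicit timeout: without it, a dead connection can hang the call forever and the backoff logic never gets a chance to run.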

how to check whether a program using requests module is dead or not

I am trying to use Python to download a batch of files, and I use the requests module with streaming turned on; in other words, I retrieve each file in 200 KB blocks.
However, sometimes the download just gets stuck (no response) and no error is raised. I guess this is because the connection between my computer and the server is not stable enough. My question: how do I detect this kind of stall and make a new connection?
You probably don't want to detect this from outside when you can just use timeouts to make requests fail instead of hanging if the server stops sending bytes.
Since you didn't show us your code, it's hard to show you how to change it… but I'll show you how to change some other code:
import requests

# hanging
text = requests.get(url).text

# not hanging
try:
    text = requests.get(url, timeout=10.0).text
except requests.exceptions.Timeout:
    pass  # failed, do something else

# trying until success
while True:
    try:
        text = requests.get(url, timeout=10.0).text
        break
    except requests.exceptions.Timeout:
        pass
If you do want to detect it from outside for some reason, you'll need to use multiprocessing or similar to move the requests-driven code to a child process. Ideally you'll want it to post updates on some Queue (or set and notify some Condition-protected shared flag variable) every 200KB, so the main process can block on the Queue or Condition and kill the child process if it times out. For example (a rough sketch):
import functools
import multiprocessing
import queue
import requests

def _download(url, q):
    # stream the file and put each 200 KB block on the queue
    with requests.get(url, stream=True) as r:
        for buf in r.iter_content(chunk_size=200 * 1024):
            q.put(buf)
    q.put(b'')  # sentinel: an empty chunk marks the end of the download

def download(url):
    q = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_download, args=(url, q))
    proc.start()
    try:
        # read blocks until the sentinel; a 10-second gap means the child is stuck
        return b''.join(iter(functools.partial(q.get, timeout=10.0), b''))
    except queue.Empty:
        proc.terminate()  # failed, do something else
        return None
    finally:
        proc.join()

How to properly use timeout parameter in select?

I'm new to socket programming (and somewhat to Python too) and I'm having trouble getting the select timeout to work the way I want to (on the server side). Before clients connect, timeout works just fine. I give it a value of 1 second and the timeout expires in my loop every 1 second.
Once a client connects, however, it doesn't wait 1 second to tell me the timeout expires. It just loops as fast as it can and tells me the timeout expires. Here's a snippet of my code:
while running:
    try:
        self.timeout_expired = False
        inputready, outputready, exceptready = select.select(self.inputs, self.outputs, [], self.timeout)
    except select.error, e:
        break
    except socket.error, e:
        break
    if not inputready:
        # Timeout expired
        print 'Timeout expired'
        self.timeout_expired = True
    # Additional processing follows here
I'm not sure if this is enough code to see where my problem is, so please let me know if you need to see more. Basically, after a client connects, it at least appears that it ignores the timeout of 1 second and just runs as fast as it can, continuously telling me "Timeout expired". Any idea what I'm missing?
Thanks much!!
Edit: I should clarify..."inputready" represents input from a client connecting or sending data to the server, as well as stdin from the server. The other variables returned from select are only server-side variables, and since what I'm trying to do is detect whether the CLIENT took too long to reply, I'm only checking if inputready is empty.
It is only a timeout if inputready, outputready, and exceptready are ALL empty. My guess is you have added the client socket to both self.inputs and self.outputs. Since the output socket is usually writable, it will always show up in outputready. Only add the client socket to self.outputs if you are ready to output something.
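A minimal sketch of that bookkeeping, assuming a hypothetical self.outgoing dict that maps each client socket to its pending bytes:

def queue_send(self, sock, data):
    self.outgoing[sock] = self.outgoing.get(sock, b'') + data
    if sock not in self.outputs:
        self.outputs.append(sock)  # watch for writability only while data is pending

# in the main loop, after select() returns:
for sock in outputready:
    sent = sock.send(self.outgoing[sock])
    self.outgoing[sock] = self.outgoing[sock][sent:]
    if not self.outgoing[sock]:
        self.outputs.remove(sock)  # buffer drained: stop watching for writability

With the socket out of self.outputs while idle, select() blocks for the full timeout again, and 'Timeout expired' only prints when nothing actually happened for a second.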
"When the timeout expires, select() returns three empty lists.
...To use a timeout requires adding the extra argument to the select() call and handling the empty lists after select() returns."
readable, writable, exceptional = select.select(inputs, outputs, inputs, timeout)
if not (readable or writable or exceptional):
    print(' timed out, do some other work here', file=sys.stderr)

https://pymotw.com/3/select/index.html
