What is wrong with my event implementation?
class MyHandler(RequestHandler):
    counter = 0

    @tornado.gen.coroutine
    def post(self):
        yield self.foo()
        self.write("Next 5 request!!!")

    @tornado.gen.coroutine
    def foo(self):
        if MyHandler.counter == 0:
            MyHandler.callback = yield tornado.gen.Callback("MyEvent")
        MyHandler.counter += 1
        if MyHandler.counter == 5:
            MyHandler.callback()
            MyHandler.counter = 0
        else:
            tornado.gen.Wait("MyEvent")
I always get:
raise UnknownKeyError("key %r is not pending" % (key,))
UnknownKeyError: key 'MyEvent' is not pending
I also found this in the Tornado documentation for Callback and Wait:
Deprecated since version 4.0: Use Futures instead.
But I can't find a use case of Futures for my situation anywhere.
Please help.
The problem is that every request you get creates a new instance of MyHandler, so your counter and callback variables are not shared between requests. You really want them to be class variables, so that they're shared between the instances.
Here is how you can implement it with Futures:
class MyHandler(tornado.web.RequestHandler):
    fut = None
    counter = 0

    @tornado.gen.coroutine
    def get(self):
        yield self.foo()
        self.write("Next 5 request!!!")

    @tornado.gen.coroutine
    def foo(self):
        if MyHandler.counter == 0:
            MyHandler.fut = Future()
        MyHandler.counter += 1
        if MyHandler.counter == 5:
            MyHandler.counter = 0
            MyHandler.fut.set_result("done")  # This will wake up waiting requests.
        else:
            yield MyHandler.fut
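As an aside (not part of the original answer's Tornado code), the same gate-every-fifth-caller pattern can be sketched with plain asyncio Futures, which modern Tornado interoperates with; the handler names here are illustrative:

```python
import asyncio

counter = 0
fut = None

async def handle(results, n):
    # Same pattern as the handler above: the 5th arrival resolves the
    # shared Future, waking the four callers awaiting it.
    global counter, fut
    if counter == 0:
        fut = asyncio.get_running_loop().create_future()
    counter += 1
    if counter == 5:
        counter = 0
        fut.set_result("done")  # wakes everything awaiting fut
    else:
        await fut
    results.append(n)

async def main():
    results = []
    await asyncio.gather(*(handle(results, n) for n in range(5)))
    return results

results = asyncio.run(main())
print(sorted(results))  # [0, 1, 2, 3, 4]: all five complete together
```

The key point is that many coroutines can await the same Future; one set_result releases them all at once.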
Imagine having two threads, observeT and upT. observeT observes the value of an instance attribute (instance.a) and should 'alert' (print a note in this example) when its value is 7. The thread upT increases the value of the instance attribute by 1 at a time (instance.a += 1).
However, because Python's Lock does not control which waiting thread continues next, we can't make sure that the observer thread (observeT) catches the moment when the value of instance.a is increased to 7.
How do I make sure that the observer is called every time after upT releases the lock? Note that it is important to keep the threads upT and observeT separate.
Please see the following code for more details:
from threading import Lock, Thread

class MyClass():
    a: int

    def __new__(cls):
        instance = super().__new__(cls)
        instance.a = 0
        return instance

instance = MyClass()
lock = Lock()

def up():
    for i in range(100000):
        with lock:
            instance.a += 1

def observe():
    while True:
        with lock:
            a = instance.a
            if a == 7:
                print("This is 7!")
            if instance.a == 100000:
                break

observeT = Thread(target=observe)
upT = Thread(target=up)
observeT.start()
upT.start()
upT.join()
observeT.join()
Thank you for your help!
Is this what you're looking for?
from threading import Thread, Lock, Condition

class MyClass:
    def __init__(self, a_lock):
        self.cond = Condition(a_lock)
        self.canproceed = False
        self.a = 0

    def __setattr__(self, key, value):
        super().__setattr__(key, value)
        if key == 'a':
            if value == 7 or value == 100000:
                self.cond.notify()
                if value == 7:
                    while not self.canproceed:
                        self.cond.wait()

lock = Lock()
instance = MyClass(lock)

def up():
    for i in range(100000):
        with lock:
            instance.a += 1

def observe():
    with instance.cond:
        while instance.a != 7:
            instance.cond.wait()
        print("This is 7!")
        instance.canproceed = True
        instance.cond.notify()
        while instance.a != 100000:
            instance.cond.wait()

observeT = Thread(target=observe)
upT = Thread(target=up)
observeT.start()
upT.start()
upT.join()
observeT.join()
Output:
This is 7!
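As a side note (not part of the answer above), the wait-until-a-value pattern can also be written with Condition.wait_for, which re-evaluates a predicate every time the condition is notified. A minimal, self-contained sketch; note that without the handshake above, the waiter may observe a value past 7 if it wakes late:

```python
from threading import Condition, Thread

cond = Condition()
state = {"a": 0}
seen = []

def up():
    for _ in range(10):
        with cond:
            state["a"] += 1
            cond.notify_all()  # wake waiters so they re-check their predicate

def observe():
    with cond:
        # Blocks until the predicate is true, re-checking on every notify.
        cond.wait_for(lambda: state["a"] >= 7)
        seen.append(state["a"])

t_obs = Thread(target=observe)
t_up = Thread(target=up)
t_obs.start()
t_up.start()
t_up.join()
t_obs.join()
print(seen[0] >= 7)  # True, though the value seen may be anywhere from 7 to 10
```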
I have Python 2.7 code that makes 1,000 requests to check my current IP address, written with 200 threads and raw sockets. The code performs the same task with two methods, and I couldn't find any difference between the two below besides that one subclasses threading.Thread.
#!/usr/bin/env python2.7
import sys, ssl, time, socket, threading

class Counter:
    def __init__(self):
        self._value = 0
        self._LOCK = threading.Lock()

    def increment(self):
        with self._LOCK:
            self._value += 1
            return self._value

def pr(out):
    sys.stdout.write('{}\n'.format(out))
    sys.stdout.flush()

def recvAll(sock):
    data = ''
    BUF_SIZE = 1024
    while True:
        part = sock.recv(BUF_SIZE)
        data += part
        length = len(part)
        if length < BUF_SIZE: break
    return data

class Speed(threading.Thread):
    _COUNTER = Counter()

    def __init__(self):
        super(Speed, self).__init__()
        self.daemon = True
        self._sock = ssl.wrap_socket(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        self._sock.settimeout(5)
        self._sock.connect(('checkip.amazonaws.com', 443))
        self._request = 'GET / HTTP/1.1\r\n'
        self._request += 'Host: checkip.amazonaws.com\r\n\r\n'

    def run(self):
        i = 0
        while i < 5:
            self._sock.write(self._request)
            response = recvAll(self._sock)
            if '200 OK' not in response: continue
            count = Speed._COUNTER.increment()
            pr('#{} - {}'.format(count, response))
            i += 1
        self._sock.close()

def speed():
    sock = ssl.wrap_socket(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
    sock.settimeout(5)
    sock.connect(('checkip.amazonaws.com', 443))
    request = 'GET / HTTP/1.1\r\n'
    request += 'Host: checkip.amazonaws.com\r\n\r\n'
    i = 0
    while i < 5:
        sock.write(request)
        response = recvAll(sock)
        if '200 OK' not in response: continue
        count = counter.increment()
        pr('#{} - {}'.format(count, response))
        i += 1
    sock.close()

slow = False
if slow:
    for _ in xrange(200):
        thread = Speed()
        thread.start()
else:
    counter = Counter()
    for _ in xrange(200):
        thread = threading.Thread(target = speed)
        thread.daemon = True
        thread.start()

while threading.active_count() > 1: time.sleep(1)
I expected both to have similar speeds. However, the variation that subclasses threading.Thread is much, much, much slower. Any ideas as to why?
Your Thread subclass is doing far too much of its work in __init__, which executes in the thread that constructs the object, not in the new thread. The version that uses the subclass therefore performs its socket setup, including the blocking connect, largely sequentially in the main thread.
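A minimal sketch (no sockets; the class name is illustrative) showing where __init__ and run() actually execute, which is the whole difference here:

```python
import threading

class Worker(threading.Thread):
    def __init__(self):
        super(Worker, self).__init__()
        # Runs in the thread that calls Worker(): any blocking setup here
        # (like the connect() in __init__ above) happens one thread at a time.
        self.init_ident = threading.current_thread().ident

    def run(self):
        # Runs in the new thread: blocking setup belongs here instead,
        # so 200 connects can proceed concurrently.
        self.run_ident = threading.current_thread().ident

w = Worker()
w.start()
w.join()
print(w.init_ident == threading.current_thread().ident)  # True: __init__ ran here
print(w.init_ident == w.run_ident)                       # False: run() ran in the new thread
```

Moving the wrap_socket/connect/request setup from Speed.__init__ into Speed.run should make the two variants perform similarly.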
I am using the Python Twisted library to build a server that receives data, does some processing on it, and then closes the connection. I observe that the program hangs in dataReceived without a print statement; with the print statement it goes through. I'm wondering whether print is somehow slowing down the execution enough to avoid a race condition, or whether I have coded a bug.
My code is as follows:
class Stack(Protocol):
    def __init__(self, factory):
        self.factory = factory
        self.bytesremaining = None
        self.payload = ""
        self.headerseen = False

    def dataReceived(self, data):
        if self.headerseen == False:
            header = unpack('B', data[0])[0]
            if header == 128:
                self.pop()
                return
            self.bytesremaining = self.datalength = unpack('B', data[0])[0]
            print self.datalength  # without this print the execution hangs in the middle.
            if len(data) > 1 and (len(self.factory.pushstack) < 100):
                self.payload += data[1:]
                self.bytesremaining -= len(data) - 1
            self.headerseen = True
        elif len(self.factory.pushstack) < 100:
            self.payload += data
            self.bytesremaining -= len(data) - 1
        if self.bytesremaining == 0:
            self.factory.pushstack.appendleft(self.payload)
            retval = pack('B', 0)
            self.transport.write(retval)
            self.transport.loseConnection()

class StackFactory(ServerFactory):
    def __init__(self):
        self.clients = []
        self.pushstack = collections.deque()
        self.popstack = collections.deque()
        self.clientsmap = {}

    def buildProtocol(self, addr):
        return Stack(self)
It appears to me that the default Twisted reactor on OS X (the selectreactor) is not as stable as kqueue.
I am not seeing the issue anymore after installing the kqueue reactor:
from twisted.internet import kqreactor
kqreactor.install()
from twisted.internet import reactor
I have the following function:
def getSuggestengineResult(suggestengine, seed, tablename):
    table = getTable(tablename)
    for keyword_result in results[seed][suggestengine]:
        i = 0
        while True:
            try:
                allKeywords.put_item(
                    Item={
                        'keyword': keyword_result
                    }
                )
                break
            except ProvisionedThroughputExceededException as pe:
                if (i > 9):
                    addtoerrortable(keyword_result)
                    print(pe)
                    break
                sleep(1)
                i = i + 1
                print("ProvisionedThroughputExceededException in getSugestengineResult")
The function gets started in more than one thread. If put_item succeeds, the function should finish in that thread; otherwise it should retry up to 9 times. Now my problem:
the print("ProvisionedThroughputExceededException in getSugestengineResult") line never gets printed; just the exception pe gets printed. So where is my problem? Are all the threads working on the same i? Or is it never possible to reach the print? I don't know what I am doing wrong.
You have to use a shared counter if you want all your threads to see the same count:
from multiprocessing import Lock, Value

class ThreadCounter(object):
    def __init__(self, initval=0):
        self.val = Value('i', initval)
        self.lock = Lock()

    def increment(self):
        with self.lock:
            self.val.value += 1

    def value(self):
        with self.lock:
            return self.val.value  # return the int, not the Value wrapper
then you can pass the counter to your function
counter = ThreadCounter(0)

def getSuggestengineResult(suggestengine, seed, tablename, counter):
    ...
    except ProvisionedThroughputExceededException as pe:
        if (counter.value() > 9):
            ...
        counter.increment()
        ...
This counter will be shared with the other threads
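For illustration (self-contained, not the asker's code): a lock-protected counter shared by plain threads, showing that every thread observes the same count. Since the workers here are threads rather than processes, threading.Lock is enough:

```python
import threading

class SharedCounter(object):
    def __init__(self, initval=0):
        self._value = initval
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # serialize read-modify-write so no increment is lost
            self._value += 1

    def value(self):
        with self._lock:
            return self._value

counter = SharedCounter()

def worker():
    for _ in range(1000):
        counter.increment()

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value())  # 10000: all increments from all threads are visible
```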
I have a simple aiohttp server with two handlers.
The first one does some computations in an async for loop; the second one just returns a text response. not_so_long_operation returns the 30th Fibonacci number with the slowest recursive implementation, which takes about one second.
def not_so_long_operation():
    return fib(30)

class arange:
    def __init__(self, n):
        self.n = n
        self.i = 0

    async def __aiter__(self):
        return self

    async def __anext__(self):
        i = self.i
        self.i += 1
        if self.i <= self.n:
            return i
        else:
            raise StopAsyncIteration

# GET /
async def index(request):
    print('request!')
    l = []
    async for i in arange(20):
        print(i)
        l.append(not_so_long_operation())
    return aiohttp.web.Response(text='%d\n' % l[0])

# GET /lol/
async def lol(request):
    print('request!')
    return aiohttp.web.Response(text='just respond\n')
When I try to fetch / and then /lol/, I get the response for the second one only after the first one has finished.
What am I doing wrong, and how do I make the index handler release the ioloop on each iteration?
Your example has no yield points (await statements) for switching between tasks.
An asynchronous iterator allows you to use await inside __aiter__/__anext__, but it doesn't insert yield points into your code automatically.
Say,
class arange:
    def __init__(self, n):
        self.n = n
        self.i = 0

    async def __aiter__(self):
        return self

    async def __anext__(self):
        i = self.i
        self.i += 1
        if self.i <= self.n:
            await asyncio.sleep(0)  # insert yield point
            return i
        else:
            raise StopAsyncIteration
should work as you expected.
In a real application you most likely won't need the await asyncio.sleep(0) calls, because you will be waiting on database access and similar activities.
Since fib(30) is CPU-bound and shares little data, you should probably use a ProcessPoolExecutor (as opposed to a ThreadPoolExecutor):
async def index(request):
    loop = request.app.loop
    executor = request.app["executor"]
    result = await loop.run_in_executor(executor, fib, 30)
    return web.Response(text="%d" % result)
Set up the executor when you create the app:
app = Application(...)
app["executor"] = ProcessPoolExecutor()
An asynchronous iterator is not really needed here. Instead you can simply give control back to the event loop inside your loop. In Python 3.4, this is done with a bare yield:
@asyncio.coroutine
def index(self):
    for i in range(20):
        not_so_long_operation()
        yield
In Python 3.5, you can define an Empty object that does essentially the same thing:
class Empty:
    def __await__(self):
        yield
Then use it with the await syntax:
async def index(request):
    for i in range(20):
        not_so_long_operation()
        await Empty()
Or simply use asyncio.sleep(0) that has been recently optimized:
async def index(request):
    for i in range(20):
        not_so_long_operation()
        await asyncio.sleep(0)
You could also run the not_so_long_operation in a thread using the default executor:
async def index(request, loop):
    for i in range(20):
        await loop.run_in_executor(None, not_so_long_operation)
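To see the effect of a yield point in isolation, here is a self-contained sketch with plain asyncio (no aiohttp): the second task gets to run between iterations of the first only because the busy task awaits asyncio.sleep(0) each time around the loop:

```python
import asyncio

order = []

async def busy():
    for _ in range(3):
        order.append("busy")    # stands in for a CPU-bound step
        await asyncio.sleep(0)  # yield point: hands control back to the loop

async def quick():
    order.append("quick")       # stands in for the /lol/ handler

async def main():
    await asyncio.gather(busy(), quick())

asyncio.run(main())
print(order)  # "quick" appears after busy's first step, not after all three
```

Remove the await asyncio.sleep(0) line and "quick" only runs once busy has finished all its iterations, which is exactly the behavior observed with the two aiohttp handlers.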