I have a view in which I send one message (a request id) to workers. Then I need to listen to the result queue and return a response only when I receive a response message (the response must contain this request id). How do I do it correctly?
This is my code:
def get(self):
    ruid = rand_ruid()
    # add msg to q1
    write_ch = current_app.amqp_conn.channel()
    write_ch.queue_declare(queue='q1', durable=True)
    msg = mkmsg(ruid=ruid)
    write_ch.basic_publish(exchange='', routing_key='q1', body=msg)
    write_ch.close()
    # then wait for results from exbus
    listen_ch = current_app.amqp_conn.channel()
    listen_ch.exchange_declare(exchange='exbus', type='direct')
    listen_ch.queue_declare(queue='bus')
    listen_ch.queue_bind(exchange='exbus', queue='bus', routing_key=ruid)
    while 1:
        for method, properties, body in listen_ch.consume('bus'):
            if method and body:
                listen_ch.basic_ack(delivery_tag=method.delivery_tag)
                listen_ch.close()
                return make_response(body)
Updated
My question was incorrect, and so was my approach. I wanted to perform an asynchronous action (waiting for a result from the queue) in a synchronous part of the program (a Flask view).
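For reference, here is a minimal sketch of the classic RabbitMQ RPC pattern with pika's BlockingConnection, in case someone lands here with the original question: publish with reply_to and correlation_id, then drain a per-request reply queue with a bounded timeout. Only the q1 queue name comes from the question; rpc_call, the timeout value, and the worker contract are illustrative, and the view still blocks, just for a bounded time.
import uuid
import pika

def rpc_call(connection, body, timeout=5):
    channel = connection.channel()
    # exclusive, auto-delete reply queue just for this request
    callback_queue = channel.queue_declare(queue='', exclusive=True).method.queue
    corr_id = str(uuid.uuid4())
    channel.queue_declare(queue='q1', durable=True)
    channel.basic_publish(
        exchange='',
        routing_key='q1',
        properties=pika.BasicProperties(reply_to=callback_queue,
                                        correlation_id=corr_id),
        body=body,
    )
    # drain the reply queue until the matching correlation_id arrives or we time out
    for method, properties, reply in channel.consume(callback_queue,
                                                     inactivity_timeout=timeout):
        if method is None:  # inactivity timeout hit, give up
            break
        channel.basic_ack(method.delivery_tag)
        if properties.correlation_id == corr_id:
            channel.cancel()
            channel.close()
            return reply
    channel.cancel()
    channel.close()
    return None
The worker side would publish its result to properties.reply_to with the same correlation_id it received.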
I have this code:
class Tasks(SequentialTaskSet):
    @task
    def shopping(self):
        with self.parent.environment.parsed_options.shopping.request_airshopping(
                client=self.client, catch_response=True) as response:
            self.run_req(response, self.parent.environment.parsed_options.rsa)


class WebUser(HttpUser):
    tasks = [Tasks]
    min_wait = 0
    max_wait = 0

    @staticmethod
    @events.test_start.add_listener
    def on_test_start(environment: Environment, **kwargs):
        os.environ["ENABLE_SLEEP"] = "False"
        credentials = {
            "login": environment.parsed_options.login,
            "password": environment.parsed_options.password,
            "structure_unit_code": environment.parsed_options.suid,
        }
        login = Login(base_url=environment.host)
        response = login.request_login(credentials)
        parse_xml = parsing_xml_response(response.text)
        headers = {
            "Content-Type": "application/xml",
            "Authorization": f'Bearer {parse_xml.xpath("//Token")[0].text}'}
        shopping = Shopping(
            base_url=environment.parsed_options.host,
            headers=headers
        )
        set_paxs(environment, shopping)
        set_flight(environment, shopping)
        environment.parsed_options.__dict__.update({"shopping": shopping})
If I start without workers, it works successfully.
If I start with workers, I get errors: TypeError: can not serialize 'Shopping' object
How do I send the Shopping object from the master to a worker?
Try https://docs.locust.io/en/stable/running-distributed.html
You can do this by sending a message from worker to master or from master to worker. The message is just a string, but you can include a data payload with it. You need to register what your message is and what to do when that message is received, which will often be a function that does something with the data payload. The docs use this example:
from locust import events
from locust.runners import MasterRunner, WorkerRunner

# Fired when the worker receives a message of type 'test_users'
def setup_test_users(environment, msg, **kwargs):
    for user in msg.data:
        print(f"User {user['name']} received")
    environment.runner.send_message('acknowledge_users', f"Thanks for the {len(msg.data)} users!")

# Fired when the master receives a message of type 'acknowledge_users'
def on_acknowledge(msg, **kwargs):
    print(msg.data)

@events.init.add_listener
def on_locust_init(environment, **_kwargs):
    if not isinstance(environment.runner, MasterRunner):
        environment.runner.register_message('test_users', setup_test_users)
    if not isinstance(environment.runner, WorkerRunner):
        environment.runner.register_message('acknowledge_users', on_acknowledge)
The @events.init.add_listener decorator means Locust will run that function when it starts up, and the key part is the register_message call. The first argument is an arbitrary string, whatever you want it to be. The second argument is the function to call when that message is received.
Note the arguments defined on the function to be called. One of them is msg, which you need to use to get the data that was sent.
Then you can send the message and data payload any time you want. The docs use this example:
users = [
    {"name": "User1"},
    {"name": "User2"},
    {"name": "User3"},
]

environment.runner.send_message('test_users', users)
The string is the message you want the receiver to match, and the second argument (users in this case) is the actual data you want to send to the other instance(s).
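Applied to the Shopping question above, a hedged sketch would be: don't try to send the Shopping object at all, have the master send the plain data it was built from, and let each worker rebuild it locally. This assumes Shopping really only needs base_url and headers, as in the question code, and build_headers is a hypothetical helper standing in for the login/XML-parsing steps from on_test_start.
from locust import events
from locust.runners import MasterRunner, WorkerRunner

def setup_shopping(environment, msg, **kwargs):
    # runs on each worker: rebuild the object from the serializable payload
    environment.parsed_options.shopping = Shopping(
        base_url=msg.data["base_url"],
        headers=msg.data["headers"],
    )

@events.init.add_listener
def on_locust_init(environment, **_kwargs):
    if not isinstance(environment.runner, MasterRunner):
        environment.runner.register_message('setup_shopping', setup_shopping)

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    if isinstance(environment.runner, WorkerRunner):
        return  # only the master logs in and broadcasts
    headers = build_headers(environment)  # hypothetical helper: the login + token parsing from the question
    environment.runner.send_message('setup_shopping', {
        "base_url": environment.parsed_options.host,
        "headers": headers,
    })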
Can someone please tell me how to take callers from an enqueue to a dial (forwarding), automatically, in one app?
I have something like this and it won't work:
<Response>
    <Say>hello</Say>
    <Enqueue waitUrl="https://brass-dragonfly-1957.twil.io/assets/poczekalnia.xml">support</Enqueue>
    <Dial url="/ivr/agent/screencall">
        +000000000
        <Queue>support</Queue>
    </Dial>
    <Redirect>/ivr/welcome/</Redirect>
</Response>
In Python it looks like this:
twiml_response.say('hello')
twiml_response.enqueue('support', wait_url='https://brass-dragonfly-1957.twil.io/assets/poczekalnia.xml')
twiml_response.dial('+000000000', url=reverse('ivr:agents_screencall')).queue('support')
It looks like you are trying to perform actions for both your caller and an agent within the one TwiML response, and that will not work.
When you <Enqueue> a caller, no following TwiML will execute until you dequeue the caller with a <Leave>.
It looks like you want to dial an agent, allow them to screen the call and then connect them to the caller in the queue. To do this, you would start by creating a call to your agent using the REST API. With that call you would provide a URL that will be requested when that agent connects. In the response to that URL you should say the message to them, then <Dial> the <Queue>. Something like this:
import os
from twilio.rest import Client
from twilio.twiml.voice_response import VoiceResponse, Gather, Dial
from django.http import HttpResponse

account_sid = os.environ['TWILIO_ACCOUNT_SID']
auth_token = os.environ['TWILIO_AUTH_TOKEN']
client = Client(account_sid, auth_token)

def call(request):
    twiml_response = VoiceResponse()
    twiml_response.say('hello')
    twiml_response.enqueue('support', wait_url='https://brass-dragonfly-1957.twil.io/assets/poczekalnia.xml')
    call = client.calls.create(
        url='/agent',
        to=agent_phone_number,
        from_=your_twilio_number
    )
    return HttpResponse(twiml_response, content_type='text/xml')
Then, in response to the webhook to the /agent endpoint, you should return your response for screening, which might look like this:
def agent(request):
    twiml_response = VoiceResponse()
    gather = Gather(action='/agent_queue', method='POST', num_digits=1)
    gather.say('You are receiving an incoming call, press 1 to accept')
    twiml_response.append(gather)
    return HttpResponse(twiml_response, content_type='text/xml')
And finally in /agent_queue you determine the result of screening the call and if the agent accepts, then you connect them to the queue.
def agent_queue(request):
    twiml_response = VoiceResponse()
    digits = request.POST.get("Digits", "")
    if digits == "1":
        dial = Dial()
        dial.queue('support')
        twiml_response.append(dial)
    else:
        twiml_response.hangup()
    return HttpResponse(twiml_response, content_type='text/xml')
Using Python 3.9 and Quart 0.15.1, I'm trying to create a websocket route that will listen on a websocket for incoming request data, parse it, and send outbound response data to a client, over and over in a loop - until and unless the client sends a JSON struct with a given key "key" in the payload, at which point we can move on to further processing.
I can receive the initial inbound request from the client, parse it, and send outbound responses in a loop, but when I try to gather the second payload to parse for the presence of "key", things fall apart. It seems I can either await websocket.send_json() or await websocket.receive(), but not both at the same time.
The Quart docs suggest using async-timeout (https://pgjones.gitlab.io/quart/how_to_guides/request_body.html?highlight=timeout) to time out if the body of a request isn't received within the desired amount of time, so I thought I'd try to send messages in a while loop, spending a brief period in await websocket.receive() before timing out if a response wasn't receive()'d:
@app.websocket('/listen')
async def listen():
    payload_requested = await websocket.receive()
    parsed_payload_from_request = json.loads(payload_requested)

    while "key" not in parsed_payload_from_request:
        response = "response string"
        await websocket.send_json(response)

        async with timeout(1):
            payload_requested = await websocket.receive()
            parsed_payload_from_request = json.loads(payload_requested)

    if "key" == "present":
        do_stuff()
...but that doesn't seem to work: an asyncio.exceptions.CancelledError is thrown by the timeout.
I suspect there's a better way to accomplish this using futures and asyncio, but it's not clear to me from the docs.
I think your code is timing out waiting for a message from the client. You may not need it in this case.
I've tried to write code matching the needs you've described and got this:
@app.websocket('/listen')
async def listen():
    while True:
        data = await websocket.receive_json()

        if "key" in data:
            await websocket.send_json({"key": "response"})
        else:
            do_stuff()
            return  # websocket closes
Does it do what you want? If not, what goes wrong?
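If you do need to keep sending periodically while only briefly waiting for a reply (the original intent), a hedged sketch using asyncio.wait_for instead of async-timeout might look like the following; the timeout surfaces as asyncio.TimeoutError rather than the CancelledError seen in the question:
import asyncio

@app.websocket('/listen')
async def listen():
    while True:
        await websocket.send_json("response string")
        try:
            # wait at most one second for the client to say something
            data = await asyncio.wait_for(websocket.receive_json(), timeout=1)
        except asyncio.TimeoutError:
            continue  # nothing arrived, send again
        if "key" in data:
            do_stuff()
            return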
I'm following this Route_Guide sample.
The sample in question fires off and reads messages without replying to a specific message. The latter is what I'm trying to achieve.
Here's what I have so far:
import grpc
...
channel = grpc.insecure_channel(conn_str)
try:
    grpc.channel_ready_future(channel).result(timeout=5)
except grpc.FutureTimeoutError:
    sys.exit('Error connecting to server')
else:
    stub = MyService_pb2_grpc.MyServiceStub(channel)
    print('Connected to gRPC server.')
    this_is_just_read_maybe(stub)

def this_is_just_read_maybe(stub):
    responses = stub.MyEventStream(stream())
    for response in responses:
        print(f'Received message: {response}')
        if response.something:
            # okay, now what? how do i send a message here?

def stream():
    yield my_start_stream_msg
    # this is fine, i receive this server-side
    # but i can't check for incoming messages here
I don't seem to have a read() or write() on the stub; everything seems to be implemented with iterators.
How do I send a message from this_is_just_read_maybe(stub)?
Is that even the right approach?
My Proto is a bidirectional stream:
service MyService {
    rpc MyEventStream (stream StreamingMessage) returns (stream StreamingMessage) {}
}
What you're trying to do is perfectly possible and will probably involve writing your own request iterator object that can be given responses as they arrive rather than using a simple generator as your request iterator. Perhaps something like
class MySmarterRequestIterator(object):

    def __init__(self):
        self._lock = threading.Lock()
        self._responses_so_far = []

    def __iter__(self):
        return self

    def _next(self):
        # some logic that depends upon what responses have been seen
        # before returning the next request message
        return <your message value>

    def __next__(self):  # Python 3
        return self._next()

    def next(self):  # Python 2
        return self._next()

    def add_response(self, response):
        with self._lock:
            self._responses_so_far.append(response)
that you then use like
my_smarter_request_iterator = MySmarterRequestIterator()
responses = stub.MyEventStream(my_smarter_request_iterator)
for response in responses:
    my_smarter_request_iterator.add_response(response)
. There will probably be locking and blocking in your _next implementation to handle the situation of gRPC Python asking your object for the next request that it wants to send and your responding (in effect) "wait, hold on, I don't know what request I want to send until after I've seen how the next response turned out".
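To make that concrete, here is a hedged sketch of what the locking and blocking might look like: _next blocks on an internal queue of outgoing requests, and add_response decides, via whatever protocol logic you need (the _decide_next_request helper here is hypothetical), what to enqueue next.
import queue
import threading

class MySmarterRequestIterator(object):
    def __init__(self, first_request):
        self._lock = threading.Lock()
        self._responses_so_far = []
        self._requests = queue.Queue()
        self._requests.put(first_request)

    def __iter__(self):
        return self

    def __next__(self):
        # blocks gRPC's sending side until we know what to send next
        return self._requests.get()

    def add_response(self, response):
        with self._lock:
            self._responses_so_far.append(response)
            next_request = self._decide_next_request(response)  # your protocol logic
            if next_request is not None:
                self._requests.put(next_request)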
Instead of writing a custom iterator, you can also use a blocking queue to implement send and receive like behaviour for client stub:
import queue
...
send_queue = queue.SimpleQueue()  # or Queue if using Python before 3.7
my_event_stream = stub.MyEventStream(iter(send_queue.get, None))

# send
send_queue.put(StreamingMessage())

# receive
response = next(my_event_stream)  # type: StreamingMessage
This makes use of the sentinel form of iter, which converts a regular function into an iterator that stops when it reaches a sentinel value (in this case None).
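As a possible usage sketch, assuming a background thread drains the responses while the main code decides what to send (reusing response.something and my_start_stream_msg from the question):
import queue
import threading

send_queue = queue.SimpleQueue()
my_event_stream = stub.MyEventStream(iter(send_queue.get, None))

def drain_responses():
    for response in my_event_stream:
        print(f'Received message: {response}')
        if response.something:
            # reply to this specific message by enqueueing another request
            send_queue.put(StreamingMessage())

threading.Thread(target=drain_responses, daemon=True).start()
send_queue.put(my_start_stream_msg)  # first request on the stream
...
send_queue.put(None)  # sentinel: ends the request iterator and closes our side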
How to wait for a response from client after sending the client something using django-channels?
Whenever Group.send() is called from the function send_to_client() and the client receives the message, send_to_client() expects a response back from the client, which is received on the websocket.receive channel.
Is there a way to return that response_msg in the send_to_client() function?
Here is what I have so far.
Sample Code for consumers.py:
def ws_receive(message):
    response_msg = message.content['text']
    return response_msg

def send_to_client(param1, param2, param3):
    Group.send({
        "text": json.dumps({
            "First": param1,
            "Second": param2,
        })
    })
So once the message reaches the client side, the client will send a response back to the server, which will be received by the ws_receive(message) function through the websocket.receive channel defined in the urls.py file:
channel_patterns = [
    route("websocket.receive", ws_receive),
    ...
]
Is there a way to do this so that my function would look like this?
def send_to_client(...):
    Group.send(...)
    response_msg = # response message from client
Since you are receiving via a websocket, I am not sure you could even tell whether what you receive is a direct response to your request. I would rather put an id variable or something in the outgoing request, and ask the client to put that id in the response as well. That might require both the sender and receiver to know the value of the id (probably stored in the db?).
Also, it does not seem logical to block waiting for the response from the websocket.
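A hedged sketch of that id-matching idea, in the channels 1.x style of the question; PendingReply is an assumed model (any shared store would do), the group name is assumed, and the client is expected to echo the id back:
import json
import uuid

def send_to_client(param1, param2, param3):
    msg_id = str(uuid.uuid4())
    PendingReply.objects.create(msg_id=msg_id)  # assumed model used as the shared store
    Group("clients").send({  # assumed group name
        "text": json.dumps({
            "id": msg_id,
            "First": param1,
            "Second": param2,
        })
    })
    return msg_id  # the caller can later look up PendingReply by this id

def ws_receive(message):
    payload = json.loads(message.content['text'])
    # the client echoes the id back, so the reply can be matched to the request
    PendingReply.objects.filter(msg_id=payload.get("id")).update(
        response=json.dumps(payload))
Note that this still does not let send_to_client block for the reply; it only records which reply belongs to which request, consistent with the point above about not blocking on the websocket.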