How to send messages through DataChannel within a loop - python

I have a question about programming with webrtcbin in Python. channel.connect('on-open', self.on_data_channel_open) registers an event listener that fires when the channel's state changes to open, at which point the callback on_data_channel_open is called. Inside it I loop over the log data with a for loop, expecting each log entry to be sent as the loop iterates; channel.emit('send-string', str(log)) is the call that sends a message through the data channel. However, the messages are only sent once the whole loop has finished.
def on_data_channel(self, webrtcbin_object, channel):
    """
    This is a callback function for when a data channel is created
    """
    print('data_channel created')
    channel.connect('on-error', self.on_data_channel_error)
    channel.connect('on-open', self.on_data_channel_open)
    channel.connect('on-close', self.on_data_channel_close)
    channel.connect('on-message-string', self.on_data_channel_message)
    print(f'The datachannel state is {channel.ready_state}')

def on_data_channel_open(self, channel):
    print('{} data_channel opened'.format(self.camera_id))
    self.sendlog(channel)

def sendlog(self, channel):
    try:
        with open('data.txt') as f:
            json_data = json.load(f)
        for log in json_data:
            print(channel.ready_state)
            if channel.ready_state == 2:
                print(f'sending the log: {log}')
                channel.emit('send-string', str(log))
                time.sleep(0.33)
            else:
                break
    except (FileNotFoundError, json.JSONDecodeError) as ex:
        print("reading the file encountered an error: {}; error is {}".format(ex, type(ex)))
I could see the data channel's buffered amount increasing while the sends were executed, so I suspect all of the data just goes into the buffer and the DataChannel object transmits the whole buffer to the receiver afterwards.
I have tried a few ways to make the sending asynchronous, but I am new to Python and none of them worked.
Are there any suggestions?
Thanks!
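One pattern that is sometimes suggested for this kind of problem (a sketch only, not verified against this exact webrtcbin setup): return from the on-open callback right away and do the paced sending on a worker thread, so the thread running the GLib main loop is never blocked by time.sleep and can keep flushing the channel's buffer.

import threading

def on_data_channel_open(self, channel):
    print('{} data_channel opened'.format(self.camera_id))
    # Hand the paced sending off to a worker thread so this callback
    # returns immediately and the main loop keeps servicing the channel.
    threading.Thread(target=self.sendlog, args=(channel,), daemon=True).start()

If emitting 'send-string' from a worker thread turns out to be a problem, the individual sends can instead be scheduled back onto the main loop with GLib.timeout_add.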

Related

Python/Quart: how to send while awaiting a response?

Using Python 3.9 and Quart 0.15.1, I'm trying to create a websocket route that will listen on a websocket for incoming request data, parse it, and send outbound response data to a client, over and over in a loop - until and unless the client sends a JSON struct with a given key "key" in the payload, at which point we can move on to further processing.
I can receive the initial inbound request from the client, parse it, and send outbound responses in a loop, but when I try to gather the second payload to parse for the presence of "key", things fall apart. It seems I can either await websocket.send_json() or await websocket.receive(), but not both at the same time.
The Quart docs suggest using async-timeout (https://pgjones.gitlab.io/quart/how_to_guides/request_body.html?highlight=timeout) to time out if the body of a request isn't received within the desired amount of time, so I thought I'd send messages in a while loop, spending a brief period in await websocket.receive() and timing out if no response was received:
@app.websocket('/listen')
async def listen():
    payload_requested = await websocket.receive()
    parsed_payload_from_request = json.loads(payload_requested)
    while "key" not in parsed_payload_from_request:
        response = "response string"
        await websocket.send_json(response)
        async with timeout(1):
            payload_requested = await websocket.receive()
            parsed_payload_from_request = json.loads(payload_requested)
    if "key" == "present":
        do_stuff()
...but that doesn't seem to work: an asyncio.exceptions.CancelledError is thrown by the timeout.
I suspect there's a better way to accomplish this using futures and asyncio, but it's not clear to me from the docs.
I think your code is timing out while waiting for a message from the client; you may not need the timeout in this case.
I've tried to write the code as you've described your needs and got this:
@app.websocket('/listen')
async def listen():
    while True:
        data = await websocket.receive_json()
        if "key" in data:
            await websocket.send_json({"key": "response"})
        else:
            do_stuff()
            return  # websocket closes
Does it do what you want? If not, what goes wrong?
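If the receive-with-a-timeout behaviour from the original attempt is still wanted, a possible variant (just a sketch; app, websocket and do_stuff are taken from the question) is to wrap websocket.receive() in asyncio.wait_for and treat a timeout as "no reply yet":

import asyncio
import json

@app.websocket('/listen')
async def listen():
    while True:
        await websocket.send_json("response string")
        try:
            # wait up to 1 second for the next client payload
            raw = await asyncio.wait_for(websocket.receive(), timeout=1)
        except asyncio.TimeoutError:
            continue  # nothing received yet, keep sending
        parsed = json.loads(raw)
        if "key" in parsed:
            do_stuff()
            return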

python-telegram-bot send message without using update and context

I want my bot to send a message to the Telegram chat when I am active on my computer. I've written other functions that use this function to send messages.
# Function to send messages back to chat
def sendmessage(update, context, message):
    context.bot.send_message(chat_id=update.effective_chat.id, text=message)
This, however, uses update and context from the dispatcher and handler.
The function from which I want to send a message runs in a separate thread and does not use the dispatcher or handler, so I cannot use the function above. I can't figure out how to send a message without using update and context.
What I have now:
def user_returned():
    row_list = []
    with open('log.csv') as csv_file:
        csv_reader = csv.DictReader(csv_file, delimiter=',')
        for row in csv_reader:
            row_list.append(row)
    last_line = row_list[-1]
    second_to_last_line = row_list[-2]
    print(last_line['active'], second_to_last_line['active'])
    if last_line['active'] == 'True' and second_to_last_line['active'] == 'False':
        message = 'user is back'
        context.bot.send_message(chat_id=update.effective_chat.id, text=message)
    else:
        message = 'still gone'
        context.bot.send_message(chat_id=update.effective_chat.id, text=message)
import telegram

# Build a Bot object directly from the token; no update/context is needed
bot = telegram.Bot(token=bot_token)
chat = 123456  # the chat id to send to
bot.sendMessage(chat, msg)
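For completeness, a minimal sketch of how this can be wired into a function that runs outside the dispatcher (bot_token and the chat id are placeholders you would supply yourself):

import telegram

bot = telegram.Bot(token=bot_token)  # bot_token assumed to be defined elsewhere

def notify(message, chat_id=123456):
    # Talk to the Bot object directly; no update/context required
    bot.send_message(chat_id=chat_id, text=message)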

Constantly polling SQS Queue using infinite loop

I have an SQS queue that I need to constantly monitor for incoming messages. Once a message arrives, I do some processing and continue to wait for the next message. I achieve this with an infinite loop that pauses for 2 seconds at the end of each iteration. This works, but I can't help feeling it isn't a very efficient way of constantly polling the queue.
Code example:
while (1):
    response = sqs.receive_message(
        QueueUrl=queue_url,
        AttributeNames=[
            'SentTimestamp'
        ],
        MaxNumberOfMessages=1,
        MessageAttributeNames=[
            'All'
        ],
        VisibilityTimeout=1,
        WaitTimeSeconds=1
    )
    try:
        message = response['Messages'][0]
        receipt_handle = message['ReceiptHandle']
        # Delete received message from queue
        sqs.delete_message(
            QueueUrl=queue_url,
            ReceiptHandle=receipt_handle
        )
        msg = message['Body']
        msg_json = eval(msg)
        value1 = msg_json['value1']
        value2 = msg_json['value2']
        process(value1, value2)
    except:
        pass
        # print('Queue empty')
    time.sleep(2)
In order to exit the script cleanly (which should run constantly), I catch the KeyboardInterrupt which gets triggered on Ctrl+C and do some clean-up routines to exit gracefully.
if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        logout()
Is there a better way to achieve the constant polling of the SQS queue, and is the 2 second delay necessary? I'm trying not to hammer the SQS service, but perhaps it doesn't matter?
This is ultimately the way that SQS works - it requires something to poll it to get the messages. But some suggestions:
Don't get just a single message each time. Do something more like:
# Note: this assumes the boto3 resource-level Queue API, e.g.
# sqs = boto3.resource('sqs').Queue(queue_url)
messages = sqs.receive_messages(
    MessageAttributeNames=['All'],
    MaxNumberOfMessages=10,
    WaitTimeSeconds=10
)
for msg in messages:
    logger.info("Received message: %s: %s", msg.message_id, msg.body)
This changes things a bit for you. The first thing is that you're willing to get up to 10 messages (this is the maximum number for SQS in one call). The second is that you will wait up to 10 seconds to get the messages. From the SQS docs:
The duration (in seconds) for which the call waits for a message to arrive in the queue before returning. If a message is available, the call returns sooner than WaitTimeSeconds. If no messages are available and the wait time expires, the call returns successfully with an empty list of messages.
So you don't need your own sleep call - if there are no messages the call will wait until it expires. Conversely, if you have a ton of messages then you'll get them all as fast as possible as you won't have your own sleep call in the code.
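Putting the pieces together, a minimal polling loop along those lines might look like this (a sketch; queue_url and process() are carried over from the question, and the resource-level Queue API is assumed):

import json
import boto3

queue = boto3.resource('sqs').Queue(queue_url)

while True:
    # Long polling: the call itself blocks for up to 10 seconds, so no sleep() is needed
    for msg in queue.receive_messages(MessageAttributeNames=['All'],
                                      MaxNumberOfMessages=10,
                                      WaitTimeSeconds=10):
        body = json.loads(msg.body)  # safer than eval() for JSON payloads
        process(body['value1'], body['value2'])
        msg.delete()  # delete only after the message has been processed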
Adding on to @stdunbar's answer:
You will find that, as stated in the docs, MaxNumberOfMessages might return fewer messages than the provided integer, which was the case for me.
MaxNumberOfMessages (integer) -- The maximum number of messages to return. Amazon SQS never returns more messages than this value (however, fewer messages might be returned). Valid values: 1 to 10. Default: 1.
As a result, I made this solution to read from an SQS dead-letter queue:
def read_dead_letter_queue():
    """ This function is responsible for reading query execution IDs related to the insertion that happens on Athena Query Engine
    and that we weren't able to deal with in the source queue.

    Args:
        None

    Returns:
        Dictionary: consists of execution_ids_list, mssg_receipt_handle_list and queue_url related to messages in a dead-letter queue that's related to the insertion operation into Athena Query Engine.
    """
    try:
        sqs_client = boto3.client('sqs')
        queue_url = os.environ['DEAD_LETTER_QUEUE_URL']
        execution_ids_list = list()
        mssg_receipt_handle_list = list()
        final_dict = {}

        # You can change the range stop number to whatever suits your scenario; you just need a number larger than the number of messages that may be in the queue (e.g. 1 thousand or 1 million), as the loop breaks out when there aren't any messages left in the queue before reaching the end of the range.
        for mssg_counter in range(1, 20, 1):
            sqs_response = sqs_client.receive_message(
                QueueUrl=queue_url,
                MaxNumberOfMessages=10,
                WaitTimeSeconds=10
            )
            print(f"This is the dead-letter-queue response --> {sqs_response}")
            try:
                for mssg in sqs_response['Messages']:
                    print(f"This is the message body --> {mssg['Body']}")
                    print(f"This is the message ID --> {mssg['MessageId']}")
                    execution_ids_list.append(mssg['Body'])
                    mssg_receipt_handle_list.append(mssg['ReceiptHandle'])
            except KeyError:
                print("Breaking out of the loop, as there isn't any message left in the queue.")
                break

        print(f"This is the execution_ids_list contents --> {execution_ids_list}")
        print(f"This is the mssg_receipt_handle_list contents --> {mssg_receipt_handle_list}")

        # We return the ReceiptHandle to be able to delete the message after we read it, in another function that's responsible for deletion.
        # We return a dictionary of the form --> {execution_ids_list: ['query_exec_id'], mssg_receipt_handle_list: ['ReceiptHandle']}
        final_dict['execution_ids_list'] = execution_ids_list
        final_dict['mssg_receipt_handle_list'] = mssg_receipt_handle_list
        final_dict['queue_url'] = queue_url
        return final_dict

        # TODO: We need to delete the message after we finish reading, in another function that will delete messages for both the DLQ and the source queue.
    except Exception as ex:
        print(f"read_dead_letter_queue Function Exception: {ex}")

Trying to produce a message to a Kafka topic on every iteration, but it looks like I end up sending no messages to the consumer

Not able to write messages into a Kafka topic (producer) when calling Kafka produce inside a loop.
I'm very new to Python and Kafka. I'm trying to write a Python program that writes messages into a Kafka topic and produces them, so a Kafka consumer can subscribe to that topic and receive the messages.
I'm not sure what is missing in my program that prevents the messages from being written to the topic.
Point to note: I'm reading a JSON file and using a for loop to read the key/value pairs, then assigning each to a variable and passing that variable to Kafka produce as the message argument.
Attached is the Kafka producer program.
Input: Json_smpl.json
File Content:
{
    "transaction":{
        "Accnttype":"Saving"
        ,"Branch":"West"
        ,"id":"WS"
    }
}
Program:
from confluent_kafka import Producer
import json

def acked(err, msg):
    if err is not None:
        print("Failed to deliver message: {0}: {1}"
              .format(msg.value(), err.str()))
    else:
        print("Message produced: {0}".format(msg.value()))

p = Producer({'bootstrap.servers': 'localhost:9092'})

try:
    with open('json_smpl.json') as read_j:
        data = json.load(read_j)
        get_data = data.get("transactions")
        print(get_data)
        for i in get_data:
            a = list(get_data.items()[0])
            p.produce(topic='mytopic12', 'myvalue #{0}'.format(a), callback=acked)
except KeyboardInterrupt:
    pass

p.flush(1)
Expected result: a message (JSON key & value) written to the Kafka topic for every iteration of the loop.
Actual result: no messages in the topic, so the consumer is not receiving any messages.
Your file has no transactions key, and no loop to go over, so your JSON isn't being parsed, and you are not catching a KeyError or ValueError
Start with this
p = Producer({'bootstrap.servers': 'localhost:9092'})

try:
    with open('json_smpl.json') as read_j:
        data = json.load(read_j).get("transaction")
    tosend = json.dumps(data)
    print("Ready to send : {}".format(tosend))
    p.produce('mytopic12', tosend, callback=acked)
except:
    print("There was some error")

How do I handle streaming messages with Python gRPC

I'm following this Route_Guide sample.
The sample in question fires off and reads messages without replying to a specific message. The latter is what I'm trying to achieve.
Here's what i have so far:
import grpc
...

channel = grpc.insecure_channel(conn_str)
try:
    grpc.channel_ready_future(channel).result(timeout=5)
except grpc.FutureTimeoutError:
    sys.exit('Error connecting to server')
else:
    stub = MyService_pb2_grpc.MyServiceStub(channel)
    print('Connected to gRPC server.')
    this_is_just_read_maybe(stub)

def this_is_just_read_maybe(stub):
    responses = stub.MyEventStream(stream())
    for response in responses:
        print(f'Received message: {response}')
        if response.something:
            # okay, now what? how do I send a message here?

def stream():
    yield my_start_stream_msg
    # this is fine, I receive this server-side
    # but I can't check for incoming messages here
I don't seem to have a read() or write() on the stub; everything seems to be implemented with iterators.
How do I send a message from this_is_just_read_maybe(stub)?
Is that even the right approach?
My Proto is a bidirectional stream:
service MyService {
    rpc MyEventStream (stream StreamingMessage) returns (stream StreamingMessage) {}
}
What you're trying to do is perfectly possible and will probably involve writing your own request iterator object that can be given responses as they arrive rather than using a simple generator as your request iterator. Perhaps something like
class MySmarterRequestIterator(object):

    def __init__(self):
        self._lock = threading.Lock()
        self._responses_so_far = []

    def __iter__(self):
        return self

    def _next(self):
        # some logic that depends upon what responses have been seen
        # before returning the next request message
        return <your message value>

    def __next__(self):  # Python 3
        return self._next()

    def next(self):  # Python 2
        return self._next()

    def add_response(self, response):
        with self._lock:
            self._responses_so_far.append(response)
that you then use like

my_smarter_request_iterator = MySmarterRequestIterator()
responses = stub.MyEventStream(my_smarter_request_iterator)
for response in responses:
    my_smarter_request_iterator.add_response(response)

There will probably be locking and blocking in your _next implementation to handle the situation of gRPC Python asking your object for the next request that it wants to send and your responding (in effect) "wait, hold on, I don't know what request I want to send until after I've seen how the next response turned out".
Instead of writing a custom iterator, you can also use a blocking queue to implement send and receive like behaviour for client stub:
import queue
...

send_queue = queue.SimpleQueue()  # or Queue if using Python before 3.7
my_event_stream = stub.MyEventStream(iter(send_queue.get, None))

# send
send_queue.put(StreamingMessage())

# receive
response = next(my_event_stream)  # type: StreamingMessage
This makes use of the sentinel form of iter, which converts a regular function into an iterator that stops when it reaches a sentinel value (in this case None).
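One detail worth noting with this approach: because iter() stops at the sentinel, putting the sentinel value on the queue is how the client signals it has finished sending, which half-closes the request stream:

# signal that the client is done sending; the request iterator stops here
send_queue.put(None)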
