Deprecated method when making C2D communication from Azure IoT Hub to a Python script - python

message = client.receive_message()
This method is now deprecated, and when searching for a solution it seems I am the only one with this issue.
I get this warning:
DeprecatedWarning: receive_message is deprecated as of 2.3.0. We
recommend that you use the .on_message_received property to set a
handler instead of message = client.receive_message()
If you have a possible solution, please post it here.
I am running the latest Python 3.9 and the latest Azure IoT device library.

You're trying to use a method that's deprecated and will eventually be removed. As the warning message says, the correct way to handle C2D messages is to set an event handler. There is a good example of that here
The part you will be interested in is:
# define behavior for receiving a message
# NOTE: this could be a function or a coroutine
def message_received_handler(message):
    print("the data in the message received was ")
    print(message.data)
    print("custom properties are")
    print(message.custom_properties)
    print("content Type: {0}".format(message.content_type))
    print("")

# set the message received handler on the client
device_client.on_message_received = message_received_handler
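For context, here is a minimal end-to-end sketch. The CONNECTION_STRING name and the keep-alive loop are illustrative assumptions about the rest of your script, not part of the library's example:
import time
from azure.iot.device import IoTHubDeviceClient

# CONNECTION_STRING is a placeholder for your device connection string
device_client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
device_client.connect()

def message_received_handler(message):
    print("the data in the message received was")
    print(message.data)

# the handler fires on every incoming C2D message; no polling needed
device_client.on_message_received = message_received_handler

# keep the script alive so the handler can keep firing
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    device_client.disconnect()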


How to connect Kafka IO from Apache Beam to a cluster in Confluent Cloud

I've made a simple pipeline in Python to read from Kafka. The thing is that the Kafka cluster is on Confluent Cloud and I am having some trouble connecting to it.
I'm getting the following log on the Dataflow job:
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:820)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:631)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:612)
at org.apache.beam.sdk.io.kafka.KafkaIO$Read$GenerateKafkaSourceDescriptor.processElement(KafkaIO.java:1495)
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
So I think I'm missing something while passing the config, since the error mentions the JAAS configuration. I'm really new to all of this and I know nothing about Java, so I don't know how to proceed even after reading the JAAS documentation.
The code of the pipeline is the following:
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions
import os
import json
import logging

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'credentialsOld.json'

with open('cluster.configuration.json') as cluster:
    data = json.load(cluster)

def logger(element):
    logging.info('Something was found')

def main():
    config = {
        "bootstrap.servers": data["bootstrap.servers"],
        "security.protocol": data["security.protocol"],
        "sasl.mechanisms": data["sasl.mechanisms"],
        "sasl.username": data["sasl.username"],
        "sasl.password": data["sasl.password"],
        "session.timeout.ms": data["session.timeout.ms"],
        "auto.offset.reset": "earliest"
    }
    print('======================================================')
    beam_options = PipelineOptions(
        runner='DataflowRunner',
        project='project',
        experiments=['use_runner_v2'],
        streaming=True,
        save_main_session=True,
        job_name='kafka-stream-test'
    )
    with beam.Pipeline(options=beam_options) as p:
        msgs = p | 'ReadKafka' >> ReadFromKafka(
            consumer_config=config,
            topics=['users'],
            expansion_service="localhost:8088"
        )
        msgs | beam.FlatMap(logger)

if __name__ == '__main__':
    main()
I read something about passing a property java.security.auth.login.config in the config dictionary, but since that example is in Java and I am using Python, I'm really lost about what I have to pass, or even whether that's the right property to pass.
By the way, I'm getting the API key and secret from here, and this is what I am passing to sasl.username and sasl.password.
I faced the same error the first time I tried Beam's expansion service. The key sasl.mechanisms that you are supplying is incorrect; try sasl.mechanism instead. You also do not need to supply the username and password separately, since the connection is authenticated via JAAS. Basically, a consumer_config like the one below worked for me:
config = {
    "bootstrap.servers": data["bootstrap.servers"],
    "security.protocol": data["security.protocol"],
    "sasl.mechanism": data["sasl.mechanisms"],
    "session.timeout.ms": data["session.timeout.ms"],
    "group.id": "tto",
    "sasl.jaas.config": f'org.apache.kafka.common.security.plain.PlainLoginModule required serviceName="Kafka" username="{data["sasl.username"]}" password="{data["sasl.password"]}";',
    "auto.offset.reset": "earliest"
}
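Incidentally, this makes sense: ReadFromKafka runs a Java Kafka consumer behind the cross-language expansion service, and the Java client takes its credentials through sasl.jaas.config, while sasl.username and sasl.password are librdkafka-style keys that the Java client does not understand.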
I got a partial answer to this question: I fixed this problem but ran into another one:
config = {
    "bootstrap.servers": data["bootstrap.servers"],
    "security.protocol": data["security.protocol"],
    "sasl.mechanisms": data["sasl.mechanisms"],
    "sasl.username": data["sasl.username"],
    "sasl.password": data["sasl.password"],
    "session.timeout.ms": data["session.timeout.ms"],
    "group.id": "tto",
    "sasl.jaas.config": f'org.apache.kafka.common.security.plain.PlainLoginModule required serviceName="Kafka" username="{data["sasl.username"]}" password="{data["sasl.password"]}";',
    "auto.offset.reset": "earliest"
}
I needed to provide the sasl.jaas.config property with the API key and secret of my cluster, and also the service name. However, now I'm facing a different error when running the pipeline on Dataflow:
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
This error shows up after 4-5 minutes of trying to run the job on Dataflow. I have no idea how to fix this, but I think my broker on Confluent is rejecting the connection, possibly because the cluster is in a different zone than the job region.
UPDATE:
I tested the code on Linux/Ubuntu and, I don't know why, the expansion service gets downloaded automatically, so you won't get the unsupported signal error. I'm still having some issues trying to authenticate to Confluent Kafka, though.

Telegram bot with Python: map_to_parent state is not recognized in nested conversation

My child convo does not transfer to the parent convo. It seems like the key in map_to_parent isn't being recognized? It just stops after the child convo ends. What am I doing wrong here?
I'm also getting this warning:
UserWarning: Handler returned state methodchoiceend which is unknown to the ConversationHandler.
Here is an mwe: https://pastebin.com/pnve9gke
I see two problems in the example that you linked:
Both method_convo_handler and count_convo are used as nested conversations within another ConversationHandler and are also added directly via dispatcher.add_handler. This is bound to interfere.
return METHODCHOICEEND is used in done_method, which in turn is used in count_convo, but count_convo doesn't have a map_to_parent.
BTW, if you give your ConversationHandlers a name via the corresponding argument, the warning that you mentioned will read
Handler returned state methodchoiceend which is unknown to the ConversationHandler <name>.
making it a bit easier to debug :)
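To illustrate the intended wiring, here is a minimal sketch; the state constants and the handler functions (start, start_counting, done_method) are hypothetical placeholders, not taken from your pastebin:
from telegram.ext import CommandHandler, ConversationHandler

# hypothetical states for illustration
METHODCHOICE, COUNTING, METHODCHOICEEND = range(3)

child_convo = ConversationHandler(
    entry_points=[CommandHandler("count", start_counting)],
    states={COUNTING: [CommandHandler("done", done_method)]},
    fallbacks=[],
    # when done_method returns METHODCHOICEEND, control is handed
    # back to the parent conversation in the METHODCHOICE state
    map_to_parent={METHODCHOICEEND: METHODCHOICE},
)

parent_convo = ConversationHandler(
    entry_points=[CommandHandler("start", start)],
    # the child conversation is nested inside a parent state;
    # it is NOT also added to the dispatcher directly
    states={METHODCHOICE: [child_convo]},
    fallbacks=[],
)

dispatcher.add_handler(parent_convo)  # only the parent is registered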
Disclaimer: I'm currently the maintainer of python-telegram-bot

How do I send a Telegram dice?

Below is the code for my Telegram bot.
def dice(bot, update):
    bot.send_dice.message(chat_id=update.message.chat_id)

updater = Updater(API_KEY, use_context=True)
dp = updater.dispatcher
dp.add_handler(CommandHandler('dice', dice))
This code produces this error:
AttributeError: 'Update' object has no attribute 'send_dice'
Please help, I have no idea how this works.
The error is likely due to the fact that you're using the old-style signature def callback(bot, update), while you're on python-telegram-bot version >= 12. The new signature is def callback(update, context), where context is an object that contains the bot instance as context.bot along with a bunch of other utility functionality.
Please see the transition guide to version 12 (and also the one for version 13, if applicable) for details.
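For illustration, a minimal sketch of the corrected handler; the polling boilerplate at the end is an assumption about the rest of your script:
from telegram.ext import Updater, CommandHandler

def dice(update, context):
    # with the v12+ signature, the bot instance lives on the context object
    context.bot.send_dice(chat_id=update.message.chat_id)

updater = Updater(API_KEY, use_context=True)
updater.dispatcher.add_handler(CommandHandler('dice', dice))
updater.start_polling()
updater.idle()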
Disclaimer: I'm the maintainer of python-telegram-bot.

Confluent Python API for Kafka

I'm getting an error with basic usage of the official Confluent Kafka Python API:
I subscribe:
kafka_consumer.subscribe(topics=["my-avro-topic"], on_assign=on_assign_callback, on_revoke=on_revoke_callback)
Using this callback:
def on_assign_callback(consumer, topic_partitions):
    for topic_partition in topic_partitions:
        print("without position. topic={}. partition={}. offset={}. error={}".format(
            topic_partition.topic, topic_partition.partition,
            topic_partition.offset, topic_partition.error))
    topic_partitions_with_offsets = consumer.position(topic_partitions)
    print("assigned to {}->{} partitions".format(len(topic_partitions), len(topic_partitions_with_offsets)))
    for topic_partition in topic_partitions_with_offsets:
        print("with position. topic={}. partition={}. offset={}. error={}".format(
            topic_partition.topic, topic_partition.partition,
            topic_partition.offset, topic_partition.error))
which produces the console output:
without position. topic=my-avro-topic. partition=0. offset=-1001. error=None
assigned to 1->1 partitions
with position. topic=my-avro-topic. partition=0. offset=-1001. error=KafkaError{code=_UNKNOWN_PARTITION,val=-190,str="(null)"}
Can someone explain this? Why would I get a callback notification on an unknown partition? Similar code works perfectly using the Java API.
This is a bug in the underlying C library, librdkafka.
See the upstream issue.
If you want to start consuming from the stored offsets, you don't actually need to call position() to retrieve them; the client will do so automatically if you don't change the default offset of -1001.
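In other words, the on_assign callback can just log the assignment and leave the offsets alone. A minimal sketch, reusing the consumer and topic from the question:
def on_assign_callback(consumer, topic_partitions):
    # leave each partition's offset at the default -1001 (OFFSET_INVALID);
    # the client then resumes from the committed offsets on its own
    for tp in topic_partitions:
        print("assigned: topic={} partition={}".format(tp.topic, tp.partition))

kafka_consumer.subscribe(topics=["my-avro-topic"], on_assign=on_assign_callback)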

Determining Exact Reason for Facebook Error Code 100

I am experimenting with Facebook and trying to create an event via the Graph API. I am using Django and the python-facebook-sdk from GitHub. I can successfully post to my wall, pull friends, etc.
I am using django-social-auth for the Facebook login, and I have the permissions in settings.py:
FACEBOOK_EXTENDED_PERMISSIONS = ['publish_stream','create_event','rsvp_event']
In the Graph API Explorer on Facebook my request works, so I know what parameters to use, and, well, I am using them.
Here is my python code:
def new_event(self):
    event = {}
    event['name'] = name
    event['privacy'] = 'OPEN'
    event['start_time'] = '2011-11-04T14:42Z'
    event['end_time'] = '2011-11-05T14:46Z'
    self.graph.put_object("me", "events", args=None, post_args=event)
The code that is calling the Facebook API is roughly the following (the access_token is also added to post_args, which is then converted to post_data and urlencoded):
file = urllib.urlopen("https://graph.facebook.com/me/events?" +
                      urllib.urlencode(args), post_data)
The error I am getting is:
Exception Value: (#100) Invalid parameter
I am trying to figure out what is wrong, but I am also curious how to figure out what is wrong in general, so I can debug this in the future. The error seems too generic, because it doesn't tell me what is actually wrong.
I'm not really sure how post_args works, but this call did the trick:
graph.put_object("me", "events", start_time="2013-11-04T14:42Z", privacy="OPEN", end_time="2013-11-05T14:46Z", name="Test Event")
The invalid parameter error most likely points to how you are feeding the parameters as post_args. I don't think the SDK was ever designed to be fed like this. I could be mistaken, as I'm not really sure what post_args would be doing.
Another way, based on how put_object is set up with **data, would be:
graph.put_object("me", "events", **event)
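Since put_object collects extra keyword arguments into **data, unpacking the dict with **event is equivalent to the explicit keyword-argument call above, as long as event holds the same name, privacy, start_time, and end_time keys.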
