I have the following code:
conn_str = "HostName=my_host.azure-devices.net;DeviceId=MY_DEVICE;SharedAccessKey=MY_KEY"
device_conn = IoTHubDeviceClient.create_from_connection_string(conn_str)
await device_conn.connect()
This works fine, but only because I've manually retrieved the connection string from the IoT hub and pasted it into the code. We are going to have hundreds of these devices, so is there a way to retrieve the connection string programmatically?
It would be the equivalent of the following:
az iot hub device-identity connection-string show --device-id MY_DEVICE --hub-name MY_HUB --subscription ABCD1234
How do I do this?
The device ID and key are what you give to each device; you choose where to store them and how to load them. The connection string is just a convenience to get started quickly; it has no meaning at the actual technical level.
You can use create_from_symmetric_key(symmetric_key, hostname, device_id, **kwargs) to pass the key, device ID, and hub URI directly to the SDK.
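For example, a minimal sketch (the key, host name, and device ID are placeholders you would load from your own storage):
from azure.iot.device.aio import IoTHubDeviceClient

# Sketch: these values are placeholders; in practice you would load them
# from a config file, environment variables, or a secure store.
symmetric_key = "MY_KEY"
hostname = "my_host.azure-devices.net"
device_id = "MY_DEVICE"

device_conn = IoTHubDeviceClient.create_from_symmetric_key(
    symmetric_key, hostname, device_id)
await device_conn.connect()  # inside your async code, as in the question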
I found that it's not possible to retrieve the actual connection string, but one can be built from the device's primary key:
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.device import IoTHubDeviceClient

# HUB_HOST is YOURHOST.azure-devices.net
# SHARED_ACCESS_KEY is from the registryReadWrite connection string
reg_str = "HostName={0};SharedAccessKeyName=registryReadWrite;SharedAccessKey={1}".format(
    HUB_HOST, SHARED_ACCESS_KEY)

# Look up the device in the identity registry and read its primary key
device = IoTHubRegistryManager(reg_str).get_device("MY_DEVICE_ID")
device_key = device.authentication.symmetric_key.primary_key

# Build the device connection string from its parts
conn_str = "HostName={0};DeviceId={1};SharedAccessKey={2}".format(
    HUB_HOST, "MY_DEVICE_ID", device_key)

client = IoTHubDeviceClient.create_from_connection_string(conn_str)
client.connect()

# Remaining code here...
Other options you could consider include:
Use the Device Provisioning service to manage provisioning and connecting your device to your IoT hub. You won't need to generate your connection strings manually in this case.
Use X.509 certificates (recommended for production environments instead of SAS). Each device has an X.509 cert derived from the root cert in your hub. See: https://learn.microsoft.com/azure/iot-hub/tutorial-x509-introduction
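For the X.509 option, a minimal sketch with the async device client (the file paths and passphrase are placeholders for your own certificate setup):
from azure.iot.device import X509
from azure.iot.device.aio import IoTHubDeviceClient

# Sketch: paths and passphrase below are placeholders.
x509 = X509(
    cert_file="./certs/device-cert.pem",
    key_file="./certs/device-key.pem",
    pass_phrase="MY_PASSPHRASE",  # only needed if the key is encrypted
)
client = IoTHubDeviceClient.create_from_x509_certificate(
    x509=x509,
    hostname="my_host.azure-devices.net",
    device_id="MY_DEVICE",
)
await client.connect()  # inside your async code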
Related
I'm trying to access Azure Event Hub, but my network makes me use a proxy and allows connections only over HTTPS (port 443).
Based on https://learn.microsoft.com/en-us/python/api/azure-eventhub/azure.eventhub.aio.eventhubproducerclient?view=azure-python
I added the proxy configuration and the TransportType.AmqpOverWebsocket parameter, so my producer looks like this:
from azure.eventhub import TransportType
from azure.eventhub.aio import EventHubProducerClient

async def run():
    # HTTP_PROXY is my proxy settings dict
    producer = EventHubProducerClient.from_connection_string(
        "Endpoint=sb://my_eh.servicebus.windows.net/;SharedAccessKeyName=eh-sender;SharedAccessKey=MFGf5MX6Mdummykey=",
        eventhub_name="my_eh",
        auth_timeout=180,
        http_proxy=HTTP_PROXY,
        transport_type=TransportType.AmqpOverWebsocket,
    )
and I get an error:
File "/usr/local/lib64/python3.9/site-packages/uamqp/authentication/cbs_auth_async.py", line 74, in create_authenticator_async
raise errors.AMQPConnectionError(
uamqp.errors.AMQPConnectionError: Unable to open authentication session on connection b'EHProducer-a1cc5f12-96a1-4c29-ae54-70aafacd3097'.
Please confirm target hostname exists: b'my_eh.servicebus.windows.net'
I don't know what might be the issue.
Might it be related to this one? https://github.com/Azure/azure-event-hubs-c/issues/50#issuecomment-501437753
You should be able to set up a proxy that the SDK uses to access Event Hubs. Here is a sample that shows how to set the HTTP_PROXY dictionary with the proxy information. Behind the scenes, when a proxy is passed in, the connection automatically goes over WebSockets.
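For reference, a sketch of what that dictionary looks like (host, port, and credentials are placeholders):
HTTP_PROXY = {
    'proxy_hostname': 'proxy.example.com',  # placeholder proxy host
    'proxy_port': 443,                      # placeholder proxy port
    'username': 'proxy_user',               # optional
    'password': 'proxy_password',           # optional
}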
As @BrunoLucasAzure suggested, checking the ports on the proxy itself would also be worth doing, because based on the error message it looks like the request made it past the proxy but can't resolve the endpoint.
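If you want to verify that quickly, here is a sketch that checks whether the proxy accepts a CONNECT tunnel to the Event Hubs endpoint (the proxy address is a placeholder):
import socket

proxy_host, proxy_port = 'proxy.example.com', 443  # placeholder proxy
target = 'my_eh.servicebus.windows.net'

# Open a TCP connection to the proxy and ask it to tunnel to the target;
# an "HTTP/1.1 200" reply means the proxy can reach the endpoint.
with socket.create_connection((proxy_host, proxy_port), timeout=10) as sock:
    request = 'CONNECT {0}:443 HTTP/1.1\r\nHost: {0}:443\r\n\r\n'.format(target)
    sock.sendall(request.encode())
    print(sock.recv(4096).decode(errors='replace'))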
My AWS Lambda function code works fine when I run it outside of an Amazon Virtual Private Cloud (Amazon VPC). However, when I configure my function to connect to a VPC, I get function timeout errors. How do I fix these?
import base64
import json
import logging
import sys

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)

# region_name and secret_name are assumed to be defined elsewhere in the module
def get_db_connection_config():
    # Create a Secrets Manager client.
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )
    # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
    # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
    # We rethrow the exception by default.
    try:
        logger.info("Retrieving MySQL database configuration...")
        get_secret_value_response = client.get_secret_value(
            SecretId=secret_name
        )
    except ClientError as error:
        logger.error(error)
        sys.exit()
    else:
        # Decrypts secret using the associated KMS CMK.
        # Depending on whether the secret is a string or binary, one of these fields will be populated.
        if 'SecretString' in get_secret_value_response:
            secret = get_secret_value_response['SecretString']
            return json.loads(secret)
        else:
            return base64.b64decode(get_secret_value_response['SecretBinary'])
When a Lambda function resides in the AWS network, it can use the internet to connect to these services. However, once it joins your VPC, outbound internet traffic is also routed through your VPC. As there is presumably no outbound internet connectivity there, the Lambda function is unable to reach the internet.
If your function needs internet access, use network address translation (NAT). Connecting a function to a public subnet doesn't give it internet access or a public IP address.
For your Lambda function to communicate with other AWS services while it resides within a VPC, one of the following must be in place.
The first option is to create either a NAT gateway or a NAT instance and add it to the route table of the subnet in which your Lambda function resides. To be clear, this should be a private subnet only: routing 0.0.0.0/0 through a NAT stops inbound traffic to instances with a public IP address that share the same subnet.
The second option is to use VPC endpoints for the services. With these, any traffic that would previously have traversed the public internet instead uses a private connection directly to the AWS service itself. Please note that not every AWS service is covered by this yet. A sketch of creating an endpoint follows below.
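For example, a sketch of creating an interface endpoint for Secrets Manager with boto3 (the region and all IDs are placeholders for your own environment):
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # placeholder region

# Create an interface endpoint so the Lambda can reach Secrets Manager
# privately; replace the VPC, subnet, and security group IDs with yours.
ec2.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='vpc-0123456789abcdef0',
    ServiceName='com.amazonaws.us-east-1.secretsmanager',
    SubnetIds=['subnet-0123456789abcdef0'],
    SecurityGroupIds=['sg-0123456789abcdef0'],
    PrivateDnsEnabled=True,
)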
I was trying to connect to a remote machine via WinRM in Python (pywinrm) using a domain account, following the instructions in
How to connect to remote machine via WinRM in Python (pywinrm) using domain account?
using
session = winrm.Session(server, auth=('user@DOMAIN', 'doesNotMatterBecauseYouAreUsingAKerbTicket'), transport='kerberos')
but I got this:
NotImplementedError("Can't use 'principal' argument with kerberos-sspi.")
I googled "principal argument" and got its meaning in mathematics, which is from complex analysis (https://en.m.wikipedia.org/wiki/Argument_(complex_analysis)) and definitely not the right meaning here. I'm not a native English speaker and I got stuck here.
The original code is here:
https://github.com/requests/requests-kerberos/blob/master/requests_kerberos/kerberos_.py
def generate_request_header(self, response, host, is_preemptive=False):
    """
    Generates the GSSAPI authentication token with kerberos.
    If any GSSAPI step fails, raise KerberosExchangeError
    with failure detail.
    """
    # Flags used by kerberos module.
    gssflags = kerberos.GSS_C_MUTUAL_FLAG | kerberos.GSS_C_SEQUENCE_FLAG
    if self.delegate:
        gssflags |= kerberos.GSS_C_DELEG_FLAG

    try:
        kerb_stage = "authGSSClientInit()"
        # contexts still need to be stored by host, but hostname_override
        # allows use of an arbitrary hostname for the kerberos exchange
        # (eg, in cases of aliased hosts, internal vs external, CNAMEs
        # w/ name-based HTTP hosting)
        kerb_host = self.hostname_override if self.hostname_override is not None else host
        kerb_spn = "{0}@{1}".format(self.service, kerb_host)

        kwargs = {}
        # kerberos-sspi: Never pass principal. Raise if user tries to specify one.
        if not self._using_kerberos_sspi:
            kwargs['principal'] = self.principal
        elif self.principal:
            raise NotImplementedError("Can't use 'principal' argument with kerberos-sspi.")
Any help will be greatly appreciated.
The kafka-python client supports Kafka 0.9 but doesn't obviously include the new authentication and encryption features, so my guess is that it only works with open servers (as in previous releases). In any case, even the Java client needs a special Message Hub login module to connect (or so it would seem from the example), which suggests that nothing will work unless there is a similar module available for Python.
My specific scenario is that I want to use the message hub service from a Jupyter notebook also hosted in Bluemix (the Apache Spark service).
I was able to connect using the kafka-python library:
$ pip install --user kafka-python
Then ...
import logging
import ssl

from kafka import KafkaProducer
from kafka.errors import KafkaError

log = logging.getLogger(__name__)

############################################
# Service credentials from Bluemix UI:
############################################
bootstrap_servers =   # kafka_brokers_sasl
sasl_plain_username = # user
sasl_plain_password = # password
############################################

sasl_mechanism = 'PLAIN'
security_protocol = 'SASL_SSL'

# Create a new context using system defaults, disable all but TLS1.2
context = ssl.create_default_context()
context.options |= ssl.OP_NO_TLSv1    # |= sets the flag; &= would clear it
context.options |= ssl.OP_NO_TLSv1_1

producer = KafkaProducer(bootstrap_servers = bootstrap_servers,
                         sasl_plain_username = sasl_plain_username,
                         sasl_plain_password = sasl_plain_password,
                         security_protocol = security_protocol,
                         ssl_context = context,
                         sasl_mechanism = sasl_mechanism,
                         api_version = (0, 10))

# Asynchronous by default
future = producer.send('my-topic', b'raw_bytes')

# Block for 'synchronous' sends
try:
    record_metadata = future.get(timeout=10)
except KafkaError:
    # Decide what to do if produce request failed...
    log.exception("Produce request failed")
else:
    # Successful result returns assigned partition and offset
    print(record_metadata.topic)
    print(record_metadata.partition)
    print(record_metadata.offset)
This worked for me from the Bluemix Apache Spark service in a Jupyter notebook; note, however, that this approach does not use Spark itself. The code simply runs on the driver host.
SASL support in the Kafka Python client has been requested: https://github.com/dpkp/kafka-python/issues/533 but until the username/password login method used by Message Hub is supported, it won't work.
Until this is natively supported by the Bluemix Apache Spark Service, you can follow the same approach as the Realtime Sentiment Analysis project. Helper code for this can be found on the cds labs spark samples github repo.
We've added some text to our documentation on non-Java language support - see the "CONNECTING AND AUTHENTICATING IN A NON-JAVA APPLICATION" section:
https://www.ng.bluemix.net/docs/services/MessageHub/index.html
Our current authentication method is non-standard and not supported by the Apache project; it was intended as a temporary solution. The Message Hub team is working with the Apache Kafka community to develop KIP-43. Once this is finalised, we'll change the Message Hub authentication implementation to match, and it will be possible to implement clients to that spec in any language.
I am trying to connect to eDirectory using Python. It is not as easy as connecting to Active Directory using Python, so I am wondering if this is even possible. I am currently running Python 3.4.
I'm the author of ldap3; I use eDirectory to test the library.
Just try the following code:
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server('your_server_name', get_info=ALL)  # don't use get_info if you don't need info on the server and the schema
connection = Connection(server, 'your_user_name_dn', 'your_password')
connection.bind()
if connection.search('your_search_base', '(objectClass=*)', SUBTREE, attributes=['cn', 'objectClass', 'your_attribute']):
    for entry in connection.entries:
        print(entry.entry_get_dn())
        print(entry.cn, entry.objectClass, entry.your_attribute)
connection.unbind()
If you need a secure connection just change the server definition to:
server = Server('your_server_name', get_info=ALL, use_ssl=True)  # default TLS configuration on port 636
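If you need to control the TLS settings yourself (for example, to validate the server certificate), you can pass a Tls object; here is a sketch, where the CA bundle path is a placeholder:
import ssl
from ldap3 import Server, Tls, ALL

# Sketch: require certificate validation and TLS 1.2; the CA bundle path
# is a placeholder for your own file.
tls = Tls(validate=ssl.CERT_REQUIRED,
          version=ssl.PROTOCOL_TLSv1_2,
          ca_certs_file='/path/to/ca_bundle.pem')
server = Server('your_server_name', get_info=ALL, use_ssl=True, tls=tls)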
Also, any example in the docs at https://ldap3.readthedocs.org/en/latest/quicktour.html should work with eDirectory.
Bye,
Giovanni