I am trying to solve an issue and I don't fully understand why my code is not working.
I have 2 topics. TOPIC_A, on which I send my messages and receive them correctly. Once a message has been received, I would like to send it to another topic, TOPIC_B. So far I have been testing this code locally and everything worked just fine, but since I started using an Azure Function Service Bus trigger, the code started acting funny. Here is my code:
import logging
import azure.functions as func
import json
import boto3
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential
from azure.servicebus import ServiceBusClient, ServiceBusMessage
def main(message: func.ServiceBusMessage):
    logging.info(message)
    print(message)

    # KeyVault configuration
    KeyVault_Url = f'url'
    credential = DefaultAzureCredential()
    client_keyvault = SecretClient(vault_url=KeyVault_Url, credential=credential)

    # Service Bus connection string
    CONNECTION_STR = client_keyvault.get_secret("CONN").value

    # For receiving the feedback from campaigns
    TOPIC_NAME_A = "TOPICA"
    SUBSCRIPTION_NAME = "XXX"
    # For sending feedback and results of sentiment analysis and language detection
    TOPIC_NAME_B = "TOPICB"

    comprehend = boto3.client(
        service_name='comprehend',
        region_name='eu-west-1',
        aws_access_key_id=client_keyvault.get_secret("ID").value,
        aws_secret_access_key=client_keyvault.get_secret("SECRET").value
    )

    # This block will receive the messages from the Service Bus listed above.
    # Please mind: once a message gets received and printed (JSON format), that event is removed from the portal Service Bus.
    servicebus_client = ServiceBusClient.from_connection_string(conn_str=CONNECTION_STR)
    with servicebus_client:
        receiver = servicebus_client.get_subscription_receiver(
            topic_name=TOPIC_NAME_A,
            subscription_name=SUBSCRIPTION_NAME
        )
        with receiver:
            received_msgs = receiver.receive_messages(max_message_count=10, max_wait_time=60)
            output_global = {}
            for msg in received_msgs:
                message1 = str(msg)
                res = json.loads(message1)
                # Extracting the text from the Service Bus message
                text = res['Text']
                # Passing the text to Comprehend
                result_json = json.dumps(comprehend.detect_sentiment(Text=text, LanguageCode='en'), sort_keys=True, indent=4)
                result = json.loads(result_json)  # converting JSON to a Python dictionary
                print(result)
                # logging.info("Result from comprehend" , result)
                # Extracting the sentiment value
                sentiment = result["Sentiment"]
                # Extracting the sentiment score
                if sentiment == "POSITIVE":
                    value = round(result["SentimentScore"]["Positive"] * 100, 2)
                elif sentiment == "NEGATIVE":
                    value = round(result["SentimentScore"]["Negative"] * 100, 2)
                elif sentiment == "NEUTRAL":
                    value = round(result["SentimentScore"]["Neutral"] * 100, 2)
                elif sentiment == "MIXED":
                    value = round(result["SentimentScore"]["Mixed"] * 100, 2)
                # To detect the language of the feedback, the text received from the Service Bus is passed to the function below
                lang_result = json.dumps(comprehend.detect_dominant_language(Text=text), sort_keys=True, indent=4)
                # Converting language detection results into a dictionary
                lang_result_json = json.loads(lang_result)
                # Formatting the score from the results
                for line in lang_result_json["Languages"]:
                    line['Score'] = round(line['Score'] * 100, 2)
                # Storing the output of sentiment analysis, language detection and ids in a dictionary and converting it to JSON
                output = {
                    'XXX': res['XXX'],
                    'XXX Id': res['XXX'],
                    'XXX': res['XXX'],
                    'XXX': res['XXX'],
                    'XXX': res['XXX'],
                    'Sentiment': sentiment,
                    'Value': value,
                    'Languages': lang_result_json['Languages']
                }
                # logging.info("Message Body: " + output)
                output_json = json.dumps(output, ensure_ascii=False)

                # -----------------------------------------------------------------
                # Sending the processed output (output_json) in JSON format to another topic
                def send_output(sender):
                    message2 = ServiceBusMessage(
                        output_json,
                        content_type="XXX",  # setting the content type so that the Service Bus can route it
                        ApplicationProperties={b'tenantcode': msg.ApplicationProperties[b'tenantcode']}  # setting the tenant code
                    )
                    sender.send_messages(message2)

                servicebus_client = servicebus_client.from_connection_string(conn_str=CONNECTION_STR, logging_enable=True)
                with servicebus_client:
                    sender = servicebus_client.get_topic_sender(topic_name=TOPIC_NAME_B)
                    with sender:
                        send_output(sender)
This is my host.json:
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "autoComplete": true
      }
    }
  },
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
  }
}
And this is my function.json:
{
  "scriptFile": "outthinkServiceBus.py",
  "entryPoint": "main",
  "bindings": [
    {
      "name": "message",
      "type": "serviceBusTrigger",
      "direction": "in",
      "topicName": "XXX",
      "subscriptionName": "XXX",
      "connection": "XXX"
    }
  ]
}
Inside the receiver I have a for loop over msg, with which I would like to loop over all the messages in the topic and send them one by one to TOPIC_B.
Everything works fine as it is, and in the output of the Azure Function I can see this message:
2021-10-15 15:23:45.124
Message receiver b'receiver-link-' state changed from <MessageReceiverState.Open: 3> to <MessageReceiverState.Closing: 4> on connection: b'SBReceiver-'
Information
2021-10-15 15:23:45.552
Shutting down connection b'SBReceiver-'.
So the processing gets as far as the receiver, but the sender function never gets triggered.
If I remove the for loop, the code works just fine: I can see the sender triggering and completing successfully.
Any help to understand where the mistake is and what I am doing wrong?
Thank you so much for any help you can provide me with. And please, if you need more info, just ask.
UPDATE:
I have tried to move the send_output function outside the for loop. In that case send_output triggers, but the Azure Function fails because output_json is out of scope. So far the only thing I could figure out is that, for some reason, the function defined inside the loop never fires.
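One restructuring that sidesteps the scoping problem is to open the sender once, before the loop, and send inside it, so nothing depends on a function defined per iteration. A minimal sketch, assuming the CONNECTION_STR, TOPIC_NAME_B and received_msgs from the code above are in scope, and using "application/json" as a placeholder content type (note the v7 SDK spells the properties keyword application_properties):
# Sketch only: create the sender once, before iterating, so nothing depends
# on a function defined per-iteration. Assumes CONNECTION_STR, TOPIC_NAME_B
# and received_msgs from the code above are in scope.
sender_client = ServiceBusClient.from_connection_string(conn_str=CONNECTION_STR)
with sender_client:
    sender = sender_client.get_topic_sender(topic_name=TOPIC_NAME_B)
    with sender:
        for msg in received_msgs:
            # ... build output_json for this msg exactly as above ...
            out_msg = ServiceBusMessage(
                output_json,
                content_type="application/json",  # placeholder; use whatever routing needs
                application_properties={b'tenantcode': msg.application_properties[b'tenantcode']},
            )
            sender.send_messages(out_msg)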
Related
I'm trying to grab information from a few channels on server A (where I have no permissions, only view and read), and then take that information and send it to my server B (which I own).
I've shared what I've done so far. All it does is send me every message from all servers and DMs.
I'm unable to filter out server A and the few channels in it, and to then send those messages to my server B.
import websocket
import json
import threading
import time

def send_json_request(ws, request):
    ws.send(json.dumps(request))

def receive_json_response(ws):
    response = ws.recv()
    if response:
        return json.loads(response)

def heartbeat(interval, ws):
    print('Search activated')
    while True:
        time.sleep(interval)
        heartbeatJSON = {
            "op": 1,
            "d": "null"
        }
        send_json_request(ws, heartbeatJSON)
        print("Looking for information.")

ws = websocket.WebSocket()
ws.connect("wss://gateway.discord.gg/?v=6&encoding=json")
event = receive_json_response(ws)
heartbeat_interval = event['d']['heartbeat_interval'] / 1000
threading.Thread(target=heartbeat, args=(heartbeat_interval, ws), daemon=True).start()

token = "DISCORD_TOKEN"
payload = {
    "op": 2,
    "d": {
        "token": token,
        "properties": {
            "$os": 'windows',
            '$browser': 'chrome',
            '$device': 'pc'
        }
    }
}
send_json_request(ws, payload)

while True:
    event = receive_json_response(ws)
    try:
        print(f"{event['d']['author']['username']}: {event['d']['content']}")
        op_code = event['op']
        if op_code == 11:
            print('heartbeat received')
    except:
        pass
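For the filtering part, gateway MESSAGE_CREATE dispatch events carry guild_id and channel_id fields, so the receive loop can match on those before doing anything else. A minimal sketch; the IDs below are placeholders, not real values:
# Hypothetical IDs -- replace with server A's guild ID and its channel IDs.
TARGET_GUILD_ID = "000000000000000000"
TARGET_CHANNEL_IDS = {"111111111111111111", "222222222222222222"}

while True:
    event = receive_json_response(ws)
    if not event:
        continue
    if event.get('op') == 11:
        print('heartbeat received')
        continue
    # Only handle message-creation dispatches from the watched guild/channels.
    if event.get('t') == 'MESSAGE_CREATE':
        d = event['d']
        if d.get('guild_id') == TARGET_GUILD_ID and d.get('channel_id') in TARGET_CHANNEL_IDS:
            print(f"{d['author']['username']}: {d['content']}")
            # forwarding to server B would go here (e.g. a webhook post)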
I am taking a course in Solidity/Python, where I have encountered an error which it doesn't seem I can solve myself.
from solcx import compile_standard, install_solc  # <- import the install_solc method!
import json
from web3 import Web3
import os
from dotenv import load_dotenv

load_dotenv()
install_solc("0.6.0")  # <- Add this line and run it

with open("SimpleStorage.sol", "r") as file:
    simple_storage_file = file.read()

compiled_sol = compile_standard(
    {
        "language": "Solidity",
        "sources": {"simpleStorage.sol": {"content": simple_storage_file}},
        "settings": {
            "outputSelection": {
                "*": {"*": ["abi", "metadata", "evm.bytecode", "evm.sourceMap"]}
            }
        },
    },
    solc_version="0.6.0",
)

with open("compiled_code.json", "w") as file:
    json.dump(compiled_sol, file)

# get bytecode
bytecode = compiled_sol["contracts"]["simpleStorage.sol"]["SimpleStorage"]["evm"]["bytecode"]["object"]
# get abi
abi = compiled_sol["contracts"]["simpleStorage.sol"]["SimpleStorage"]["abi"]

# HTTP provider - connecting to ganache
w3 = Web3(Web3.HTTPProvider("http://XXXXXXX"))
# Network ID - Blockchain ID ganache
chain_id = XXXX
# Address from ganache
my_address = "XXX"
private_key = os.getenv("my_private_key_1")
# From os env: my_private_key = os.getenv("Private_key_test_1")

# Create the contract in python
SimpleStorage = w3.eth.contract(abi=abi, bytecode=bytecode)
# Get the latest transaction
nonce = w3.eth.getTransactionCount(my_address)

# 1. Build a transaction
# 2. Sign a transaction
# 3. Send a transaction
transaction = SimpleStorage.constructor().buildTransaction(
    {
        "gasPrice": w3.eth.gas_price,
        "chainID": chain_id,
        "from": my_address,
        "nonce": nonce,
    }
)
signed_txn = w3.eth.account.sign_transaction(transaction, private_key=private_key)
print(signed_txn)
I have the following error:
TypeError("Transaction must not include unrecognized fields: %r" % superfluous_keys)
TypeError: Transaction must not include unrecognized fields: {'chainID'}
PS C:\Users\VSCode-win32-x64-1.40.1\Project\demos\web3_py_simple_storage>
Any help would be much appreciated...
If you look at the readthedocs.io page for the buildTransaction function, it shows that the named parameter in the transaction dictionary is "chainId" and not "chainID". That's all; dictionary keys are case-sensitive in that respect. Good luck with your class.
https://web3py.readthedocs.io/en/stable/contracts.html
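Concretely, only the key name in the transaction dict needs to change; the rest of the question's code can stay as it is:
transaction = SimpleStorage.constructor().buildTransaction(
    {
        "gasPrice": w3.eth.gas_price,
        "chainId": chain_id,  # lowercase "d": the field name web3.py recognizes
        "from": my_address,
        "nonce": nonce,
    }
)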
So basically, I'm trying to make a command where the user enters an argument, in this case the name of a state. Once the user has entered an argument, the client looks inside the votes.json file and finds each dict whose state matches the argument. Once finished, it puts the results in a Discord embed and sends it to the user. The problem is that the code works, but it sends a separate embed for each dict the client found. I want it to send all the dicts in the same embed.
main.py
@client.command(name='search')
async def search(context, *, user):
    with open('votes.json', 'r') as f:
        data = json.load(f)
    for officials in data['officials']:
        o_name = officials["name"]
        o_state = officials["state"]
        if o_state == user:
            myEmbed = discord.Embed(color=0xd4af37)
            myEmbed.add_field(name=o_state, value="".join(o_name))
            await context.send(embed=myEmbed)
votes.json
{
  "officials": [
    {
      "name": "RestiveSole267",
      "state": "Anchorage"
    },
    {
      "name": "Avia_JP",
      "state": "Anchorage"
    },
    {
      "name": "BillBobj",
      "state": "Anchorage"
    }
  ]
}
output:
1st Embed:
RestiveSole267
2nd Embed:
Avia_JP
3rd Embed:
BillBobj
This is an example of how to accomplish your goal.
x = {
    "officials": [
        {
            'BuddyBob': 10,
            'Guy': True
        },
        {
            'BuddyGirl': 13,
            'Guy': False
        },
        {
            'BuddyBobo': 15,
            'Guy': True
        }
    ]
}

for official in x['officials']:
    if official["Guy"] == True:
        print(official)
output:
{'BuddyBob': 10, 'Guy': True}
{'BuddyBobo': 15, 'Guy': True}
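Applied to the original command, the same filtering idea means collecting all matching names first and sending a single embed at the end. A minimal sketch, assuming the discord.py setup and votes.json layout from the question:
@client.command(name='search')
async def search(context, *, user):
    with open('votes.json', 'r') as f:
        data = json.load(f)
    # Collect every matching official's name instead of sending one embed per match.
    names = [o["name"] for o in data['officials'] if o["state"] == user]
    if names:
        embed = discord.Embed(color=0xd4af37)
        embed.add_field(name=user, value="\n".join(names))
        await context.send(embed=embed)  # one embed containing all matches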
I'm trying to post events to Google Analytics. It works fine when I do it using the NodeJS code below, but it fails when I use the Python code below. Both return an HTTP 200, and even when posting to the debug URL (https://www.google-analytics.com/debug/collect) Google Analytics returns success details in both cases (see "valid": true in the response below). The problem is that when posting from NodeJS the result shows up on the GA website; when posting from Python it never shows up. I compared the requests for both and have not been able to spot a difference.
{
  "hitParsingResult": [ {
    "valid": true,
    "parserMessage": [ ],
    "hit": "/debug/collect?v=1\u0026t=event\u0026tid=XXXXXXX\u0026cid=YYYYYY\u0026ec=Slack\u0026ea=SlashCommand\u0026el=whowasat-curl\u0026an=staging.Whereis-Everybody?\u0026aid=staging.whereis-everybody.com"
  } ],
  "parserMessage": [ {
    "messageType": "INFO",
    "description": "Found 1 hit in the request."
  } ]
}
The NodeJS code is (result does show up in Google Analytics):
'use strict';
var request = require('request');
require('request-debug')(request);

function postEventToGA(category, action, label) {
    var options = {
        v: '1',
        t: 'event',
        tid: process.env.GOOGLEANALYTICS_TID,
        cid: process.env.GOOGLEANALYTICS_CID,
        ec: category,
        ea: action,
        el: label,
        an: process.env.STAGE_INFIX + "appname",
        aid: process.env.STAGE_INFIX + "appname"
    };
    console.log("payload: " + JSON.stringify(options));
    request.post({ url: 'https://www.google-analytics.com/collect', form: options }, function (err, response, body) {
        console.log(request);
        if (err) {
            console.log("Failed to post event to Google Analytics, error: " + err);
        } else {
            if (200 != response.statusCode) {
                console.log("Failed to post event to Google Analytics, response code: " + response.statusCode + " error: " + err);
            }
        }
    });
}

postEventToGA("some-category", "some-action", "some-label")
And the Python code is (result does not show up in Google Analytics):
import json
import logging
import os
import requests

LOGGER = logging.getLogger()
LOGGER.setLevel(logging.INFO)

GOOGLEANALYTICS_TID = os.environ["GOOGLEANALYTICS_TID"]
GOOGLEANALYTICS_CID = os.environ["GOOGLEANALYTICS_CID"]
STAGE_INFIX = os.environ["STAGE_INFIX"]

def post_event(category, action, label):
    payload = {
        "v": "1",
        "t": "event",
        "tid": GOOGLEANALYTICS_TID,
        "cid": GOOGLEANALYTICS_CID,
        "ec": category,
        "ea": action,
        "el": label,
        "an": STAGE_INFIX + "appname",
        "aid": STAGE_INFIX + "appname",
    }
    response = requests.post("https://www.google-analytics.com/collect", payload)
    print(response.request.method)
    print(response.request.path_url)
    print(response.request.url)
    print(response.request.body)
    print(response.request.headers)
    print(response.status_code)
    print(response.text)
    if response.status_code != 200:
        LOGGER.warning(
            "Got non 200 response code (%s) while posting to GA.", response.status_code
        )

post_event("some-category", "some-action", "some-label")
Any idea why the NodeJS post shows up in Google Analytics and the Python post does not, while both return an HTTP 200?
Did some more testing and discovered that the User-Agent HTTP header was causing the problem. When I set it to an empty string in the Python code, it works. Like this:
headers = {"User-Agent": ""}
response = requests.post(
    "https://www.google-analytics.com/collect", payload, headers=headers
)
The documentation at https://developers.google.com/analytics/devguides/collection/protocol/v1/reference does state that the user agent is used, but does not clearly state what the requirements are. "python-requests/2.22.0" (the default value set by the python-requests library) is apparently not accepted.
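Presumably a browser-style User-Agent would also be accepted, since that is what GA normally sees; this is an untested assumption, but it would look like:
# Untested assumption: a browser-like UA instead of the empty string.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
response = requests.post(
    "https://www.google-analytics.com/collect", payload, headers=headers
)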
I use AWS Step Functions and have the following workflow:
initStep - It's a Lambda function handler that gets some data and sends it to SQS for an external service.
import os
import json
import boto3
from datetime import datetime

activity = os.getenv('ACTIVITY')
queue_name = os.getenv('QUEUE_NAME')

def lambda_handler(event, context):
    event['my_activity'] = activity
    data = json.dumps(event)

    # Retrieving a queue by its name
    sqs = boto3.resource('sqs')
    queue = sqs.get_queue_by_name(QueueName=queue_name)
    queue.send_message(MessageBody=data, MessageGroupId='messageGroup1' + str(datetime.time(datetime.now())))

    return event
validationWaiting - It's an activity that waits for an answer from the external service that includes the data.
complete - It's a Lambda function handler that uses the data from the initStep.
import os
import boto3

def lambda_handler(event, context):
    email = event['email'] if 'email' in event else None
    data = event['data'] if 'data' in event else None

    client = boto3.client(service_name='ses')
    to = email.split(', ')
    message_container = {'Subject': {'Data': 'Email from step functions'},
                         'Body': {'Html': {
                             'Charset': "UTF-8",
                             'Data': """<html><body>
                             <p>""" + data + """</p>
                             </body> </html> """
                         }}}
    destination = {'ToAddresses': to,
                   'CcAddresses': [],
                   'BccAddresses': []}
    # hypothetical: the sender address must be configured somewhere
    from_addresses = os.getenv('FROM_EMAIL')
    return client.send_email(Source=from_addresses,
                             Destination=destination,
                             Message=message_container)
It does work, but the problem is that I'm sending the full data from the initStep to the external service, just to pass it later to complete. Potentially more steps can be added.
I believe it would be better to share it as some sort of global data (of the current Step Function execution); that way I could add or remove steps and the data would still be available to all of them.
You can make use of InputPath and ResultPath. In initStep you would send only the necessary data to the external service (probably along with some unique identifier of the Execution). In the ValidationWaiting step you can set the following properties (in the State Machine definition):
InputPath: What data will be provided to GetActivityTask. You probably want to set it to something like $.execution_unique_id, where execution_unique_id is a field in your data that the external service uses to identify the Execution (to match it with the specific request made during initStep).
ResultPath: Where the output of the ValidationWaiting Activity will be saved in the data. You can set it to $.validation_output and the JSON result from the external service will be present there.
This way you can send the external service only the data it actually needs, and you won't lose access to any data that was previously (before the ValidationWaiting step) in the input.
For example, you could have the following definition of the State Machine:
{
  "StartAt": "initStep",
  "States": {
    "initStep": {
      "Type": "Pass",
      "Result": {
        "executionId": "some:special:id",
        "data": {},
        "someOtherData": {"value": "key"}
      },
      "Next": "ValidationWaiting"
    },
    "ValidationWaiting": {
      "Type": "Pass",
      "InputPath": "$.executionId",
      "ResultPath": "$.validationOutput",
      "Result": {
        "validationMessages": ["a", "b"]
      },
      "Next": "Complete"
    },
    "Complete": {
      "Type": "Pass",
      "End": true
    }
  }
}
I've used Pass states for initStep and ValidationWaiting to simplify the example (I haven't run it, but it should work). The Result field is specific to the Pass state and is equivalent to the result of your Lambda function or Activity.
In this scenario the Complete step would get the following input:
{
  "executionId": "some:special:id",
  "data": {},
  "someOtherData": {"value": "key"},
  "validationOutput": {
    "validationMessages": ["a", "b"]
  }
}
So the result of the ValidationWaiting step has been saved into the validationOutput field.
Based on the answer of Marcin Sucharski, I came up with my own solution.
I needed to use Type: Task since initStep is a Lambda that sends to SQS.
I didn't need InputPath in ValidationWaiting, only ResultPath, which stores the data received by the activity.
I work with the Serverless framework; here is my final solution:
StartAt: initStep
States:
  initStep:
    Type: Task
    Resource: arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:init-step
    Next: ValidationWaiting
  ValidationWaiting:
    Type: Task
    ResultPath: $.validationOutput
    Resource: arn:aws:states:#{AWS::Region}:#{AWS::AccountId}:activity:validationActivity
    Next: Complete
    Catch:
      - ErrorEquals:
          - States.ALL
        ResultPath: $.validationOutput
        Next: Complete
  Complete:
    Type: Task
    Resource: arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:complete-step
    End: true
Here is a short and simple solution with InputPath and ResultPath. My Lambda Check_Ubuntu_Updates returns a list of instances ready to be updated. This list of instances is received by the Notify_Results step, which then uses that data. Remember that if you have several ResultPath entries in your Step Function and you need more than one input in a step, you can use InputPath only with $.
{
  "Comment": "A state machine that checks for available system updates.",
  "StartAt": "Check_Ubuntu_Updates",
  "States": {
    "Check_Ubuntu_Updates": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:#############:function:Check_Ubuntu_Updates",
      "ResultPath": "$.instances",
      "Next": "Notify_Results"
    },
    "Notify_Results": {
      "Type": "Task",
      "InputPath": "$.instances",
      "Resource": "arn:aws:lambda:us-east-1:#############:function:Notify_Results",
      "End": true
    }
  }
}