I have created a Cloud Function that, when triggered, is supposed to create a VM instance and run a Python script.
However, the VM is not being created.
I can see the following in the Cloud Functions log relating to my deployment:
resource.type = "cloud_function"
resource.labels.region = "europe-west2"
severity>=DEFAULT
severity=DEBUG
...
However, for the life of me, I can't see where to go to actually view the error itself.
I then Googled and found the following thread about an issue where Cloud Functions is not showing any logs.
Thinking it might be the same issue, I added the recommended environment variables to my deployment, but I still can't find the error anywhere in the logs.
Can anyone point me in the right direction?
Here is my cloud function code as well:
import os

from googleapiclient import discovery
from google.oauth2 import service_account

scopes = ["https://www.googleapis.com/auth/cloud-platform"]
sa_file = "key.json"
zone = "europe-west2-c"
project_id = "<<proj id>>"  # Project ID, not Project Name

credentials = service_account.Credentials.from_service_account_file(
    sa_file, scopes=scopes
)

# Create the Cloud Compute Engine service object
service = discovery.build("compute", "v1", credentials=credentials)


def create_instance(compute, project, zone, name):
    # Get the latest Debian 9 image.
    image_response = (
        compute.images()
        .getFromFamily(project="debian-cloud", family="debian-9")
        .execute()
    )
    source_disk_image = image_response["selfLink"]

    # Configure the machine
    machine_type = "zones/%s/machineTypes/n1-standard-1" % zone

    config = {
        "name": name,
        "machineType": machine_type,
        # Specify the boot disk and the image to use as a source.
        "disks": [
            {
                "kind": "compute#attachedDisk",
                "type": "PERSISTENT",
                "boot": True,
                "mode": "READ_WRITE",
                "autoDelete": True,
                "deviceName": "instance-1",
                "initializeParams": {
                    "sourceImage": "projects/my_account/global/images/instance-image3",
                    "diskType": "projects/my_account/zones/europe-west2-c/diskTypes/pd-standard",
                    "diskSizeGb": "10",
                },
                "diskEncryptionKey": {},
            }
        ],
        "metadata": {
            "kind": "compute#metadata",
            "items": [
                {
                    "key": "startup-script",
                    "value": "sudo apt-get -y install python3-pip\npip3 install -r /home/will_charles/requirements.txt\ncd /home/will_peebles/\npython3 /home/will_charles/main.py",
                }
            ],
        },
        "serviceAccounts": [
            {
                "email": "837516068454-compute@developer.gserviceaccount.com",
                "scopes": ["https://www.googleapis.com/auth/cloud-platform"],
            }
        ],
        "networkInterfaces": [
            {
                "network": "global/networks/default",
                "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
            }
        ],
        "tags": {"items": ["http-server", "https-server"]},
    }

    return compute.instances().insert(project=project, zone=zone, body=config).execute()


def run(data, context):
    create_instance(service, project_id, zone, "pagespeed-vm-4")
The reason you do not see anything in the Cloud Functions logs is that your code is executing but is not logging the results of the API calls.
Your code is succeeding in calling the API to create a Compute Engine instance. That does not mean the operation succeeded, only that the call itself did. The API returns an operation handle that you must poll later to check the status. You are not doing that, so your Cloud Function has no idea that the instance creation failed.
To see logs for the instance creation, go to Operations Logging -> VM Instance and select "All instance_id". If the API call to create an instance does not succeed, there will be no instance ID to select, so you have to select all instances and then find the logs related to the API call.
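As an illustration, here is a minimal sketch of how the function could poll the insert operation and surface the error, so that failures actually show up in the Cloud Functions logs (the helper name and sleep interval are my own, not part of the original code):

import time

def wait_for_zone_operation(compute, project, zone, operation_name):
    # Poll the zone operation until it reaches DONE, then surface any error.
    while True:
        result = (
            compute.zoneOperations()
            .get(project=project, zone=zone, operation=operation_name)
            .execute()
        )
        if result["status"] == "DONE":
            if "error" in result:
                # Raising (or logging) here is what makes the failure visible in the logs.
                raise RuntimeError(result["error"])
            return result
        time.sleep(1)

def run(data, context):
    operation = create_instance(service, project_id, zone, "pagespeed-vm-4")
    wait_for_zone_operation(service, project_id, zone, operation["name"])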
I am following the steps listed here, but for Python code:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-identity-based-connections-tutorial-2
The objective is to create a simple (hello world) function app that is triggered by an Azure Service Bus message queue using an identity-based connection. The function app works fine when the Service Bus is referenced via a connection string, but gives the error below when trying to connect via the function app's managed identity (using the specific configuration pattern __fullyQualifiedNamespace). The managed identity has been granted the Azure Service Bus Data Receiver role on the Service Bus namespace.
Microsoft.Azure.WebJobs.ServiceBus: Microsoft Azure WebJobs SDK ServiceBus connection string 'ServiceBusConnection__fullyQualifiedNamespace' is missing or empty.
Function code (autogenerated)
import logging
import azure.functions as func
def main(msg: func.ServiceBusMessage):
    logging.info('Python ServiceBus queue trigger processed message: %s',
                 msg.get_body().decode('utf-8'))
function.json (connection value modified based on ms docs)
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "msg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "erpdemoqueue",
      "connection": "ServiceBusConnection"
    }
  ]
}
host.json (version modified based on ms docs)
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.3.0, 4.0.0)"
  }
}
To use a managed identity, you'll need to add a setting that identifies the fully qualified namespace of your Service Bus instance.
For example, in your local.settings.json file for local development:
{
  "Values": {
    "<connection_name>__fullyQualifiedNamespace": "<service_bus_namespace>.servicebus.windows.net"
  }
}
Or in the application settings for your function when deployed to Azure:
<connection_name>__fullyQualifiedNamespace=<service_bus_namespace>.servicebus.windows.net
This is mentioned only briefly in the tutorial that you linked. The Microsoft.Azure.WebJobs.Extensions.ServiceBus documentation covers this a bit better in the Managed identity authentication section.
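In your case, since the binding in function.json uses "connection": "ServiceBusConnection", the application setting to add would look like this (the namespace value is a placeholder for your own):

ServiceBusConnection__fullyQualifiedNamespace=<service_bus_namespace>.servicebus.windows.net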
I want to create an EMR cluster on Amazon EMR, triggered via Airflow. The EMR cluster shows up in the Amazon EMR UI, but with an error saying:
"The VPC/subnet configuration was invalid: Subnet is required : The specified instance type m5.xlarge can only be used in a VPC"
Below are the code snippet and the config details (in JSON format) for this task, as used in the Airflow script.
My question is: how can I incorporate the VPC and subnet information (ID values) into the JSON, if that is even possible? There are no explicit examples out there.
Hint: a network and an EC2 subnet are already created.
JOB_FLOW_OVERRIDES = {
    "Name": "sentiment_analysis",
    "ReleaseLabel": "emr-5.33.0",
    "Applications": [{"Name": "Hadoop"}, {"Name": "Spark"}],  # We want our EMR cluster to have HDFS and Spark
    "Configurations": [
        {
            "Classification": "spark-env",
            "Configurations": [
                {
                    "Classification": "export",
                    "Properties": {"PYSPARK_PYTHON": "/usr/bin/python3"},  # by default EMR uses py2, change it to py3
                }
            ],
        }
    ],
    "Instances": {
        "InstanceGroups": [
            {
                "Name": "Master node",
                "Market": "SPOT",
                "InstanceRole": "MASTER",
                "InstanceType": "m5.xlarge",
                "InstanceCount": 1,
            },
            {
                "Name": "Core - 2",
                "Market": "SPOT",  # Spot instances are "use as available" instances
                "InstanceRole": "CORE",
                "InstanceType": "m5.xlarge",
                "InstanceCount": 2,
            },
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
        "TerminationProtected": False,  # this lets us programmatically terminate the cluster
    },
    "JobFlowRole": "EMR_EC2_DefaultRole",
    "ServiceRole": "EMR_DefaultRole",
}
create_emr_cluster = EmrCreateJobFlowOperator(
    task_id="create_emr_cluster",
    job_flow_overrides=JOB_FLOW_OVERRIDES,
    aws_conn_id="aws_default",
    emr_conn_id="emr_default",
    dag=dag,
)
EmrCreateJobFlowOperator calls create_job_flow from emr.py, which accepts the same parameters as the corresponding boto3 EMR client API.
Therefore you can add an "Ec2SubnetId" item, with your subnet ID as the value, inside the "Instances" dictionary.
This works for me on Apache Airflow 2.0.2.
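For example, the "Instances" section of JOB_FLOW_OVERRIDES could look like this (the subnet ID below is a placeholder; everything else is unchanged from the question):

"Instances": {
    "InstanceGroups": [
        # ... master and core instance groups exactly as in the question ...
    ],
    "Ec2SubnetId": "subnet-0123456789abcdef0",  # placeholder: use your own subnet ID
    "KeepJobFlowAliveWhenNoSteps": True,
    "TerminationProtected": False,
},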
I want to configure a diagnostic setting for an Azure database using Python. I know that I have to use the DiagnosticSettingsOperations class, the MonitorManagementClient client, and the create_or_update method to start. I am fairly new to Python development and am struggling to put the pieces together.
However, there are no proper examples of what parameters to pass to the DiagnosticSettingsOperations class.
Sample code:
from azure.mgmt.monitor import MonitorManagementClient
from azure.identity import ClientSecretCredential

####### FUNCTION TO CREATE AZURE AUTHENTICATION USING SERVICE PRINCIPAL #######
def authenticateToAzureUsingServicePrincipal():
    # Authenticate to Azure using Service Principal credentials
    client_id = 'client_id'
    client_secret = 'client_secret'
    client_tenant_id = 'client_tenant_id'

    # Create Azure credential object
    servicePrincipalCredentialObject = ClientSecretCredential(tenant_id=client_tenant_id, client_id=client_id, client_secret=client_secret)
    return servicePrincipalCredentialObject

azureCredential = authenticateToAzureUsingServicePrincipal()
monitorManagerClient = MonitorManagementClient(azureCredential)
I want to configure a diagnostic setting for an Azure SQL database that selects all metrics and logs by default and sends them to a Log Analytics workspace. Does anyone know how to proceed further?
The code looks like this:
#other code

monitorManagerClient = MonitorManagementClient(azureCredential)

# Creates or updates the diagnostic setting [put]
BODY = {
    "workspace_id": "the resource id of the log analytics workspace",
    "metrics": [
        {
            "category": "Basic",
            "enabled": True,
            "retention_policy": {
                "enabled": False,
                "days": "0"
            }
        }
        # other categories
    ],
    "logs": [
        {
            "category": "SQLInsights",
            "enabled": True,
            "retention_policy": {
                "enabled": False,
                "days": "0"
            }
        }
        # other categories
    ],
    # "log_analytics_destination_type": "Dedicated"
}
diagnostic_settings = monitorManagerClient.diagnostic_settings.create_or_update(RESOURCE_URI, INSIGHT_NAME, BODY)
There is an example on GitHub that you can take a look at. And if you want to select ALL metrics and logs, you should add them one by one to the metrics / logs lists in the BODY in the code above.
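For completeness, RESOURCE_URI is the full ARM resource ID of the SQL database and INSIGHT_NAME is simply the name you give the diagnostic setting. A rough sketch (the values below are placeholders, not from the original post):

RESOURCE_URI = (
    "/subscriptions/<subscription_id>/resourceGroups/<resource_group>"
    "/providers/Microsoft.Sql/servers/<server_name>/databases/<database_name>"
)
INSIGHT_NAME = "sendToLogAnalytics"  # placeholder name for the diagnostic setting

diagnostic_settings = monitorManagerClient.diagnostic_settings.create_or_update(
    RESOURCE_URI, INSIGHT_NAME, BODY
)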
I am trying to create a Cloud Function in Python that builds a VM instance using a custom image I previously created.
The source image is present in the Images section.
However, when I run the Cloud Function it can't find my image to build the instance and returns the following error:
Details: "Invalid value for field 'resource.disks[0].initializeParams.sourceImage': 'projects/<<project id>>/global/images/pandora-pagespeed-image'. The referenced image resource cannot be found.">
What is more, if I go to create a VM instance manually in the console, the image does not appear there either.
Stranger still, the image that does appear (pandora-image) is an old image that was deleted yesterday.
What might be going on here?
My Cloud Function code is:
import os

from googleapiclient import discovery
from google.oauth2 import service_account

scopes = ['https://www.googleapis.com/auth/cloud-platform']
sa_file = <<credentials file>>
zone = 'europe-west2-c'
project_id = <<project id>>  # Project ID, not Project Name

credentials = service_account.Credentials.from_service_account_file(sa_file, scopes=scopes)

# Create the Cloud Compute Engine service object
service = discovery.build('compute', 'v1', credentials=credentials)


def create_instance(compute, project, zone, name):
    # Get the latest Debian 9 image.
    image_response = (
        compute.images()
        .getFromFamily(project="debian-cloud", family="debian-9")
        .execute()
    )
    source_disk_image = image_response["selfLink"]

    # Configure the machine
    machine_type = "zones/%s/machineTypes/n1-standard-1" % zone

    config = {
        "name": name,
        "machineType": machine_type,
        # Specify the boot disk and the image to use as a source.
        "disks": [
            {
                "kind": "compute#attachedDisk",
                "type": "PERSISTENT",
                "boot": True,
                "mode": "READ_WRITE",
                "autoDelete": True,
                "deviceName": "instance-1",
                "initializeParams": {
                    "sourceImage": "projects/<<project id>>/global/images/pandora-pagespeed-image",
                    "diskType": "projects/<<project id>>/zones/us-central1-a/diskTypes/pd-standard",
                    "diskSizeGb": "10",
                },
                "diskEncryptionKey": {},
            }
        ],
        "metadata": {
            "kind": "compute#metadata",
            "items": [
                {
                    "key": "startup-script",
                    "value": "sudo apt-get -y install python3-pip\npip3 install -r /home/tommyp/pandora/requirements.txt\ncd /home/tommyp/pandora\npython3 /home/tommyp/pandora/main.py"
                }
            ]
        },
        "networkInterfaces": [
            {
                "network": "global/networks/default",
                "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
            }
        ],
        "tags": {"items": ["http-server", "https-server"]},
    }

    return compute.instances().insert(project=project, zone=zone, body=config).execute()


def run(data, context):
    create_instance(service, project_id, zone, "vm-instance")
I have realized now what I did wrong.
I created my image in the 'machine images' section, not the 'images' section. Hence why my image could not be found...DOH!
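For anyone hitting the same thing: the sourceImage URL in initializeParams must point to an image in the Images section, not a machine image. If you only have a boot disk, you can create such an image via the same Compute API. A rough sketch (the source disk name below is a placeholder):

image_body = {
    "name": "pandora-pagespeed-image",  # the image name referenced in sourceImage
    "sourceDisk": "zones/europe-west2-c/disks/<source-disk-name>",  # placeholder disk
}
service.images().insert(project=project_id, body=image_body).execute()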
I need to create an Azure Automation account, and I want to create a runbook under that Automation account for auto-scheduling VMs.
These are the steps I followed for creating the Azure Automation account.
First, I create a cloud service using the API:
https://management.core.windows.net/sdjgsdgj-abcd-2323-98cd-3bd6bcf93702/cloudServices/cloudsername
Next, I create the Azure Automation account under the cloud service created above, using this API:
https://management.core.windows.net/sdjgsdgj-abcd-2323-98cd-3bd6bcf93702/cloudServices/cloudsername/resources/automation/AutomationAccount/testacc2?resourceType=AutomationAccount&detailLevel=Full&resourceProviderNamespace=automation
After that, I want to create a runbook under the created Automation account. For this I am using the API below in Python:
import adal
import requests
import json

token_response = adal.acquire_token_with_username_password(
    'https://login.windows.net/rapiddirectory.onmicrosoft.com',
    'test@xyz.onmicrosoft.com',
    'abcd'
)

access_token = token_response.get('accessToken')

create_run_draft = 'https://management.core.windows.net/sdjgsdgj-abcd-2323-98cd-3bd6bcf93702/cloudServices/cloudsername/resources/automation/~/automationAccounts/testacc2/runbooks/write-helloworld/draft?api-version=2014-12-08'

param3 = {
    "tags": {
        "Testing": "show value",
        "Source": "TechNet Script Center"
    },
    "properties": {
        "description": "Hello world",
        "runbookType": "Script",
        "logProgress": "false",
        "logVerbose": "false",
        "draft": {
            "draftContentLink": {
                "uri": "https://gallery.technet.microsoft.com/scriptcenter/The-Hello-World-of-Windows-81b69574/file/111354/1/Write-HelloWorld.ps1",
                "contentVersion": "1.0.0.0",
                "contentHash": {
                    "algorithm": "sha256",
                    "value": "EqdfsYoVzERQZ3l69N55y1RcYDwkib2+2X+aGUSdr4Q="
                }
            }
        }
    }
}

headers2 = {'x-ms-version': '2013-06-01', 'Content-Type': 'application/json', "Authorization": 'Bearer ' + access_token}

output = requests.put(create_run_draft, headers=headers2, data=param3).text
print(output)
I am using Python to call the Azure REST API.
I am getting the error below:
<Error xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"><Code>InternalError</Code><Message>The server encountered an internal error. Please retry the request.</Message></Error>
Please help me out with this problem; I am struggling with this error.
It could be because you are passing the values of logProgress and logVerbose as strings ("false") instead of as booleans (false).
This worked for me:
Create runbook:
PUT https://management.core.windows.net/90751b51-7cb6-4480-8dbd-e199395b296f/cloudservices/OaaSCS/resources/automation/~/automationAccounts/JoeAutomationAccount/runbooks/testabc?api-version=2014-12-08
Request body:
{
    "properties": {
        "logVerbose": false,
        "logProgress": false,
        "runbookType": "Script",
        "draft": {
            "inEdit": false,
            "creationTime": "0001-01-01T00:00:00+00:00",
            "lastModifiedTime": "0001-01-01T00:00:00+00:00"
        }
    },
    "name": "testabc"
}
Upload draft content:
PUT https://management.core.windows.net/90751b51-7cb6-4480-8dbd-e199395b296f/cloudservices/OaaSCS/resources/automation/~/automationAccounts/JoeAutomationAccount/runbooks/testabc/draft/content?api-version=2014-12-08
Request body:
workflow testabc {
"hello"
}
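If you want to send the corrected body from your Python script, a rough sketch might look like the following (it reuses the URL pattern and access token from the question; the key point is serializing the body with json.dumps so the booleans are sent as real JSON booleans rather than strings):

create_runbook_url = ('https://management.core.windows.net/sdjgsdgj-abcd-2323-98cd-3bd6bcf93702'
                      '/cloudServices/cloudsername/resources/automation/~/automationAccounts/testacc2'
                      '/runbooks/testabc?api-version=2014-12-08')

body = {
    "properties": {
        "logVerbose": False,   # Python booleans -> JSON booleans
        "logProgress": False,
        "runbookType": "Script",
        "draft": {
            "inEdit": False,
            "creationTime": "0001-01-01T00:00:00+00:00",
            "lastModifiedTime": "0001-01-01T00:00:00+00:00"
        }
    },
    "name": "testabc"
}

headers = {'x-ms-version': '2013-06-01',
           'Content-Type': 'application/json',
           'Authorization': 'Bearer ' + access_token}

response = requests.put(create_runbook_url, headers=headers, data=json.dumps(body))
print(response.status_code, response.text)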