I'm trying to create a new AutomationAccount using the Python SDK. There's no problem if I get, list, update, or delete any account, but I'm getting a BadRequest error when I try to create a new one.
The documentation seems straightforward: AutomationAccountOperations Class > create_or_update()
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
from azure.identity import AzureCliCredential
from azure.mgmt.automation import AutomationClient
credential = AzureCliCredential()
automation_client = AutomationClient(credential, "xxxxx")
result = automation_client.automation_account.create_or_update("existing_rg", 'my_automation_account', {"location": "westeurope"})
print(f'Automation account {result.name} created')
This tiny script is throwing me this error:
Traceback (most recent call last):
File ".\deploy.py", line 10
result = automation_client.automation_account.create_or_update("*****", 'my_automation_account', {"location": "westeurope"})
File "C:\Users\Dave\.virtualenvs\new-azure-account-EfYek8IT\lib\site-packages\azure\mgmt\automation\operations\_automation_account_operations.py", line 174, in create_or_update
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
azure.core.exceptions.HttpResponseError: (BadRequest) {"Message":"The request body on Account must be present, and must specify, at a minimum, the required fields set to valid values."}
Code: BadRequest
Message: {"Message":"The request body on Account must be present, and must specify, at a minimum, the required fields set to valid values."}
I've tried the same method (create_or_update) with the same parameters in a different SDK (PowerShell) and it worked.
Any thoughts?
The solution is to set the Azure SKU parameter.
For some reason it is not required in PowerShell, but it is in the Python SDK. With that change, this snippet creates my AutomationAccount successfully:
from azure.identity import AzureCliCredential
from azure.mgmt.automation import AutomationClient

credential = AzureCliCredential()
automation_client = AutomationClient(credential, "xxxxx")
params = {"name": "my_automation_account", "location": "westeurope", "tags": {}, "sku": {"name": "free"}}
result = automation_client.automation_account.create_or_update("existing_rg", "my_automation_account", params)
print(f'Automation account {result.name} created')
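To make the difference between the failing and the working call explicit: the only change is the request body. A tiny helper (illustrative, not part of the SDK) captures the fields that made the call succeed, namely a location plus an explicit SKU:

```python
def build_automation_account_params(location, sku_name="free"):
    """Minimal body for automation_account.create_or_update.
    The Python SDK requires an explicit SKU; PowerShell fills in a
    default for you, which is why the same call worked there."""
    return {"location": location, "sku": {"name": sku_name}}

# Passed as the third argument:
# automation_client.automation_account.create_or_update(
#     "existing_rg", "my_automation_account",
#     build_automation_account_params("westeurope"))
```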
Docs about this:
AutomationAccountOperations Class > create_or_update
AutomationAccountCreateOrUpdateParameters Class
Sku Class
Thanks @UpQuark
I'm using Google's My Business API via Google's API Python Client Library.
Without further ado, here is a complete code example:
from dotenv import load_dotenv
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from os.path import exists
from pprint import pprint
import os
import pickle
load_dotenv()
API_DEVELOPER_KEY = os.getenv('API_DEVELOPER_KEY')
API_SCOPE = os.getenv('API_SCOPE')
STORED_CLIENT_CREDENTIALS = os.getenv('STORED_CLIENT_CREDENTIALS')
GOOGLE_APPLICATION_CREDENTIALS = os.getenv('GOOGLE_APPLICATION_CREDENTIALS')
def get_google_credentials(path=STORED_CLIENT_CREDENTIALS):
    '''Loads stored credentials. Gets and stores new credentials if necessary.'''
    if exists(path):
        with open(path, 'rb') as pickle_in:
            credentials = pickle.load(pickle_in)
    else:
        flow = InstalledAppFlow.from_client_secrets_file(
            GOOGLE_APPLICATION_CREDENTIALS, scopes=[API_SCOPE])
        flow.run_local_server()
        credentials = flow.credentials
        store_google_credentials(credentials)
    return credentials
def store_google_credentials(credentials, path=STORED_CLIENT_CREDENTIALS):
    '''Store credentials for future reuse to avoid authenticating every time.'''
    with open(path, 'wb') as pickle_out:
        pickle.dump(credentials, pickle_out)
def get_google_api_interface(credentials, service_name, service_version, service_discovery_url=None):
    '''Get a resource object with methods for interacting with Google's API.'''
    return build(service_name,
                 service_version,
                 credentials=credentials,
                 developerKey=API_DEVELOPER_KEY,
                 discoveryServiceUrl=service_discovery_url)
def extract_dict_key(dicts, key):
    '''Utility to extract a particular value from each dictionary in a list by key.'''
    return [d[key] for d in dicts]

def transform_list_to_string(items, separator=' '):
    return separator.join(map(str, items))
def get_google_account_names():
    '''Get a list of all account names (unique ids).'''
    google = get_google_api_interface(
        get_google_credentials(),
        service_name='mybusinessaccountmanagement',
        service_version='v1',
        service_discovery_url='https://mybusinessaccountmanagement.googleapis.com/$discovery/rest?version=v1')
    accounts = google.accounts().list().execute()
    return extract_dict_key(accounts['accounts'], 'name')
def get_google_store_reviews(account_name):
    '''Get all store reviews for a specific account from Google My Business.'''
    google = get_google_api_interface(
        get_google_credentials(),
        service_name='mybusiness',
        service_version='v4',
        service_discovery_url='https://mybusiness.googleapis.com/$discovery/rest?version=v4')
    return google.accounts().locations().batchGetReviews(account_name).execute()
account_names = get_google_account_names()
pprint(account_names)
first_account_name = account_names[0]
pprint(get_google_store_reviews(first_account_name))
And here is the contents of .env:
API_DEVELOPER_KEY = ********
API_SCOPE = https://www.googleapis.com/auth/business.manage
STORED_CLIENT_CREDENTIALS = secrets/credentials.pickle
GOOGLE_APPLICATION_CREDENTIALS = secrets/client_secrets.json
My function get_google_account_names() works fine and returns the expected data:
['accounts/******************020',
'accounts/******************098',
'accounts/******************872',
'accounts/******************021',
'accounts/******************112']
I have tested and validated get_google_credentials() to ensure that CLIENT_CREDENTIALS and API_DEVELOPER_KEY are indeed loaded correctly and working.
Also, in .env, I'm setting the environment variable GOOGLE_APPLICATION_CREDENTIALS to the client_secret.json path, as required by some methods in Google's Python Client Library.
My function get_google_store_reviews(), however, results in this error:
Traceback (most recent call last):
File "/my-project-dir/my-script.py", line 88, in <module>
pprint(get_google_store_reviews())
File "/my-project-dir/my-script.py", line 76, in get_google_store_reviews
google = get_google_api_interface(
File "/my-project-dir/my-script.py", line 46, in get_google_api_interface
return build(service_name,
File "/my-project-dir/.venv/lib/python3.9/site-packages/googleapiclient/_helpers.py", line 131, in positional_wrapper
return wrapped(*args, **kwargs)
File "/my-project-dir/.venv/lib/python3.9/site-packages/googleapiclient/discovery.py", line 324, in build
raise UnknownApiNameOrVersion("name: %s version: %s" % (serviceName, version))
googleapiclient.errors.UnknownApiNameOrVersion: name: mybusiness version: v4
I have also tried v1 of the Discovery Document with the same result.
Does anyone know what's going on here? It seems like the API mybusiness is not discoverable via the Discovery Document provided by Google, but I'm not sure how to verify my suspicion.
Note that this and this issue are related, but not exactly the same. The answers in those questions are old and don't seem to be applicable anymore after recent changes by Google.
Update:
As a commenter pointed out, this API appears to be deprecated. That might explain the issues I'm having, however, Google's documentation states:
"Deprecated indicates that the version of the API will continue to function […]"
Furthermore, notice that even though the top-level accounts.locations is marked as deprecated, some of the underlying methods (including batchGetReviews) are not.
See screenshot for more details:
This issue has also been reported in GitHub.
The batchGetReviews method expects a single account as the path parameter.
You should thus loop over get_google_account_names() and call .batchGetReviews(google_account) instead of .batchGetReviews(google_accounts).
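As a sketch of that loop, reusing the question's get_google_account_names and get_google_store_reviews (the aggregation helper itself is my own, hypothetical addition; the fetch function is injected so the pattern can be tested without network access):

```python
def get_all_store_reviews(account_names, fetch_reviews):
    """Call the per-account review fetcher once per account and
    collect the results, keyed by account name."""
    reviews_by_account = {}
    for account_name in account_names:
        # batchGetReviews expects a single account as its path parameter
        reviews_by_account[account_name] = fetch_reviews(account_name)
    return reviews_by_account

# Usage with the question's functions:
# all_reviews = get_all_store_reviews(get_google_account_names(),
#                                     get_google_store_reviews)
```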
I have a specific number of documents to delete from an Azure Search index, and I need a solution in Python for this.
I created an index in azure search already and the format of the index is given below
{
  "@odata.context": "https://{name}.search.windows.net/indexes('{index name}')/$metadata#docs(*)",
  "value": [
    {
      "@search.score": ,
      "content": "",
      "metadata_storage_name": "",
      "metadata_storage_path": "",
      "metadata_storage_file_extension": ""
    }
  ]
}
metadata_storage_path is the unique key for each document in azure search index.
I found two ways to go about the problem, using the Azure Python SDK and the Python requests module, but both methods throw an error, listed below.
Method 1 (using the Python requests module)
I got the reference from the Azure documentation:
https://learn.microsoft.com/en-us/rest/api/searchservice/addupdate-or-delete-documents
import json
import requests
api_key = "B346FEAB56E6D5*******"
headers = {
'api-key': f'{api_key}',
'Content-Type': 'application/json'
}
doc_idx = "Index name"
doc_url = f"https://{name}.search.windows.net/indexes/{doc_idx}-index/docs/search?api-version=2020-06-30-Preview"
payload = json.dumps({
    "@search.action": "delete",
    "key_field_name": {"metadata_storage_path": "aHR0cHM6Ly9mc2NvZ******"}
})
response = json.loads(requests.request("POST", doc_url, headers=headers, data=payload).text)
print(response)
I am getting the following error.
{'error': {'code': '',
'message': "The request is invalid. Details: parameters : The parameter 'key_field_name' in the request payload is not a valid parameter for the operation 'search'.\r\n"}}
I also tried manipulating the code but couldn't make it work. Please let me know whether I am making a mistake in the code or whether there is an issue with the Python requests module and Azure Search.
Method 2 (using the Azure Python SDK)
I got the reference from the Azure documentation:
https://learn.microsoft.com/en-us/python/api/azure-search-documents/azure.search.documents.searchclient?view=azure-python
I tried to delete one document in the Azure Search index with the Azure Python SDK; the code is given below.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents import SearchClient
key = AzureKeyCredential('B346FEAB******')
doc_idx = "index name"
service_endpoint = f"https://{name}.search.windows.net/indexes/{doc_idx}-index/docs/search?api-version=2020-06-30-Preview"
search_client = SearchClient(service_endpoint, doc_idx , key,)
# result = search_client.delete_documents(documents=[DOCUMENT])
result = search_client.delete_documents(documents=[{"metadata_storage_name": "XYZ.jpg"}])
print("deletion of document succeeded: {}".format(result[0].succeeded))
I am getting the following error.
ResourceNotFoundError                     Traceback (most recent call last)
<ipython-input-7-88beecc15663> in <module>
     13 # result = search_client.upload_documents(documents=[DOCUMENT])
     14
---> 15 result = search_client.delete_documents(documents=[{"metadata_storage_name": "XYZ.jpg"}])

ResourceNotFoundError: Operation returned an invalid status 'Not Found'
I also tried using metadata_storage_path instead of metadata_storage_name and I got the same error.
Please check the code and let me know where I am making a mistake, and also whether there is another method for deleting a specific document in an Azure Search index.
You have not defined the variable name, so
service_endpoint = f"https://{name}.search.windows.net/indexes/{doc_idx}-index/docs/search?api-version=2020-06-30-Preview"
Becomes
https://.search.windows.net/indexes/Index%20name-index/docs/search?api-version=2020-06-30-Preview
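Besides the undefined name, note that the endpoint passed to SearchClient should be only the service root; the SDK appends the index and docs paths (and the api-version) itself. A hedged sketch, with the service name, index name, admin key, and document key as placeholders:

```python
def build_search_endpoint(service_name):
    """SearchClient wants the service root only, not the full
    /indexes/.../docs/search URL from the REST examples."""
    return f"https://{service_name}.search.windows.net"

def delete_document_by_key(service_name, index_name, api_key, key_field, key_value):
    """Delete one document by the index's key field (metadata_storage_path
    in this index). Imports are local so the sketch can be read without
    the azure-search-documents package installed."""
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    client = SearchClient(build_search_endpoint(service_name),
                          index_name, AzureKeyCredential(api_key))
    return client.delete_documents(documents=[{key_field: key_value}])

# delete_document_by_key("<service>", "<index>", "<admin-key>",
#                        "metadata_storage_path", "aHR0cHM6Ly9mc2NvZ******")
```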
I am a novice in Python programming and am trying to create a blob container using Python. Even after following the documented steps, I see the error below.
Here is my code:
import os, uuid
from azure.storage.blob import BlobServiceClient,BlobClient,ContainerClient,__version__
class BlobSamples():

    print("Azure Blob Storage v" + __version__ + " - Python quickstart sample")
    connection_str = os.getenv('AZURE_STORAGE_CONNECTION_STRING')
    print("Connection established to Azure storage account from the Python App")

    #--Begin Blob Samples-----------------------------------------------------------------
    def create_container_sample(self):
        # Instantiate a new BlobServiceClient using a connection string
        blob_service_client = BlobServiceClient.from_connection_string(self.connection_str)
        # Instantiate a new ContainerClient
        container_client = blob_service_client.get_container_client("mycontainer")
        try:
            # Create new container in the service
            container_client.create_container()
            # List containers in the storage account
            list_response = blob_service_client.list_containers()
        except Exception as ex:
            print('Exception:')
            print(ex)

#main program
sample = BlobSamples()
sample.create_container_sample()
**Error:**
py ConnectionString.py
Azure Blob Storage v12.9.0 - Python quickstart sample
Connection established to Azure storage account from the Python App
Traceback (most recent call last):
File "C:\Technical docs\cloud computing\MS Azure\blob-quickstart-v12\menu-driven-strg-ops\ConnectionString.py", line 31, in <module>
    sample.create_container_sample()
File "C:\Technical docs\cloud computing\MS Azure\blob-quickstart-v12\menu-driven-strg-ops\ConnectionString.py", line 16, in create_container_sample
    blob_service_client = BlobServiceClient.from_connection_string(self.connection_str)
File "C:\Python-InstallPath\lib\site-packages\azure\storage\blob\_blob_service_client.py", line 174, in from_connection_string
    account_url, secondary, credential = parse_connection_str(conn_str, credential, 'blob')
File "C:\Python-InstallPath\lib\site-packages\azure\storage\blob\_shared\base_client.py", line 363, in parse_connection_str
    conn_str = conn_str.rstrip(";")
AttributeError: 'NoneType' object has no attribute 'rstrip'
I tried to reproduce the scenario on my system.
Please check whether you added the environment variables properly. Use
'URL' in os.environ to check whether the variable is present (True or False).
Add the environment variable in a command prompt:
set URL=https://pythonazurestorage12345.blob.core.windows.net
set
Try with this code
import os, uuid
from azure.storage.blob import BlobServiceClient,BlobClient,ContainerClient,__version__
print('URL' in os.environ)
connection_str = os.getenv("URL")
blob_service_client = BlobServiceClient.from_connection_string(connection_str)
# Instantiate a new ContainerClient
container_client = blob_service_client.get_container_client("testcontainers")
container_client.create_container()
OUTPUT
Successfully created container in Azure Portal
I see that you are trying to retrieve the connection_str with os.getenv. However, if connection_str is not an environment variable, this method returns None, which is probably the case here since your error states AttributeError: 'NoneType' object has no attribute 'rstrip'.
Adding the connection_str to your environment variables will probably solve your error. Alternatively, you can create an argument for connection_str in the create_container_sample() method and then pass the connection_str as a variable for the sake of testing your code.
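A small guard (my own suggestion, not from the quickstart) makes the failure mode obvious instead of surfacing as the opaque rstrip error deep inside the SDK:

```python
import os

def get_connection_string(env_var="AZURE_STORAGE_CONNECTION_STRING"):
    """Return the connection string, or fail fast with a clear message
    instead of passing None to BlobServiceClient.from_connection_string."""
    conn_str = os.getenv(env_var)
    if conn_str is None:
        raise RuntimeError(
            f"Environment variable {env_var} is not set; "
            "set it before constructing BlobServiceClient.")
    return conn_str

# blob_service_client = BlobServiceClient.from_connection_string(get_connection_string())
```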
I was having the same error with no solution until I read the documentation from Azure; they slightly changed something.
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python
setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"
After this you need to restart your editor, and everything will work. :)
I am trying to register a data set via the Azure Machine Learning Studio designer but keep getting an error. Here is my code, used in a "Execute Python Script" module:
import pandas as pd
from azureml.core.dataset import Dataset
from azureml.core import Workspace
def azureml_main(dataframe1 = None, dataframe2 = None):
    ws = Workspace.get(name = <my_workspace_name>, subscription_id = <my_id>, resource_group = <my_RG>)
    ds = Dataset.from_pandas_dataframe(dataframe1)
    ds.register(workspace = ws,
                name = "data set name",
                description = "example description",
                create_new_version = True)
    return dataframe1,
But I get the following error in the Workspace.get line:
Authentication Exception: Unknown error occurred during authentication. Error detail: Unexpected polling state code_expired.
Since I am inside the workspace and in the designer, I do not usually need to do any kind of authentication (or even reference the workspace). Can anybody offer some direction? Thanks!
When you're inside an "Execute Python Script" module or PythonScriptStep, the authentication for fetching the workspace is already done for you (unless you're trying to authenticate to a different Azure ML workspace):
from azureml.core import Run
run = Run.get_context()
ws = run.experiment.workspace
You should be able to use that ws object to register a Dataset.
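A minimal sketch of the pattern (the helper function is hypothetical; only the Run.get_context() / run.experiment.workspace chain comes from azureml-core):

```python
def get_workspace_from_run(run):
    """Inside a designer module the run context already carries an
    authenticated workspace, so no interactive login is needed."""
    return run.experiment.workspace

# Inside the "Execute Python Script" module you would write:
#   from azureml.core import Run
#   ws = get_workspace_from_run(Run.get_context())
# and then register the Dataset against ws as in the question.
```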
I am attempting to write a Google Cloud Function to set caps to disable usage above a certain limit. I followed the instructions here: https://cloud.google.com/billing/docs/how-to/notify#cap_disable_billing_to_stop_usage.
This is what my cloud function looks like (I am just copying and pasting from the Google Cloud docs page linked above):
import base64
import json
import os
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
PROJECT_ID = os.getenv('GCP_PROJECT')
PROJECT_NAME = f'projects/{PROJECT_ID}'
def stop_billing(data, context):
    pubsub_data = base64.b64decode(data['data']).decode('utf-8')
    pubsub_json = json.loads(pubsub_data)
    cost_amount = pubsub_json['costAmount']
    budget_amount = pubsub_json['budgetAmount']
    if cost_amount <= budget_amount:
        print(f'No action necessary. (Current cost: {cost_amount})')
        return

    billing = discovery.build(
        'cloudbilling',
        'v1',
        cache_discovery=False,
        credentials=GoogleCredentials.get_application_default()
    )

    projects = billing.projects()

    if __is_billing_enabled(PROJECT_NAME, projects):
        print(__disable_billing_for_project(PROJECT_NAME, projects))
    else:
        print('Billing already disabled')
def __is_billing_enabled(project_name, projects):
    """
    Determine whether billing is enabled for a project
    @param {string} project_name Name of project to check if billing is enabled
    @return {bool} Whether project has billing enabled or not
    """
    res = projects.getBillingInfo(name=project_name).execute()
    return res['billingEnabled']

def __disable_billing_for_project(project_name, projects):
    """
    Disable billing for a project by removing its billing account
    @param {string} project_name Name of project to disable billing on
    @return {string} Text containing response from disabling billing
    """
    body = {'billingAccountName': ''}  # Disable billing
    res = projects.updateBillingInfo(name=project_name, body=body).execute()
    print(f'Billing disabled: {json.dumps(res)}')
Also attaching screenshot of what it looks like on Google Cloud Function UI:
I'm also attaching a screenshot to show that I copied and pasted the relevant things to the requirements.txt file as well.
But when I go to test the code, it gives me an error:
{
insertId: "000000-69dce50a-e079-45ed-b949-a241c97fdfe4"
labels: {…}
logName: "projects/stanford-cs-231n/logs/cloudfunctions.googleapis.com%2Fcloud-functions"
receiveTimestamp: "2020-02-06T16:24:26.800908134Z"
resource: {…}
severity: "ERROR"
textPayload: "Traceback (most recent call last):
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 383, in run_background_function
_function_handler.invoke_user_function(event_object)
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 217, in invoke_user_function
return call_user_function(request_or_event)
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 214, in call_user_function
event_context.Context(**request_or_event.context))
File "/user_code/main.py", line 9, in stop_billing
pubsub_data = base64.b64decode(data['data']).decode('utf-8')
KeyError: 'data'
"
timestamp: "2020-02-06T16:24:25.411Z"
trace: "projects/stanford-cs-231n/traces/8e106d5ab629141d5d91b6b68fb30c82"
}
Any idea why?
Relevant Stack Overflow Post: https://stackoverflow.com/a/58673874/3507127
There seems to be an error in the code Google provided. I got it working when I changed the stop_billing function:
def stop_billing(data, context):
    if 'data' in data.keys():
        pubsub_data = base64.b64decode(data['data']).decode('utf-8')
        pubsub_json = json.loads(pubsub_data)
        cost_amount = pubsub_json['costAmount']
        budget_amount = pubsub_json['budgetAmount']
    else:
        cost_amount = data['costAmount']
        budget_amount = data['budgetAmount']

    if cost_amount <= budget_amount:
        print(f'No action necessary. (Current cost: {cost_amount})')
        return

    if PROJECT_ID is None:
        print('No project specified with environment variable')
        return

    billing = discovery.build('cloudbilling', 'v1', cache_discovery=False)

    projects = billing.projects()

    billing_enabled = __is_billing_enabled(PROJECT_NAME, projects)

    if billing_enabled:
        __disable_billing_for_project(PROJECT_NAME, projects)
    else:
        print('Billing already disabled')
The problem is that the Pub/Sub message provides its input as a JSON message with a 'data' entry that is base64-encoded. In the testing functionality you provide the JSON without a 'data' key and without encoding it. The function I rewrote above checks for both cases.
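Alternatively, you can build a realistic test message yourself, with the payload base64-encoded under the 'data' key the way Pub/Sub delivers it (the helper below is illustrative; the costAmount/budgetAmount field names come from the budget notification format):

```python
import base64
import json

def make_budget_event(cost_amount, budget_amount):
    """Wrap a budget notification the way Pub/Sub delivers it:
    a JSON payload, base64-encoded, under the 'data' key."""
    payload = json.dumps({"costAmount": cost_amount,
                          "budgetAmount": budget_amount})
    return {"data": base64.b64encode(payload.encode("utf-8")).decode("utf-8")}

event = make_budget_event(110.0, 100.0)
# stop_billing(event, context=None) would now take the base64 branch,
# even with Google's original, unmodified function.
```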