I am using the compute client to create a VM (using create_or_update) and I want the VM to have a standard HDD, not a premium SSD, as its OS disk. I should be able to specify that in the managed disk parameters, but when I do, the VM is still created with a premium SSD.
Here are my VM parameters.
vm_parameters = {
    'location': vm_location,
    'os_profile': {
        'computer_name': vm_name,
        'admin_username': vm_name,
        'admin_password': vm_password,
        'custom_data': startup_script
    },
    'hardware_profile': {
        'vm_size': 'Standard_B1ls'
    },
    'storage_profile': {
        'image_reference': {
            'publisher': 'Canonical',
            'offer': 'UbuntuServer',
            'sku': '16.04.0-LTS',
            'version': 'latest'
        },
        'os_disk': {
            'caching': 'None',
            'create_option': 'FromImage',
            'disk_size_gb': 30,
            'managed_disk_parameters': {
                'storage_account_type': 'Standard_LRS'
            }
        }
    },
    'network_profile': {
        'network_interfaces': [{
            'id': nic_info.id
        }]
    },
    'tags': {
        'expiration_date': 'expirationdatehere'
    }
}
Just specifying the storage account type as Standard_LRS isn't changing anything. What should I do to make my VM create with a standard HDD as its OS disk instead of a premium SSD?
According to my test, you are using the wrong parameter in vm_parameters. Change managed_disk_parameters to managed_disk. For more details, please refer to https://learn.microsoft.com/en-us/python/api/azure-mgmt-compute/azure.mgmt.compute.v2019_03_01.models.osdisk?view=azure-python.
For example:
import os
import traceback

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DiskCreateOption
from msrestazure.azure_exceptions import CloudError
from haikunator import Haikunator

haikunator = Haikunator()

AZURE_TENANT_ID = ''
AZURE_CLIENT_ID = ''
AZURE_CLIENT_SECRET = ''
AZURE_SUBSCRIPTION_ID = ''

credentials = ServicePrincipalCredentials(client_id=AZURE_CLIENT_ID, secret=AZURE_CLIENT_SECRET, tenant=AZURE_TENANT_ID)
resource_client = ResourceManagementClient(credentials, AZURE_SUBSCRIPTION_ID)
compute_client = ComputeManagementClient(credentials, AZURE_SUBSCRIPTION_ID)
network_client = NetworkManagementClient(credentials, AZURE_SUBSCRIPTION_ID)

GROUP_NAME = 'allenR'
LOCATION = 'eastus'

# Network
VNET_NAME = 'azure-sample-vnet'
SUBNET_NAME = 'azure-sample-subnet'

print('\nCreate Vnet')
async_vnet_creation = network_client.virtual_networks.create_or_update(
    GROUP_NAME,
    VNET_NAME,
    {
        'location': LOCATION,
        'address_space': {
            'address_prefixes': ['10.0.0.0/16']
        }
    }
)
async_vnet_creation.wait()

# Create Subnet
print('\nCreate Subnet')
async_subnet_creation = network_client.subnets.create_or_update(
    GROUP_NAME,
    VNET_NAME,
    SUBNET_NAME,
    {'address_prefix': '10.0.0.0/24'}
)
subnet_info = async_subnet_creation.result()

# Create NIC
print('\nCreate NIC')
async_nic_creation = network_client.network_interfaces.create_or_update(
    GROUP_NAME,
    'test19191',
    {
        'location': LOCATION,
        'ip_configurations': [{
            'name': 'test19191-ip',
            'subnet': {
                'id': subnet_info.id
            }
        }]
    }
)
nic = async_nic_creation.result()
print(nic.id)

vm_parameters = {
    'location': 'eastus',
    'os_profile': {
        'computer_name': 'jimtest120yue',
        'admin_username': 'jimtest',
        'admin_password': 'Password0123!',
        # 'custom_data': startup_script
    },
    'hardware_profile': {
        'vm_size': 'Standard_B1ls'
    },
    'storage_profile': {
        'image_reference': {
            'publisher': 'Canonical',
            'offer': 'UbuntuServer',
            'sku': '16.04.0-LTS',
            'version': 'latest'
        },
        'os_disk': {
            'caching': 'ReadWrite',
            'name': 'jimtest120yue_disk',
            'create_option': 'FromImage',
            'disk_size_gb': 30,
            'os_type': 'Linux',
            'managed_disk': {
                'storage_account_type': 'Standard_LRS'
            }
        }
    },
    'network_profile': {
        'network_interfaces': [{
            'id': nic.id
        }]
    },
    'tags': {
        'expiration_date': 'expirationdatehere'
    }
}

async_vm_creation = compute_client.virtual_machines.create_or_update('allenR', 'jimtest120yue', vm_parameters)
print(async_vm_creation.result())

disk = compute_client.disks.get('allenR', 'jimtest120yue_disk')
print(disk.sku)
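If the managed_disk setting takes effect, the final disks.get call should report a SKU of Standard_LRS for the OS disk rather than Premium_LRS.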
If you are using the REST API to create the VM, then here is a sample JSON request for creating the VM:
{
  "location": "westus",
  "properties": {
    "hardwareProfile": {
      "vmSize": "Standard_D1_v2"
    },
    "storageProfile": {
      "imageReference": {
        "id": "/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/images/{existing-custom-image-name}"
      },
      "osDisk": {
        "caching": "ReadWrite",
        "managedDisk": {
          "storageAccountType": "Standard_LRS"
        },
        "name": "myVMosdisk",
        "createOption": "FromImage"
      }
    },
    "osProfile": {
      "adminUsername": "{your-username}",
      "computerName": "myVM",
      "adminPassword": "{your-password}"
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/{existing-nic-name}",
          "properties": {
            "primary": true
          }
        }
      ]
    }
  }
}
Here is the corresponding REST API call:
PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2019-03-01
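If you prefer to call this REST endpoint directly from Python, a minimal sketch could look like the following. This is an illustration only, not part of the original answer: access_token is assumed to be a valid ARM bearer token, and vm_body is assumed to hold the JSON request body shown above as a Python dict.

import requests
import json

# Assumptions: access_token is a valid Azure Resource Manager bearer token and
# vm_body is the JSON request body shown above, loaded as a Python dict.
subscription_id = '{subscription-id}'   # placeholder
resource_group = 'myResourceGroup'
vm_name = 'myVM'

url = (
    'https://management.azure.com/subscriptions/{}/resourceGroups/{}/'
    'providers/Microsoft.Compute/virtualMachines/{}?api-version=2019-03-01'
).format(subscription_id, resource_group, vm_name)

response = requests.put(
    url,
    headers={
        'Authorization': 'Bearer ' + access_token,
        'Content-Type': 'application/json',
    },
    data=json.dumps(vm_body),
)
print(response.status_code, response.json())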
If you are looking for a full example of creating a virtual machine, you can follow the code sample below:
https://github.com/Azure-Samples/Hybrid-Compute-Python-Manage-VM/blob/master/example.py
Hope it helps.
Related
We have written a Python script to create the Cloud Function; the trigger is HTTPS. We need to invoke the function and fetch its output, so for that we are using environment variables, but somehow the value is not getting stored.
def generate_config(context):
    """ Entry point for the deployment resources. """
    name = context.properties.get('name', context.env['name'])
    project_id = context.properties.get('project', context.env['project'])
    region = context.properties['region']

    resources = []
    resources.append(
        {
            'name': 'createfunction',
            'type': 'gcp-types/cloudfunctions-v1:projects.locations.functions',
            'properties':
                {
                    'function': "licensevalidation",
                    'parent': 'projects//locations/',
                    'sourceArchiveUrl': 'gs://path',
                    'entryPoint': 'handler',
                    'httpsTrigger': {"url": "https://.cloudfunctions.net/licensevalidation", "securityLevel": "SECURE_ALWAYS"},
                    'timeout': '60s',
                    'serviceAccountEmail': '.iam.gserviceaccount.com',
                    'availableMemoryMb': 256,
                    'runtime': 'python37',
                    'environmentvaiable' :
                }
        }
    )
    call = {
        'type': 'gcp-types/cloudfunctions-v1:cloudfunctions.projects.locations.functions.call',
        'name': 'call',
        'properties':
            {
                'name': '/licensevalidation',
                'data': '{""}'
            },
        'metadata': {
            'dependsOn': ['createfunction']
        }
    }
    resources.append(call)
    return {
        'resources': resources,
        'outputs':
            [
                {
                    'name': 'installationtoken',
                    'value': 'os.environ.get(environment_variable)'
                },
            ]
    }
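Purely as a hedged side note (not part of the original template): in the Cloud Functions v1 API, runtime environment variables are set through an environmentVariables map on the function resource, so in a Deployment Manager properties block that would typically look something like the sketch below, where the key and value are hypothetical placeholders.

# Hypothetical sketch only: 'environmentVariables' is the Cloud Functions v1
# field for runtime environment variables; the key/value pair is a placeholder.
function_properties = {
    'function': 'licensevalidation',
    'runtime': 'python37',
    'entryPoint': 'handler',
    'environmentVariables': {
        'INSTALLATION_TOKEN': 'example-value'
    },
}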
I'm trying to index some data with a geo-point mapping. I need to get satname and timestamp along with "geo".
I get the data from a RESTful API and index it with the Python Elasticsearch client.
settings = {
    "settings": {
        "number_of_shards": 1,
        'number_of_replicas': 0
    },
    "mappings": {
        "document": {
            "properties": {
                "geo": {
                    "type": "geo_point"
                }
            }
        }
    }
}

es.indices.create(index="new", body=settings)

def collect_data():
    data = requests.get(url=URL).json()
    del data['positions'][1]
    new_data = {'geo': {'lat': data['positions'][0]['satlatitude'],
                'lon': data['positions'][0]['satlongitude']}}, {data['info'][0]['satname']},
    {data['positions'][0]['timestamp']}
    es.index(index='new', doc_type='document', body=new_data)

schedule.every(10).seconds.do(collect_data)

while True:
    schedule.run_pending()
    time.sleep(1)
Error received:
SerializationError: (({'geo': {'lat': 37.43662067, 'lon': -26.09384821}}, {1591391688}),
TypeError("Unable to serialize {1591391688} (type: <class 'set'>)"))
Sample JSON data from the RESTful API:
{'info': {'satname': 'SPACE STATION', 'satid': 25544, 'transactionscount': 0},
 'positions': [{'satlatitude': 28.89539607, 'satlongitude': 90.44547739,
                'sataltitude': 420.36, 'azimuth': 12.46, 'elevation': -52.81,
                'ra': 215.55022984, 'dec': -5.00234017,
                'timestamp': 1591196844, 'eclipsed': True}]}
I need to have "geo", "satname" and "timestamp". I'm wondering how I could obtain correct results.
Looks like you were setting the timestamp and satname without a key. Try this to process the data:
import json
from datetime import datetime

response_json = '''
{
    "info": {
        "satname": "SPACE STATION",
        "satid": 25544,
        "transactionscount": 0
    },
    "positions": [
        {
            "satlatitude": 28.89539607,
            "satlongitude": 90.44547739,
            "sataltitude": 420.36,
            "azimuth": 12.46,
            "elevation": -52.81,
            "ra": 215.55022984,
            "dec": -5.00234017,
            "timestamp": 1591196844,
            "eclipsed": true
        }
    ]
}
'''

response_data = json.loads(response_json)

def process_data(data):
    return {
        'satname': data['info']['satname'],
        # convert the Unix timestamp to ISO time
        'timestamp': datetime.fromtimestamp(data['positions'][0]['timestamp']).isoformat(),
        'geo': {
            'lat': data['positions'][0]['satlatitude'],
            'lon': data['positions'][0]['satlongitude']
        }
    }

print(process_data(response_data))
Output:
{'satname': 'SPACE STATION', 'timestamp': '2020-06-03T15:07:24', 'geo': {'lat': 28.89539607, 'lon': 90.44547739}}
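To index the processed document with the Python Elasticsearch client, a minimal follow-up could be as below, assuming the es client and the "new" index with its mapping from the question already exist:

# Assumes `es` and the 'new' index/mapping from the question are already set up.
doc = process_data(response_data)
es.index(index='new', doc_type='document', body=doc)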
I have a lot of potential users for my website (not open to the public).
I do have a Google Analytics account and everything is working well.
I don't want to iterate through all potential users because calling for each individual user will take a very long time (I have about 1200 users).
Instead, I want a list of only active users in the given time period.
Surely this must be possible
(Simple problem, I am happy to answer any questions as I know this is a very brief question I am asking)
EDIT: I am working in Python and need to write code to achieve this.
If you're looking for a list of user ids that you can use with the user activity API, the analytics API has a dimension called 'ga:clientId' that you can call and then filter using the standard parameters - there's a list of options of what you can filter on here:
https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/reports/batchGet#reportrequest
Depending on how you are defining 'active users', below is an example calling the REST API from Python:
import requests
import json

credentials = # { 'your credentials as a dict' }

r = requests.post("https://www.googleapis.com/oauth2/v4/token", data={
    "client_id": credentials["client_id"],
    "client_secret": credentials["client_secret"],
    "refresh_token": credentials["refresh_token"],
    "grant_type": "refresh_token"
})
access_token = json.loads(r.text)

body = {
    "reportRequests": [
        {
            'viewId': # "your ga view ID",
            'pageSize': 100000,
            "includeEmptyRows": True,
            "samplingLevel": "LARGE",
            'dateRanges': [
                {
                    'startDate': "7DaysAgo",
                    'endDate': "yesterday"
                }
            ],
            'metrics': [
                {
                    'expression': "ga:sessions"
                }
            ],
            'filtersExpression': "ga:sessions>2",
            'dimensions': [
                {
                    'name': "ga:clientId"
                }
            ]
        }
    ]
}

resp = requests.post("https://analyticsreporting.googleapis.com/v4/reports:batchGet",
                     json=body,
                     headers={"Authorization": "Bearer " + access_token["access_token"]})
resp = resp.json()
print(json.dumps(resp, indent=4))

clientIds = [x["dimensions"][0] for x in resp["reports"][0]["data"]["rows"]]
print(clientIds)
To build on the answer above, you need to use a combination of the above plus the userActivity.list method.
I have written a full blog post on it: https://medium.com/@alrowe/how-to-pull-out-the-user-explorer-report-with-python-useractivity-search-369bc5052093
Once you have used the above to get a list of client IDs, you then need to iterate through those (see the sketch after the two calls below).
My two API calls look like this:
return analytics.reports().batchGet(
    body={
        "reportRequests": [
            {
                'viewId': VIEW_ID,
                'pageSize': 100000,
                'includeEmptyRows': True,
                'samplingLevel': 'LARGE',
                'dateRanges': [
                    {
                        'startDate': '30DaysAgo',
                        'endDate': 'yesterday'
                    }
                ],
                'metrics': [
                    {
                        'expression': 'ga:sessions'
                    }
                ],
                'filtersExpression': 'ga:sessions>2',
                'dimensions': [
                    {
                        'name': "ga:clientId"
                    }
                ]
            }
        ]
    }
).execute()
and then
def get_client_list_report(analytics, client_id):
    return analytics.userActivity().search(
        body={
            'user': {
                'type': 'CLIENT_ID',
                'userId': client_id
            },
            'dateRange': {
                'startDate': '30DaysAgo',
                'endDate': 'yesterday'
            },
            'viewId': VIEW_ID,
            'pageSize': 100000,
        }
    ).execute()
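Tying the two calls together, the iteration over the client IDs could look roughly like the sketch below. This assumes analytics is the authorized Analytics Reporting service object and client_ids is the list of ga:clientId values parsed from the batchGet response above; paging and error handling are omitted.

# Sketch: fetch user activity for each client ID returned by the batchGet call.
# `analytics` and `client_ids` are assumed to come from the earlier steps.
all_activity = []
for client_id in client_ids:
    activity = get_client_list_report(analytics, client_id)
    all_activity.append(activity)
print(len(all_activity), 'user activity reports fetched')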
I have completely read the documentation for dynamic Facebook marketing. I have also successfully created an ad based on a custom audience and pixel events. But the problem is that every time I create an ad, it shows the same product in the ad templates.
Here is the code for setting up the product set:
product_set = ProductSet(None, <CATALOG ID>)  # <CATALOG ID>
product_set[ProductSet.Field.name] = 'Product Set'
product_set[ProductSet.Field.filter] = {
    'product_type': {
        'i_contains': 'example product type',
    },
}
product_set.remote_create()
product_set_id = product_set[ProductSet.Field.id]
And the code for creating the ad after setting up the campaign and ad set:
adset = AdSet(parent_id='<ACCOUNT_ID>')
adset[AdSet.Field.name] = 'Product Adset'
adset[AdSet.Field.bid_amount] = 9100
adset[AdSet.Field.billing_event] = AdSet.BillingEvent.link_clicks
adset[AdSet.Field.optimization_goal] = AdSet.OptimizationGoal.link_clicks
adset[AdSet.Field.daily_budget] = 45500
adset[AdSet.Field.campaign_id] = campaign_id
adset[AdSet.Field.targeting] = {
    Targeting.Field.publisher_platforms: ['facebook', 'audience_network'],
    Targeting.Field.device_platforms: ['desktop', 'mobile'],
    Targeting.Field.geo_locations: {
        Targeting.Field.countries: ['IN'],
    },
    Targeting.Field.product_audience_specs: [
        {
            'product_set_id': product_set_id,
            'inclusions': [
                {
                    'retention_seconds': 2592000,
                    'rule': {
                        'event': {
                            'eq': 'ViewContent',
                        },
                    },
                },
            ],
            'exclusions': [
                {
                    'retention_seconds': 259200,
                    'rule': {
                        'event': {
                            'eq': 'Purchase',
                        },
                    },
                },
            ],
        },
    ],
    Targeting.Field.excluded_product_audience_specs: [
        {
            'product_set_id': product_set_id,
            'inclusions': [
                {
                    'retention_seconds': 259200,
                    'rule': {
                        'event': {
                            'eq': 'ViewContent',
                        },
                    },
                },
            ],
        },
    ],
}
adset[AdSet.Field.promoted_object] = {
    'product_set_id': product_set_id,
}
adset.remote_create()
adset_id = adset[AdSet.Field.id]
Can you guys help me out with creating dynamic product ads from the product set?
Does your product catalog have items with product_type containing 'example product type'? You can make an API call to verify how many products are under that product set via /<PRODUCT_SET_ID>?fields=product_count,products.
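For example, a quick check of the product count through the Graph API from Python might look like the sketch below; the access token, API version, and product set ID are placeholders, not values from the question.

import requests

# Placeholders: substitute your own access token and product set ID.
ACCESS_TOKEN = '<ACCESS_TOKEN>'
PRODUCT_SET_ID = '<PRODUCT_SET_ID>'

resp = requests.get(
    'https://graph.facebook.com/v2.9/{}'.format(PRODUCT_SET_ID),
    params={
        'fields': 'product_count',
        'access_token': ACCESS_TOKEN,
    },
)
# If product_count is 0, the product set filter matched nothing in the catalog.
print(resp.json())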
When the ads start running, they will automatically render the relevant products from the product set.
For the creative preview, you can specify specific product items to render in the preview, e.g. product_item_ids=["catalog:1000005:MTIzNDU2"].
More details about creative previews for Dynamic Ads: https://developers.facebook.com/docs/marketing-api/dynamic-product-ads/ads-management/v2.9
I'm trying to adapt the asynch_query.py script found at https://github.com/GoogleCloudPlatform/bigquery-samples-python/tree/master/python/samples for use in executing a query and having the output go to a BigQuery table. The JSON section of the script as I've created it for setting the parameters is as follows:
job_data = {
    'jobReference': {
        'projectId': project_id,
        'job_id': str(uuid.uuid4())
    },
    'configuration': {
        'query': {
            'query': queryString,
            'priority': 'BATCH' if batch else 'INTERACTIVE',
            'createDisposition': 'CREATE_IF_NEEDED',
            'defaultDataset': {
                'datasetId': 'myDataset'
            },
            'destinationTable': {
                'datasetID': 'myDataset',
                'projectId': project_id,
                'tableId': 'testTable'
            },
            'tableDefinitions': {
                '(key)': {
                    'schema': {
                        'fields': [
                            {
                                'description': 'eventLabel',
                                'fields': [],
                                'mode': 'NULLABLE',
                                'name': 'eventLabel',
                                'type': 'STRING'
                            }
                        ]
                    }
                }
            }
        }
    }
}
When I run my script I get an error message that a "Required parameter is missing". I've been through the documentation at https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.query trying to figure out what is missing, but attempts at various configurations have failed. Can anyone identify what is missing and how I would fix this error?
Not sure what's going on. To insert the results of a query into another table I use this code:
def create_table_from_query(connector, query, dest_table):
    body = {
        'configuration': {
            'query': {
                'destinationTable': {
                    'projectId': your_project_id,
                    'tableId': dest_table,
                    'datasetId': your_dataset_id
                },
                'writeDisposition': 'WRITE_TRUNCATE',
                'query': query,
            },
        }
    }

    response = connector.jobs().insert(projectId=your_project_id,
                                       body=body).execute()
    wait_job_completion(connector, response['jobReference']['jobId'])

def wait_job_completion(connector, job_id):
    while True:
        response = connector.jobs().get(projectId=your_project_id,
                                        jobId=job_id).execute()
        if response['status']['state'] == 'DONE':
            return
where connector is build('bigquery', 'v2', http=authorization)
Maybe you could start from there and keep adding new fields as you wish (notice that you don't have to define the schema of the table as it's already contained in the results of the query).
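For reference, a minimal way to call the function above might look like the sketch below; the service object, project, dataset, table, and query are placeholders rather than values from the question.

# Sketch: `bigquery_service` is the object returned by
# build('bigquery', 'v2', http=authorization); the names below are placeholders.
your_project_id = 'my-project'
your_dataset_id = 'myDataset'
query = 'SELECT eventLabel FROM [myDataset.sourceTable]'
create_table_from_query(bigquery_service, query, 'testTable')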