We have written a Python script (a Deployment Manager template) to create a Cloud Function; the trigger is HTTPS. We need to invoke the function and fetch its output, so we are using environment variables for that, but somehow they are not getting stored. What are we doing wrong?
def generate_config(context):
""" Entry point for the deployment resources. """
name = context.properties.get('name', context.env['name'])
project_id = context.properties.get('project', context.env['project'])
region = context.properties['region']
resources = []
resources.append(
{
'name': 'createfunction',
'type': 'gcp-types/cloudfunctions-v1:projects.locations.functions',
'properties':
{
'function': "licensevalidation",
'parent': 'projects//locations/',
'sourceArchiveUrl': 'gs://path',
'entryPoint':'handler',
'httpsTrigger': {"url": "https://.cloudfunctions.net/licensevalidation","securityLevel": "SECURE_ALWAYS"},
'timeout': '60s',
'serviceAccountEmail' : '.iam.gserviceaccount.com',
'availableMemoryMb': 256,
'runtime': 'python37' ,
'environmentVariables': {}  # e.g. {'installationtoken': '<value>'}
}
}
)
call ={
'type': 'gcp-types/cloudfunctions-v1:cloudfunctions.projects.locations.functions.call',
'name': 'call',
'properties':
{
'name':'/licensevalidation',
'data': '{""}'
},
'metadata': {
'dependsOn': ['createfunction']
}
}
resources.append(call)
return {
'resources': resources,
'outputs':
[
{
'name': 'installationtoken',
'value': 'os.environ.get(environment_variable)'
},
]
}
I have a lot of potential users for my website (not open to the public).
I do have a Google Analytics account and everything is working well.
I don't want to iterate through all potential users because calling for each individual user will take a very long time (I have about 1200 users).
Instead, I want a list of only active users in the given time period.
Surely this must be possible.
(Simple problem; I am happy to answer any questions, as I know this is a very brief question I am asking.)
EDITED:
I am working in Python and need to write code to achieve this.
If you're looking for a list of user ids that you can use with the User Activity API, the Analytics API has a dimension called 'ga:clientId' that you can request and then filter using the standard parameters. There's a list of what you can filter on here:
https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/reports/batchGet#reportrequest
Depending on how you define 'active users', below is an example calling the REST API from Python:
import requests
import json
credentials = {}  # your credentials as a dict: client_id, client_secret, refresh_token
r = requests.post("https://www.googleapis.com/oauth2/v4/token", data = {
"client_id": credentials["client_id"],
"client_secret": credentials["client_secret"],
"refresh_token": credentials["refresh_token"],
"grant_type": "refresh_token"
}
)
access_token =json.loads(r.text)
body = {
"reportRequests": [
{
'viewId': "your ga view ID",
'pageSize': 100000,
"includeEmptyRows": True,
"samplingLevel": "LARGE",
'dateRanges': [
{
'startDate': "7DaysAgo",
'endDate': "yesterday"
}
],
'metrics': [
{
'expression': "ga:sessions"
}
],
'filtersExpression': "ga:sessions>2",
'dimensions': [
{
'name': "ga:clientId"
}
]
}
]
}
resp = requests.post("https://analyticsreporting.googleapis.com/v4/reports:batchGet",
json=body,
headers = {"Authorization" : "Bearer " + access_token["access_token"]}
)
resp = resp.json()
print(json.dumps(resp, indent = 4))
clientIds = [ x["dimensions"][0] for x in resp["reports"][0]["data"]["rows"] ]
print(clientIds)
To build on the answer above, you need to use a combination of the above plus the userActivity.search method.
I have written a full blog post on it: https://medium.com/@alrowe/how-to-pull-out-the-user-explorer-report-with-python-useractivity-search-369bc5052093
Once you have used the above to get a list of client ids, you then need to iterate through those.
My two API calls look like this:
return analytics.reports().batchGet(
body = {
"reportRequests": [
{
'viewId': VIEW_ID,
'pageSize': 100000,
'includeEmptyRows': True,
'samplingLevel': 'LARGE',
'dateRanges': [
{
'startDate': '30DaysAgo',
'endDate': 'yesterday'
}
],
'metrics': [
{
'expression': 'ga:sessions'
}
],
'filtersExpression': 'ga:sessions>2',
'dimensions': [
{
'name': "ga:clientId"
}
]
}
]
}
).execute()
and then
def get_client_list_report(analytics,client_id):
return analytics.userActivity().search(
body = {
'user': {
'type': 'CLIENT_ID',
'userId': client_id
},
'dateRange':
{
'startDate': '30DaysAgo',
'endDate': 'yesterday'
},
'viewId': VIEW_ID,
'pageSize': 100000,
}
).execute()
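To tie the two calls together you then just loop over the client ids returned by the first report. A minimal sketch, assuming analytics is an authorized Analytics Reporting API v4 service object, VIEW_ID is set, and the first snippet is wrapped in a helper named get_client_ids_report (that name is used here purely for illustration):
def get_active_user_activity(analytics):
    # First call: client ids with more than 2 sessions in the chosen date range.
    report = get_client_ids_report(analytics)
    rows = report['reports'][0]['data'].get('rows', [])
    client_ids = [row['dimensions'][0] for row in rows]

    # Second call: one userActivity.search request per client id.
    activity = {}
    for client_id in client_ids:
        activity[client_id] = get_client_list_report(analytics, client_id)
    return activity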
I am using the compute client to create a VM (using create_or_update) and I want the VM to have a standard HDD, not a premium SSD, as its OS disk. I should be able to specify that in the managed disk parameters, but when I do, the VM still creates with a premium SSD.
Here are my VM parameters.
vm_parameters = {
'location': vm_location,
'os_profile': {
'computer_name': vm_name,
'admin_username': vm_name,
'admin_password': vm_password,
'custom_data': startup_script
},
'hardware_profile': {
'vm_size': 'Standard_B1ls'
},
'storage_profile': {
'image_reference': {
'publisher': 'Canonical',
'offer': 'UbuntuServer',
'sku': '16.04.0-LTS',
'version': 'latest'
},
'os_disk': {
'caching': 'None',
'create_option': 'FromImage',
'disk_size_gb': 30,
'managed_disk_parameters': {
'storage_account_type': 'Standard_LRS'
}
}
},
'network_profile': {
'network_interfaces': [{
'id': nic_info.id
}]
},
'tags': {
'expiration_date': 'expirationdatehere'
}
}
Just specifying the storage account type as Standard_LRS isn't changing anything. What should I do to make my VM create with a standard HDD as its OS disk instead of a premium SSD?
According to my test, you are using the wrong parameter name in vm_parameters. Please update managed_disk_parameters to managed_disk. For more details, please refer to https://learn.microsoft.com/en-us/python/api/azure-mgmt-compute/azure.mgmt.compute.v2019_03_01.models.osdisk?view=azure-python.
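In other words, only the os_disk section of your vm_parameters needs to change. A minimal sketch of the corrected fragment, keeping the values from your question:
os_disk = {
    'caching': 'None',
    'create_option': 'FromImage',
    'disk_size_gb': 30,
    # 'managed_disk' (not 'managed_disk_parameters') is the key the SDK expects
    'managed_disk': {
        'storage_account_type': 'Standard_LRS'
    }
}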
For example:
import os
import traceback
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DiskCreateOption
from msrestazure.azure_exceptions import CloudError
from haikunator import Haikunator
haikunator = Haikunator()
AZURE_TENANT_ID= ''
AZURE_CLIENT_ID=''
AZURE_CLIENT_SECRET=''
AZURE_SUBSCRIPTION_ID=''
credentials = ServicePrincipalCredentials(client_id=AZURE_CLIENT_ID,secret=AZURE_CLIENT_SECRET,tenant=AZURE_TENANT_ID)
resource_client = ResourceManagementClient(credentials, AZURE_SUBSCRIPTION_ID)
compute_client = ComputeManagementClient(credentials,AZURE_SUBSCRIPTION_ID)
network_client = NetworkManagementClient(credentials, AZURE_SUBSCRIPTION_ID)
GROUP_NAME='allenR'
LOCATION='eastus'
# Network
VNET_NAME = 'azure-sample-vnet'
SUBNET_NAME = 'azure-sample-subnet'
print('\nCreate Vnet')
async_vnet_creation = network_client.virtual_networks.create_or_update(
GROUP_NAME,
VNET_NAME,
{
'location': LOCATION,
'address_space': {
'address_prefixes': ['10.0.0.0/16']
}
}
)
async_vnet_creation.wait()
# Create Subnet
print('\nCreate Subnet')
async_subnet_creation = network_client.subnets.create_or_update(
GROUP_NAME,
VNET_NAME,
SUBNET_NAME,
{'address_prefix': '10.0.0.0/24'}
)
subnet_info = async_subnet_creation.result()
# Create NIC
print('\nCreate NIC')
async_nic_creation = network_client.network_interfaces.create_or_update(
GROUP_NAME,
'test19191',
{
'location': LOCATION,
'ip_configurations': [{
'name': 'test19191-ip',
'subnet': {
'id': subnet_info.id
}
}]
}
)
nic = async_nic_creation.result()
print(nic.id)
vm_parameters = {
'location': 'eastus',
'os_profile': {
'computer_name': 'jimtest120yue',
'admin_username': 'jimtest',
'admin_password': 'Password0123!',
#'custom_data': startup_script
},
'hardware_profile': {
'vm_size': 'Standard_B1ls'
},
'storage_profile': {
'image_reference': {
'publisher': 'Canonical',
'offer': 'UbuntuServer',
'sku': '16.04.0-LTS',
'version': 'latest'
},
'os_disk': {
'caching': 'ReadWrite',
'name' : 'jimtest120yue_disk',
'create_option': 'FromImage',
'disk_size_gb': 30,
'os_type': 'Linux',
'managed_disk': {
'storage_account_type': 'Standard_LRS'
}
}
},
'network_profile': {
'network_interfaces': [{
'id': nic.id
}]
},
'tags': {
'expiration_date': 'expirationdatehere'
}
}
async_vm_creation=compute_client.virtual_machines.create_or_update('allenR','jimtest120yue',vm_parameters)
print(async_vm_creation.result())
disk = compute_client.disks.get('allenR','jimtest120yue_disk')
print(disk.sku)
If you are using the REST API to create the VM, then here is a sample JSON request for creating the VM:
{
"location": "westus",
"properties": {
"hardwareProfile": {
"vmSize": "Standard_D1_v2"
},
"storageProfile": {
"imageReference": {
"id": "/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/images/{existing-custom-image-name}"
},
"osDisk": {
"caching": "ReadWrite",
"managedDisk": {
"storageAccountType": "Standard_LRS"
},
"name": "myVMosdisk",
"createOption": "FromImage"
}
},
"osProfile": {
"adminUsername": "{your-username}",
"computerName": "myVM",
"adminPassword": "{your-password}"
},
"networkProfile": {
"networkInterfaces": [
{
"id": "/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/{existing-nic-name}",
"properties": {
"primary": true
}
}
]
}
}
}
Here is the API for the same:
PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2019-03-01
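For illustration, here is a minimal sketch of issuing that PUT from Python with the requests library. Obtaining the bearer token is not shown, and the placeholder values (including saving the JSON body above to a local vm_body.json file) are assumptions you would replace with your own setup:
import json
import requests

# Assumed placeholders -- replace with your own values.
subscription_id = "your-subscription-id"
resource_group = "myResourceGroup"
vm_name = "myVM"
access_token = "your-arm-bearer-token"  # an Azure AD token for https://management.azure.com/

url = ("https://management.azure.com/subscriptions/{}/resourceGroups/{}"
       "/providers/Microsoft.Compute/virtualMachines/{}?api-version=2019-03-01"
       .format(subscription_id, resource_group, vm_name))

# The JSON request body shown above, saved to a file for this sketch.
with open("vm_body.json") as f:
    vm_body = json.load(f)

resp = requests.put(url, json=vm_body,
                    headers={"Authorization": "Bearer " + access_token})
print(resp.status_code)
print(resp.json())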
If you are looking for a way to create a virtual machine with the Python SDK, you can follow the code sample below:
https://github.com/Azure-Samples/Hybrid-Compute-Python-Manage-VM/blob/master/example.py
Hope it helps.
Following the docs, there's an example to export all data for a specific OU:
def create_drive_ou_all_data_export(service, matter_id):
ou_to_search = 'ou id retrieved from admin sdk'
drive_query_options = {'includeSharedDrives': True}
drive_query = {
'corpus': 'DRIVE',
'dataScope': 'ALL_DATA',
'searchMethod': 'ORG_UNIT',
'orgUnitInfo': {
'org_unit_id': ou_to_search
},
'driveOptions': drive_query_options,
'startTime': '2017-03-16T00:00:00Z',
'endTime': '2017-09-23T00:00:00Z',
'timeZone': 'Etc/GMT+2'
}
drive_export_options = {'includeAccessInfo': False}
wanted_export = {
'name': 'My first drive ou export',
'query': drive_query,
'exportOptions': {
'driveOptions': drive_export_options
}
}
return service.matters().exports().create(
matterId=matter_id, body=wanted_export).execute()
However, it does not show how to export for just a given user; is this possible? Also, where are all of the different body options for creating an export documented? The examples do not seem to show all of the parameters available.
You'd want to use searchMethod: ACCOUNT together with accountInfo.
Reference Query: https://developers.google.com/vault/reference/rest/v1/Query
Reference searchmethod: https://developers.google.com/vault/reference/rest/v1/Query#SearchMethod
Reference AccountInfo: https://developers.google.com/vault/reference/rest/v1/Query#AccountInfo
drive_query = {
'corpus': 'DRIVE',
'dataScope': 'ALL_DATA',
'searchMethod': 'ACCOUNT', # This is different
'accountInfo': { # This is different
'emails': ['email1@company.com', 'email2@company.com', 'email3@company.com']
},
'driveOptions': drive_query_options,
'startTime': '2017-03-16T00:00:00Z',
'endTime': '2017-09-23T00:00:00Z',
'timeZone': 'Etc/GMT+2'
}
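Dropping that query into the same export-creation pattern as the docs example above gives something like the following minimal sketch, assuming service is an authorized Vault API client and matter_id identifies an existing matter:
def create_drive_account_export(service, matter_id):
    drive_query = {
        'corpus': 'DRIVE',
        'dataScope': 'ALL_DATA',
        'searchMethod': 'ACCOUNT',
        'accountInfo': {
            'emails': ['email1@company.com']
        },
        'driveOptions': {'includeSharedDrives': True},
        'startTime': '2017-03-16T00:00:00Z',
        'endTime': '2017-09-23T00:00:00Z',
        'timeZone': 'Etc/GMT+2'
    }
    wanted_export = {
        'name': 'Drive export for a single account',
        'query': drive_query,
        'exportOptions': {
            'driveOptions': {'includeAccessInfo': False}
        }
    }
    return service.matters().exports().create(
        matterId=matter_id, body=wanted_export).execute()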
I'm trying to integrate CyberSource's REST API into a Django (Python) application. I'm following this GitHub example.
It works like a charm but it is not clear to me from the example or from the documentation how to specify the device's fingerprint ID.
Here's a snippet of the request I'm sending in case it comes useful (note: this is just a method that lives inside a POPO):
def authorize_payment(self, card_token: str, total_amount: Money, customer: CustomerInformation = None,
merchant: MerchantInformation = None):
try:
request = {
'payment_information': {
# NOTE: REQUIRED.
'card': None,
'tokenized_card': None,
'customer': {
'customer_id': card_token,
},
},
'order_information': {
'amount_details': {
'total_amount': str(total_amount.amount),
'currency': str(total_amount.currency),
},
},
}
if customer:
request['order_information'].update({
'bill_to': {
'first_name': customer.first_name,
'last_name': customer.last_name,
'company': customer.company,
'address1': customer.address1,
'address2': customer.address2,
'locality': customer.locality,
'country': customer.country,
'email': customer.email,
'phone_number': customer.phone_number,
'administrative_area': customer.administrative_area,
'postalCode': customer.zip_code,
}
})
serialized_request = json.dumps(request)
data, status, body = self._payment_api_client.create_payment(create_payment_request=serialized_request)
return data.id
except Exception as e:
raise AuthorizePaymentError from e
I'm trying to adapt the asynch_query.py script found at https://github.com/GoogleCloudPlatform/bigquery-samples-python/tree/master/python/samples for use in executing a query and having the output go to a BigQuery table. The JSON section of the script, as I've written it to set the parameters, is as follows:
job_data = {
'jobReference': {
'projectId': project_id,
'job_id': str(uuid.uuid4())
},
'configuration': {
'query': {
'query': queryString,
'priority': 'BATCH' if batch else 'INTERACTIVE',
'createDisposition': 'CREATE_IF_NEEDED',
'defaultDataset': {
'datasetId': 'myDataset'
},
'destinationTable': {
'datasetID': 'myDataset',
'projectId': project_id,
'tableId': 'testTable'
},
'tableDefinitions': {
'(key)': {
'schema': {
'fields': [
{
'description': 'eventLabel',
'fields': [],
'mode': 'NULLABLE',
'name': 'eventLabel',
'type': 'STRING'
}]
}
}
}
}
}
}
When I run my script I get an error message saying "Required parameter is missing". I've been through the documentation at https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.query trying to figure out what is missing, but attempts at various configurations have failed. Can anyone identify what is missing and how I would fix this error?
Not sure what's going on. To insert the results of a query into another table I use this code:
def create_table_from_query(connector, query,dest_table):
body = {
'configuration': {
'query': {
'destinationTable': {
'projectId': your_project_id,
'tableId': dest_table,
'datasetId': your_dataset_id
},
'writeDisposition': 'WRITE_TRUNCATE',
'query': query,
},
}
}
    response = connector.jobs().insert(projectId=your_project_id,
                                       body=body).execute()
    wait_job_completion(connector, response['jobReference']['jobId'])
def wait_job_completion(connector, job_id):
    # Poll the job until BigQuery reports it as DONE.
    while True:
        response = connector.jobs().get(projectId=your_project_id,
                                        jobId=job_id).execute()
        if response['status']['state'] == 'DONE':
            return
where connector is build('bigquery', 'v2', http=authorization)
Maybe you could start from there and keep adding new fields as you wish (notice that you don't have to define the schema of the table as it's already contained in the results of the query).
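For example, a minimal sketch of wiring it together, assuming authorization is the authorized HTTP object mentioned above, your_project_id and your_dataset_id are set, and the query and table names are placeholders:
from googleapiclient.discovery import build

# Build the BigQuery v2 service object, as described above.
connector = build('bigquery', 'v2', http=authorization)

# Run a query and write its results into your dataset's testTable.
create_table_from_query(connector,
                        query='SELECT eventLabel FROM [myDataset.sourceTable]',
                        dest_table='testTable')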