I'm looking to find all the active resources (like Compute Engine instances, GKE clusters, etc.) and their respective zones.
I tried the Python code below to print that, but it prints information for every zone where Compute Engine is available, even zones with no instances. Can someone please tell me which functions are available to do this?
import googleapiclient.discovery
from pprint import pprint

project = 'your-project-id'  # placeholder for your project ID

compute = googleapiclient.discovery.build('compute', 'v1')

request = compute.instances().aggregatedList(project=project)
while request is not None:
    response = request.execute()
    for name, instances_scoped_list in response['items'].items():
        pprint((name, instances_scoped_list))
    request = compute.instances().aggregatedList_next(previous_request=request, previous_response=response)
You can list all the instances in your project using the gcloud compute instances list command (from the Cloud SDK) or the instances.list() method.
To list all instances in a project in table form, run:
gcloud compute instances list
You will get something like:
NAME        ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
instance-1  us-central1-a  n1-standard-1               10.128.0.44  xx.xx.xxx.xx  RUNNING
instance-2  us-central1-b  n1-standard-1               10.128.0.49  xx.xx.xxx.xx  RUNNING
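If you only need certain columns or want to filter by state, gcloud also supports --filter and --format flags; for example, to show just the name and zone of running instances:

gcloud compute instances list --filter="status=RUNNING" --format="table(name,zone)"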
Edit 1:
As you mentioned, aggregatedList() is the correct method, and to get the information you need you have to walk the JSON response body.
If you only need specific fields, you can check the response body documentation.
You can also use this code as a guide; I'm printing the information for every instance.
from pprint import pprint

from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)

# Project ID for this request.
project = "{Project-ID}"  # TODO: Update placeholder value.

request = service.instances().aggregatedList(project=project)
while request is not None:
    response = request.execute()
    items = response.get('items', {})
    for zone, scoped_list in items.items():
        # Each scoped list holds the instances for one zone (if any).
        for instance in scoped_list.get('instances', []):
            pprint((zone, instance))
    request = service.instances().aggregatedList_next(previous_request=request, previous_response=response)
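If you only want to see the zones that actually contain instances (the empty scopes come back with just a warning entry), a small variation on the loop above works; a minimal sketch:

request = service.instances().aggregatedList(project=project)
while request is not None:
    response = request.execute()
    for zone, scoped_list in response.get('items', {}).items():
        # Zones with no instances carry only a 'warning' key, so skip them.
        if 'instances' in scoped_list:
            for instance in scoped_list['instances']:
                print(zone, instance['name'], instance['status'])
    request = service.instances().aggregatedList_next(
        previous_request=request, previous_response=response)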
I'm trying to run a Google Apps Script function remotely from a Python Flask app. This function creates Google Calendar events with inputs from a Google Sheet. I referred to this documentation from Google to set up the Python script that runs the Apps Script function. I followed every step required to deploy the Apps Script project as an executable API, connected it to a Google developer project, and created OAuth 2.0 ID credentials as well.
From the API executable documentation, I took the following code and modified it to run as an object which can be called from the main server file.
from __future__ import print_function

from googleapiclient import errors
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import file as oauth_file, client, tools


class CreateGCalEvent:
    def main(self):
        """Runs the sample."""
        SCRIPT_ID = 'my app script deployment ID was put here'

        # Set up the Apps Script API
        SCOPES = [
            'https://www.googleapis.com/auth/script.scriptapp',
            'https://www.googleapis.com/auth/drive.readonly',
            'https://www.googleapis.com/auth/drive',
        ]
        store = oauth_file.Storage('token.json')
        creds = store.get()
        if not creds or creds.invalid:
            flow = client.flow_from_clientsecrets('app_script_creds.json', SCOPES)
            creds = tools.run_flow(flow, store)
        service = build('script', 'v1', credentials=creds)

        # Create an execution request object.
        request = {"function": "getFoldersUnderRoot"}

        try:
            # Make the API request.
            response = service.scripts().run(body=request,
                                             scriptId=SCRIPT_ID).execute()
            if 'error' in response:
                # The API executed, but the script returned an error.
                # Extract the first (and only) set of error details. The
                # values of this object are the script's 'errorMessage' and
                # 'errorType', and a list of stack trace elements.
                error = response['error']['details'][0]
                print("Script error message: {0}".format(error['errorMessage']))
                if 'scriptStackTraceElements' in error:
                    # There may not be a stacktrace if the script didn't
                    # start executing.
                    print("Script error stacktrace:")
                    for trace in error['scriptStackTraceElements']:
                        print("\t{0}: {1}".format(trace['function'],
                                                  trace['lineNumber']))
            else:
                # The structure of the result depends upon what the Apps
                # Script function returns. Here, the function returns an Apps
                # Script Object with String keys and values, and so the
                # result is treated as a Python dictionary (folderSet).
                folderSet = response['response'].get('result', {})
                if not folderSet:
                    print('No folders returned!')
                else:
                    print('Folders under your root folder:')
                    for (folderId, folder) in folderSet.items():
                        print("\t{0} ({1})".format(folder, folderId))
        except errors.HttpError as e:
            # The API encountered a problem before the script started
            # executing.
            print(e.content)
Here is where the error occurs: it can locate neither token.json nor app_script_creds.json.
With a service account or any normal OAuth 2.0 ID, I'm given the option to download credentials.json when I create it. Here, however, all I seem to get is an Apps Script ID with no edit access and no credentials to download as JSON. I created another OAuth ID in the same project, as shown in the screenshot, which has edit access and a JSON file ready for download. When I used that JSON file in the Python script, it told me it was expecting redirect URIs, and I don't know what those are for or where to redirect to.
What do I need to do to get this working?
I adapted some code that I used for connecting to the Apps Script API. I hope it works for you too. The code is pretty much the same thing as this.
You can use from_client_secrets_file since you're already loading these credentials from a file. What the code does is look for a token file first; if the token file is not there, it logs the user in (prompting with the Google authorization screen) and stores the new token in the file as a pickle.
Regarding the credentials in the Google console, you need to pick the Desktop application type when creating them, because that is basically what a server is.
Note: with this, you can only have one user performing all of these actions, because the script will start a local server on the server machine to authenticate you; your client code will not see any of this.
import logging
import pickle
from pathlib import Path

from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

log = logging.getLogger(__name__)


class GoogleApiService:
    def __init__(self, scopes):
        """
        Args:
            scopes: scopes required by the script. There needs to be at
                least one scope specified.
        """
        self.client_secrets = Path('credentials/credentials.json')
        self.token_path = Path('credentials/token.pickle')
        self.credentials = None
        self.scopes = scopes

    def get_service(self):
        self.__authenticate()
        return build('script', 'v1', credentials=self.credentials)

    def __authenticate(self):
        log.debug(f'Looking for existing token in {self.token_path}')
        if self.token_path.exists():
            with self.token_path.open('rb') as token:
                self.credentials = pickle.load(token)
            if self.__token_expired():
                self.credentials.refresh(Request())
        # If we can't find any token, we log in and save it
        else:
            self.__log_in()
            self.__save_token()

    def __log_in(self):
        flow = InstalledAppFlow.from_client_secrets_file(
            self.client_secrets,
            self.scopes
        )
        self.credentials = flow.run_local_server(port=0)

    def __save_token(self):
        with self.token_path.open('wb') as token:
            pickle.dump(self.credentials, token)

    def __token_expired(self):
        return self.credentials and self.credentials.expired and \
            self.credentials.refresh_token


# Example for Google Apps Script
def main():
    # Placeholders: use your own script ID, function name, and parameters.
    script_id = 'your-script-id'
    params = []
    request = {'function': 'some_function', 'parameters': params}
    gapi_service = GoogleApiService(
        ['https://www.googleapis.com/auth/script.scriptapp'])
    with gapi_service.get_service() as service:
        response = service.scripts().run(
            scriptId=script_id,
            body=request
        ).execute()
        if response.get('error'):
            message = response['error']['details'][0]['errorMessage']
            raise RuntimeError(message)
        else:
            return response['response']['result']
I am trying to learn the Python SDK to help me manage my Google Cloud Platform resources. Can someone help me understand the snippet below, which I got from the Google API documentation?
This code works on its own. Let's say I want to list all of the roles in my organization, or list the roles of a particular project: where and how do I do that?
Thank you very much in advance.
from pprint import pprint

from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('iam', 'v1', credentials=credentials)

request = service.roles().list()
while True:
    response = request.execute()
    for role in response.get('roles', []):
        # TODO: Change code below to process each `role` resource:
        pprint(role)
    request = service.roles().list_next(previous_request=request, previous_response=response)
    if request is None:
        break
You can get the IAM roles granted on a project by using the resourcemanager.projects.getIamPolicy method [1]. If this is what you are looking for, you can use the following Python library [2].
Here is a sample snippet that will return the IAM roles and the users assigned to them:
from apiclient.discovery import build

service = build('cloudresourcemanager', 'v1')
project_id = '[project_ID]'

policy_request = service.projects().getIamPolicy(resource=project_id, body={})
policy_response = policy_request.execute()

members = set()
for binding in policy_response['bindings']:
    members |= set(binding['members'])
print('\n'.join(sorted(members)))
[1] https://developers.google.com/apis-explorer/#search/project/cloudresourcemanager/v1/cloudresourcemanager.projects.getIamPolicy
[2] https://developers.google.com/api-client-library/python/apis/cloudresourcemanager/v1
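If instead you want to list the custom roles defined in a particular project or organization, the IAM API also exposes projects.roles.list() and organizations.roles.list(). A minimal sketch, assuming the same application-default credentials as in the question (the project and organization IDs are placeholders):

from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
iam = discovery.build('iam', 'v1', credentials=credentials)

# Custom roles defined in one project; for an organization, use
# iam.organizations().roles().list(parent='organizations/ORG_ID') instead.
request = iam.projects().roles().list(parent='projects/my-project-id')
while request is not None:
    response = request.execute()
    for role in response.get('roles', []):
        print(role['name'])
    request = iam.projects().roles().list_next(
        previous_request=request, previous_response=response)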
Using the Python API for Azure, I want to get the state of one of my machines.
I can't find anywhere to access this information.
Does someone know how?
After looking around, I found this:
get_with_instance_view(resource_group_name, vm_name)
https://azure-sdk-for-python.readthedocs.org/en/latest/ref/azure.mgmt.compute.computemanagement.html#azure.mgmt.compute.computemanagement.VirtualMachineOperations.get_with_instance_view
If you are using the legacy API (this will work for classic virtual machines), use:

from azure.servicemanagement import ServiceManagementService

sms = ServiceManagementService('your subscription id', 'your-azure-certificate.pem')

your_deployment = sms.get_deployment_by_name('service name', 'deployment name')
for role_instance in your_deployment.role_instance_list:
    print(role_instance.instance_name, role_instance.instance_status)
If you are using the current API (this will not work for classic VMs), use:

from azure.common.credentials import UserPassCredentials
from azure.mgmt.compute import ComputeManagementClient
import retry

credentials = UserPassCredentials('username', 'password')
compute_client = ComputeManagementClient(credentials, 'your subscription id')

@retry.retry(RuntimeError, tries=3)
def get_vm(resource_group_name, vm_name):
    '''
    You need to retry this just in case the credentials token expires;
    that's where the decorator comes in.
    This will return all the data about the virtual machine.
    '''
    return compute_client.virtual_machines.get(
        resource_group_name, vm_name, expand='instanceView')

@retry.retry((RuntimeError, IndexError), tries=-1)
def get_vm_status(resource_group_name, vm_name):
    '''
    This will just return the status of the virtual machine.
    Sometimes the status may be unknown, as shown by the Azure portal;
    in that case statuses[1] doesn't exist, hence retrying on IndexError.
    Also, it may take on the order of minutes for the status to become
    available, so the decorator will bang on it forever.
    '''
    return compute_client.virtual_machines.get(
        resource_group_name, vm_name,
        expand='instanceView').instance_view.statuses[1].display_status
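For example, a hypothetical call (both names are placeholders for your own resources):

# Prints something like "VM running" once the status is available.
print(get_vm_status('my-resource-group', 'my-vm'))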
If you are using Azure Cloud Services, you should use the Role Environment API, which provides state information for the current instance of your service.
https://msdn.microsoft.com/en-us/library/azure/microsoft.windowsazure.serviceruntime.roleenvironment.aspx
In the new Resource Manager API, there's a function:
get_with_instance_view(resource_group_name, vm_name)
It's the same function as get machine, but it also returns an instance view that contains the machine state.
https://azure-sdk-for-python.readthedocs.org/en/latest/ref/azure.mgmt.compute.computemanagement.html#azure.mgmt.compute.computemanagement.VirtualMachineOperations.get_with_instance_view
Use the get_deployment_by_name method to get the instance status:

from azure.servicemanagement import ServiceManagementService

subscription_id = '****-***-***-**'
certificate_path = 'CURRENT_USER\\my\\***'

sms = ServiceManagementService(subscription_id, certificate_path)
result = sms.get_deployment_by_name("your service name", "your deployment name")

You can then get the instance status via the instance_status property of each entry in result.role_instance_list, as in the loop shown earlier.
Please see this post https://stackoverflow.com/a/31404545/4836342
As mentioned in other answers the Azure Resource Manager API has an instance view query to show the state of running VMs.
The documentation listing for this is here: VirtualMachineOperations.get_with_instance_view()
Typical code to get the status of a VM is something like this:
import azure.mgmt.common
import azure.mgmt.compute

resource_group = "myResourceGroup"
vm_name = "myVMName"

creds = azure.mgmt.common.SubscriptionCloudCredentials(…)
compute_client = azure.mgmt.compute.ComputeManagementClient(creds)

vm = compute_client.virtual_machines.get_with_instance_view(resource_group, vm_name).virtual_machine

# Index 0 is the ProvisioningState; index 1 is the instance PowerState.
# display_status will typically be "VM running", "VM stopped", etc.
vm_status = vm.instance_view.statuses[1].display_status
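Since statuses[1] is not guaranteed to exist (an earlier answer notes the status can be unknown for a while), a slightly more defensive lookup can scan for the PowerState entry instead of assuming its index; a minimal sketch:

# Sketch: find the PowerState entry rather than hard-coding index 1.
power_states = [s.display_status for s in vm.instance_view.statuses
                if s.code.startswith('PowerState/')]
vm_status = power_states[0] if power_states else 'unknown'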
There is no direct way to get the state of a virtual machine while listing them.
But we can list the VMs and loop over them, fetching the instance_view of each machine and grabbing its power state.
In the code block below, I do exactly that and dump the values into a .csv file to make a report.
import csv

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient


def get_credentials():
    subscription_id = "*******************************"
    credential = ServicePrincipalCredentials(
        client_id="*******************************",
        secret="*******************************",
        tenant="*******************************"
    )
    return credential, subscription_id


credentials, subscription_id = get_credentials()

# Initializing compute client with the credentials
compute_client = ComputeManagementClient(credentials, subscription_id)

resource_group_name = "**************"
json_list = []

# Listing out the virtual machine names
vm_list = compute_client.virtual_machines.list(resource_group_name=resource_group_name)

# Looping over the list of virtual machines to grab the state of each machine
for i in vm_list:
    vm_state = compute_client.virtual_machines.instance_view(
        resource_group_name=resource_group_name, vm_name=i.name)
    # Build a fresh dict per VM so earlier rows aren't overwritten.
    json_list.append({
        "Vm_name": i.name,
        "Vm_state": vm_state.statuses[1].code,
        "Resource_group": resource_group_name,
    })

csv_columns = ["Vm_name", "Vm_state", "Resource_group"]
with open("vm_state.csv", "w+", newline="") as f:
    csv_file = csv.DictWriter(f, fieldnames=csv_columns)
    csv_file.writeheader()
    for row in json_list:
        csv_file.writerow(row)
To grab the state of a single virtual machine, where you know its resource_group_name and vm_name, just use the block below.
vm_state = compute_client.virtual_machines.instance_view(resource_group_name="foo_rg_name", vm_name="foo_vm_name")
power_state = vm_state.statuses[1].code
print(power_state)
As per the new API reference, this worked for me
vm_status = compute_client.virtual_machines.instance_view(GROUP_NAME, VM_NAME).statuses[1].code
It will return one of the following, based on the machine's current state:
"PowerState/stopped", "PowerState/running", "PowerState/stopping", "PowerState/starting"
Is there any way of granting read-only access to a specific BigQuery dataset to a given client ID?
I've tried using a service account, but this gives full access to all datasets.
I also tried creating a service account from a different application and adding the email address generated along with the certificate to BigQuery > Some Dataset > Share Dataset > Can view, but this always results in a 403 "Access not Configured" error.
I'm using the server-to-server flow described in the documentation:
import httplib2
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

# REPLACE WITH YOUR Project ID
PROJECT_NUMBER = 'XXXXXXXXXXX'
# REPLACE WITH THE SERVICE ACCOUNT EMAIL FROM GOOGLE DEV CONSOLE
SERVICE_ACCOUNT_EMAIL = 'XXXXX@developer.gserviceaccount.com'

# Read the service account's private key.
f = open('key.p12', 'rb')
key = f.read()
f.close()

credentials = SignedJwtAssertionCredentials(
    SERVICE_ACCOUNT_EMAIL,
    key,
    scope='https://www.googleapis.com/auth/bigquery.readonly')

http = httplib2.Http()
http = credentials.authorize(http)

service = build('bigquery', 'v2')
tables = service.tables()
response = tables.list(projectId=PROJECT_NUMBER, datasetId='SomeDataset').execute(http)
print(response)
I'm basically trying to provide readonly access to an external server based application to a single dataset.
As pointed out by Fh, you must activate the BigQuery API in the Google account where the service account was created, regardless of the fact that it will be querying a BigQuery endpoint bound to a different application ID.
I just wrote this code, which is supposed to check whether a calendar exists and, if not, create one. However, it returns a 404 error when I try to create the calendar, and the calendar does NOT appear. Any ideas? I have blanked out the client ID, secret, and app key.
import gflags
import httplib2
import sys, traceback

from apiclient.discovery import build
from oauth2client.file import Storage
from oauth2client.client import OAuth2WebServerFlow
from oauth2client.tools import run

FLAGS = gflags.FLAGS

# Set up a Flow object to be used if we need to authenticate. This
# sample uses OAuth 2.0, and we set up the OAuth2WebServerFlow with
# the information it needs to authenticate. Note that it is called
# the Web Server Flow, but it can also handle the flow for native
# applications.
# The client_id and client_secret are copied from the API Access tab on
# the Google APIs Console.
FLOW = OAuth2WebServerFlow(
    client_id='MY_CLIENT_ID',
    client_secret='MY_SECRET',
    scope='https://www.googleapis.com/auth/calendar',
    user_agent='KUDOS_CALENDAR/v1')

# To disable the local server feature, uncomment the following line:
# FLAGS.auth_local_webserver = False

# If the Credentials don't exist or are invalid, run through the native client
# flow. The Storage object will ensure that if successful the good
# Credentials will get written back to a file.
storage = Storage('calendar.dat')
credentials = storage.get()
if credentials is None or credentials.invalid == True:
    credentials = run(FLOW, storage)

# Create an httplib2.Http object to handle our HTTP requests and authorize it
# with our good Credentials.
http = httplib2.Http()
http = credentials.authorize(http)

# Build a service object for interacting with the API. Visit
# the Google APIs Console to get a developerKey for your own application.
service = build(serviceName='calendar', version='v3', http=http,
                developerKey='MY_DEV_KEY')

kudos_calendar = None
try:
    kudos_calendar = service.calendarList().get(calendarId='KudosCalendar').execute()
except:
    print('Calendar KudosCalendar does not exist!')
    print('Creating one right now...')
    kudos_calendar_entry = {
        'id': 'KudosCalendar'
    }
    kudos_calendar = service.calendarList().insert(body=kudos_calendar_entry).execute()
OK, I found a way around it. I am not sure exactly what Google's abstractions reflect, but I am fairly sure one cannot directly insert into the calendar list. However, if you create a calendar itself, everything goes fine, and you can then use the calendar ID to access the calendarList entry corresponding to that calendar.
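A minimal sketch of that workaround, assuming the same authorized service object as above (the summary string is a placeholder):

# Create the calendar itself, then fetch its calendarList entry by ID.
created = service.calendars().insert(body={'summary': 'KudosCalendar'}).execute()
entry = service.calendarList().get(calendarId=created['id']).execute()
print(entry['id'], entry.get('summary'))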
Ugh, horribly confusing. While trying to do that, I also found at least two bugs in the example Python code given in the docs. I think they still have not properly rolled out v3.