My requirement is to get all VMs in a subscription along with their launch (created) time. I couldn't find the VM created time in the dashboard, whereas the Activity Log does show a timestamp. I would like to fetch all VMs created under one subscription ID, along with their created time.
(2FA is enabled on this account, so UserPassCredentials won't work.)
List all VMs in a subscription:
import os
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient

subscription_id = os.environ['AZURE_SUBSCRIPTION_ID']
credentials = ServicePrincipalCredentials(client_id=os.environ['AZURE_CLIENT_ID'], secret=os.environ['AZURE_CLIENT_SECRET'], tenant=os.environ['AZURE_TENANT_ID'])
compute_client = ComputeManagementClient(credentials, subscription_id)

for vm in compute_client.virtual_machines.list_all():
    print("\tVM: {}".format(vm.name))
Fetch created time from Activity log:
import os
import datetime
from pprint import pprint
from azure.monitor import MonitorClient
from azure.common.credentials import ServicePrincipalCredentials

today = datetime.datetime.now().date()
filter = " and ".join([ "eventTimestamp le '{}T00:00:00Z'".format(today), "resourceGroupName eq 'test-group'" ])

subscription_id = 'xxxxx'
credentials = ServicePrincipalCredentials(client_id=os.environ['AZURE_CLIENT_ID'], secret=os.environ['AZURE_CLIENT_SECRET'], tenant=os.environ['AZURE_TENANT_ID'])
client = MonitorClient(credentials, subscription_id)

select = ",".join([ "Administrative", "Write VirtualMachines" ])
activity_logs = client.activity_logs.list( filter=filter, select=select )
for i in activity_logs:
    pprint(i.__dict__)
I'm able to get all of the VMs (1st sample program). However, while trying to fetch the Activity Log I get an error (2nd sample program).
Error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/msrest/paging.py", line 109, in __next__
self.advance_page()
File "/Library/Python/2.7/site-packages/msrest/paging.py", line 95, in advance_page
self._response = self._get_next(self.next_link)
File "/Library/Python/2.7/site-packages/azure/monitor/operations/activity_logs_operations.py", line 117, in internal_paging
raise models.ErrorResponseException(self._deserialize, response)
azure.monitor.models.error_response.ErrorResponseException: Operation returned an invalid status code 'Bad Request'
Can somebody help me find the issue, please? Any help is really appreciated.
I tried to fetch the Activity Log of my resource group today using the code you provided, and I reproduced your issue.
My code:
import os
import datetime
from pprint import pprint
from azure.monitor import MonitorClient
from azure.common.credentials import ServicePrincipalCredentials

subscription_id = '***'
client_id = '***'
secret = '***'
tenant = '***'

today = datetime.datetime.now().date()
filter = " and ".join([ "eventTimestamp le '{}T00:00:00Z'".format(today), "resourceGroupName eq 'jay'" ])

credentials = ServicePrincipalCredentials(client_id=client_id, secret=secret, tenant=tenant)
client = MonitorClient(credentials, subscription_id)

select = ",".join([ "eventName", "operationName" ])
print select
print filter

activity_logs = client.activity_logs.list( filter=filter, select=select )
for log in activity_logs:
    # assert isinstance(log, azure.monitor.models.EventData)
    print(" ".join([
        log.event_name.localized_value,
        log.operation_name.localized_value
    ]))
Running result:
eventName,operationName
eventTimestamp le '2017-10-17T00:00:00Z' and resourceGroupName eq 'jay'
Traceback (most recent call last):
File "E:/PythonWorkSpace/ActiveLog/FetchActiveLog.py", line 24, in <module>
for log in activity_logs:
File "E:\Python27\lib\site-packages\msrest\paging.py", line 109, in __next__
self.advance_page()
File "E:\Python27\lib\site-packages\msrest\paging.py", line 95, in advance_page
self._response = self._get_next(self.next_link)
File "E:\Python27\lib\site-packages\azure\monitor\operations\activity_logs_operations.py", line 117, in internal_paging
raise models.ErrorResponseException(self._deserialize, response)
azure.monitor.models.error_response.ErrorResponseException: Operation returned an invalid status code 'Bad Request'
After researching the Azure Monitor Python SDK, I found the difference:
filter = " and ".join([ "eventTimestamp ge '{}T00:00:00Z'".format(today), "resourceGroupName eq 'jay'" ])
Here it is ge (on or after), not le (on or before).
I modified the keyword and then the code worked well for me.
eventName,operationName
eventTimestamp ge '2017-10-17T00:00:00Z' and resourceGroupName eq 'jay'
End request Microsoft.Compute/virtualMachines/delete
End request Microsoft.Compute/virtualMachines/delete
End request Microsoft.Compute/virtualMachines/delete
Begin request Microsoft.Compute/virtualMachines/delete
End request Microsoft.Compute/virtualMachines/deallocate/action
End request Microsoft.Compute/virtualMachines/deallocate/action
Begin request Microsoft.Compute/virtualMachines/deallocate/action
End request Microsoft.Compute/virtualMachines/write
End request Microsoft.Compute/disks/write
End request Microsoft.Compute/virtualMachines/write
End request Microsoft.Network/networkSecurityGroups/write
End request Microsoft.Network/networkInterfaces/write
End request Microsoft.Network/publicIPAddresses/write
Hope it helps you.
Call the Azure CLI from Python using the command below:
az vm list
This lists JSON data for each VM with fields you can filter on, e.g.:
date = vm['timeCreated']
# "timeCreated": "2022-06-24T14:13:00.326985+00:00"
Based on the docs, it seems your date should be escaped. Moreover, it seems the API takes a datetime (not just a date):
https://learn.microsoft.com/en-us/rest/api/monitor/activitylogs
filter = " and ".join([
"eventTimestamp le '{}T00:00:00Z'".format(today),
"resourceGroupName eq 'test-group'"
])
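As an illustration only (not part of the original answer), a sketch of building that filter with a full ISO 8601 datetime instead of a bare date:
import datetime

# assumption: look back over the last 7 days, expressed as full UTC datetimes
week_ago = datetime.datetime.utcnow() - datetime.timedelta(days=7)
filter = " and ".join([
    "eventTimestamp ge '{}'".format(week_ago.strftime('%Y-%m-%dT%H:%M:%SZ')),
    "resourceGroupName eq 'test-group'"
])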
Related
I am trying to read data from Excel and store the text data in Firestore. When I try to add the documents one by one without a for loop it works, but when I try to automate the process it does not.
Working Code:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore

cred = credentials.Certificate("certificate.json")
firebase_admin.initialize_app(cred)
db = firestore.client()

name = 'Ashwin'
date = '23/10/2022'
roll = 'sampledata'
cert = 'sampledata'

def store_add(name, date, roll, cert):
    data = {'name': name, 'date': date, 'roll': roll}
    db.collection('certs').document(cert).set(data)

store_add(name, date, roll, cert)
This is my code (Not Working):
import pandas as pd
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore

cred = credentials.Certificate("certificate.json")
firebase_admin.initialize_app(cred)
db = firestore.client()

def store_add(name, date, roll, cert):
    data = {'name': name, 'date': date, 'roll': roll}
    db.collection('certs').document(cert).set(data)

df = pd.read_excel('data.xlsx')
name_list = list(df['name'])
cert = list(df['cert'])
roll = list(df['roll'])
date = "24/10/2022"

for i in range(len(name_list)):
    store_add(name_list[i], date, roll[i], cert[i])
    print("Added:", name_list[i])
I am getting the following error:
Traceback (most recent call last):
File "e:\PSDC\Certificate-Generator\bulk.py", line 23, in <module>
store_add(name_list[i],date,roll[i],cert[i])
File "e:\PSDC\Certificate-Generator\bulk.py", line 14, in store_add
db.collection('certs').document(cert).set(data)
File "C:\Users\ashwi\AppData\Local\Programs\Python\Python310\lib\site-packages\google\cloud\firestore_v1\base_collection.py", line 130, in document
return self._client.document(*child_path)
_init__ super(DocumentReference, self).__init__(*path, **kwargs)
File "C:\Users\ashwi\AppData\Local\Programs\Python\Python310\lib\site-packages\google\cloud\firestore_v1\base_document.py", line 60, in __init__ _helpers.verify_path(path, is_collection=False)
File "C:\Users\ashwi\AppData\Local\Programs\Python\Python310\lib\site-packages\google\cloud\firestore_v1\_helpers.py", line 150, in verify_path raise ValueError("A document must have an even number of path elements")
ValueError: A document must have an even number of path elements
Firestore Database Structure (Image)
As mentioned in the documentation, document IDs cannot contain a forward slash, but in the provided screenshot the value of cert is PSDC/2022/TCS/01, which contains three. The final path therefore becomes certs/PSDC/2022/TCS/01, which has 5 segments, i.e. it points into a sub-collection rather than a document. You can either replace the / with some other character like _:
db.collection('certs').document(cert.replace("/", "_")).set(data)
Alternatively, if the slashes are required, you can store the cert ID in a field in document data and use a random document ID.
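For example, a minimal sketch of the second option, assuming the same db client and fields as in the question:
# keep the slashes by storing the certificate ID as a normal field
# and letting Firestore generate a random document ID
data = {'name': name, 'date': date, 'roll': roll, 'cert': cert}
db.collection('certs').add(data)  # add() returns (update_time, document_reference)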
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.customvision import prediction
from PIL import Image
endpoint = "https://southcentralus.api.cognitive.microsoft.com/"
project_id = "projectidhere"
prediction_key = "predictionkeyhere"
predict = CustomVisionPredictionClient(prediction_key, endpoint)
with open("c:/users/paul.barbin/pycharmprojects/hw3/TallowTest1.jpg", mode="rb") as image_data:
tallowresult = predict.detect_image(project_id, "test1", image_data)
Python 3.7, and I'm using Azure Custom Vision 3.1 (azure-cognitiveservices-vision-customvision 3.1.0).
Note that I've seen the same question on SO but no real solution. The posted answer on the other question says to use the REST API instead.
I believe the error is in the endpoint (as stated in the error). I've tried a few variants: with the trailing slash and without, using an environment variable and without one, and appending various strings to my endpoint, but I keep getting the same message. Any help is appreciated.
Full error here:
Traceback (most recent call last):
File "GetError.py", line 15, in <module>
tallowresult = predict.detect_image(project_id, "test1", image_data)
File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\azure\cognitiveservices\vision\customvision\prediction\operations\_custom_vision_
prediction_client_operations.py", line 354, in detect_image
request = self._client.post(url, query_parameters, header_parameters, form_content=form_data_content)
File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\msrest\service_client.py", line 193, in post
request = self._request('POST', url, params, headers, content, form_content)
File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\msrest\service_client.py", line 108, in _request
request = ClientRequest(method, self.format_url(url))
File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\msrest\service_client.py", line 155, in format_url
base = self.config.base_url.format(**kwargs).rstrip('/')
KeyError: 'Endpoint'
CustomVisionPredictionClient takes two required, positional parameters: endpoint and credentials. Endpoint needs to be passed in before credentials, so try swapping the order:
predict = CustomVisionPredictionClient(endpoint, prediction_key)
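As a side note (this is my assumption about the 3.1.0 package, not part of the original answer): the credentials argument is expected to be an ApiKeyCredentials object rather than a bare key string, along these lines:
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

# wrap the prediction key in ApiKeyCredentials and pass the endpoint first
credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predict = CustomVisionPredictionClient(endpoint, credentials)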
I'm using Python to retrieve a Blob image from Azure storage and then send it to Custom Vision for a prediction.
This is the code:
import io
from azure.storage.blob import BlockBlobService
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

block_blob_service = BlockBlobService(
    account_name=account_name,
    account_key=account_key
)

image_data = io.BytesIO()
block_blob_service.get_blob_to_stream(
    container_name,
    blob_name,
    image_data,
    max_connections=2
)

predictor = CustomVisionPredictionClient(
    cv_prediction_key,
    endpoint=cv_endpoint
)

# This call breaks with the below error message
results = predictor.predict_image(
    cv_project_id,
    image_data.getvalue(),
    iteration_id=cv_iteration_id
)
However, executing the predict_image function results in the following error:
System.Private.CoreLib: Exception while executing function: Functions.ReloadPostgres. System.Private.CoreLib: Result: Failure
Exception: HttpOperationError: Operation returned an invalid status code 'Resource Not Found'
Stack: File "~/.local/share/virtualenvs/py_func_app-GVYYSfCn/lib/python3.6/site-packages/azure/functions_worker/dispatcher.py", line 288, in _handle__invocation_request
self.__run_sync_func, invocation_id, fi.func, args)
File "~/.pyenv/versions/3.6.8/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "~/.local/share/virtualenvs/py_func_app-GVYYSfCn/lib/python3.6/site-packages/azure/functions_worker/dispatcher.py", line 347, in __run_sync_func
return func(**params)
File "~/py_func_app/ReloadPostgres/__init__.py", line 14, in main
data_handler.fetch_prediction_data()
File "~/py_func_app/Shared_Code/data_handler.py", line 127, in fetch_prediction_data
cv_handler.predict_image(image_data.getvalue(), cv_model)
File "~/py_func_app/Shared_Code/custom_vision.py", line 30, in predict_image
raise e
File "~/py_func_app/Shared_Code/custom_vision.py", line 26, in predict_image
iteration_id=cv_model.cv_iteration_id
File "~/.local/share/virtualenvs/py_func_app-GVYYSfCn/lib/python3.6/site-packages/azure/cognitiveservices/vision/customvision/prediction/custom_vision_prediction_client.py", line 215, in predict_image
raise HttpOperationError(self._deserialize, response)
Below I am providing a similar example of a Custom Vision prediction using an image URL; you can change it to use an image file:
# -*- coding: utf-8 -*-
"""
Created on Tue Mar 19 11:04:54 2019

@author: moverm
"""
#from azure.storage.blob import BlockBlobService
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

#block_blob_service = BlockBlobService(
#    account_name=account_name,
#    account_key=account_key
#)
#
#fp = io.BytesIO()
#block_blob_service.get_blob_to_stream(
#    container_name,
#    blob_name,
#    fp,
#    max_connections=2
#)

predictor = CustomVisionPredictionClient(
    "prediction-key",
    endpoint="https://southcentralus.api.cognitive.microsoft.com"
)

# This call breaks with the below error message
#results = predictor.predict_image(
#    'prediction-key',
#    image_data.getvalue(),
#    iteration_id=cv_iteration_id
#)

test_img_url = "https://pointsprizes-blog.s3-accelerate.amazonaws.com/316.jpg"
results = predictor.predict_image_url("project-Id", "Iteration-Id", url=test_img_url)

# Display the results.
for prediction in results.predictions:
    print("\t" + prediction.tag_name + ": {0:.2f}%".format(prediction.probability * 100))
Basically, the issue is related to the endpoint. Use https://southcentralus.api.cognitive.microsoft.com as the endpoint.
It should work, and you should be able to see the prediction probability.
Hope it helps.
I tried to reproduce your issue and got a similar one, which was caused by using the incorrect endpoint from the Azure portal when I created a Cognitive Service in the Japan East region, as shown in the figure below.
As the figure above shows, the endpoint is https://japaneast.api.cognitive.microsoft.com/customvision/training/v1.0 for version 1, but the azure-cognitiveservices-vision-customvision PyPI page points out that the correct endpoint should be https://{AzureRegion}.api.cognitive.microsoft.com, as the figure below shows.
So I got a similar issue to yours when using the incorrect endpoint, as below. The code I used is the same as yours; the only difference is the running environment: yours runs on Azure Functions, while mine is a console script.
Meanwhile, according to the source code custom_vision_prediction_client.py of the Azure Cognitive Services SDK for Custom Vision, you can see the line base_url = '{Endpoint}/customvision/v2.0/Prediction', which concatenates the endpoint you pass with /customvision/v2.0/Prediction to generate the real endpoint for calling the prediction API.
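As an illustration of that concatenation (not from the original answer), passing a region-only endpoint produces the URL the SDK actually calls:
# illustration only: what the SDK builds internally from the endpoint you pass
endpoint = "https://japaneast.api.cognitive.microsoft.com"
base_url = '{Endpoint}/customvision/v2.0/Prediction'.format(Endpoint=endpoint)
print(base_url)  # https://japaneast.api.cognitive.microsoft.com/customvision/v2.0/Prediction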
Therefore, as #MohitVerma-MSFT said, use https://<your cognitive service region>.api.cognitive.microsoft.com for the current version of the Python package.
As an additional note, there is an announcement of an important update for customvision.ai that you should be aware of; it may affect your current code soon.
Hi there. I'm building a simple scraping tool. Here's the code I have for it.
from bs4 import BeautifulSoup
import requests
from lxml import html
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import datetime

scope = ['https://spreadsheets.google.com/feeds']
credentials = ServiceAccountCredentials.from_json_keyfile_name('Programming 4 Marketers-File-goes-here.json', scope)

site = 'http://nathanbarry.com/authority/'
hdr = {'User-Agent': 'Mozilla/5.0'}
req = requests.get(site, headers=hdr)
soup = BeautifulSoup(req.content)

def getFullPrice(soup):
    divs = soup.find_all('div', id='complete-package')
    price = ""
    for i in divs:
        price = i.a
    completePrice = (str(price).split('$', 1)[1]).split('<', 1)[0]
    return completePrice

def getVideoPrice(soup):
    divs = soup.find_all('div', id='video-package')
    price = ""
    for i in divs:
        price = i.a
    videoPrice = (str(price).split('$', 1)[1]).split('<', 1)[0]
    return videoPrice

fullPrice = getFullPrice(soup)
videoPrice = getVideoPrice(soup)
date = datetime.date.today()

gc = gspread.authorize(credentials)
wks = gc.open("Authority Tracking").sheet1

row = len(wks.col_values(1)) + 1
wks.update_cell(row, 1, date)
wks.update_cell(row, 2, fullPrice)
wks.update_cell(row, 3, videoPrice)
This script runs on my local machine. But when I deploy it as part of an app to Heroku and try to run it, I get the following error:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/client.py", line 219, in put_feed
r = self.session.put(url, data, headers=headers)
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/httpsession.py", line 82, in put
return self.request('PUT', url, params=params, data=data, **kwargs)
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/httpsession.py", line 69, in request
response.status_code, response.content))
gspread.exceptions.RequestError: (400, "400: b'Invalid query parameter value for cell_id.'")
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "AuthorityScraper.py", line 44, in
wks.update_cell(row, 1, date)
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/models.py", line 517, in update_cell
self.client.put_feed(uri, ElementTree.tostring(feed))
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/client.py", line 221, in put_feed
if ex[0] == 403:
TypeError: 'RequestError' object does not support indexing
What do you think might be causing this error? Do you have any suggestions for how I can fix it?
There are a couple of things going on:
1) The Google Sheets API returned an error: "Invalid query parameter value for cell_id":
gspread.exceptions.RequestError: (400, "400: b'Invalid query parameter value for cell_id.'")
2) A bug in gspread caused an exception upon receipt of the error:
TypeError: 'RequestError' object does not support indexing
Python 3 removed __getitem__ from BaseException, which this gspread error handling relies on. This doesn't matter too much because it would have raised an UpdateCellError exception anyways.
My guess is that you are passing an invalid row number to update_cell. It would be helpful to add some debug logging to your script to show, for example, which row it is trying to update.
It may be better to start with a worksheet with zero rows and use append_row instead. However there does seem to be an outstanding issue in gspread with append_row, and it may actually be the same issue you are running into.
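If you go the append_row route, a minimal sketch (reusing the worksheet and values from the question) would be:
# append a new row instead of computing the row index and updating cells one by one
wks.append_row([str(date), fullPrice, videoPrice])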
I encountered the same problem. BS4 works fine on a local machine. However, for some reason, it is way too slow on the Heroku server, resulting in an error.
I switched to lxml and it is working fine now.
Install it by command:
pip install lxml
A sample code snippet is given below:
from lxml import html
import requests

getpage = requests.get("https://url_here")
gethtmlcontent = html.fromstring(getpage.content)
data = gethtmlcontent.xpath('//div[@class = "class-name"]/text()')
# this is a sample for fetching data from a div with the given class
data = data[0:n]  # as per your requirement
# now inject the data into the django template
I cannot figure out how to get fan_count from a page.
I always get this error:
Traceback (most recent call last):
File "./facebook_api.py", line 37, in <module>
facebook_graph.get_object('somepublicpage')['fan_count']
KeyError: 'fan_count'
The object only contains id/name and I cannot figure out how to give more permissions in order to get the 'fan_count' data.
Here is the code I'm using:
import facebook
import urllib
import urlparse
import subprocess
import warnings

warnings.filterwarnings('ignore', category=DeprecationWarning)

oauth_args = dict(client_id=FACEBOOK_APP_ID,
                  client_secret=FACEBOOK_APP_SECRET,
                  grant_type='client_credentials')
oauth_curl_cmd = ['curl',
                  'https://graph.facebook.com/oauth/access_token?' + urllib.urlencode(oauth_args)]
oauth_response = subprocess.Popen(oauth_curl_cmd,
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE).communicate()[0]

try:
    oauth_access_token = urlparse.parse_qs(str(oauth_response))['access_token'][0]
except KeyError:
    print('Unable to grab an access token!')
    exit()

print oauth_access_token

facebook_graph = facebook.GraphAPI(oauth_access_token)
print facebook_graph.get_object(PROFILE_ID)['fan_count']
Since v2.4 of the Graph API, you have to specify the fields you want to get returned. This would be the correct API call:
/{page-id}?fields=name,fan_count
It is called "Declarative Fields".
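With the facebook-sdk package used in the question, the extra fields can (to the best of my knowledge) be passed as a keyword argument to get_object, which the library appends to the query string:
# assumption: facebook_graph is the GraphAPI client from the question
page = facebook_graph.get_object(PROFILE_ID, fields='name,fan_count')
print(page['fan_count'])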