Hi, I am new to Google Cloud and I want to print a VM instance in JSON format, but print(json.dumps(instance)) raised an error: TypeError: Object of type Instance is not JSON serializable.
My code is below:
import json

from google.oauth2 import service_account
from google.cloud import compute_v1

class VM:
    def __init__(self, cred_json_path):
        self.cred_json_path = cred_json_path
        self.credentials = self.create_credentials()
        self.page_size = 500

    def create_credentials(self):
        return service_account.Credentials.from_service_account_file(self.cred_json_path)

    def list_vms(self):
        client = compute_v1.InstancesClient(credentials=self.credentials)
        for zone, instances in client.aggregated_list(request={"project": self.credentials.project_id, "max_results": self.page_size}):
            for instance in instances.instances:
                print(json.dumps(instance))
        return

vm = VM("/tmp/1.json")
vm.list_vms()
Is there an easy way to do this? I think the GCP API should have a method that makes this easy, but I cannot find it. Thanks for the help.
Finally, I figured out how to solve the problem:
from google.protobuf.json_format import MessageToJson
MessageToJson(instance._pb)
This prints the data in JSON format.
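For completeness, here is a sketch of how the fix slots into the list_vms method above (note: the proto-plus wrapper also exposes a class-level serializer, compute_v1.Instance.to_json(instance), which may work as well; the _pb route is the one I verified):

from google.protobuf.json_format import MessageToJson

def list_vms(self):
    client = compute_v1.InstancesClient(credentials=self.credentials)
    request = {"project": self.credentials.project_id, "max_results": self.page_size}
    for zone, scoped_list in client.aggregated_list(request=request):
        for instance in scoped_list.instances:
            # instance is a proto-plus wrapper; ._pb is the underlying
            # protobuf message, which MessageToJson can serialize.
            print(MessageToJson(instance._pb))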
I'd like to save Tweepy stream tweets to a .txt file in JSON format. According to the documentation it should be possible to set return_type=dict with StreamingClient.
With the following code I get: TypeError: Object of type Tweet is not JSON serializable. Maybe I would need to set the parameter return_type=dict in the superclass? After a lot of trying I haven't been able to make it work. I'd be very grateful for any help!
import tweepy
import json
from tweepy import StreamingClient, StreamRule

class TweetPrinter(tweepy.StreamingClient):
    def on_tweet(self, tweet):
        with open("fetched_tweets.txt", "a") as f:
            f.write(json.dumps(tweet, indent=4))
        return True

printer = TweetPrinter(bearer_token=bearer_token, return_type=dict)  # I don't get dict as output.
rule = StreamRule(value="Python")
printer.add_rules(rule)
printer.filter(expansions=['author_id', 'geo.place_id'], tweet_fields="created_at")
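One possible workaround, assuming the tweepy v2 Tweet model exposes its raw payload as a plain dict via its .data attribute (json.dumps can serialize a dict directly), is to write tweet.data instead of the model object:

class TweetPrinter(tweepy.StreamingClient):
    def on_tweet(self, tweet):
        with open("fetched_tweets.txt", "a") as f:
            # tweet.data holds the raw dict payload, so json.dumps accepts it.
            f.write(json.dumps(tweet.data, indent=4))
        return True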
I'm writing a Python application that requires downloading a folder from OneDrive. I understand that there was a package called onedrivesdk for doing this in the past, but it has since been deprecated, and it is now recommended that the Microsoft Graph API be used to access OneDrive files (https://pypi.org/project/onedrivesdk/). It is my understanding that this requires somehow producing a DriveItem object referring to the target folder.
I was able to access the folder via GraphClient in msgraph.core:
from azure.identity import DeviceCodeCredential
from msgraph.core import GraphClient
import configparser
config = configparser.ConfigParser()
config.read(['config.cfg'])
azure_settings = config['azure']
scopes = azure_settings['graphUserScopes'].split(' ')
device_code_credential = DeviceCodeCredential(azure_settings['clientId'], tenant_id=azure_settings['tenantId'])
client = GraphClient(credential=device_code_credential, scopes=scopes)
import json
endpoint = '/me/drive/root:/Desktop'
x = client.get(endpoint)
x is the requests.models.Response object referring to the target folder (Desktop). I don't know how to extract a DriveItem from x, or how to otherwise iterate over the folder's contents programmatically. How can I do this?
Thanks
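A sketch of one approach, assuming GraphClient returns standard requests responses (so .json() yields the DriveItem payload as a dict) and that the Graph API's path-based addressing accepts a :/children suffix for listing folder contents:

# The folder itself, as a DriveItem dict.
folder = client.get('/me/drive/root:/Desktop').json()
print(folder['id'], folder['name'])

# Its children: append ':/children' to the path-based address.
children = client.get('/me/drive/root:/Desktop:/children').json()
for item in children['value']:
    # Folders carry a 'folder' facet, files a 'file' facet.
    print(item['name'], 'folder' if 'folder' in item else 'file')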
I have a problem. I have several JSON files, and I do not want to create the collections manually and import these files by hand. I found this question, Bulk import of .json files in arangodb with python, but unfortunately I got an error: [OUT] AttributeError: 'Database' object has no attribute 'collection'.
How can I import several JSON files into collections fully automatically via Python?
import json
from pyArango.connection import *

conn = Connection(username="root", password="")
db = conn.createDatabase(name="test")
a = db.collection('collection_name') # <- here is the error

for x in list_of_json_files:
    with open(x, 'r') as json_file:
        data = json.load(json_file)
        a.import_bulk(data)
I also looked at the documentation from ArangoDB https://www.arangodb.com/tutorials/tutorial-python/
There is no collection method on the db instance, which is what you are calling on this line:
a = db.collection('collection_name') # <- here is the error
According to the docs, you should use the createCollection method of the db instance:
studentsCollection = db.createCollection(name="Students")
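A minimal end-to-end sketch, assuming each JSON file holds a list of documents; it uses pyArango's per-document createDocument/save, since pyArango's bulk helpers vary by version (list_of_json_files is the same list as in the question):

import json
from pyArango.connection import Connection

conn = Connection(username="root", password="")
db = conn.createDatabase(name="test")
students = db.createCollection(name="Students")

for path in list_of_json_files:
    with open(path, 'r') as json_file:
        data = json.load(json_file)
    for record in data:  # assuming the file contains a list of documents
        doc = students.createDocument(record)
        doc.save()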
I am trying to import JSON data from a link containing valid JSON data into MongoDB.
When I run the script I get the following error:
TypeError: document must be an instance of dict, bson.son.SON, bson.raw_bson.RawBSONDocument, or a type that inherits from collections.MutableMapping
What am I missing here or doing wrong?
import pymongo
import urllib.parse
import requests
replay_url = "http://live.ksmobile.net/live/getreplayvideos?"
userid = 769630584166547456
url2 = replay_url + urllib.parse.urlencode({'userid': userid}) + '&page_size=1000'
print(f"Replay url: {url2}")
raw_replay_data = requests.get(url2).json()
uri = 'mongodb://testuser:password@ds245687.mlab.com:45687/liveme'
client = pymongo.MongoClient(uri)
db = client.get_default_database()
replays = db['replays']
replays.insert_many(raw_replay_data)
client.close()
I saw that you are getting the video information data for 22 videos. You can use:
replays.insert_many(raw_replay_data['data']['video_info'])
to save them.
You can also make one field the _id of each MongoDB document. Run the following loop before insert_many:
for i in raw_replay_data['data']['video_info']:
    i['_id'] = i['vid']
This makes the 'vid' field your '_id'. Just make sure that 'vid' is unique for all videos.
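Putting both suggestions together, the tail of the script might look like this (a sketch assuming the response layout described above):

video_docs = raw_replay_data['data']['video_info']
for doc in video_docs:
    doc['_id'] = doc['vid']  # reuse the video id as the document key
replays.insert_many(video_docs)
client.close()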
In .NET I am able to connect to Azure Storage using a SAS like this:
var someTable = new CloudTable("https://account-name.table.core.windows.net/table-name?sv={key}");
How can I do it in Python? I could not find a CloudTable class in azure.storage.table (from the Azure SDK for Python: https://github.com/Azure/azure-storage-python).
Is there any other way?
Try something like the following:

from azure.storage.table import TableService

# Authenticate with the storage account name and a SAS token, then query.
table_service = TableService(account_name='[account-name]', sas_token='[sas-token]')
entities = table_service.query_entities('[table-name]', top=100)
Replace [account-name], [sas-token], and [table-name] with actual values.
Also, do not include the leading ? of the SAS token in the [sas-token] value.
Source: See the documentation here - https://github.com/Azure/azure-storage-python/blob/master/azure/storage/table/tableservice.py.
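As a quick usage sketch, assuming the classic azure-storage Table SDK where query results are Entity objects with attribute-style access and every entity carries PartitionKey and RowKey:

for entity in entities:
    # Each Entity behaves like a dict with attribute access to its properties.
    print(entity.PartitionKey, entity.RowKey)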