Google PubSub Subscription Name for multiple VMs? - python

I would like to create a managed Compute Engine group, with some startup scripts, including one that creates a subscription to an Object Change Notification. Following these tutorials, these subscriptions all require a "subscription_name".
I can't set it to a static name since it'll cause clashes when >1 VMs are spun up.
Is there a way to automatically 'increment' the name, i.e. 'VM_sub_1', 'VM_sub_2', 'VM_sub_3'... etc.?
Or is leaving it blank and accepting randomly-generated names the only way to avoid clashes?
Code sample for creating subscription:
def create_push_subscription(project_id, topic_name, subscription_name, endpoint):
    """Create a new push subscription on the given topic."""
    # [START pubsub_create_push_subscription]
    from google.cloud import pubsub_v1

    project_id = "bucketcfpubsub"
    topic_name = "bucketcfpubsub"
    subscription_name = "VM_sub_1"
    endpoint = "https://bucketcfpubsub.appspot.com/push"

    subscriber = pubsub_v1.SubscriberClient()
    topic_path = subscriber.topic_path(project_id, topic_name)
    subscription_path = subscriber.subscription_path(project_id, subscription_name)

    push_config = pubsub_v1.types.PushConfig(push_endpoint=endpoint)

    subscription = subscriber.create_subscription(
        subscription_path, topic_path, push_config)

    print('Push subscription created: {}'.format(subscription))
    print('Endpoint for subscription is: {}'.format(endpoint))
    # [END pubsub_create_push_subscription]
Attempt at fetching metadata:
def check_instance():
    import time
    import requests

    METADATA_URL = 'http://metadata.google.internal/computeMetadata/v1/instance/id'
    METADATA_HEADERS = {'Metadata-Flavor': 'Google'}

    instance_id = requests.get(METADATA_URL, headers=METADATA_HEADERS)
    print(instance_id)

check_instance()
returns
<Response [200]>
meaning
Success! A value was changed, or you reached your specified timeout_sec and the request returned successfully.
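A minimal sketch of what I have in mind (untested; it combines the two snippets above and assumes the metadata response body, read with .text, is the numeric instance ID):

import requests
from google.cloud import pubsub_v1

METADATA_URL = 'http://metadata.google.internal/computeMetadata/v1/instance/id'
METADATA_HEADERS = {'Metadata-Flavor': 'Google'}

def create_per_vm_subscription(project_id, topic_name, endpoint):
    # .text holds the response body, i.e. the instance ID itself,
    # not the "<Response [200]>" wrapper object.
    instance_id = requests.get(METADATA_URL, headers=METADATA_HEADERS).text

    # e.g. "VM_sub_5290679852893056180" -- unique per VM, so no name clashes.
    subscription_name = 'VM_sub_{}'.format(instance_id)

    subscriber = pubsub_v1.SubscriberClient()
    topic_path = subscriber.topic_path(project_id, topic_name)
    subscription_path = subscriber.subscription_path(project_id, subscription_name)
    push_config = pubsub_v1.types.PushConfig(push_endpoint=endpoint)

    return subscriber.create_subscription(subscription_path, topic_path, push_config)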


Automatically subscribe to YouTube channels via Channel_ID

I'm fairly new to Python and I'm trying to migrate subscriptions from an older YouTube account to a newer one that I'll use going forward. I pulled my subscriptions export from the old one and have around 470+ subs that I'll need to migrate over.
I found this article, which works for automatically subscribing to a YouTube channel via its channel_id, but it seems that with the key-value pair I can only run the .py script once per value.
I tried all sorts of googling to see how I can include multiple values in the key (channelId), but it always only auto-subs to the last one in the dictionary.
Can someone please help show me what I'm missing? I feel like there has to be a way to add multiple channelId values in the key dictionary, right?!
Here's what my code looks like:
import os

import google.oauth2.credentials
import google_auth_oauthlib.flow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from google_auth_oauthlib.flow import InstalledAppFlow

# The CLIENT_SECRETS_FILE variable specifies the name of a file that contains
# client_id and client_secret.
CLIENT_SECRETS_FILE = "client_secret.json"

# This scope allows for full read/write access to the authenticated user's
# account and requires requests to use an SSL connection.
SCOPES = ['https://www.googleapis.com/auth/youtube.force-ssl']
API_SERVICE_NAME = 'youtube'
API_VERSION = 'v3'

def get_authenticated_service():
    flow = InstalledAppFlow.from_client_secrets_file(CLIENT_SECRETS_FILE, SCOPES)
    credentials = flow.run_console()
    return build(API_SERVICE_NAME, API_VERSION, credentials=credentials)

def print_response(response):
    print(response)

# Build a resource based on a list of properties given as key-value pairs.
# Leave properties with empty values out of the inserted resource.
def build_resource(properties):
    resource = {}
    for p in properties:
        # Given a key like "snippet.title", split into "snippet" and "title",
        # where "snippet" will be an object and "title" will be a property in
        # that object.
        prop_array = p.split('.')
        ref = resource
        for pa in range(0, len(prop_array)):
            is_array = False
            key = prop_array[pa]
            # For properties that have array values, convert a name like
            # "snippet.tags[]" to snippet.tags, and set a flag to handle
            # the value as an array.
            if key[-2:] == '[]':
                key = key[0:len(key)-2:]
                is_array = True
            if pa == (len(prop_array) - 1):
                # Leave properties without values out of the inserted resource.
                if properties[p]:
                    if is_array:
                        ref[key] = properties[p].split(', ')
                    else:
                        ref[key] = properties[p]
            elif key not in ref:
                # For example, the property is "snippet.title", but the
                # resource does not yet have a "snippet" object. Create the
                # snippet object here. Setting "ref = ref[key]" means that the
                # next time through the "for pa in range ..." loop, we will be
                # setting a property in the resource's "snippet" object.
                ref[key] = {}
                ref = ref[key]
            else:
                # For example, the property is "snippet.description", and the
                # resource already has a "snippet" object.
                ref = ref[key]
    return resource

# Remove keyword arguments that are not set.
def remove_empty_kwargs(**kwargs):
    good_kwargs = {}
    if kwargs is not None:
        for key, value in kwargs.items():
            if value:
                good_kwargs[key] = value
    return good_kwargs

def subscriptions_insert(client, properties, **kwargs):
    resource = build_resource(properties)
    kwargs = remove_empty_kwargs(**kwargs)
    response = client.subscriptions().insert(
        body=resource, **kwargs).execute()
    return print_response(response)

if __name__ == '__main__':
    # When running locally, disable OAuthlib's HTTPs verification. When
    # running in production *do not* leave this option enabled.
    os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = '1'
    client = get_authenticated_service()
    subscriptions_insert(client,
        {'snippet.resourceId.kind': 'youtube#channel',
         'snippet.resourceId.channelId': 'UC09fL42MpkktKZWmWxYiDhw', 'UC0Q7Hlz75NYhYAuq6O0fqHw'},
        part='snippet')
According to the YouTube Data API v3 documentation (the Subscriptions: insert endpoint and the Subscriptions resource), it seems that you can only subscribe to one channel at a time. Since you have 10,000 quota units per day by default (unless you request extended quota) and Subscriptions: insert costs 50 units, 470+ subscriptions would take about 3 days to get through.
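If you do go through the API, a minimal sketch of the loop (assuming your exported channel IDs are in a plain text file, one per line, and reusing the subscriptions_insert and get_authenticated_service helpers from your script; the file name is made up):

# channel_ids.txt is assumed to contain one channel ID per line.
with open('channel_ids.txt') as f:
    channel_ids = [line.strip() for line in f if line.strip()]

client = get_authenticated_service()
for channel_id in channel_ids:
    # One Subscriptions.insert call (50 quota units) per channel.
    subscriptions_insert(client,
        {'snippet.resourceId.kind': 'youtube#channel',
         'snippet.resourceId.channelId': channel_id},
        part='snippet')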
Otherwise you can proceed as follows. The first time I tried this with ~500 channels I got subscribed to ~290 of them, but now I mostly only receive (when removing -H 'Accept-Encoding: gzip, deflate, br' from the cURL request):
{
  "error": {
    "code": 429,
    "message": "Resource has been exhausted (e.g. check quota).",
    "errors": [
      {
        "message": "Resource has been exhausted (e.g. check quota).",
        "domain": "global",
        "reason": "rateLimitExceeded"
      }
    ],
    "status": "RESOURCE_EXHAUSTED"
  }
}
So it's an unreliable method that you can investigate further.
Ever wondered how to do that in a single request without using any quota?
Go to any YouTube channel YOUR_CHANNEL that you want to subscribe to: https://www.youtube.com/channel/YOUR_CHANNEL_ID
Open the Network tab of your web browser with Ctrl + Shift + E (on Firefox) and filter XHR requests.
Now click on Subscribe.
You should see a request to subscribe; copy it as cURL (by right-clicking).
At the end, change
"channelIds":["YOUR_CHANNEL_ID"]
to:
"channelIds":["YOUR_CHANNEL_ID_0, YOUR_CHANNEL_ID_1, ..., YOUR_CHANNEL_ID_499"]
Where YOUR_CHANNEL_ID_0 is your YOUR_CHANNEL_ID and YOUR_CHANNEL_ID_1 the second channel you want to subscribe to and so forth.
Execute the modified cURL request in a terminal and that's it!
Note that this webpage contains a subscriptions count and this one contains all your subscriptions.
To get more than 249 different channels, I used:
import requests, json

channelIds = set()
pageToken = ''
API_KEY = 'AIzaSy...'
i = 0

while len(channelIds) < 250:
    url = f'https://www.googleapis.com/youtube/v3/search?q={i}&type=channel&maxResults=50&key={API_KEY}'
    if pageToken != '':
        url += f"&pageToken={pageToken}"
    content = requests.get(url).text
    data = json.loads(content)
    for item in data['items']:
        channelIds.add(item['id']['channelId'])
    print(len(channelIds))
    if 'nextPageToken' in data:
        pageToken = data['nextPageToken']
    else:
        break
    i += 1

print('["' + '","'.join(channelIds) + '"]', len(channelIds))
As @Benjamin Loison has mentioned, there is a quota limit on API usage. If you'd like to raise the limit, I think there is a form you can fill out to request more. However, I don't recommend doing so, since the form is intended mainly for large applications that will be used over a long time, and it involves a lengthy process of human review of what you're trying to build (this is based on my personal experience, so it might not be entirely accurate).
My suggestion would be to use the script you have to print out a list of channel links, and then click into each of them and press the Subscribe button. 470-ish channels should not take you very long.
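As an illustration of that suggestion, a minimal sketch (the channel_ids list is a placeholder for whatever your subscriptions export gives you):

# channel_ids is assumed to come from your subscriptions export.
channel_ids = ['UC09fL42MpkktKZWmWxYiDhw', 'UC0Q7Hlz75NYhYAuq6O0fqHw']

# Print one clickable channel URL per line; open each and press Subscribe.
for channel_id in channel_ids:
    print(f'https://www.youtube.com/channel/{channel_id}')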

How to start a conversation using an event in Dialogflow cx sending by python

I'm trying to start a conversation from the Dialogflow CX Python API. I have seen the question "Start a conversation at the beginning of a flow using flow ID", which solves the problem using Node.js, but I'm not able to replicate it in Python.
In my code I have:
text_input = session.TextInput(text=msg)
query_input = session.QueryInput(text=text_input, language_code=language_code)
request = session.DetectIntentRequest(session=session_path, query_input=query_input)
response = session_client.detect_intent(request=request)
I would like to change session.TextInput() to session.EventInput(), for example as here, but it does not work with Dialogflow CX and the dialogflowcx_v3beta1 library.
To use an event as query_input, you should use the EventInput() type in your QueryInput(). See the code below for how to implement it.
from google.cloud import dialogflowcx_v3beta1 as dialogflow
from google.cloud.dialogflowcx_v3beta1 import types
import uuid
project_id = "your-project"
location = "us-central1" # my project is located here hence us-central1
session_id = uuid.uuid4()
agent_id = "999999-aaaa-aaaa" # to get your agent_id see https://stackoverflow.com/questions/65389964/where-can-i-find-the-dialogflow-cx-agent-id
session_path = f"projects/{project_id}/locations/{location}/agents/{agent_id}/sessions/{session_id}"
api_endpoint = f"{location}-dialogflow.googleapis.com"
client_options = {"api_endpoint": api_endpoint}
client = dialogflow.services.sessions.SessionsClient(client_options=client_options)
event = "custom_event"
event_input = types.EventInput(event=event)
query_input = types.QueryInput(event=event_input, language_code="en-US")
request = types.DetectIntentRequest(
    session=session_path, query_input=query_input
)
response = client.detect_intent(request=request)
print(response.query_result.response_messages[0])
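Once the event has kicked off the session, subsequent user turns can reuse the same session_path with a TextInput, mirroring the snippet from the question (a sketch, using the same client and types as above):

text_input = types.TextInput(text="hi")
query_input = types.QueryInput(text=text_input, language_code="en-US")
request = types.DetectIntentRequest(session=session_path, query_input=query_input)
response = client.detect_intent(request=request)
print(response.query_result.response_messages[0])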

Azure Dev/Ops - Ingestion of Analytics View data using Python

I would like to access Analytics View data from Azure DevOps to have access to registered project activities.
Would anyone have any examples of how they would do it using the azure-devops python library?
I found no example involving extracting data from Analytics View. Basically, I need a Python script that shows all the Analytics View fields of my projects.
After some research, I managed to partially solve my problem. This solution does not bring back everything from Analytics Views, and in addition there is a 20k-record limit on the query result:
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
from azure.devops.v5_1.work_item_tracking.models import Wiql

token = 'xxx'
team_instance = 'https://dev.azure.com/xxx'

credentials = BasicAuthentication("", token)
connection = Connection(base_url=team_instance, creds=credentials)

def print_work_items(work_items):
    for work_item in work_items:
        print(
            "{0} {1}: {2}".format(
                work_item.fields["System.WorkItemType"],
                work_item.id,
                work_item.fields["System.Title"],
            )
        )

wit_client = connection.clients.get_work_item_tracking_client()

def get_TC_from_query(query):
    query_wiql = Wiql(query=query)
    results = wit_client.query_by_wiql(query_wiql).work_items
    # A WIQL query gives WorkItemReference objects => get the corresponding WorkItem from the id
    work_items = (wit_client.get_work_item(int(result.id)) for result in results)
    print_work_items(work_items)

get_TC_from_query(
    """\
    SELECT
        [System.Id],
        [System.WorkItemType],
        [System.Title],
        [System.State],
        [System.AreaPath],
        [System.IterationPath]
    FROM workitems
    WHERE
        [System.TeamProject] = 'Project'
        and [System.WorkItemType] = 'Product Backlog Item'
        and [System.State] = 'Done'
    ORDER BY [System.ChangedDate] DESC
    """
)
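A possible refinement (an untested sketch): instead of fetching each work item with a separate get_work_item call, the azure-devops client also exposes get_work_items, which accepts a list of ids, so the query results can be fetched in batches. The batch size below is illustrative.

def get_work_items_batched(wit_client, work_item_refs, batch_size=200):
    """Fetch work items in batches instead of one request per item."""
    ids = [int(ref.id) for ref in work_item_refs]
    work_items = []
    for start in range(0, len(ids), batch_size):
        batch_ids = ids[start:start + batch_size]
        # get_work_items takes a list of ids and returns the matching work items.
        work_items.extend(wit_client.get_work_items(ids=batch_ids))
    return work_items

# Usage with the query above:
# results = wit_client.query_by_wiql(query_wiql).work_items
# print_work_items(get_work_items_batched(wit_client, results))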

How to design api implementation for web using flask restplus?

I am writing a REST API in Flask for the first time,
and so far I have something like this:
import uuid
import pytz
from datetime import datetime
from flask_restplus import Resource, Api, fields
from ..models import publicip_schema
from ..controller import (
    jsonified,
    get_user_ip,
    add_new_userIp,
    get_specificIp,
    get_all_publicIp
)
from flask import request, jsonify
from src import app
from src import db
from src import models

api = Api(app, endpoint="/api", version="0.0.1", title="Capture API",
          description="Capture API to get, modify or delete system services")

add_userIp = api.model("Ip", {"ip": fields.String("An IP address.")})
get_userIp = api.model("userIp", {
    "ipid": fields.String("ID of an ip address."),
    "urlmap": fields.String("URL mapped to ip address.")
})

class CaptureApi(Resource):
    # decorator = ["jwt_required()"]

    # @jwt_required()
    @api.expect(get_userIp)
    def get(self, ipid=None, urlmap=None):
        """
        This function handles requests to provide all or a specific ip.
        :return:
        """
        # handle request to get detail of site with specific location id.
        if ipid:
            ipobj = get_user_ip({"id": ipid})
            return jsonified(ipobj)
        # handle request to get detail of site based on site abbreviation
        if urlmap:
            locate = get_user_ip({"urlmap": urlmap})
            return jsonified(locate)
        return jsonify(get_all_publicIp())

    # @jwt_required()
    @api.expect(add_userIp)
    def post(self, username=None):
        """
        Add a new location.
        URI /location/add
        :return: json response of newly added location
        """
        data = request.get_json(force=True)
        if not data:
            return jsonify({"status": "no data passed"}), 200
        if not data["ip"]:
            return jsonify({"status": "please pass the new ip you want to update"})
        if get_user_ip({"ipaddress": data["ip"]}):
            return jsonify({"status": "IP: {} is already registered.".format(data["ip"])})

        _capIpObj = get_user_ip({"user_name": username})
        if _capIpObj:
            # update existing ip address
            if "ip" in data:
                if _capIpObj.ipaddress == data["ip"]:
                    return jsonify({"status": "nothing to update."}), 200
                else:
                    _capIpObj.ipaddress = data["ip"]
            else:
                return jsonify({
                    "status": "please pass the new ip you want to update"
                })
            db.session.commit()
            return jsonified(_capIpObj)
        else:
            device = ""
            service = ""
            ipaddress = data["ip"]
            if "port" in data:
                port = data["port"]
            else:
                port = 80
            if "device" in data:
                device = data["device"]
            if "service" in data:
                service = data["service"]
            date_modified = datetime.now(tz=pytz.timezone('UTC'))
            urlmap = str(uuid.uuid4().get_hex().upper()[0:8])
            new_public_ip = add_new_userIp(username, ipaddress, port, urlmap,
                                           device, service, date_modified)
            return publicip_schema.jsonify(new_public_ip)

api.add_resource(
    CaptureApi,
    "/getallips",                     # GET
    "/getip/id/<ipid>",               # GET
    "/getip/urlmap/<urlmap>",         # GET
    "/updateip/username/<username>"   # POST
)
I have faced two problems
if I specify
get_userIp = api.model("userIp", {
    "ipid": fields.String("ID of an ip address."),
    "urlmap": fields.String("URL mapped to ip address.")
})
and add @api.expect(get_userIp) on the get method above, I am forced to pass the optional parameters with some value (even to get the list of all IPs, i.e. from "/getallips").
But these optional parameters are not required to get all IPs; I only need them to get an IP based on ipid or urlmap using the get method.
Looking at the Swagger documentation generated by flask_restplus.Api, I am seeing
get and post for all the endpoints, whereas I intended each endpoint to support only get or only post. So technically updateip/username/<username> should not be listing get.
How do I fix this?
Good question! You can fix both problems by defining separate Resource subclasses for each of your endpoints. Here is an example where I split the endpoints for "/getallips", "/getip/id/", and "/getip/urlmap/".
Ip = api.model("Ip", {"ip": fields.String("An IP address.")})
Urlmap = api.model("UrlMap", {"urlmap": fields.String("URL mapped to ip address.")})

@api.route("/getallips")
class IpList(Resource):
    def get(self):
        return jsonify(get_all_publicIp())

@api.route("/getip/id/<ipid>")
class IpById(Resource):
    @api.expect(Ip)
    def get(self, ipid):
        ipobj = get_user_ip({"id": ipid})
        return jsonified(ipobj)

@api.route("/getip/urlmap/<urlmap>")
class IpByUrlmap(Resource):
    @api.expect(Urlmap)
    def get(self, urlmap):
        ipobj = get_user_ip({"urlmap": urlmap})
        return jsonified(ipobj)
Notice that you solve your expect problem for free: because each endpoint now fully defines its interface, it's easy to attach a clear expectation to it. You also solve the problem of get and post being defined for endpoints that shouldn't have them, since you can now decide for each endpoint whether it should have a get or a post.
I'm using the api.route decorator instead of calling api.add_resource for each class because of personal preference. You can get the same behavior by calling api.add_resource(<resource subclass>, <endpoint>) for each new Resource subclass (e.g. api.add_resource(IpList, "/getallips"))
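Following the same pattern, the update endpoint can be its own Resource with only a post method (a sketch only; it reuses the Ip model and helpers from above and abbreviates the original POST body handling):

@api.route("/updateip/username/<username>")
class IpUpdate(Resource):
    @api.expect(Ip)
    def post(self, username):
        # Same body as the original CaptureApi.post, now scoped to this
        # endpoint only, so Swagger no longer advertises a GET here.
        data = request.get_json(force=True)
        if not data or "ip" not in data:
            return jsonify({"status": "please pass the new ip you want to update"})
        ...  # rest of the original post() logic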

Google App Engine Python - Protorpc && Taskqueue

How to use Task Queue (Push Queue) with Protorpc.
I have a landing page form that does multiple actions when it is submitted:
Save the fields in the DataStore
Send an email to the form's sender
Send the fields to a third party application (let's say a CRM)
The form submission is implemented on the server side with protorpc.
class FormRequest(messages.Message):
    field1 = messages.StringField(1, required=True)
    field2 = messages.StringField(2, required=True)
    ...

class FormApi(remote.Service):
    @remote.method(FormRequest, message_types.VoidMessage)
    def insert(self, request):
        # Save the form in the DataStore
        travel = FormModel(field1=request.field1, field2=request.field2)
        travel.put()

        # Send an email to the client
        ...

        # Send the data to a third party
        ...

        return message_types.VoidMessage()
This solution is problematic because the user needs to wait for all of these requests to finish. (In this case it is only 2-3 s, but that is a lot for a landing page form.)
A good solution would be to use taskqueue to minimise the time the user needs to wait:
(As an example)
class ...
    @remote ...
    def ...
        # Save the form in the DataStore
        taskqueue.add(url='/api/worker/save_to_db',
                      params={'field1': request.field1, 'field2': request.field2})

        # Send an email to the client
        taskqueue.add(url='/api/worker/send_email',
                      params={'field1': request.field1, 'field2': request.field2})

        # Send the data to a third party (CRM)
        taskqueue.add(url='/api/worker/send_to_crm',
                      params={'field1': request.field1, 'field2': request.field2})
The "problem" is that protorpc get only json object as request.
How to do this with TaskQueue(Push) ?
The default behavior of TaskQueue is to send params as a string of urlencoded and it's not conveniant to protorpc.
Let's define a Worker service for the taskqueue:
class WorkersApi(remote.Service):
    @remote.method(FormRequest, message_types.VoidMessage)
    def save_to_db(self, request):
        # Instead of writing out each parameter, I am using this "cheat"
        params = {}
        for field in request.all_fields():
            params[field.name] = getattr(request, field.name)

        # Save data in the datastore
        form_model = FormModel(**params)
        form_model.put()

        return message_types.VoidMessage()
Note that I am using the same message object for the real request and for the taskqueue request (it is a big advantage not to need a different message object for each request).
The question is how to use taskqueue with this protorpc function.
As I said in the question, the default behavior of taskqueue is not convenient.
The solution is to convert the original request/message object (in our example the FormRequest) back to a JSON string, and to set a header on the taskqueue task so that the payload is treated as application/json.
Here's the code:
# This format string is taken from the util file in the protorpc folder in the
# Google App Engine source code
format_string = '%Y-%m-%dT%H:%M:%S.%f'

params = {}
for field in request.all_fields():
    value = getattr(request, field.name)
    if isinstance(value, datetime.datetime):
        value = value.strftime(format_string)
    params[field.name] = value

taskqueue.add(url='/api/workers.save_to_db',
              payload=json.dumps(params),
              headers={'content-type': 'application/json'})
Do the same for the "email" and the "crm".
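To avoid repeating that serialization for each task, a small helper can wrap it (a sketch only; the send_email and send_to_crm URLs are assumed to follow the same pattern as save_to_db):

import datetime
import json

from google.appengine.api import taskqueue

FORMAT_STRING = '%Y-%m-%dT%H:%M:%S.%f'

def enqueue_protorpc_task(url, request):
    # Serialize the protorpc message to JSON and enqueue a push task
    # whose payload the protorpc worker can parse.
    params = {}
    for field in request.all_fields():
        value = getattr(request, field.name)
        if isinstance(value, datetime.datetime):
            value = value.strftime(FORMAT_STRING)
        params[field.name] = value
    taskqueue.add(url=url,
                  payload=json.dumps(params),
                  headers={'content-type': 'application/json'})

# Inside FormApi.insert:
# enqueue_protorpc_task('/api/workers.save_to_db', request)
# enqueue_protorpc_task('/api/workers.send_email', request)   # assumed URL
# enqueue_protorpc_task('/api/workers.send_to_crm', request)  # assumed URL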
You can use put_async() so that the write does not block: it asynchronously writes the entity's data to the Datastore.
For example:
travel = FormModel(field1=request.field1, field2=request.field2)
travel.put_async()
# next action
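If you need to be sure the write finished before the request ends, keep the future that put_async() returns and collect it later; a minimal sketch:

travel = FormModel(field1=request.field1, field2=request.field2)
future = travel.put_async()

# ... send the email, enqueue tasks, etc. ...

# Block only here, once the result is actually needed.
key = future.get_result()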
