Windows Azure API: Programmatically create VM - python

I'm trying to programmatically spin up an Azure VM using the Python REST API wrapper. All I want is a simple VM, not part of a deployment or anything like that. I've followed the example here: http://www.windowsazure.com/en-us/develop/python/how-to-guides/service-management/#CreateVM
I've gotten the code to run, but I am not seeing any new VM in the portal; all it does is create a new cloud service that says "You have nothing deployed to the production environment." What am I doing wrong?

You've created a hosted service (cloud service) but haven't deployed anything into it. You need to do a few more things, so I'll continue from where you left off, where name is the name of the VM:
# Where should the OS VHD be created:
media_link = 'http://portalvhdsexample.blob.core.windows.net/vhds/%s.vhd' % name
# Linux username/password details:
linux_config = azure.servicemanagement.LinuxConfigurationSet(name, 'username', 'password', True)
# Endpoint (port) configuration example, since documentation on this is lacking:
endpoint_config = azure.servicemanagement.ConfigurationSet()
endpoint_config.configuration_set_type = 'NetworkConfiguration'
endpoint1 = azure.servicemanagement.ConfigurationSetInputEndpoint(name='HTTP', protocol='tcp', port='80', local_port='80', load_balanced_endpoint_set_name=None, enable_direct_server_return=False)
endpoint2 = azure.servicemanagement.ConfigurationSetInputEndpoint(name='SSH', protocol='tcp', port='22', local_port='22', load_balanced_endpoint_set_name=None, enable_direct_server_return=False)
endpoint_config.input_endpoints.input_endpoints.append(endpoint1)
endpoint_config.input_endpoints.input_endpoints.append(endpoint2)
# Set the OS HD up for the API:
os_hd = azure.servicemanagement.OSVirtualHardDisk(image_name, media_link)
# Actually create the machine and start it up:
try:
    sms.create_virtual_machine_deployment(service_name=name, deployment_name=name,
                                          deployment_slot='production', label=name, role_name=name,
                                          system_config=linux_config, network_config=endpoint_config,
                                          os_virtual_hard_disk=os_hd, role_size='Small')
except Exception as e:
    logging.error('AZURE ERROR: %s' % str(e))
    return False
return True
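If the call succeeds, the VM should appear in the portal once provisioning finishes. As a quick sanity check you can also query the deployment from the same script; this is a minimal sketch assuming the same sms object and names as above, and that the legacy wrapper's get_deployment_by_name call is available in your SDK version:

# Sanity check: confirm the deployment was created and see its state.
deployment = sms.get_deployment_by_name(service_name=name, deployment_name=name)
print(deployment.status)  # e.g. 'Running' once provisioning has finished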

Maybe I'm not understanding your problem, but a VM is essentially a deployment within a cloud service (think of the cloud service as a logical container for machines).

Related

Google Cloud Run does not find os.environ['GOOGLE_APPLICATION_CREDENTIALS'] variable

I am trying to deploy a Python app on Google Cloud Run to perform some tasks automatically, and these tasks require access to my BigQuery data.
I have tested the implementation on localhost through Cloud Shell, and it worked just as expected. Then I created a Cloud Run service; all functions that do not require access to BigQuery work normally, but the ones that do fail with the following error:
google.auth.exceptions.DefaultCredentialsError: File /XXXXXX/gbq.json was not found.
However, the file is there (the folders are correct, and I also tested adding copies of the file in other folders).
Any suggestions to solve the problem or a workaround I could use?
Thanks in advance
ADDITIONAL INFO:
main.py function:
(the bottom part of the code is used to test the app in localhost, which works perfectly)
from flask import Flask, request
from test_py import test as t

app = Flask(__name__)

@app.get("/")
def hello():
    """Return a friendly HTTP greeting."""
    chamado = request.args.get("chamado", default="test")
    print(chamado)
    if chamado == 'test':
        dados = f'chamado = test?\n{chamado == "test"}\n{t.show_data(chamado)}'
    elif chamado == 'bigqueer':
        dados = f'chamado = test?\n{chamado == "test"}\n{t.show_bq_data()}'
    else:
        dados = f'chamado = test?\n{chamado == "test"}\n{t.show_not_data(chamado)}'
    print(dados)
    return dados

if __name__ == "__main__":
    # Development only: run "python main.py" and open http://localhost:8080
    # When deploying to Cloud Run, a production-grade WSGI HTTP server,
    # such as Gunicorn, will serve the app.
    app.run(host="localhost", port=8080, debug=True)
BigQuery class:
import os
from google.cloud import bigquery as bq

class GoogleBigQuery:
    def __init__(self):
        os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/XXXXXX/gbq.json'
        self.client = bq.Client()

    def executar_query(self, query):
        client_query = self.client.query(query)
        result = client_query.result()
        return result
Cloud Run deploy:
gcloud run deploy pythontest \
--source . \
--platform managed \
--region $REGION \
--allow-unauthenticated
YOU DO NOT NEED THAT
Excuse my brutal opening words, but it's extremely dangerous to do what you are doing. Let me explain.
In your container, you put a secret in plain text. Keep in mind that your container image is like a zip file: nothing in it is secret or encrypted. You can convince yourself by using dive and exploring your container layers and data.
Therefore: DO NOT DO THAT!
So now, what to do?
On Google Cloud, all the services can use the metadata server to get credentials. The client libraries leverage it, and you can rely on the default credentials when you initialise your code. That mechanism is named Application Default Credentials (ADC).
In your code, simply remove this line: os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/XXXXXX/gbq.json'
Then, when you deploy your Cloud Run service, specify the runtime service account that you want to use (one with the required BigQuery roles). That's all: the Google Cloud environment and libraries will do the rest.
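For illustration, here is a minimal sketch of what the class could look like once it relies on ADC; nothing project-specific is set in code, and the service account is chosen at deploy time:

from google.cloud import bigquery as bq

class GoogleBigQuery:
    def __init__(self):
        # No GOOGLE_APPLICATION_CREDENTIALS and no key file baked into the image:
        # on Cloud Run the client obtains the runtime service account's
        # credentials from the metadata server (ADC).
        self.client = bq.Client()

    def executar_query(self, query):
        return self.client.query(query).result()

The runtime service account you pick, for example via gcloud run deploy ... --service-account my-sa@my-project.iam.gserviceaccount.com (the account name here is only a placeholder), must hold the BigQuery roles your queries need.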

Transferring data from product datastore to local development environment datastore in Google App Engine (Python)

TL;DR I need to find a real solution to download my data from product datastore and load it to the local development environment.
The detailed problem:
I need to test my app on the local development server with the real data (not real-time data) from the datastore of the production server. The documentation and other resources offer three options:
Using appcfg.py to download data from the production server and then load it into the local development environment. When I use this method I get a 'bad request' error due to an OAuth problem. Besides, this method will be deprecated. The official documentation advises using the second method:
Using gcloud via managed export and import. The documentation of this method explains how to back up all data through the console (at https://console.cloud.google.com/). I have tried this method. The backup data is generated in Cloud Storage and I downloaded it; it is in LevelDB format, and I need to load it into the local development server. There is no official explanation for how to do that: the loading method from the first option is not compatible with the LevelDB format, and I couldn't find an official way to solve the problem. There is a StackOverflow entry, but it did not work for me because it just reads all entities as dicts, and converting those dict objects back into ndb entities becomes the tricky part.
I lost hope with the first two methods, so I decided to use the Cloud Datastore Emulator (beta), which emulates the real data in the local development environment. It is still in beta and has several problems; when I run the command I run into a DATASTORE_EMULATOR_HOST problem anyway.
It sounds like you should be using a remote sandbox
Even if you get this to work, the localhost datastore still behaves differently than the actual datastore.
If you want to truly simulate your production environment, then I would recommend setting up a clone of your App Engine project as a remote sandbox. You could deploy your app to a new GAE project id (appcfg.py update . -A sandbox-id), use Datastore Admin to create a backup of production in Google Cloud Storage, and then use Datastore Admin in your sandbox to restore this backup there.
Cloning production data into localhost
I do prime my localhost datastore with some production data, but this is not a complete clone. Just the core required objects and a few test users.
To do this I wrote a google dataflow job that exports select models and saves them in google cloud storage in jsonl format. Then on my local host I have an endpoint called /init/ which launches a taskqueue job to download these exports and import them.
To do this I reuse my JSON REST handler code, which is able to convert any model to JSON and vice versa.
In theory you could do this for your entire datastore.
EDIT - This is what my to-json/from-json code looks like:
All of my ndb.Models subclass my BaseModel which has generic conversion code:
get_dto_typemap = {
    ndb.DateTimeProperty: dt_to_timestamp,
    ndb.KeyProperty: key_to_dto,
    ndb.StringProperty: str_to_dto,
    ndb.EnumProperty: str,
}

set_from_dto_typemap = {
    ndb.DateTimeProperty: timestamp_to_dt,
    ndb.KeyProperty: dto_to_key,
    ndb.FloatProperty: float_from_dto,
    ndb.StringProperty: strip,
    ndb.BlobProperty: str,
    ndb.IntegerProperty: int,
}

class BaseModel(ndb.Model):
    def to_dto(self):
        dto = {'key': key_to_dto(self.key)}
        for name, obj in self._properties.iteritems():
            key = obj._name
            value = getattr(self, obj._name)
            if obj.__class__ in get_dto_typemap:
                if obj._repeated:
                    value = [get_dto_typemap[obj.__class__](v) for v in value]
                else:
                    value = get_dto_typemap[obj.__class__](value)
            dto[key] = value
        return dto

    def set_from_dto(self, dto):
        for name, obj in self._properties.iteritems():
            if isinstance(obj, ndb.ComputedProperty):
                continue
            key = obj._name
            if key in dto:
                value = dto[key]
                if not obj._repeated and obj.__class__ in set_from_dto_typemap:
                    try:
                        value = set_from_dto_typemap[obj.__class__](value)
                    except Exception as e:
                        raise Exception("Error setting " + self.__class__.__name__ + "." + str(key) + " to '" + str(value) + "': " + e.message)
                try:
                    setattr(self, obj._name, value)
                except Exception as e:
                    print dir(obj)
                    raise Exception("Error setting " + self.__class__.__name__ + "." + str(key) + " to '" + str(value) + "': " + e.message)

class User(BaseModel):
    # user fields, etc
My request handlers then use set_from_dto & to_dto like this (BaseHandler also provides some convenience methods for converting json payloads to python dicts and what not):
class RestHandler(BaseHandler):
    MODEL = None

    def put(self, resource_id=None):
        if resource_id:
            obj = ndb.Key(self.MODEL, urlsafe=resource_id).get()
            if obj:
                obj.set_from_dto(self.json_body)
                obj.put()
                return obj.to_dto()
            else:
                self.abort(422, "Unknown id")
        else:
            self.abort(405)

    def post(self, resource_id=None):
        if resource_id:
            self.abort(405)
        else:
            obj = self.MODEL()
            obj.set_from_dto(self.json_body)
            obj.put()
            return obj.to_dto()

    def get(self, resource_id=None):
        if resource_id:
            obj = ndb.Key(self.MODEL, urlsafe=resource_id).get()
            if obj:
                return obj.to_dto()
            else:
                self.abort(422, "Unknown id")
        else:
            cursor_key = self.request.GET.pop('$cursor', None)
            limit = max(min(200, self.request.GET.pop('$limit', 200)), 10)
            qs = self.MODEL.query()
            # ... other code that handles query params
            results, next_cursor, more = qs.fetch_page(limit, start_cursor=cursor)
            return {
                '$cursor': next_cursor.urlsafe() if more else None,
                'results': [result.to_dto() for result in results],
            }

class UserHandler(RestHandler):
    MODEL = User
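The /init/ import side is not shown in the answer; here is a minimal sketch of what it could look like, assuming the exports are newline-delimited JSON files already downloaded to the local machine and that a hypothetical MODEL_BY_KIND mapping exists (neither is part of the original code):

import json

# Hypothetical mapping from kind name to model class; adjust to your models.
MODEL_BY_KIND = {'User': User}

def import_jsonl(kind, path):
    # Recreate entities locally by reusing set_from_dto() from BaseModel.
    model_cls = MODEL_BY_KIND[kind]
    with open(path) as fh:
        for line in fh:
            obj = model_cls()
            obj.set_from_dto(json.loads(line))
            obj.put()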

azure batch network_configuration -> "failed to authenticate"

I am using the Azure Batch service for calculations on Ubuntu nodes, and it works fine. Recently I wanted to put the nodes on the same subnet, so that in the future I can use MPI as well as NFS for file access to a common file server, also on Azure.
But after adding:
network_configuration = batchmodels.NetworkConfiguration(subnet_id=subnet.id)
to my batchmodels.PoolAddParameter I suddenly receive:
{'value': 'Server failed to authenticate the request. Make sure the
value of Authorization header is formed
correctly.\nRequestId:a815194a-8a66-4cb4-847e-60db4ca3ff10\nTime:2017-10-23T15:04:00.3938448Z',
'lang': 'en-US'}
Any ideas to why? Without the network_configuration my pool starts fine...
Finally got it to work...
I needed to use the same Azure AD application for the two clients in use here (though with different resource endpoints), and I also had to grant that app access to the Batch API when setting it up to obtain credentials. I ended up with something like this:
def get_credentials(res):
    if res == 'mgmt':
        r = 'https://management.core.windows.net/'
    elif res == 'batch':
        r = 'https://batch.core.windows.net/'
    credentials = ServicePrincipalCredentials(
        client_id=id,
        secret=secret,
        tenant=tenant,
        resource=r
    )
    return credentials

network_client = NetworkManagementClient(get_credentials('mgmt'), sub_id)
batch_client = batch.BatchServiceClient(get_credentials('batch'), base_url=batchserviceurl)
You need to authenticate to the Batch service with Azure Active Directory in order to use NetworkConfiguration on a pool when your account uses the Batch Service pool allocation mode (which is the default).
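For reference, a minimal sketch of attaching the subnet to a pool using the AAD-authenticated client above; the pool id, VM size, node count and vm_config are placeholders rather than values from the question:

import azure.batch.models as batchmodels

pool = batchmodels.PoolAddParameter(
    id='mpi-pool',                            # hypothetical pool id
    vm_size='STANDARD_D2_V2',                 # hypothetical VM size
    virtual_machine_configuration=vm_config,  # assumed to be defined as in your existing pool setup
    target_dedicated_nodes=2,                 # hypothetical node count
    network_configuration=batchmodels.NetworkConfiguration(subnet_id=subnet.id),
)

# batch_client is the BatchServiceClient created with AAD credentials above.
batch_client.pool.add(pool)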

Login to registry with Docker Python SDK (docker-py)

I am trying to use the Docker Python API to login to a Docker cloud:
https://docker-py.readthedocs.io/en/stable/client.html#creating-a-client1
What is the URL? What is the Port?
I have tried to get it to work with cloud.docker.com, but I am fine with any registry server, so long as it is free to use and I can use it to upload Docker images from one computer and run them on another.
I have already got everything running using my own locally hosted registry, but I can’t seem to figure out how to connect to a server. It’s kind of ridiculous that hosting my own registry is easier than using an existing registry server.
My code looks like this, but I am unsure what the args.* parameters should be:
client = docker.DockerClient(base_url=args.docker_registry)
client.login(username=args.docker_user, password=args.docker_password)
I’m not sure what the base_url needs to be so that I can log in, and the error messages are not helpful at all.
Can you give me an example that works?
The base_url parameter is the URL of the Docker server, not the Docker Registry.
Try something like:
import docker
from docker.errors import APIError, TLSParameterError

try:
    client = docker.from_env()
    client.login(username=args.docker_user, password=args.docker_password,
                 registry=args.docker_registry)
except (APIError, TLSParameterError) as err:
    # ... handle the failed login here
Here's how I have logged in to Docker using Python:
import docker

client = docker.from_env()
client.login(username='USERNAME', password='PASSWORD', email='EMAIL',
             registry='https://index.docker.io/v1/')
and here's what it returns:
{'IdentityToken': '', 'Status': 'Login Succeeded'}
So, that means it has been logged in successfully.
I still haven't figured out what the registry of cloud.docker.com is called, but I got it to work by switching to quay.io as my registry server, which works with the intuitive registry='quay.io'
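Once login succeeds you can push images with the same client. A minimal sketch, assuming a locally built image and a hypothetical quay.io repository name:

import docker

client = docker.from_env()
client.login(username='USERNAME', password='PASSWORD', registry='quay.io')

# Tag the local image for the remote repository (names are placeholders).
image = client.images.get('myapp:latest')
image.tag('quay.io/myuser/myapp', tag='latest')

# Push and print the progress messages returned by the daemon.
for line in client.images.push('quay.io/myuser/myapp', tag='latest',
                               stream=True, decode=True):
    print(line)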

Django and Apache: Create folder on another server in the network

I'm currently developing a Django application for internal use which runs on one server (Server 1) in a local network but needs write access to another server (Server 2) when data is saved to the database.
When a new record is saved, Django creates a new directory on the external server (Server 2) with an appropriate foldername. This was working well on the Django testserver which seemed to have access to the entire local network.
I've now successfully deployed my Django application with Apache and mod_wsgi, but the folder creation procedure doesn't seem to work anymore. I've tried a few things but can't seem to fix it quickly. Any ideas? Can this actually be achieved with Django and Apache?
def create_folder(self, request, obj, form, change, serverfolder, templatefolder):
    try:
        source_dir = templatefolder  # Replace with path to project folder template
        # NOTE: destination_dir is used below but never defined in this snippet;
        # it should be the target path for the new project folder under serverfolder.
        if not os.path.exists(destination_dir):
            dir_util.copy_tree(source_dir, destination_dir)
            obj.projectfolder = destination_dir
            messages.success(request, "Project folder created on %s" % (serverfolder))
            obj.create_folder = False
            obj.has_folder = True
        else:
            messages.warning(request, "No new project folder created on %s server" % (obj.office.abbreviation))
    except Exception, e:
        messages.warning(request, str(e) + " Error during project folder creation on %s server!" % (obj.office.abbreviation))

def save_model(self, request, obj, form, change):
    serverfolder = r'\\C-s-002\Projects'  # C-s-002 is the external server in the same local network as the server on which Django is running
    templatefolder = r'\\C-s-002\Projects\XXX Project Template'
    self.create_folder(request, obj, form, change, serverfolder, templatefolder)
There are various approaches you can take here, so I will not attempt to exhaust all possibilities:
Option 1: Call an external command with Python (e.g. via subprocess). This is not specific to Django or Apache.
Option 2: Set up a web service on Server 2 that Server 1 can call over HTTP to handle the file/directory creation it needs. This could be implemented with Django; a rough sketch follows below.
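A rough sketch of Option 2, assuming Server 2 runs a small Flask service; the endpoint path, port, and directory root are placeholders, and you would add authentication before using anything like this:

# --- Server 2: a tiny service that owns folder creation on its local disk ---
import os
from flask import Flask, request, jsonify

app = Flask(__name__)
PROJECT_ROOT = r'D:\Projects'  # hypothetical local path behind \\C-s-002\Projects

@app.route('/folders', methods=['POST'])
def create_project_folder():
    name = request.get_json()['name']
    path = os.path.join(PROJECT_ROOT, name)
    if not os.path.exists(path):
        os.makedirs(path)
    return jsonify({'created': path}), 201

# --- Server 1: call the service from save_model() instead of writing to the UNC path ---
import requests

def request_remote_folder(name):
    resp = requests.post('http://c-s-002:5000/folders', json={'name': name}, timeout=10)
    resp.raise_for_status()
    return resp.json()['created']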
