How to monitor Virtual Machines on OpenNebula via the CLI? - python

I am trying to develop a cloud-bursting solution for our cluster.
What I need is a way to monitor the VMs on the OpenNebula cluster and shut down those VMs whose CPU consumption stays below 10% for a certain amount of time.
I am stuck at the monitoring part: I cannot find any way to periodically poll the VMs for their CPU/memory consumption.
I am writing the code in Python, and I am using libcloud to access OpenNebula.
Any ideas?
Thanks.

You should use the OpenNebula XML-RPC API instead of libcloud, since libcloud does not expose the VMs' monitoring information.
You can use any of the available bindings to interact with the OpenNebula XML-RPC API (Ruby and Java), or call it directly from Python's standard library.
Calling the info method on a Virtual Machine instance will retrieve the Virtual Machine's information, including the monitoring values for CPU and MEMORY.
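A minimal sketch of doing this directly from Python with xmlrpc.client, polling one.vm.info and pulling the monitoring values out of the returned XML. The endpoint, credentials, and the exact location of the monitoring elements vary between OpenNebula versions, so treat these as assumptions to verify against your installation:

```python
# Hedged sketch: poll a VM's CPU/memory monitoring values over the
# OpenNebula XML-RPC API. Endpoint and credentials are placeholders.
import xmlrpc.client
import xml.etree.ElementTree as ET

ONE_ENDPOINT = "http://localhost:2633/RPC2"  # default oned XML-RPC endpoint
ONE_AUTH = "oneadmin:password"               # "user:password" session string

server = xmlrpc.client.ServerProxy(ONE_ENDPOINT)

def vm_usage(vm_id):
    """Return (cpu, memory) monitoring values for a single VM."""
    resp = server.one.vm.info(ONE_AUTH, vm_id)
    if not resp[0]:                          # first element flags success
        raise RuntimeError(resp[1])          # second element carries the error
    root = ET.fromstring(resp[1])            # second element is the VM XML
    # Older releases report usage in /VM/CPU and /VM/MEMORY; newer ones
    # nest it under /VM/MONITORING -- check your version's schema.
    cpu = float(root.findtext("MONITORING/CPU", default="0"))
    memory = int(root.findtext("MONITORING/MEMORY", default="0"))
    return cpu, memory

print(vm_usage(42))                          # 42 is an example VM ID
```

From there, sampling this in a loop and tracking how long each VM stays under the 10% threshold is plain Python bookkeeping.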

Related

How can I call Python scripts on a remote server from z/OS?

As part of migrating batch jobs (which use EXEC PGM) to another language (Python, in this case), I am facing a challenge with cross-server connections.
We are targeting migrating a few of our mainframe batch-job COBOL programs to Python. In this process, some batch jobs will be fully controlled by schedulers and their programs rewritten as Python scripts. But some mainframe programs will remain intact and will not be migrated to Python for now. Since we are targeting only a partial migration, some mainframe batch jobs will need to call Python scripts in the cloud. The challenge I am facing is how to call Python scripts from mainframe batch jobs.
I'm assuming in this answer the COBOL applications run on the z/OS operating system on your mainframe, but if that assumption is not correct, please post a follow-up.
Cschneid has a great answer: just run the Python scripts on your mainframe. Python for z/OS is available for download free of charge from Rocket Software here:
https://www.rocketsoftware.com/zos-open-source
You can optionally purchase Python support on z/OS from Rocket Software if you wish. (All Linux distributions for IBM Z machines also include Python, typically supported by the Linux distributor.) Python running on IBM Z can directly operate on IBM Z-based data stores/databases, including well protected, z/OS-encrypted data sets. And you can quite easily create and manage hybrid cloud architectures that include IBM Z resources across all operating systems. That'd be the best arrangement all around since otherwise you're likely to have operational and management issues. You don't have to look very far to find real world instances of organizations that have suffered major, hugely business impactful batch scheduling problems that have completely wrecked their payment processes, for example. (Relatedly, Python is not an enterprise job scheduler.)
OK, that said, if you're still going to proceed down this (probably unwise) path, then here are some other options, in no particular order:
Configure z/OS Management Facility (included as a base, supported feature in z/OS), and use its authorized REST APIs to submit jobs. Details are available here (z/OS 2.4 assumed, but this feature is available in all currently supported z/OS releases and even prior):
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.4.0/com.ibm.zos.v2r4.izua700/IZUHPINFO_API_RESTJOBS.htm
Make sure you take reasonable, appropriate steps to secure this job submission path since it's quite powerful.
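For illustration, a hedged sketch of submitting inline JCL through the z/OSMF REST jobs interface using Python's requests library. The host, credentials, and JCL are placeholders, and a real installation also needs proper TLS certificate handling:

```python
# Hypothetical example: submit a job via the z/OSMF REST jobs API.
import requests

ZOSMF = "https://zosmf.example.com"  # placeholder z/OSMF host

jcl = """//MYJOB    JOB (ACCT),'SUBMIT VIA REST',CLASS=A,MSGCLASS=H
//STEP1    EXEC PGM=IEFBR14
"""

resp = requests.put(
    f"{ZOSMF}/zosmf/restjobs/jobs",
    data=jcl,
    headers={
        "Content-Type": "text/plain",     # inline JCL submission
        "X-CSRF-ZOSMF-HEADER": "",        # z/OSMF requires this header
    },
    auth=("userid", "password"),          # placeholder credentials
)
resp.raise_for_status()
print(resp.json()["jobid"])               # job metadata comes back as JSON
```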
Equip your z/OS installation with IBM's z/OS Connect Enterprise Edition software product, create the REST APIs you need (both easy and powerful), and invoke them from Python. More information on z/OS Connect EE is available here:
https://www.ibm.com/us-en/marketplace/connect-enterprise-edition
If you have MQ for z/OS, then go grab the MQ client, send an appropriately formatted MQ message from Python to an appropriately configured MQ queue on z/OS, and invoke/trigger your programs that way. (MQ Advanced for z/OS is recommended for Advanced Message Security.) The MQ clients are free for unlimited use when connecting to all currently IBM supported, licensed versions of MQ and MQ Advanced for z/OS. Recent releases of MQ and MQ Advanced for z/OS also support REST APIs (and JSON payloads), so you can format your messages that way now. MQ clients are available for download here:
https://developer.ibm.com/messaging/mq-downloads/
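As a sketch of the Python side of the MQ option, the pymqi client library can put a trigger message on a z/OS queue. The queue manager, channel, host, and queue names below are placeholders for whatever your MQ administrator configures:

```python
# Hedged sketch: put a message on a z/OS queue with pymqi (pip install pymqi).
# QM1, DEV.APP.SVRCONN, and TRIGGER.QUEUE are hypothetical names.
import pymqi

qmgr = pymqi.connect("QM1", "DEV.APP.SVRCONN", "mqhost.example.com(1414)")
queue = pymqi.Queue(qmgr, "TRIGGER.QUEUE")
queue.put(b"RUN PAYROLL STEP 1")  # payload format is whatever your trigger expects
queue.close()
qmgr.disconnect()
```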
At least some of the choices I'm providing on this list can be combined with MQ, which provides assured messaging -- which is quite helpful if you're trying to make this all work robustly.
Go find out what enterprise job scheduler your mainframe has installed (it probably has one), and use its authorized APIs to schedule and to run programs. For example, IBM Z Workload Scheduler provides authorized REST APIs. Refer to this documentation for an introduction:
https://www.ibm.com/support/knowledgecenter/en/SSRULV_9.5.0/com.ibm.tivoli.itws.doc_9.5/common/src_dgd/awsddrestapi.htm
If you click through to the samples you'll find some Python sample code.
...And there are lots of other possible ways, so if for some reason you don't like any of these choices, please post a follow-up.
Cschneid has another reasonable answer: Dovetailed's Co:Z Toolkit ("z/OS Hybrid Batch"). Here are some more possibilities, in no particular order:
The z/OS Client Web Enablement Toolkit, an included, IBM supported feature in the base z/OS operating system. This toolkit allows you to call a REST API from practically any program on z/OS. A COBOL sample is available here:
https://github.com/IBM/zOS-Client-Web-Enablement-Toolkit
z/OS Connect Enterprise Edition, which is bidirectional.
The enterprise job scheduler often installed and hosted on z/OS typically can trigger and manage "remote" tasks on other systems. IBM Z Workload Scheduler (for example) certainly can, and there's a whole manual discussing the topic here:
https://www.ibm.com/support/knowledgecenter/SSRULV_9.5.0/com.ibm.tivoli.itws.doc_9.5/eqqlwmst.pdf
Remote Procedure Calls (RPC), per IETF RFCs 1831 and 1832. If you're using a COBOL program with RPC you'd call the C interfaces, a minor bit of mixed language programming.
Dovetailed Technologies' hybrid batch is another product that allows you to execute code residing on remote servers as steps in a batch job, similar to the solutions in the answers posted by @TimothySipples and @KevinMcKenzie.
Without knowing more, this question is impossible to answer.
However, generically speaking, you can issue USS commands from batch using BPXBATCH. So you could install something like curl or wget from Rocket Software, and then call the Python scripts via a REST API built in Django or Flask on the cloud end, or something similar. If you really wanted to do something horrible, you could write a shell script that would ssh into the cloud system and issue a command on the remote system.
However, and I realize you probably don't have much say over this, I'd also point to Timothy Sipples' answer, and say this isn't a good idea, and it's going to be fragile. You'll need multiple such scripts, because you'll need to submit work, and then come back later and get the results, and behave appropriately based on the results. You're going to have to build all sorts of error handling capabilities into these batch jobs/shell scripts.
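For concreteness, a hypothetical minimal Flask endpoint on the cloud side that a batch job could hit with curl under BPXBATCH; the route and script name are purely illustrative:

```python
# Hypothetical cloud-side endpoint that a mainframe batch step could call
# (e.g. curl -X POST http://cloudhost:8080/run/daily-report via BPXBATCH).
from flask import Flask, jsonify
import subprocess

app = Flask(__name__)

@app.route("/run/<job_name>", methods=["POST"])
def run_job(job_name):
    # run_job.py stands in for whichever migrated script should execute
    result = subprocess.run(
        ["python", "run_job.py", job_name],
        capture_output=True, text=True,
    )
    return jsonify(returncode=result.returncode, stdout=result.stdout)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

This also illustrates the fragility concern above: the caller still has to poll for results and handle every failure mode itself.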

How to perform load balancing for a Python API

I have built an application with a REST API.
I have to send requests from a cluster (of about 10 nodes) to get some information from this API.
If I run it on a single system, it is likely to become a bottleneck, and the MapReduce job will take a long time to complete.
Is there any way or solution to run this service on multiple systems for load balancing? I am using Linux. My service is built in Python, and I am using Flask to expose it via REST.

Is GPU available in Azure Cloud Services Worker role?

Coming from AWS, I am completely new to Azure in general and to Cloud Services specifically.
I want to write a Python application that leverages a GPU on Azure in a PaaS (Platform as a Service) architecture. The application will hopefully be deployed somewhere central, and then a number of GPU-enabled nodes will spin up, run the application until it is done, and close down again.
I want to know, what is the shortest way to accomplish this in Azure?
Is my assumption correct that I will need to use what is called Cloud Services with a worker role, or will I have to create my own infrastructure based on single VMs running in IaaS?
It sounds like you created an application which needs to do some general-purpose computing on the GPU via CUDA or OpenCL. If so, you need to install a GPGPU driver on Azure to support your Python application, so the Azure NC and NV series VMs are suitable for this scenario, much as on AWS.
Hope it helps. If you have any concerns, please feel free to let me know.

Python service similar to rpyc.Service but only local

A very naive question...
I need a central Python program offering services to other Python applications (all running on the same machine). The easiest thing for me would be for the other applications to call functions provided by the server directly. I know rpyc (Remote Python Call), and the rpyc.Service class could do the job (like in this tutorial), but I do not want anything "remote" (all clients have to be local, running on the same machine as the server), so I was wondering if there is a better way to do this than using rpyc?
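One way to stay within rpyc while keeping everything local: bind the server to the loopback interface so only same-machine clients can connect. A minimal sketch, with the service and port chosen arbitrarily:

```python
# Sketch: an rpyc service reachable only from the local machine.
import rpyc
from rpyc.utils.server import ThreadedServer

class CentralService(rpyc.Service):
    def exposed_add(self, a, b):   # the exposed_ prefix makes this callable
        return a + b

if __name__ == "__main__":
    # hostname="localhost" binds to loopback, so remote hosts cannot connect
    ThreadedServer(CentralService, hostname="localhost", port=18861).start()
```

A client on the same machine would then call `rpyc.connect("localhost", 18861).root.add(2, 3)`.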

Hosting a non-trivial python program on the web?

I'm a complete novice in this area, so please excuse my ignorance.
I have three questions:
What's the best (fastest, easiest, headache-free) way of hosting a python program online?
I'm currently looking at Google App Engine and Web Frameworks for Python, but all the options are a bit overwhelming.
Which gui/viz libraries will transfer to a web app environment without problems?
I'm willing to sacrifice some performance for the sake of simplicity.
(Google App Engine can't do C libraries, so this is causing a dilemma.)
Where can I learn more about running a program locally vs. having a program continuously run on a server and taking requests from multiple users?
Currently I have a working Python program that only uses standard Python libraries. It currently uses around 2.7 GB of RAM, but as I increase my dataset, I predict it will use closer to 6 GB. I can run it on my personal machine, and everything is just peachy. I'd like to continue developing the front end on my home machine and implement the web app later.
Here is a relevant, previous post of mine.
Depending on your knowledge of server administration, you should consider a dedicated server. I was running some custom Python modules with NumPy, SciPy, pandas, etc. on some data on a shared server at GoDaddy. One program I wrote took 120 seconds to complete. Recently we switched to a dedicated server and it now takes 2 seconds. The shared environment used CGI to run Python, and I installed mod_python on the dedicated server.
Using a dedicated server allows COMPLETE control (including root access) of the server, which allows the compilation and/or installation of anything. It is a bit pricey, but if you're making money with your stuff it might be worth it.
Another option would be to use something like http://www.dyndns.com/ where you can host a domain on your own machine.
So with that said, perhaps some answers:
It depends on your requirements. ~4 GB of RAM might require a dedicated server. What you are asking is not necessarily an easy task, so don't be afraid to get your hands dirty.
Not sure what you mean here.
A server is just a computer that responds to requests. On the dedicated server (that I keep mentioning) you are operating in a Unix (or Windows) environment just like you would locally. You use software (e.g. the Apache web server) to serve client requests. My vote is mod_python.
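For reference, a hedged sketch of what a mod_python request handler looks like (mod_python is dated, and mod_wsgi is the more common Apache option today, but this matches the setup described above):

```python
# Minimal mod_python handler; Apache routes requests here via a
# PythonHandler directive in its configuration.
from mod_python import apache

def handler(req):
    req.content_type = "text/plain"
    req.write("Hello from mod_python")
    return apache.OK
```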
Going with an Amazon EC2 instance is a bit more of a headache than a dedicated server, but it should be much closer to your needs.
http://aws.amazon.com/ec2/#instance
Their extra-large instance should be more than large enough for what you need to do, and you only turn the instance on when you need it, so you don't get the massive bill that comes with a dedicated server of the same size.
There are some nice JavaScript-based visualization toolkits out there, so you can model your application to return raw (JSON) data and render it on the client.
I can mention d3.js http://mbostock.github.com/d3/ and the JavaScript InfoVis Toolkit http://thejit.org/
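As a sketch of that split, the server side only has to expose the numbers as JSON; here with Flask, purely as an illustration (the route and payload are made up):

```python
# Illustrative JSON endpoint: the heavy computation stays in Python,
# and a d3.js (or similar) front end fetches and renders the result.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/data")
def data():
    # stand-in for the real program's output
    return jsonify(points=[{"x": 1, "y": 2}, {"x": 2, "y": 3}])

if __name__ == "__main__":
    app.run()
```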
