I am trying to write a Python program that will alert me when a VM is down. I know PowerShell might be better, but I would prefer Python.
Why do you think it would be better with PowerShell? :) Python rules ;)
If you want a more reactive approach, look at EventGrid + LogicApp + WebApp/Function first. It's like IFTTT for Azure: EventGrid fires an event, and a LogicApp can consume that event and send it to a WebApp or Function (which you can write in Python).
Example:
https://learn.microsoft.com/en-us/azure/event-grid/monitor-virtual-machine-changes-event-grid-logic-app
If you want a more "I pull every minute" experience, just use the azure-mgmt-compute package:
https://pypi.org/project/azure-mgmt-compute/
Basic sample:
https://github.com/Azure-Samples/virtual-machines-python-manage
You will need the instance view of the VM to get its power state; use the instance_view operation.
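A minimal sketch of that polling approach, assuming the current azure-identity and azure-mgmt-compute packages (the subscription ID, resource group, and VM name are placeholders):

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The instance view carries runtime statuses, including power-state codes
# such as "PowerState/running" or "PowerState/deallocated".
view = client.virtual_machines.instance_view("<resource-group>", "<vm-name>")
for status in view.statuses:
    if status.code.startswith("PowerState/"):
        print(status.code)  # alert here if it isn't "PowerState/running"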
Hope this helps!
(I work at MS in the Azure SDK for Python team)
EDIT:
It seems EventGrid does not support triggers from VM power-state changes yet; you could still use a LogicApp with a polling schedule for solution 1: https://learn.microsoft.com/en-us/azure/connectors/connectors-native-recurrence
Hi, can anyone help me integrate BIRT reports with a Django project? Or does anyone have suggestions for connecting third-party reporting tools like Crystal Reports or Crystal Clear Report with Django?
Some of the 3rd-party Crystal Reports viewers listed here provide a full command-line API, so your Python code can preview/export/print reports via subprocess.call().
The resulting process can be anything from an interactive Crystal Reports viewer session (where the user can log in, set/change parameters, print, and export) to fully automated (no user interaction) report printing/exporting.
While this would simplify your code, it would restrict deployment to Windows.
For prototyping, or if you don't mind the performance, you can call BIRT from the command line.
For example, download the POJO runtime and use the genReport.bat script (IIRC) to generate a report to a file (e.g. in PDF format). You can specify the output options and the report parameters on the command line.
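From Python, that could look roughly like this (a sketch only: the script comes with the POJO runtime, but the exact flag names vary between BIRT versions, so treat them as illustrative):

import subprocess

# Illustrative flags only: check the script's help output for the real
# option names in your BIRT version; paths and parameters are placeholders.
subprocess.call([
    r"C:\birt-runtime\ReportEngine\genReport.bat",
    "--format", "pdf",
    "--output", "invoice.pdf",
    "--parameter", "customerId=42",
    "invoice.rptdesign",
])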
However, BIRT startup carries a heavy overhead (several seconds).
For reasonable performance, it is much better to perform this initialization only once.
To achieve this goal, there are at least two possible ways:
You can use the BIRT viewer servlet (which is included as a WAR file with the POJO runtime): you deploy the servlet in a web server, then use HTTP requests to generate reports.
This looks technically old-fashioned (e.g. no JSON requests), but it should work. However, I have never used this approach myself.
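Roughly, calling the viewer servlet from Python might look like this (__report and __format are the servlet's standard URL parameters; host, port, and context path depend on how you deployed the WAR, and the report parameter shown is illustrative):

import requests

resp = requests.get(
    "http://localhost:8080/birt/run",
    params={
        "__report": "invoice.rptdesign",  # which module to run
        "__format": "pdf",                # output format
        "customerId": "42",               # a report parameter (illustrative)
    },
)
resp.raise_for_status()
with open("invoice.pdf", "wb") as out:
    out.write(resp.content)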
The other option is to write your own BIRT server.
In our product, we followed this approach.
You can take the viewer servlet as a template for seeing how this could work.
The basic idea is:
You start one (or possibly more than one) Java process.
The Java process initializes the BIRT runtime (this is what takes some seconds).
After that, the Java process listens for requests somehow (we used a plain socket listener, but of course you could use HTTP or some REST server framework as well).
A request would contain the following information:
which module to run
which output format
report parameters (specific to the module)
possibly other data/metadata, e.g. for authentication
For each request, the Java process would create a RunAndRenderTask, or separate RunTask and RenderTasks.
Depending on your reports, you might consider returning the resulting output (e.g. PDF) directly as a response, or using an asynchronous approach.
Note that BIRT will happily create several reports at the same time - multi-threading is no problem (except for the initialization), given enough RAM.
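To make the request side concrete, here is a rough sketch of a Python client for such a home-grown server, assuming a line-delimited JSON request over a plain socket; the protocol, host, and port are entirely your own design, nothing here is part of BIRT:

import json
import socket

# Hypothetical wire format: one JSON line in, rendered output bytes back.
request = {
    "module": "invoice.rptdesign",     # which module to run
    "format": "pdf",                   # which output format
    "parameters": {"customerId": 42},  # report parameters (illustrative)
}
with socket.create_connection(("localhost", 9090)) as sock:
    sock.sendall((json.dumps(request) + "\n").encode("utf-8"))
    pdf_bytes = sock.makefile("rb").read()  # server closes when done
with open("invoice.pdf", "wb") as out:
    out.write(pdf_bytes)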
Be warned, however, that you will need at least a few days to build a POC for this "create your own server" approach, and probably some weeks to reach production quality.
So if you just want to build something fast to see if BIRT is the right tool for you, start with the command-line approach, then the servlet approach; only if you find that the servlet approach is not quite good enough should you go the "create your own server" way.
It's a pity that currently there doesn't seem to exist an open-source, production-quality, modern BIRT REST service.
That would make a really good contribution to the BIRT open-source project... (https://github.com/eclipse/birt)
I have a working Python automation program, combine_excel.py. It accesses a server, extracts Excel files, and combines them in an automation workflow. Currently, I need to execute this automation manually.
I'd like to host this program on a cloud/server and activate the script at preset times and regular intervals. I'd like to know if there is any service out there that will allow me to do that. Can I do this on Google Cloud or AWS?
The program will generate an output that I'd like to have saved to my Google Drive.
An easy, cost-effective way to achieve this could be to use AWS Lambda functions. Lambda functions can be set to trigger at certain time intervals using cron syntax.
You might need to make some minor adjustments to match Lambda's format requirements, and perhaps work out a way to include dependencies if you have any, but everything should be pretty straightforward, as there is a lot of information available on the web.
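As a rough sketch, the Lambda entry point could just wrap your existing script (combine_excel and its main() are assumed names), with a CloudWatch Events/EventBridge cron rule such as cron(0 2 * * ? *) as the trigger:

import combine_excel  # your existing module (assumed name)

def lambda_handler(event, context):
    # Invoked on schedule by the cron rule; no event data is needed.
    combine_excel.main()  # assumed entry point of the existing script
    return {"status": "done"}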
The same can be achieved using Google Cloud Functions.
You could also try the Serverless Framework, which would take care of the deployment for you; you only need to set it up once.
Another option is Zeit; it's quite simple to use, and it has a free tier (as do the others).
Some useful links:
https://serverless.com/blog/serverless-python-packaging/
https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
https://cloud.google.com/functions/docs/quickstart-python
https://zeit.co/docs/runtimes#official-runtimes/python
I have an Interactive Brokers (IB) account and am using the IB API to build an automated trading system in Python. Version 1.0 is nearing the test stage.
I am thinking about creating a GUI for it so that I can watch various custom indicators in real time and adjust trading parameters. Everything (IB TWS/IB Gateway and my app) runs on my local Windows 10 PC (I could run it on Ubuntu if that made it easier); startup config files are presently the only way to adjust parameters, after which I watch the results scroll by in the console window.
Eventually I would like to run both IB TWS/IB Gateway and the app on Amazon EC2/AWS and access them from anywhere. I only mention this because it may be a consideration in how to set up the GUI now, to avoid having to redo it later.
I am not going to write this myself and will contract someone else to do it. After spending 30+ hours researching this, I still really have no idea of the best way to implement it (browser-based, standalone app, etc.) or what skills the programmer would need, so that I can describe the job.
An estimate of how long it would take to get a bare-bones GUI displaying data from my app in real time and sending inputs back to it in real time would also be helpful.
The simplest and quickest way will probably be to add a GUI directly to your Python app. If you don't need it to be pretty or to run on mobile, I'd say go with Tkinter for simplicity. Then connect to wherever the app is located and control it remotely.
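A minimal sketch of that idea: a Tkinter window that polls a value from your app once a second (get_indicator_value() is a stand-in for whatever hook your app exposes):

import random
import tkinter as tk

def get_indicator_value():
    # Stand-in for a real hook into the trading app.
    return random.uniform(-1.0, 1.0)

def refresh():
    label.config(text=f"Indicator: {get_indicator_value():+.4f}")
    root.after(1000, refresh)  # schedule the next poll in 1000 ms

root = tk.Tk()
root.title("Trading monitor")
label = tk.Label(root, font=("Consolas", 16))
label.pack(padx=20, pady=20)
refresh()
root.mainloop()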
Adding another component that communicates with your Python app introduces a higher level of complexity, which I think is redundant in this case.
You didn't specify in detail what kind of data you will require the app to display. If this includes any form of charting, I'd use existing charting software such as NinjaTrader / MultiCharts / Sierra Chart to run my indicators and monitor position status, and restrict the GUI of the Python app to adjusting the trading parameters and reporting numerical stats.
I have a couple of Python apps in CloudFoundry. Now I would like to schedule their execution. For example a specific app has to be executed on the second day of each month.
I couldn't find anything on the internet. Is that even possible?
Cloud Foundry will deploy your application inside a container. You could use libraries to execute your code on a specific schedule, but either way you're paying to have that instance running the whole time.
What you're trying to do is a perfect candidate for "serverless computing" (also known as "event-driven" or "function as a service" computing).
These deployment technologies execute functions in response to a trigger, e.g. a REST API call, a certain timestamp, a new database insert, etc.
You could execute your Python Cloud Foundry apps using the OpenWhisk serverless compute platform.
IBM offers a hosted version of this running on their cloud platform, Bluemix.
I don't know what your code looks like so I'll use this sample hello world function:
import sys

def main(dict):
    if 'message' in dict:
        name = dict['message']
    else:
        name = 'stranger'
    greeting = 'Hello ' + name + '!'
    print(greeting)
    return {'greeting': greeting}
You can upload your actions (functions) to OpenWhisk using either the online editor or the CLI.
Once you've uploaded your actions you can automate them on a specific schedule by using the Alarm Package. To do this in the online editor click "automate this process" and pick the alarm package.
To do this via the CLI we need to first create a trigger:
$ wsk trigger create regular_hello_world --feed /whisk.system/alarms/alarm -p cron '0 0 9 * * *'
ok: created trigger feed regular_hello_world
This will trigger every day at 9am. We then need to link this trigger to our action by creating a rule:
$ wsk rule create regular_hello_rule regular_hello_world hello_world
ok: created rule regular_hello_rule
For more info see the docs on creating python actions.
The CloudFoundry platform itself does not have a scheduler (at least not at this time), and the containers where your application runs do not have cron installed (that is unlikely to ever change).
If you want to schedule code to periodically run, you have a few options.
You can deploy an application that includes a scheduler. The scheduler can run your code directly in that container or it can trigger the code to run elsewhere (ex: it sends an HTTP request to another application and that request triggers the code to run). If you trigger the code to run elsewhere, you can make the scheduler app run pretty lean (maybe with 64m of memory or less) to reduce costs.
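For example, a lean scheduler app could be as small as this sketch, using the third-party schedule package (the target URL is a placeholder for your own app's trigger endpoint):

import time

import requests
import schedule

def trigger():
    # Kick off the real work in the other app (placeholder URL).
    requests.post("https://my-app.example.com/run-job")

schedule.every().day.at("02:00").do(trigger)

while True:
    schedule.run_pending()
    time.sleep(30)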
You can look for a third-party scheduler service. The availability and cost of services like this will vary depending on your CF provider, but there are service offerings to handle scheduling. These typically function like the previous example, where an HTTP request is sent to your app at a specific time, triggering your scheduled code. Many service providers offer free tiers, which give you a small number of triggers per month at no cost.
If you have a server outside of CF with cron installed, you can use cron there to schedule the tasks and trigger the code to run on CF. You can do this, as in the previous examples, by sending HTTP requests to your app; however, this option also gives you the possibility of using CloudFoundry's task feature.
CloudFoundry has the concept of a task, which is a one-time execution of some code. With it, you can execute the cf run-task command to trigger the task to run. Ex: cf run-task <app-name> "python my-task.py". More on that in the docs, here. The nice part about using tasks is that your provider will only bill you while the task is running.
To see if your provider has tasks available, run cf feature-flags and look to see if task_creation is set to enabled.
Hope that helps!
I wrote a Python script that pulls data from a 3rd-party API and pushes it into a SQL table I set up in AWS RDS. I want to automate this script so that it runs every night (the script only takes about a minute to run). I need to find a good place and way to set up this script so that it runs each night.
I could set up an EC2 instance with a cron job and run it from there, but it seems expensive to keep an EC2 instance alive all day for only one minute of run time per night. Would AWS Data Pipeline work for this purpose? Are there other, better alternatives?
(I've seen similar topics discussed when googling around but haven't seen recent answers.)
Thanks
Based on your case, I think you can try using a ShellCommandActivity in Data Pipeline. It will launch an EC2 instance and execute the command you give it on your schedule; after finishing the task, the pipeline will terminate the EC2 instance.
Here is doc:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-shellcommandactivity.html
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-ec2resource.html
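A rough boto3 sketch of that setup (object and field names follow the two docs above, but a real pipeline also needs roles, a log URI, and a Schedule object, which are omitted here for brevity):

import boto3

client = boto3.client("datapipeline")

# Create an empty pipeline, then push a definition into it.
pipeline_id = client.create_pipeline(
    name="nightly-job", uniqueId="nightly-job-1"
)["pipelineId"]

objects = [
    # Transient EC2 instance, terminated once the work is done.
    {"id": "MyEc2", "name": "MyEc2", "fields": [
        {"key": "type", "stringValue": "Ec2Resource"},
        {"key": "terminateAfter", "stringValue": "15 Minutes"},
    ]},
    # The command to run on that instance.
    {"id": "RunScript", "name": "RunScript", "fields": [
        {"key": "type", "stringValue": "ShellCommandActivity"},
        {"key": "command", "stringValue": "python my_etl.py"},
        {"key": "runsOn", "refValue": "MyEc2"},
    ]},
]

client.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)
client.activate_pipeline(pipelineId=pipeline_id)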
Alternatively, you could use a 3rd-party service like Crono. Crono is a simple REST API to manage time-based jobs programmatically.