Zabbix Action - How to use Default Field in Custom Script - python

I am writing a Python script which is run by a Zabbix Action.
I want to put values in the Default subject and Default message fields of the Action and then use those values in my script. So I run the script and pass all the needed macros as script parameters, like:
python /path/script.py -A "{HOST.NAME}" -B "{ALERT.MESSAGE}" -C "{ALERT.SUBJECT}"
but I can get only the HOST.NAME value; for the others I get only the macro name, not its value.
Any idea where the problem is? Are those macros unavailable to custom scripts?
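A minimal receiving side for that command (illustrative, not the asker's actual script) would confirm what the parameters contain:

import argparse

# The flags match the command above; the destination names are made up.
parser = argparse.ArgumentParser()
parser.add_argument("-A", dest="host")
parser.add_argument("-B", dest="subject")
parser.add_argument("-C", dest="message")
args = parser.parse_args()

# If the macros were expanded, real values print here; otherwise the
# literal macro names ("{ALERT.SUBJECT}" etc.) appear instead.
print(args.host)
print(args.subject)
print(args.message)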

After doing some research & testing myself, it seems as if these Alert macros are indeed not available in a custom script operation. [1]
You have two options for a workaround:
If you need to be able to execute this script on the host itself, the quick option is to simply replace the macros with the actual text of your subject & message. Some testing is definitely necessary to make sure it will work in your environment, and it's not the most elegant solution, but something like this may well work with little extra effort:
python /path/script.py -A "{HOST.NAME}" -B "Problem: {EVENT.NAME}" -C "Problem started at {EVENT.TIME} on {EVENT.DATE}
Problem name: {EVENT.NAME}
Host: {HOST.NAME}
Severity: {EVENT.SEVERITY}
Original problem ID: {EVENT.ID}
{TRIGGER.URL}"
Verify, of course, that e.g. the newlines do not break your custom script in your environment.
It doesn't look pretty, but it may well be the easiest option.
If you can run the command on any host, the nicer option is to create a new media type, which will let you use these variables and may even make adding this script to other hosts much easier. These macros can definitely be used as part of a custom media type (see Zabbix Documentation - Media Types), which can include custom scripts.
You'll need to make a bash or similar script file for the Zabbix server to run (which means doing anything on a host outside the Zabbix server itself is going to be more difficult, but not impossible).
Once the media type is set up, as a bit of a workaround (not ideal, of course) you'll need a user to 'send' to; assigning that media type to the user and then 'sending' the alert to that user should execute your script with the macros, just like executing the custom command.
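As a hedged sketch of what such an alert script could look like (the Zabbix server passes a script media type's configured parameters as positional arguments; which macros arrive here depends entirely on how the media type is configured):

import sys

# Positional parameters as passed by the Zabbix server to the alert script;
# a common configuration is 'send to', subject, and message, in that order.
sendto, subject, message = sys.argv[1], sys.argv[2], sys.argv[3]

# Do whatever the custom command used to do with these values.
print("To: %s" % sendto)
print("Subject: %s" % subject)
print(message)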
[1]: While I did do my own testing on this, I couldn't find any documentation which specifically states that these macros aren't supported in this case, and they definitely look like they should be; I'm more than happy to edit/revoke this answer if anyone can find documentation that confirms or denies this.

I should also explain how it works now. I did something like:
python /path/script.py -A "{HOST.NAME}" -B "Problem: {EVENT.NAME}" -C "Problem started at {EVENT.TIME} on {EVENT.DATE}
Problem name: {EVENT.NAME}
Host: {HOST.NAME}
Severity: {EVENT.SEVERITY}
Original problem ID: {EVENT.ID}
{TRIGGER.URL}"
works for me :)

Related

Is it possible to run a script using CRON that requires a redirect URI?

I created a Spotipy script that pulls the last 50 songs I listened to and adds them, along with their audio features, to a Google Sheet. However, I'd love to be able to run this script daily without having to run it manually, but I have very little experience with CRON scheduling. I'm having trouble wrapping my head around how it can be run given all of the command line arguments I need to enter.
The code requires several environment variables to be exported first, such as
export SPOTIPY_REDIRECT_URI="http://google.com"
export SPOTIPY_CLIENT_SECRET='secret'
and similar for the client ID.
Additionally, the first argument after the script call is the username (username = sys.argv[1]).
Most importantly, it prompts me to copy and paste a redirect URL into the command line, which is unique each run.
Is it at all possible to pass the redirect URL to the command line each time the script is run using CRON?
I think what you're looking to accomplish can be achieved in one of two ways.
First, you could write a shell script to handle the export commands and passing the redirect URI to your script. Second, with a few tweaks to your script, you should be able to run it headlessly, without the intermediary copy/paste step.
I'm going to explore the second route in my answer.
Regarding the environment variables, you have a couple options. Depending on your distribution, you may be able to set these variables in the crontab before running the job. Alternatively, you can set the exports in the same job you use to run your script, separating the commands by semicolon. See this answer for a detailed explanation.
Then, regarding the username argument: fortunately, script arguments can also be passed via cron. You just list the username after the script's name in the job; cron will pass it as an argument to your script.
Finally, regarding the redirect URI: if you change your redirect URI to something like localhost with a port number, the script will automatically redirect for you. This wasn't actually made clear to me in the spotipy documentation, but rather from the command line when authenticating with localhost: "Using localhost as redirect URI without a port. Specify a port (e.g. localhost:8080) to allow automatic retrieval of authentication code instead of having to copy and paste the URL your browser is redirected to." Following this advice, I was able to run my script without having to copy/paste the redirect URL.
Add something like "http://localhost:8080" to your app's Redirect URIs in the Spotify Dashboard, then export it in your environment and run your script; it should run without your input!
Assuming that works, you can put everything together in a job like this to execute your script daily at 17:30:
30 17 * * * export SPOTIPY_REDIRECT_URI="http://localhost:8080"; export SPOTIPY_CLIENT_SECRET='secret'; python your_script.py "your_username"
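For completeness, here is a sketch of what the script side of that job might look like. spotipy's SpotifyOAuth reads the client ID, secret, and redirect URI from those environment variables by default; the scope and field names below are illustrative:

import sys

import spotipy
from spotipy.oauth2 import SpotifyOAuth

username = sys.argv[1]  # passed by cron after the script name

# With SPOTIPY_CLIENT_ID, SPOTIPY_CLIENT_SECRET and SPOTIPY_REDIRECT_URI
# exported, no explicit credentials are needed here. A per-user cache file
# lets later cron runs reuse and refresh the token without a browser.
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    scope="user-read-recently-played",
    cache_path=".cache-" + username,
))

recent = sp.current_user_recently_played(limit=50)
for item in recent["items"]:
    print(item["track"]["name"])

Note that the very first run still has to complete the browser handshake once to seed the token cache; every run after that is fully unattended.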

Can the Ansible raw ssh module be used like "expect"?

In other words, can the ansible "expect" module be used over a raw SSH connection?
I'm trying to automate some simple embedded devices that do not have a full Python or even a shell. The Ansible raw module works fine for simple commands, but I'd like to drive things in an expect-like manner.
Can this be done?
The expect module is written in Python, so no, that won't work.
Ansible does have a model for interacting with devices like network switches that similarly don't run Python; you can read about that in How Network Automation Is Different. I don't think that will offer any sort of immediate solution to your problem, but it suggests a way of pursuing things if it were really important to integrate with Ansible.
It would probably be simpler to just use the actual expect program instead of Ansible.
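For comparison, here is what that kind of interaction looks like scripted directly in Python with the pexpect library; the host, prompts, and commands below are purely illustrative:

import pexpect

# Spawn an interactive ssh session to the device.
child = pexpect.spawn("ssh admin@192.0.2.1")
child.expect("assword:")      # matches "Password:" or "password:"
child.sendline("secret")

child.expect("#")             # wait for the device prompt
child.sendline("show version")
child.expect("#")
print(child.before.decode())  # output printed before the prompt returned

child.sendline("exit")
child.close()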
With Ansible 2.7 and up, you can use the cli_command module. It's kind of like expect, and it does not need Python on the target.
https://docs.ansible.com/ansible/latest/modules/cli_command_module.html
- name: multiple prompt, multiple answer (mandatory check for all prompts)
  cli_command:
    command: "copy sftp sftp://user@host//user/test.img"
    check_all: True
    prompt:
      - "Confirm download operation"
      - "Password"
      - "Do you want to change that to the standby image"
    answer:
      - 'y'
      - <password>
      - 'y'

How can a CGI server based on CGIHTTPRequestHandler require that a script start its response with headers that include a `content-type`?

Later note: the issues in the original posting below have been largely resolved.
Here's the background: For an introductory comp sci course, students develop html and server-side Python 2.7 scripts using a server provided by the instructors. That server is based on CGIHTTPRequestHandler, like the one at pointlessprogramming. When the students' html and scripts seem correct, they port those files to a remote, slow Apache server. Why support two servers? Well, the initial development using a local server has the benefit of reducing network issues and dependency on the remote, weak machine that is running Apache. Eventually porting to the Apache-running machine has the benefit of publishing their results for others to see.
For the local development to be most useful, the local server should closely resemble the Apache server. Currently there is an important difference: Apache requires that a script start its response with headers that include a content-type; if the script fails to provide such a header, Apache sends the client a 500 error ("Internal Server Error"), which is too generic to help the students, who cannot use the server logs. CGIHTTPRequestHandler imposes no similar requirement. So it is common for a student to write header-free scripts that work with the local server, but get the baffling 500 error after copying files to the Apache server. It would be helpful to have a version of the local server that checks for a content-type header and gives a good error if there is none.
I seek advice about creating such a server. I am new to Python and to writing servers. Here are the issues that occur to me, but any helpful advice would be appreciated.
Is a content-type header required by the CGI standard? If so, other people might benefit from an answer to the main question here. Also, if so, I despair of finding a way to disable Apache's requirement. Maybe the relevant part of the CGI RFC is section 6.3.1 (CGI Response, Content-Type): "If an entity body is returned, the script MUST supply a Content-Type field in the response."
To make a local server that checks for the content-type header, perhaps I should sub-class CGIHTTPServer.CGIHTTPRequestHandler, to override run_cgi() with a version that issues an error for a missing header. I am looking at CGIHTTPServer.py __version__ = "0.4", which was installed with Python 2.7.3. But run_cgi() does a lot of processing, so it is a little unappealing to copy all its code, just to add a couple calls to a header-checking routine. Is there a better way?
If the answer to (2) is something like "No, overriding run_cgi() is recommended," I anticipate writing a version that invokes the desired script, then checks the script's output for headers before that output is sent to the client. There are apparently two places in the existing run_cgi() where the script is invoked:
3a. When run_cgi() is executed on a non-Unix system, the script is executed using Python's subprocess module. As a result, the standard output from the script will be available as an in-memory string, which I can presumably check for headers before the call to self.wfile.write (see the sketch at the end of this question). Does this sound right?
3b. But when run_cgi() is executed on a *nix system, the script is executed by a forked process. I think the child's stdout will write directly to self.wfile (I'm a little hazy on this), so I see no opportunity for the code in run_cgi() to check the output. Ugh. Any suggestions?
If analyzing the script's output is recommended, is email.parser the standard way to recognize whether there is a content-type header? Is another standard module recommended instead?
Is there a more appropriate forum for asking the main question ("How can a CGI server based on CGIHTTPRequestHandler require...")? It seems odd to ask if there is a better forum for asking programming questions than Stack Overflow, but I guess anything is possible.
Thanks for any help.
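Regarding (3a) above, a minimal version of the header check might look like the following; the function name is illustrative, and email.parser ships with the standard library:

import email.parser

def has_content_type(script_output):
    # A CGI response separates its headers from the body with a blank
    # line; take whichever header block ends first (CRLF or bare LF).
    header_block = script_output.split("\r\n\r\n", 1)[0].split("\n\n", 1)[0]
    msg = email.parser.Parser().parsestr(header_block, headersonly=True)
    return msg.get("Content-Type") is not None

In the subprocess branch of run_cgi(), the captured output could be passed through a check like this before the call to self.wfile.write; on failure, the handler could call self.send_error with a message that actually names the missing header, which is exactly the diagnostic Apache withholds.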

Python: Verifying NFS authentication

Folks,
I believe I have two questions: one Python-specific and the other NFS-related.
The basic point is that my program gets the 'username', 'uid', NFS server IP and exported_path as input from the user. It now has to verify that the NFS exported path is readable/writable by this user/uid.
My program is running as root on the local machine. The straightforward approach is to 'useradd' a user with the given username and uid, mount the NFS exported path (run as root for mount) on some temporary mount point, and then execute su username -c 'touch /mnt_pt/tempfile'. If the username and uid input were correct (and the NFS server was set up correctly), this touch of tempfile will succeed, creating tempfile on the remote NFS directory. This is the goal.
Now the two questions are:
(i) Is there a simpler way to do this than creating a new unix user, mounting and touching a file to verify the NFS permissions?
(ii) If this is what needs to be done, then I wonder if there are any Python modules/packages that will help me execute 'useradd', 'userdel' and related commands? I currently intend to use the respective binaries (/usr/sbin/useradd etc.) and invoke subprocess.Popen to execute the command and get the output.
Thank you for any insight.
i) You could do something more arcane, but short of actually touching the file you probably aren't going to be testing exactly what you need to test, so I think I'd probably do it the way you suggest.
ii) You might want to check out the Python pwd module if you want to verify user existence or the like, but you'll probably need to leverage the useradd/userdel programs themselves to do the dirty work.
You might want to consider leveraging sudo for your program so the entire thing doesn't have to run as root, it seems like a pretty risky proposition.
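A rough sketch of that flow, using subprocess with argument lists rather than shell strings (paths and names are illustrative, error handling is minimal, and the mount point must already exist):

import subprocess

def nfs_write_check(username, uid, server_ip, export_path,
                    mount_point="/mnt/nfs_check"):
    # Create the user with the requested uid, mount the export, then try
    # to touch a file as that user; clean up regardless of the outcome.
    subprocess.check_call(["useradd", "-u", str(uid), username])
    try:
        subprocess.check_call(["mount", "-t", "nfs",
                               "%s:%s" % (server_ip, export_path),
                               mount_point])
        try:
            rc = subprocess.call(["su", username, "-c",
                                  "touch %s/tempfile" % mount_point])
            return rc == 0  # 0 means the touch (and the write) succeeded
        finally:
            subprocess.call(["umount", mount_point])
    finally:
        subprocess.call(["userdel", username])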
There is a python suite to test NFS server functionality.
git://git.linux-nfs.org/projects/bfields/pynfs.git
While it's written for NFSv4, you can adapt it for v3 as well.

Django and root processes

In my Django project I need to be able to check whether a host on the LAN is up using an ICMP ping. I found this SO question, which answers how to ping something in Python, and this SO question, which links to resources explaining how to use the sudoers file.
The Setting
A Device model stores an IP address for a host on the LAN, and after adding a new Device instance to the DB (via a custom view, not the admin) I envisage checking to see if the device responds to a ping using an AJAX call to an API which exposes the capability.
The Problem
However (from the docstring of a library suggested in the first SO question): "Note that ICMP messages can only be sent from processes running as root."
I don't want to run Django as the root user, since that is bad practice, but this part of the process (sending an ICMP ping) needs to run as root. If within a Django view I wish to send off a ping packet to test the liveness of a host, then Django itself would need to run as root, since that is the process invoking the ping.
Solutions
These are the solutions I can think of. My question: are there any better ways to execute only select parts of a Django project as root, other than the following?
Run Django as root (please no!)
Put a "ping request" in a queue that another processes -- run as root -- can periodically check and fulfil. Maybe something like celery.
Is there not a simpler way?
I want something like a "Django run as root" library; is this possible?
Absolutely no way, do not run the Django code as root!
I would run a daemon as root (written in Python, why not) and then use IPC between the Django instance and your daemon. As long as you're sure to validate the content and handle it properly (e.g. use subprocess.call with an array, etc.) and only pass in data (not commands to execute), it should be fine.
Here is an example client and server, using web.py
Server: http://gist.github.com/788639
Client: http://gist.github.com/788658
You'll need to install web.py (webpy.org), but it's worth having around anyway. If you can hard-wire the IP (or hostname) into the server and remove the argument, all the better.
What's your OS here? You might be able to write a little program that does what you want given a parameter, and stick that in the sudoers file, and give your django user permission to run it as root.
/etc/sudoers
I don't know what kind of system you're on, but on any box I've encountered, one does not have to be root to run the command-line ping program (it has the suid bit set, so it becomes root as necessary). So you could just invoke that. It's a bit more overhead, but probably negligible compared to network latency.
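Following that suggestion, the Django-side liveness check could be as small as this (the flags are the common Linux ones: -c 1 sends a single echo request, -W 1 caps the wait at one second):

import os
import subprocess

def host_is_up(ip_address):
    # The suid ping binary does the privileged part; we only need its
    # exit status (0 means a reply was received).
    with open(os.devnull, "wb") as devnull:
        return subprocess.call(["ping", "-c", "1", "-W", "1", ip_address],
                               stdout=devnull, stderr=devnull) == 0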
