I am running a Python script from a cron job. When the script runs, it reports either that it completed or that it failed, and it is supposed to send a message to Cronitor along with the completion ping. An example of the link is below:
https://cronitor.link/p/id/key?state=complete&msg='Success'
When I just put the link into the address bar and hit Enter, the message shows up in Cronitor. However, when I send the ping from the script, the message doesn't come through: Cronitor gets the ping that the job completed successfully, but no message shows up:
cron_alert = "/usr/local/bin/curl --silent https://cronitor.link/p/id/key?state=complete&msg='Success'"
os.system(cron_alert)
I tried removing '--silent' but that didn't make a difference. Any ideas on how to fix this? Thanks
Your command contains an unquoted &, which the shell invoked by os.system treats as a command separator: it runs curl in the background and then executes msg='Success' as a separate (no-op) variable assignment, so the message never reaches Cronitor. You need to quote the URL, e.g.
cron_alert = \
"/usr/local/bin/curl --silent \"https://cronitor.link/p/id/key?state=complete&msg='Success'\""
but even better would be to stop using os.system and use subprocess.run instead.
import subprocess
subprocess.run(["/usr/local/bin/curl",
                "--silent",
                "https://cronitor.link/p/id/key?state=complete&msg='Success'"])
which bypasses the shell altogether.
Best of all, use a library like requests to send the request directly from Python, rather than forking a new process:
import requests
requests.get("https://cronitor.link/p/id/key",
             params={'state': 'complete', 'msg': "'Success'"})
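(A side benefit: requests URL-encodes the params values for you, so spaces or special characters in the message won't break the URL the way they can in a hand-built query string.)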
(In all cases, does the API really require single quotes around Success?)
Hello minds of Stack Overflow,
I've run into a perplexing bug. I have a Python script that creates a new thread, which ssh's into a remote machine and starts a process. This process does not return on its own (and I want it to keep running throughout the duration of my script). In order to force the thread to return, at the end of my script I ssh into the machine again and kill -9 the process. This is working well, except for the fact that it breaks the terminal.
To start the thread I run the following code:
t = threading.Thread(target=run_vUE_rfal, args=(vAP.IP, vUE.IP))
t.start()
The function run_vUE_rfal is as follows:
cmd = "sudo ssh -ti ~/.ssh/my_key.pem user#%s 'sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'" % (vUE_IP, vAP_IP, vUE_IP)
output = commands.getstatusoutput(cmd)
return
It seems that when the command is run, it somehow breaks my terminal. It is broken in that, instead of starting a new line for each print, it pads each line with whitespace out to the width of my terminal and prints everything as seemingly one long string. Also, I am unable to see my keyboard input in that terminal, but it is still read successfully. My terminal looks something like this:
normal formatted output
normal formatted output
running vUE-rfal
print1
print2
print3_extra_long
print4
If I replace the body of the run_vUE_rfal function with some simple prints, the terminal does not break. I have many other ssh's and telnets in this script that work fine. However, this is the only one I'm running in a separate thread as it is the only one that does not return. I need to maintain the ability to close the process of the remote machine when my script is finished.
Any explanations to the cause and idea for a fix are much appreciated.
Thanks in advance.
It seems the process you control is changing the terminal settings. Such changes bypass stdout and stderr, for good reason: ssh itself needs this to ask the user for a password even when its output is being redirected.
A way to solve this is to launch your process with the Python module pexpect (a third-party library): it creates its own pseudo-terminal for the child, one you don't care about.
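For example, a rough (untested) sketch reusing the ssh command from the question; vAP_IP and vUE_IP are the variables your function already receives:

import os
import pexpect

def run_vUE_rfal(vAP_IP, vUE_IP):
    # pexpect gives the child its own pseudo-terminal, so any terminal-mode
    # changes the remote process makes affect that pty, not your terminal.
    remote_cmd = ('sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'
                  % (vAP_IP, vUE_IP))
    child = pexpect.spawn('sudo', ['ssh', '-t',
                                   '-i', os.path.expanduser('~/.ssh/my_key.pem'),
                                   'user@%s' % vUE_IP, remote_cmd],
                          timeout=None)
    child.expect(pexpect.EOF)   # returns when the remote process exits
    return child.before         # everything the process printed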
BTW, to "repair" your terminal afterwards, use the reset command. As you already noticed, what you type is still read even though you cannot see it; reset will restore the terminal to its default settings.
I have a Python script that I successfully execute every night at midnight. It outputs a log file; however, I also want it to send an email with the log contents.
I've read that this is pretty easy to do, but I've had no luck so far. I've tried this but it does not work. Does anyone else have some other suggestions?
I'm running Ubuntu 14.04, if that makes a difference with the mail smtp.
MAILTO=mcgoga12@wfu.edu
0 0 * * * /usr/bin/python /home/grant/Developer/Projects/StudyBug/Main.py > /home/grant/Desktop/Studybuglog.log 2>&1
Cron emails everything the command writes to its standard output (what would appear on the screen if you ran the command from the command line) to the address in MAILTO.
Unfortunately for you, you are changing that behaviour with shell redirection: run exactly as written above, the command shows nothing on the screen, because the '>' operator redirects standard output to the file.
If you want an email, remove the '>' and everything after it, then test.
If you also want a log file, you might try the 'tee' command, or change your script to take a log file as a command-line argument and write to both the log file and standard output.
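For example, a rough sketch with tee, reusing the paths from your crontab (tee writes its input to the named file and also passes it through to standard output, which cron then mails to MAILTO):

MAILTO=mcgoga12@wfu.edu
0 0 * * * /usr/bin/python /home/grant/Developer/Projects/StudyBug/Main.py 2>&1 | tee /home/grant/Desktop/Studybuglog.log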
I am using Supervisor (a process-control system written in Python) to start and control my web server and associated services. At times I need to drop into pdb (or really ipdb) to debug while the server is running, and I am having trouble doing this through Supervisor.
Supervisor starts and controls processes with a daemon called supervisord, and offers access through a client called supervisorctl. This client lets you attach to one of the foreground processes it has started using the 'fg' command, like this:
supervisor> fg webserver
All logging data gets sent to the terminal. But I do not get any text from the pdb debugger. It does accept my input so stdin seems to be working.
As part of my investigation I was able to confirm that neither print nor raw_input sends any text out either, though in the case of raw_input, stdin is indeed working.
I was also able to confirm that this works:
sys.stdout.write('message')
sys.stdout.flush()
I thought that when I issued the fg command it would be as if I had run the process in the foreground in a standard terminal... but it appears that supervisorctl is doing something more. Regular printing does not flush, for example. Any ideas?
How can I get pdb, standard prints, etc to work properly when connecting to the foreground terminal using the fg command in supervisorctl?
(Possibly helpful ref: http://supervisord.org/subprocess.html#nondaemonizing-of-subprocesses)
It turns out that Python buffers its output stream by default. In certain cases (such as this one) that results in output being held back in the buffer.
Idioms like this exist:
import os, sys
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)  # bufsize 0 = unbuffered (Python 2)
to force the buffer to zero.
But the better alternative, I think, is to start the Python process in an unbuffered state using the -u flag. Within the supervisord.conf file it simply becomes:
command=python -u script.py
ref: http://docs.python.org/2/using/cmdline.html#envvar-PYTHONUNBUFFERED
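If you would rather not change the command line, setting the PYTHONUNBUFFERED environment variable from that reference should have the same effect. In supervisord.conf that might look something like this (a sketch; the [program:webserver] section name is just an example):

[program:webserver]
command=python script.py
environment=PYTHONUNBUFFERED="1"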
Also note that this dirties up your log file, especially if you are using something like ipdb with ANSI coloring. But since it is a dev environment, that is unlikely to matter.
If it is an issue, another solution is to stop the process to be debugged in supervisorctl and run it temporarily in another terminal for the debugging session. That keeps the log files clean if needed.
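Roughly, that workflow looks like this (a sketch; 'webserver' is the program name from the fg example above, and the second line stands in for however you normally launch the server):

supervisor> stop webserver
# then, in a separate terminal:
$ python -u script.py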
It could be that your webserver redirects its own stdout (internally) to a log file (i.e. it ignores supervisord's stdout redirection), and that prevents supervisord from controlling where its stdout goes.
To check if this is the case, you can tail -f the log, and see if the output you expected to see in your terminal goes there.
If that's the case, see if you can find a way to configure your webserver not to do that, or, if all else fails, try working with two terminals... (one for input, one for output).
I'm beginning to learn twisted.conch to automate some tasks over SSH.
I tried to modify the sample sshclient.py from http://www.devshed.com/c/a/Python/SSH-with-Twisted/4/ . It runs one command after login and prints the captured output.
What I want to do is run a series of commands, and maybe decide what to do next based on the output.
The problem I ran into is that twisted.conch.ssh.channel.SSHChannel appears to always close itself after running a command (such as df -h). The example sends the request (sendRequest) after channelOpen, and then the channel is always closed after dataReceived, no matter what I do.
I'm wondering if this is due to server sending an EOF after the command. And therefore this channel must be closed? Should I just open multiple channels for multiple commands?
Another problem is those interactive commands (such as rm -i somefile). It seems that because the server didn't send EOF, SSHChannel.dataReceived never gets called. How do I manage to capture output in this situation, and what do I do to send back a response?
Should I just open multiple channels for multiple commands?
Yep. That's how SSH works: each 'exec' request runs a single command, and the server sends EOF and closes the channel when that command exits. Open a new channel over the same connection for each command you want to run.
SSHChannel.dataReceived never gets called
This doesn't sound like what should happen. Perhaps you can include a minimal example which reproduces the behavior.
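In the meantime, here's a minimal (untested) sketch of the one-channel-per-command pattern; it assumes conn is the authenticated SSHConnection from the devshed example you adapted:

from twisted.conch.ssh import channel, common

class CommandChannel(channel.SSHChannel):
    name = 'session'

    def __init__(self, command, *args, **kwargs):
        channel.SSHChannel.__init__(self, *args, **kwargs)
        self.command = command
        self.output = ''

    def channelOpen(self, data):
        # Ask the server to run exactly one command on this channel.
        self.conn.sendRequest(self, 'exec', common.NS(self.command))

    def dataReceived(self, data):
        self.output += data

    def closed(self):
        # The server closes the channel when the command exits.
        print 'output of %r:\n%s' % (self.command, self.output)

# One channel per command, all multiplexed over the same connection:
# conn.openChannel(CommandChannel('df -h'))
# conn.openChannel(CommandChannel('uptime'))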
I'm attempting to start a server app (written in Erlang; it opens ports and listens for HTTP requests) from the command line using pexpect (or even directly using subprocess.Popen()).
The app starts fine, logs to the screen fine (via pexpect), and I can interact with it via the command line as well...
The issue is that the server won't listen for incoming requests. The app listens when I start it manually, by typing the command in the command line; starting it via subprocess/pexpect somehow stops the app from listening...
When I start it manually, "netstat -tlp" displays the app as listening; when I start it via Python (subprocess/pexpect), netstat does not register the app...
I have a feeling it has something to do with the environment, the way Python forks things, etc.
Any ideas?
thank you
basic example:
note:
"-pz" - just adds ./ebin to the Erlang VM's module (library) search path
"-run" - runs moduleName, without any parameters
command_str = "erl -pz ./ebin -run moduleName"
child = pexpect.spawn(command_str)
child.interact() # Give control of the child to the user
All of this stuff works correctly, which is strange. I have logging inside my code and all the log messages are output as they should be. The server wouldn't listen even when I started its process via a bash script, so I don't think it's the Python code causing this (that's why I have a feeling it's something about the way the new OS process is started).
It could be to do with the way that command line arguments are passed to the subprocess.
Without more specific code, I can't say for sure, but I had this problem working on sshsplit ( https://launchpad.net/sshsplit )
To pass arguments correctly (in this example "ssh -ND 3000"), you should use something like this:
openargs = ["ssh", "-ND", "3000"]
print "Launching %s" %(" ".join(openargs))
p = subprocess.Popen(openargs, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
This will not only let you see exactly what command you are launching, but should also pass the values to the executable correctly. Although I can't say for sure without seeing some code, this seems the most likely cause of failure (could it also be that the program requires a specific working directory or configuration file?).
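As an aside, if you'd rather keep building the command as one string (as in the question), shlex.split can turn it into such a list for you; a small sketch:

import shlex
import subprocess

command_str = "erl -pz ./ebin -run moduleName"
# shlex.split breaks the string the way a POSIX shell would, so Popen gets a
# proper argument list instead of one big string.
p = subprocess.Popen(shlex.split(command_str),
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)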