Trouble with Cron sending email after python script execution

I have a python script I'm successfully executing every night at midnight. It's outputting the log file, however, I want it to also send an email with the log contents.
I've read this is pretty easy to do, but I've had no luck so far. I've tried this, but it does not work. Does anyone have any other suggestions?
I'm running Ubuntu 14.04, if that makes a difference for the mail/SMTP side of things.
MAILTO=mcgoga12@wfu.edu
0 0 * * * /usr/bin/python /home/grant/Developer/Projects/StudyBug/Main.py > /home/grant/Desktop/Studybuglog.log 2>&1

Cron emails everything the command writes to its standard output (what would appear on the screen if you ran the command from the command line) to the address in MAILTO.
Unfortunately for you, you are changing the behaviour of this command with shell redirection. If you ran the command exactly as written above, nothing would be shown on the screen, because you redirect standard output to the file using the '>' operator.
If you want an email, remove the > and everything after it, and then test.
If you also want a log file, you might use the 'tee' command, or change your script to take a log file as a command-line argument and write to both the log file and standard output.
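For example, a crontab entry along these lines (untested, using the paths from the question) should both email the output and keep a copy in the log file; add -a to tee if you want to append instead of overwrite:
MAILTO=mcgoga12@wfu.edu
# Pipe stdout+stderr through tee: it writes the log file and passes the output on to cron, which mails it.
0 0 * * * /usr/bin/python /home/grant/Developer/Projects/StudyBug/Main.py 2>&1 | tee /home/grant/Desktop/Studybuglog.log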

Related

Os.system doesn't push message in cron alert to cronitor?

I am running a python script using a cron job. When the script runs, it either alerts that it completed or failed. It is supposed to send a message to cronitor along with the completion ping. An example of the link is below:
https://cronitor.link/p/id/key?state=complete&msg='Success'
When I just put the link into the search bar and hit Enter, the message shows up in cronitor. However, when I try to get the message to print when running the link through the script it doesn't work. Cronitor gets the ping that it completed successfully but no message shows up:
cron_alert = "/usr/local/bin/curl --silent https://cronitor.link/p/id/key?state=complete&msg='Success'"
os.system(cron_alert)
I tried removing '--silent' but that didn't make a difference. Any ideas on how to fix this? Thanks
Your command contains an unescaped &, which the shell used by os.system parses as a command terminator that runs curl in the background. You need to escape it (or quote the URL), e.g.
cron_alert = \
    "/usr/local/bin/curl --silent \"https://cronitor.link/p/id/key?state=complete&msg='Success'\""
but even better would be to stop using os.system and use subprocess.run instead.
import subprocess
subprocess.run(["/usr/local/bin/curl",
                "--silent",
                "https://cronitor.link/p/id/key?state=complete&msg='Success'"])
which bypasses the shell altogether.
Best, use a library like requests to send the request from Python, rather than forking a new process.
import requests
requests.get("https://cronitor.link/p/id/key",
             params={'state': 'complete', 'msg': "'Success'"})
(In all cases, does the API really require single quotes around Success?)

Remote sh script executed in Python (Paramiko) never ends [duplicate]

I've got a Python program on a remote server that uploads a file to an AWS bucket when run. If I ssh onto the server and run it with the command sudo python3 /path/to/backup.py, it works as expected.
I'm writing a Python program to automate a bigger process which includes running backup.py. I created a function to do this using the paramiko library. This is where the command gets run:
ssh_stdin, ssh_stdout, ssh_stderr = self.ssh.exec_command('sudo python3 /path/to/backup.py', 1800)
logging.debug(f'ssh_stdout: {ssh_stdout.readline()}')
logging.debug(f'ssh_stderr: {ssh_stderr.readline()}')
My automation gives me this output:
ssh_stdout: Tue, 19 May 2020 14:36:43 INFO The COS endpoint is 9.11.200.206, writing to vault: SD_BACKUP_4058
The program doesn't do anything after that. When I log onto the server and check the logs of backup.py, I can see that it is still running and seems to be sitting at the file upload. This is the code it's stuck at:
s3_client.upload_file(
    Filename=BACKUP,
    Bucket=BUCKET_NAME,
    Key=SPLIT_FILE_NAME,
    Callback=pp(BACKUP),
    Config=config)
I can't understand why it's getting stuck here when started by my automation program and not when I run it from the command line in the terminal. I can't see anything in the logs that helps me. It just seems to be stuck at that point in its execution. Could it be something to do with the callback not getting returned?
You read only one line of the output.
logging.debug(f'ssh_stdout: {ssh_stdout.readline()}')
If the remote program produces a lot of output, then as soon as its output buffer fills up, the program hangs on its next attempt to write output.
If you want the program to finish, you have to keep reading the output.
The simplest way is to use readlines or read:
print(stdout.read())
But that's inefficient for large outputs like yours.
Instead you can read the output line by line:
for line in stdout:
    print(line.strip())
It gets more complicated when the command also produces error output, as you then have to read both output streams.
See Paramiko ssh die/hang with big output.
And you should check the error output in any case, for good error handling. See also:
Command executed with Paramiko does not produce any output
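Putting that together, a minimal sketch for the snippet in the question might look like this (it reuses self.ssh and logging from the question, and assumes the error output stays small enough not to fill its own buffer; for large error output see the linked answer above):
ssh_stdin, ssh_stdout, ssh_stderr = self.ssh.exec_command('sudo python3 /path/to/backup.py')
# Keep draining stdout so the remote process never blocks on a full output buffer.
for line in ssh_stdout:
    logging.debug(f'ssh_stdout: {line.strip()}')
# stdout is exhausted once the command finishes; now collect its exit status and error output.
exit_status = ssh_stdout.channel.recv_exit_status()
errors = ssh_stderr.read().decode()
if exit_status != 0 or errors:
    logging.error(f'backup.py failed (exit status {exit_status}): {errors}')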

No "nohup" python command results

I'm developing a real-time website. It's a map where the color of each city changes based on the current emotion.
I have the Python part, which is connected to my database.
So whenever I run the Python code, a new record is added to the database. It's streaming code, so it never ends.
The command that suits my Python code is nohup, since I want it running all the time.
I'm using Bluehost as the hosting server (VPS package).
I opened my SSH command line and ran the command:
So does this mean it's working? It created an out file,
but no record is added to the database!
What's the problem?
Thank you
The line Exit 2 means that there is a problem. You'll find a description in nohup.out (see the line that says it is ignoring input and appending output to nohup.out).
For a bit more clarity: the line that says Exit ... means the process called through nohup has terminated. The integer generally has a meaning (more on exit codes here), but you need to look at the actual nohup.out file before you'll learn anything.
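For example (the script name below is just a placeholder for whatever you ran), restart the script and then look at nohup.out for the traceback or error message behind the Exit 2:
$ nohup python my_streaming_script.py &   # placeholder for your actual command
$ tail nohup.out                          # any Python traceback or error message ends up here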

How do I get the log file a program creates when running it with subprocess.call()?

I work with Gaussian, which is a program for molecular geometry optimization, among other applications. Gaussian can take days to finish a single optimization, so I decided to write a Python program to send me an e-mail when it finishes running. The e-mail sending I figured out. The problem is that Gaussian automatically generates a log file and a chk file, which contain the actual results of the process, and by using subprocess.call(['command'], shell=False) both files are not generated.
I also tried to solve the problem with os.system(command), which gives me the .log file and the .chk file, but the e-mail is sent without waiting for the optimization to complete.
Another important thing: I have to run the entire process in the background, because, as I said at the beginning, it might take days and I can't leave the terminal open that long.
by using subprocess.call(['command'], shell=False) both files are not generated.
Your comment suggests that you are trying to run subprocess.call(['g09 input.com &'], shell=False), which is wrong.
Your code should raise FileNotFoundError. If you don't see it, it means stderr is hidden. You should fix that (make sure that you can see the output of sys.stderr.write('stderr\n')). By default, stderr is not hidden, i.e., the way you start your parent script is broken. To be able to disconnect from the session, try:
$ nohup python /path/to/your_script.py &>your_script.log &
or use screen or tmux.
shell=False (by the way, it is the default, so there is no need to pass it explicitly) should hint strongly that the call() function does not expect a shell command. And indeed, subprocess.call() accepts an executable and its parameters as a list instead; it does not run the shell:
subprocess.check_call(['g09', 'input.com', 'arg 2', 'etc'])
Note: check_call() raises an exception if g09 returns with a non-zero exit code (which usually indicates an error).
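Put together, the overall flow you described could look roughly like this (the g09 arguments and the send_email() helper are placeholders for the job and e-mail code you already have):
import subprocess

# check_call() blocks until Gaussian exits, so the .log and .chk files are complete
# before the mail is sent; it raises CalledProcessError on a non-zero exit code.
subprocess.check_call(['g09', 'input.com'])
send_email('Gaussian optimization finished')  # placeholder for your existing e-mail code
Run the whole parent script with the nohup command shown above so it keeps running after you close the terminal.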
