I have a Java program which runs on a particular port in Ubuntu. While the program is running I need to capture its output and save it to a log file. I currently use nohup to run it. When it fails, I don't know why it failed, because when the process restarts the nohup output gets overwritten. I want the process to restart and append to the log file so I can check it at a later date. Currently I don't know its state: is it running or has it failed?
I heard that it is pretty easy to do this using Python scripts.
Can anyone please help me with this?
Thanks in advance
Renjith Raj
You should use Python's subprocess module.
If your logs are not too big, you can simply use:
import subprocess

# for Python >= 2.7
result = subprocess.check_output(["/path/to/process_to_launch", "arg1"])

# for Python < 2.7
process = subprocess.Popen(["/path/to/process_to_launch", "arg1"],
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
str_out, str_err = process.communicate()
# in str_out you will find the standard output of your process
# in str_err you will find the standard error output of your process
But if your outputs are really big (think MB, not KB), this may cause memory problems...
In case of big output, use file handles for stdout and stderr:
out_file = open(out_file_name, "w")
err_file = open(err_file_name, "w")
process = subprocess.Popen(["/path/to/process_to_launch", "arg1"],
                           stdout=out_file, stderr=err_file)
return_code = process.wait()
out_file.close()
err_file.close()
And then, in out_file you'll find the output of the process, and in err_file the error output.
Of course, if you want to relaunch the process when it dies, put this code in a loop ;)
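For example, a minimal sketch of such a restart loop; the java command and the log file name here are placeholders, not taken from the question:

import subprocess
import time

while True:
    # open in append mode, so logs from earlier runs survive for later inspection
    with open("server.log", "a") as log_file:
        process = subprocess.Popen(["java", "-jar", "/path/to/server.jar"],
                                   stdout=log_file, stderr=subprocess.STDOUT)
        return_code = process.wait()
        log_file.write("process exited with code %d, restarting\n" % return_code)
    time.sleep(1)  # avoid a tight restart loop if the program crashes immediately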
Well the first problem I run into is that I have no idea how to respond to a command prompt.
import os

bat_location = "F:/SteamLibrary/steamapps/common/Terraria"
os.chdir(bat_location)
os.system("TerrariaServer.exe -steam -lobby friends -config serverconfig.txt")
All of this works, but when I then want to respond to the command prompt that asks me which world I want to run (the worlds are indexed by numbers from 1 to n, the number of worlds), I don't know how to respond to it.
I've looked all over Google, but the code doesn't seem to work.
So basically, what I need is to respond when the cmd asks me, for example:
Choose World:
I want to automatically respond with the number 10.
os.system("10")
This doesn't seem to do anything. I've also tried a lot with subprocesses, but I'm clearly lost.
Any help is appreciated!
EDIT NR. 1:
Welp, now I've tried this:
from subprocess import Popen, PIPE

bat_location = r'F:\SteamLibrary\steamapps\common\Terraria'
with Popen('TerrariaServer.exe -steam -lobby friends -config serverconfig.txt',
           cwd=bat_location, stdin=PIPE, shell=True) as proc:
    proc.stdin.write(b'10\n')
and all it does, I'm guessing by the response, is just loop around and around.
EDIT NR. 2:
I will close this and start a new thread, since my problem has completely diverged from the original.
From your last few comments, I realized the problem you were having with Popen. When you pass stdout=PIPE and stderr=PIPE, the outputs of the process are captured by the pipes, so you won't see them unless you read from the pipes.
So here's a simple example that you should be able to work with:
import subprocess
from subprocess import PIPE
from textwrap import dedent

with open('tmp.py', 'w') as f:
    f.write(dedent("""
        print(input())
        print(input())
    """))

with subprocess.Popen(['python3', 'tmp.py'], stdin=PIPE) as proc:
    proc.stdin.write(b'Hello, world!\n')     # write to the process' input
    proc.stdin.write(b'Good bye, world!\n')  # write to the process' input
If you want to read the data back from the process in Python, you can use stdout=PIPE, then use proc.stdout.read and the like, but you may have to be careful about how you get data from the blocking read functions.
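For instance, a small sketch along those lines, reusing the same tmp.py; communicate sidesteps the blocking-read pitfalls by writing stdin, closing it, and collecting all the output at once:

import subprocess
from subprocess import PIPE

with subprocess.Popen(['python3', 'tmp.py'], stdin=PIPE, stdout=PIPE) as proc:
    out, _ = proc.communicate(b'Hello, world!\nGood bye, world!\n')
print(out.decode())  # the two lines tmp.py echoed back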
Thank you in advance for the time you'll spend reading this question. I am learning Python and I searched a lot before asking here, so please forgive the newbie question.
So I created this script in Python 3, using the subprocess module, to search for another Python script's PID while knowing only the beginning of the script's name, and to terminate it nicely.
Basically I run Python clocks on my LCD screen through a Raspberry Pi and I2C, and this script terminates the clock script, clears the LCD and turns it off. The code of this "off" script is provided below.
The issue is that when I run it from the directory it sits in with a:
python3 off.py
It works perfectly: it gets and parses the PID and terminates the process, then turns off the LCD display.
Ideally I want to trigger it through telegram-cli, because I did that in bash and it worked nicely; I find it a nice feature. In Python it fails.
So I tested and it appears that when I try to launch it from another directory like this:
python3 ~/code/off.py
The grep subprocess then returns more than the single PID it returns when launched from the script's own directory. For instance (with python3 -v):
kill: failed to parse argument: '25977
26044'
The second PID number belongs to a subprocess created by the script; I can't figure out what it is, as it terminates when the script ends, but it makes the script fail its initial purpose.
Any help in understanding what is happening here would be really appreciated.
I came this far, as shown below, from two ugly lines of bash mixed with a call to a dummy four-line Python script, so I really feel I am getting close to a proper way of achieving my first real Python script.
I tried to decompose the script line by line in the interpreter and could not reproduce the error; everything behaves as expected. I only get this double PID result when running the script from an outside location.
Thank you in advance for any helpful insight on how to understand what is happening!
#!/usr/bin/env python3
import subprocess
import I2C_LCD_driver

# Defining variables for searched strings and string encoding
searched_process_name = 'lcd_'
cut_grep_out_of_results = 'grep'
result_string_encoding = 'utf-8'
mylcd = I2C_LCD_driver.lcd()
LCD_NOBACKLIGHT = 0x00
run = True

def kill_script():
    # Listing processes and getting the searched process
    ps_process = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
    grep_process = subprocess.Popen(["grep", "-i", searched_process_name],
                                    stdin=ps_process.stdout, stdout=subprocess.PIPE)
    # The .stdout.close() lines below allow the previous process to receive
    # a SIGPIPE if the next process exits.
    ps_process.stdout.close()
    # Cleaning the result until only the PID number is returned in a string
    grep_cutout = subprocess.Popen(["grep", "-v", cut_grep_out_of_results],
                                   stdin=grep_process.stdout, stdout=subprocess.PIPE)
    grep_process.stdout.close()
    awk = subprocess.Popen(["cut", "-c", "10-14"],
                           stdin=grep_cutout.stdout, stdout=subprocess.PIPE)
    grep_cutout.stdout.close()
    output = awk.communicate()[0]
    clean_output = output.decode(result_string_encoding)
    clean_output_no_new_line = clean_output.rstrip()
    clean_output_no_quote = clean_output_no_new_line.replace("'", '')
    PID = clean_output_no_quote
    # Terminating the LCD script process
    subprocess.Popen(["kill", "-9", PID])

while run:
    kill_script()
    # Cleaning and shutting off LCD screen
    mylcd.lcd_clear()
    mylcd.lcd_device.write_cmd(LCD_NOBACKLIGHT)
    break
I found out the reason for this weird behavior. It was an error on my end:
I forgot that I had named some directories with a string containing the characters I was running grep -i against, which caused the double result when running the script from outside its directory using its full path.
Turns out the script runs pretty well using subprocess.
So in the end, I renamed the scripts I wanted to terminate with disp_ rather than lcd_, and added shell=False to my subprocess calls to make sure there was no risk of unintentionally sending the output to bash while running the script.
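As an aside, and not part of the original script: the whole ps/grep/cut pipeline can plausibly be collapsed into a single pgrep call. A minimal sketch, assuming the renamed disp_ prefix:

import subprocess

# pgrep -f matches against the full command line, replacing ps | grep | cut
pgrep = subprocess.run(["pgrep", "-f", "disp_"], stdout=subprocess.PIPE)
for pid in pgrep.stdout.decode("utf-8").split():
    subprocess.run(["kill", pid])  # plain SIGTERM is gentler than kill -9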
I have a script that uses a really simple file-based IPC to communicate with another program. I write a tmp file with the new content and mv it onto the IPC file to keep things atomic (the other program listens for rename events).
But here comes the catch: this works 2 or 3 times, but then the exchange gets stuck.
time.sleep(10)
# check lsof => target file not opened
subprocess.run(
    "mv /tmp/tempfile /tmp/target",
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,
    shell=True,
)
# check lsof => target file STILL open
time.sleep(10)
/tmp/tempfile gets prepared anew before every write.
The first run results in:
$ lsof /tmp/target
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 1714 <user> 3u REG 0,18 302 10058 /tmp/target
which leaves it open until I terminate the main Python program. Consecutive runs change the content, the inode and the file descriptor as expected, but it is still open, which I would not expect from a mv.
The file finally gets closed when the Python program containing the lines above exits.
EDIT:
Found the bug: mishandling of tempfile.mkstemp(). See: https://docs.python.org/3/library/tempfile.html#tempfile.mkstemp
I created the tempfile like so:
_fd, temp_file_path = tempfile.mkstemp()
where I discarded the file descriptor _fd, which mkstemp returns already open. I did not close it, so it was left open even after the move. This resulted in an open target, and since I was only lsofing the target, I did not see that the tempfile was already open. This would be the corrected version:
import os
import tempfile

fd, temp_file_path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as f:  # wrap the raw descriptor so it is closed on exit
    f.write(content)
# ... mv/rename via shell execution/shutil/pathlib
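For the rename step itself, a minimal sketch of a pure-Python alternative (my suggestion, not from the original code): os.replace maps to an atomic rename(2) on the same filesystem, so the listening program still sees a single rename event:

import os

os.replace(temp_file_path, "/tmp/target")  # atomic on the same filesystem, no shell needed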
Thank you all very much for your help and your suggestions!
I wasn't able to reproduce this behavior. I created a file /tmp/tempfile and ran a Python script with the subprocess.run call you give, followed by a long sleep. /tmp/target was not in use, nor did I see any unexpected open files in lsof -p <pid>.
(edit) I'm not surprised at this, because there's no way that your subprocess command is opening the file: mv does not open its arguments (you can check this with ltrace), and subprocess.run does not parse its argument or do anything with it besides pass it along to be exec'd.
However, when I added some lines to open a file and write to it and then move that file, I see the same behavior you describe. This is the code:
import subprocess
import time

out = open('/tmp/tempfile', 'w')
out.write('hello')
subprocess.run(
    "mv /tmp/tempfile /tmp/target",
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,
    shell=True,
)
time.sleep(5000)
In this case, the file is still open because it was never closed, and even though it's been renamed the original file handle still exists. My bet would be that you have something similar in your code that's creating this file and leaving open a handle to it.
Is there any reason why you don't use shutil.move? Otherwise it may be necessary to wait for the mv command to finish moving and then kill it, read stdin, and run something like:
p = subprocess.Popen(...)  # Popen rather than run: run() returns a CompletedProcess, which has no terminate()
# wait to finish moving/read from stdin
p.terminate()
Of course terminate would be a bit harsh.
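For reference, a minimal sketch of the shutil.move suggestion, using the paths from the question:

import shutil

# pure Python, no subprocess; within one filesystem this is an os.rename underneath
shutil.move("/tmp/tempfile", "/tmp/target")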
Edit: depending on your use case, rsync, which is not part of Python, may be an elegant solution to keep your data synced over the network without writing a single line of code.
You say it is still open by mv, but your lsof result shows it open by python. As mv runs as a subprocess, check whether the PID is the same as that of the main python process; maybe it is another python process.
I would like to run a section of code as long as a forked subprocess (rsync) is running. This is how I did it in my code:
rsync_proc = subprocess.Popen(proc_args, stdout=subprocess.PIPE)
while rsync_proc.poll() is None:
    sys.stdout.write('\r' +
                     rsync_progress_report(source_size_kb, dest, start))
    sys.stdout.flush()
    time.sleep(1)
For some reason, this causes the rsync subprocess to get stuck when it's almost finished. The while loop just keeps looping, with rsync_proc.poll() returning None.
When I do run this same rsync call without the while loop code, it finishes without a problem.
Thanks in advance.
If you attach strace to your stuck rsync child process, you'll probably see it's blocked writing to stdout.
If it's blocked writing to stdout, it's probably because the pipe is full because you never read from it.
Try reading from the pipe and just discarding the output - or, if you really don't want the output, don't connect it to a pipe in the first place.
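For example, a minimal sketch of that second suggestion, reading and discarding the output so the pipe can never fill up (proc_args is assumed from your code):

import subprocess

rsync_proc = subprocess.Popen(proc_args, stdout=subprocess.PIPE,
                              universal_newlines=True)
for line in rsync_proc.stdout:  # yields lines as they arrive, ends at EOF
    pass  # discard them, or parse them for the progress report
return_code = rsync_proc.wait()
# or, if the output is truly unwanted, pass stdout=subprocess.DEVNULL instead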
OK, so the problem I'm trying to solve is this:
I need to run a program with some flags set, check on its progress and report back to a server. So I need my script to avoid blocking while the program executes, but I also need to be able to read the output. Unfortunately, I don't think any of the methods available on Popen will read the output without blocking. I tried the following, which is a bit hacky (are we allowed to read and write to the same file from two different objects?):
import time
import subprocess

with open("stdout.txt", "wb") as outf:
    with open("stderr.txt", "wb") as errf:
        command = ['Path\\To\\Program.exe', 'para', 'met', 'ers']
        p = subprocess.Popen(command, stdout=outf, stderr=errf)
        isdone = False
        while not isdone:
            with open("stdout.txt", "rb") as readoutf:  # this feels wrong
                for line in readoutf:
                    print(line)
            print("waiting...\r\n")
            if p.poll() is not None:
                isdone = True  # was `done = True`, which never ended the loop
            time.sleep(1)
        output = p.communicate()[0]
        print(output)
Unfortunately, Popen doesn't seem to write to my file until after the command terminates.
Does anyone know of a way to do this? I'm not dedicated to using python, but I do need to send POST requests to a server in the same script, so python seemed like an easier choice than, say, shell scripting.
Thanks!
Will
Basically you have 3 options:
1. Use threading to read in another thread without blocking the main thread (a sketch of this follows below).
2. select on stdout and stderr instead of communicate. This way you can read just when data is available and avoid blocking.
3. Let a library solve this; twisted is an obvious choice.
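A minimal sketch of the threading option, assuming command is the same argument list used in your question:

import subprocess
import threading

def drain(pipe, lines):
    # runs in a background thread so the main thread never blocks on reads
    for line in iter(pipe.readline, b''):
        lines.append(line)
    pipe.close()

lines = []
p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
t = threading.Thread(target=drain, args=(p.stdout, lines), daemon=True)
t.start()
# the main thread stays free to poll p, inspect lines, and POST progress updates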
You can use the twisted library for this use case; I think it will be a great fit.
http://www.cs.lth.se/EDA046/assignments/assignment4/twisted/listings/process/quotes.py
Documentation: http://www.cs.lth.se/EDA046/assignments/assignment4/twisted/process.html