Autorun python script save output to txt file raspberry pi - python

I have an issue with my Raspberry Pi that starts up a Python script on boot. How do I save the printed output to a file when it is running on boot? I found the command below on the internet, but it doesn't seem to write the printed text; it creates the file, but the content stays empty.
sudo python /home/pi/python.py > /home/pi/output.log

It does write its output to the file, but you cannot see anything until the Python script has finished executing, because the output buffer is never flushed.
If you redirect the output to a file within your Python script, you can periodically call flush in your code to push the output through to the file as and when you wish, something like this:
import sys
import time

outputFile = "output.txt"
with open(outputFile, "w+") as sys.stdout:
    while True:
        print("some output")
        sys.stdout.flush()  # force buffer content out to the file
        time.sleep(5)       # wait 5 seconds
If you want to set the output back to the terminal afterwards, save a reference to the original stdout first, like this:
import sys
import time

outputFile = "output.txt"
original_stdout = sys.stdout
with open(outputFile, "w+") as sys.stdout:
    print("some output in file")
    sys.stdout.flush()
    time.sleep(5)

sys.stdout = original_stdout
print("back in terminal")

Related

Run command in CMD via python and extract the data

I am trying to use the code below to run a command and extract the data from the cmd.
The file with the commands and data is a txt file (let me know if I should change it, or use Excel if that's better).
The commands look something like this: ping "host name", which would result in some data in the cmd. There is a list of these in the file, so it would ping "hostname1", then on line two ping "hostname2", etc.
THE QUESTION: I want it to run every line individually, extract the results from the cmd, and store them in a txt or Excel file - ideally I want all the results in the same file. Is this possible, and how?
Here is the code so far:
import pathlib
import subprocess

root_dir = pathlib.Path(r"path to file here")
cmds_file = root_dir.joinpath('actual file here with commands and data')
#fail = []
cmds = cmds_file.read_text().splitlines()
try:
    for cmd in cmds:
        args = cmd.split()
        print(f"\nRunning: {args[0]}")
        output = subprocess.check_output(args)
        print(output.decode("utf-8"))
        out_file = root_dir.joinpath(f"Name of file where I want results printed in")
        out_file.write_text(output.decode("utf-8"))
except:
    pass
You can use a module called subprocess: import subprocess
Then you can define a variable like this:
run = subprocess.run(command_to_execute, capture_output=True)
After that you can do print(run.stdout) to print the command output.
If you want to write it to a file, you can do this after you run the above code (note that run.stdout is bytes here, so decode it first or open the file in binary mode):
with open("PATH TO YOUR FILE", "w") as file:
    file.write(run.stdout.decode())
This should write a file which contains the output of your command.
If you later want to add more output to the same file, reopen it in "a" (append) mode:
with open("PATH TO YOUR FILE", "a") as file:
    file.write("\n" + run.stdout.decode())
This should append the data to your file.
The with statement closes the file for you when the block ends; remember to close any file you open by hand, as best practice. I have some bad memories about not closing a file after I opened it :D
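Putting the pieces of this answer together, here is a hedged sketch of one way to run each command from the txt file and collect all the results in a single file (the names commands.txt and results.txt are placeholders, adjust them to your setup):
import shlex
import subprocess

with open("commands.txt") as cmds, open("results.txt", "w") as results:
    for line in cmds:
        line = line.strip()
        if not line:
            continue  # skip blank lines in the commands file
        run = subprocess.run(shlex.split(line), capture_output=True, text=True)
        results.write("# " + line + "\n")  # record which command produced the output below
        results.write(run.stdout)
        results.write(run.stderr)          # keep error output too, so failed commands are visible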
My plan is simple:
Open input, output file
Read input file line by line
Execute the command and direct the output to the output file
#!/usr/bin/env python3
import pathlib
import shlex
import subprocess

cmds_file = pathlib.Path(__file__).with_name("cmds.txt")
output_file = pathlib.Path(__file__).with_name("out.txt")

with open(cmds_file, encoding="utf-8") as commands, open(output_file, "w", encoding="utf-8") as output:
    for command in commands:
        command = shlex.split(command)
        output.write(f"\n# {shlex.join(command)}\n")
        output.flush()
        subprocess.run(command, stdout=output, stderr=subprocess.STDOUT, encoding="utf-8")
Notes
Use shlex.split() to simulate the bash shell's command splitting (it keeps quoted arguments together).
The line output.write(...) is optional; you can remove it.
With subprocess.run(...), stdout=output redirects the command's output to the file, so you don't have to do anything else.
Update
I updated the subprocess.run line to redirect stderr to stdout, so errors will show up in the output file as well.
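For reference, the cmds.txt file read by the script above just contains one command per line, as described in the question, for example:
ping hostname1
ping "host name 2"
(the quoted form is why shlex.split() is used instead of a plain str.split()).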

Read from file while it is being written to in Python?

I followed the solution proposed here
In order to test it, I used two programs, writer.py and reader.py respectively.
# writer.py
import time

with open('pipe.txt', 'w', encoding='utf-8') as f:
    i = 0
    while True:
        f.write('{}'.format(i))
        print('I wrote {}'.format(i))
        time.sleep(3)
        i += 1
# reader.py
import time, os

# Set the filename and open the file
filename = 'pipe.txt'
file = open(filename, 'r', encoding='utf-8')

# Find the size of the file and move to the end
st_results = os.stat(filename)
st_size = st_results[6]
file.seek(st_size)

while 1:
    where = file.tell()
    line = file.readline()
    if not line:
        time.sleep(1)
        file.seek(where)
    else:
        print(line)
But when I run:
> python writer.py
> python reader.py
the reader only prints the lines after the writer has exited (i.e. when I kill the process).
Is there any way to read the contents while they are being written?
[EDIT]
The program that actually writes to the file is an .exe application and I don't have access to the source code.
You need to flush your writes/prints to files, or they'll default to being block-buffered (so you'd have to write several kilobytes before the user-mode buffer would actually be sent to the OS for writing).
The simplest solution is to call .flush() after the write calls:
f.write('{}'.format(i))
f.flush()
There are 2 different problems here:
The OS and file system must allow concurrent access to a file. If you get no error, that is the case here, but on some systems it could be disallowed.
The writer must flush its output for it to reach the disk so that the reader can find it. If you do not, the output stays in an in-memory buffer until that buffer is full, which can require several kilobytes of output.
So writer should become:
# writer.py
import time

with open('pipe.txt', 'w', encoding='utf-8') as f:
    i = 0
    while True:
        f.write('{}'.format(i))
        f.flush()
        print('I wrote {}'.format(i))
        time.sleep(3)
        i += 1

Python3: ValueError: I/O operation

I just want to redirect the print output to a file. My code is as below:
import sys
# define the log file that receives your log info
log_file = open("message.log", "w")
# redirect print output to log file
sys.stdout = log_file
print ("Now all print info will be written to message.log")
# any command line that you will execute
...
log_file.close()
print ("Now this will be presented on screen")
After executing the script, I get this error:
[~/Liaohaifeng]$ python3 log.py
Traceback (most recent call last):
File "log.py", line 14, in <module>
print ("Now this will be presented on screen")
ValueError: I/O operation on closed file.
Why does this happen? If I update my script as below:
import sys
# make a copy of original stdout route
stdout_backup = sys.stdout
# define the log file that receives your log info
log_file = open("message.log", "w")
# redirect print output to log file
sys.stdout = log_file
print ("Now all print info will be written to message.log"
# any command line that you will execute
...
log_file.close()
# restore the output to initial pattern
sys.stdout = stdout_backup
print ("Now this will be presented on screen")
it will be OK. So could you please kindly explain the theory behind this issue?
As mentioned in the comments, print cannot write to a closed file handle, and you have closed sys.stdout, potentially breaking any print invoked after it was closed. That may happen even without your knowledge, e.g. somewhere in imported code. That's why you shouldn't fiddle with sys.* variables (or any variables you didn't create, really) unless you absolutely need to. There is a proper way to redirect print output to a file, and it goes like this:
log_file = open('message.log', 'w')
print('Stuff to print in the log', file=log_file)
log_file.close()
Or even safer like this:
with open('message.log', 'w') as log_file:
    # Do stuff
    print('Stuff to print in the log', file=log_file)
The handle will automatically flush and close when the with block finishes.
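As an aside, not part of the answer above: the standard library also provides contextlib.redirect_stdout, which temporarily sends everything print writes to a file and restores sys.stdout automatically when the block exits. A minimal sketch:
from contextlib import redirect_stdout

with open('message.log', 'w') as log_file, redirect_stdout(log_file):
    print('Stuff to print in the log')  # goes to message.log

print('Now this will be presented on screen')  # sys.stdout is restored here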

Reading last error message in log file

In Python 2.7 I have the following code inside a certain loop:
file = open("log.txt", 'a+')
last_position = file.tell()
subprocess.Popen(["os_command_producing_error"], stderr = file)
file.seek(last_position)
error = file.read()
print(error) # example of some action with the error
The intention is that the error just written to stderr gets printed (for example), while the file keeps the whole record.
I am a beginner in Python and I am not clear on what happens with stderr = file.
My problem is that error keeps coming back empty, even though errors keep getting logged in the file.
Could someone explain why?
I have tried closing and reopening the file, and calling file.flush() right after the subprocess line, but the effect is the same.
Edit: The code in the answer below makes sense to me and it seems to work for the author of that post. For me (on Windows) it is not working: it gives an empty err and an empty log.txt. If I run it line by line (e.g. while debugging) it does work. How can I understand and solve this problem?
Edit: I replaced Popen with call and now it works. I guess call waits for the subprocess to finish before continuing with the script.
error is empty because you are reading too soon, before the process has had a chance to write anything to the file. Popen() starts a new process; it does not wait for it to finish.
call() is equivalent to Popen().wait(), which does wait for the child process to exit; that is why you see a non-empty error in this case (if the subprocess writes anything to stderr at all).
#!/usr/bin/env python
import subprocess

with open("log.txt", 'a+') as file:
    subprocess.check_call(["os_command_producing_error"], stderr=file)
    error = file.read()
    print(error)
You should be careful with mixing buffered (.read()) and unbuffered I/O (subprocess).
You don't need the external file here just to read the error:
#!/usr/bin/env python
import subprocess
error = subprocess.check_output(["os_command_producing_error"],
                                stderr=subprocess.STDOUT)
print(error)
It merges stderr and stdout and returns the output.
If you don't want to capture stdout and only want stderr, you could use Popen.communicate():
#!/usr/bin/env python
import subprocess
p = subprocess.Popen(["os_command_producing_error"], stderr=subprocess.PIPE)
error = p.communicate()[1]
print(error)
You could both capture stderr and append it to a file:
#!/usr/bin/env python
import subprocess

error = bytearray()
p = subprocess.Popen(["os_command_producing_error"],
                     stderr=subprocess.PIPE, bufsize=1)
with p.stderr as pipe, open('log.txt', 'ab') as file:
    for line in iter(pipe.readline, b''):
        error += line
        file.write(line)
p.wait()
print(error)
See Python: read streaming input from subprocess.communicate().
Try the following code:
file = open("log.txt", 'a+')
sys.stderr = file
last_position = file.tell()
try:
subprocess.call(["os_command_producing_error"])
except:
file.close()
err_file = open("log.txt", 'r')
err_file.seek(last_position)
err = err_file.read()
print err
err_file.close()
sys.stderr maps the standard error stream, just like sys.stdout (standard output) and sys.stdin (standard input).
Assigning it maps standard error to the file, so all standard error output will be written to log.txt.
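A minimal sketch of my own variation on the approaches above (not part of the original answer): rather than reassigning sys.stderr, pass the log file directly as the stderr argument and seek back afterwards, using call() so the read only happens after the command has finished:
import subprocess

with open("log.txt", "a+") as log:
    last_position = log.tell()  # remember where the new output will start
    subprocess.call(["os_command_producing_error"], stderr=log)  # call() blocks until the command exits
    log.seek(last_position)
    error = log.read()  # read only what the command just wrote

print(error)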

Named pipe won't block

I'm trying to make multiple programs communicate using named pipes under Python.
Here's how I'm proceeding:
import os

os.mkfifo("/tmp/p")
file = os.open("/tmp/p", os.O_RDONLY)
while True:
    line = os.read(file, 255)
    print("'%s'" % line)
Then, after starting it, I send some simple data through the pipe:
echo "test" > /tmp/p
I expected test\n to show up here, and the Python program to block at os.read() again.
What actually happens is that Python prints 'test\n' and then prints '' (empty string) endlessly.
Why is that happening, and what can I do about it?
From http://man7.org/linux/man-pages/man7/pipe.7.html :
If all file descriptors referring to the write end of a pipe have been
closed, then an attempt to read(2) from the pipe will see end-of-file
From https://docs.python.org/2/library/os.html#os.read :
If the end of the file referred to by fd has been reached, an empty string is returned.
So, you're closing the write end of the pipe (when your echo command finishes) and Python is reporting that as end-of-file.
If you want to wait for another process to open the FIFO, then you could detect when read() returns end-of-file, close the FIFO, and open it again. The open should block until a new writer comes along.
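For illustration, a rough sketch of that reopen-on-EOF approach (my own, not from the answer; it assumes the FIFO /tmp/p from the question already exists):
import os

path = "/tmp/p"
while True:
    fd = os.open(path, os.O_RDONLY)  # blocks until a writer opens the FIFO
    while True:
        data = os.read(fd, 255)
        if not data:  # empty read: every writer has closed its end
            break
        print("'%s'" % data)
    os.close(fd)  # close and loop around to block on the next writer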
As an alternative to user9876's answer, you can open your pipe for writing right after creating it, which allows it to stay open for writing at all times.
Here's an example contextmanager for working with pipes:
import contextlib
import os

@contextlib.contextmanager
def pipe(path):
    try:
        os.mkfifo(path)
    except FileExistsError:
        pass
    try:
        with open(path, 'w'):  # dummy writer keeps the FIFO open for writing
            with open(path, 'r') as reader:
                yield reader
    finally:
        os.unlink(path)
And here is how you use it:
with pipe('myfile') as reader:
    while True:
        print(reader.readline(), end='')
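On the writing side, any other process can simply open the same path and write to it, just like the echo "test" > /tmp/p example from the question does from the shell; a hypothetical Python writer might look like:
with open('myfile', 'w') as w:
    w.write('test\n')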
