This may not be the best wording for the question. I am trying to see 2 files at once on my screen.
I run:
multitail ~/path/to/somefile.err ~/path/to/somefile.out
I have a python script with the following lines:
sys.stdout = open('~/path/to/somefile.out', 'a')
sys.stderr = open('~/path/to/somefile.err', 'a')
My multitail command seems to only output my .out file, regardless of which order I put the files in the command.
I verified that my script is indeed writing to the files. What is also interesting is that when I run the following command:
echo "text" >> ~/path/to/somefile.err
All of a sudden I see all the output from the .err file in the multitail screen (including that which didn't show up before)!
What is going on here that I can not see?
P.S. this is my first time using multitail so maybe I overlooked something simple. If it means anything, I am using CentOS 7.
You need to pass either buffering=0 (for unbuffered) or buffering=1 (for line-buffered - probably what you want) in your call to open.
The default is buffering=-1, which picks a system-dependent block size (something like 512 or 4096 bytes), so nothing will be written to the file until that many bytes have accumulated in the buffer.
Alternatively, you could leave buffering set to its default value, and call .flush() every time you want the data to appear in the file.
When you use >> in the shell, that will close the file when the command exits, and closing implies a flush. (You can defer the close by using exec >> file.txt)
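For example, here is a minimal sketch of the fixed redirection with line buffering (note that open() does not expand ~ by itself, hence the os.path.expanduser calls; the paths are the question's placeholders):
import os
import sys

# buffering=1 means line-buffered: each newline pushes the data to disk,
# so multitail sees it immediately.
sys.stdout = open(os.path.expanduser('~/path/to/somefile.out'), 'a', 1)
sys.stderr = open(os.path.expanduser('~/path/to/somefile.err'), 'a', 1)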
Related
I can successfully redirect my output to a file, however this appears to overwrite the file's existing data:
import subprocess
outfile = open('test','w') #same with "w" or "a" as opening mode
outfile.write('Hello')
subprocess.Popen('ls',stdout=outfile)
will remove the 'Hello' line from the file.
I guess a workaround is to store the output elsewhere as a string or something (it won't be too long), and append this manually with outfile.write(thestring) - but I was wondering if I am missing something within the module that facilitates this.
You sure can append the output of subprocess.Popen to a file, and I make daily use of it. Here's how I do it:
import subprocess

log = open('some file.txt', 'a')  # opened with 'a' so that data written to it will be appended
c = subprocess.Popen(['dir', '/p'], stdout=log, stderr=log, shell=True)
(of course, this is a dummy example, I'm not using subprocess to list files...)
By the way, other file-like objects (anything with a write() method, in particular) could replace this log object, so you can buffer the output and do whatever you want with it (write it to a file, display it, etc.) [but this seems not so easy, see my comment below].
Note: what may be misleading is that subprocess can end up writing before what you want to write: your own write() call sits in Python's userspace buffer, while the child process writes straight to the underlying file descriptor. So, here's the way to use this:
log = open('some file.txt', 'a')
log.write('some text, as header of the file\n')
log.flush() # <-- here's something not to forget!
c = subprocess.Popen(['dir', '/p'], stdout=log, stderr=log, shell=True)
So the hint is: do not forget to flush the output!
Well, the problem is that if you want the header to actually be a header, you need to flush it before the rest of the output is written to the file :D
Is the data in the file really overwritten? On my Linux host I see the following behavior:
1) running your code in a separate directory gives:
$ cat test
test
test.py
test.py~
Hello
2) if I add outfile.flush() after outfile.write('Hello'), the result is slightly different:
$ cat test
Hello
test
test.py
test.py~
But the output file contains Hello in both cases. Without an explicit flush() call, the buffer is flushed when the Python process terminates.
Where is the problem?
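In other words, a version of the question's snippet with the flush added makes the ordering deterministic (a minimal sketch of the fix discussed above):
import subprocess

outfile = open('test', 'w')
outfile.write('Hello\n')
outfile.flush()  # make sure 'Hello' reaches the file before the child writes
p = subprocess.Popen('ls', stdout=outfile)
p.wait()         # let the child finish before closing
outfile.close()  # closing flushes anything still buffered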
I'm looking for a way to monitor a file that is written to by a program on Linux. I found the tail -F command recommended here, and also less +FG. I tested it by running tail -F file in one terminal, and a simple python script:
import time

for i in range(20):
    print i
    time.sleep(0.5)
in another. I redirected the output to the file:
python script.py >> file
I expected that tail would track the file contents and update the display at fixed intervals; instead it only shows what was written to the file after the command terminates.
The same thing happens with less +FG and also if I watch the output with cat. I've also tried the usual truncating redirect, > instead of >>; less reports that the file was truncated, but still does not track it in real time.
Any idea why this doesn't work? (It's suggested here that it might be due to buffered writes, but since my script runs over 10 seconds, I suspect this might not be the cause)
Edit: In case it matters, I'm running Linux Mint 18.1
Python's standard output is buffered. If you only see all the output once the script is closed / done, that's definitely a buffering issue.
You can use this instead:
import time
import sys

for i in range(20):
    sys.stdout.write('%d\n' % i)
    sys.stdout.flush()
    time.sleep(0.5)
I've tested it and it prints values in real time. To overcome the buffering issue, I call .flush() after each .write() to force the buffer to be flushed.
Additional options from the comments:
Use the original print statement with sys.stdout.flush() afterwards (as sketched below)
Run the python script with python -u for unbuffered binary stdout and stderr
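For instance, the first option keeps the question's print statement unchanged (a minimal sketch):
import time
import sys

for i in range(20):
    print i
    sys.stdout.flush()  # push each line out immediately so tail -F sees it
    time.sleep(0.5)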
Regarding jon1467's answer (sorry, I can't comment on your answer), your understanding of redirection is wrong.
Try this :
dd if=/dev/urandom > test.txt
while looking at the file size with :
ls -l test.txt
You'll see the file grow while dd is running.
Vinny's answer is correct, python standard output is buffered.
The more common way to avoid the "buffering effect" you noticed is to flush stdout, as Vinny showed you.
You could also use the -u option to disable buffering for the whole python process, or you could just reopen standard output with a buffer size of 0, as below (in python2 at least):
import os, sys
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)  # 0 = unbuffered (Python 2)
The terminal automatically cut off the output as it scrolled up with each iteration.
By default the Mac terminal keeps a limited scrollback buffer, say 1,000 lines.
When you run a program and its output crosses those 1,000 lines, the oldest lines are dropped from memory, like a FIFO buffer queue.
Basically the answer to your question:
`Is there a way to store the output of the previously run command in a text file?` is no. Sorry.
You can re-run the program and preserve the output by redirecting it to another file, or increase the number of lines in the buffer (maybe make it unlimited).
You can use less to go through the output:
your_command | less
Your Enter key will take you down.
Also, press q to exit.
Or you can reroute the output to a file
In one terminal tab, run the program and redirect the output to an output.log file like this:
python program.py > output.log
In another tab, you can run tailf on the same log file and watch the output.
tailf output.log
To see the complete output open the log file in any text editor.
You can consider increasing the scrollback buffer.
Or
If you want to see the output and also send it to a file, use tee, e.g.,
spark-shell | tee tmp.out
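If you want to keep the output of successive runs rather than overwrite the file each time, tee's -a flag appends instead:
spark-shell | tee -a tmp.out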
Using Python 2.7 on Raspberry Pi B+, I want to call the command "raspistill -o image.jpg" from Python and find using this is recommended:
from subprocess import call
call(["raspistill","-o image.jpg"])
However, this doesn't work since the image.jpg isn't created although outside Python,
raspistill -o image.jpg
does create the file.
My next try was to first create the image file and write to it:
f = open("image.jpg","w")
call(["raspistill","-o image.jpg"], stdout = f)
Now the image file is created, but nothing is written to it: its size remains 0. So how can I get this to work?
Thank you.
You are passing -o image.jpg as a single argument. You should pass them as two. Here is how:
call(["raspistill", "-o", "image.jpg"])
The way you did it is like calling raspistill "-o image.jpg" (quotes included) from the command line, which will likely result in an error.
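Put together with the import fixed, the working call might look like this (a sketch; raspistill writes image.jpg itself, so no stdout redirection is needed):
from subprocess import call

# Each argument is its own list element.
call(["raspistill", "-o", "image.jpg"])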
First, you're creating and truncating the file image.jpg:
f = open("image.jpg","w")
Then you're sending raspistill's stdout to that same file:
call(["raspistill","-o image.jpg"], stdout = f)
When you eventually get around to closing the file in Python, image.jpg will just hold whatever raspistill wrote to stdout. Or, if you never close it, it'll be that minus whatever is still sitting in the buffer, which may be nothing at all.
Meanwhile, you're also trying to get raspistill to create a file with the same name, by passing it as part of the -o argument. You're doing that wrong, as Ionut Hulub's answer explains. Some programs will take "-o image.jpg", "-oimage.jpg", and "-o", "image.jpg" as meaning the same thing; some won't. But even if this one does, at best you've now got two programs fighting over what file gets created and written as image.jpg.
If raspistill has an option to write the still to stdout, then you can use that option, together with passing stdout=f, and making sure to close the file. Or, if it has an option to write to a filename, then you can use that option. But doing both is not going to work.
If you don't know how to split the command, you can use shlex.split. For example,
>>> import shlex
>>> args = shlex.split('raspistill -o image.jpg')
>>> args
['raspistill', '-o', 'image.jpg']
>>> call(args)
I am trying to write a small program in bash, and part of it needs to be able to get some values from a txt file where the entries are separated by newlines, and then add each line either to its own variable or to one array.
So far I have tried this:
FILE=$"transfer_config.csv"

while read line
do
    MYARRAY[$index]="$line"
    index=$(($index+1))
done < $FILE

echo ${MYARRAY[0]}
This just produces a blank line though, and not what was on the first line of the config file.
I don't get any errors either, which is why I am not too sure why this is happening.
The bash script is called through a python script using os.system("$HOME/bin/mcserver_config/server_transfer/down/createRemoteFolder"), but if I simply call it myself after the python program has made the file which the bash script reads, it works.
I am almost 100% sure it is not an issue with the directories, because pwd at the top of the bash script shows it in the correct directory, and the python program is also creating the data file in the correct place.
Any help is much appreciated.
EDIT:
I also tried subprocess.call("path_to_script", shell=True) to see if it would make a difference; I knew it was unlikely, and it didn't.
I suspect that when calling the bash script from python, having just created the file, you are not really finished with that file: you should either explicitly close the file or use a with construct.
Otherwise, the written data is still sitting in some buffer (in the file object, in the OS, or wherever). Only closing (or at least flushing) the file makes sure the data is actually in the file.
BTW, instead of os.system, you should use the subprocess module...
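A minimal sketch of that fix on the Python side (the CSV contents are placeholders; the script path is the one from the question):
import os
import subprocess

# The with block guarantees the file is flushed and closed before
# the bash script tries to read it.
with open('transfer_config.csv', 'w') as f:
    f.write('value1\nvalue2\nvalue3\n')  # placeholder contents

script = os.path.expanduser('~/bin/mcserver_config/server_transfer/down/createRemoteFolder')
subprocess.call([script])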