Writing from file_A to file_B using IDLE always makes IDLE print out the lines as they are being written. If the file is very large, the process can take hours to finish.
How can I make IDLE not print anything while the process of writing to a new file is ongoing, in order to speed things up?
A simple piece of code to demonstrate that IDLE prints the lines as they are being written:
file = open('file.csv', 'r')
copy = open('copy.csv', 'w')
for i in file:
    i = i.split()
    copy.write(str(i))
I assume you are using Python 3, where write returns the number of characters written to the file; IDLE's shell prints this return value when you call write interactively. In Python 2, write returns None, which IDLE's shell does not print.
The workaround is to assign the return value of write to a dummy variable:
dummy = f.write("my text")
For your example, the following code should work:
file = open('file.csv', 'r')
copy = open('copy.csv', 'w')
for i in file:
    i = i.split()
    dummy = copy.write(str(i))
I added two screenshots for all of you to see the difference between the writes in Python 2 and Python 3 on my system.
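The screenshots are not reproduced here, but the difference is easy to show as an interactive transcript (illustrative; the file name is made up, the behavior is what matters):

# Python 3 shell
>>> f = open('demo.txt', 'w')
>>> f.write('my text')   # write() returns the character count...
7                        # ...and the shell echoes it
# Python 2 shell
>>> f = open('demo.txt', 'w')
>>> f.write('my text')   # write() returns None, so nothing is echoed
>>>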
I can successfully redirect my output to a file, however this appears to overwrite the file's existing data:
import subprocess
outfile = open('test','w') #same with "w" or "a" as opening mode
outfile.write('Hello')
subprocess.Popen('ls',stdout=outfile)
will remove the 'Hello' line from the file.
I guess a workaround is to store the output elsewhere as a string or something (it won't be too long) and append it manually with outfile.write(thestring), but I was wondering whether I am missing something in the module that facilitates this.
You certainly can append the output of subprocess.Popen to a file; I make daily use of it. Here's how I do it:
log = open('some file.txt', 'a') # so that data written to it will be appended
c = subprocess.Popen(['dir', '/p'], stdout=log, stderr=log, shell=True)
(of course, this is a dummy example, I'm not using subprocess to list files...)
By the way, other file-like objects (anything with a write() method, in particular) could replace this log item, so you can buffer the output and do whatever you want with it (write to a file, display it, etc.) [but this is not as easy as it sounds; see my comment below].
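The bracketed caveat is because subprocess needs a real OS-level file descriptor (it calls fileno() on whatever you pass as stdout), so a purely in-memory object such as StringIO will not work there. A sketch of the usual workaround, capturing through a pipe and then forwarding the data to any Python-level object you like:

import subprocess

# capture the child's output through a pipe...
p = subprocess.Popen(['ls'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
data, _ = p.communicate()
# ...then hand it to anything with a write() method (a file, a buffer, etc.)
with open('log.txt', 'ab') as log:
    log.write(data)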
Note: what may be misleading is that, for some reason I don't understand, the subprocess output will end up before what you write yourself. So here's the way to use this:
log = open('some file.txt', 'a')
log.write('some text, as header of the file\n')
log.flush() # <-- here's something not to forget!
c = subprocess.Popen(['dir', '/p'], stdout=log, stderr=log, shell=True)
So the hint is: do not forget to flush the output!
Well, the problem is that if you want the header to actually be a header, you need to flush before the rest of the output is written to the file :D
Is the data in the file really overwritten? On my Linux host I see the following behavior:
1) running your code in a separate directory gives:
$ cat test
test
test.py
test.py~
Hello
2) if I add outfile.flush() after outfile.write('Hello'), the result is slightly different:
$ cat test
Hello
test
test.py
test.py~
But the output file contains Hello in both cases. Without an explicit flush() call, the file's buffer is only flushed when the Python process terminates.
Where is the problem?
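To see the buffering described above for yourself, here is a minimal sketch (assuming a Unix-like system where the echo command is available):

import subprocess

out = open('test', 'w')
out.write('Hello')   # lands in Python's userspace buffer, not in the file yet
# out.flush()        # uncomment to force 'Hello' into the file right now
subprocess.call(['echo', 'from the child'], stdout=out)  # child writes straight to the fd
out.close()          # buffer flushed only here, so 'Hello' may land after the child's output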
I'm trying to read a log file, written line by line, via readline.
I'm surprised to observe the following behaviour (the code was executed in the interpreter, but the same happens when variations of it are run from a file):
f = open('myfile.log')
line = f.readline()
while line:
    print(line)
    line = f.readline()
# --> This displays all lines the file contains so far, as expected
# At this point, I open the log file with a text editor (Vim),
# add a line, save and close the editor.
line = f.readline()
print(line)
# --> I'm expecting to see the new line, but this does not print anything!
Is this behaviour standard? Am I missing something?
Note: I know there are better ways to deal with a frequently updated file, for instance with generators, as pointed out here: Reading from a frequently updated file. I'm just interested in understanding the issue with this precise use case.
For your specific use case, the explanation is that Vim uses a write-to-temp strategy. This means that all writing operations are performed on a temporary file.
Your script, on the other hand, keeps reading from the original file, so it does not see any change to it.
To further test, instead of Vim, you can try to directly write on the file using:
echo "Hello World" >> myfile.log
You should see the new line from python.
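If you want to confirm from Python that the file was replaced rather than modified in place, you can compare inode numbers (a Unix-only sketch):

import os

f = open('myfile.log')
# ... later, after editing the file in Vim ...
if os.fstat(f.fileno()).st_ino != os.stat('myfile.log').st_ino:
    # the path now points at a different file: reopen to keep following it
    f.close()
    f = open('myfile.log')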
For following your file, you can use code like this:

import time

f = open('myfile.log')
while True:
    line = f.readline()
    if not line:
        time.sleep(0.1)  # no new data yet; wait briefly and retry
        continue
    print(line)
I have a Python program which is supposed to calculate changes based on a value written to a temporary file (e.g. "12345\n"). It is always an integer.
I have tried different methods to read the file, but Python wasn't able to read it. So I had the idea of executing a shell command ("cat") that returns the content. When I run it in the shell it works fine, but from Python the output I get is empty. Then I tried writing a bash script and then a PHP script that read the file and return the value. When Python calls them over the shell, the output is empty as well.
I wondered whether this was a general problem in Python, so I made my scripts return the content of other temporary files, which worked fine.
Inside my scripts I am able to do calculations with the value, and in the shell the output is exactly as expected, but not when called via Python. I also noticed that I don't get the value from my extra scripts when they are called by Python (I tried writing it to another file; the file was updated but empty).
The file I am trying to read is in the /tmp directory and is written to several times per second by another script.
I am looking for a solution (open for new ideas) in which I end up having the value of the file in a python variable.
Thanks for the help
Here are my programs:
python:
# python script
import subprocess

stdout = subprocess.Popen(["php /path/to/my/script.php"], shell=True,
                          stdout=subprocess.PIPE).communicate()[0].decode("utf-8")

# other things I tried
#with open("/tmp/value.txt", "r") as file:
#    stdout = file.readline()  # output = "--"
#stdout = os.popen("cat /tmp/value.txt").read()  # output = "--"
#stdout = subprocess.check_output(["php /path/to/my/script.php"], shell=True, stdout=subprocess.PIPE).decode("utf-8")  # output = "--"

print(str("-" + stdout + "-"))  # output = "--"
php:
<?php
# php script
$valueFile = fopen("/tmp/value.txt", "r");
$value = trim(fgets($valueFile), "\n");
fclose($valueFile);
echo $value; # output in the shell is the value of $value
Edit: context: my python script is started by another python script, which listens for commands from an apache server on the pi. The value I want to read comes from a "1wire" device that listens for S0-signals.
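For what it's worth, since the writer script is not shown, one plausible cause (an assumption) is that the writer truncates /tmp/value.txt before rewriting it several times per second, so a reader that opens it at the wrong moment sees an empty file. A sketch that retries until a non-empty value appears (read_value is a hypothetical helper, not from the original post):

import time

def read_value(path='/tmp/value.txt', retries=10, delay=0.05):
    # retry briefly: the writer may have just truncated the file
    for _ in range(retries):
        with open(path) as fh:
            content = fh.read().strip()
        if content:
            return int(content)
        time.sleep(delay)
    raise RuntimeError('no value available in ' + path)

value = read_value()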
I am making a game in Python 2.7 for fun and am trying to make a map to go along with it. I am using file I/O to read and write the map, and I have also set Notepad++ to silent update. However, I can only see the changes once my program has fully run, and I want to view the file as it is updated.
I have this code which I am testing with:
from time import sleep

# raw strings so the backslashes in the Windows paths are taken literally
map = open(r'C:\Users\Ryan\Desktop\Codes\Python RPG\Maps\map.txt', 'r+')
map.truncate()
print "file deleted"
sleep(1)

worldMap = open(r'C:\Users\Ryan\Desktop\Codes\Python RPG\Maps\worldMap.txt', 'r')
for line in worldMap:
    map.write(line)
print "file updated"

worldMap.close()
map.close()
Any help is greatly appreciated :)
By default Python uses buffered I/O: written data is stored in memory before actually being written to the file. Calling the file's flush method forces the buffered data to be written out.
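Applied to the map-copying loop above, a minimal sketch (hypothetical file names) would flush after each write so an external viewer sees the changes immediately:

out = open('map.txt', 'w')
for line in open('worldMap.txt'):
    out.write(line)
    out.flush()  # push the buffered data out so other programs see it now
out.close()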
Edit: turns out it is an error with my C program. I changed my printf to print only a preset string, redirected it to a file, and the extra characters were still there. I still don't know why, though.
Hi, I'm writing a Python script to run analysis on a C program I'm parallelizing. Right now I have the number of processors used and the iterations I want to pass to my C program in a separate file called tests. I'm extremely new to Python; here's the sample code I wrote to figure out how to write results to a file, which will eventually be a .csv file.
#!/usr/bin/env python
import subprocess

mpiProcess = "runmpi"
piProcess = "picalc"

tests = open("tests.txt")
analysis = open("analysis.txt", "w")

def runPiCalc(numProcs, numIterations):
    numProcs = str(numProcs)
    numIterations = str(numIterations)
    args = (mpiProcess, piProcess, numProcs, numIterations)
    popen = subprocess.Popen(args, stdout=subprocess.PIPE)
    popen.wait()
    output = popen.stdout.read()
    return output

def runTest(testArgs):
    testProcs = testArgs[0]
    testIterations = testArgs[1]
    output = runPiCalc(testProcs, testIterations)
    appendResults(output)

def appendResults(results):
    print results
    analysis.write(results + '\n')

for testLine in tests:
    testArgs = testLine.split()
    runTest(testArgs)

tests.close()
analysis.close()
My problem right now is that when I "print results" to stdout, the output comes out as expected and I get 3.14blablablablawhatever. When I check the analysis.txt file, though, I get [H[2J (plus weirder characters encoded as ESC that don't show on the web) at the start of every line before my pi calculation shows up. I can't figure out why that is. Why would file.write produce different output than print? Again, this is my first time with Python, so I'm probably just missing something easy.
This is on an Ubuntu server I'm sshing into, btw.
Here's the tests.txt and a picture of how the characters look on Linux.
The problem was that I had a bash script executing my C program. The bash script was inserting the weird characters before the program output and adding them to its standard output. Putting the command I was calling directly inside the Python script, instead of calling the bash script, fixed the problem.
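For anyone who hits the same thing and cannot change the wrapper script: the [H[2J bytes are the ANSI cursor-home and clear-screen sequences (ESC[H ESC[2J, what running clear typically emits), and they can be stripped from the captured output before writing it. An illustrative sketch, not from the original poster:

import re

# matches CSI escape sequences such as ESC[H and ESC[2J
ansi_escape = re.compile(r'\x1b\[[0-9;]*[A-Za-z]')

def strip_ansi(text):
    return ansi_escape.sub('', text)

print strip_ansi('\x1b[H\x1b[2J3.14159')  # -> 3.14159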