reading a file and writing in reverse - python

I'm trying to read a file and write it out with the lines in reverse order. Here is my code so far:
def write_reversed_file(input_filename, output_filename):
    """Writes a reversed file"""
    with open(input_filename, 'r') as input_file:
        data = input_file.readlines()
    with open(output_filename, 'w') as output_file:
        return output_file.reversed(data)

try:
    write_reversed_file('data.txt', 'reversed.txt')
    print(open('reversed.txt').read())
except IOError:
    print("Error: can't find file or read data")

You are not asking anything! Also, you should try to find the answer by your own means, or on Stack Overflow, before asking people to fix your code.
Well, your second half is wrong.
with open(output_filename, 'w') as output_file:
    for i in reversed(data):
        output_file.write(i)
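Putting the two halves together, the whole function might look like this (a quick sketch along the lines of the fix above, not run against your data):

```python
def write_reversed_file(input_filename, output_filename):
    """Write the lines of input_filename to output_filename in reverse order."""
    with open(input_filename, 'r') as input_file:
        data = input_file.readlines()
    with open(output_filename, 'w') as output_file:
        for line in reversed(data):
            output_file.write(line)
```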


Unable to find string in text file

I am trying to simply check whether a string exists in a text file, but I am having issues. I am assuming something is wrong on one particular line, but I am boggled.
def extract(mPath, frequency):
    if not os.path.exists('history.db'):
        f = open("history.db", "w+")
        f.close()
    for cFile in fileList:
        with open('history.db', "a+") as f:
            if cFile in f.read():
                print("File found - skip")
            else:
                #with ZipFile(cFile, 'r') as zip_ref:
                #    zip_ref.extractall(mPath)
                print("File Not Found")
                f.writelines(cFile + "\n")
                print(cFile)
Output:
File Not Found
C:\Users\jefhill\Desktop\Python Stuff\Projects\autoExtract\Test1.zip
File Not Found
C:\Users\jefhill\Desktop\Python Stuff\Projects\autoExtract\test2.zip
Text within the history.db file:
C:\Users\jefhill\Desktop\Python Stuff\Projects\autoExtract\Test1.zip
C:\Users\jefhill\Desktop\Python Stuff\Projects\autoExtract\test2.zip
What am I missing? Thanks in advance
Note: cFile is the file path shown in the output and fileList is the list of both the paths from the output.
You're using the wrong flags for what you want to do. open(file, 'a') opens a file for append-writing, meaning that it seeks to the end of the file. Adding the + modifier means that you can also read from the file, but you're doing so from the end of the file; so read() returns nothing, because there's nothing beyond the end of the file.
You can use r+ to read from the start of the file while having the option of writing to it. But keep in mind that anytime you write you'll be writing to the reader's current position in the file.
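A quick way to see the difference (the file name here is just for illustration): with a+ the initial position is at the end of the file, so read() returns nothing until you seek back to the start.

```python
# Create a file with known contents for the demonstration
with open('demo.txt', 'w') as f:
    f.write('first line\n')

with open('demo.txt', 'a+') as f:
    assert f.read() == ''              # position starts at the end of the file
    f.seek(0)                          # rewind to the beginning
    assert f.read() == 'first line\n'  # now the contents are visible
```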
I haven't tested the code but this should put you on the right track!
def extract(mPath, frequency):
    if not os.path.exists('history.db'):
        f = open("history.db", "w+")
        f.close()
    with open('history.db', "r") as f:  # text mode, so the lines are str like fileList
        data = f.readlines()
    for line in data:
        if line.rstrip() in fileList:  # assuming fileList is a list of strings
            # do everything else here

Python: Improvement (Code reduction): Compact write json to FTP

This is my current code:
def ftp_upload(local_ftpfile_path, ftp_ftpfile_path):
    json_data = {...}
    with open(local_ftpfile_path, 'w') as f:
        json.dump(json_data, f)
        f.close()
    file = open(local_ftpfile_path, 'rb')  # file to send
    try:
        session.storbinary('STOR ' + ftp_ftpfile_path, file)  # send the file
    except ...:
        ...
    else:
        ...
Everything works, but I think it's a little stupid to open the file twice.
How can I convert it to use just one file object, so either just f or file, but not both?
How would the w and the rb have to change?
Thanks for the help!
best regards,
Lukas
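No answer is recorded here, but one common approach is to skip the file on disk entirely and serialize the JSON into an in-memory bytes buffer for storbinary. The session object and remote path below are the question's own; the function name and payload are made up for illustration:

```python
import io
import json

def ftp_upload_in_memory(session, ftp_ftpfile_path, json_data):
    # Serialize straight into a bytes buffer instead of a temporary file;
    # storbinary only needs a file-like object opened for binary reading.
    buf = io.BytesIO(json.dumps(json_data).encode('utf-8'))
    session.storbinary('STOR ' + ftp_ftpfile_path, buf)
```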

Reproduce the Unix cat command in Python

I am currently reproducing the following Unix command:
cat command.info fort.13 > command.fort.13
in Python with the following:
with open('command.fort.13', 'w') as outFile:
    with open('fort.13', 'r') as fort13, open('command.info', 'r') as com:
        for line in com.read().split('\n'):
            if line.strip() != '':
                print >>outFile, line
        for line in fort13.read().split('\n'):
            if line.strip() != '':
                print >>outFile, line
which works, but there has to be a better way. Any suggestions?
Edit (2016):
This question has started getting attention again after four years. I wrote up some thoughts in a longer Jupyter Notebook here.
The crux of the issue is that my question was pertaining to the (unexpected by me) behavior of readlines. The answer I was aiming toward could have been better asked, and that question would have been better answered with read().splitlines().
The easiest way might be simply to forget about the lines, and just read in the entire file, then write it to the output:
with open('command.fort.13', 'wb') as outFile:
    with open('command.info', 'rb') as com, open('fort.13', 'rb') as fort13:
        outFile.write(com.read())
        outFile.write(fort13.read())
As pointed out in a comment, this can cause high memory usage if either of the inputs is large (as it copies the entire file into memory first). If this might be an issue, the following will work just as well (by copying the input files in chunks):
import shutil

with open('command.fort.13', 'wb') as outFile:
    with open('command.info', 'rb') as com, open('fort.13', 'rb') as fort13:
        shutil.copyfileobj(com, outFile)
        shutil.copyfileobj(fort13, outFile)
def cat(outfilename, *infilenames):
    with open(outfilename, 'w') as outfile:
        for infilename in infilenames:
            with open(infilename) as infile:
                for line in infile:
                    if line.strip():
                        outfile.write(line)

cat('command.fort.13', 'fort.13', 'command.info')
#!/usr/bin/env python
import fileinput

for line in fileinput.input():
    print line,
Usage:
$ python cat.py command.info fort.13 > command.fort.13
Or to allow arbitrary large lines:
#!/usr/bin/env python
import sys
from shutil import copyfileobj as copy

for filename in sys.argv[1:] or ["-"]:
    if filename == "-":
        copy(sys.stdin, sys.stdout)
    else:
        with open(filename, 'rb') as file:
            copy(file, sys.stdout)
The usage is the same.
Or on Python 3.3 using os.sendfile():
#!/usr/bin/env python3.3
import os
import sys

output_fd = sys.stdout.buffer.fileno()
for filename in sys.argv[1:]:
    with open(filename, 'rb') as file:
        while os.sendfile(output_fd, file.fileno(), None, 1 << 30) != 0:
            pass
The above sendfile() call is written for Linux > 2.6.33. In principle, sendfile() can be more efficient than a combination of read/write used by other approaches.
Iterating over a file yields lines.
for line in infile:
    outfile.write(line)
You can simplify this in a few ways:
with open('command.fort.13', 'w') as outFile:
    with open('fort.13', 'r') as fort13, open('command.info', 'r') as com:
        for line in com:
            if line.strip():
                print >>outFile, line
        for line in fort13:
            if line.strip():
                print >>outFile, line
More importantly, the shutil module has the copyfileobj function:
with open('command.fort.13', 'w') as outFile:
    with open('command.info', 'r') as com:
        shutil.copyfileobj(com, outFile)
    with open('fort.13', 'r') as fort13:
        shutil.copyfileobj(fort13, outFile)
This doesn't skip the blank lines, but cat doesn't do that either, so I'm not sure you really want to.
List comprehensions are awesome for things like this:
with open('command.fort.13', 'w') as output:
    for f in ['fort.13', 'command.info']:
        output.write(''.join([line for line in open(f).readlines() if line.strip()]))

Read in file - change contents - write out to same file

I have to read in a file, change a sections of the text here and there, and then write out to the same file.
Currently I do:
f = open(file)
file_str = f.read()  # read it in as a string, not line by line
f.close()
#
# do_actions_on_file_str
#
f = open(file, 'w')  # to clear the file
f.write(file_str)
f.close()
But I would imagine that there is a more pythonic approach that yields the same result.
Suggestions?
That looks straightforward and clear already. Any suggestion depends on how big the files are; if they're not really huge, that looks fine. If they're really large, you could process in chunks.
But you could use a context manager, to avoid the explicit closes.
with open(filename) as f:
    file_str = f.read()

# do stuff with file_str

with open(filename, "w") as f:
    f.write(file_str)
If you work line by line, you can use fileinput in inplace mode:

import fileinput

for line in fileinput.input(myfile, inplace=1):
    print process(line)
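For a concrete illustration (Python 3 syntax, with a made-up file and transformation): in inplace mode, whatever you print replaces the file's contents line by line.

```python
import fileinput

# Made-up input file for the demonstration
with open('notes.txt', 'w') as f:
    f.write('hello\nworld\n')

# Upper-case every line in place; print output is redirected into the file.
# end='' avoids doubling the newline each line already carries.
for line in fileinput.input('notes.txt', inplace=True):
    print(line.upper(), end='')
```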
If you need to process all the text at once, then your code can be optimized a bit using with, which takes care of closing the file:

with open(myfile) as f:
    file_str = f.read()
#
# do_actions_on_file_str
#
with open(myfile, 'w') as f:
    f.write(file_str)

Remove lines from a text file which do not contain a certain string with python

I am trying to form a quotes file of a specific user name in a log file. How do I remove every line that does not contain the specific user name in it? Or how do I write all the lines which contain this user name to a new file?
with open('input.txt', 'r') as rfp:
    with open('output.txt', 'w') as wfp:
        for line in rfp:
            if ilikethis(line):
                wfp.write(line)
with open(logfile) as f_in:
    lines = [l for l in f_in if username in l]
with open(outfile, 'w') as f_out:
    f_out.writelines(lines)
Or if you don't want to store all the lines in memory
with open(logfile) as f_in:
    lines = (l for l in f_in if username in l)
    with open(outfile, 'w') as f_out:
        f_out.writelines(lines)  # the generator is consumed while f_in is still open
I sort of like the first one better but for a large file, it might drag.
Something along this line should suffice:
newfile = open(newfilename, 'w')
for line in open(filename, 'r'):
    if name in line:
        newfile.write(line)
newfile.close()
See : http://docs.python.org/tutorial/inputoutput.html#methods-of-file-objects
f.readlines() returns a list containing all the lines of data in the file.
An alternative approach to reading lines is to loop over the file object. This is memory efficient, fast, and leads to simpler code
>>> for line in f:
...     print line
Also, you can check out the use of the with keyword. The advantage is that the file is properly closed after its suite finishes.
>>> with open(filename, 'r') as f:
...     read_data = f.read()
>>> f.closed
True
I know you asked for python, but if you're on unix this is a job for grep.
grep name file
If you're not on unix, well... the answer above does the trick :)
