Python: writing to a file from a specific line

Is there a Python function to write to a file starting from a specific line? I mean, if I know the index of a line, is there any way to begin writing from that line?

I don't think there is any way to do that directly in the way you are trying to.
Alternatively, read the file into a list of lines, then use list methods to insert your data:
source_file = open("myfile", "r")
file_data = source_file.readlines()  # list of lines, newlines preserved
source_file.close()
file_data.insert(position, data)  # position is the target line index, data is the new line text (with trailing newline)
open("myfile", "w").writelines(file_data)


Related

Manipulating text in a text file when using threading

I am using this code to pull the first line from a text file in a threaded mode and then delete it from the file:
with open(r'C:\datanames\names.txt', 'r') as fin:
    name = fin.readline()
with open(r'C:\datanames\names.txt', 'r') as fin:
    data = fin.read().splitlines(True)
with open(r'C:\datanames\names.txt', 'w') as fout:
    fout.writelines(data[1:])
But it often makes me lose data.
Is there a more efficient and practical way to use it in such a situation? (threading)
I see no reason to use threading for this. It's very straightforward.
To remove the first line from a file do this:
FILENAME = 'foo.txt'

with open(FILENAME, 'r+') as file:
    lines = file.readlines()
    file.seek(0)
    file.writelines(lines[1:])
    file.truncate()
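That said, if several threads really do read and rewrite the same file concurrently, the lost data is most likely a race between them. One common pattern (a sketch, assuming all threads share the same threading.Lock) is to serialize the whole read-modify-write:
import threading

FILENAME = 'foo.txt'
file_lock = threading.Lock()

def pop_first_line():
    """Remove and return the first line, serializing access with a lock."""
    with file_lock:
        with open(FILENAME, 'r+') as file:
            lines = file.readlines()
            if not lines:
                return None
            file.seek(0)
            file.writelines(lines[1:])
            file.truncate()
            return lines[0]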

os.write() appends to the file instead of overwriting, but O_APPEND isn't used [duplicate]

I have the following code:
import re
#open the xml file for reading:
file = open('path/test.xml','r+')
#convert to string:
data = file.read()
file.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>",r"<xyz>ABC</xyz>\1<xyz>\2</xyz>",data))
file.close()
where I'd like to replace the old content that's in the file with the new content. However, when I execute my code, the file "test.xml" is appended to, i.e. I have the old content followed by the new "replaced" content. What can I do in order to delete the old stuff and only keep the new?
You need to seek to the beginning of the file before writing, and then use file.truncate() if you want to do an in-place replace:
import re

myfile = "path/test.xml"

with open(myfile, "r+") as f:
    data = f.read()
    f.seek(0)
    f.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>", r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", data))
    f.truncate()
The other way is to read the file, then open it again with open(myfile, 'w'):
with open(myfile, "r") as f:
    data = f.read()

with open(myfile, "w") as f:
    f.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>", r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", data))
Neither truncate nor open(..., 'w') will change the inode number of the file (I tested twice, once with Ubuntu 12.04 NFS and once with ext4).
By the way, this is not really related to Python. The interpreter calls the corresponding low level API. The method truncate() works the same in the C programming language: See http://man7.org/linux/man-pages/man2/truncate.2.html
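One way to check this yourself (a small sketch using os.stat; st_ino is only meaningful on POSIX filesystems) is to compare the inode number before and after rewriting:
import os

myfile = "path/test.xml"

inode_before = os.stat(myfile).st_ino
with open(myfile, "r+") as f:
    data = f.read()
    f.seek(0)
    f.write(data)
    f.truncate()
inode_after = os.stat(myfile).st_ino

print(inode_before == inode_after)  # expected: True, the same inode is reused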
file = 'path/test.xml'

with open(file, 'w') as filetowrite:
    filetowrite.write('new content')
Open the file in 'w' mode; you will be able to replace its current text and save the file with the new contents.
Using truncate(), the solution could be
import re

# open the xml file for reading:
with open('path/test.xml', 'r+') as f:
    # convert to string:
    data = f.read()
    f.seek(0)
    f.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>", r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", data))
    f.truncate()
import os  # must import this library

if os.path.exists('TwitterDB.csv'):
    os.remove('TwitterDB.csv')  # this deletes the file
else:
    print("The file does not exist")  # add this to prevent errors
I had a similar problem, and instead of overwriting my existing file using the different 'modes', I just deleted the file before using it again, so that it would be as if I was appending to a new file on each run of my code.
See How to Replace String in File; it works in a simple way and uses replace:
fin = open("data.txt", "rt")
fout = open("out.txt", "wt")
for line in fin:
fout.write(line.replace('pyton', 'python'))
fin.close()
fout.close()
In my case the following code did the trick:
import json

# use w+ mode to create the file if it does not exist and overwrite the existing content
with open("output.json", "w+") as outfile:
    json.dump(result_plot, outfile)
Using the Python 3 pathlib library:
import re
from pathlib import Path
import shutil
shutil.copy2("/tmp/test.xml", "/tmp/test.xml.bak") # create backup
filepath = Path("/tmp/test.xml")
content = filepath.read_text()
filepath.write_text(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>",r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", content))
A similar method, using a different approach to backups (read the content before renaming the original out of the way):
import re
from pathlib import Path

filepath = Path("/tmp/test.xml")
content = filepath.read_text()
filepath.rename(filepath.with_suffix('.bak'))  # keep the original as a backup
filepath.write_text(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>", r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", content))

How to open a file using strings from another file in Python?

I have 1000 files, and their names are numbers, for example 2323.csv.
I have these names in a file called 1.txt.
Now I want to open these files one by one in Python, using 1.txt to open them.
How can I do this?
Why not this?
with open('1.txt', 'r') as listFile:
    for line in listFile:
        with open(line.rstrip(), 'r') as individualFile:
            ...  # do stuff
Rough and very basic, but understandable code (no error handling):
with open('1.txt', 'r') as f:
    for line in f.readlines():  # this assumes each line holds a number
        with open('.'.join([line.strip(), 'csv']), 'r') as cf:
            file_content = cf.readlines()
            print(file_content)
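A slightly more defensive variant (a sketch, still assuming each line of 1.txt holds a bare number such as 2323) could skip blank lines and report files that are missing:
with open('1.txt', 'r') as f:
    for line in f:
        number = line.strip()
        if not number:  # skip blank lines
            continue
        filename = number + '.csv'
        try:
            with open(filename, 'r') as cf:
                print(cf.readlines())
        except FileNotFoundError:
            print('missing file:', filename)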

Read in file - change contents - write out to same file

I have to read in a file, change a sections of the text here and there, and then write out to the same file.
Currently I do:
f = open(file)
file_str = f.read() # read it in as a string, Not line by line
f.close()
#
# do_actions_on_file_str
#
f = open(file, 'w') # to clear the file
f.write(file_str)
f.close()
But I would imagine that there is a more pythonic approach that yields the same result.
Suggestions?
That looks straightforward and clear already. Any suggestion depends on how big the files are. If they are not really huge, that looks fine. If they are really large, you could process in chunks.
But you could use a context manager, to avoid the explicit closes.
with open(filename) as f:
    file_str = f.read()

# do stuff with file_str

with open(filename, "w") as f:
    f.write(file_str)
If you work line by line, you can use fileinput with inplace mode:
import fileinput

for line in fileinput.input(myfile, inplace=True):
    print(process(line), end='')  # in inplace mode, stdout is redirected into the file
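Here process is whatever per-line transformation you need; a minimal hypothetical example, assuming a simple string substitution:
def process(line):
    # example per-line transformation: replace one word with another
    return line.replace('pyton', 'python')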
If you need to process all the text at once, then your code can be optimized a bit using with, which takes care of closing the file:
with open(myfile) as f:
    file_str = f.read()

# do_actions_on_file_str

with open(myfile, 'w') as f:
    f.write(file_str)

Remove lines from a text file which do not contain a certain string with python

I am trying to form a quotes file of a specific user name in a log file. How do I remove every line that does not contain the specific user name in it? Or how do I write all the lines which contain this user name to a new file?
with open('input.txt', 'r') as rfp:
    with open('output.txt', 'w') as wfp:
        for line in rfp:
            if ilikethis(line):
                wfp.write(line)
with open(logfile) as f_in:
    lines = [l for l in f_in if username in l]

with open(outfile, 'w') as f_out:
    f_out.writelines(lines)
Or, if you don't want to store all the lines in memory, keep both files open and use a generator:
with open(logfile) as f_in, open(outfile, 'w') as f_out:
    lines = (l for l in f_in if username in l)
    f_out.writelines(lines)
I sort of like the first one better but for a large file, it might drag.
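If the filtered lines should replace the original log rather than go to a separate file, one cautious sketch (using tempfile and os.replace, assuming both files live on the same filesystem) writes to a temporary file and swaps it in:
import os
import tempfile

def keep_matching_lines(path, username):
    """Rewrite `path` so that only lines containing `username` remain."""
    dir_name = os.path.dirname(path) or '.'
    with open(path) as f_in, tempfile.NamedTemporaryFile('w', dir=dir_name, delete=False) as f_out:
        for line in f_in:
            if username in line:
                f_out.write(line)
        temp_name = f_out.name
    os.replace(temp_name, path)  # atomic swap on the same filesystem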
Something along this line should suffice:
newfile = open(newfilename, 'w')

for line in open(filename, 'r'):
    if name in line:
        newfile.write(line)

newfile.close()
See: http://docs.python.org/tutorial/inputoutput.html#methods-of-file-objects
f.readlines() returns a list containing all the lines of data in the file.
An alternative approach to reading lines is to loop over the file object. This is memory efficient, fast, and leads to simpler code:
>>> for line in f:
...     print(line)
You can also check out the use of the with keyword. The advantage is that the file is properly closed after its suite finishes:
>>> with open(filename, 'r') as f:
... read_data = f.read()
>>> f.closed
True
I know you asked for python, but if you're on unix this is a job for grep.
grep name file
If you're not on unix, well... the answer above does the trick :)
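To save the matching lines rather than print them, the grep output can simply be redirected to a new file:
grep name file > quotes.txt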
