I have a constantly updating text file foo.txt, and I want to read its last line only when the file updates. I have a while loop that repeatedly opens the file and checks it. Inside the loop, some code prints the last line of the file, then stores it in lastmsg.txt. On the next iteration, it checks whether the last line of the file equals whatever is stored in lastmsg.txt. If the two values are not equal (the file has updated and a new line has been added), it prints the last message in the file. Here is the code:
import time

while True:
    fileHandle = open("foo.txt", "r")
    lineList = fileHandle.readlines()
    fileHandle.close()
    msg = lineList[len(lineList)-1]
    if open("lastmsg.txt", "r").read() != msg:
        f = open("lastmsg.txt", "w")
        f.write(msg)
        print(msg)
    time.sleep(0.5)
This seems to work; however, it prints msg twice. So if abc is appended to the file, the output will be
abc
abc
I added an f.close() line, and after that it prints msg just once.
import time

while True:
    fileHandle = open("foo.txt", "r")
    lineList = fileHandle.readlines()
    fileHandle.close()
    msg = lineList[-1]
    if open("lastmsg.txt", "r").read() != msg:
        f = open("lastmsg.txt", "w")
        f.write(msg)
        f.close()  # <---- new line
        print(msg)
    time.sleep(0.5)
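The reason close() matters here is write buffering: without close() (or flush()), the new msg sits in Python's user-space write buffer, so the next loop iteration still reads the old contents of lastmsg.txt and prints the message a second time. A minimal sketch of the effect (the file name is just a throwaway for the demo):

```python
# Demonstrates write buffering: a small write stays in Python's
# user-space buffer until flush() or close(), so a separate reader
# opening the same file does not see it yet.
f = open("buffered_demo.txt", "w")
f.write("hello")
before = open("buffered_demo.txt").read()   # still empty: not flushed yet
f.flush()
after = open("buffered_demo.txt").read()    # now the write is visible
f.close()
print(repr(before), repr(after))            # '' 'hello'
```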
The solution below assumes you were only using the last-line file as a buffer to know whether you had already read the line. Instead, this solution just opens the file once and reads to the end, then prints any new lines that are written to the file afterwards. When the file is updated, the next call to readline() will retrieve the next line. This saves you reading the entire file every time.
import time

with open('query4.txt') as myfile:
    # run to the end of the file; these lines are discarded, since we
    # only want to tail the last line of the file when it is updated
    myfile.readlines()
    while True:
        line = myfile.readline()
        if line:
            print(line, end='')
        time.sleep(0.5)
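A small variant worth knowing (a sketch, not part of the answer above): instead of reading and discarding every existing line with readlines(), you can jump straight to the end of the file with seek() before polling. The finite demo below simulates one append by a producer rather than looping forever:

```python
import os

# Create a file with existing content that the tail should skip over.
with open('query4.txt', 'w') as f:
    f.write('old line\n')

with open('query4.txt') as myfile:
    myfile.seek(0, os.SEEK_END)        # jump past everything already there
    with open('query4.txt', 'a') as producer:
        producer.write('new line\n')   # simulate the writer appending
    print(myfile.readline(), end='')   # only the appended line is seen
```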
My expected_cmd.txt (say f1) is
mpls ldp
snmp go
exit
and my configured.txt (say f2) is
exit
Here is the code I'm trying, which searches for every line of f1 in f2:
with open('expected_cmd.txt', 'r') as rcmd, open('%s.txt' % configured, 'r') as f2:
    for line in rcmd:
        print 'line present is ' + line
        if line in f2:
            continue
        else:
            print line
So basically I'm trying to print the line from first file that is not present in the second file.
But with above code I'm getting output as
#python validateion.py
line present is mpls ldp
mpls ldp
line present is snmp go
snmp go
line present is exit
exit
Not sure why it is printing exit, which is matched.
Also, I'm wondering if there is a built-in function to do this in Python?
with open('%s.txt' % configured, 'r') as f2:
    cmds = set(i.strip() for i in f2)

with open('expected_cmd.txt', 'r') as rcmd:
    for line in rcmd:
        if line.strip() in cmds:
            continue
        else:
            print line
This fixed my issue.
The file object, which you get when you open a file, contains information about the file and the current position within it. By default, the position is at the beginning of the file when you open it in 'r' mode.
When you read some data from the file (or write to it), the position moves. For instance, f.read() reads everything and moves to the end of the file. A repeated f.read() reads nothing.
A similar thing happens when you iterate through a file (e.g. the line in f2 test, which consumes the file as it scans for a match).
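A quick illustration of the moving position, using a throwaway demo file:

```python
# Show how the file position moves as you read, and how seek() rewinds it.
with open('demo.txt', 'w') as f:
    f.write('first\nsecond\n')

f = open('demo.txt', 'r')
print(f.tell())            # 0: the position starts at the beginning
whole = f.read()           # consumes everything; position is now at EOF
print(repr(f.read()))      # '': a repeated read() finds nothing left
f.seek(0)                  # rewind explicitly
print(repr(f.readline()))  # 'first\n' is readable again
f.close()
```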
I would suggest that, unless the files are many gigabytes in size, you should read both files and then do the rest of the logic in memory, e.g.:
with open('expected_cmd.txt', 'r') as f1:
    lines1 = list(f1)

with open('%s.txt' % configured, 'r') as f2:
    lines2 = list(f2)
Then you can implement the logic:
for line in lines1:
    if line not in lines2:
        print(line)
Read configured.txt fully into a set, and do the search by stripping the lines in rcmd before looking them up.
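The set-based approach can be condensed further with set difference. This is a sketch using the file contents from the question, and it assumes the configured file is literally named configured.txt; note that the result is unordered, and the lines must be stripped so 'exit\n' compares equal to 'exit':

```python
# Recreate the two files from the question.
with open('expected_cmd.txt', 'w') as f:
    f.write('mpls ldp\nsnmp go\nexit\n')
with open('configured.txt', 'w') as f:
    f.write('exit\n')

# Lines present in expected_cmd.txt but missing from configured.txt.
with open('expected_cmd.txt') as f1, open('configured.txt') as f2:
    missing = set(l.strip() for l in f1) - set(l.strip() for l in f2)

print(sorted(missing))  # ['mpls ldp', 'snmp go']
```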
The code reads from multiple text files. So far I have it displaying on the terminal, but I would like to have the info written into a text file. The text file shows up blank and I don't know why; I'm new to Python, so I still haven't figured out all the commands.
import glob
import re

directory = 'C:\Assignments\\CPLfiles\*'
test = False
start_text = '^GMWE'

for filename in glob.glob(directory):
    with open(filename) as f:
        with open('file.txt', 'w') as f1:
            for line in f:
                #for x in line:
                if test is False:
                    if re.search(start_text, line.strip()) is not None:
                        x = line.strip()
                        f1.write(x + '\n')
                        print(x)
                        break
            test = False
I think you should change the order of opening the files, as shown below.
The problem is that for each file you open to read, you also re-open the output file in write mode, wiping its contents.
Also, due to the break after the write statement, you will write at most one line per file.
If the last file that you opened does not have any match for the regular expression, then nothing will be left in the final file.
Hope it makes sense.
import glob
import re

directory = 'C:\Assignments\\CPLfiles\*'
test = False
start_text = '^GMWE'

with open('file.txt', 'w') as f1:
    for filename in glob.glob(directory):
        with open(filename) as f:
            for line in f:
                if test is False:
                    if re.search(start_text, line.strip()) is not None:
                        x = line.strip()
                        f1.write(x + '\n')
                        print(x)
                        break
            test = False
I think the main problem here is that you reopen file.txt for each file in your globbing. Each time, opening it in write mode erases the file, so if no line matches in the last file, you end up with an empty result. The with that opens this file should therefore enclose your loop.
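The truncation is easy to see in isolation: every open in 'w' mode empties the file, whether or not you write anything afterwards.

```python
# Mode 'w' truncates on open, even if nothing is ever written.
with open('truncate_demo.txt', 'w') as f:
    f.write('will this survive?\n')

with open('truncate_demo.txt', 'w'):   # reopening in 'w' wipes the file
    pass

print(repr(open('truncate_demo.txt').read()))  # ''
```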
I am reading a text file and passing it to the API, but I am getting a result only for the first line in the file; the subsequent lines are not being read.
Code below:
filename = 'c:\myfile.txt'
with open(filename) as f:
    plain_text = f.readline()
    response = client_comprehend.detect_entities(
        Text=plain_text,
        LanguageCode='en'
    )
    entites = list(set([x['Type'] for x in response['Entities']]))
    print response
    print entites
When you use f.readline(), it only takes the first line of the file. So if you want to go through each line of the file, you have to loop through it. Otherwise, if you want to read the entire file (not meant for big files), you can use f.read().
filename = 'c:\myfile.txt'
with open(filename) as f:
    for plain_text in f:
        response = client_comprehend.detect_entities(
            Text=plain_text,
            LanguageCode='en'
        )
        entites = list(set([x['Type'] for x in response['Entities']]))
        print response
        print entites
As csblo has pointed out in the comments, your readline only reads the first line of the file because it is only called once. readline is called once in your program as written, the actions are performed for that single line, and then the program ends without doing anything else.
Conveniently, file objects can be iterated over in a for loop just like a list. Iterating over a file returns one line per iteration, as though you had called readline and assigned the result to a variable. Using this, your code will work when rewritten as:
filename = 'c:\myfile.txt'
with open(filename) as f:
    for plain_text_line in f:
        response = client_comprehend.detect_entities(
            Text=plain_text_line,
            LanguageCode='en'
        )
        entites = list(set([x['Type'] for x in response['Entities']]))
        print response
        print entites
This should iterate over all lines of the file in turn.
Reading from and writing to a file, with the filename taken from command-line arguments. When I look at what was written, it's always doubled. For example, if I write 'dog', I'll get 'dogdog'. Why?
from sys import argv

script, text = argv

def reading(f):
    print f.read()

def writing(f):
    print f.write(line)

# opening file
filename = open(text)
reading(filename)
filename.close()

filename = open(text, 'w')
line = raw_input()
filename.write(line)
writing(filename)
filename.close()
As I said the output I am getting is the double value of what input I am giving.
You are getting the doubled value because you are writing two times:
1) From the function call:
    def writing(f):
        print f.write(line)
2) By writing to the file directly using filename.write(line)
Use this code:
from sys import argv

script, text = argv

def reading(f):
    print f.read()

def writing(f):
    print f.write(line)

filename = open(text, 'w')
line = raw_input()
writing(filename)
filename.close()
Also, there is no need to close the file twice; once you have finished all the read and write operations, just close it once.
If you want to display each line and then write a new line, you should probably just read the entire file first, and then loop over the lines when writing new content.
Here's how you can do it. When you use with open(), you don't have to close() the file, since that's done automatically.
from sys import argv

filename = argv[1]

# first read the file content
with open(filename, 'r') as fp:
    lines = fp.readlines()
# `lines` is now a list of strings.

# then open the file for writing.
# This will empty the file, so we can write from the start.
with open(filename, 'w') as fp:
    # by using enumerate, we can get the line numbers as well
    for index, line in enumerate(lines, 1):
        print 'line %d of %d:\n%s' % (index, len(lines), line.rstrip())
        new_line = raw_input()
        fp.write(new_line + '\n')
The context is the following: I have two text files that I need to edit. I open the first text file, read it line by line, and edit it, but when I encounter a specific line in the first file, I need to overwrite the content of the second file.
However, each time I re-open the second text file, the code below appends to it instead of overwriting its content...
Thanks in advance.
def edit_custom_class(custom_class_path, my_message):
    with open(custom_class_path, "r+") as file:
        file.seek(0)
        for line in file:
            if some_condition:
                file.write(my_message)

def process_file(file_path):
    with open(file_path, "r+") as file:
        for line in file:
            if some_condition:
                edit_custom_class(custom_class_path, my_message)
In my opinion, simultaneously reading and modifying a file is a bad idea. Consider something like this instead: first read the file, make the modifications in memory, and then overwrite the file completely.
def modify(path):
    out = []
    f = open(path)
    for line in f:
        if some_condition:
            out.append(edited_line)    # make sure it has a \n at the end
        else:
            out.append(original_line)
    f.close()
    with open(path, 'w') as f:
        for line in out:
            f.write(line)
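A hedged variant of the same read-then-rewrite idea: writing the result to a temporary file and swapping it in with os.replace() means a crash mid-write can never leave the original file half-written. The transform callable and the file names here are illustrative, not from the question:

```python
import os
import tempfile

def modify(path, transform):
    # Read everything first, applying the per-line edit in memory.
    with open(path) as f:
        lines = [transform(line) for line in f]
    # Write the result to a temp file in the same directory, then
    # atomically replace the original with it.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    with os.fdopen(fd, 'w') as f:
        f.writelines(lines)
    os.replace(tmp, path)

# Example: upper-case every line of a throwaway file.
with open('atomic_demo.txt', 'w') as f:
    f.write('abc\ndef\n')
modify('atomic_demo.txt', str.upper)
print(open('atomic_demo.txt').read())  # ABC then DEF, one per line
```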