I am trying to read a text file into a string, do something with the string, then every X seconds,
re-read the text file (in case it has changed) to update the string and do the same thing again, over and over in a loop, without spawning an infinite number of processes.
so something like :
in an infinite loop
open 'MyTextFile' and read it into a String
do stuff with the string of text it reads from the file
close the file (if I need to...to allow another script to write to it)
wait x seconds before clearing the string and re-reading the
same 'MyTextFile' file to update the string and do it all again
(in an infinite loop until I tell it to stop)
What would be a good sturdy (reliable to run for a long time) way to do that?
Thanks!
<code>
import time

myDataFile = "MyData.txt"
data = ""

def ReadFile(data):
    # only need to act on the data if it has changed
    # detect the 'modified' date of myDataFile and only PrintToPrinter IF it has changed
    # and if it has NOT changed, wait 10 seconds and check again
    # IF CHANGED
    with open(myDataFile, "r") as myFile:
        # the with-block closes the file on exit, so another script can write to it
        data = myFile.read().replace('\n', ' ')
    ShowOnScreen(data)
    PrintToPrinter(data)
    # ELSE
    # data has not changed..
    # no need to print it, so I shouldn't send it back to PrintToPrinter
    # but I still want to ShowOnScreen(data)
    # and keep the program running to check for changes
    ShowOnScreen(data)
    time.sleep(10)
    ReadFile(data)

def ShowOnScreen(data):
    print(time.strftime('%x %X') + ' ' + data)

def PrintToPrinter(data):
    # send DateTimeStamp+data to printer here
    # go back to checking for changes
    ReadFile(data)

# kick off the program to start looking for changes
# data starts out as "" so it should always read the first time
# and send to printer and screen
ReadFile(data)
</code>
This can be done easily using time.sleep from the time module. Sample code is pasted below:
import time

def file_handler(filename):
    with open(filename) as fh:
        line = fh.read()
        print(line)
        print("Length of string: %s" % len(line))
    # the with-block closes the file, no explicit fh.close() needed

while True:
    file_handler("test.txt")
    time.sleep(10)
Ended up doing something like this
#!/usr/bin/env python
import time
# time.strftime provides the date stamp and time.sleep does the waiting,
# so only the time module is needed

MyFile = "MyFile.txt"
MyText = ""

def GetDateTime():
    MyDate = time.strftime("%x %I:%M %p")
    return MyDate + " "

def ReadFile(MyText):
    with open(MyFile, "r") as myfile:
        MyText2 = myfile.read().replace('\n', ' ')
    CurDateTime = GetDateTime()
    if MyText == MyText2:
        # file hasn't changed...print the timestamp to screen only
        print(CurDateTime)
        time.sleep(5)
        ReadFile(MyText2)
    else:
        # print to screen and printer
        print(CurDateTime + MyText2)
        # send CurDateTime + MyText2 to printer
        time.sleep(5)
        ReadFile(MyText2)

ReadFile(MyText)
Hope this helps somebody
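The comments in the original code also mention only reprinting when the file's 'modified' date changes. A minimal sketch of that variant, polling os.path.getmtime in a plain loop instead of recursing (the function name poll_for_changes is just an illustrative placeholder):
import os
import time

myDataFile = "MyData.txt"

def poll_for_changes(path, interval=10):
    last_mtime = None
    while True:
        try:
            mtime = os.path.getmtime(path)
        except OSError:
            # file is missing or unreadable right now - check again next cycle
            mtime = last_mtime
        if mtime != last_mtime:
            with open(path, "r") as f:
                data = f.read().replace('\n', ' ')
            last_mtime = mtime
            # changed: show on screen and send to the printer here
            print(time.strftime("%x %I:%M %p"), data)
        else:
            # unchanged: show the timestamp on screen only
            print(time.strftime("%x %I:%M %p"))
        time.sleep(interval)

poll_for_changes(myDataFile)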
Related
I have a continuously growing CSV File, that I want to periodically read. I am also only interested in new values.
I was hoping to do something like:
file_chunks = pd.read_csv('file.csv', chunksize=1)
while True:
    do_something(next(file_chunks))
    time.sleep(0.1)
at a frequency that is faster than the .csv file is growing.
However, as soon as the iterator fails to return a value once, it "breaks" and never returns values again, even if the .csv file has grown in the meantime.
Is there a way to read continuously growing .csv files line by line?
You could wrap it in a try/except, or add an if statement that first checks whether file_chunks is not None.
That way it shouldn't break anymore, and it only sleeps when there are no more chunks left.
import time
import pandas as pd

while True:
    file_chunks = pd.read_csv('file.csv', chunksize=1)
    while True:
        try:
            do_something(next(file_chunks))
        except StopIteration:
            # no more chunks for now - wait, then re-open the file
            time.sleep(0.1)
            break
This is easier to do with the standard csv module, where you can write your own line iterator that knows how to read an updating file. The generator below reads in binary mode so that it can track the file position, closes the file at EOF, and polls the file's size for appended data. It can fail if the reader gets a partial update because the writer hasn't flushed yet, or if a CSV cell contains an embedded newline, which invalidates the reader's assumption that a binary-mode newline always terminates a row.
import csv
import time
import os
import threading
import random

def rolling_reader(filename, poll_period=.1, encoding="utf-8"):
    pos = 0
    while True:
        # wait until the file exists and has grown past the last read position
        while True:
            try:
                if os.stat(filename).st_size > pos:
                    break
            except FileNotFoundError:
                pass
            time.sleep(poll_period)
        # read only the newly appended bytes, then close the file again
        with open(filename, "rb") as fp:
            fp.seek(pos)
            for line in fp:
                if line.strip():
                    yield line.decode(encoding)
                pos = fp.tell()

# ---- TEST - thread updates test.csv periodically

class GenCSVThread(threading.Thread):
    def __init__(self, csv_name):
        super().__init__(daemon=True)
        self.csv_name = csv_name
        self.start()

    def run(self):
        val = 1
        while True:
            with open(self.csv_name, "a") as fp:
                for _ in range(random.randrange(4)):
                    fp.write(",".join(str(val) for _ in range(4)) + "\n")
                    val += 1
            time.sleep(random.random())

if os.path.exists("test.csv"):
    os.remove("test.csv")

test_gen = GenCSVThread("test.csv")

reader = csv.reader(rolling_reader("test.csv"))
for row in reader:
    print(row)
A platform-dependent improvement would be to use a facility such as inotify to trigger reads off a file-close operation, reducing the risk of reading partial data.
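For example, a rough sketch of that idea on Linux, assuming the third-party inotify_simple package is installed (the directory and file names here are just placeholders):
import os
from inotify_simple import INotify, flags

watch_dir = "."          # directory containing the growing file (placeholder)
watch_name = "test.csv"  # file of interest (placeholder)

inotify = INotify()
inotify.add_watch(watch_dir, flags.CLOSE_WRITE)

pos = 0
while True:
    # blocks until some file in watch_dir is closed after being written to
    for event in inotify.read():
        if event.name != watch_name:
            continue
        with open(os.path.join(watch_dir, watch_name), "rb") as fp:
            fp.seek(pos)
            for line in fp:
                print(line.decode("utf-8").rstrip())
            pos = fp.tell()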
I have a project that needs data or a value from a PLC to trigger a printing program. The program I made can only read the value, but when the value changes, the action of saving it into a txt file doesn't execute. Instead, the program just stops reading the tag's value. Where did my code go wrong?
from pylogix import PLC
import time

with PLC() as comm:
    comm.IPAddress = 'IPAddress'
    read = True
    ret = comm.Read('TagName')
    new = ret
    old = new
    while read:
        if new == old:
            print(ret.TagName, ret.Value)
            time.sleep(60)
        else:
            print('exiting')
            for r in new:
                with open("output.txt", "a") as f:
                    print(r.TagName, r.Value, file=f)
            old = new
            read = False
Actually I have this code:
#!/usr/bin/env python3
import sys
import requests
import random
from multiprocessing.dummy import Pool
from pathlib import Path

requests.urllib3.disable_warnings()

print('Give name of txt file on _listeNDD directory (without .txt)')
file = str(input())

if Path('_listeNDD/' + file + '.txt').is_file():
    print('--------------------------------------------------------')
    print("Found")
    print('--------------------------------------------------------')
    print('Choose name for the output list (without .txt)')
    nomRez = str(input())
    filename = '_listeNDD/' + file + '.txt'
    domains = [i.strip() for i in open(filename, mode='r').readlines()]
else:
    print('--------------------------------------------------------')
    exit('No txt found with this name')

def check(domain):
    try:
        r = requests.get('https://' + domain + '/test', timeout=5, allow_redirects=False)
        if "[core]" in r.text:
            with open('_rez/' + nomRez + '.txt', "a+") as f:
                print('https://' + domain + '/test', file=f)
    except:
        pass

mp = Pool(100)
mp.map(check, domains)
mp.close()
mp.join()
exit('finished')
(Screenshot of the root directory.)
With this code, it opens a text file from the "_listeNDD" directory and writes a new text file into the "_rez" directory.
Obviously it's super fast for ten elements, but when the list gets bigger I would like a progress bar so I know whether I have time to make a coffee or not.
I tried tqdm from GitHub, but unfortunately it shows a progress bar for every job it does, while I only want a single bar for everything...
Any idea?
Thank you
EDIT: Following this post, I did not succeed with:
if __name__ == '__main__':
    p = Pool(100)
    r = p.map(check, tqdm.tqdm(range(0, 30)))
    p.close()
    p.join()
I don't have a high enough Python level to master this, so I may have integrated it into my code badly.
I also saw:
if __name__ == '__main__':
    r = process_map(check, range(0, 30), max_workers=2)
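For reference, one common way to get a single overall bar is to wrap the pool's result iterator in one tqdm call. A minimal sketch, assuming tqdm is installed and reusing the check function and domains list defined in the script above:
from multiprocessing.dummy import Pool
from tqdm import tqdm

mp = Pool(100)
# imap_unordered yields results as they finish, so one shared bar advances per domain
for _ in tqdm(mp.imap_unordered(check, domains), total=len(domains)):
    pass
mp.close()
mp.join()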
I am using the keyboard and mouse modules to record user interaction with our software so that we can have some high level GUI tests.
Currently I am trying to store the recorded events in a text file and later play this recording again.
However, when I load the recorded events from that file, only the mouse events are played back and no keyboard events.
One cause for this problem may be the implementation of KeyboardEvents.
KeyboardEvent does not contain a correct implementation of __repr__. This prevents us from writing the events with print(keyboard_events, file=f) and reading the lines back with eval(line). (This does work with the mouse module's ButtonEvent and MoveEvent.)
So we have decided to work with the JSON format of KeyboardEvent. Basically, we retrieve the JSON form of each KeyboardEvent and write it to the file, then load the file and parse the JSON back into KeyboardEvents.
Currently we're storing both mouse and keyboard input in a single file. However, since the mouse events do have a correct implementation of __repr__, we can print them directly and use eval() to retrieve the stored events.
This is the file used for recording and playing:
import threading
import mouse
import keyboard
from mouse import ButtonEvent
from mouse import MoveEvent
from mouse import WheelEvent
from keyboard import KeyboardEvent
import time
import json
import sys

def record(file='record.txt'):
    f = open(file, 'w+')
    mouse_events = []
    keyboard_events = []
    keyboard.start_recording()
    starttime = time.time()
    mouse.hook(mouse_events.append)
    keyboard.wait('esc')
    keyboard_events = keyboard.stop_recording()
    mouse.unhook(mouse_events.append)
    # first line = start of recording
    # mouse events = second line
    # keyboard events = every remaining line = 1 event
    print(starttime, file=f)
    print(mouse_events, file=f)
    for kevent in range(0, len(keyboard_events)):
        print(keyboard_events[kevent].to_json(), file=f)
    f.close()

def play(file, speed=0.5):
    f = open(file, 'r')
    # per definition: first line is the start time, second line the mouse events,
    # and every remaining line is one keyboard event
    lines = f.readlines()
    f.close()
    mouse_events = eval(lines[1])
    keyboard_events = []
    for index in range(2, len(lines)):
        keyboard_events.append(keyboard.KeyboardEvent(**json.loads(lines[index])))
    starttime = float(lines[0])
    keyboard_time_interval = keyboard_events[0].time - starttime
    keyboard_time_interval /= speed
    mouse_time_interval = mouse_events[0].time - starttime
    mouse_time_interval /= speed
    print(keyboard_time_interval)
    print(mouse_time_interval)
    # Keyboard thread:
    k_thread = threading.Thread(target=lambda: time.sleep(keyboard_time_interval) == keyboard.play(keyboard_events, speed_factor=speed))
    # Mouse thread:
    m_thread = threading.Thread(target=lambda: time.sleep(mouse_time_interval) == mouse.play(mouse_events, speed_factor=speed))
    # start threads
    m_thread.start()
    k_thread.start()
    # wait for both threads to complete
    k_thread.join()
    m_thread.join()

if __name__ == '__main__':
    if len(sys.argv) > 2 and sys.argv[1] == 'play':
        play(sys.argv[2])
    elif len(sys.argv) >= 2 and sys.argv[1] == 'record':
        if len(sys.argv) == 3:
            record(sys.argv[2])
        else:
            record()
    else:
        print("missing either 'play' or 'record' or filename")
I expect the same behavior from this code as when it is run in a single function (see the edit in https://stackoverflow.com/a/57670484/7345513).
Meaning: I expect the playback in the threads to be synced and the keys to be pressed. What I actually get is that the mouse events are played back as desired, but no KeyboardEvents are processed. When I use the function from the linked answer it works.
Can someone please point me to the right direction?
I forgot to answer my own question, but here it is:
Thanks to the guidance from blubberdiblub, I now track the start time of the recording and can then add an offset for when the first input occurred. This allows playbacks to be timed somewhat accurately.
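In terms of the play() function above, that boils down to sleeping out the gap between the recorded start time and each stream's first event before replaying it. A small sketch of that idea as a helper (wait_and_play is a hypothetical name, not part of the keyboard or mouse modules):
import time

def wait_and_play(events, starttime, speed, play_func):
    # offset of this stream's first event relative to the start of the recording
    offset = (events[0].time - starttime) / speed
    time.sleep(offset)
    play_func(events, speed_factor=speed)
Each thread can then use target=wait_and_play with args=(keyboard_events, starttime, speed, keyboard.play) or the mouse equivalents, instead of the lambda trick.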
I currently have a program that appends to an already existing file called "ConcentrationData.txt". However, I would like to create a new text file every time the program is run, preferably with a file name that has the date and time. This is what my current script looks like:
def measureSample(self):
    sys.stdout.flush()
    freqD1, trandD1, absoD1 = dev.getMeasurement(LED_TO_COLOR='D1')
    freqD2, trandD2, absoD2 = dev.getMeasurement(LED_TO_COLOR='D2')
    absoDiff = absoD1 - absoD2
    Coeff = 1
    Conc = absoDiff / Coeff
    Conc3SD = '{Value:1.{digits}f}'.format(Value=Conc, digits=3)
    self.textEdit.clear()
    self.textEdit.setText('Concentration is {0}'.format(Conc3SD))
    timeStr = time.strftime('%m-%d-%Y %H:%M:%S %Z')
    outFile = open('ConcentrationData.txt', 'a')
    outFile.write('{0} || Concentration: {1}'.format(timeStr, Conc3SD))
    outFile.close()
How would I go about doing that?
(Also, I'm pretty new to python so I'm sorry if this sounds like a silly question).
You can do something along the lines of the following:
import time

class my_class:
    _data_fd = None

    def __init__(self, create, filename):
        if create:
            self._data_fd = open(filename, 'w')

    def __del__(self):
        if self._data_fd is not None:
            self._data_fd.close()

    def measureSample(self):
        ## do something here
        # timeStr and Conc3SD come from the measurement code in the question
        outFile = self._data_fd
        outFile.write('{0} || Concentration: {1}\n'.format(timeStr, Conc3SD))

if __name__ == '__main__':
    timeStr = time.strftime('%m-%d-%Y_%H_%M_%S_%Z')  # use underscores instead of spaces
    filename = "{0}.{1}".format("Data.txt", timeStr)
    imy_class = my_class(1, filename)
    imy_class.measureSample()
    imy_class.measureSample()  # call multiple times; the fd stays open for the lifetime of the object
    del imy_class  # the file closes now and you will have multiple lines of data
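A simpler variant of the same idea, kept closer to the original measureSample code, is to build the timestamped name once at startup and append to that file on each measurement. A minimal sketch; the data line passed in is a placeholder for whatever the question's script actually writes:
import time

# build the file name once, when the program starts
runStamp = time.strftime('%m-%d-%Y_%H-%M-%S')
outName = 'ConcentrationData_{0}.txt'.format(runStamp)

def writeMeasurement(line):
    # append to this run's file; a new run creates a new file name
    timeStr = time.strftime('%m-%d-%Y %H:%M:%S %Z')
    with open(outName, 'a') as outFile:
        outFile.write('{0} || {1}\n'.format(timeStr, line))

writeMeasurement('Concentration: 0.123')  # placeholder value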