Never-ending file existence check loop - python

I'm sorry, I'm a complete beginner, but I'm very fascinated by scripting and automation. I'm trying to check for the existence of a file that arrives once in a while; I want to read it and then delete it. I can't figure out how to keep this action running without a goto/label feature.
Can anyone advise me please?
import os
import os.path
import time

path = os.path.exists('file.txt')
# loop
if path is True:
    print("File exists.")
    time.sleep(1)
    os.remove("file.txt")  # Remove the file.
    # Now I need to start the loop again.
else:
    print("File doesn't exist")
    time.sleep(1)
    # Now I need to start the loop again.
    # And keep it running forever.

This is what "while" loops are for.
import os.path
import time

while True:
    time.sleep(1)
    if os.path.exists('file.txt'):
        print("File exists.")
        os.remove("file.txt")  # Remove the file.
    else:
        print("File doesn't exist")
You can do this with a batch file. You don't need Python.

I think what you are looking for is a folder monitor that performs actions based on events in the folder. I recommend using the 'watchdog' library in Python to monitor the folder for incoming or outgoing files; the event handlers can then perform actions such as reading and deleting the file. Refer to the code below.
Code:
import time
import os
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def on_created(event):
    print("A file has arrived:", os.path.basename(event.src_path))
    time.sleep(10)
    os.remove(event.src_path)

def on_deleted(event):
    print("File deleted")

if __name__ == "__main__":
    event_handler = FileSystemEventHandler()
    event_handler.on_created = on_created
    event_handler.on_deleted = on_deleted
    path = "A:/foldername"  # Enter the folder path you want to monitor here
    observer = Observer()
    # If recursive were True, subdirectories added to the path would be watched as well
    observer.schedule(event_handler, path, recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(15)  # Sleep for 15 seconds between wake-ups
    except KeyboardInterrupt:
        observer.stop()  # Stop the observer on Ctrl+C; otherwise it keeps running forever
    observer.join()
Also, if you want to read the file before deleting it, add the file-reading code in the on_created handler before the os.remove call, and the code should work fine for you.
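For example, a minimal sketch of such an on_created handler, assuming the incoming file is a small plain-text file, could look like this:
def on_created(event):
    # Skip directory events; we only care about the arriving file
    if event.is_directory:
        return
    print("A file has arrived:", os.path.basename(event.src_path))
    time.sleep(10)  # give the writer time to finish
    # Read the contents before removing the file (assumes a small text file)
    with open(event.src_path, "r") as f:
        contents = f.read()
    print(contents)
    os.remove(event.src_path)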

Related

Creating a Flag file

I'm relatively new to Python, so please forgive my early-level understanding!
I am working to create a kind of flag file. Its job is to monitor a Python executable: the flag file script runs constantly and prints "Start" when the executable starts, "Running" while it runs, and "Stop" when it stops or crashes. If a crash occurs, I want it to be able to restart the script. So far I have this down for the restart:
from subprocess import run
from time import sleep

# Path and name to the script you are trying to start
file_path = "py"
restart_timer = 2

def start_script():
    try:
        # Make sure 'python' command is available
        run("python " + file_path, check=True)
    except:
        # Script crashed, let's restart it!
        handle_crash()

def handle_crash():
    sleep(restart_timer)  # Restarts the script after 2 seconds
    start_script()

start_script()
How can I implement this along with a flag file?
Not sure what you mean by "flag", but this minimally achieves what you want.
Main file main.py:
import subprocess
import sys
from time import sleep

restart_timer = 2
file_path = 'sub.py'  # file name of the other process

def start():
    try:
        # sys.executable -> same python executable
        subprocess.run([sys.executable, file_path], check=True)
    except subprocess.CalledProcessError:
        sleep(restart_timer)
        return True
    else:
        return False

def main():
    print("starting...")
    monitor = True
    while monitor:
        monitor = start()

if __name__ == '__main__':
    main()
Then the process that gets spawned, called sub.py:
from time import sleep
sleep(1)
print("doing stuff...")
# comment out to see change
raise ValueError("sub.py is throwing error...")
Put those files into the same directory and run it with python main.py
You can comment out the throwing of the random error to see the main script terminate normally.
On a larger note, this example is not meant to suggest that it is a good way to achieve the robustness you need...
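If by "flag file" you mean a small status file on disk, here is a minimal, hedged sketch of how the start() function above could write "Start"/"Running"/"Stop" around the subprocess call; the file name status.flag is just an assumption for illustration:
import subprocess
import sys

FLAG_FILE = 'status.flag'  # assumed name for the flag/status file
file_path = 'sub.py'

def write_flag(status):
    # Overwrite the flag file with the current status string
    with open(FLAG_FILE, 'w') as f:
        f.write(status)

def start():
    write_flag("Start")
    try:
        write_flag("Running")
        subprocess.run([sys.executable, file_path], check=True)
    except subprocess.CalledProcessError:
        write_flag("Stop")  # crashed; returning True asks main() to restart it
        return True
    else:
        write_flag("Stop")  # finished normally
        return False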

How to use watchdog to monitor whenever a file is created or deleted in Python

I am using watchdog in Python to monitor, in real time, whenever a file is created or deleted.
Following examples, I tried the following:
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class MyHandler(FileSystemEventHandler):
    def on_created(self, event):
        print("File is created!")

event_handler = MyHandler()
observer = Observer()
observer.schedule(event_handler, path='C:/daten/dog.txt', recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
Of course, the file path='C:/daten/dog.txt' does not exist when this script begins to run, but I still get error messages like:
FileNotFoundError: [WinError 2] The system cannot find the file specified.
Why is it telling me it cannot find the file specified in the first place? I need it running to watch for the creation of the file, after all.
Update:
Now I understand that watchdog is for monitoring a folder rather than a file.
Is there a similar package for monitoring a single file, or is this better done with a while loop and sleep statements?
Why is it telling me it cannot find the file specified in the first place?
Because the path you give to watchdog is where it's going to hook itself to listen for events.
You can't watch a file and expect its creation event to be recorded. File creation events are posted on the parent directory, so that is what you should be watching.
In fact, I don't know that watching a specific file makes any sense with watchdog; its tagline is
Directory monitoring made easy with
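As a rough sketch of that idea (still aiming at C:/daten/dog.txt from the question), watch the parent folder and filter on the file name inside the handler:
import os
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class DogHandler(FileSystemEventHandler):
    def on_created(self, event):
        # Events fire for everything in the directory; keep only dog.txt
        if os.path.basename(event.src_path) == 'dog.txt':
            print("dog.txt was created!")

observer = Observer()
observer.schedule(DogHandler(), path='C:/daten', recursive=False)  # watch the folder, not the file
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()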

Is there a way to interrupt shutil copytree operation in Python?

I'm fairly new to programming in general. I need to develop a program that can copy multiple directories at once and also take into account multiple file type exceptions. I came across the shutil module which offers the copytree and ignore_patterns functions. Here is a snippet of my code which also uses the wxPython Multiple Directory Dialog:
import os
import wx
import wx.lib.agw.multidirdialog as MDD
from shutil import copytree
from shutil import ignore_patterns

app = wx.App(0)
dlg = MDD.MultiDirDialog(None, title="Custom MultiDirDialog", defaultPath=os.getcwd(), agwStyle=MDD.DD_MULTIPLE|MDD.DD_DIR_MUST_EXIST)
dest = "Destination Path"

if dlg.ShowModal() != wx.ID_OK:
    dlg.Destroy()

paths = dlg.GetPaths()
ext = ['*.tiff', '*.raw', '*.p4p', '*.hkl', '*.xlsx']

for path in enumerate(paths):
    directory = path[1].replace('Local Disk (C:)', 'C:')
    copytree(directory, dest, ignore=ignore_patterns(directory, *ext))

dlg.Destroy()
app.MainLoop()
This code works well for me. At times, I'll be copying terabytes' worth of data. Is there any way that shutil.copytree can be interrupted? I ask because the first time I ran this program, I selected a rather large directory by accident and copied a ton of files (successfully!) and wanted to stop it :( Once I get around this, I'll finally start on the GUI! If there is any more information I can provide, please let me know! Thanks in advance for any and all help!
You can run the copy in a separate Python process using the multiprocessing module. The code may look something like this:
import time
import shutil
from multiprocessing import Process

def cp(src: str, dest: str):
    shutil.copytree(src, dest)

if __name__ == '__main__':
    proc = Process(target=cp, args=('Downloads', 'Tmp'), daemon=True)
    proc.start()
    time.sleep(3)
    proc.terminate()
In my example the main process starts a child process, which does the actual copying, and terminates it after 3 seconds. You can also check whether the process is still running by calling its is_alive() method.
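For instance, instead of sleeping for a fixed three seconds, a sketch of polling the child with is_alive() (reusing the example's placeholder 'Downloads' and 'Tmp' paths) might look like this:
import time
import shutil
from multiprocessing import Process

def cp(src: str, dest: str):
    shutil.copytree(src, dest)

if __name__ == '__main__':
    proc = Process(target=cp, args=('Downloads', 'Tmp'), daemon=True)
    proc.start()
    deadline = time.time() + 3
    while proc.is_alive() and time.time() < deadline:
        time.sleep(0.1)  # check roughly every 100 ms
    if proc.is_alive():
        proc.terminate()  # still copying after the deadline, so stop it
    proc.join()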
copytree accepts a copy_function parameter. If you pass a function that checks a flag, you can raise an error to interrupt the operation.
from shutil import copytree, copy2

# set this flag to True to interrupt a copytree operation
interrupt = False

class Interrupt(Exception):
    """ interrupts the copy operation """

def interruptable_copy(*args, **kwargs):
    if interrupt:
        raise Interrupt("Interrupting copy operation")
    return copy2(*args, **kwargs)

# src and dst are your source and destination paths
copytree(src, dst, copy_function=interruptable_copy)
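To use this in practice you would typically run copytree in a worker thread and flip the flag from the main thread or a GUI "Cancel" handler. A minimal sketch continuing from the snippet above (some_src and some_dst are placeholder paths):
import threading
import time
from shutil import copytree

def copy_job():
    try:
        copytree('some_src', 'some_dst', copy_function=interruptable_copy)
    except Interrupt:
        print("Copy was interrupted.")

worker = threading.Thread(target=copy_job)
worker.start()
time.sleep(2)     # ... later, e.g. when the user clicks "Cancel"
interrupt = True  # the module-level flag read by interruptable_copy; stops at the next file
worker.join()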

Python wget library not returning if the internet goes down for some time. How can I return with an error if the internet is down?

I have the below code to download a file inside a loop:
import wget
try:
    wget.download(url)
except:
    pass
But if the internet goes down, it doesn't return, so my whole loop gets stuck.
I want to repeat the same download if the internet goes down, so I need to know whether an error has happened.
How can I mitigate this?
One simple solution is to move your download code into a separate thread which can be interrupted.
You can use Python's Thread and Timer classes to achieve this.
from threading import Thread, Timer
from functools import partial
import urllib.request

def check_connectivity(t):
    try:
        urllib.request.urlopen("http://google.com", timeout=2)
    except Exception:
        # NOTE: _Thread__stop() was a private Python 2 API; it no longer exists in Python 3
        t._Thread__stop()

class Download(Thread):
    def run(self):
        print("Trying to download file....")
        con = partial(check_connectivity, self)
        while True:
            t = Timer(5, con)  # Checks the connectivity every 5 seconds or less.
            t.start()
            # your download code....

def main():
    down = Download()
    down.start()
    down.join()

if __name__ == '__main__':
    main()
You would move your main download loop inside the thread's run method and start a timer inside it that checks the network connectivity.
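As a simpler, hedged alternative to the threaded approach above: probe connectivity with a short timeout before each attempt and retry in your own loop. Note that this will not unstick a download that stalls midway, which is the case the thread/timer approach tries to handle; the probe URL and the url parameter here are assumptions for illustration.
import time
import urllib.request
import wget

def internet_up(probe="http://google.com", timeout=2):
    # Quick connectivity probe with a short timeout
    try:
        urllib.request.urlopen(probe, timeout=timeout)
        return True
    except Exception:
        return False

def download_with_retry(url, retry_delay=5):
    while True:
        if not internet_up():
            print("Internet seems down, retrying in", retry_delay, "seconds")
            time.sleep(retry_delay)
            continue
        try:
            return wget.download(url)
        except Exception as e:
            print("Download failed:", e, "- retrying")
            time.sleep(retry_delay)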

Pyinotify / Watchdog triggers a modify event twice on one edit

I am trying to start a Python script every time a certain file is modified. To be precise, I have a device on the Raspberry Pi's serial port which writes data into a text file (test.txt). I have tried both tools - Watchdog / Pyinotify. Every time the file is modified (triggering the Watchdog on_modified / Pyinotify IN_MODIFY event), the trigger fires twice. I have tried every other method, even IN_CLOSE_WRITE as some people suggest, but that doesn't work at all.
Does someone know how just a single event can be triggered on one file update?
My code using Pyinotify (a slightly edited tutorial file):
import pyinotify
import subprocess

def onChange(ev):
    cmd = ['/usr/bin/env', 'python', 'doThisFile.py', ev.pathname]
    subprocess.Popen(cmd).communicate()

wm = pyinotify.WatchManager()
wm.add_watch('/home/pi/test.txt', pyinotify.IN_MODIFY, onChange)
notifier = pyinotify.Notifier(wm)
notifier.loop()
or Watchdog:
#!/usr/bin/python
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
import subprocess

class MyHandler(FileSystemEventHandler):
    def on_modified(self, event):
        subprocess.call("/home/pi/doThisFile.py")
        print("Code started")

if __name__ == "__main__":
    event_handler = MyHandler()
    observer = Observer()
    observer.schedule(event_handler, path='.', recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
I was facing the same issue while using pyinotify, but changing IN_MODIFY to IN_CLOSE_WRITE solved it. You can get more info from this answer.
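Applied to the pyinotify snippet in the question, that is a one-line change; IN_CLOSE_WRITE fires once when the writer closes the file instead of on every write:
# Trigger when the file is closed after writing, not on every modification
wm.add_watch('/home/pi/test.txt', pyinotify.IN_CLOSE_WRITE, onChange)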
This is probably because of the text editor that was used for editing your source code.
Solution: calculate the time between two events; if the events trigger too frequently, exit the handler from one of them. For example:
import time
from watchdog.events import FileSystemEventHandler

class SubEventHandler(FileSystemEventHandler):
    def __init__(self):
        self._time0 = time.time()

    def on_modified(self, event):
        time1 = time.time()
        if (time1 - self._time0) < 5:
            exit(0)  # not sure it will exit properly
        self._time0 = time1  # remember when the last accepted event fired
The 2 events that you are getting are:
The file was modified
The directory was modified
If you run the demo code and then touch a file in the directory being watched, you'll see the following output:
2022-11-04 10:28:45 - Modified file: ./test
2022-11-04 10:28:45 - Modified directory: .
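So one way to react only once per edit is to ignore the directory event inside the handler, for example (a sketch based on the watchdog handler from the question):
class MyHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.is_directory:
            return  # skip the duplicate event fired for the parent directory
        subprocess.call("/home/pi/doThisFile.py")
        print("Code started")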
