I'm working on a new project in Python:
it's a sort of file locker that password-protects whichever file I decide to lock; when the right password is entered into my script, the file gets opened.
Now I'm stuck on this problem: how can I make my script run when someone tries to open a file that was previously locked, without modifying the file itself (for example by putting a piece of code at the start of the original file)?
Should I try to make a "listener" that runs at every Windows startup, checks whether any of the registered files are being opened, and blocks access to them (if that's even possible; I didn't find anything like that) until another script finishes?
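(The interception part aside, the password check the question describes can be sketched with the standard library. This is a minimal, hypothetical example, not taken from the question's code: it stores a salted hash next to the locked file and compares entered passwords against it.)

```python
import hashlib
import hmac
import os

def make_record(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash to store alongside the locked file."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return salt, digest

def password_matches(password: str, salt: bytes, digest: bytes) -> bool:
    """Check an entered password against the stored salt + hash."""
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = make_record('s3cret')
print(password_matches('s3cret', salt, digest))  # True
print(password_matches('wrong', salt, digest))   # False
```

Hashing rather than storing the password means the script never keeps the password itself on disk; note this only gates access through the script, it does not stop someone opening the file directly.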
The main Python code calls functions I wrote, which for a better overview are stored in separate .py files. To improve my code, the program should start from the beginning and stop at a defined position where I want to make a repair. After stopping, I want access to the local variables at the point where the code halted. That is, I want to take some code that ran before the halt and manipulate it in the console. After testing in the console, I want to correct my original code and run the program again.
Example:
You suppose that the following line doesn't execute as you expect:
if a.find('xyz')==-1:
Therefore you stop the program just before:
breakpoint()
if a.find('xyz')==-1:
Now you want to find out why exactly the line is not working as you expected. Maybe it depends on the variable a, or on the string xyz, or the find command is not applied correctly? So I would now enter a.find('xyz') in the console and vary and adjust the command. After a few tests in the console I find out that the right command must be a.find('XYZ'). Now I correct the line in my original code and restart the program.
But this is not possible, because the halt commands breakpoint() and pdb.set_trace() keep me out of the regular console. Instead I end up in debug mode, where I can only step through the code line by line or display variables.
How can I debug my code as desired?
The following workarounds also do not help:
sys.exit()
I stop the code with sys.exit(). The main problem with this method is that I have no access to the variables if the code stopped in another file. I also cannot see where the program stopped: if I have several sys.exit() calls distributed through large code, I do not know at which one it stopped. I can define individual outputs with sys.exit('position1'), sys.exit('position2'), but I still have to manually find the file and scroll to the given position.
cells
I define cells with #%%. Then I can run these cells separately, but I cannot run all cells from the beginning until the end of a given cell.
I am using Spyder 5.2.2.
I have written code using pygame_functions that opens an inputted Python file on a key command:
import os, pygame_functions

if spriteclicked(Sprite1):
    os.system('file.py')
Similarly, how do I close an inputted Python file on a key command?
Your question is not very clear. By 'inputted file' do you mean that the name of the file comes from user input? Or that the data in the file comes from user input in some way and you want to access it?
I'm going to skip past that part and try to address what I think you are asking about. The line:
os.system('file.py')
tells the OS to run the script file.py. Because you are running it with os.system() your control is limited after you do that. You run the program and do not regain control until that program exits.
If you want to be able to run the command and then stop it when the user types a key, you need to run it in a different way. You would have to run it in a subprocess or a different thread so that you still have an active thread that is not blocked. It can monitor for the user input and then have it do something to shut it down. Exactly how you would shut it down would depend to some degree on the command you ran and how you started it.
Try looking here for some guidance on replacing the os.system() call.
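As a sketch of the subprocess approach (a long-running Python one-liner stands in for file.py here), Popen returns immediately, so the main program keeps running and can stop the child later:

```python
import subprocess
import sys

# Start a child process without blocking; this one-liner stands in
# for 'file.py' in the question.
proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(60)'])

print(proc.poll())  # None: the child is still running

# Later, e.g. when the user presses the close key in the event loop:
proc.terminate()
proc.wait(timeout=10)

print(proc.poll() is not None)  # True: the child has exited
```

poll() returns None while the child is alive and its exit code once it has stopped, so the event loop can check it without blocking.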
I'm developing a PyQt application, so there's a good possibility segfaults could happen.
I want to use the faulthandler module to catch these. Now instead of writing to stderr I want to do the following:
Make faulthandler write into a file with a known location.
When starting again (normally or after a crash), check if that file exists and has a crash log in it.
If it does, open a dialog which asks the user to open a bug report with the crash log, then delete it.
Now this works fine, except when I run multiple instances of my application.
Then I thought I could write into a randomly named file matching a known pattern (say, crash-XXXXX.log), and then on startup check for crash-*.log files; if one is non-empty, do the same as above.
However when doing it that way, at least on Linux I'll be able to delete the file while another instance might still have it open, and then if that instance crashes the log gets lost.
I also can't just open() the file at the right time as faulthandler wants an open file.
I'm searching for a solution which:
Works with multiple instances
Catches crashes of all these instances correctly
Only opens the crash dialog one time, when starting a new instance after one crashed.
Doesn't leave any stale files behind after all instances are closed.
Works at least on Linux and Windows
I've thought about some different approaches, but all of them have one of these drawbacks.
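For reference, the reason the file must be opened up front (the point raised above) is that faulthandler.enable() takes an already-open file object. A minimal per-instance sketch, using the question's own crash-*.log naming scheme:

```python
import faulthandler
import os
import tempfile

# One crash log per instance; mkstemp gives a unique crash-*.log name
# and avoids clashes between concurrently running instances.
fd, log_path = tempfile.mkstemp(prefix='crash-', suffix='.log')
log_file = os.fdopen(fd, 'w')

# faulthandler keeps a reference to this file object and writes the
# traceback into it if the process crashes.
faulthandler.enable(file=log_file)

print(faulthandler.is_enabled())  # True
```

This covers the per-instance logging part only; coordinating cleanup between instances (the question's remaining requirements) still needs some form of cross-process locking.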
So far I have been getting a lot of help and have been able to put together a working Python script. The script calls a Windows executable and then performs some actions, like pulling down files from a remote server. At the end of the script I have a function that compresses the retrieved files and moves them to another server. So far the script was working great, but now it looks like I have hit a roadblock.
The script accepts a ParentNumber as input and finds one or more ChildNumbers. Once the list of ChildNumbers is gathered, the script calls the Windows executable with each number in turn until it has pulled data for all of them.
As mentioned above, the function I built for archiving, moving files, and email notification is called at the end of the script. It works perfectly if there is only one ChildNumber. If there are many ChildNumbers, then when the executable moves on to the second ChildNumber, the command line seems to treat that as the end and starts a new run, something like below:
.........
C:\Scripts\startscript.py
Input> ParentNumber
Retrieval started
Retrieval finished
**Email Sent Successfully**
Preparing ParentNumber #childNumber
C:\Scripts\ParentNumber123\childNumber2
Retrieval Started
Retrieval finished
.........
As you can see in the script flow above, the "Email Sent Successfully" message shows up under the first ChildNumber only, which means the function is called well before the script completes.
The behavior I want is for the archive/move/email function to be called once, after all of the ChildNumbers are processed, but I'm not sure where it's going wrong.
My function is below, and the call to it is at the very end of the script:
def archiveMoveEmailNotification(startTime, sender, receivers):
    """
    Function to archive, move and email
    """
    # code for archive
    # code for move to remote server
    # code for email

archiveMoveEmailNotification(startTime, sender, receivers)
Please let me know if I am missing something about when exactly this function executes. As mentioned, it works fine if the ParentNumber has only one ChildNumber, so I'm not sure whether the jump to the second retrieval is causing the issue. Is there a way I can make this function wait until the rest of the functions in the script have run, or would it be more logical to move it into a separate script and call it from the master script?
Here is the exe call part:
def execExe(childNumb):
    cmd = "myExe retriveeAll -u \"%s\" -l \"%s\"" % (childNumb.Url(), childAccount.workDir)
    return os.system(cmd)

def retriveChildNumb(childNumb):
    # run the retrieval
    if not (execExe(childNumb) == 0):
        DisplayResult(childNumb, 3)
    else:
        DisplayResult(childNumb, 0)
    return 0
Any input or thoughts on this would be very helpful.
Your question is verbose but hard to understand; providing the code would make this much easier to troubleshoot.
That said, my suspicion is that the code you're using to call the Windows executable is asynchronous, meaning your program continues (and finishes) without waiting for the executable to return a value.
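One way to rule that out (a sketch, with a trivial child process standing in for the Windows executable) is to launch it with subprocess.run(), which always blocks until the command exits and returns its exit code:

```python
import subprocess
import sys

def run_and_wait(cmd):
    """Run a command, block until it finishes, and return its exit code."""
    return subprocess.run(cmd).returncode

# A quick child process stands in for the real executable here.
rc = run_and_wait([sys.executable, '-c', 'print("retrieval finished")'])
print(rc)  # 0: the child exited cleanly
```

If the archive/move/email call is placed after a loop of such blocking calls, it cannot run until every ChildNumber has been processed.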
For large files or slow connections, copying files may take some time.
Using pyinotify, I have been watching for the IN_CREATE event. But this seems to occur at the start of a file transfer. I need to know when a file is completely copied; it isn't much use if it's only half there.
When a file transfer is finished, what inotify event is fired?
IN_CLOSE probably means the write is complete. This isn't guaranteed, since some applications are bad actors and open and close files constantly while working with them, but if you know the app you're dealing with (a file-transfer tool, etc.) and understand its behaviour, you're probably fine. (Note that this doesn't mean the transfer completed successfully, obviously; it just means that the process that opened the file handle closed it.)
IN_CLOSE catches both IN_CLOSE_WRITE and IN_CLOSE_NOWRITE, so make your own decision about whether you want to catch just one of those. (You probably want both; WRITE/NOWRITE refer to whether the file was opened writable, not to whether any writes were actually made.)
There is more documentation (although annoyingly, not this piece of information) in Documentation/filesystems/inotify.txt.
In my case, I wanted to execute a script after a file was fully uploaded. I was using WinSCP, which writes large files with a .filepart extension until the transfer is done.
I first modified my script to ignore files that themselves end with .filepart, or that have another file in the same directory with the same name plus a .filepart extension, since that means the upload is not yet complete.
But then I noticed that at the end of the upload, when all the parts are finished, an IN_MOVED_TO notification is triggered, which let me run my script exactly when I wanted.
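The .filepart check described above can be sketched like this (upload_complete is a hypothetical helper name; it follows the WinSCP convention of name.ext.filepart for partial files):

```python
from pathlib import Path

def upload_complete(path: Path) -> bool:
    """False while the file is a partial upload, or while a matching
    .filepart sibling still exists in the same directory."""
    if path.name.endswith('.filepart'):
        return False
    return not Path(str(path) + '.filepart').exists()

# e.g. while 'video.mp4.filepart' sits next to 'video.mp4',
# upload_complete(Path('video.mp4')) is False.
```

This is only a fallback heuristic; watching for the move/rename event as described above is more precise.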
If you want to know how your file uploader behaves, add this to the incrontab:
/your/directory/ IN_ALL_EVENTS echo "$$ $@ $# $% $&"
and then
tail -F /var/log/cron
and monitor all the events getting triggered to find out which one suits you best.
Good luck!
Why don't you add a dummy file at the end of the transfer? You can use the IN_CLOSE or IN_CREATE event on the dummy. The important thing is that the dummy has to be transferred as the last file in the sequence.
I hope it'll help.