Automatically overwrite existing file with an incoming file - python

As of right now, I have a file called song.mp3 that I have integrated into a Python program which will act as an alarm. I would like it so that whenever I send the Raspberry Pi a new song via Bluetooth, that song is automatically renamed to song.mp3, thereby overwriting the previous song. That way I don't have to change my alarm program for different songs. Any help?

Assuming that the mp3 files all end up in the same directory, you could have a cron job running that periodically renames the most recent file, so something like:
mv "$(ls -1t *.mp3 | head -1)" song.mp3
This is a quick example. It would be preferable to put the above in a script and add some "belts and braces" to ensure that the script doesn't crash.
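If you would rather keep everything in Python, here is a minimal sketch of the same idea; the music directory path is a placeholder, and you would run it from cron or from whatever handles the Bluetooth transfer:

#!/usr/bin/env python3
# Minimal sketch: rename the newest incoming .mp3 to song.mp3.
# MUSIC_DIR is a placeholder for wherever the Bluetooth transfer drops files.
import os
import glob

MUSIC_DIR = "/home/pi/music"
TARGET = os.path.join(MUSIC_DIR, "song.mp3")

candidates = [f for f in glob.glob(os.path.join(MUSIC_DIR, "*.mp3"))
              if os.path.abspath(f) != os.path.abspath(TARGET)]
if candidates:
    newest = max(candidates, key=os.path.getmtime)  # most recently received file
    os.replace(newest, TARGET)                      # overwrite song.mp3 in place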

Related

Move only file contents and not any file handler/pointer

I am working on analyzing data from a log file at frequent intervals and processing it accordingly. The log file, which is the input, is an infinitely growing file. A long-running process writes to it and it belongs to the root user.
I have all the file permissions for the log file. What I want to do is to move only the file contents up to that point (take the file contents and clear the file) without disturbing the other process, preferably through a Python script.
[EDIT]
That is, I need to cut and paste all the contents from the primary log file up to that point in time and put them into another (secondary) log file. I will use this secondary log file for my data analysis. In the meantime, if the long-running process writes anything to the primary log file, it should not be lost. It is not a problem if I take the new data into the secondary log file along with the other contents.
[EDIT 2]
The main problem I face is clearing the file contents once they have been fetched from the primary log file. I need to ensure that no log lines are lost while I read from the primary log, write them to the secondary log, and remove those contents from the primary file.
I looked into the TimedRotatingFileHandler but it doesn't help me in this regard. Any other suggestions?
Thanks
The Linux way to tail a file is simple.
Use this command on your log file as soon as the logging process starts:
tail -f log_file_name.log >> /tmp/new_file_name.log &
[EDIT] tail -f log_file_name.log >> /tmp/new_file_name.log | tail -f /tmp/new_file_name.log | xargs -I TailOutput echo sed -i '/TailOutput/d' log_file_name.log
Then you can use this new_file_name.log to do whatever you want to do with this new file. Also your original log file is intact.
I understand this is getting a little twisted, but that's the best I can think of right now!
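If you would rather do the cut-and-paste step in Python, a rough copy-then-truncate sketch could look like the following (the paths are placeholders). Be aware that there is a small window between the read and the truncate in which a new line from the writer could be lost, and that truncation only behaves sensibly if the writing process opened the log in append mode:

# Rough copy-then-truncate sketch; PRIMARY and SECONDARY are placeholder paths.
PRIMARY = "/var/log/primary.log"
SECONDARY = "/tmp/secondary.log"

with open(PRIMARY, "r+") as src, open(SECONDARY, "a") as dst:
    data = src.read()   # everything written so far
    dst.write(data)     # append it to the secondary log for analysis
    src.truncate(0)     # clear the primary log in place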

Multiple processes reading&deleting files in the same directory

I have a directory with thousands of files, and each of them has to be processed (by a Python script) and subsequently deleted.
I would like to write a bash script that reads a file in the folder, processes it, deletes it, and moves on to another file; the order is not important. There will be n running instances of this bash script (e.g. 10), all operating on the same directory. They quit when there are no more files left in the directory.
I think this creates a race condition. Could you give me some advice (or a code snippet) on how to make sure that no two bash scripts operate on the same file?
Or do you think I should rather implement multithreading in Python (instead of running n different bash scripts)?
You can use the fact that file renames (on the same file system) are atomic on Unix systems, i.e. a file is either renamed or not. For the sake of clarity, let us assume that all files you need to process have names beginning with A (you can avoid this by having a separate folder for the files you are processing right now).
Then your bash script iterates over the files, tries to rename each one, calls the Python script (I call it process here) if the rename succeeds, and otherwise just continues. Like this:
#!/bin/bash
for file in A*; do
    pfile=processing.$file
    # Try to claim the file; mv fails if another instance renamed it first.
    if mv "$file" "$pfile"; then
        process "$pfile"
        rm "$pfile"
    fi
done
This snippet uses the fact that mv returns a 0 exit code if it was able to move the file and a non-zero exit code otherwise.
The only sure way that no two scripts will act on the same file at the same time is to employ some kind of file locking mechanism. A simple way to do this could be to rename the file before beginning work, by appending some known string to the file name. The work is then done and the file deleted. Each script tests the file name before doing anything, and moves on if it is 'special'.
A more complex approach would be to maintain a temporary file containing the names of files that are 'in process'. This file would obviously need to be removed once everything is finished.
I think the solution to your problem is the producer-consumer pattern. This is the right place to start:
producer/consumer problem with python multiprocessing
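For what it's worth, a minimal sketch of that idea using Python's multiprocessing (rather than n separate bash scripts) might look like this; the directory path and the body of process_file are placeholders:

# Minimal producer-consumer sketch with multiprocessing.
# DIRECTORY and the body of process_file are placeholders.
import os
from multiprocessing import Pool

DIRECTORY = "/path/to/files"

def process_file(path):
    # ... do the real per-file work here ...
    os.remove(path)  # delete the file once it has been processed

if __name__ == "__main__":
    files = [os.path.join(DIRECTORY, name) for name in os.listdir(DIRECTORY)]
    with Pool(processes=10) as pool:  # 10 workers instead of 10 bash scripts
        pool.map(process_file, files)

Because the file list is handed out by a single parent process, no two workers ever receive the same file.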

How to display a file owned by another thread?

I'm trying to build an application that displays in a GUI the contents of a log file written by a separate program that I call through subprocess. The application runs on Windows and is a binary that I have no control over. Also, this application (Actel Designer, if anyone cares) will write its output to a log file regardless of how I redirect the output of subprocess, so using a pipe for the output doesn't seem to be an option. The bottom line is that I seem to be forced into reading from a log file at the same time another thread may be writing to it. My question is whether there is a way to keep the GUI's display of the log file's contents up to date in a robust way?
I've tried the following:
Naively opening the file for reading periodically while the child process is running causes Python to crash (I'm guessing because the child thread is writing to the file while I'm attempting to read its contents).
Next I tried to open a file handle to the log file name before invoking the child process with GENERIC_READ and SHARED_READ | SHARED_WRITE | SHARED_DELETE, and reading back from that handle. With this approach, the file appears empty.
Thanks for any help you can provide - I'm not a professional programmer and I've been pulling my hair out over this for a week.
You should register for notifications on file change, the way tail -f does (you can find out what system calls it uses by executing strace tail -f logfile).
pyinotify provides a Python interface for these file change notifications.
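For example, a minimal pyinotify sketch (Linux only; the log path is a placeholder) that fires a callback every time the file is modified might look like this:

# Minimal pyinotify sketch (Linux only); LOGFILE is a placeholder path.
import pyinotify

LOGFILE = "/path/to/output.log"

class LogChanged(pyinotify.ProcessEvent):
    def process_IN_MODIFY(self, event):
        # Re-read the file (or just the new bytes) and refresh the display here.
        print("log updated:", event.pathname)

wm = pyinotify.WatchManager()
wm.add_watch(LOGFILE, pyinotify.IN_MODIFY)
pyinotify.Notifier(wm, LogChanged()).loop()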

Which inotify event signals the completion of a large file operation?

For large files or slow connections, copying files may take some time.
Using pyinotify, I have been watching for the IN_CREATE event code, but this seems to occur at the start of a file transfer. I need to know when a file is completely copied; it isn't much use if it's only half there.
When a file transfer is finished and complete, what inotify event is fired?
IN_CLOSE probably means the write is complete. This isn't for sure, since some applications are bad actors and open and close files constantly while working with them, but if you know the app you're dealing with (file transfer, etc.) and understand its behaviour, you're probably fine. (Note, this doesn't mean the transfer completed successfully, obviously; it just means that the process that opened the file handle closed it.)
IN_CLOSE catches both IN_CLOSE_WRITE and IN_CLOSE_NOWRITE, so make your own decision about whether you want to catch just one of those. (You probably want them both; WRITE/NOWRITE refer to whether the file was opened writable, not to whether any writes were actually made.)
There is more documentation (although annoyingly, not this piece of information) in Documentation/filesystems/inotify.txt.
In my case I wanted to execute a script after a file was fully uploaded. I was using WinSCP, which writes large files with a .filepart extension until they are done.
I first started modifying my script to ignore files that themselves end in .filepart, or for which another file with the same name plus a .filepart extension exists in the same directory, since that means the upload is not fully completed yet.
But then I noticed that at the end of the upload, when all the parts are finished, an IN_MOVED_TO notification is triggered, which let me run my script exactly when I wanted.
If you want to know how your file uploader behaves, add this to the incrontab:
/your/directory/ IN_ALL_EVENTS echo "$$ $@ $# $% $&"
and then
tail -F /var/log/cron
and monitor all the events getting triggered to find out which one suits you best.
Good luck!
Why don't you add a dummy file at the end of the transfer? You can use the IN_CLOSE or IN_CREATE event code on the dummy. The important thing is that the dummy has to be transferred as the last file in the sequence.
I hope it'll help.

How to handle new files to process in cron job

How can I check which files I have already processed in a script so I don't process them again? and/or
What is wrong with the way I am doing this now?
Hello,
I am running tshark with the ring buffer option to dump to files after 5MB or 1 hour. I wrote a Python script to read these files in XML and dump them into a database; this works fine.
My issue is that this is really processor-intensive: one of those 5MB files can turn into a 200MB file when converted to XML, so I do not want to do any unnecessary processing.
The script runs every 10 minutes and processes ~5 files per run. Since it scans the folder where the files are created for any new entries, I dump a hash of each file into the database, and on the next run I check the hash; if it isn't in the database, I scan the file.
The problem is that this does not appear to work every time; it ends up processing files that it has already done. When I check the hash of a file that it keeps trying to process, it doesn't show up anywhere in the database, which is why it keeps trying to process it over and over.
I am printing out the filename + hash in the output of the script:
using file /var/ss01/SS01_00086_20100107100828.cap with hash: 982d664b574b84d6a8a5093889454e59
using file /var/ss02/SS02_00053_20100106125828.cap with hash: 8caceb6af7328c4aed2ea349062b74e9
using file /var/ss02/SS02_00075_20100106184519.cap with hash: 1b664b2e900d56ca9750d27ed1ec28fc
using file /var/ss02/SS02_00098_20100107104437.cap with hash: e0d7f5b004016febe707e9823f339fce
using file /var/ss02/SS02_00095_20100105132356.cap with hash: 41a3938150ec8e2d48ae9498c79a8d0c
using file /var/ss02/SS02_00097_20100107103332.cap with hash: 4e08b6926c87f5967484add22a76f220
using file /var/ss02/SS02_00090_20100105122531.cap with hash: 470b378ee5a2f4a14ca28330c2009f56
using file /var/ss03/SS03_00089_20100107104530.cap with hash: 468a01753a97a6a5dfa60418064574cc
using file /var/ss03/SS03_00086_20100105122537.cap with hash: 1fb8641f10f733384de01e94926e0853
using file /var/ss03/SS03_00090_20100107105832.cap with hash: d6209e65348029c3d211d1715301b9f8
using file /var/ss03/SS03_00088_20100107103248.cap with hash: 56a26b4e84b853e1f2128c831628c65e
using file /var/ss03/SS03_00072_20100105093543.cap with hash: dca18deb04b7c08e206a3b6f62262465
using file /var/ss03/SS03_00050_20100106140218.cap with hash: 36761e3f67017c626563601eaf68a133
using file /var/ss04/SS04_00010_20100105105912.cap with hash: 5188dc70616fa2971d57d4bfe029ec46
using file /var/ss04/SS04_00071_20100107094806.cap with hash: ab72eaddd9f368e01f9a57471ccead1a
using file /var/ss04/SS04_00072_20100107100234.cap with hash: 79dea347b04a05753cb4ff3576883494
using file /var/ss04/SS04_00070_20100107093350.cap with hash: 535920197129176c4d7a9891c71e0243
using file /var/ss04/SS04_00067_20100107084826.cap with hash: 64a88ecc1253e67d49e3cb68febb2e25
using file /var/ss04/SS04_00042_20100106144048.cap with hash: bb9bfa773f3bf94fd3af2514395d8d9e
using file /var/ss04/SS04_00007_20100105101951.cap with hash: d949e673f6138af2d388884f4a6b0f08
The only files it should be doing are one per folder, so only 4 files. This causes unnecessary processing, and I have to deal with overlapping cron jobs and other services being affected.
What I am hoping to get from this post is a better way to do this, or hopefully someone can tell me why this is happening. I know the latter might be hard, since it could be down to a bunch of reasons.
Here is the code (I am not a coder but a sysadmin, so be kind :P); lines 30-32 handle the hash comparisons.
Thanks in advance.
A good way to handle/process files that are created at random times is to use incron rather than cron. (Note: since incron uses the Linux kernel's inotify syscalls, this solution only works with Linux.)
Whereas cron runs a job based on dates and times, incron runs a job based on changes in a monitored directory. For example, you can configure incron to run a job every time a new file is created or modified.
On Ubuntu, the package is called incron. I'm not sure about RedHat, but I believe this is the right package: http://rpmfind.net//linux/RPM/dag/redhat/el5/i386/incron-0.5.9-1.el5.rf.i386.html.
Once you install the incron package, read
man 5 incrontab
for information on how to set up the incrontab config file. Your incron_config file might look something like this:
/var/ss01/ IN_CLOSE_WRITE /path/to/processing/script.py $#
/var/ss02/ IN_CLOSE_WRITE /path/to/processing/script.py $#
/var/ss03/ IN_CLOSE_WRITE /path/to/processing/script.py $#
/var/ss04/ IN_CLOSE_WRITE /path/to/processing/script.py $#
Then to register this config with the incrond daemon, you'd run
incrontab /path/to/incron_config
That's all there is to it. Now whenever a file is created in /var/ss01, /var/ss02, /var/ss03 or /var/ss04, the command
/path/to/processing/script.py $#
is run, with $# replaced by the name of the newly created file.
This will obviate the need to store/compare hashes, and files will only get processed once -- immediately after they are created.
Just make sure your processing script does not write into the top level of the monitored directories.
If it does, then incrond will notice the new file created, and launch script.py again, sending you into an infinite loop.
incrond monitors individual directories, and does not recursively monitor subdirectories. So you could direct tshark to write to /var/ss01/tobeprocessed, use incron to monitor /var/ss01/tobeprocessed, and have your script.py write to /var/ss01, for example.
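A minimal sketch of such a script.py, which incron invokes with the name of the newly created file ($#) as its first argument (the actual processing is a placeholder):

#!/usr/bin/env python3
# Minimal sketch of the processing script incron would invoke;
# the real conversion/database work is a placeholder.
import sys

def process(capture_file):
    print("processing", capture_file)  # placeholder: convert to XML, load into the database

if __name__ == "__main__":
    process(sys.argv[1])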
PS. There is also a Python interface to inotify, called pyinotify. Unlike incron, pyinotify can recursively monitor subdirectories. However, in your case, I don't think the recursive monitoring feature is useful or necessary.
I don't know enough about what is in these files, so this may not work for you, but if you have only one intended consumer, I would recommend using directories and moving the files to reflect their state. Specifically, you could have a dir structure like
/waiting
/progress
/done
and use the relative atomicity of mv to change the "state" of each file. (Whether mv is truly atomic depends on your filesystem, I believe.)
When your processing task wants to work on a file, it moves it from waiting to progress (and makes sure that the move succeeded). That way, no other task can pick it up, since it's no longer waiting. When the file is complete, it gets moved from progress to done, where a cleanup task might delete or archive old files that are no longer needed.
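A small sketch of that flow in Python, relying on the atomicity of the rename so that only one worker can claim each file (process_file is a placeholder):

# Small sketch of the waiting -> progress -> done flow; process_file is a placeholder.
import os

def process_file(path):
    pass  # placeholder for the real per-file work

def claim_and_process(name):
    src = os.path.join("waiting", name)
    work = os.path.join("progress", name)
    try:
        os.rename(src, work)  # atomic on the same filesystem: only one worker wins
    except OSError:
        return                # another worker already claimed this file
    process_file(work)
    os.rename(work, os.path.join("done", name))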
I see several issues.
If you have overlapping cron jobs, you need a locking mechanism to control access. Only allow one process at a time to eliminate the overlap problem. You might set up a shell script to do that: create a 'lock' by making a directory (mkdir is atomic), process the data, then delete the lock directory. If the shell script finds the directory already exists when it tries to make it, then you know another copy is already running and it can just exit.
If you can't change the cron table(s) then just rename the executable and name your shell script the same as the old executable.
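As a sketch of that locking idea in Python (the lock path is a placeholder):

# Sketch of a mkdir-based lock; LOCK_DIR is a placeholder path.
import os
import sys

LOCK_DIR = "/tmp/capture_processing.lock"

try:
    os.mkdir(LOCK_DIR)        # atomic: fails if another run already holds the lock
except FileExistsError:
    sys.exit(0)               # another copy is running, so just exit
try:
    pass                      # ... process the data here ...
finally:
    os.rmdir(LOCK_DIR)        # release the lock even if processing fails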
Hashes are not guaranteed to be unique identifiers for files; it's likely they are, but it's not absolutely guaranteed.
Why not just move a processed file to a different directory?
You mentioned overlapping cron jobs. Does this mean one conversion process can start before the previous one has finished? In that case you would perform the move at the beginning of the conversion. If you are worried about an interrupted conversion, use an intermediate directory and move to a final directory after completion.
If I'm reading the code correctly, you're updating the database (by which I mean the log of files processed) at the very end. So when you have a huge file that's being processed and not yet complete, another cron job will 'legally' start working on it, with both completing successfully and resulting in two entries in the database.
I suggest you move the logging-to-database up so that it acts as a lock for subsequent cron jobs, and add a 'success' or 'completed' entry at the very end. The latter part is important, as something that is shown as processing but doesn't have a completed state (coupled with the notion of time) can be programmatically concluded to be an error. (That is to say, a cron job tried processing it but never completed it, and the log shows it as processing for a week!)
To summarize:
Move up the log-to-database step so that it acts as a lock.
Add a 'success' or 'completed' state, which gives you a notion of an errored state.
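A rough sketch of that idea with sqlite3 follows; the table and column names are made up for illustration:

# Rough sketch: claim a file as 'processing' up front so a later cron run skips it,
# then mark it 'completed' at the end. Table and column names are illustrative only.
import sqlite3

conn = sqlite3.connect("processed.db")
conn.execute("""CREATE TABLE IF NOT EXISTS files (
                    name TEXT PRIMARY KEY,
                    status TEXT,
                    started TIMESTAMP DEFAULT CURRENT_TIMESTAMP)""")

def try_claim(name):
    try:
        with conn:
            conn.execute("INSERT INTO files (name, status) VALUES (?, 'processing')", (name,))
        return True   # this run owns the file now
    except sqlite3.IntegrityError:
        return False  # already claimed or completed by another run

def mark_done(name):
    with conn:
        conn.execute("UPDATE files SET status = 'completed' WHERE name = ?", (name,))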
PS: Don't take it the wrong way, but your code is a little hard to understand; I am not sure whether I understand it at all.
