I'm working on a Python project that requires some file transfers. One side of the connection is highly available (RHEL 6) and always online, but the other side (Windows 7) goes on and off, and the connection window is not guaranteed. Files are transferred in both directions and range in size from 10 MB to 2 GB.
Is it possible to resume a file transfer with Paramiko instead of transferring the entire file from the beginning?
I would like to use rsync, but one side is Windows and I would like to avoid cwRsync and DeltaCopy.
Paramiko doesn't offer an out-of-the-box 'resume' function. However, Syncrify, DeltaCopy's big successor, has a retry built in, and if the backup connection goes down the server waits up to six hours for a reconnect. It's pretty trusty, easy to use, and does data diffing by default.
paramiko.sftp_client.SFTPClient contains an open function, which works much like Python's built-in open function.
You can use this to open both a local and a remote file and manually transfer data from one to the other, all the while recording how much data has been transferred. When the connection is interrupted, you should be able to pick up right where you left off (assuming that neither file has been changed by a third party) by using the seek method.
Keep in mind that a naive implementation of this is likely to be slower than paramiko's get and put functions.
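As a minimal sketch of that approach for a download (the upload direction is symmetric), with placeholder host, credentials, paths and a made-up helper name; it appends to the local file and seeks past whatever was transferred on a previous attempt:

import os
import paramiko

def resume_get(host, username, password, remote_path, local_path, chunk_size=32768):
    # Hypothetical helper: resume a download by appending to the local file.
    transport = paramiko.Transport((host, 22))
    transport.connect(username=username, password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        # How much do we already have locally from a previous attempt?
        offset = os.path.getsize(local_path) if os.path.exists(local_path) else 0
        with sftp.open(remote_path, 'rb') as remote_f, open(local_path, 'ab') as local_f:
            remote_f.seek(offset)  # skip the bytes transferred last time
            while True:
                data = remote_f.read(chunk_size)
                if not data:
                    break
                local_f.write(data)
    finally:
        sftp.close()
        transport.close()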
Related
Files are being pushed to my server via FTP. I process them with PHP code in a Drupal module. O/S is Ubuntu and the FTP server is vsftp.
At regular intervals I will check for new files, process them with SimpleXML and move them to a "Done" folder. How do I avoid processing a partially uploaded file?
vsftp has lock_upload_files defaulted to yes. I thought of attempting to move the files first, expecting the move to fail on a currently uploading file. That doesn't seem to happen, at least on the command line. If I start uploading a large file and move it, it just keeps growing in the new location. I guess the directory entry is not locked.
Should I try fopen with mode 'a' or 'r+' just to see if it succeeds before attempting to load into SimpleXML or is there a better way to do this? I guess I could just detect SimpleXML load failing but... that seems messy.
I don't have control of the sender. They won't do an upload and rename.
Thanks
Using the lock_upload_files configuration option of vsftpd leads to locking files with the fcntl() function. This places advisory locks on uploaded files that are in progress. Other programs are not required to honour advisory locks, and mv, for example, does not; advisory locks are in general just advice for programs that choose to check for them.
You need another command line tool like lockrun which respects advisory locks.
Note: lockrun must be compiled with the WAIT_AND_LOCK(fd) macro using the lockf() function rather than flock(), in order to work with locks set by fcntl() under Linux. So when lockrun is compiled using lockf(), it will cooperate with the locks set by vsftpd.
With these pieces (lockrun, mv, lock_upload_files) you can build a shell script or similar that moves files one by one, checking beforehand whether a file is locked and holding an advisory lock on it while it is being moved. If a file is still locked by vsftpd, lockrun can skip the call to mv, so uploads that are still running are left alone.
If locking doesn't work, I don't know of a solution as clean/simple as you'd like. You could make an educated guess by not processing files whose last modified time (which you can get with filemtime()) is within the past x minutes.
If you want a higher degree of confidence than that, you could check and store each file's size (using filesize()) in a simple database, and every x minutes check the new size against the old one. If the size hasn't changed in x minutes, you can assume nothing more is being sent.
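The question is PHP, but here is the idea sketched in Python for illustration (os.path.getsize and os.path.getmtime correspond to filesize() and filemtime()); a file only counts as ready when its size has not changed since the previous check and its modification time is older than the chosen window:

import os
import time

last_sizes = {}  # in-memory stand-in for the "simple database"

def settled_files(directory, interval=300):
    """Call this periodically from your polling loop; returns files whose size
    has not changed since the previous call and whose mtime is older than
    `interval` seconds."""
    ready = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        size = os.path.getsize(path)
        if last_sizes.get(path) == size and time.time() - os.path.getmtime(path) > interval:
            ready.append(path)
        last_sizes[path] = size
    return ready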
The lsof Linux command lists open files on your system. I suggest executing it with shell_exec() from PHP and parsing the output to see which files are still being used by your FTP server.
Picking up on the previous answer, you could copy the file over and then compare the sizes of the copy and the original at a fixed interval.
If the sizes match, the upload is done: delete the copy and work with the file.
If the sizes do not match, copy the file again.
Repeat.
Here's another idea: create a super (but hopefully not root) FTP user that can access some or all of the upload directories. Instead of having your PHP code read uploaded files right off the disk, make it connect to the local FTP server and download the files. This way vsftpd handles the locking for you (assuming you leave lock_upload_files enabled). You'll only be able to download a file once vsftpd releases the exclusive/write lock (i.e. once writing is complete).
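For illustration in Python (the same flow applies from PHP's FTP functions), a minimal sketch that downloads through the local FTP server instead of reading the file off disk; the host, account, directory and filenames are placeholders:

from ftplib import FTP

def fetch_via_ftp(filename, local_copy):
    # Connect to the local vsftpd instance with a hypothetical 'processor' account.
    ftp = FTP('localhost')
    ftp.login('processor', 'secret')
    ftp.cwd('/uploads')
    with open(local_copy, 'wb') as out:
        # Per the answer above, this download should only succeed once vsftpd
        # has released its upload lock on the file.
        ftp.retrbinary('RETR ' + filename, out.write)
    ftp.quit()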
You mentioned trying flock in your comment (and how it fails). It does indeed seem painful to try to match whatever locking vsftpd is doing, but dio_fcntl might be worth a shot.
I guess you solved your problem years ago, but still:
If you use some pattern to find the files you need, you can ask the party uploading the files to use a different name and rename each file once its upload has completed.
You should check the HiddenStores directive in ProFTPD; more info here:
http://www.proftpd.org/docs/directives/linked/config_ref_HiddenStores.html
When appending lines to a remote file via SFTP with pysftp:
import pysftp

with pysftp.Connection('192.168.0.2', username='root', password='') as sftp:
    with sftp.cd('/home/www/test'):
        with sftp.open('test.txt', 'a+') as f:
            for i in range(100):
                s = (("%04d" % i).encode() * 10000) + b'\n'  # 40'001 bytes
                f.write(s)
If I terminate the process in the middle of the operation, sometimes (if I'm lucky) the whole line s is written to the remote file.
In other cases, the last line is truncated at the point where the process was interrupted.
Is there a way to make the SFTP f.write(s) operation atomic? I.e. either it fails in the middle and nothing is written, or it succeeds and the full 40'001-byte line is written?
I don't believe this is possible. First of all, in order for it to be possible at all, the remote system's write(2) syscall would have to guarantee that, and POSIX does not require that behavior. There are many reasons a write may be non-atomic, such as if the remote disk is full and you can only write part of the data to disk, or if the remote user has a quota and your full write would exceed that.
Additionally, you're trying to send over 40 kB across a network connection, and that likely doesn't fit in one packet, so it wouldn't make sense for any network software to write a packet that large.
If it's important to you to write a file completely or not at all, you can write to another file on the same disk and then rename over the original file. This is the way programs like Git guarantee atomic file updates. I believe for SFTP that requires that both sides support the posix-rename@openssh.com extension; OpenSSH does, but I don't know if pysftp does, so you'd need to consult the documentation.
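As a sketch of that pattern with Paramiko (pysftp is a wrapper around it), using placeholder host, credentials and paths: write the complete contents to a temporary name and then rename it over the target. Paramiko's SFTPClient.posix_rename uses the posix-rename@openssh.com extension, so it needs a reasonably recent Paramiko and a server that supports it.

import paramiko

def atomic_remote_write(sftp, remote_path, data):
    # Write everything to a temporary name first (hypothetical '.tmp' suffix),
    # then atomically rename it over the target, so readers see either the old
    # file or the complete new one, never a partial write.
    tmp_path = remote_path + '.tmp'
    with sftp.open(tmp_path, 'wb') as f:
        f.write(data)
    sftp.posix_rename(tmp_path, remote_path)

transport = paramiko.Transport(('192.168.0.2', 22))
transport.connect(username='root', password='')
sftp = paramiko.SFTPClient.from_transport(transport)
atomic_remote_write(sftp, '/home/www/test/test.txt', b'complete contents written in one go\n')
sftp.close()
transport.close()

Note that this replaces the whole file rather than appending to it, which is exactly the "completely or not at all" behaviour described above.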
Firstly, I am new to Python.
Now my question: I have a callback script running on a remote machine which sends some data and runs a script on my local machine, which processes that data and writes it to a file. Another local script of mine then needs to process the file data item by item and delete each item from the file once it is done. The problem is that the file may be updated continuously. How do I synchronize the work so that my file doesn't get messed up?
Also, please suggest a better way to do this if there is one.
I would suggest looking into named pipes or sockets, which seem better suited to your purpose than a file, provided it's really just between those two applications and you have control over the source code of both.
For example, on Unix, you could create a pipe like this (see os.mkfifo):
import os
os.mkfifo("/some/unique/path")
And then access it like a file:
dest = open("/some/unique/path", "w") # on the sending side
src = open("/some/unique/path", "r") # on the reading side
The data will be queued between your processes. It's really a first-in, first-out queue, but it behaves (mostly) like a file.
If you cannot use named pipes like this, I'd suggest IP sockets over localhost from the socket module, preferably DGRAM sockets, since you don't need to do any connection handling there. You seem to know how to do networking already.
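For example, a minimal sketch of the datagram approach over localhost, assuming port 50007 is free; the reader and writer run in separate processes and there is no connection to manage:

import socket

ADDR = ('127.0.0.1', 50007)  # hypothetical local port shared by both scripts

def reader():
    # Run this in the consuming script; recvfrom blocks until a datagram arrives.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(ADDR)
    while True:
        data, _ = sock.recvfrom(65535)
        print('received:', data)

def writer(payload):
    # Run this in the producing script; no connection handling, just send.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, ADDR)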
I would suggest using a database whose transactions allow for concurrent processing.
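For instance, a minimal sketch with SQLite (in Python's standard library), assuming both scripts can reach the same work.db file; each transaction either fully adds or fully consumes an item, so a half-written record is never visible to the other side:

import sqlite3

SCHEMA = 'CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, payload TEXT)'

def add_item(payload):
    # Called by the script that receives data from the remote machine.
    conn = sqlite3.connect('work.db')
    conn.execute(SCHEMA)
    with conn:  # one transaction: the row appears fully or not at all
        conn.execute('INSERT INTO items (payload) VALUES (?)', (payload,))
    conn.close()

def pop_item():
    # Called by the script that processes and deletes entries.
    conn = sqlite3.connect('work.db')
    conn.execute(SCHEMA)
    try:
        with conn:  # select + delete happen in one transaction
            row = conn.execute('SELECT id, payload FROM items ORDER BY id LIMIT 1').fetchone()
            if row is None:
                return None
            conn.execute('DELETE FROM items WHERE id = ?', (row[0],))
            return row[1]
    finally:
        conn.close()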
I have a program that creates a bunch of movie files. It runs as a cron job, and every time it runs, the movies from the previous iteration are moved to a 'previous' folder so that there is always a previous version to look at.
These movie files are accessed across a network by various users and that's where I'm running into a problem.
When the script runs and tries to move the files it throws a resource busy error because the files are open by various users. Is there a way in Python to force close these files before I attempt to move them?
Further clarification:
JMax is correct when he mentions that this is a server-level problem. I can access our Windows server through Administrative Tools > Computer Management > Shared Folders > Open Files and manually close the files there, but I am wondering whether there is a Python equivalent that will achieve the same result.
Something like this:

import shutil

try:
    shutil.move(src, dst)
except OSError:
    # Close src file on all machines that are currently accessing it and try again.
    pass
This question has nothing to do with Python, and everything to do with the particular operating system and file system you're using. Could you please provide these details?
At least on Windows you can use Sysinternals Handle to force a particular handle to a file to be closed. Be aware that, especially when the file is opened by another user over a network, this operation is extremely destabilising and will probably render the network connection unusable afterwards. You're looking for the "-c" command-line argument, where the documentation reads:
Closes the specified handle (interpreted as a hexadecimal number). You
must specify the process by its PID.
WARNING: Closing handles can cause application or system instability.
And if you're force-closing a file mounted over Samba on Linux, speaking from experience, it's an exercise in futility. However, others have tried with mixed success; see Force a Samba process to close a file.
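If you do go the Handle route on Windows, here is a hedged sketch of driving it from Python, assuming handle.exe is on the PATH; check the exact switches and output format against the Handle documentation for your version, and keep the warning above in mind:

import subprocess

def list_handles(path):
    # Lists processes and handle values that reference the file; the output
    # format varies between Handle versions, so inspect it before parsing.
    return subprocess.run(['handle.exe', path], capture_output=True, text=True).stdout

def close_handle(pid, handle_hex):
    # -c closes the given handle (a hexadecimal value) and -p selects the
    # owning process by PID; Handle may prompt for confirmation, and closing
    # handles can destabilise the owning application.
    subprocess.run(['handle.exe', '-c', handle_hex, '-p', str(pid)], check=True)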
As far as I know, you have to end the processes that have the file open, at least on Windows.
The .close() method doesn't work on your file object?
See Dive Into Python for more information on file objects.
[EDIT] I've re-read your question. Your problem is that users open the same file over the network and you want them to close it? But do you have access to their OS?
[EDIT2] The problem is more of a server-level issue: disconnecting the user that has the file open. See this example for Windows servers.
We have several cron jobs that ftp proxy logs to a centralized server. These files can be rather large and take some time to transfer. Part of the requirement of this project is to provide a logging mechanism in which we log the success or failure of these transfers. This is simple enough.
My question is: is there a way to check whether a file is currently being written to? My first solution was just to check the file size twice within a given timeframe and compare. But a co-worker said it may be possible to hook into the ext3 file system via Python and check the attributes to see whether the file is currently being appended to. My Google-fu came up empty.
Is there a module for ext3, or something else, that would let me check the state of a file? The server is running Fedora Core 9 with an ext3 file system.
No need for ext3-specific hooks; just check lsof, or more exactly /proc/<pid>/fd/* and /proc/<pid>/fdinfo/* (that's where lsof gets its info, AFAICT). There you can check whether the file is open, whether it's writable, and the 'cursor' position.
That's not the whole picture, though; anything beyond that happens in process space, in the standard library of the writing process, since most writes are buffered and the kernel only sees bigger chunks of data, so any 'ext3-aware' monitor wouldn't see it either.
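A minimal sketch of that /proc check in Python, assuming a Linux system and enough permissions to read other processes' /proc entries; it reports which processes hold the file open with write access:

import os

def writers_of(path):
    """Yield (pid, fd) pairs for processes that hold `path` open for writing."""
    target = os.path.realpath(path)
    for pid in filter(str.isdigit, os.listdir('/proc')):
        fd_dir = '/proc/%s/fd' % pid
        try:
            fds = os.listdir(fd_dir)
        except OSError:
            continue  # process exited or we lack permission
        for fd in fds:
            try:
                if os.path.realpath(os.path.join(fd_dir, fd)) != target:
                    continue
                # fdinfo holds the current offset ('pos') and the open flags (octal).
                with open('/proc/%s/fdinfo/%s' % (pid, fd)) as info:
                    flags = int(info.read().split('flags:')[1].split()[0], 8)
                if flags & (os.O_WRONLY | os.O_RDWR):
                    yield int(pid), int(fd)
            except (OSError, IndexError, ValueError):
                continue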
There are no ext3 hooks to check what you want directly.
I suppose you could dig through the source code of the fuser Linux command, replicate the part that finds which processes have a file open, and watch that resource. When no one has the file open any longer, the transfer is done.
Another approach:
Your cron jobs should signal that they're finished.
We have our cron jobs that transport files write an empty filename.finished once filename has been transferred. Another approach is to transfer to a temporary name, e.g. filename.part, and then rename it to filename; renaming is atomic. In either case, you check repeatedly for the presence of filename or filename.finished.
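Both conventions are easy to script; a minimal sketch with hypothetical filenames:

import os

def upload_side(part_path, final_path):
    # Convention 1: write all data into part_path (e.g. 'proxy.log.part'),
    # then rename it; os.rename is atomic on the same filesystem, so once
    # final_path exists it is complete.
    os.rename(part_path, final_path)

def is_finished(final_path):
    # Convention 2: the sender drops an empty 'filename.finished' marker once
    # the transfer is done; only process files whose marker exists.
    return os.path.exists(final_path + '.finished')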