Python: ftp file stuck in buffer?

When I download a file with ftplib using this method:
ftp = ftplib.FTP()
ftp.connect("host", "port")
ftp.login("user", "pwd")
size = ftp.size('locked')

def handleDownload(block):
    f.write(block)
    pbar.update(pbar.currval + len(block))

f = open("locked", "wb")
pbar = ProgressBar(widgets=[FileTransferSpeed(), Bar('>'), ' ', ETA(), ' ', ReverseBar('<'), Percentage()], maxval=size).start()
ftp.retrbinary("RETR locked", handleDownload, 1024)
pbar.finish()
If the file is less than 1 MB, the data stays stuck in the buffer until I download another file with enough data to push it out. I have tried to make a dynamic block size by dividing ftp.size(filename) by 20, but the same thing still happens. How do I make it so I can download single files of less than 1 MB and still use the callback function?

As Wooble stated in the comments, I did not f.close() the file, like an idiot. Fixing that solved the problem.
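For completeness, a sketch of the corrected flow: closing the file (here via a with block, which guarantees the close even on errors) flushes the buffered tail of the download to disk. The host, credentials, and progressbar widgets below are taken from the question; the import line is an assumption about how those names were brought in:
import ftplib
from progressbar import ProgressBar, FileTransferSpeed, Bar, ETA, ReverseBar, Percentage

ftp = ftplib.FTP()
ftp.connect("host")                 # placeholder host from the question
ftp.login("user", "pwd")
size = ftp.size('locked')

pbar = ProgressBar(widgets=[FileTransferSpeed(), Bar('>'), ' ', ETA(), ' ', ReverseBar('<'), Percentage()], maxval=size).start()

with open("locked", "wb") as f:     # close() runs automatically, flushing the buffer
    def handleDownload(block):
        f.write(block)
        pbar.update(pbar.currval + len(block))
    ftp.retrbinary("RETR locked", handleDownload, 1024)

pbar.finish()
ftp.quit()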

Related

Write to an FTP file without overwriting previous text (ftplib, Python) [duplicate]

I'm appending values to a log file every 6 seconds. Every 30 seconds I transfer this log to an FTP server as a file. But instead of transferring the whole file, I just want to append the collected data to the file on my server. I haven't been able to figure out how to open the server file and append the values to it.
My code so far:
session = ftplib.FTP(authData[0], authData[1], authData[2])
session.cwd("//" + serverCatalog() + "//")  # open server catalog
file = open(fileName(), 'rb')
with open(fileName(), 'rb') as f:
    f = f.readlines()
    for line in f:
        collected = line
# In some way open server file, write lines to it
session.storbinary('STOR ' + fileName(), open(fileName(), 'a'), 1024)
file.close()
session.quit()
Instead, do I have to download the server file, open it and append the data, then send it back?
Above was my question, the full solution is below:
session.cwd("//"+serverCatalog()+"//") # open server catalog
localfile = open("logfile.txt",'rb')
session.storbinary('APPE serverfile.txt', localfile)
localfile.close()
Use APPE instead of STOR.
Source: http://www.gossamer-threads.com/lists/python/python/22960 (link to web.archive.org)
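A minimal sketch of the 30-second transfer step using APPE, sending only the lines collected since the last upload; the host, credentials, directory, and the collected string here are placeholders, not from the original code:
import ftplib
import io

session = ftplib.FTP('ftp.example.com', 'user', 'pass')  # placeholder credentials
session.cwd('logs')                                      # placeholder directory

# Hypothetical data gathered since the last transfer
collected = '12:00:06 value=42\n12:00:12 value=43\n'

# APPE appends to the remote file where STOR would overwrite it
session.storbinary('APPE logfile.txt', io.BytesIO(collected.encode('ascii')))
session.quit()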

Writing data in Python to a local file and uploading it to FTP at the same time does not work

I have this weird issue with my code on a Raspberry Pi 4.
from gpiozero import CPUTemperature
from datetime import datetime
import ftplib
cpu = CPUTemperature()
now = datetime.now()
time = now.strftime('%H:%M:%S')
# Save data to file
f = open('/home/pi/temp/temp.txt', 'a+')
f.write(str(time) + ' - Temperature is: ' + str(cpu.temperature) + ' C\n')
# Login and store file to FTP server
ftp = ftplib.FTP('10.0.0.2', 'username', 'pass')
ftp.cwd('AiDisk_a1/usb/temperature_logs')
ftp.storbinary('STOR temp.txt', f)
# Close file and connection
ftp.close()
f.close()
With this code, the script doesn't write anything to the .txt file, and the file that is transferred to the FTP server has a size of 0 bytes.
When I remove this part of the code, the script writes to the file just fine.
# Login and store file to FTP server
ftp = ftplib.FTP('10.0.0.2', 'username', 'pass')
ftp.cwd('AiDisk_a1/usb/temperature_logs')
ftp.storbinary('STOR temp.txt', f)
...
ftp.close()
I also tried to write some random text to the file and run the script, and the file transferred normally.
Do you have any idea what I am missing?
After you write to the file, the file pointer is at the end. So if you pass the file handle to FTP, it reads nothing, and hence nothing is uploaded.
I do not have a direct explanation for the fact that the local file ends up empty, but the unusual combination of append mode and reading back from the same handle may be the reason.
If you want to both append the data to a local file and upload it over FTP, I suggest you either:
Append the data to the file, seek back to the original position, and upload the appended file contents (see the sketch below).
Or write the data to memory first, then separately 1) dump the in-memory data to the file and 2) upload it.
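A sketch of the first option, reusing the paths and server details from the question; the key steps are flushing after the write and seeking back to the start so storbinary has something to read:
from gpiozero import CPUTemperature
from datetime import datetime
import ftplib

cpu = CPUTemperature()
time = datetime.now().strftime('%H:%M:%S')

# a+b opens for appending and reading in binary; the pointer sits at the end after the write
with open('/home/pi/temp/temp.txt', 'a+b') as f:
    f.write(('%s - Temperature is: %s C\n' % (time, cpu.temperature)).encode('ascii'))
    f.flush()   # push the new line out of the write buffer
    f.seek(0)   # rewind so the upload reads the whole file instead of nothing

    ftp = ftplib.FTP('10.0.0.2', 'username', 'pass')
    ftp.cwd('AiDisk_a1/usb/temperature_logs')
    ftp.storbinary('STOR temp.txt', f)
    ftp.quit()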

Save video in Python from bytes

I have 2 microservices: A is written in Java and sends a video in the form of a byte[] to B, which is written in Python.
B does some treatment on the video using OpenCV, and this command in particular:
stream = cv2.VideoCapture(video)
The command works fine when given a stream or a ready local video, but when I give it the request.data that Java is sending, it says:
TypeError: an integer is required (got type bytes)
So my question is: is there any way to save a video to disk from the bytes I'm receiving from Java, or can I just give the bytes to cv2.VideoCapture?
Thank you.
Just a slight improvement to your own solution: using the with context-manager closes the file for you even if something unexpected happens:
import os

FILE_OUTPUT = 'output.avi'
# Check for and delete the output file:
# you can't have an existing file or it will throw an error
if os.path.isfile(FILE_OUTPUT):
    os.remove(FILE_OUTPUT)
# open the file 'output.avi', accessible as 'out_file'
with open(FILE_OUTPUT, "wb") as out_file:  # open for [w]riting as [b]inary
    out_file.write(request.data)
I solved my problem like this:
import os

FILE_OUTPUT = 'output.avi'
# Check for and delete the output file:
# you can't have an existing file or it will throw an error
if os.path.isfile(FILE_OUTPUT):
    os.remove(FILE_OUTPUT)
out_file = open(FILE_OUTPUT, "wb")  # open for [w]riting as [b]inary
out_file.write(request.data)
out_file.close()
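Once the bytes are on disk, the saved path can be handed to cv2.VideoCapture as usual; a short sketch, assuming request.data held a complete video file:
import cv2

stream = cv2.VideoCapture(FILE_OUTPUT)  # open the saved file, not the raw bytes
while True:
    ok, frame = stream.read()           # ok turns False once the video is exhausted
    if not ok:
        break
    # ... process frame with OpenCV here ...
stream.release()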

FTP library error: got more than 8192 bytes

Python fails while uploading a file whose size is bigger than 8192 bytes, and the exception says only "got more than 8192 bytes". Is there a solution for uploading larger files?
try:
    ftp = ftplib.FTP(str_ftp_server)
    ftp.login(str_ftp_user, str_ftp_pass)
except Exception as e:
    print('Connecting ftp server failed')
    return False

try:
    print('Uploading file ' + str_param_filename)
    file_for_ftp_upload = open(str_param_filename, 'r')
    ftp.storlines('STOR ' + str_param_filename, file_for_ftp_upload)
    ftp.close()
    file_for_ftp_upload.close()
    print('File upload is successful.')
except Exception as e:
    print('File upload failed !!!exception is here!!!')
    print(e.args)
    return False
return True
storlines reads a text file one line at a time, and 8192 is the maximum size of each line. You're probably better off using, as the heart of your upload function:
with open(str_param_filename, 'rb') as ftpup:
    ftp.storbinary('STOR ' + str_param_filename, ftpup)
ftp.close()
This reads and stores in binary, one block at a time (same default of 8192), but should work fine for files of any size.
I had a similar issue and solved it by increasing the value of ftplib's maxline variable. You can set it to any integer value you wish. It represents the maximum number of characters per line in your file. This affects uploading and downloading.
I would recommend using ftp.storbinary in most cases per Alex Martelli's answer, but that was not an option in my case (not the norm).
ftplib.FTP.maxline = 16384 # This is double the default value
Just call that line at any point before you start the file transfer.
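For reference, a sketch of the maxline override in context; the host, credentials, and filename here are placeholders, and the override must run before the transfer starts:
import ftplib

ftplib.FTP.maxline = 16384  # double the 8192-byte default per-line limit

ftp = ftplib.FTP('ftp.example.com')  # placeholder host
ftp.login('user', 'pass')            # placeholder credentials
with open('upload.txt', 'r') as fh:
    ftp.storlines('STOR upload.txt', fh)
ftp.quit()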

Python: Unpredictable memory error when downloading large files

I wrote a Python script which I am using to download a large number of video files (50-400 MB each) from an HTTP server. It has worked well so far on long lists of downloads, but for some reason it occasionally hits a memory error.
The machine has about 1 GB of RAM free, but I don't think it's ever maxed out on RAM while running this script.
I've monitored the memory usage in the task manager and perfmon and it always behaves the same from what I've seen: slowly increases during the download, then returns to normal level after it finishes the download (There's no small leaks that creep up or anything like that).
The way the download behaves is that it creates the file, which remains at 0 KB until the download finishes (or the program crashes), then it writes the whole file at once and closes it.
for i in range(len(urls)):
    if os.path.exists(folderName + '/' + filenames[i] + '.mov'):
        print 'File exists, continuing.'
        continue

    # Request the download page
    req = urllib2.Request(urls[i], headers = headers)
    sock = urllib2.urlopen(req)
    responseHeaders = sock.headers
    body = sock.read()
    sock.close()

    # Search the page for the download URL
    tmp = body.find('/getfile/')
    downloadSuffix = body[tmp:body.find('"', tmp)]
    downloadUrl = domain + downloadSuffix

    req = urllib2.Request(downloadUrl, headers = headers)
    print '%s Downloading %s, file %i of %i' \
        % (time.ctime(), filenames[i], i+1, len(urls))
    f = urllib2.urlopen(req)

    # Open our local file for writing, 'b' for binary file mode
    video_file = open(folderName + '/' + filenames[i] + '.mov', 'wb')

    # Write the downloaded data to the local file
    video_file.write(f.read()) ##### MemoryError: out of memory #####
    video_file.close()

    print '%s Download complete!' % (time.ctime())

    # Free up memory, in hopes of preventing memory errors
    del f
    del video_file
Here is the stack trace:
File "downloadVideos.py", line 159, in <module>
main()
File "downloadVideos.py", line 136, in main
video_file.write(f.read())
File "c:\python27\lib\socket.py", line 358, in read
buf.write(data)
MemoryError: out of memory
Your problem is here: f.read(). That line attempts to download the entire file into memory. Instead, read in chunks (chunk = f.read(4096)), and save the pieces to a temporary file.
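A minimal sketch of that chunked loop in the question's own urllib2 / Python 2 style (req, folderName, filenames, and i come from the loop above):
CHUNK_SIZE = 4096

f = urllib2.urlopen(req)
video_file = open(folderName + '/' + filenames[i] + '.mov', 'wb')
while True:
    chunk = f.read(CHUNK_SIZE)  # read at most 4 KB at a time
    if not chunk:               # an empty string means the download is done
        break
    video_file.write(chunk)     # write each piece immediately, keeping memory use flat
video_file.close()
f.close()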
