I'm trying to write an FTP upload program for text files. However, I'm getting this error:
builtins.TypeError: a bytes-like object is required, not 'str'.
I am using Python 3.6.
Here is my code:
def _upload_to_ftp(self, ftp_handle, name):
    # upload a single file to ftp directory
    with open(name, 'r') as f:
        print("uploading " + name)
        filename = os.path.basename(name)
        ftp_handle.storlines('STOR %s' % filename, f)
I could not figure out why.
Unfortunately, what FTP calls text is still bytes for Python 3. Python 3 strings are made of Unicode characters that need to be encoded to bytes before they can be written to files, and FTP deals with files. But here the fix is even simpler: you just have to open the local file in binary mode so that it delivers bytes instead of strings:
def _upload_to_ftp(self, ftp_handle, name):
    # upload a single file to ftp directory
    with open(name, 'rb') as f:  # use binary mode for file
        print("uploading " + name)
        filename = os.path.basename(name)
        ftp_handle.storlines('STOR %s' % filename, f)
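The text-mode vs. binary-mode distinction can be seen without any FTP server at all. A minimal sketch (using a throwaway temp file, no FTP involved) showing that the same file yields `str` in `'r'` mode and `bytes` in `'rb'` mode:

```python
import os
import tempfile

# Create a small text file to illustrate the difference.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("line one\nline two\n")

# Text mode yields str lines; binary mode yields bytes lines.
with open(path, "r") as f:
    text_line = f.readline()
with open(path, "rb") as f:
    bytes_line = f.readline()

print(type(text_line).__name__)   # str
print(type(bytes_line).__name__)  # bytes
```

`storlines` iterates over the file object it is given, so whichever type `readline` produces is what ends up being sent, and ftplib expects bytes.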
I have a requirement to read a tar.gz file from an FTP server and store it on blob storage.
The way I think I can accomplish this is to first create a temp file in the Azure Function temp directory, write all the content to it, close it, and then upload it to blob storage.
What I have done so far is:
fp = tempfile.NamedTemporaryFile()
filesDirListInTemp = listdir(tempFilePath)
logging.info(filesDirListInTemp)
try:
    with open('/tmp/fp', 'w+') as fp:
        data = BytesIO()
        save_file = ftp.retrbinary('RETR ' + filesDirListInTemp, data.write, 1024)
        data.seek(0)
        blobservice = BlobClient.from_connection_string(conn_str=connection_string, container_name=container_name, blob_name=filename, max_block_size=4*1024*1024, max_single_put_size=16*1024*1024)
        blobservice.upload_blob(gzip.decompress(data.read()))
        print("File Uploaded!")
except Exception as X:
    logging.info(X)
But I am getting this error: expected str, bytes or os.PathLike object, not list.
Can you tell me what I am doing wrong here?
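The error points at `filesDirListInTemp`: `listdir()` returns a list, but `'RETR ' + ...` needs a single filename, so one entry (e.g. `filesDirListInTemp[0]`) has to be selected. Also, since the data goes into a `BytesIO`, the temp file isn't needed at all. There is no FTP server or Azure connection to run against here, so the sketch below only demonstrates the in-memory pattern: chunks arriving via a callback (as `retrbinary` delivers them), then a gzip decompress of the buffered bytes, which is what `upload_blob` would receive:

```python
import gzip
from io import BytesIO

# Stand-in for ftp.retrbinary('RETR ' + filename, data.write):
# retrbinary calls the callback repeatedly with chunks of bytes.
original = b"hello from the tar.gz payload"
compressed = gzip.compress(original)

data = BytesIO()
for chunk in (compressed[:10], compressed[10:]):  # simulate chunked delivery
    data.write(chunk)

data.seek(0)                             # rewind before reading back
payload = gzip.decompress(data.read())   # what upload_blob would receive
print(payload.decode())
```

In the real function, `data.write` itself can be passed as the `retrbinary` callback, exactly as in the question.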
I am trying to write the contents read from an S3 object to a file, but I am getting an error while doing so.
object = s3.get_object(Bucket=bucket_name, Key="toollib/{0}/{1}/stages/{0}.groovy".format(tool, platform))
print(object)
jenkinsfile = object['Body'].read()
print(jenkinsfile)
basepath = '/mnt/efs/{0}/{1}/{2}/'.format(orderid, platform, technology)
filename = basepath + fileName
print(filename)
#file1=open(filename, "a")
with open(filename, 'a') as file:
    file.write(jenkinsfile)
Error : "errorMessage": "write() argument must be str, not bytes"
Opening the file in binary mode should do the trick:
with open(filename, 'ab') as file:
    file.write(jenkinsfile)
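Alternatively, if you want the file to stay in text mode (for example, to keep platform line-ending translation), you can decode the bytes before writing. A minimal sketch, with the S3 read replaced by a bytes literal and a temp path standing in for the real `filename`, since there is no bucket to call here:

```python
import os
import tempfile

# Stand-in for object['Body'].read(): S3 returns raw bytes.
jenkinsfile = b"pipeline { agent any }\n"

filename = os.path.join(tempfile.mkdtemp(), "Jenkinsfile")

# Keep the file in text mode, but decode the bytes first.
with open(filename, "a") as f:
    f.write(jenkinsfile.decode("utf-8"))

with open(filename) as f:
    content = f.read()
print(content, end="")
```

Either approach works; the key is that mode and data type must agree: bytes with `'ab'`, str with `'a'`.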
For whatever reason I cannot open or access the file in this subdirectory. I need to be able to open and read files within subdirectories of a zipped folder. Here is my code:
import zipfile
import os

for root, dirs, files in os.walk('Z:\\STAR'):
    for name in files:
        if '.zip' in name:
            try:
                zipt = zipfile.ZipFile(os.path.join(root, name), 'r')
                dirlist = zipt.namelist()
                for item in dirlist:
                    if 'Usb' in item:
                        input(item)
                        with zipt.open(item, 'r') as f:
                            a = f.readlines()
                            input(a[0])
                    else:
                        pass
            except Exception as e:
                print('passed trc file {}{} because of {}'.format(root, name, e))
        else:
            pass
This code currently gives me the error:
File "StarMe_tracer2.py", line 133, in tracer
if 'er99' in line:
TypeError: a bytes-like object is required, not 'str'
The content read from a file object opened with ZipFile.open is bytes rather than a string, so testing whether the string 'er99' is in a line of bytes fails with a TypeError.
Instead, you can either decode the line before you test:
if 'er99' in line.decode():
or convert the bytes stream to a text stream with io.TextIOWrapper:
import io
...
with io.TextIOWrapper(zipt.open(item,'r'), encoding='utf-8') as f:
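To show the `io.TextIOWrapper` approach end to end without the real archive, here is a self-contained sketch that builds a small zip in memory (the member name `Usb_trace.txt` and its contents are made up for illustration) and then iterates over it as text:

```python
import io
import zipfile

# Build a small zip in memory to stand in for the real archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("Usb_trace.txt", "ok\ner99 something failed\n")

with zipfile.ZipFile(buf) as zipt:
    # Wrap the binary member stream so iteration yields str lines.
    with io.TextIOWrapper(zipt.open("Usb_trace.txt"), encoding="utf-8") as f:
        for line in f:
            if "er99" in line:  # str-in-str comparison now works
                print(line.strip())
```

With the wrapper in place, the rest of the loop can keep comparing against plain strings instead of sprinkling `.decode()` calls everywhere.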
I need to extract a list of xml files that are in a tar.gz file that I'm trying to read.
I tried this:
import os
from ftplib import FTP

def writeline(data):
    filedata.write(data)
    filedata.write(os.linesep)

ftp = FTP('ftp.my.domain.com')
ftp.login(user="username", passwd="password")
ftp.cwd('inner_folder')
filedata = open('mytargz.tar.gz', 'w')
ftp.retrlines('RETR %s' % ftp.nlst()[0], writeline)
I used ftp.nlst()[0] because I have a list of tar.gz files in my ftp.
It looks like the data I'm receiving in my writeline callback is some weird symbols, and then filedata.write(data) throws an error:
{UnicodeEncodeError}'charmap' codec can't encode character '\x8b' in position 1: character maps to <undefined>.
I could really use some help here.
I don't have an FTP server to try this with, but this should work:
import os
from ftplib import FTP

def writeline(data):
    filedata.write(data)

ftp = FTP('ftp.my.domain.com')
ftp.login(user="username", passwd="password")
ftp.cwd('inner_folder')
filedata = open('mytargz.tar.gz', 'wb')
ftp.retrbinary('RETR %s' % ftp.nlst()[0], writeline)
Note that we open the file in write-binary mode ('wb'), ask the FTP server to return binary data rather than text (retrbinary instead of retrlines), and have the callback write the data as-is, without adding separators. Since the callback no longer does anything beyond writing, you could even pass filedata.write directly as the retrbinary callback.
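The original error is no accident: a gzip stream is guaranteed to contain the very byte that broke the text-mode write. A short local check (no FTP needed) showing that the `'\x8b'` from the UnicodeEncodeError is the second byte of the gzip magic number, which is why a .tar.gz can never safely pass through retrlines and a text-mode file:

```python
import gzip

# Compress some arbitrary payload; the container format is what matters here.
payload = gzip.compress(b"xml file list")

# Every gzip stream starts with the two magic bytes 1f 8b;
# 0x8b is exactly the character the 'charmap' codec could not encode.
print(payload[:2].hex())
```

Binary transfer plus binary file mode sidesteps the encoding step entirely, so no byte value can fail to round-trip.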
I have multiple broken fragments of the same video that I need to join back together into one video, but when I tried this
import os

path = 'C:/temp/test'
files = os.listdir(path)
for file in files:
    mainFile = open('C:/temp/main.mp4', 'ab')
    with open(path + '/' + file, 'rb') as read:
        print(read)
        mainFile.write(read)
    mainFile.close()
it threw an error saying
TypeError: must be string or buffer, not file
So I don't know how to make a video file buffer. I tried googling it and found something called ffmpeg, but that's a third-party app. All I need is the buffer of a file.
Note that open() returns a file object rather than the content of the file. The error occurs because a file object is being passed into write().
You can call the read() method of the file object to read and return the content of the file.
Try
import os

path = 'C:/temp/test'
files = os.listdir(path)
for file in files:
    mainFile = open('C:/temp/main.mp4', 'ab')
    with open(path + '/' + file, 'rb') as f:
        mainFile.write(f.read())
    mainFile.close()
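A couple of refinements worth considering: open the output file once outside the loop instead of reopening it per fragment, and sort the directory listing, since os.listdir returns names in arbitrary order and the fragments presumably must be concatenated in sequence. A runnable sketch with two tiny stand-in "fragment" files in a temp directory (the real paths and the .mp4 files are replaced, since they aren't available here):

```python
import os
import tempfile

# Build two small "fragment" files to stand in for the broken video parts.
path = tempfile.mkdtemp()
for i, chunk in enumerate([b"part-one|", b"part-two"]):
    with open(os.path.join(path, "%02d.bin" % i), "wb") as f:
        f.write(chunk)

main = os.path.join(path, "main.bin")
# Open the output once, then append each fragment in sorted order.
with open(main, "wb") as out:
    for name in sorted(os.listdir(path)):
        if name == "main.bin":  # skip the output file itself
            continue
        with open(os.path.join(path, name), "rb") as f:
            out.write(f.read())

with open(main, "rb") as f:
    joined = f.read().decode()
print(joined)
```

Note this byte-level concatenation only yields a playable video if the fragments really are raw pieces of one stream; container formats with per-file headers would still need a tool like ffmpeg to remux.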