Amazon S3 problems with GIF files using urllib2 (Python)

We are using urllib2 to upload images to S3:
import urllib2

from boto.s3.connection import S3Connection
from boto.s3.key import Key

img_url = pic
imgData = urllib2.urlopen(img_url).read()
unipart.coverart = store_in_s3_partpic(name, imgData)

def store_in_s3_partpic(name, content):
    conn = S3Connection(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY)
    pathtofile = "partpics/%s" % (name)
    b = conn.create_bucket('bucketname')
    k = Key(b)
    path = "/media/" + pathtofile
    k.key = path
    k.set_contents_from_string(content)
    k.set_acl("public-read")
    return pathtofile
JPG files load properly, but GIF files consistently give the following error when we go to the file URL on Amazon:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>E9482112F0EBAAD7</RequestId>
<HostId>
9Xoh6fuKdsDIwYASEg64VHt5sxw1aYXmmBGtacsG1JYMgr18GUooZReB5WyRN1TW
</HostId>
</Error>
Does anyone know why?
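Not an answer as such, but a debugging sketch, assuming boto 2.x and Python 2 as in the snippet above (the GIF path below is a hypothetical example): fetch the key back after the upload and print its ACL, to confirm the object exists at the exact path in the URL and that the public-read grant stuck. Note that S3 returns AccessDenied rather than NoSuchKey for anonymous GETs of missing objects when the bucket does not grant public list access, so a wrong key path can look exactly like a permissions problem. Also note that k.key = "/media/..." stores a key that literally begins with a slash, so the public URL needs a double slash after the bucket name.
from boto.s3.connection import S3Connection

conn = S3Connection(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY)
b = conn.get_bucket('bucketname')

# Hypothetical GIF path mirroring store_in_s3_partpic()'s key layout
path = "/media/partpics/example.gif"
k = b.get_key(path)
print k  # None means nothing was ever stored under this key

if k is not None:
    # Check for a READ grant for AllUsers (public-read)
    for grant in k.get_acl().acl.grants:
        print grant.permission, grant.uri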

Related

Error while reading PDF files from SharePoint Online

I am using the SharePoint Office365 Python libraries to read the PDF files in a folder and copy them to S3, but I am getting this error:
b'The length of the URL for this request exceeds the configured maxUrlLength value.'
Here is my code:
from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file import File
from office365.runtime.auth.user_credential import UserCredential

def sharepoint_connection(username, password, site_url, relative_url):
    ctx = ClientContext(site_url).with_credentials(UserCredential(username, password))
    web = ctx.web.get().execute_query()
    return ctx

def sharepoint_files(relative_url):
    file_names = []
    file_details = {}
    ctx = sharepoint_connection(username, password, site_url, relative_url)
    files = ctx.web.get_folder_by_server_relative_url(relative_url).files
    ctx.load(files)
    ctx.execute_query()
    for file in files:
        file_names.append(file.properties['ServerRelativeUrl'])
        file_url = file.properties['ServerRelativeUrl']
        file_name = file_url[file_url.rfind("/") + 1:]
        file_details[file_name] = file_url
    # print(file_details)
    return file_details

site_url = "https://account.sharepoint.com/sites/ExternalSharing"
relative_url = "/sites/ExternalSharing/Shared Documents/OCE********"

ctx = sharepoint_connection(username, password, site_url, relative_url)
file_url = file_details[file]
response = File.open_binary(ctx, file_url)
print(response.content)
I understand that the URL is too long, so I tried mapping the SharePoint folder to OneDrive and uploading from there, but it is the same issue.
Is there a way to handle this scenario?
Thanks in advance. Please let me know if any more information is needed.
Thanks,
Ashish
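No confirmed fix here, but a sketch of one experiment, assuming a recent Office365-REST-Python-Client (download_file and local_dir are names invented for this example): stream each file to disk with the file object's download() helper instead of File.open_binary(). Whether the underlying request stays below the server's configured maxUrlLength still depends on the length of the server-relative path, so treat this as something to try rather than a known workaround.
import os

def download_file(ctx, file_url, local_dir="."):
    # Stream the file contents straight to disk
    local_path = os.path.join(local_dir, os.path.basename(file_url))
    with open(local_path, "wb") as local_file:
        ctx.web.get_file_by_server_relative_url(file_url).download(local_file).execute_query()
    return local_path

for file_name, file_url in sharepoint_files(relative_url).items():
    print(download_file(ctx, file_url))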

Is there a way to rename images on a local REST API server using Python?

I created a localhost API to analyze and compare images (a computer vision project).
My plan is to upload images from my data folder to the server. Each image file in the folder is named fake_name.jpg/jpeg, and I am trying to pass the file name as the person name in the request parameters, but I can only do that manually, one file at a time.
I am also trying to figure out how to upload multiple files.
import base64
import urllib.parse

import requests

def image_to_base64(self, img):
    # encode the raw image bytes as a base64 data URI
    prependInfo = 'data:image/jpeg;base64,'
    encodedString = base64.b64encode(img).decode("utf-8")
    fullString = str(prependInfo) + encodedString
    return str(fullString)

# the following part is to create an entry in the database:
def create_person_entry(self, img):
    base_url = "localhost:8080/service/api/person/create?"
    parameters = {
        "person-name": 'homer simson'  # manually change the name here before each upload
    }
    data = {
        "image-data": self.image_to_base64(img)
    }
    r = requests.post(base_url + urllib.parse.urlencode(parameters),
                      headers={'Authorization': self.auth_tok}, data=data).json()
    return r

# to import 1 image i used:
with open("///data/homer simson.jpg", "rb") as img:
    person_name = cvis.create_person(img.read())
    print(person_name)
It uploads successfully, but I have to manually name the person entry via the "person-name" parameter for each person I upload. I have researched everywhere for a way to automate this.
Edit 1: I managed to get this code working:
import os

# to upload multiple images
# folder with JPEG/JPG files to upload
folder = "/home///data/"
# list of uploaded file names
upload_list = []
for files in os.listdir(folder):
    with open("{folder}{name}".format(folder=folder, name=files), "rb") as data:
        upload_list.append(files)
        person_name = cvis.create_person(data.read())
        print(person_name)
I managed to upload all images from the directory to the server, but now all my entries are named homer simpson :)
I finally got this right thanks to the suggestion made by AKX; his solution is below, please upvote.
Now I need to figure out how to delete the previous wrongly named entries. I will check the API documentation.
Am I missing something – why not just add another argument to your create_person_entry() function?
def create_person_entry(self, name, img):
    parameters = {
        "person-name": name,
    }
    # ...
    return r

# ...
cvis.create_person_entry("homer_simpson", img.read())
And if you have a folderful of images,
import os
import glob

for filename in glob.glob("people/*.jpg"):
    file_basename = os.path.splitext(os.path.basename(filename))[0]
    with open(filename, "rb") as img:
        cvis.create_person_entry(file_basename, img.read())
will use the file's name sans extension, e.g. people/homer_simpson.jpg is homer_simpson.

Python - Download files from SharePoint site

I have a requirement to download files from and upload files to SharePoint sites, and it has to be done using Python.
My site is of the form https://ourOrganizationName.sharepoint.com/ followed by further links.
Initially I thought I could do this using requests, BeautifulSoup, etc., but I am not at all able to use "Inspect Element" on the body of the site.
I have tried libraries such as sharepoint, HttpNtlmAuth and office365, but without success; they always return 403.
I have googled as much as I can, again without success, and even YouTube hasn't helped me.
Could anyone help me with how to do this? Suggestions on libraries, with documentation links, are really appreciated.
Thanks
Have you tried the Office365-REST-Python-Client library? It supports SharePoint Online authentication and allows you to download/upload a file as demonstrated below:
Download a file
from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file import File
ctx_auth = AuthenticationContext(url)
ctx_auth.acquire_token_for_user(username, password)
ctx = ClientContext(url, ctx_auth)
response = File.open_binary(ctx, "/Shared Documents/User Guide.docx")
with open("./User Guide.docx", "wb") as local_file:
local_file.write(response.content)
Upload a file
import os

ctx_auth = AuthenticationContext(url)
ctx_auth.acquire_token_for_user(username, password)
ctx = ClientContext(url, ctx_auth)

path = "./User Guide.docx"  # local path
with open(path, 'rb') as content_file:
    file_content = content_file.read()
target_url = "/Shared Documents/{0}".format(os.path.basename(path))  # target url of the file
File.save_binary(ctx, target_url, file_content)  # upload the file
Usage
Install the latest version (from GitHub):
pip install git+https://github.com/vgrem/Office365-REST-Python-Client.git
Refer to /examples/sharepoint/files/* for more details.
You can also try this solution to upload a file; for me, the first upload solution didn't work.
First step: pip3 install Office365-REST-Python-Client==2.3.11
import os

from office365.sharepoint.client_context import ClientContext
from office365.runtime.auth.user_credential import UserCredential

def print_upload_progress(offset):
    print("Uploaded '{0}' bytes from '{1}'...[{2}%]".format(offset, file_size, round(offset / file_size * 100, 2)))

# Load file to upload:
path = './' + filename  # if the file to upload is in the same directory
try:
    with open(path, 'rb') as content_file:
        file_content = content_file.read()
except Exception as e:
    print(e)
file_size = os.path.getsize(path)

site_url = "https://YOURDOMAIN.sharepoint.com"
user_credentials = UserCredential('user_login', 'user_password')  # this user must have access to the site
ctx = ClientContext(site_url).with_credentials(user_credentials)

size_chunk = 1000000
target_url = "/sites/folder1/folder2/folder3/"
target_folder = ctx.web.get_folder_by_server_relative_url(target_url)

# Upload file to SharePoint:
try:
    uploaded_file = target_folder.files.create_upload_session(path, size_chunk, print_upload_progress).execute_query()
    print('File {0} has been uploaded successfully'.format(uploaded_file.serverRelativeUrl))
except Exception as e:
    print("Error while uploading to SharePoint:\n", e)
Based on: https://github.com/vgrem/Office365-REST-Python-Client/blob/e2b089e7a9cf9a288204ce152cd3565497f77215/examples/sharepoint/files/upload_large_file.py

Serving pdf files by link, downloaded as pdf.html

Created a function to allow users to download PDF files by link. It works fine; the only problem is that what the user saves is .html, so all files end up as file.pdf.html.
def download(request, ticket_id):
    ticket_path = str(Ticket.objects.get(id=ticket_id).upload)
    with open('files/media/' + ticket_path, 'rb') as pdf:
        response = HttpResponse(pdf.read())
        response['content_type'] = 'application/pdf'
        response['Content-Disposition'] = 'attachment;filename="file.pdf"'
        return response
Why?
You should move content_type into the constructor: HttpResponse(pdf.read(), content_type='application/pdf'). It is an argument of HttpResponse, not a header; response['content_type'] = ... just sets a nonstandard header literally named content_type, so the response still goes out with the default text/html Content-Type and the browser appends .html.
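For reference, a minimal sketch of the corrected view (identical to the code in the question apart from the constructor argument):
def download(request, ticket_id):
    ticket_path = str(Ticket.objects.get(id=ticket_id).upload)
    with open('files/media/' + ticket_path, 'rb') as pdf:
        # Content-Type must be passed to the constructor, not assigned as a header key
        response = HttpResponse(pdf.read(), content_type='application/pdf')
        response['Content-Disposition'] = 'attachment; filename="file.pdf"'
        return response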

Python requests.put() corrupts the uploaded image file

I am trying to use the Python requests package to upload an image file to my Amazon AWS S3 bucket.
The code I have opens the bucket, downloads an image file, resizes the image, saves the image locally, then tries to upload the saved image to the S3 bucket.
It all works fine except that the uploaded jpg file is corrupt in some way, inasmuch as it can no longer be viewed as an image. I have checked that the original file being uploaded is not corrupt.
My code is:
from urllib2 import urlopen

import requests
from awsauth import S3Auth
from boto.s3.connection import S3Connection
from PIL import Image
from resizeimage import resizeimage

conn = S3Connection(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket(settings.AWS_STORAGE_BUCKET_NAME)
for key in bucket.list(prefix='media/userphotos'):
    file_name = key.name
    full_path_filename = 'https://' + settings.AWS_STORAGE_BUCKET_NAME + '.s3.amazonaws.com/' + file_name
    fd_img = urlopen(full_path_filename)
    img = Image.open(fd_img)
    img = resizeimage.resize_width(img, 800)
    new_filename = full_path_filename.replace('userphotos', 'webversion')
    # Save temporarily before uploading to S3 bucket
    img.save('temp.jpg', img.format)
    the_file = {'media': open('temp.jpg', 'rb')}
    r = requests.put(new_filename, files=the_file,
                     auth=S3Auth(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY))
    fd_img.close()
UPDATE
I have just noticed that while the jpg file cannot be opened with a web browser or with Preview on my Mac, it can be opened successfully with Adobe Photoshop! Clearly the image is in the file, but something about the jpg file created by requests.put() stops it being readable by web browsers. Strange!
Do this instead:
requests.put(url, data=open(filename, 'rb'))
I noticed that using files= (as documented in the requests library) sends the payload as multipart/form-data, which prepends the multipart boundary and part headers to the file contents. You can inspect that with xxd <filename>.
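Applied to the loop in the question, the upload becomes (a sketch reusing the question's variable names):
# Send the raw bytes as the request body; files= would wrap them in a
# multipart/form-data envelope, which S3 stores verbatim into the object.
with open('temp.jpg', 'rb') as f:
    r = requests.put(new_filename, data=f,
                     auth=S3Auth(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY))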
