Django: How to upload a file from memory to Flickr? - python

I would like to know how to upload a file from memory to Flickr.
I am using the Python Flickr API kit (http://stuvel.eu/flickrapi).
Does the file in memory have a path that can be passed as filename?
Code
response = flickr.upload(filename=f.read(), callback=None, **keywords)
Error
TypeError at /image/new/
must be encoded string without NULL bytes, not str
Thanks in advance

You can try using the tempfile module to write the data to disk before uploading it:
import tempfile

with tempfile.NamedTemporaryFile(delete=True) as tfile:
    tfile.write(f.read())
    tfile.flush()
    response = flickr.upload(filename=tfile.name, callback=None, **keywords)
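Depending on the version of the Python Flickr API kit installed, upload() may also accept a file-like object directly, which avoids the temporary file entirely. This is only a hedged sketch assuming such a fileobj parameter exists in your release (check the library's documentation); f is Django's uploaded file object:
# Assumes a flickrapi release whose upload() accepts a file-like object (fileobj)
f.seek(0)  # rewind, since f.read() may already have consumed the stream
response = flickr.upload(filename=f.name, fileobj=f, callback=None, **keywords)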

Related

Gzip a file in Python before uploading to Cloud Storage

I have the following Python function to write the given content to a bucket in Cloud Storage:
import gzip
from google.cloud import storage

def upload_to_cloud_storage(json):
    """Write to Cloud Storage."""
    # The contents to upload as a JSON string.
    contents = json

    storage_client = storage.Client()

    # Path and name of the file to upload (file doesn't yet exist).
    destination = "path/to/name.json.gz"

    # Gzip the contents before uploading
    with gzip.open(destination, "wb") as f:
        f.write(contents.encode("utf-8"))

    # Bucket
    my_bucket = storage_client.bucket('my_bucket')

    # Blob (content)
    blob = my_bucket.blob(destination)
    blob.content_encoding = 'gzip'

    # Write to storage
    blob.upload_from_string(contents, content_type='application/json')
However, I receive an error when running the function:
FileNotFoundError: [Errno 2] No such file or directory: 'path/to/name.json.gz'
Highlighting this line as the cause:
with gzip.open(destination, "wb") as f:
I can confirm that the bucket and path both exist although the file itself is new and to be written.
I can also confirm that removing the Gzipping part sees the file successfully written to Cloud Storage.
How can I gzip a new file and upload to Cloud Storage?
Other answers I've used for reference:
https://stackoverflow.com/a/54769937
https://stackoverflow.com/a/67995040
Although @David's answer wasn't complete at the time of solving my problem, it got me on the right track. Here's what I ended up using along with explanations I found out along the way.
import gzip
from google.cloud import storage
from google.cloud.storage import fileio

def upload_to_cloud_storage(json_string):
    """Gzip and write to Cloud Storage."""
    storage_client = storage.Client()
    bucket = storage_client.bucket('my_bucket')

    # Filename (include path)
    blob = bucket.blob('path/to/file.json')

    # Set blob metadata for decompressive transcoding
    blob.content_encoding = 'gzip'
    blob.content_type = 'application/json'

    writer = fileio.BlobWriter(blob)

    # Must write as bytes
    gz = gzip.GzipFile(fileobj=writer, mode="wb")

    # When writing as bytes we must encode our JSON string.
    gz.write(json_string.encode('utf-8'))

    # Close connections
    gz.close()
    writer.close()
We use the GzipFile() class instead of the convenience function (gzip.compress()) so that we can pass in a file object and a mode. When trying to write with mode w or wt you will receive the error:
TypeError: memoryview: a bytes-like object is required, not 'str'
So we must write in binary mode (wb). This also enables the .write() method. When doing so, however, we need to encode our JSON string, which can be done with str.encode() using utf-8. Failing to do so results in the same error.
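A tiny illustration of that point, writing into an in-memory buffer (the values here are only examples):
import gzip
import io

buf = io.BytesIO()
gz = gzip.GzipFile(fileobj=buf, mode="wb")
# gz.write('{"hello": "world"}')                # TypeError: memoryview: a bytes-like object is required, not 'str'
gz.write('{"hello": "world"}'.encode("utf-8"))  # OK: the encoded bytes go into the gzip stream
gz.close()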
Finally, I wanted to enable decompressive transcoding, where the requester (a browser in my case) receives the uncompressed version of the file when requested. To enable this, google.cloud.storage.blob allows you to set some metadata, including content_type and content_encoding, so we can follow best practices.
This sees the JSON object in memory written to your chosen destination in Cloud Storage in a compressed format and decompressed on the fly (without needing to download a gzip archive).
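As a hedged illustration of that behaviour (the URL is only an example and assumes the object is publicly readable), a client that does not ask for gzip simply gets the plain JSON back:
import json
import requests

# With Content-Encoding: gzip set on the blob, GCS decompresses on the fly
# for clients that don't accept gzip; the URL below is a placeholder.
url = "https://storage.googleapis.com/my_bucket/path/to/file.json"
resp = requests.get(url, headers={"Accept-Encoding": "identity"})
data = json.loads(resp.text)  # already-decompressed JSON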
Thanks also to @JohnHanley for the troubleshooting advice.
The best solution is not to write the gzip to a local file at all, but to compress and stream directly to GCS.
import gzip
from google.cloud import storage
from google.cloud.storage import fileio

storage_client = storage.Client()
bucket = storage_client.bucket('my_bucket')
blob = bucket.blob('my_object')
writer = fileio.BlobWriter(blob)

gz = gzip.GzipFile(fileobj=writer, mode="w")  # GzipFile always writes bytes, so encode str contents first
gz.write(contents)

gz.close()
writer.close()

How to convert a blob file to a specific format?

I am building a web application with ReactJS and Django framework.
In this web application, there is a part where I record an audio file and send it to the backend to save it.
This is the blob data from ReactJS that I send:
Blob {
    size: 29535,
    type: "audio/wav; codecs=0"
}
And this is the code I am using in the backend:
@api_view(['POST'])
@csrf_exempt
def AudioModel(request):
    try:
        audio = request.FILES.get('audio')
    except KeyError:
        return Response({'audio': ['no audio ?']}, status=HTTP_400_BAD_REQUEST)

    destination = open('audio_name.wav', 'wb')
    for chunk in audio.chunks():
        destination.write(chunk)
    destination.close()  # closing the file

    return Response("Done!", status=HTTP_200_OK)
When I play the file I saved, it plays some sound but crashes when it reaches the end.
This problem made me look for information about the saved file (extension, ...).
For this reason I used the fleep library:
import fleep
with open("audio_name.wav", "rb") as file:
info = fleep.get(file.read(128))
print(info.type)
print(info.extension)
print(info.mime)
OUTPUT:
['video']
['webm']
['video/webm']
But I am getting video in the output!
Am I doing something wrong?
How can I fix this issue?
Is there anything I can use to save my file in the desired format?
Any help is appreciated.
EDIT:
Output of first 128 bytes of the saved file:
b'\x1aE\xdf\xa3\x9fB\x86\x81\x01B\xf7\x81\x01B\xf2\x81\x04B\xf3\x81\x08B\x82\x84webmB\x87\x81\x04B\x85\x81\x02\x18S\x80g\x01\xff\xff\xff\xff\xff\xff\xff\x15I\xa9f\x99*\xd7\xb1\x83\x0fB#M\x80\x86ChromeWA\x86Chrome\x16T\xaek\xbf\xae\xbd\xd7\x81\x01s\xc5\x87\xbd\x8d\xc0\xd5\xc6\xaf\xd0\x83\x81\x02\x86\x86A_OPUSc\xa2\x93OpusHead\x01\x01\x00\x00\x80\xbb\x00\x00'
Use SciPy to read data from and write data to a variety of file formats.
Usage examples:
Writing wav file in Python with wavfile.write from SciPy
scipy.io.wavfile.write
It looks like react-voice-recorder uses the MediaRecorder object with its default options internally. But in the options you can set the correct mimeType, for example audio/webm; codecs=opus.
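If you also need a real WAV file on the server rather than just the correct extension, one hedged option (assuming ffmpeg is installed and pydub is an acceptable dependency) is to save the blob with a .webm extension and convert it afterwards; the file names below are only examples:
from pydub import AudioSegment  # needs ffmpeg available on the system path

# The uploaded blob is really WebM/Opus, so read it as webm and export WAV.
sound = AudioSegment.from_file('audio_name.webm', format='webm')
sound.export('audio_name.wav', format='wav')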

Python FileNotFoundError when reading a public json file

I have a public viewable JSON file that is hosted on s3. I can access the file directly by clicking on the public url to the s3 object and view the JSON in my browser fine. Anyone on the internet can view the file easily.
Yet when the below code is run in Python (using Lambda connected to an API trigger) I get [Errno 2] No such file or directory: as the errorMessage, and FileNotFoundError as the errorType.
def readLeetDictionary(self):
    jsonfile = 'https://s3.amazonaws.com/url/to/public/facing/file/data.json'
    with open(jsonfile, 'r') as data_file:
        self.data = json.load(data_file)
What am I missing here? Since the file is a publicly viewable JSON file I would assume I wouldn't be forced to use boto3 library and formally handshake to the file in order to read the file (with object = s3.get_object(Bucket=bucket_names,Key=object_name) for example) - would I?
The code you need should be something like:
import json
import urllib.request

def readLeetDictionary():
    jsonfile = 'https://s3.amazonaws.com/url/to/public/facing/file/data.json'
    response = urllib.request.urlopen(jsonfile)
    data = json.loads(response.read())
    print(data)
Please feel free to ask further or explain if this does not suit you.
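If you would rather stay inside AWS tooling, the boto3 route mentioned in the question also works; a minimal sketch (the bucket and key names are placeholders):
import json
import boto3

s3 = boto3.client('s3')
# Read the object directly instead of going through the public URL.
obj = s3.get_object(Bucket='my-bucket', Key='path/to/data.json')
data = json.loads(obj['Body'].read())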

Dropbox Python API not updating file

My code uploads a txt file to my Dropbox, but the document itself is empty of content. The only thing inside it is the title of the file, 'test_data.txt'; the data that is in the real file is not there. The file never updates when running the script a second time either, but I suspect this is because the file contents are never actually read. If anyone could help me with this I would appreciate it.
import dropbox
from dropbox.files import WriteMode
overwrite = WriteMode('overwrite', None)
token = 'xxxx'
dbx = dropbox.Dropbox(token)
dbx.users_get_current_account()
dbx.files_upload('test_data.txt', '/test_data.txt', mode = WriteMode('overwrite'))
files_upload should receive the content to upload. In your current code you are asking to upload the string "test_data.txt" as the file "/test_data.txt".
with open('test_data.txt', 'rb') as fh:
    dbx.files_upload(fh.read(), '/test_data.txt')
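Since the original script also passes WriteMode('overwrite') so that repeated runs replace the file, a hedged variant that keeps that behaviour:
from dropbox.files import WriteMode

with open('test_data.txt', 'rb') as fh:
    # Overwrite the existing revision so running the script again updates the file.
    dbx.files_upload(fh.read(), '/test_data.txt', mode=WriteMode('overwrite'))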

Error while uploading gzip via HTTP POST

So I'm trying to POST a compressed file via httplib2 in Python 3.2. I get the following error:
io.UnsupportedOperation: fileno
I used to POST just an XML file, but since those files are getting too big I want to compress them in memory first.
This is how I create the compressed file in memory:
import gzip
import io
import os

contentfile = open(os.path.join(r'path', os.path.basename(fname)), 'rb')

tempfile = io.BytesIO()
compressedFile = gzip.GzipFile(fileobj=tempfile, mode='wb')
compressedFile.write(contentfile.read())
compressedFile.close()

tempfile.seek(0)
and this is how I'm trying to POST it.
http.request(self.URL,'POST', tempfile, headers={"Content-Type": "application/x-gzip", "Connection": "keep-alive"})
Any ideas?
Like I said, it worked fine when using the XML file, i.e. contentfile.
Solved by providing the "Content-Length" header, which removes the need for httplib2 to determine the length itself.
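A hedged sketch of how that could look with the in-memory buffer from the question (it assumes httplib2 will accept the file-like body once the length is supplied explicitly, as the fix above describes):
# Compute the size of the compressed data and send it explicitly, so httplib2
# does not need to call fileno() on the BytesIO object to work out the length.
body_length = len(tempfile.getvalue())
headers = {
    "Content-Type": "application/x-gzip",
    "Content-Length": str(body_length),
    "Connection": "keep-alive",
}
http.request(self.URL, 'POST', tempfile, headers=headers)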
