I want to store media files in certain Mongo documents.
I was thinking of using Eve's put_internal method to update the document.
How would I use the payload param to provide the file as the payload?
You want to provide the file value as a FileStorage object. So, supposing your media field is called media, a hypothetical payload would look like:
{'media': <FileStorage: u'example.jpg' ('image/jpeg')>, ...}
In order to achieve that you would do something like:
from werkzeug.datastructures import FileStorage

# open the file in binary mode and wrap it in a FileStorage object
f = open('example.jpg', 'rb')
fs = FileStorage(f, filename='example.jpg', content_type='image/jpeg')
payload['media'] = fs
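From there you can hand the payload to put_internal. A minimal sketch, assuming a hypothetical resource named media_docs and a made-up document _id (put_internal needs an application/request context to run):

from eve import Eve
from eve.methods.put import put_internal
from werkzeug.datastructures import FileStorage

app = Eve()

with app.test_request_context():
    with open('example.jpg', 'rb') as f:
        payload = {'media': FileStorage(f, filename='example.jpg',
                                        content_type='image/jpeg')}
        # the lookup kwargs identify the document to replace
        result = put_internal('media_docs', payload,
                              **{'_id': '55b2340538345b05b0a4e7a6'})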
Currently, I am creating inf_conf from an entry script (score.py) and an environment; however, I have a JSON file that I also want to include in this.
Is there a way I can do this?
I have seen the source_directory argument, but the JSON file is not in the same folder as the score.py file. https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py
inf_conf = InferenceConfig(entry_script="score.py", environment=environment)
Currently, it is required that all the necessary files and objects related to the endpoint be placed in the source_directory:
inference_config = InferenceConfig(
    environment=env,
    source_directory='./endpoint_source',
    entry_script="./score.py",
)
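So, if you can copy the JSON file into that folder before deployment, the entry script can read it with a path relative to itself. A minimal sketch, assuming the file is copied to endpoint_source/my-configs.json (the file name is an assumption):

import json
import os

def init():
    # everything in source_directory is shipped alongside the entry script,
    # so a __file__-relative path finds the copied JSON file
    config_path = os.path.join(os.path.dirname(os.path.realpath(__file__)),
                               'my-configs.json')
    with open(config_path) as f:
        configs = json.load(f)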
One workaround is to upload your JSON file somewhere else, e.g., to Blob Storage, and download it in the init() function of your entry script. For example:
score.py:
import requests

def init():
    """
    This function is called when the container is initialized/started,
    typically after create/update of the deployment.
    """
    global model
    # things related to initializing the model
    model = ...

    # download your JSON file
    json_file_url = 'https://sampleazurestorage.blob.core.windows.net/data/my-configs.json'
    response = requests.get(json_file_url)
    with open('my_configs.json', 'wb') as f:
        f.write(response.content)
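The downloaded file can then be loaded wherever you need it, e.g. inside run(). A minimal sketch (the run signature follows the usual entry-script contract; the file name matches the init() above):

import json

def run(raw_data):
    # load the configuration that init() downloaded into the working directory
    with open('my_configs.json') as f:
        configs = json.load(f)
    ...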
I have an HTML file that I am uploading to S3 using Python. For some reason, S3 adds a system-defined metadata entry saying that the file's Content-Type is "binary/octet-stream".
I need to change this value to "text/html". I can do it manually, but I want it to be done automatically when I upload the file.
I tried the following code:
metadata = {
    "Content-Type": "text/html"
}

s3_file_key = str(Path("index.html"))
local_file_path = Path("~", "index.html").expanduser()

s3 = session.resource("s3")
bucket = s3.Bucket(bucket_name)
with open(local_file_path, "rb") as local_file:
    bucket.put_object(Key=s3_file_key, Body=local_file, Metadata=metadata)
but the result was that the file had two metadata keys: the system-defined Content-Type stayed "binary/octet-stream", and a user-defined x-amz-meta-content-type key was added alongside it.
I couldn't find any documentation about how to change a system-defined metadata key.
Thanks for the help!
Use the MetadataDirective parameter. Note that it is only accepted by copy operations (not by put_object), so you would copy the object onto itself with replaced metadata:
bucket.Object(s3_file_key).copy_from(CopySource={'Bucket': bucket_name, 'Key': s3_file_key},
                                     ContentType='text/html', MetadataDirective='REPLACE')
MetadataDirective -- Specifies whether the metadata is copied from the source object or replaced with metadata provided in the request ('COPY' | 'REPLACE').
S3 - Boto3 Docs
I was also having a similar problem and found out you need to use the ContentType parameter of the put_object function: the Metadata dict only sets user-defined x-amz-meta-* keys, while ContentType sets the system-defined Content-Type.
bucket.put_object(Key=s3_file_key, Body=local_file, ContentType='text/html')
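The same works with the managed upload_file transfer, where ContentType is one of the allowed ExtraArgs keys. A minimal sketch (bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")
# ContentType in ExtraArgs sets the system-defined Content-Type on upload
s3.upload_file("index.html", "my-bucket", "index.html",
               ExtraArgs={"ContentType": "text/html"})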
I want to add tags to the files as I upload them to S3. Boto3 supports specifying tags with the put_object method; however, considering the expected file size, I am using the upload_file function, which handles multipart uploads. But this function rejects 'Tagging' as a keyword argument.
import boto3

client = boto3.client('s3', region_name='us-west-2')
client.upload_file('test.mp4', 'bucket_name', 'test.mp4',
                   ExtraArgs={'Tagging': 'type=test'})
ValueError: Invalid extra_args key 'Tagging', must be one of: ACL, CacheControl, ContentDisposition, ContentEncoding, ContentLanguage, ContentType, Expires, GrantFullControl, GrantRead, GrantReadACP, GrantWriteACP, Metadata, RequestPayer, ServerSideEncryption, StorageClass, SSECustomerAlgorithm, SSECustomerKey, SSECustomerKeyMD5, SSEKMSKeyId, WebsiteRedirectLocation
I found a way to make this work by using the S3 transfer manager directly and modifying the allowed-keyword list.
from s3transfer import S3Transfer
import boto3

client = boto3.client('s3', region_name='us-west-2')
transfer = S3Transfer(client)
transfer.ALLOWED_UPLOAD_ARGS.append('Tagging')
transfer.upload_file('test.mp4', 'bucket_name', 'test.mp4',
                     extra_args={'Tagging': 'type=test'})
Even though this works, I don't think it is the best way; it might create other side effects. Currently I am not able to find the correct way to achieve this. Any advice would be great. Thanks.
The Tagging directive is now supported by boto3. You can do the following to add tags:
import boto3
from urllib import parse
s3 = boto3.client("s3")
tags = {"key1": "value1", "key2": "value2"}
s3.upload_file(
    "file_path",
    "bucket",
    "key",
    ExtraArgs={"Tagging": parse.urlencode(tags)},
)
The S3 Customization Reference — Boto 3 Docs documentation lists valid values for extra_args as:
ALLOWED_UPLOAD_ARGS = ['ACL', 'CacheControl', 'ContentDisposition', 'ContentEncoding', 'ContentLanguage', 'ContentType', 'Expires', 'GrantFullControl', 'GrantRead', 'GrantReadACP', 'GrantWriteACP', 'Metadata', 'RequestPayer', 'ServerSideEncryption', 'StorageClass', 'SSECustomerAlgorithm', 'SSECustomerKey', 'SSECustomerKeyMD5', 'SSEKMSKeyId', 'WebsiteRedirectLocation']
Therefore, this does not appear to be a valid way to specify a tag.
It appears that you might need to call put_object_tagging() to add the tag(s) after creating the object.
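A minimal sketch of that approach, with bucket and key names as placeholders:

import boto3

client = boto3.client('s3', region_name='us-west-2')
client.upload_file('test.mp4', 'bucket_name', 'test.mp4')

# apply the tags in a second call, after the multipart upload completes
client.put_object_tagging(
    Bucket='bucket_name',
    Key='test.mp4',
    Tagging={'TagSet': [{'Key': 'type', 'Value': 'test'}]},
)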
I am new to Python and Firebase and I am trying to flatten my Firebase database.
I have a database where a Data node contains several "cat" child nodes (cat1, cat2, and so on).
Each cat has thousands of records in it. All I want is to fetch the cat names and put them in an array; for example, I want the output to be ['cat1', 'cat2', ...].
I was using this tutorial
http://ozgur.github.io/python-firebase/
from firebase import firebase
firebase = firebase.FirebaseApplication('https://your_storage.firebaseio.com', None)
result = firebase.get('/Data', None)
The problem with the above code is that it'll attempt to fetch all the data under Data. How can I fetch only the "cats"?
If you want to get the values inside the cats as columns, try using Pyrebase. Install it with pip install pyrebase at the cmd / Anaconda prompt (the latter is preferred if you didn't add pip or Python to your environment paths). After installing:
import pyrebase

config = {
    "apiKey": "your_api_key",
    "authDomain": "your_auth_domain",
    "databaseURL": "your_database_url",
    "storageBucket": "your_storage_bucket",
    "serviceAccount": "your_service_account"
}
Note: you can find all the information above in your Firebase console:
https://console.firebase.google.com/project/ >>> your project >>> click on the "</>" icon with the tag "add Firebase to your web app".
Back to the code...
Make a neat definition so you can store it in a .py file:
def connect_firebase():
    # add a way to encrypt these; I'm a beginner myself and don't know how
    username = "usernameyoucreatedatfirebase"
    password = "passwordforaboveuser"

    firebase = pyrebase.initialize_app(config)
    auth = firebase.auth()

    # authenticate a user (figure out how not to leave this hardcoded)
    user = auth.sign_in_with_email_and_password(username, password)

    # per Pyrebase's author, the token expires every hour, so it needs to be refreshed
    user = auth.refresh(user['refreshToken'])

    # set database
    db = firebase.database()
    return db
OK, now save this into a neat .py file.
Next, in your new notebook or main .py file, import the file we just created, which we'll call auth.py from now on:
from auth import *

# assign the connection to a variable
db = connect_firebase()
And now the hard/easy part that took me a while to figure out:
# notice the value inside .child(); it should be the parent node holding all the cat keys
values = db.child('cats').get()

# to load everything into a DataFrame you'll need to use .val()
import pandas as pd
data = pd.DataFrame(values.val())
And that's it. Call print(data.head()) to check whether the values/columns are where they're expected to be.
Firebase Realtime Database is one big JSON tree:
when you fetch data at a location in your database, you also retrieve
all of its child nodes.
The best practice is to denormalize your data, creating multiple locations (nodes) for the same data:
Many times you can denormalize the data by using a query to retrieve a
subset of the data
In your case, you may create a second node named "categories" where you list "only" the category names.
/cat1
    /...
/cat2
    /...
/cat3
    /...
/cat4
    /...
/categories
    /cat1
    /cat2
    /cat3
    /cat4
In this scenario you can use the update() method to write to more than one location at the same time.
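With Pyrebase, for instance, a single update() call can write both locations at once. A minimal sketch (the cat name and field are placeholders; keys containing slashes perform a multi-path update):

# write the cat's data and register its name under /categories in one call
db.update({
    "cat5/some_field": "some_value",
    "categories/cat5": True,
})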
I was exploring the Pyrebase documentation. As per that, we can extract only the keys from some path.
To return just the keys at a particular path use the shallow() method.
all_user_ids = db.child("users").shallow().get()
In your case, it'll be something like:
firebase = pyrebase.initialize_app(config)
db = firebase.database()
# allCats.val() holds just the keys under /Data, e.g. ['cat1', 'cat2', ...]
allCats = db.child("Data").shallow().get()
Let me know if this doesn't help.
I am trying to implement a function in Django to upload an image from a client (an iPhone app) to an Amazon S3 server. The iPhone app sends an HTTP request (method PUT) with the content of the image in the HTTP body. For instance, the client PUTs the image to the following URL: http://127.0.0.1:8000/uploadimage/sampleImage.png/
My function in Django looks like this to handle such a PUT request and save the file to S3:
import mimetypes

from boto.s3.connection import S3Connection
from boto.s3.key import Key
from django.conf import settings
from django.http import HttpResponse

def store_in_s3(filename, content):
    conn = S3Connection(settings.ACCESS_KEY, settings.PASS_KEY)  # gets access key and pass key from settings.py
    bucket = conn.create_bucket("somepicturebucket")
    k = Key(bucket)
    k.key = filename
    mime = mimetypes.guess_type(filename)[0]
    k.set_metadata("Content-Type", mime)
    k.set_contents_from_string(content)
    k.set_acl("public-read")

def upload_raw_data(request, name):
    if request.method == 'PUT':
        store_in_s3(name, request.raw_post_data)
        return HttpResponse('Upload of raw data to S3 successful')
    else:
        return HttpResponse('Upload not successful')
My problem is how to tell my function the name of the image. In my urls.py I have the following, but it won't work:
url(r'^uploadrawdata/(\d+)/', upload_raw_data ),
Now, as far as I'm aware, \d+ stands for digits, so it's obviously of no use here when I pass the name of a file. However, I was wondering if this is the correct way in the first place. I read this post here and it suggests the following line of code, which I don't understand at all:
file_name = path.split("/")[-1:][0]
Also, I have no clue what the rest of the code is all about. I'm a bit new to all of this, so any suggestions on how to simply upload an image would be very welcome. Thanks!
This question is not really about uploading, and the linked answer is irrelevant. If you want to accept a string rather than digits in the URL, in order to pass a filename, you can just use \w instead of \d in the regex.
Edit to clarify: Sorry, didn't realise you were trying to pass a whole filename plus extension. You probably want this:
r'^uploadrawdata/(.+)/$'
so that it matches any character. You should probably read an introduction to regular expressions, though.
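Putting it together, the urls.py entry might look like this (the URL prefix mirrors the example above; the captured group arrives as the view's name argument):

url(r'^uploadimage/(.+)/$', upload_raw_data),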