I am using django-storages and Amazon S3 for file storage. In my model I have:
avatar = models.ImageField(_('Avatar'), upload_to='avatars/profiles/', blank=True, null=True)
The image is uploaded successfully on save, but the full URL with credentials is saved. In my retrieve requests (when I read the URL from the db via console) I get something like:
https://subdomain.amazonaws.com/avatars/profiles/filename.jpg?X-Amz-Algorithm=XXX&X-Amz-Expires=XXX&X-Amz-SignedHeaders=XXXX&X-Amz-Signature=XXXX&X-Amz-Date=XXXXXX&X-Amz-Credential=XXXX
How can I prevent this? I could strip the URL before responding, but I do not need, and therefore do not want, to save URLs in this format: all files can be accessed publicly, so there is no need for credentials.
PS: I thought of using the post_save hook, but it seemed like a hack to me.
To remove the authentication credentials from the query string, set AWS_QUERYSTRING_AUTH = False in your settings.py. From the django-storages documentation at https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html:
AWS_QUERYSTRING_AUTH (optional; default is True)
Setting AWS_QUERYSTRING_AUTH to False removes query parameter authentication from generated URLs. This can be useful if your S3 buckets are public.
What you see in X-Amz-Credential is your access key ID. In the Amazon context it is not considered sensitive information, so it can be stored in plain text.
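For example, a minimal sketch of the relevant settings (the bucket name is a placeholder):
settings.py
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_STORAGE_BUCKET_NAME = 'my-public-bucket'  # placeholder
AWS_QUERYSTRING_AUTH = False  # plain URLs, no signed query parameters
With AWS_QUERYSTRING_AUTH = False, avatar.url returns a bare https URL and the X-Amz-* parameters no longer appear in generated URLs.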
If you set AWS_S3_CUSTOM_DOMAIN in settings.py, django-storages will return the custom domain without the query string.
For reference, here is the url method of the S3BotoStorage class:
def url(self, name, headers=None, response_headers=None, expire=None):
    # Preserve the trailing slash after normalizing the path.
    name = self._normalize_name(self._clean_name(name))
    if self.custom_domain:
        return "%s//%s/%s" % (self.url_protocol,
                              self.custom_domain, filepath_to_uri(name))
    if expire is None:
        expire = self.querystring_expire
    return self.connection.generate_url(
        expire,
        method='GET',
        bucket=self.bucket.name,
        key=self._encode_name(name),
        headers=headers,
        query_auth=self.querystring_auth,
        force_http=not self.secure_urls,
        response_headers=response_headers,
    )
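For example, a minimal sketch (the domain below is a placeholder, e.g. a CloudFront or CNAME alias for the bucket):
settings.py
AWS_S3_CUSTOM_DOMAIN = 'media.example.com'  # placeholder custom domain
With this set, url() takes the custom_domain branch above and never calls generate_url, so no query-string credentials appear.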
I am running an app built with django-tenants. The app asks the user (tenant) to upload some data. I want the data to be segregated in subdirectories for each tenant.
According to the docs (https://django-tenants.readthedocs.io/en/latest/files.html), here is how the media root is configured:
settings.py
MEDIA_ROOT = "/Users/murcielago/desktop/simulation_application/data"
MULTITENANT_RELATIVE_MEDIA_ROOT = "%s"
On the upload everything is great.
Now, I can't find a way to retrieve the file being uploaded within the app. Basically I need the app to serve the file corresponding to which tenant is requesting it.
Here is how I thought this would work:
from django.conf import settings
import pandas as pd

media_file_dir = settings.MULTITENANT_RELATIVE_MEDIA_ROOT
df = pd.read_csv(media_file_dir + '/uploads/sample_orders_data.csv')
but this does not work.
So far I have made it work by grabbing the tenant name from the URL and passing it to the app using pickle, but this is not right in terms of security and won't scale.
Does someone have a clue about the best way to handle reading tenant-specific files?
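One possible approach, sketched here as an assumption rather than taken from the thread: the django-tenants middleware resolves the tenant on each request and sets the active schema on the database connection, so the schema name can be used to build the per-tenant media path instead of passing the tenant name around with pickle. The function name below is illustrative.
views.py
import os
import pandas as pd
from django.conf import settings
from django.db import connection  # django-tenants sets connection.schema_name per request

def load_orders():
    # MULTITENANT_RELATIVE_MEDIA_ROOT is "%s", which django-tenants fills
    # with the tenant's schema name under MEDIA_ROOT
    tenant_dir = settings.MULTITENANT_RELATIVE_MEDIA_ROOT % connection.schema_name
    path = os.path.join(settings.MEDIA_ROOT, tenant_dir, 'uploads', 'sample_orders_data.csv')
    return pd.read_csv(path)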
When a user creates or registers a new account on my website, an image is generated and is supposed to be uploaded to the S3 bucket. The image is successfully created (verified by running ls on the server in the media directory), but it is not uploaded to S3. However, when I upload an image for a user account from the admin panel, the change is correctly reflected in S3 (the newly uploaded image appears in the bucket's directory), but this is not feasible since users cannot be given admin panel access. I aim to auto-upload the generated image to the S3 bucket when a new account is created.
Here's some related code.
views.py
def signup(request):
    if request.method == "POST":
        base_form = UserForm(data=request.POST)
        addnl_form = AddnlForm(data=request.POST)
        if base_form.is_valid() and addnl_form.is_valid():
            usrnm = base_form.cleaned_data['username']
            if UserModel.objects.filter(user__username=usrnm).count() == 0:
                user = base_form.save()
                user.set_password(user.password)
                user.save()
                #print(img)
                addnl = addnl_form.save(commit=False)
                addnl.user = user
                img = qr.make_image()  # create a qr code image, full code not included
                img.save('media/qrcodes/%s.png' % usrnm)
                addnl.qr_gen = 'qrcodes/%s.png' % usrnm
                addnl.save()
        else:
            messages.error(request, base_form.errors, addnl_form.errors)
    else:
        base_form = UserForm()
        addnl_form = AddnlForm()
    return render(request, 'app/signup.html', {'base_form': base_form, 'addnl_form': addnl_form})
models.py
class UserModel(models.Model):
    ...
    qr_gen = models.ImageField(upload_to='qrcodes', default=None, null=True, blank=True)
settings.py
DEFAULT_FILE_STORAGE = 'project.storage_backend.MediaStorage'
storage_backend.py
from storages.backends.s3boto3 import S3Boto3Storage

class MediaStorage(S3Boto3Storage):
    location = 'media'
    default_acl = 'public-read'
    file_overwrite = False
UPDATE
If I upload an image manually in the registration form instead of auto-generating one, even in that case it is successfully uploaded to S3; the only case that fails is when I need to auto-upload without user intervention.
Please help me solve this problem. Thank you.
I would recommend taking a look at django-storages, which automates all of this so that you only have to worry about the form and the view. There you will find help on how to deal with images easily.
Instead of setting the media storage globally, set it on the field:
from project.storage_backend import MediaStorage  # import path from the question's storage_backend.py

class UserModel(models.Model):
    ...
    qr_gen = models.ImageField(upload_to='qrcodes', storage=MediaStorage())
Auto-uploading an image directly to S3 requires communicating with S3's API directly. Plainly using django-storages or tweaking the DEFAULT_FILE_STORAGE path isn't enough here, as that only points user-uploaded files to the specified S3 bucket/path. This problem can be tackled with the boto3 library's upload_file method.
Usage example:
import boto3
s3 = boto3.resource('s3')
s3.Bucket('mybucket').upload_file('/tmp/hello.txt', 'hello.txt')
Params:
Filename (str) -- The path to the file to upload.
Key (str) -- The name of the key to upload to.
ExtraArgs (dict) -- Extra arguments that may be passed to the client operation.
Callback (function) -- A method which takes a number of bytes transferred to be periodically called during the upload.
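Applied to the signup view above, a minimal sketch might look like this (the bucket name is a placeholder, and the key mirrors the media/ prefix used by MediaStorage):
import boto3

s3 = boto3.resource('s3')
local_path = 'media/qrcodes/%s.png' % usrnm  # where img.save() wrote the file
# 'mybucket' is a placeholder; the key keeps the same media/qrcodes/ layout
s3.Bucket('mybucket').upload_file(local_path, 'media/qrcodes/%s.png' % usrnm)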
In a personal project, I am trying to use Django as my frontend and then allow data entered by users in a particular form to be copied to Google Sheets.
Google's own docs recommend https://github.com/google/oauth2client, which is now deprecated, and the docs have not been updated. Given this, I have started using Python Social Auth and gspread. For gspread to function correctly, I need to pass it not only an access token but also a refresh token. Python Social Auth, however, is not persisting the refresh token along with the rest of the "extra data". Looking at the data preserved and the URLs routed to, it seems to me that somewhere it is routing through Google+.
I have the following configurations in my Django settings files:
AUTHENTICATION_BACKENDS = (
    'social_core.backends.google.GoogleOAuth2',
    'django.contrib.auth.backends.ModelBackend',
)

SOCIAL_AUTH_PIPELINE = (
    'social_core.pipeline.social_auth.social_details',
    'social_core.pipeline.social_auth.social_uid',
    'social_core.pipeline.social_auth.social_user',
    'social_core.pipeline.user.get_username',
    'social_core.pipeline.user.create_user',
    'social_core.pipeline.social_auth.associate_user',
    'social_core.pipeline.social_auth.load_extra_data',
    'social_core.pipeline.user.user_details',
    'social_core.pipeline.social_auth.associate_by_email',
)

SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = '...'
SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = '...'
SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = ['https://www.googleapis.com/auth/spreadsheets']
Is there a better way to access a google sheet?
Am I correct that PSA or google is redirecting me into a Google+ auth flow instead of the Google Oauth2?
If not, what must change so that Python Social Auth keeps the refresh token?
It's true that python-social-auth will use some bits of the Google+ platform, at least the API to retrieve details about the user to fill in the account.
From your settings, I see you have associate_by_email at the bottom; at that point it has no use, since the user has already been created. If you really plan to use it, it must come before create_user; you can check the DEFAULT_PIPELINE as a reference (a reordered sketch follows at the end of this answer).
In order to get a refresh_token from Google, you need to tell it that you want one; to do that, set the offline access type:
SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = {
    'access_type': 'offline'
}
With that setting, Google will give you a refresh_token and it will automatically be stored in extra_data.
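A sketch of the reordered pipeline, using exactly the entries from the question with associate_by_email moved ahead of create_user:
SOCIAL_AUTH_PIPELINE = (
    'social_core.pipeline.social_auth.social_details',
    'social_core.pipeline.social_auth.social_uid',
    'social_core.pipeline.social_auth.social_user',
    'social_core.pipeline.user.get_username',
    'social_core.pipeline.social_auth.associate_by_email',  # before create_user
    'social_core.pipeline.user.create_user',
    'social_core.pipeline.social_auth.associate_user',
    'social_core.pipeline.social_auth.load_extra_data',
    'social_core.pipeline.user.user_details',
)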
Just provide this in your settings.py:
SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = {
    'access_type': 'offline',
    'hd': 'xyzabc.com',
    'approval_prompt': 'force'
}
Remember the {'approval_prompt': 'force'} entry: it forces Google to show the consent screen again on every login, and Google only returns a refresh_token when consent is (re-)granted, so this way you reliably get a refresh token.
You can send extra parameters to the OAuth2 provider using the variable
SOCIAL_AUTH_<PROVIDER>_AUTH_EXTRA_ARGUMENTS
For Google, you can see the extra parameters they accept in their documentation (scroll down to "parameters"). The one we are looking for is access_type:
access_type: Indicates whether your application can refresh access tokens when the user is not present at the browser. Valid parameter values are online, which is the default value, and offline.
So we can add the following to settings.py, to indicate that we want to receive a refresh token:
SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = {"access_type": "offline"}
The results from EXTRA_ARGUMENTS will be stored in extra_data, so the refresh token can be accessed like this:
refresh_token = user.social_auth.get(provider="google-oauth2").extra_data["refresh_token"]
One possible solution is to store the refresh token alongside the user in a UserProfile model, by adding a custom function to the social-auth pipeline:
Create the model
# models.py
from django.contrib.auth.models import User
from django.db import models

class UserProfile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE, related_name="profile")
    refresh_token = models.CharField(max_length=255, default="")
Add a function to store the refresh token
# pipeline.py
from .models import UserProfile

def store_refresh_token(user=None, *args, **kwargs):
    extra_data = user.social_auth.get(provider="google-oauth2").extra_data
    UserProfile.objects.get_or_create(
        user=user, defaults={"refresh_token": extra_data["refresh_token"]}
    )
Add our new function to the social-auth pipeline.
# settings.py
...
SOCIAL_AUTH_PIPELINE = (
    ...
    "my_app.pipeline.store_refresh_token",  # a hyphenated path ("my-app") would not be importable
)
SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = [
    'https://www.googleapis.com/auth/spreadsheets',
    # any other scopes you need
]
...
The token is now stored alongside the user and can be used to initialise the sheets client or whatever else you need.
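For example, a sketch of initialising gspread with the stored tokens (assuming the gspread and google-auth packages; the client id/secret come from the Python Social Auth settings already shown, and the function name is illustrative):
import gspread
from django.conf import settings
from google.oauth2.credentials import Credentials

def sheets_client_for(user):
    # access token from python-social-auth, refresh token from our UserProfile
    social = user.social_auth.get(provider="google-oauth2")
    creds = Credentials(
        token=social.extra_data["access_token"],
        refresh_token=user.profile.refresh_token,
        token_uri="https://oauth2.googleapis.com/token",
        client_id=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_KEY,
        client_secret=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET,
    )
    return gspread.authorize(creds)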
I have configured my Django app's default file storage to use boto:
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
I also have a model that stores uploaded images to s3
...
profile_pic = models.ImageField(upload_to=get_upload_path, null=True)
...
However, when I reference this field, it shows up with an S3 URL.
How do I configure this to return a cloudfront address?
Use AWS_S3_CUSTOM_DOMAIN: django-storages has an option to specify a CloudFront URL.
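For example, a minimal sketch (the distribution domain is a placeholder):
settings.py
AWS_S3_CUSTOM_DOMAIN = 'd1234abcd.cloudfront.net'  # placeholder CloudFront domain
With this set, profile_pic.url is built from the CloudFront domain instead of the S3 endpoint.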
I am creating a website using HTML as the frontend and Python with the Eve framework as the backend. I have enabled token authentication for my users (see Eve's RESTful Account Management docs). But when I pass the values to Eve, it gives me a 401.
var login = function (loginData) {
    var deferred = $q.defer();
    $http.post(appConfig.serviceUrl + 'user', {data: loginData})
Here loginData holds the username and password of my user from the HTML page; this piece of code is inside a .js file.
My api.py holds the following authentication code.
class RolesAuth(TokenAuth):
    def check_auth(self, token, allowed_roles, resource, method):
        # use Eve's own db driver; no additional connections/resources are used
        accounts = app.data.driver.db['user']
        lookup = {'token': token}
        if allowed_roles:
            lookup['roles'] = {'$in': allowed_roles}
        account = accounts.find_one(lookup)
        return account

def add_token(documents):
    # Don't use this in production:
    # You should at least make sure that the token is unique.
    for document in documents:
        document["token"] = (''.join(random.choice(string.ascii_uppercase)
                                     for x in range(10)))
My problem is that as soon as api.py runs, it asks for proper credentials. How can I send the token directly to the auth mechanism so that it lets me access the db?
How would you suggest I get rid of the authentication alert box?
I want the token to be sent to the API automatically.
Suppose I use basic authentication: how can I send the username and password values directly and validate them, without the browser pop-up box asking for a username and password?
Thanks in advance.
Does it work with curl? Refer to this question.
Also, refer to this and this thread on the mailing list.
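As a sketch of what the client has to send (assuming the requests package; with Eve's TokenAuth the token is passed as the username of a Basic Auth header with an empty password, and setting the header programmatically also avoids the browser's credentials pop-up):
import requests

token = 'XXXXXXXXXX'  # placeholder: the value add_token() stored on the user document
resp = requests.get('http://localhost:5000/user', auth=(token, ''))
print(resp.status_code)  # 200 once the token matches a 'user' document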