I use a MinIO backend in my Django app. Users can pick images from object storage (an online user gallery), and they can also upload a new image from their device to create a post. The problem: when I create a post that uses images already in object storage, those images get duplicated there, because every post creation uploads its images to object storage again. What should I do to prevent these duplicates?
This is my model:
class MediaStorage(models.Model):
    file = models.FileField(
        verbose_name="Object Upload",
        storage=MinioBackend(bucket_name='django-backend-dev-private'),
        upload_to=iso_date_prefix,
    )
This is my create-post view:
class CreatePostView(generics.CreateAPIView):
    ...
    def post(self, request, *args, **kwargs):
        user = request.user
        data = request.data
        ...
        for media_file in post_files:
            file_team = post_team
            f = MediaStorage.objects.create(team=file_team, owner=user)
            f.media = media_file
            f.save()
            post.multimedia.add(f)
        return Response(post_serializer.PostSerializer(post).data, status=status.HTTP_201_CREATED)
Thank you so much.
I think you shouldn't upload directly. You could compute the image's MD5 hash first and add an attribute to MediaStorage to store that MD5 value. Before uploading, check whether the same MD5 already exists in the DB; if it does, the image was probably uploaded before.
I am using Celery, Redis, and Django REST framework together.
The error happens when I try to pass the serializer to Celery's delay within Django REST framework.
Here is the viewset
class TestSet(viewsets.ModelViewSet):
    queryset = Test.objects.all()
    serializer_class = ImageSerializer

    def create(self, request, *args, **kwargs):
        serializer = TestSerializer(data=request.data)
        if serializer.is_valid():
            image_uploaded = "static_path"
            json_data = base64.b64encode(np.array(image_uploaded)).decode('ascii')
            result = test_call.delay({'image': json_data})
            result = test_call.delay(serializer)
            data = {"task": result.task_id}
            return Response(data, status=status.HTTP_201_CREATED)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
@shared_task(name="values")
def test_call(decoded_image):
    return decoded_image
The error I get is
EncodeError(TypeError('Object of type Response is not JSON serializable'))
Update:
Even when I do this, I still get an error:
result = test_call.delay({"task": 1})

@shared_task(name="values")
def test_call(decoded_image):
    return {"task": 2}
This isn't going to answer your question, but I can't leave a comment (low reputation).
It seems that you are trying to JSON-serialize something that isn't JSON-serializable. Based on the name, it is some kind of image data. You could try a workflow whose payloads are JSON-serializable, for example:
First example:
first save the image somewhere that is accessible later and put the location in the serializer (e.g. an S3 bucket, then a link to the image)
in your Celery task, fetch the image data based on that location
Second example:
convert the image data into something JSON-serializable, like a Base64 image string
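A minimal sketch of the second option: round-tripping raw image bytes through a JSON-safe Base64 string. The function names here are illustrative; your Celery task would call the decode side before processing:

```python
import base64

def encode_image_bytes(raw: bytes) -> str:
    """Turn raw image bytes into a JSON-serializable ASCII string."""
    return base64.b64encode(raw).decode("ascii")

def decode_image_string(encoded: str) -> bytes:
    """Recover the original bytes, e.g. inside the Celery task."""
    return base64.b64decode(encoded.encode("ascii"))

# The payload passed to .delay() is then plain JSON:
# test_call.delay({"image": encode_image_bytes(uploaded_file.read())})
```

Note this is also why the original code fails: passing the serializer (or a `Response`) to `.delay()` hands Celery an object its JSON serializer cannot encode, whereas a dict of strings and numbers works.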
When a user creates or registers a new account on my website, an image is generated and is supposed to be uploaded to the S3 bucket. The image is successfully created (verified by running ls in the media directory on the server), but it is not getting uploaded to S3. However, when I upload an image for a user account from the admin panel, the change is correctly reflected in S3 (i.e. the newly uploaded image shows up in the S3 bucket's directory) — but this is not feasible, because users cannot be given admin-panel access. I want to auto-upload the generated image to the S3 bucket when a new account is created.
Here's some related code.
views.py
def signup(request):
    if request.method == "POST":
        base_form = UserForm(data=request.POST)
        addnl_form = AddnlForm(data=request.POST)
        if base_form.is_valid() and addnl_form.is_valid():
            usrnm = base_form.cleaned_data['username']
            if UserModel.objects.filter(user__username=usrnm).count() == 0:
                user = base_form.save()
                user.set_password(user.password)
                user.save()
                addnl = addnl_form.save(commit=False)
                addnl.user = user
                img = qr.make_image()  # create a QR code image; full code not included
                img.save('media/qrcodes/%s.png' % usrnm)
                addnl.qr_gen = 'qrcodes/%s.png' % usrnm
                addnl.save()
        else:
            messages.error(request, base_form.errors, addnl_form.errors)
    else:
        base_form = UserForm()
        addnl_form = AddnlForm()
    return render(request, 'app/signup.html', {'base_form': base_form, 'addnl_form': addnl_form})
models.py
class UserModel(models.Model):
    ...
    qr_gen = models.ImageField(upload_to='qrcodes', default=None, null=True, blank=True)
settings.py
DEFAULT_FILE_STORAGE = 'project.storage_backend.MediaStorage'
storage_backend.py
from storages.backends.s3boto3 import S3Boto3Storage

class MediaStorage(S3Boto3Storage):
    location = 'media'
    default_acl = 'public-read'
    file_overwrite = False
UPDATE
If I upload any image manually through the registration form, even that is successfully uploaded to S3. The only case that fails is when the auto-generated image must be uploaded without user intervention.
Please help me solve this problem. Thank you.
I would recommend taking a look at django-storages, which automates all of this, so you only need to worry about the form and the view. There you will find help on how to deal with images easily.
Instead of setting the media storage globally, set it on the field:
class UserModel(models.Model):
    ...
    qr_gen = models.ImageField(upload_to='qrcodes', storage=MediaStorage())
Auto-uploading an image directly to S3 requires communicating with S3's API. Plainly using django-storages or tweaking the DEFAULT_FILE_STORAGE path isn't enough on its own, as it only routes user-uploaded files to the specified S3 bucket/path. This problem can be tackled with the boto3 library's upload_file method.
Usage example:
import boto3
s3 = boto3.resource('s3')
s3.Bucket('mybucket').upload_file('/tmp/hello.txt', 'hello.txt')
Params:
Filename (str) -- The path to the file to upload.
Key (str) -- The name of the key to upload to.
ExtraArgs (dict) -- Extra arguments that may be passed to the client operation.
Callback (function) -- A method which takes a number of bytes transferred to be periodically called during the upload.
I am trying to serialize multiple files sent via Postman, and I have tried several ways to save several files at once.
This is the model:
class PrtFiles(models.Model):
file_name = models.FileField(null=True, blank=True)
And I getting this request in my view in Django:
<MultiValueDict: {'file_name[0]': [<InMemoryUploadedFile: Inventario Personal_Users [SHORT].xlsx (application/vnd.openxmlformats-officedocument.spreadsheetml.sheet)>], 'file_name[1]': [<InMemoryUploadedFile: Planilla_de_Usuarios_MEL [NEW][SHORT].xlsx (application/vnd.openxmlformats-officedocument.spreadsheetml.sheet)>]}>
This comes from a request sent via Postman (screenshot omitted).
Is there a way to save all of these files at once?
P.S. I'm a beginner with Django; thanks in advance.
Iterate over the files that you can see in the request:
def save_to_model(files):
    for f in files:
        m = PrtFiles()
        m.file_name = f
        m.save()
This is the general idea of what you can do; I hope you'll implement it in a better way.
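One wrinkle worth noting: in the MultiValueDict above, the keys are `file_name[0]` and `file_name[1]` (Postman's indexed style), so `request.FILES.getlist('file_name')` would come back empty. A hedged helper (the function name is mine, not from the original code) that collects values across such indexed keys before passing them to `save_to_model`:

```python
def collect_indexed_files(files_dict, base_name="file_name"):
    """Gather values whose key is `base_name` or `base_name[i]`,
    as Postman sends them, in lexicographic key order."""
    collected = []
    for key in sorted(files_dict):
        if key == base_name or (key.startswith(base_name + "[") and key.endswith("]")):
            collected.append(files_dict[key])
    return collected

# In the view (hypothetical):
# save_to_model(collect_indexed_files(request.FILES))
```

Alternatively, if the client sends all files under the single key `file_name`, Django's `request.FILES.getlist('file_name')` already returns the list directly and no helper is needed.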
I have used Python for many of my projects, but I am new to Django and Django REST framework, which I am using to design and develop a set of web APIs for my current project.
For the backend, we are using Postgres for user information and DynamoDB for the other resources users work on.
In the basic implementation, I tried to write the viewset as below:
class WorkViewSet(viewsets.ViewSet):
    serializer_class = serializers.WorkSerializer
    permission_map = {
        'create': [IsAuthenticated, IsUser, ],             # post
        'list': [IsAuthenticated, ],                       # get
        'retrieve': [IsAuthenticated, ],                   # get
        'work_approval': [IsAuthenticated, IsAdmin, ],     # post
        'work_disapproval': [IsAuthenticated, IsAdmin, ],  # post
    }

    def list(self, request):
        ...

    def create(self, request):
        ...

    def retrieve(self, request, pk=None):
        ...

    @action(methods=['post'], detail=True, url_path='approve', url_name='work_approval')
    def work_approval(self, request, pk=None, *args, **kwargs):
        ...

    @action(methods=['post'], detail=True, url_path='disapprove', url_name='work_disapproval')
    def work_disapproval(self, request, pk=None, *args, **kwargs):
        ...

    def get_permissions(self):
        try:
            return [permission() for permission in self.permission_map[self.action]]
        except KeyError:
            return [permission() for permission in self.permission_classes]
and the serializer as below:
class WorkSerializer(serializers.Serializer):
    STATUSES = (
        '0',
        '1',
        '2',
    )
    work_id = serializers.IntegerField(read_only=True)
    work_name = serializers.CharField(max_length=256)
    work_type = serializers.CharField(max_length=256)
    work_status = serializers.ChoiceField(choices=STATUSES, default='0')

    def create(self, validated_data):
        ...

    def update(self, instance, validated_data):
        ...
This code is working absolutely fine for me, but with the latest requirement changes, I need to configure the POST request to additionally accept a CSV file and parse it to extract the contents, which must be pushed to the database as additional fields (not as a file field). I looked for solutions to this problem and found this link, but that solution mainly targets bulk submission of a single type of resource, which differs from my need.
I am using Python 3.6.5, Django 2.0.6 and Django Rest Framework 3.8.2
Please advise how I should proceed.
Extend your serializer to include a CSV file:
class WorkSerializer(serializers.Serializer):
    csv_file = serializers.FileField()
In the serializer's create function:
def create(self, validated_data):
    csv_input = validated_data.pop("csv_file", None)
    instance = super().create(validated_data)
    if csv_input:
        # process your CSV file here
        ...
    return instance
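A minimal sketch of what the processing step could look like, assuming the CSV has a header row whose columns match the serializer's fields (the column names below are hypothetical):

```python
import csv
import io

def parse_work_rows(uploaded_bytes: bytes):
    """Decode an uploaded CSV file and return one dict per data row."""
    text = uploaded_bytes.decode("utf-8-sig")  # tolerate a BOM from Excel exports
    reader = csv.DictReader(io.StringIO(text))
    return [row for row in reader]

# Inside create(), something like (hypothetical):
# for row in parse_work_rows(csv_input.read()):
#     ... push row["work_name"], row["work_type"] to the database ...
```

`csv.DictReader` keys each row by the header line, so the loop body can address columns by name instead of position.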
Personally, I would suggest processing the CSV files and updating the DB in a background task, because this could be a long-running operation.
So instead of processing the CSV file directly during the POST request, you would schedule a task.
Updated to answer the comment
Background processing requires a little bit of configuration, and you have multiple options to choose from. Perhaps the easiest is to use django-background-tasks.
It would serve your purpose well: you simply create a function, add the background decorator, and when it is called, a task is scheduled.
Do you think this approach of using a CSV file to post bulk data is a good one, or should we use a huge JSON instead?
Well, that depends.
If you upload a file, you will need to configure storage for it that your scheduled task can access (local or remote, again depending on your use case).
One huge JSON: that depends on how huge "huge" is. You would need to run some tests to determine your limits.
A possible solution would be to upload your CSV file directly to your storage from the client (if you used S3, that would be easy) and then just tell your server to process it from there.
I have a website that lets users upload files. These files are attached to a node whose ID is part of the upload request. Since the same filename might be attached to different nodes, Django will rename the file by adding a hash to the filename. Thus if a user downloads a previously uploaded file, it won't have the original filename.
Is it possible to create a subdirectory (named after the node ID) inside the media folder that a file is uploaded to? The closest solution I found was to change the storage of the FileField, but that is static for all files of that one model. Or is there another, better way to solve the problem with duplicate filenames?
Model:
class Attachment(models.Model):
    node = models.IntegerField(default=-1)
    file = models.FileField(upload_to=".")
View:
def file_upload(request):
    if request.method == "POST":
        form = UploadFileForm(request.POST, request.FILES)
        if form.is_valid():
            instance = Attachment(file=request.FILES["file"], node_id=request.POST["node_id"])
            instance.save()
            return HttpResponse(instance.file.url)
Yes, take a look at the documentation on upload_to.
You could do something like this, which includes the node id (defined as an integer in your model) in the upload_to path:
def attachment_path_with_node(instance, filename):
    return "attachments/{}/{}".format(instance.node, filename)

class Attachment(models.Model):
    node = models.IntegerField(default=-1)
    file = models.FileField(upload_to=attachment_path_with_node)
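Since the `upload_to` callable is plain Python, you can sanity-check the generated paths without touching Django; the stand-in instance below is just for illustration:

```python
from types import SimpleNamespace

def attachment_path_with_node(instance, filename):
    return "attachments/{}/{}".format(instance.node, filename)

# Any object with a .node attribute works as a stand-in for the model instance:
fake = SimpleNamespace(node=42)
path = attachment_path_with_node(fake, "report.pdf")
# path is "attachments/42/report.pdf"
```

Note that Django still appends a hash when a file with the same name already exists in that directory, but files for different nodes now land in different directories, so the original filename survives more often.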
Also, the path can be further customized like this:
document = models.FileField(upload_to='documents/%Y/%m/%d/')
which would upload to: MEDIA_ROOT/documents/2020/12/22/.
See more at https://simpleisbetterthancomplex.com/tutorial/2016/08/01/how-to-upload-files-with-django.html