I have a base64-encoded variable in my Python script; it was originally an image. The image files will be around 1 MB in size. I have two questions.
First:
Should I decode this variable before saving it to my MySQL database?
Second:
This might depend on the previous answer: should I save it as a BLOB or TEXT? I realize the choice may depend on what I am going to do with it later.
Later I am going to pull this data from my database and display it on a website and in iOS and Android mobile apps.
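For reference, a minimal sketch of decoding before the INSERT (the table and column names are hypothetical); note that MySQL's plain BLOB tops out at 64 KB, so a ~1 MB image needs a MEDIUMBLOB column:

```python
import base64

def to_blob(b64_text):
    # Decode back to raw bytes before storing; the binary form is
    # about 25% smaller than the base64 text representation.
    return base64.b64decode(b64_text)

# Hypothetical usage with a MEDIUMBLOB column:
# cursor.execute("INSERT INTO images (data) VALUES (%s)", (to_blob(b64_var),))
```

Storing the raw bytes also means no extra decode step when serving the image to the website or the mobile apps.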
I am trying to overwrite an image in my Cloud Storage bucket via the Python API, but after I overwrite it and refresh the Cloud web page or the public link (even after clearing the browser cache), the image is still the same, even the next day. Sometimes it randomly updates to the new image!
Edit: the metadata gets updated, but not the file-size info, and the Cloud web page and the public URL still show the old image.
What I expect is that when I upload a file to Cloud Storage via the API, I can download the new file from the public link shortly afterwards, instead of the old image.
I expected to be able to define the cache behaviour with the Cache-Control directive on the file (Edit: it is probably not a caching issue, because even the next day the image stays the old one).
This is my code:
blob = bucket.blob(name)
blob.cache_control = "no-store"  # disable caching on the public URL
blob.upload_from_filename(name)
I tried:
Deleting the old image via the Cloud web page and then, a few seconds later, uploading the new image with the same name via Python: it works! I can download the new image from the public link and see it in the Cloud web page. Edit: it seems to work only some of the time!
Deleting the image with Python and uploading the new image via Python directly afterwards: not working. While it is deleted, the public link doesn't show it, but after I upload the new one, the public link shows the old one again.
I read that the default cache setting for public bucket files is "public, max-age=3600". So I set the Cache-Control directive to "no-store" or "public, max-age=0" and confirmed in the browser debug console that these Cache-Control settings are reflected in the headers. But the old image still loads every time.
I changed the bucket type to regional instead of multi-region. Even after deleting the bucket, recreating it and moving the data back into it, the old image still shows up!
Any tip is highly appreciated!
I made it work!
It was probably not related to Google Cloud Storage.
But in case someone made the same mistake as I did:
I used Django's FileSystemStorage class and saved the new file under the same name as the old one in the /temp directory, assuming the old one would be overwritten if it still existed. Instead, the storage gives the new file a different name, and I later uploaded the old file with blob.upload_from_filename(name).
That's why everything happened so randomly.
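The renaming behaviour can be reproduced without Django; the sketch below mimics what FileSystemStorage.save does when the target name is taken (the helper name is mine, and Django actually appends a random suffix rather than a counter):

```python
import os
import tempfile

def save_no_overwrite(directory, name, data):
    # Mimic Django's FileSystemStorage: if `name` already exists,
    # pick a free name instead of overwriting the old file.
    base, ext = os.path.splitext(name)
    candidate, i = name, 0
    while os.path.exists(os.path.join(directory, candidate)):
        i += 1
        candidate = f"{base}_{i}{ext}"
    with open(os.path.join(directory, candidate), "wb") as f:
        f.write(data)
    return candidate  # the name actually used -- may differ from `name`!

tmp = tempfile.mkdtemp()
first = save_no_overwrite(tmp, "photo.jpg", b"old")
second = save_no_overwrite(tmp, "photo.jpg", b"new")
# "photo.jpg" still holds the old bytes; the new data went to "photo_1.jpg"
```

So the fix is to upload the path for the name returned by save(), not the name you originally asked for.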
Thanks to all who thought about solving this!
I am writing a web crawler that finds and saves the URLs of all the images on a website. I can get these without a problem. I need to upload these URLs, along with a thumbnail version of each, to a server via an HTTP request, which will render the image and collect feature information for use in various AI applications.
For some URLs this works without a problem:
http://images.asos-media.com/products/asos-waxed-parka-raincoat-with-zip-detail/7260214-1-khaki
resizes into
http://images.asos-media.com/products/asos-waxed-parka-raincoat-with-zip-detail/7260214-1-khaki?wid=200
but for actual .jpg images this method doesn't work, like for this one:
https://cdn-images.farfetch-contents.com/11/85/29/57/11852957_8811276_480.jpg
How can I resize the JPEGs via the URL?
Resizing the image via the URL only works if the site you're hitting uses a dynamic media service or tool in its stack. That's why ASOS lets you append a query string with the dimensions for the resize; different DM tools will have different query parameters.
If you want a tolerant approach, you're best off downloading the image, resizing it with Python, and then uploading it.
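A minimal sketch of that download-and-resize step using Pillow (assumes `pip install Pillow`); `Image.thumbnail` keeps the aspect ratio, unlike `resize`:

```python
from io import BytesIO
from urllib.request import urlopen

from PIL import Image  # Pillow

def make_thumbnail(img, max_side=200):
    # Return a copy scaled so neither side exceeds max_side,
    # preserving the aspect ratio.
    thumb = img.copy()
    thumb.thumbnail((max_side, max_side))
    return thumb

def thumbnail_from_url(url, max_side=200):
    # Download the original, then resize it locally.
    with urlopen(url) as resp:
        img = Image.open(BytesIO(resp.read()))
    return make_thumbnail(img, max_side)
```

The resized image can then be re-encoded with `img.save(buffer, format="JPEG")` before uploading it to the feature-extraction server.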
I want to email out a document that will be filled in by many people and emailed back to me. I will then parse the responses using Python and load them into my database.
What is the best format to send out the initial document in?
I was thinking of an interactive .pdf, but I do not want to have to pay for Adobe XI. Alternatively, maybe a .html file, but I'm not sure how easy it is to save its state once it has been filled in so it can be emailed back to me. A .xls file may also be a solution, but I'm leaning away from it simply because it is not a particularly professional-looking format.
The key points are:
Answers can be easily parsed using Python
The format should be common enough to open on most computers
The document should look relatively pleasing to the eye
Send them a web page with a FORM section, complete with some JavaScript to grab the contents of the controls and send them to you (e.g. in JSON format) when they press "submit".
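On the receiving side, parsing such JSON submissions into a database takes only a few lines of standard-library Python; the field names and table schema below are assumptions for the sketch:

```python
import json
import sqlite3

def load_response(conn, raw_json):
    # Parse one submitted response and insert it into the database;
    # the "name"/"answer" fields are hypothetical form fields.
    data = json.loads(raw_json)
    conn.execute(
        "INSERT INTO responses (name, answer) VALUES (?, ?)",
        (data["name"], data["answer"]),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (name TEXT, answer TEXT)")
load_response(conn, '{"name": "Alice", "answer": "yes"}')
```

The same loader works whether the JSON arrives by email attachment or as a POST body from the web app suggested below.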
Another option is to set it up as a web application. There are several Python web frameworks that could be used for that. You could then e-mail people a link to the web-app.
Why don't you use Google Docs for the form? Create the form in Google Docs and save the answers in a spreadsheet. Then use any Python Excel reader (Google for one) to read the file. This way you don't need to parse through emails, and it will be performance-friendly too. Or you could just make a simple form using App Engine and save the data directly to the database.
I am using Google App Engine to host a website where I want users to be able to upload any video; I then want to use Flowplayer to display it, which requires MP4 and WebM formats to support all browsers. I have it working correctly where a user uploads a video and I can serve it, but I need to convert it into those two formats so that everyone can view the video.
Is there any Python project I can import to do the conversion on App Engine, or any resources showing how I can do it with something like Google Compute Engine? I need it to be done automatically on the server, and most projects that look stable for this in Python are meant to be run from the command line on a personal computer.
I'm not sure about Google App Engine, but you may want to look into FFmpeg. I am currently hosting a site on Heroku and have been able to use it by spawning a task that automatically grabs an image from the uploaded video for display and converts the uploaded file to mp4. For the mp4 conversion, you will need a build compiled with libx264. I am no expert on this, but it may be something you want to look into if you haven't already. In my app on Heroku I am able to convert uploads to mp4, but it has definitely taken some time to figure out the right configuration, and it still takes longer than I would like. However, I am also a fairly new developer and this is my first app ever, so it might be easier for you to get it working the way you want.
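For the FFmpeg route, a hedged sketch of driving the two conversions from Python via subprocess (it assumes an `ffmpeg` binary built with libx264 and libvpx is on the PATH; audio options are omitted to keep it short):

```python
import subprocess

# Video codec per target container; libvpx produces WebM, libx264 produces MP4.
CODECS = {"mp4": "libx264", "webm": "libvpx"}

def ffmpeg_cmd(src, dst):
    # Build the ffmpeg argument list for the target container,
    # chosen from the destination file's extension.
    ext = dst.rsplit(".", 1)[-1]
    return ["ffmpeg", "-y", "-i", src, "-c:v", CODECS[ext], dst]

# Hypothetical usage, converting one upload into both formats:
# for dst in ("upload.mp4", "upload.webm"):
#     subprocess.run(ffmpeg_cmd("upload.mov", dst), check=True)
```

Running this on App Engine's standard environment is not possible (no arbitrary binaries), which is why offloading the job to a Compute Engine worker, as the question suggests, is the usual approach.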
I developed a photo gallery in Python; now I want to add a new feature, "Download Multiple Photos": a user can select some photos to download, and the system creates a compressed file containing them.
In your opinion, what is the best way to send the IDs from the frontend? JSON? Hidden inputs? And on the backend, is there a Django library that compresses the selected photos and returns the compressed file?
Thanks,
Marco
Once you get the IDs of all the selected images on the client, you can zip them using the zipfile or tarfile module. How you collect the files to compress depends entirely on how you saved the images. If you save them under paths such as uploaded date/id/, then the client side needs to send that information back to the server as well, to reduce the server load. Hidden fields are OK in this situation.
I think the only way to do it is in the backend: in the frontend you only select which photos you want to download and send the IDs or some identifiers to the server side. The server then retrieves the selected photos from the filesystem (based on the identifiers), compresses them into a single file, and returns that compressed file in a response as attached content.
If you did it in the frontend, how would you get each file and compress them all?
Doing it in server side is the best solution in my opinion :)
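A minimal backend sketch with the standard-library zipfile module; in Django you would then return the bytes in an HttpResponse with a Content-Disposition: attachment header (the paths passed in are assumptions, resolved from the submitted IDs):

```python
import io
import os
import zipfile

def zip_photos(paths):
    # Bundle the selected files into one in-memory zip archive and
    # return its raw bytes, ready to attach to an HTTP response.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in paths:
            zf.write(path, arcname=os.path.basename(path))
    return buf.getvalue()
```

Building the archive in memory avoids temp-file cleanup; for very large selections, writing to a spooled temporary file instead would keep memory use bounded.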